In this video, we will discuss our very first transformation. We will use it as an opportunity to introduce some terminology and to pose some new and very intriguing questions!
And the way this and all other discussions of transformations will go is by specifying the transformation first, and to specify a transformation I have to give you a rule for where each vector goes under the transformation. So our first example, of course, comes from the space of geometric vectors. Geometric vectors are our go-to vector space for inspiration and for intuition, so it's not at all surprising that our first few examples will come from this space, and actually much of the terminology, of course, is inspired by geometry.
Our very first transformation will be reflection with respect to a straight line that passes through the origin, that is, through the tip of the zero vector. It will be denoted by the letter R, and here is the rule for reflection. For any vector, if you want to discover where it goes under this transformation, you have to draw a line perpendicular to the straight line through the tip of the vector, and go as far on the other side of the line as the tip of the vector is on this side. You mark that point, and that's the tip of the reflected vector.
So if this vector is U, then this vector is denoted by R of U. Notice that there are usually no parentheses. You could write R, parentheses, U, but by now we're inspired by matrix notation: when we use matrices, we don't put parentheses if we're trying to make the expression look like matrix multiplication. Indeed, we will soon think of multiplication by a matrix as a transformation, so this notation borrows from matrices. Typically, when you talk about functions, you put parentheses around the argument, but if we can avoid it, we won't put parentheses around the argument; that habit comes from matrices.
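Though this course will introduce matrices for transformations a bit later, the geometric rule can already be sketched numerically. Here is a minimal sketch, assuming NumPy; the helper name reflect and the sample vectors are my own choices, not part of the lecture:

```python
import numpy as np

def reflect(u, d):
    """Reflect vector u across the line through the origin with direction d."""
    d_hat = d / np.linalg.norm(d)        # unit vector along the mirror line
    proj = np.dot(u, d_hat) * d_hat      # foot of the perpendicular from the tip of u
    return 2 * proj - u                  # go the same distance on the other side

# Reflect u = (1, 0) across the 45-degree line y = x: the coordinates swap.
u = np.array([1.0, 0.0])
print(reflect(u, np.array([1.0, 1.0])))  # [0. 1.]
```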
All right, let's consider one more example. How about this vector? We'll call this vector V, and then RV is right here: R of V. Okay.
How about one more example? I will use this vector right here, which lies along the line, and we'll call it the vector W. To reflect the vector W you have to pursue the same strategy, or I should say recipe: draw a line perpendicular to the straight line that passes through the tip of the vector, and go as far on the other side as the vector is on this side. Of course that's going zero distance, so we're right here, and that's the vector RW. And it's the same as W, so let's write down what we just realized: RW equals W. So this vector stays itself, remains itself
under the transformation. A couple of new terms: this vector, the result of the transformation, is called the image. It is the image of this vector, and this vector is called the pre-image of this one. You can see how this terminology comes from real life: it's like looking in the mirror and seeing your reflection, and that's your image. If this is you, then your reflection in the mirror appears here; you're looking at your image. So the term "image," which applies to all transformations, not just in linear algebra but in broader applied mathematics, comes from geometry, from our physical experience with mirrors, and actually more specifically from this very transformation, reflection. That's kind of nice. So this is the image (I won't write it on the board because you're hearing it) and this is the pre-image. Okay, that's fundamental terminology.
Okay, our next question is: is this transformation linear? For that we have to think back to the breakfast example of converting euros into dollars, where it didn't matter whether you added up the euro amounts first and then translated the total into dollars, or whether you converted the individual amounts and then added up the dollar amounts. The result was the same in both cases. To carry that way of thinking over to this example, we pose the following question:
Does it matter whether we take two vectors, add them together first and then reflect the result, or whether we reflect the individual vectors and then add up the images? There's the same test for multiplication by a scalar. If the order doesn't matter in both cases, then the transformation is linear. I'll actually draw the sum, but I don't really think we need to; I think it could become a distraction. You can do this whole way of thinking in your mind, and here's the picture that you should have in your mind.
So let's add the two vectors first by the parallelogram rule. Okay, so this would be your sum; this would be U plus V.
And now let's reflect U plus V. The way that's done (I'm now going to switch to a different color so it doesn't become too messy) is by reflecting according to the specified rule. It interferes with these letters, but that's okay: we take this distance and move it over here, right here. What an unfortunate location. Okay, so this vector right here that I'm drawing in yellow will be R, and now we do need parentheses because we have a sum: R of (U plus V). And the question is: would we have gotten the same vector if we added up the images, that is, if we added up the reflected vectors?
And the answer is, well, I tried to aim as best I could and missed a little bit, but yes, you see that it's the same vector. So it doesn't matter whether you add them up first and then reflect the sum, or whether you reflect the individual vectors and then add up the reflections. In both cases you end up with this vector, so we think that linearity is in the cards.
There's that one additional test: linearity involves two things, the sum is one, and multiplication by a scalar is the other. It's just as easy to make sure that rule is satisfied as well. If we took this vector and multiplied it, let's say, by 2, and then reflected it, it would be right here. Alternatively, we can reflect the vector first and then multiply the reflection by two, and we're right there at the same spot.
So there are basically two worlds here: one world is on this side of the mirror, and the reflected world looks exactly the same, except reversed in some sense. Whatever happens here is kind of replicated there, and that's basically the source of the linearity. One difference between this and the physical example with the mirror is that here the vectors go both ways: in all of my examples I picked vectors on this side of the line, but I could have picked vectors on the other side, and even mixed them together when considering the sum. Of course it goes every which way, and it does not matter.
Okay, so yes, we're dealing here with a linear transformation.
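The two checks we just did by picture can be replayed numerically. A small sketch, assuming NumPy; reflect is my own helper implementing the perpendicular-line rule, and the vectors are arbitrary:

```python
import numpy as np

def reflect(v, d):
    """Reflect v across the line through the origin with direction d."""
    d_hat = d / np.linalg.norm(d)
    return 2 * np.dot(v, d_hat) * d_hat - v

d = np.array([1.0, 2.0])   # direction of the mirror line
u = np.array([3.0, 1.0])
v = np.array([-1.0, 2.0])

# Additivity: reflecting the sum equals summing the reflections.
print(np.allclose(reflect(u + v, d), reflect(u, d) + reflect(v, d)))  # True

# Homogeneity: reflecting 2u equals doubling the reflection of u.
print(np.allclose(reflect(2 * u, d), 2 * reflect(u, d)))              # True
```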
And now I will ask perhaps the most unexpected question, one that will seem whimsical at first but will prove to be one of the most important questions that you can ask about a linear transformation. The question is this: can you identify the vectors that remain parallel to themselves under this transformation? "Parallel" is not broad enough, as it turns out, so let me define parallel in algebraic terms.
We're looking for vectors whose image is a scalar multiple of the pre-image. So this is the sort of vector we are looking for. We are looking for a vector, let's say, well, might as well use V, whose image under the reflection is a multiple of it. There's a preferred letter that mathematicians use for that multiple, and that letter is the Greek letter lambda; here it stands for scale. So we're looking for a vector whose image is a multiple of the original vector. In algebraic terms, the sort of vectors we're looking for satisfy the rule RV = λV.
This is better than saying "parallel to itself" or "parallel to the pre-image," because when lambda is 0, in other words when some vector becomes the zero vector under the transformation, it still satisfies this relationship: you can choose lambda equal to 0. That's why writing down the algebraic expression is better than saying that the image is parallel to, or points along the same line as, the pre-image. So this question truly seems whimsical and irrelevant, and it's somewhat surprising that it ends up being the most crucial question you can ask about a linear transformation.
Identifying these special vectors and the special numbers that go with them is one of the most important things you can do in analyzing a linear transformation. So let me tell you what these vectors and the corresponding numbers are called. These are more strange-sounding words, but you'll get used to them and they'll seem very natural to you. The vector is called an eigenvector. The word "eigen" has German etymology; it means "own" or "proper." And the corresponding number is called an eigenvalue.
Okay, there are other synonyms for these terms, for example "proper vector" and "proper value," and a few more, but the terms that we'll use are eigenvector and eigenvalue. When the linear transformation is specified in geometric terms, as it is here, we'll be able to identify the eigenvalues and eigenvectors by sight. And of course that brings up a very interesting question: if these are so important and we're not able to identify them by sight, then what do we do? As in the case of linear systems, there's a robust technique in linear algebra for identifying eigenvalues and eigenvectors for any transformation. It will be a very big and important discussion, especially on how to convert a geometric problem into a problem that you can do on the computer. That will be one of the pivotal moments in our linear algebra course. Okay.
But for this simple linear transformation we can identify these vectors rather easily. You should probably take a moment and try to identify those vectors: pause the video, then come back and see if your answer is correct. The first one is staring right at you, and of course it's this one, the one that lies on the line. You can see how this equation looks just like this one with lambda equal to 1.
So this vector is indeed an eigenvector, and the corresponding eigenvalue is 1. Actually, any vector that lies along this line is an eigenvector with 1 as the corresponding eigenvalue. So, as will always be the case, it's not just a single vector; it's an entire eigenspace of vectors. In this case the eigenspace consists of all vectors that lie along this line.
This should remind you of the null space a little bit: if you found an element of the null space, any multiple of that element was in the null space as well. So it wasn't just a null vector; it was a null space, and there we would write down all multiples of that vector. The tradition in discussions of eigenvalues and eigenvectors is a little bit different. Yes, it's an entire space, and you can call it an eigenspace. But if somebody asks you for the eigenvectors, you just pick one representative vector from that entire space, write it down, and say that's the eigenvector. Of course you're implying that any multiple of that vector is an eigenvector as well. With that understood, the tradition is to pick one and write it down. So here's our eigenvector, and the corresponding eigenvalue is 1.
And just to note: if the eigenspace were two-dimensional, you would simply write down two vectors, and if it were n-dimensional, you would write down a basis for that space. Any n vectors you choose that form a basis are a good answer to that question, to that problem.
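The "pick one representative" convention can be sanity-checked numerically: every multiple of a vector lying along the mirror line satisfies RV = 1·V. A sketch under the same assumptions as before (NumPy, a hand-rolled reflect helper of my own):

```python
import numpy as np

def reflect(v, d):
    """Reflect v across the line through the origin with direction d."""
    d_hat = d / np.linalg.norm(d)
    return 2 * np.dot(v, d_hat) * d_hat - v

d = np.array([1.0, 2.0])   # direction of the mirror line
w = d.copy()               # a vector lying along the line

# Every multiple of w is its own image: R(c*w) = 1 * (c*w).
for c in [1.0, -3.0, 0.5]:
    print(np.allclose(reflect(c * w, d), 1.0 * (c * w)))  # True each time
```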
Okay, so here's one. Is there another that's altogether different from this one? If you search with your eyes for quite a while, or maybe not so long, you will be able to find that vector. And that vector is the one that's orthogonal to the straight line. We're running out of letters, so let's call this vector O, for orthogonal. Let's find the image of this vector, and the rule is: draw a line orthogonal to the straight line through the tip of the vector (that line coincides with the vector itself), cross the line, go the same distance to the other side, and that's the image of the vector. So this is RO. Okay. And what's the relationship between O and RO?
We can write it down right here, maybe there's enough space. I believe there isn't, so I'll write it right here. R applied to O is, of course, the opposite of O: negative O. Does this relationship fit the pattern? Yes, it does, with lambda equal to -1. And once again, any multiple of the vector O would also satisfy this equation, this relationship with lambda equal to -1.
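Once the reflection is written as a matrix (something the course will do later), the two eigenpairs found here by sight can be recovered numerically. A sketch, assuming NumPy and the standard 2x2 matrix for reflection across the line at angle theta; the angle itself is an arbitrary choice of mine:

```python
import numpy as np

theta = np.pi / 6   # mirror line at 30 degrees to the x-axis
R = np.array([[np.cos(2 * theta),  np.sin(2 * theta)],
              [np.sin(2 * theta), -np.cos(2 * theta)]])

vals, vecs = np.linalg.eig(R)
# The spectrum is exactly the pair we found geometrically: 1 and -1.
print(np.allclose(np.sort(vals.real), [-1.0, 1.0]))  # True

# Each column of vecs is an eigenvector: R v = lambda v.
for lam, v in zip(vals, vecs.T):
    print(np.allclose(R @ v, lam * v))               # True
```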
So any one of those vectors, including the ones on the other side of the line, can be considered an eigenvector, so we just choose one. I chose this one, okay, and the corresponding eigenvalue is -1. And there isn't another one: any other vector, as we see from all these examples, changes its direction, the line it lies on, under reflection. So reflection with respect to a line has two eigenvalues, 1 and -1, and two corresponding eigenvectors.
Let me ask you one more question that I think you'll find very intriguing.
So the two eigenvalues associated with this transformation are 1 and -1. I will now show you an algebraic equation related to this transformation whose roots, as if by magic, are 1 and -1, and that will be a very nice mystery to be solved a little bit later. Ask yourself the following question: what is the transformation that's equivalent to two successive reflections? In other words, take a vector, reflect it, and then reflect its image: two successive reflections. Consider that our new transformation. Of course, you will get the original vector back.
Let's take a look: start with this vector, reflect it, then use the result as our starting point and reflect it again, and of course you're right back here. So two successive reflections amount to nothing. The word for "nothing" in linear algebra comes once again from matrix notation, and the word used is "identity": the identity transformation. So let's write it down in algebraic terms.
Two successive reflections can be written as R of RV. So let me start here: it can be written as R applied to R of V. You can put parentheses if you would like, but the tradition, like I said before, is to not use parentheses. This can be written as R squared V, which is V itself. That's the property of this combined transformation that we have just discovered.
So if we were to drop the vector V and write the identity in terms of transformations alone, which we can certainly do, we would write something along the lines of R squared equals the identity: R² = I. That's an even better way of saying it, because there's no argument, which is arbitrary anyway since the identity applies to all vectors: reflection squared equals the identity. Reflecting something twice amounts to doing nothing.
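This identity is easy to confirm numerically once the reflection is in matrix form. Again a sketch, assuming the standard reflection matrix for the line at angle theta (the angle is my arbitrary choice):

```python
import numpy as np

theta = 0.7   # any mirror-line angle works
R = np.array([[np.cos(2 * theta),  np.sin(2 * theta)],
              [np.sin(2 * theta), -np.cos(2 * theta)]])

# Reflecting twice amounts to doing nothing: R squared is the identity matrix.
print(np.allclose(R @ R, np.eye(2)))  # True
```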
Which of course reminds us of an algebraic equation, if you think of this transformation as the unknown. Using my algebraic inspiration here, I'm looking at something like this and saying: yes, it's an identity we just discovered, but let's treat it as an equation. The equation would be x squared, if we think of this as the unknown, equals 1. And what are the roots of the equation x² = 1? They are 1 and -1, just like the eigenvalues of this transformation. Is it a coincidence? Not at all, as we'll discover in the not so distant future.
