So, before we deal with rectangular matrices, let us talk a bit about one other special matrix; we introduced these a little earlier, just as a definition: what are called symmetric matrices. These are matrices which are equal to their transpose, A equal to A transpose. And this comes with some nice properties which are easy to check: every square diagonal matrix is symmetric. If I add two symmetric matrices, I will again get a symmetric matrix. If A is symmetric, then A squared, A cubed, and all higher powers of A are also symmetric. And similarly, if A inverse exists, it is symmetric if and only if A is symmetric.
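These properties are easy to confirm numerically. Here is a quick NumPy sketch, with made-up example matrices that are not part of the lecture:

```python
import numpy as np

# A small symmetric matrix (equal to its own transpose).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
assert np.allclose(A, A.T)

# The sum of two symmetric matrices is symmetric.
B = np.array([[0.0, 4.0],
              [4.0, 5.0]])
assert np.allclose(A + B, (A + B).T)

# Powers of a symmetric matrix are symmetric.
assert np.allclose(A @ A, (A @ A).T)
assert np.allclose(A @ A @ A, (A @ A @ A).T)

# The inverse, when it exists, is symmetric as well.
Ainv = np.linalg.inv(A)
assert np.allclose(Ainv, Ainv.T)
```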
So, take an invertible A: if A is symmetric, then A inverse, if it exists, will be symmetric. A useful property, which I will not prove but which is interesting to note, is that all eigenvalues of a symmetric matrix are real. This is one of the properties which we will also explain later in the course, but I will leave the proof for you as a little exercise; if you cannot do it, just let me know; it is an interesting proof. So, the eigenvalues of all symmetric matrices are real, and the eigenvectors are orthogonal to each other; we know the definitions.
So, there you take the dot product of two eigenvectors and it gives you 0. And lastly, every symmetric matrix is diagonalizable and, of course, its eigenvectors are linearly independent. There is nothing more special to say here.
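The real eigenvalues, orthogonal eigenvectors, and diagonalizability can all be seen in a short NumPy sketch (the matrix is an arbitrary example I am choosing here):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# eigh is specialized for symmetric matrices: it returns real
# eigenvalues and an orthonormal set of eigenvectors.
vals, vecs = np.linalg.eigh(A)
assert np.all(np.isreal(vals))

# Eigenvectors are orthogonal: the dot product is (numerically) zero.
assert abs(vecs[:, 0] @ vecs[:, 1]) < 1e-12

# Diagonalizable: A = Q diag(vals) Q^T.
assert np.allclose(vecs @ np.diag(vals) @ vecs.T, A)
```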
We now introduce the concepts of eigen decomposition and singular value decomposition. The singular value decomposition is essentially for when you have a rectangular matrix. The eigen decomposition is the one which decomposes a square matrix A into its set of eigenvalues and eigenvectors. So, what does it mean? That I can write a matrix A as E D E inverse, where E is a matrix with the eigenvectors of A as its columns, and D is a diagonal matrix with the eigenvalues of A as its diagonal elements. It looks very similar to what we studied in diagonalization; it is just a slightly alternate representation. And this eigen decomposition, of course, is possible if and only if the eigenvectors of A are linearly independent. So, how does this help us? Well, as usual, we were talking of computing higher powers of A. So, A to the power n would simply be E D to the power n E inverse. The interesting question is: what happens when A is a rectangular matrix? The answer to that will come by introducing the concept of singular value decomposition.
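The power computation A to the n equals E D to the n E inverse can be checked with a small NumPy sketch (the matrix and the power n are made-up examples):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Eigen decomposition A = E D E^{-1}: E holds the eigenvectors as
# columns, D the eigenvalues on its diagonal.
vals, E = np.linalg.eig(A)
D = np.diag(vals)
assert np.allclose(E @ D @ np.linalg.inv(E), A)

# Higher powers come almost for free: A^n = E D^n E^{-1},
# and D^n just raises each diagonal entry to the power n.
n = 5
An = E @ np.diag(vals**n) @ np.linalg.inv(E)
assert np.allclose(An, np.linalg.matrix_power(A, n))
```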
Before we do a singular value decomposition, let us first define what the singular values of a matrix are. I will usually associate these with a rectangular matrix, A belonging to R m cross n. So, if I have a rectangular matrix, its maximum rank can be the minimum of m and n. Let us say I call the rank of A some number r, which can at best be the minimum of m and n. What is easy to check is that A A transpose is in R m cross m, a square and symmetric matrix. Similarly, A transpose A will be in R n cross n, and this will also be a symmetric matrix; this is easy to check.
So, let us start with this and say that I have lambda 1 to lambda r; let them denote the nonzero eigenvalues of A A transpose. From the properties of symmetric matrices, I know that these will all be real eigenvalues. So, I start with a matrix A, which is a rectangular matrix. Then I compute the matrix A A transpose, which is symmetric and, of course, square, so it goes without saying that all its eigenvalues are real. Now, the singular values, by definition, are the square roots of the eigenvalues of A A transpose; the lambda i here are the eigenvalues of A A transpose. And the remaining singular values will be zero. Let us see if we can work out a pretty simple example.
So, I take a matrix A with rows 3 4 0 and 0 0 0, and the rank of A I can trivially check to be 1. Now, what is A A transpose? It is simply the multiplication of A with its transpose, which has rows 3 0; 4 0; 0 0. The top-left entry is 9 plus 16, which is 25, and all other entries are 0, so A A transpose is the diagonal matrix with entries 25 and 0. The eigenvalues of A A transpose are therefore 25 and 0, and you can compute the singular values from these numbers. So, there will be one nonzero singular value and the other will be 0. Singular values are positive: the square root of 25 would be plus or minus 5, but we take only one, the positive one.
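This worked example is easy to reproduce in NumPy; the sketch below checks both the eigenvalues of A A transpose and the resulting singular values:

```python
import numpy as np

A = np.array([[3.0, 4.0, 0.0],
              [0.0, 0.0, 0.0]])

# A A^T = [[25, 0], [0, 0]]; its eigenvalues are 25 and 0.
AAt = A @ A.T
vals = np.linalg.eigvalsh(AAt)
assert np.allclose(sorted(vals), [0.0, 25.0])

# Singular values are the non-negative square roots: 5 and 0.
sing = np.linalg.svd(A, compute_uv=False)
assert np.allclose(sing, [5.0, 0.0])
```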
So, given a matrix, how would I go about finding its singular value decomposition, and what does singular value decomposition even mean? In diagonalization I had a factorization E D E inverse, where the matrix D was just a collection of the eigenvalues on the diagonal and everything else was 0. So, what does this mean in terms of a rectangular matrix? Let us do a little derivation here. I start with A from R m cross n; it is an m by n matrix, and again the rank of A is equal to r, which is less than or equal to the minimum of m and n. Now, the definition says that A can be written as U times some S times V transpose, where U and V are such that U U transpose is the identity and V V transpose is the identity. This U U transpose would be the m-dimensional identity, and V V transpose the n-dimensional identity; such matrices are also called unitary (for real matrices, orthogonal) matrices. Now, suppose that I can write S in block form as S r 0; 0 0. So, the block S r will correspond to the r nonzero singular values, and the remaining entries will be 0.
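Before continuing the derivation, this definition can be checked numerically. The sketch below, using the same example matrix as before, verifies that U and V are orthogonal and that A equals U S V transpose with S built in exactly this block shape:

```python
import numpy as np

A = np.array([[3.0, 4.0, 0.0],
              [0.0, 0.0, 0.0]])

# full_matrices=True gives a square m x m matrix U and n x n matrix V.
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# U and V are orthogonal: U U^T = I_m and V V^T = I_n.
assert np.allclose(U @ U.T, np.eye(2))
assert np.allclose(Vt.T @ Vt, np.eye(3))

# Rebuild the m x n matrix S with the singular values on the diagonal
# and zeros everywhere else, then reconstruct A = U S V^T.
S = np.zeros_like(A)
np.fill_diagonal(S, s)
assert np.allclose(U @ S @ Vt, A)
```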
So, I can now write this down in partitioned form: U is split into U 1 and U 2, and S is the block matrix S r 0; 0 0. U has m columns, of which U 1 takes r and U 2 the remaining m minus r. Similarly, V is partitioned into V 1 and V 2, with V 1 taking r of the n columns and V 2 the remaining n minus r, and we take its transpose. So, I can rewrite A as U 1 S r V 1 transpose; there is a reason why I am doing this, why I am assuming that there are r nonzero singular values and the remaining are 0. Knowing these two things, an immediate thing to check is that U 1 transpose U 1 would be the r-dimensional identity, and similarly V 1 transpose V 1 would also be the r-dimensional identity.
Now, coming back to this matrix A, let me take A A transpose; this is how I defined the singular values, that they are the square roots of the eigenvalues of A A transpose, which is a square matrix. So, this will be U 1 S r V 1 transpose, multiplied by the transpose of the same thing, U 1 S r V 1 transpose. I just do all the math: V 1 transpose V 1 is the identity, so this will be U 1 S r squared U 1 transpose.
Now, if I multiply this by U 1, I have A A transpose times U 1 equal to U 1 S r squared, since U 1 transpose U 1 is the identity. So, the matrix A A transpose applied to the set of vectors U 1 gives me back U 1, scaled by S r squared. If I write this down column by column, it reads A A transpose times u i equals the square of the ith diagonal entry of S r times u i, for i going from 1 to r. Now, this has a very nice interpretation: I have a matrix multiplied by a vector, which gives me back that same vector multiplied by the square of the ith diagonal element, which is essentially the singular value. So, let me call these singular values sigma 1 till sigma r; then I have A A transpose u i equals sigma i squared u i.
Now, these u's, these eigenvectors, are called the left singular vectors of A. Similarly, I can do the same with the other product: I take A transpose A, which is the whole transpose of U 1 S r V 1 transpose, times U 1 S r V 1 transpose. I do all this and I get that A transpose A times a vector v i would again be sigma i squared v i. And these v's, which are the columns v 1 to v r of V 1, are the right singular vectors of the matrix A. So, this is a little proof of how these things work and what the relation is between the eigenvalues of A A transpose and the singular values.
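The two eigenvector relations derived above can be verified directly; here is a sketch on an arbitrary 3 by 2 example matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

U, s, Vt = np.linalg.svd(A)

# Left singular vectors (columns of U) satisfy
#   A A^T u_i = sigma_i^2 u_i,
# and right singular vectors (rows of Vt) satisfy
#   A^T A v_i = sigma_i^2 v_i.
for i, sigma in enumerate(s):
    u_i, v_i = U[:, i], Vt[i, :]
    assert np.allclose(A @ A.T @ u_i, sigma**2 * u_i)
    assert np.allclose(A.T @ A @ v_i, sigma**2 * v_i)
```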
So, why do we call them the singular values? This was a little illustration of that. Let us go back and read out what the entire set of steps we did so far means. I take a matrix A; I can write it as a product of three matrices, U, S, and V transpose, where S consists of all the singular values along the diagonal. This is a generalization of the eigen decomposition. The diagonal elements of S are the singular values of A; the columns of U are called the left singular vectors, and the columns of V are called the right singular vectors of A. Again, you can just go through this and relate each statement to the steps which we followed over there.
So, when are the singular value decomposition and the eigen decomposition the same? Well, it is again easy to check: they coincide if and only if A is symmetric and positive definite. You can just start from here and say: when A is symmetric and positive definite, then positive definite means that all eigenvalues are greater than 0; that is the property of a sign-definite symmetric matrix. So, A transpose A would be A squared, and then you can just rewrite all these steps to validate this statement. Now, the reason I also did this, assuming that there are only r nonzero singular values, with r less than or equal to the minimum of m and n and the remaining equal to 0, is that we can now get a nice interpretation of what we call the row space, the column space, and the null space of A, and relate them directly to the singular values, or the singular value decomposition.
So, the basis for C(A), the column space, would just turn out to be the first r columns of U; I will not write out the details, but I think you can do this. Similarly, a basis for the null space of A will be the last n minus r columns of V, and so on. This you can verify as a very small exercise.
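That small exercise can also be done numerically; the sketch below uses a made-up rank-1 matrix and checks both subspace statements:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # rank 1: second row is twice the first
r = np.linalg.matrix_rank(A)

U, s, Vt = np.linalg.svd(A)

# The first r columns of U span the column space C(A): here every
# column of A is a multiple of U[:, 0] (up to sign).
col = A[:, 0] / np.linalg.norm(A[:, 0])
assert np.allclose(abs(col @ U[:, 0]), 1.0)

# The last n - r columns of V (rows of Vt) span the null space of A.
for v in Vt[r:, :]:
    assert np.allclose(A @ v, 0.0)
```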
So, this kind of concludes the linear algebra tools that I wanted to introduce as basic building blocks for the course. Next time, we will start directly with state space models: how do we compute solutions of a state space representation of a system. We will again deal with linear systems, which could be time invariant and also time variant; that will come up soon.
Thanks for listening.
