Hi everyone, my name is Claire Tomlin. I'm a professor of Electrical Engineering and Computer Sciences at Berkeley, and this is the twenty-seventh module in a series that we're recording for the course EECS 221A, Linear Systems Theory, at Berkeley. I'm just going to present one thing in this module, and that's the concept of the minimum polynomial of a matrix A.

So we're always dealing with a matrix A; let's suppose it's an n × n real matrix. We've already defined the characteristic polynomial of the matrix A: we use the notation χ̂_A(s) = det(sI − A), and we can write that out as (s − λ1)(s − λ2)⋯(s − λn). When the characteristic polynomial is set to zero we typically call that the characteristic equation, and if we solve that equation for s we get back the eigenvalues of the matrix A. OK, so this is the structure that we had before, when we were assuming that the eigenvalues of the matrix A were all distinct, and we showed that in that case the eigenvectors of the matrix A are all linearly independent from each other.
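As a quick numerical aside (my own illustration, not part of the lecture), here is a sketch with NumPy showing that the roots of the characteristic equation det(sI − A) = 0 are exactly the eigenvalues of A; the matrix below is an arbitrary choice.

```python
import numpy as np

# An arbitrary 3x3 real matrix (upper triangular, so its
# eigenvalues 2, 3, 5 can be read off the diagonal).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

# np.poly(A) returns the coefficients of chi(s) = det(sI - A),
# highest degree first.
coeffs = np.poly(A)

# Solving the characteristic equation chi(s) = 0 recovers the eigenvalues.
eigs_from_poly = np.sort(np.roots(coeffs).real)
eigs_direct = np.sort(np.linalg.eigvals(A).real)
print(eigs_from_poly)  # approximately [2. 3. 5.]
print(eigs_direct)     # approximately [2. 3. 5.]
```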
Suppose now you have repeated eigenvalues. Let's actually write this out: it may be that λ1 = λ2, or there is some number of repeated eigenvalues. In general I'm going to write χ̂_A(s) in the following form. Suppose we have σ distinct eigenvalues, so χ̂_A(s) = (s − λ1)^d1 (s − λ2)^d2 ⋯ (s − λσ)^dσ, where σ ≤ n. If σ < n, then some of these eigenvalues appear with multiplicity greater than one, and we call those multiplicities d1, d2, d3, up to dσ. So the d_i are the multiplicities of the λ_i, and d1 + d2 + ⋯ + dσ = n, because we always have n eigenvalues; it's just that some of them may be repeated.

The Cayley–Hamilton theorem tells us that every matrix satisfies its own characteristic equation, so by Cayley–Hamilton we know that χ̂_A(A) = 0, the zero matrix.
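To make Cayley–Hamilton concrete, here is a small numerical check (my own illustration, with an arbitrarily chosen 2 × 2 matrix): evaluating the characteristic polynomial at A itself gives the zero matrix.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# For this A, chi(s) = s^2 - 5s + 6; np.poly returns [1., -5., 6.].
c = np.poly(A)

# Evaluate chi at the matrix A itself: A^2 - 5A + 6I.
chi_of_A = c[0] * (A @ A) + c[1] * A + c[2] * np.eye(2)
print(chi_of_A)  # the zero matrix, as Cayley-Hamilton predicts
```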
Good, so we've got this polynomial, which we call the characteristic polynomial. Now let's define what we mean by the minimum polynomial. Where we use χ for the characteristic polynomial, we typically use ψ for the minimum polynomial: ψ̂_A(s) is the polynomial of least degree such that ψ̂_A(A) = 0. The point is that you could have a polynomial of degree less than n which, evaluated at the matrix itself, still gives the zero matrix. We know that the characteristic polynomial has degree n, but the minimum polynomial may have degree less than n. So it's a very simple definition: the polynomial of least degree such that the matrix satisfies the minimum polynomial equation. With that definition we can
actually say a few things. First of all, let's make a claim: the minimum polynomial divides the characteristic polynomial perfectly, that is, it divides it without any remainder. The proof is easy; the proof is just "if not." If it didn't divide perfectly, then you would come up with a remainder, so let's write that out as follows: χ̂_A(s) / ψ̂_A(s) equals some quotient term, which we'll call Q̂(s), plus a remainder term, R̂(s) / ψ̂_A(s), where polynomial division guarantees that the degree of R̂ is less than the degree of ψ̂_A. Suppose, for contradiction, that R̂ is not identically zero. If you multiply through by ψ̂_A(s) and then evaluate at A, that tells you χ̂_A(A) = Q̂(A) ψ̂_A(A) + R̂(A). Now, the left-hand side is equal to 0 by the Cayley–Hamilton theorem, and ψ̂_A(A) is equal to 0 by definition, so R̂(A) has to be equal to 0. But R̂ has degree less than ψ̂_A, which contradicts the fact that ψ̂_A is the polynomial of least degree that A satisfies. So R̂ must be identically zero, and the minimum polynomial divides the characteristic polynomial perfectly.

So we have a form of the characteristic polynomial, and since the minimum polynomial divides it perfectly, we can come up with a similar form for the minimum polynomial. By our claim, which we've proven, we can write the minimum polynomial as ψ̂_A(s) = (s − λ1)^m1 (s − λ2)^m2 ⋯ (s − λσ)^mσ, where m1 ≤ d1, m2 ≤ d2, up to mσ ≤ dσ. To divide perfectly into this form of the characteristic polynomial, you're going to need this form for the minimum polynomial, but the m_i don't have to add up to n; they could add up to something less than n. OK, so what
does this mean? We can say a few things about this, but let's just do a couple of examples. In general, if you're given a matrix A, we'd be interested in how to find its minimum polynomial; let's do that in a subsequent module, but we can do some simple examples here. Suppose you had a diagonal A matrix with λ1, λ1, λ2 on the diagonal, so everything off the diagonal is equal to 0. In this case the characteristic polynomial, which we can write down by inspection, is just χ̂_A(s) = (s − λ1)²(s − λ2), and the minimum polynomial is ψ̂_A(s) = (s − λ1)(s − λ2). So the minimum polynomial has degree 2, whereas the characteristic polynomial has degree 3.
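One way to see this numerically (a brute-force sketch of my own, not the systematic method of a later module): search over exponents m_i ≤ d_i for the smallest product Π(A − λ_i I)^m_i that gives the zero matrix. Here λ1 = 2 and λ2 = 5 stand in for the symbols above.

```python
import numpy as np
from itertools import product

def min_poly_exponents(A, eig_mults, tol=1e-9):
    """Brute-force search for the smallest exponents m_i <= d_i with
    prod_i (A - lambda_i I)^(m_i) = 0.  eig_mults is a list of
    (eigenvalue, multiplicity) pairs for the distinct eigenvalues."""
    n = A.shape[0]
    best = None
    for ms in product(*[range(1, d + 1) for _, d in eig_mults]):
        P = np.eye(n)
        for (lam, _), m in zip(eig_mults, ms):
            P = P @ np.linalg.matrix_power(A - lam * np.eye(n), m)
        if np.abs(P).max() < tol and (best is None or sum(ms) < sum(best)):
            best = ms
    return best

# Example 1: A = diag(2, 2, 5), so chi(s) = (s-2)^2 (s-5).
A = np.diag([2.0, 2.0, 5.0])
print(min_poly_exponents(A, [(2.0, 2), (5.0, 1)]))  # (1, 1): psi = (s-2)(s-5)
```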
The second example I'm going to use is something similar: it's not a diagonal matrix, but it's got the same eigenvalues. Everything off the diagonal is zero except for a one coupling the two λ1 entries. In this case we have χ̂_A(s) = (s − λ1)²(s − λ2), which is the same, but now the minimum polynomial is going to be ψ̂_A(s) = (s − λ1)²(s − λ2); it's the same as the characteristic polynomial.
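Here is a quick check of this second example (my own numerical sketch, again with λ1 = 2 and λ2 = 5): because of the off-diagonal 1, the degree-2 candidate (s − λ1)(s − λ2) no longer annihilates A, but (s − λ1)²(s − λ2) does.

```python
import numpy as np

l1, l2 = 2.0, 5.0
# Example 2: same eigenvalues as before, but a 1 couples the two l1 entries.
A = np.array([[l1, 1.0, 0.0],
              [0.0, l1, 0.0],
              [0.0, 0.0, l2]])
I = np.eye(3)

degree2 = (A - l1 * I) @ (A - l2 * I)                 # (s-l1)(s-l2) at A
degree3 = (A - l1 * I) @ (A - l1 * I) @ (A - l2 * I)  # (s-l1)^2 (s-l2) at A
print(np.allclose(degree2, 0))  # False: degree 2 is no longer enough
print(np.allclose(degree3, 0))  # True: the minimum polynomial has degree 3
```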
Here's the basic rule, which we'll see as we define Jordan forms in our next module: the multiplicity associated to an eigenvalue in the minimum polynomial is the size of the largest Jordan block associated to that eigenvalue. In this second example, using what we'll define in the next module, we have a Jordan block of size two associated to λ1 and a Jordan block of size one associated to λ2. So whereas the characteristic polynomial has the same degree as the matrix, the minimum polynomial contains all of the eigenvalue information, but the multiplicity of each eigenvalue depends on the size of its largest Jordan block. In the first example we had two Jordan blocks of size one associated to λ1 and one Jordan block of size one associated to λ2; here we have one Jordan block of size two and one Jordan block of size one.
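This rule can also be checked numerically without constructing the Jordan form explicitly, using the standard fact that the size of the largest Jordan block for λ is the smallest k at which the rank of (A − λI)^k stops decreasing. A sketch (my own illustration, using the two example matrices with λ1 = 2, λ2 = 5):

```python
import numpy as np

def largest_jordan_block(A, lam, tol=1e-9):
    """Smallest k with rank((A - lam I)^k) == rank((A - lam I)^(k+1)).
    This is the size of the largest Jordan block for lam, i.e. the
    exponent of (s - lam) in the minimum polynomial."""
    n = A.shape[0]
    N = A - lam * np.eye(n)
    prev = n  # rank of N^0 = I
    for k in range(1, n + 1):
        r = np.linalg.matrix_rank(np.linalg.matrix_power(N, k), tol=tol)
        if r == prev:
            return k - 1
        prev = r
    return n

A1 = np.diag([2.0, 2.0, 5.0])        # example 1: two 1x1 blocks for lambda = 2
A2 = np.array([[2.0, 1.0, 0.0],
               [0.0, 2.0, 0.0],
               [0.0, 0.0, 5.0]])     # example 2: one 2x2 block for lambda = 2
print(largest_jordan_block(A1, 2.0))  # 1
print(largest_jordan_block(A2, 2.0))  # 2
```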
So we've defined the minimum polynomial. In the next module we're going to move on and talk about the Jordan form, based on our knowledge now of what the minimum polynomial is and the concepts of invariant subspaces and direct sums. Thanks very much.
