Good morning. In the previous lecture we studied one of the four methods to work out suitable similarity transformations for solving the eigenvalue problem: the method of plane rotations. Today we consider the second method for working out suitable similarity transformations. This one is also related to geometrical ideas, and it is based on reflection. So today we will study the Householder transformation and tridiagonal matrices. First we consider the Householder reflection transformation in a geometric sense; then we work out the Householder method for tridiagonalising a given symmetric matrix; and then we see what to do with the resulting symmetric tridiagonal matrix.

First consider this: in a k-dimensional space, take two vectors u and v, both having the same magnitude. If u and v have the same magnitude, then we proceed to find the vector w = (u - v)/||u - v||, the unit vector along the direction of the difference of u and v: we take the difference u minus v and divide it by its magnitude to find the unit vector w in that direction.
Now, this unit vector w is perpendicular, or orthogonal, to the plane (or hyperplane) that bisects the angle between the two rays carrying the two vectors. With this w in hand, let us construct the k x k matrix H_k = I - 2wwᵀ, formed by subtracting the matrix 2wwᵀ from the identity matrix. This matrix is called the Householder reflection matrix, and it has a lot of interesting properties: it is symmetric and orthogonal at the same time. Symmetry is easy to see, because the identity is symmetric anyway and wwᵀ is symmetric, so this defines a symmetric matrix. To check orthogonality, we find out whether H_kᵀH_k is the identity. To see that is actually quite simple: since symmetry has already been verified, in place of H_kᵀH_k we can simply write H_k times H_k, that is, (I - 2wwᵀ)(I - 2wwᵀ). As we open this product, we get I times I, that is the identity, minus I times 2wwᵀ, and again minus 2wwᵀ times I.
In total that is 4wwᵀ with a minus sign; and then, minus times minus giving plus and two times two giving four, we get the term +4wwᵀwwᵀ. Since matrix multiplication is associative, in whichever order you multiply these four factors it does not matter; if you multiply the middle pair first, you find that wᵀw is unity, because w is a unit vector. What remains is 4wwᵀ, the same as the other term, so the two cancel. That means H_kᵀH_k = I, which establishes the orthogonality of the Householder reflection matrix.

Now, what does this symmetric and orthogonal matrix do, and why is it called a reflection matrix? To see that, consider its action on two vectors: one along w, and the other perpendicular, or orthogonal, to w. Orthogonal to w means a vector lying in the plane shown here as the plane of reflection.
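These two properties can be checked numerically; here is a minimal sketch (the vectors u and v are arbitrary choices of equal magnitude, not taken from the lecture):

```python
import numpy as np

# Two arbitrary vectors of equal magnitude in k = 3 dimensions
u = np.array([3.0, 0.0, 0.0])
v = np.array([1.0, 2.0, 2.0])           # ||v|| = 3 = ||u||

# Unit vector along u - v, normal to the plane of reflection
w = (u - v) / np.linalg.norm(u - v)

# Householder reflection matrix H = I - 2 w w^T
H = np.eye(3) - 2.0 * np.outer(w, w)

print(np.allclose(H, H.T))              # symmetric
print(np.allclose(H.T @ H, np.eye(3)))  # orthogonal
```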
So take any vector x which is orthogonal to w, that is, which lies in this plane. When you apply H_k to x, you get (I - 2wwᵀ)x. The identity times x is x, and as you open the second term you first form wᵀx; from the very definition of x being orthogonal to w, wᵀx is zero. So what remains is Ix, which is x. That means a vector orthogonal to w, that is, a vector in the plane of reflection, gets mapped to the same vector itself: there is no change.
On the other hand, how does w itself get mapped? As you apply H_k to w, the identity gives you w, while the second term gives you 2wwᵀw; since wᵀw is one, you get w - 2w, that is, -w. So w itself, when operated upon by H_k, gets mapped to its negative, while a vector in the plane of reflection gets mapped to itself. That is exactly the way a reflection takes place: the plane of reflection operates like a mirror. If there is any other vector y which has some component on the plane and some component perpendicular to it, then applying H_k to y acts on the two components separately: the component along w, perpendicular to the plane, gets mapped to its negative, and the component along the plane remains as it is. This is typically the action of a mirror reflection, and this is why the matrix is called a Householder reflection matrix. In particular, it will map u to v and v to u, because they are mirror images of each other, with this plane as the mirror, the plane of reflection.
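A small sketch of this reflection action, reusing the same construction (again with arbitrarily chosen u and v of equal magnitude): a vector in the plane is left unchanged, w is negated, and u and v are swapped.

```python
import numpy as np

u = np.array([3.0, 0.0, 0.0])
v = np.array([1.0, 2.0, 2.0])           # same magnitude as u
w = (u - v) / np.linalg.norm(u - v)
H = np.eye(3) - 2.0 * np.outer(w, w)

x = u + v                               # (u+v).(u-v) = ||u||^2 - ||v||^2 = 0,
                                        # so x is orthogonal to w
print(np.allclose(H @ x, x))            # in-plane vector: unchanged
print(np.allclose(H @ w, -w))           # normal direction: negated
print(np.allclose(H @ u, v))            # u and v are mirror images
print(np.allclose(H @ v, u))
```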
Now, how do we utilise this concept and this particular matrix in reducing a symmetric matrix to a form more suitable for the solution of the eigenvalue problem? In this case we will try to make it tridiagonal. That brings us to the Householder method. Consider an n x n symmetric matrix A; symmetry means a_12 = a_21, and so on. In the reflection context above, take u to be the vector [a_21, a_31, ..., a_n1]ᵀ; the transposition makes this row of entries a column vector.
Since the matrix is n x n, this vector u is an (n-1)-dimensional vector, because the top entry a_11 has been left out and u starts from a_21. Then v is taken to be a vector of the same dimension whose first entry is the norm of u: whatever is the norm of [a_21, ..., a_n1]ᵀ, that becomes the top entry of v, and all the other n - 2 entries of v are zero.
With this u and v we work out w, the difference u - v divided by its magnitude, and then we work out the Householder matrix. In this case k = n - 1, so it is an (n-1) x (n-1) matrix, which we call H_{n-1}. Out of H_{n-1} we develop the larger n x n matrix P_1 by inserting a one at the top-left, a zero row above H_{n-1}, and a zero column on its left. Now P_1 is its own transpose, because it is symmetric, and it is orthogonal as well. Then we apply the orthogonal similarity transformation P_1ᵀ A P_1; since P_1ᵀ is the same as P_1, we simply write P_1 A P_1.
Now, in A we have a_11 sitting at the top-left corner; the rest of the first column is u, the rest of the first row is uᵀ, and the trailing (n-1) x (n-1) block we give a name, A_1, whatever it is. As we apply P_1 A P_1 and carry out the multiplications, you will find that in place of u you get v, and in place of uᵀ you get vᵀ. Writing out the product: P_1 has the one, the zero row, the zero column and H_{n-1}; A has a_11, u on one side, uᵀ on the other, and A_1; and the same P_1 appears again on the right.
Now we conduct the block operations; in eigenvalue solution methodologies you will often come across such block operations. First keep the left factor as it is and multiply A with the right-hand P_1. For the top-left block: the scalar a_11 times one, plus the row vector uᵀ times the zero column, which is zero; so you get a_11. Next, a_11 times the zero row gives a zero row, plus uᵀ times H_{n-1}. What is uᵀH_{n-1}? It is the transpose of H_{n-1}ᵀu; and since this matrix is its own transpose, it is simply the transpose of H_{n-1}u. By the property of the Householder reflection matrix that we have just seen, H_{n-1}u is nothing but v; therefore uᵀH_{n-1} is vᵀ, and you get vᵀ there. Next the lower block: the column vector u times one gives you u, plus A_1 times the zero column, so you get u. Finally the big trailing (n-1) x (n-1) block: the column vector u times the zero row is a zero matrix, plus A_1 H_{n-1}, which we write as it is.

Then the left multiplication: one times the scalar a_11, plus the zero row times the u column, which is zero, so you get a_11. One times vᵀ gives the row vector vᵀ, plus zero times the rest, so you get vᵀ. The zero column times a_11 is zero, plus H_{n-1}u, which is v, so we get v. And zero times vᵀ is zero, plus H_{n-1} A_1 H_{n-1} in the trailing block. So what have we got? In the first column we have a_11 and then the vector v; similarly, in the first row we have a_11 and then vᵀ. And what is the structure of the v we started with? The first entry of v is the norm of u and all the other entries of v are zero; that means below the second entry from the top of the first column, everything else is zero.
So that is what we get here. Now we rename. See, in this whole process a_11 has remained unchanged: it has not been operated upon by anything, because the first column and first row of P_1 are the same as those of the identity, so a_11 has been left unchanged. We rename a_11 as d_1, and whatever is the first entry of v we name e_2, below which everything else is zero; out of symmetry the same e_2 will be sitting in the first row, to the right of which there are all zeros. The (2,2) diagonal entry we now call d_2. In the next step the leading block will remain unchanged: in the first step a_11 remained unchanged, and in the second step this leading 2 x 2 part will remain unchanged. What do we do in the second step? In the first column, everything below the top two entries has become zero; in the next round, in the second column, we want to make everything below the top three entries zero. This is the process by which the matrix is made tridiagonal.
So we consider the vector sitting below the two top entries of the second column, call it u_2, and construct a corresponding v_2 which has the same magnitude as u_2 and all but the first entry zero. The size of the vectors u_2 and v_2 is n - 2. Then we construct the next Householder transformation matrix, of size (n-2) x (n-2), and enhance it with an identity matrix of size 2 x 2 in the leading position, with the corresponding zero blocks completing the size. Applying this on both sides of the previous result keeps the leading 2 x 2 block unchanged, and you get the next step, which has d_1, d_2, d_3, e_2, e_3 in the correct places; the first two columns and the first two rows have now been processed to the extent that below the subdiagonal and above the superdiagonal we have zeros in those first two columns and rows.
Like that we keep on conducting steps, with smaller and smaller Householder matrices in the trailing part and identity matrices of gradually increasing size in the leading part. After the j-th such step, the matrix up to that point has been converted to tridiagonal form, while the remaining trailing block is still full. As we go on conducting such steps, at the end of n - 2 steps we have the complete transformation, P_1 through P_{n-2}, which results in a completely symmetric tridiagonal matrix, which will look like this.
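The whole reduction described above can be sketched as follows; this is a plain, unoptimised illustration of the procedure (a production code would choose the sign of v's first entry to avoid cancellation when u is nearly along the first coordinate direction):

```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a symmetric matrix to symmetric tridiagonal form
    by n - 2 Householder similarity transformations."""
    T = np.array(A, dtype=float)
    n = T.shape[0]
    for j in range(n - 2):
        u = T[j + 1:, j].copy()          # column below the diagonal
        normu = np.linalg.norm(u)
        if normu == 0.0:
            continue                      # already zero, nothing to do
        v = np.zeros_like(u)
        v[0] = normu                      # v = (||u||, 0, ..., 0)^T
        if np.allclose(u, v):
            continue                      # u already in the required form
        w = (u - v) / np.linalg.norm(u - v)
        H = np.eye(n - j - 1) - 2.0 * np.outer(w, w)
        P = np.eye(n)                     # embed H in the trailing block
        P[j + 1:, j + 1:] = H
        T = P @ T @ P                     # P is symmetric and orthogonal
    return T
```

Being a product of orthogonal similarity transformations, the result has the same eigenvalues as the input matrix.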
Let us see a quick example. This is a 5 x 5 symmetric matrix; the trailing part is what we called A_1 there. In order to reduce it to symmetric tridiagonal form, we would like to have three zeros in the last three locations of the first column. So we take u as the vector [4, 1, 2, 1]ᵀ, and we want a v in which the last three entries are zero. And what is the first entry? The first entry is the magnitude of this vector u. What is that magnitude? The sum of squares is 16 + 1 + 4 + 1 = 22, so the first entry of v is √22. With this u and this v it is easy to find w: the difference of u and v, divided by whatever is its magnitude. With that we find w, then we work out 2wwᵀ and subtract it from the identity, and that matrix is our 4 x 4 Householder transformation matrix, which will be sitting in the trailing block.
Let us call it H_4, with a one at the top-left of P_1 and zeros in the remaining row and column. When this matrix is multiplied on both sides of the given matrix, the transformation that takes place makes the first column below the diagonal √22, 0, 0, 0, and similarly the first row to the right of the diagonal √22, 0, 0, 0. Then we find that this much is secured, and whatever three-dimensional vector now sits below the (2,2) entry is taken as the next u; the next v is taken as [something, 0, 0]ᵀ, where that something is the magnitude of this u. Then, through the similar process, the Householder transformation matrix in this case has the block form with I_2 in the leading position, H_3 in the trailing position, and zero matrices of sizes 3 x 2 and 2 x 3 completing it, and so on.
When this is multiplied on both sides, you get the new entry here and zeros below it; the third step makes the last below-subdiagonal entry zero, and whatever happens on one side happens on the other side also, so you get a symmetric tridiagonal matrix like this.
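For the first step of this example we can check the numbers: with u = [4, 1, 2, 1]ᵀ, the constructed H_4 indeed sends u to [√22, 0, 0, 0]ᵀ (the rest of the 5 x 5 matrix is not needed for this check):

```python
import numpy as np

u = np.array([4.0, 1.0, 2.0, 1.0])      # first column below the diagonal
v = np.zeros(4)
v[0] = np.linalg.norm(u)                # sqrt(16 + 1 + 4 + 1) = sqrt(22)

w = (u - v) / np.linalg.norm(u - v)
H4 = np.eye(4) - 2.0 * np.outer(w, w)

print(v[0] ** 2)                        # 22, so v[0] = sqrt(22)
print(H4 @ u)                           # maps u to (sqrt(22), 0, 0, 0)
```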
Now the question is: after we have reduced the matrix to this symmetric tridiagonal form, what do we do with it? That is, is the solution of the eigenvalue problem of a symmetric tridiagonal matrix at all simpler compared to that of the original symmetric matrix? The answer is yes. There are several ways one can handle such symmetric tridiagonal matrices; one way we consider now, and the other way we will consider in the next lecture. There is a very interesting piece of theory which tells you how to work out the characteristic polynomials of the submatrices, that is, the leading 1 x 1 submatrix, the leading 2 x 2 submatrix, the leading 3 x 3 submatrix and so on; form a sequence out of these characteristic polynomials; and then solve the eigenvalue problem based on the interesting properties of that sequence.
So what will be the characteristic polynomial of this matrix T? For that we have to find the determinant of λI - T: this is the characteristic polynomial, with λ - d_1, λ - d_2, and so on sitting in the diagonal places, and -e_2, -e_3, and so on sitting in the off-diagonal places. Note that d is indexed from 1 to n, while the subdiagonal and superdiagonal entries e, which are one fewer in number, are indexed starting from 2. They could have been indexed e_1 to e_{n-1}, which would be equivalent, but in this analysis we index them from e_2 to e_n, so there is nothing called e_1 here. Fine. With this, you find that the characteristic polynomial of the leading 1 x 1 part is simply λ - d_1. We call it p_1: p_1(λ) is the characteristic polynomial of the leading 1 x 1 submatrix of T, and it is simply λ - d_1.
Then for the leading 2 x 2 submatrix we get the characteristic polynomial (λ - d_2)(λ - d_1) - e_2². In place of λ - d_1, can we simply put p_1(λ)? We can, so we write p_2(λ) = (λ - d_2) p_1(λ) - e_2². Similarly we can work out p_3, p_4 and so on, but let us go one large step and try to determine p_{k+1}(λ) in terms of p_k(λ) and p_{k-1}(λ); that will establish a recursion among all these characteristic polynomials of the leading submatrices.
As we try to do that, let us write down the same matrix appearing there, but not all the way up to λ - d_n; only up to λ - d_{k+1}. When we expand this determinant along the last column, what do we find? We find it is λ - d_{k+1} times the leading determinant of one order less, minus the entry -e_{k+1} times the determinant that we find by crossing out its row and its column; all the other entries in that column are zeros.
So we get (λ - d_{k+1}) times the determinant which is the same as the characteristic polynomial of the submatrix one order less, that is, (λ - d_{k+1}) p_k(λ). Then comes the cofactor term with -e_{k+1}, times something we must find out. That something is the determinant found by removing the corresponding row and column. After the removal, the remaining determinant has diagonal entries λ - d_1, λ - d_2, λ - d_3, and so on, up to λ - d_{k-1}; and in the remaining last row, every entry other than -e_{k+1} is zero. That means the determinant we are asking for is -e_{k+1} times the leading (k-1) x (k-1) determinant, which is p_{k-1}(λ), the characteristic polynomial of the matrix of one further order less. The minus sign from the cofactor expansion now combines with these: e_{k+1} appears once more, so it becomes a square, and one minus sign finally remains. So we obtain

p_{k+1}(λ) = (λ - d_{k+1}) p_k(λ) - e_{k+1}² p_{k-1}(λ).

This recursive relationship defines everything up to p_n in terms of the earlier ones: p_3 gets defined in terms of p_1 and p_2, p_4 in terms of p_2 and p_3, and so on.
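The recursion can be coded directly; this sketch evaluates the whole sequence p_0(λ), ..., p_n(λ) at a given point (the Python list e here stores the lecture's e_2, ..., e_n):

```python
def sturm_sequence(d, e, lam):
    """Evaluate p_0(lam), ..., p_n(lam) for the symmetric tridiagonal
    matrix with diagonal d[0..n-1] and off-diagonal e[0..n-2],
    using the three-term recursion
    p_{k+1} = (lam - d_{k+1}) p_k - e_{k+1}^2 p_{k-1}."""
    p = [1.0, lam - d[0]]                # p_0 = 1, p_1 = lam - d_1
    for k in range(1, len(d)):
        p.append((lam - d[k]) * p[k] - e[k - 1] ** 2 * p[k - 1])
    return p
```

For instance, for the 2 x 2 matrix with d = (2, 5) and e_2 = 1, the value p_2(0) agrees with the determinant (0 - 5)(0 - 2) - 1 = 9.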
At the top we also put a dummy element in the sequence in order to complete it: p_0 = 1. Then p_0 has no roots; p_1 has one root, which is d_1; p_2 has two roots, which we can find out; and so on, so finally p_n has n roots. As we construct this sequence, it turns out to have some interesting properties. The recursive expression above helps us evaluate these polynomials extremely fast. Beyond that, this sequence of polynomials of increasing degree has a further property, called the Sturmian sequence property, which it possesses if all the e_j, all the subdiagonal and superdiagonal entries, are nonzero. In that case the sequence p_0, p_1, p_2, and so on, the sequence of all these polynomials, has an interesting property.
Now, the rest of our process will directly depend on that property, but before that we need to ascertain what to do if some e_j is zero. If for some j the entry e_j is zero, that is, some of the subdiagonal and superdiagonal entries turn out to be zero, that is actually good news, because then we can split the matrix. We have d_1, d_2, up to d_n, and e_2, e_3, up to e_n; the other entries are already zeros. If some e is zero, in the subdiagonal position as well as the superdiagonal one, then it obstructs us from using the succeeding formulation for the complete matrix; but these two zeros actually help in treating the matrix in two parts, because then we have the complete matrix in the form of a block diagonal matrix, with those two zeros sitting at the junction. Earlier, if these were nonzero, we had one huge n x n matrix; now these two zeros decouple the two subspaces completely, and we actually have a block diagonal matrix: this is one block and that is another block. So whenever we have e_j equal to zero at some location, we can always split the matrix into smaller blocks such that we can consider each block separately. Having some e_j as zero thus helps us split the matrix into smaller matrices, until each such block has nonzero e_j's all through. So we need to consider only those cases which have nonzero off-diagonal entries, for which the rest of the theory holds.
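The splitting can be sketched as a small routine on the diagonal d and off-diagonal e (stored as Python lists; the function name is my own):

```python
def split_blocks(d, e):
    """Split a symmetric tridiagonal matrix (diagonal d, off-diagonal e)
    into independent (d, e) blocks at the positions where e is zero."""
    blocks, start = [], 0
    for j, ej in enumerate(e):
        if ej == 0.0:
            blocks.append((d[start:j + 1], e[start:j]))
            start = j + 1
    blocks.append((d[start:], e[start:]))
    return blocks
```

Applied to a matrix with a single zero off-diagonal entry, this returns the two decoupled blocks, each of which can then be processed separately.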
Now, what is that particular property? The Sturmian sequence property says that the roots of p_{k+1} interlace the roots of p_k. What does that mean? Suppose the roots of p_4 sit at the locations 1, 5, 7, 9. Then the next one, p_{k+1}, here p_5, which has one more root, five roots, will certainly have one root below 1, another root between 1 and 5, another between 5 and 7, another between 7 and 9, and the fifth root above 9. That means the roots of p_{k+1} interlace the roots of p_k, which in turn interlace the roots of p_{k-1}, and so on. This is the interlacing property, which is shown mathematically like this, and this property leads to a convenient procedure for finding the eigenvalues.
Now, I will skip the proof of this particular property, but I will just give you the line of the proof, and I strongly advise you to go through the proof quite carefully, in the textbook or in these slides, which are available on the internet, because the proof has an inherent beauty in it. The line of the proof is as follows. First consider the case k = 1. For p_1 itself the situation is trivial, because there is only one root and there is nothing to interlace; so you verify that the roots of p_2 interlace the root of p_1, that is, the single entry d_1 is interlaced by the eigenvalues of the leading 2 x 2 matrix with rows (d_1, e_2) and (e_2, d_2). This you verify, and that shows that the statement is true for k = 1. Next you assume that the statement is true for k = i; then you denote the roots of p_i as alphas, the roots of p_{i+1} as betas, and the roots of p_{i+2} as gammas.
As you assume the statement to be true for k = i, you assume that the betas interlace the alphas, that is, the i + 1 betas interlace the i alphas; on the number line you can show the alphas as crosses and the betas as bars, and the picture looks like this. Then you need to show that, in turn, the gammas interlace the betas, that is, the i + 2 gammas interlace the i + 1 betas, and that you establish based on this consideration and the changes of sign of the succeeding polynomials at these roots. The rest of the proof I will omit here in the class, but I strongly suggest that you go through it a little carefully.
We will go rather to the procedure. We examine the sequence p_0, p_1, p_2, p_3, up to p_n, at different values of a point w, knowing that p_{k-1}, p_k and p_{k+1} have their roots located in this interlaced manner. Note that one question we never raise is whether the roots are real or not, because the matrix is symmetric, so all the roots, all the eigenvalues, are real; that is anyway known.
So if p_k has this kind of relationship with p_{k-1} and p_{k+1} in their roots, then one thing is very clear: if p_k(w) and p_{k+1}(w) have opposite signs, then the numbers of their roots above w differ by just one. Why? Suppose w falls here, and p_k(w) has a certain sign while, at the same w, p_{k+1} has the opposite sign; interlacing means the numbers of roots above w can differ at most by one. Note that at infinity all the p_k's evaluate to plus infinity: each is a product of factors of the form λ minus something, so for very large λ all of these polynomials are positive, and the moment a root is encountered, the sign changes, for each of them.
Because of this interlacing property, it is impossible that one of them has encountered many roots while the next one has encountered none. So p_k and p_{k+1}, two consecutive members of this sequence, having opposite signs at w means that the higher one has one root more than the lower one above w. Now we will find that the number of roots of p_n above w is the number of sign changes in the sequence from one end to the other. If, compared to p_n, p_{n-1} does not change sign, that means p_n and p_{n-1} have the same number of roots above w; then, if from p_{n-1} to p_{n-2} there is a sign change, we know that p_{n-2} has one root less; and so on. So in this entire sequence, the number of sign changes at w tells us the number of roots of p_n above w; p_0 has no roots, so the number of changes tells you at the end how many roots p_n has above w.
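This sign-change count can be sketched as follows; it evaluates the sequence on the fly and counts changes (for simplicity this sketch assumes no p_k vanishes exactly at w; a robust code needs a convention for zero values):

```python
def count_roots_above(d, e, w):
    """Number of roots of p_n above w = number of sign changes
    in the Sturm sequence p_0(w), ..., p_n(w)."""
    p_prev, p_curr = 1.0, w - d[0]
    changes = 1 if p_curr < 0 else 0     # p_0 = 1 is positive
    for k in range(1, len(d)):
        p_next = (w - d[k]) * p_curr - e[k - 1] ** 2 * p_prev
        if p_next * p_curr < 0:
            changes += 1
        p_prev, p_curr = p_curr, p_next
    return changes
```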
Now, if we do this operation at w = a and then at w = b, then we know how many roots p_n has above a and how many above b, and the difference of the two numbers tells us how many roots p_n has in the interval from a to b. If at a particular value in this entire investigation p_n turns out to be zero, we know that that value is itself a root. After finding in this way how many roots lie in the interval from a to b, we can consider (a + b)/2 and then see, out of those roots, how many are in the lower half, from a to (a + b)/2, and how many are in the upper half, from (a + b)/2 to b, and so on. Like this we can repeatedly use bisection to squeeze out each of these roots separately, and then use bisection itself to go on squeezing the interval until we find the root to our required accuracy; or, rather than bisection, we can use some other equation-solving process after locating and separating all the roots.
With what interval do we start? If we start from minus infinity to infinity, it will be very difficult to process the whole thing. There is a little trick for starting the process if you want to solve for all the eigenvalues, and it tells you this: the magnitudes of all the eigenvalues λ are bounded by the quantity λ_bnd, the maximum over all rows of the sum of magnitudes of the entries of the row, |e_j| + |d_j| + |e_{j+1}|. Take all their magnitudes, and whatever is the maximum of that sum over all the rows, no λ, no eigenvalue of the matrix, can have a magnitude larger than that. So if you take the initial interval from -λ_bnd to λ_bnd, then all the eigenvalues are bound to fall within it, and then you can go on applying bisection in order to separate the roots, the separate eigenvalues; and once you have separated them, then for solving for them you can apply either bisection itself or some other equation-solving or root-finding process. That gives you the algorithm.
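This starting bound might be computed as follows (λ_bnd is my own label for this bound; the missing e's at the two ends of the matrix are treated as zero):

```python
def eigenvalue_bound(d, e):
    """max over rows of |e_j| + |d_j| + |e_{j+1}|; every eigenvalue
    of the symmetric tridiagonal matrix lies in [-bound, bound]."""
    n = len(d)
    bound = 0.0
    for j in range(n):
        left = abs(e[j - 1]) if j > 0 else 0.0
        right = abs(e[j]) if j < n - 1 else 0.0
        bound = max(bound, left + abs(d[j]) + right)
    return bound
```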
First, identify the interval [a, b] of interest. The interval of interest can be the entire interval from -λ_bnd to λ_bnd, if you are interested in finding all the roots, all the eigenvalues; or sometimes your problem may suggest that you are interested in eigenvalues in a given domain only, in a given interval only, and you are not bothered with the rest of the eigenvalues, which may fall outside that interval. In that case you take that interval as [a, b]; otherwise you take the larger interval in which you are assured that all eigenvalues will lie. Next, for a degenerate case, in which some subdiagonal or superdiagonal entry of the symmetric tridiagonal matrix is zero, you split the given matrix and operate separately with the different blocks. For each of the remaining non-degenerate blocks, you do just two things by repeated use of bisection and study of the sequence p(λ): you bracket, or separate, individual eigenvalues within small subintervals; and then, in these bracketed subintervals, by further use of bisection itself or some substitute, some other root-finding method, you determine the individual eigenvalue within each subinterval. When the interval becomes extremely small, say the interval size becomes equal to 0.00001, that means you have actually found the eigenvalue, and there is no further need to go on.
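Putting the pieces together, the whole procedure for one non-degenerate block can be sketched as below. This is an illustration under stated assumptions (all off-diagonal entries nonzero, and plain bisection carried all the way down rather than switching to a faster root finder at the end):

```python
def tridiagonal_eigenvalues(d, e, tol=1e-10):
    """All eigenvalues of a symmetric tridiagonal matrix (diagonal d,
    off-diagonal e, all e nonzero) by bisection on the Sturm sequence."""

    def count_above(w):
        # sign changes in p_0(w), ..., p_n(w) = eigenvalues above w
        p_prev, p_curr = 1.0, w - d[0]
        changes = 1 if p_curr < 0 else 0
        for k in range(1, len(d)):
            p_next = (w - d[k]) * p_curr - e[k - 1] ** 2 * p_prev
            if p_next * p_curr < 0:
                changes += 1
            p_prev, p_curr = p_curr, p_next
        return changes

    # initial interval from the row-sum bound on eigenvalue magnitudes
    bnd = max(abs(d[j]) + (abs(e[j - 1]) if j > 0 else 0.0)
              + (abs(e[j]) if j < len(d) - 1 else 0.0)
              for j in range(len(d)))
    eigs = []
    for m in range(1, len(d) + 1):       # m-th eigenvalue from the top
        a, b = -bnd - 1.0, bnd + 1.0     # count_above(a) >= m > count_above(b)
        while b - a > tol:
            mid = 0.5 * (a + b)
            if count_above(mid) >= m:
                a = mid
            else:
                b = mid
        eigs.append(0.5 * (a + b))
    return sorted(eigs)
```

Each value of m maintains the invariant that at least m roots lie above a and fewer than m lie above b, so the bisection squeezes the m-th largest eigenvalue.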
So in this lesson, what are the important points that we should keep in focus? The first point is that the Householder matrix is symmetric and orthogonal, and it effects a reflection transformation. Second, a sequence of Householder transformations can be used to convert a given symmetric matrix into symmetric tridiagonal form. Third, the characteristic polynomials of the leading square submatrices then form a Sturmian sequence, which has an interlacing structure in its roots, and this property can be used to separate and bracket the eigenvalues and then solve for them in a systematic manner. Now, we have a little time in hand.
So let us consider a quick example, at least halfway, after which you can proceed on the example yourself. Suppose you have got this 7 x 7 matrix: the diagonal entries are 2, 3, 2, 5, 4, 3, 5, and the superdiagonal entries are 1, 0, 1, 2, 2, 1; the subdiagonal entries are the same, and all other entries are zero. So this is a symmetric tridiagonal matrix, possibly obtained after a series of Householder transformations, and this is the matrix we are going to solve for eigenvalues. The two zero off-diagonal entries allow us to split the matrix, so there is a 2 x 2 component and a 5 x 5 component. The 2 x 2 component is actually nothing: you can solve it from the definition itself, because that will involve only the solution of a quadratic. The 5 x 5 component would otherwise involve the solution of a quintic equation, which is more difficult, so for it we apply the methodology based on the Sturmian sequence property. So for that
we construct the polynomials. The first one is trivial: p_0 = 1. The second is p_1 = λ - d_1, that is, λ - 2. Next, p_2 = (λ - d_2) p_1 - e_2², that is, (λ - 5) p_1 - 1. Next we have p_3 = (λ - d_3) p_2 - e_3² p_1; and what is e_3 here? It is 2, so p_3 = (λ - 4) p_2 - 4 p_1. Then p_4 = (λ - d_4) p_3 - e_4² p_2, with e_4 = 2, so p_4 = (λ - 3) p_3 - 4 p_2. Finally p_5 = (λ - d_5) p_4 - e_5² p_3, with e_5 = 1, so p_5 = (λ - 5) p_4 - p_3. These we will now evaluate at different values of λ to locate the roots of the polynomials. First, which interval do we need to consider? You will have noticed the rows with the largest sums: 1, 5, 2 and 2, 4, 2, each summing to 8. That means no eigenvalue of this matrix can have magnitude higher than 8, so we consider the interval from -8 to +8.
So at λ = -8 we try to evaluate the sequence. p_0 is 1, as for all of them. p_1 = -8 - 2 = -10. For p_2, λ - d_2 = -8 - 5 = -13, into the -10 already in hand, that is 130, minus 1, and we get 129. Then you come to p_3: -8 - 4 is -12, so -12 into the p_2 you have already got, minus 4 into p_1, that is, (-12)(129) - 4(-10) = -1508, which is negative. Then p_4 turns out to be positive, and p_5 negative. I suggest that you verify and check that these turn out to be negative, positive, negative in this way; you will then find that there are five sign changes from top to bottom. That means p_5 has five roots above -8, that is, all the five roots are above -8. Then consider the case λ = 8. p_0 is 1, and as you put 8 in, you get p_1 = 8 - 2 = 6, positive.
then you put eight here three into p one that
is three into six eighteen minus one so you
get seventeen then you come here eight minus
four that is four then four into p two that
is four into seventeen sixty eight minus four
into p one that is minus twenty four you will
get forty four still positive like this you
will find that in this case all of them turn
out to be positive that
will mean the following the number of roots
of p five above the value eight is the same
as the number of sign changes here there is
no sign change so there is no root above eight
up to this we have verified that all the five
roots are above minus eight and no root is
above plus eight so we have verified that bound
that is all the roots actually lie within
minus eight and eight right now applying bisection
you try to find out the number of roots above
zero so how many are above zero this is one
as you put zero here you get minus two then
as you put zero here you get minus five into
minus two that is plus ten minus one that is
plus nine
then you come here and you find minus four
into nine that is minus thirty six and minus
four into minus two that is plus eight so minus
thirty six plus eight is minus twenty eight
like this as you continue you will find that
this turns out to be positive and this turns
out to be negative that will show that the
number of sign changes at lambda equal to zero
for this sequence of polynomials is one two
three four five so all five roots are above
zero that is all the roots are positive so
this gives you a little further information
that all the roots are within the interval
zero to eight in particular this is a positive
definite matrix because all the eigenvalues
are positive
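incidentally positive definiteness can also be read off from the same sequence at lambda equal to zero since p k of zero equals det of minus A k which is minus one to the power k times det of A k for the leading k by k minor A k and sylvester's criterion only needs these minors to be positive; the entries below are reconstructed from the worked values so treat them as an assumption

```python
# leading principal minors via the sturm sequence at lambda = 0
# entries reconstructed from the worked example (an assumption)
d = [2, 5, 4, 3, 5]
e = [1, 2, 2, 1]

p = [1, -d[0]]                      # p0(0), p1(0)
for i in range(1, len(d)):
    p.append(-d[i] * p[-1] - e[i - 1] ** 2 * p[-2])

minors = [(-1) ** k * p[k] for k in range(1, len(p))]
print(minors)  # [2, 9, 28, 48, 212] -> all positive, so the matrix is positive definite
```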
now what you will do for bisection you will
evaluate the sequence of polynomials at lambda
equal to four as you evaluate at lambda equal
to four you will find that you get one then
two and then here minus two minus one that
is minus three and then at lambda equal to
four the factor lambda minus d three is zero
so p three is minus four into p one that is
minus eight then here one into minus eight
that is minus eight and minus minus makes plus
twelve so that means four finally here minus
one into four is minus four and minus minus
makes plus eight so minus four plus eight that
is plus four
so how many sign changes there is one sign
change here and another sign change here so
two sign changes at lambda equal to four that
means above four p five will have two roots
and below four you will have three so you have
started bracketing there will be three roots
in the interval zero to four and two roots
in the interval four to eight next you will
go on splitting these intervals for finding
the eigenvalues in this interval you will evaluate
at two and then possibly at one or three and
so on and similarly here like this you go on
subdividing the intervals till you have separated
intervals each containing exactly one root
of p five and further continuation of the same
process will squeeze the root for you
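the subdivision just described can be automated; here is a sketch assuming the entries d and e reconstructed from this worked example and the tolerance of point one suggested as an exercise below

```python
d = [2.0, 5.0, 4.0, 3.0, 5.0]   # reconstructed diagonal (an assumption)
e = [1.0, 2.0, 2.0, 1.0]        # reconstructed off-diagonal (an assumption)

def roots_above(lam):
    # sign changes in the sturm sequence = number of roots of p5 above lam
    p = [1.0, lam - d[0]]
    for i in range(1, len(d)):
        p.append((lam - d[i]) * p[-1] - e[i - 1] ** 2 * p[-2])
    changes, prev = 0, p[0]
    for v in p[1:]:
        if v != 0 and (v < 0) != (prev < 0):
            changes += 1
        if v != 0:
            prev = v
    return changes

def isolate(lo, hi, tol=0.1):
    # split (lo, hi] until each piece of width <= tol holds exactly one root
    n = roots_above(lo) - roots_above(hi)
    if n == 0:
        return []
    if n == 1 and hi - lo <= tol:
        return [(lo, hi)]
    mid = 0.5 * (lo + hi)
    return isolate(lo, mid, tol) + isolate(mid, hi, tol)

for a, b in isolate(-8.0, 8.0):
    print(f"one eigenvalue in ({a}, {b}]")
```

continuing the bisection inside each returned interval then squeezes out the individual eigenvalues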
so i suggest that you continue this process
till you find the eigenvalues with an accuracy
of point one that will give you enough practice
and you will find that the method works quite
comfortably there was a small error in the
comfortably there was a small error in the
calculation in the board work so please note
this correction here what you saw in the board
was this we were um analysing the eigenvalue
problem of this problem and this this is what
appeared on the board and in this ah and there
is a correction this forty nine forty for
p two at lambda equal to minus eight was not
right the correct calculation shows that it
should be one twenty nine the result of which
was that the next three signs were also mistaken
and the next three signs will be this way
minus plus and minus and with this as we will
notice that for lambda equal to minus eight
there are five sign changes and that means
that all five roots are above minus eight
and then for lambda equal to eight there is
no sign change and that shows that no root
is above eight and in between through bisection
then you will evaluate at lambda equal to
zero in which case you will find that all
five roots are above zero
so from the first two columns in this data
for lambda equal to minus eight and lambda
equal to eight you basically get the verification
of the bounds of minus eight and eight for
all the eigenvalues and the third column for
lambda equal to zero shows that all the five
eigenvalues are positive which means the matrix
is positive definite other than this everything
else is all right in the board work
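the corrected column for lambda equal to minus eight can also be checked mechanically, again with the reconstructed entries d and e treated as an assumption

```python
# recomputing the sturm sequence at lambda = -8 with the corrected p2
d = [2, 5, 4, 3, 5]   # reconstructed diagonal (an assumption)
e = [1, 2, 2, 1]      # reconstructed off-diagonal (an assumption)
lam = -8
p = [1, lam - d[0]]
for i in range(1, len(d)):
    p.append((lam - d[i]) * p[-1] - e[i - 1] ** 2 * p[-2])
print(p)  # [1, -10, 129, -1508, 16072, -207428]: p2 is 129 and the signs alternate
```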
thank you
