>> I hope everybody had a nice
holiday weekend and got rested
up for the spring finish.
We have 2 more weeks, and
for the first time
in I guess 3 weeks we're going
to have a full schedule
this week and then also next week,
and as you may recall
last week before we left
for the break we were
talking about various kinds
of matrix manipulations and
some matrix mathematics.
So, what we're going to do today
at the start is we'll finish
up with the background
on matrix manipulations
and in particular
we're going to talk
about a very important
class of matrix equations
which are called
eigenvalue equations.
So, I'm going to start by
sketching what the problem is
on the board and then we'll
explore what I explained
to you now using Mathematica.
Okay, so eigenvalue
equations are ones
for which we have a matrix which
I'll call capital A and denote
with double squiggles
and I'm going
to assume this is a
square matrix with little n rows
and little n columns,
and now if we multiply
that by a vector,
an n-dimensional vector.
So this is going to be a vector
that has N rows in 1 column
and if that can be
written as a constant,
which I'll call lambda,
times the vector itself this
is what's called an eigenvalue
equation and the lambda is
what's called the eigenvalue.
Now, these types of
equations arise in many,
many places in physics and
chemistry, and those of you
who are in Chem 131
right now know
about eigenvalue
problems in the context
of the Schrödinger equation.
What you're going to learn
about in Chem 131B is a matrix
formulation of quantum mechanics
in which the problem can
be expressed in this form:
the problem of determining
say the electronic structure
of a molecule and the
energies of the orbitals.
In fact, we're going to do
an example shortly probably
tomorrow from orbital theory of
aromatic molecules and that type
of problem is one that you
will cover in some detail
in physical chemistry, but the
bottom line is there's many,
many problems that can
be expressed in this form
and so it's useful to
understand how it is
that you can determine these
values lambda because that's one
of the quantities that
you're typically interested
in determining in
eigenvalue problems and then
as well the values
of these vectors,
which are called eigenvectors
is also part of the problem.
Okay, now just a quick crash
course on what one actually does
if you want to try to
solve this problem.
So, first of all we
can write this equation
as A minus lambda times the
identity matrix times the
eigenvector, equal to 0.
So all I did was move the
lambda X over to the left side,
and lambda times
the identity matrix is just a
matrix whose diagonal
elements are lambda.
So this is the N by N identity.
All right so now what we want
to do is solve such an equation
to determine the lambdas
that will satisfy that equation.
One possible solution is that X
is a vector with all 0s, N 0s,
and that's what we call
the trivial solution
because it's not interesting.
So for non-trivial solution
the way we can get lambdas
that satisfy this equation
is that it amounts to saying
that the determinant of A
minus lambda times the identity
matrix, is equal to 0.
Okay. Now, it may not be obvious
but when you actually
form the determinant
and solve this equation
what this is going
to give you is N roots
which we can label lambda i,
i equals 1 to N, because this
determinantal equation here will
give you an Nth-order
polynomial.
[ Pause ]
In lambda.
All right.
So that's essentially how
you determine the lambdas.
Now, to determine the
eigenvectors, the various Xs,
so now there are going to
be as well N vectors X
that satisfy the equation,
1 for each lambda.
We need to do a little
bit more work.
So, recalling that there's
1 of these equations for each lambda i,
and we can actually collect
all of them together by saying
that A times now a
matrix, capital X is equal
to a matrix lambda
times capital X
and what this X is here this is
a matrix formed by taking each
of the Xs here so there's
X1, X2, X3 dot, dot,
dot XN and I'm going to put them
as columns into this X matrix.
So I'll have X1 vector, X2
vector, dot, dot, dot to XN.
So these are each N long so
we've formed a square matrix
that contains all of the Xs
that satisfy this equation.
All right?
Then this lambda is an N
by N diagonal matrix
with elements lambda i.
So we'll see this explicitly
in a couple of minutes.
So this now provides the
basis for determining the Xs
and here's how we do it.
So first thing we do is
we're going to multiply
through both the left and the
right hand side by the inverse
of X. So if we do that, then
we get X to the minus 1 A X is equal
to X to the minus 1 X lambda,
since lambda is a
diagonal it can be moved
out here if we want,
and X to the minus 1 times X is
just the identity matrix.
So then this gives us lambda.
Okay. So, here's the
punch line then right here
for how you determine the Xs.
What you want to do is find a
matrix X that by the process
of sandwiching A in
between its inverse
and X you create
a diagonal matrix.
So in the language of
eigenvalue problems,
the matrix of eigenvectors
X is the one
that diagonalizes the matrix A.
That's the language that we use.
By this particular process
of diagonalizing A,
sandwiching A
between X to the minus 1
and X, we get a diagonal matrix.
This has a name it's called
a similarity transform.
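The similarity transform just named can be checked numerically. The lecture does this in Mathematica; as a hedged sketch, here is the same computation in NumPy, using an arbitrary matrix of my own rather than one from the lecture:

```python
import numpy as np

# An arbitrary square matrix (illustrative; not a matrix from the lecture).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Columns of X are the eigenvectors of A; lam holds the eigenvalues.
lam, X = np.linalg.eig(A)

# Similarity transform: X^-1 A X should be diagonal,
# with the eigenvalues along the diagonal.
D = np.linalg.inv(X) @ A @ X

# Off-diagonal entries are zero up to floating-point noise.
print(np.round(D, 10))
```

The rounding at the end plays the same cleanup role that Chop will play later in the lecture.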
[ Pause ]
Okay so just to recapitulate
how it is
that we solve eigenvalue
problems
and you'll see we don't
actually have to solve them
because Mathematica will
do it for us easily,
but just to give
you the background.
We have a general problem
like this appears
many places you'll see
in physical chemistry.
You solve this polynomial
equation and you get the lambdas
and then you find the
matrix X that diagonalizes A
and then this gives you
the columns corresponding
to the vectors X1 through XN.
All right so let me just
give you one quickie preview
of what we're going to
do probably tomorrow.
So the problem tomorrow is
going to be, this is going
to be Hamiltonian, this is
going to be wave function
or orbital coefficients
specifically, this is going
to be an energy of the
orbital and then this is going
to be the orbital coefficients.
So you can write
quantum mechanical equations
for the energies and orbitals
of a molecule in matrix form.
In fact when you use Spartan,
all of you have used
Spartan I presume,
you're essentially solving such
an equation on the computer.
So we're actually going
to do that tomorrow,
but for now I just want
to explore some aspects
of the eigenvalue problem and
show you how to use Mathematica
to solve it with
some simple examples.
So let's go ahead and do that.
Turn on my screen here.
[ Pause ]
All right so we're going
to start really simple
and we'll just do little
2X2 matrix and so I'm going
to define the matrix
A equals curly, curly.
It's going to be the
first row will be 1,
2 and then the second
one will be 3, 4.
All right and now
we can look at A
in matrix form and
so there it is.
All right.
Now, we don't actually have
to do this to solve problems
but I just want to show you
in some detail how it works.
That equation over
there that says det
of A minus lambda times the
identity matrix equals 0 that's
what's called a secular
equation.
So let's go ahead and
see what that looks
like for our matrix here.
So I'm going to define secular
equals determinant A minus
and then we can put a lambda in
there if we want, lambda times
and to get an identity matrix
with N equal 2 we can say
identity matrix, bracket, 2.
So that just gives us 2X2 matrix
with 1s along the diagonal.
All right.
So if I enter that, notice I
get a quadratic second order
polynomial equation
for lambda, right?
So the secular equations
for an N by N matrix end
up being nth order polynomials
which we could solve if we want.
We'll see in a minute that
there's a quicker way to get
to the answer, but just to show
you how it works we could say
solve secular equals equals 0
and then we're solving
for lambda.
So how many roots should we get?
We should get 2 and those
are the 2 eigenvalues
that satisfy the equation.
That matrix times an
eigenvector is equal
to eigenvalue times
that eigenvector.
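That defining relation, A times an eigenvector equals the eigenvalue times the same eigenvector, is easy to verify directly. A NumPy translation for the lecture's 2x2 matrix (my sketch, not the lecture's Mathematica code):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # the 2x2 matrix from the lecture

lam, X = np.linalg.eig(A)    # eigenvalues and eigenvectors

# Check the defining equation A x = lambda x for each pair.
for i in range(2):
    v = X[:, i]              # NumPy stores eigenvectors as columns
    print(np.allclose(A @ v, lam[i] * v))   # True for both
```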
Just for completeness
I'll show you another way
that you could generate
the solution
or the secular equation
in Mathematica.
You could also say
secular equals
and there's a command called
characteristic polynomial.
It's a lot of typing.
Then you say what the matrix is
and then what the variable is.
So this gives exactly
the same thing
as here, where we formed it by hand.
We said give me the determinant
of a minus the eigenvalue
times the identity matrix.
This command here
basically just does that.
So the point of this is to show
you that, in fact, the solution
of the eigenvalue problem
for the eigenvalues consists
of finding the roots to
the secular equation.
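NumPy has a rough analog of this workflow, if you want to see it outside Mathematica: np.poly gives the coefficients of the secular (characteristic) polynomial and np.roots finds its roots. A sketch for the same 2x2 matrix (my translation, not the lecture's code):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Coefficients of det(lambda*I - A): here lambda^2 - 5*lambda - 2.
coeffs = np.poly(A)
print(coeffs)                 # approximately [1. -5. -2.]

# The eigenvalues are exactly the roots of that secular polynomial.
print(np.sort(np.roots(coeffs)))
print(np.sort(np.linalg.eigvals(A)))   # same values
```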
Okay, now there's an
even easier way to do it.
So in general, you're
not going to do this.
This is just for
illustrative purposes.
If you have a matrix,
a square matrix,
and you want the eigenvalues,
you don't need to go
through this business of setting
up secular equation
and solving it.
Mathematica has a nice little
command that allows you to cut
to the chase and so you could
say directly, for example,
lambda equals eigenvalues
of A. When you do that,
you see that you go
directly to the solutions
of the secular equation.
So when you want to
calculate eigenvalues
with Mathematica just use
the eigenvalues command.
Also you may want
the eigenvectors
and the way you get those is you
just ask for them, eigenvectors
of A. Now what should
we get here?
Let's think about what
it is that we should get.
Anybody want to venture a guess?
We have a 2X2 matrix so how
many eigenvectors are there?
Well, we've seen that
there's 2 eigenvalues
and for each eigenvalue there
should be an eigenvector
so we should get 2
eigenvectors and each
of them should have length 2.
Correct? All right let's
see then what we've got
and what we've got is, in
fact, a list of 2 eigenvectors.
The first one is here and
it has first element here,
the second one, and then the
second eigenvector is here.
All right now I personally
like to get the eigenvalues
and eigenvectors separately
because they tend to be useful
when they're separated,
but if you wanted them all
in 1 shot there's a
command called eigensystem.
So if you say eigensystem of A,
what you get is the
whole collection.
First a list of the
2 eigenvalues
and then 2 corresponding
eigenvectors.
So this eigenvector
corresponds to that eigenvalue
and this one to that one.
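For comparison, NumPy's np.linalg.eig plays the role of Eigensystem, returning the eigenvalues and eigenvectors in one call. One convention difference worth knowing (a NumPy detail, not something from the lecture): NumPy returns the eigenvectors as the columns of its second result, whereas Mathematica's Eigenvectors lists them as rows.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# One call returns both pieces, like Mathematica's Eigensystem.
lam, X = np.linalg.eig(A)
print(lam)        # the two eigenvalues
print(X[:, 0])    # eigenvector paired with lam[0] (a COLUMN, not a row)
print(X[:, 1])    # eigenvector paired with lam[1]
```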
All right.
Now the next thing I
want to do is I want
to show you we're going to
verify the similarity transform.
So what I want to
do is I want to show
that if I form a matrix
x whose columns are going
to be first this one
and then this one.
So what I'm going to do
is I'm actually going
to transpose these.
This is a row vector now.
I'm going to transpose it
into a column and I'm going
to pack those 2 into
a matrix I'm going
to call X then we're going
to calculate the inverse of X
and we'll multiply the inverse
of X times our original matrix A
and then times the matrix X
and we should get a diagonal
matrix whose elements are these
guys just to show
you that it works.
So here's how I'm going
to form the matrix X.
So X equals transpose
and then the eigenvectors
of A. Put a semicolon
and then we'll look at it
in matrix form just to
make sure it looks right.
So now notice this eigenvector
has been put like that
and it now shows up here.
That's X1.
Then this one has been
transposed and put in here, X2.
So X1, X2.
So that's our matrix
of eigenvectors.
Now let's go ahead
and get the inverse.
I'm going to say XINV
equals inverse
of X. Then what we'll
do is we'll say
XINV dot A dot X. I'm going
to put that in matrix form.
Okay. The similarity transform
tells us then what I should get
out is a diagonal matrix
whose 1, 1 element is here
and 2, 2 element is here.
[ Pause ]
And maybe we need to simplify.
Okay. So there you have it.
By putting this in here
we force Mathematica
to do some algebra
to clean things up.
So as advertised 1, 1 element is
the first eigenvalue and the 2,
2 element is the
second eigenvalue.
So you see that it works.
What I hope you see even more
than it works is how easy it is
when you have a tool
like Mathematica.
Has anyone in here solved
an eigenvalue problem by hand?
I know a couple of you have
had linear algebra and it tends
to be, well, for 2X2
it's not so bad, right?
It quickly becomes
very unpleasant
as you get bigger
and bigger matrices.
So this is kind of nice, huh?
All right so that's a simple
illustrative example using
a 2X2.
The next thing I want to do is
we'll just make a slightly more
complicated example of a
3X3 just to walk through
and do it one more time so
you see another example.
Then we'll have covered all
of the material that you need
to know to do your
homework so I'll go
over the homework
assignment with you.
All right so now I'm going
to define a 3X3 matrix.
I'll call it A again.
A equals curly, curly and
this one is going to be,
first row is 1, 2, 3,
second row is 2, 2,
2 and the third row is 4, 3, 3.
All right.
So let's just have
a quick look here.
By the way let me, I want to
preview a very common error
that can be made that can be a
little bit difficult to track
down when you're
working with matrices.
So I don't know if
you've noticed,
but when I enter a
matrix I enter the matrix
and then I look at
it in matrix form.
So let's do that here.
Okay. All right so there it is.
So this defines the matrix
properly and this lets me see it
in matrix form and once
I've got the matrix
in there I can do stuff with it.
So, for example, I can say
give me the determinant of A,
all right, and I get
a number as I should.
Now what you should not do is
you should not do something
like this.
Define a matrix in matrix form.
All right?
See the difference here?
Here I'm actually
setting A equal
to the matrix form
of this matrix.
This is a graphical object.
It's not a mathematical
object in Mathematica.
So watch what happens
if we do that.
It looks fine, but now if I try
to get the determinant
I get the determinant
of a graphical object, which
doesn't make any sense.
So this is just a
warning to be careful.
I think it's always nice to look
and make sure you entered your
matrix properly by looking at it
in this format, but always
do it after you define it.
Don't set anything
equal to a matrix form.
Otherwise it's unusable
for further calculations.
It's something that's done
commonly and when I was working
on the solutions to the
homework I did it myself
and just be aware that
that's a common slip up.
Okay. So let's go
back and clean this up
and put it back in matrix form.
Okay. All right.
Now I'm going to just go ahead
and directly get the eigenvalues
and we'll set them
equal to lambda.
That's a common notation.
All right.
Did I, make sure I entered this.
Okay. And notice that I get
something interesting all right.
Now it kind of looks
weird and it looks weird
because there was no
sort of exact form
for the roots of this.
It's going to be
a cubic equation,
or at least Mathematica's not
able to spit that out.
That's not a problem if you
don't mind working with numbers.
So what I could do
is say instead
of saying lambda is equal
to the eigenvalues of A,
I'll just go ahead and put
the N command out in front
and I'll get some numbers.
So I should get 3
numbers, 3 eigenvalues,
because I have a 3X3 matrix.
So now I've got the 3 numbers.
Okay. Now let's have a
look at the eigenvectors.
You see that also looks like
garbage so we can just go ahead
and put the N out in front
around it here and we get
as expected 3 vectors so
a list of 3 vectors each
of which has 3 elements because
we're working with a 3X3 matrix.
Okay. So if we were interested
in solving the eigenvalue
problem for the eigenvalues
and eigenvectors of this
matrix, then we're done.
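The same 3x3 problem can be reproduced numerically. A NumPy sketch (my translation), with two standard sanity checks: the eigenvalues of a matrix sum to its trace and multiply to its determinant.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 2.0, 2.0],
              [4.0, 3.0, 3.0]])   # the 3x3 matrix from the lecture

lam, X = np.linalg.eig(A)
print(lam)   # three numerical eigenvalues, as in the lecture

# Sanity checks: eigenvalues sum to the trace and multiply to the det.
print(np.isclose(lam.sum(), np.trace(A)))          # True
print(np.isclose(lam.prod(), np.linalg.det(A)))    # True
```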
See how easy it is?
But again just for fun and
for practice let's go ahead
and use these results,
manipulate them to see that,
in fact, the similarity
transform
as written over there works.
So once again I'll say
X equals transpose.
I'll leave the N in there
so I get nice numbers
and I'll put a semicolon
and then I'll look
at X in matrix form.
So there we have our
matrix of eigenvectors.
This is X1 corresponding
to the first eigenvalue,
X2 to the second,
and X3 to the third.
Okay. We can say XINV
equals inverse of X semicolon
and then finally let's check
the similarity transform,
which is XINV dot A
dot X equals, no, not equals,
let's just leave it like that,
and put it in matrix form.
[ Inaudible question ]
We'll get to that in a second.
All right and what we see is
that we don't get a diagonal
matrix of eigenvalues.
You can see that the eigenvalues
are, in fact, along the diagonal
but then there's these
other numbers in here kind
of polluting our
beautiful diagonal matrix.
Now these are likely just due
to numerical imprecision, okay?
So we can get rid
of that by putting
in something we've seen before,
which is called the chop
wrapper, which will remove,
eliminate numbers
that Mathematica
thinks are imprecise.
All right.
So we can just say give me chop
of that and now you see that,
in fact, we've got a nice clean
diagonal matrix whose diagonal
elements are the eigenvalues.
So once again proving that
the matrix X is the matrix
of eigenvectors and the
similarity transform is correct.
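Chop has no direct NumPy counterpart, but the same cleanup is one line. The helper name chop below is my own, mimicking Mathematica's Chop, not a NumPy function:

```python
import numpy as np

# A NumPy stand-in for Mathematica's Chop (hypothetical helper of mine):
# zero out entries that are nonzero only because of round-off.
def chop(M, tol=1e-10):
    M = np.asarray(M).copy()
    M[np.abs(M) < tol] = 0.0
    return M

noisy = np.array([[2.0, 1.3e-15],
                  [-4.4e-16, 5.0]])
print(chop(noisy))   # the tiny off-diagonal entries are now exactly 0
```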
All right?
So there's your introduction
to solving eigenvalue
problems using Mathematica.
Those of you who haven't had a
course in matrix mathematics,
linear algebra, can't appreciate
what a wonderful simplification,
what a nice tool this is, but I
assure you that, in fact, it is.
Now let's go ahead
and have a quick look
at the homework assignment
since we know everything we
need to know to do it now.
Okay, so the first problem
is a problem that is going
to involve doing
calculations with vectors.
So what I've given you in this
table are Cartesian coordinates
for the 3 atoms in
the molecule NOCl.
So this is the X, Y and Z coordinates
of nitrogen, oxygen
and chlorine.
Now each of these sets
of 3 coordinates defines
a position vector.
So, for example, I could say
RN equals 0, 0, 0 and RO equals
that and RCl equals that.
So RO, RN, RCl.
So what I want you
to do here is based
on these coordinates calculate
the bond length of the NO bond
and that amounts to
just the distance,
which is the magnitude of the
vector pointing from N to O,
which is the magnitude of the
difference between RO and RN.
So you're going to enter RN, RO,
RCl and calculate the difference
between RO and RN
and get its magnitude
and that's the bond
length of the NO bond
and then you're going to do
the same thing for the ClO bond
and then finally you're going
to calculate the bond angle,
which in terms of these
vectors is given here
which is just a rearrangement
of the relationship
between the dot product
and the cosine
of the angle between
two vectors.
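The bond-length and bond-angle recipe just described can be sketched in NumPy. The coordinates below are placeholders of my own, NOT the values from the homework table, which you should enter yourself:

```python
import numpy as np

# Placeholder coordinates (NOT the homework table's values).
# Each position is a 3-vector (x, y, z).
RN  = np.array([0.0, 0.0, 0.0])
RO  = np.array([1.16, 0.0, 0.0])
RCl = np.array([-0.54, 1.89, 0.0])

# Bond length: magnitude of the difference of the position vectors.
d_NO = np.linalg.norm(RO - RN)

# Bond angle at N: rearranged dot-product / cosine relation.
a, b = RO - RN, RCl - RN
angle = np.degrees(np.arccos(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
print(d_NO, angle)
```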
Okay? Everybody understand
what I'm asking for there?
All right.
Okay so the next problem this is
just a very simple one but just
to get a little practice.
So, in physics as you probably
know, the angular momentum,
which is indicated as the vector
L here is the cross product
between the position
and the momentum.
So if you define R as equal
to XYZ and momentum is equal
to momentum X, momentum Y,
momentum Z then L
will have components
that I want you to evaluate.
So, basically I just want you
to enter these 2 guys using
some reasonable notation
and get the cross product to see
what the components look like.
So that's a very
straightforward problem there.
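The cross-product components you should get can be checked numerically. A NumPy sketch with arbitrary test values of mine (the homework itself asks for the symbolic components in Mathematica):

```python
import numpy as np

# Numeric check of L = r x p, the angular momentum cross product.
r = np.array([1.0, 2.0, 3.0])      # (x, y, z), arbitrary test values
p = np.array([0.5, -1.0, 2.0])     # (px, py, pz), arbitrary test values

L = np.cross(r, p)
# Component formulas: Lx = y*pz - z*py, Ly = z*px - x*pz, Lz = x*py - y*px
Lx = r[1]*p[2] - r[2]*p[1]
Ly = r[2]*p[0] - r[0]*p[2]
Lz = r[0]*p[1] - r[1]*p[0]
print(L, (Lx, Ly, Lz))   # the two agree
```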
Okay. Next thing
is I just want you
to explore some relationships
from linear algebra
concerning matrices.
So we have 2 matrices here.
I want you to enter
those in and then verify
so you can calculate AB and
then BA and look at them.
You'll see that they're
not the same.
These are not symmetric
matrices, they don't commute.
So that's just a reminder to you
that in general matrix
multiplication is
not commutative.
It's a useful thing to know.
Okay. Here's an interesting one.
If you take the product
of A times B
and take its determinant,
that is actually equal
to the determinant of A times
the determinant of B. Does
that ring a bell for people
who took linear algebra?
Okay. Then you're going to
calculate the inverse and show
that inverse of A times A is
equal to the identity matrix
and A times the inverse of A is
equal to the identity matrix.
Here's another interesting one.
If you take the determinant
of the inverse,
that happens to be 1 over
the determinant of the matrix.
So, I want you to
show that also.
Then that's it for that problem.
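All of the identities in that problem are easy to spot-check numerically. A NumPy sketch with a small invertible matrix of my own (not the homework's A and B):

```python
import numpy as np

# Spot-check of the linear-algebra identities from the homework.
A = np.array([[1.0, 2.0], [3.0, 5.0]])
B = np.array([[2.0, 0.0], [1.0, 4.0]])

print(np.allclose(A @ B, B @ A))                         # False: no commuting
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))   # True
print(np.allclose(np.linalg.inv(A) @ A, np.eye(2)))      # True
print(np.isclose(np.linalg.det(np.linalg.inv(A)),
                 1.0 / np.linalg.det(A)))                # True
```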
Okay, the next thing I want you
to do is put in this matrix,
which has some variable X in it
and then just use
the Solve command
to solve the equation
determinant of A equals 0.
Solve for X and you'll
get this is a 4X4 matrix
so there should be 4
roots and you'll get them
and once you enter the
matrix it should be 1 line.
Finally, I want you
to enter this matrix
and then just do what we've
done already twice here today.
So find the eigenvalues,
the eigenvectors,
and then verify the
similarity transform.
This will get you
nice and warmed
up for doing what I hope will
be a fun and interesting problem
in next week's assignment
where we're actually going
to use what we've learned
here to solve for the orbitals
of that conjugated
organic molecule.
All right so let's see we've
got a few more minutes.
So I'm going to go
ahead and start
with a few miscellaneous
examples where we can use some
of the things we've learned from
the vectors and vector analysis
and matrices that are relevant
to things that you will see
in physical chemistry.
So the first thing is
this one is sort of more
for your entertainment only.
The second example will
actually be relevant
to a homework problem.
I want to revisit the
vector analysis package
and show you how
you can have access
to alternate coordinate
systems and, in particular,
this spherical coordinate
system, which shows up a lot
in your physical
chemistry course.
All right so let's go ahead
and load the vector
analysis package.
So that's less-than, less-than,
VectorAnalysis, backward
single quote, enter, okay,
and now I'm going to
change my coordinate system,
which by default is Cartesian.
So I'm going to say
SetCoordinates, capital S, capital C,
and the system we're going to
use now is called spherical.
That's the spherical,
polar coordinates
that Mathematica knows about.
Now I'm going to define the
symbols I'm going to use
for the 3 coordinates.
Actually let me go here first
to show you what we're doing.
[ Pause ]
I don't have that.
Okay. All right.
Okay, so this is actually
what we're talking about here.
It's a coordinate system where
we go from the Cartesian XYZ
to 1 distance and 2 angles.
So R here is now the length of a
vector pointing from the origin
to the point of interest and
then we can define 2 angles.
One is called the
polar angle, theta,
which tells us how far
this vector is, well,
what's the orientation of
this vector with respect
to the Z axis, and then we
have this other angle,
which is called the azimuthal
angle, phi, which tells us how far
or what's the angle between
the X axis and the projection
of R onto the X, Y plane.
So those of you who are
in Chem 131 are familiar
with this coordinate system
because this is a
convenient coordinate system
for solving the quantum
mechanical problem
of an electron orbiting
a nucleus, right?
Okay, so these are standard
notations, R, theta and phi.
So that's what we're
actually going to use here.
We're going to define
our spherical coordinates
to be R theta and phi.
So if I enter that, it just
says verified R, theta and phi
and if I ever want to know what
coordinate system I've set,
I can just say coordinate system
and it tells me spherical
coordinates.
If I want to know the ranges
over which those coordinates
are defined, I can ask for it.
CoordinateRanges, and if I
leave a blank bracket it will
give me all three.
So it tells me R ranges from 0
to infinity, theta from 0 to pi
and phi from minus pi to pi.
So phi is the whole circle
and theta is the semi-circle.
Now, other useful things
that you can do is you can
ask what are R, theta and phi
in terms of the Cartesian
coordinates?
So I can say give me
CoordinatesFromCartesian,
bracket and then I list what I'm
calling Cartesian so I say X,
Y and Z. I can see here that
I'm going to have to clear X.
So let's go ahead and do that.
So let's see what this gives us.
All right?
Now those of you who have
studied the spherical coordinate
system before probably
recognize these formulas.
So this is the definition
of R in terms
of the Cartesian
coordinates, right?
It's just the square root
of X squared plus Y
squared plus Z squared.
This tells you how
to calculate theta
from the Cartesian coordinates
of a point and then this is phi.
Does that look familiar
to anybody?
It should look familiar
to those of you
who are in Chem 131 at least.
Okay, now what if you wanted
to know how the Cartesian
coordinates are defined in terms
of the spherical coordinates.
The way you could do that
is you could say
CoordinatesToCartesian and this should
also give something familiar.
So now we've put
in our spherical
coordinates R, theta and phi.
Now it's going to tell us how X,
Y and Z in the Cartesian
coordinates are defined in terms
of R, theta and phi in
the spherical coordinates.
All right.
Again, that's a formula
that should look
familiar to some of you.
All right?
So, X is actually defined
in terms of R, theta and phi
and then Y and Z. All right.
Now, just to finish up
today we talked last week
about vector derivatives,
and I'm not going to go
through every possible one of
them but I'll show you a couple
of the ones that are somewhat
interesting especially
if you've studied the hydrogen
atom in the Chem 131 class.
So the first is I can ask for
the formula of the gradient
of a function, a scalar
function, of R, theta, and phi.
[ Pause ]
Okay. All right and if you enter
that then you get the following.
Okay so just to remind you
what this notation looks like.
So this now is, remember, the
gradient takes a scalar function
and gives us a vector whose
components are the derivatives
of the scalar function with
respect to the 3 coordinates.
So in the Cartesian
coordinates it's simple.
The first component was just
the derivative of the function
with respect to X, the
second with respect to Y
and the third Z.
Notice when you go
to alternate coordinate systems
such as the spherical
coordinates the formulas are a
bit more complicated.
So this is the component
of the gradient
in the R direction,
which is simple.
It's just the first derivative
of F with respect to R,
but then when you go to theta
it's the first derivative of F
with respect to theta divided by
R and then when you go to phi,
it's the first derivative of
F with respect to phi divided
by R times the cosecant
of theta.
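Collected in one place, the gradient just described is the standard spherical-coordinates formula; note that the 1/(r sin theta) factor is the cosecant-over-r piece mentioned above:

```latex
\nabla f = \frac{\partial f}{\partial r}\,\hat{r}
         + \frac{1}{r}\frac{\partial f}{\partial \theta}\,\hat{\theta}
         + \frac{1}{r\sin\theta}\frac{\partial f}{\partial \phi}\,\hat{\phi}
```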
Then those of you who are
in Chem 131A probably saw
or should have seen the
formula for the Laplacian
in spherical coordinates.
So that's del dot del.
So we can say give me the
Laplacian of R, theta and phi.
Now this is going to give us
a scalar result and as some
of you have already seen that
looks fairly complicated.
In fact, let's simplify
it a bit here.
Okay. All right so in any
case notice it's just a
single element.
So it's a scalar function
and here's the part
that has the second derivative
with respect to phi in it.
Here's the part that has second
derivative with respect to theta
and here's the part that has the
second derivative with respect
to R and then here's
another part
that has a first derivative
with respect to R. Okay,
so it looks kind of nasty
but those of you who are
in Chem 131A know that there's
certain advantages to switching
from Cartesian coordinates to
spherical polar when you want
to talk about the solution
of the hydrogen atom
electronic structure problem.
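For reference, the standard spherical-coordinates Laplacian is below; expanding the first term produces the two radial-derivative pieces pointed out in the Mathematica output above:

```latex
\nabla^{2} f
= \frac{1}{r^{2}}\frac{\partial}{\partial r}\!\left(r^{2}\frac{\partial f}{\partial r}\right)
+ \frac{1}{r^{2}\sin\theta}\frac{\partial}{\partial \theta}\!\left(\sin\theta\,\frac{\partial f}{\partial \theta}\right)
+ \frac{1}{r^{2}\sin^{2}\theta}\frac{\partial^{2} f}{\partial \phi^{2}}
```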
Okay, so it looks like we're
about out of time for today.
So next time what we're
going to do is we're going
to see how we can use
Mathematica's eigenvalue
solution facilities to actually
solve some interesting problems
of organic chemistry or
the electronic structure
of conjugated
organic molecules.
So, something to look
forward to tomorrow.
