>> Welcome back to
Lecture Three, Chem 131A.
We're going to continue
where we left off.
Today, it's more postulates, superposition, operators, and measurement.
Where we last left our
hero, we had decided
that the derivative
operator's linear,
but it was not Hermitian,
and then I introduced this
very ornate relationship
to describe what I
meant by Hermitian,
and you might wonder
what it means.
But what it means is basically this: suppose we had complex numbers, and most numbers were complex, and we wanted to say that a number was real, but we only had complex numbers to work with.
Well, one trick we could use
is we could say if z is equal
to z star, then the
number is real
because the only imaginary part that can be equal to the opposite of itself is 0,
and 0 imaginary part
means the number is real.
And so really Hermitian
is just making sure
that when we measure
something, we get a real number.
We still do believe
that probability doesn't
have an imaginary part,
and neither does energy.
When we measure it, it has units of joules and so forth,
and so we want to make
sure that these things
that we measure are
Hermitian, and this formula
with these integrals and
stars and the operator and,
it's just a very fancy way of
saying z is equal to z star.
Nothing more than that.
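The z-equals-z-star trick can be sketched in a couple of lines of Python; the sample numbers here are arbitrary choices of mine, just to illustrate the test:

```python
# The z = z* test for realness: the only imaginary part equal to
# minus itself is zero. The sample numbers are arbitrary.
def is_real(z):
    return z == z.conjugate()

print(is_real(complex(3.0, 0.0)))  # True
print(is_real(complex(3.0, 2.0)))  # False
```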
OK. Let's show that the
derivative operator,
then, is not Hermitian.
Well, here's what we have to do.
We have to do an integral
of f star d by dxg,
and we have to show that that is
or is not equal to the integral
of g star d by dxf
whole thing star.
When you see an integrand
that has a derivative in it,
the first thing you think
is I bet I can integrate
that by parts.
If you recall, integrating by parts is basically undoing the product rule, the derivative of uv being u dv plus v du.
We turn that around,
and we move the uv,
one of them to the other side,
and then we set the
integral equal to that.
Now the limits on this
integration are plus
and minus infinity, but
I won't always put them
in because it may get
a little bit messy.
But whatever they are, the wave functions have to vanish at plus and minus infinity, and the argument as to why they have to vanish is that if they had any amplitude out there, way out there, then we couldn't normalize them.
They would get too big.
And so the only way we can have
the area under the curve crank
down to some number is
that it finally dies
out when we get far enough away.
So let's try integration
by parts.
The formula is the integral of
u d by dx of v of x is equal
to uv minus the integral the
other way around, v d by dxu,
and this is going
to be convenient
because the Hermitian thing
had them the other way around.
And now let's let u, the function u of x conventionally in calculus, be f star, and the v, let's let that be g, and let's try it.
So our equation becomes this.
Fairly intimidating
looking but not too bad.
The integral of f star d by dxg
is equal to f star g evaluated at plus and minus infinity
minus the integral
of g d by dx f star.
And that is equal to minus
the integral of g star d
by dxf whole thing star,
but that's not equal
to what we want.
And so we have a problem, because we have a minus sign, and we want it to be a plus sign.
So it's not equal,
and, therefore,
the derivative operator
is not Hermitian.
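The sign flip can be checked numerically. This is a minimal sketch, with two arbitrary real, normalizable test functions of my own choosing, a central-difference derivative, and a simple midpoint-rule integral:

```python
import math

# Check that moving d/dx from one function to the other flips the sign:
# integral of f * dg/dx equals minus the integral of g * df/dx
# (the anti-Hermitian behavior of the bare derivative operator).
def f(x):
    return math.exp(-x * x)          # arbitrary test function

def g(x):
    return x * math.exp(-x * x)      # arbitrary test function

def deriv(func, x, h=1e-6):
    # central finite difference
    return (func(x + h) - func(x - h)) / (2 * h)

def integrate(func, a=-8.0, b=8.0, n=20000):
    # midpoint-rule integral; the functions vanish well before x = 8
    dx = (b - a) / n
    return sum(func(a + (i + 0.5) * dx) for i in range(n)) * dx

lhs = integrate(lambda x: f(x) * deriv(g, x))  # integral of f dg/dx
rhs = integrate(lambda x: g(x) * deriv(f, x))  # integral of g df/dx
print(lhs, rhs)  # approximately 0.627 and -0.627: equal and opposite
```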
It's called anti-Hermitian for obvious reasons.
When you swap them, it changes
sign, but it's not Hermitian.
How can we have it be Hermitian?
Well, interestingly enough,
we have to use our friend,
the square root of
negative 1 again,
and if we multiply the
derivative operator
by minus ih bar, that's
enough to do the job
because minus i star is plus i.
And, therefore, that gets rid
of the minus sign
that we got stuck
with the regular derivative,
and then we just follow
everything else through,
and it works.
You say, well, why is the h bar there,
and the answer is this
is quantum mechanics.
Of course, there's an h bar
there because we're going
to have to have that in
almost everything we use.
And, in fact, the
momentum operator, p hat x,
which tells us the momentum when it operates on a wave function,
it tells us the momentum in
the x direction is just given
by minus ih bar d by dx, and it
is a linear Hermitian operator.
Its eigen functions are very
closely related to those
of the derivative
operator because after all,
all it has is just an
extra thing out in front,
but we want to make sure that
the eigen values are real,
and so these eigen
functions are exponentials,
but we put in an i, and here
we realize the p is real.
X is real.
H bar is real.
And so this is e to
the ipx upon h bar.
We know that we have to
have the units go away
if we take an exponential
because an exponential
is a power series.
One plus x plus x squared
over 2, and if it has units,
we're adding feet and feet
squared and feet cubed,
and that doesn't make any sense.
And so with a little bit
of dimensional analysis,
we come to the idea that
these functions here,
e to the ipx upon h bar,
are very good candidates.
So let's do another practice problem and have a look. So let's show that these are, in fact, the eigen functions of the momentum operator.
Well, let's take p hat x on
our function v of x. Let's put
in what p hat is minus ih
bar d by dx on the function.
Let's put in the function,
which we assume is e
to the plus ipx upon
hbar, and the derivative of e to the ax is a times e to the ax. So we bring down the ip upon h bar, and now I think you can see why we want the h bar in front. The h bars cancel.
Minus i times plus i
is minus i squared,
but minus i squared is plus 1.
That goes away, and that
leaves us with p, and that's pe
to the ipx upon h bar,
and that's p times the eigen
function, and, therefore,
we've shown that the operator p
hat returns the eigen value p,
which has the units of momentum.
So the complex exponential
is the eigen function
of the momentum operator,
and the eigen value is p.
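This eigenvalue equation is easy to check numerically. A minimal sketch, assuming units where h bar is 1 and picking an arbitrary momentum value p, with the derivative taken by finite difference:

```python
import cmath

# Check that e^{ipx/hbar} is an eigen function of -i hbar d/dx,
# in units where hbar = 1 (a simplifying assumption for this sketch).
hbar = 1.0
p = 2.5  # arbitrary real momentum value

def psi(x):
    return cmath.exp(1j * p * x / hbar)

def p_hat(func, x, h=1e-6):
    # -i hbar d/dx via a central finite difference
    return -1j * hbar * (func(x + h) - func(x - h)) / (2 * h)

x0 = 0.7  # arbitrary sample point
eigenvalue = p_hat(psi, x0) / psi(x0)
print(eigenvalue)  # approximately (2.5 + 0j): the eigen value is p
```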
In the language of linear
algebra, the eigen functions
of a linear Hermitian
operator form a basis.
So if I take a point in
a two-dimensional plane,
and I want to figure
out where I am,
I know that if I go a certain
unit out on x along the x axis,
and then up or down by y
that I can get to the point.
And, furthermore, any point
anywhere can be expressed
as a combination of
some distance this way
and some distance up or
down, and there's no point that can escape. If we have a vector, any vector, any point, x naught y naught, that's equal to x naught times the coordinate unit vector along the x axis plus y naught times the coordinate unit vector along the y axis.
And just like that, we can write any wave function as a linear combination of eigen functions of a Hermitian operator.
They form a basis.
No function can escape,
and that's important
because if some functions
could escape,
that would mean there were certain states for which we couldn't measure anything, and that would be very bad, because what would happen to the probability?
Particles would be
disappearing then.
OK, postulate five is this.
It's quite a mouthful,
but we'll get to it.
When a wave function is
not an eigen function
of the measured observable,
the result of the measurement
is still an eigen value,
but now the probability is
given by the square modulus
of the expansion coefficient
of the eigen functions
of the operator.
So if I have a wave function, psi, that's a sum with some constants, which could be complex numbers, it doesn't matter because all these functions can be complex. So let's just call it c1v1 plus c2v2.
Then the probability of obtaining the first eigen value is the square of the absolute value of c1. So if there's an imaginary part, you take c1 star c1,
and the probability of obtaining
the second one is the square
of c2.
Those are the two probabilities,
and if there are only two parts
making up the wave function,
those are the only two
values you can get.
Usually a wave function is
made up of a whole bunch
of different eigen functions,
and so there are a lot
of different possibilities
that you can get.
The eigen functions themselves
have to be normalized.
That means if you happen
to be in an eigen function,
your chance of being somewhere
in the universe and having
that eigen value, let's say
of momentum, is equal to one.
And so the basis functions
themselves are normalized,
and we always assume that they
are normalized without comment.
And, likewise, for the wave function to be normalized,
once the basis functions
are normalized,
that means that the probabilities in those coefficients have to add up. So the sum of the squares of all the coefficients always has to add up to 1.
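Postulate five can be sketched in a few lines; the coefficient values below are made up for illustration, and the basis functions are assumed normalized and orthogonal as in the lecture:

```python
# Probabilities from expansion coefficients: psi = c1*v1 + c2*v2,
# with the coefficients normalized so the probabilities sum to 1.
# These particular coefficient values are arbitrary examples.
c1 = complex(1.0, 1.0)
c2 = complex(1.0, -2.0)

norm_sq = abs(c1) ** 2 + abs(c2) ** 2   # here 2 + 5 = 7
c1 /= norm_sq ** 0.5                    # normalize the wave function
c2 /= norm_sq ** 0.5

p1 = abs(c1) ** 2  # probability of obtaining the first eigen value
p2 = abs(c2) ** 2  # probability of obtaining the second eigen value
print(p1, p2, p1 + p2)  # roughly 2/7, 5/7, and a total of 1
```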
It's as if we have a unit
circle, and we're some point
on the circle, and we have an
x component and a y component,
then the Pythagorean Theorem
says x squared plus y squared is
equal to 1, and that's
how it works,
and it works the same
way in higher dimensions.
Second comment.
The best way to think of these eigen functions is not as things spread around in space.
Think of them as vectors.
Think of one eigen function
pointing this way telling you
the amount on this side.
The other one points this way.
A third one points up.
If I've got more, I have
to have an imagination,
but basically they're all at
right angles to each other,
and they're all telling the
amount of this special state
that is in there to begin with.
And eigen functions of a linear
Hermitian operator corresponding
to different eigen
values are orthogonal,
and that's another
reason why it's good
to think of them like vectors.
Because if I have an
eigen function here,
and this has one eigen value,
and I have another
eigen function here
that has a different
eigen value.
Then those two functions have
nothing to do with each other.
They are as different
as different can be.
They're in different directions.
They have no influence
on each other.
And to see this,
normally, suppose we have x
and y. We can tell
they're at right angles
because we can look, but suppose
I put my arms out some way,
and then I say, well,
are those orthogonal.
Well, you could try to mentally
rotate and see if it comes back
to x and y, but that's
a very, very slow
and labor intensive
way to do it.
Instead, what you do is
you take the dot product.
You take the product of
the first two components,
the second two, the third
two, you add them all up,
and you see if that's 0, and if it's 0,
that means they're orthogonal.
If it's not 0, that means
that they aren't orthogonal.
So for three real components,
let's say two vectors
in 3D space, I just take
axbx plus ayby plus azbz,
and if that sum comes to 0,
doesn't matter what the
individual terms are,
if that sum comes to 0,
that means that the vector a
and the vector b are at
right angles to each other,
and that's much, much
easier to compute.
More generally, if we've
got lots of dimensions,
then we need to expand our sum.
So it could be a1b1 because
we don't want to have x, y,
and z if we've got,
let's say 5 or 6 or 20.
We run out of letters.
So we switch to numbers
where we won't run out.
A1b1 plus a2b2 plus a3b3, and we
just write that in a shorthand
as the sum over n of
anbn, and that goes
to however far we want it to
go, including in some cases
to infinity, and
that should be 0.
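The dot-product test is a one-liner; the two example vectors here are my own choices, picked so the sum comes out to zero:

```python
# Orthogonality by dot product: multiply matching components, add them
# all up, and check for zero. The example vectors are arbitrary.
def dot(a, b):
    return sum(an * bn for an, bn in zip(a, b))

a = (1.0, 2.0, 2.0)
b = (2.0, 1.0, -2.0)
print(dot(a, b))  # 2 + 2 - 4 = 0.0, so a and b are at right angles
```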
And the same idea holds for functions except that, number one, the sum becomes an integral.
Because when you multiply
the functions by each other,
they both depend on
x. And so adding up,
you can't just add up.
You have to integrate to get the answer, and, number two,
because the functions
can be complex,
we have to take the complex
conjugate of the first function.
Let's suppose we've
got two functions,
f and g. Then our orthogonality
condition is as follows.
The integral of f
star times g, dx is 0.
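Here is that condition checked numerically. As a sketch, I pick two ring-style complex exponentials, e to the i n x with different integers n, and do the conjugate-times-function integral over one period with a simple midpoint sum:

```python
import cmath

# Function version of the dot product: integrate conj(f) * g.
# Example functions (my choice): e^{i*n*x} on the interval [0, 2*pi].
def overlap(n1, n2, steps=2000):
    total = 0.0 + 0.0j
    dx = 2 * cmath.pi / steps
    for i in range(steps):
        x = (i + 0.5) * dx
        total += cmath.exp(1j * n1 * x).conjugate() * cmath.exp(1j * n2 * x) * dx
    return total

print(abs(overlap(1, 2)))  # approximately 0: orthogonal
print(abs(overlap(1, 1)))  # approximately 2*pi: same function, not orthogonal
```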
Now we can show based on this
and the definition of Hermitian
that the eigen functions
with different eigen
values are orthogonal.
So here's what we do.
If it's an eigen function,
it has an eigen value
that's a real number.
So let's put omega hat on psi 1, and we get omega 1 psi 1. We get psi 1 back because it's an eigen function. We put omega hat on psi 2, and we get omega 2 psi 2.
And the only thing
we need to know is
that omega 1 is not
equal to omega 2.
They are different numbers.
They're real, and they're
unequal, and the operator,
big omega hat, is Hermitian.
Let's take the first eigen value
equation, and make a series
of operations to
both sides of it.
That's always what you do
when you simplify equations.
You do the same thing to
both sides methodically,
and if you do that,
you never get mixed up,
and nothing ever goes wrong,
and if you do some shorthand
of cross multiplying
this and that,
and you don't know what
you're doing exactly,
you'll oftentimes get it wrong.
So let's take this equation,
omega psi 1 equals
omega 1 psi 1,
and let's first multiply
on the left.
We have to make sure we multiply
on the same side
when we do this.
By psi 2 star.
OK. So now we've got psi 2
star omega psi 1 is equal
to psi 2 star little
omega 1 psi 1,
and then since omega 1's a
constant, I can pull it out
and say that's little
omega 1 psi 2 star psi 1.
Now I'm going to put an
integral on both sides
because if two things are equal, then if I multiply them both by psi 2 star, they're still equal,
and if I integrate them both
over dx, they're still equal.
They don't become unequal.
And so I integrate psi 2 star omega hat psi 1 dx, and that's equal to the integral of omega 1 psi 2 star psi 1, and since omega 1 is a constant, I pull it out, and I end up with omega 1 times the integral of psi 2 star psi 1 dx.
And we can do the same
series of operations exactly,
but instead of having omega hat
psi 1, we take omega hat psi 2,
and we get omega
2, and we just go
through the same only we just
swap the roles of 1 and 2.
We multiply by psi 1 star.
And if we do that, I've just not done every step here, but the integral of psi 1 star omega hat psi 2 is equal to omega 2 times the integral of psi 1 star psi 2 dx.
Now let's take the complex conjugate of both sides of the first equation.
So on the left-hand side, I have the complex conjugate of the whole thing, the integral of psi 2 star omega hat psi 1, starred, and on the other side, I have little omega 1 times the integral of psi 2 star psi 1 dx, whole thing starred, and I can simplify that.
I leave the other side
alone because that's going
to be the definition
of Hermitian.
The right-hand side I turn into omega 1 star times the integral of psi 2 star star times psi 1 star. Well, the star of the star changes i to minus i and back to i, so that goes away, and I can then write the psi 1 star in front of the psi 2.
It doesn't matter I'm
multiplying those.
There's no operator.
So I finally come
to the following.
The integral of psi 2 star omega hat psi 1, whole thing starred, is equal to omega 1 times the integral of psi 1 star psi 2.
What does that get us? Well, the observable is Hermitian. And so that starred integral is the same thing backwards: the integral of psi 1 star omega hat psi 2. And when omega hat operates on psi 2, it gives the eigen value omega 2.
And so using our two
series of equations,
here's what we come to finally.
Omega 2 times the integral
of psi 1 star psi 2 is equal
to omega 1 times the
integral of psi 1 star psi 2,
but omega 2 is not
equal to omega 1.
So let's subtract omega 1 times
the integral from both sides.
Then we find out that omega 2
minus omega 1 times the integral
is equal to 0.
But since omega 2's
not equal to omega 1,
it must be that the
other thing is 0.
Because if I have any
number times something,
the only way I can make
the whole thing 0 is
if the other thing's 0, and
that means that the integral
of psi 1 star psi 2 dx
is 0, and that means
that they are orthogonal.
So that's the proof.
You'll have to go over it a
couple of times to get it down,
but that's kind of a
standard thing that's done
in quantum mechanics to
show that eigen functions
for different values of eigen
values are, in fact, orthogonal.
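A finite-dimensional version of the same statement is easy to verify. As a sketch, for the Hermitian matrix with rows (0, 1) and (1, 0), the eigenvectors for eigenvalues plus 1 and minus 1 are known in closed form, and their inner product comes out to zero:

```python
import math

# Eigenvectors of the Hermitian matrix [[0, 1], [1, 0]]:
# (1, 1)/sqrt(2) has eigenvalue +1, (1, -1)/sqrt(2) has eigenvalue -1.
s = 1 / math.sqrt(2)
v_plus = (s, s)    # eigenvector with eigenvalue +1
v_minus = (s, -s)  # eigenvector with eigenvalue -1

def inner(a, b):
    # complex-safe inner product: star the first vector's components
    return sum(complex(x).conjugate() * y for x, y in zip(a, b))

print(abs(inner(v_plus, v_minus)))  # 0.0: different eigen values, orthogonal
print(abs(inner(v_plus, v_plus)))   # 1.0: each eigenvector is normalized
```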
Now suppose we make a
measurement on a quantum system,
and it's represented
by a wave function, psi,
that's not an eigen state
of the operator in question.
Then what happens?
Well, we express psi as a linear
combination of eigen functions,
and we know we can do that because there's no function that can escape us.
And we know our eigen functions
can be made normalized.
So we assume they're normalized.
And then the probability
of obtaining a particular eigen
value, let's say eigen value k
out of the totality from 1
to n, is the absolute value of ck squared, where ck is the coefficient of the kth eigen function.
Now suppose we then make the
measurement again right away.
The question is do we
get a different result.
And the answer is kind
of surprising, but
the answer is no.
It turns out if we make
the measurement again,
we get the same result,
and if we keep measuring the
same observable over and over,
we keep getting the same result,
and now sort of mysteriously
in a way, it's 100
percent certain
that we're going
to get that result.
There is no other result
that we're going to get,
and I've tried to
encapsulate this
in this kind of pseudo equation.
We start out with probabilities.
It could be any of
these eigen states,
and then we make a
measurement, and somehow one
of them is chosen, and
we can't say how even
in an ideal experiment, but we
can say what the probability is.
Let's say 25 percent of the
time, we get this result.
Now if we measure again,
and nothing's intervened,
we haven't done anything, we get
the same result again and again
and again and again, and now
there's no probability at all.
It's always 100 percent.
So it's exact certainty.
That is, measurement is
kind of, like, a filter.
All the other possibilities
are filtered out,
leaving the one that's
actually observed.
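This filtering can be written as a toy simulation. The eigen values and probabilities below are made up for illustration; the point is only that the first measurement is probabilistic and every repeat returns the same value:

```python
import random

# Toy model of measurement as a filter: the first measurement picks an
# eigen value with the given probabilities, then "collapses" the state
# so all the probability sits on the observed value. All numbers are
# illustrative, not from the lecture.
def measure(values, probs, rng):
    outcome = rng.choices(values, weights=probs, k=1)[0]
    new_probs = [1.0 if v == outcome else 0.0 for v in values]
    return outcome, new_probs

values = [1.0, 2.0, 3.0]
probs = [0.25, 0.50, 0.25]
rng = random.Random(0)

first, probs = measure(values, probs, rng)   # probabilistic outcome
again, probs = measure(values, probs, rng)   # filtered: same outcome
print(first == again)  # True: repeating the measurement changes nothing
```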
If you sort coins with a coin
sorter, you roll them down,
and when they fit the size
of the tube, they drop in,
and if you don't know which tube it's going to drop into at first, then that's like being uncertain.
But then if you drop the
thing in, and it drops
into the third tube, and then
you empty it out of there,
and put it back in,
it's going to drop
into the third tube again,
and that's kind of an analogy
for what's going on here.
Or we could say, for example,
suppose we flip a coin.
Until it hits and stops
spinning and lies down,
we assume it's 50
percent probability heads
and 50 percent probability
tails.
But once it lands, and we look
at it, then we see it's heads,
then if we don't do anything,
we don't flip it again,
just sit there, it's heads.
It's heads again.
It's heads again, and so forth,
and it's heads as many times
as we want to keep looking at
it, and that's what this theory
of measurement is saying.
In other words, when
you make a measurement,
you rule out certain
possibilities.
They're now gone.
Now the measurement's made.
It came up.
It happened to come up this.
If you make it again,
it comes up this.
If you make it again,
it comes up this.
And that's assuming you don't
have any interaction in between,
but this is an idealized
experiment.
We aren't talking about how we
practically would implement it.
And likewise, in
quantum mechanics,
it's just like looking
at that coin.
If we make the same ideal
measurement again and again
after filtering out all
the other possibilities,
we just get the one result; we get the same result each time.
But before we made
the measurement,
it seemed like there
were other possibilities,
and if we start all over, not
with the one we've measured,
but with an identical
particle coming
through that we haven't
measured,
then we might get
a different answer.
And then if we measure
that again,
we get that different answer
again, and so on and so forth.
And so it seems as if the
measurement itself took this
very fragile thing, this wave function,
and it made it collapse onto
a particular eigen function.
It said, right, this is it, and
then all the other possibilities
vanish forever.
Now if I decide I'm going
to give a lecture, I turn up
and do the lecture, but
if I decide I'm going
to the beach instead,
and I go to the beach,
then the lecture is
not a possibility,
and it's now vanished forever.
It's gone, and I'm at the beach.
And so by making that
choice, I've narrowed
down the possibilities.
Before I did that,
I could say, well,
50/50 I give the lecture
or go to the beach.
And that's important
because it means
that measurement affects
quantum systems, and that means that there is no such thing as a property without measuring it.
We usually think that things have
properties independent
of measuring them
because they seem to.
This pointer, for example,
has a mass, whether I have it
on a scale or not, and
I assume it's the same.
And for big objects that
are always being bombarded
by all kinds of things
and never have a chance
to let the wave function sneak around, that's certainly true,
but for small things, we have
to be very wary about assuming
that something has a property
if we have not measured it
because the measurement will change it, and so it could be that it was in some superposition or mixture,
and when we measure, we
picked out one of them,
but that doesn't mean
it was like that before.
It means we might
have changed it.
So if we had obtained, let's
say, go back to the coin.
If we had obtained tails instead
of heads on the first throw,
then if we keep looking
at it, it's tails.
And so we get 50 percent
probability, and it collapses
onto a particular
choice half the time,
and once it has collapsed
onto that particular choice,
it remains there for any number
of repeated measurements.
It does not change.
Now the question is this.
What happened to the
uncertainty principle?
Because now I'm claiming that
we can get measured results
with certainty.
We're saying we always
get the same result.
We just measure it once, then
the uncertainty goes away,
and that's kind of interesting
because it's not so simple.
Because the uncertainty
principle, which we quoted
for position and momentum,
applies to measuring two things
- position and momentum at once,
or one right after the other.
Not just one observable.
There is no uncertainty limit on measuring one thing as well as you like.
The problem is if you want
to measure what you think
of as everything that
you could measure,
then there will be some
problems, some blurring perhaps
that you didn't anticipate.
So a deeper analysis shows us
that not all properties
need to be uncertain.
In fact, if the two
operators have the same set
of eigen functions, this
is why it's very important
mathematically for us to be able
to determine the eigen functions
of an operator because we
might have this operator
and that operator representing
this and that, and if it turns
out that the two operators
mathematically have the same set
of eigen functions, even if they
have different eigen values,
usually they will because they
have different units and so on.
Then we can measure
both of them,
and we get exact
results for both.
So we may measure this and that,
and we get a certain value,
and if we measure this and that,
again, we get the same for both,
and there's no uncertainty.
But, unfortunately, position and
momentum, which are two things
that people like to determine,
are not compatible in that way.
Oh, there's this idea
called complementarity.
Observables that are incompatible cannot both be measured to arbitrary precision.
Now here what I've
shown is a real coin.
I flipped it, and it
happened to come up heads.
It's a penny, and you
can see that it's heads
because you can see Lincoln in
profile on the face of the coin,
and you can even read
other things on it
like the year it's minted.
But let's just say we can
tell that it's heads for sure.
Now suppose instead of
trying to measure heads,
and if I leave it there,
it's going to, obviously,
measure heads, heads, heads.
It's not going to flip because
I'm not allowed to do anything
to it except look
at it, measure it.
If, on the other
hand, we're interested
in the exact thickness of
the coin, in that case,
we have to orient it like this,
and this was tricky to do,
but the coin did
balance on its edge.
It was thick enough, and
the surface was flat enough.
And my collaborator had
a steady enough hand.
And now if you have the
coin oriented like this,
you can see exactly how thick
it is whereas when it was
down with the head pointing, you
had no idea how thick it was.
Imagine you're looking
straight down on it
so you can get the
best possible view.
Now, if the coin's on edge, it's obviously unstable.
And so anything I try to do to
look at it could have it drop,
but the question is, when it's
like that, which side is heads,
and the answer is because
I'm looking at it edge on,
I have no idea which
side is heads,
and quantum systems
are very much like that
if we have complementary
variables.
If I try to zero
in on one of them,
it means the other one fades
out, and I can't get both
at once because they're
interfering with each other,
and I just, there's no
possibility of doing that.
We could try to cheat.
Here's the coin balanced
on a pen with an eraser
to keep it steady, and we could
look at the coin on an angle
like this, and the
way it's angled here,
I can pretty much
tell it's heads.
It's not as clear
as it was before,
but I can pretty
much tell it's heads,
but what I can't do now is
measure the thickness very well
because I'm seeing the
thickness from an angle,
and it gets smaller and smaller and smaller as I turn toward the heads,
and then I can't
measure it very well.
And basically in order to
get the thickness better,
I have to turn the coin toward
me like that, and then, finally,
at some point, I can't see
whether it's heads or tails.
Now with a coin, a macroscopic thing, I can look at it and see heads.
I can orient it and say,
well, that side's heads,
but with small things,
you can forget that.
That's not possible.
Unless you can see it's heads,
you don't know what it is,
and that's the problem.
So the uncertainty
principle really makes this
numerically rigorous.
It says exactly how well you
could measure the thickness
and/or tell it's heads
when it's a small thing,
and when you know how
the different variables,
the things you are trying
to measure interact
with each other.
That's basically what it's
making much more rigorous.
Let's talk now about
classical atoms.
So in a classical atom,
we have Maxwell's equations,
and this was actually another problem at the turn of the century. It was fairly easy to work out that an accelerating
charge would radiate energy
in accordance with
Maxwell's equations.
But if the electron, which
is certainly accelerating,
if we imagine it
going in a circle
around a positively
charged nucleus,
then it has to radiate
energy, then it has to slow
down because the energy
doesn't come from nowhere,
and what that means is that
the electron would spiral
in toward the proton and would
eventually condense onto it.
And if that happened, there
wouldn't be any electrons
around to make bonds, and so
there wouldn't be any molecules.
There wouldn't be any life.
There wouldn't be
any atoms even.
It would just be, like, a
neutron star or something
with just all this
condensed matter.
Somehow the electron is not
like in a planetary orbit,
and it's not behaving according
to the way a charge would
in Maxwell's equations.
And part of the reason it doesn't is the uncertainty principle.
Because suppose the
electron starts slowing down
and spiraling in and coming in, and its orbit gets smaller and smaller and smaller.
Well, we just did a calculation that showed that if the electron were confined to 200 picometers, then the minimum uncertainty in velocity is, like, 100,000 meters per second.
And what that means is that it
is impossible for the electron
to spiral in and be
on top of the proton
in that itsy bitsy
space and be stationary
because that violates the
uncertainty principle,
and quantum mechanics says it's
not possible to measure position
and momentum that accurately.
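That estimate is easy to redo with the standard relation delta x times delta p is at least h bar over 2. A sketch with CODATA-style constants; the answer lands in the same order of magnitude, around 10 to the 5 meters per second, as the lecture's figure:

```python
# Order-of-magnitude check: confine an electron to about 200 picometers
# and read off the minimum velocity uncertainty from
# delta_x * (m * delta_v) >= hbar / 2.
hbar = 1.054571817e-34  # J*s, reduced Planck constant
m_e = 9.1093837015e-31  # kg, electron mass
delta_x = 200e-12       # m, roughly an atomic diameter

delta_v = hbar / (2 * m_e * delta_x)
print(delta_v)  # roughly 3e5 m/s, i.e. order 10^5 meters per second
```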
And, therefore, the electron
may start spiraling in,
and then may just get
tossed out suddenly and go
in a different direction,
and so that saves us.
So if the electron
is not spiraling
around like a planetary
model of an atom in an orbit,
then what is it doing?
It certainly maintains a
stable probability distribution
because we looked at atoms, and
we see that they have a cloud
of negative charge around,
and it doesn't change
if we don't disturb the atom
if we just leave it alone.
But we don't know
where the electron is
because the electron behaves
like a wave unless we try
to measure its position
for which we would have
to use a very energetic photon
and blow the electron clean
out of the atom basically,
and then, of course,
we've lost our thread.
We were trying to
figure out what it looked
like when we didn't look at it,
and the problem is
that's not allowed.
You can only talk about
the things you can measure.
You can't talk about things
that you can imagine
what they might be.
Stable distributions
of the electron density
have to be standing waves.
Standing wave, you can
think of a guitar string.
If I put my finger on a fret here, the string can't move here, and it can't move at the other end. In between, it can vibrate, and it
just makes a stable pattern
and sits there doing
the same thing.
And the electron, then, has to
do something very much like that
when it's in an atom, and
it has a very interesting
wave property.
We couldn't understand it at all
if we thought of it as a rock
or a bb moving around in there.
An orbit, of course, is
a periodic trajectory,
but electrons don't
have trajectories.
And so instead of an
orbit, we speak of orbitals,
which is the wave analog of a stable orbit.
Now it's the wave function that has to match. Just like the guitar string can only play a certain note if I hit that fret, the wave function can only play certain notes in the atom.
It has to come around
and match itself,
and give a stable standing wave,
and that means the wavelength of the wave function has to fit into the space into which the electron is confined.
If it doesn't, you
won't find the electron
in that wave function.
So there is destructive
interference,
and the wave function vanishes,
and if the wave function
vanishes, then the chance
of seeing the electron at
that energy also vanishes
because the wave function
tells us the probability.
A 3D thing is kind of hard to visualize, but we can certainly do the 2D version, an electron on a ring, and that makes it much easier for us to draw.
So let's have a look, then, at an electron on a ring.
Here is an electron
going around as a wave.
I say going around, but
I don't know where it is
because I haven't
measured its position,
but I have a standing wave.
It's equal everywhere in space.
I'm just showing the real part.
When the real part is 0, the imaginary part is big,
and that's why, another
reason why we have
to have complex waves sometimes.
This one goes round
and round and round,
and every time it comes around,
it's back in the same place.
Goes around, comes
back in the same place.
Round and round.
And so, therefore, it's going to
make a stable repeating pattern.
It's going to sit
there, and, in fact,
we can't see anything
going round and round.
I imagined it was doing that as if it were a little rock going around and around when, in fact,
all it is is just this pattern.
Stable repeating pattern.
Because it exactly matches itself when it hooks up again.
But if I have a slightly
different wavelength
so that it doesn't match
when it comes around,
but it's a little bit off.
It goes around, and instead of matching perfectly, it's a little higher.
Then if it goes around
again, it's a little worse.
If it goes around again,
it's a little worse.
Waves go up and down.
Finally, it's coming around.
It has the opposite
sign, and let's go
around then another
time on this thing.
Second time around,
it's a worse mismatch,
and if I go around
several times,
it appears when I draw the thing
that the wave is up and down
and up and down and up and down
and up and down everywhere.
And that means that
it cancels itself out.
There is no wave there if it's
up and down and up and down.
It just canceled itself out.
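As a quick numerical aside (my own Python sketch, not from the lecture): each trip around the ring multiplies the wave by a phase, and summing many trips shows reinforcement for an integer number of lobes and cancellation otherwise.

```python
import numpy as np

# Each trip around the ring picks up the phase e^(i * 2*pi * m),
# where m is the number of lobes that fit on the ring.
def total_amplitude(m, loops=200):
    # Add up the contributions from `loops` successive trips around.
    return sum(np.exp(1j * 2 * np.pi * m * n) for n in range(loops))

print(abs(total_amplitude(3)))    # integer lobes: trips reinforce, ~200
print(abs(total_amplitude(3.1)))  # mismatched wave: trips cancel, ~0
```

The numbers 3 and 3.1 are just illustrative values; any non-integer m leaves only a small residual amplitude instead of a stable standing wave.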
The only position that
can have a wave is the one
that perfectly matches.
Now there could be a higher state where,
instead of three lobes, it has four
going around.
That perfectly matches, too, but
there's not three and a half,
and there's not 3.1, and
there's not pi lobes.
There's exactly an integer number of lobes,
and that means there is a certain set
of discrete energy levels, labeled by
an integer, that the electron can be in,
and it can't just have any energy.
That's not allowed, and that's
very important because it explains
the spectroscopic observation of atoms:
they didn't just radiate any old light,
but every element gave certain
characteristic lines, which depended
on where it was in the periodic table
and so forth and so on, and, of course,
it's very important for chemical analysis.
That's one way you can tell what's
in an unknown sample: you do
atomic emission spectroscopy.
Confined systems.
Atoms are confined systems
because the electron has
to stay there, but there are
many other confined systems.
Nanoparticles; the so-called
particle in a box, which is a model
problem we're going to do
that has very easy mathematics
compared to real problems,
which is why we do it.
Wherever you have a confined system,
it doesn't matter how it's confined,
the wave function has to somehow
fit into the space available.
If it goes round or bounces
back and forth or does anything,
and it comes back different,
that means that particular
wave function is going
to cancel itself out,
and it's just gone.
This matching condition really
restricts the wave function
to a certain set of values, and
gives us allowed energy states,
such as those observed
in atoms and molecules, which
form the basis of all kinds
of chemical analysis
that we're going to do.
With light, we learned that short
wavelength equals high energy,
and that long wavelength
equals low energy.
And de Broglie said, well, particles
have a wave associated with them,
and now we've given this thing a name,
the wave function. It's a function,
and we can plot it if we have
a functional form for it,
which we sometimes do,
and we can look at it,
and what we find is
that if the wave function has
higher curvature, going up and down,
up and down like crazy,
that's sort of like a photon
with a short wavelength,
and that means that quantum state
is higher energy than one
that's all spread out, just kind
of moping around, without very many
up-and-down parts, without
very many nodes in it.
I'm going to close here
with the position operator,
and we'll pick this
up next time.
We found that the
momentum operator was
minus i h-bar times the derivative.
The position operator is,
in fact, I gave it before
as an example of an operator.
X hat on psi is just equal
to the number x on psi.
And the number x is the
position of the particle.
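As a small numerical aside (my own sketch, not from the lecture): on a grid of points, the position operator just multiplies the wave function by x, and a spike piled up at one grid point behaves as a position eigenfunction.

```python
import numpy as np

# Position operator on a grid: (X psi)(x) = x * psi(x).
x = np.linspace(-2.0, 2.0, 9)   # grid points, spacing 0.5
psi = np.zeros_like(x)
psi[6] = 1.0                    # all amplitude piled at x0 = x[6] = 1.0
X_psi = x * psi                 # apply the position operator
# X psi equals x0 * psi: the spike is an eigenfunction, eigenvalue x0.
print(np.allclose(X_psi, x[6] * psi))
```

This previews the "pile of sand" picture of a position eigenstate that comes up below: the narrower the pile, the better X-hat psi approximates a single number x0 times psi.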
The momentum eigenfunction
is e to the ipx over h-bar.
And to make sure that the
eigenfunction is normalized,
we should include a
normalization factor,
which I'll just write
as some number N.
And the question is, what does
the probability density look like
for a momentum eigenstate?
Well, we just take phi star
of p times phi of p.
We get N e to the minus ipx over h-bar
times N e to the plus ipx over h-bar;
the plus and the minus exponents
add to 0, and e to the 0 is 1.
It doesn't matter that the exponent
is imaginary rather than real;
that still works.
We just get N squared,
but that's weird
because it says the probability
density doesn't depend
on x. It's just some number,
and what that means, then,
is for a momentum eigenstate,
the particle has equal
probability of being anywhere
at all, basically from minus infinity
to plus infinity, equally likely.
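A one-line numerical check of this (my own Python sketch, with illustrative numbers and hbar set to 1): the density of N e^(ipx/hbar) really is the flat constant N squared at every x.

```python
import numpy as np

hbar, p, N = 1.0, 2.5, 0.1            # illustrative values, hbar = 1 units
x = np.linspace(-10.0, 10.0, 201)
psi = N * np.exp(1j * p * x / hbar)   # momentum eigenfunction
density = (np.conj(psi) * psi).real   # e^(-ipx)*e^(+ipx) = 1, leaving N^2
print(density.min(), density.max())   # both equal N^2: flat everywhere
```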
And so position eigenstates, where
the particle is definitely within
some tiny region, 10 to the minus
whatever you like, around one point,
and momentum eigenstates,
where the momentum is
exactly determined,
are two completely different
aspects of measurement,
and they're completely at odds.
The best you can do is
you can get the position
to within a certain limit,
and then simultaneously
you can get the momentum
to within a certain limit, but
if you try to get too aggressive
with one, squeeze it in, then
the other one gets wide, because
the product of the two uncertainties
is like pushing on Jello
or something.
It squeezes out somewhere else if you
get too aggressive with it,
and there's no way you can
eliminate that effect.
At best you can saturate
the uncertainty inequality,
but you can't make the product 0
the way you'd like to.
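This trade-off can be sketched numerically (my own Python illustration, not from the lecture, with hbar = 1): squeeze a Gaussian packet in position, Fourier transform it to momentum space, and the momentum spread widens so the product stays pinned at hbar/2.

```python
import numpy as np

hbar = 1.0
x = np.linspace(-50.0, 50.0, 4096)
dx = x[1] - x[0]

def widths(sigma):
    """Position and momentum spreads of a Gaussian packet of width sigma."""
    psi = np.exp(-x**2 / (4 * sigma**2))                # Gaussian packet
    prob_x = np.abs(psi)**2
    dx_w = np.sqrt(np.sum(x**2 * prob_x) / np.sum(prob_x))
    p = 2 * np.pi * hbar * np.fft.fftfreq(len(x), dx)   # momentum grid
    prob_p = np.abs(np.fft.fft(psi))**2                 # momentum density
    dp_w = np.sqrt(np.sum(p**2 * prob_p) / np.sum(prob_p))
    return dx_w, dp_w

for sigma in (0.25, 1.0, 4.0):
    dx_w, dp_w = widths(sigma)
    print(sigma, dx_w, dp_w, dx_w * dp_w)   # product stays near hbar/2
```

A Gaussian is the special shape that saturates the inequality; for any other shape the product of the two spreads comes out larger than hbar/2, never smaller.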
Now, you should think of e to the ipx
as a corkscrew.
If it's corking this way, the
particle's going that way.
If it's corking the other
way, it's going this way,
but in either case, it's
just a constant thing corking
around like a corkscrew driving
through a wine bottle cork.
It's going to go a
certain direction.
A position eigenstate, on the other hand,
shouldn't be like that at all.
The wave function should be piled up
like a big pile of sand
at one position, and then
it should be 0 elsewhere
because we know when we take
the wave function and square it,
that tells us the probability
of finding the particle.
So we take some function and pile it up
like the Eiffel Tower, real steep;
then the particle's going to be there,
and then it might
have some uncertainty.
It might be out here, but
we could imagine piling it
up very steep and very high,
and that would be a
position eigenfunction.
Next time, what I'm
going to do is I'm going
to take a model position
function that's localized,
and I'm going to expand it in
terms of momentum functions
and show you that the
momentum of such a function
like that becomes more
and more uncertain
as we make the position sharper.
And then, finally, after the first week
of class is over, we're going
to introduce the wave equation,
which is what we didn't
have so far.
It tells us exactly how these
wave functions move forward
in time, and how they
have certain energies
and other properties,
and that allows us, then,
to discover what these wave
functions actually are.
So we'll pick it
up there next time.
