The following
content is provided
under a Creative
Commons license.
Your support will help MIT
OpenCourseWare continue
to offer high quality
educational resources for free.
To make a donation or to
view additional materials
from hundreds of MIT courses,
visit MIT OpenCourseWare
at ocw.mit.edu.
PROFESSOR: So,
finally, before I get
started on the new
stuff, questions
from the previous lectures?
No questions?
Yeah.
AUDIENCE: I have a question.
You might have said
this last time,
but when is the first exam?
PROFESSOR: Ah, excellent.
Those will be posted on the
Stellar page later today.
Yeah.
AUDIENCE: OK, so we're
associating operators
with observables, right?
PROFESSOR: Yes.
AUDIENCE: And Professor [? Zugoff ?] mentioned that whenever we act on a wave function with an operator, it collapses.
PROFESSOR: OK, so let me
rephrase the question.
This is a very valuable
question to talk through.
So, thanks for asking it.
So, we've previously observed
that observables are associated
with operators--
and we'll review
that in more detail
in a second--
and the statement was
then made, does that
mean that acting on a wave
function with an operator
is like measuring
the observable?
And it's absolutely
essential that you understand
that acting on a wave
function with an operator
has nothing whatsoever to do
with measuring that associated
observable.
Nothing.
OK?
And we'll talk about
the relationship
and what those things mean.
But here's a very
tempting thing to think.
I have a wave function.
I want to know the momentum.
I will thus operate with
the momentum operator.
Completely wrong.
So, before I even tell you
what the right statement is,
let me just get that
out of your head,
and then we'll talk through
that in much more detail
over the next lecture.
Yeah.
AUDIENCE: Doesn't collapse violate special relativity?
PROFESSOR: We're
doing everything
non-relativistically.
Quantum Mechanics
for 804 is going
to be a universe in which
there is no relativity.
If you ask me that more
precisely in my office hours,
I will tell you a
relativistic story.
But it doesn't violate
anything relativistic.
At all.
We'll talk about that-- just
to be a little more detailed--
that will be a very
important question that we'll
deal with in the last two
lectures of the course,
when we come back to Bell's
inequality and locality.
Other questions?
OK.
So, let's get started.
So, just to review where we are.
In Quantum Mechanics
according to 804,
our first pass at the
definition of quantum mechanics
is that the configuration of
any system-- and in particular,
think about a single
point particle--
the configuration
of our particle
is specified by giving
a wave function, which
is a function which
may depend on time,
but a function of position.
Observables-- and this is
a complete specification
of the state of the system.
If I know the wave
function, I neither
need nor have access to
any further information
about the system.
All the information specifying
the configuration of the system
is completely contained
in the wave function.
Secondly, observables
in quantum mechanics
are associated with operators.
Something you can
build an experiment
to observe or to measure is
associated with an operator.
And by an operator, I
mean a rule or a map,
something that tells you
if you give me a function,
I will give you a
different function back.
OK?
An operator is just a
thing which eats a function
and spits out another function.
Now, operators-- which I
will denote with a hat,
as long as I can remember
to do so-- operators
come-- and in particular, the
kinds of operators we're going
to care about, linear operators,
which you talked about
in detail last lecture--
linear operators come endowed
with a natural set
of special functions
called Eigenfunctions with
the following property.
Your operator, acting
on its Eigenfunction,
gives you that same function
back times a constant.
So, that's very special. Generically, an operator will take a function and give you some other random function that doesn't look at all like the original function.
It's a very special thing to
give you the same function
back times a constant.
So, a useful thing
to think about here
is just in the case
of vector spaces.
So, I'm going to consider
the operation corresponding
to rotation around the
z-axis by a small angle.
OK?
So, under rotation around
the z-axis by a small angle,
I take an arbitrary vector
to some other stupid vector.
Which vector it goes to is completely determined by the rule: I rotate by a small amount, right?
I take this vector and
it gives me this one.
I take that vector,
it gives me this one.
Everyone agree with that?
What are the Eigenvectors
of the rotation
by a small angle
around the z-axis?
AUDIENCE: [INAUDIBLE]
PROFESSOR: Yeah, it's
got to be a vector that
doesn't change its direction.
It just changes in magnitude.
So there's one, right?
I rotate.
And what's its Eigenvalue?
AUDIENCE: One.
PROFESSOR: One, because
nothing changed, right?
Now, let's consider the
following operation.
Rotate by small angle
and double its length.
OK, that's a different operator.
I rotate and I
double the length.
I rotate and I
double the length.
I rotate and I
double the length.
Yeah, so what's the Eigenvalue
under that operator?
AUDIENCE: Two.
PROFESSOR: Two.
Right, exactly.
So these are a very
special set of functions.
This is the same idea, but
instead of having vectors,
we have functions.
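The rotation example above can be checked numerically. Here is a minimal sketch (an editorial illustration, not part of the lecture): a rotation about the z-axis leaves the z-direction fixed with eigenvalue 1, and "rotate and double" has the same eigenvector with eigenvalue 2.

```python
import numpy as np

theta = 0.1  # a small rotation angle about the z-axis
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

z_hat = np.array([0.0, 0.0, 1.0])
generic = np.array([1.0, 2.0, 0.5])

# A generic vector gets taken to some other vector...
assert not np.allclose(Rz @ generic, generic)
# ...but z-hat comes back unchanged: an eigenvector with eigenvalue 1.
assert np.allclose(Rz @ z_hat, 1.0 * z_hat)

# "Rotate and double the length" is a different operator;
# z-hat is still an eigenvector, now with eigenvalue 2.
D = 2.0 * Rz
assert np.allclose(D @ z_hat, 2.0 * z_hat)
```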
Questions?
I thought I saw a hand pop up.
No?
OK, cool.
Third, superposition.
Given any two viable
wave functions
that could describe
our system, that
could specify states or
configurations of our system,
an arbitrary superposition of
them-- arbitrary linear sum--
could also be a valid
physical configuration.
There is also a
state corresponding
to being in an arbitrary sum.
For example, if we know that
the electron could be black
and it could be
white, it could also
be in an arbitrary superposition
of being black and white.
And that is a statement in
which the electron is not black.
The electron is not white.
It is in the
superposition of the two.
It does not have
a definite color.
And that is exactly
the configuration
we found inside our apparatus
in the first lecture.
Yeah.
AUDIENCE: Are those Phi-A
arbitrary functions,
or are they supposed
to be Eigenfunctions?
PROFESSOR: Excellent. So, in general the superposition-- thank you, it's an excellent question.
The question was are these
Phi-As arbitrary functions,
or are they specific
Eigenfunctions
of some operator?
So, the superposition
principle actually
says a very general thing.
It says, given any two
viable wave functions,
an arbitrary sum, an
arbitrary linear combination,
is also a viable wave function.
But here I want to note something slightly different.
And this is why I chose
the notation I did.
Given an operator
A, it comes endowed
with a special set of functions,
its Eigenfunctions, right?
We saw that last time.
And I claimed the following.
Beyond just the usual
superposition principle,
the set of Eigenfunctions
of operators
corresponding to physical
observables-- so, pick
your observable, like momentum.
That corresponds to an operator.
Consider the Eigenfunctions of momentum. Those, we know what they are. They're plane waves with definite wavelength, right? e to the ikx.
Any function can be
expressed as a superposition
of those Eigenfunctions of
your physical observable.
We'll go over this in
more detail in a minute.
But here I want to emphasize
that the Eigenfunctions have
a special property that-- for
observables, for operators
corresponding to observables--
the Eigenfunctions form
a basis.
Any function can be expanded
as some linear combination
of these basis functions,
the classic example
being the Fourier expansion.
Any function, any
periodic function,
can be expanded as a sum
of sines and cosines,
and any function
on the real line
can be expanded as a sum of
exponentials, e to the ikx.
This is the same statement.
The Eigenfunctions
of momentum are what?
e to the ikx.
So, this is the same statement: when the observable is the momentum, it says that an arbitrary function can be expanded as a superposition, or a sum, of exponentials, and that's the Fourier theorem.
Cool?
Was there a question?
AUDIENCE: [INAUDIBLE]
PROFESSOR: OK, good.
Other questions on these points?
So, these should not yet be
trivial and obvious to you.
If they are, then that's
great, but if they're not,
we're going to be
working through examples
for the next several
lectures and problem sets.
The point now is to give you
a grounding on which to stand.
Fourth postulate.
What these expansion
coefficients mean.
And this is also
an interpretation
of the meaning of
the wave function.
What these expansion
coefficients mean
is that the probability that
I measure the observable
to be a particular Eigenvalue
is the norm squared
of the expansion coefficient.
OK?
So, I tell you that
any function can
be expanded as a superposition
of plane waves-- waves
with definite momentum--
with some coefficients.
And those coefficients
depend on which function
I'm talking about.
What these coefficients
tell me is the probability
that I will measure the
momentum to be the associated
value, the Eigenvalue.
OK?
Take that coefficient,
take its norm squared,
that gives me the probability.
How do we compute these
expansion coefficients?
I think Barton didn't introduce this notation to you, but he certainly told you this. So let me introduce this notation, which I particularly like.
We can extract the
expansion coefficient
if we know the wave function
by taking this integral,
taking the wave
function, multiplying
by the complex conjugate of
the associated Eigenfunction,
doing the integral.
And that notation is this
round brackets with Phi A
and Psi is my notation
for this integral.
And again, we'll still see
this in more detail later on.
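A finite-dimensional sketch (my own illustration, not from the lecture) makes postulate four and this bracket notation concrete: take any Hermitian matrix as a stand-in for an observable, use its orthonormal eigenvectors as the Eigenfunctions phi_a, and the discrete inner product plays the role of the integral with the complex conjugate.

```python
import numpy as np

rng = np.random.default_rng(0)

# A finite-dimensional stand-in for an observable: any Hermitian matrix.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2

# Its eigenvectors play the role of the eigenfunctions phi_a.
eigvals, eigvecs = np.linalg.eigh(A)   # columns are orthonormal

# An arbitrary normalized state psi...
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# ...has expansion coefficients c_a = (phi_a, psi), the inner product
# that the integral against phi_a's complex conjugate computes.
c = eigvecs.conj().T @ psi

# The state is recovered as a superposition of the eigenfunctions,
# and the norms squared |c_a|^2 sum to 1: they are probabilities.
assert np.allclose(eigvecs @ c, psi)
probs = np.abs(c) ** 2
assert np.isclose(probs.sum(), 1.0)
```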
And finally we have
collapse, the statement that,
if we go about measuring
some observable A,
then we will always, always
observe precisely one
of the Eigenvalues
of that operator.
We will never measure
anything else.
If the Eigenvalues are one,
two, three, four, and five,
you will never measure
a half, or 13 halves.
You will always
measure an Eigenvalue.
And upon measuring
that Eigenvalue,
you can be confident that that's
the actual value of the system.
I observe that it's
a white electron,
then it will remain white if I
subsequently measure its color.
What that's telling you is
it's no longer a superposition
of white and black,
but its wave function
is that corresponding
to a definite value
of the observable.
So, somehow the process
of measurement-- and this
is a disturbing statement, to
which we'll return-- somehow
the process of measuring
the observable changes
the wave function from our
arbitrary superposition
to a specific Eigenfunction,
one particular Eigenfunction
of the operator we're measuring.
And this is called the
collapse of the wave function.
It collapses from
being a superposition
over possible states to
being in a definite state
upon measurement.
And the definite
state is that state
corresponding to the value
we observed or measured.
Yeah.
AUDIENCE: So, when the
wave function collapses,
does it instantaneously not
become a function of time
anymore?
Because originally
we had Psi of (x,t).
PROFESSOR: Yeah, that's
a really good question.
So I wrote this only in terms of position, but I should write it more precisely.
So, the question was, does this
happen instantaneously, or more
precisely, does it cease
to be a function of time?
Thank you.
It's a very good question.
So, no, it doesn't cease
to be a function of time.
It just says that
Psi at x-- what
you know upon doing
this measurement
is that Psi, as a function of x, at the time-- which I'll call t star-- at which you've done the measurement, is equal to this wave function.
And so that leaves us with
the following question, which
is another way of asking
the question you just asked.
What happens next?
How does the system
evolve subsequently?
And at the very end
of the last lecture,
we answered that--
or rather, Barton
answered that-- by introducing
the Schrodinger equation.
And the Schrodinger equation,
we don't derive, we just posit.
Much like Newton
posits f equals ma.
You can motivate it,
but you can't derive it.
It's just what we mean by
the quantum mechanical model.
And Schrodinger's equation
says, given a wave function,
I can determine the time
derivative, the time
rate of changes of
that wave function,
and determine its
time evolution,
and its time derivative,
its slope-- its velocity,
if you will-- is one upon i h
bar, the energy operator acting
on that wave function.
So, suppose we measure
that our observable capital
A takes the value
of little a, one
of the Eigenvalues of
the associated operators.
Suppose we measure
that A equals little a
at some particular
moment t star.
Then we know that
the wave function
is Psi of x at that
moment in time.
We can then compute
the time derivative
of the wave function
at that moment in time
by acting on this wave
function with the operator e
hat, the energy operator.
And we can then integrate that
differential equation forward
in time and determine how
the wave function evolves.
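The evolution rule just described can be sketched numerically. For a toy two-level system with hbar set to 1 (an editorial illustration with made-up numbers, not from the lecture), d psi/dt = (1/i hbar) E psi integrates to psi(t) = exp(-i E t / hbar) psi(0), which we can build from the eigendecomposition of E:

```python
import numpy as np

hbar = 1.0
# A two-level toy energy operator (Hermitian), in arbitrary units.
E = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Suppose a measurement at t* left the system in a definite state.
psi0 = np.array([1.0, 0.0], dtype=complex)

# d psi/dt = (1/i hbar) E psi  integrates to
# psi(t) = exp(-i E t / hbar) psi0, built from E's eigendecomposition.
w, V = np.linalg.eigh(E)
def evolve(psi, t):
    return V @ (np.exp(-1j * w * t / hbar) * (V.conj().T @ psi))

psi_t = evolve(psi0, 0.5)
# The state has genuinely changed...
assert not np.allclose(psi_t, psi0)
# ...and integrating the same rule backwards recovers the initial condition.
assert np.allclose(evolve(psi_t, -0.5), psi0)
```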
The point of today's
lecture is going
to be to study how time
evolution works in quantum
mechanics, and to look
at some basic examples
and basic strategies for
solving the time evolution
problem in quantum mechanics.
One of the great surprises
in quantum mechanics--
hold on just one sec-- one of
the real surprises in quantum
mechanics is that
time evolution is
in a very specific sense
trivial in quantum mechanics.
It's preposterously simple.
In particular, time evolution is
governed by a linear equation.
How many of you have studied
a classical mechanical system
where the time evolution is
governed by a linear equation?
Right.
OK, all of you.
The harmonic oscillator.
But otherwise, not at all.
Otherwise, the equations
in classical mechanics
are generically
highly nonlinear.
The time rate of change of the momentum of a particle is the force, the gradient of the potential, and that's generally some complicated function of position.
You've got some capacitors
over here, and maybe
some magnetic field.
It's very nonlinear.
Evolution in quantum
mechanics is linear,
and this is going
to be surprising.
It's going to lead to some
surprising simplifications.
And we'll turn
back to that, but I
want to put that your
mind like a little hook,
that that's something you should hold on to as different from classical mechanics.
And we'll come back to that.
Yeah.
AUDIENCE: If a particle is continuously observed, does it not evolve?
PROFESSOR: That's
an awesome question.
The question is, look,
imagine I observe--
I'm going to paraphrase--
imagine I observe a particle
and I observe that it's here.
OK?
Subsequently, its wave function
will evolve in some way--
and we'll actually study that
later today-- its wave function
will evolve in some
way, and it'll change.
It won't necessarily be
definitely here anymore.
But if I just keep measuring it
over and over and over again,
I just keep measuring it to be right there.
It can't possibly evolve.
And that's actually
true, and it's
called the Quantum Zeno problem.
So, it's the observation
that if you continuously
measure a thing,
you can't possibly
have its wave function
evolve significantly.
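A two-level toy model makes the Zeno observation concrete (this sketch is my own illustration with made-up units, not an experiment from the lecture): start in one state, let a Hermitian energy operator drive it toward the other, and after each of n equally spaced measurements collapse it back. As n grows, the survival probability approaches one.

```python
import numpy as np

hbar = 1.0
H = np.array([[0.0, 1.0], [1.0, 0.0]])   # drives |0> toward |1>
w, V = np.linalg.eigh(H)

def evolve(psi, t):
    return V @ (np.exp(-1j * w * t / hbar) * (V.conj().T @ psi))

def survival(total_time, n_measurements):
    """Probability of finding |0> at every one of n equally spaced
    measurements, collapsing back onto |0> after each one."""
    dt = total_time / n_measurements
    p = 1.0
    psi = np.array([1.0, 0.0], dtype=complex)
    for _ in range(n_measurements):
        psi = evolve(psi, dt)
        p0 = abs(psi[0]) ** 2      # chance this measurement says "still here"
        p *= p0
        psi = np.array([1.0, 0.0], dtype=complex)   # collapse
    return p

# Measuring once after t = 1: the state has mostly moved on.
assert survival(1.0, 1) < 0.5
# Measuring very frequently: it never gets the chance to evolve.
assert survival(1.0, 1000) > 0.99
```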
And not only is it a cute
idea, but it's something people
do in the laboratory.
So, Martin-- well, OK.
People do it in a
laboratory and it's cool.
Come ask me and I'll tell
you about the experiments.
Other questions?
There were a bunch.
Yeah.
AUDIENCE: So after you measure,
the Schrodinger equation
also gives you the
evolution backwards in time?
PROFESSOR: Oh, crap!
Yes.
That's such a good question.
OK.
I hate it when people
ask that at this point,
because I had to
then say more words.
That's a very good question.
So the question goes like this.
So this was going to be a
punchline later on in the lecture, but you stole my thunder, so that's awesome.
So, here's the deal.
We have a rule for time
evolution of a wave function,
and it has some
lovely properties.
In particular-- let me talk
through this-- in particular,
this equation is linear.
So what properties does it have?
Let me just-- I'm
going to come back
to your question
in just a second,
but first I want to set it up
so we have a little more meat
to answer your
question precisely.
So we note some properties
of this equation, this time
evolution equation.
The first is that it's
a linear equation.
The derivative of
a sum of functions
is a sum of the derivatives.
The energy operator's
a linear operator,
meaning the energy operator
acting on a sum of functions
is a sum of the energy operator
acting on each function.
You guys studied linear
operators in your problem set,
right?
So, these are linear.
What that tells you
is if Psi 1 of x and t
solves the Schrodinger equation,
and Psi 2 of x and t-- two
different functions of
position in time-- both
solve the Schrodinger equation,
then any combination of them--
alpha Psi 1 plus Beta Psi 2--
which I will call Psi, and I'll make it a capital Psi for fun-- also solves the Schrodinger equation automatically.
Given two solutions of
the Schrodinger equation,
a superposition of them--
an arbitrary superposition--
also solves the
Schrodinger equation.
This is linearity.
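Linearity can be verified directly in a small numerical sketch (an editorial illustration, not from the lecture): evolving a superposition gives exactly the superposition of the evolved solutions.

```python
import numpy as np

hbar = 1.0
rng = np.random.default_rng(1)

# Any Hermitian matrix can stand in for the energy operator.
H = rng.normal(size=(3, 3))
H = (H + H.T) / 2
w, V = np.linalg.eigh(H)

def evolve(psi, t):
    """Solve d psi/dt = (1/i hbar) H psi exactly via eigendecomposition."""
    return V @ (np.exp(-1j * w * t / hbar) * (V.conj().T @ psi))

psi1 = np.array([1.0, 0.0, 0.0], dtype=complex)
psi2 = np.array([0.0, 1.0, 1.0], dtype=complex) / np.sqrt(2)
alpha, beta = 0.6, 0.8j

# Evolving the superposition equals superposing the evolved solutions.
t = 2.7
lhs = evolve(alpha * psi1 + beta * psi2, t)
rhs = alpha * evolve(psi1, t) + beta * evolve(psi2, t)
assert np.allclose(lhs, rhs)
```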
Cool?
Next property.
It's unitary.
What I mean by unitary is this.
It concerns probability.
And you'll give a
precise derivation
of what I mean by
unitary and you'll
demonstrate that, in fact,
Schrodinger evolution
is unitary on your
next problem set.
It's not on the current one.
But what I mean by unitary is that it conserves probability.
Whoops, that's an o.
Conserves probability.
IE, if there's an
electron here, or if we
have an object, a
piece of chalk-- which
I'm treating as a quantum
mechanical point particle--
it's described by
the wave function.
The integral of the probability distribution over all the places it could possibly be had better be one,
because it had better be
somewhere with probability one.
That had better
not change in time.
If I solve the Schrodinger equation and evolve the system
forward for half an
hour, it had better not
be the case that the
total probability
of finding the
particle is one half.
That would mean things disappear from the universe.
And much as my socks
would seem to be
a counter example of that,
things don't disappear, right?
It just doesn't happen.
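Norm conservation is easy to check in the same finite-dimensional sketch used above (an editorial illustration, not part of the lecture): because the energy operator is Hermitian, exp(-i H t / hbar) is unitary, and the total probability stays one at all times.

```python
import numpy as np

hbar = 1.0
rng = np.random.default_rng(2)

H = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
H = (H + H.conj().T) / 2          # Hermitian: real energies

w, V = np.linalg.eigh(H)
psi = rng.normal(size=5) + 1j * rng.normal(size=5)
psi /= np.linalg.norm(psi)        # total probability one at t = 0

for t in [0.1, 1.0, 10.0, 100.0]:
    psi_t = V @ (np.exp(-1j * w * t / hbar) * (V.conj().T @ psi))
    # Probability is conserved: nothing disappears from the universe.
    assert np.isclose(np.linalg.norm(psi_t), 1.0)
```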
So, quantum mechanics
is demonstrably-- well,
quantum mechanics
is unitary, and this
is a demonstrably good
description of the real world.
It fits all the observations
we've ever made.
No one's ever discovered
an experimental violation
of unitarity of
quantum mechanics.
I will note that there is
a theoretical violation
of unitarity in quantum
mechanics, which
is dear to my heart.
It's called the Hawking Effect,
and it's an observation that,
due to quantum mechanics, black
holes in general relativity--
places from which light
cannot escape-- evaporate.
So you throw stuff in and
you form a black hole.
It's got a horizon.
If you fall through
that horizon,
we never see you again.
Surprisingly, a black hole's
a hot object like an iron,
and it sends off radiation.
As it sends off radiation,
it's losing its energy.
It's shrinking.
And eventually it will,
like the classical atom,
collapse to nothing.
There's a quibble
going on right now
over whether it really
collapses to nothing,
or whether there's
a little granule
nugget of quantum goodness.
[LAUGHTER]
We argue about this.
We get paid to argue about this.
[LAUGHTER]
So, but here's the funny thing.
If you threw in a dictionary and
then the black hole evaporates,
where did the information
about what made the black hole
go if it's just thermal
radiation coming out?
So, this is a
classic calculation,
which to a theorist says, ah ha!
Maybe unitarity isn't conserved.
But, look.
Black holes, theorists.
There's no
experimental violation
of unitarity anywhere.
And if anyone ever did
find such a violation,
it would shatter the basic
tenets of quantum mechanics,
in particular the
Schrodinger equation.
So that's something we would
love to see but never have.
It depends on your
point of view.
You might hate to see it.
And the third-- and
this is, I think,
the most important-- is that
the Schrodinger evolution, this
is a time derivative.
It's a differential equation.
If you know the
initial condition,
and you know the derivative,
you can integrate it forward
in time.
And there are existence and uniqueness theorems for this.
The system is deterministic.
What that means
is that if I have
complete knowledge of the
system at some moment in time,
if I know the wave function
at some moment in time,
I can determine
unambiguously the wave
function in all subsequent
moments of time.
Unambiguously.
There's no probability, there's
no likelihood, it's determined.
Completely determined.
Given full knowledge now, I
will have full knowledge later.
Does everyone agree
that this equation
is a deterministic
equation in that sense?
Question.
AUDIENCE: It's also local?
PROFESSOR: It's all-- well, OK.
This one happens
to be-- you need
to give me a better
definition of local.
So give me a definition
of local that you want.
AUDIENCE: The time evolution of the wave function at a point depends only on the values of the derivatives of the wave function and the potential energy at that point.
PROFESSOR: No.
Unfortunately,
that's not the case.
We'll see counter
examples of that.
The wave function--
the energy operator.
So let's think about
what this equation says.
What this says is the
time rate of change
of the value of the wave
function at some position
and some moment in time is the
energy operator acting on Psi
at x and t.
But I didn't tell you what
the energy operator is.
The energy operator
just has to be linear.
But it doesn't have to be--
it could know about the wave
function everywhere.
The energy operator's a map
that takes the wave function
and tells you what
it should be later.
And so, at this level there's
nothing about locality built
in to the energy
operator, and we'll
see just how bad that can be.
So, this is related
to your question
about special relativity, and
so those are deeply intertwined.
We don't have that
property here yet.
But keep that in your
mind, and ask questions
when it seems to come up.
Because it's a very,
very, very important
question when we talk
about relativity.
Yeah.
AUDIENCE: Are postulates
six and three redundant
if the Schrodinger equation
has superposition in it?
PROFESSOR: No.
Excellent question.
That's a very good question.
The question is, look, there's
postulate three, which says,
given any two viable wave functions of the system, there's another state, their superposition at some moment in time, which is also a viable wave function.
But number six, the Schrodinger
equation-- or sorry,
really the linearity
property of the Schrodinger
equation-- so it needs to be the case for the Schrodinger equation, but it says something slightly different. It doesn't just say that any plausible or viable wave function and another can be superposed.
It says that, specifically,
any solution of the Schrodinger
equation plus any other solution
of the Schrodinger equation
is again a solution of the Schrodinger equation.
So, it's a slightly
more specific thing
than postulate three.
However, your question is
excellent because could it
have been that the
Schrodinger evolution didn't
respect superposition?
Well, you could imagine
something, sure.
We could've had a different equation, right?
It might not have been linear.
We could have had a Schrodinger equation where this was equal to dt Psi. So imagine this equation. How could we have blown linearity while preserving determinism? We could have added, plus, I don't know, Psi squared of x.
So that would now be
a nonlinear equation.
It's actually referred to as the nonlinear Schrodinger equation.
Well, people mean
many different things
by the nonlinear
Schrodinger equation,
but that's a nonlinear
Schrodinger equation.
So you could certainly
write this down.
It's not linear.
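You can see what such a nonlinear term does in a small numerical sketch (my own illustration, not from the lecture; I use the cubic term |psi|^2 psi, a common choice of nonlinearity, in place of the Psi squared written on the board): the evolution stays deterministic, but superpositions of solutions stop being solutions.

```python
import numpy as np

hbar = 1.0
H = np.array([[0.0, 1.0], [1.0, 0.0]])

def step(psi, dt, nonlinear):
    """One small Euler step of d psi/dt = (1/i hbar)(H psi [+ |psi|^2 psi])."""
    rhs = H @ psi
    if nonlinear:
        rhs = rhs + np.abs(psi) ** 2 * psi   # the added nonlinear term
    return psi + dt * rhs / (1j * hbar)

def evolve(psi, t, nonlinear, n=2000):
    dt = t / n
    for _ in range(n):
        psi = step(psi, dt, nonlinear)
    return psi

psi1 = np.array([1.0, 0.0], dtype=complex)
psi2 = np.array([0.0, 1.0], dtype=complex)

for nonlinear in (False, True):
    lhs = evolve(psi1 + psi2, 1.0, nonlinear)
    rhs = evolve(psi1, 1.0, nonlinear) + evolve(psi2, 1.0, nonlinear)
    agree = np.allclose(lhs, rhs, atol=1e-3)
    # Linear evolution superposes; the nonlinear equation does not.
    assert agree == (not nonlinear)
```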
Does it violate
the statement three
that any two states
of the system
could be superposed to
give another viable state
at a moment in time?
No, right?
It doesn't directly violate.
It violates the spirit of it.
And as we'll see
later, it actually
would cause dramatic problems.
It's something we don't
usually emphasize-- something
I don't usually emphasize
in lectures of 804,
but I will make
a specific effort
to mark the places where
this would cause disasters.
But, so this is actually a logically independent point, although morally-- and in some sense technically-- related to the superposition principle, number three.
Yeah.
AUDIENCE: For postulate three,
can that sum be an infinite sum?
PROFESSOR: Absolutely.
AUDIENCE: Can you
do bad things, then,
like creating discontinuous
wave functions?
PROFESSOR: Oh yes.
Oh, yes you can.
So here's the thing.
Look, if you have two functions
and you add them together--
like two smooth
continuous functions,
you add them together--
what do you get?
You get another smooth
continuous function, right?
Take seven.
You get another.
But if you take an
infinite number-- look,
mathematicians are sneaky.
There's a reason we keep
them down that hall,
far away from us.
[LAUGHTER]
They're very sneaky.
And if you give them an infinite
number of continuous functions,
they'll build for you a
discontinuous function, right?
Sneaky.
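The classic construction is the Fourier series of a square wave: every partial sum is smooth, but the limit has a jump. A quick numerical sketch (an editorial illustration, not from the lecture):

```python
import numpy as np

# Partial sums of the Fourier series of a square wave:
# f_N(x) = (4/pi) * sum over odd k <= N of sin(k x)/k.
def partial_sum(x, N):
    ks = np.arange(1, N + 1, 2)
    return (4 / np.pi) * np.sum(np.sin(np.outer(x, ks)) / ks, axis=1)

x = np.array([-0.01, 0.01])   # points straddling the jump at x = 0

# Each finite sum is smooth, so its values just left and right of 0
# are close together...
few = partial_sum(x, 3)
assert abs(few[1] - few[0]) < 0.2
# ...but as N grows the sum steepens toward a jump of size 2.
many = partial_sum(x, 20001)
assert abs(many[1] - many[0]) > 1.8
```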
Does that seem
terribly physical?
No.
It's what happens when you
give a mathematician too much
paper and time, right?
So, I mean this less
flippantly than I'm saying it,
but it's worth being a
little flippant here.
In a physical
setting, we will often
find that there are
effectively an infinite number
of possible things
that could happen.
So, for example in this room,
where is this piece of chalk?
It's described by a
continuous variable.
That's an uncountable
infinite number of positions.
Now, in practice,
you can't really
build an experiment
that does that,
but it is in principle
an uncountable infinity
of possible positions, right?
You will never get a
discontinuous wave function
for this guy, because
it would correspond
to divergent
amounts of momentum,
as you showed on the
previous problem set.
So, in general, we will often
be in a situation as physicists
where there's the
possibility of using
the machinery-- the
mathematical machinery--
to create pathological examples.
And yes, that is a risk.
But physically it never happens.
Physically it's
extraordinarily rare
that such infinite
divergences could matter.
Now, I'm not saying
that they never do.
But we're going to be very
carefree and casual in 804
and just assume that when
problems can arise from,
say, superposing an infinite
number of smooth functions,
leading potentially
to discontinuities
or singularities, that they will
either not happen for us-- not
be relevant-- or they will
happen because they're
forced too, so for
physical reasons
we'll be able to identify.
So, this is a very
important point.
We're not proving
mathematical theorems.
We're not trying to be rigorous.
To prove a mathematical
theorem you
have to look at all
the exceptional cases
and say, those
exceptional cases,
we can deal with
them mathematically.
To a physicist, exceptional
cases are exceptional.
They're irrelevant.
They don't happen.
It doesn't matter.
OK?
And it doesn't
mean that we don't
care about the mathematical
precision, right?
I mean, I publish
papers in math journals,
so I have a deep love
for these questions.
But they're not salient for
most of the physical questions
we care about.
So, do your best to try not
to let those special cases get
in the way of your understanding
of the general case.
I don't want you to
not think about them,
I just want you not
let them stop you, OK?
Yeah.
AUDIENCE: So, in
postulate five, you
mentioned that
[? functions ?] in effect
was an experiment that
more or less proves
this collapse [INAUDIBLE]
But, so I read that it
is not [? complicit. ?]
PROFESSOR: Yeah, so as with many
things in quantum mechanics--
that's a fair question.
So, let me make a slightly
more general statement
than answering that
question directly.
Many things will-- how to
say-- so, we will not prove--
and experimentally you almost
never prove a positive thing.
You can show that a prediction
is violated by experiment.
So there's always going
to be some uncertainty
in your measurements,
there's always
going to be some uncertainty
in your arguments.
However, in the absence
of a compelling alternate
theoretical
description, you cling
on to what you've got as
long as it fits your data,
and this fits the
data like a champ.
Right?
So, does it prove?
No.
It fits pretty well,
and nothing else
comes even within the ballpark.
And there's no
explicit violation
that's better than our
experimental uncertainties.
So, I don't know if
I'd say, well, we
could prove such a
thing, but it fits.
And I'm a physicist.
I'm looking for things that fit.
I'm not a metaphysicist.
I'm not trying to give you
some ontological commitment
about what things are true
and exist in the world, right?
That's not my job.
OK.
So much for our review.
But let me finally
come back to-- now
that we've observed
that it's determinist,
let me come back to
the question that
was asked a few minutes
ago, which is, look,
suppose we take
our superposition.
We evolve it forward for some
time using the Schrodinger
evolution.
Notice that it's time reversible. If we know the wave function at one time, we could run it backwards just as well as we could run it forwards, right?
We could integrate
that in time back,
or we could integrate
that in time forward.
So, if we know the wave
function at some moment in time,
we can integrate it forward,
and we can integrate it back
in time.
But, if at some
point we measure,
then the wave
function collapses.
And subsequently, the
system evolves according
to the Schrodinger
equation, but with
this new initial condition.
So now we seem to
have a problem.
We seem to have--
and I believe this
was the question that was asked.
I don't remember who asked it.
Who asked it?
So someone asked it.
It was a good question.
We have this problem
that there seem
to be two definitions of time
evolution in quantum mechanics.
One is the Schrodinger
equation, which
says that things
deterministically
evolve forward in time.
And the second is collapse,
that if you do a measurement,
things collapse non-deterministically, with some probability, to some possible state.
Yeah?
And the probability
is determined
by which wave function you have.
How can these
things both be true?
How can you have two different
definitions of time evolution?
So, this sort of
frustration lies
at the heart of much
of the sort of spiel
about the interpretation
of quantum mechanics.
On the one hand, we
want to say, well,
the world is inescapably
probabilistic.
Measurement comes with
probabilistic outcomes
and leads to collapse
of the wave function.
On the other hand, when
you're not looking,
the system evolves
deterministically.
And this sounds horrible.
It sounds horrible to
a classical physicist.
It sounds horrible to me.
It just sounds awful.
It sounds arbitrary.
Meanwhile, it makes it
sound like the world cares.
It evolves differently depending
on whether you're looking
or not.
And that-- come on.
I mean, I think we can all
agree that that's just crazy.
So what's going on?
So for a long time, physicists
in practice-- and still
in practice-- for a
long time physicists
almost exclusively
looked at this problem
and said, look, don't worry about it.
It fits the data.
It makes good predictions.
Work with me here.
Right?
And it's really hard to
argue against that attitude.
You have a set of rules.
It allows you to compute things.
You compute them.
They fit the data.
Done.
That is triumph.
But it's deeply disconcerting.
So, over the last, I don't know, quarter or third of the 20th century, roughly,
various people
started getting more
upset about this.
So, this notion of just
shut up and calculate,
which has been enshrined
in the physics literature,
goes under the name of the
Copenhagen interpretation,
which roughly says,
look, just do this.
Don't ask.
Compute the numbers,
and get what you will.
And people have questioned the
sanity or wisdom of doing that.
And in particular,
there's an idea--
so I refer to the Copenhagen
interpretation with my students
as the cop out,
because it's basically
a disavowal of responsibility.
Look, it doesn't make
sense, but I'm not
responsible for making sense.
I'm just responsible
for making predictions.
Come on.
So, more recently has come
the theory of decoherence.
And we're not going to
talk about it in any detail
until the last couple
lectures of 804.
Decoherence.
I can't spell to save my life.
So, the theory of decoherence.
And here's roughly
what the theory says.
The theory says,
look, the reason
you have this problem between
on the one hand, Schrodinger
evolution of a quantum
system, and on the other hand,
measurement leading
to collapse, is
that in the case of measurement
leading to collapse,
you're not really studying the
evolution of a quantum system.
You're studying the evolution
of a quantum system-- i.e.,
a little thing that you're
measuring-- interacting
with your experimental
apparatus, which is made up
of 10 to the 27th particles,
and you, made up of 10
to the 28th particles.
Whatever.
It's a large number.
OK, a lot more than that.
You, a macroscopic object,
where classical dynamics
are a good description.
In particular,
what that means is
that the quantum effects
are being washed out.
You're washing out
the interference
of fringes, which is why
I can catch this thing
and not have it split into
many different possible wave
functions and where it went.
So, dealing with that is hard,
because now if you really
want to treat the system
with Schrodinger evolution,
you have to study the
trajectory and the motion,
the dynamics, of every particle
in the system, every degree
of freedom in the system.
So here's the question that
decoherence is trying to ask.
If you take a system where
you have one little quantum
subsystem that you're
trying to measure,
and then again a gagillion
other degrees of freedom,
some of which you care
about-- they're made of you--
some of which you don't,
like the particles
of gas in the room,
the environment.
If you take that whole
system, does Schrodinger
evolution in the end
boil down to collapse
for that single
quantum microsystem?
And the answer is yes.
Showing that takes
some work, and we'll
touch on it at the end of 804.
But I want to mark
right here that this
is one of the most deeply
unsatisfying points
in the basic story
of quantum mechanics,
and that it's deeply
unsatisfying because of the way
that we're presenting it.
And there's a much
more satisfying--
although still you
never escape the fact
that quantum mechanics
violates your intuition.
That's inescapable.
But at least it's not illogical.
It doesn't directly
contradict itself.
So that story is the
story of decoherence.
And if we're very
lucky, I think we'll
try to get one of my friends
who's a quantum computing
guy to talk about it.
Yeah.
AUDIENCE: [INAUDIBLE]
Is it possible
that we get two
different results?
PROFESSOR: No.
No.
No.
There's never any ambiguity
about what result you got.
You never end up in a state
of-- and this is also something
that decoherence is
supposed to explain.
You never end up in a situation
where you go like, wait, wait.
I don't know.
Maybe it was here,
maybe it was there.
I'm really confused.
I mean, you can end
up in that situation
because you did a
bad job, but you
don't end up in that
situation because you're
in a superposition state.
When you're a classical beast
doing a classical
measurement, you always
end up in some definite state.
Now, what wave
function describes
you doesn't
necessarily correspond
to you being in a simple state.
You might be in a superposition
of thinking this and thinking
that.
But, when you think this,
that's in fact what happened.
And when you think that,
that's in fact what happened.
OK.
So I'm going to leave
this alone for the moment,
but I just wanted to mark
that as an important part
of the quantum mechanical story.
OK.
So let's go on to solving
the Schrodinger equation.
So what I want to do
for the rest of today
is talk about solving
the Schrodinger equation.
So when we set about solving
the Schrodinger equation,
the first thing
we should realize
is that at the end of the
day, the Schrodinger equation
is just some
differential equation.
And in fact, it's a particularly
easy differential equation.
It's a first order linear
differential equation.
Right?
We know how to solve those.
But, while it's
first order in time,
we have to think about what
this energy operator is.
So, just like the Newton
equation f equals ma,
we have to specify
the energy operator
before we can actually solve
the dynamics of the system.
In f equals ma, we
have to tell you
what the force is before
we can solve for p,
from p dot is equal to f.
So, for example.
So one strategy to solve
the Schrodinger equation
is to say, look, it's just
a differential equation,
and I'll solve it using
differential equation
techniques.
So let me specify, for
example, the energy operator.
What's an easy energy operator?
Well, imagine you had a harmonic
oscillator, which, you know,
physicists, that's your go-to.
So, harmonic
oscillator has energy p
squared over 2m plus M Omega
squared upon 2x squared.
But we're doing
quantum mechanics,
so we replace these
guys by operators.
So that's an energy operator.
It's a perfectly
viable operator.
And what is the differential
equation that this leads to?
What does the Schrodinger
equation lead to?
Well, I'm going to put the
ih bar on the other side.
ih bar derivative with respect
to time of Psi of x and t
is equal to p squared.
Well, we remember that
p is equal to h bar
upon i, derivative
with respect to x.
So p squared is minus h bar
squared derivative with respect
to x squared upon 2m, or
minus h bar squared upon 2m.
Psi prime prime.
Let me write this as dx squared.
Two spatial derivatives acting
on Psi of x and t plus m
omega squared upon 2x
squared Psi of x and t.
So here's a
differential equation.
And if we want to know how
does a system evolve in time,
i.e., given some initial
wave function, how does it
evolve in time, we just take
this differential equation
and we solve it.
And there are many tools to
solve this partial differential
equation.
For example, you could
put it on Mathematica
and just use NDSolve, right?
This wasn't
available, of course,
to the physicists at
the turn of the century,
but they were less timid
about differential equations
than we are, because they
didn't have Mathematica.
So, this is a very
straightforward differential
equation to solve,
and we're going
to solve it in a
couple of lectures.
We're going to study the
harmonic oscillator in detail.
What I want to emphasize for
you is that any system has to have
some specified energy operator,
just like any classical system
has some definite
force function.
And given that energy
operator, that's
going to lead to a
differential equation.
So one way to solve the
differential equation
is just to go ahead and
brute force solve it.
But, at the end of
the day, solving
the Schrodinger equation
is always, always
going to boil down
to some version
morally of solve this
differential equation.
Questions about that?
OK.
But when we actually look
at a differential equation
like this-- so, say we have
this differential equation.
It's got a derivative
with respect to time,
so we have to specify
some initial condition.
There are many ways to solve it.
So given E, given
some specific E,
given some specific
energy operator,
there are many ways to solve
the resulting
differential equation.
And I'm just going to mark
that, in general, it's
a PDE, because it's got
derivatives with respect
to time and derivatives
with respect to space.
And roughly speaking,
all these techniques
fall into three camps.
The first is just brute force.
That means some analog of
throw it on Mathematica,
go to the closet and pull
out your mathematician
and tie them to the
chalkboard until they're done,
and then put them back.
But some version of a
brute force, which is just
use, by hook or by
crook, some technique
that allows you to solve
the differential equation.
OK.
The second is
extreme cleverness.
And you'd be amazed how
often this comes in handy.
So, extreme
cleverness-- which we'll
see both of these
techniques used
for the harmonic oscillator.
That's what we'll do next week.
First, the brute
force, and secondly,
the clever way of solving
the harmonic oscillator.
When I say extreme
cleverness, what I really mean
is a more elegant use
of your mathematician.
You know something
about the structure,
the mathematical structure of
your differential equation.
And you're going to
use that structure
to figure out a good way to
organize the differential
equation, the good way
to organize the problem.
And that will teach you physics.
And the reason I distinguish
brute force from cleverness
in this sense is
that brute force,
you just get a list of numbers.
Cleverness, you learn
something about the way
the physics of the
system operates.
We'll see this at work
in the next two lectures.
And the third, which I really
should separate out: numerically.
And here I don't just mean
sticking it into MATLAB.
Numerically, it can
be enormously valuable
for a bunch of reasons.
First off, there
are often situations
where no classic technique
in differential equations
or no simple mathematical
structure that would just
leap to the imagination
comes to your aid.
And you have some horrible
differential equation you just
have to solve, and you
can solve it numerically.
Very useful lesson, and
a reason to not even--
how many of y'all are thinking
about being theorists of some
stripe or other?
OK.
And how many of y'all
are thinking about being
experimentalists of
some stripe or another?
OK, cool.
So, look, there's this
deep, deep prejudice
in theory against numerical
solutions of problems.
It's myopia.
It's a terrible attitude,
and here's the reason.
Computers are stupid.
Computers are
breathtakingly dumb.
They will do whatever
you tell them to do,
but they will not tell you
that was a dumb thing to do.
They have no idea.
So, in order to solve an
interesting physical problem,
you have to first
extract all the physics
and organize the
problem in such a way
that a stupid computer
can do the solution.
As a consequence, you learn
the physics about the problem.
It's extremely valuable to
learn how to solve problems
numerically, and we're going
to have problem sets later
in the course in
which you're going
to be required to
numerically solve
some of these
differential equations.
But it's useful because
you get numbers,
and you can check
against data, but also it
lets you in the process
of understanding
how to solve the problem.
You learn things
about the problem.
So I want to mark that as a
separate logical way to do it.
So today, I want to
start our analysis
by looking at a
couple of examples
of solving the
Schrodinger equation.
And I want to start by looking
at energy Eigenfunctions.
And then once we understand how
a single energy Eigenfunction
evolves in time, once we
understand that solution
to the Schrodinger
equation, we're
going to use the linearity
of the Schrodinger equation
to write down a general solution
of the Schrodinger equation.
OK.
So, first.
What happens if we have a
single energy Eigenfunction?
So, suppose our wave function
as a function of x at time t
equals zero is in a known
configuration, which
is an energy Eigenfunction
Phi sub E of x.
What I mean by Phi sub E of x is
if I take the energy operator,
and I act on Phi sub E
of x, this gives me back
the number E Phi sub E of x.
OK?
So it's an Eigenfunction of the
energy operator with Eigenvalue
E.
So, suppose our
initial condition is
that our system
began life at time t
equals zero in this state with
definite energy E. Everyone
cool with that?
First off, question.
Suppose I immediately
at time zero
measure the energy
of this system.
What will I get?
AUDIENCE: E.
PROFESSOR: With
what probability?
AUDIENCE: 100%
PROFESSOR: 100%, because this
is, in fact, of this form,
it's a superposition
of energy Eigenstates,
except there's only one term.
And the coefficient
of that one term
is one, and the probability
that I measure the energy
to be equal to that value is
the coefficient norm squared,
and that's one norm squared.
Everyone cool with that?
Consider on the other hand, if
I had taken this wave function
and I had multiplied it
by phase E to the i Alpha.
What now is the probability
where alpha is just a number?
What now is the probability
that I measured the state
to have energy E?
AUDIENCE: One.
PROFESSOR: It's still one,
because the norm squared
of a phase is one.
Right?
OK.
The overall phase
does not matter.
So, suppose I have this
as my initial condition.
Let's take away
the overall phase
because my life will be easier.
So here's the wave function.
What is the
Schrodinger equation?
Well, the Schrodinger
equation says
that ih bar time
derivative of Psi
is equal to the energy
operator acting on Psi.
And I should be specific.
This is Psi of x at time t,
evaluated at time zero,
is equal to the energy operator
acting on this wave function.
But what's the energy operator
acting on this wave function?
AUDIENCE: E.
PROFESSOR: E. E on
Psi is equal to E
on Phi sub E, which
is just E the number.
This is the number E,
the Eigenvalue E times
Psi at x zero.
And now, instead of having
an operator on the right hand
side, we just have a number.
So, I'm going to write this
differential equation slightly
differently, i.e., the time
derivative of Psi
is equal to E upon ih bar,
or minus iE over h bar, times Psi.
Yeah?
Everyone cool with that?
This is the easiest differential
equation in the world to solve.
So, the time derivative
is a constant times
the function itself.
That means that
therefore Psi at x and t
is equal to e to the minus i
ET over h bar Psi at x zero.
Where I've imposed the initial
condition that at time t
equals zero, the
wave function is
just equal to Psi of x at zero.
And in particular, I know
what Psi of x and zero is.
It's Phi E of x.
So I can simply write
this as Phi E of x.
Are we cool with that?
So, what this tells me is
that under time evolution,
a state which is initially
in an energy Eigenstate
remains in an energy
Eigenstate with the same energy
Eigenvalue.
The only thing that changes
about the wave function
is that its phase
changes, and its phase
changes by rotating with
a constant velocity.
E to the minus i, the
energy Eigenvalue,
times time upon h bar.
Now, first off, before we
do anything else as usual,
we should first check the
dimensions of our result
to make sure we
didn't make a goof.
So, does this make
sense dimensionally?
Let's quickly check.
Yeah, it does.
Let's just quickly check.
So we have that the exponent
there is Et over h bar.
OK?
And this should have dimensions
of what in order to make sense?
AUDIENCE: Nothing.
PROFESSOR: Nothing, exactly.
It should be dimensionless.
So what are the
dimensions of h bar?
AUDIENCE: [INAUDIBLE]
PROFESSOR: Oh, no, the
dimensions, guys, not
the units.
What are the dimensions?
AUDIENCE: [INAUDIBLE]
PROFESSOR: It's an action,
which is an energy times a time.
So the dimensions of h bar
are an energy times
a time, also known
as a momentum times a position.
OK?
So, this has dimensions of
action or energy times time,
and then upstairs
we have dimensions
of energy times time.
So that's consistent.
So this in fact is dimensionally
sensible, which is good.
Now, this tells you a
very important thing.
In fact, we just
answered this question.
At time t equals
zero, what will we
get if we measure the energy?
E. At time t prime-- some
subsequent time-- what energy
will we measure?
AUDIENCE: E.
PROFESSOR: Yeah.
Does the energy
change over time?
No.
When I say that, what I
mean is, does the energy
that you expect to
measure change over time?
No.
Does the probability that
you measure energy E change?
No, because it's just a phase,
and the norm squared of a phase
is one.
Yeah?
Everyone cool with that?
Questions at this point.
This is very simple
example, but it's
going to have a lot of power.
Oh, yeah, question.
Thank you.
AUDIENCE: Are we going to deal
with energy operators that
change over time?
PROFESSOR: Excellent question.
We will later, but not in 804.
In 805, you'll discuss
it in more detail.
Nothing dramatic
happens, but you just
have to add more symbols.
There's nothing deep about it.
It's a very good question.
The question was,
are we going to deal
with energy operators
that change in time?
My answer was no, not in
804, but you will in 805.
And what you'll find is
that it's not a big deal.
Nothing particularly
dramatic happens.
We will deal with systems where
the energy operator changes
instantaneously.
So not a continuous
function of time, but ones
where at some moment you turn
on the electric field,
or something like that.
So we'll deal with
that later on.
But we won't develop a
theory of energy operators
that depend on time.
But you could do it,
and you will do in 805.
There's nothing
mysterious about it.
Other questions?
OK.
So, these states-- a
state Psi of x and t,
which is of the form e
to the minus i Omega t,
where Omega is equal
to E over h bar.
This should look familiar.
It's the de Broglie relation,
[INAUDIBLE] relation, whatever.
Times some Phi E
of x, where this
is an energy Eigenfunction.
These states are called
stationary states.
And what's the reason for that?
Why are they called
stationary states?
I'm going to erase this.
Well, suppose this is my wave
function as a function of time.
What is the probability
that at time t
I will measure the particle
to be at position x,
or the probability density?
Well, the probability density
we know from our postulates,
it's just the norm squared
of the wave function.
This is Psi at x t norm squared.
But this is equal to
the norm squared of e
to the minus i Omega t Phi
E, by the Schrodinger equation.
But when we take
the norm squared,
this phase cancels
out, as we already saw.
So this is just equal to
Phi E of x norm squared,
the energy Eigenfunction norm
squared independent of time.
So, if we happen to know that
our state is in an energy
Eigenfunction, then
the probability density
for finding the particle
at any given position
does not change in time.
It remains invariant.
The wave function rotates
by an overall phase,
but the probability density
is the norm squared.
It's insensitive to
that overall phase,
and so the probability
density just
remains constant in
whatever shape it is.
Hence it's called
a stationary state.
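A quick numerical sketch of why the name fits (assuming the harmonic oscillator ground state as the energy Eigenfunction and units hbar = m = omega = 1, both just illustrative choices): the wave function's phase rotates, but the probability density never moves.

```python
import numpy as np

# Units: hbar = m = omega = 1 (an assumed convention for this sketch).
hbar = m = omega = 1.0
x = np.linspace(-6.0, 6.0, 400)

# Harmonic-oscillator ground state: an energy Eigenfunction, E0 = hbar*omega/2.
phi = (m * omega / (np.pi * hbar)) ** 0.25 * np.exp(-m * omega * x**2 / (2 * hbar))
E0 = 0.5 * hbar * omega

# Stationary-state evolution: psi(x, t) = exp(-i E0 t / hbar) phi(x).
def psi(t):
    return np.exp(-1j * E0 * t / hbar) * phi

# The phase rotates, but the probability density |psi|^2 is frozen in time.
densities = [np.abs(psi(t)) ** 2 for t in (0.0, 1.3, 7.9)]
```

The densities at all three (arbitrary) times agree to machine precision, even though the wave functions themselves differ by phases.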
Notice its consequence.
What can you say about
the expectation value
of the position as
a function of time?
Well, this is equal
to the integral dx
in the state Psi of x and t.
And I'll call this Psi
sub E just to emphasize.
It's the integral of the x,
integral over all possible
positions of the
probability distribution,
probability of x
at time t times x.
But this is equal
to the integral dx
of Phi E of x norm squared, times x.
But that's equal to the expectation
value of x at time t equals zero.
And maybe the best way to write
this is as a function of time.
So, the expectation value
of x doesn't change.
In a stationary state,
expected positions,
energy-- these
things don't change.
Everyone cool with that?
And it's because
of this basic fact
that the wave
function only rotates
by a phase under
time evolution when
the system is an
energy Eigenstate.
Questions?
OK.
So, here's a couple of
questions for you guys.
Are all systems always
in energy Eigenstates?
Am I in an energy Eigenstate?
AUDIENCE: No.
PROFESSOR: No, right?
OK, expected position of my
hand is changing in time.
I am not in-- so obviously,
things change in time.
Energies change in time.
Positions-- expected typical
positions-- change in time.
We are not in
energy Eigenstates.
That's a highly
non-generic state.
So here's another question.
Are any states ever truly
in energy Eigenstates?
Can you imagine an
object in the world
that is truly
described precisely
by an energy Eigenstate
in the real world?
AUDIENCE: No.
PROFESSOR: OK, there
have been a few nos.
Why?
Why not?
Does anything really
remain invariant in time?
No, right?
Everything is getting
buffeted around
by the rest of the universe.
So, not only are these
not typical states,
not only are stationary
states not typical,
but they actually never
exist in the real world.
So why am I talking
about them at all?
So here's why.
And actually I'm
going to do this here.
So here's why.
The reason is this guy, the
superposition principle,
which tells me that if
I have possible states,
I can build
superpositions of them.
And this statement--
and in particular,
linearity-- which says
that given any two
solutions of the
Schrodinger equation,
I can take a
superposition and build
a new solution of the
Schrodinger equation.
So, let me build it.
So, in particular,
I want to exploit
the linearity of the Schrodinger
equation to do the following.
Suppose Psi.
And I'm going to
label these by n.
Psi n of x and t is equal
to e to the minus i Omega nt
Phi sub En of x, where En
is equal to h bar Omega n.
n labels the various different
energy Eigenfunctions.
So, consider all the energy
Eigenfunctions Phi sub En.
n is a number which labels them.
And this is the solution to
the Schrodinger equation, which
at time zero is just
equal to the energy
Eigenfunction of interest.
Cool?
So, consider these guys.
So, suppose we have
these guys such that they
solve the Schrodinger equation.
Solve the Schrodinger equation.
Suppose these guys solve
the Schrodinger equation.
Then, by linearity, we
can take Psi of x and t
to be an arbitrary superposition
sum over n, c sub n, Psi sub
n of x and t.
And this will automatically
solve the Schrodinger equation
by linearity of the
Schrodinger equation.
Yeah.
AUDIENCE: But
can't we just get n
as the sum of the
energy Eigenstate
by just applying that and
by just measuring that?
PROFESSOR: Excellent.
So, here's the question.
The question is,
look, a minute ago
you said no system is truly in
an energy Eigenstate, right?
But can't we put a system
in an energy Eigenstate
by just measuring the energy?
Right?
Isn't that exactly what the
collapse postulate says?
So here's my question.
How confident are
you that you actually
measure the energy precisely?
With what accuracy can
we measure the energy?
So here's the unfortunate
truth, the unfortunate practical
truth.
And I'm not talking about
in principle things.
I'm talking about it in practice
things in the real universe.
When you measure the energy of
something, you've got a box,
and the box has a dial,
and the dial has a needle,
it has a finite width,
and your current meter
has a finite sensitivity
to the current.
So you never truly measure
the energy exactly.
You measure it to
within some tolerance.
And in fact, there's a
fundamental bound--
there's a fundamental bound on
the accuracy with which you can
make a measurement, which
is just the following.
And this is the analog of
the uncertainty relation.
We'll talk about
this more later,
but let me just jump
ahead a little bit.
Suppose I want to
measure frequency.
So I have some
signal, and I look
at that signal for 10 minutes.
OK?
Can I be absolutely confident
that this signal is in fact
a plane wave with the given
frequency that I just measured?
No, because it could
change outside that.
But more to the
point, there might
have been small
variations inside.
The frequency could have
changed on a time scale
longer than the time
that I measured.
So, to know that the
system doesn't change
on an arbitrarily long time
scale-- that it's at a
strictly fixed Omega,
I have to wait a very long time.
And in particular, how confident
you can be of the frequency
is bounded by the time over
which-- so your confidence,
your uncertainty
in the frequency,
is bounded in the
following fashion.
Delta Omega, Delta t is always
greater than or equal to one,
approximately.
What this says is
that if you want
to be absolute confident
of the frequency,
you have to wait an
arbitrarily long time.
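The signals-analysis version of this is easy to see numerically. In a discrete Fourier transform of a signal observed for a window of length T, the frequency bins are spaced by 2 pi / T, so the resolution delta Omega times T is pinned at order one no matter how long you wait (the window lengths and sample count below are arbitrary):

```python
import numpy as np

# A signal observed for a finite window T can only be resolved in
# frequency to within the FFT bin spacing, delta_omega = 2*pi/T.
def frequency_resolution(T, n_samples=1024):
    freqs = 2 * np.pi * np.fft.fftfreq(n_samples, d=T / n_samples)
    return freqs[1] - freqs[0]

# Waiting ten times longer sharpens the resolution tenfold, but the
# product delta_omega * T stays at 2*pi -- i.e., of order one.
products = [frequency_resolution(T) * T for T in (1.0, 10.0, 100.0)]
```

Multiplying that bound by h bar turns it into the energy-time statement in the lecture.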
Now if I multiply this
whole thing by h bar,
I get the following.
Delta E-- so this is
a classic equation
that signals analysis--
Delta E, Delta t
is greater than or
approximately equal to h bar.
This is the hallowed time-energy
uncertainty relation,
which we haven't talked about.
So, in fact, it is
possible to make
an arbitrarily precise
measurement of the energy.
What do I have to do?
I have to wait forever.
How patient are you, right?
So, that's the issue.
In the real world, we can't make
arbitrarily long measurements,
and we can't isolate systems
for an arbitrarily long amount
of time.
So, we can't put things in
a definite energy Eigenstate
by measurement.
That answer your question?
AUDIENCE: Yes.
PROFESSOR: Great.
How many people
have seen this expression
in signals analysis, the
bound on the frequency?
Oh, good.
So we'll talk about that
later in the course.
OK, so coming back to this.
So, we have our solutions
of the Schrodinger equation
that are initially
energy Eigenstates.
I claim I can take an arbitrary
superposition of them,
and by linearity derive
that this is also
a solution to the
Schrodinger equation.
And in particular, what
that tells me is-- well,
another way to say
this is that if I
know that Psi of x
at time zero--
if the wave function
at some particular
moment in time
can be expanded as sum over
n Cn Phi E of x, if this is
my initial condition,
my initial wave function
is some superposition, then I
know what the wave function is
at subsequent times.
The wave function by
superposition Psi of x and t
is equal to sum over
n Cn e to the minus i
Omega nt Phi n-- sorry, this
should've been Phi sub n-- Phi
n of x.
And I know this has to
be true because this
is a solution to the Schrodinger
equation by construction,
and at time t equals
zero, it reduces to this.
So, this is a solution to
the Schrodinger equation,
satisfying this condition at
the initial time t equals zero.
Don't even have to
do a calculation.
So, having solved the
Schrodinger equation once
for energy
Eigenstates allows me
to solve it for
general superposition.
However, what I just
said isn't quite enough.
I need one more argument.
And that one more argument is
really the stronger version
of three that we talked
about before, which
is that, given an
energy operator E,
we find the set of
wave functions Phi sub
E, the Eigenfunctions
of the energy operator,
with Eigenvalue E.
So, given the energy operator,
we find its Eigenfunctions.
Any wave function Psi at
x-- we'll say at time zero--
any function of x can
be expanded as a sum.
Specific superposition sum
over n Cn Phi E sub n of x.
And if any function
can be expanded
as a superposition of
energy Eigenfunctions,
and we know how to
take a superposition,
an arbitrary
superposition of energy
Eigenfunctions, and find
the corresponding solution
to the Schrodinger equation.
What this means is, we can take
an arbitrary initial condition
and compute the full solution
of the Schrodinger equation.
All we have to do is figure out
what these coefficients Cn are.
Everyone cool with that?
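The whole recipe can be sketched in a few lines of code (finite differences on a grid, a harmonic oscillator potential, and units hbar = m = omega = 1 are all assumptions of this illustration): diagonalize the energy operator once, compute the expansion coefficients Cn from the initial condition, and the solution at every later time comes for free.

```python
import numpy as np

# Units hbar = m = omega = 1; grid and initial state are illustrative.
hbar = m = omega = 1.0
N = 300
x = np.linspace(-8.0, 8.0, N)
dx = x[1] - x[0]

# Discretized energy operator E = p^2/2m + (1/2) m omega^2 x^2.
lap = (np.diag(np.ones(N - 1), -1) - 2.0 * np.eye(N)
       + np.diag(np.ones(N - 1), 1)) / dx**2
H = -hbar**2 / (2.0 * m) * lap + np.diag(0.5 * m * omega**2 * x**2)

# Step 1: find the energy Eigenfunctions phi_n and Eigenvalues E_n.
E, phi = np.linalg.eigh(H)
phi = phi / np.sqrt(dx)          # normalize so sum |phi_n|^2 dx = 1

# Step 2: expand the initial condition, c_n = integral phi_n psi(x,0) dx.
psi0 = np.exp(-(x - 1.5) ** 2)
psi0 = psi0 / np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)
c = phi.T @ psi0 * dx

# Step 3: general solution psi(x,t) = sum_n c_n exp(-i E_n t / hbar) phi_n(x).
def psi(t):
    return phi @ (c * np.exp(-1j * E * t / hbar))
```

The coefficients here also carry the physics: the norm squared of each c_n is the probability of measuring the corresponding energy, and those probabilities sum to one.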
So, we have thus, using
superposition and energy
Eigenvalues, totally solved
the Schrodinger equation,
and reduced it to the problem
of finding these expansion
coefficients.
Meanwhile, these expansion
coefficients have a meaning.
They correspond
to the probability
that we measure the
energy to be equal
to the corresponding
energy E sub n.
And it's just the norm
squared of that coefficient.
So those coefficients
mean something.
And they allow us to
solve the problem.
Cool?
So this is fairly abstract.
So let's make it concrete
by looking at some examples.
So, just as a quick aside.
This should sound an awful
lot like the Fourier theorem.
And let me comment on that.
This statement originally was
about a general observable
and general operator.
Here I'm talking
about the energy.
But let's think about a
slightly more special example,
or more familiar example.
Let's consider the momentum.
Given the momentum, we can
find a set of Eigenstates.
What are the set of
good, properly normalized
Eigenfunctions of momentum?
What are the Eigenfunctions
of the momentum operator?
AUDIENCE: E to the ikx.
PROFESSOR: E to the ikx.
Exactly.
In particular, one
over 2 pi e to the ikx.
So I claim that, for every
different value of k,
I get a different value of p,
and the Eigenvalue associated
to this guy is p is
equal to h bar k.
That's the Eigenvalue.
And we get that by acting
with the momentum, which
is h bar upon i, h bar times
derivative with respect to x.
Derivative with
respect to x pulls down
an ik times the same thing.
H bar multiplies the
k over i, kills the i,
and leaves us with an overall
coefficient of h bar k.
This is an Eigenfunction
of the momentum
operator with
Eigenvalue h bar k.
And that statement
three is the statement
that an arbitrary
function f of x
can be expanded
as a superposition
of all possible
momentum Eigenfunctions.
But k is continuously
valued, and so is the momentum,
so that's an integral
dk one over 2 pi,
e to the ikx times
some coefficients.
And those coefficients
are labeled by k,
but since k is continuous, I'm
going to call it a function.
And just to give it a name,
instead of calling C sub k,
I'll call it f tilde of k.
This is of exactly
the same form.
Here is the expansion--
there's the Eigenfunction, here
is the Eigenfunction, here
is the expansion coefficient,
here is expansion coefficient.
And this has a familiar name.
It's the Fourier theorem.
So, we see that
the Fourier theorem
is this statement, statement
three, the superposition
principle, for the
momentum operator.
We also see that it's true
for the energy operator.
And what we're claiming
here is that it's
true for any observable.
Given any observable, you
can find its Eigenfunctions,
and they form a basis on the
space of all good functions,
and an arbitrary function can
be expanded in that basis.
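For the momentum case this is something you can check directly: the discrete Fourier transform expands a function on a grid in exactly the plane-wave basis e to the ikx, and superposing those Eigenfunctions with the coefficients f tilde of k reconstructs the function (grid size and the test function below are arbitrary choices for the sketch):

```python
import numpy as np

# Momentum Eigenfunctions are plane waves e^{ikx}; the discrete Fourier
# transform expands a grid function in that basis.
N = 256
L = 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
f = np.exp(-x ** 2) * np.cos(3.0 * x)

# Expansion coefficients f_tilde(k), one per allowed k on the grid.
f_tilde = np.fft.fft(f)

# Superposing the plane waves with those coefficients gives f back.
f_back = np.fft.ifft(f_tilde)
```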
So, as a last example,
consider the following.
We've done energy.
We've done momentum.
What's another
operator we care about?
What about position?
What are the
Eigenfunctions of position?
Well, x hat on
Delta of x minus y
is equal to y Delta x minus y.
So, these are the states with
definite value of position
x is equal to y.
And the reason this is true
is that when x is equal to y,
x is the operator that
multiplies by the variable x.
But it's zero, except
at x is equal to y,
so we might as well
replace x by y.
So, there are the
Eigenfunctions.
And this statement
is a statement
that we can represent
an arbitrary function f
of x in a superposition of
these states of definite x.
f of x is equal to the integral
over all possible positions,
dy, delta of x
minus y, times some expansion
coefficient.
And what's the
expansion coefficient?
It's got to be a function of y.
And what function
of y must it be?
Just f of y.
Because this integral
against this delta function
had better give me f of x.
And that will only be
true if this is f of x.
So here we see, in some
sense, the definition
of the delta function.
But really, this is a statement
of the superposition principle,
the statement that any
function can be expanded
as a superposition of
Eigenfunctions of the position
operator.
Any function can be
expanded as a superposition
of Eigenfunctions of momentum.
Any function can be
expanded as a superposition
of Eigenfunctions of energy.
Any function can be
expanded as a superposition
of Eigenfunctions of any
operator of your choice.
OK?
The special case is the Fourier
theorem; the general case is
the superposition postulate.
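The position-Eigenfunction version has a simple discrete analog worth sketching: on a grid with spacing dx, the identity matrix divided by dx plays the role of delta of x minus y, and "integrating" it against f of y returns f of x (the grid and test function are illustrative):

```python
import numpy as np

# Discrete stand-in for the position Eigenbasis: delta(x_i - y_j) -> I/dx.
N = 100
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
f = np.sin(2.0 * np.pi * x)

delta = np.eye(N) / dx              # discrete delta(x - y) on the grid

# f(x) = integral dy delta(x - y) f(y), discretized as a sum over y.
f_back = (delta @ f) * dx
```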
Cool?
Powerful tool.
And we've used
this powerful tool
to write down a general
expression for a solution
to the Schrodinger equation.
That's good.
That's progress.
So let's look at some
examples of this.
I can leave this up.
So, our first example is going
to be for the free particle.
So, a particle whose
energy operator
has no potential whatsoever.
So the energy operator
is going to be
just equal to p squared upon 2m.
Kinetic energy.
Yeah.
AUDIENCE: When you say
any wave function can
be expanded in terms of--
PROFESSOR: Energy
Eigenfunctions,
position Eigenfunctions,
momentum Eigenfunctions--
AUDIENCE: Eigenbasis,
does the Eigenbasis
have to come from an operator
corresponding to an observable?
PROFESSOR: Yes.
Absolutely.
I'm starting with
that assumption.
AUDIENCE: OK.
PROFESSOR: So, again,
this is a first pass
of the axioms of
quantum mechanics.
We'll make this more precise,
and we'll make it more general,
later on in the course, as we
go through a second iteration
of this.
And there we'll talk about
exactly what we need,
and what operators are
appropriate operators.
But for the moment, the
sufficient and physically
correct answer is that each
observable corresponds
to an operator.
Yeah.
AUDIENCE: So is the set of
all reasonable wave functions
the same vector space
as the one spanned
by the Eigenfunctions?
PROFESSOR: That's an
excellent question.
In general, no.
So here's the question.
The question is,
look, if this is true,
shouldn't the Eigenfunctions--
since they're
our basis for the
good functions,
sitting inside the space of
reasonable functions--
also be reasonable
functions themselves, right?
Because if you're going
to expand-- for example,
consider a two dimensional
vector space.
And you want to
say any vector can
be expanded in a basis of pairs
of vectors in two dimensions,
like x and y.
You really want to make
sure that those vectors are
inside your vector space.
But if you say this
vector in this space
can be expanded in terms
of two vectors, this vector
and that vector, you're
in trouble, right?
That's not going
to work so well.
So you want to make sure
that your vectors, your basis
vectors, are in the space.
For position, the basis
vector's a delta function.
Is that a smooth, continuous
normalizable function?
No.
For momentum, the
basis functions
are plane waves that
extend off to infinity
and have support everywhere.
Is that a normalizable
reasonable function?
No.
So, both of these
sets are really bad.
So, at that point you might say,
look, this is clearly nonsense.
But here's an important thing.
So this is a totally
mathematical aside,
and for those of you who
don't care about the math,
don't worry about it.
Well, these guys
don't technically
live in the space of
non-stupid functions--
reasonable, smooth,
normalizable functions.
What you can show is that
they exist in the closure,
in the completion of that space.
OK?
So, you can find a
sequence of wave functions
that are good wave functions,
an infinite sequence,
that eventually that
infinite sequence converges
to these guys, even
though these are silly.
So, for example, for the
position Eigenstates,
the delta function is not a
continuous smooth function.
It's not even a function.
Really, it's some god-awful
thing called a distribution.
It's some horrible thing.
It's the thing that tells
you, put it inside an integral
against a function, and
it'll give you a number.
But how do we build
this as a limit
of totally reasonable functions?
We've already done that.
Take this function with
area one, and if you want,
you can round this out by
making it hyperbolic tangents.
OK?
We did it on one of
the problem sets.
And then just make it
more narrow and more tall.
And keep making it more narrow
and more tall, and more narrow
and more tall, keeping
its area to be one.
And I claim that eventually
that series, that sequence
of functions, converges
to the delta function.
So, while this function is
not technically in our space,
it's in the completion
of our space,
in the sense that we take a
series and they converge to it.
And that's what you need for
this theorem to work out.
That's what you need
for the Fourier theorem.
And in some sense,
that observation
was really the
genius of Fourier,
understanding that
that could be done.
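The limiting sequence described here-- unit-area bumps with tanh-rounded edges, made ever narrower and taller-- can be sketched numerically. Integrated against a smooth test function, narrower bumps pick out the value at zero better and better (the widths and test function below are arbitrary illustrative choices):

```python
import numpy as np

def smooth_box(x, width):
    """Unit-area bump: a rectangle of the given width with
    tanh-rounded edges; the area is exactly one for any rounding."""
    s = width / 20.0  # edge-rounding scale
    return (np.tanh((x + width / 2) / s)
            - np.tanh((x - width / 2) / s)) / (2 * width)

g = lambda x: np.exp(-x**2)        # a smooth, reasonable test function
x = np.linspace(-2.0, 2.0, 400001)
dx = x[1] - x[0]

# Integrate each bump against g; compare with g(0).
errors = [abs(np.sum(smooth_box(x, w) * g(x)) * dx - g(0.0))
          for w in (0.8, 0.2, 0.05)]

# Narrower, taller bumps approximate g(0) better and better.
print(errors[0] > errors[1] > errors[2])  # True
```

This is the sense in which the sequence converges to the delta function: not pointwise, but inside integrals against good functions.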
That was a totally
mathematical aside.
But does that answer your question?
AUDIENCE: Yes.
PROFESSOR: OK.
Every once in a
while I can't resist
talking about these
sort of details,
because I really like them.
But it's good to know that
stupid things like this
can't matter for
us, and they don't.
But it's a very good question.
If you're confused about
some mathematical detail,
no matter how elementary, ask.
If you're confused, someone
else in the room is also confused.
So please don't hesitate.
OK, so our first example's
going to be the free particle.
And this operator can be
written in a nice way.
We can write it as minus--
so p is h bar upon i, d dx,
so this line is
minus h bar squared
upon 2m times the second
derivative with respect to x.
There's the energy operator.
So, we want to solve
for the wave functions.
So let's solve it
using an expansion
in terms of energy
Eigenfunctions.
So what are the
energy Eigenfunctions?
We want to find the
functions Phi sub E
such that E hat on Phi sub E
is equal to-- whoops.
That's not a vector.
That's a hat-- such that this is
equal to a number E Phi sub E.
But given this
energy operator, this
says that minus h bar squared
over 2m-- whoops, that's a 2.
2m-- Phi prime prime of
x is equal to E Phi of x.
Or equivalently, Phi prime
prime of x plus 2mE over h bar
squared Phi of x
is equal to zero.
So I'm just going to give a name
to 2mE over h bar squared--
because it's annoying to write it
over and over again.
Well, first off,
what are its units?
What are the units
of this coefficient?
Well, you could do it two ways.
You could either do dimensional
analysis of each thing here,
or you could just know that we
started with a dimensionally
sensible equation,
and this has units
of this divided by length squared.
So this must have units of
one over length squared.
So I'm going to
call this something
like k squared,
something that has
units of one over
length squared.
And the general
solution of this is
that Phi sub E of x-- well, this
is a second order differential
equation that will have two
solutions with two expansion
coefficients-- A e to the ikx
plus B e to the minus ikx.
A state with definite positive
momentum and one with definite
negative momentum, such that E is
equal to h bar squared
k squared upon 2m.
And we get that just from this.
So, this is the solution of the
energy Eigenfunction equation.
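As a sketch, this eigenvalue claim can be verified symbolically (using sympy purely for illustration): applying minus h bar squared over 2m times the second derivative to A e to the ikx plus B e to the minus ikx returns the same function multiplied by h bar squared k squared over 2m.

```python
import sympy as sp

x = sp.symbols('x', real=True)
k, hbar, m = sp.symbols('k hbar m', positive=True)
A, B = sp.symbols('A B')  # arbitrary complex expansion coefficients

phi = A * sp.exp(sp.I * k * x) + B * sp.exp(-sp.I * k * x)

# Free-particle energy operator acting on phi:
E_phi = -hbar**2 / (2 * m) * sp.diff(phi, x, 2)

# phi is an eigenfunction with eigenvalue E = hbar^2 k^2 / (2m):
E = hbar**2 * k**2 / (2 * m)
print(sp.simplify(E_phi - E * phi) == 0)  # True
```

Note that this works for any A and B at once: both exponentials carry the same eigenvalue, which is why their superposition is still an energy eigenfunction.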
Just a note of terminology.
People sometimes
call the equation
determining an energy
Eigenfunction-- the energy
Eigenfunction
equation-- sometimes
that's referred to as the
Schrodinger equation.
That's a sort of cruel
thing to do to language,
because the Schrodinger
equation is about time
evolution, and this equation
is about energy Eigenfunctions.
Now, it's true that energy
Eigenfunctions evolve
in a particularly simple
way under time evolution,
but it's a different equation.
This is telling you about
energy Eigenstates, OK?
And then more
discussion of this is
done in the notes, which I will
leave aside for the moment.
But I want to do one more
example before we take off.
Wow.
We got through a lot
less than expected today.
And the one last example
is the following.
It's a particle
in a box. And this
is going to be important
for your problem sets,
so I'm going to go ahead and
get this one out of the way
as quickly as possible.
So, example two.
Particle in a box.
So, here's what I mean by
a particle in a box.
I'm going to take a system
that has a deep well.
So what I'm drawing here
is the potential energy
U of x, where this is
some energy E naught,
and this is the
energy zero, and this
is the position x
equals zero, and this
is position x equals l.
And I'm going to idealize
this by saying look,
I'm going to be interested
in low energy physics,
so I'm going to just treat
this as infinitely deep.
And meanwhile, my
life is easier if I
don't think about curvy
bottoms but I just
think about things
as being constant.
So, my idealization
is going to be
that the well is
infinitely high and square.
So out here the
potential is infinite,
and in here the
potential is zero.
U equals zero inside, for x
between zero and l.
So that's my system,
the particle in a box.
So, let's find the
energy Eigenfunctions.
And again, it's the same
differential equation
as before.
So, first off, before we
even solve anything, what's
the probability that I
find the particle at x less than
zero, or at x greater than l?
AUDIENCE: Zero.
PROFESSOR: Right,
because the potential
is infinitely large out there.
It's just not going to happen.
If you found it there,
that would correspond
to a particle of
infinite energy,
and that's not going to happen.
So this tells
us, effectively, the boundary
condition: Psi of x is
equal to zero outside the box.
So all we have to
do is figure out
what the wave function is inside
the box between zero and l.
And meanwhile, what
must be true of the wave
function at zero and at l?
It's got to actually
vanish at the boundaries.
So this gives us boundary
conditions outside the box
and at the boundaries x
equals zero, x equals l.
But, what's our differential
equation inside the box?
Inside the box, well,
the potential is zero.
So the equation is the
same as the equation
for a free particle.
It's just this guy.
And we know what
the solutions are.
So the solutions can be
written in the following form.
Therefore inside the
wave function-- whoops.
Let me write this as
Phi sub E-- Phi sub
E is a superposition of two.
And instead of writing
it as exponentials,
I'm going to write it
as sines and cosines,
because you can express
them in terms of each other.
Alpha cosine of kx
plus Beta sine of kx,
where again Alpha and Beta
are general complex numbers.
But, we must satisfy the
boundary conditions imposed
by our potential at x
equals zero and x equals l.
So from x equals
zero, we find that Phi
must vanish when x equals zero.
When x equals zero, this
is automatically zero.
Sine of zero is zero.
Cosine of zero is one.
So that tells us that
Alpha is equal to zero.
Meanwhile, the condition that at
x equals l-- the wave function
must also vanish-- tells us
that-- so this term is gone,
since Alpha is zero-- this
term, when x is equal to l,
had better also be zero.
We can solve that by
setting Beta equal to zero,
but then our wave
function is just zero.
And that's a really
stupid wave function.
So we don't want to do that.
We don't want to
set Beta to zero.
Instead, what must we do?
Well, we've got a sine,
and depending on what k is,
it starts at zero and
it ends somewhere else.
But we need it to hit zero.
So only for a very
special value of k will
it actually hit zero
at the end of l.
We need sine of kl
to equal zero.
Or really, kl is
a multiple of pi.
kl is equal-- and we
want it to not be zero,
so I'll call it n plus
1, an integer, times pi.
Or equivalently, k sub n is
equal to n plus 1 times pi
over l, where n goes from zero
to any large positive integer.
So here are the
energy Eigenfunctions.
The energy Eigenfunction
is some normalization--
whoops-- a sub n
sine of k sub n x.
And k sub n is equal to
this. And as a consequence,
the energy E sub n is
h bar squared k sub n squared
over 2m, which is equal to--
just plugging in--
h bar squared pi squared
n plus 1 squared
over 2ml squared.
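These discrete, gapped levels can be checked numerically by diagonalizing a finite-difference version of the energy operator with the wave function pinned to zero at the walls. This is a sketch in units where h bar, m, and l are all one, so the exact levels are E sub n equals pi squared times n plus 1 squared over two:

```python
import numpy as np

# Units hbar = m = l = 1, so exactly E_n = pi^2 (n+1)^2 / 2.
N = 1000                      # interior grid points
dx = 1.0 / (N + 1)

# Finite-difference -1/2 d^2/dx^2 with phi = 0 at x = 0 and x = l:
H = (np.diag(np.full(N, 1.0 / dx**2))
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

levels = np.sort(np.linalg.eigvalsh(H))
exact = np.pi**2 * np.arange(1, 4)**2 / 2.0   # E_0, E_1, E_2

print(np.allclose(levels[:3], exact, rtol=1e-4))  # True: discrete levels
print(abs(levels[1] / levels[0] - 4.0) < 1e-3)    # True: E_1 is 4 E_0
```

The lowest eigenvalue is strictly positive, and the gaps grow quadratically, matching the spectrum derived on the board.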
And what we found is
something really interesting.
What we found is, first
off, that the wave functions
look like-- well,
the ground state,
the lowest possible energy
there is n equals zero.
For n equals zero, this is
just a single half a sine wave.
It does this.
This is the n equals zero state.
And it has some energy,
which is E zero.
And in particular, E zero
is not equal to zero.
E zero is equal to h bar squared
pi squared over 2ml squared.
It is impossible for
a particle in a box
to have an energy lower
than some minimal value E
naught, which is not zero.
You cannot have less
energy than this.
Everyone agree with that?
There is no such Eigenstate
with energy less than this.
Meanwhile, it's worse.
The next energy is
when n is equal to 1,
because if we decrease
the wavelength or increase
k a little bit, we get
something that looks like this,
and that doesn't satisfy
our boundary condition.
In order to satisfy
our boundary condition,
we're going to have to
eventually have it cross over
and get to zero again.
And if I could only draw--
I'll draw it up here--
it looks like this.
And this has an
energy E one, which
you can get by
plugging one in here.
And that differs by one,
two, four, a factor of four
from this guy.
E one is four E zero.
And so on and so forth.
The energies are gapped.
They're spread away
from each other.
The energies are discrete.
And they get further and
further away from each other
as we go to higher
and higher energies.
So this is already
a peculiar fact,
and we'll explore some of
its consequences later on.
But here's what I want
to emphasize for you.
Already in the first
most trivial example
of solving a
Schrodinger equation,
or actually even before
that, just finding the energy
Eigenvalues and the energy
Eigenfunctions of the simplest
system you possibly could,
either a free particle,
or a particle in
a box, a particle
trapped inside a potential
well, what we discovered
is that the energy
Eigenvalues, the allowed values
of the energy, are discrete, and
that they're greater than zero.
You can never have zero energy.
And if that doesn't
sound familiar,
let me remind you of something.
The spectrum of light coming
off of a gas of hot hydrogen
is discrete.
And no one's ever
found a zero energy
beam of light coming out of it.
And we're going to make contact
with that experimental data.
That's going to be part
of the job for the rest
of the course.
See you next time.
[APPLAUSE]
