Prof: All right,
today's topic is the theory of
nearly everything,
okay?
You wanted to know the theory
of everything?
You're almost there,
because I'm finally ready to
reveal to you the laws of quantum dynamics that tell you how things change with time.
 
So that's the analog of F =
ma.
That's called the Schrödinger equation, and just about anything you see in this room, or on this planet, anything you can see or use is really described by this equation I'm going to write down today.
 
It contains Newton's laws as
part of it, because if you can
do the quantum theory,
you can always find hidden in
it the classical theory.
 
That's like saying if I can do
Einstein's relativistic
kinematics at low velocities,
I will regain Newtonian
mechanics.
 
So everything is contained in
this one.
There are some things left,
of course, that we won't do,
but this goes a long way.
 
So I'll talk about it probably
next time near the end,
depending on how much time
there is.
But without further ado,
I will now tell you what the
laws of motion are in quantum
mechanics.
So let's go back one more time
to remember what we have done.
The analogous statement is,
in classical mechanics for a
particle moving in one
dimension,
all I need to know about it
right now is the position and
the momentum.
 
That's it.
 
That's the maximal information.
 
You can say,
"What about other things?
What about angular momentum?
 
What about kinetic energy?
 
What about potential energy?
 
What about total energy?"
 
They're all functions of x
and p.
For example,
in 3 dimensions, x will
be replaced by r, p
will be replaced by some
vector
p, and there's a
variable called angular
momentum,
but you know that once you
know r
and p
by taking the cross product.
That's it.
 
And you can say,
"What happens when I
measure any variable for a
classical particle in this
state, x,p?"
 
Well, if you know the location,
it's guaranteed to be x
100 percent,
momentum is p,
100 percent.
 
Any other function of x and p, like r × p, is guaranteed to have that particular value.
 
So everything is completely
known.
That's the situation at one
time.
Then you want to say,
"What can you say about
the future?
 
What's the rate of change of
these things?"
And the answer to that is, m d²x/dt² is the force, and in most problems you can write the force as the derivative of some potential. So if you knew the potential, ½kx² or whatever it is, or mgx, you can take the derivative on the right hand side, and the left hand side tells you the acceleration of x.
 
I want you to note one thing -
we know an equation that tells
you something about the
acceleration.
Once the forces are known,
there's a unique acceleration.
So you are free to give the
particle any position you like,
and any velocity,
dx/dt.
That's essentially the momentum.
 
You can pick them at random.
 
But you cannot pick the
acceleration at random,
because the acceleration is not
for you to decide.
The acceleration is determined
by Newton's laws to equal the
essentially the force divided by
mass.
That comes from the fact
mathematically that this is a
second order equation in time,
namely involving the second
derivative.
 
And that, from a mathematical
point of view,
if the second derivative is
determined by external
considerations,
initial conditions are given by
initial x and the first
derivative.
All higher derivatives are
slaved to the applied force.
You don't assign them as you
wish.
You find out what they are from
the equations of motion.
That's really all of classical
mechanics.
Now you want to do quantum
mechanics, and we have seen many
times the story in quantum
mechanics is a little more
complicated.
 
You ask a simple question and
you get a very long answer.
The simple question is,
how do you describe the
particle in quantum mechanics in
one dimension?
And you say,
"I want to assign to it a function Ψ(x)." Ψ(x) is any reasonable function which can be squared and integrated over the real line. Anything you write down is a possible state. That's like saying any x and any p are allowed.

Likewise, Ψ(x) is nothing special. It can be whatever you like, as long as you can square it and integrate it to get a finite answer over all of space. That's the only condition.

And if your "all of space" goes to infinity, then Ψ should vanish at ± infinity.

That's the only requirement.
 
Then you say,
"That tells me everything.
Why don't you tell me what the
particle is doing?"
And you can say,
"What do you want to
know?"
 
Well, I want to know where it
is.
That's when you don't get a
straight answer.
You are told,
"Well, it can be here,
it can be there,
it can be anywhere else.
And the probability density that it's at point x is proportional to the absolute square of Ψ." That means you take the Ψ and you square it, so it will have nothing negative in it. Everything will be real and positive, even though Ψ itself may be complex.

But this |Ψ|², I told you over and over, is defined to be Ψ*Ψ. That's real and positive.
 
Then you can say,
"What if I measure
momentum, what answer will I
get?"
That's even longer.
 
First you are supposed to
expand--I'm not going to do the
whole thing too many times--
you're supposed to write this Ψ as some coefficient times these very special functions.
 
In a world of size L, you have to write the given Ψ in this fashion, and the coefficients are determined by the integral of the complex conjugate of this function times the function you gave me, Ψ(x).
Now I gave some extra notes,
I think.
Did people get that?
 
Called the "Quantum
Cookbook?"
That's just the recipe,
you know.
Quantum mechanics is a big fat
recipe and that's all we can do.
I tell you, you do this,
you get these answers.
That's my whole goal,
to simply give you the recipe.
So the recipe says--what's
interesting about quantum
mechanics,
what makes it hard to teach,
is that there are some physical
principles which are summarized
by these rules,
which are like axioms.
Then there are some purely
mathematical results which are
not axioms.
 
They are consequences of pure
mathematics.
You have to keep in mind,
what is a purely mathematical
result,
therefore is deduced from the
laws of mathematics,
and what is a physical result,
that's deduced from experiment.
 
The fact that Ψ describes everything is a physical result.

Now it tells you to write Ψ as the sum of these functions, and then the probability to obtain any momentum p is |A_p|², where A_p is defined by this. The mathematics comes in in the following way - the first question is, who told you that I can write every function Ψ in this fashion?
 
That's Fourier's theorem, which guarantees that on a circle of size L, every periodic function, meaning one that returns to its starting value, may be expanded in terms of these functions.
 
That's the mathematical result.
 
The same mathematical result
also tells you how to find the
coefficients.
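Fourier's theorem and its recipe for the coefficients can be sketched numerically. Everything concrete here (the circle size, the grid, the test function, the rectangle-rule integral) is my own illustrative choice, not anything from the lecture:

```python
import numpy as np

# Fourier's theorem on a circle of size L: any smooth periodic function can
# be expanded in e^(2*pi*i*n*x/L), and the same theorem gives the
# coefficients as overlap integrals (here, a simple rectangle rule).
L = 2.0
N = 512
dx = L / N
x = np.arange(N) * dx
f = np.exp(np.sin(2 * np.pi * x / L))          # an arbitrary periodic function

ns = np.arange(-20, 21)
c = np.array([(f * np.exp(-2j * np.pi * n * x / L)).sum() * dx / L for n in ns])

# Rebuild f from its coefficients and compare with the original
f_rec = sum(cn * np.exp(2j * np.pi * n * x / L) for cn, n in zip(c, ns))
err = np.abs(f - f_rec).max()
print(err)   # tiny: the expansion reproduces the function
```

The point is exactly the lecturer's: the theorem both licenses the expansion and tells you how to extract each coefficient by integrating against the conjugate mode.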
 
The postulates of quantum mechanics tell you two things. |A_p|² is the probability that you will get the value p when you measure momentum, okay?
 
That's a postulate,
because you could have written
this function 200 years before
quantum mechanics.
It will still be true,
but this function did not have
a meaning at that time as states
of definite momentum.
How do I know it's a state of definite momentum? If every term vanished except one term, that's all you have; then one coefficient, A_p, will be 1 and all the others 0, which means the probability for getting a momentum is nonzero only for that one momentum.
 
All other momenta are missing
in that situation.
Another postulate of quantum
mechanics is that once you
measure momentum and you get one
of these values,
the state will go from being a
sum over many such functions,
and collapse to the one term
and the sum that corresponds to
the answer you got.
 
Then here is another mathematical result - p is not every arbitrary real number you can imagine. We make the requirement that if you go around the circle, the function should come back to its starting value; therefore p is restricted to be 2πℏ/L times some integer n.

That's a mathematical requirement, because if you think |Ψ|² is the probability, Ψ should come back to where you started.

It cannot have two different values of Ψ when you go around the circle.

That quantizes momentum to these values.
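The quantization rule p = 2πℏn/L can be checked directly: for those p and only those p, the wave e^(ipx/ℏ) is single-valued around the circle. The 1 nm circle and the sample point are assumptions for illustration:

```python
import numpy as np

hbar = 1.054571817e-34        # J*s, the reduced Planck constant
L = 1e-9                      # an assumed 1 nm circle, just for concreteness

# Single-valuedness on the circle forces p = (2*pi*hbar/L) * n, n an integer
allowed_p = [2 * np.pi * hbar * n / L for n in range(-2, 3)]

# Check: for an allowed p, e^(ipx/hbar) returns to itself after one trip around
x = 0.37e-9                   # an arbitrary point on the circle
for p in allowed_p:
    w_here = np.exp(1j * p * x / hbar)
    w_around = np.exp(1j * p * (x + L) / hbar)
    print(abs(w_here - w_around))   # ~0 for every allowed p
```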
The last thing I did was to
say, if you measure energy,
what answer will you get?
 
That's even longer.
 
There you're supposed to solve the following equation: (−ℏ²/2m) d²Ψ_E/dx² + V(x)Ψ_E(x) = EΨ_E(x).
 
In other words, for energy the answer's more complicated, because before I can tell you anything, I want you to solve this equation.
This equation says,
if in classical mechanics the
particle was in some potential
V(x) and the particle had
some mass m,
you have to solve this
equation, then it's a purely
mathematical problem,
and you try to find all
solutions that behave well at
infinity,
that don't blow up at infinity,
that vanish at infinity.
 
That quantizes E to certain special values. And there are corresponding functions Ψ_E for each allowed value.

Then you are done, because then you make a similar expansion: you write the unknown Ψ, namely some arbitrary Ψ that's given to you, as a sum Σ A_E Ψ_E(x), where A_E is found by a similar rule. Just replace p by E and replace this function by these functions.
 
Then if you take |A_E|², you will get the probability that you will get that energy.
So what makes the energy
problem more complicated is that
whereas for momentum we know
once and for all these functions
describe a state of definite
momentum where you can get only
one answer,
states of definite energy
depend on what potential is
acting on the particle.
If it's a free particle, V is 0. If it's a particle that in Newtonian mechanics is a harmonic oscillator, V(x) would be ½kx², and so on.

So you should know the classical potential, and once you've got it, for every possible potential you have to solve this.
 
But that's what most people in
physics departments are doing
most of the time.
 
They're solving this equation
to find states of definite
energy.
 
So today, I'm going to tell you
why states of definite energy
are so important.
 
What's the big deal?
 
Why is state of momentum not so
important?
Why is the state of definite
position not so interesting?
What is privileged about the
states of definite momentum?
And now you will see the role
of energy.
So I'm going to write down for you the equation that's the analog of F = ma.
 
So what are we trying to do?
 
Ψ(x) is like x and p. You don't have a time label here. These are like saying at some time a particle has a position and a momentum.

In quantum theory at some time, it has a wave function Ψ(x).
 
But the real question in
classical mechanics is,
how does x vary with
time and how does p vary with
time.
 
The answer is,
according to F = ma.
And here the question is, how does Ψ vary with time?

First thing you've got to do is to realize that Ψ itself can be a function of time, right? Only then can you ask, "What does it do with time?"
 
So at t = 0 it may look
like this.
A little later,
it may look like that.
So it's flopping and moving,
just like say a string.
It's changing with time and you
want to know how it changes with
time.
 
So this is the great Schrödinger equation. It says iℏ ∂Ψ(x,t)/∂t (it's a partial derivative, because Ψ depends on x and t and this is the t derivative) = the following: [−ℏ²/2m ∂²Ψ/∂x² + V(x)Ψ(x,t)]. That's the equation, with everything a function of x and t.
 
So write this down,
because if you know this
equation, you'll be surprised
how many things you can
calculate.
 
From this follows the spectra of the atoms; from this follows what makes a material a conductor, a semiconductor, a superconductor. Everything follows from this famous Schrödinger equation.
 
This is an equation in which
you must notice that we're
dealing for the first time with
functions of time.
Somebody asked me long back, "Where is time?" Well, here is how Ψ varies with time. So suppose someone says, "Here is the initial Ψ(x, 0).

Tell me what Ψ is a little later, 1 millisecond later."
 
Well, it's the rate of change of Ψ with time multiplied by 1 millisecond.

The rate of change of Ψ at the initial time is obtained by taking that derivative of Ψ and adding to it V times Ψ; you get something. That's how fast Ψ changes. Multiply by Δt, and that's the change in Ψ. That'll give you Ψ at a later time. This is a first order equation in time. What that means mathematically is, the initial Ψ determines the future completely.
This is different from position, where you need x and dx/dt at the initial time. The equation will only tell you what d²x/dt² is. But in quantum mechanics, dΨ/dt itself is determined, so you don't get to choose that.

You just get to choose the initial Ψ.
That means an initial wave
function completely determines
the future according to this
equation.
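That update rule, Ψ a little later equals Ψ plus its rate of change times Δt, can be sketched numerically. Everything concrete below is my own assumption for illustration (units with ℏ = m = 1, a harmonic potential, a Gaussian initial state), and forward Euler is only the crudest possible integrator, not how one would really solve the equation:

```python
import numpy as np

# One crude forward-Euler step of the Schrodinger equation, illustrating that
# the initial Psi alone determines dPsi/dt, hence the future.
hbar, m = 1.0, 1.0                              # illustrative units
N, Lbox = 400, 20.0
dx = Lbox / N
x = np.arange(N) * dx - Lbox / 2
V = 0.5 * x**2                                  # a harmonic potential, as an example

psi = np.exp(-(x - 1.0)**2).astype(complex)     # some initial wave function
psi /= np.sqrt((np.abs(psi)**2).sum() * dx)     # normalize it

def H(psi):
    """Apply H psi = -hbar^2/2m d^2 psi/dx^2 + V(x) psi (finite differences)."""
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    return -hbar**2 / (2 * m) * lap + V * psi

# i*hbar dPsi/dt = H Psi fixes dPsi/dt completely, so one small step is:
dt = 1e-4
psi_later = psi + dt * H(psi) / (1j * hbar)
```

A real solver would use a norm-preserving scheme, but the logic is the lecturer's: you choose Ψ(x, 0), and the equation hands you the rest.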
So don't worry about this
equation.
I don't expect you all to see
it and immediately know what to
do, but I want you to know that
there is an equation.
That is known now.
 
That's the analog of F =
ma. If you solve this
equation, you can predict the
future to the extent allowed by
quantum mechanics.
 
Given the present, and the present means Ψ(x, 0) is given, then you go to the math department and say, "This is my Ψ(x, 0). Please tell me by some trick what Ψ(x, t) is."

It turns out there is a trick by which you can predict Ψ(x, t).
Note also that this number
i is present in the very
equations of motion.
So this is not like the i we used in electrical circuits, where we really meant sines and cosines, but we took e^(iθ) or e^(iωt), always hoping in the end to take the real part of the answer, because the functions of classical mechanics are always real.
 
But in quantum theory, Ψ is intrinsically complex, and it cannot get more complex than by putting an i in the equations of motion, but that's just the way it is.
 
You need the i to write
the equations.
Therefore our goal then is to
learn different ways in which we
can solve this.
 
Now remember,
everybody noticed,
this looks kind of familiar
here, this combination.
It's up there somewhere.
 
It looks a lot like this.
 
You see that?
 
But it's not quite that.
 
That is working on a function
only of x;
this is working on a function
of x and t.
And there are partial
derivatives here and there are
total derivatives there.
 
Those Ψ_E are very privileged functions. They describe states of definite energy. This is an arbitrary function, just evolving with time, so you should not mix the two up. This Ψ is a generic Ψ changing with time.
So let's ask,
how can I calculate the future,
given the present?
 
How do I solve this equation?
 
So here is what you do.
 
I'm going to do it at two
levels.
One is to tell you a little bit
about how you get there,
and for those of you who say,
"Look,
spare me the details,
I just want to know the
answer,"
I will draw a box around the
answer,
and you are free to start from
there.
 
But I want to give everyone a
chance to look under the hood
and see what's happening.
 
So given an equation like this,
which is pretty old stuff in
mathematical physics from after
Newton's time,
people always ask the following
question.
They say, "Look,
I don't know if I can solve it
for every imaginable initial
condition."
It's like saying,
even the case of the
oscillator,
you may not be able to solve
every initial condition,
you say, "Let me find a
special case where
Ψ(x, t), which depends on x and on t, has the following simple form - it's a function of t alone times a function of x alone."
Okay?
 
I want you to know that no one
tells you that every solution to
the equation has this form.
 
You guys have a question about
this?
Over there.
 
Okay, good.
 
All right, so this is an
assumption.
You want to see if maybe there
are answers like this to the
problem.
 
The only way to do that is to
take their assumed form,
put it into the equation and
see if you can find a solution
of this form.
 
Not every solution looks like
this.
For example, you could write e^((x−t)²).

That's a function of x and t. But it's not a function of x times a function of t, you see that?

x and t are mixed up together. You cannot rip it into two parts, so it's not the most general thing that can happen; it's a particular one.
Right now, you are eager to get
any solution.
You want to say,
"Can I do anything?
Can I calculate even in the
simplest case what the future
is, given the present?"
 
You're asking,
"Can this happen?"
So I'm going to show you now
that the equation does admit
solutions of this type.
 
So are you guys with me now on
what I'm trying to do?
I'm trying to see if this
equation admits solutions of
this form.
 
So let's take that and put it
here.
Now here's where you've got to
do the math, okay?
Take this Ψ and put it here and start taking derivatives.
 
Let's do the left hand side
first.
Left hand side, I have iℏ. Then I bring the d by dt to act on this product. The partial d by dt means only time has to be differentiated; x is to be held constant.
That's the partial derivative.
 
That's the meaning of the
partial derivative.
It's like an ordinary
derivative where the only
variable you'd ever
differentiate is the one in the
derivative.
 
So the entire Ψ(x) doesn't do anything.

It's like a constant.

So you just put that Ψ(x) there.
Then the derivative,
d by dt,
comes here, and I claim it
becomes the ordinary derivative.
That's the left hand side.
 
You understand that,
why that is true?
Because on a function only of
time there's no difference
between partial derivative and
ordinary derivative.
It's got only one variable.
 
The other variable, which this d by dt doesn't care about, is just standing there.
That's the left hand side.
 
Now look at the right hand side, all of this stuff, and imagine putting in for this function Ψ this product form.
Is it clear to you,
in the right hand side the
situation's exactly the
opposite?
You've got all these partial d by dx's; they're only interested in this function, because it's the one with the x dependence.

F(t) doesn't do anything.

All the derivatives go right through F, so you can pull it out in front as F(t). Now you have to take the derivatives acting on Ψ.

That looks like −ℏ²/2m d²Ψ/dx² + V(x)Ψ(x).
If you follow this,
you're almost there,
but take your time to
understand this.
The reason you write it as a product of two functions is that the left hand side is only interested in differentiating the function F, where the derivative becomes a total derivative.

The right hand side is only taking derivatives with respect to x, so it acts on the part of the function that depends on x.
 
And all partial derivatives
become total derivatives because
if you've got only one variable,
there's no need to write
partial derivatives.
 
This combination I'm going to write, to save some time, as HΨ.

Let's just say between you and me it's a shorthand. HΨ is a shorthand for this entire mess here. Don't ask me why it looks like H times Ψ; where are the derivatives?

It is a shorthand, okay.

I don't feel like writing the combination over and over, so I can call it HΨ.
 
So what equation do I have now?
 
I have iℏ Ψ(x) dF/dt = F(t)HΨ, where all I want you to notice is that HΨ depends only on x. It has no dependence on time.
 
Do you see that?
 
There's nothing here that
depends on time.
Okay, now this is a trick which
if you learned,
you'll be quite pleased,
because you'll find that as you
do more and more stuff,
at least in physics or
economics or statistics,
the trick is a very old trick.
The problem is quantum
mechanics, but the mathematics
is very old.
 
So what do you do next?
 
You divide both sides by ΨF. So I say divide by F(t)Ψ(x). What do you think will happen if I divide by F(t)Ψ(x)?
 
On the left hand side, I say divide by ΨF, and on the right hand side, I say divide by ΨF. Can you see that the Ψ cancels here, and you have a 1/F?

The F cancels here and you have a 1/Ψ. The equation then says iℏ(1/F(t))dF/dt = (1/Ψ(x))HΨ. I've written this very slowly because, I don't know, you'll find this in many advanced books, but you may not find it in our textbook.
 
So if you don't follow
something, you should tell me.
There's plenty of time to do
this, so I'm in no rush at all.
These are purely mathematical
manipulations.
We have not done anything
involving physics.
You all follow this?
 
Yes?
 
Okay.
 
Now you have to ask yourself
the following.
I love this argument.
 
Even if you don't follow this,
I'm just going to get it off my
chest,
it is so clever,
and here is the clever part -
this is supposedly a function of time, you agree? All of it, a function of time.

This is a function of x.
 
This guy doesn't know what time
it is;
this guy doesn't know what
x is.
And yet they're supposed to be
equal.
What can they be equal to?
 
They cannot be equal to a
function of time,
because then,
as you vary time--suppose you
think it's a function of time,
suppose.
It's not so.
 
Then as time varies,
this part is okay.
It can vary with time to match
that, but this cannot vary with
time at all, because there is no
time here.
So this cannot depend on time.
 
And it cannot depend on
x, because if it was a
function of x that it was
equal to,
as you vary x,
this can change with x
to keep up with that.
 
This has no x dependence.
 
It cannot vary with x.
 
So this thing that they are
both equal to is not a function
of time and it's not a function
of space.
It's a constant.
 
That's all it can be.
 
So the constant is going to
very cleverly be given the
symbol E.
 
We're going to call the
constant E.
It turns out E is
connected to the energy of the
problem.
 
So now I have two equations,
this = E and that =
E and I'm going to write
it down.
So one of them says iℏ(1/F)dF/dt = E.

So let me bring the F over. The other one says HΨ = EΨ. These two equations, if you solve them, will give you the solution you are looking for.
In other words,
going back here,
yes,
this equation does admit
solutions of this form,
of the product form,
provided the function F you put in the product, the one that depends on time, obeys this equation, and the function Ψ that depends only on x obeys this equation.

Remember, HΨ is the shorthand for this long bunch of derivatives.
 
We'll come to that in a moment,
but let's solve this equation
first.
 
Now can you guys do this in your head? iℏ dF/dt = EF.

So it's saying F is a function of time whose time derivative is proportional to the function itself. Everybody knows what such a function is. It's an exponential.
 
And the answer is, I'm going to write it down and you can check: F(t) = F(0)e^(−iEt/ℏ). If you want, now take the derivative and check.

F(0) is some constant.

I call it F(0) because if t = 0, the exponential goes away and F(t) = F(0). But take the time derivative and see. When you take a time derivative of this, you get the same thing times −iE/ℏ, and when you multiply by iℏ, everything cancels except EF.
So this is a very easy solution.
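The claim that F(t) = F(0)e^(−iEt/ℏ) solves iℏ dF/dt = EF is easy to check numerically. The units (ℏ = 1), the sample E, and the sample F(0) below are my own illustrative choices:

```python
import numpy as np

hbar = 1.0              # units with hbar = 1, an illustrative assumption
E = 2.5                 # the separation constant; any real number works here
F0 = 1.0 + 0.5j         # F(0), some constant

def F(t):
    return F0 * np.exp(-1j * E * t / hbar)

# Check i*hbar dF/dt = E*F using a centered numerical derivative
t, h = 0.7, 1e-6
dFdt = (F(t + h) - F(t - h)) / (2 * h)
print(abs(1j * hbar * dFdt - E * F(t)))   # ~0, up to finite-difference error
```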
 
So let's stop and understand.
 
It says that if you are looking for solutions that are products of F(t) times Ψ(x), then F(t) is necessarily this exponential function; it's the only function you can have. You can pick E to be whatever you like, but then you must also solve this equation at the same time.
 
But what is this equation?

This says −ℏ²/2m d²Ψ/dx² + VΨ = EΨ, and you guys know who that is, right?
 
What is it?
 
What can you say about the
function that satisfies that
equation?
 
Have you seen it before?
 
Yes?
 
What is it?
 
Student: The Schrödinger equation.
Prof: It's the state of definite energy. Remember, we said functions of definite energy obey that equation.

So that Ψ is really just Ψ_E.
So now I'll put these two pieces together, and this is where those of you who drifted off can come back, because what I'm telling you is that the Schrödinger equation in fact admits a certain solution which is a product of a function of time and a function of space. And what we found by fiddling around with it is that F(t) and Ψ are very special: F(t) must look like e^(−iEt/ℏ), and Ψ is just our friend Ψ_E(x), the functions associated with a definite energy.
Yes?
 
Student: Is it possible that there are other solutions Ψ(x, t) that don't satisfy this condition?
 
Prof: Okay,
the question is,
are there other solutions for
which this factorized form is
not true?
 
Yes, and I will put you out of
your suspense very soon by
talking about that.
 
But I want everyone to understand that you can at least solve one case of the Schrödinger equation.
So what does this mean?
 
I want you guys to think about
it.
This says if you start Ψ in some arbitrary configuration, that's my initial state, and let it evolve with time, it obeys this rather crazy, complicated Schrödinger equation.

But suppose I start it at t = 0 in a state of definite energy, namely a state obeying that equation.
 
Then its future is very simple.
 
All you do is attach this phase factor, e^(−iEt/ℏ).
 
Therefore it's not a generic
solution, because you may not in
general start with a state which
is a state of definite energy.
You'll start with some random Ψ(x), and it's made up of many, many Ψ_E's that come in the expansion of that Ψ, so it's not going to always work.
But if you picked it so that
there's only one such term in
the sum over E,
namely one such function,
then the future is given by
this.
For example, if you have a particle in a box, you remember the wave function Ψ_n looks like √(2/L) sin(nπx/L). An arbitrary Ψ doesn't look like any of these.

These guys, remember, are nice functions that do many oscillations.
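The box functions √(2/L) sin(nπx/L) just mentioned are easy to write down and check on a grid. The box size and grid resolution are my own illustrative choices; a useful property worth verifying is that different modes are orthonormal, which is what makes the expansion coefficients work:

```python
import numpy as np

L = 1.0
x = np.linspace(0, L, 2000)

def psi_n(n, x):
    """Particle-in-a-box eigenfunction sqrt(2/L) * sin(n*pi*x/L)."""
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

# Each mode has unit norm, and distinct modes have zero overlap
norm1 = np.trapz(psi_n(1, x) * psi_n(1, x), x)   # ~1
cross = np.trapz(psi_n(1, x) * psi_n(2, x), x)   # ~0
print(norm1, cross)
```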
 
But if you chose it initially to be exactly the sine function, for example Ψ_1, then I claim as time evolves, the future state is just this initial sine function times this simple exponential. This behavior is very special, and it's called a normal mode. It's a very common idea in mathematical physics.
It's the following - it's very
familiar even before you did
quantum mechanics.
 
Take a string tied at both ends
and you pluck the string and you
release it.
 
Most probably if you pluck it
at one point,
you'll probably pull it in the
middle and let it go.
That's the initial Ψ, this time for a string.
 
Pull it in the middle,
let it go.
There's an equation that
determines the evolutions of
that string.
 
I remind you what that equation is. It's d²Ψ/dx² = (1/v²) d²Ψ/dt². That's the wave equation for a string. It's somewhat different from this problem, because it's a second derivative in time that's involved.
 
Nonetheless,
here is an amazing property of
this equation,
derived by similar methods.
If you pull a string like this
and let it go,
it will go crazy when you
release it.
I don't even know what it will
do.
It will do all kinds of things,
stuff will go back and forth,
back and forth.
 
But if you can deform the string at t = 0 to look exactly like this, sin(πx/L) times a number A, that's not easy to do.
Do you understand that to
produce the initial profile,
one hand is not enough,
two hands are not enough?
You've got to get infinite
number of your friends,
who are infinitesimally small.
 
You line them up along the string, and you tell the person here to pull it to exactly this height, the person there to pull it to exactly that height. You all lift your hands, and the string follows this perfect sine.
 
Then you let go.
 
What do you think will happen
then?
What do you think will be the
subsequent evolution of the
string?
 
Do you have any guess?
 
Yes?
 
Student: [inaudible]
Prof: It will go up and down, and the future of that string will look like A sin(πx/L) cos(πvt/L).
Look at it.
 
This is the time dependence; at t = 0 this factor is 1 and you get the initial state.
But look what happens at a
later time.
Every point x rises and
falls with the same period.
It goes up and down all
together.
That means a little later it
will look like that,
a little later it will look
like that,
then it will look like this,
then it will look like this,
then it will look like that,
then it will go back and forth.
But at every instant,
this guy, this guy,
this guy are all rescaled by
the same amount from the
starting value.
 
That's called a normal mode.
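The defining feature of a normal mode, that every point rises and falls in step, can be seen in two lines of arithmetic. The amplitude, length, speed, and sample time below are all assumed values for illustration:

```python
import numpy as np

# Normal mode of a string of length L: y(x, t) = A sin(pi x/L) cos(pi v t/L).
# At each instant, every point is rescaled by the SAME factor cos(pi v t/L).
A, L, v = 1.0, 1.0, 2.0
x = np.linspace(0.1, 0.9, 9)        # interior points (the ends are tied down)

def y(x, t):
    return A * np.sin(np.pi * x / L) * np.cos(np.pi * v * t / L)

t = 0.13
ratios = y(x, t) / y(x, 0.0)        # one common number for every point x
print(ratios)
```

Every entry of `ratios` is the same number, cos(πvt/L), which is exactly what "rising and falling in step" means.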
 
Now your question was,
are there other solutions to
the string?
 
Of course.
 
Typically, if you don't think
about it and pluck the string,
your initial state will be a
sum of these normal modes,
and that will evolve in a
complicated way.
But if you engineered it to
begin exactly this way,
or in any one of those other
functions where you put an extra
n here,
they all have the remarkable
property that they rise and fall
in step.
What we have found here is in
the quantum problem,
if you start the system in that
particular configuration,
then its future has got a
single time dependence common to
it.
 
That's the meaning of the
factorized solution.
So we know one simple example.
 
Take a particle in a box.
 
If it's in the lowest energy
state or ground state the wave
function looks like that.
 
Then the future of that will be Ψ_1(x, t) = Ψ_1(x) times e^(−iE_1t/ℏ), where E_1 = ℏ²π²1²/2mL². That's the energy associated with that function. That's how it will oscillate.
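The box energies E_n = ℏ²π²n²/2mL² give concrete numbers once you pick a particle and a box. Taking an electron in an assumed 1 nm box (my choice, not the lecture's) gives a ground-state energy of a fraction of an electron volt:

```python
import numpy as np

hbar = 1.054571817e-34      # J*s, reduced Planck constant
m = 9.1093837015e-31        # kg, electron mass (an assumed particle)
L = 1e-9                    # an assumed 1 nm box, just to get a number out

def E(n):
    """Particle-in-a-box energies E_n = hbar^2 pi^2 n^2 / (2 m L^2)."""
    return hbar**2 * np.pi**2 * n**2 / (2 * m * L**2)

eV = 1.602176634e-19        # joules per electron volt
print(E(1) / eV)            # ground-state energy in eV
print(E(2) / E(1))          # energies grow as n^2, so this ratio is 4
```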
 
Now you guys follow what I said
now, with an analogy with the
string and the quantum problem?
 
They're slightly different
equations.
One is second order,
one is first order.
One has cosines in it,
one has exponentials in it.
But the common property is,
this is also a function of time
times the function of space.
 
Here, this is a function of
time and a function of space.
Okay, so I'm going to spend
some time analyzing this
particular function.
 
Ψ(x, t) = e^(−iEt/ℏ) Ψ_E(x). And I'm going to commit an abuse of notation and give the subscript E to this guy also.
 
What I mean to tell you by that is, this Ψ, which solves the Schrödinger equation--by the way, I invite you to go check it.

Take the Ψ, put it into the Schrödinger equation and you will find it works. In the notes I've given you, I merely tell you that this is a solution to Schrödinger's equation. I don't go through this argument of assuming it's a product form and so on.
I don't go through this
argument of assuming it's a
product form and so on.
 
That's optional.
 
I don't care if you remember that or not, but this solves the Schrödinger equation, and I call it Ψ_E because the functions on the right hand side are identified with states of definite energy.
 
Okay, what will happen if you
measure various quantities in
this state?
 
For example,
what's the position going to
be?
 
What's the probability for
definite position?
What's the probability for
definite momentum?
What's the probability for
definite anything?
How will they vary with time?
 
I will show you,
nothing depends on time.
You can say,
"How can nothing depend on
time?
 
I see time in the function
here."
But it will go away.
 
Let us ask, what is the
probability that the particle is
at x at time t for a
solution like this?
You know, the answer is Ψ*(x,t)Ψ(x,t), and what do you get when you do that?

You will get Ψ_E*(x)Ψ_E(x). Then you will get the absolute value of this guy squared, e^(iEt/ℏ) times e^(−iEt/ℏ), and that's just 1.

I hope all of you know that the absolute value squared of e^(iθ) is 1.
 
So it does not depend on time.
 
Even though Ψ depends on time, Ψ*Ψ has no time dependence.
 
That means the probability for
finding that particle will not
change with time.
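This "depends on time yet doesn't depend on time" behavior can be verified in a few lines. The units (ℏ = 1), the sample energy, and the box ground state used as Ψ_E are all illustrative assumptions:

```python
import numpy as np

hbar, E, L = 1.0, 3.0, 1.0        # hbar = 1 units and a sample E, assumed
x = np.linspace(0, L, 200)
psi_E = np.sqrt(2 / L) * np.sin(np.pi * x / L)    # a state of definite energy

def prob_density(t):
    psi = np.exp(-1j * E * t / hbar) * psi_E      # stationary-state evolution
    return np.abs(psi)**2

# Psi itself depends on t, but the probability density does not:
diff = np.abs(prob_density(0.0) - prob_density(7.3)).max()
print(diff)   # 0, up to roundoff
```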
 
That means if you start the particle in the ground state Ψ, and let's say |Ψ|² in fact looks pretty much the same, since it's a real function, this probability does not change with time.
That means you can make a
measurement any time you want
for position,
and the odds don't change with
time.
 
It's very interesting.
 
It depends on time and it
doesn't depend on time.
It's a lot like e^(ipx/ℏ). It seems to depend on x, but the density does not depend on x, because the exponential goes away.
Similarly, it does depend on
time.
Without the time dependence,
it won't satisfy
Schrˆdinger equation,
but the minute you take the
absolute value,
this goes away.
That means for this particle,
I can draw a little graph that
looks like this,
and that is the probability
cloud you find in all the
textbooks.
Have you seen the probability
cloud?
They've got a little atom
that's a little fuzzy stuff all
around it.
 
They are the states of the
hydrogen atom or some other
atom.
 
How do you think you get that?
 
You solve a similar equation, except it will be in 3 dimensions instead of 1 dimension, and for V(x), you write −Ze²/r, where r = √(x² + y² + z²). Ze is the nuclear charge, and −e is the electron charge.
 
You put that in and you solve
the equation and you will find a
whole bunch of solutions that
behave like this.
They are called stationary
states, because in that
stationary state--
see, if a hydrogen atom starts
out in this state,
which is a state of definite
energy,
as time goes by,
nothing happens to it
essentially.
Something trivial happens;
it picks up the phase factor,
but the probability for finding
the electron never changes with
time.
 
So if you like,
you can draw a little cloud
whose thickness,
if you like,
measures the probability for
finding it at that location.
So that will have all kinds of
shape.
It looks like dumbbells,
pointing to the north pole,
south pole, maybe uniformly
spherical distribution.
They're all the probability of
finding the electron in that
state, and it doesn't change
with time.
So a hydrogen atom,
when you leave it alone,
will be in one of these allowed
states.
You don't need the hydrogen
atom;
this particle in a box is a
good enough quantum system.
If you start it like that,
it will stay like that;
if you start it like that,
it will stay like that,
times that phase factor.
 
So stationary states are
important, because that's where
things have settled down.
 
Okay, now you should also
realize that that's not a
typical situation.
 
Suppose you have in 1
dimension, there's a particle on
a hill, and at t = 0,
it's given by some wave
function that looks like this.
 
So it's got some average
position, and if you expand it
in terms of exponentials of
p, it's got some range of
momenta in it.
 
What will happen to this as a
function of time?
Can you make a guess?
 
Let's say it's right now got an
average momentum to the left.
What do you think will happen
to it?
Pardon me?
 
Student:  It will move
to the left.
Prof: It will move to
the left.
Except for the fuzziness,
you can apply your classical
intuition.
 
It's got some position,
maybe not precise.
It's got some momentum,
maybe not precise,
but when you leave something on
top of a hill,
it's going to slide down the
hill.
The average x is going
to go this way,
and the average momentum will
increase.
So that's a situation where the
average of the physical
quantities change with time.
 
That's because this state is not a function Ψ_E(x).
 
It's some random function you
picked.
Random functions you picked in
some potential will in fact
evolve with time in such a way
that measurable quantities will
change with time.
 
The odds for x or the
odds for p,
odds for everything else,
will change with time,
okay?
 
So stationary states are very
privileged,
because if you start them that
way,
they stay that way,
and that's why when you look at
atoms,
they typically stay that way.
But once in a while,
an atom will jump from one
stationary state to another one,
and you can say that looks like
a contradiction.
 
If it's stationary,
what's it doing jumping from
here to there?
 
You know the answer to that?
 
Why does an atom ever change
then?
If it's in a state of definite
E, it should be that way
forever.
 
Why do they go up and down?
 
Want to guess?
 
Student: <inaudible>
Prof: That's correct.
 
So she said by absorbing
photons.
And what I really mean by that
is, this problem V(x)
involves only the coulomb force
between the electron and the
proton.
 
If that's all you have,
an electron in the field of a
proton, it will pick one of
these levels,
it can stay there forever.
 
When you shine light,
you're applying an
electromagnetic field.
 
The electric field and magnetic field apply extra forces on the charge, and V(x) should change to something else.
So that this function is no
longer a state of definite
energy for the new problem,
because you've changed the
rules of the game.
 
You modified the potential.
 
Then of course it will move
around and it will change from
one state to another.
 
But an isolated atom will
remain that way forever.
But it turns out even that's
not exactly correct.
You can take an isolated atom,
in the first excited state of
hydrogen, you come back a short
time later, you'll find the
fellow has come down.
 
And you say,
"Look, I didn't turn on
any electric field.
 
E = 0, B = 0.
 
What made the atom come
down?"
Do you know what the answer to
that is?
Any rumors?
 
Yes?
 
Student:  Photon
emission?
Prof: Yes, a photon is emitted, but you need an extra thing, an extra electromagnetic field, to act on it before it will emit the photon.
But where is the field?
 
I've turned everything off.
 
E and B are both
0.
So it turns out that the state
E = B = 0 is like
a state say in a harmonic
oscillator potential x =
p = 0,
sitting at the bottom of the
well.
 
We know that's not allowed in
quantum mechanics.
You cannot have definite
x and definite p.
It turns out in quantum theory,
E and B are like
x and p.
 
That means the state of
definite E is not a state
of definite B.
 
A state of definite B is
not a state of definite
E.
 
It looks that way in the
macroscopic world,
because the fluctuations in
E and B are very
small.
 
Therefore, just like in the
lowest energy state,
an oscillator has got some
probability to be jiggling back
and forth in x and also
in p.
The vacuum, which we think has
no E and no B,
has small fluctuations,
because E and B
both vanishing is like x
and p both vanishing.
Not allowed.
 
So you've got to have a little spread in both E and B.
 
They're called quantum
fluctuations of the vacuum.
So that's a theory of nothing.
 
The vacuum you think is the
most uninteresting thing,
and yet it is not completely
uninteresting,
because it's got these
fluctuations.
It's those fluctuations that
tickle the atom and make it come
from an excited state to a
ground state.
Okay, so unless you tamper with
the atom in some fashion,
it will remain in a stationary
state.
Those states are states of
definite energy.
They are found by solving the Schrödinger equation without time in it.

HΨ = EΨ is called the time independent Schrödinger equation, and that's what most of us do most of the time.
The problem can be more
complicated.
It can involve two particles,
can involve ten particles.
It may not involve this force,
may involve another force,
but everybody is spending most of his time or her time solving the Schrödinger equation to find states of definite energy, because that's where things will end up.
All right, I've only shown you
that the probability to find
different positions doesn't
change with time.
I will show you the probability
to find different anything
doesn't change with time.
 
Nothing will change with time,
not just x probability,
so I'll do one more example.
 
Let's ask, what's the
probability to find a momentum
p?
 
What are we supposed to do?
 
We're supposed to take e^(ipx/ℏ) times the function at some time t and do that integral--I'm sorry, you should take the conjugate, e^(−ipx/ℏ), times the function and do that integral, and then you take the absolute value of that.

And that's done at every time.

You take the absolute value squared and that's the probability to get momentum p, right? The recipe was, if you want the probability, take the given function, multiply by the conjugate of Ψ_p and do the integral over x.
 
Ψ(x,t) in general has got complicated time dependence, but not our Ψ. Remember our Ψ?

Our Ψ looks like Ψ_E(x) times e^(−iEt/ℏ).

But when you take the absolute value, this has nothing to do with x.

You can pull it outside the integral--or let me put it another way.

Let's do the integral and see what you get. You will find A_p looks like A_p(0)e^(−iEt/ℏ).
 
Do you see that?
 
If the only thing that happens
to Y is that you get an
extra factor at later times,
only thing that happens to the
A_p is it gets
the extra factor at later times.
But the probability to find
momentum p is the
absolute value square of that,
and in the absolute value
process, this guy is gone.
 
You follow that?
 
Since the wave function changes by a simple phase factor, the coefficient to have a definite momentum also changes by the same phase factor. This is called a phase factor, an exponential of modulus 1, but when you take the absolute value, the guy doesn't do anything.
Now you can replace p by
some other variable.
It doesn't matter.
 
The story is always the same.
 
So a state of definite energy seems to evolve in time, because of the e^(−iEt/ℏ), but none of the probabilities change with time. It's absolutely stationary.
 
Just put anything you measure.
 
That's why those states are
very important.
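[A numerical sketch of the momentum case--my own illustration, not from the lecture. I expand in discrete plane waves with an FFT, with assumed units ℏ = 1:]

```python
import numpy as np

# Every plane-wave amplitude of a stationary state picks up the same
# phase e^{-iEt/hbar}, so the momentum probabilities |A_p|^2 are frozen.

hbar = 1.0
E = 1.3                                        # an assumed energy
N = 256
x = np.linspace(0.0, 1.0, N, endpoint=False)
psi = np.sqrt(2.0) * np.sin(2.0 * np.pi * x)   # an assumed normalized state

def momentum_probs(t):
    Psi = psi * np.exp(-1j * E * t / hbar)
    A = np.fft.fft(Psi) / N                    # plane-wave amplitudes
    return np.abs(A) ** 2

assert np.allclose(momentum_probs(0.0), momentum_probs(7.0))
```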
Okay, now I want to caution you
that not every solution looks
like this.
 
That's the question you raised.
 
I'm going to answer that
question now.
Let's imagine that I find two solutions to the Schrödinger equation of this form. Solution Ψ_1 looks like Ψ_1(x,t) = e^(−iE_1t/ℏ) Ψ_E1(x).

That's one solution for energy E_1. Then there's another solution, Ψ_2(x,t), that looks like e^(−iE_2t/ℏ) Ψ_E2(x).
 
This function has all the
properties I mentioned,
namely nothing depends on time.
 
That has the same property.
 
But because the Schrödinger equation is a linear equation, it's also true that this Ψ, which is Ψ_1 + Ψ_2, add this one to this one, is also a solution.

I think I have done it many, many times. If you take a linear equation, Ψ_1 obeys the Schrödinger equation, Ψ_2 obeys the Schrödinger equation.

Add the left hand side to the left hand side and the right hand side to the right hand side, and you will find that if Ψ_1 obeys it and Ψ_2 does, Ψ_1 + Ψ_2 also obeys it.
 
Not only that, it can be even more general. You can multiply these by any numbers: A_1Ψ_1(x,t) + A_2Ψ_2(x,t), where the constants A_1 and A_2 don't depend on time, also obeys the Schrödinger equation. Can you see that?

That's superposition of solutions. It's a property of linear equations. Nowhere does Ψ² appear in the Schrödinger equation, therefore you can add solutions.
But take a solution of this form. Even though Ψ_1 is a product of some F_1(t) and a Ψ_E1(x), and Ψ_2 is a product of some F_2(t) and a Ψ_E2(x), the sum is not a product of some F and some Ψ. You cannot write it as a product, you understand? That's a product, and that's a product, but their sum is not a product, because you cannot pull out a common function of time from the two of them. They have different time dependence. But that is also a solution.
 
In fact, now you can ask yourself, what is the most general solution I can build in this problem? Well, I think you can imagine that I can now write Ψ(x,t) as Σ_E A_E Ψ_E(x,t), a sum over all the allowed Es.

That also satisfies the Schrödinger equation. Do you agree?

Every term in it satisfies the Schrödinger equation. You add them all up, multiply by any constants A_E, and that also satisfies the Schrödinger equation.
 
So now I'm suddenly
manufacturing more complicated
solutions.
 
The original modest goal was to
find a product form,
but once you got the product
form, you find if you add them
together,
you get a solution that's no
longer a product of x and
a product of t,
function of x and a
function of t,
because this guy has one time
dependence;
another term is a different
time dependence.
You cannot pull them all out.
 
So we are now manufacturing
solutions that don't look like
their products.
 
This is the amazing thing about
solving the linear equation.
You seem to have very modest
goals when you start with a
product form,
but in the end,
you find that you can make up a
linear combination of products.
Then the only question is,
will it cover every possible
situation you give me?
 
In other words,
suppose you come to me with an
arbitrary initial state.
 
I don't know anything about it,
and you say,
"What is its future?"
 
Can I handle that problem?
 
And the answer is,
I can, and I'll tell you why
that is true.
 
Ψ(x,t) looks like a sum of A_E terms, which I'm going to write more explicitly as Σ_E A_E Ψ_E(x)e^(−iEt/ℏ). Look at this function now at t = 0. At t = 0, I get Ψ(x,0) to be Σ_E A_E Ψ_E(x). In other words, I can only handle those problems whose initial state looks like this.
But my question is,
should I feel limited in any
way by the restriction?
 
Do you follow what I'm saying?
 
Maybe I'll say it one more time.

This is the most general solution I'm able to manufacture: it looks like Σ_E A_E Ψ_E(x)e^(−iEt/ℏ). It's a sum over solutions of the product form, each one with a different coefficient. That's also a solution to the Schrödinger equation.
If I take that solution and
say, "What does it do at
t = 0?"
 
I find it does the following.
 
At t = 0,
it looks like this.
So only for initial functions
of this form I have the future.
But that "only" is not a big restriction, because every function you can give me at t = 0 can always be written in this form.
It's a mathematical result, just like the one that says sines and cosines and certain exponentials are a complete set of functions for expanding any function.

The mathematical theory tells you that the solutions of HΨ = EΨ, if you assemble all of them, can be used to build up an arbitrary initial function.
That means any initial function
you give me, I can write this
way, and the future of that
initial state is this guy.
Yes?
 
Student: <inaudible>
Prof: Yes.
 
Lots of mathematical
restrictions,
single valued.
 
Physicists usually don't worry
about those restrictions till of
course they get in trouble.
 
Then we go crawling back to the
math guys to help us out.
So just about anything you can
write down, by the way physics
works, things tend to be
continuous and differentiable.
That's the way natural things
are.
So for any function we can
think of it is true.
You go to the mathematicians, they will give you a function that is nowhere continuous, nowhere differentiable, nowhere defined, nowhere something.
That's what makes them really
happy.
But they are all functions the
way they've defined it,
but they don't happen in real
life,
because whatever happens here
influences what happens on
either side of it,
so things don't change in a
discontinuous way.
 
Unless you apply an infinite force, an infinite potential, infinite something, everything is what's called C infinity: you can differentiate it any number of times.
So we don't worry about the
restriction.
So in the world of physicists'
functions, you can write any
initial function in terms of
these functions.
So let me tell you then the process for solving the Schrödinger equation under any conditions.
Are you with me?
 
You come and give me
Y(x, 0),
and you say,
"As a function of time,
where is it going to end
up?"
That's your question.
 
That's all you can ask.
 
Initial state, final state.
 
This is given, this is needed.
 
So I'll give you a 3 step solution. Step 1, solve HΨ = EΨ to find the functions Ψ_E(x). Step 2, find A_E = ∫ Ψ_E*(x)Ψ(x,0) dx. Step 3, Ψ(x,t) = Σ_E A_E e^(−iEt/ℏ) Ψ_E(x): each A_E that you got, times the phase factor for that energy, times Ψ_E(x).

So what I'm telling you is, the fate of a function Ψ with the wiggles and jiggles is very complicated to explain. Some wiggle goes into some other wiggle that goes into some other wiggle as a function of time, but there is a basic simplicity underlying that evolution.
The simplicity is the following.
 
If at t = 0 you expand your Ψ as such a sum, where the coefficients are given by the standard rule, then as time goes away from t = 0, all you need to do is to multiply each coefficient by the phase factor involving that particular energy. And that gives you the Ψ at later times. A state of definite energy in this jargon will be the one in which every term is absent except one, maybe E = E_1.
 
That is the kind we study.
 
That state has got only 1 term
in the sum and its time
evolution is simply given by
this and all probabilities are
constant.
 
But if you mix them up with
different coefficients,
you can then handle any initial
condition.
So we have now solved really
for the future of any quantum
mechanical problem.
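[The recipe above, sketched numerically for a particle in a box--my own illustration, with assumed units ℏ = m = L = 1 and an assumed Gaussian bump as the initial state:]

```python
import numpy as np

hbar = m = L = 1.0
x = np.linspace(0.0, L, 400)
dx = x[1] - x[0]

def psi_n(n):                  # step 1: the solutions of H psi = E psi
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def E_n(n):
    return (hbar * np.pi * n) ** 2 / (2.0 * m * L ** 2)

psi0 = np.exp(-((x - 0.3) / 0.08) ** 2)          # assumed initial bump
psi0 /= np.sqrt((np.abs(psi0) ** 2 * dx).sum())  # normalize it

ns = np.arange(1, 40)
A = np.array([(psi_n(n) * psi0 * dx).sum() for n in ns])   # step 2: overlaps

def Psi(t):                    # step 3: attach a phase to each term and sum
    return sum(A[i] * np.exp(-1j * E_n(n) * t / hbar) * psi_n(n)
               for i, n in enumerate(ns))

# at t = 0 the expansion reproduces the initial state
assert np.allclose(Psi(0.0).real, psi0, atol=1e-3)
# and the total probability stays 1 at later times
assert abs((np.abs(Psi(2.0)) ** 2 * dx).sum() - 1.0) < 1e-2
```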
 
So I'm going to give you from
now to the end of class concrete
examples of this.
 
But I don't mind again
answering your questions,
because it's very hard for me
to put myself in your place.
So I'm trying to remember when
I did not know quantum
mechanics, sitting in some
sandbox and some kid was
throwing sand in my face.
 
So I don't know.
 
I've lost my innocence and I
don't know how it looks to you.
Yes.
 
Student: For each of these problems, you have to solve that equation you gave us before to find the form of the Ψ_E, right?
 
Prof: Right.
 
So let's do the following
problem.
Let us take a world in which
everything is inside the box of
length L.
 
And someone has manufactured
for you a certain state.
Let me come to that case in a
minute.
Let me take a simple case then
I'll build up the situation you
want.
 
Let's first take a simple case where at t = 0, Ψ(x,0) = √(2/L) sin(nπx/L). That is just a function with n oscillations. You agree, that's a state of definite energy. The energy of that state, E_n, is ℏ²π²n²/2mL².
 
We did that last time.
 
And the reason,
why were we so interested in
these functions?
 
Now I can tell you why.
 
If this is my initial state, let me take a particular n, then the state at any future time, Ψ(x,t), is very simple: the square root of 2/L times sin(nπx/L), times e^(−iE_n t/ℏ), where the energy E_n is n²π²ℏ²/2mL². That's it.

That is the function of time.

All I've done to that initial state is multiply by e^(−iEt/ℏ), but E is not some random number.

E is labeled by n, and E_n is whatever you have here.

That's the time dependence of that state. It's very clear that if you took the absolute value of this Ψ, this phase factor has absolute value = 1 at all times.
It's like saying cos t depends on time, sin t depends on time, so cos²t + sin²t seems to depend on time, but it doesn't. So this seems to depend on time, and it does, but when you take the absolute value, it goes away.
That's the simplest problem.
 
I gave you an initial state,
the future is very simple,
attach the factor.
 
Now let's give you a slightly
more complicated state.
The more complicated state will
be--I'm going to hide that for
now.
 
Let us take a Ψ(x,0) that looks like 3 times √(2/L) sin(2πx/L) + 4 times √(2/L) sin(3πx/L). This is my initial state.

What does it look like?

It's a sum of 2 energy states.

This first guy is what I would call Ψ_2 in my notation, the second state, and the other guy is Ψ_3. Everybody is properly normalized, and the 3 and the 4 are the As.
 
So this state,
if you measure its energy,
what will you get?
 
Anybody tell me what answers I
can get if I measure energy now?
You want to guess what are the
possible energies I could get?
Yes.
 
Any of you, either of you.
 
Can you tell?
 
No?
 
Yes?
 
Student: You can get ℏ²π² times 4/2mL², or times 9.

Prof: So her answer was, you can get in my convention E_2 or E_3; just put n = 2 or 3. That's all you have: your function, written as a sum over the Ψ_E, has only 2 terms.
 
That means they are the only 2
energies you can get.
So it's not a state of definite
energy.
You can get either this answer
or this answer.
But now you can sort of see,
it's more likely to get this
guy,
because it has a 4 in front of
it, and less likely to get this
guy,
and impossible to get anything
else.
So the probability for getting n = 2 is proportional to 3², and the probability for getting n = 3 is proportional to 4². If you want the absolute probabilities, then you can write them as 3² divided by 3² + 4² and 4² divided by 3² + 4², and 3² + 4² is 5².

See, if you add the squares of these coefficients, you get 25 = 3² + 4².

If you want them to add up to 1, I think you can see without too much trouble that if you rescale the whole thing by 1/5, the total probabilities add up to 1.
 
That's the way to normalize the
function, that's the easy way.
The hard way is to square all
of this and integrate it and
then set it = to 1 and see what
you have to do.
In the end, all you will have
to do is divide by 5.
I'm just giving you a shortcut.
 
When you expand the Ψ in terms of normalized functions, then the coefficients squared should add up to 1.
If they don't,
you just divide them by
whatever it takes.
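[The shortcut in numbers--a trivial check of my own:]

```python
import numpy as np

# Coefficients 3 and 4 on two normalized states get divided by
# sqrt(3^2 + 4^2) = 5, after which the probabilities 9/25 and 16/25
# add up to 1.

raw = np.array([3.0, 4.0])
A = raw / np.sqrt((raw ** 2).sum())   # divide by 5
probs = A ** 2

assert np.allclose(A, [0.6, 0.8])
assert np.isclose(probs.sum(), 1.0)
```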
 
So this has got a chance of 9/25 or 16/25 of being this or that energy.
 
But as a function of time,
you will find here things vary
with time.
 
This is not going to be time
independent.
I want to show you that.
 
So Ψ(x,t) now is going to be (3/5)√(2/L) sin(2πx/L) times e^(−iE_2 t/ℏ)--I don't want to write the full formula for E_n every time, I'm just going to call it E_2--plus (4/5)√(2/L) sin(3πx/L) times e^(−iE_3 t/ℏ).
 
Now you notice that if I want the probability to be at some x, P(x,t), I have to take the absolute square of all of this.

And all I want you to notice is that in the absolute square of all of this, you cannot drop these exponentials now. If you've got two of them, you cannot drop them, because when you take Ψ_1 + Ψ_2 absolute squared, you multiply Ψ_1 + Ψ_2 by Ψ_1 conjugate + Ψ_2 conjugate; let's do that.

So you want to multiply the whole thing by its conjugate. So first you take the absolute square of this. You will get (9/25)(2/L)sin²(2πx/L).
 
And the absolute value of the phase is just 1. You see that?

That is Ψ_1*Ψ_1. Then you must take Ψ_2*Ψ_2.

That will be (16/25)√(2/L)--I'm sorry, no square root--(16/25)(2/L)sin²(3πx/L) times 1, because the absolute value of this guy with itself is 1. But that's not the end.

You've got 2 more terms, which look like Ψ_1*Ψ_2 + Ψ_2*Ψ_1.
 
I'm not going to work out all the details, but let me just show you that time dependence exists. So if you take Ψ_1*Ψ_2, you will get (3/5)√(2/L) sin(2πx/L) times e^(iE_2t/ℏ), times (4/5)√(2/L) sin(3πx/L) times e^(−iE_3t/ℏ), which carries a factor of e^(i(E_2 − E_3)t/ℏ), plus 1 more term.
 
I don't care about any of these
things.
I'm asking you to see,
do things depend on time or
not?
 
This has no time dependence, because in the absolute value the phase went away.

This has no time dependence either; its phase also dropped out in the absolute value.
 
But the cross terms,
when you multiply the conjugate
of this by this,
or the conjugate of this by
that, they don't cancel.
 
That's all I want you to know.
 
So I'll get a term like this plus its complex conjugate.
I don't want to write that in
detail.
If you combine this term with its conjugate, the pair will give me a cosine.
So I should probably write it
on another part of the board.
So maybe I don't even want to
write it because it's in the
notes.
 
I want you to notice the following - look at the first term, no time dependence; second term, no time dependence.

One cross term has an e to the i something, and I claim the other cross term will have e to the −i something. e to the i something plus e to the −i something gives you the cosine of something.
 
That's all I want you to know.
 
So somewhere in the time dependence of P(x,t)--which has got a lot of stuff that is t independent--there is something that looks like a whole bunch of numbers times cos((E_2 − E_3)t/ℏ).
That's all I want you to notice.
 
That means the probability
density will be oscillating.
The particle will not be fixed;
it will be bouncing back and
forth between the walls.
 
And the rate at which it
bounces is given by the
difference in energy between the
two states you formed in the
combination.
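[A sketch of the sloshing--my own numbers, with assumed units ℏ = m = L = 1 and the (3/5, 4/5) mixture above:]

```python
import numpy as np

# Psi = (3/5) psi_2 + (4/5) psi_3 in a unit box.  The cross term
# oscillates at (E_3 - E_2)/hbar, so P(x,t) repeats with that period.

hbar = m = L = 1.0
x = np.linspace(0.0, L, 300)

def psi_n(n):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def E_n(n):
    return (hbar * np.pi * n) ** 2 / (2.0 * m * L ** 2)

def P(t):
    Psi = (0.6 * psi_n(2) * np.exp(-1j * E_n(2) * t / hbar)
           + 0.8 * psi_n(3) * np.exp(-1j * E_n(3) * t / hbar))
    return (np.conj(Psi) * Psi).real

period = 2.0 * np.pi * hbar / (E_n(3) - E_n(2))

# unlike a single stationary state, the density really moves...
assert not np.allclose(P(0.0), P(0.3 * period))
# ...and it comes back after one period of the cross term
assert np.allclose(P(0.0), P(period))
```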
 
So this is how a particle in
general--
if you want,
not the most general one,
but it's a reasonably general
case where you added some
mixture of that--
let me see.
You added 2 to 3.
 
You added that state that state.
 
You added 3/5 of that plus 4/5 of that as the initial condition.

Okay?

3/5 of this one guy with 2 wiggles and 4/5 of the one with 3 wiggles, and you let it go in time; you will find then, if you add these time dependences, there'll be a part that varies with time.
So the density will not be
constant now.
It will be sloshing back and
forth in the box.
That's a more typical situation.
 
So not every initial state is a state of definite energy. It's an admixture.
 
I've taken the simplest case
where the admixture has only 2
parts in it.
 
You can imagine taking a
function made of 3 parts and 4
parts and 10 parts and when you
square them and all that,
you'll get 100 cross terms.
 
They'll all be oscillating at
different rates.
But the frequencies will always be given by the differences in the energies that went into your mixture.
 
The last problem, which I'm not going to do now, I'll do next time, but I'll tell you what it is I want to do. It is mathematically more involved, but the idea is the same.
Here I gave you the initial
state on a plate.
I just said,
here it is, 3 fifths times 1
function of definite energy,
4 fifths times another function
of definite energy.
 
The problem you really want to
be able to solve right now is
when somebody gives you an
arbitrary initial state and
says,
"What happens to it?"
So I'm going to consider next time a function that looks something like this: x times (1 − x) is my function at time 0.

That's a nice function, because it vanishes at 0 and it vanishes at 1.

It's a box of length L = 1.
It's an allowed initial state.
 
Then I can ask, what does this do? So you should think about how to predict its future--what's involved?
 
So you will see that in the
notes.
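[As a preview, the expansion coefficients for that initial state can be checked numerically--my own sketch, with L = 1 and ψ_n = √2 sin(nπx):]

```python
import numpy as np

# Expand Psi(x,0) = x(1 - x) on a unit box in psi_n = sqrt(2) sin(n pi x).
# The even-n overlaps vanish by symmetry, the odd ones are
# 4*sqrt(2)/(n pi)^3, and a few terms already suffice.

x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
f = x * (1.0 - x)

def A(n):   # overlap of the normalized psi_n with the initial state
    return (np.sqrt(2.0) * np.sin(n * np.pi * x) * f * dx).sum()

assert abs(A(2)) < 1e-9                                  # even n: zero
assert np.isclose(A(1), 4.0 * np.sqrt(2.0) / np.pi ** 3, atol=1e-5)

# six odd terms already reconstruct the parabola quite well
approx = sum(A(n) * np.sqrt(2.0) * np.sin(n * np.pi * x)
             for n in range(1, 12, 2))
assert np.max(np.abs(approx - f)) < 1e-3
```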
So what I'm going to do next
time is to finish this problem
and give you the final set of
postulates,
so that you can see what the
rules of quantum mechanics are.
 
 
