The following
content is provided
under a Creative
Commons license.
Your support will help MIT
OpenCourseWare continue
to offer high-quality
educational resources for free.
To make a donation or
view additional materials
from hundreds of MIT courses,
visit MIT OpenCourseWare
at ocw.mit.edu.
PROFESSOR: OK, last time we
were talking about uncertainty.
We gave a picture
for uncertainty--
a neat picture, I think--
of the uncertainty in
measuring a Hermitian
operator A.
And that uncertainty
depended on the state
that you were measuring.
If the state was
an eigenstate of A,
there was no uncertainty.
If the state was not
an eigenstate of A,
there was an uncertainty.
And this uncertainty
was defined as the norm
of A minus the expectation
value of A acting on psi.
So that was our
definition of uncertainty.
And it had nice properties.
In fact, it was zero if
and only if the state was
an eigenstate of the operator.
We proved a couple
of things as well--
that, in particular, one
that is kind of practical
is that delta A of psi
squared is the expectation
value of A squared on the
state psi minus the expectation
value of A on the
state psi squared.
So that was also proven,
which, since this number is
greater than or equal to 0, this
is greater than or equal to 0.
And in particular, the
expectation value of A squared
is greater than or equal to
the square of the expectation
value of A.
So let's do a trivial
example for a computation.
Suppose somebody tells
you in an example
that the spin is in
an eigenstate of Sz.
So the state psi is what we
called the plus state, or the z
plus state.
And you want to know what
is the uncertainty delta Sx.
So you know if you're
in an eigenstate of Sz,
you are not in an
eigenstate of Sx-- in fact,
you're in a superposition
of two eigenstates of Sx.
Therefore, there should
be some uncertainty here.
And the question is, what is
the quickest way in which you
compute this uncertainty,
and how much is it?
So many times, the simplest way
is to just use this formula.
So let's do that.
So what is the expectation
value of Sx in that state?
So it's Sx expectation
value would
be given by Sx on this thing.
Now, actually, it's
relatively easy
to see that this expectation
value is going to be 0,
because in the state plus
there is equal amplitude
for Sx to be plus h bar over 2
or minus h bar over 2.
But suppose you
don't remember that.
In order to compute
this, it may come
handy to recall the matrix
representation of Sx, which
you don't need to know by heart.
So the state plus
is the first basis state,
represented by the
column vector 1 0.
And then we have Sx on plus
is equal to h bar over 2 times
the matrix 0 1 1 0,
acting on 1 0.
And that's equal to h bar
over 2 times the vector 0 1.
The first entry gives you 0,
and the second one gives you 1.
So that's, in fact, equal to h
bar over 2 times the state minus.
So here you get h bar over 2
times the overlap of plus
with minus, and you know plus
and minus are orthogonal,
so 0 is expected.
Well, are we going to
get zero uncertainty?
No, because Sx
squared, however, does
have some expectation value.
So what is the expectation
value of Sx squared?
Well, there's an advantage here.
You may remember that this
Sx squared is a funny matrix.
It's a multiple of the
identity, because if you square
this matrix, you get the
multiple of the identity.
So Sx squared is h bar over 2
squared times the identity
matrix-- the two by
two identity matrix.
So the expectation
value of Sx squared
is h bar over 2 squared
times expectation value
of the identity.
And on any state,
the expectation value
on any normalized state,
the expectation value
of the identity
will be equal to 1.
So this is just h
bar over 2 squared.
So back to our uncertainty,
delta Sx squared
would be equal to the
expectation value of Sx squared
minus the square of the
expectation value of Sx.
This was 0.
This thing was equal to
h bar over 2 squared,
and therefore, delta Sx
is equal to h bar over 2.
So I just wanted to make
you familiar with that.
You can compute these
things-- these norms and all
these equations are pretty
practical, and easy to use.
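If you like, the same computation can be checked numerically. Here is a minimal sketch (assuming the standard matrix representation of Sx quoted above, and working in units where h bar is 1):

```python
import numpy as np

hbar = 1.0  # work in units where h bar = 1

# standard matrix representation of Sx in the z basis
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)

# the z-plus state is the column vector (1, 0)
psi = np.array([1, 0], dtype=complex)

exp_Sx = np.real(psi.conj() @ Sx @ psi)          # <Sx>
exp_Sx2 = np.real(psi.conj() @ (Sx @ Sx) @ psi)  # <Sx^2>
delta_Sx = np.sqrt(exp_Sx2 - exp_Sx**2)          # the uncertainty

print(exp_Sx)    # 0.0
print(delta_Sx)  # 0.5, which is h bar over 2
```

The first expectation value vanishes, but Sx squared is proportional to the identity, so the uncertainty comes out h bar over 2, exactly as on the board.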
So today what we have
to do is the following--
we're going to establish
the uncertainty principle.
We're going to just prove it.
And then, once we have
the uncertainty principle,
we'll try to find some
applications for it.
So before doing
an application, we
will discuss the case of
the energy time uncertainty
principle, which is
slightly more subtle
and has interesting
connotations that we
will develop today.
And finally, we'll use
the uncertainty principle
to learn how to find bounds
for energies of ground states.
So that will be a rigorous
application of the uncertainty
principle.
So the uncertainty principle
talks about two operators
that are both
Hermitian, and states
the following-- so given
the theorem, or uncertainty
principle, given two
Hermitian operators A and B,
and a state psi normalized, then
the following inequality holds.
And we're going to write it in
one way, then in another way.
Delta A psi squared times
delta B-- sometimes people
in order to avoid cluttering
don't put the psi.
I don't know whether
to put it or not.
It does look a little more
messy with the psi there,
but it's something you
have to keep in mind.
Each time you have
an uncertainty,
you are talking about
some specific state
that should not be forgotten.
So maybe I'll erase it to
make it look a little nicer.
Delta B squared-- now
it's an inequality.
So not just equality,
but inequality.
That product of uncertainties
must exceed a number--
a computable number-- which is
given by the following thing.
OK, so here it is.
This is a number, is
the expectation value
of this strange operator
in the state psi squared.
Now, such a statement is
perhaps somewhat confusing,
because you wish to know
what kind of number this is.
Could this be a complex number?
If it were a complex
number, why am I squaring?
That doesn't make any sense.
Inequalities-- these
are real numbers.
Deltas are defined
to be real numbers.
They're the norms.
So this is real positive.
This would make no sense if
this would be a complex number.
So this number better be real.
And the way it's
written, it seems
to be particularly
confusing, because there
seems to be an i here.
So at first sight, you might
say, well, can it be real?
But the thing that you
should really focus here
is this whole thing.
This is some operator.
And against all first
impressions, this operator--
formed by taking 1 over i times
the commutator of A and B,
where the commutator is A B
minus B A-- is Hermitian.
That's surprising because, in
fact, if you have two Hermitian
operators and you take
the commutator, the answer
by itself is not Hermitian.
And that you know already--
x with p is equal to i h bar.
These are Hermitian operators,
and suddenly the commutator
is not a Hermitian operator.
You have the identity here.
A Hermitian operator
equal to a number times
the identity would need
that number to be real.
So there's an extra i,
that's your first hint
that this i is important.
So the fact is that
this operator as defined
here is Hermitian, because
if you take 1 over i A B--
and we're going to try to
take its Hermitian conjugate--
we have 1 over i A B
minus B A. And we're
taking the Hermitian conjugate.
Now, the i is going to
get complex conjugated,
so you're going to
get 1 over minus i.
The Hermitian
conjugate of a product
is the Hermitian conjugate
in opposite order.
So it would be B dagger A
dagger minus A dagger B dagger.
And of course, these
operators are Hermitian,
so 1 over minus i
is minus 1 over i.
And here I get B A minus
A B, so with the minus sign,
this is 1 over i times
A B minus B A again.
So the operator is equal to
its dagger-- its adjoint.
And therefore, this
operator is Hermitian.
And as we proved,
the expectation value
of any Hermitian
operator is real.
And we're in good shape.
We have a real number.
It could be negative.
But a real number, when
you square it,
gives a non-negative number.
So this makes sense.
We're writing something
that at least makes sense.
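This is easy to check numerically as well. A quick sketch (with the Pauli matrices standing in for A and B, purely as an illustration):

```python
import numpy as np

# two Hermitian operators: Pauli matrices as a concrete test case
A = np.array([[0, 1], [1, 0]], dtype=complex)     # sigma_x
B = np.array([[0, -1j], [1j, 0]], dtype=complex)  # sigma_y

comm = A @ B - B @ A  # [A, B] is anti-Hermitian...
op = comm / 1j        # ...so (1/i)[A, B] is Hermitian

print(np.allclose(comm.conj().T, -comm))  # True
print(np.allclose(op.conj().T, op))       # True
```

The commutator alone picks up a minus sign under the dagger; dividing by i repairs it, just as in the argument above.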
Another way, of course,
to write this equation,
if you prefer--
this inequality, I
mean-- is to take
the square root.
So you could write it
delta A times delta B.
Since this is a real number,
I can take the square root
and write this as the absolute
value of psi, 1 over 2i,
A B commutator, psi.
And these bars here
are absolute value.
They're not norm of a vector.
They are not norm
of a complex number.
They are just absolute value,
because the thing inside
is a real thing.
So if you prefer,
whatever you like better,
you've got here the statement
of the uncertainty principle.
So the good thing about
this uncertainty principle
formulated this way is that
it's completely precise,
because you've defined
uncertainties precisely.
Many times, when you first
study the uncertainty principle,
you don't define
uncertainties precisely,
and the uncertainty
principle is something
that goes with a tilde-- it's
approximately equal to this.
And you make statements that
are intuitively interesting,
but are not thoroughly precise.
Yes, question, yes.
AUDIENCE: Should that
be greater or equal?
PROFESSOR: Greater than or equal
to, yes-- no miracles here.
Other question?
Other question?
So we have to prove this.
And why do you
have to prove this?
This is a case,
actually, in which
many interesting questions
are based on the proof.
Why would that be the case?
Well, a question that is
always of great interest
is reducing uncertainties.
Now, if two operators commute,
this right-hand side is 0
and it just says
that the uncertainty
could be made
perhaps equal to 0.
It doesn't mean that
the uncertainty is 0.
It may depend on the state,
even if the operators commute.
This is just telling
you it's bigger than 0,
and perhaps by being clever,
you can make it equal to 0.
Similarly, when you
have two operators that
just don't commute, it
is of great importance
to try to figure out if there
is some states for which
the uncertainty
relation is saturated.
So this is the question
that, in fact, you could not
answer if you just know this
theorem written like this,
because there's
no statement here
of what are the conditions
for which this inequality is
saturated.
So as we'll do the proof,
we'll find those conditions.
And in fact, they go
a little beyond what
the Schwarz
inequality would say.
I mentioned last time that
this is a classic example
of something that looks
like the Schwarz inequality,
and indeed, that will
be the central part
of the demonstration.
But there's one extra step
there that we will have to do.
And therefore, if you
want to understand
when this is saturated, when
do you have minimum uncertainty
states, then you need
to know the proof.
So before we do, of
course, even the proof,
there's an example--
the classic illustration
that should be mentioned--
A equal x and B equal p,
with x p equal i h bar
times the identity.
So delta x squared
delta p squared
is greater than or equal to psi,
1 over 2i times the commutator--
which is i h bar times the
identity-- psi, squared.
And what do we get here?
The i's cancel, the h bar over 2
goes out and gets squared,
and everything else is equal to
1, because psi is normalized.
So the precise version of
the uncertainty principle
is this one for x and p.
And we will, of course,
try to figure out
when we can saturate this.
What kind of wave
functions saturate them?
You know the ones that are
just sort of strange-- if x
is totally localized, the
uncertainty of momentum
must be infinite, because
if delta x is 0, well,
to make this something that
at least doesn't contradict
the identity, delta
p better be infinite.
Similarly, if you
have an eigenstate
of p, which is a wave that
is totally delocalized,
you have infinity
here and 0 here.
Well, there are interesting
states that have both,
and we're going to
try to find the ones
of minimum uncertainty.
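As a numerical illustration of the x, p case (a sketch; the grid, the width sigma, and hbar = 1 are arbitrary choices), a Gaussian wave packet comes out saturating the bound:

```python
import numpy as np

hbar = 1.0
x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]
sigma = 1.3  # arbitrary width for this check

# normalized Gaussian wave packet
psi = np.exp(-x**2 / (4 * sigma**2))
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

# <x> and <x^2> by direct integration
ex = np.sum(x * np.abs(psi)**2) * dx
ex2 = np.sum(x**2 * np.abs(psi)**2) * dx
delta_x = np.sqrt(ex2 - ex**2)

# p = -i hbar d/dx via finite differences
dpsi = np.gradient(psi, dx)
d2psi = np.gradient(dpsi, dx)
ep = np.real(np.sum(psi.conj() * (-1j * hbar) * dpsi) * dx)
ep2 = np.real(np.sum(psi.conj() * (-hbar**2) * d2psi) * dx)
delta_p = np.sqrt(ep2 - ep**2)

print(delta_x * delta_p)  # close to 0.5, which is h bar over 2
```

A wider Gaussian trades a bigger delta x for a smaller delta p, but the product stays pinned at h bar over 2; we will see shortly why Gaussians are exactly the saturating states.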
So OK, we've stated
the principle.
We've given an example.
We've calculated an uncertainty.
Let us prove the theorem.
So as we mentioned before, this
idea that the uncertainty is
a norm, is a good one.
So let's define two
auxiliary states--
f, a state f, which is going
to be A minus the expectation
value of A on psi.
And we can put the ket here.
And g, which is going to
be B minus the expectation
value of B, psi.
Now what do we know about this?
Well the uncertainties are
the norms of these states,
so the norm squared
of these states
are the uncertainty squared.
So delta A squared is
f f, the norm squared.
And delta B squared is g g.
And Schwarz' inequality
says that the norm
of f times the norm
of g is greater than
or equal to the absolute
value of the inner product of f
with g.
So squaring this thing,
which is convenient perhaps
at this moment, we have f
f-- norm squared of f-- norm
squared of g must be greater
than or equal to f g squared,
absolute value squared.
So this is Schwarz.
And let me just
make a note-- here
we know when this is saturated.
It will be saturated
if f is parallel to g.
If these two vectors are
parallel to each other,
the Schwarz inequality
is saturated.
So that's something
to keep in mind.
We'll use it soon enough.
But at this moment, we can
simply rewrite this as delta
A squared times delta B
squared-- after all, those
were definitions-- are
greater than or equal--
and this is going to be a
complex number in general,
so f g in Schwarz' inequality
is just a complex number.
So this is real of f g squared,
plus the imaginary part
of f g squared.
So that's what we have--
real and imaginary part.
So let's try to get what f g is.
So what is f g?
Let's compute it.
Well we must take the bra
corresponding to this,
so this is psi.
Since the operator
is Hermitian, you
have A minus
expectation value of A,
and here you have B minus
expectation value of B psi.
Now we can expand this, and
it will be useful to expand.
But at the same time, I will
invent a little notation here.
I'll call this A check,
and this B check.
And for reference, I'll put
that this is psi A check B check
psi.
On the other hand, let's
just compute what we get.
So what do we get?
Well, let's expand this.
The first term is
psi, A B psi, and we're
not going to be able to do
much about that.
And then we start getting
funny terms-- A cross with B,
and that's-- if you
think about it a second,
this is just going to be equal
to the expectation value of A
times the expectation of B,
because the expectation value
of B is a number, and then A
is sandwiched between two psis.
So from this cross product,
you get expectation value
of A, expectation value
of B, with a minus sign.
From this cross product, you
get the expectation value of A
and expectation value of B--
another one with a minus sign.
And then one with a plus sign.
So the end result is a
single one with a minus sign.
So expectation value of A,
expectation value of B. Now,
if I exchange f and
g-- I would like
to compute not only the fg inner
product, but the gf inner product.
And you may say why?
Well, I want it because
I need the real part
and the imaginary parts, and
gf is the complex conjugate
of f g, so might
as well compute it.
So what is gf?
Now you don't have to do
the calculation again,
because basically you
change g to f or f
to g by exchanging A
and B. So I can just
say that this is psi B A psi
minus expectation value of A
times expectation value of B.
And if I write it
this way, I say
it's just psi B
check A check psi.
OK so we've done some work, and
the reason we've done this work
is because we
actually need to write
the right-hand side
of the inequality.
And let's, therefore,
explore what these ones are.
So for example, the
imaginary part of f g
is 1 over 2i f g minus its
complex conjugate-- gf.
Imaginary part of
a complex number
is z minus z star divided by 2i.
Now, f g minus g f
is actually simple,
because this product of
expectation values cancel,
and this gives me the
commutator of A with B.
So this is 1 over 2i times
the expectation value on psi
of the commutator of A with B.
So actually, that looks
exactly like what we want.
And we're not going to be
able to simplify it more.
We can put the 1 over 2i inside.
That's fine.
It's sort of in the operator.
It can go out, but we're not
going to do better than that.
You already recognize,
in some sense,
the inequality we want
to prove, because if this
is that, you could ignore
this and say, well,
it's anyway greater
than this thing.
And that's this term.
But let's write the other one,
at least for a little while.
Real of fg would be
1/2 of fg plus gf.
And now it is your choice
how you write this.
There's nothing great
that you can do.
The sum of these two terms
has A B plus B A and then twice
this product of expectation
values, so it's nothing
particularly inspiring.
So you put these
two terms and just
write it like this-- 1/2
of psi, anti-commutator
of A check with B check, psi.
Anti-commutator, remember,
is this combination
of operators in which you
take the product in one way,
and add the product
in the other way.
So I've used this
formula to write this,
and you could write it as an
anti-commutator of A and B
minus 2 times the
expectation values,
or whichever way you want it.
But at the end of the
day, that's what it is.
And you cannot simplify it much.
So your uncertainty
principle has become delta
A squared delta
B squared greater
than or equal to the expectation
value, psi 1 over 2i A B
commutator psi, squared, plus
the expectation value, psi
1 over 2 anti-commutator of
A check and B check psi, squared.
And some people call this
the generalized uncertainty
principle.
You may find some
textbooks that tell you
"Prove the generalized
uncertainty principle,"
because that's
really what you get
if you follow the rules
and Schwarz' inequality.
So it is of some interest.
It is conceivable that sometimes
you may want to use this.
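As a numerical sanity check of this generalized form (a sketch; the spin operators and the particular state are arbitrary choices for illustration):

```python
import numpy as np

hbar = 1.0
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]], dtype=complex)

# an arbitrary normalized state, just for the check
psi = np.array([0.6, 0.8j], dtype=complex)

def ev(op):
    # expectation value <psi| op |psi>
    return psi.conj() @ op @ psi

dA2 = np.real(ev(Sx @ Sx) - ev(Sx)**2)  # (delta A)^2
dB2 = np.real(ev(Sy @ Sy) - ev(Sy)**2)  # (delta B)^2

Ac = Sx - ev(Sx) * np.eye(2)  # A check = A - <A>
Bc = Sy - ev(Sy) * np.eye(2)  # B check = B - <B>

comm_term = np.real(ev((Ac @ Bc - Bc @ Ac) / 2j))**2  # imaginary-part term
anti_term = np.real(ev((Ac @ Bc + Bc @ Ac) / 2))**2   # real-part term

print(dA2 * dB2 >= comm_term + anti_term - 1e-12)  # True
```

Both terms on the right are real and non-negative, and the product of the squared uncertainties stays above their sum, as the derivation requires.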
But the fact is that
this is a real number.
This is a Hermitian
operator as well.
This is a real number.
This is a positive number.
So if you ignore it, you still
have the inequality holding.
And many times-- and that's
the interesting thing--
you really are
justified to ignore it.
In fact, I don't know
of a single example--
perhaps somebody can tell me--
in which that second term is
useful.
So what you say at this moment
is go ahead, drop that term,
and get an inequality.
So it follows directly from
that, from this inequality,
that delta A squared
delta B squared
is greater than or equal--
you might say, well,
how do you know it's equal?
Maybe that thing cannot be 0.
Well, it can be 0
in some examples.
So it's still greater than
or equal to psi 1 over 2i
A B psi squared.
And that's by ignoring
the positive quantity.
So that is really the proof
of the uncertainty principle.
But now we can ask what
are the things that
have to happen for the
uncertainty principle
to be saturated?
That you really have delta A
delta B equal to this quantity,
so when can we saturate?
OK, what do we need?
First we need Schwarz
inequality saturation.
So f and g must be states that
are proportional to each other.
So we need one, that
Schwarz is saturated.
Which means that g is
some number beta times f,
where beta is a complex number.
This is complex vector
space, so parallel
means multiply by
a complex number.
That's still a parallel vector.
So this is the
saturation of Schwarz.
Now, what else do we need?
Well, we need that
this quantity be
0 as well, that the real part
of this thing is equal to 0.
Otherwise, you really
cannot reach it.
The true inequality is this,
so if you have Schwarz,
you've saturated.
This thing is equal
to this thing.
The left-hand side is equal
to this whole right-hand side.
Schwarz buys you that.
But now we want this to
be just equal to that.
So this thing must be
0, so the real part of f
overlap g-- of fg must be 0.
What does that mean?
It means that fg
plus gf has to be 0.
But now we know what g is,
so we can plug it in here.
g is beta times f;
beta goes out, and
you get beta f f.
And when you form the bra
g, beta becomes beta star,
so you get beta f f plus beta
star f f equals 0.
And since f need not have
zero norm, because there
is some uncertainty
presumably, you
have that beta plus beta star
is equal to 0, or real of beta
is equal to 0.
So that said, it's not that bad.
You need two things--
that the f and g vectors
be parallel with a
complex constant,
but actually, that constant
must be purely imaginary.
So beta is purely
imaginary-- this beta
is equal to i lambda,
with lambda real.
And then we are in shape.
So for saturation,
we just need g
to be equal to beta f.
So let me write that
equation over here.
So g-- what was g?
It's B minus the expectation
value of B, acting on psi.
That must be equal to beta,
which is i lambda, times
A minus the expectation
value of A, acting on psi.
So this is the final
condition for saturation.
Now, that's a
strange-looking equation.
It's not all that
obvious how you're even
supposed to begin solving it.
Why is that?
Well, you're trying
to look for a psi,
and you have a
constraint on the psi.
The psi must satisfy this.
I actually will tell
both Arum and Will
to discuss some of these
things in recitation--
how to calculate minimum
uncertainty wave packets based
on this equation,
and what it means.
But in principle, what
do you have to do?
You have some kind of
differential equation,
because you have, say, x and
p, and you want to saturate.
So this is x, and this is p.
Since for p you want to use
the coordinate representation,
this will be a derivative, and
this will be a multiplication,
so you'll get a differential
equation for the wave function.
So you write an answer
for the wave function.
You must calculate the
expectation value of B.
You must calculate the
expectation value of A,
and then plug into
this equation,
and try to see if
your answer allows
a solution-- and a solution
with some number here, lambda.
At least one thing
I can tell you
before you try this too hard--
this lambda is essentially
fixed, because we can take
the norm of this equation.
And that's an interesting
fact-- take the norm.
And what is the norm of this?
This is delta B, the
norm of this state.
And the norm of i lambda--
well, the norm of i is 1,
and the norm of lambda is
the absolute value of lambda,
because lambda was real.
And you have delta A
of psi here, of course.
So lambda can be either
plus or minus delta B
of psi over delta A of psi.
So that's not an
arbitrary constant.
It's fixed by the
equation already,
in terms of things
that you know.
And therefore, this
will be a subject
of problems in a little bit of
your recitation, in which you,
hopefully, discuss how to find
minimum uncertainty packets.
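As a preview of where that equation leads in the x, p case (a sketch; the recitation will do this carefully), writing the saturation condition in the coordinate representation gives a first-order differential equation whose normalizable solution is a Gaussian:

```latex
(\hat p - \langle p\rangle)\,\psi = i\lambda\,(\hat x - \langle x\rangle)\,\psi
\;\Longrightarrow\;
-i\hbar\,\psi'(x) - \langle p\rangle\,\psi = i\lambda\,(x - \langle x\rangle)\,\psi ,
\qquad
\psi(x) = N\,\exp\!\Big(\frac{i\langle p\rangle x}{\hbar}
  - \frac{\lambda\,(x - \langle x\rangle)^2}{2\hbar}\Big),
\quad \lambda = \frac{\Delta p}{\Delta x} > 0 .
```

Normalizability selects the positive sign of lambda, so the minimum uncertainty wave packets for x and p are precisely the Gaussians.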
All right, so that's
it for the proof
of the uncertainty principle.
And as I told you, the proof
is useful in particular
to find those special states
of saturated uncertainty.
We'll have a lot to say
about them for the harmonic
oscillator later on, and in
fact throughout the course.
So are there any questions?
Yes.
AUDIENCE: So if we have one of
the states and an eigenstate,
we know that [INAUDIBLE]
is 0 and we then
mandate that the uncertainty
of the other variable
must be infinite.
But is it even possible to
talk about the uncertainty?
And if so, are we
still guaranteed--
we know that it's
infinite, but it's
possible for 0 and an infinite
number to multiply [INAUDIBLE]
PROFESSOR: Right, so you're in a
somewhat uncomfortable position
if you have zero uncertainty.
Then you need the other
one to be infinite.
So the way, presumably,
you should think of that,
is that you should take limits
of sequences of wave functions
in which the uncertainty
in x is going to 0,
and you will find that
as you take the limit,
and delta x is going to 0, and
delta p is going to infinity,
you can still have that.
Other questions?
Well, having done this, let's
try the more subtle case
of the uncertainty principle
for energy and time.
So that is a pretty
interesting subject, actually.
And should I erase here?
Yes, I think so.
Actually, Griffiths
says that it's usually
badly misunderstood, this
energy-time uncertainty
principle, but your
misunderstanding seldom
leads to a serious mistake.
So you're saved.
It's used in a hand-wavy way,
and it's roughly correct,
although people say all
kinds of funny things
that are not exactly right.
So energy time
uncertainty-- so let
me give a small motivation--
a hand-wavy motivation,
so it doesn't get us
very far, but at least it
gives you a picture
of what's going on.
And these uncertainty
relations, in some sense,
have a basis on some
simple statements that
are totally classical, and
maybe a little imprecise,
but incontrovertible,
about looking at waveforms,
and trying to figure
out what's going on.
So for example, suppose in
time you detect a fluctuation
that as time progresses,
just suddenly turns on.
Some wave that just dies
off after a little while.
And you have a
good understanding
of when it started,
and when it ended.
And there's a time T.
So whenever you have
a situation like that,
you can try to count the
number of waves-- full waves
that you see here.
So the number of waves
would be equal to--
or periods, number
of full waves--
would be the total time divided
by the period of this wave.
Sometimes T denotes
the period, but here T is
the total time, and the
period is 2 pi over omega.
So we say this is
omega T over 2 pi.
Now, the problem with these
waves that begin and end,
is that you can't
quite see or make
sure that you've got
the full wave here.
So in a hand-wavy
way, we say that even
if the signal is
perfectly well defined
and you know the
shape exactly-- it's
been measured-- you can't
quite tell whether you've
got a full wave at the ends or
a quarter of a wave more,
so there's an uncertainty
delta N which is of order 1.
You miss half on one side,
and half on the other side.
So if you have an
uncertainty here of order 1,
and you have no
uncertainty in T,
you would claim that
you have, actually,
in some sense, an
uncertainty in what omega is.
Omega might be well
measured here, but somehow
towards the end you
can't quite see.
T, we said, was precise,
so taking a delta of this
relation gives delta N equal to
T delta omega over 2 pi,
and that's about 1.
So delta omega is about
2 pi over T.
So this is a
classical statement.
An electrical engineer
would not need
to know any quantum mechanics
to say that's about right,
and you can make it
more or less precise.
But that's a
classical statement.
In quantum mechanics,
all that happens
is that something has
become quantum, and the idea
that you have
something like this,
we can associate it with
a particle, a photon,
and in which case, the
uncertainty in omega
is uncertainty in energy.
So for a photon, the energy
is equal to h bar omega,
so delta omega times h bar
is equal to the uncertainty
in energy.
So if you plug it in here,
you multiply it by h bar here,
and you would get delta E
times T is equal to 2 pi h bar.
And then you have to add words.
What is T?
Well, this T is the
time it takes the photon
to go through your detector.
You've been seeing it.
You saw a wave.
You recorded it, and took
a time T-- began, ended.
And so it's the
time it took you
to have the pulse go through.
And that time is related
to an uncertainty
in the energy of the photon.
And that's sort of the beginning
of a time energy uncertainty
relationship.
This is quantum,
because the idea
that photons carry energies
and they're quantized--
this is a single photon-- and
this connection with energy
is quantum mechanics.
So this is good and
reasonable intuition, perhaps.
And it can be the basis
of all kinds of things.
But it points out the fact that
the more delicate part here
is T. How could I speak
of a time uncertainty?
And the fact is that you can't
speak of a time uncertainty
really precisely.
And the reason is,
because there's
no Hermitian operator
for which we could say,
OK the eigenstates of this
Hermitian operator are times,
and then you have a norm,
and it's an uncertainty.
So you can't do it.
So you have to do something
different this time.
And happily, there's
something you
can do that is precise
and makes sense.
So we'll do it.
So what we have to do is just
try to use the uncertainty
principle that we have,
and at least one operator.
We can use something
that is good for us.
We want uncertainty in energy,
and we have the Hamiltonian.
It's an operator.
So for that one, we can use
it, and that's the clue.
So you'll take A to be
the Hamiltonian, and B
to be some operator
Q that may depend
on some things-- for example,
x and p, or whatever you want.
But the one thing I want
to ask from this operator
is that Q has no explicit time
dependence-- no explicit time
dependence whatsoever.
So let's see what this gives us
as an uncertainty relationship.
Well, it would give us
that delta H squared
times delta Q
squared would be
greater than or equal
to the square of psi, 1
over 2i, H commutator with Q, psi.
OK, that's it.
Well, but in order to get
some intuition from here,
we better be able
to interpret this.
This doesn't seem
to have anything
to do with energy and time.
So is there something
to do with time here?
That is, in fact, a
very well-known result
in quantum mechanics--
that somehow commutators
with the Hamiltonian test the
time derivative of operators.
So whenever you see an
H with Q commutator,
you think ah, that's
roughly dQ dt.
And we'll see what
happens with that.
And you say, oh, dQ
dt-- but it doesn't
depend on t, you said, so
shouldn't that be 0?
No, it's not 0.
There's no explicit dependence,
but we'll see what happens.
So at this moment, you really
have to stop for one second
and derive a familiar
result-- that may or may not
be that familiar
to you from 804.
I don't think it was
all that emphasized.
Consider expectation value of Q.
And then the
expectation of Q-- let
me write it as psi
Q psi, like this.
Now let's try to take the
time derivative of this thing.
So what is the time derivative
of the expectation value of Q?
And the idea being
that look, the operator
depends on some
things, and it can
have time-dependent
expectation value,
because the state
is changing in time.
So operators can have
time-dependent expectation
values even though the
operators don't depend on time.
So for example, this
depends on x and p,
and the x and p in a harmonic
oscillator are time dependent.
They're moving around, and this
could have time dependence.
So what do we get from here?
Well, if I have to take the
time derivative of this,
I have d psi dt here, Q
psi, plus psi Q d psi dt.
And in doing this, and not
differentiating Q itself,
I've used the fact that
this is an operator
and there's no time
anywhere there.
I didn't have to
differentiate Q.
So how do we evaluate this?
Well, you remember the
Schrodinger equation.
Here the Schrodinger
equation comes in,
because you have time
derivatives of your state.
So i h bar d
psi dt is equal to H psi.
That's a full time-dependent
Schrodinger equation.
So here, maybe, I
should write this
like that-- this is all
time-dependent stuff.
At this moment, I don't
ignore the time dependence.
The states are not
stationary states.
If they would be
stationary states,
there would be no
energy uncertainty.
So I have this, and therefore,
I plug this in here,
and what do we get? i h
bar, H psi, Q psi, plus psi,
Q, i h bar, H psi.
Now, I got the i h bar in the
wrong place-- sorry-- it's 1
over i h bar, and 1
over i h bar.
Now in the first term,
this factor comes out
as its complex
conjugate-- minus 1 over i h
bar-- because it's
on the first input.
H is Hermitian, so I can
send it to the other side,
to get psi, H Q psi.
Second term-- the 1
over i h bar just goes out,
and I don't have
to move anybody.
QH is there, psi.
So actually, the first one
is i over h bar, because
minus 1 over i equals i.
And I have here psi H Q,
and the second is minus i
over h bar psi Q H, so I
get i over h bar times
H Q minus Q H on psi.
So this is our final result--
d dt of the expectation value
of Q is equal to i over h bar
times the expectation value of
the commutator of H with Q.
So this is neat, and it should
always stick in your mind.
This is true.
We will see the
Heisenberg way of writing
this equation in a
little while-- not today,
but in a couple of weeks.
But maybe we write
it even more briefly
as i over h bar times the
expectation value of the
commutator of H with Q.
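To see this formula in action, here is a numerical sketch (the setup is an assumption for illustration: a spin one-half in a Hamiltonian H = omega Sz, with hbar = 1, starting in the x-plus state), comparing a numerical time derivative of the expectation value of Sx against i over hbar times the expectation value of the commutator:

```python
import numpy as np

hbar = 1.0
omega = 2.0  # assumed precession frequency
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)
H = omega * Sz  # diagonal, so the time evolution is easy to write

def psi_t(t):
    # U(t) = exp(-i H t / hbar) acting on the x-plus state (1, 1)/sqrt(2)
    phases = np.exp(-1j * np.diag(H).real * t / hbar)
    return phases * np.array([1, 1], dtype=complex) / np.sqrt(2)

def exp_Sx(t):
    p = psi_t(t)
    return np.real(p.conj() @ Sx @ p)

# numerical d<Sx>/dt versus (i/hbar) <[H, Sx]> at some time t
t, dt = 0.7, 1e-6
numeric = (exp_Sx(t + dt) - exp_Sx(t - dt)) / (2 * dt)

p = psi_t(t)
commutator = H @ Sx - Sx @ H
analytic = np.real((1j / hbar) * (p.conj() @ commutator @ p))

print(numeric, analytic)  # the two derivatives agree
```

Sx has no explicit time dependence, yet its expectation value precesses; the commutator with H tracks that motion exactly, as the derivation says.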
So what do we get from here?
Well, we can go back to
our uncertainty principle,
and rewrite it, having learned
that we have time derivative.
So time finally showed
up, and that's good news.
So we're maybe not too far
from a clear interpretation
of the uncertainty principle.
So we're going back to that top equation, so that what we have now is: delta H squared times delta Q squared is greater than or equal to that thing over there-- the expectation value of 1 over 2i times the commutator of H with Q, squared. There's some signs there, so what do we have? This equals 1 over 2i times h-bar over i times d dt of the expectation value of Q, all squared. So what I did here was to say that this expectation value of the commutator was h-bar over i times d dt of the expectation value of Q, and I plugged it in there.
So you square this thing, so
there's not too much really
to be done.
The i's don't matter at the end of the day-- it's a minus 1 that gets squared. So the h-bar over 2 does remain here, squared, and you have d dt of the expectation value of Q, squared.
Q is a Hermitian operator.
B was supposed to be Hermitian.
The expectation value is real.
The time derivative is real.
It could be going up or down.
So at the end of
the day, you have
delta H times delta Q is greater than or equal to h-bar over 2 times the absolute value of d dt of the expectation value of Q.
There we go.
This is, in a sense,
the best you can do.
Let's try to interpret
what we've got.
Well, we've got something
that still doesn't quite
look like a time
uncertainty relationship,
but there's time in there.
But it's a matter
of a definition now.
You see, if you have delta Q, and you divide it by the absolute value of d dt of the expectation value of Q, this is some sort of time. It has the units of time. And we can define it, if you wish, to be delta t.
And what, physically, does this delta t represent?
Well, it's roughly-- you
see, things change in time.
The rate of change of the
expectation value of Q
may not be uniform.
It may change fast, or it may change slowly.
But suppose it's changing.
Roughly, this ratio-- if the rate were constant-- is the time it takes the expectation value of Q to change by delta Q. It is like a distance divided by a velocity.
So this is roughly the time
needed for the expectation
value of Q to change by
delta Q, by the uncertainty.
So it's a measure of the time needed for a significant change-- if the uncertainty of Q is significant and comparable to Q itself, this is the time needed for a significant change.
Now this is pretty much all you
can do, except that of course,
once you write it like
that, you pull this down,
and you go up now,
delta H delta t
is greater than or equal to h-bar over 2.
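Continuing the numerical sketch with the same assumed spin one-half setup (h-bar = 1, H = 2 Sz, Q = Sx), one can compute delta t as delta Q divided by the rate of change of the expectation value, and compare delta H times delta t to h-bar over 2. This particular state happens to saturate the bound:

```python
import numpy as np

hbar = 1.0
Sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
H, Q = 2.0 * Sz, Sx                          # illustrative assumptions
psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)

def evolve(psi, t):
    w, v = np.linalg.eigh(H)
    return v @ (np.exp(-1j * w * t / hbar) * (v.conj().T @ psi))

def expval(psi, A):
    return (psi.conj() @ A @ psi).real

def uncertainty(psi, A):
    return np.sqrt(expval(psi, A @ A) - expval(psi, A) ** 2)

t, dt = 0.3, 1e-6
psi = evolve(psi0, t)
# |d<Q>/dt| by central finite difference
rate = abs(expval(evolve(psi0, t + dt), Q)
           - expval(evolve(psi0, t - dt), Q)) / (2 * dt)
delta_t = uncertainty(psi, Q) / rate         # time for <Q> to change by delta Q
prod = uncertainty(psi, H) * delta_t
print(prod, hbar / 2)                        # prod is >= hbar/2
```

For this state the product comes out to exactly h-bar over 2, the minimum the inequality allows.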
And this is the best you can
do with this kind of approach.
Yes?
AUDIENCE: [INAUDIBLE]
PROFESSOR: Yeah, I
simply define this,
which is a time that has
some meaning if you know what
the uncertainty of the operator
is and how fast it's changing--
it's the time needed for a change.
Once I defined this, I simply
brought this factor down here,
so that delta Q over this
derivative is delta t,
and the equation just
became this equation.
So we'll try to figure out a
little more of what this means
right away, but you can
make a few criticisms
about this thing.
You can say, look,
this delta time
uncertainty is not universal.
It depends which
operator Q you took.
True enough.
I cannot prove that it's
independent of the operator Q,
and many times I cannot even
tell you which operator Q is
the best operator
to think about.
But you can try.
And it does give
you-- first, it's
a mathematical statement about
how fast things can change.
And that contains physics, and
it contains a very precise fact
as well.
Actually, there's a version
of the uncertainty principle
that you will explore in
the homework that is, maybe,
an alternative picture of this,
and asks the following thing--
if you have a state
and a stationary state,
nothing changes in the state.
But if it's a stationary state, the energy uncertainty is 0, because the state is an eigenstate of the energy.
So nothing changes.
So you have to wait infinite
time for there to be a change,
and this makes sense.
Now you can ask the
following question-- suppose
I have a state that is not
an eigenstate of energy.
So therefore, for example,
the simplest thing
would be a superposition
of two eigenstates
of different energies.
You can ask, well, there
will be time evolution
and this state will
change in time.
So how can I get a
constraint on changes?
How can I approach changes?
And people discovered the
following interesting fact--
that if you have a state, it has
unit norm, and if it evolves,
it may happen that
at some stage,
it becomes orthogonal to
itself-- to the original one.
And that is a big change.
You become orthogonal
to what you used to be.
That's as big a
change as can happen.
And then you can ask,
is there a minimum time
for which this can happen?
What is the minimum
time in which
a state can change
so much that it
becomes orthogonal to itself?
And there is such an
uncertainty principle.
It's derived a little
differently from that.
And it says that if you take delta t to be the time it takes psi of x and t to become orthogonal to psi of x at time 0, then this delta t times delta E-- the uncertainty in the energy, which is the uncertainty in H-- is greater than or equal to h over 4.
Now a state may never
become orthogonal to itself,
but that's OK.
Then it's a big number
on the left-hand side.
But the quickest it
can do it is that.
And that's an interesting thing.
And it's a version of the
uncertainty principle.
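The two-eigenstate superposition mentioned above makes this concrete. In the sketch below (the energy values and units are arbitrary assumptions), an equal superposition of two energy eigenstates is evolved until it becomes orthogonal to its initial self; the product of that time with delta E comes out to pi h-bar over 2, which is exactly h over 4:

```python
import numpy as np

hbar = 1.0
E1, E2 = 0.0, 2.0                  # two energy eigenvalues (arbitrary choice)
dE = (E2 - E1) / 2                 # delta H of the equal superposition

def overlap(t):
    # <psi(0)|psi(t)> for psi = (|E1> + |E2>)/sqrt(2)
    return 0.5 * (np.exp(-1j * E1 * t / hbar) + np.exp(-1j * E2 * t / hbar))

# find the first time the overlap vanishes (scan a range containing one zero)
ts = np.linspace(0.0, 2.0, 200001)
t_perp = ts[np.argmin(np.abs(overlap(ts)))]
print(t_perp * dE, np.pi * hbar / 2)   # both equal pi*hbar/2 = h/4
```

So the equal two-state superposition saturates the bound: no state with the same delta E can reach orthogonality faster.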
I want to make a
couple more remarks,
because this thing
is mysterious enough
that it requires thinking.
So let's make some precise
claims about energy
uncertainties and then
give an example of what's
happening in the
physical situation.
Was there a question?
Yes.
AUDIENCE: [INAUDIBLE]
PROFESSOR: You're going to
explore that in the homework.
Actually, I don't think
you're going to show it, but--
AUDIENCE: [INAUDIBLE]
H bar [INAUDIBLE]
it's even less than the
uncertainty [INAUDIBLE]
PROFESSOR: It's a
different statement.
It's a very precise way of
measuring, creating a time.
It's a precise
definition of time,
and therefore,
there's no reason why
it would have been the same.
So here is a statement
that is interesting-- is
that the uncertainty delta
E in an isolated system
is constant-- doesn't change.
And by an isolated system, I mean a system on which there are no external influences-- a system in which you actually have a time-independent Hamiltonian. So H is a time-independent Hamiltonian.
Now that, of course, doesn't
mean the physics is boring.
Time-independent Hamiltonians are quite interesting,
but you have a whole system.
Let's take it to be isolated.
There are no time-dependent influences acting on it, and H is a time-independent Hamiltonian.
So I want to use
this statement to say
the following--
if I take Q equals
H in that theorem
over there, I get
that d dt of the expectation value of H would be what? It would be i over h-bar times the expectation value of H commutator with H. Since H is time independent, the condition here-- that Q has no explicit time dependence-- is satisfied. And that commutator is 0.
However complicated an operator
is, it commutes with itself.
So the expectation value of
the energy doesn't change.
We call that energy
conservation.
But still, if you take Q
now equal to H squared,
the time derivative of
the expectation value of H
squared, you get i over h-bar times the expectation value of H commutator with Q, which is H squared now. And that's also 0. So the time derivative of the expectation value of any power of H vanishes.
And therefore, we have
that the time derivative
of the uncertainty
of H squared--
which is the time derivative of the expectation value of H squared minus the square of the expectation value of H-- well, we've shown each of the terms on the right-hand side is 0, so this is 0.
So delta H is constant.
So the uncertainty-- delta
E or delta H of the system
is constant.
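This is easy to see numerically as well. The sketch below is an assumption-laden illustration (a random 4-state Hermitian Hamiltonian, units with h-bar = 1): evolve a generic non-stationary state and watch delta H stay constant:

```python
import numpy as np

hbar = 1.0
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2                     # random time-independent Hermitian H
psi0 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi0 /= np.linalg.norm(psi0)                 # generic state, not an H eigenstate

def evolve(t):
    # psi(t) = exp(-i H t / hbar) psi(0)
    w, v = np.linalg.eigh(H)
    return v @ (np.exp(-1j * w * t / hbar) * (v.conj().T @ psi0))

def delta_H(psi):
    eH = (psi.conj() @ H @ psi).real
    eH2 = (psi.conj() @ H @ H @ psi).real
    return np.sqrt(eH2 - eH ** 2)

vals = [delta_H(evolve(t)) for t in (0.0, 1.3, 7.9)]
print(vals)   # the same nonzero number at every time
```

The uncertainty is nonzero, because the state is not stationary, but it does not change as the state evolves.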
So what do we do with that?
Well it helps us think a little
about time dependent processes.
And the example we
must have in mind
is perhaps the one
of a decay that
leads to a radiation of
a photon, so a transition
that leads to a
photon radiation.
So let's consider that example.
So we have an atom in
some excited state,
decays to the ground state
and shoots out the photon.
It's an unstable state, because if it were stable, it wouldn't change in time. And the excited state of an atom is an unstable state-- it decays into the ground state.
And it makes a photon.
Now this idea of the conservation of the energy uncertainty at least helps you organize your thoughts in a situation that is typically treated with a lot of hand-waving.
So what happens in such decay?
There's a lifetime, which is the typical time you have to wait for that excited state to decay. And this lifetime is called tau.
And certainly, as the lifetime goes by and the decay happens, some observable changes a lot.
Some observable Q
must change a lot.
Maybe the position of the electron in an orbit, or its angular momentum, or the square of its momentum-- some observable that we could compute in a more detailed atomic calculation must change a lot.
So associated with this lifetime-- the time it takes some observable to change a lot, because it takes that long for this thing to change-- there will be an energy uncertainty.
So how does the energy
uncertainty reflect itself?
Well, you have a ground state.
And you have this excited state.
But generally, when you
have an excited state
due to some interactions
that produce instability,
you actually have a
lot of states here
that are part of
the excited state.
So you have an excited
state, but you do have,
typically, a lot of
uncertainty-- but not
a lot-- some uncertainty
of the energy here.
The state is not
a particular one.
If it would be a
particular one, it
would be a stationary state--
would stay there forever.
Nevertheless, it's a
combination of some things,
so it's not quite
a stationary state.
It couldn't be a
stationary state,
because it would be eternal.
So somehow, the
dynamics of this atom
must be such that there's
interactions between, say,
the electron and the nucleus, or
possibly a radiation field that
makes the state of
this electron unstable,
and associated to it an
uncertainty in the energy.
So there's an uncertainty
here, and this particle--
this electron goes eventually to the ground state, and it emits a photon.
So there is an uncertainty delta E associated to this lifetime, with delta E times tau similar to h-bar over 2.
And this would be
the delta E here,
because your state
must be a superposition
of some states over there.
And then what happens later?
Well, this particle goes
to the ground state--
no uncertainty any more
about what its energy is.
So the only possibility
at this moment
consistent with the conservation
of uncertainty in the system
is that the photon
carries the uncertainty.
So that photon must have
an uncertainty as well.
So delta energy
of the photon will
be equal to h bar delta
omega, or h delta nu.
So the end result is that
in a physical decay process,
there are uncertainties.
And the uncertainty
gets carried out,
and it's always there--
the delta E here
and the photon having
some uncertainty.
Now one of the most famous
applications of this thing
is related to the hyperfine
transition of hydrogen.
And we're very lucky in physics.
Physicists are very lucky.
This is a great break for
astronomy and cosmology,
and it's all based on this
uncertainty principle.
You have the hyperfine
transition of hydrogen.
So we will study
later in this course
that because of the
proton and electron
spins in the hydrogen
atom, there's
a splitting of
energies having to do
with the hyperfine interaction.
It's a magnetic
dipole interaction
between the proton
and the electron.
And there's going
to be a splitting.
And there's a transition
associated with this splitting.
So there's a hyperfine
splitting-- the ground state
of the hyperfine
splitting of some states.
And it's the top state
and the bottom state.
And as the system decays,
it emits a photon.
This photon is approximately
a 21 centimeter wavelength--
is the famous 21 centimeter
line of hydrogen.
And it corresponds to
about 1420 megahertz.
So far, so good.
There's an energy
splitting here,
21 centimeters wavelength,
5.9 times 10 to the minus 6
eV in here.
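The numbers quoted here are easy to check. A small arithmetic sketch (constants rounded; not from the lecture) converts the 21 centimeter wavelength into a frequency and a photon energy:

```python
# order-of-magnitude check of the 21 cm line numbers
h = 6.626e-34        # Planck's constant, J*s
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per eV

lam = 0.21           # wavelength in meters
nu = c / lam         # frequency in Hz
E = h * nu / eV      # photon energy in eV
print(nu / 1e6, E)   # roughly 1420 MHz and 5.9e-6 eV
```

A 0.21 meter wavelength indeed corresponds to about 1420 megahertz and about 5.9 times 10 to the minus 6 eV.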
But that's not the
energy difference
that matters for
the uncertainty,
just like this is not the
energy difference that
matters for the uncertainty.
What matters for the uncertainty
is how broad this state
is, due to interactions
that will produce the decay.
It's a very funny,
magnetic transition.
And how long is the
lifetime of this state?
Anybody know?
A second, a millisecond, a day?
Nobody?
Ten million years-- a long time. That's the lifetime tau.
A year is about pi times 10 to the 7 seconds-- that's pretty accurate.
Anyway, 10 million
years is a lot of time.
It's such a large time that it corresponds to an energy uncertainty that is so extraordinarily small that the corresponding wavelength uncertainty, or frequency uncertainty, of this 1420 megahertz line is tiny-- I think the fractional uncertainty in lambda is of the order of 10 to the minus 8.
The line is extremely sharp,
so it's not a fussy line
that it's hard to measure.
It's the sharpest possible line.
And it's so sharp because of
this 10 million years lifetime,
and the energy time
uncertainty relationship.
That's it for today.
