>> Well, as they say, all good
things must come to an end
and even this course
must come to an end.
Today, what I want to do is
to summarize what we've
covered during this course,
remind you of some
of the important results we've
obtained, and give you an idea
of how you could continue
your study in the future.
Well, we've covered a lot of
ground since the beginning
of the lectures,
that's for sure.
I count over 700 slides
that we've gone through
and that's a lot of
material for anybody.
And some of the material
has been pretty advanced.
It's going to take
you some time to digest
and understand the material and
become more comfortable with it.
And the best way to get good
at this kind of thing is
to work through the problems.
If you work through a problem
and you don't cheat and look
at the answer and
you struggle with it
until you understand
exactly how it works,
then you remember it
for a good long time.
If you just treat things as a flurry of facts that come and go, the problem is they usually go right when you need them.
Luckily, in this format of doing these lectures online, you can go back and review at any time and refresh your memory, and that's one real advantage this kind of format gives us.
I'll just remark that our
treatment of atoms was
at a much higher level, I think,
than many courses have time for.
That was partly a reaction to
the kind of cursory treatment
that our textbook
gave that material.
And partly, the idea that there
could be people who want to tune
in and find out something who
are in more advanced courses
than typically at this level and
they can use that material too.
OK. We had quantum mechanics; it was motivated by some early experiments.
Science is about trial
and error, basically.
That's the most powerful
method in science.
You've got to do an
experiment and you've got to try
to control all the things you
can control so that you know
that the things that
you're changing are making
a difference.
And that's why a good scientist
records as much information
about what's going
on as is possible.
We had several experiments
that were really key.
One was the very simple experiment of the distribution of radiation from a so-called blackbody, like lampblack, at a temperature T which could be measured.
That distribution was simple: it didn't seem to depend on anything except the temperature.
And it was completely
unexplainable
by classical mechanics.
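The fact that this distribution depends on nothing but the temperature is captured by Wien's displacement law. A brief Python sketch (the constant is the standard Wien displacement constant; the temperatures are just illustrative):

```python
# Wien's displacement law: the blackbody emission peak depends only on T.
WIEN_B = 2.897771955e-3  # Wien displacement constant, m*K

def peak_wavelength_nm(T):
    """Peak emission wavelength (nm) of a blackbody at temperature T (K)."""
    return WIEN_B / T * 1e9

# Hotter bodies peak at shorter wavelengths; the shape depends only on T.
sun = peak_wavelength_nm(5800.0)   # roughly 500 nm
lamp = peak_wavelength_nm(3000.0)  # roughly 970 nm, in the infrared
```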
The second key experiment
was the photoelectric effect,
which again disagreed
and seemed to indicate
that light might have
a particle nature
that perhaps Newton's
idea was correct.
And the third was the
double-slit experiment
which seemed to indicate
that an electron could go
through both slits but that's
a more recent experiment.
And the earlier experiment
was just
that the electrons could
diffract from a nickel crystal
and so the electrons seemed
to have a wave property
because waves diffract
and particles do not.
And finally, there was--
we touched on it briefly
but there was just the stability
of atoms and chemical bonds.
Both of which were very
difficult to explain
with classical mechanics
and were left
as a kind of unsolved riddle.
So classical mechanics was floundering on all these fronts,
and whenever our theory
of knowledge is floundering,
that means that we have
to think more deeply and we need
to perhaps invent a new idea.
And this was of course an
extremely exciting time,
but it was clear that a
"mechanics of the small",
so to speak, was needed.
And that we couldn't
just extrapolate
from our everyday experience
with big objects when we go
down to these very
tiny particles.
As far as we can
ascertain right now,
quantum mechanics is basically
completely unchallenged
in the domain of
its applicability.
In other words, it's really
a theory worth learning
because it allows you
to figure out things
to many decimal points
of accuracy
and to explain all kinds
of interesting phenomena
and design very tiny circuits
that you can use in computers
and so on and so forth.
And without it, you're
basically lost in any
of those applications.
We talked about matter and
radiation, in particular,
matter has a wave-like aspect,
quantified by the De
Broglie wavelength.
And radiation has a
particle aspect which we saw
when we dealt with the
photoelectric effect.
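The de Broglie relation lambda = h/p is easy to evaluate. A small Python sketch (the speeds chosen are illustrative) shows why electrons diffract from crystals while everyday objects never do:

```python
# De Broglie wavelength: lambda = h / p for a particle of mass m and speed v.
h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg

def de_broglie_m(mass_kg, speed_m_s):
    """Wavelength in meters."""
    return h / (mass_kg * speed_m_s)

# An electron at 1e6 m/s has a wavelength comparable to atomic spacings,
# which is why it diffracts from a nickel crystal; a baseball does not.
electron = de_broglie_m(m_e, 1.0e6)   # ~7.3e-10 m
baseball = de_broglie_m(0.145, 40.0)  # ~1.1e-34 m, unobservably small
```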
This kind of wave-particle
duality
that in fact something
can appear
to have different properties
depending on what you ask
about the object, allowed us to
explain all these experiments
by photons ejecting electrons
essentially instantaneously
from a metal surface in a vacuum
and electrons diffracting
from crystals.
The wavefunction we decided
is the fundamental object
in quantum mechanics; that's the thing that's common to everything.
And if we know the
wavefunction, then we know all
that it is possible to know
about the quantum
mechanical system.
That means as a corollary
that usually we do not know the
wavefunction in all its glory.
We only know some
aspect of it depending
on what we've chosen to measure.
But we don't know everything
that it's possible to know
about most systems even
very small quantum systems
where we're trying to
control everything.
And all we can know is the probability amplitude, which is the wavefunction.
But what we measure is
the probability density
and that's the fundamental
thing.
And philosophically, that's
very frustrating to some people
that when we prepare things in
identical states, often it seems
like we get random results.
But there are many
analogous things
when we throw a die
we get one through six
and we get that at random.
And you could argue, well, the die isn't thrown the same way every time. But if all six sides of the die were identical, so that you couldn't tell them apart while it was in your hand and couldn't see beforehand which one you were going to get, then you couldn't tell which way it was oriented. And when you threw it, the number that came up would appear to be random.
The state of the physical system
is given by this wavefunction
which we capitalized and
we put a semicolon and T
when it depended on time.
And it depends on the positions
of all the constituents
of the quantum system.
Classically, they would be the coordinates of the particles, although saying "the coordinates of the particles" kind of begs the question, because it suggests you know where they are.
But in fact, what you're doing
is treating these as parameters
in the wavefunction and
the wavefunction is the
fundamental thing.
When we make a measurement,
we represent
that with a linear
Hermitian operator.
And the only possible result
of an ideal measurement is one
of the eigenvalues of the
linear Hermitian operator
which because it's Hermitian
has real eigenvalues.
And once we make a measurement,
then the system has been altered
usually by that measurement,
unless we're measuring the
exact same thing again.
In which case, we know the
result we're going to get
in that peculiar case because
we've done an experiment
that rules out all the
other possibilities and then
if we do the same
experiment again, it's like--
rather like throwing
the dice on the table
and then just not
throwing them again
and just looking
again at what's up.
So you didn't give it any chance to change.
And the set of all possibilities
of these measurements just
like the six numbers on the
die constitute a complete set.
They tell you everything
that can possibly happen.
You can't have something outside that set.
And that's very important
because that means
that we can make
up a wavefunction
as a linear combination
of these eigenstates.
When we make a measurement,
we generally change
the wavefunction.
And if we have, first of all, a superposition, which I've written here on slide 691 as the sum over n of c n phi n, and we make a measurement, applying O hat to an eigenstate phi n, we get an eigenvalue o n.
And we know that if the measurements give different eigenvalues, the functions are orthogonal, which is expressed by the integral of their product being zero.
Then, if we obtain the result O sub K, the probability of obtaining that result is given by the square of the expansion coefficient, C sub K squared. And after that, the wavefunction has been irreversibly changed.
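In code, the Born rule for such a superposition is just the squared magnitudes of the expansion coefficients. A Python sketch with made-up coefficients:

```python
import numpy as np

# If psi = sum_n c_n * phi_n over orthonormal eigenstates, the probability of
# measuring eigenvalue o_k is |c_k|^2. Illustrative coefficients:
c = np.array([0.6, 0.8j, 0.0])  # expansion coefficients c_n (can be complex)
c = c / np.linalg.norm(c)       # normalize so the probabilities sum to 1
probs = np.abs(c) ** 2          # Born-rule probabilities |c_k|^2

# After measuring o_1 (say), the state collapses to phi_1: c becomes (0, 1, 0).
```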
And now the wavefunction has changed to the eigenstate phi sub K. That seems to be kind
of a collapse of probability
because we had this big thing,
the wavefunction and then
we made a measurement
and then it collapsed.
And now it's in this
state, but of course
that might not be quite the
right way to look at it.
It might be a bit more
complicated than that.
But you could ask exactly how this collapse of the wavefunction happens. And actually that's somewhat an open question, like a lot of things in quantum mechanics.
The actual equations are
not open to question.
But mechanisms and
interpretations
and reasons why certain things happen are open to question,
and some people temperamentally
dismiss that as irrelevant
and other people are
quite interested in it.
But for example, suppose we flip a coin. Before we flip it, the chance of getting heads is 50 percent.
But after we flip it, after
we make the measurement,
then the chance of getting
heads is suddenly 100 percent
if we got heads.
And that doesn't seem to bother us: there were two possibilities and now there is one, and we've boosted the probability back up to 100 percent when we then start doing other things with the coin.
And so perhaps that's not so
strange that that can happen.
Then we talked about uncertainty
and when two operators
don't commute,
that means they're incompatible, and that is measured by taking the difference in the order of application, the commutator. And I've shown you the commutator, reminded you what it is, for position and momentum.
It's i h-bar, and that means that
we can't measure those variables
to arbitrary precision
simultaneously.
One measurement causes
the wavefunction to change
and then the other measurement
is imprecise, and vice-versa.
And so you keep sort of stepping
on your own shoelaces over
and over as you try
to repeatedly measure
the properties.
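The position-momentum commutator can even be checked numerically. A Python sketch (the grid size and test function are arbitrary choices), applying x p minus p x to a Gaussian with a central-difference momentum operator, in units where h-bar is one:

```python
import numpy as np

# Check [x, p] = i*hbar numerically (hbar = 1): apply (x p - p x) to a
# smooth test function sampled on a grid.
n, L = 2001, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
f = np.exp(-x**2)  # test wavefunction (unnormalized Gaussian)

def p_op(g):
    """Momentum operator -i d/dx via central differences (interior points)."""
    out = np.zeros_like(g, dtype=complex)
    out[1:-1] = -1j * (g[2:] - g[:-2]) / (2 * dx)
    return out

comm = x * p_op(f) - p_op(x * f)  # ([x, p] f)(x)
# Interior values approach i * f(x), i.e. [x, p] = i with hbar = 1.
err = np.max(np.abs(comm[1:-1] - 1j * f[1:-1]))
```

The residual `err` shrinks with the grid spacing, confirming the operator identity on smooth functions.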
We're certain of
this uncertainty.
There's been a lot written on the uncertainty principle which amounts to philosophy, using the word uncertainty and somehow indicating that this is a very deep thing.
But the uncertainty principle
itself is just a technical
detail concerning whether
operators commute or not
and whether measurements can be made with arbitrary precision.
It doesn't leave open some huge unfathomable uncertainty about other things. That's kind of a misuse of the word uncertainty, and it's nothing really to do with the principle itself.
Left alone, the wavefunction will evolve in time according to the time-dependent Schrodinger equation: H hat psi equals i h-bar d psi / d t. We didn't really deal in this course with much time-dependent phenomena.
And that's where you
could have a springboard
into a more advanced course,
is to look more deeply
at time-dependent phenomena.
What we did is we said, look, we want to find out the properties of a helium atom or a molecule or something like that that's left alone at first. What is it like in nature, and what's the ground state?
And we know that, left alone, it's not changing; it has certain properties that persist in time. We call this a stationary state, and for a stationary state the wavefunction only changes its phase, which you can interpret as the shape staying the same while it keeps changing color. The shape stays the same, and the color indicates time passing.
But when we square it and we
just figure out where the piles
of sand are, it's all the same.
And so there's nothing we can actually see from that stationary state that's actually changing.
And the stationary states turn out to be the energy eigenstates.
And that's why solving for
the energy of these systems is
so important because
those states are the ones
that have the staying
power to persist in time.
And for the energy eigenstates, we can solve the equation exactly, and we find that we just get a complex exponential times whatever the wavefunction is at time zero. So the wavefunction changes its phase, as I said, and nothing else.
And you should think of that as a change in color.
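That picture, the shape frozen while only the phase turns, is one line of Python. A sketch using the particle-in-a-box ground state (atomic units; the sample time is arbitrary):

```python
import numpy as np

# A stationary state only changes its global phase:
# psi(x, t) = psi(x, 0) * exp(-i E t), with hbar = 1.
# The probability density |psi|^2 therefore never changes.
x = np.linspace(0.0, 1.0, 101)
psi0 = np.sqrt(2.0) * np.sin(np.pi * x)  # particle-in-a-box ground state
E = np.pi**2 / 2                         # its energy (m = 1, L = 1)

def psi_t(t):
    return psi0 * np.exp(-1j * E * t)    # shape fixed, phase ("color") rotating

density_now = np.abs(psi_t(0.0))**2
density_later = np.abs(psi_t(7.3))**2    # identical: the piles of sand don't move
```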
Then we talked about bound states and the quantization of energy, which arises when the particle is bound.
And the most fundamental reason it arises is that the wavefunction has to fit into the space allocated to it.
And it can't come around, for example on a ring, and have a different phase than the time around before.
It has to match up perfectly
otherwise it interferes
with itself.
And more generally,
the wavefunction being a wave
can interfere with itself
and unless we have interference
that remains constant in time,
that doesn't just
cancel out to zero,
we aren't going to see anything.
So you could imagine some other energy, not on the ladder of states that we derived as the solution to our equation. But then something goes wrong: the wavefunction may blow up and go to infinity.
So it's not normalizable,
it doesn't have a
proper probability.
Or it could cancel,
it could go around
and it could cancel itself
out, in which case it's zero
and either one of
those things kills it.
We found for a particle
in the box that the ladder
of states went like N squared
and there's zero probability
of being found outside the box.
On the other hand, for the gentler slope of the harmonic oscillator, not so stiff as a box but just X squared, the levels went like N: they were all evenly spaced.
And that's a very classic system to study because it's simple to solve.
And we solved it and got the ground state, which is a Gaussian function.
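The two ladders can be written side by side. A Python sketch in atomic units (box length and oscillator frequency set to one purely for illustration):

```python
import math

# Energy ladders in atomic units (hbar = m = 1):
# particle in a box of length L:  E_n = n^2 * pi^2 / (2 L^2),  n = 1, 2, 3, ...
# harmonic oscillator, freq omega: E_n = (n + 1/2) * omega,    n = 0, 1, 2, ...

def box_levels(n_max, L=1.0):
    return [n**2 * math.pi**2 / (2 * L**2) for n in range(1, n_max + 1)]

def oscillator_levels(n_max, omega=1.0):
    return [(n + 0.5) * omega for n in range(n_max)]

box = box_levels(4)        # grows like n^2: ratios 1 : 4 : 9 : 16
ho = oscillator_levels(4)  # evenly spaced: 0.5, 1.5, 2.5, 3.5
```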
And then the hydrogen atom is another one we can solve, and here the levels go like minus a constant over N squared.
And again we have an infinite number of levels, but there's a lowest one, and they come up to a level that we call zero, which is an isolated proton and an electron at infinity. So we have an infinite number of levels, but they're within a finite band.
Then, we saw that a particle
could tunnel through a barrier.
So it could magically, so to speak, get around some sort of barrier and appear on the other side.
As far as I'm aware, there is no experiment that tries to measure the transit through the barrier; the particle is in front of the barrier, and then it comes out on the other side.
But we do find that particles
can and do appear outside
where they're supposed to be.
For example, radioactive
decay of an alpha particle,
the alpha particle is
part of the nucleus
and then suddenly
it appears out here
because it has a wavefunction
and there's some probability
of being out there.
And finally it is out there, and once it is out there, the strong force is no longer holding the two particles together, so it's ejected at top speed, at about 5 million electron volts.
And that certainly wouldn't
be allowed if you looked
at the barrier that you had
to come over and you had
to squeeze the thing through.
We call that tunneling because
the idea is you don't have
to have the energy to
go over the barrier,
you can just somehow go through.
In classical mechanics,
this kind of behavior is
just not allowed at all.
In quantum mechanics it
happens all the time.
And the less massive
a particle is,
the more likely it is to tunnel.
So tunneling is very
important for electrons
and moderately important for
hydrogen nuclei and H atoms
and pretty much diminishes
in probability after that.
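The mass dependence is dramatic because the mass sits inside an exponential. A Python sketch using the standard square-barrier estimate T ~ exp(-2 kappa L), with kappa = sqrt(2 m (V - E)) / h-bar, in atomic units and with an illustrative barrier:

```python
import math

# Square-barrier tunneling estimate: T ~ exp(-2 * kappa * L),
# kappa = sqrt(2 m (V - E)). Atomic units: hbar = 1, electron mass = 1.
def transmission(mass, V_minus_E, L):
    kappa = math.sqrt(2.0 * mass * V_minus_E)
    return math.exp(-2.0 * kappa * L)

# Same barrier (1 Hartree above the energy, 2 bohr wide) for both particles:
T_electron = transmission(1.0, 1.0, 2.0)    # appreciable
T_proton = transmission(1836.0, 1.0, 2.0)   # the 1836x mass kills tunneling
```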
On the other hand, if we have an unbound state, where the particle is free to go anywhere, then it can have a continuous range of energies. Why? Because the wavefunction doesn't have to fit into anything.
And then we derived that the wavefunction looks like a corkscrew, corkscrewing to the left or to the right depending on whether the momentum of the particle is positive or negative.
And so, as I mentioned in some
sense quantization always arises
because the wavefunction
has to fit somehow
into a confined space.
A free particle, it can
just go anywhere at once.
It doesn't have to have
any phase matching anywhere
and so it can have any energy.
This idea of quantization emerges as a property of the theory, you know; it's not something we injected. That it emerges from our fundamental postulates is one of the main triumphs of quantum mechanics.
Because it explains atomic spectroscopy and a million other things that are very, very important for everyday life and that classical mechanics completely trips up on.
Then we talked about
approximate solutions
to the Schrodinger equation.
We talked about perturbation theory, and how to treat a known solution to the Schrodinger equation plus an additional term which we hope is small. Sometimes it isn't small and we try it anyway, and then sometimes we get a kind of difficult conundrum out of that.
The reason we need things
like perturbation theory is
that sadly the equation
that we have
to solve is difficult enough
that we can only solve it
for the simplest ideal
kind of model systems.
We can't really solve it
exactly for other systems.
And so we have to treat them
as approximate solutions.
But approximate in this game
means as close as you like.
It's just that we can't write the exact solution in closed form, as "here is the function."
But we can write
things that are very,
very close to many
decimal points.
And luckily for us, the hydrogen
atom can be solved exactly.
And those solutions, for s, p, d, f and so forth, and n equals one, two, three, four, guide our intuition about every other atom.
When we think about a 2s orbital, we kind of intuitively think, well, it probably looks something like a 2s on hydrogen.
And then, since the nuclear charge is bigger, we shrink it down. And then, well, it's not exactly the same, because a 2s in another atom may have a 1s inside it, and so there may be electron-electron repulsion and so forth.
But we kind of think
in terms of this way.
And as I showed you with the distribution functions, the probability of finding the electrons at certain positions goes like shells, and so it is accurate to think of one, two, three, or K, L, M and so forth, when thinking about atoms.
We can use perturbation
theory then
to correct any exact
solution as long
as we have a small perturbation.
But if we have a big
perturbation we have
to be careful.
And then the other approximate
method that's quite important
that we used over and over again
is to introduce a parameter
into the wavefunction.
And then optimize the
parameter by adjusting it
so that the energy is lowered.
And that is a result of the Variational Principle, which states that the energy of any trial wavefunction is an upper bound to the true ground-state energy, so the trial wavefunction that is closest to the ground state in energy is the better one.
It's a compass for us; it's a way to know which way is north.
We have to have some
measure when we're trying
to change the wavefunction
and optimize it whether
it's getting better.
And if the energy is going lower, then it's getting better.
And if we have the exact wavefunction for the ground state, then we get the lowest energy possible. And that's very important; that's our metric to tell if we're doing the right thing or not.
We talked about atomic
spectroscopy which follows a set
of rules for electric
dipole-allowed transitions.
Delta L equals plus or minus one, delta N is anything, and so forth.
Historically, it's these
emission spectra that led
to the empirical relationship
for emission lines
being differences of one
over N squared which
Rydberg formulated.
Who knows how, but anyway, without knowing anything else, he just seemed to say, "Hey, this is a fourth minus a ninth, and this is a fourth minus a sixteenth," and it was amazing numerical insight to be able to do that.
But it was still
completely unexplained
as to why they were
those differences.
But quantum mechanics then came along and explained them, and those were very important clues.
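Rydberg's differences of one over N squared are easy to reproduce. A Python sketch for the Balmer series (lower level n = 2), using the standard Rydberg constant:

```python
# Rydberg's observation: hydrogen emission wavenumbers are differences of
# R / n^2. For the Balmer series the lower level is n = 2.
R = 1.0973731568e7  # Rydberg constant, 1/m

def emission_wavelength_nm(n_upper, n_lower):
    wavenumber = R * (1.0 / n_lower**2 - 1.0 / n_upper**2)
    return 1e9 / wavenumber

h_alpha = emission_wavelength_nm(3, 2)  # ~656 nm, the red Balmer line
h_beta = emission_wavelength_nm(4, 2)   # ~486 nm
```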
And then we talked
about the term symbols
which have the multiplicity
two S plus one
where big S is all the electrons
spins added up according
to the rules of angular
momentum.
L is the total orbital angular momentum of all the electrons, and J is L plus S, which again follows the Clebsch-Gordan series: it goes down by one until it reaches the absolute value of L minus S.
And those term symbols are a
very compact way to categorize
and keep track of atomic
transitions in things
like sodium and other
atoms where we talked
about the two yellow
lines being very close.
The doublet P three-halves and doublet P one-half both go to the doublet S one-half.
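The Clebsch-Gordan series for J is mechanical enough to write as a loop. A Python sketch, applied to the sodium example (L = 1, S = 1/2):

```python
from fractions import Fraction

# Clebsch-Gordan series for J: from L + S down in steps of one to |L - S|.
def j_values(L, S):
    L, S = Fraction(L), Fraction(S)
    top, bottom = L + S, abs(L - S)
    vals = []
    j = top
    while j >= bottom:
        vals.append(j)
        j -= 1
    return vals

# Sodium excited state: L = 1, S = 1/2 gives J = 3/2 and 1/2,
# the 2P_3/2 and 2P_1/2 levels behind the two close yellow lines.
sodium_p = j_values(1, Fraction(1, 2))
```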
And we talked about atomic structure, and what I would say here is that we did quite a bit on this. Atomic electronic structure calculation, if you haven't figured it out already, involves a lot of integrals, and a lot of those integrals are not so easy to do, because they're multidimensional integrals and they have ugly things in them.
And unless you adopt the
right coordinate system,
they're completely
hopeless to do.
If you try to do those integrals in Cartesian coordinates, you make no progress at all; you just end up with things that you can't integrate.
And although we didn't use them, we introduced elliptical coordinates; if we were getting serious about things like H2 and H2 plus, we could simplify our work a lot by adopting that coordinate system. I didn't; I just stuck with spherical coordinates for simplicity and then did a lot of hard work.
It's simplest in
these calculations
to use atomic units.
In atomic units, we measure energy in Hartrees, and we set all the fundamental constants to one.
So they're out of our hair, and that way we get nice simple equations, like minus one-half del squared, that are much easier to write down. We've already got enough work doing all these integrals without some gigantic fraction out in front that we'd have to keep rewriting all the time and keep track of.
And then we remarked that there was a hidden advantage to doing it in these units. The calculation is so accurate that one of the fundamental constants may get revised by a different experiment: the value of h-bar changes slightly, or the speed of light changes slightly, out in some decimal place.
You don't want to have to redo everything all over. But if you've done it in these dimensionless units, you just automatically update the energy when you put in the new conversion factors.
And so you don't have to do it over, because you didn't quote it in MeV or kilojoules,
and that's much better because
the constants themselves can
and do sometimes get revised.
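A Python sketch of the point: the computed result stays in Hartrees, and the conversion factor is quoted exactly once, so a revised constant means editing one line (the factor here is the current CODATA value):

```python
# Work in atomic units; convert to laboratory units once, at the very end.
# If a fundamental constant is revised, only this factor changes, never the
# computed result itself.
HARTREE_TO_EV = 27.211386245988  # CODATA value; update here if it is revised

def hartree_to_ev(e_hartree):
    return e_hartree * HARTREE_TO_EV

# Hydrogen ground state: -1/2 Hartree, i.e. about -13.6 eV.
e_h = hartree_to_ev(-0.5)
```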
We did the hydride anion, and one thing we found out there is that it's a pretty tough nut to crack,
even though in some
sense it's the simplest
two-electron system.
It's hard because the electron-electron repulsion, which is between two unit negative charges, and the attraction, which is between a negative charge and a positive charge, are about the same size.
And so, treating the electron-electron repulsion as a perturbation is treating something that's about as big as what you said was big as if it were small.
And what that gave us, unfortunately, was an unstable situation, where we predicted that the hydride anion would be unstable compared to a hydrogen atom and an electron at infinity. Remember, we got minus three-eighths rather than minus a half in atomic units.
Then what we did is we expanded the orbital with our Greek friend zeta, which I'm sure you'll now never forget, this beautiful symbol, and introduced that as a variational parameter.
And we tried to optimize that, and it came darn close, but unfortunately it was still not stable.
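That zeta optimization fits in a couple of lines, because for a single scaled 1s orbital on a two-electron system the standard variational energy expression is E(zeta) = zeta squared minus 2 Z zeta plus five-eighths zeta in Hartrees, minimized at zeta = Z minus 5/16. A Python sketch:

```python
# Variational treatment of a two-electron atom/ion with one scaled 1s orbital:
# E(zeta) = zeta^2 - 2*Z*zeta + (5/8)*zeta  (Hartrees, standard result).
# Calculus gives the minimum at zeta = Z - 5/16, where E = -zeta^2.
def e_min(Z):
    zeta = Z - 5.0 / 16.0
    return -(zeta**2), zeta

e_hydride, zeta_h = e_min(1)  # ~ -0.4727 Hartree: above -0.5 (H atom plus a
                              # free electron), so this trial says H- is unbound
e_helium, zeta_he = e_min(2)  # ~ -2.8477 Hartree, close to the true -2.9037
```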
Only a trial wavefunction where we hypothesized that one electron was bound rather tightly and another one was further out, plus the opposite arrangement, because by symmetry we don't know which electron is which, so we permute the possibilities; when we took that one, it gave a reasonable result for hydride.
But hydride is a very big anion; as I remarked, it's bigger than fluoride, so it's not that stable, and it's quite a difficult problem to do.
By contrast, in helium, when you introduce the plus-two charge, the attraction is twice as big relative to the repulsion, and boy, that helps you a lot when you start doing the calculations.
Then we said, look, we have that wavefunction for hydride, but it didn't consist of orbitals.
An orbital is a one-electron
wavefunction and we
like to think of a
multi-electron wavefunction
for an atom as a
product of orbitals.
And because we want a one-electron wavefunction, what we do is smear out all the other electrons into just a vague cloud of charge according to the probability.
And then we treat that probability not as an instantaneous thing, like what's actually going on, but as something that's already completed and done and not dynamic. All it does then is change the potential energy; then we solve, we turn the crank.
We may need a computer, a lot
of integrals whatever
we have to do.
And we optimize the one
live electron that we've got
and then we put that one into
the soup and smear it out.
And we grab another electron
out of the hat and we keep going
around and around and around
on a computer usually.
until none of the electron wavefunctions change, none of the one-electron orbitals changes from one cycle to the next.
And then in that case you can't
improve it because if none
of them change, none of
them are going to change
if you go through again either.
And so at that point
you quit and you say,
you've got a self-consistent
field solution.
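Structurally, the SCF cycle is a fixed-point iteration: update, compare, repeat until nothing changes. A toy Python sketch of just that control flow (the update function here is a made-up stand-in, not a real Fock operator):

```python
import math

# Generic self-consistency loop: iterate x -> update(x) until x stops changing,
# the same control flow as "cycle orbitals until none of them change".
def scf_loop(update, guess, tol=1e-10, max_iter=200):
    x = guess
    for _ in range(max_iter):
        x_new = update(x)
        if abs(x_new - x) < tol:  # "nothing changed": self-consistent
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# Toy self-consistency condition x = cos(x), solved by the same loop.
solution = scf_loop(math.cos, 1.0)
```

Real SCF codes add tricks like damping and DIIS to this loop, because a plain fixed-point iteration can oscillate or diverge.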
Usually that solution is pretty good, but it's never perfect, and the reason is that it neglects the fact that in real life electrons tend to avoid each other, which is called electron correlation.
And therefore, theories
that start
with the self-consistent field
or Hartree-Fock equations
and then add correlation
always give a better result
if you treat the
correlation right.
OK. Then we talked about the Pauli principle: the wavefunction should be antisymmetric if we exchange the labels, which electron we're calling one and which we're calling two, for example.
And if the wavefunction is
factorizable, if it factors
into spatial part times a spin
part, which often it does,
then if the spin part is
symmetric the spatial part is
antisymmetric and vice versa.
This behavior, we decided, could be encoded in a neat device called a Slater determinant, because when you exchange columns or rows in a determinant, it changes sign automatically.
And so it actually keeps track
of this property for you.
Not every wavefunction
can be written
as a single Slater determinant.
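That sign flip is easy to demonstrate. A Python sketch with an arbitrary 3x3 matrix standing in for orbital values; swapping two rows (exchanging two electron labels) flips the determinant's sign:

```python
import numpy as np

# A determinant changes sign when two rows (or columns) are swapped -- exactly
# the antisymmetry the Pauli principle demands when two electron labels are
# exchanged. Illustrated with arbitrary numbers:
M = np.array([[1.0, 2.0, 0.5],
              [0.3, 1.5, 2.2],
              [2.1, 0.7, 1.1]])
M_swapped = M[[1, 0, 2], :]    # exchange "electrons" 1 and 2 (rows 0 and 1)

d1 = np.linalg.det(M)
d2 = np.linalg.det(M_swapped)  # equal magnitude, opposite sign
```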
And that's one reason
why we didn't do a lot
with open shell atoms
and things like that,
that could be more complicated.
Then we talked about molecules
and in particular we
introduced the Born-Oppenheimer
approximation, which basically
relied on the physical insight
that the nuclei being
more massive move slowly.
And the electrons move very rapidly, so the second the nuclei move a little, the electrons immediately readjust.
They have time to go around
the track time and time again.
And they immediately readjust to
whatever the new environment is.
If they get pushed out because the nuclei are coming together, fine. If the nuclei are moving apart and they can hide in there, fine.
But they find the right solution
essentially immediately.
There's no lag and therefore
when we want to solve it,
we can make an essential
simplification.
We solve a bunch of problems
where the nuclei are frozen.
And we just calculate the electronic energy and the internuclear repulsion with everything frozen, and that gives us an energy as a function of, let's say, two atoms moving apart.
And that is a very fundamental thing to understand about the way chemists think.
Because that frozen approximation as a function of big R, which was our internuclear distance, is called a potential energy curve, or a potential energy surface if you have more than two atoms.
And that allows us to
organize all our thinking
about how the nuclei move.
So now, we solve the
electronic energy.
And then when we want to figure
out what the nuclei want to do,
we look at the electronic
energy and we calculate a force
and then we can see where
the nuclei are going to move.
They're going to try
to roll down hill.
If they go too far, they're going to roll back up, and so forth, and so these curves allow us to quantify that.
And recall that we had a
couple of empirical curves
that we introduced early on with
vibration, the Morse oscillator,
and even the Lennard-Jones
6-12 potential.
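Those two empirical curves are simple closed forms. A Python sketch with illustrative parameters, showing the well each one provides for the nuclei to roll down into:

```python
import math

# Two empirical potential-energy curves as functions of the internuclear
# distance r (all parameters here are illustrative, not fitted values):
def morse(r, De=1.0, a=1.0, re=1.0):
    """Morse oscillator: well depth De, minimum at r = re."""
    return De * (1.0 - math.exp(-a * (r - re)))**2

def lennard_jones(r, eps=1.0, sigma=1.0):
    """Lennard-Jones 6-12: well depth eps, minimum at r = 2**(1/6) * sigma."""
    return 4.0 * eps * ((sigma / r)**12 - (sigma / r)**6)

v_min_morse = morse(1.0)                    # 0 at the bottom of the Morse well
v_min_lj = lennard_jones(2.0**(1.0 / 6.0))  # -eps at the bottom of the 6-12 well
```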
The simplest molecule is H2 plus
which we did in some detail.
And we could solve
that under the
Born-Oppenheimer approximation.
And we decided we can make up a molecular orbital, which is the analog for molecules of an atomic orbital: a one-electron wavefunction.
Except instead of having one nucleus and solving for one electron at a time, we have two nuclei and we solve for one electron at a time.
Well, H2 plus we can solve because it's only one electron; the fact that the nuclear charge is split between two centers isn't that big a deal.
It makes the math worse but
it doesn't change things
in any other fundamental way.
But when we get to H2 we can't; it's too hard again, because we've got two electrons.
And most commonly then,
what we do is we start
out with some atomic orbitals
and we take linear
combinations of them.
We add them, multiplied by numbers, but we do not multiply the functions together.
That's very important, we
don't raise them to powers
or take square roots of
them or do something else.
We just add them; we say, you're 50 percent this and 20 percent that, and these numbers could have imaginary parts, this being quantum mechanics after all.
But it's nothing worse than that, and this is called the LCAO-MO, or Linear Combination of Atomic Orbitals - Molecular Orbital, approach.
Whenever we start out with a certain number, N, of atomic orbitals, we end up with the same number, N, of molecular orbitals.
If we start out with
12, we end up with 12
and it's always like that.
And then we found we could classify our solutions by symmetry, and the lowest energy usually has the fewest nodes.
And then as we go up, things start having more nodes, with things going to zero in between the nuclei, which is very unfavorable. Those are configurations of the electrons that are unstable for the molecule, and that's how the molecule can dissociate.
Our rules for combining atomic
orbitals were the following.
They have to have similar energy; that's because they have to have similar de Broglie wavelengths in order to interact.
They have to have the same
symmetry and as I said,
in a more advanced course
you'll understand exactly what
that means when you
study point groups.
For now, just keep in mind that
if one of them changes sign
when you flip the molecule
and the other one doesn't
those aren't going to interact.
The integral is going to be identically zero.
And then, the third condition is
that they have to
overlap in space.
Because if they don't overlap in space, if the atoms were miles apart, there's not going to be a molecular orbital possible. They only have to overlap pair-wise, though, if you have more than two.
And bonding orbitals then
build up electron density
in between the nuclei.
Antibonding orbitals often
put nodes between the nuclei
and that's how you can tell.
However, if the node
is at the nucleus,
then it's not particularly bad.
It's just if it's in between, in a bond, that it's bad.
We saw a couple of
examples that I highlighted
where the Molecular
Orbital Theory was superior
to the localized bond or
Lewis structure approach.
The first is that molecular
oxygen is paramagnetic.
Molecular orbital theory clearly predicts that we should have two unpaired electrons, and that's what we observe.
The other form to remember
is called Singlet Oxygen.
And the other simple
example we saw is
that there are two bands
in the UV photoelectron
spectrum of methane.
One of them we called A1 and
one of them we called T2.
And again molecular
orbital theory predicted
that these four bonds that we
draw were really three plus one.
The one being rather different,
the A1, remember being
like that big teddy bear.
And the others had nodes
but they didn't have
nodes between the bonds.
They had nodes at the
carbon nucleus itself.
And so they were net bonding,
they glue things together.
And the A1 electrons
went in there.
Those are two clear
examples you can give
where molecular orbital theory
rather naturally predicts what
we see. To go back and say,
well, I've got four sp3 hybrids
and I'm drawing these
lines for bonds,
and so on and so forth,
doesn't really explain it.
And it's very, very
difficult to figure
out how you're going
to explain it.
And that's because it's wrong,
even though it's
useful in some cases.
Then, we have this thing, the
bond order: the number of bonding
electrons minus the number
of antibonding electrons,
divided by two.
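As a quick sketch of that arithmetic (the occupation counts below are the usual valence MO filling for O2):

```python
# Bond order = (bonding electrons - antibonding electrons) / 2.
def bond_order(n_bonding, n_antibonding):
    return (n_bonding - n_antibonding) / 2

# O2 valence shell: sigma2s(2) + sigma2p(2) + pi2p(4) bonding = 8;
# sigma*2s(2) + pi*2p(2) antibonding = 4.
print(bond_order(8, 4))  # -> 2.0, a double bond
```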
And although we didn't
emphasize it much,
there are nonbonding
electrons, nonbonding orbitals
and those usually are sort
of like what you would draw
as lone pairs on
a Lewis structure.
And so they neither
hurt nor help you.
They aren't really
involved in the active part
of the structure holding
the atoms together.
And we did a detailed
calculation
of the molecular
orbitals for simple cases.
And what we found out
when we did that is
that the so-called exchange
integral was the one
that causes the bond
to be stable.
And that exchange integral
was basically purely a quantum
mechanical effect.
That was not possible--
would not be seen in
classical mechanics.
And that in turn explains how
quantum mechanics naturally
predicts that things like
H2O are going to exist.
And be more stable than
the uncombined atoms.
But in classical mechanics,
this was just another
conundrum to figure out.
We saw that our perfectly good
LCAO-MO solution for hydrogen--
molecular hydrogen
H2 sort of fell apart
when we considered
the dissociation of H2
into two hydrogen atoms.
And when we looked at
that, what we realized is
that the molecular orbital
approach assigns equal weight
to dissociating into an H atom
plus an H atom as into a hydride
plus an H plus, a proton.
And that was the problem there.
By contrast, the so-called
valence bond approach,
the paper of Heitler and London,
gave the correct prediction
that it would come
apart into two atoms.
And what we realized then is we
should keep our MO description
when we've got the bond
but we shouldn't assume
that our description that we got
with this optimized
geometry is going
to be the correct
solution at all geometries.
And so what we have to do then
is adjust our wave function
as we adjust R. The convenient
way to do that was to mix
in a certain amount of
the antibonding orbital
as a function of R. And when
we did that, and optimized it,
we got the correct
prediction again.
We got the right
ionization energy,
the right bond dissociation
energy.
And we called this
configuration interaction:
we have two configurations,
built from the two orbitals,
and they were mixing
and giving this interaction.
And that gave us
the correct result.
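The essence of that mixing can be sketched as a two-by-two eigenvalue problem; the matrix elements below are hypothetical numbers chosen only to illustrate that the coupling always pushes the lower root down:

```python
import numpy as np

# Two-configuration CI sketch with HYPOTHETICAL matrix elements:
# H11 = energy of the bonding (sigma^2) configuration,
# H22 = energy of the doubly excited (sigma*^2) configuration,
# H12 = the coupling between them.
H11, H22, H12 = -1.10, -0.60, -0.15  # assumed illustrative values
H = np.array([[H11, H12],
              [H12, H22]])

# Diagonalizing mixes the two configurations; the lower eigenvalue
# always drops below H11 whenever H12 is nonzero.
E_ci = np.linalg.eigvalsh(H)[0]
print(E_ci < H11)  # -> True: CI lowers the ground-state energy
```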
And then finally, we closed by
talking about delocalized systems
of electrons, pi systems.
I think all organic
chemists think of these kinds
of so-called aromatic systems
in terms of molecular orbitals.
We draw alternating
single and double bonds,
but chemists never
think of them that way.
They think of a delocalized
pi system,
because the alternating single
and double bonds don't
predict the chemistry at all.
And when we filled up
a detailed molecular orbital
diagram, what we found out is
that the sigma orbitals
were all just full.
And then there were these
two in the middle that were just
half full, so to speak,
and we could treat those
two on their own, just
the p electrons in
these alternating single
and double bonds.
Just the same way as
H2, in other words;
the math was the same, and
so why not just recycle it
and reuse it?
And in the Huckel approximation,
if we have a bigger system,
we just simplify, and we said
the overlap of an orbital
with itself is one. Big deal;
that means it's normalized.
And then we said the
overlap of an orbital
with its neighbor is zero.
Now, that seems to
contradict the idea
of forming a molecular orbital
because I said they
have to overlap.
But keep in mind, the overlap,
the S, was usually
just a correction term
in the denominator, the square
root of 2 plus 2S and so on,
to correct for the fact that
you're losing some probability,
but it doesn't change
the form of the solution.
The fact that things move apart
has nothing much to do with S.
And so, it turns out
that saying S is zero
and then still making
a bond is OK
because it's the other part
that's making the bond.
And you don't set beta to zero.
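A tiny illustration of why dropping S is harmless here: the overlap only rescales the normalization constant of the bonding combination, 1 over the square root of 2 plus 2S. The value S = 0.6 below is just an assumed illustrative number, not a computed overlap.

```python
import math

# Normalization constant for the bonding LCAO-MO
# psi = N * (1sA + 1sB), where N = 1 / sqrt(2 + 2S).
def norm_constant(S):
    return 1.0 / math.sqrt(2.0 + 2.0 * S)

print(norm_constant(0.6))  # with an assumed overlap included
print(norm_constant(0.0))  # Huckel-style S = 0 -> 1/sqrt(2)
# Only the scale changes; the 1sA + 1sB form of the
# orbital, and hence the bonding, is untouched.
```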
So, you have some energy terms
we call alpha, the diagonal ones,
and we said they're all the
same for identical carbons.
And some energy terms
we call beta, the off-diagonal
ones, and then you
can solve that and you find
that cyclic systems with 4N plus
2 pi electrons are especially
favorable.
When we did the same kind
of thing for cyclobutadiene,
we predicted a diradical
instead, because two of the
energy levels were the same.
That could be what happens,
or it could be that the
molecule distorts so as
to lower its energy, but then
it has localized bonds if it does
that, because we have
to solve it all over again.
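That whole Huckel recipe, alpha on the diagonal, beta between bonded neighbors, S = 0, can be sketched numerically. The function name and the convention of reporting energies as alpha + x*beta are my own choices here:

```python
import numpy as np

# Huckel pi-system sketch: alpha on the diagonal, beta between
# ring neighbors, zero elsewhere, and S = 0. Energies come out
# as E = alpha + x * beta; since beta is negative, the LARGEST
# x is the most stable level.
def huckel_ring(n):
    h = np.zeros((n, n))
    for i in range(n):
        h[i, (i + 1) % n] = 1.0  # beta link to the next carbon
        h[(i + 1) % n, i] = 1.0
    return np.sort(np.linalg.eigvalsh(h))[::-1]

print(huckel_ring(6))  # benzene: x = 2, 1, 1, -1, -1, -2
print(huckel_ring(4))  # cyclobutadiene: x = 2, 0, 0, -2; the
# degenerate nonbonding pair holds the two electrons that
# make the predicted diradical
```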
So here's our summary then.
Quantum mechanics
is really a deep
and very beautiful description
of atoms and molecules.
It certainly isn't easy but
it's not impossible either.
And most of it comes down to
just having enough knowledge
of mathematics so you can
understand what the equations
are all about.
And you can solve them and then
once you reached that level,
then you can focus
on what it means
and you can understand what
it means in terms of science
and chemistry and not
be lost in details.
Certain aspects of
quantum mechanics,
I'd say remain hard
to understand.
For example, if you want
a concrete explanation
of exactly how an electron goes
through both slits at once,
that's very difficult.
I don't think anybody can give
you an explanation of that
because for one thing,
the only time it does
that is when you don't look.
And if you don't look,
quantum mechanics says,
you don't really
know what's going on.
And so you can't
propose some mechanism
for which you have no
experiment to measure it.
In quantum mechanics, you
have to propose an experiment
to measure something.
And when we measure,
then we find out, well,
it went through definitely
one slit or the other.
That's if we have the
possibility of measuring.
If we don't measure, then
both possibilities are
entertained simultaneously.
Nevertheless, even though the
interpretation or some features
of this theory can be hard
to understand in terms
of everyday behavior of
objects, when it comes
to calculating atomic
and molecular properties,
when you need to
get something right,
this theory is unparalleled.
It seems to be an extremely
accurate and versatile theory.
And you can do lots
of work with it.
And I would say in
my experience,
quantum mechanics remains
completely unchallenged
in the small world.
I hope you've enjoyed these
topics we've covered and I hope
that you go on to learn more
about chemistry and science.
Thanks.
