[MUSIC PLAYING]
RYAN: Great.
Good afternoon, everyone.
Welcome to the next installment
of the Quantum AI speaker
lecture series.
Today we're going to be hearing
from Professor Garnet Chan, who
is now a professor at Caltech,
having very recently moved
from Princeton, where he was
a professor for several years.
Before, he was at Cornell.
Garnet's group is
known for being
one of the most
prominent research
groups in the
development of methods
for electronic
structure theory, which
comprises theoretical
chemistry and also
some materials science.
So, in particular,
his group is known
for developing very efficient,
high-accuracy methods
for simulating quantum
systems and also techniques
for using those high-precision
methods to gain useful insights
about real systems.
So without further ado,
here's Professor Chan.
GARNET CHAN: All right.
Thanks, Ryan.
[APPLAUSE]
Let's just make sure
it's all working.
OK, great.
So because I can only think
of one side at a time,
I'll mainly point at this
side if I'm going to point.
OK, so it's really
great to be here.
And this is the first time I've
ever been on the Google campus,
so it's very interesting to
see how it's all laid out.
And I've already had a chance
to sample some of the free food,
and it was great.
And I can remember
periods in my life
where I'd basically go anywhere
if free food was offered.
So that's one of the
reasons why I'm here.
But, of course, now is
the time to do some work.
And so here, I'm going to tell
you a little bit about some
of my research.
So the title of my talk is
"Simulating the Quantum World
on a Classical Computer."
But before I get
into the talk, I
will just say a little
bit about the level
that this talk is pitched at.
So when I was putting
this talk together,
I emailed Ryan, my host, and
asked, what sort of people
are going to attend?
And he said, well,
there are going
to be people who are experts in
quantum mechanics and quantum
computation.
They will be some of the people.
And then there are
going to be people
who are sort of general
computer scientists, who
are very tech-savvy, but might
not really be physicists.
And then, in
addition, there might
be people-- this
might be on YouTube,
and there might be people
anywhere in the world
watching it.
So I'm used to teaching
classes in college,
where you have a
range of abilities,
a range of backgrounds.
But that's a really big spread
in preparation for this talk.
And so what I've
tried to do-- and this
will be an experiment--
is I've tried
to make a talk which has
a little bit of something
for each of those three
kinds of audience members.
And so if you're someone who is
an expert in quantum mechanics,
studies quantum computing,
then just sit back, and relax,
and have a little fun with me
in the first half of this talk.
And we'll get to the
detailed technical stuff
towards the end.
And if you're someone who
really doesn't know any quantum
mechanics, then I'm really
going to try and hold
your hand all through the
first half of this talk
and try and teach you
some quantum mechanics.
And then when I get
to the second half,
I will let go of your hand.
But hopefully, you'll
have absorbed enough
that the general gist of
what I'm trying to say
will make some sense.
OK, so let's start.
And let's start with who I
am, and perhaps, what I am.
So I'm a theoretical chemist.
And that means I'm
the kind of chemist
who doesn't do any experiments.
And in fact, I remember
precisely the last time
I did an experiment.
And it wasn't
actually so long ago.
It was six years ago, in 2010.
And it was when I was
doing some community
outreach in an elementary
school and teaching
the little children how
to make batteries out
of a piece of fruit
and some pencil lead.
And I distinctly remember,
out of the various things
on that day, not being
able to do that experiment.
So there's a picture of me
where the kids are helping me
with my battery.
And that's why I have
ended up a theorist.
So what do I do as a
theoretical chemist?
Well, in essence,
what I try and do
is I try and simulate the world
of chemistry on a computer
rather than do experiments.
And since the world
of chemistry is
a world of atoms and molecules,
and the atoms and molecules
are governed by the laws
of quantum mechanics,
this is basically the same
thing as simulating the quantum
world.
So in the first
part of the talk,
I'm going to try to explain
what quantum mechanics is,
and what the quantum
world is, and then
look at why trying to carry out
the simulation of this world
appears to be
very hard if you just
have the kinds of computers
that we have today,
classical computers.
And then in the second
half of this talk,
I will turn this
question on its head
and point out some
of the reasons
why we now believe
actually simulating
quantum mechanics isn't
as hard as we first
thought and can, in fact, be
done on classical computers.
And I'll give you
some reasons why
this is the case and some
examples of these simulations.
And the reason why this
complexity of quantum mechanics
is often really just
an illusion is actually
related to many ideas
that will be familiar
if you have a background
in computer science
or machine learning.
And so I'll draw the
connections to machine learning,
and of course, the implications
for quantum computing
at the end of the talk.
OK.
But first, let's begin with
what the quantum world is.
And so the quantum world is the
world of atoms and molecules.
And understanding that
the world around us
is made of these discrete
units, atoms and molecules,
is perhaps one of the most
fundamental insights we've ever
achieved as a human race.
And that's something that
Richard Feynman-- of course,
the famous physicist
at Caltech--
pointed out very eloquently
where he said, well,
if all of humanity's
knowledge was wiped out
in some catastrophic event,
and you could only preserve
one piece of wisdom to pass
on to future generations, what
would that be?
And as he argued,
that would be the fact
that all things
are made of atoms.
Now, for such an
important fact, you'd
think we would have known about
this for a very long time.
But as it turns
out, the acceptance
of this atomic hypothesis is
a relatively recent event.
Even up to 100 years ago,
whether or not matter
was discrete-- in other words,
made up of individual units--
or just some continuous
substance was widely debated.
100 years is not
such a long time.
There are people
on the planet who
are more than 100
years old, so they
were around when
this was a question
that people thought about.
The debate was finally settled
in a nice collaboration
between some theoretical
ideas from Einstein
and careful experiments
by a French physicist
called Perrin, who was a Nobel
Laureate in the early part
of the 20th century.
And the experiment
he carried out
was to take some very
fine particulate matter,
so that could be,
like, a pollen grain,
or it could be, say,
a starch granule.
If you take a potato
and squeeze it,
you can get some
starch granules out.
And you place that grain
on the surface of a fluid.
So that might be,
like, some water.
And you observe what happens
under the microscope.
Now, if the fluid water is
really a continuous substance,
and you ask, well,
what should I expect
to see for the motion
of the pollen grain?
Well, perhaps the pollen
grain will stay stationary.
Or if there are
some currents, it
should move smoothly around
the surface of the water.
But that's not what you see.
What you actually observe,
if you follow the path
of the pollen grain,
is this very jagged motion
where the particles seem to
change direction very abruptly
at various points in time.
And the only reasonable way
of explaining what happens--
and this was what
Einstein worked out--
was to deduce that
water is not actually
a continuous substance, but made
up of little bits,
atoms and molecules.
And these abrupt changes
of direction correspond
to those molecules bumping
randomly, at random times,
into the pollen grain
or starch granule,
imparting some momentum.
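The mechanism Einstein worked out can be sketched numerically: a grain that receives many small, random kicks traces out exactly this kind of jagged path. Here's a minimal sketch; the function names, kick sizes, and grain counts are my own illustration choices, not Perrin's actual parameters.

```python
import math
import random

def brownian_path(n_kicks, kick_size=1.0, seed=0):
    """2D position of a grain after n_kicks random molecular impacts."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(n_kicks):
        angle = rng.uniform(0.0, 2.0 * math.pi)  # random kick direction
        x += kick_size * math.cos(angle)
        y += kick_size * math.sin(angle)
        path.append((x, y))
    return path

def mean_square_displacement(n_kicks, n_grains=2000):
    """Average squared distance from the start, over many grains."""
    total = 0.0
    for grain in range(n_grains):
        x, y = brownian_path(n_kicks, seed=grain)[-1]
        total += x * x + y * y
    return total / n_grains
```

Einstein's quantitative prediction, which Perrin verified, is that this mean-square displacement grows linearly with time (here, with the number of kicks).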
OK, so this was
really the experiment
that convinced scientists
that the world is
made of discrete units,
atoms and molecules.
But as you see, it's really
quite an indirect type
of evidence.
And if we go, now, to the 21st
century, to the modern day,
we have much clearer
evidence that atoms exist.
And perhaps the most evocative
image of the reality of atoms
is provided by an instrument
known as a scanning tunneling
microscope.
The scanning tunneling
microscope is, in essence,
just a circuit which has a
battery and some wires sticking
out.
And the special
thing about the wire
is that the tip of the
wire is extremely sharp.
So you can make these wires
so sharp at the bottom
that they're essentially
just one atom sharp, or one
atom thick, at the very end.
And the way you use
this microscope is you
attach it to a material.
So you attach one end of
the wire to the material.
And you take the other
end, and bring it around,
and bring the tip
all the way down
to the surface of the material.
Now, if you made the tip
touch the material itself,
then you would
complete the circuit,
and a current would flow.
But as it turns
out, you don't have
to bring the tip all the
way down to the surface.
You can leave a little gap.
And some current will still
cross that little gap.
That's called tunneling
of the current.
And the amount of
current that crosses
that gap, the strength
of the current,
depends on the distance from the
surface of the material you're
studying.
And so because atoms are kind of
round, if you move the tip
and it goes across the
top, rounded part of an atom,
you'll get a lot of
current, because you're
close to the top of the atom.
And when you move into the
bits in between the atoms,
where the surface dips down,
you'll see a small current.
And you can register
this as you move the tip,
scan it back and forth.
And so you can
produce an image of
the shape of the atoms
with the scanning
tunneling microscope.
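The imaging principle just described rests on one fact: the tunneling current falls off exponentially with the tip-surface gap. Here is a toy sketch of a constant-height scan; the decay constant, the cosine "surface," and all the numbers are assumed illustration values, not data from a real instrument.

```python
import math

KAPPA = 1.0  # assumed decay constant (order 1 per angstrom)

def current(gap):
    """Relative tunneling current across a gap (exponential falloff)."""
    return math.exp(-2.0 * KAPPA * gap)

def surface_height(x, spacing=3.0, bump=0.5):
    """Toy corrugated surface: atoms as cosine bumps every `spacing` A."""
    return bump * math.cos(2.0 * math.pi * x / spacing)

# Constant-height scan: hold the tip at a fixed height and record the
# current; it peaks over the top of each atom and dips in between.
tip_height = 4.0
scan = [(x * 0.1, current(tip_height - surface_height(x * 0.1)))
        for x in range(0, 61)]
```

Plotting `scan` would trace out the bumps of the atoms, which is what the STM image is.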
And this is a beautiful
example of corporate research,
because the scanning tunneling
microscope was developed
at IBM, as you can see.
It turns out that
you can even pick up
atoms with this technique.
If you bring the tip down,
the atoms like to stick to it.
And so they positioned
the atoms and then scanned
across to produce
this very nice image,
now almost 30 years ago.
So today we do have
direct evidence of atoms.
And we can even manipulate
the atoms themselves.
We can pick them up with
these very precise machines.
But these are not
easy experiments.
And these are not cheap
pieces of equipment.
You don't have them
in your kitchen.
And fundamentally, even though
we can directly access atoms,
it's still not-- it's never
going to be an easy task
to directly interrogate them.
And the reason for this
is just very simple.
It's because humans, we have
these fat, pudgy fingers,
right?
And atoms are extremely small.
And there are 10
orders of magnitude
in scale between the
size of a human finger
and the size of an
atom, and trying
to bridge this
scale experimentally
is always going
to be a challenge.
And so that motivates
a different way
to try to interrogate the
world of atoms and molecules.
And that's to not try to study
it experimentally, but to try
and recreate it digitally
inside a computer.
And so you can study
just the simulation
of the atoms inside
your silicon chip
rather than having to carry
out a complicated experiment.
So the goal of this
type of project
is to really produce
the same sort of thing
as you see in the movie
"The Matrix," right?
So if you remember, in
"The Matrix," what you had
was some race of
beings who had created
some simulation
of the world which
was so realistic that,
when you stuck the humans
inside of it, they couldn't
tell if that world was real
or if it was a simulation.
And that's what we're really
trying to achieve now,
but for the more modest
goal of simulating
the world of atoms
and molecules,
not the entire world around us.
Now, of course, when I
say "simulate this world,"
I don't mean generate
computer graphics
or some kind of animation.
You really want it to
be actually faithful
to the true world.
And that means that we have to
follow the physical laws that
govern atoms and
molecules, which are
the laws of quantum mechanics.
And trying to faithfully
emulate these laws
as precisely as
possible in a computer
is a goal of what I try
to do and essentially
what my whole field of research,
quantum chemistry, is about.
Now, if you're not a scientist--
and perhaps not everyone
in this room is a
scientist-- this statement
is sometimes surprising.
And this statement is
perhaps less surprising
if you are a scientist.
It turns out that
the laws of nature
are, in fact, for all intents
and purposes, completely known.
And aside from some very, very
extreme cases, for example,
at the very, very
beginning of the universe
just after the Big
Bang, or perhaps
on some minute scales of
10 to the minus 35 meters
or something--
scales that you will basically
never probe experimentally--
we know exactly how all
the fundamental
particles work and how
they interact with each other.
And if you make this
kind of statement,
you can have two
kinds of reactions.
So you might have the
reaction, well, we
know all the fundamental
laws of physics.
Well, that's the end of physics.
That's the end of the world.
And one can adopt that attitude.
But actually, what most people
think today, and the attitude
that I have myself, is
that, actually, this
is not the end of the road,
but it's the beginning
of a new and beautiful journey.
And the reason why it's
the start of a new journey
is because, just because
we know the behavior
of the individual particles
and how they interact
with each other in pairs, this
doesn't mean we understand
how all those interactions
combine when you have
assemblies of thousands,
or millions, or billions
of particles to produce
all the complexity that we
see in the world around
us, like life, for example.
So the situation, I would
say, in physics today
can be illustrated
by a chess analogy.
So the way physics is
today, we basically
know the rules of chess.
We know all the rules of chess.
There aren't any missing
rules, like capturing a pawn
en passant that we haven't
yet put into the game.
So we know all the
rules of chess.
But even though we know
the rules of chess,
that's a very different
thing from understanding
how those rules build
on top of each other
when there are many
pieces in a game
and you apply the rules
many times to generate
the game of chess.
So this fact-- that having
lots of stuff around,
even if you know the
fundamental laws,
can generate some unexpected
complexity-- is really
encapsulated in a nice
phrase by my former colleague
at Princeton, Phil Anderson:
more is different.
And more is especially
different when
you go into the world
of quantum mechanics.
So when you have many quantum
particles interacting,
they can produce some truly
exotic and unexpected behavior.
And it's the
challenge of capturing
these unexpected
types of phenomena
that is the challenge of
simulating the quantum world.
OK, so now, let me talk a little
bit about quantum mechanics
and what the quantum world is.
And to place quantum
mechanics in perspective,
it's useful to understand
where we apply it
as physicists, the
kinds of phenomena
we apply it to as physicists.
And so we can divide the
phenomena in the universe
or in the world into
different categories based
on length and time scales.
And so on the larger scales
of the universe, then really,
the right theory to use to
answer questions people pose,
like why galaxies
are distributed
the way they are in
space, is the theory
of general relativity.
And then, when we
go down, perhaps,
to the human scale, which is the
scale of about a meter in size
or a second in time, then the
most convenient theory to apply
is the good, old
Newtonian mechanics
that Isaac Newton worked
out 400 years ago.
It's not an exact theory.
It's an approximate theory.
But it works so well,
and it's so easy to do,
that that's the best thing to
do for human-size phenomena.
And then when you go beneath
that scale, down to, say, 10
to the minus 6 meters
or below, then that's
the regime where quantum
mechanical effects become
important.
So I like to say that
quantum mechanics is really
the theory of the small.
But even though it's
the theory of the small,
it has very big consequences.
Because almost all the
properties of materials that
you see around you
do not-- or cannot--
arise if you just do
classical mechanics.
They arise due to intrinsic
quantum mechanical behavior.
So if I were to
ask, say,
why a material is
a certain color,
or why something is a metal and
something else an insulator,
or even, if I take a gecko,
why its feet are sticky--
these are all things that
arise due to quantum effects.
And they don't exist
in the classical theory
or in relativity.
OK, so we're going to look
at quantum mechanics applied
to atoms and molecules.
And if the last time you did
chemistry was in high school,
then this would maybe be
your picture of an atom.
So you have a nucleus,
and the electrons
are fizzing around in orbit.
And maybe if you
did AP chemistry,
then you'd also learn that
the molecule has bonds in it.
And those bonds are
made by sharing pairs
of electrons with each other.
Now, since everyone in
the room is an adult,
I think it's safe
that I can say this,
that everything you
learn in high school
chemistry or physics is
a complete lie, right?
It's all completely false.
Because that's not really
what an atom looks like.
It doesn't look like this.
You don't have these
well-defined particles
going in nice orbits.
Really, an atom
is kind of fuzzy.
And that's a general
feature of quantum mechanics:
everything that you normally
think of as a very
well-defined object is
kind of fuzzy around the edges.
Now, "fuzzy" is not
a very precise term.
But the technical
term for fuzziness
is that things
behave like waves.
And so the motion
of a fuzzy particle
is not the motion of a
point-like particle;
it's the motion of a wave.
And we see this
from the following.
So if you imagine a
non-fuzzy particle,
like a billiard ball-- OK,
it's very discrete and hard--
and you think of how it moves
through space, you can describe
its position, for example,
from its center of mass,
at every point in time.
And it's very, very clear.
But if I take a fuzzy
object, its fuzziness
means that it has some shape.
And if it has some shape,
then, while traveling,
the shape can also distort.
So a fuzzy object will
be something whose shape
can change with time.
And mathematically, the motion
of something with shapes
is just wave motion.
The wave is just something
that has a waveform.
And that waveform,
it can move around.
And the waveform, the
shape can change with time.
So the individual quantum
particles, like the electron
or like the proton,
and so on and so forth,
they have this
intrinsic fuzziness,
which means that when
they move around,
their motion is best described
by mathematical equations which
look like the equations
that describe waves.
Now, the juxtaposition
of these two words,
"wave" and "particle,"
which in colloquial usage
have very different meanings,
causes all sorts of angst
when you first come
across quantum mechanics.
And you can see this angst
when you go into the internet.
And let's say you type
something into a search
engine like Yahoo.
And I report here
the precise, verbatim thing
that came up when I searched
for "wave-particle duality."
So it's this thing on
Yahoo Answers, right?
"In quantum physics,
how can one particle
could exist in multiple places?
Duh."
It's notable
because I don't know
if that's a question
or an answer, actually.
But the basic issue
is, of course,
if something is fuzzy
around the edges,
it's kind of hard
to say where it is.
So if you were to look at a
wave, just a classical water
wave, and you were to
say, well, is it here?
Or is it here?
Or is it there?
Or maybe it's a little
bit in all the places.
Well, you know, it's
not entirely clear
what you would answer.
And that's the same
difficulty in answering
the question about where
a fuzzy particle is
in quantum mechanics.
So if I take this fuzzy
electron and I say, is it here,
is it here, is it here-- I
carry out some measurements
to detect where it
is-- you will actually
find that your
measurements report that,
some percent of the
time, they'll read out
that the electron is here.
And some percent of the
time, they'll read out,
the electron is here.
And some percent of the
time, they'll read out,
the electron's there, OK?
So quantum mechanics,
because of its fuzziness,
gives rise to an intrinsic
probabilistic spread
in the results of measurements.
If you ask, is something
going this way or that way,
you don't get a definite answer.
You get just some probability
that you see this or that.
So another way that we can
think about this fuzziness
is that really the
particle is a combination
of different scenarios for
different measurements.
So a single particle, if
you carry out a measurement,
might seem to be here 10% of the
time, and here 20% of the time,
and here 40% of the time.
And so I can actually
think of a particle
as being a superposition of all
these possibilities weighted
by some probabilities.
And I can write down a function,
a probability density,
for observing,
say, the particle
here, or here, or here.
And this probability
density is just one step
removed from the most
fundamental object in quantum
mechanics, which is the
wave function, which
is the quantity on
the right-hand side.
And so this wave function
is the mathematical object
that behaves a bit like a wave.
But it really is a
wave of probabilities
of different measurements.
That's the wave that is being
propagated in the equations.
And indeed, if I look at
the mathematical equation
for a water wave, it
basically tells you
the height of the wave
at different positions
on this coordinate.
And this mathematical
equation looks almost exactly
like the quantum wave equation,
except that now we interpret
this height of the wave that's
being solved for
as telling you
the probability of
seeing different things
in your quantum
mechanical experiments.
Now, you can ask, well, how
do I solve this equation?
You know it's a
differential equation.
And in certain simple
cases, you can solve it
on a piece of paper.
Like when we teach this
to our undergraduates,
we give them the
very special problems
that they can solve
on a piece of paper.
But let's say you can't
solve it on a piece of paper.
It turns out this equation
is still very easy
to solve on a computer.
Because if you take
this probability wave,
you want to find out its
value at different positions
in space.
So what you would
do on your computer
is you'd put down a grid in
space, maybe a billion points.
But a billion points,
a billion numbers,
is a very small set of numbers
for a modern computer, OK?
So even if you put down
this very, very dense grid
with many, many
points over space
to try and solve this equation,
it's actually very easy to do.
So solving this kind
of Schrodinger equation
is essentially trivial
on modern fast computers.
And so does this mean that
simulating quantum mechanics is
very easy?
I tell you the equation,
it's easy to solve?
And the answer is, well,
no, because so far,
I've only talked about
a single particle.
So now let me talk about
two-particle and many-particle
quantum mechanics,
which is where
things start to become
mysterious and harder.
OK, so first, let's talk
about two particles.
And I'm going to illustrate in
some abstract way, because I
don't know how to draw
it out, some quantum
state of two particles
that, say, are somewhere
distributed in the box.
And in just the same
way that I could
think about the state
of one particle,
one fuzzy electron, as being
a mixture of probabilities
of an electron being here,
or an electron being here,
or an electron being
there, I can similarly
think of the state
of two particles
as being some superposition,
a mixture perhaps,
of the two electrons
being distributed
like this or the two electrons
being distributed like this.
That would be just an example
of a fuzzy state of the two
particles.
Now, the interesting thing
about this particular mixture
of positions is
that it actually has
what we call a correlation
between the particle positions.
In other words, if I
just look at these two
as a set of possibilities
for what the system's doing,
you will notice that
if I make a measurement
and I see a particle on the
left-hand side of the box,
then you will know, because the
system's only made of these two
possibilities, that there
will be a particle always
on the right-hand
side of the box.
So the two particles always
seem to be correlated.
If you measure one on the
left, then the other one
will be found on the right.
If you measure one on the right,
one will be found on the left.
So these correlations
between the properties,
between the measurements
on individual particles,
is what is known as
quantum entanglement.
Now, entanglement is yet another
of those aspects of quantum
mechanics which causes
a lot of confusion
and makes people unhappy.
And so once again, if
you go to Yahoo Answers,
you see something like this.
"I read about this phenomenon.
It seems quantum
entanglement transmit data
with infinite speed.
Is this correct?"
Now, you know it's the internet.
So, of course, it's not correct.
And this is the most
common misconception
about entanglement,
that it allows
particles to transmit data
very quickly between places.
But I think it's
worth understanding
how this misconception arises.
And this misconception
arises because we
build a mental model that
actually has a flaw in it, OK?
So the way in which you
can look at a problem that
makes it seem as if data
is being transmitted
with infinite speed
is the following.
So if I say, let's
take the same system
that exists in the superposition
of these two possibilities,
and I ask what's the
probability, just overall,
of seeing a blue particle
on the left or the right,
you will find that there's a
50% chance of the blue particle
on the left and a 50% chance
that the blue particle's
on the right.
And the way we usually
think about this
is that this is an intrinsic
property of the particle.
In other words, if you
are the blue particle,
then you'll have a
50% chance of being
a left blue particle,
a 50% chance of being
a right blue particle.
And similarly, you
can say the same thing
for the red particle, right?
So you can say, well,
if I'm the red particle,
and you look at it, then
there's a 50% chance that it's
on the left or the right.
And you think of that
as an intrinsic property
of the red particle.
And so it becomes
very mysterious
with this kind of mindset that,
if you find the blue particle
on the left, then
all of a sudden,
you see the red particle with
100% probability on the right.
You seem to have changed some
fundamental intrinsic property
of the red particle.
And if you think from
that kind of mindset,
then it seems like
the blue particle
has told the red particle to
suddenly appear on the right.
And this is what
makes people think
that you can
transmit information
through entanglement.
But it's, of course, just
the wrong mental picture.
Because what your mind is really
constructing is the following.
So when I tell you there's
a 50% chance you'll
see red particles on
the left or right,
you make this mental image
of where the particles are.
And then when you say
there's a 50% chance
of a blue particle
on the left or right,
you make this mental image.
And then for the total combined
states of the two particles,
inside your head it's most
natural to think that it really
looks like this, that there's a
25% probability that the system
is in this state, or in this
state, or in this state,
or in this state.
But in actuality,
there's no reason why,
when I built the quantum
state of two particles,
that I had to prepare
it in such a way
that it has this set of
probabilities associated
with it.
In fact, the particular
quantum state I prepared
did not even contain two
of those four possibilities.
It was just a
mixture of the other two.
And so all that's
happening when you
see all the effects
of entanglement
is it just reflects the way that
the quantum state was prepared
in the beginning,
not as, perhaps,
the most intuitive
superposition of states,
where all these possibilities
are allowed, but as, perhaps,
some slightly nonintuitive
or peculiar combination.
OK, now, so far, I've been
talking about one particle, two
particles.
You'll notice that even
when I was talking about two
particles, even if
these particles only
had the property that they could
be on the left or the right,
there are still four
possibilities that I
had to keep track
of in principle
for the different possible
things I could see, right?
So I have to keep track
of the probability
that I see this event
or the probability
that they distribute like this,
or like this, or like this.
So there are four
possibilities there.
And I've written down numbers.
0 means the particle's
on the left, 1 means
there's a particle on the
right. And the probabilities
are this function, psi.
Now, it's not hard to see that
if I now go from two particles
to many, many more particles--
let's say 1,000 particles--
that you'd have to keep track of
many, many more probabilities.
So, in particular, if
I had 1,000 particles,
and each particle
could be either
on the left or the
right, and I had
to keep track of all these
possible configurations that
can be mixed together
in the quantum state,
then I'd have to write
this wave function that
is a function of a binary
string of numbers where
you have 1,000 zeroes or ones.
And that's an enormous
set of numbers.
So that's 2 to the 1,000,
which, if I've got it
right, is about 10 to the 300.
But in any case, it's a very,
very large-- very large number
of possibilities.
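The counting itself is easy to check. For n particles that can each be on the left (0) or the right (1), there is one amplitude per binary string, so 2**n numbers in all; Python's big integers let you compute the 1,000-particle case exactly.

```python
# One amplitude per binary string of length n: 2**n numbers.
def n_amplitudes(n_particles):
    return 2 ** n_particles

# For 1,000 particles this exceeds 10**300 -- far more numbers than
# there are atoms in the observable universe (roughly 10**80).
digits = len(str(n_amplitudes(1000)))  # 302 digits, so about 1e301
```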
And it's this
gigantic blossoming
of possible ways of particles
behaving simultaneously
that makes "more" in quantum
mechanics very, very hard.
This leads to a kind
of depressing viewpoint
on quantum mechanics.
Because what we're
saying is we know
these fundamental
equations of nature.
But when you have more than
just a handful of particles,
you have to keep track
of an exponential set
of possibilities.
And that just doesn't
seem feasible.
And indeed, if you were going
to go back to the 20th century,
you would see statements
written all the time
that are really
quite depressing.
So, for example, this
is from the year 2000.
David Pines and Bob Laughlin,
both very famous people--
Bob Laughlin was
a Nobel Laureate
in physics for the fractional
quantum Hall effect.
And they wrote, "The Schrodinger
equation cannot be solved
accurately when the number of
particles exceeds about 10."
And then they go on to
make the precise prediction
that no computer existing
or that will ever
exist can break this
barrier because it's
a catastrophe of dimension,
by which they simply
mean that if you increase
the number of particles
and have to count how
many possibilities
for the joint behaviors
of all these particles,
it's growing so rapidly that you
will never be able to tackle it
on an ordinary computer.
But what we've really learned
though, in the last 15 years--
you know, the time since
this type of statement
used to be popular-- is
that this complexity is,
in almost all cases, not
actually really there,
you know?
It's what we call an
illusion of complexity.
It appears to be there when
you write the equations down
at first, but if you actually
look in the world around you
for it, you don't
actually see it.
So basically, there are
all these possibilities
for how a quantum
system will behave,
but they're just very,
very hard to observe.
So a concrete example of this is
if you take Schrodinger's cat,
OK?
So Schrodinger's cat
is a quantum state
that's a mixture of a state
where all the atoms are arranged
such that the cat is dead
with a state where the atoms
are arranged such that
the cat's alive, OK?
So you can take-- it's a
valid quantum state to say,
I'm in this fuzzy
picture where the cat is
50% dead and 50% alive.
But as we all know,
if you actually
try to look for
Schrodinger's cat,
you never see Schrodinger's
cat in reality.
So for most physical
systems-- I like this cartoon.
So for most physical
systems, this exponential set
of possibilities is not real.
And you can go back and ask,
why does nature not allow
this complete exponential
set of possibilities,
at least at the energy scales
that we usually look at?
And the deep reason is that we
live in quite a special type
of world.
It's not like a world where
you have arbitrary interactions
between all the particles.
So if you take the
fundamental particles,
they only interact
pairwise at a time.
And the interactions between
the pairs of particles
are very simple.
So if I take atoms
or molecules, the Coulomb
interaction is just something
that's strong when the
particles are close together.
And when you go very far
away, it dies off to zero.
So it's this very simple
nature of these interactions
which are essentially
applied many, many times.
So when you have a system where
the electrons are bumping off
each other many, many times,
they feel these interactions
many, many times, and these
simple interactions impart
their structure to the
resulting quantum states that
we see and make them,
correspondingly, very simple.
So the type of
possibilities that we see,
the type of superpositions
of many particle states
that are physically
relevant are not
these generally
entangled states,
which are mixtures of arbitrary
possibilities for the quantum
systems, but only what are
so-called locally entangled
states.
And by local
entanglement, I mean
that if you take a
system with particles
that are stuck in the
small region of space,
then over this small
region of space,
you do need to keep track
of all the possibilities
for what the quantum particles
can do simultaneously.
But if I go to a
larger problem, where
you have well-separated regions
for the different quantum
particles, then you find that
if you can make measurements
on, say, the part
of the system here
and the part of the system
which is outside the door,
then in the typical state
that we see in the world,
there is no correlation
between those measurements.
They behave as if
they are independent.
And so this locality
principle, or this principle
that you generally only
see local entanglement,
means that you
don't need to keep
track of this exponentially
growing set of possibilities,
but only the possibilities
that occur together
in small nearby
regions of space.
And so this removes
the complexity, one can
see, in some qualitative sense.
And then the
technical question is,
how do I rewrite the laws
of quantum mechanics,
or rewrite the mathematics
of quantum mechanics,
to build this physical
simplification in.
OK, so to do this,
I'm going to have
to introduce what
entanglement is
in a slightly more mathematical
way, but not much more.
So consider a system which
has got two regions in it, OK?
So it's got a left region,
it's got a right region.
OK, 1 and 2.
And this symbol n will
denote some property
that you measure in the region.
So, for example, you
might think n denotes
the number of particles you
would find in a given region.
So you can write down
your wave function, which
describes the
probabilities of finding
a set of particles in region 1,
a set of particles in region 2.
And if you have no entanglement,
then this wave function,
this probability function,
would just factorize.
So that tells you, because
the probabilities factorize,
that there's no correlation
between measurements
in regions 1 and 2.
OK, so what does it mean for
you to have an entangled state?
Well, in that case, this wave
function doesn't factorize.
And instead of writing it just
as a product of probabilities,
you now have to write it
as a sum over products
of probabilities.
And there's this index
of summation here.
And it's this summation
over many types
of probabilities which
generates the entanglement.
Loosely speaking, if you
have more terms in the sum,
then you have more
entanglement in the system.
And you can think of these
individual probabilities
appearing now as conditional
probability amplitudes.
So this basically says, what's
the probability of measuring
the property n1 in region
1, given some information i
that has been communicated
from the neighboring region 2.
OK, so we can see
now, mathematically,
how you can generate
entanglement just
in these mathematical symbols.
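This idea-- more terms in the sum means more entanglement-- can be made concrete with a small numerical sketch (illustrative numbers, not from the talk): for a two-region state, the minimal number of terms in the sum over products is the number of nonzero singular values of the wave function viewed as a matrix.

```python
import numpy as np

# A two-region wave function psi[n1, n2], viewed as a matrix.
# Product (unentangled) state: psi factorizes as a1[n1] * a2[n2].
a1 = np.array([0.6, 0.8])
a2 = np.array([1.0, 0.0])
psi_product = np.outer(a1, a2)

# Entangled state: an equal superposition of |00> and |11>,
# which cannot be written as a single outer product.
psi_entangled = np.zeros((2, 2))
psi_entangled[0, 0] = psi_entangled[1, 1] = 1 / np.sqrt(2)

# The number of terms needed in the sum over products is the
# Schmidt rank: the number of nonzero singular values of psi.
def schmidt_rank(psi, tol=1e-12):
    return int(np.sum(np.linalg.svd(psi, compute_uv=False) > tol))

print(schmidt_rank(psi_product))    # 1 term: no entanglement
print(schmidt_rank(psi_entangled))  # 2 terms: entangled
```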
And that now allows
me to explain what
I mean by local entanglement.
So, for a state with
local entanglement,
let's first consider a system
with three spatial regions.
So let's generalize
it a little bit.
And first, the state
with no entanglement,
this probability,
this wave function,
probability amplitude
then, again,
just factorizes into three
sets of probabilities.
And now I know that if I
want to put entanglement back
into the system,
I have to write it
as a set of sums
of probabilities.
But I write in a special way
so that, only locally, regions
are coupled together.
And so this would be a
locally entangled state,
where I'm now summing over sets
of probabilities associated
with regions 1, 2, and 3.
But the summation is such that
the probabilities in regions 1
and 2 are directly coupled, and
the conditional probabilities
in 2 and 3 are directly coupled.
So if it were a more
general entangled state,
I would also have to couple
regions 1 and 3 directly.
And so this is a mathematical
form of the wave function
where there is only entanglement
explicitly between regions 1
and 2, and 2 and 3, which,
if you imagine the regions
on the line, means that
the entanglement is only
present locally.
So I can generalize that
to arbitrary regions.
So let me now not just have
three regions, but an arbitrary
number of regions, l regions.
And now this complicated
wave function
is again written as a
product of probabilities
that are all coupled
together in this local way.
And usually, this is a
formula with lots and lots
of variables and indices.
And it's kind of tricky to write
out all the time like this.
So usually we use a notation
that is more of a picture.
And so I say, well, on
this left-hand side,
I have this gigantic
set of numbers.
It's a gigantic array of
numbers that's inside the wave
function, or a gigantic tensor.
And I draw this array
or tensor like this,
where these are the indices
of the array or tensor.
Here I've just drawn three
indices, not l indices,
but just for simplicity.
And then, this type of
approximation to this tensor
is equivalent to saying,
I can write this picture
in terms of these
little probability
objects, these lower-dimensional
tensors, lower-ranked tensors,
and connect them all together,
sum them over these indices i1,
i2, which is denoted
by these bonds.
Now, the reduction in
complexity in storage,
moving from this picture
to this one, is, of course, immense.
Because on the
left-hand side, you
have an exponential
set of numbers,
exponentially large set
of numbers in the tensor.
And on the right-hand
side, you only
have a linear complexity in
terms of how much storage
you have as the
system size grows.
So going from an exponential
to a linear is, of course,
you know, an amazing
reduction in cost.
But the only reason
why we can achieve
this is because the types
of states we see around us
are very, very special
quantum states and not
the arbitrary types of quantum
states, in principle, allowed.
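That exponential-to-linear reduction can be sketched in a few lines, with illustrative dimensions (not numbers from the talk): build a chain of small tensors, one per region, coupled only to their neighbors, contract them into the full wave function, and compare how many numbers each side stores.

```python
import numpy as np

L, d, D = 10, 2, 4  # regions, physical dimension, bond dimension

# One small tensor per region, connected only to its neighbors.
rng = np.random.default_rng(0)
tensors = [rng.normal(size=(1, d, D))]                      # left edge
tensors += [rng.normal(size=(D, d, D)) for _ in range(L - 2)]
tensors += [rng.normal(size=(D, d, 1))]                     # right edge

# Contracting the chain left to right recovers the full tensor...
psi = tensors[0]
for A in tensors[1:]:
    psi = np.tensordot(psi, A, axes=([-1], [0]))  # sum over one bond
psi = psi.reshape(d ** L)  # 2**10 = 1024 amplitudes

# ...but the storage is linear in L, not exponential:
mps_params = sum(A.size for A in tensors)
print(psi.size, mps_params)  # 1024 amplitudes vs a few hundred numbers
```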
OK, so here, I've generated
local entanglement in some very
one-dimensional way.
I've joined region 1
to 2, 2 to 3, 3 to 4,
and so on and so forth.
But you can, of course,
connect these regions
in an arbitrary network.
And so, for example,
this was the connection
I showed earlier.
That's suitable in the
one-dimensional geometry.
And this is something
known as a matrix product
state, which people
have studied in physics
for more than 20 years now.
But you can also
consider other ways
of joining all these
tensors together.
And this would be
an example of how
you'd join them
together to describe
a two-dimensional system.
And you could join
them in a cube
to describe a three-dimensional
system, and so on and so forth.
OK, so all this now presents a
concrete mathematical machinery
to write down only the
relevant quantum states that
actually occur in nature.
And we can now use this
machinery to actually carry out
a simulation, not just
for one particle, or two
particles, or 10
particles, but really,
for hundreds of particles.
And this is something
that my group has
done over the last few years.
And you can see a
few examples of this.
So this would be a
simulation on the active site
of the oxygen evolving
complex, which is sort
of the heart of photosynthesis.
So if you think what
is it that plants do,
they take sunlight in,
and they convert water
into oxygen and other things.
And it all takes
place in a few atoms
in the middle of an enzyme.
And I'm only going to
show those few atoms here.
They're surrounded
by all sorts of gunk,
but I'll just show you
the important atoms.
And if I were to try and write
down a wave function that
described all the
possible quantum
behaviors of the
electrons in these atoms,
even making lots and
lots of simplifications,
it would require a very large
number of elements, about 10
to the 18 elements.
But if I instead build
the wave function
by writing down tensors on
each of the atoms-- in fact,
you write down a few
tensors on each atom,
because there's more than
one electron per atom--
then you can represent this
wave function, essentially,
all the way down to numerical
precision, in other words,
essentially numerically
exactly with a very, very
small number of elements.
And given this
parametrization, I
can solve the wave equation,
the Schrodinger equation.
And I can ask questions about
how this molecule behaves.
And this, for example,
is just a pretty picture
that just shows you how the
entanglement is spread out
across this molecule.
I'll show you another
example, which, I think,
realizes the dream we
had in the beginning
of realizing the Matrix
for atoms and molecules.
And that was recreating the
world of atoms and molecules
so precisely that we
can't tell the difference
between the simulation
and the experiment.
And so this is something that
we did for a simple material,
which is benzene crystal.
It's just a crystallized
organic system.
And here, we asked
a simple question
about this material,
which is, what's
the amount of energy that's
holding the material together.
That's the so-called
lattice energy.
And you can do this calculation.
It has lots of terms in it.
And so you compute all
these different terms.
And you add them all up.
And then you obtain
a number which
is the theoretical estimate,
just from quantum mechanics
and with no other
assumptions, for the lattice
energy of the system.
Now, this is a model
material that has
been studied many, many times.
Over the last century,
it's been measured
through many, many experiments.
And there's an accepted
experimental number.
And the accepted
experimental number
is this number here--
51.5 kilojoules
versus 56 kilojoules.
Now, you might say,
that's not so different.
That's pretty good.
But actually, they
don't actually
agree within the error bars.
And so that leads you to two
inescapable conclusions, only
one of which can be right, OK?
And assuming that
we haven't actually
made a mistake in
what we were doing,
so either-- because all we did
was solve quantum mechanics
as exactly as possible, if
we see this disagreement,
either you're left
with the conclusion
that quantum
mechanics are wrong,
or you're left
with the conclusion
that all the
experiments are wrong.
And of course, the answer is
that the experiments are wrong,
right?
So if you go back
and look at what
the experiments
are measuring, they
see something on the dial.
And they write it down
in their lab book.
And then they have to
interpret what that number is.
And there's a lot
of stuff that's
hidden in the interpretation
of the experimental number.
And if you go back through
the experimental literature
and do that interpretation more
carefully and more accurately,
that will bring that
experimental number
back up into agreement with
theory, which is really
how a theorist likes
the world to work
and is an example of a case
where we really can simulate
the world of atoms and
molecules in the real material
more precisely now
than we can measure it
with accurate experiment.
OK.
Now, let me-- I don't
know what the time is,
but I think I'm coming
towards the end.
Is that right?
So let me finish with
a few slides that
explain why I'm down here
today, so the Google and I.
This has got two parts.
And the first part
is the following.
If I take tensor
networks and I describe
them, which was this way I
wrote down the wave function,
and I describe them just in
an abstract, mathematical way,
then they represent the
nonlinear parametrization
of a complicated,
many-variable function.
And there might
be some of you who
have seen this type of phrase
used in a different setting.
Because an artificial
neural network
is also just a nonlinear
parametrization
of a complicated,
many-variable function.
And so you can ask, well, is
this similarity a deep analogy,
or is it just words?
And the answer to
this is, no one really
knows for sure right
now, but it really
looks like there's a very
compelling resemblance
between the two.
Because if I look, for
example, at a tensor network
we didn't discuss-- this is
a MERA, one that people
also use-- this picture looks,
really, very similar
to this picture.
Now, the individual
symbols in the picture
actually do not
mean the same thing.
So, for example,
each of these lines
here represents a summation.
Now, in this type of
neural network picture,
this just means an input, OK?
So it's not a
direct equivalence,
but there's clearly a lot of
analogies between the two.
And so one can then
ask the question,
well, let's say it is
a deep analogy-- then
what are the implications?
So one of the first
implications would
be to say, well, we know
a lot about physics,
quantum physics and the
physics of tensor networks.
And we have all these
well-defined theories,
renormalization groups,
theory of entanglement,
that we use to understand
tensor networks.
Well, perhaps these can
be used to understand
how neural networks
actually work, or say,
for example, how
deep learning works.
And people actually have
begun to embark on this task.
And there's, for example,
a paper which, I think,
is well known by now from
Mehta and Schwab, which
maps the renormalization
group which lies behind tensor
network algorithms onto the
structure of deep learning
and vice versa.
Now, another
implication is we have
so many artificial neural
networks in the world.
I'm very new to this field.
And all I can say is,
every time I read about it--
you ask, what's an
artificial neural network?
Well, there are millions
of different kinds, millions
of examples.
And so perhaps some of
these neural networks
may, in fact, be useful
in quantum simulations.
And there are quite a few
people who are actually
thinking about this.
But the first
paper that appeared
is from Matthias Troyer's
group, where they actually
use a type of neural
network, I think,
called a restricted Boltzmann
machine to simulate some
quantum many-body physics.
And then, the final thing
is-- a final connection
is, even if we stay
within these tensor
approximations, these networks,
in our respective domains--
so let's say I just want to
work with tensor networks
in physics and chemistry,
because that's what I feel
comfortable with--
nonetheless, there
are all sorts of algorithms
and optimization algorithms,
for example, gradient descent
algorithms and backpropagation,
and all these kinds
of things, which
were invented in the artificial
neural network community, which
actually haven't been applied
in the regime of physics
simulations.
And you can try and transport
this information from one
community to the other.
And so that's something that we
did very recently, for example,
just as a little fun
project, where we implemented
a calculation using a tensor
network representation
of the wave function.
But we did it inside TensorFlow.
So we repurposed the
neural-network-type
architecture, TensorFlow, to
do these quantum simulations.
And this allowed us
to do, for example,
automatic differentiation,
which had never
been used in tensor
network simulations before.
And here, they're showing a
simulation of the hydrogen
cluster using the
typical gradient descent
algorithm in TensorFlow.
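The details of that calculation aren't spelled out here, but the optimization idea-- minimize the energy of a trial state by gradient descent-- can be sketched in plain numpy, with the gradient written out by hand rather than obtained by automatic differentiation. The Hamiltonian below is an arbitrary small matrix, chosen only for illustration.

```python
import numpy as np

# Toy "quantum simulation by gradient descent": minimize the energy
# <psi|H|psi> of a small Hamiltonian by plain gradient steps.
H = np.array([[ 1.0, -0.5,  0.0],
              [-0.5,  0.5, -0.5],
              [ 0.0, -0.5,  2.0]])

psi = np.ones(3) / np.sqrt(3.0)  # normalized trial state
lr = 0.1
for _ in range(500):
    e = psi @ H @ psi                 # energy of the current state
    grad = 2.0 * (H @ psi - e * psi)  # energy gradient on the unit sphere
    psi = psi - lr * grad
    psi = psi / np.linalg.norm(psi)   # project back to unit norm

energy = psi @ H @ psi
ground = np.linalg.eigvalsh(H)[0]
print(energy, ground)  # gradient descent reaches the lowest eigenvalue
```

In the actual tensor network calculation, the trial state has far more parameters and the gradient is what automatic differentiation in TensorFlow computes for you.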
So I'd like to-- it's really the
beginning of this field that's
about the interface
between machine learning
and many-body quantum physics.
And it's something that
I and others in the field
are very interested in.
And it's something
that I think would
be interesting to have a
conversation with people here
at Google, who have so much
expertise in the machine
learning side especially.
OK, now, there's the
second reason why I'm here.
The second part of
the Google and I
is probably what I'm going
to spend most of my time
today doing.
And you'll notice
that everything
that I've been talking about has
been trying to simulate quantum
mechanics by using the
computers we already have,
the Intel chips.
And you know you have a
Quantum AI group here.
And they're trying to
build a quantum computer.
And one of the
things they're trying
to use the quantum computer for
is to do quantum simulations.
And so the key question
to answer there,
if you want to establish
so-called quantum supremacy,
where you can show a quantum
computer is better than
a classical computer in
every possible aspect,
is to try to find the type
of quantum problem which you
cannot simulate in a classical
computer and then get them
to do it, get the
Quantum AI lab to do it.
And so trying to
understand the crossover
between the classical
and quantum simulations
of the quantum world is really
a very important question
to answer if
quantum computing is
to become an accepted and useful
tool for physical simulations.
OK.
And so that's pretty much it.
So that brings me
pretty much to the end.
So I gave you, as I mentioned
at the beginning, a slightly
odd talk today that was
pitched at different levels.
And I only talked a little bit
about one part of my research.
But generally
speaking, in my group,
we try and understand
many-particle quantum
mechanics with tools
from chemistry, physics,
and computer science.
We work a lot on methods like
the tensor network methods
I showed you before.
We think about different kinds
of material applications.
And that's basically it.
So if you're at Caltech,
come and visit.
And thank you all
for your attention.
[APPLAUSE]
RYAN: If anyone
has any questions,
you can go ahead and ask.
AUDIENCE: So when
you were talking
about how many interactions
that you needed to
simulate in the early
part of the talk--
and I guess that was still where
you were in the simple part.
So you were basically
talking about things
in kind of 1D manner,
where you only
had two particles
interacting at a time.
GARNET CHAN: And they could
only be left or right,
in the left or right hand.
AUDIENCE: But then
you got to benzene.
And the first thing
I was thinking of
was, OK, well, what
about aromatics,
where you've got fundamental
interactions between six
electrons at a time.
And I thought about benzene.
And I thought, well, but benzene
has all these symmetries.
So what about toluene, just
break the symmetry that way?
GARNET CHAN: Yeah.
AUDIENCE: How hard is it to
completely simulate toluene?
I suppose it's easier than
the photosynthesis thing.
GARNET CHAN: Yeah,
so you don't really
need to make any assumptions of
symmetry in the calculations.
And so the idea that you want
to-- so the general framework,
in principle and in practice
in all of this is developed.
But in the general
frameworks, you
say, I put down a
toluene molecule.
I put down a grid in space to
describe the different regions
of the toluene molecule.
And then I build up the total
wave function for a system
by putting down little arrays
on each of those grid points
and connecting
them all together.
And that's the vision of what
one really wants to achieve.
In practice--
AUDIENCE: But how many
particles at a time
do you have-- what's the
dimensionality of this problem,
right?
If you put three dimensions
for every particle,
what's the dimensionality
of the whole thing
that you have to model.
GARNET CHAN: Yeah, so
if the particles don't
talk to each other, you can
see, even if each particle has
1,000 degrees of freedom
associated with it,
you don't multiply out
that complexity when
you have many particles.
And that's what this technique
is allowing you to do.
Because in the arbitrary
quantum wave function,
if you represent one
particle, one region
with 100 bits of information,
then you would need,
for 1,000 regions, 100
to the 1,000, right?
But you are writing down
the total wave function
in a special way that
you actually only need
a linear amount of information
in the number of particles
and the size of the system.
AUDIENCE: But that's only if
you have just the two particle
interactions, right?
What if you have the-- I
saw that you took it up
to four electron interactions
for the benzene crystal.
GARNET CHAN: Yes.
So it's something which you can
do so long as there isn't very
complicated,
many-particle correlations
involving all the electrons
generated at the same time.
But that's something that
you do see in a system.
So even if you are
in a system where
the electronic behavior of
seven or eight electrons
close together appears
to be complicated,
if you're over a
large region in space,
the behaviors tend to decouple.
So there's always just--
there's some finite cut-off
on how complex it can become.
AUDIENCE: Thank you.
AUDIENCE: I'm not
an expert in this,
but I'm trying to keep up.
And I was wondering-- something
you were saying, in my head,
related to Bezier
curves or NURBS
curves as far as
trying to sum together
what the-- how they interact.
Am I getting a correct
mental model there,
or am I way off base in how
I view things in my head?
GARNET CHAN: Also, I'm not
an expert in the things
that you're saying.
But as I understand,
a Bezier curve
is a way of representing an
arbitrarily complicated curve
by breaking it down
into little segments.
Is that--
AUDIENCE: It seems very similar.
GARNET CHAN: So there's
some similarity to that.
But the thing that makes a
quantum prob-- but the quantum
problem is harder
than the problem
of approximating a curve.
So the problem of approximating
a curve, if you define
the complexity as being related
to the length of the curve, it
should get harder and harder
to approximate a longer curve.
But nonetheless, if you
approximate a curve that's
twice as long, it's
really only twice as hard.
You need to use twice as
many interpolation points,
or little Bezier segments, to
approximate the curve being
twice as long.
But in quantum
mechanics, you're trying
to approximate not
just a curve that
has this one-dimensional
line, but it's
some function in a very,
very high-dimensional space.
And so you find that
when you increase
the number of particles
from one to two,
the complexity doesn't double.
It's quadratic.
And if you go from 1 to 10,
the complexity explodes.
So there's some similar
themes of approximation,
but there are some ways in
which it's a harder problem.
AUDIENCE: Is there a
distance at which you just
disregard the connections?
GARNET CHAN: Yeah, so is there
some distance beyond which you
disregard, say, different
bits of the system
talking to each
other, for example.
AUDIENCE: For
example, in some ways
of doing the curves we
were just discussing
is they might take n, where n
might be 4, or 5, or something.
And if there's 1,000
items, you only
ever consider nearby neighbors
as far as plotting the wave
across them.
Because if it gets too far
away, it's inconsequential.
GARNET CHAN: So there is
something like that going on.
So that's, again, a
locality principle.
And that's also kind of
what is saving you here.
So it's saying if I'm trying to
describe the curve of the wave
function, so to speak,
with this funny function,
then indeed, you
only tend to need
to think about
variables associated
with behavior nearby.
And that's a locality
type of argument.
AUDIENCE: I was wondering how
big n might be, not, I guess,
n of the whole set.
But how far-- is it a handful?
Is it 5 or 10?
GARNET CHAN: Well, so
maybe the right measure--
because it's a little
bit hard to compare
the quantities you're saying
with the quantities that
enter here.
But maybe the right
measure is just how many
bits of information,
how many numbers do I
need to keep track of at a time.
And well, if you kept
track of more and more,
you'd be more and more accurate.
So the way we do our
simulations is we always keep
track of the largest-- as a
scientific computing person,
you just keep track
of the largest
amount of numbers you
can keep on the biggest
computer you have, right?
And so that means billions
and billions of numbers,
because you typically
can manipulate
many gigabytes of data, even
terabytes of data, in memory
these days.
AUDIENCE: Thank you.
AUDIENCE: So, actually,
I'm faculty at USC.
I do machine learning
and graphic model.
I'm just visiting.
So great talk.
I really enjoyed it.
So I have a question
about, looking
at how you decompose using
the tensor factorization.
It reminds me of
a graphical model.
And one of the ways of
looking at a graphical model
is it allows you to
factorize a very complicated
joint distribution into multiple
summation of tractable ones.
But the problem with
that is that when
you're starting to
do marginalization,
when you do conditional
probability, it becomes havoc.
So I'm just wondering,
a situation like this,
how would you solve that?
Second question is more
about you can easily
imagine that a special region
right now is pre-partitioned.
But you can also think
about, maybe they evolve.
In some way, you have to build
a multiscale hierarchical
organization of a
spatial region just
so you have better tractability
in terms of computation.
I'd just like to
get some comments.
And I don't know,
if you have time,
maybe I can grab him after
the meeting's adjourned.
GARNET CHAN: Sure.
Also, you're not
so far away at USC.
Come around to Pasadena.
Get some good food.
AUDIENCE: I'm actually
in San Marino.
GARNET CHAN: Oh,
you're in San Marino.
OK, yeah, you're just next door.
Yeah, so there's quite a
few things in that comment.
And so you brought
up graphical models.
And I have to say I'm no
expert in machine learning.
But my understanding
of graphical models
is they're very close to
statistical physics models.
It's like you write
down something that looks
like a partition function.
And the problem that
you're referring to
is something that I
omitted talking about.
But it's when you go from the
probability distribution itself
down to a single number,
that computation, formally,
can look like it's NP-hard.
So that's like saying,
if I-- this actually
is a representation
of the normalization
of the distribution, which is
attained by taking something
on the bottom and contracting
something at the top.
So there are techniques
for that.
If you just do this
naively, the complexity
of just creating a marginal
distribution, a reduced
distribution, is
very, very high.
It actually grows exponentially.
But there are ways in which,
every time you contract tensors
together, you reduce
the information they carry
before you add
another contraction.
And in that way, you can control
the growth of the complexity.
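The standard move behind "reducing the information at each contraction"-- at least in the matrix product state setting-- is to split the contracted tensor with an SVD and keep only the largest singular values, so the bond dimension for the next contraction stays bounded. A minimal sketch with illustrative sizes (not from the talk):

```python
import numpy as np

# Contract two tensors, then compress the result by truncating
# small singular values so the bond dimension stays bounded.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
B = rng.normal(size=(8, 8))

M = A @ B  # contraction: internal bond of size 8
U, s, Vt = np.linalg.svd(M)

chi = 4  # truncated bond dimension for the next contraction
M_trunc = (U[:, :chi] * s[:chi]) @ Vt[:chi, :]

# The truncation error is exactly the weight of the discarded
# singular values (Eckart-Young, in the Frobenius norm).
err = np.linalg.norm(M - M_trunc)
print(err, np.sqrt(np.sum(s[chi:] ** 2)))  # these two numbers agree
```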
But what you're saying is
one of the key questions
about how to make efficient
algorithms with these things.
And perhaps we know
from the physics side
some things maybe that aren't
used in machine learning.
So that would be an
interesting thing.
The second thing
you were mentioning
was whether or not
one should have
some hierarchical distribution,
decomposition in space.
And the answer is,
in principle, yes.
That would only make
the algorithms even more
complicated.
And no one's gotten
around to doing it.
This is where software
that real computer scientists
are developing tends to
flow into our field.
It is actually very useful,
because it makes it easier
to implement these things.
But in principle, yes.
AUDIENCE: I was curious.
So you described how, using
tensor network methods,
you can conquer what is just
the seeming complexity, when,
in fact, the numbers of degrees
of freedom are much smaller.
But in your second part, Google
and I, when you were saying,
hey, maybe we can use an
emerging quantum processor
to do chemistry calculations,
this would, of course,
be most helpful for cases
where these methods break down.
GARNET CHAN: Yeah,
break down, right.
AUDIENCE: Could you give us
some ideas for application areas
where you would
expect this to happen?
GARNET CHAN: Yeah.
So in some sense, this will be
what I hope to talk to you guys
about.
And we can talk more.
But the problems that--
speaking in a nonrigorous way,
and there are people here
who prove real theorems,
and I'm going to say something
that just very, very fuzzy.
But the types of quantum
states that are simple
are states that are generated
with the largest imprint
from the Hamiltonians,
the physical Hamiltonians.
And so if I think about
eigenstates of the Hamiltonian,
the way-- if you take
the lowest energy
eigenstate or the highest
energy eigenstate,
the ground state or the highest
energy state of the Hamiltonian,
that's, in some sense, arrived
at by applying the Hamiltonian
to an arbitrary state
many, many, many times.
And it projects out that state.
And so fortunately, it turns
out that at the energy scales
we look at, often,
we're interested
in these low-energy states.
And those are very simple
for classical computers,
or relatively simple.
But what a quantum
computer can access
is going to arbitrary states
that are higher up in energy.
And those become much
more difficult to describe
on a classical computer.
So, for example,
I take a problem
that we didn't talk
about-- high-temperature
superconductivity.
It's now possible for the
models of superconductivity
to do extremely good
simulations of the ground state.
But what happens at
finite temperatures
and what happens with the
excitations of the system
is much harder.
I don't know if it's impossible
for classical computers.
I don't think so, but then
I come from that side.
But if they were easier
on a quantum computer,
that would be great.
AUDIENCE: So I think I probably
agree with your premise
that, in general, the complexity
of the quantum states that we
see in nature probably
don't grow exponentially
in system size in the sense
that there is some locality
to entanglement.
And so once you get past a
certain correlation length,
you don't need to keep track
of all of the correlations.
But there's a question
of, for systems
that we might be interested
in, just how much one does need
to keep before
getting to that point
where the complexity stops
growing exponentially.
And it seems like
there are a variety
of systems, in particular,
ones that you might
be interested in for
catalysis or in, say,
material properties
for high-temperature
superconductivity
where you might worry
that, while formally
you don't have unbounded
exponential growth, you
need to keep correlation
in such large regions that it's
just intractable in practice
classically.
Or you might worry that the
structure of the entanglement
is sufficiently complicated
that you don't really
know how to write down a
good tensor network ansatz.
Or maybe you don't know how
to contract it efficiently.
So what's your feeling
about these things?
Do you think that, in general,
for all physical states,
you will be able to
find a good ansatz
and that the correlation
length isn't so big that you
can handle it classically?
GARNET CHAN: So I agree with
all the things you're saying.
And there are a couple
of points you addressed.
But what Ryan is
saying is, even if I
say that, for most
of the quantum
states we're
interested in-- perhaps
even all the quantum
states we're interested in--
you don't have this formal
exponential complexity
of quantum mechanics.
And in practice, the
complexity could be very high.
Even if you had a polynomial
dependence on system size,
if it were a high polynomial,
that would be a problem.
And that's very true.
Whether or not that
type of problem
is the best problem
to pit a quantum
computer against a
classical computer
in an early test of quantum
supremacy is an open question.
Because it comes down more
to a game of prefactors.
Because if something is
exponentially scaling,
you can always beat
it, basically, right?
It's so bad.
But if you have two
polynomials, and let's say
the two polynomials are
the same polynomial,
you're down to a
game of prefactors.
And then, I would say--
and bear in mind, I've
always come from the
classical computing
world-- the fact that we've
done classical computing
for so long gives us an
inherent advantage in the
prefactors, one that would take
quantum computers some time
to overcome.
One day, that prefactor
might be gone.
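The "game of prefactors" point can be made concrete with a toy calculation (the numbers below are invented purely for illustration, not real benchmarks of either kind of machine):

```python
# Toy illustration: if a classical and a quantum method both scale
# as n^3, the winner at every problem size is decided entirely by
# the constant prefactors. All numbers are made up for illustration.

def classical_cost(n, prefactor=1.0):
    return prefactor * n**3

def quantum_cost(n, prefactor=1000.0):
    # A large prefactor stands in for overheads such as error
    # correction and slow gate times.
    return prefactor * n**3

# With identical scaling, the cost ratio is just the prefactor
# ratio, independent of n.
ratio = quantum_cost(100) / classical_cost(100)
print(ratio)  # 1000.0

# Contrast: an exponentially scaling method always loses
# eventually, no matter how tiny its prefactor.
def exp_cost(n, prefactor=1e-6):
    return prefactor * 2**n

crossover = next(n for n in range(1, 200)
                 if exp_cost(n) > classical_cost(n))
print(crossover)  # 36
```

This is why the exponential case is "so bad": the crossover point always exists, whereas two equal polynomials never cross at all.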
Now, there's just a
practical question, though.
Are there problems
which, today, we
can't solve on a classical
computer, just right
this moment?
If I said you need to know the
answer tomorrow, or in a month,
or in three months,
could we find it?
And there are systems like that.
And those, of course,
are systems which
drive our research, right?
But of course, I'm very
optimistic that one
can push the classical
algorithms to study
those systems.
Otherwise, I wouldn't be
working on classical simulation.
So what I would say is, I
think that we probably still
can do those on a
classical computer.
But the honest
answer is, right now,
or for the next three
months, or six months, there
isn't a computer to do it.
And some problems of catalysis
and superconductivity
fall into that category.
AUDIENCE: In terms of scaling
this up to larger problems
and using it in more areas as
a general way of solving
these kinds of
problems, what do you
see as the issue in
terms of scalability?
Is it that communication
in the tensor
network is so high
that scaling this thing
across many, many
machines is a problem?
Or just the availability
of-- if I gave you 1,000
GPUs to run this stuff on, would
you be able to take advantage?
Is it that there aren't enough
capable grad students to
run all the
experiments you want to do?
What do you see as the
major scaling problem?
GARNET CHAN: Yeah.
Well, that's a really
interesting question which I've
thought about a lot recently.
In a practical sense, when
I started as a grad student,
computers were really just
getting faster, right?
And now all that's happening is
that computers are getting cheaper.
And that's a very different
situation to be in.
And if we're asking, can we
push classical computing further
to get another two or three
orders of magnitude of speedup,
it's not clear that you can do
that with these
general-purpose chips.
So I think you can't
parallelize these things
more, and more, and more.
There's just some limit.
And we're maybe not
yet at that limit,
but we're close to that limit.
And so that is a bottleneck.
It's a bottleneck that
classical computers are not
getting faster.
And these systems are
not infinitely parallel.
So I really think that
what one should do
is really build a custom chip
to do this type of computation.
And that's--
AUDIENCE: You mean similar
to what we've done with ML?
GARNET CHAN: It's
similar to what
I understand you've done but
I know none of the details of,
because I don't think
it's published--
AUDIENCE: Yeah, we're
not talking about it yet.
GARNET CHAN: --with ML, right?
And yeah, I was talking
to a Googler about this.
And I was saying, I really
want to build this custom chip
for tensor networks.
And he was just sitting
there silent the whole time.
And then the next
day, this announcement
came out about these things.
And so I asked
him if he knew about it.
And he said, oh yeah,
I knew about it,
but I couldn't
tell you anything.
And it's actually
really fantastic.
Because of the way these
tensor network computations
are set up, you can
think of these tensors as data
held in the different
regions of space they represent.
And the computations
only involve information
from nearby regions of space.
So if you set up a
computer that way,
it would never have to have
this deep memory hierarchy where
you have to go to global
memory, get some stuff,
and bring it back.
Everything would just
hold its local data
and move it around
next to each other.
And building that in
hardware would be fantastic.
But it lies far outside
my regime of expertise.
But you guys maybe
already know how to do it.
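The locality Chan describes can be sketched in a few lines (a minimal illustration assuming NumPy, with arbitrary tensor shapes): computing the norm of a matrix product state only ever contracts a site tensor with its immediate neighbor's environment, so no step touches far-away data.

```python
import numpy as np

# Minimal sketch of the locality described above: in a matrix
# product state (MPS), each contraction step touches only one site
# tensor and a small "environment" carried over from its neighbor.
# Sites, physical dimension, and bond dimension are arbitrary.
L, d, D = 6, 2, 8
rng = np.random.default_rng(0)

# Site tensors with index order (left bond, physical, right bond);
# the boundary bonds have dimension 1.
mps = [rng.standard_normal((1 if i == 0 else D, d,
                            1 if i == L - 1 else D))
       for i in range(L)]

# Compute the norm <psi|psi> by sweeping left to right. At each
# step, env[a, b] summarizes everything to the left; we contract
# it with the current tensor and its conjugate only -- a strictly
# nearest-neighbor operation with no global data movement.
env = np.ones((1, 1))
for A in mps:
    env = np.einsum('ab,apc,bpd->cd', env, A, A.conj())

norm = env.item()  # the final environment is a 1x1 matrix
print(norm >= 0)   # True: it is a squared norm
```

A chip whose memory layout mirrored this sweep would, as suggested above, only ever exchange small environments between adjacent processing elements, with no trips to global memory.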
AUDIENCE: Cool.
Thank you.
RYAN: All right, let's
thank the speaker then.
[APPLAUSE]
GARNET CHAN: Thank you.
[MUSIC PLAYING]
