[MUSIC PLAYING]
STEPHEN JORDAN: Thanks.
So this is joint work
with Michael Jarrett
and Brad Lackey at NIST and
the University of Maryland.
So the central topic of
this talk is, in some sense,
something we've discussed
before, which is stoquasticity.
And just as a reminder,
the definition
is that a matrix is stoquastic
if all of its off-diagonal
elements are non-positive.
And most research into
adiabatic quantum computing,
both experimental
and theoretical,
focuses on stoquastic
Hamiltonians, though not all of it.
And by the
Perron-Frobenius theorem,
we know that
stoquastic Hamiltonians
have a ground state which
can be expressed using
all real positive amplitudes.
And if you go to, say,
a practitioner of quantum
Monte Carlo simulations,
what they might tell
you is, oh, well,
we should have no problem
simulating these Hamiltonians
with our algorithms because
there's no sign problem.
And so if you're someone
who's interested in using
adiabatic quantum computers to
solve problems that you can't
solve classically, then you
should take this statement
very seriously.
So is it true that these
classical algorithms can always
simulate what you do with
stoquastic adiabatic
computation?
If so, then that detracts
from the motivation,
I would say, for building
adiabatic hardware--
at least stoquastic hardware.
Although, of course, I'm
going to approach this
from the point of view of
asymptotic complexity--
what's polynomial,
what's exponential?
But, of course, even there are
more fine-grained questions
one could ask about
finite size instances.
So let's look at
what these Monte
Carlo algorithms for simulating
these Hamiltonians are.
The most widely used,
as far as I understand,
is path integral Monte Carlo.
And earlier in this
conference, there
have been some nice talks,
such as by Elizabeth Crosson
and Aram Harrow
about proving that
under certain circumstances,
path integral Monte
Carlo can be proven to
converge efficiently.
And in practice, it's a
pretty effective method.
And on the other hand,
there was a very interesting paper
by Matt Hastings in 2013
which showed that even
without the sign problem, there
are certain instances that you
can construct where you can show
that the path integral Monte
Carlo method will fail to
converge in polynomial time
even though the stoquastic
adiabatic dynamics that it's
trying to simulate is one
that has a polynomial size
eigenvalue gap.
And the essence of
his examples was
to construct energy
landscapes where the world
lines get tangled around some
kind of topological obstacles.
So in path integral Monte
Carlo, you have a Markov chain,
and the objects that are
sort of hopping around
according to this Markov
chain are world lines.
And you need them
to equilibrate,
and if they get tangled around
certain kinds of things,
you can show that they
won't equilibrate.
So that's, in some sense, good
news for adiabatic quantum
computation.
That shows that, at
least in the worst case,
this method will fail to work.
And there are some instances--
there exist some instances,
although rather
contrived ones-- where
you can prove that you
can't do this classically
by this method.
So more generally,
you can phrase this
as kind of a fundamental
complexity theory question.
You can define a
model of computation
based on stoquastic ground
state adiabatic computation,
and you can say,
well, where does
this lie between classical
and quantum computation?
What is the set of
problems that you
could solve in polynomial
time within this model?
And so here are some
definitions-- P,
polynomial time,
classical computing.
BQP, universal quantum computer.
Includes factoring and all that.
And I'll call this
StoqP, following
Scott Aaronson's
convention, polynomial time
stoquastic adiabatic computing.
And there's actually
fairly good complexity
theoretic evidence, or
at least some complexity
theoretic evidence, to think
that stoquastic adiabatic
quantum computation can't do
universal quantum computation.
If it could, then
that would mean
that BQP is in the third level
of the polynomial hierarchy,
which is, I think, generally
believed not to be the case.
So the two most
plausible scenarios
at this point are
either stoquastic
polynomial time
equals P, or it lies
somewhere intermediate
between P and BQP-- classical
and quantum.
And so what Hastings' paper
shows is that proving that
stoquastic polynomial time
is contained in classical
polynomial time-- that
proof cannot be achieved
by rigorizing path integral
Monte Carlo and putting general
purpose runtime bounds on it.
That's thwarted.
That proof path is thwarted
by topological obstructions.
But there are other
kinds of Monte Carlo
that you can analyze.
And perhaps the second most
popular type of Monte Carlo
are things that are sometimes
called diffusion Monte Carlo.
This actually goes by
many different names.
You could call it
population Monte Carlo.
In the computer
science literature,
there are things called
the go-with-the-winners
algorithms, which are
quite similar in flavor.
But the essential
idea here is, well,
let's think about why it's
hard to simulate quantum
computations classically.
And one of the reasons
is that the quantum state
vector of n qubits is
2 to the n dimensional.
So if you have maybe 100
qubits, then even Google
can't store that much data.
But on the other
hand, maybe that's
OK because probability
distribution over n coins
is also some 2 to the
n dimensional vector,
and so if your wave
function is all positive,
then it's proportional to
a probability distribution--
just normalized differently.
And so what you could
try to do is,
instead of trying to store the
wave vector in your computer's
memory, you can just
have your computer
represent this probability
distribution implicitly,
by sampling from it.
And so the key point to
designing a diffusion Monte
Carlo algorithm is
to design some kind
of stochastic process-- some
Markov chain, perhaps-- that
converges rapidly to the
desired distribution, which
is proportional to the ground
state wave function.
So what we decided to
do is make the simplest,
most stripped-down variant
of diffusion Monte Carlo
that we could, not for the
purpose of necessarily getting
a really efficient
algorithm in practice,
but something that
we could analyze.
And we call it
sub-stochastic Monte Carlo.
I don't think it's very
distinct from other things
that people typically do.
But specifically, what we do is
we take Schrodinger's equation,
and we switch to imaginary time.
So now it's something
that will drive you
into the ground state
of the Hamiltonian
because all of
the excited states
will decay away exponentially.
And if this Hamiltonian
is stoquastic,
then you can interpret this
as a diffusion equation,
except that the total norm
of this vector-- the sum
of the entries in this vector,
which you can interpret
as the sum of the
probabilities--
is something that's going to
shrink as a function of time
unless the ground energy
happens to be zero.
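In symbols-- filling in with standard notation what the speaker describes verbally-- switching to imaginary time gives

```latex
% Imaginary-time Schrodinger equation
\frac{\partial \psi}{\partial \tau} = -H\psi,
\qquad
\psi(\tau) = e^{-H\tau}\,\psi(0) = \sum_k c_k\, e^{-E_k \tau}\, \lvert k\rangle .
```

Every excited state is suppressed relative to the ground state by a factor $e^{-(E_k - E_0)\tau}$, and the total L1 norm-- the sum of the entries, interpretable as total probability when $H$ is stoquastic-- shrinks like $e^{-E_0 \tau}$ unless the ground energy $E_0$ is zero.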
And so then you can
further interpret that
as some kind of continuous
time random walk.
And what it means for this
total probability to shrink
is that the walkers
have some probability
of dying at any given moment.
And so then you have some
population of walkers
that you track.
Once you discretize the
time evolution defined
by this diffusion
equation, you have
some kind of a
sub-stochastic Markov chain.
You have these little
sub-stochastic matrices, just
obtained by Taylor expanding the
exponential at each little time
step.
And the walkers are,
at each time step,
either hopping to
another bit string,
dying off, or reproducing.
So the point is
that we can't really
afford to have the population
of the walkers exponentially
decaying over the course
of our time evolution.
Otherwise, we'd have to start
with exponentially many walkers
at the beginning.
If we run out of
walkers, then there's
nothing left for the algorithm
to do and it just ends.
So you need to come
up with some way
of replenishing the walkers.
We did some computer experiments
with several different methods,
but they all are of the
same general flavor, which
is that we replenished
the population by spawning
new walkers on the sites of
the survivors after each step.
And if you choose
the probabilities
by which the walkers
either take a step,
die off if they're on a
high energy potential,
or reproduce if they're
on a low energy potential,
you can guarantee that
the limiting distribution
of this walk is proportional
to the ground state wave
function of your Hamiltonian.
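As a concrete sketch, here is a minimal toy implementation of this kind of sub-stochastic Monte Carlo on a 1-D chain (my own illustration, not the authors' actual code): each walker hops, dies at high potential, or stays, and the population is replenished by cloning randomly chosen survivors, one of the respawn variants the talk mentions.

```python
import random

def substochastic_mc(V, n_sites, n_walkers=200, dt=0.1, steps=2000, seed=1):
    """Toy sub-stochastic Monte Carlo on a 1-D chain.

    Take H = -Laplacian + V, and approximate one step of exp(-dt*H)
    by the sub-stochastic matrix I - dt*H: a walker at site x hops to
    a neighbor with probability dt each, dies with probability
    dt*V(x), and otherwise stays put.  (Requires 2*dt + dt*max(V) <= 1.)
    Dead walkers are replenished by cloning random survivors, so the
    population stays fixed at n_walkers.
    """
    rng = random.Random(seed)
    walkers = [rng.randrange(n_sites) for _ in range(n_walkers)]
    for _ in range(steps):
        survivors = []
        for x in walkers:
            u = rng.random()
            if u < dt and x > 0:                  # hop left
                survivors.append(x - 1)
            elif u < 2 * dt and x < n_sites - 1:  # hop right
                survivors.append(x + 1)
            elif u < 2 * dt + dt * V(x):          # die at high potential
                continue
            else:                                 # stay put
                survivors.append(x)
        if not survivors:
            # population went extinct: restart (rare for small dt)
            survivors = [rng.randrange(n_sites) for _ in range(n_walkers)]
        while len(survivors) < n_walkers:         # respawn on survivors' sites
            survivors.append(rng.choice(survivors))
        walkers = survivors
    return walkers
```

Run long enough, the empirical distribution of walkers concentrates where the potential is low, mimicking the L1-normalized ground state of this toy H.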
And if you examine what happens
in an algorithm like this,
you'll notice that
there is something
that looks very much like
sort of a classical analog
of tunneling.
Because what can happen
is that a walker dies off
at a high potential, and sort
of respawns or gets resurrected
at the location of some randomly
chosen other walker that's
at a lower value of
the objective function--
that is, the potential energy.
And this kind of
mimics something
that's very similar
to quantum tunneling.
Here's a numerical
example we ran.
It's everyone's
favorite example--
the ramp with the spike.
And you can see--
so at the beginning,
the distribution is a binomial
that's centered
around Hamming weight n/2.
And then you turn
up the potential,
and this binomial ramps down.
And at some point
here, this is where
we're hitting the spike,
which is at Hamming weight
5 or something.
And there are some
walkers that are left over
underneath from earlier points,
and walkers on this side
can die and respawn
on the other side,
and you can tunnel
across this barrier.
That's the idea.
So there's some
question here about
whether tunneling from a
computational point of view
is truly a uniquely
quantum effect.
More broadly you
could even say, if you
think about stoquastic adiabatic
computation, what ingredients
of quantumness do you have?
Do you have entanglement?
Yes, you have some
entanglement there.
Do you have superposition over
an exponentially large state
space?
Yes.
Do you have interference?
Well, maybe not.
I mean, the ground state
is at all times something
with all positive amplitudes.
There's no manifest
interference in this process.
And there is kind
of a, I would say,
fundamental conceptual
question at stake:
is interference a
necessary ingredient
for exponential
quantum speedups?
So a tempting hypothesis
which you might propose
is that if you have some
stoquastic Hamiltonian
with a polynomial gap,
then you can always
track the instantaneous
ground state probability
distribution using a
classical efficient algorithm.
By that we mean a
probability distribution
which is proportional to the
ground state, just normalized
differently.
But it turns out that
this hypothesis is false.
We were able to
construct counterexamples
where you can show that
the diffusion Monte
Carlo, our sub-stochastic
Monte Carlo,
fails to converge
to this distribution.
And the basic idea
is that, well, you
can tunnel if there are some
walkers on the other side.
And there is some
probability for a walker
to be on the other side,
which is defined by the wave
function, and if the
number of walkers
is smaller than 1
over that probability,
then probably there
is none there,
and you're just not
going to tunnel.
So in quantum mechanics,
the probability distribution
is proportional to psi
squared, and you can tunnel
across, more or less, if
the probability
to be on the other side of
the barrier is not too small.
But you can tune
the potential so
that this other normalization of
your probability distribution,
which comes from
the L1 norm, which
comes from the classical
case, differs exponentially
from the quantum case.
In exponentially big vectors,
L1 and L2 normalization
can be very different.
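To make that concrete, here is a toy numerical illustration (my own numbers, not the talk's actual construction): a vector with one "far side" amplitude of size 1 sitting next to 2^n "ramp" amplitudes of size 2^(-n/2).

```python
# Toy vector: one "far side" entry with amplitude 1, plus 2**n "ramp"
# entries each with amplitude 2**(-n/2).  (Illustrative numbers only.)
n = 40
far_amp = 1.0
ramp_amp = 2.0 ** (-n / 2)
num_ramp = 2 ** n

# Quantum (L2) normalization: probability of the far-side site
# is |psi|^2 / sum |psi|^2 -- here exactly 1/2.
quantum_prob = far_amp ** 2 / (far_amp ** 2 + num_ramp * ramp_amp ** 2)

# Classical (L1) normalization: probability is psi / sum(psi),
# which is what diffusion Monte Carlo walkers sample from --
# here about 2**(-n/2), exponentially small in n.
classical_prob = far_amp / (far_amp + num_ramp * ramp_amp)
```

So the quantum wave function puts constant probability on the far side, while the walker population would need about 2^(n/2) walkers before a single one is expected there.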
And in that case,
you can tune things
so that the adiabatic
quantum algorithm will
succeed but Monte Carlo fails.
And the way it works is
it's just another variant
of this ramp-with-a-spike case.
We tune this ramp so that
the psi squared down here
is order one.
Psi divided by the sum of
the psi, the L1-normalized
version-- the
probability distribution
that diffusion Monte Carlo
and related algorithms
sample from-- is
exponentially small there.
And then we lower a basin here.
The quantum case will
pour into this basin,
and the classical
algorithm will never
find it because the
probability of a walker ever
landing there and noticing
that this potential energy
basin is turning on is
exponentially small.
And we can prove that the gap is
polynomial in the quantum case.
So that hypothesis
I made is false,
and that's another piece of
good news for adiabatic quantum
computation with
stoquastic Hamiltonians.
So on the other
hand, we can also
take the practical
point of view.
Let's try running this
algorithm on some real problems.
We'll take some
optimization problems,
write down the standard
adiabatic optimization
algorithm for solving
these, and simulate
that adiabatic process using
our sub-stochastic Monte Carlo
code.
And so the problems we
picked were SAT and MaxSAT
just because they're kind of
standard benchmark problems.
They're sort of like the
fruit fly or the lab rat:
what the lab rat is to biology,
SAT and MaxSAT are to
combinatorial optimization.
And an amusing fact is
that every year there
is a competition which is held
for the fastest solvers of SAT
and MaxSAT, so we
have a lot of data
to sort of benchmark against.
We know what the
state of the art
is on these benchmark processes.
So we ran our code, and it's
not competitive with top SAT
solvers.
SAT, it turns out, in practice
is very different from MaxSAT
because the fact that you know
that there is this completely
satisfying assignment
allows you to do
a lot of algebraic manipulations
to your instance, sort
of eliminating possibilities,
canceling out variables,
and so on.
For MaxSAT, on the
other hand, our algorithm--
which, recall, we
could prove sometimes
fails to converge and
can take exponential time--
worked much better on this
ensemble of instances
than we expected.
And so we really
had to savor that.
It's so rare in research that
something works better than you
expect, but that was the case.
And in fact, our
simulation of this quantum
process-- even forgetting that
we care about quantum mechanics
at all, just pretending that
our original goal was to solve
these optimization problems--
is actually very successful.
On certain classes of instances,
the Max3SAT random instances,
it actually was superior to the
winner of last year's contest.
So we entered it in
this year's contest.
I think we would have had a
chance except for the fact
that Helmut has also entered
his code into this contest.
But we'll see in a
week what happens.
Here's some more data.
Ours is the blue line.
This is showing how long it
takes to solve these instances.
The only problem is there's
a couple of instances--
there's a total of, I think,
maybe 200 benchmarking
instances last year.
There are like two
or three over here
that our software just choked on.
I'm not exactly sure why, but
for a large majority of things,
our software not
only solved them,
but solved them faster than
previous state-of-the-art
things.
So to summarize, we looked
at sub-stochastic Monte Carlo
as an example of
diffusion Monte Carlo.
And there are three possible
goals you might have for this.
First, maybe this could be
a useful numerical tool
for understanding adiabatic
quantum computation--
we haven't really
pursued that yet,
but that might be
something for the future.
Second, could this type of
algorithm be used
to prove that adiabatic
quantum computation
with stoquastic Hamiltonians
is actually contained in P,
that it's incapable of
exponential speedup?
Well, apparently not, due
to these L1 normalization
versus L2 normalization
barriers that we observed.
I should mention that I
think this L1 versus L2
is sort of a known-ish thing,
at least at the folklore level.
I'm not sure that it's
a completely new idea,
but we actually
explicitly proved this.
And third, is it a fast
classical algorithm
for combinatorial optimization?
I think it was actually
Eddie Farhi's suggestion
that we try this, and that was a
very good suggestion because it
turned out that yes, it
was surprisingly good.
And before concluding, I
should mention one last thing.
These barriers
really apply to anything
where you're trying to
track a probability
distribution that's
proportional to psi
rather than psi squared.
So that includes pretty
much most variants
of diffusion Monte Carlo.
And also, it includes some forms
of path integral Monte Carlo
with open boundary conditions.
So we don't know where
stoquastic computation lies
between P and BQP,
but maybe this
is another piece of
evidence that it's really
something intermediate,
and it's not
just equal to classical
computing asymptotically.
So that's all, and thank
you for your attention.
SPEAKER: I think we have time
for one or two quick questions.
Yes.
AUDIENCE: So applying
this exponential operator,
[INAUDIBLE], right?
I mean, you said that some
walkers die, but, I mean,
then what do you do?
I mean, you clone the
walkers that are still alive?
I mean, you make copies of
them and you keep going?
STEPHEN JORDAN: Yeah.
We actually tried
three different methods
for replenishing the walkers.
The one that worked the
best is that the walkers
on the low-energy sites,
the lowest energy sites,
had some probability of
reproducing into two walkers.
So it's kind of like
you have some bacteria,
and they hop around diffusively,
and the ones where there's
lots of food, they
reproduce, and the ones
where there's only a little
bit of food, they die off.
It's that style of algorithm.
But we tried another one,
which was when a walker dies,
it just instantly teleports
to the location of a uniformly
randomly selected other walker.
That you can also prove
converges to the right thing,
but just in practice it
didn't converge quite as
well for these optimization
problems, it seemed like.
AUDIENCE: I don't
believe that there
is this equivalence between
diffusion Monte Carlo and path
integral ground state.
There should be a
difference [INAUDIBLE]
path integral and ground state.
You have a still finite number
of [INAUDIBLE] that can tunnel,
even if there is no other
things in the other one.
[INAUDIBLE] diffusion needs
something to be a [INAUDIBLE]
in order to--
STEPHEN JORDAN: Yeah.
So perhaps I misspoke a little.
I don't mean to suggest that
open boundary condition path
integral Monte Carlo is
equivalent to diffusion
Monte Carlo.
They share some features.
But basically, the
status right now
is we don't have,
as far as I'm aware,
any really plausible candidate
of a single classical algorithm
that could have provable
polynomial convergence
for simulating all stoquastic
adiabatic processes.
The path integral
ones can be thwarted
by these topological barriers
and the diffusion ones
can be thwarted by these
L1 versus L2 barriers.
So that's all I
really mean to say.
SPEAKER: And I think with that,
let's thank the speaker again.
[APPLAUSE]
[MUSIC PLAYING]
