[MUSIC PLAYING]
SPEAKER 1: Good
morning, everybody.
Let's start the session.
This is AQC 2016.
This is the fifth of the series.
And we have had 180 participants this time. It's the biggest one yet.
The field is expanding
exponentially.
So the first speaker, we have
Hartmut Neven from Google.
HARTMUT NEVEN: Good
morning, everyone.
So on behalf of
the Google team, I
would like to welcome you all.
We're very happy and
glad you could make it.
And hopefully, we'll have a few interesting and fun days together.
As Mr. [? Morrison ?] said, we were pretty astonished and pleased to see that quite a bit of growth and maturity is coming to our field, in the sense that we had about 100 submissions this year.
And even though we went, for the first time for this conference, with a double track, we still had to reject about half of the submitted talks.
And as always, there's
a random element to it.
So for those whose talks we unfortunately had to relegate to the poster session, please don't be sad about it. And conversely, please check out the poster session, because there are a lot of high-quality talks in there as well.
So getting to the
scientific part,
I wanted to kick off this conference with an optimistic note.
I happen to be the
optimist on our team.
And I wanted to share
with you an argument
why I think quantum
annealing is going to succeed
and why I believe that in the next one or two years we are going to gather conclusive evidence that this is indeed the case.
And I have a little
argument I want to walk you
through rather quickly.
So the first point
of observation
is-- this was our
December paper.
And all details will be
presented in the next talk
by [INAUDIBLE].
But essentially, we saw that
for a crafted proof-of-concept problem that had a very rugged
energy landscape characterized
by tall and narrow
energy barriers,
quantum annealing is
better-- much better--
than its classical
counterpart, thermal annealing.
And then we looked closer and found, not surprisingly, that this is not a rare phenomenon.
Actually, you can
use a tool-- I would
like to refer you to the
talks by Helmut [INAUDIBLE],
and [INAUDIBLE], and team.
So Helmut developed a measure we refer to as P(q).
I will not explain
what it is other
than to say you can
use it as a litmus test
to characterize the landscape
of an optimisation problem
you are dealing with.
And then if you find that this landscape is of the [INAUDIBLE] ideal kind-- you have tall and narrow barriers, but no broad barriers-- then you find that a numerical simulation of quantum annealing-- in our case, simulated by quantum Monte Carlo-- again succeeds much more quickly than the classical counterpart.
But actually, the converse is also true.
You can use this litmus test
to find many examples where
quantum annealing is not doing
as well as thermal annealing,
This leads to the conclusion that you don't want to have quantum annealing fight the portfolio of the best classical solvers on its own. What you need to do is combine the best of both-- thermal transitions as well as tunneling transitions.
And we have recently
developed an algorithm
we call the quantum
parallel tempering
algorithm which achieves this.
And let me quickly explain
to you how this works.
I'm sure most of
you are familiar
with traditional classical
parallel tempering,
very similar to
simulated annealing.
The difference is that you don't
have one temperature parameter
that you dial down
from high to low,
but rather you produce
replicas of your system.
And they live at
different temperatures.
Each one follows Metropolis dynamics, and then you have Metropolis swaps between those replicas. And this elicits a diffusion process where, eventually, the best solutions wind up at the cold temperatures.
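[Editor's note: a minimal Python sketch of the classical parallel tempering loop just described; the Ising encoding (h, J) and all function names are illustrative, not the speaker's code.]

    import numpy as np

    def energy(h, J, s):
        # Ising energy: E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j
        # (J symmetric with zero diagonal, so the quadratic term is halved)
        return h @ s + 0.5 * s @ J @ s

    def metropolis_sweep(h, J, s, beta, rng):
        # One sweep of single-spin Metropolis updates at inverse temperature beta.
        for i in rng.permutation(len(s)):
            dE = -2.0 * s[i] * (h[i] + J[i] @ s)  # energy change if spin i flips
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i] = -s[i]

    def parallel_tempering(h, J, betas, n_outer, seed=0):
        # Replicas of the same (h, J) problem, one per inverse temperature.
        rng = np.random.default_rng(seed)
        reps = [rng.choice([-1, 1], size=len(h)) for _ in betas]
        for _ in range(n_outer):
            for s, beta in zip(reps, betas):
                metropolis_sweep(h, J, s, beta, rng)
            # Metropolis swaps between neighboring temperatures, accepted with
            # probability min(1, exp((beta_{k+1} - beta_k) * (E_{k+1} - E_k))).
            for k in range(len(betas) - 1):
                dB = betas[k + 1] - betas[k]
                dE = energy(h, J, reps[k + 1]) - energy(h, J, reps[k])
                if rng.random() < np.exp(min(0.0, dB * dE)):
                    reps[k], reps[k + 1] = reps[k + 1], reps[k]
        return reps  # the coldest replicas hold the best candidate solutions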
But by what I said
earlier, you should
expect that if you have a
large optimisation problem,
you will have all
kinds of barriers.
You will have the tall and narrow kind as well as broad barriers. So what you should expect is that if you just run it classically, there will be groups of spins locked up behind tall and, hopefully, narrow barriers.
And in that case,
you would benefit
from just extending the scheme and running the annealing a little differently than we normally do. You would have, again, this classical column. But then every once in a while, or after every outer loop, the quantum annealer comes in and grabs a replica, initializing the annealing process in that classical state. Then you dial up the transverse field to a certain gamma max-- you don't necessarily know exactly what this should be-- and dial it back down.
Take that replica,
and again, if there
was a group of spins locked
up behind a tall barrier,
there's some probability
you will find
those spins on the other side.
And you would readmit this state back into the pool of replicas by the Metropolis rule.
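[Editor's note: a minimal sketch of that quantum move as an extra update in the outer loop, reusing energy() from the sketch above; the anneal callable and its signature are hypothetical stand-ins for the hardware, not a real API.]

    def quantum_move(h, J, s, beta, gamma_max, rng, anneal):
        # 'anneal' stands in for the hardware: it takes the classical state,
        # ramps the transverse field up to gamma_max and back down, and
        # returns the spin configuration it reads out at the end.
        proposal = anneal(h, J, init_state=s, gamma_max=gamma_max)
        dE = energy(h, J, proposal) - energy(h, J, s)
        # Readmit the proposal into the replica pool by the Metropolis rule.
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            return proposal  # e.g., spins that tunneled through a tall, narrow barrier
        return s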
So we think that in
this combination,
you get the best of both worlds.
And the question you should ask now is, how would we know that such a scheme has a chance of beating the best non-quantum-enhanced optimisation algorithms? And there's a long list of reasons, actually, why we think that is the case.
For example, again,
look at talks by Helmut.
You will find that the classical
parallel tempering scheme alone
is highly competitive with the
winners of the SAT competition.
For example, there's a paper out on the arXiv that shows that it beat CCLS 2015, which was last year's winner of the SAT competition.
We also know that if the connectivity graph is sufficiently dense, then methods based on cluster finding, or methods like the HFS algorithm that exploit the existence of subgraphs that are nearly tree-structured, are going to cease to be effective.
And I would check
out those talks.
So therefore, I would like to
make the following prediction--
that if you integrate
quantum annealers that
feature sufficiently dense
connectivity graphs-- that's
important-- into a parallel
tempering scheme, then
you can attain speed-ups that
are practically relevant.
And actually, from a Google perspective, that's how we eventually envision an optimisation service working. This is what we would like to offer our customers eventually: folks, internal or external, have optimisation problems.
They send instances
to a service.
And the service will return good solutions to their optimisation problem.
And the nice thing
about the scheme
is you can build an asynchronous
service architecture, where
you essentially have a big
server in the middle that
hosts the replicas at
different temperatures.
And you have the baseline algorithm running-- local stochastic search-- realized by Metropolis updates.
So you have these replicas living there, at different temperatures, going through their Metropolis dynamics.
That's the heart
of the algorithm.
But then you have-- at Google,
we would call it a worker pool.
You have specialists who
come in and say, oh, I
have time right now.
Give me one of your replicas.
Let me see whether I can
help you with an update.
And there, again, if those problems live on sufficiently dense graphs, we will find that a quantum annealer eliciting tunneling transitions, as described before, will be able-- or we expect it to be able-- to give you good updates that you would readmit back to the replica pool. And this particular move, using any classical technique, would take a much longer time.
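[Editor's note: a minimal sketch of that asynchronous pattern-- a central job queue feeding a pool of specialist workers; the wiring, names, and placeholder update are illustrative only, not Google's service.]

    import queue
    import threading

    def worker(update_fn, jobs, results):
        # A specialist (cluster updates, subgraph solvers, a quantum annealer,
        # ...) that checks out replicas from the server whenever it has time.
        while True:
            job = jobs.get()
            if job is None:  # shutdown signal
                break
            idx, state, beta = job
            # Propose an update; the server readmits it via the Metropolis rule.
            results.put((idx, update_fn(state, beta)))

    def flip_first_spin(state, beta):
        # Placeholder update so the sketch runs end to end; a real worker
        # would propose cluster moves or quantum-annealer moves here.
        return [-state[0]] + state[1:]

    jobs, results = queue.Queue(), queue.Queue()
    pool = [threading.Thread(target=worker, args=(flip_first_spin, jobs, results))
            for _ in range(4)]
    for t in pool:
        t.start()

    jobs.put((0, [1, -1, 1, -1], 0.5))  # (replica index, spin state, beta)
    print(results.get())                # proposed update for replica 0

    for _ in pool:
        jobs.put(None)                  # shut the worker pool down
    for t in pool:
        t.join()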
In the past, we have always done this bake-off of quantum annealing against everything. But in this scheme, it's more like a judo move.
You say, we understand much better now what quantum annealing is good at, so let the quantum resources do what they're best at.
Again, that is dealing with
rugged energy landscapes
on a dense graph where you try
to explore the energy landscape
and find a lower minimum.
But you can mix and
match this with any type
of classical update you like.
If your problem admits cluster updates, use those. If you see subgraphs which you can update quickly by classical methods, do that.
So essentially, in this scheme, you can use anything you can do classically to advantage.
And the nice thing is also that because the core of it is a replica server-- a classical object-- you can instrument it and take measurements there.
And you can, for example,
look at replica overlaps,
to be explained by Helmut.
You can look at the time
correlation functions.
And this information
can help you
to guide the flow of
replicas through the system.
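[Editor's note: as one illustration of the kind of measurement such instrumentation makes cheap, here is the replica overlap between two spin configurations-- a minimal sketch assuming NumPy as in the earlier code.]

    def overlap(s1, s2):
        # Replica overlap q = (1/N) * sum_i s1_i * s2_i; the distribution of
        # q over replica pairs is the P(q) measure mentioned earlier.
        return float(np.dot(s1, s2)) / len(s1)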
And you can even get rather fancy and put machine learning on top to see whether the flow of replicas is optimal, or at least sufficient.
For that kind of service, we hope to do some diligent benchmarking work and show that quantum annealing will be a valuable member of such an architecture, helping with certain types of updates where, in the, let's say, 500 nanoseconds or one microsecond that an annealing run takes, you wouldn't find any classical scheme that could make a competitive move in that short a time.
So that is the reason why
I'm rather optimistic.
And now you may
say, OK, fine, fine.
You have introduced
this architecture.
But when you are faced with a true NP-hard problem, you haven't changed the complexity. This still looks like exponential time complexity.
Or, conversely, you might just be able to lower the residual energy by a few percent.
So complexity theorists may
say, ah, not much achieved.
But from a more
economic perspective,
I would say a lot is achieved,
because even a few percent
improvement along important
product dimensions
is very important.
And why is this?
Because the internet has brought us markets with near-perfect transparency, and this leads to a winner-takes-most phenomenon: if you have a product that's just slightly better than the next product, people will buy your product, not the next one over. And therefore, we should realize that every computation happens in a context. And that context may exponentially amplify what starts out as a small initial gain.
So we should not dismiss the ability to get those few percent-- think of it more as an interest rate rather than an absolute gain. Think of it as having increased the interest rate a little bit, and then think of what that's going to do over a few years.
And that may be a
somewhat odd remark.
But this very
phenomenon makes me
find the field of quantum
biology quite interesting.
And I think we will see more
quantum biological effects
for that very reason.
If there's a bunch of cats out there-- and we talked about the dangerous road-- and some of them have a slightly better car detector, that's the only type of cat you will see around a few generations later.
So I think it's important to
keep that phenomenon in mind.
So this concludes
my upbeat assessment
of why I think quantum annealing will be very worthwhile to invest in.
I want to finish with
one last shameless plug.
And that is, we have taken the liberty of interpreting the theme of this conference a little more broadly, in the sense that quantum annealing is really the prototypical, and in some ways the first, pre-error-corrected quantum algorithm.
But there's more now.
And in particular, we are quite
excited by the possibility
of what you can do with shallow quantum circuits and what pre-error-corrected quantum algorithms you can run there.
And I would like to
point you to a talk
that I think will be very
interesting by [INAUDIBLE]
and coworkers
showing how to reach
quantum supremacy with near-term
buildable shallow circuits.
So if you'll forgive me this plug, I will conclude by wishing you all fun over the next few days here at AQC 2016.
[APPLAUSE]
SPEAKER 1: Thank you very much.
I should ask you to come up here for questions, so they are recorded.
So do you have any
questions or comments?
No questions?
Well, that's strange.
Please.
AUDIENCE: Hartmut, I'd like to ask you something. Hartmut, I'd like to ask a question-- that's part of the session.
What I didn't understand
in the tempering thing
was when you go to
the middle server
and you find a configuration, that's something diagonal in the z-basis.
And then when you send that
over to the quantum annealer,
you're sending
something which is
diagonal in the
computational basis.
HARTMUT NEVEN: To
start out with.
AUDIENCE: Yeah, so
then what happens?
HARTMUT NEVEN: Now you dial up the transverse field. And this will start to spread the wave function.
AUDIENCE: So you send it in with the transverse field at zero, and then you turn it up--
HARTMUT NEVEN: Exactly.
AUDIENCE: OK, got you.
HARTMUT NEVEN: And to
a certain strength.
And we don't know which strength it is, so we'll try a few values of gamma max and then--
AUDIENCE: I just missed that.
OK, thank you.
SPEAKER 1: --interesting aspect.
Any other questions?
Yeah, please come.
AUDIENCE: Hello, I'm not much of an expert in this domain. But I would like to know, what is the definition of a replica? [INAUDIBLE] Is replica a notion unrelated to the statistical physics system? Can you explain, for dummies, what a replica is in very--
HARTMUT NEVEN: Oh, sorry.
I was under the
impression that everybody
is quite familiar with
parallel tempering.
So replica simply means that you have the problem on the same graph. So, in D-Wave language, the h_i's and the J_ij's would be identical. But the replicas can differ in their exact spin configuration, because each one goes through probabilistic Metropolis updates at different temperatures. So the evolution will be different. But each replica lives on the same graph, with the same values characterizing the connectivity and the local fields.
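[Editor's note: that answer in code form-- a minimal sketch in which one shared (h, J) problem definition is held by all replicas, and only the per-replica spin configuration and temperature differ; all values are illustrative.]

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    h = rng.normal(size=n)                   # local fields h_i: shared by all replicas
    J = np.triu(rng.normal(size=(n, n)), 1)  # couplings J_ij: shared by all replicas
    J = J + J.T                              # symmetric, zero diagonal

    betas = [0.1, 0.5, 1.0, 2.0]             # one inverse temperature per replica
    replicas = [rng.choice([-1, 1], size=n)  # only the spin configurations differ
                for _ in betas]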
AUDIENCE: Thank you.
SPEAKER 1: --once again
to go [INAUDIBLE].
[MUSIC PLAYING]
