MALE SPEAKER: Good afternoon.
I guess we should go
ahead and get started.
This is another installment of
the Quantum AI Speaker Series.
And I am very happy to announce
Matthias Troyer from ETH Zurich.
Matthias is a frequent visitor to the Quantum AI Lab.
He was here before we even started-- before we bought the first machine, the D-Wave machine, at NASA Ames.
And later he has been very
instrumental helping us
setting up our own hardware
effort in Santa Barbara.
So Matthias is a professor
of physics at ETH Zurich.
And he specializes
in many-body physics,
in particular,
computational approaches
to solving quantum, or
classical, many-body systems.
And, informally, we
refer to Matthias
as the captain of the
red team because he
has such strong,
highly optimized codes.
He gives people who want to demonstrate a quantum speedup a run for their money, because the codes he and his team develop are often the gold standard that you have to beat before you can be somewhat certain that a speedup may be present.
And we're also grateful
that some of his students
are now with us.
Sergei Isakov, who was a postdoc with Matthias, is on our team.
Damian Steiger came as an intern during the summer.
Matthias is active in various other efforts toward bringing quantum computing to fruition.
He is a consultant to Station
Q. That is the Microsoft
effort in quantum
computing at Santa Barbara.
And Matthias is also a trustee
of the Physics Center in Aspen.
And we're very pleased to hear your latest talk on high-performance quantum computing.
MATTHIAS TROYER: Thank you.
I first want to say that it is great that companies like Google get involved in quantum devices and quantum computing, because as we scale up from single qubits to larger devices, one needs lots of resources that one would not have in a small lab.
But that also then
raises the big question
of why do we build
a quantum computer.
And what would we
do if we had one?
And then it's not
enough to just think
about how things would scale.
But one has to really compete
against the fastest and best
classical computers.
That's what
[INAUDIBLE] mentioned.
We are only experts
in classical HPC.
Now if you come with
the new technology,
you have to beat
not just a laptop,
but you have to beat the best
possible classical devices.
And that's why I call the talk "High-Performance Quantum Computing": because we have to get quantum computing to the same level.
So how do classical
machines fare so far?
When we look at the Top 500 list-- shown here, with the red line being the speed of the fastest machine in the world-- the increase over the last 20 years has been very nicely exponential. And there seems to be no end in sight for Moore's law. But when you look closer at the number-500 machine, the slowest one on the list, you see that it seems to already be leveling off slightly.
And there are some problems.
And at some point,
Moore's law will end.
And I think we'll slow down.
And we need something to
bring us beyond Moore's law.
And that could be
quantum devices.
We already have quantum
random number generators
that promise perfect randomness.
We have quantum
encryption systems
that promise to secure
quantum communications.
We have quantum simulators
to solve quantum models.
We have quantum optimizers
like the D-Wave device.
And in the future we might
have the quantum computer.
We don't know much yet about what it will look like.
But I'm sure it will run
some version of Windows Q.
But that's just
because they pay me.
Now, there was a conference last summer where Paul Messina asked the question of which technologies could enable beyond-exascale computing. And what he meant in that conference is not just something that reaches 10 to the 21 operations per second, but something truly revolutionary beyond that-- something that does things you simply can never do classically.
And if that's the goal-- if that's where we compete with quantum devices-- then we should not compare a quantum computer against a single core on a laptop. We should compare it against the best possible classical special-purpose device that I might build for the same problem in 10 or 20 years, because that's what we are up against.
So let's look at
what we have here.
The quantum random number generators are useful, but they don't compute. Quantum encryption doesn't compute either.
What about quantum simulators?
These are machines that build in a quantum model and solve it. And they passed their first test four years back, which "Science" called one of the breakthroughs of the year back then.
So let me show you just
what it was and then
ask whether that will help
us go beyond the exascale.
The problem is we want to solve a materials problem-- for example, we want to understand high-temperature superconductors. Superconductivity was found near 100 years back. It allows currents to flow without any loss. But it works only at very low temperatures, around 10 Kelvin, or at most near 30 Kelvin.
And then in 1986 there was a big breakthrough, and the temperatures jumped up to 40, 70, 100, 140 Kelvin-- so to half of room temperature. People hoped that it would soon reach room temperature, and we would have room-temperature superconductors, levitating trains, and many magic things. That didn't happen; things are stuck here. And we would like to know why they are stuck.
What caused the
superconductivity there
and could we push it
up to room temperature?
That could solve many problems.
But we don't have any classical methods to solve it well enough to predict the properties of the materials.
And so one would
like to maybe solve
that on the quantum device.
But solving the full
problem is hard.
So let's make it easier.
We start from the material.
We simplify it to the
crystal structure.
We find that the planes here are important.
And we simplify them into
the simplest possible model.
That's a simple toy model.
But if we don't
understand the toy model,
we can't hope to
understand the full one.
And that's a very simple model with electrons on the square lattice. They move around, and the Coulomb interaction is simplified to just something on a site: where two electrons touch, it costs energy U.
It's very simplified,
but even this model
is far too hard
to solve exactly.
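That toy model is small enough to write down directly at tiny size. Here is a minimal sketch of the two-site version of this Hubbard model, diagonalized exactly (my own illustration; the values of t and U are arbitrary, and fermionic sign conventions are absorbed into the basis):

```python
import numpy as np

# Two-site Hubbard model in the Sz = 0 sector (one up, one down electron).
# Basis states: |up,down>, |down,up>, |updown,0>, |0,updown>
t, U = 1.0, 4.0  # hopping amplitude and on-site repulsion (illustrative values)

H = np.array([
    [0.0, 0.0,  -t,  -t],   # singly occupied states, connected by hopping
    [0.0, 0.0,  -t,  -t],
    [ -t,  -t,   U, 0.0],   # a doubly occupied site costs energy U
    [ -t,  -t, 0.0,   U],
])

E0 = np.linalg.eigvalsh(H)[0]
exact = (U - np.sqrt(U**2 + 16 * t**2)) / 2   # known closed-form ground-state energy
print(E0, exact)  # both are about -0.8284
```

Even this tiny case shows the competition between hopping t and repulsion U; the exponential wall only appears as the lattice grows, because the Hilbert space dimension grows as 4 to the number of sites.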
And that's why we want to build a quantum device to solve it. This one is built using atoms at nanokelvin temperatures-- very cold quantum gases-- with lots of lasers to form a lattice.
It costs about three million to build. It took three years of work by top experts to really calibrate it, tune it, and so on.
So this is high engineering.
And then they have a device
that implements this model.
And the first question I ask is: does it work, and how well?
So before I trust this thing
here to solve my model,
I first want to test it on
the model that I can solve.
And that's what we did with the bosonic version of this type of model. We have the lattice, we have the atoms in it that we simulated-- we simulate the full setup and measure the so-called momentum distribution function as it gets colder and colder. This is the simulation, this is the experiment, and the agreement is perfect. These things work. They've really built a quantum simulator for this model, found the phase transition point, and so on. And they work.
Now one can do
many, many things.
One can do the hard
problems with fermions.
One can do dynamics.
There are lots of
things that one
can do that are
hard classically.
But can this device
really be used
to find us a room
temperature superconductor?
There are challenges here.
The first challenge is we have this gas at nanokelvin temperatures-- but the relevant energy scales are even lower, and the problem is you have to cool it further. Where are we? We're at a temperature of about one-tenth of the Fermi energy in the system. This is not cold enough to see magnetism. And it is about a factor of 50 or 100 higher than what is needed to see superconductivity.
It's a huge challenge to
cool that factor 100 lower.
It's a big challenge
to calibrate it.
And it's a big
challenge to measure
things one wants to measure.
It's a big challenge to do
more than the simple model.
And why is that?
Because this is a device that is analog. I simply build the model with quantum Lego bricks. But it is very hard to really get full control at scale in an analog device.
So what we would ideally
want to have-- we
want to have a quantum
computer to solve it.
So the question then is what
about quantum computers?
And could we solve this
problem on a quantum computer?
Or let's ask the main question: what would we do with a quantum computer?
So what are not just math problems, but really important application problems, that I can solve on a quantum computer but that we cannot solve on the best classical hardware that you could imagine having in ten years?
So what is the killer app
for a quantum computer?
When I ask my colleagues, they tell me Grover search. That's the problem where, if I am given a database that is not sorted and I want to find an element, classically I have to go through the whole database-- that takes N steps. Quantum mechanically, I can do it in about the square root of N steps.
This is a beautiful
speed-up that can be proven.
But you need to query the database through an oracle, and that oracle you have to implement. And that's where the problem is. If this is a real database with real data in it, then I cannot implement a call to it with less hardware than there are data points in the database. I have to at least read the data in once. But while reading the data in once, I can already find the element classically. So when I want to really implement this for real data, it will not work. If I have order-N hardware anyway, then I can find an element in log N time classically.
So Grover search is not useful for looking through real data. It's useful for a database that can be calculated efficiently on the fly, in time faster than the square root of N.
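The query-count argument can be made concrete with a small cost model (my own sketch; the factor pi/4 is the standard Grover iteration count, and the point stands only if one oracle call is cheap):

```python
import math

def classical_queries(n):
    # Unstructured search: worst case, scan all N entries.
    return n

def grover_queries(n):
    # Grover's algorithm needs about (pi/4) * sqrt(N) oracle calls.
    return math.ceil(math.pi / 4 * math.sqrt(n))

for n in (10**6, 10**12):
    print(n, classical_queries(n), grover_queries(n))
# The quadratic speedup only pays off if the "database" can be computed
# on the fly -- not if it must first be read in from N items of stored data.
```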
And the challenge here, which nobody has been able to answer yet, is: what is the application problem where that would help us? If you have ideas, I'd like to hear them.
So one thing is gone.
That's beautiful.
But what is the real
world application?
Let's look at factoring. We have a huge integer here that we want to break into prime factors. I bet you can't do that, because it takes exponential time classically. But here are the factors-- it's easy to multiply them, but it's hard to factor.
And that's a problem
that's hard classically.
But that's a problem where people showed that you can do it in polynomial time-- N cubed space-time complexity-- on a quantum computer. And that suddenly made quantum computing interesting for the broad masses, because now if you have a quantum computer, you can factor, and you can break RSA encryption with a near-1,000-bit key in hours instead of millennia.
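The asymmetry between multiplying and factoring is easy to demonstrate at toy scale (a sketch of the classical side only, not Shor's algorithm; the primes below are arbitrary examples):

```python
def factor(n):
    """Naive trial division: run time grows exponentially in the bit length of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime

# Easy direction: multiplying two primes is cheap.
p, q = 104723, 104729
n = p * q
# Hard direction: even at this tiny size, factoring takes ~100,000 divisions;
# for a 1,000-bit RSA modulus the same approach is utterly hopeless.
print(factor(n))  # recovers (104723, 104729)
```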
But is that a real application?
My claim is it's not, because the moment we get ready to build a quantum computer, we will change encryption schemes to quantum-safe schemes.
Why is there interest? Because the agencies want to know whether anybody in the world can factor numbers. As long as nobody can, it is safe. When somebody can build a quantum computer, then it's unsafe, and one has to change. One simply wants to know whether it's possible now to build one.
And that's why there's funding.
But once we have a quantum computer, that application would just disappear, because we will have changed the encryption schemes. Maybe some people want to decrypt some old data, but at some point even that would be useless.
So factoring might fund most of the field now, but it's not the killer app in the end.
And people look around. There's been a paper on quantum PageRank by the people at USC. They say that one can solve the PageRank problem in log N time. To encode the state, one needs log N qubits in the simplest way-- but that's not practical. For the practical implementation, one needs N qubits. And then one has to solve it with an adiabatic quantum algorithm implementing a certain model that has N squared couplings in it. So the time it needs is log N in the number of web pages-- although that's in question.
And to get that time, one needs the N squared couplings-- one needs N squared hardware resources. Classically, PageRank can be solved with log N matrix-vector multiplications, where each one costs t times N, with t the mean number of links and N the number of web pages.
If we compare that: if we run the quantum device serially-- one coupling after the other-- then the time is N squared times log N, while on a classical machine it would be t times N times log N. So here the classical device would win, with the same memory requirements.
If one runs the quantum device in parallel, the time goes down to only polylog N. If I do it on a general-purpose classical machine with a [INAUDIBLE] network, the time is N to the one-third on the classical device. If I build a special-purpose classical device in the same style as the quantum device, with N squared links in its network, I bring it down to log N squared.
So in the end, if I
compare not the quantum
computer against a
single classical core,
but the quantum computer
against a classical device
with the same hardware
scaling, there's no advantage.
And actually, the quantum
device would need more hardware
than the classical one.
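The serial-case comparison can be summarized in a small cost model (my own reading of the scalings just quoted; constants and the mean link count t are illustrative, and only asymptotic terms are kept):

```python
import math

def costs(n, t_links=10):
    """Asymptotic PageRank cost model, following the comparison in the talk.
    n: number of web pages; t_links: mean links per page; constants ignored."""
    log_n = math.log2(n)
    return {
        # quantum annealer with N^2 couplings, applied one after the other
        "quantum_serial_time": n**2 * log_n,
        # classical: log N sparse matrix-vector multiplies at cost t*N each
        "classical_time": t_links * n * log_n,
        # hardware (couplings/memory) needed in each case
        "quantum_hardware": n**2,
        "classical_hardware": t_links * n,
    }

c = costs(10**6)
# With comparable or less hardware, the classical approach wins on time.
print(c["classical_time"] < c["quantum_serial_time"])   # True
print(c["classical_hardware"] < c["quantum_hardware"])  # True
```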
So that's also not the problem
that we would like to solve.
So what do we want to use
a quantum computer for?
That question gets
more and more urgent.
We can use it to solve a linear system of equations in logarithmic time. There is an algorithm for that by Harrow, Hassidim, and Lloyd. What it needs is that you are able to take your right-hand side b and store it in a quantum register, and that you evolve it with the matrix for which you want to solve the problem-- we need to implement this time evolution here.
Would that help us?
If we do it for
a general matrix,
then a general matrix
has N squared entries.
And thus we need N squared gates
in general to implement it.
And there's been a
beautiful algorithm
here that shows
how it can be done.
But in the simple way they proposed it, one needs order N squared gates and qubits to implement it, and one needs order N cubed preprocessing.
So if I have a classical matrix, then I can do it in log N time on the quantum computer with N squared hardware. But with N squared hardware, I can already solve the problem classically.
And furthermore,
with the hardware
that's required
in this approach,
I could actually emulate
the quantum computer
and just run the
quantum algorithm
on the classical device.
So what that means is we don't want to run it on classical data. We want to run it on some matrix that comes from some quantum circuit-- some quantum dynamics that is efficiently computed on the quantum computer. And we can only do that if the evolution needs a short quantum circuit, is efficient to code, and depends on little classical data. Then we can get this exponential quantum speedup.
And that has been tried for electromagnetic wave scattering problems: an incoming wave, and an object in here that scatters the radar waves. And they costed out exactly how many quantum gates it would need, and how big the problem would have to be, for the quantum computer to work faster than a classical supercomputer.
And the bad news was that the run time is beyond a millennium on such hardware.
That means we need to
substantially improve
those algorithms.
And we need to find
better problems.
So when we got to this point, we felt pretty depressed. Why do we build a quantum computer? And I don't want you to feel depressed-- I want to tell you we can do it.
But we need to look for problems
that are classically hard.
We shouldn't look
for problems that
are quantum mechanically easy.
But we should
maybe start looking
at classically hard
problems and see
if we can solve them
quantum mechanically.
AI here is a challenge,
but might work.
What we looked into is which codes use the most time on the big supercomputers, because those are the most likely candidates. And here is a list of the fastest codes on the Jaguar machine-- those were the first codes that reached one petaflop performance.
And the first five codes to go beyond the petaflop all solved quantum mechanics problems-- materials science problems, quantum chemistry problems.
So can we do quantum chemistry
on a quantum computer?
Can we design a room
temperature superconductor
on a quantum computer?
Can we develop a room
temperature catalyst
for sequestration of carbon
to solve the global warming
problem?
Can we develop a better
catalyst for nitrogen fixation
and get cheap fertilizer?
Those would be really important problems. If you can do that better on a quantum computer than on a classical one, then it is definitely worth building one.
But these are all problems for which we don't yet have classical codes good enough to solve them well enough to be useful-- the problem is exponentially hard classically. But it has polynomial complexity on a quantum computer. So maybe that works.
But maybe we run into the same problem-- that the run times are millennia.
Let's hope not.
So how do we gain speed here?
You have to take a
molecule or a material
and find its ground state.
Classically, that's
very easy in principle.
I take some random vector.
I multiply it with the matrix.
Or I evolve it in
imaginary time.
And I project out
the ground state.
On a quantum computer, I cannot do that, because a quantum computer has speedups, but only for unitary operations. So I cannot just multiply with the matrix and do the power method or an iterative eigensolver. It's hard because I can do things exponentially faster, but only for a limited set of operations.
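The classical projection just described-- take a random vector and repeatedly apply the matrix until only the ground state survives-- fits in a minimal sketch (my own illustration: a shifted power iteration standing in for imaginary-time evolution, on a random symmetric matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric matrix standing in for the Hamiltonian of a small problem.
n = 50
A = rng.standard_normal((n, n))
H = (A + A.T) / 2

# Project out the ground state: power iteration on s*I - H, where the shift s
# (a Gershgorin-style bound on the spectrum) makes the ground state the
# dominant eigenvector. exp(-tau*H), imaginary-time evolution, works the same way.
s = np.abs(H).sum(axis=1).max()
M = s * np.eye(n) - H

v = rng.standard_normal(n)
for _ in range(10000):
    v = M @ v                    # amplify the ground-state component
    v /= np.linalg.norm(v)       # keep the vector normalized

E0 = v @ H @ v                   # Rayleigh quotient: the ground-state energy
print(E0, np.linalg.eigvalsh(H)[0])  # the two agree
```

The catch in the talk is exactly that the step `M @ v` is not unitary, so it has no direct quantum analogue.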
So what I can do is I can
try to prepare a state that
might be close to
the ground state.
Once I'm there, I
measure the energy.
By measuring the energy, I fall
into one of the eigenstates.
If I measure the lowest one,
then I'm in the ground state.
And then I have it prepared.
The probability depends on
the overlap of the trial state
with the ground state.
So how do I measure the energy?
We can't just go there
with an energy meter.
But what I can do is I
can evolve the quantum
system under the
Hamiltonian for some time t.
And then it picks up a phase
depending on the energy.
And that phase I can try
to measure with something
called quantum phase estimation.
What I do here is I
interfere the system
with and without that phase.
And that way I
measure the phase.
So I take the ancilla here.
I put it into a superposition
of zero and one.
If it is zero, I don't do anything. When it's one, I evolve the system with the Hamiltonian, so it picks up the phase phi when the ancilla is one. Then I change the basis and measure. If I measure zero, the phase is zero modulo 2 pi; when I measure one, it's pi. That way I have picked out one bit of the phase, and thus of the energy. When I repeat it with many different evolution times, I get the full energy out.
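A minimal numerical sketch of this single-ancilla interference (my own idealization of a Hadamard test on an exact eigenstate; sampling noise and decoherence are ignored, and two phase-shifted runs are used to resolve the sign):

```python
import numpy as np

def hadamard_test_p0(phi):
    """Probability of measuring the ancilla in |0>: the ancilla is put in
    (|0> + |1>)/sqrt(2), the controlled evolution attaches e^{i*phi} to |1>,
    and a second Hadamard turns the phase into a measurable bias."""
    amp0 = (1 + np.exp(1j * phi)) / 2
    return abs(amp0) ** 2          # = cos^2(phi / 2)

# Toy example: the system sits in an eigenstate with energy E; evolving for
# time t imprints phi = -E * t (hbar = 1).
E, t = 0.7, 1.0
p0_cos = hadamard_test_p0(-E * t)               # gives (1 + cos(E*t)) / 2
p0_sin = hadamard_test_p0(-E * t - np.pi / 2)   # phase-shifted run
c = 2 * p0_cos - 1        # cos(E*t)
s = -(2 * p0_sin - 1)     # sin(E*t)
E_est = np.arctan2(s, c) / t
print(E_est)  # recovers E = 0.7 (modulo 2*pi/t)
```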
So how do I use that to solve
a quantum chemistry problem?
The classical approach uses a basis set. One runs a calculation like a Hartree-Fock calculation and gets some approximate solution, which also gives one an orthonormal basis set. One then looks at the problem in this basis-- a Hamiltonian like this, with order N to the fourth terms in it. And then one solves it exactly, sometimes with a full CI calculation.
But that exact calculation
is exponentially hard.
So what we can instead do on quantum hardware-- the paper here is by the Aspuru-Guzik group-- is take this model and solve it on the quantum hardware. One prepares a guess for the ground state, measures the energy, gets the ground-state energy and wave function, and then measures whatever one wants to measure in that state.
And the scaling is polynomial.
So it's much better
than classical.
But the challenge is there are N to the fourth terms for N electrons involved. I have to evolve under this Hamiltonian. I can do that in an easy way by breaking the time evolution into small steps, with one term after another. There are N to the fourth terms in here, and for each of those terms there is an efficient short circuit to implement it. That way it can be done, and the time is polynomial.
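This term-by-term time evolution is a Trotter decomposition, and its error can be checked numerically in a toy case (my own sketch: two non-commuting two-qubit terms standing in for the N-to-the-fourth terms):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def expm_h(H, t):
    """exp(-i*t*H) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

# Two non-commuting Hamiltonian terms (each has a short circuit on hardware).
A = np.kron(Z, I2)
B = np.kron(X, X)
t = 1.0
U_exact = expm_h(A + B, t)

def trotter(n):
    """First-order Trotter: alternate the two terms' evolutions n times."""
    step = expm_h(A, t / n) @ expm_h(B, t / n)
    return np.linalg.matrix_power(step, n)

err = [np.linalg.norm(trotter(n) - U_exact) for n in (4, 16, 64)]
print(err)  # the error shrinks roughly like 1/n
```

Ordering the terms well, as described later in the talk, is exactly about making such steps bigger and cheaper.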
A few technical details for the experts: I save the occupation number of an orbital in a qubit. I map the operators to the Pauli matrices-- one represents the fermions with qubits via the Jordan-Wigner transformation, which gives us long strings of Pauli Z operators here. And then I map each term to a quantum circuit that evolves under it.
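The Jordan-Wigner construction can be checked directly in a few lines (a sketch; the chains of Z matrices below are exactly those long strings, and carry the fermionic sign):

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
a_site = np.array([[0.0, 1.0], [0.0, 0.0]])  # annihilates an occupied orbital

def jw_annihilation(p, m):
    """Jordan-Wigner: a_p = Z x ... x Z (p factors) x a x I x ... on m qubits."""
    op = np.eye(1)
    for j in range(m):
        if j < p:
            factor = Z        # the sign-carrying string
        elif j == p:
            factor = a_site   # the local annihilation operator
        else:
            factor = I2
        op = np.kron(op, factor)
    return op

m = 3
a = [jw_annihilation(p, m) for p in range(m)]
# Verify the canonical anticommutation relations {a_p, a_q^dag} = delta_pq.
for p in range(m):
    for q in range(m):
        anti = a[p] @ a[q].conj().T + a[q].conj().T @ a[p]
        expected = np.eye(2**m) if p == q else np.zeros((2**m, 2**m))
        assert np.allclose(anti, expected)
print("anticommutation relations hold")
```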
And now the question is which problem I would really solve. Can I solve a non-trivial small problem-- something that is hard classically? One can solve problems up to about 50 spin orbitals-- maybe 70-- classically well enough.
It gets challenging
at about 100.
And the interesting problems that I mentioned before start at about 200 to 400 spin orbitals-- some catalysts-- while for the superconductors, I need at least near 10,000.
So let's go for
something that is just
a little bit beyond what
we can do classically.
And let's calculate it to an energy that is a bit less accurate than what my chemistry colleagues want. And let's see how long it would take.
For the question of how long it would take, I'm assuming a realistic machine-- something with a 10-nanosecond gate time. That's realistic in the sense that we have ideas for how we might be able to build it in 20 years. It's not something we have; what we build first might run much, much, much slower.
But if we then plug in how long this calculation-- one that is just marginally impractical classically-- would take, the first estimates turned out to be 30 years. For the real problems that we're really interested in, it was, again, millennia. For the [INAUDIBLE] high-temperature superconductor, it was more than the age of the Universe.
At that point, one asks is that
really going to work or do we
have the same problem
again as before?
But because we got desperate, we saw that we had to improve it. And the first statement was that we said, OK, we simply have to bring it down-- 30 years doesn't sound as bad as three millennia. And over the last year we brought it down. The first tricks were simple software optimization tricks.
Take the code, rewrite
it to cancel out terms,
make it parallel--
we optimized it.
And that bought us a factor of 100 times N squared.
Then we found that when
you order the terms well,
you can do bigger time steps.
That helped, so that after one year of software optimization and algorithmic development, it ended up that we can do the problem on the same machine in two minutes.
Now we're in business.
What was in there that helped us get this factor of 100 times N squared out? One thing was that there were long strings of CNOT gates that, if you order the terms correctly, just cancel out-- and we gained a factor of N. Another step: there are many terms in there that you cannot apply at the same time unless you order them smartly.
When you have an electron jumping from here to here, this circuit touches all of the qubits between one and seven. It cannot, at the same time, run together with a circuit through here that touches qubits two to four. But if one thinks more carefully, the circuit that goes from here to here depends on the qubits in between only in a special way-- just through the parity of those states, which are unchanged. So by changing the circuits, one can do those terms in parallel.
And so by rewriting
the circuits,
by optimizing quantum codes, by
optimizing codes for a machine
that you might only have in 20
years, we got the scaling down.
We got the run times down.
Now it seems that those
problems would be realistic.
And we can use
that, for example,
to solve the problem
of nitrogen fixation.
That is, I want to find a catalyst that converts nitrogen from the air into ammonia, which can be used to make fertilizer.
Currently, this is done by the Haber process, which requires methane from natural gas and lots of energy, high pressure, and high temperatures-- it uses between 3% and 5% of the world's natural gas production and 1% to 2% of the world's energy just to make fertilizer.
But we know that
in the soil there
are bacteria that can do it
at ambient pressure at room
temperature.
If we could find out how they do it, and if we could mimic that, then it would be much, much cheaper.
And that's a killer
app, I would say.
What it would need, we estimate, is between 200 and 400 qubits-- and still a big array of many, many quantum computers, because one has to explore various parameters and steps in the reactions, configurations, candidate materials.
But that's something
where a quantum
computer would definitely help.
So we have at least
one killer app.
Quantum chemistry, I
think, is one of them
for certain problems.
Next one, the room-temperature superconductor-- you could take the same algorithm that we sped up so much. But we need to simulate about 20 by 20 unit cells of the material, each with about 50 electrons-- that's about 80,000 electrons.
If we plug in the
scaling we have,
then we find that the estimate,
even using our fast algorithm,
is comparable to the
age of the Universe.
And we, again, have a problem.
But remember before
I said we can go back
to simplified models.
If I plug in this Hubbard model instead-- the effective one-- then the number of electrons is down to about 800. The number of terms is down to order N. The run time per term is down to order one. The run time for the state preparation is linear in N. The total run time to find the ground state is milliseconds to seconds.
So what we'll have to do-- we cannot brute-force arbitrary materials. We will be able to brute-force small molecules once we have a quantum computer. But for materials, we need physics insight. We have to simplify to the simple model, which we then solve.
And as I said,
they will be easy.
They will run in seconds.
There are still many open questions here. How do we prepare the ground states well? How do we measure things well? Because, as I said, it takes only a second to prepare the state-- but if we then have to measure a billion times, it still takes a billion seconds.
So we want to find ways to
non-destructively measure
properties.
And then in the end,
to find out what
is a room temperature
superconductor.
But the Hubbard model alone
is too simple, as I mentioned.
So we need to find out how we
can apply this to materials.
From the simplified models, we
gain insight into mechanisms.
We learn about which properties
the material should have.
But we gain no quantitative
predictive power.
What we have to do, because we cannot do the full material, is choose a hybrid scheme.
We have to start from the full material. I have to extract a simple model that captures the right physics-- one more realistic than the Hubbard model. I solve that on the quantum computer, and I find the physics of it. I might learn it's different from what I thought, so I need a better model-- go back, iterate, and so on.
But with a hybrid quantum
classical approach
plus insight, we will
be able to solve it.
So I think that's one of the
big applications of a quantum
computer.
And in the last five minutes
or so if I have some time.
Five minutes--
when should I stop?
MALE SPEAKER: You've
got five more minutes.
MATTHIAS TROYER: Five
more minutes, yes.
I want to mention
some work we recently
did on the topic of
quantum annealing.
I think that the methodology
should be well-known here.
I don't have to
introduce it, really.
Or should I mention something?
MALE SPEAKER: Keep
it very short.
MATTHIAS TROYER: OK, short.
So we want to solve a problem
like Ising-- the spin glass
problem.
And on the quantum
device, I do that
by encoding the problem in
the spin glass Hamiltonian.
But I add a strong transverse field so that my spins start out pointing in the field direction.
Then I slowly switch
the field off.
I slowly switch on the
couplings for my problem.
And then if I do that in a quantum annealer that is sufficiently cold and coherent, I find, at the end, the ground state-- the solution to the spin glass problem.
And that can be done
both in a quantum device
or in a Monte Carlo simulation.
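The classical side of that comparison-- a thermal annealer on a spin glass-- fits in a short sketch (my own toy instance: a ring with random couplings, where temperature plays the role of the transverse field; the schedule and size are illustrative):

```python
import math
import random

random.seed(1)

# A tiny Ising spin glass on a ring: H = sum_<ij> J_ij * s_i * s_j
# with random couplings J_ij = +/-1.
n = 16
J = {(i, (i + 1) % n): random.choice([-1.0, 1.0]) for i in range(n)}

def energy(s):
    return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

# Classical simulated annealing with a Metropolis update and a slowly
# decreasing temperature (the thermal analogue of switching off the field).
s = [random.choice([-1, 1]) for _ in range(n)]
steps = 20000
for step in range(steps):
    T = max(0.01, 3.0 * (1 - step / steps))  # linear cooling schedule
    i = random.randrange(n)
    old = energy(s)
    s[i] *= -1                               # propose a single spin flip
    dE = energy(s) - old
    if dE > 0 and random.random() >= math.exp(-dE / T):
        s[i] *= -1                           # reject the uphill move

print(energy(s))  # close to the ground-state energy of this small instance
```

A quantum annealer replaces the thermal fluctuations with quantum tunneling; the question in the talk is whether that ever helps asymptotically.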
What we learned from
the devices by D-Wave
here both at USC and
here at Google and NASA
is that those machines
have entanglement.
There is collective tunneling.
And their performance
is consistent with that
of a quantum annealer.
So they really look like
they are quantum annealers.
But it has also been found
that in many problems
there is a simple classical
model that mimics it.
And so far we have not
seen quantum speedup yet.
That is surprising, because the hope from the papers by Santoro and others was that a quantum annealer should easily outperform a classical annealer. They showed that by looking at a quantum annealing run in a Monte Carlo simulation of the quantum system, and they compared that to a classical annealer.
And they found that the difference between the best state found and the true ground state goes down much, much faster in the simulation of the quantum annealer than in the best classical annealer-- for some problems, like the spin glass problem, not for others.
But at least for
spin glass it should
be much, much more efficient.
That's what one expected.
But on D-Wave we
saw the opposite.
The classical annealer
worked better than D-Wave.
So that was a big puzzle here.
Why are the D-Wave machine and the simulation worse than the classical annealer, when one thought they should be better?
We looked into that and found that it depends very sensitively on the time step one uses in the path-integral Monte Carlo simulation.
If, like in the paper by Santoro, I use just 20 big time steps, then the energy drops very, very rapidly.
But if I make the time steps smaller and smaller and finer and finer, and go to the continuous-time limit of the physical system, then the simulation gets stuck at higher and higher energies, and it becomes less and less efficient.
So this quantum Monte Carlo simulation definitely outperforms a classical annealer if I run it as a classical algorithm with a big, well-chosen time step-- then the energy drops much, much faster in this simulation of an unphysical machine than in the classical annealer. But if I run it in the physical limit, then for these problems the classical annealer lowers the energy much, much faster asymptotically than the quantum simulation.
What we learn here is that if we want to run a quantum annealer on a classical computer, in a Monte Carlo simulation, as an optimizer, we want to choose a big time step. If we want to find out for which problems the quantum annealer may be better than the classical one, then we should take a small time step.
The next issue we looked at was the spread of run times, because so far most people had looked at just the typical time-- the median. But if you look at the various quantiles, there is a huge spread: the hard problems might take hundreds or a thousand times longer than the typical ones.
And that spread seemed to be larger in the simulation of the quantum annealer and on the D-Wave than in the classical annealer. We found very fat tails-- much, much longer times on the simulation of the quantum annealer than on the classical one.
And that is disappointing
for quantum annealing.
But fortunately, we
found that there's
a surprising but easy
way to fix that problem.
And that way is to just run
this quantum annealer much, much
faster than we thought
it should be run.
You would think that running it very, very slowly-- adiabatically slowly-- you would find the best, perfect state. But it turns out that when you run it faster, these tails get much, much narrower. When you run it very, very slowly, you might consistently get stuck, for the hard problems, in the wrong state.
If you then speed it
up and run it too fast,
you get the chance of
jumping out and finding
the right state.
So it works much, much
better for the hard problems
if you run it faster than
you think it should be run.
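One way to see why running faster can pay off: if a single anneal of length t succeeds with probability p(t), the expected total time with random restarts is t / p(t), which can be minimized at a short anneal time. A sketch with a hypothetical Landau-Zener-like success model; the model and its constants are assumptions for illustration, not data from the talk:

```python
import numpy as np

def expected_total_time(t_anneal, p_success):
    """Expected time to solution with random restarts: each anneal of
    length t_anneal succeeds independently with probability p_success,
    so on average we need 1 / p_success tries."""
    return t_anneal / p_success

def success_probability(t_anneal, c=50.0, p_max=0.3):
    """Hypothetical Landau-Zener-like model: success is exponentially
    suppressed for very fast anneals and saturates at p_max for slow
    ones. c and p_max are illustrative constants."""
    return p_max * np.exp(-c / t_anneal)

times = np.logspace(0, 4, 50)  # candidate anneal times
totals = [expected_total_time(t, success_probability(t)) for t in times]
best = times[int(np.argmin(totals))]
# The minimum sits at a short anneal time: many fast, mostly failing
# runs beat one slow, "adiabatic" run.
print(f"best anneal time among candidates: {best:.1f}")
```

Under this model the optimum is near t = c, far below the slowest candidate, which is the restart version of "run it faster than you think it should be run."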
I think I'm going to stop here
because time's up and just skip
a couple of slides
and jump to the end.
If we view a quantum computer
as something for post-exascale,
then that's challenging
because we need a problem that
is too hard for
classical computers
and where the code
finishes in, let's
say, one year or 10 years
and not in a millennium.
And the problem here is that CMOS
is just so amazingly good
that it's hard to beat.
But while factoring
is not the killer app,
quantum chemistry
definitely should be.
Solving linear systems might be.
Machine learning is the big
hope, and we'll hopefully
learn from you soon how
you want to tackle it.
But the main thing is
that it is time now
to look at quantum algorithms
not only in the field of
theoretical computer science,
but as application problems:
which problems would we solve,
and how? And to look at them
from a software engineering
perspective, and not only
the theoretical perspective.
Thank you.
[APPLAUSE]
MALE SPEAKER: Thank
you, Matthias.
The microphone is not on.
But the room is small enough.
MATTHIAS TROYER: Mic's on.
MALE SPEAKER: To
check whether there
are a few questions
for Matthias.
Yes?
Maybe you could come up
to the microphone, please.
AUDIENCE: So with
your simulations
there at the end, with
the small time step,
the energy fell quicker
in the beginning.
And then it seemed to
get stuck in a state.
MATTHIAS TROYER: Yes.
AUDIENCE: Is it possible to--
is it actually stuck in a state
or could you maybe then
increase the time step?
Have a simulation
with a variable time
step that would then optimally
find the state faster?
MATTHIAS TROYER: You think we
could first go down quickly
with the small time step
then increase the time
step until we get kicked out?
My fear is once we're
stuck there, we're stuck.
But, yes, it's very
interesting to see
what the optimal schedule is--
maybe first run
with the smaller time step,
then with the bigger one.
Although, here now-- so
this is the total time
to run to the end.
And when I run fast, then a
small time step is better.
When I run slowly, then a
big time step is better.
But we still might run some
parts of it with a big time
step-- some parts
with the small one--
and see what comes out best.
What this mainly tells us is:
if I want an approximate solution--
a good one-- pretty quickly,
then use a quantum device.
It's better.
If you want the best possible
solution for these problems,
then run the classical
device slower.
But to get something
good enough fast,
use the quantum [INAUDIBLE]
here with the small time step.
AUDIENCE: I had one
other question--
I looked at one of your papers.
And you said you were using
an 80 by 80 square lattice.
MATTHIAS TROYER: Yes.
MALE SPEAKER: Do
you have to worry
about any frustration with
a 3 by 1 ground state cell?
Does that make sense?
For the lowest energy?
Do you know what
these look like?
MATTHIAS TROYER: OK, so
this is the hard problem,
a frustrated spin glass.
But it's planar and there are
efficient classical algorithms
to find the ground state.
And that's what
made this possible.
If we go to harder problems,
like a three-dimensional spin
glass, or long-range couplings, or
the hard cases, that's
where a quantum speedup
might be [INAUDIBLE].
But that's hard to check
because we don't know
the ground state for
the big problems.
So the fact that here
the classical annealer
is better than the quantum one
might also be because that's
a problem that is still
pretty easy classically.
For the harder problems,
sometimes it works worse,
sometimes it works better.
And that's what we need
to find out, and what people
here work on: which
problems will that help us with?
And how should
one build a device
to actually make
it work in the end?
MALE SPEAKER: Thank you.
Any further questions?
MATTHIAS TROYER: I
tried to provoke you.
MALE SPEAKER: I had one.
On the location
of the knee where
it becomes more horizontal.
Do you guys have a
good understanding
where the position lies as
a function of the step size?
MATTHIAS TROYER:
Where the position
lies as a function
of the step size.
MALE SPEAKER: You know,
the knee where it bends.
You showed the envelope.
Do you have any good
understanding of that law?
MATTHIAS TROYER: No.
We don't.
AUDIENCE: And this is
where your verticals start
seeing the other well, right?
MATTHIAS TROYER:
No, so this is where
they are stuck in the well.
And they don't get out anymore.
AUDIENCE: Yeah, but if
you have too many of those
particles, then
it's very difficult
to produce an instanton.
But if you have few
of them, some of them
can quickly run in
there and pull others.
That's, I think, the
origin of what you see.
MATTHIAS TROYER: What I see
is when I-- so when I run it.
So this is the number
of replicas that I have.
AUDIENCE: Yes.
MATTHIAS TROYER: Yes, when
I have few replicas then
it's very easy to create an
instanton and pull myself out.
AUDIENCE: Yes,
that's what I meant.
MATTHIAS TROYER: Yes, exactly.
AUDIENCE: That's
exactly what I meant.
There is an optimum somewhere.
MATTHIAS TROYER:
There's an optimum, yes.
So when I have too few,
then it's classical.
When I have too many,
then I have too many
and I am held back.
AUDIENCE: Even if
a few of them jump,
the others will still
stay because there
are too many of those.
MATTHIAS TROYER: So I think the
optimum is when an instanton is
about as long as the spacing
between the replicas--
when it's a single replica
that can tunnel, or a few.
When it's too many,
then it's just too hard
to really pull them all over.
So we have a qualitative
understanding, but not of
where exactly this knee happens.
What's also interesting
is that if we run it
as a-- is that in some
cases, we actually
found that running it longer and
slower gives us worse energies.
So when I run it with
the same parameters--
when I run it slower, sometimes
the energy goes up and up
and up as I run it slower
and slower and slower.
AUDIENCE: That's not
classical [INAUDIBLE].
MATTHIAS TROYER:
That is possible
because I run the
annealer slower.
And when I run it
fast enough, then it--
so I have the
system-- it was slow.
And it's stuck in
some local minimum.
And when I move it very,
very slowly, then it
will always carefully
stay in the local minimum.
When I move faster, then it
might just jump out of it.
So we at first thought that
that shouldn't happen here,
but this is a classical code.
And when I run the annealing
faster, it gets better.
AUDIENCE: Is there any
possibility of combining--
MALE SPEAKER: Would you
mind using the microphone
because, otherwise, on the
video it will not play properly?
AUDIENCE: So first
of all, I'm not sure
if this question
even makes sense.
So let me know if it doesn't.
But is there any
possibility of combining--
doing part of the problem
on the quantum simulator
and part of the
problem on a classical
and somehow, for
example, you run
this quantum for a while--
you extract the state.
And then you refine it on a
classical simulator-- something
like that.
MATTHIAS TROYER:
That is possible.
Yes?
AUDIENCE: Is it useful?
MATTHIAS TROYER:
That's possible.
And it's useful, yes, because
the quantum device might not
give you a ground state.
But it finds something
that is good.
But it might not be
the true minimum there.
And then you do something
like a gradient descent
or some local classical
post-annealing.
And that does improve things.
It doesn't change the scaling.
But it improves things.
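The classical post-processing mentioned here can be as simple as greedy single-spin-flip descent on the state the annealer returns. A minimal sketch on a random Ising instance; the instance, couplings, and starting state are hypothetical stand-ins for an annealer's output:

```python
import numpy as np

def ising_energy(s, J, h):
    """Energy E = -1/2 s^T J s - h . s for spins s in {-1, +1}
    (J symmetric with zero diagonal)."""
    return -0.5 * s @ J @ s - h @ s

def greedy_descent(s, J, h):
    """Flip single spins while any flip lowers the energy -- a simple
    classical post-annealing step. Flipping spin i changes the energy
    by 2 * s_i * ((J @ s)_i + h_i)."""
    s = s.copy()
    improved = True
    while improved:
        improved = False
        for i in range(len(s)):
            delta = 2.0 * s[i] * (J @ s + h)[i]
            if delta < 0:
                s[i] = -s[i]
                improved = True
    return s

rng = np.random.default_rng(1)
n = 20
J = rng.normal(size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
h = rng.normal(size=n)
s0 = rng.choice([-1, 1], size=n)  # stand-in for an annealer's output
s1 = greedy_descent(s0, J, h)
print(ising_energy(s0, J, h), ">=", ising_energy(s1, J, h))
```

As in the talk: this kind of local refinement improves the energy but not the scaling, since it only descends to the nearest local minimum.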
What one can also do is see
that if we take this system
with this big time step
where it's a few coupled
replicas in the path
integral, then that
is sometimes better than
running in the physical limit.
So maybe we could just take
the quantum replicas and couple
them.
Let's run 10 replicas
in parallel and couple
them in some classical way
and see if that helps us.
Yes, that was a good question.
Yeah, but there's so
much to explore here.
MALE SPEAKER: OK.
I am tempted to conclude.
We are 15 minutes over time,
unless somebody
has a burning question.
But Matthias is still around
for the remainder of the day.
So feel free to grab him
over a cup of coffee.
Thanks one more time,
Matthias, for [INAUDIBLE].
[APPLAUSE]
[MUSIC PLAYING]
