SPEAKER: So he's talking about
testing adiabatic quantum
computers using simple
quantum simulation.
PETER LOVE: OK.
Thank you very much
for the invitation,
and thanks to all
the speakers so far
for being a hard act to follow.
So what I want to talk about
today is an issue that you
should not require
convincing is important,
which is determining whether
the adiabatic quantum computers
or quantum annealers
that one might build
are actually doing what
you think they're doing.
So this is the most
superfluous slide
imaginable for this
audience, but this
is adiabatic quantum computing.
You start in some simple state
of some simple Hamiltonian,
you drag your Hamiltonian over
by switching on interactions,
and what matters is
the instantaneous
eigenspectrum at each point,
particularly the gap.
OK so another way of
thinking about what
I'm talking about
today is to think,
is there a distant future
for these machines?
So one question is
can you turn them
into a universal adiabatic
quantum computer,
so I'll spend a bit of time
telling you what that is.
One thing, as Hartmut
said, is that at some point
we might just feel that we've
developed the technology enough
by pursuing this annealing
route and dive off
and build a gate-model machine.
We might make a sequence of
post-annealing modifications,
which I'll define
later, so that when
all these companies working
on quantum computing
have merged into a
single large company,
they can then build a universal
adiabatic quantum computer.
And then the other option
we don't want to talk about.
I don't find this
joke funny, actually.
OK so let me tell you
about universal AQC.
So this is a construction that
goes back to Kitaev, really,
and Wim van Dam and Dorit
Aharonov formalized this
into a proof-- that
adiabatic quantum computing
is equivalent to regular
quantum computing.
What you need is something
new, something extra,
which is what's called
a clock register.
You have your regular
logical register
that stores the state
of your computation.
And the trick is you
make a Hamiltonian, which
is defined so that the ground
state is one in which time
is encoded in a superposition.
So the ground state
of this Hamiltonian
is what's called
the history state.
Every time slice of
your quantum circuit,
represented by
these unitaries, U_t,
occurs as a separate term
in the superposition.
So if you measure this,
you see the time in one
of the registers.
And then you look in
the other register,
that's the state produced
by your circuit evolving
for that time.
And if you look at the terms
that you need to do this,
you have some H clock,
which makes your clock work.
You have some initializations
so that at the beginning,
your circuit is forced to
be in the initial state,
at time t equal to zero.
And then you have
this propagation term.
You evolve forward by U_t, that's
the t-th gate in the circuit,
and you advance the
clock from t to t+1.
This has to be Hermitian, so
you also need the possibility
that you undo that and
decrement the clock,
so you go backwards in time.
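Written out, the pieces just described look like this (a sketch; normalization and index conventions vary between papers, and the clock encoding is left abstract):

```latex
% History state: the ground state, with time in superposition
|\eta\rangle = \frac{1}{\sqrt{T+1}} \sum_{t=0}^{T}
  \big( U_t U_{t-1} \cdots U_1 |\psi_0\rangle \big) \otimes |t\rangle_{\mathrm{clock}}

% Propagation term: apply U_t while advancing the clock, plus the
% Hermitian conjugate that undoes the gate and decrements the clock
H_{\mathrm{prop}} = \frac{1}{2} \sum_{t=1}^{T}
  \Big( \mathbb{1} \otimes \big( |t\rangle\langle t| + |t{-}1\rangle\langle t{-}1| \big)
  - U_t \otimes |t\rangle\langle t{-}1|
  - U_t^{\dagger} \otimes |t{-}1\rangle\langle t| \Big)
```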
So it was no surprise that
this construction goes back
to Feynman, who would never
think of anything going forward
in time without also thinking
of something going backwards
in time.
And so this is the
construction that you use.
This evades many thorny problems
by the use of the superposition
state.
Of course what you do is you
sample from the time evolution
of the circuit.
Just sample a lot, OK?
And eventually
you'll see the end.
Or if you're a
computer scientist,
you say do nothing
for a long time,
and then most of the
superposition is the end state.
Of course we know that we have
to prove this is efficient,
so we want to know
about the gap.
And this is unusually easy.
We can apply a change
of basis defined
by this unitary operator.
And what happens then
is the Hamiltonian
becomes independent
of the circuit.
And if you stare at
this, and you've ever
done any numerical
analysis, you immediately
notice that the Hamiltonian
is a discretization, a rather
trivial discretization,
of the kinetic energy
operator for a quantum system.
And then you can
immediately know
that the gap is going to go like
1 over the dimension squared.
This is not rigorous,
it's just the fastest way
of me communicating that to you.
But you can rigorize this.
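As a quick numerical sanity check of that claim, here is a minimal sketch, using a path-graph Laplacian as a stand-in for the circuit-independent clock Hamiltonian (the boundary details are an assumption; they don't affect the scaling):

```python
import numpy as np

def clock_gap(T):
    # Path-graph Laplacian on T sites: the discretized kinetic energy
    # that the clock Hamiltonian becomes after the change of basis.
    H = 2.0 * np.eye(T) - np.diag(np.ones(T - 1), 1) - np.diag(np.ones(T - 1), -1)
    H[0, 0] = H[-1, -1] = 1.0  # degree-1 endpoints
    evals = np.linalg.eigvalsh(H)
    return evals[1] - evals[0]

# Doubling the number of time slices quarters the gap: 1/T^2 scaling.
g50, g100 = clock_gap(50), clock_gap(100)
print(g100 / g50)  # close to 0.25
```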
OK so this sets out a list of
desiderata which are actually--
I don't actually
know what the Latin
for the opposite
of desiderata is,
but they're not so desirable.
So the gap scaling
is 1 over T-squared.
OK, so that's polynomial.
So if you're a computer
scientist at this point,
you go home.
But the problem is that there
is no Moore's law for dilution
refrigerators.
So eventually your gap is going
to run into the thermal noise.
So this
is a general problem
with these constructions,
including the ones that Toby
was talking about: polynomial
increases in Hamiltonian norm
are regarded, from a complexity
point of view, as acceptable.
Polynomial increases in
Hamiltonian norm in the lab
are not possible.
So that's a problem.
And that's a really serious
and interesting question
that a lot of people
have thought about.
So types of coupling.
You need XX-- you can
either just build XZ,
or you need XX and ZZ.
So the locality of
these constructions
varies from 5-local to 2-local.
And so on.
So I guess my
contribution to this area
is this paper with Jake
Biamonte, from a long time
ago now, where we
showed that just adding
an XX to the transverse Ising
model makes it universal.
Modulo these issues with making
the coupling strengths scale.
And it has to be
non-Stoquastic XX.
OK so how do we modify locality?
I'm kind of going through
Masoud's list of questions
from the panel discussion.
So the central trick
is what I call gadgets.
So a gadget is a perturbation
theory construction.
Rather as Toby was
saying, what we want to do
is we just want the low-lying
levels of our Hamiltonians
to agree.
So I want to build a
physical Hamiltonian, which
has 2-local interactions,
but using it,
I want to simulate a Hamiltonian
that has K-local interactions.
So what I do is I switch on
some ginormous penalty term,
and the strength of this
penalty term is the big problem.
That splits my energy levels
into a low-energy sector
and a high-energy sector.
So then I turn on some
small perturbations
that splits my degenerate
low-energy sector.
And what I do is work out what
effective Hamiltonian describes
my low-energy sector.
Because of the
renormalization group,
we know that in
fact all of physics
is the low-energy sector
of some theory that we
don't know, which I guess
is the one that ET
has on his cell phone.
So this is connected to
ideas of renormalization.
So here's an example.
I have three qubits;
every edge here is a
physical 2-local coupling
that in principle can
be built in the lab.
I have three ancillas,
the green interior ones.
I have an ancilla Hamiltonian
that forces these to be
in a superposition
of 000 and 111.
I have physical couplings
that couple the ancillas
to the red qubits, and
I have interactions
among my red qubits.
The effective
Hamiltonian is then 3-local;
it actually ends up being
this product ABC.
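Schematically, the outcome of such a gadget calculation can be written like this (a sketch: lambda is the small coupling, Delta the penalty gap, and the numerical coefficient c depends on the particular gadget):

```latex
% Penalty forces the ancillas into span{|000>, |111>}; third-order
% perturbation theory in the weak couplings generates the 3-local term
H_{\mathrm{eff}} = \Pi_{-} H_{\mathrm{else}} \Pi_{-}
  + c\, \frac{\lambda^{3}}{\Delta^{2}}\, A \otimes B \otimes C
  + O\!\left(\frac{\lambda^{4}}{\Delta^{3}}\right)
```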
OK so just taking
this forward, here we
are doing quantum
annealing today.
The gate-model remains a
little bit in the future.
Perhaps universal AQC is an
intermediate thing we could do.
Maybe even with
all these caveats
I'm inserting about
coupling strengths,
maybe we can make a little toy.
Like a four-qubit universal
adiabatic quantum computer,
I would be very
excited to see that.
What do we need to do?
We need to add XX,
gadgets and a clock.
That's three things.
So we all know if you
ask our experimentalists
for three things, they
stop listening at two.
So let's think more
generally about
post-annealing modifications.
So I'm going to again, just--
Masoud set this up nicely,
so I'm going to talk about
non-Stoquastic interactions,
gadgets, and complex
interactions,
but not too complex, I'm
just going to talk about XX.
I cannot walk past a numberplate
like this without taking
a photograph of it.
Notice this is a British car--
this is not a British car,
rather, so you know I wanted to
have something that would work.
So let's think about the
justification for XX,
so here's the propaganda.
So this is the simplest
non-Stoquastic term,
and I'll explain in detail
what that means in a minute.
So it powers a lot of stuff.
If you can build XX, it opens
the door to a lot of things.
You can do gadgets.
You can do
simulation, which I'll
say more about in a moment.
You can do universal AQC
as we already talked about.
So is it useful for annealing?
So this is a great
question, and I refer you
to Layla Hormozi's
talk, which is
the last talk of the session
that's in here, I think.
It removes quantum
Monte Carlo talks
from the agenda for
this meeting, which
is the most important goal.
If you don't want to hear
anything more about instantons,
you should build an XX coupling.
But more importantly,
it's just what's next.
We know more about XX now
than we knew about X in 2004.
So it's just what's for dinner,
we should eat our vegetables.
OK so I've said Stoquastic
a bunch of times
and I don't think anyone
yet has really defined it,
so let's define it.
So the quick
way of saying it is:
all the off-diagonal
elements are negative or zero.
But you have to be slightly
careful because you can get
immediately confused by that.
That means up to simple
local transformations.
Because of course
if I write down
our favorite Hamiltonian
with a positive gamma here,
you say Peter, you clot.
That already does not
satisfy your definition.
But if I just take
any individual term
and do a local transformation
on the Hamiltonian,
on the k-th qubit, this
just leaves this term alone,
and flips this.
So that's a local
transformation that
can make it-- a trivial
local transformation that
can make it Stoquastic.
So the simplest
non-Stoquastic is XX--
you could ask me
about YY, but don't.
So it's the simplest thing
you can do that is non-Stoquastic
and cannot be removed
by some simple trick.
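A tiny check of this definition (a hypothetical two-qubit example; `is_stoquastic` is just a helper name of mine):

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def is_stoquastic(H, tol=1e-12):
    # Stoquastic in this basis: every off-diagonal entry is <= 0.
    off = H - np.diag(np.diag(H))
    return bool(np.all(off <= tol))

# Transverse Ising on two qubits with positive Gamma: the X terms give
# positive off-diagonals, so it fails the definition as written.
G, J = 1.0, 0.5
H = G * (np.kron(X, I2) + np.kron(I2, X)) + J * np.kron(Z, Z)
print(is_stoquastic(H))  # False

# The local change of basis Z (x) Z flips the sign of each single X
# and leaves the ZZ term alone, making it trivially stoquastic.
U = np.kron(Z, Z)
print(is_stoquastic(U @ H @ U))  # True

# A positive XX term maps to itself under that transformation, so its
# sign cannot be removed this way: the simplest non-stoquastic coupling.
H2 = U @ H @ U + 0.3 * np.kron(X, X)
print(is_stoquastic(H2))  # False
```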
OK so let's try and put some
more intermediate steps.
I want to talk about
adiabatic simulation.
What I'm talking
about now is nicely
special cases of this
big general theorem
that's lovely to have, that
Toby was talking about.
So I want to break
up these steps.
I'm going to tell
you about something
you can do with just
starting XX and gadgets.
OK so if we did this, and
then we added a clock,
we would have universal AQC.
So this is an
intermediate thing to do.
And I like chemistry.
So the Hamiltonian I'm
going to think about
is the molecular
electronic Hamiltonian,
it's just quantum
electrons moving
in the field of
classical nuclei.
And here's the
Hamiltonian in real space.
So you have to discretize
it, and this is a long story now,
but we just use
molecular orbitals.
These are eigenstates
of some single-electron problem.
And then we use second
quantized notation,
so we define creation and
annihilation operators
that place electrons into
these states, or remove them.
Because electrons
are fermions, they
have to obey these
anti-commutation relations.
After all that, we can write
a second quantized Hamiltonian
in terms of these operators.
OK so this is just a chapter
one in any electronic structure
book.
What does this have
to do with qubits?
Well, we need a mapping
from fermion operators
to qubit operators.
And I think Rolando
was the first person
to realize that you could
use something as old
as the Schrodinger
equation, the Jordan-Wigner
transformation, to do this.
What's important is
that every time you
exchange a pair of electrons,
you have to pick up a sign.
So you have to keep
track of the parity
of the number of electrons
between every pair
of electrons.
This means you need these
long, long, long strings
of Z operators that are
effectively counting parities.
What does that mean?
That means that the Hamiltonian,
going from N orbitals
to N qubits, becomes an
N-local Hamiltonian;
it has terms that touch every qubit.
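Those Z strings can be built explicitly; here is a small sketch for a few modes, checking that the strings really do enforce the fermionic anticommutation relations:

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])  # sigma^-: maps |1> to |0>

def jw_annihilation(j, n):
    # Jordan-Wigner: a_j = Z (x) ... (x) Z (x) sigma^- (x) I (x) ... (x) I,
    # with the Z string on modes 0..j-1 keeping track of fermion parity.
    op = np.array([[1.0]])
    for k in range(n):
        factor = Z if k < j else (sm if k == j else I2)
        op = np.kron(op, factor)
    return op

n = 3
a = [jw_annihilation(j, n) for j in range(n)]
anti = lambda A, B: A @ B + B @ A

# The parity strings give {a_i, a_j^dag} = delta_ij and {a_i, a_j} = 0.
ok = all(np.allclose(anti(a[i], a[j].conj().T), (i == j) * np.eye(2 ** n))
         and np.allclose(anti(a[i], a[j]), np.zeros((2 ** n, 2 ** n)))
         for i in range(n) for j in range(n))
print(ok)  # True
```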
In 2002, Bravyi and Kitaev
wrote this nice paper
on fermionic quantum
computing that
has a construction that leads
to log N local Hamiltonian.
I'm not going to describe
that, you can read their paper.
It took us 10 years, notice,
to understand that paper,
and use it for chemistry.
Which for a paper by
Kitaev, it's not so bad.
So now the couplings--
what's the effect
of this locality on
the coupling strengths
in our adiabatic Hamiltonian?
Well, if you have an N-local
Hamiltonian, and you're
trying to make N-local
terms with gadgets,
you're going to require Eric to build
you coupling strengths
in your Hamiltonian
that scale like some
number raised to the power N.
That's exponential, so
that's not scalable.
So better is some number to the
log N. OK, that's polynomial.
So that's scalable.
But as I said earlier,
there's a big difference
between scalable and buildable.
So in principle we could think
about doing a small simulation.
So we use the same
gadgets, and this
is joint work with Ryan,
who's sitting back there,
who's now at Google.
I think the important
thing about this work
is that this is
like trying to take
a step towards universal
adiabatic quantum computing
without a clock.
So it's intermediate.
Again it only needs XX plus ZZ.
OK so that's still
got two things in it,
so it would be better
to just reduce ourselves
to trying to find something
that just needs one new thing.
So now we move to the
idea of validation.
So if we think we're going to
need these extra constructions,
the gadgets and the clocks,
sometime in the future, what
can we do today with the systems
that we have, to make sure
that the systems we have today
are at least somewhat capable
of doing these things in the future?
And then just an
elementary sort of thing.
If we can do
everything with XX, we
should be able to do
the things that we
teach in our quantum
mechanics classes.
So can we do very, very, very
simple quantum simulation
of say, the one-dimensional
simple harmonic oscillator?
And then can we show evidence
that superposition states
exist in an adiabatic quantum
computer between states
with very different
Hamming ways?
That would be a very nice thing.
I think I have time.
So when I first got interested
in adiabatic superconducting
quantum computing,
no one had done
tomography in superconductors.
And John Martinis,
who is sitting here,
was using phase qubits as probes
of these parasitic two-level
systems in substrates.
So the phase qubit-- I think
this is a correct story
and you can correct
me if I'm wrong--
was the worst qubit when
you started doing this.
But it was therefore the best
probe of the environment.
So you got this
great information
about the environment,
and then you
were able to fix the
problems of the environment.
So what I'm
going to talk about now
is sort of in that spirit.
We're going to define the most
inefficient possible simulation
of the easiest quantum
system to simulate.
That's not because I
don't know how to solve
a simple harmonic oscillator.
It's because we want to use this
as a probe of the capabilities
of the machine.
If we make it really
hard for the machine
to simulate something we
understand really well,
that gives us a
very good diagnosis
of what the machine is doing.
It's a kind of backwards
idea, but that's the idea.
OK and the reason
we're doing this
is because if we had
a gate-model machine,
the requirements to make a
universal gate-model computer
are the same as the
requirements to test
the universal
gate-model computer.
Once you have a universal
gate, you can do tomography.
But once you let
go of that, and you
say I'm going to do
things adiabatically,
you lose the ability to
validate the machine,
and it becomes a
separate problem.
And this I think accounts
for the volume of literature
in this area to a large degree.
All right, so I want a hammer.
I still need to be able
to measure something.
So what I'm going to use
as my tool, my hammer,
is going to be this--
what's it called-- tunneling
spectroscopy, and I guess
two or three of the authors
of this paper are in the room.
So this is the idea of how
to adiabatically determine
where the energy
levels of a system are.
And here's one energy level,
and here's another energy level.
So that's what I'm going to use.
There's actually a quote
in the New Statesman
this week saying with
a hammer, everything
looks like David Cameron's face.
I guess you might have to cut
that out before posting this.
What's the system, what's
the inspiration for this
originally? It's this nice paper
that actually a mathematician
friend of mine
pointed out to me.
So this is looking at
some highly exotic phases
in an Ising chain.
This is like the
[INAUDIBLE] style,
where you find some actual
physical material that has
a transverse Ising Hamiltonian.
There's two bits
of physics here--
there's this E8
symmetry business, which
I'm not going to talk about.
But these peaks
here are evidence
of what's called
kink confinement,
and I'll define that.
But the point is
that these peaks
appear in positions that are
given by the zeros of the Airy
function.
So it's this amazing system.
You have this lump
of stuff in a fridge,
and it spits out the-- it spits
out these mathematical numbers.
So this was sort of part
of the genesis of this work.
So what we do is we
have the kink basis.
So physically, what these kink
basis states, or the clock states, are,
is you take a
ferromagnetic chain,
and you put a twist on it.
So it all wants to be
the same but there's
this one awkward member
that doesn't want to-- I'm
sorry this is another
Brexit joke-- that
doesn't want to be the same.
And then what that causes
is the zero-energy states
that have one domain wall.
And so then you can label them
by the position of that domain
wall.
So that's the kink.
OK so that's the basis,
what's the transverse field
in the kink basis?
Well, if you think about what
a uniform transverse field does
to a basis state, it
takes a basis state
and it returns you the
uniform superposition
of all the states at
Hamming distance one.
In other words, it flips
each of the qubits,
so you've got all the flips.
Almost none of those
are in the kink basis,
because you've put another
pair of domain walls
into your system.
But two of them
are, namely the one
that's the domain
wall shifted this way,
and one that's the domain
wall shifted that way.
If you
project back into the kink
basis, you get the
upper and lower
off-diagonals, if you
think of this as a matrix.
And so out of this you
can build a kinetic energy
term, again a discretization
of the second derivative.
So if you take the projection
of your transverse field,
minus some constant,
then you can get what
looks like a kinetic energy.
All right?
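That projection can be checked directly on bit strings; a small sketch (open-chain boundary conventions here are my assumption):

```python
import numpy as np

n = 6
# Single-kink (domain wall) states of an open ferromagnetic chain:
# k up-spins followed by n-k down-spins, wall position k = 1..n-1.
kinks = [tuple([1] * k + [0] * (n - k)) for k in range(1, n)]
index = {s: i for i, s in enumerate(kinks)}

# Matrix elements of the transverse field sum_i X_i in the kink basis:
# flipping one spin either moves the wall by one site or creates an
# extra pair of walls, which the projection throws away.
d = len(kinks)
K = np.zeros((d, d))
for i, s in enumerate(kinks):
    for site in range(n):
        flipped = list(s)
        flipped[site] ^= 1
        j = index.get(tuple(flipped))
        if j is not None:
            K[i, j] += 1.0

# Only nearest-neighbor wall hops survive: a tridiagonal hopping matrix,
# i.e. a discretized kinetic energy up to a constant shift.
hop = np.diag(np.ones(d - 1), 1) + np.diag(np.ones(d - 1), -1)
print(np.allclose(K, hop))  # True
```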
So what would the
local field look like?
Well, a local field
on site i is going to give you
+h_i if the site is to the
left of the kink,
and -h_i if it is to
the right of the kink.
So you get this
potential term where
the potential is this
annoying sum of your fields.
And the obvious case is if I
have a constant local field,
I get a linear potential.
That means that the
eigenstates
of the linear potential
are the Airy functions,
and that sort of confines the kinks.
And so that's kink confinement.
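A toy numerical version of that statement (a sketch; the field strength F and lattice size are arbitrary choices of mine, and the leading Airy-function zeros are hard-coded known values):

```python
import numpy as np

# Kink in a constant local field: discrete kinetic term plus a linear
# potential, a lattice version of the Airy equation -psi'' + F x psi = E psi.
N, F = 400, 1e-4
H = (np.diag(2.0 + F * np.arange(N))
     - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1))
E = np.linalg.eigvalsh(H)[:3]

# Confined levels sit at F^(2/3) times the negated zeros of the Airy function.
airy_zeros = np.array([2.33811, 4.08795, 5.52056])  # -a_1, -a_2, -a_3
print(E / F ** (2 / 3))  # close to the three numbers above
```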
Oh, OK, so there's
some stuff here showing
that you can do any polynomial
potential, just playing around
with some identities.
You can figure out what
your local fields have
to be exactly, and we get an
annoying-- again, an annoying
increase in the requirement
for local fields,
they rise exponentially with
the degree of the potential.
But what the hell, we only
want to do linear and quadratic
anyway, so that's not so bad.
But in principle
you can do anything.
You can do anything exactly.
OK what about errors?
So I'm projecting back
into this subspace,
so how does this work?
Well, the first thing
is that obviously
we've discretized into a
finite number of qubits.
So we've got
discretization errors
in our simple simulation.
So the limit as the number
of qubits goes to infinity
is some continuous
Hamiltonian, and deviations
from that are standard
numerical analysis.
Then we've got leakage error.
So this is the fact
that in the limit,
as the penalties that
we've imposed to push us
into the kink basis
go to infinity,
we should be exactly
projected into the kink basis.
So OK, so we've got
a large penalty,
and we want to know
about corrections.
So we exactly know
how to do that.
That's exactly what
gadgets give you.
We've got some
low-energy theory, which
is this one-dimensional
Schrodinger equation,
and the gadgets are going to
tell you about leakage errors.
And so therefore, this
is a strange thing,
because the errors, the
deviations from what
you expect, are
actually telling you
about the presence of
virtual excitations
in your system,
which are actually
desirable to make gadgets.
There are three things
you can do with this.
So if this works at all,
you can make clock states.
Because you are displaying
a capability to produce
superpositions of states with
very different Hamming weights.
The leakage errors
in the simulation
tell you about the presence
of virtual excitations,
or the strength of
virtual excitations.
These are going to tell
you whether gadgets
are going to work at all.
So if you have a
noise model that's
raising those
virtual excitations,
gadgets won't work.
Both of these things
we can do today.
This is just Ising
model physics.
It's very interesting when
you go through and work out
the first error, the first
error can be used in fact
to detect the physical XX.
So I'll just tell you about
that and then I'll stop.
OK so what does the
thing look like?
So there's a kink
at the end, right?
Because this is a
kink, and then there's
a kink somewhere in the middle.
So really the
ground state
has two kinks, and then four
kinks, six kinks, and so on.
I have to put on a
penalty to get rid
of the zero-kink states,
that's a technicality.
Here's my horrible
gadget equations.
I show you these so you can
stand there and be happy
that you don't have to
do gadget calculations.
That's the only reason
I'm showing these.
So if you do the gadget
calculations, what do you get?
Here's the effective
Hamiltonian.
It's the projection of the
Hamiltonian: a constant term,
a boundary term, and
a Stoquastic XX term.
So in this simulation,
the first error you see
is actually a Stoquastic XX.
That's not what we want.
We don't want Stoquastic XX,
we want non-Stoquastic XX.
But remember that
the idea here is
to use these simulations
as a diagnostic.
So what if you actually
built a non-Stoquastic XX?
It would be this term,
a term of this form,
with the opposite sign.
So you could use your physical
XX to cancel this error.
That would change the error
scaling of your tiny little toy
simulation.
Therefore you can use that
change in error scaling
to detect the presence
of a physical XX.
So you do a simulation,
you look at the errors,
without any physical
XX switched on,
you get a certain
scaling of the errors.
Then you switch on your physical
XX, the scaling of your errors
changes, that's your detection.
So let's just look at
some-- these are just
mathematical simulations.
So this is the Airy
functions, if you
haven't seen them recently.
These are the continuous
Airy functions
and the discretized simulation.
This is the
discretized simulation
of the simple harmonic
oscillator functions.
It's obviously a very
coarse discretization;
it's nine points.
And the ones with leakage
errors are the points.
So there's some small
difference between the points
and the lines, but not too bad.
What about what I was
just talking about?
If you look at how the errors
run with coupling strength,
this is just 8, 10 and 12,
going from blue to red to black.
If you then look at
the 8-qubit thing
and imagine getting rid
of that Stoquastic XX,
you immediately see a jump in
the scaling of the errors.
So that's one way at least of
detecting a non-Stoquastic XX.
So I guess one should
be conservative,
so I say experiments
observing these effects
would fail to invalidate
the presence of XX.
I have one minute, so I can
tell you about something
that I think is fun
about these simulations.
Here we've got a
situation where you've
got a quite highly entangled
set of qubits simulating
a one-particle equation.
So obviously in the one-particle
system there cannot be
entanglement, because there's
nothing to be entangled with.
So when you talk
about entanglement,
you're talking
about entanglement
in the thing-- the simulation.
But what does that
entanglement in the simulation
translate into if you
look at it in the kink
basis? What resource is it?
And it turns out there's
a very nice story here.
It translates into a
measure of delocalization
called the inverse
participation ratio.
So if you take a cut-- this
is how a condensed matter
physicist thinks about
entanglement-- you take
your system, you
cut it in half, you
ask what's the entanglement
of this half with that half?
So if you do that, and then you
move the position of the cut,
you get these graphs here.
So I've plotted just the
amplitude of the eigenstate,
which is in red on the left
axis, the entanglement is
the blue curve on
the right, and all
it's doing-- all the
entanglement in the qubits is
doing is giving you
the delocalization
of the single particle
in the simulation.
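A small numerical illustration of that correspondence (a toy of my own construction, not the calculation from the talk):

```python
import numpy as np

n = 8
# A delocalized single-kink wavefunction: amplitudes c_k over wall
# positions k = 1..n-1, embedded as qubit strings 1^k 0^(n-k).
c = np.ones(n - 1) / np.sqrt(n - 1)  # uniform, maximally delocalized

def cut_entropy(c, m, n):
    # Entanglement entropy across a cut after qubit m, via the Schmidt
    # (singular value) decomposition of the amplitude matrix.
    psi = np.zeros(2 ** n)
    for k, amp in enumerate(c, start=1):
        psi[int('1' * k + '0' * (n - k), 2)] = amp
    M = psi.reshape(2 ** m, 2 ** (n - m))
    p = np.linalg.svd(M, compute_uv=False) ** 2
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log(p)))

# Inverse participation ratio: how many wall positions the state occupies.
ipr = 1.0 / np.sum(np.abs(c) ** 4)
print(ipr)  # 7.0 for the uniform case
print(cut_entropy(c, n // 2, n) > 0.0)  # delocalization needs qubit entanglement
```

A perfectly localized wall (a single nonzero c_k) gives IPR 1 and zero entanglement across any cut.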
So I quite like that because
all this talk about coherence
and delocalization-- to
achieve delocalization here
you need a lot of entanglement
in the physical qubits.
That tickles me.
The point here is that doing
non-scalable simulations
is more than a toy activity.
It can be a good
probe of properties
that you need for
post-quantum-annealing AQC-- sorry,
I guess that means
post-annealing.
Can we please do
some XX experiments?
I guess I've been
saying that for,
how long have I
been saying that?
Like 12 years now?
So I'm just like any fanatic,
I'm just redoubling my efforts.
So one big question
here: there have
been a few talks this week about
locality reduction techniques.
That's fantastic, we need better
locality reduction techniques.
I also want to just
end with a worry. I
know I made a joke about
quantum Monte Carlo here,
but there's a huge, huge fermion
quantum Monte Carlo community.
Non-Stoquastic XX is a way
of introducing a sign problem
into our business here.
But I do worry that the fermion
Monte Carlo community has
probably not spent a huge
amount of effort thinking
about non-Stoquastic XX.
So all of this is
subject to the criticism
that if that community
really took this on,
maybe they would come up
with some good heuristics
for non-Stoquastic
XX, and we'd have
to still keep listening to
fermion quantum Monte Carlo
talks.
OK.
Thank you very much
for your attention.
[APPLAUSE]
SPEAKER: All right, we have
time for a couple of questions.
And I guess a person
has to come to this mic.
[SIDE CONVERSATION]
AUDIENCE: So when you
use these gadgets,
is this sort of history
state you get faithful,
in the sense from Toby's talk?
PETER LOVE: Yes.
Yeah.
I was worried for
a minute, but yes.
AUDIENCE: So you
eloquently pointed out
all of the problems in my talk.
The difference between
scalable and buildable,
which I definitely
totally agree with.
This issue of a poly gap, or
sort of equivalently the fact
that you need to take very large
coupling strengths to overcome
it-- there is of
course a solution
to making the coupling
strengths large,
which gets rid of it,
which is to cheat, and just
put lots more couplings.
So there's a trade-off,
which I think has been
made kind of fairly
rigorous in fact,
between high
coupling strengths
or order-one coupling
strengths everywhere,
but very high-degree
interactions.
Still two-qubit,
two-body. But do
you, as a theorist who's
slightly closer to experiments
than me, have any idea
of which way is easier?
PETER LOVE: Yeah, there's
this Yudong Cao and Daniel
[? McGuy ?] paper,
where they show
you can pump up the effective
coupling strength by having
higher degree interactions.
I actually took a
slide out about that
because I was talking to
my experimentalist friends
over here who were saying
that if you put more coupling
strengths in, the coupling
strengths will tend to go down.
So it seems that there's an
experimental trade-off there.
SPEAKER: One more question?
All right, if not, I think
it's time to go for lunch.
And let's thank
Peter one more time.
