SPEAKER: We start the
session with a talk
by John Martinis of
Google Santa Barbara,
and he will tell us
about quantum hardware
development at Google.
JOHN MARTINIS:
Thank you very much.
It's my pleasure to be here.
You know, I really am
enjoying this conference
quite a lot because there's a
real focus on understanding how
to build a quantum computer.
And I think the
D-Wave machine has kind
of been leading the field in the
sense of trying to figure out
how to build a useful machine.
And I think we've done a lot of
basic research for many years
now, and I think it's time
to get serious about building
a useful quantum computer.
And in fact, that's
the essential reason
why I decided to move my
research lab to Google
to try to do that.
I've been going to a lot
of conferences lately,
and there seems to be a shift
in thinking towards doing that.
And of course,
several large national
programs are starting
around the world where
one of the goals is to try
to do something useful.
So what's great
about this conference
is we're leading the field
into thinking about how
to do something useful,
and how to build,
and how to use this
computational resource
for doing some kind
of useful computation.
So because of that, as I'm
looking around and hearing
about what people are proposing
to do, I thought I'd first talk
a little bit to lay the
groundwork for what I think
is the way to think about
how to build a useful quantum
computer.
And there's lots
of different ways
that people are thinking
about doing that.
I'm going to start
with what I call
the digital approach, which
is the standard gate model
approach.
And in fact, for the theory
you assume no errors.
And because of that you
have this hugely large state
space where you
do your computing,
which grows exponentially
with the number of qubits.
And what's nice is with
this kind of thing,
there are really good
mathematical proofs
of what the power of the
quantum computer can be.
And Shor's and Grover's
algorithms are just examples,
and there's lots of
other algorithms, too.
Of course, fundamental to this
is that you have no errors.
So if you want to
do anything there,
we can do demonstration
experiments
on these algorithms.
But you eventually have to
go through error correction
to get something.
And we believe this
is going to work,
and we've done experiments
to try to do that.
But this is kind of a long road.
I mean, we think we
can get there and do
these very powerful algorithms.
But it's going to
take some time.
So I would say the
advantage of this
is you have this provable
exponential power.
You know that from good
mathematical proofs.
Everyone believes it's correct.
Of course, one single error
destroys these calculations,
so you have to do
error correction.
And again, that
requires large machines.
And of course, what we're
thinking about here today
in terms of annealers is
kind of looking at what I'm
going to call an
analog approach.
Which is where you assume
you're going to have errors,
and then you're going to try
to build something practical
even though you have errors.
Now I'm just going to
generalize a lot here,
and I'd be happy to
talk to people afterwards.
But I'm going to say compared
to the digital realm,
we have less refined
mathematical proofs
on how to do this.
And I think in general
for the analog approach,
there's not as firm of a
mathematical foundation
as in digital.
But of course, with
errors, you can
think about building practical
machines and algorithms right
now.
And again, this conference
is all about that.
Now I want to also say,
as you do that and think
about other simulators,
there are a couple of
things that are often said
about this kind of
approach in general.
If you're building quantum
computers with errors,
it is really incorrect
to say that you
have this exponential state
space, this powerful computing
power, using the digital
no-error kind of arguments.
So you can't say you
have exponential power.
It's not obvious at all.
Because this exponential
power comes with no errors,
and we have a lot of errors.
And that's sometimes said
and sometimes implied--
and maybe I've done
that on occasion.
OK, I'm sorry, but
that's just not true.
But on the other hand,
conversely, a lot of people
say that the only way to use
quantum computer is to do that,
and that's not correct either.
With this approach, it's
incorrect to say
you aren't able to
calculate, that it's not
going to be useful
because you have errors
and you don't have
full quantum coherence.
So the truth is
somewhere in between.
So I like to think
of it as you have
your linear scaling of
classical algorithms,
you have this that's
kind of unknown,
you have exponential
power over here,
and somewhere in the
middle here there's
plenty of room in the middle--
to kind of misquote Simon--
to try to do something useful.
Because exponential
power is really great,
but even if it's not exponential
power, we can still do
something useful.
So in this we have
quantum annealing.
I think there's a lot of other
systems people are looking at.
I see a lot of people just
building these quantum systems,
and trying to understand
the quantum mechanics of it.
And I call these self
simulators, simulating coupled
spins.
But it's just simulating
the system you're building.
And that's not necessarily
useful in the way
that, let's say, Feynman
started talking about it,
where he said, you can
build a quantum system
to emulate another system.
And so I'm kind of breaking this
up into self simulators-- which
are nice physics but not
necessarily useful-- and things
where you can simulate other
systems and get
useful utility out of it.
And of course, the quantum
annealer, for example,
is fully programmable,
and you can
hope to put mathematical
problems on it.
So these are the two areas in
the analog that are doing that.
So given all that,
I would say we
have to have a deep
understanding of the power
of these kind of analog machines
and the system requirements
to be able to do
something useful.
And I'm just going to say that
this is a frontier of physics
right now in our field, and
I'm even going to boldly say,
this is the big frontier
of our field right now.
If we really want to
try something useful,
try to understand,
both experimentally
and theoretically, what's the
power, and how to get there,
and how to put together
our systems to give power--
maybe not exponential
power, but again, something
that can be useful.
So I really like this
conference because we're really
trying to address that problem
in a lot of the discussions
we're having here.
And I think, for
example, there are
many more things you can
think about than just
quantum annealing to
try to see that power.
And in fact, what
I'm going to do today
is talk about a proposal
by our group that's just
about to go on the arXiv
where we're going
to do just that.
We're going to try to
look at some kind of power
of the quantum computer.
And in fact, this
particular experiment
sits in the digital
domain, with a gate
model with shallow
circuits, but with 50 qubits,
so that the 2 to the 50 state
space shows you
that you have huge
computing power here.
We aren't going to do
anything useful right now,
but we're going to show an
example of how you might
be able to put together, using
the systems you have right now,
something that could
be very powerful.
So I like to put it this way.
Here we are, a
bunch of physicists
with a little
dilution refrigerator,
and we're going to do
some kind of calculation
on the 50 qubit system that,
if we want to check it,
is going to require the biggest
supercomputer in the world
to be able to check it.
No, we're not going to
do something useful,
we're just going to check it.
But at least you can
show that it's powerful.
So that, I think,
is pretty cool.
So again, this was proposed
by the Google Theory Group,
and Sergio this
afternoon will talk
about this in good theoretical,
mathematical detail.
I'm going to discuss it on a
high-level experimentalist view
and explain how it works.
What's nice is it's a
simple qubit test-- again,
results checked by
a supercomputer.
And you can check the output
up to about 40 to 50 qubits.
At that point you
can't check it anymore,
but clearly you've
shown it's powerful.
Demonstrates exponential
processing power.
You have this huge
Hilbert space that you're
operating on in
the system, which
I think is a good fundamental
test of quantum computing.
Sensitive and complex tests.
Results fail if
you have an error.
So you really have to
know about coherence.
And it's a good test of the
scalable quantum processing.
Again, you're testing
it, you're testing
your error models and the like.
So here's the one
slide that explains how
to do it in seven easy steps.
And basically what
we're doing is
we're taking qubits and making
gates between all the qubits.
These are control gates and
these are single-qubit gates.
And what we're going to do is,
for a sequence of N qubits,
we're going to take random picks
from a gate set that is either
Cliffords for the single
qubits or control-Z, which is
in the Clifford group, on two qubits.
And then we're going to
add in a non-Clifford
to make it interesting.
And then we're going to run
this algorithm and check it.
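The construction just described can be sketched in code. This is my own simplified reading of the scheme; the specific single-qubit gate names, the T-gate probability, and the alternating neighbor pairing for the control-Z's are illustrative assumptions, not the group's actual gate set:

```python
import random

# Sketch of the random-circuit construction: each cycle applies a random
# single-qubit gate to every qubit, occasionally substituting a
# non-Clifford T gate to make the circuit classically hard, then
# entangles neighboring qubits with control-Z (CZ).
SINGLE_QUBIT_CLIFFORDS = ["X^1/2", "Y^1/2", "H"]  # illustrative choices

def random_circuit(n_qubits, n_cycles, t_prob=0.2, seed=0):
    rng = random.Random(seed)
    circuit = []
    for cycle in range(n_cycles):
        layer = []
        for q in range(n_qubits):
            if rng.random() < t_prob:
                layer.append(("T", q))  # non-Clifford gate
            else:
                layer.append((rng.choice(SINGLE_QUBIT_CLIFFORDS), q))
        # control-Z between alternating neighbor pairs (1D pattern)
        for q in range(cycle % 2, n_qubits - 1, 2):
            layer.append(("CZ", q, q + 1))
        circuit.append(layer)
    return circuit

circuit = random_circuit(n_qubits=6, n_cycles=4)
```

Running the same seed reproduces the same circuit, which matters here: the supercomputer must simulate exactly the sequence the hardware ran.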
Now, I'm going to
explain this in a second.
But what I'd like
to do is explain
this in a very simple
way using a laser.
And what I have here is
a 300-milliwatt laser
that you can only
buy from China.
And my credit card
number was stolen,
so be careful if you do that.
And to make it safe--
this is really dangerous--
I put a little piece of
ground glass here
that's going to
diffuse the light.
And I'm not going
to point it at you,
but that's going to
make it a little bit safer.
And of course, if we had white
light going through ground
glass, it's going to spread out
the beam in a homogeneous way,
but we have a
coherent source here.
So if you look at this-- can
we turn off this light here?
Might be a little
bit easier to see.
You can see the
light is spread out,
but you see a speckle pattern.
Can people see that?
A little brighter,
there are places here
that are more intense, and there
are places that are less intense.
And that basic physics
is called speckle.
And what's happening is-- you
can turn the lights back on,
that's the big demonstration.
You know about speckle,
you've studied this.
So what's happening, you
have a coherent light source.
And from the various paths
going through the ground glass,
there are times where coherently
it's interfering constructively
to give you bright
spots, and destructively
to give you dark spots.
And that has to do with
the fact that it's a coherent beam.
Now if I wanted to, I
can measure this beam.
And knowing the
ground glass surface,
I could calculate what
that beam should be.
And I should be able
to measure the beam,
and calculate the beam,
and get the right answer.
And in fact, if I knew enough
information about that,
I could calculate
the response of this.
And given that
there are N pixels here,
it would take about
N squared operations.
Not that hard to do.
That's a classically
tractable problem.
However, when instead
of doing light speckle
you do qubit
speckle, where you're
running through
these random gates
and getting a random
output, that is a 2
to the N hard
computational problem.
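The scaling contrast above is worth making concrete. A quick sketch, with illustrative numbers of my own choosing:

```python
# Classical speckle with N pixels costs roughly N^2 operations, which is
# classically tractable; qubit "speckle" requires tracking 2^N
# amplitudes, which is not. Numbers below are illustrative.
def classical_speckle_ops(n_pixels):
    return n_pixels ** 2

def qubit_speckle_amplitudes(n_qubits):
    return 2 ** n_qubits

print(classical_speckle_ops(1_000_000))   # a megapixel image: 10^12 ops
print(qubit_speckle_amplitudes(50))       # 50 qubits: ~1.1e15 amplitudes
```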
And what we're going to do
in the quantum supremacy
is measure the qubit speckle and
then compare it to calculation.
Now, when you get qubit speckle,
the intensity of the speckle
is proportional
to the probability
that you're going to measure it.
So what we're going
to do is as we measure
all the possible
states, you're going
to pick out the bright spots.
And we're going to get that
information of the bright spots
and say, OK, is that what we
predicted in the bright spots
of the theory?
And that's the algorithm.
And clearly, if things
are working right,
we're going to
pick out spots that
are going to be
presumably bright,
and then we're
going to check them,
and we're going to see that
that has higher probability.
And that's basically the
check part of the algorithm.
So let's go ahead now and
then go through the math
now that I explained
how it works.
So what we're going to
do is take this circuit,
and we're going to
measure an output K. And I'm
going to call K, that
bitstring, an integer that
runs from 0 to 2 to the N minus 1.
So it's a huge number
of possible outputs.
And we're going to repeat and
sample this about 100,000 times
to get a lot of statistics.
And in our next version
of the quantum computer,
that should take about a second.
So about 10 microseconds
per cycle through.
We could do that really fast.
So in a second we
get huge amounts of data.
Now it's a random circuit, so
you would guess classically
that you would have
a random guess,
and any outcome, K, has
a probability P classical
of 1 over 2 to the N.
But I'm telling you, if you do a
quantum mechanical calculation,
which is just running this
through a supercomputer,
you're going to find
the final state, and you're
going to get a
probability for each K. And you're
going to store it in some
kind of lookup table,
and then you're
going to compare it.
And what I say is to calculate
this for 45 to 50 qubits
might take days, and it's
a huge amount of data.
That's going to go on maybe
200 state-of-the-art drives,
although I would recommend using
Google Cloud to store the data.
That's a much better solution.
But if it's big
enough, you aren't
even able to store
it on Google Cloud.
I mean, it's just too big.
So you may just compute it
and then just store the ones
that you found here.
But this is the trick, this
is the really beautiful idea.
You do a cross
entropy calculation.
You take the case that you
measure, and for those cases
you find the P's that
you computed here.
And then you compare
that to P classical.
Now if you were to
randomly choose K,
the random choice
of K would give you
a random P, which
would be P classical,
and this number would be 1.
But since you're taking
the bright ones here
and you're plugging them into
the matching bright ones here,
this is going to be
bigger than P classical,
and then this ratio is
going to be bigger than 1.
And in fact, the theory is that
when you work it out in detail
and do all the statistics, this
cross entropy is minus 0.58
if it's classical,
and 0.42 if it's quantum.
So you can readily
tell what's going on.
And statistically, since
you're taking a million events,
you know very easily whether
it's quantum or classical.
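This cross-entropy check can be reproduced numerically with a toy model. The sketch below assumes, as described later in the talk, that the ideal output probabilities follow an exponential (Porter-Thomas) distribution; it does not simulate real circuits. Sampling bitstrings with the ideal probabilities ("quantum") gives an average of log(2^n × p_k) near 1 − γ ≈ +0.42, while sampling uniformly ("classical") gives near −γ ≈ −0.58, with γ Euler's constant:

```python
import math
import random

rng = random.Random(1)
n = 16
N = 2 ** n

# Toy ideal probabilities: exponential with mean 1/N, then normalized.
p = [rng.expovariate(N) for _ in range(N)]
total = sum(p)
p = [x / total for x in p]

def avg_log_Np(samples):
    # Average of log(N * p_k) over the measured bitstrings k.
    return sum(math.log(N * p[k]) for k in samples) / len(samples)

# "Quantum" device: picks out the bright spots with the ideal weights.
quantum_samples = rng.choices(range(N), weights=p, k=20000)
# "Classical" guesser: picks bitstrings uniformly at random.
classical_samples = [rng.randrange(N) for _ in range(20000)]

print(avg_log_Np(quantum_samples))    # close to 1 - gamma, about +0.42
print(avg_log_Np(classical_samples))  # close to -gamma, about -0.58
```

With tens of thousands of samples, the gap of about 1 between the two averages is far larger than the statistical error, which is why the test distinguishes quantum from classical so cleanly.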
OK, so you did that
for one sequence.
Then, of course, you
try another sequence.
One data point is
not good enough.
And you go through
this all again.
And you keep doing
this until you run out
of money, because this is
taking days on a supercomputer.
I mean, this is totally
trivial [INAUDIBLE] second.
So the quantum part is trivial,
and this is your limiting step.
Of course, that's
what's cool about it.
So let's explain a little bit
more how this thing is working.
I think this is for 36 qubits.
You just take the index
from 0 to 2 to the N minus 1.
You get probabilities that
are this kind of distribution.
And this is actually an
exponential distribution here.
And this is the P
classical right here.
So this is just the output
of a computer program.
And this exponential
distribution actually
comes from the fact that the
real and imaginary amplitude
is Gaussian distributed.
And then from that you actually
get an exponential probability
distribution.
And what you can do is
take all those K's
and sort them from
low probability
to high probability, and
that sorted list is given here.
And if you tilt your head to
the left, this is probability,
and this is number, and this
is an exponential curve here.
So this is, indeed, this
exponential distribution.
And I'm telling you that's
what we are claiming.
Now, this is the
interesting thing.
You put one error
into the circuit,
and then you measure
it at the end,
you get this blue line,
which is basically flat.
So one error totally
destroys the coherence,
and you won't see this
quantum effect anymore.
And again, this is the
power and the fragility
of quantum mechanics.
So you can just say this
cross entropy is just
the probability of no
error times the S value
that I told you before, and
that's the probability here.
And the probability
of error comes from the given
single- and two-qubit error rates.
You can work this out easily,
and it's basically [INAUDIBLE]
number, the total number
of errors in your circuit.
And so for this
number here, to
see some quantum effect,
you want N to be less than 1,
or practically
less than 3 or so.
So the experiment
you do is you're
going to measure S total
minus S classical, which
goes from 0 to 1.
And then as long as
your errors are small,
you'll be able to
see S greater than 0,
and you're going to
claim quantum supremacy.
If errors are too large,
this is too small,
and you won't be able to
figure out what's going on.
So it's a very simple
test like that.
So experimentally, you
could do this in a 1D chain
with 49 qubits.
We know how to make a 1D chain.
But in order to
entangle the N qubits,
you kind of have
to do 49 control-Z's
to get the states
to talk to each other.
And given what we're doing
now, the number of errors
would be about 12, so
it wouldn't work very well.
For a 2D array, to get them
to entangle across the array,
you're going to need a depth
of about square root of N,
or about 7.
And for around two errors,
that looks doable.
That's something we can do.
Now it's going to take
more circuit depth
than 7, probably, due
to some details,
so I think we're going to have
to improve upon this number.
But I'm just saying that this
is within reach for us to do,
and it's one of the
experiments we're trying.
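The 1D versus 2D comparison above comes down to an error budget. A rough sketch of that arithmetic, with assumed illustrative per-gate error rates (not Google's measured numbers):

```python
import math

# The cross-entropy signal is roughly the probability of zero errors in
# the whole circuit, which falls off as exp(-N_err). Error rates r1
# (single-qubit) and r2 (two-qubit) are illustrative assumptions.
def expected_errors(n_qubits, depth, r1=0.001, r2=0.005):
    # per cycle: one single-qubit gate per qubit, ~n/2 control-Z's
    return depth * (n_qubits * r1 + (n_qubits / 2) * r2)

def signal(n_err):
    return math.exp(-n_err)

# 1D chain of 49 qubits needs depth ~49 to entangle end to end:
n_err_1d = expected_errors(49, 49)
# 2D 7x7 array only needs depth ~sqrt(49) = 7:
n_err_2d = expected_errors(49, 7)
print(n_err_1d, signal(n_err_1d))  # many expected errors: signal washed out
print(n_err_2d, signal(n_err_2d))  # order-one errors: signal still visible
```

The point is the depth scaling, not the exact rates: the square-root depth of the 2D array is what keeps the expected error count in the "less than 3 or so" regime.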
And it's good to push
us to get the coherence.
Just to let you know, we
have done complex sequences
with 1,000 gates,
which is within that.
So we know how to
do the control.
In [? terms ?] of getting
it out of a 2D array,
we have bump bonding working,
superconducting bump bonds,
with 1,000 bump bonds
working all the time.
So we think we can build it.
Just it's hard to
put it all together,
and that's something
we're going to work on.
OK, so that's the
end of my talk.
Let's summarize.
As we think about doing
something practical,
I think the [INAUDIBLE] quantum
computing is a great direction,
and we're thinking about it.
Really, really interesting.
It's possible maybe you could
do something with digital.
And right now we kind
of want to demonstrate
this exponential state
space and see if it works.
And although that's
just a theoretical
and a scientific
advance, I would
claim that there's
something useful to this.
Because I think if you were to
show this, and show that you're
needing a supercomputer
to check it,
that would be very
useful in that
you would convince a lot
of Silicon Valley executives
that the technology is
maybe going somewhere.
And at Google, that's kind
of a cool thing to do.
And also I would say, well OK,
we have this random algorithm.
But can some smart
person out there
think of a short-depth
algorithm that
actually does some useful
mathematical function?
And if that's the case, we could
maybe turn this into something.
So it's kind of a call
for people to do that.
And if you do come
up with something,
we will definitely try that.
And we're working on that,
and we would do that.
So we're interested to see
if people have any ideas.
So in that, thank you very
much for your attention.
SPEAKER 1: So the talk
is open for questions.
Please.
SPEAKER 2: Well, I
would like to say
that I think there is evidence
that there are low depth
circuit model algorithms
that may be of use for doing
approximate optimization.
And my group showed that such
an algorithm with very low depth
could get an
approximate solution
to a combinatorial
search problem.
And for a short
period of time we
were outperforming the
best classical algorithm.
But then the classical
computer scientists
teamed up 10 against three
and improved their algorithm.
And so we now differ from
them by a log factor.
But our algorithm is at the
lowest possible circuit depth,
and we had those results.
And if we increase
the circuit depth,
the performance will improve.
So I would like to see someone
build a quantum computer that
executes a low circuit depth
optimizer in the gate model.
And I think people at Google
are interested in variants
of that, which people are
also exploring with VQE.
But I think there are
things on the table which
can be explored.
And I have my own schtick,
but I think there are others.
And I think we need to pay
attention to these things now.
Thank you.
SPEAKER 3: I want
to ask, when you
say you use random sequence
of gates, what does it mean,
random?
I mean, are you averaging
over different sequences,
or you just choose one--
JOHN MARTINIS: No, you just--
for each element in there, you
randomly choose a Clifford.
And there, you may have to
choose a few more T gates.
SPEAKER 3: No, but
you have to repeat--
JOHN MARTINIS: And then we
take that given sequence,
and then we repeat it,
let's say, 100,000 times.
SPEAKER 3: Yeah, but is
it clear that averaging
over different
sequences and averaging
over different repetitions
are equivalent to each other?
JOHN MARTINIS: Well,
every different sequence
you choose you have to do
a different supercomputer
calculation to be able to
compare the speckle pattern.
And it should not matter.
You should be able to do
this with one experiment
and show what's
going on, but I think
the community would like you to
try it two, three, four times.
Show that generically.
But I would say part
of the interesting part
is that this works for a
randomly chosen circuit, which
kind of shows that there's
no information there that you
can gain to build a
classical simulator to get
that information.
Because the individual
elements are chosen randomly.
That's kind of the
thinking there.
SPEAKER 1: All right, thank you.
So we have time for one
more question, please.
JOHN MARTINIS:
There was one over.
Oh, OK.
SPEAKER 4: Just a clarification
question on the errors
that are required
to destroy the effect.
How do you model this error,
I mean, what kind of errors
are you sensing?
JOHN MARTINIS: Yes, so
we have modeled them
in the typical depolarization
model scheme where
when you do the simulation
you just put in a random flip
to see what happens.
And that's, of course, the way
that we model things for error
correction and whatever.
And it's nice, because you
can argue whether that's
a good model or not.
I mean, our data so far says
it's a pretty good model.
But we actually think this
experiment is actually
a good way to check whether
those assumptions are
fundamentally sound.
Because we can estimate the
errors from a single and two
qubit experiments, and
then do this big experiment
to see if it makes sense.
So I think that
would be very useful.
SPEAKER 1: All right,
thank you very much.
Let's thank the speaker again.
JOHN MARTINIS: Thank you.
