HARTMUT NEVEN: So yeah,
I wanted to start us off
with some updates to tell you
what the Google AI Quantum Lab has been up to since last year's symposium.
And I have the privilege to
talk about the achievements
and results of our team.
But of course, I want
to acknowledge the fact
that we have a hardworking
and very talented team.
So it's hard to give credit
every place where it's due.
So here you see the list of
people we are working with.
And our team is
steadily growing,
and we are always
eager to increase
our intellectual horsepower
and our intellectual diversity.
So if you like our
work and you think
you would like to contribute or
have something to contribute,
don't be shy.
Contact us.
So I have a few messages
that I wanted to share.
So the first is, we are opening.
And when I wrote
down this line, I
didn't expect it would have
such a fraught meaning.
But it's rather innocent
what I mean in our case.
Many people on the
video call, I'm sure,
have visited our
laboratories in Santa Barbara
and been to this place
we call lovingly GQ1.
It stands for Google
Quantum 1 Laboratories.
But they're now being joined by
a much larger facility called
GQ2.
And you may have seen
this architectural drawing
a few times.
We showed this before.
But what's exciting to us, as you can see in this picture taken just a few days ago, is that the facility is almost ready to go.
Actually, if it hadn't been for COVID-19, we would be in there right now.
A number of our dilution refrigerators preceded us; they're already installed, and we are already taking data out of this facility remotely.
So if you wanted to use Google's
upcoming quantum computing
service, you would use
this in the following way.
You would be placed
somewhere-- let's
say your favorite
quarantine hangout.
Then you download Cirq from GitHub.
That's a development environment
where you would formulate
your quantum algorithm.
Then you send it to our service.
Our service name
is Quantum Engine.
And then you have a choice.
Probably, initially, you
would just go to simulators.
Those can be used to double-check whether there are any bugs, or at least that your code works in a noise-free environment.
And then you transfer it over to
the actual quantum processors.
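To make that workflow concrete, here is a minimal sketch assuming today's open-source Cirq APIs; the simulator part runs as-is, while the Quantum Engine call is commented out and uses placeholder project and processor names.

```python
import cirq

# Formulate a small quantum algorithm: a Bell pair on two grid qubits.
q0, q1 = cirq.GridQubit(0, 0), cirq.GridQubit(0, 1)
circuit = cirq.Circuit(
    cirq.H(q0),
    cirq.CNOT(q0, q1),
    cirq.measure(q0, q1, key='m'),
)

# Step one: double-check the circuit on a noise-free simulator.
print(cirq.Simulator().run(circuit, repetitions=1000).histogram(key='m'))

# Step two: transfer it to an actual processor via Quantum Engine.
# ('your-project-id' and 'my-processor' are illustrative placeholders.)
# engine = cirq.google.Engine(project_id='your-project-id')
# result = engine.run(circuit, processor_ids=['my-processor'])
```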
So we have an increasing
fleet of quantum processors
available in production.
The older processors
just have 20 qubits.
But our fabrication team,
led by Anthony Megrant,
they have actually
braved COVID conditions
and started to fabricate
72-qubit processors.
And they hopefully will join
the fleet in production soon.
For additional scalability, we have also added machines in Google Cloud. Those are high-memory machines where you can do simulations of up to 38 qubits.
38 qubits is a little
bit of a sweet spot.
If you go beyond this, it gets expensive on our side; we have to add quite a few machines.
But in my experience, I have yet to see a simulation in the 40-qubit range exhibit a phenomenon we hadn't already seen at 38.
So it's a good testing ground.
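For a rough sense of why 38 qubits is the cutoff, here is the back-of-envelope memory arithmetic; the single-precision assumption is mine.

```python
# A full state vector of n qubits holds 2**n complex amplitudes.
# At 8 bytes per amplitude (single precision), memory doubles per qubit.
for n in (30, 38, 40, 42):
    print(f"{n} qubits: {(2 ** n) * 8 / 2 ** 40:.2f} TiB")
# 38 qubits -> 2 TiB; by the low 40s you need tens of TiB.
```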
Then there is the software stack; I'm sure many on the call are already familiar with it.
So beyond Quantum Engine and the
Cirq programming environment,
you'll find things
such as OpenFermion.
It's a library for quantum
simulations, particularly
quantum chemistry and electronic
structure calculations.
If you happen to be interested
in quantum machine learning,
a burgeoning field, then you
may like TensorFlow Quantum
as a library to work with.
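As a small taste of the OpenFermion piece, here is a hedged sketch using its public API: building a fermionic hopping term and mapping it to qubit operators, the kind of step electronic-structure calculations start from.

```python
from openfermion import FermionOperator, jordan_wigner

# A hopping term between spin-orbitals 0 and 1, plus its Hermitian conjugate.
hopping = FermionOperator('0^ 1', -1.0) + FermionOperator('1^ 0', -1.0)

# The Jordan-Wigner transform turns it into a qubit operator runnable in Cirq.
print(jordan_wigner(hopping))
```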
And then there are various
other environments.
Eventually, there will be vertical libraries that support tasks such as certifiable random number generation.
So this afternoon, or a bit later this morning, you will hear a talk from the two leads of this effort: Erik Lucero from the hardware side, and Dave Bacon from the software side.
So the next message is
that we have neat toys.
Or, let's be more corporate
about it, we have neat tools.
There are various ones, but what I wanted to show is a tool I think of as a piece of art.
And actually, it was done
by our physics team, led
by Vadim Smelyanskiy
in collaboration
with Yu Chen's metrology team.
And what this tool does--
anybody who has ever
run a quantum algorithm
on a processor knows that
you have to tune it up.
You have to calibrate
your processor well
before you start.
And for example, you
have a gate operation.
Let's say you have a swap
gate, and that's characterized
by an angle, theta.
And now you need to know this theta, or set it, to a precisely known value.
So with this new tool, we can estimate the coherent parameters in our circuit to better than 3 times 10 to the minus 5.
Moreover, as you can
see in this figure,
we understand the
theory very well.
So the theory and experimental
results for this tool
line up really well.
And it's not only state-of-the-art precision; it's also very fast, so you can actually outrun calibration drift, which is quite important.
So you can see that you stay below rather low error levels for the duration of extended computations if you invoke it periodically.
It is also very scalable: you can simultaneously tune up all the parameters in your circuit and have a good probability of getting high-fidelity results at the end.
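To illustrate the general principle behind such parameter estimation (a toy sketch of mine, not Google's actual tool): small coherent errors in a gate angle are amplified by repeating the gate many times, and a fit to the resulting fringes pins the angle down far more precisely than any single measurement could.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
true_theta = 0.5 + 2e-3        # intended angle 0.5 rad plus a small miscalibration
depths = np.arange(1, 101)     # number of repeated gate applications
shots = 1000

# Simulated excitation probability after N gates, with binomial shot noise.
p_meas = rng.binomial(shots, np.sin(depths * true_theta) ** 2) / shots

def fringes(n, theta):
    return np.sin(n * theta) ** 2

(theta_fit,), _ = curve_fit(fringes, depths, p_meas, p0=[0.5])
print(f"residual angle error: {abs(theta_fit - true_theta):.1e} rad")
```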
So the next message is that we may be on the verge of being able to implement commercially or scientifically interesting NISQ applications.
And I should say,
internally we have
the notion of a gold standard
for a NISQ publication.
So this gold
standard essentially
would consist of the
following features.
Obviously, you want
to compute something
of commercial or
scientific interest.
But second, ideally,
you want to have
this computation be beyond the
reach of classical machines.
And third, we would like to
require that the Cirq code is
open source.
This will lead to a growing
library of good examples
that other researchers, fellow
researchers, can build on.
Now, even though we dubbed this the gold standard, that doesn't mean there isn't other good work to be done in the NISQ era.
Of course, important proof-of-concept stepping stones toward the gold standard can be excellent work, as can ideas for error mitigation that you want to try out. These are all pieces of work we support, as well.
And I say "as well" because, as you will hear throughout the conference, there are various funding vehicles we offer for those who have a suggestion for an algorithm and want to run it on our machines.
So here I'll give
you a little laundry
list of the different
algorithms we have already run.
And actually, even though I showed you the picture of GQ2 as it stands right now, it's important to know that this service has already been rolling out in stages since last year.
And at Google, we have
these cutesy names.
We go through a Fishfood phase,
a Dogfood phase, and then
Catfood.
And then, eventually, as we would say in German, [GERMAN]: the fun ends, and we become more corporate, calling it the Early Access Program as the door opens to the external world.
So Fishfood: this was a phase in which only people from our own team, the Quantum AI team, were running algorithms.
So of course, they
have insider knowledge.
They know about all the bugs
and the quirks of the hardware.
And they were able to run
algorithms such as the Quantum
Approximate
Optimization Algorithm.
They prepared Hartree-Fock states, or did interesting experiments looking at out-of-time-order correlations.
Actually, the talk
right after me
will give you the details
of these experiments.
And then in the next phase, we widened the circle. We went to Alphabet as a whole and invited our colleagues, who have less knowledge about the internals of our offering: hey, please, come run some algorithms.
And we got interesting
work, for example, from X--
you will hear about
this later today--
where they simulate
quantum gravity situations
on an actual chip.
And some other work I'm rather excited about, which we may be able to run at some point, is to compute NMR spectra for molecules, single proteins bound to surfaces, or proteins bound to membranes.
And then the final phase, which is opening soon and will run through fall, is that we invited the wider community, and many of you participated, to submit proposals.
And we selected the
cream of the crop,
and we have nice suggestions
by companies such as Phasecraft
to do a quantum simulation
of the 2D Hubbard model.
And Misha Lukin from Harvard and Manuel Endres suggested studying phenomena near quantum critical points.
So that's a nice list
of experiments we
are looking forward to running.
And you will hear about some of them in the talks tomorrow; today you hear from Google, and tomorrow we will hear from our collaborators.
Then there's the area where I do hands-on work myself, quantum machine learning. I wanted to quickly give you an update there.
There's sort of a set of
beliefs or convictions
that's shaping up.
It needs more work, and it's definitely not proven, but the following statements seem to be true. We can do basic machine learning operations, such as preparing probability distributions that describe data distributions, or feature maps into a high-dimensional Hilbert space, that are expensive to produce on classical machines; or, I should maybe say, possibly even beyond the reach of classical machines.
And when we apply those machine learning primitives to data sets that are manifestly quantum, in the sense that they are classical data sets but resulted from measuring an entangled quantum system, then we seem to have advantages, let's say in terms of sample complexity.
What is less clear is,
would these primitives
also help us when we go to
a general machine learning
data set?
And I have to say, we got some beginner's luck here. For example, we ran a quantum GAN, a quantum generative adversarial network, on the plain-vanilla MNIST digit data set, and we found that indeed we could synthesize good-quality images with a small Fréchet distance, a sort of quality metric.
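For reference, the Fréchet distance as commonly used to score GAN images compares the mean and covariance of feature vectors from real versus generated images; here is a minimal sketch assuming that standard formulation, not necessarily the exact variant used in our experiment.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_fake):
    """Fréchet distance between Gaussian fits to two (n_samples, dim) feature sets."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov1 = np.cov(feats_real, rowvar=False)
    cov2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov1 @ cov2).real  # matrix square root; drop tiny imaginary parts
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2 * covmean))
```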
But a caveat is in order: this might have had nothing to do with quantum resources per se.
We might just have been lucky
and stumbled on an architecture
that worked well.
So there's definitely some
more ingenuity needed here.
But it's ripe for doing
something interesting.
And on that note, I wanted to explain the following. When we try to discover diamonds, as [? Babba ?] called it, meaning interesting quantum algorithms, we do it with scarce resources: we have only so many people, and only so many processors.
So we follow a principle that has served Google well, which says: develop for the pro first.
So if the nerd in the family
loves your software product,
then she will explain
it to her siblings,
she will explain
it to her parents,
and this is typically a
good way to spread software.
And we have done the same thing with NISQ computing, where we reached out to top institutions and top researchers first.
And definitely this
yields results.
But two events recently made me question whether this should be our sole approach, or whether we should start earlier than we had planned to complement it with a broader outreach approach.
The first event: I was working with my intern, Alex, and one day he blurted out, oh man, it is so hard to think about interference patterns on a hypercube.
We need to get third graders thinking about this.
And there's some truth to it.
None of us are
really trained well
in the art of crafting a quantum
algorithm, which, as we know,
is really the art of crafting
an interesting interference
pattern.
And there's something to be said for bringing in fresh, unspoiled minds.
And the second event: as we are all aware, discussions around diversity and inclusion have heated up tremendously over the last months.
And diversity and inclusion
is all about breaking down
barriers to opportunity.
So with the new cloud-based simulators, we can essentially provide Google-scale, worldwide access to quantum computing resources, at least to the simulators.
And there is an opportunity
to enable or find
the Ramanujan of
quantum computing.
Many of you will know that Ramanujan was born in the 19th century in rather modest circumstances and taught himself math, yet came up with formulas that amazed the top mathematicians of his time.
So if you are a kid
in a town in India,
or a kid in a village
in Kenya, as long
as you have good enough
internet bandwidth
to watch low-bandwidth
YouTube videos,
you have enough
internet connectivity
to participate in
our quantum service.
So this seems to
be a nice win-win.
So one more thing. Since we achieved our beyond-classical computation result, we have spent a lot of time thinking ahead.
And a large part of the team worked hard on a plan to build an error-corrected quantum computer.
So this work was shepherded
by our quantum hardware team.
And the leads I should
acknowledge here
are Julian Kelly, Anthony
Megrant, and Yu Chen.
Of course, there are much broader and larger teams underneath them. But they have worked, jointly with the theorists, on a roadmap that looks 10 years out.
But, to allude to the famous Kennedy quote, we think we can do it before the decade is out.
What, then, can we do? We think by then we can build a large error-corrected quantum computer with 10 to the sixth, a million, physical qubits. In essence, this would be an information architecture based on the surface code, supported by transmon qubits.
And we developed a tight schedule of well-defined milestones; there's actually a telephone-book-thick stack of calculations and designs behind this. So we have a tight sequence of difficult but, we feel, reachable milestones that eventually take us to this end goal.
So let me talk to you about
our next big milestone, which
is a demonstration
of a logical qubit.
I should rather say,
the demonstration
of reduced logical error.
We all know that
error correction,
at the end of the day, works
by introducing redundancy.
So you have to take
the information encoded
in your logical qubit
and distribute it
over a set of physical qubits.
For example, a 2D array of,
let's say, three-by-three data
qubits that would hold
the quantum information.
Or you can go to larger sets,
five-by-five, seven-by-seven
arrays.
And those data qubits
do hold the information.
Then, of course, you have to peek in, in a quantumly circumspect way, so as not to collapse the logical state.
So you would do
this by introducing
measure qubits, which
essentially do parity checks.
And based on the
parity measurements,
you can get a sense of,
did anything go wrong
in my circuit, and
then correct it.
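To make the parity-check idea concrete, here is a minimal Cirq sketch of a distance-3 bit-flip repetition code, with a deliberately injected error so the syndrome has something to flag; this is my illustration, not our experimental code.

```python
import cirq

data = [cirq.LineQubit(i) for i in (0, 2, 4)]   # data qubits hold the information
meas = [cirq.LineQubit(i) for i in (1, 3)]      # measure qubits do parity checks

circuit = cirq.Circuit(
    cirq.X(data[1]),                            # deliberately inject a bit flip
    # Each measure qubit collects the parity of its two data-qubit neighbors.
    cirq.CNOT(data[0], meas[0]), cirq.CNOT(data[1], meas[0]),
    cirq.CNOT(data[1], meas[1]), cirq.CNOT(data[2], meas[1]),
    cirq.measure(*meas, key='syndrome'),
)

# Syndrome (1, 1) flags the middle data qubit as flipped, without ever
# measuring the data qubits themselves.
print(cirq.Simulator().run(circuit).measurements['syndrome'])
```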
Ultimately, we know we have to correct against two kinds of errors, phase flips and bit flips. But to buttress this work, we can also study one-dimensional codes by snaking a 1D chain through the 2D array, and study the bit-flip or phase-flip repetition code as well.
The advantage is you can go
out to a higher code distance.
So what we are aiming for, eventually, is to publish a paper that will have a figure a little bit like this. These are cartoon figures, not actual data yet, but they show where we are headed.
So we want to show that as you
increase your code distance,
the logical error
rate comes down.
And you want to have a
nice suppression factor,
meaning as you go from
three-by-three to five-by-five,
you want to see, let's
say, a factor 10 reduction
of your logical error rate.
So this is what we would
like to demonstrate next.
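The standard scaling behind such a figure, shown here with illustrative numbers of my choosing, is that the logical error rate per round falls off roughly as p_L ~ A / Lambda^((d+1)/2), so every code-distance step of two buys one factor of Lambda:

```python
# Illustrative only: A is a fitting constant, Lam the suppression factor.
A, Lam = 0.1, 10
for d in (3, 5, 7, 9):
    print(f"distance {d}: p_L ~ {A / Lam ** ((d + 1) / 2):.0e}")
# Each step from d to d+2 reduces the logical error rate by a factor Lam.
```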
And to see how far away we are from this, we did some theory and built a model. Actually, my dad always used to say that the most practical thing there is, is a good theory.
And we developed a component model in which, essentially, after every operation used in the surface code, you introduce a simple Pauli error channel.
And then armed with
this error model
and putting in the
actual numbers,
you have sort of a tool to
chart the road into the future.
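In Cirq terms, a minimal sketch of such a component model might look like this; the depolarizing rate is a placeholder, and the real model distinguishes gate, idle, and readout channels with measured numbers.

```python
import cirq

class SimplePauliNoise(cirq.NoiseModel):
    """Append a depolarizing channel after every operation (illustrative rate)."""
    def noisy_operation(self, operation):
        return [operation] + [cirq.depolarize(p=1e-3).on(q) for q in operation.qubits]

q = cirq.LineQubit.range(2)
circuit = cirq.Circuit(cirq.H(q[0]), cirq.CZ(*q), cirq.measure(*q, key='m'))
result = cirq.DensityMatrixSimulator(noise=SimplePauliNoise()).run(circuit, repetitions=1000)
print(result.histogram(key='m'))
```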
For example, these different bars tell you how bad our CZ error is and how bad our Hadamard error is; the smaller one is the Hadamard error.
Actually, something we found that surprised us, and was not properly accounted for in the literature so far, is that idling errors matter a lot.
And of course, in hindsight, it's not surprising: if your qubit is sitting there suffering from T1 processes, your overall error obviously accumulates.
So if we applied this simple error model with today's numbers, our lambda suppression factor would actually be smaller than 1. That means our error would increase as we go to larger arrays.
But it also gives us an idea of how we can reduce the different error budgets and make the proof of principle work; our eventual roadmap target is to get the overall suppression factor to about 10.
And we definitely do
think this is feasible
based on calculations we made.
And based on what I said
about the idling error,
there seem to be two
basic roads forward.
One is that you reduce your idling error by reducing your cycle time, which is of course good in its own right: you get a faster computer this way. So you focus on circuit design to make things better.
And for our teams, it's
a little bit in our DNA,
as imprinted on us
by John Martinis.
But there is also a complementary approach, and the two can live side by side, where you focus more on materials research. The Yale school of thought, let's say, would look more in this direction.
So here you would aim at longer coherence times, say in the hundreds of microseconds, whereas if you make your cycle time faster, you may get away with tens of microseconds.
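To put rough, illustrative numbers on that trade-off: the idling error per cycle scales roughly as t_cycle / T1, so a 1-microsecond cycle against a 100-microsecond T1 leaves about a 1% idle error per cycle, and you win the same factor whether you push T1 up toward hundreds of microseconds or push the cycle time down.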
And of course, you
can do both pathways.
So unfortunately, the
simple error model
is not quite enough.
There are additional error channels.
There's leakage.
There's calibration drift.
There's crosstalk.
There are correlated errors.
And we understand that
correlated errors have always
been sort of the boogeymen
of quantum error correction.
There are no good threshold theorems for them known today.
Actually, our
physics team started
to work on exactly those.
And correlated errors are not just a theoretical concern.
They easily occur when, for
example, the leakage state
decays into states that are
valid in your code space.
Or a much more dramatic
example of correlated errors
is the impact of cosmic
rays on our qubit arrays.
And we do see this, actually. This trace here shows a 25-microsecond stretch where we just ran repetition code experiments, and we see bursts of errors that occur in a spatially and temporally correlated manner.
So if you're now saying, oh my God, we will have to take quantum computers down a mine shaft: no, it's not quite that bad. We will just have to install phonon traps. But it is something you have to take care of.
So this was the 10-to-the-2 milestone. Then, of course, you would just make your code distance large enough that you approach about 1,000 physical qubits.
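For scale, in the standard surface code layout a distance-d patch uses d^2 data qubits plus d^2 - 1 measure qubits, 2d^2 - 1 in total; at distance 23, for example, that's 1,057 physical qubits, right around that 1,000 mark.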
If everything goes well and
you have a large suppression
factor, you should have
coherence times of years,
or, as I like to say, your device should essentially stay coherent until you switch off your computer.
And then there is a milestone
that I should maybe point out.
It's important from an
investment and financing
perspective.
Eventually, you will have two logical qubits at hand with a full gate set between them; hopefully not just individual gates, but a complete gate set.
And at that point,
you really have
what may be called
the integrated
circuit of quantum computing.
You have a tileable
digital module.
And to build a large machine,
you would just take this module
and replicate it.
And this is maybe not too
far out, five, six years,
somewhere there, hopefully.
And it's important
because, past that point,
the risk profile for quantum
computing comes way down.
It becomes more like the risk profile of building a high-rise or a freeway, rather than doing cutting-edge quantum electronics.
And now, fast-forward
to the final milestone.
Here I want to give you a picture of how we envision a one-million-qubit error-corrected quantum computer looking.
So it will consist of
these modules that we tile.
And as I say, there
is quite a thick book
of technical information
that went into this design.
And you see a human figure placed in the drawing here; the scale is about right.
So there is sometimes this misconception: oh, superconducting qubits require these thick wires, and you will eventually need a factory floor full of dilution refrigerators, expensive and unwieldy.
If you spend careful time
designing and thinking
about your control electronics,
about wiring solutions,
then you see that you can deliver a product of manageable size.
So with this, I want to
summarize a little bit
and also issue a
call to the community
to help us on various aspects.
So we feel we have a good
plan, a plan of record,
that says a 2D array
of transmon qubits
can be used to enable a surface
code information architecture.
We will be able to drop below
the error thresholds required.
But there are
places where it may
be worth it to invest
in parallel efforts.
And we name those parallel
efforts renegade efforts.
And, for example, one renegade effort we are thinking seriously about, where we already fund external groups to look into it, is the question: would it make sense to build a more protected qubit?
Or is this an unobtainium that is still out of reach? Or does the community that has been thinking about zero-pi qubits, or qubits based on GKP states, feel this may be ready for prime time?
And such qubits, of course,
could give us more elbow room
in terms of error budgets.
And, of course, the less code distance you need, the more your cycle times come down.
So we're definitely looking very seriously at possibly having such an effort in house.
But we also rely
on the community
to look into such alternatives.
Another area is
asking the question,
is quantum error correction
beyond the surface
code possible?
And it depends whom you ask. If you go to our own surface code expert, Austin Fowler, he will tell you: no, forget it; if you have a 2D lattice of qubits, you are pretty much confined to surface codes. Anything you do will look like a surface code.
And I haven't heard this directly, but it was conveyed to me that John Preskill once said that many people have tried to go beyond the surface code and have failed. So maybe we're stuck with it.
But we are not ready
to give up yet,
and we are definitely
interested in efforts
to look at more
self-correcting codes that
could lead to more autonomous
forms of error correction.
And then the last big worry I should raise is the scarcity of algorithms.
So what would we do with
this concrete machine?
What would we do with a
one-million-qubit machine?
This is a somewhat more concrete question than just saying, oh, there's a scaling advantage.
Concretely: for the cycle times we will have, and the residual error rates that will remain, what are attractive, commercially relevant, or scientifically interesting algorithms to run?
And there, definitely,
we rely on the community
to hopefully create
many interesting ideas
in these directions.
And then, with that, I wanted
to thank you all for dialing in.
We all know long video conferences are a challenge, but we tried to make it as fun as possible.
And I really hope you are going
to enjoy the Quantum Summer
Symposium.
