SPEAKER 1: OK.
Good afternoon everybody.
We are happy that we have
some crew from D-Wave
here today as guests.
In particular, Eric
Ladizinsky was a co-founder
of D-Wave and their
chief scientist.
I think your main task
is really shepherding
the manufacturing
of the D-Wave chips.
And to say a few
words about Eric,
he's actually here
from the neighborhood.
He went to UCLA, but
then moved to TRW
where he was working on
superconducting electronics
for next generation
supercomputers
to extend Moore's Law past CMOS.
And then at some
point he had an idea
that these superconducting
circuits would also
be quite useful to build
an actual quantum computer.
And he won, in 2000
I think that was,
a very large DARPA grant
to pursue this exact idea.
So he formed this
mini Manhattan Project
to build quantum computers based
on superconducting electronics.
And then in 2004 he hooked
up with Geordie Rose
as a co-founder of
D-Wave, and then
was in a company context
continuing this vision
to build such a
computer, and have
sort of all the necessary
disciplines of science
and engineering together
to pull this off.
And as you know, one
of these machines
we have in the quantum AI lab,
sitting over at NASA Ames.
And we have of course a
lot of fun with your chips,
experimenting with them,
essentially probing
the functional role of quantum
effects in computation
and blueprinting elements of
quantum annealing and quantum
optimization algorithms.
But of course, to build a
serious computational advantage
or capability these chips will
have to undergo more evolution.
And that's what we are going
to hear from Eric today,
how D-Wave and Eric's
crew is planning
to evolve scalable
quantum computation.
ERIC LADIZINSKY: Thank you.
Thank you very much.
First, I'd like to say it's
really gratifying that we built
something and created an effort
that people are interested in.
There's been tremendous progress
over a relatively short period
of time.
I'm really personally
gratified that we
have all these brilliant people,
and at various organizations,
USC and ISI crew,
and NASA and Google,
because a project
like this really
takes a community of researchers
to bring it to fruition.
In the early days
of microprocessors
they weren't computers
like we know them today.
They were controlling
elevators and things.
And it was a lot of
people who started
looking at this
nascent technology,
figuring out what you
could do with it over time,
so I'm enormously gratified that
there is a community like that
that's taken
interest, and I think
this will help us get
to where we want to go.
I call my talk Evolving
Scalable Quantum Computers
because at D-Wave we've taken
this evolutionary approach.
And I'll tell you
what that means,
but first like why
quantum computing?
So I'm assuming that some of the
people that I'll be speaking to
in the audience
they don't really
know what a quantum computer is.
They're kind of mysterious.
There's a lot of
complexity there.
I'm going to try to make
it as simple as possible
and tell you a
little bit about it.
Now, why is there excitement
in quantum computing?
And this is kind
of my perspective.
Every once in a
while humans learn
that there are these
physical phenomena
around them, but we don't know
how to harness them ourselves.
Obviously the first
one, the big one, fire.
But it was hard to make.
So it might occur naturally,
but I haven't actually
harnessed it for my own use.
And after a while we learned
how to do that ourselves,
whether it's agriculture or
tool making, or electricity.
And once we harness
those, and we
know how to control
them in a detailed way,
obviously it's game
changing for our species
and our capabilities.
I think that quantum
computing if realized at scale
could be this kind of thing.
It's harnessing phenomena that
are fundamental to everything
in our universe,
quantum phenomena, that
underlie the
structure of matter,
and radiation, and
everything that we know.
And people talk
about it like this.
There's a great
deal of excitement.
So it tends to bring in some
of the most brilliant people
from a whole set of disciplines
in physics, computer science,
and it could have a big impact.
Now, there's a lot
of hype around it
so how do you actually
make this happen?
And why is there this
interest in quantum computing?
What do we need
bigger computers for?
We have these giant centers
with Titan supercomputers
that do petaflop
scale computations.
So what do we need
more of that for?
Well, the interesting
thing is that if you
look at the top guy, so
this is a plot of the number
of electrons
involved in switching
a transistor from on to off.
OK?
And so the number of
electrons per switching event
is getting smaller and smaller.
The devices are getting
smaller and smaller.
And you're getting
more and more of them.
And there's going
to come a point.
We're rapidly approaching
sort of atomic dimensions
as a natural limit to
the size of transistors
in memory elements.
We already have single
electron transistors.
We just don't use them at scale.
OK, so, wow, we should
be able to make some really
powerful stuff, and then we can
have three dimensional circuits
with these single
electron transistors.
And imagine what we could do
and scale up giant data centers.
The interesting thing
is that, even with all of that,
there are still problems
that would forever
remain beyond our reach
as humans.
And so there's a field that's
called complexity theory,
and what people do is they
look at a model of computation,
say the Turing model
of computation.
We have a mathematical
model that
captures the idea of
what it means to compute.
And we can use that
model to say what
are the ultimate limits of
computation based on this idea?
And then using that model
theorists have created
these classifications for
what's considered tractable,
like something we could actually
do reasonably, and intractable.
And this is kind of an
artificial definition,
but it's useful
to write theorems.
We tend to look at not how
long a given problem takes,
but how much harder
a problem gets
in terms of the resources
required, in terms
of the time it takes
to get a solution,
or the amount of energy it
takes, or memory requirements.
And in this case, I'm looking
at the time to solution,
and I look at the
time to solution
as a function of how
big the problem is.
And I look at how the
resources scale up
as the problems get bigger.
So this definition
from complexity theory
is that the problems that scale
like a polynomial in the input,
like say n to the fifth.
That's pretty bad actually.
But let's say you
double the problem
and it takes four times as
long to get a solution.
That's like n squared.
So problems that scale like
that are called polynomial time
scaling, and we call those
something that we could achieve
maybe by throwing
a lot of resources
at it if we thought
it was worth it.
Right?
Now, obviously n to
the 10th is pretty bad.
But there's another
class of problems
where the time to solution
scales exponentially
with the size of the
input, like 2 to the n.
And we call those
intractable at large scales,
because as I take all of those
single electron transistors, 3D
processors, and I
scale them out, they
kind of scale linearly.
And I'll never catch problems
that scale exponentially.
And it turns out that a lot of
really interesting problems,
high valued problems for us
to solve, are of this variety.
They scale very badly.
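To make that scaling contrast concrete, here's a minimal sketch in Python; the one-nanosecond time per step is an assumed constant purely for illustration:

```python
# Compare polynomial vs. exponential time-to-solution scaling.
# The 1 ns per step is an illustrative assumption, not a real benchmark.

def steps_polynomial(n, k=2):
    """Problems like n^2: doubling n quadruples the work."""
    return n ** k

def steps_exponential(n):
    """Problems like 2^n: each added variable doubles the work."""
    return 2 ** n

ns_per_step = 1e-9
for n in (10, 20, 40, 80):
    poly = steps_polynomial(n) * ns_per_step
    expo = steps_exponential(n) * ns_per_step
    print(f"n={n:3d}  polynomial: {poly:.2e} s   exponential: {expo:.2e} s")
```

By n = 80 the exponential column is already past the age of the universe at this step rate, while the polynomial column is still microseconds.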
What kind of problems are those?
Simulation of quantum systems.
So when people want to
do quantum chemistry,
I put some atoms
together, I want
to know how they fold
up into a protein,
and the shape of that protein
confers its biologic function,
you can't do that for
very big molecules at all.
Or detailed chemical
reactions, if I
want to make better materials.
I can't model these on
computers sufficiently.
People try, but they have to
make tremendous approximations
and don't get very good results.
And the reason is
that at those scales,
at those molecular
scales, you're
getting quantum
mechanics involved,
and classical
computers can't solve
these in reasonable time frames.
Another class of
problems which we're
interested in at D-Wave
and you guys here at Google
is complex combinatorial
optimization.
OK, sounds like a mouthful.
Well, all it is is I have
a whole bunch of ways
that I could do some complicated
thing, and there's a best way.
So a mundane
example for instance
would be, FedEx
wants to route all
of its trucks and airplanes.
How do I do that routing?
I have this exponentially
growing tree
of possible routing strategies.
Which one do I choose to
minimize fuel consumption?
They can't do that.
You can put that on a whole
bunch of supercomputers.
It's better than doing nothing.
But you're not going
to get the best answer.
And there's a lot of problems
in artificial intelligence
and machine learning
that, at their core,
you have to search through a
vast number of possibilities
and find the best
one by some metric.
These are very
difficult problems.
And finally, the thing
that got the field
funded, factoring large numbers.
It sounds like kind of a
strange, artificial problem.
I want to take a big
number and break it
into the product of a
couple of big primes.
But it turns out that
that scales, not actually
exponentially, sort
of sub-exponentially,
but super-polynomially.
And in 1994 it was
shown that if somehow I
could harness some exotic
quantum phenomenon, instead
of taking a billion
years to factor
big numbers on a
supercomputer I might
be able to do it in seconds,
and break our current RSA
encryption codes.
The way that we send
information back and forth
depends on the fact that it's
hard to factor large numbers.
So what are those
physical resources
that are all around us, that
are the basis of our reality,
that we don't usually tap into
for information processing?
There are
four basic ones.
The first one is quantum tunneling.
You could have an object on one
side of an impenetrable barrier
and it appears on the other
side without passing through.
Pretty weird, but we make
these devices every day.
What it could allow
computationally
is I could explore solutions
quantum mechanically that
might be forbidden
to me classically.
The next one is energy
level quantization.
In an atom you can
have an electron
that can exist at many
different energies.
They're not these classical
orbits people think about.
And the interesting
thing is, you'll
see in textbooks sometimes,
you send some light in
and the electron jumps
from one to the other,
but that's not what happens.
You're not allowed in between.
So you have to go
from one energy
level to the other energy
level without traversing
the intervening space.
So that's pretty strange too.
Happens all the time.
Superposition,
which you hear a lot
about in connection
with quantum computing.
Classically you think about
a particle having a well
defined trajectory
in space and time.
But quantum
mechanically, if you want
to get the right answer for
a lot of physical processes
involving microscopic
particles, they
actually live out all possible
trajectories simultaneously.
So a single object can live
out many possible histories.
And this was the original
idea, say David Deutsch,
thinking about
quantum computing.
He said, what if I had
a really hard problem,
and the same physical
hardware behaved
as if it was a lot of different
hardware simultaneously,
and each one of
these trajectories
was a different part of a
very complex calculation?
But I don't have to
build a big data center.
I have one processor that
acts like a giant data center.
So this allows for massive
parallelism in computation.
There's a caveat,
but I'll get to that.
And finally,
entanglement, which really
bothered Albert Einstein.
This is when you can have what
you think of as two objects.
You measure something about one
and the other one immediately
sort of has a perfect
correlation with it.
And I'll give you an example.
This is just to illustrate it.
Let's say I have a pair of dice.
One's in California.
One's in New York City.
They're probabilistic events.
We roll the dice at the
same time 1,000 times,
and classically you would expect
I get random numbers here,
random numbers here,
and they have nothing
to do with each other.
If these were quantum entangled,
I would compare those lists,
and 1,000 rolls out of
1,000 rolls the same.
And this would be
quite startling.
And in fact, these
experiments have
been done, not with dice,
but things like photons.
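Just to make the statistics of that dice picture concrete, here's a classical toy simulation. The "entangled" pair is faked with a shared random draw; real entanglement is not a hidden shared value, but the observed correlation between the two lists would look like this:

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

def roll_classical():
    # Two independent dice: outcomes have nothing to do with each other.
    return random.randint(1, 6), random.randint(1, 6)

def roll_entangled():
    # Faked "entanglement": one shared draw, purely to illustrate the
    # perfect correlation you would observe between the two lists.
    r = random.randint(1, 6)
    return r, r

trials = 1000
classical_matches = sum(a == b for a, b in (roll_classical() for _ in range(trials)))
entangled_matches = sum(a == b for a, b in (roll_entangled() for _ in range(trials)))
print(classical_matches)  # about 1 roll in 6 agrees by chance
print(entangled_matches)  # 1,000 rolls out of 1,000 agree
```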
And all of these
are the phenomena
that underlie the
stability of matter,
the structure of
matter, radiation.
Everything that we know
is based on these kinds
of strange things.
But, while they underlie
how transistors work,
we haven't actually used them
in information processing.
And what if we did?
That's what quantum
computing is about.
I'll give you a simple example
of something called a quantum
gate, because we're going to
talk about this a little bit.
So most of the
people in this room
know that when you
build complex electronics
you have some simple functions.
Let's take the simplest
one called a NOT gate.
You put a binary zero
in, you get a one.
You put a one in,
you get a zero.
It's a simple function.
There's a class of simple
functions like AND,
OR, NOT, NOR.
You put them together
in complex arrangements
and you can build arbitrary
logic and all the great stuff
our computers do today.
Quantum mechanical
version of that,
you can encode
information on a lot
of different physical systems.
It could be a transistor.
It used to be gear wheels
and mechanical calculators,
or an abacus.
And what people first
started looking at
was can I, as I miniaturize
these transistors,
can I use individual
atoms and molecules
as computing elements?
This is how it started.
So if I have an atom, there's
the proton and electron energy
levels, I could call the
first orbital a zero.
And I could call the
second energy level a one.
And we know that if we send in
light of the right frequency
I can make a transition
from here to here.
So sending a pulse of light into
an atom that was in the ground
state and going to the
first excited state
means I started with a zero.
I got a one.
That's a NOT gate.
It also turns out that if
you're in the excited state
and you send in a photon,
you'll get stimulated emission.
The electron will drop back
down to that other state.
A one becomes a zero.
That's a NOT gate.
So I've implemented a NOT
gate with an atom, OK?
Just by shining light on it.
But here's the interesting part.
There's a time associated
with this thing
to go from here to here.
The electron can't
be in between.
What it actually does
is, when you shine
that light on the atom, you can
think about the electron being
here, in the zero state,
and as that light turns on
it sort of fades out of
existence, it fades up here,
and now it's a one.
Leave the light on again,
it fades back to a zero.
So if you leave the light
on for half the time
it takes to make that transition
it's in both places at once.
That's a quantum gate.
It's called a Hadamard gate.
You put a zero in,
and basically the way
you'd implement this
is I have the atom.
I start it there.
I shine it for half the time
it takes to make a transition,
and what comes out is this
thing in this weird state
of being a zero and one
at the same time, which
is a quantum bit, or a qubit.
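In matrix form, that half-duration pulse acts (up to a phase) like the Hadamard gate on the qubit's two amplitudes. A minimal sketch:

```python
import math

# A qubit as a pair of amplitudes: (amplitude of |0>, amplitude of |1>).
# The Hadamard gate maps |0> to an equal superposition of |0> and |1>.

def hadamard(state):
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1.0, 0.0)            # atom in the ground state: a definite 0
superposed = hadamard(zero)  # a zero and a one at the same time
print(superposed)            # both amplitudes are 1/sqrt(2), about 0.707

# Squared amplitudes give measurement probabilities: 1/2 for each outcome.
print(tuple(abs(amp) ** 2 for amp in superposed))
```

Applying the gate a second time (leaving the light on for the other half of the transition time) rotates the state back to a definite value.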
And so this is how people think
about this sort of gate
model for quantum
computing, now in ion traps.
This is a cartoon.
Imagine you had a whole
bunch of atoms like that.
And now for each atom I have
a laser, so I can excite it.
I can have that electron
in the ground state.
I can build a
register out of this.
I shine light on an atom
for half the time it takes
to make a transition,
and I put it
in this strange state of being
a zero and one at the same time.
And I do that for
every one of them.
So here's where
you start getting
an idea of the power
of quantum computing
as traditionally thought about.
Now, take a single register that
has eight bits. In a classical
register, where I had transistors
that were each either zero or one,
I could have two to the
eighth different possibilities
for that register.
But at any given snapshot in
time I only have one of these.
In this quantum register,
by just shining that light
for half the time
for each of those,
it will be in a state
that encodes all of those
registers simultaneously.
It's all zeros.
It's all ones.
It's every combination
simultaneously.
OK?
Now imagine you made
a 300-bit register
and you can see what happens.
Every time you add a qubit
you get a factor of two.
So the number of possibilities
grows exponentially.
So if I had a 300-bit
register I would have 2
to the 300 possible combinations
stored in a single register
simultaneously.
And that's more
numbers than there
are particles in the universe.
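The arithmetic there is easy to check; the 10^80 particle count below is the usual order-of-magnitude estimate, not an exact figure:

```python
# Each added qubit doubles the number of basis states the register can
# hold in superposition, so the count grows exponentially.

states_8 = 2 ** 8
states_300 = 2 ** 300
particles_estimate = 10 ** 80  # common order-of-magnitude estimate

print(states_8)                         # 256 possibilities
print(states_300 > particles_estimate)  # True
print(len(str(states_300)))             # 2^300 has 91 decimal digits
```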
These are the kind
of things you hear.
This is the exciting part.
Now, the question is, how
do I take that information,
process it, and get answers out?
It's not as simple as
I have this parallelism
and I have 2 to the
300 answers at the end.
Because there's another thing
about quantum mechanics.
When I look at that register
I'm going to see one of those.
So you have to find a way before
you do a final measurement
to have those different
threads, those histories,
kind of interfere with
each other in such a way
that I get the
cumulative answer.
I've actually got the
advantage of all those paths,
but I get a single
answer at the output.
That's the hard part of
writing quantum algorithms.
OK, so people started trying
to build these things.
And there's a whole bunch of
different physical platforms
you could do that with.
There are people doing it with
photons, microscopic particles
of light, optical lattices.
This is an example
of an ion trap.
The atoms
are in there, floating
in these electromagnetic fields,
interacting through lasers.
I can have interactions between
these things to create logic.
One of the problems with
these approaches if you wanted
to scale up is these
are microscopic objects,
atoms, photons.
Right?
How do I build a control system
to control microscopic elements
and then scale it up?
Nobody knows how to
do that right now,
and it could take a long
time to build technologies
that were capable of doing that.
Now, something interesting is
that this is a D-Wave chip.
And this is very
macroscopic by comparison.
There's a misconception
about quantum mechanics,
and I hear
a lot of journals
say this, a lot of quantum
computing people say this,
so I'll dispel it right now.
Quantum mechanics is not the
physics of the very small.
Quantum mechanics is the
physics of everything.
So you have this problem.
If microscopic particles
are in many places
at once, living out lots
of histories at once,
and going from place to place
without the intervening space
and all that, and we're
built out of them,
why don't we do that?
We know why now.
We know how the classical world
emerges from all that quantum
weirdness.
And I'm going to give
you a hand wavy argument,
and not a technical argument.
But way back when, here's
the Schrodinger equation.
And what it predicts is
microscopic and macroscopic
alike, I should be in
these strange states.
Here's a cat alive and
dead at the same time.
Why don't we ever see that?
In the '80s there was a new
field or a new discipline.
It talked about something
called decoherence.
And here's the thing.
If you look at an
electron or a photon,
an electron doesn't interact
with its environment very much.
An electron passing
through this room
will scarcely bump into
another air molecule or photon.
But a macroscopic object
like a cat or any of you
is constantly being
bombarded by air molecules,
by radiation fields,
and it turns out
that if you take all the
interactions into account
of an object with
its surroundings,
and you put it
into this equation,
you'll find that what emerges
is classical like states.
And I'm going to say this
in kind of a funny way.
What happens is, if you
have quantum weirdness here,
so let's say all those
weird effects are
in this region of
space, when a particle
comes in some of that
weirdness get passed out.
It doesn't disappear.
This idea about the
collapse of the wave
function, there's
no evidence for it.
What we do have evidence for
is that the weirdness gets
passed out to all these
other degrees of freedom,
particles, radiation fields,
that cat standing on a floor
that's vibrating, and when
you take all that into account
you can see how the classical
world emerges right out
of quantum mechanics.
Another way to think about
it is energy conservation.
If I drop a tennis ball
from here why doesn't it
come up to where it was?
Because the energy went away?
No, it went into
vibrations in the floor,
and sound waves, and all that.
So it gets passed
out to other things.
So this allows for an
incredible possibility.
If I could put this cat
in an ultra high vacuum,
get rid of all the air
molecules bumping into it,
if I could shield it
from all the radiation,
if I could put it on a
floor that's really, really
cold so there's no
vibrating atoms,
the weirdness doesn't
get passed out.
Now, in the case
of the cat there's
another issue: even internally,
the wiggling of its own atoms
will get rid of the weirdness.
But the interesting thing
is, the question since 1935
is, can you build
macroscopic quantum objects?
Answer, yes.
In 2000 this happened.
It was in "The New York Times."
"Schrodinger's Cat Lives."
And it was in a really
interesting object.
This is a superconducting ring.
So a ring of niobium metal.
Could be aluminum.
Unlike the cat, when you
go to very low temperature,
electrons can go
around this ring
without bumping into
anything, and they
can go around forever
non-dissipatively.
So the weirdness doesn't get
passed out inside the ring.
And the idea was what
if I put this ring
in a rarefied environment
of ultra high vacuum,
ultra low temperature, radiation
shielded, all that stuff.
And when you do
you could put this
into this really
exotic state wherein
I can have a current that
goes around this ring,
all the current goes
clockwise, and all the current
goes counterclockwise.
And now I have something
that can encode a zero and a one.
The current goes one way,
magnetic field up.
Current goes the other
way, magnetic field down.
I have a qubit in a
macroscopic object.
And what that allows
for in quantum computing
is that I can build
macroscopic qubits
that I can couple to
macroscopic control elements.
I can engineer them,
unlike atoms and molecules.
And there's already
existing technologies
for superconducting
logic where I
can interact with these
strange quantum objects,
with other quantum
electronics, and maybe
build things that scale
sooner rather than later.
So now we have the story.
It's like now you have to
build an organization that
can pull this off.
When we were thinking about
putting D-Wave together
to really do this,
you can't do this
with a lot of disparate efforts.
We took as our model something
like the Manhattan Project
or say Celera Genomics.
You have to have an
interdisciplinary group where
you have physicists, engineers,
material scientists, computer
scientists all working together.
You have to have
specificity of purpose.
You have to have an
interdisciplinary team.
They all have to be together
working together every day.
Rapid prototyping
is very important.
The only reason those programs
succeeded on very short time
scales is that they
iterated over and over.
These systems are
too complicated
to model from first principles.
You have to build them.
You have to test them.
You have to learn.
And you have to do it fast.
The other thing is you
have to leverage the best
existing resources.
So we took superconductivity
to the semiconductor industry
and very quickly used
the trillion dollars
they've spent over
the last 50 years
to make the best superconducting
process in the world.
You don't want to
reinvent the wheel.
And of course, there's
a lot of leadership.
And oftentimes in
technical efforts
this is vastly under
appreciated, the coordination
of the effort and all of that
project management stuff.
So we did this at D-Wave.
That's a picture of
our new building.
You can see banks of
dilution refrigerators.
Under this roof we have
theorists, experimentalists,
design teams, electrical
engineers, applications,
everything under one roof,
and rapid prototyping
with lots of fridges.
We can do materials science
and fabrication, state of the art,
and that's our mini
Manhattan Project.
Just a quick overview, we
have about 100 employees.
We raised about $130
million in capital
from some really
great investors.
Lots of US patents, maybe
more than all of the other
companies combined.
And we do publish
a lot if you want
to read about what
we've done and how
we've done it, about 60
peer reviewed papers.
And most recently we have some
fantastic partners with you
guys, and Lockheed
Martin, and USC.
The other thing
about our approach
was we had an evolutionary
approach to building quantum
computers.
The idea was there's
this dream machine that
could do in seconds what
might take billions of years,
but how do you get
there from here?
So if you keep working
on that perfect computer,
and it's taking forever
to get just a few qubits,
you might lose interest.
So what we looked
at was: is there
a model of quantum computing,
adiabatic or quantum
annealing, a special-purpose
quantum computer that
could solve high value problems,
prove that this technology has
legs, and give you a chance
to move forward and sustain
investment.
The other thing to
understand is that some
of these theoretical
constructs don't say much
about the real world.
So this polynomial versus
exponential scaling,
I'll give you an
example of that.
In the real world there
aren't an infinite number
of planes when you're figuring out
how to schedule Delta Airlines.
Real problems are bounded.
Right?
Maybe I have 500 airplanes.
So it'd be nice to get
polynomial scaling,
like in Shor's algorithm,
versus exponential,
but if you get a
better exponential,
where it's 2 to the
n over 10, that's
worth billions of
dollars, and it
has huge impacts on
all kinds of industries
including what you do here.
So it's important to
understand real world
problems versus theoretical
constructs when you do this.
Now, there's different
models of quantum computing.
Which one would you choose?
So there's something
called a gate model.
I just showed you a little
example of a quantum gate.
Some of the issues with
the quantum gate model
is I'm shining that
light on that atom,
or I have to shine microwaves
on superconducting circuits.
There's a lot of high
frequency engineering involved.
It's very sensitive to noise.
And so it's been
really hard for people
to build systems because they
had to spend years and years
figuring out how to get rid
of all those interactions
with the environment so
that you could do anything
practical even at
the few qubit level.
There's another model. I'm
not going into topological
quantum computing, because it's
based on objects
we don't quite know exist.
But adiabatic quantum computing
is another kind of computing
that offers the
advantage that it's
a lot less sensitive to noise.
So you can start
building things at scale
sooner rather than later.
It doesn't have the really
stringent, very-low-noise
requirements of the gate model.
It's something that uses low
frequency control and not
a lot of high
frequency lines, which
is hard to do even in
classical circuits.
And we chose, obviously,
superconducting material
so that we can engineer things
with existing technology,
rather than trying to build
whole new technologies
for microscopic constituents.
And if you look at the current
state of the art, the two
big models anyway,
in the gate model
there's Shor's
algorithm, quantum gates.
There are analogs of classical
gates in the quantum regime.
There's a well understood
theory about how
you can correct errors.
Kind of state of
the art in terms
of numbers of qubits
as you can see there.
That's because it's been really
hard over the last 20 years
to get the requirements to where
they need to be for that model.
Empirical scaling,
there is no data
because the systems
aren't big enough yet.
And this is just an estimate.
A lot of people are
looking at the gate model
to factor numbers.
So if you wanted to factor
a 768-bit number, which
is kind of a record, and you
used a certain kind of error
correction, of course
these things could change,
where you're using 1,000
physical qubits for one
logical qubit, but
you use redundancy
to do error correction, and
you start from these numbers,
and you double the number of
qubits every 18 months like Moore's
law, it could take
you quite a long time
to do Shor's algorithm.
So it's a major
undertaking, a major effort.
But tremendous progress has
been made, and amazing science.
And I'll talk a
little bit about that.
In quantum annealing,
which I'll describe now,
there's less known
about it theoretically.
There are some
experiments people
have done where you could
get an advantage from doing
things this way.
It's much easier to scale.
We're now exploring the scaling.
And maybe we could beat
everything in a few years
based on where we're at now.
And I'll talk about that.
So quantum annealing.
So rather than
turning calculations
into a series of gates and
doing mathematical calculations,
this is a little bit like
doing analog computing.
I'm going to use the physical
evolution of a physical system
to solve problems for me.
It used to be that people would
build an electrical circuit,
and the physics that govern
that electrical circuit
would solve
differential equations.
So there's a theorem in quantum
mechanics called the adiabatic theorem.
But I'll start off with this.
So imagine that this
is a rubber sheet
and I put a little ball there.
Annealing actually
comes from metallurgy. People
used to do this
with metals, right?
very high temperature
to let its atoms move around,
and then I cool it slowly.
And maybe they can find the best
positions, the lowest energy
configuration, and I
get a stronger sword.
What people did is
they said, nature
seems to look for these
low energy solutions.
There's a lot of
problems where I'm
trying to find some minimum
of some complex function
of interacting parts.
Maybe I can write
code that simulates
this physical dynamics, and that
was called simulated annealing.
What they do is you have
some physical system.
I'll talk about what
kind in a minute.
And you put it in this very
simple energy landscape.
So things are going to
roll to the bottom.
And then I gradually
deform this rubber sheet.
And what I'd like
is, as I deform it--
And it gets more
and more complex.
And you could have millions
of hills and valleys.
What I'm looking for
is the lowest valley.
And the problem here you can
see when people write simulation
codes or when they
actually do this
with real physical systems,
in classical systems
you tend to get stuck in what
they call a local minimum,
and it's not the best answer.
I'd like to be there.
And of course for real
problems with lots of variables
you could have an
energy landscape
with a horrendous number
of hills and valleys.
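The classical simulated annealing he mentions can be sketched in a few lines on a made-up one-dimensional landscape; the energy function, starting temperature, and cooling schedule here are all illustrative choices, and like the real thing it can settle into a local minimum rather than the global one:

```python
import math
import random

# Classical simulated annealing on a toy 1-D energy landscape with
# several valleys. High temperature lets the state hop over barriers;
# as it cools, uphill moves become rare and the state settles into
# a valley -- sometimes a local minimum, not the lowest one.

def energy(x):
    return 0.1 * x * x + math.sin(3 * x)  # hills and valleys

def simulated_anneal(steps=20000, temp=5.0, cooling=0.9995):
    random.seed(0)  # fixed seed so the run is repeatable
    x = random.uniform(-10, 10)
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.5)
        delta = energy(candidate) - energy(x)
        # Accept downhill moves always; uphill moves with
        # Boltzmann probability exp(-delta/temp).
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        temp *= cooling
    return x, energy(x)

x_final, e_final = simulated_anneal()
print(x_final, e_final)
```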
Quantum annealing is
based on this idea
that if this object or this
system is quantum mechanical
it isn't a little ball
that's in a particular place
in this energy landscape.
Under the right conditions I
can turn on quantum tunneling,
where I talked about
things ending up
on the other side of barriers
they can't pass through.
It also can be in
many places at once.
And so instead of
having somebody looking
through this vast
energy landscape
by running around from
one point to the next,
I could span the
entire landscape.
I could tunnel through these
mountains, and kind of ooze
into that lowest energy state.
And if I could do that
for large scale problems
I can encode
problems of interest
to people into this
kind of schemata
and solve some really
high value problems.
I'll talk about what that is.
So as an example here is like
a traveling salesman problem.
I'm a salesman.
I want to go to a whole
bunch of different cities.
This is a pretty famous problem.
How do I go to
every city exactly
once, come back where I started,
in the minimum distance?
This can be recast as an
energy surface that represents
all the different paths.
So instead of energy each
point on that landscape
represents the
path length, and I
want to find the
minimum path length.
And there's lots of them.
And this scales exponentially
with the number of cities.
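To make that exponential growth concrete, here is a brute-force traveling salesman search in Python; the city coordinates are invented for illustration:

```python
from itertools import permutations
from math import dist

# Made-up city coordinates, just for illustration.
cities = [(0, 0), (3, 0), (3, 4), (0, 4), (1, 2)]

def tour_length(order):
    """Length of the closed tour visiting the cities in the given order."""
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Fix the starting city and try every ordering of the rest:
# (n - 1)! tours, which is the exponential blow-up with city count.
best_rest = min(permutations(range(1, len(cities))),
                key=lambda rest: tour_length((0,) + rest))
best_tour = (0,) + best_rest
```

Five cities means 24 orderings; at 20 cities the same search is over 10^17 tours, which is why the landscape-minimization framing matters.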
One thing I'm going
to mention here
is in this adiabatic evolution
this is where the system starts
out, in that very simple
well, and then these
are all the other
higher value solutions.
And then as you evolve this
system you end up here.
That's the shortest
path, the next shortest,
the next shortest.
Another thing about real
world versus theoretical
is people tend to talk
about the global minimum.
But in the real
world, if I can do
better than anybody
else can, let's say
this represents finding
the lowest path here,
or this could be
the FedEx schedule.
If I find a better schedule than
anybody else in the world, even
one with low lying solutions,
that's worth a lot.
OK, this is a classic
physical system
people have thought
about to do this sort
of adiabatic evolution.
And it's called an
Ising spin system.
So imagine that you had a
bunch of little magnets,
or quantum spins.
So I have a magnet here.
Each magnet I can put a
little horseshoe over,
and it's going to tend
to orient north to south.
Right?
So I can bias that
magnet up or down.
So each magnet has a
little local field
that can act on it,
and then the magnets
have interactions between them.
So I can have the
magnets interact so
that they want to be the same
way, so the norths are both up.
I call that a ferromagnetic interaction.
Or I could have the
interaction such
that they want to be
the opposite direction.
And I can control the magnitude
of the local field that
makes it want to be up
or down a little bit more,
and I can control the
interactions between them.
Well, it turns out that if you
make an array of such magnets,
and I can tune all the
interactions between them,
and then the local
fields acting on them,
and then you ask the
following question.
For those local fields,
and for those interactions
between the magnets,
how will the magnets
arrange themselves to minimize
the energy of this system?
That turns out to be
a horrendous problem.
And the solution to that
scales exponentially
with the number of magnets.
And how hard the problems
are depends on the details,
like the interactions
and the local fields.
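The Ising energy and its brute-force minimization can be written down directly. A toy Python sketch, with made-up fields and couplings, using one common sign convention; the exhaustive search is exactly the exponential cost being described:

```python
from itertools import product

def ising_energy(spins, h, J):
    """E = sum_i h_i*s_i + sum_(i,j) J_ij*s_i*s_j for spins in {-1,+1}:
    local fields h on each spin, couplings J between pairs."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(j_ij * spins[i] * spins[j] for (i, j), j_ij in J.items())
    return e

def brute_force_ground_state(h, J):
    """Exhaustive search over all 2^n spin configurations --
    exactly the exponential cost that makes this hard at scale."""
    return min(product((-1, +1), repeat=len(h)),
               key=lambda s: ising_energy(s, h, J))

# Toy instance: a frustrated triangle -- every pair of spins wants
# to disagree, so not all three constraints can be satisfied at once.
h = [0.0, 0.0, 0.1]
J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}
ground = brute_force_ground_state(h, J)
```

At three spins the search is 8 configurations; at 500 it is 2^500, which is the regime where the hardware approach is aimed.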
But the interesting
thing about this
is people have thought about
this problem for a long time,
the Ising spin network.
You can see this formula here.
Each spin has its local
field h acting on it, and then
the interaction between
neighboring spins is J,
and that defines a problem.
So if we're given local
field and interaction
what's the lowest energy?
But how do I find that energy?
I can either jump
around that landscape,
like in the classical
case, and try to find it,
or what this represents
is a tunneling energy.
We actually have a knob
on our physical system
where we can turn on
quantum mechanics,
we can turn on
tunneling, so that it
can cut through these
mountains and valleys
and try to ooze into that
lowest energy solution.
And if you can do that maybe
you find better solutions
than anyone classically.
And if you could solve
this problem, solving it's
equivalent to a lot of other
hard, very high value problems.
And some of them, say at
Google, include things
like optimization that you do
in machine learning, and AI,
and protein folding,
things like that.
So there's a whole
range of applications
where if you could
solve that effectively
for a large number
of variables--
The trick is we have a
whole group at D-Wave
and you have people
here who figure out
how to translate the hard
part of important problems
into that formalism.
And these are some of the areas.
Pattern recognition,
protein folding,
finding information in big
data sets, bioinformatics.
And this was just a quote by
someone you know and love,
Hartmut.
But he introduced us
to the possibility
of using these
kinds of resources
for doing machine
learning, and this
is sort of maybe the future
of looking at big data,
and getting useful
insights, and people agree.
OK, so now how do
we build this thing?
So we're not using
little electrons spins.
This represents electrons.
They're not really spinning,
but it's as if they're spinning
and they have a little
magnetic moment.
OK?
And for a real spin you
could put a magnetic field.
These are like poles
of a horseshoe magnet,
and I can try to
orient it up or down.
And I can do that
for every magnet.
And then they can have an
interaction between them.
But of course, real electron
spins are microscopic,
and it would be really hard
to build a technology based
on these.
So I just told you that I
can build an artificial spin.
I can have a superconducting
ring that's very macroscopic.
I can run a current around it.
And that current going around
will create a magnetic field
up, or a magnetic field down.
You'll see two control elements.
I can put a magnetic field
into the body of the main loop,
and that will act like
this horseshoe magnet,
making that thing want to
be up or down a little bit.
So I have a control for that.
I have a second
loop with a couple
of things called
Josephson junctions.
I can put a current in.
And that will act like
a transverse field,
like a field perpendicular
to these spins.
And what a transverse
field does on a spin
is it can put it
into a state where
it's up and down
at the same time.
So when you put a
transverse field on a spin
it turns on quantum mechanics
and allows that spin
to tunnel between states, or
to be in two states at once.
So we have the local
field here, and this
is how we turn on quantum
mechanics in our loops
where we have an effective
transverse field.
You put these spins
together in an array.
This is then realized on a chip.
And you say where
are the circles?
I'll tell you about
that in a second.
And interestingly, it also looks
rather like a neural net.
You have objects
that are connected
to other objects that
have interactions.
And so rather like a
neural net structure,
and maybe good for learning.
The actual qubit here, when you
actually build real circuits,
you don't just have
circles like that anymore.
You'll see these sort
of horizontal lines,
vertical lines.
They're long,
skinny qubit loops.
Currents go one way.
You get a zero.
The other way they get a one.
When you put flux in that one
knob they can be in both states
simultaneously.
Most of this chip is the control
circuitry to do all of that.
And this is where you
get to the hard part.
You can build a
few qubits, and you
can do some really
great science,
but now you want to scale it up.
So when you build real
objects they're not identical.
Electrons are identical
to each other.
Photons are identical.
But even so when you build
control systems for those,
the control system
won't be identical.
So you're going to run into
this no matter what you do.
So in a real
manufacturing process
this qubit is not going to
be exactly like its neighbor.
So what do you do about that?
I want to have these
uniform properties
for these macroscopic
engineered qubits.
And we figured out
how to do that.
It took two years, but
it's been successful.
Each qubit has a whole
bunch of in situ, tunable
characteristics where
I can put flux again.
This is like a little inductor.
I can run current.
It puts a magnetic
field in a loop.
And just by putting
magnetic field
in a whole bunch of
loops around that qubit
I can tune out all
of their differences
and make engineered qubits
essentially identical,
like electron spins.
OK?
Now, to do that now you
run into something else.
You'll notice that
for each one of these
I've got a couple of wires here.
I've got a couple of wires here.
I've got wires here.
If you add the number of
wires going to this qubit,
say it has five or
six control elements,
and you say, OK, there's
12 wires in and out,
and I want 1,000 qubits.
Now I have 12,000 wires.
You're not going to put
12,000 wires around a chip.
Right?
Even Intel doesn't do that.
So what do you do?
Now I have to have a
routing system that somehow
gets these flux in
each one of these loops
to tune out these
variances in my qubits,
and it turns out that
when I was at TRW,
there's a mature, digital,
superconducting circuit
technology called single
flux quantum logic.
It operates at the
same temperature
you need to run these qubits at.
It sends chunks of
magnetic field around
with no dissipation.
So it's not going to
take the weirdness away.
And you can interact with
these circuits at scale
with kind of a flux router.
So when you end up
with a chip like this,
you can see here I've
drawn a schematic.
You have these
long, skinny loops.
There's a qubit. There's
four horizontal qubits, four
vertical.
This is our unit cell,
the basic building
block of our quantum processor.
Here you can see it here.
And right where they cross over
they interact with each other.
And then of course they
interact with the next unit
cell over there.
And you can see that most
of what's in this circuit
is control circuitry to get
flux into all those loops.
We have to first take
out all the variability
in all the qubits.
Then we have to be able to
put all the local fields
and the interactions between
them to define a problem.
And then I have to put
flux into that loop that
turns quantum mechanics on
to have it tunnel around
and explore the landscape.
And then after the
evolution is done,
and hopefully it's in
the lowest energy state,
I have to read
out all the qubits
to see how many
spins are up or down,
and that's the answer to
my minimization problem.
There's an actual chip.
This is a 512 qubit chip.
You can see each square is
one of those unit cells.
This is a lot of
test circuitry you
have to put on there,
lots of bond wires.
How do you build
something like this?
It takes a world class
fabrication facility.
So we took superconductivity
out of the R&D labs
and we took it to a
real production fab.
If you've never seen one,
they're really impressive.
This is Cypress Semiconductor
in Minnesota, top view.
This just shows some of
the circuitry on chip,
just one layer out of many.
And these are rooms
and rooms and rooms
of football fields
worth of $10, $20,
$50 million pieces
of equipment to be
able to build things at scale.
And we have the most advanced
superconducting circuit process
[INAUDIBLE].
Now, remember what I said.
For macroscopic
objects, the reason
they don't normally show
themselves in these weird ways
is because they're interacting
strongly with the environment.
So the first thing you
gotta do is you put the chip
in a protected environment,
the right kind of materials.
All these lines are
superconducting,
so there's no heat,
no dissipation.
You put a cap on top of
that, radiation shield.
Of course I have to
send signals down
to control the chip and its
evolution and read it out,
but that's coming
from room temperature,
so it has little
wiggles on it that
could cause the quantum
resources to get passed out.
So I have all kinds of complex
filtering for all the lines,
so only the signals I want
and not what I don't want.
Lots and lots of
layers of shielding.
And then that whole thing.
It's like, OK, that's enough.
So maybe I've shielded
some magnetic fields out,
and I pull a vacuum on
it, but that's not enough.
I have to go to very,
very low temperatures,
even lower than necessary
for superconductivity,
because the chip
sits on a surface
whose atoms are wiggling.
And again you could
pass out the weirdness
to those wiggling atoms.
So we take the chip down to a
temperature that's 100 times
colder than interstellar
space, at 20 millikelvin.
And now you can see the
whole thing buttoned up,
all these highly filtered lines.
And again, guess
what you get to do?
Some more shielding.
And you want to make sure that
the electrical signals in here
aren't hot.
So you might do
optical isolation.
In addition to which, if there's
ambient fields, the Earth's
magnetic field,
which is quite weak,
we've developed a method using
devices called superconducting
quantum interference
devices to measure
the magnetic field in three
dimensions, create canceling
fields, and get the field
that this thing sees
to 50,000 times less
than the Earth's magnetic field.
All of these were innovations
we had to do really fast,
and they all work.
And there you go.
Now, you can see that this is
the actual system all buttoned
up, but it's in a big box.
There's the big box.
What's out here is what sends
the control circuitry in,
the pumps to keep
the thing cold,
and the big box itself
is stainless steel,
and copper, all that to keep
radiation, and radio waves,
and all that stuff
from the outside world,
again, to isolate, to have those
robust quantum effects taking
place.
Now, I often get asked why
is this 10 by 10 by 15?
Is it because the thing's that
big like the old supercomputers
with banks and banks of relays.
No.
It's so four physicists
can fit inside.
As we go eventually this
maybe could be a 19 inch rack.
The actual dilution refrigerator
is something like that.
And this is, you
often have people
inside working on
things, so it's
more convenient
in the short term.
This is the one
at NASA Ames that
got installed that you
guys are using, and a lot.
I'm really gratified about
how much use this is getting.
And the uptimes
have been fantastic,
something like 99% or 100%.
And another box
here at the USC ISI.
There's Daniel Lidar
and Dean Yortsos.
So cool stuff.
And lots of results
coming out of both labs.
OK, something about quantumness.
So often it's asked how
quantum is this thing?
So every experiment done
to date has been consistent
with quantum dynamical models.
So when you want to look at
quantum effects in a processor
you write down the
equations for those rings
and their interactions,
and all that.
You put it through a
Schrodinger equation.
You also add a little
bit of environment
because it's not perfect.
And you get results.
And then you compare
experiment to those results.
What we've typically done is
we've taken, in a processor,
domains in that processor,
and checked them
for these quantum
mechanical properties.
And every time we
do that they meet
all the quantum mechanical
equations and not classical.
Here's an example from
our "Nature" paper.
I don't have to go into it, but
that's the quantum predictions.
That's the classical
predictions,
and everything lies on that
for an eight qubit unit cell.
Recently we looked
at entanglement
in an eight qubit
unit cell, which
is I believe the
largest solid state
demonstration of
entanglement ever done.
And again, we see that.
Now, the question
is if we look at it
at a lot of different
unit cells and domains
within a large processor,
what about entanglement
over 500 qubits?
So I'll tell you this is going
to be a challenge for everyone
in the field.
Nobody can write down the
equations for 512 qubits
and write down what the
answer that you should get.
And even if you could do that,
the number of experiments
you would have to do to
confirm that is something
like 2 to the 2n, where
n's the number of qubits.
You can't do enough measurements
to fully characterize
the quantum state of
very large systems.
Everyone's going
to run into this.
So what people
are looking for is
can we see signatures
of quantum effects
in these larger
scale processors,
and at the end of the day is
it beating everything on Earth
for some set of applications.
And that'll be that.
But this is going to
be true for everybody.
OK, and lots of good results
also coming out of USC.
One of which they're looking
at error correction now.
Some good stuff from
all of our partners.
And I'm going to talk a
little bit about where
are we at with this processor.
We're at a scale where we
can start playing with it--
You guys are-- to
see what it does.
Again, I'll remind
you basically what
we do is we have some
function of a string
of binary variables.
So some function.
It could be some
complex function
you guys use in
machine learning.
And it's a function
of 512 zeros and ones.
And I want to find what set of
zero and ones minimize that.
It could be an
error function when
you're training a neural net.
And I turn that problem into--
you put a local bias on a spin.
I have interactions
between the spins.
Now I turn on my
quantum tunneling term
so it can explore that complex
landscape and hopefully ooze
into the right answer.
And at the end of that I should
have a bit string, up and down
of all those, that
I can put into that,
and I'll get a very
low value, lower
than I can do any other way.
And that's the idea.
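The translation from a function of binary 0/1 variables into spin biases and couplings is the standard substitution x = (1 + s)/2. A minimal Python sketch; the example matrix Q is made up:

```python
def qubo_to_ising(Q):
    """Rewrite minimizing sum_ij Q_ij*x_i*x_j over x in {0,1}
    as an Ising problem over s in {-1,+1} via x = (1 + s) / 2.
    Returns local fields h, couplings J, and a constant offset."""
    n = len(Q)
    h = [0.0] * n
    J = {}
    offset = 0.0
    for i in range(n):
        for j in range(n):
            q = Q[i][j]
            if q == 0:
                continue
            if i == j:
                # q * x_i = q * (1 + s_i) / 2
                h[i] += q / 2
                offset += q / 2
            else:
                # q * x_i * x_j = q * (1 + s_i) * (1 + s_j) / 4
                key = (min(i, j), max(i, j))
                J[key] = J.get(key, 0.0) + q / 4
                h[i] += q / 4
                h[j] += q / 4
                offset += q / 4
    return h, J, offset

# Made-up two-variable objective: x0 - 2*x0*x1 + 3*x1.
h, J, offset = qubo_to_ising([[1.0, -2.0], [0.0, 3.0]])
```

After this rewrite, the bit string minimizing the original function is just the sign pattern of the lowest-energy spin configuration.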
Interestingly, just from the
cool quantum magic side of it,
at the very beginning of this
when you turn on that tunneling
term, that array of magnets
is in a superposition
of all possible permutations
of that lattice.
So what that means is
all the spins are up,
and they're all down, and half
are up and half are down,
and they alternate,
up, down, up, down.
And there's 2 to the
500 permutations,
and they all exist
simultaneously,
which is mind bending.
And again, just
reminding you, so there's
the spins, four
horizontal, four vertical.
They interact.
They're like little magnets.
I put a horseshoe on
each one to kind of make
them want to go up or down.
And if that's all that's
going on it's easy.
They're all going to
line up, north to south.
But now you add
some interactions
where they want to be like
some of their neighbors,
or unlike other
neighbors, and it
becomes what's called
a frustrated system.
If you're interacting with a
whole bunch of your neighbors
and they're all telling you
different stuff, what do I do?
And the same thing's happening
to all of them, wherein
you get this
incredible complexity.
Lots of variables interacting.
How do you satisfy all
of those conditions
simultaneously, maximally?
And this just shows
you here's the problem.
So this represents the problem.
You kind of introduce
the problem, these fields
and couplings.
And this represents turning
on that tunneling term where
it spans the whole space.
I'm not going to
go into details,
but the bottom line is you
start off where it's everywhere
at once, and there's
not much of a problem.
It's everywhere.
And gradually this is where it's
oozing around in the landscape,
and the problem starts making
itself known, and at the end
you're not tunneling
around anymore.
You're in the bottom.
You're not everywhere at once.
You're in the solution.
So the question is when you
do this-- Now, keep in mind,
this is a first
generation processor.
There's a lot of
imperfections with it.
It's sort of hot off the
press for a few months,
and you say, OK,
let's compare it
to some off-the-shelf
solvers that
solve these kinds of
optimization problems
where they try to find the best
solution amongst a large set.
OK, so we took a single
core, Intel Xeon,
and there were these
three different kinds
of methods for searching
through those complex energy
landscapes.
And we ran these, and
you guys ran these.
The result was
really encouraging.
Right, so when we first
saw this this represents,
without going into
all the details,
this is kind of the
time to solution,
and how it scales with
the number of variables.
So it's this how
does it scale thing.
And also the point here is
the absolute time to solution,
say at 509 qubits.
So what was remarkable is that
this first processor we saw,
say, with some of these
commercial solvers something
like 35,000 times faster to the
best solution on our little 500
qubit chip.
Right?
Now if that was the
end of the story
and it did that for everything
we'd be done, and we'd be you
know.
Everyone would want one.
But the reality is these
are general purpose solvers
and ours is this special
hardware designed for this.
So if you did a
lot of iterations,
even with these
existing solvers,
maybe that 35,000 comes down
to 100 or a couple hundred.
Still really impressive,
because people
spend a lot of money
for a factor of two,
but this is really encouraging.
So what this tells me is
we've built a real computer.
We know that it has quantum
dynamics going on in it.
It's finding the best solution
out of 2 to the 500 states.
This is a highly
nontrivial calculation
in just a few years.
And the hardware that it's going
up to is incredibly impressive.
If you look at that processor,
the Intel processor,
that's 50 years of development
and a trillion dollars.
OK, but then there were some
other tests everybody's aware.
So some good people got
together, Matthias Troyer,
and some guys, Martinis at UC
Santa Barbara, and you guys.
And the idea was, all
right, let's take the best
special purpose classical
processors and algorithms
and see if we can do
better, and go head to head
with this D-Wave
optimization machine.
And so again, we're
solving this problem
of having spins with the
local field and interactions
between them.
There was a problem set chosen
where the interactions were
kind of random, and all the way
on or off to make it simpler.
The competition, and
just pointing this
out, the way this works is it
doesn't see all the solutions
simultaneously, quantum
mechanically thing,
instead you have a
spin lattice, and it
will turn spins over and
look at the energy, kind of.
And it'll keep doing
that really fast.
And every time it
turn spins over
it kind of looks at
the energy, and keeps
moving those spins around until
it comes to the best solution.
So interestingly, if you look at
something like this Intel Xeon,
it's about 5 billion flip
evaluations per second.
Pretty amazing.
So these are very
powerful chips.
These GPU's actually are
in the Titan supercomputer.
OK, so an experiment
was done by this group,
ran hundreds of
millions of experiments
on the D-Wave
processor, which is nice
because it's working robustly
even though it's early stage,
and billions of simulations,
and classical and quantum
Monte Carlo code.
OK, so preliminary results.
What this shows you again is
sort of the time to solution.
This is kind of a median
time to solution averaged
over a lot of problems as
a function of problem size.
And what you see here
are the blue lines
are the D-Wave machine.
And you see the
other lines that are
various forms of
using these highly
optimized classical solvers.
And you'll see here obviously
we're doing much better.
There's kind of a
generic solver, which
is something called a
Metropolis algorithm that's
flipping these spins, subject
to a special algorithm that's
like simulated annealing,
doing this classical search
of those energy spaces.
We do better, say, in terms
of the time to solution.
And then you can see
that highly optimized.
This was playing with how you
actually use that processor,
knowing the details
of its operation.
Then you can get highly
parallelized things.
You take a couple of CPU's,
or eight CPU's I think it was,
in parallel.
So you can parallelize
some of these operations.
And you can see, OK, it's
competitive with what
the D-Wave thing is doing.
And then if you take GPU's with
these 2,600 cores or something
I can exceed that for
some set of benchmarks.
Now, more importantly what
people look at is they say,
well, I don't care so much
about the absolute time.
I'm sure D-Wave, there's
engineering overheads.
How long does it take to
program the problem,
read out the problem
every time we
come at it from a new
version that gets faster?
What people tend to
look at is the scaling,
like I said before,
and complexity.
So if you get a smaller
slope, say with some quantum
processor, and you extrapolate
out to 10,000 variables,
then it's game changing for a
whole range of applications,
and you can port to this.
Now, what we know is
that this scaling is not
fundamental to the
physics of our device.
So what's very useful
about these studies is,
OK, let's take the best
stuff out there, and let's
learn what is it
about our processor
that enhances performance
and what robs performance?
We know right now that
most of the problem
is misspecifying the problem.
So the problem is
putting the local fields
in the interactions between
our qubits, and they're off.
We have some systematic errors
I'll talk about quickly.
And we also have some noise.
And so we expect that the
scaling will get worse
as you get bigger because
these errors accumulate.
So there's no evidence
that fundamental physics
is limiting it.
We also found that the
random problems that
were used to do this
benchmarking intrinsically
shouldn't be that hard.
So if you look at these
energy landscapes,
when you have lots
of these big valleys,
and there's lots of
ways to get to them,
even if you're a little
ball rolling around
there's lots of these deep
valleys you can fall into,
actually lots of ground states
that are sort of equivalent,
or paths to them.
So there was a
Professor Katzgraber,
the paper's on the
archive about how
these kinds of problems
with the kind of structure
we have in our processor
shouldn't be that intrinsically
hard for classical solvers,
because those energy
landscapes lend themselves
to classical solutions.
So this was
interesting, and this
was learned kind
of by doing this.
We also ran other
kinds of problems
where we tried to
engineer the landscapes so
that those deep valleys are
surrounded by mountains sort
of.
And then we find
different scaling
where we look much
better in those cases
where tunneling
should be important.
Now, these are all preliminary
results, but what's
exciting is we're starting
to learn what limits
our processor, how to
make it more powerful,
and we're showing after
just a few years, instead of
50 years and a trillion
dollars, and millions
of man hours of
algorithm development,
we're matching kind of
state of the art stuff.
And this is really exciting.
And the scaling will only
get better with improvements.
Some evidence to
that fact is we have
these different
generations of processors,
and every generation we improve
the precision to which we
can specify our problems.
So here's two generations,
V5, for Vesuvius.
We name our processors
after mountains.
And you can see the scaling, the
median time to solution for V5.
We made some improvements
on V6, and you
can see the scaling
starts getting better,
and we expect as we go
to V7, and Washington,
and future generations, as we
increase precision and reduce
noise, and all the factors
that could impact performance
it should just get
better and better.
Another thing too is the pace of
discovery [INAUDIBLE] going on.
We have our 128
qubit generation.
When we went from
there to Vesuvius,
typically in a cycle of
a couple of years people
go for a factor of
two, say Moore's law.
We got a factor of
300,000 speed up
from one generation
to the next, and it
was a complete
architectural change.
We didn't just expand
the number qubits.
We looked at everything
that affects performance,
programming it, reading it
out, the size of the qubits,
energy scales, anything that
might affect quantum dynamics,
and we've got this kind of
speed up in a couple years.
So really encouraging in terms
of the pace of discovery.
So next steps.
We know now the things that
are limiting our performance
on this current
generation of processors.
We know that we need
to increase precision.
You have to define
the right problem.
We might be getting the right
answer to the wrong problem,
and that skews the scaling.
We also know that we
need to reduce noise.
We also know that we have
to increase something
called our qubit energy scales.
The bigger the energy
scales of the qubits
the less fluctuations
play havoc with them.
Reducing the temperature,
I'll talk about that.
The other thing is to make
this thing easier to program.
Right now each
qubit's just connected
to six of its neighbors, right?
That's not that complex.
If I'm a variable, and I
want to connect to 20 others,
I have to use some kind of
complicated embedding for me
to connect to you.
And I'd have to use a lot
of intermediary qubits.
So we know that both to increase
the complexity of the problems
we can instantiate and to make
them much easier to program
we want to increase
the connectivity.
We also were band limited on
how fast we could do this search
process with quantum tunneling.
We're going to be
speeding that up.
And in addition to all of
that, we're scaling up.
So I'll give you an idea
of the kinds of things
that limit precision when you
build large scale circuits.
Remember I told you that
when you put a magnetic field
into these little loops both
to control the whole quantum
computing process and also to
calibrate all these things,
and take out their variances.
When you put a magnetic
field into one sometimes
it'll spill over into another.
Why?
Because when you look at what
these processors actually
look like at scale you can see
here lots and lots of layers,
currents running
down, magnetic fields.
It took years to build programs
to do 3D modeling, lots
and lots of generations
to figure out
how to shield and isolate just
to get where magnetic fields
where we want and
not anywhere else.
So real quantum circuits are
going to be very complex,
and it's going to require lots
of engineering, and resources,
and iterations.
We're making our
ability to put flux
in those things more precise.
The flux goes in
in little units,
and we're making those
units smaller
by changing our digital
to analog converters
in our superconducting
circuitry.
We had some knobs.
This was the knob
on our coupler.
On the previous generation
that you guys still have,
when you turn a knob, a little
turn in the knob and you
get a big change in the outcome,
that makes it difficult.
If you have a light switch
that turns a little bit,
and light turns on a lot,
it's hard to control.
In another version that
we built, we confirmed we
now have a flatter
response in that coupling.
So it makes it that much easier
to set the couplings correctly.
Additionally, we found one
of our biggest problems
in specifying problems.
When you have a
real physical spin
and I put a magnetic
field on it,
it doesn't change the
value of the spin.
We don't have real spins.
We have artificial spins.
So when I put a flux
in this loop that
acts like a magnetic
field this loop
will generate a little
current to kind of push
that field back out.
And when it does, that means it
changed the value of the spin,
and it also percolates
out through its couplers
to its nearest neighbors.
So we call this
magnetic susceptibility.
You put a magnetic field
on, it reacts a little.
We now have a design to
cancel that out completely,
and that'll have a big
impact on precision.
I mentioned this, energy
scales, smaller qubits.
You want big energies compared
to thermal fluctuations
and noise.
We can make bigger energy scales
by smaller qubits and smaller
junctions, and we're already
building the next generation
that includes those.
Another thing is temperature.
At the end of your evolution
here's all the energies.
So remember, each
one of these energies
represents a different
configuration
of that spin lattice.
That's the lowest
energy one, but we
operate at 20 millikelvin.
And as cold as
that is, you still
have some thermal
energy to knock you
into these higher lying states.
And the extent to which you're
knocked into these higher lying
states depends exponentially
on temperature.
So small changes in temperature
give you a big bang,
give you a lot more probability
to be in the low-lying states.
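That exponential temperature dependence comes from the Boltzmann factor exp(-dE / kT). A quick Python check, using an illustrative 1 GHz energy gap rather than an actual D-Wave specification:

```python
import math

def excitation_weight(gap_ghz, temp_mk):
    """Relative Boltzmann weight exp(-dE / kT) of a state lying
    gap_ghz above the ground state (converted via E = h * f)."""
    h_planck = 6.62607015e-34   # Planck constant, J*s
    k_b = 1.380649e-23          # Boltzmann constant, J/K
    d_e = h_planck * gap_ghz * 1e9
    return math.exp(-d_e / (k_b * temp_mk * 1e-3))

# Illustrative 1 GHz gap: cooling from 17 mK to 10 mK suppresses
# the excited-state weight by a sizable multiplicative factor.
boost = excitation_weight(1.0, 17.0) / excitation_weight(1.0, 10.0)
```

Because the suppression is exponential in 1/T, each further step down in temperature multiplies the ground-state advantage rather than adding to it.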
So we're going to be
going from 17 millikelvin
to 10, which should
give us a big boost.
And that's in the
shorter term, which
we think we can do reasonably
soon, and eventually
go to 5 millikelvin.
And we should get a dramatic
increase in the ground state
probability, and the lower lying
answers, the better answers.
Noise, so we've been
asked this a lot.
So I've often heard
the comment that D-Wave
doesn't care about noise.
They want to put a
lot of noisy qubits
together and see what happens.
This is not true.
Here's the philosophy.
So people working on the
gate model, even to get
started, they have to
have very low noise even
to build few qubit systems
that can do anything,
because they operate
in this regime
where they're using multiple
higher lying energy states
and interactions between them.
When you have a bunch
of energy levels
everything always wants to go
to the ground state, the lowest
energy state.
So it's thermodynamically
unstable.
You're doing non-equilibrium computing.
You're driving things with
microwaves or light waves.
In the adiabatic
approach you always
want to stay in
the ground state.
You're in the ground state.
You evolve that
landscape, and you
want the ground state
to evolve with it.
So the dynamics
are slow, and you
tend to relax to the state
you want to be in anyway.
It's kind of natural
error correction.
And so it's thermodynamically
more stable against this.
So you don't need to have
these unprecedented low noise
levels to get started in
the adiabatic process.
So the decision we made
was everybody in the world
is working on low noise.
Right?
We don't need to.
We can get started.
But what nobody's working
on because they can't yet is
figuring out how to
build large scale
circuits to control
all their stuff.
So let's do that.
It's complementary
to their effort.
And in the last 10 years,
with about 500 man-years' worth
of materials
development, they've
had an improvement
by a factor of 1,000.
So now that we have
a working processor,
and because it's less
sensitive to noise,
and we have quantum
dynamical things,
and we're doing
real calculations,
we can leverage all
the learning that's been done,
and start incorporating it
in our processors
to eventually get
something like that.
And it's not so exotic
what was discovered
in terms of how to do this.
You can't just plop it in.
There's parametric changes.
But we know where to go.
And likewise, we've worked
on these large-scale
superconducting circuits
for the last 10 years,
and that's something
the rest of the community
can leverage from our end.
We do a lot of this
work at JPL as well.
Just want to say hey to them.
And so where are we?
We've built working
quantum processors
in a scalable architecture.
Even early generations
are showing promise,
matching state of the art
processors for some problems,
and on some problems
exceeding them.
We're starting to
really understand
what makes us powerful and what
robs performance, rapidly increasing
our capabilities
with each generation,
developing real
world apps, and that
helps us understand the
architecture, connectivity,
and techniques we need.
It's fantastic that we have
an ever expanding group
of brilliant people
to think about how
to use this, and obviously
visionary partners.
So stay tuned.
We've been on this
Moore's law like scaling
for a better part
of 9 or 10 years.
We don't expect that to abate.
And this is what's
in the lab right now.
AUDIENCE: So since you
are so rapidly developing
the processor technology
and everything else,
is this more of a
sales model or a lease?
Like so is Google stuck
on like the old generation
of the processor or do we
see the improvements from--
ERIC LADIZINSKY: Oh, yeah.
Of course.
Of course you will.
Yeah.
So obviously you started with
earlier generation processors.
You learn how to use them.
We understand.
And from the users' end we're
getting a lot of insight
into what you need,
you know, whether it's
connectivity or all
that other stuff.
AUDIENCE: It sounded like one of
the main engineering challenges
was making sure that
these like SQUID machines
are all essentially
functioning identically.
But I didn't
understand why that's
so important for
the physical model.
ERIC LADIZINSKY: Oh, well,
so these qubits in order
for them to sort of evolve
together, they can't be--
So I'll give you one example.
Each qubit, you can think of
having two states separated
by a double well, a
double well potential.
When you lower that barrier
they tunnel, for instance.
You want them all to
be tunneling the same.
So let's say their barriers
were different heights.
Some of them might go
into a localized regime.
And they're coupled
to other ones.
And they drag those into
a classical-like state.
So they all have
to evolve together.
So it's critically important
that we know they're
all uniform
in their characteristics,
to within some percentage.
Otherwise it's a complete mess.
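To see why non-uniform barriers matter so much: tunneling amplitudes typically fall off exponentially with the barrier (a WKB-style scaling). In this toy model, with an arbitrary illustrative scale factor, even a 10% spread in barrier heights produces a several-fold spread in tunneling rates, so some qubits freeze out well before their neighbours:

```python
import math

def tunneling_amplitude(barrier, c=10.0):
    """Toy WKB-style estimate: the tunneling amplitude falls off
    exponentially with the square root of the barrier height.
    c is an arbitrary scale factor chosen for illustration only."""
    return math.exp(-c * math.sqrt(barrier))

# A 10% spread in barrier heights around a nominal value of 1...
spread = tunneling_amplitude(0.9) / tunneling_amplitude(1.1)
# ...translates into a much larger spread in tunneling rates.
print(f"ratio of tunneling rates: {spread:.2f}")
```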
And so one, we had to develop
this in situ tuning capability.
We measure everything,
and then there's
a complex calibration
routine that
puts flux biases
in all those loops.
Because they're superconducting,
once you flux bias it,
it's there forever.
You don't have to redo it.
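The calibration routine described here might be sketched, very loosely, as a loop that measures each qubit's residual flux offset and programs a compensating persistent bias. The measure_offset and apply_bias callbacks are hypothetical stand-ins for a device interface, not a real D-Wave API:

```python
def calibrate_flux_biases(measure_offset, apply_bias, n_qubits,
                          tol=1e-3, max_iter=20):
    """Hypothetical in-situ calibration sketch: for each qubit, measure
    its residual flux offset and apply a compensating persistent bias
    until the offset is within tolerance."""
    biases = [0.0] * n_qubits
    for q in range(n_qubits):
        for _ in range(max_iter):
            offset = measure_offset(q)
            if abs(offset) < tol:
                break
            biases[q] -= offset          # cancel the measured offset
            apply_bias(q, biases[q])     # program the persistent flux bias
    return biases
```

Because the compensating flux persists in the superconducting loop, the loop above only needs to converge once per cooldown rather than before every run.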
SPEAKER 1: Thank you Eric so
much for this very informative
talk, and thanks to
all of you for coming.
And stay tuned.
There will be a series
of interesting talks
to come in this quantum
AI speaker series.
