>> All right. I see a couple of
people in the audience who have
already seen a good
number of these slides.
In fact, one in the audience who I
stole a number of these slides from.
Thanks, Chris. So in the meantime,
if you've seen these slides,
and I know a little bit
later they're going to
get a little dry in places.
I suggest you go to
the Microsoft Blog on Monday.
We recently put out a note on
some work that we've
been doing recently with
Case Western Reserve University
on quantum inspired algorithms.
In particular, using
quantum inspired algorithms
to improve magnetic
resonance fingerprinting.
So we can produce MRI scans with
higher resolution in a fraction of
the time they currently take.
Now, obviously, this is
research and so by the
time it gets to market,
maybe some years to come,
but this really has
the opportunity to
change the way we do healthcare
diagnosis in the future.
Also on the blog, you'll find
from a couple of weeks ago,
a collaboration with
Willis Towers Watson,
a major insurance firm,
also using quantum
inspired algorithms to
improve the speed and
efficiency of risk modeling,
which of course, is one of
the major components in insurance.
So quantum inspired algorithms,
well, they're real, and
they're here today.
So my name is Brad Lackey.
I'm a Principal Researcher.
I will be your substitute
lecturer for this talk.
Matthias Troyer was really
hoping to give this himself,
but unfortunately, he's
out of the country today,
so I'll be covering this.
I'm going to talk about chemistry,
but on a day-to-day basis,
I work in quantum
inspired optimization.
So since questions came up,
I'll take a little bit of
time at the beginning to
talk about this rather
than at the end.
There are multiple modes
of quantum computing.
Models of quantum computing is
a more traditional way of saying it.
What I'm going to talk
about here is really
these Chemistry applications are
based on the Gate model approach,
which is really the focus
of this entire workshop,
but there are other models.
So for instance, if you
study cryptography, there are
some popular cryptographic
protocols such as
multiparty computation
or blind computation.
There are quantum versions of those,
but they're not built on
the gate model like we're
going to see today,
but they're built on
another model of computation
called measurement-based
quantum computation.
In that setting, you don't
have a circuit like we're
going to see a lot.
You have what's called a cluster
state or just a large
entangled state,
and you make small measurements
on pieces of it,
and adaptively tune those
to push the computation
in the direction you want to go
and so it looks very different.
Another model is
adiabatic quantum computing,
which is really what I specialize in.
It's also a universal model.
This is based on
old ideas from physics,
the adiabatic theorem, which
came around the 1920s,
so it's been around for 100 years.
In this case, you prepare
a system which you
understand everything about.
So it's easy to construct,
and then slowly transform
that system into one
whose lowest energy configuration
solves the problem of interest.
The adiabatic theorem says,
as long as this transformation
moves slowly enough,
the system will stay in
that lowest energy configuration.
So at the end, you've
solved the problem.
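The picture just described can be sketched numerically. Below is a toy single-qubit illustration (my own example, not from the talk): interpolate H(s) = (1 - s)·H0 + s·H1 from an easy Hamiltonian to a "problem" Hamiltonian and track the instantaneous ground state, which is what a slow enough adiabatic sweep follows.

```python
import numpy as np

# Illustrative sketch (not from the talk): adiabatic interpolation
# H(s) = (1 - s) * H0 + s * H1 for a single qubit.
# H0 = -X has an easy-to-prepare ground state |+>;
# H1 = -Z encodes the "problem", whose ground state |0> is the answer.
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
H0, H1 = -X, -Z

def ground_state(H):
    """Lowest-energy eigenvalue and eigenvector of a Hermitian matrix."""
    vals, vecs = np.linalg.eigh(H)
    return vals[0], vecs[:, 0]

# Sweep the schedule parameter s from 0 to 1; the adiabatic theorem
# says a slow enough sweep keeps the system in the instantaneous
# ground state, so at s = 1 we read off the problem's solution.
for s in np.linspace(0.0, 1.0, 5):
    energy, state = ground_state((1 - s) * H0 + s * H1)

# At s = 1 the ground state is |0>: all probability on the first entry.
print(np.round(np.abs(state) ** 2, 6))
```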
In many ways, adiabatic
quantum computing isn't
as popular today because
the underlying physics has been
around for 100 years, and so
there's a huge amount of
machinery to simulate
this type of physics.
So we have very,
very good classical algorithms
we can run on computers
today that will simulate
adiabatic evolution.
So if I can take
an adiabatic quantum algorithm,
and optimization algorithms are
really easy to cast in
the adiabatic model,
then we can simulate these
on computers we have today.
So we actually aren't
running quantum algorithms,
but we're running simulations
of what would happen.
Sometimes, they're not
perfect simulations.
They're analogous simulations
of what would happen
in solving problems
using these algorithms.
So that often goes by the name of
quantum inspired optimization.
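As a rough illustration of the flavor of these methods, here is a minimal sketch of classical simulated annealing on a tiny Ising objective. The toy problem and all numbers are my own; the actual quantum inspired solvers are far more sophisticated.

```python
import math
import random

# Toy Ising objective E(s) = -sum_{i<j} J[i][j] * s[i] * s[j],
# spins s[i] in {-1, +1}. This "frustrated triangle" is my own example.
J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): -1.0}

def energy(spins):
    return -sum(j * spins[a] * spins[b] for (a, b), j in J.items())

def anneal(steps=2000, seed=0):
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(3)]
    current = energy(spins)
    best, best_e = list(spins), current
    for step in range(steps):
        temp = 2.0 * (1 - step / steps) + 1e-3   # linear cooling schedule
        i = rng.randrange(3)
        spins[i] *= -1                           # propose a spin flip
        proposed = energy(spins)
        # Metropolis rule: always keep downhill moves, sometimes uphill.
        if proposed <= current or rng.random() < math.exp(-(proposed - current) / temp):
            current = proposed
            if current < best_e:
                best, best_e = list(spins), current
        else:
            spins[i] *= -1                       # reject: undo the flip
    return best, best_e

spins, e = anneal()
print(spins, e)  # ground-state energy of this triangle is -1
```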
If you read through our blog post,
you'll see we have
a lot of partnerships.
Almost all of our corporate
partnerships are based on that.
Optimization is a big deal
in the industry,
huge deal in the industry.
So any type of speed-up in solving
highly non-convex large
optimization problems
are going to have real applications.
But that's something completely
different because
adiabatic optimization,
adiabatic quantum computing is
a hard way to think about computing.
We've grown up with computers
in the von Neumann architectures
that we're familiar with today,
where you have a register and you
apply some gates to that register,
and that makes sense
to us intuitively.
So developing algorithms there is
much easier, since we have
that intuition to draw on.
Even though developing
quantum algorithms itself is a very,
very hard task, developing a model
that's completely alien
to us is even harder.
So there's really not a lot
of adiabatic quantum algorithms
out there because they're very,
very challenging to come up with.
So instead, we're going to
focus on some gate model stuff.
In particular, I'm
going to talk about
a collaboration that we had with
Pacific Northwest National Labs in
integrating their
quantum chemistry package,
NWChem, with the Quantum
Development Kit, the QDK.
Let me get going. So
quantum programming,
well, that's what
it's all about as far
as I'm concerned. All right.
So we want to develop algorithms,
but we want to run those algorithms.
So to run these algorithms,
we need a programming
language to work with.
Here, I'm going to talk about Q#,
that being the topic of the day.
So really, what I'll
focus on today is
using an implementation
of an algorithm in Q#
to count the number of gates and
qubits that are necessary to
run the algorithm to get
an understanding of when
quantum computers come around,
how long is this
really going to take.
So initial estimates show
a classical algorithm is just
completely intractable.
But if you take
the naive quantum algorithm,
just write it out and start
counting the number of gates.
Those are huge numbers,
that would just make it totally
impossible even for
quantum computers.
But as we refactor these algorithms,
we find better ways
of implementing them.
We find better tricks for
computing pieces of them.
We reduce the resource costs.
We can eventually get these much
more efficient implementations,
which make these things tractable
when sufficiently large
quantum computers come around.
So resource estimation is
a good way of looking forward,
even though we're not
necessarily going to run
these on quantum computers today,
to estimate what the impact of
these algorithms will be on
industrial workflows in
10 years or 15 years
or whatever time frame
we're really talking about.
So the problem that keeps me
awake at night is this one.
Debugging quantum algorithms.
I have enough trouble
debugging classical algorithms.
So Chris was chuckling
because this is his slide.
He printed off this output,
but this is exactly what I see
whenever I write code and run it,
is just this complete mess.
So quantum algorithms,
this is a real challenge.
Debuggers, as we
understand them today,
you use the same basic tools,
you set a breakpoint,
you look in there, you
see what the state is,
and is that what you expected?
Well, in quantum algorithms,
you can't do that.
If you examine the state,
you've destroyed the coherence
of the entanglement,
all of the nice quantum properties of
that state that you want to
actually complete the algorithm.
So you may see some
statistics at this point,
but that might not give
you a clue about where
the state is moving to.
So the concept of using
breakpoints and debugging in
that fashion just
isn't going to work.
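A tiny numerical illustration of why "break and peek" fails (my own example): two states that differ only in a relative phase produce identical measurement statistics, so sampling at a breakpoint can't tell them apart, and actually measuring would destroy that phase anyway.

```python
import numpy as np

# Two different single-qubit states:
# |+> = (|0> + |1>)/sqrt(2)  and  |-> = (|0> - |1>)/sqrt(2).
# They give IDENTICAL statistics in the computational basis, so a
# "breakpoint" that samples the state learns nothing about the phase.
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

def measurement_probs(state):
    """Born rule: probability of each basis outcome is |amplitude|^2."""
    return np.abs(state) ** 2

print(measurement_probs(plus))   # [0.5 0.5]
print(measurement_probs(minus))  # [0.5 0.5] -- indistinguishable
```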
Well, what else is there?
Yeah. All right.
I lose a lot of sleep over that.
But what I'm going to
talk about today is
actually bringing applications
today using quantum algorithms.
So this is really what I do on
a day-to-day basis because
debugging stuff is too hard.
So we've already seen
the notion of a stack indeed.
Today, we're going to
talk about, I guess,
what's mostly hovering at
the top of that top
squiggly arrow there.
I'm going to talk about chemistry,
but we're going to have a lot of
other algorithms that we
talk about this afternoon,
and really we're talking about
how do we implement them,
how do we create code that would
push them down into
the control software,
and below that we're really not
going to discuss that too much.
That's some complicated engineering.
That's really far away from
what I do, so I'm afraid.
I can't really comment on that.
But what applications can we
see for quantum algorithms?
So quantum chemistry looks
particularly attractive because of
these numbers along the bottom.
When we run these resource counts,
we think that, "Yes,
we can get real payoff in
the 100 to 200 qubit range."
So putting some numbers forward,
we've mentioned the IBM Q,
which is the quantum machine
that they have in New York.
The original one that was
on the web had five qubits.
The new one has 49 in
an interesting topology.
I don't know exactly what
the next generation is going to be,
but we're already starting to
see this become realistic.
A couple of years ago, Google
promised the Bristlecone chip,
which was to have 72 qubits
in a grid-like topology,
but we haven't seen that appear yet.
IonQ over in Maryland is
promising 128 qubits.
Those are ions. I don't
know how they're going to
fit that into a linear trap,
but Chris Monroe is a smart guy.
So these numbers don't
look so outrageous.
Now, of course, those are going
to be very, very noisy computers.
Even though the sizes of
them are in the ranges
we're talking about,
the length of the program you
can run is going to be very,
very short before the noise
and decoherence overruns it.
So yes, these numbers
say, "All right.
Yes, we can talk about
small quantum computers,
then we can think about these,
but we also have to
have long runtimes,
and that's where we have to worry."
Now, do we use error correction
on these noisy things,
or do we attempt to build
a topological quantum computer,
say, where the noise is small
enough that we might be able to
run these without the need for
additional error correction?
On the other hand, things like
Shor's algorithm are in
the thousands of qubits,
and so that's out of range.
Materials science also
grows pretty big.
Machine learning, of
course, is a huge promise,
but there's a lot of uncertainties
in the applications
to machine learning.
There's a little bit of
a war going on there.
Every time we get
a new quantum algorithm
that improves machine-learning,
somebody comes in,
dequantizes it, and says,
"Well, here's a classical algorithm
that will do just as well."
So there's been a lot of back and
forth on that for the past few years,
so I'm not sure where that's
going to finally end up.
All right. So let me just
recapture what I said there.
In the near term,
we have these machines
that are coming out now,
most of them are based on
superconducting qubits.
The one exception is the one
at IonQ, an ion trap.
Topological qubits are still,
well, I don't know because
I don't work on
the hardware, but
they're not here yet.
The length of the program
we're probably talking
about, well, let's say it's
maybe 1,000 cycles of
the analog of what a cycle
would be in the computer,
so 1,000 time steps to
get your algorithm done,
and that's not nearly enough to do
any significant sized algorithm.
On the other hand, over
here we want to have
these really long-term
programs and we can run
millions or even billions of
cycles which are going
to be necessary,
then we're going to need to do
something to control the error.
Either get the
error correction directly in
the physical qubit itself, or build
error correction on top of it.
So what I'm going to talk about
really is quantum phase estimation.
That's the underlying algorithm
that's really driving a lot of
these chemistry applications and
it belongs to
this category over here.
It's unfortunately
a relatively deep algorithm.
It takes a long time
to run even though we
might not require as
many qubits for it.
So in other words,
we're not there yet.
But five years, 10 years,
we might be running up against
the boundaries of what we can
actually do and of course at
that point then we want to use it.
All right. So here's
an example of something that
you might actually want to see come
out of one of these simulations.
Theory has a predicted curve
which relates the bond length
in a hydrogen molecule here
with the associated energy,
and we'd like to simulate around
this low point because that's
going to be where we find
the system at equilibrium.
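The real curve comes out of quantum chemistry, but a Morse potential has the same qualitative shape and makes the equilibrium point concrete. A sketch with made-up round numbers, not real H2 data:

```python
import math

# Illustrative stand-in (parameters are invented, NOT fitted H2 data):
# a Morse potential shows the same shape as the bond-length/energy
# curve in the talk: steep repulsion at short range, a minimum at the
# equilibrium bond length r_e, and a flat dissociation limit.
D_e, a, r_e = 4.5, 1.9, 0.74  # well depth, width, equilibrium length

def morse(r):
    return D_e * (1 - math.exp(-a * (r - r_e))) ** 2 - D_e

# Scan bond lengths and find the minimum, which is where we'd expect
# to find the molecule at equilibrium.
lengths = [0.4 + 0.01 * i for i in range(200)]
r_min = min(lengths, key=morse)
print(r_min, morse(r_min))  # minimum sits at r_e with energy -D_e
```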
So the methodology for this uses
the same methodology we've used
for these things for, I mean forever.
You somehow get the theory
out of some research paper.
You go up to the whiteboard and
you work out what it is in terms of
the simulation that you can do and
then you cut and paste
it into your simulator,
be it in MATLAB or now
in Q#, and out comes
some plots which
hopefully match pretty
well with the theory that's
been developed over the years.
So it's very, very human-intensive and
not scalable to large things.
So what we did is team up
with Pacific Northwest
National Labs, which has
the product NWChem, a
high-powered quantum chemistry tool.
This is really what they work with,
this is their specialty, and integrate
all of the wonderful modeling
that we have going on in
that tool and bring it into the QDK
so that we can simulate it
directly and not have to
go through the human intensive step
of converting all of
the various pieces to
the right form that we can
then put into simulators.
Okay. So here it's basically
a four-step process and I'll go
through a little bit of these.
Broombridge is the name
of the format;
this doesn't really have anything
to do with what I'm
talking about today,
but it's such a nice picture
that I just had to include it.
This is Broombridge, I
guess is what it is;
it's over a canal in Dublin,
and the link, if you will,
you'd have to probably
look a little bit for this,
is that this is the bridge where
Sir William Rowan Hamilton famously
invented the rules
for the quaternions.
So what does that have to
do with quantum chemistry?
Well, we use Hamiltonians
and the name's Hamilton.
So maybe a little bit tenuous there.
But it's a great picture
so I love it, good story.
But that gives us
a format for linking,
so gluing these two pieces of
sophisticated software together,
we can get all of the appropriate
chemical information that we'd
like using the modeling tools
in NWChem and here we see
examples of this and
that this can be brought
directly into appropriate
tools to transform it
into a Q# simulator for
the phase estimation
and other type algorithms for
estimating energy levels
both ground state and
excited state energies.
So this is a little bit of
flashing code up, which is no fun.
Unfortunately, I'm going to
get into a little bit more of that,
but we're going to see a lot
more of it as the day goes on.
Here's really the staging:
the workflow here is about
four steps, and this is
characteristic of just about
any real-world application
you can imagine.
You have some method
of taking something of
interest and then mapping it
into a simple algorithm that
you can run on a computer.
Now, every stage of
that mapping typically has
dozens of options
you can choose from.
So finding a method
which will actually get
your result requires searching
over all of these things.
Of course that's really
where the computer age has
really brought us
forward in technologies
because we can automatically
do those things.
If I wanted to find
a fast algorithm for
computing something
in the 19th century,
I was digging through log tables
and stuff like that and
that was just no fun.
But now we can go through
all kinds of different options,
test them all out,
and just find what's the best,
and that is the method for going
from this 3,000-year estimate to run
this quantum algorithm down to
this one-day estimate to run
this quantum algorithm.
It's how we choose the best of
these to make an
efficient implementation.
So I'm going to briefly
run through these.
If you're interested
in quantum chemistry
and simulation of these things,
this work that we've done on
the chemistry library within
the QDK has all of
these little steps.
Well maybe I shouldn't say
all of these little steps.
It has at least one of
the major choices for each of
the steps in the reduction.
So I'm going to flash up
this pretty quickly because
I want to get to the point.
But the docs actually explain
everything that's going on
and there is sample code
available and so if you want to
construct a Hamiltonian and then
use, say, the Jordan-Wigner
representation to
convert it into some qubits,
then the explanation of
all that is there and then
the utilities for doing this
has already been prepared.
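As a rough picture of what a Jordan-Wigner-style mapping does (a numpy sketch of my own, not the QDK utilities): each fermionic mode becomes a qubit operator with a string of Z's to carry the fermionic sign, and the canonical anticommutation relations survive the mapping.

```python
import numpy as np

# Minimal Jordan-Wigner sketch: fermionic mode j maps to
# Z x ... x Z x sigma^- x I x ... x I, where the Z string on
# earlier qubits carries the fermionic sign.
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sigma_minus = np.array([[0.0, 1.0], [0.0, 0.0]])  # |0><1|

def kron_all(mats):
    out = np.eye(1)
    for m in mats:
        out = np.kron(out, m)
    return out

def annihilation(j, n):
    """Jordan-Wigner annihilation operator a_j on n qubits."""
    return kron_all([Z] * j + [sigma_minus] + [I2] * (n - j - 1))

n = 3
a0, a1 = annihilation(0, n), annihilation(1, n)
# Check the canonical anticommutation relations:
# {a_0, a_1} = 0 and {a_0, a_0^dagger} = I.
anti = a0 @ a1 + a1 @ a0
print(np.allclose(anti, 0))
print(np.allclose(a0 @ a0.conj().T + a0.conj().T @ a0, np.eye(2 ** n)))
```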
Deriving a qubit Hamiltonian,
again, everything's in there.
If you go onto the docs, you can read
about this, it's very interesting.
What I'd like to spend
time on here is discussing
the simulation algorithms because
there's a lot of choices
that can happen.
We'd like to implement
Hamiltonian dynamics which we can
see in the slide going on right here.
Maybe the font is
not an ideal choice.
But we'd like to implement
unitary dynamics given by
a Hamiltonian and the Hamiltonian is
given by a sum of
sparse Hamiltonians or simple
to apply Hamiltonians,
and so there's a mechanism
for doing that,
and here we're illustrating
Trotter-Suzuki.
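The Trotter-Suzuki idea can be sketched in a few lines (my own toy example, not the library's implementation): split exp(-i(A + B)t) into n small alternating steps and watch the error shrink as n grows.

```python
import numpy as np

# First-order Trotter-Suzuki: approximate exp(-i (A + B) t) by
# (exp(-i A t/n) exp(-i B t/n))^n for non-commuting Hermitian A, B.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_h(H, t):
    """exp(-i H t) for Hermitian H via eigendecomposition."""
    vals, vecs = np.linalg.eigh(H)
    return vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T

def trotter(A, B, t, n):
    step = expm_h(A, t / n) @ expm_h(B, t / n)
    return np.linalg.matrix_power(step, n)

t = 1.0
exact = expm_h(X + Z, t)
# The error of the first-order formula shrinks roughly like 1/n:
err = [np.linalg.norm(trotter(X, Z, t, n) - exact) for n in (1, 10, 100)]
print(err)
```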
But there are many, many more:
qubitization is another one,
Taylor series is a third,
and each of those has
numerous parameters.
So how do we find the right thing
to do and run these?
But before I talk more about that,
let me just illustrate what
the output looks like.
So here's the actual workflow.
What we start with
is our NWChem box as
far as I'm concerned which has all of
the quantum chemistry built into
it that people can utilize.
I'm not a quantum chemist
myself and so
that really is a black box for me.
Somebody says we're
really interested in
say nitrogenase which
is the enzyme that has
the molecule inside of
it and so that might be
modeled up here under
this biomolecules data that's
in the appropriate database.
That can be manipulated
and exported in
this Broombridge file which then
can go through the Q# simulator.
We go through a variety of
choices on how we select
our options and then target in
this case a resource
estimator to say,
"Is that option really a good one
for simulating this
complex molecule?"
and so how long would that take
and then go back and forth.
So what we'd like to see is
something that looks like this.
So here's a lithium hydride simulation.
We'd like to understand
the ground state
and excited state curves
versus the bond length,
because at the lowest one,
that's going to be where
we have our stable states.
Okay. So here we're just seeing
some outputs of some simulators.
You have the ground state. We
simulate the first excited state.
The lines are about where
one would expect.
But one can then put all this
together and get a good estimates,
get good empirical plots of
the spectrum of this molecule
using these tools.
So this is a very small one and so
the theory plots can be
constructed directly.
So it's a nice example for
whoever likes chemistry here.
As for what we're
looking at here,
so for instance this is just
simulating the ground state,
and a couple of things didn't quite
come out right, and here we're,
let me go up and erase
the front one here,
and here we're simulating
the first excited state,
and so you can see the
outputs of the simulator
tracking the energy levels for
the first excited state,
so we built that in.
I don't understand why
it's producing things
way up in nowhere land.
That's curious. Yes,
do you know what those are?
>> Yes, because it can
sometimes happen as you learn
the phase modulo
[inaudible] small phase.
>> It'll pop and wrap around.
Okay, yes. Then pop it way up there.
Yes. Great. Thanks. Okay. So, yes.
So ultimately, what we're looking
for is a plot like this
where we can see all of
the spectral lines of
the molecule. All right.
So let's look a little bit
at this resource estimation.
There's a variety of options we
have for Hamiltonian simulation,
and each of them have
different requirements in terms of
the number of qubits
that are required.
Now, as we expect, we're
going to have a limited
number of qubits, so
that's a major concern.
But trading off against that, we
also have the number of
gates that are required,
which is the length of
the program that's going to run,
and so we'd like that
to be short as well.
Well, as expected those are
fighting against each other,
and so there's a variety of
choices we might want to make.
But in terms of actually doing it,
all of this is already
built in to the QDK,
and so it's relatively
straightforward.
We're seeing well pretty much
all of the code right here.
Now, here we have this thing called
TrotterStepOracle which of course
requires a great deal
of work to build,
and [inaudible] I imagine
has done a great deal of
work in making this practical.
But really, once it's
built into the QDK itself,
it's relatively
straightforward to use.
So in here, we can just
estimate now the cost of
TrotterStepOracle
based on some ideas.
What do we think would be good?
What order do we want?
Do we want to do
a first-order approximation
like linear approximation?
Do we want to do
quadratic approximations etc?
What's the size of the steps
we'd like to make?
Now, we can tune those to
see what comes out of it.
So I guess this slide
is a little bit older than I was
expecting although I hope
it's still up to date.
In fact, I wanted to get the
Python one in here but I didn't.
So this is the C#
wrapper that goes
around that Q#
piece of code.
You'll probably see a lot
more of this this afternoon.
Really this little
four-line statement right
there is in some sense the driver,
which is running the quantum
computer if you will.
Now, here we're just doing
a gate count simulator,
but this is telling us to run
this trotter step algorithm
to simulate a Hamiltonian,
and three of the four lines
are about getting timings.
Then the bottom area here,
this is just getting statistics
that we can then print
off to the screen,
and I'll show a picture
of that in a moment,
or at least a graphic
that comes out of this.
But really, there's not a lot
to running the code itself.
Takes a little bit
of getting used to,
but then you can create very
efficient code bases that automate
this task really well. All right.
So this is just the Trotter case,
and so here's the output
that comes out of that.
There's a variety of
choices: we talked
about Trotter here,
qubitization is another one,
then there's an
optimized qubitization,
which attempts to reduce,
I believe in this case,
the number of rotations.
Well, be that as it may,
you get these outputs
from the simulator
for all kinds of different molecules.
It's a little bit time
consuming to do it.
But we actually can
count the number of
gates that are required
to do each of these.
So for instance on
this ferric sulfide,
I'm not sure where it
is in the table here,
it does not appear to
be there, but it's on
the chart that will appear later.
There's a variety of
choices we can take.
Trotter for instance, gives us
a certain number of rotations,
and a certain number of CNOT gates.
In some sense that's relevant
to the depth of the circuit.
There's more computation that
has to be done to get that.
If we want to do
error-corrected things,
which would be appropriate for
the modern superconducting-based
ones, here's the T-count.
T-gates are the stumbling block,
and so we'd want to count those,
and the optimized one
has reduced the number
of non-T rotations to something
that's manageable like 18,
but still we have 69 million T-gates,
which tells us, I mean,
that's really going to be
the quantifier of how hard
this is going to be to run.
Then with some
assumptions, this sort of
guides the statement
that we can do this in
a day with appropriate
T-state factories
that have been lined up in
the right way, and that's complex.
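The arithmetic behind that kind of statement is simple division; the T-count is the figure quoted above, but the factory rate below is an assumed round number of my own, not the actual resource model.

```python
# Back-of-the-envelope sketch (the rate is my own assumption, NOT the
# actual resource model from the talk): if a run needs 69 million
# T-gates and T-state production is the bottleneck, wall-clock time
# is roughly t_count / production_rate.
t_count = 69_000_000          # T-gates, the figure quoted in the talk
t_states_per_second = 1_000   # assumed aggregate T-state factory rate

seconds = t_count / t_states_per_second
hours = seconds / 3600
print(round(hours, 2))        # roughly 19 hours -- about a day
```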
But really all I want to do here
is say that we have
all these different options
that can be automated,
and then we can find efficient
implementations by searching
through these hyperparameters
and selecting the best.
So just a quick overview
of what all of these do:
minimize T-count, that's
the optimized one over there,
and so we can choose which
are the best possible.
The answer is well of course it's
going to vary on what
you actually want to do,
and so the choice for
a small molecule might
not be the same as the choice
for a larger molecule.
It will vary from one to the other,
and so one has this now workflow
that's been designed that can
optimize this for
all the possible choices
that you might be interested in.
So let me pull up
the quick picture here.
So here is the [inaudible] molecule
here as part of
the nitrogenase enzyme.
It's actually living out here.
So what we see is along
the top of these charts,
these top lines, these
are whiteboard results.
We'd get up on
the whiteboard, we say all
right well here's how
the algorithm works.
Let me think about how I
would want to implement
it and prove some upper bounds
on its runtime.
We say we can get an implementation
that takes only this long according
to these asymptotics and
heuristics that one might use.
These get better and
better as time goes on,
but then we can run these counts,
and the simulators,
at least up to the size that
our simulators will run on,
to validate where the real runtimes
are for these types of
molecules. All right.
So getting to the end here,
I stole a slide from
our marketing group because it has
nice pictures on it and hopefully
this is very motivational.
So our goal again is to be over
here where we can model
these large molecules with
large number of atoms.
Classical computing is just
not going to be tractable.
Wow, you cannot see
that curve on there at all.
There is a very dark gray curve
running up here.
These are all exponential algorithms
and so they very very
quickly become intractable
with classical methods.
Yes, it really is there, but you
can see it by standing right
up against the screen here.
But quantum computers, well, they're
physical devices, and so it's not
surprising in some sense that
they're very good at simulating
physical systems, and so we can
think of the quantum computer itself
as a programmable physics lab.
It's actually estimating what's
going on in a computation by
doing the underlying physics that
the computation is modeling.
So let me finish with the same slide.
So this is outlook of
what we're going to do
a little bit later this
afternoon and some links.
If you want to look up all of these,
you can get started early.
I'm happy to take questions. I
reserved a little bit of extra time
because I hear people want
to hear about the quantum
inspired stuff as well,
and so I'm happy to
talk about that in
more detail, or anything else.
I mean, the floor is yours.
Thank you.
Yeah.
>> I have a clarifying question.
So you explained a lot
about the process
that goes into
developing these things.
>> Yes.
>> What is the process
of finding fields
where a quantum computer can
be useful in some sense?
>> Right. Okay. So yes.
In other words, how do you
develop new quantum algorithms?
That's a very
challenging process.
I guess it depends on if you're
interested more in
combinatorial things,
and Robinson's here in the front row,
so I might have to tap him
to answer some of this.
If you're looking for those,
then there's one approach:
you can take more computer
science-type approaches.
If you want to look at like
real-world applications,
those often don't fall into
these more combinatorial
things and are
more continuous valued and
such, and it's like
developing any algorithm.
It's a one-off. You
gain inspiration about
the problem itself and then
find new ways of examining it.
So there are also
very broad categories that
work and I've mentioned this
already, quantum
inspired optimization.
So at a very general level,
adiabatic quantum computing is
well-tuned to
optimization because what
it's doing is finding
the lowest energy configuration
of a Hamiltonian.
So if I want to solve
an optimization problem,
I just cast that optimization
problem, that objective function, as
the Hamiltonian, and
then the adiabatic process
itself will find that.
Now, the hard part isn't actually
just writing down the algorithm,
that's in some sense the easy part,
the hard part is analyzing
how long is it going to
take to resolve this,
how long is it going
to run, and what's
the likelihood that it's actually
going to find the solution
that we want to find.
I mean, all of this is randomized.
These are all randomized algorithms.
But yes, I don't think
there's any easy answer.
Like, "I want to find
a new quantum algorithm
today, what do I have to do?"
Well, there's no easy
response there. Yes.
>> A couple of slides, there
was a bunch of graphs.
Next one. Okay.
>> So you want this one or this one?
>> No, next.
>>Okay, this one here.
>> Next one.
>> Yes.
>> That one okay.
>> Right.
>> So what I didn't
understand in this graph,
these are all very interesting,
but at the lower left you've
got the Microsoft simulations.
>> That's right. So once we get
the quantum algorithm written
out in terms of the code,
we can just simulate what's going on.
So if I have say, 20 qubits,
well, 20 spin orbitals, which
then get converted
into roughly 20 qubits, 20 plus one,
I think, is the statement,
then I can just write that as
a vector of those amplitudes.
So let's pretend it
was 20 for the moment,
and so I'd have two to
the 20 complex amplitudes.
So a million complex amplitudes
I store in memory.
Then every time I apply a gate,
I just adjust those amplitudes
according to the action of the gate.
So at the end of the program,
I have all these amplitudes,
I just sample it.
So those plots where
all those red dots appeared,
that was what was coming
out of this sampling.
You sample it, you actually get
the results of the simulation.
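The scheme just described, store 2**n amplitudes, update them for each gate, then sample, fits in a few lines. This is a toy sketch of my own, not the QDK simulator:

```python
import numpy as np

# Minimal state-vector simulator: n qubits = 2**n complex amplitudes;
# a single-qubit gate updates the vector; measurement is sampling
# from |amplitude|^2.
def apply_gate(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, target, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, target).reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                  # start in |000>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
for q in range(n):
    state = apply_gate(state, H, q, n)

# Sampling: after Hadamards on every qubit, all 2**n outcomes
# are equally likely.
probs = np.abs(state) ** 2
rng = np.random.default_rng(0)
samples = rng.choice(2 ** n, size=1000, p=probs)
print(np.round(probs, 4))
```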
But if you start scaling this up,
and so up here I don't
know what number that is,
say 24, 25, this is becoming
very memory intensive.
So we're already talking
hundreds of megabytes of
complex numbers in memory.
Obviously, we can get two gigabytes
now without too much difficulty.
If we move this up onto
the Cloud using Azure,
then we can get this up to
30 maybe 35 qubits just
because it takes
a huge amount of memory to
store all the amplitudes
for that quantum state.
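The memory arithmetic behind those numbers can be written out directly (assuming 16-byte double-precision complex amplitudes, which is my assumption, not a stated detail of the simulator):

```python
# An n-qubit state vector is 2**n complex amplitudes; at 16 bytes per
# double-precision complex number that's 16 * 2**n bytes total.
def state_memory_gib(n_qubits, bytes_per_amplitude=16):
    """Memory in GiB to store a full n-qubit state vector."""
    return 2 ** n_qubits * bytes_per_amplitude / 2 ** 30

for n in (20, 25, 30, 35):
    print(n, state_memory_gib(n))  # 30 qubits: 16 GiB; 35 qubits: 512 GiB
```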
>> Okay. So these
other lines then are just?
>> Yes, these are
all theory lines here.
This is the Resource Counter.
So the Resource Counter doesn't
actually simulate,
it just says, "Well,
how long is this going to take
based on all of these gates that I've
counted and the number of
time steps that I predict
it's going to run."
>> Well, the current Q#
software toolkit that
you're going to show us contains
the Simulator but also
the Resource Counter?
>> Yes, the Simulator and
the Resource Counter, both in there.
It's just a one-line switch
to go between the two.
Either you instantiate the Simulator
and then it runs the simulation,
or you instantiate
the Resource Counter and, more or
less except for some trace commands,
the Q# code stays the same.
>> So on my laptop, I shouldn't try
to run the Simulator on 80 qubits.
>> No, probably not on 80 qubits.
That would not be great. Although 70
might be in range for certain things.
So going back to
these quantum inspired
algorithms, the Bristlecone,
when Google announced that
a couple of years ago,
they said they wanted to run
what's called Low Depth Circuits,
which is a technical name as well.
This was going to be for
their Supremacy Experiment.
They were going to say that, well,
with these Low Depth Circuits,
now we can sample from
distributions that you cannot
sample from classically.
To say not sample from
classically means of course
not sample from classically in
any reasonable amount of time.
So a few friends of
mine down at NASA Ames,
where they have
a great big huge supercomputer
called Pleiades,
were able to take
this Low Depth Quantum
Circuit problem and
recast it as tracing
a Tensor Network.
So this is really along the
exact same lines as
these quantum inspired things.
We know we have a quantum algorithm.
It's going to be hard
to simulate that,
but we can estimate that using
a slightly different means,
in this case "Tracing"
a Tensor Network and they were
able to do that up into the 50
qubit range. They predicted that
with about a day's run time on
the Pleiades Supercomputer,
they could do 72 qubits.
So 72 qubits is that in
the quantum inspired range now?
Well, at least for
maybe Low Depth Circuits it is.
For these really long circuits,
this technique's
not going to work.
So the boundary between what's
quantum and what's quantum
inspired is always moving.
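As a toy picture of the "tracing a tensor network" idea (my own minimal example, nothing like the actual NASA Ames computation): a quantity you'd otherwise get from a much bigger object can be written as a contraction of small tensors.

```python
import numpy as np

# Contract a tiny ring network A_ij B_jk C_ki -- the "trace" of the
# network -- with np.einsum, instead of building any large object.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))
C = rng.standard_normal((2, 2))

network_value = np.einsum("ij,jk,ki->", A, B, C)

# Sanity check against plain matrix algebra: trace(A @ B @ C).
print(np.isclose(network_value, np.trace(A @ B @ C)))  # True
```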
>> Just one other question.
>> Yes.
>> That means that NASA has a
quantum machine that simulates those?
>> Yes. It has a quantum annealer.
So quantum annealing is
the community name for
Adiabatic Quantum Computing
at non-zero temperature.
So temperature or thermal
fluctuations have
a real effect on that machine.
Zero temperature doesn't
exist in reality.
We're always going to be
on a non-zero temperature.
So they run it in
a handful of millikelvin.
But thermal effects actually do have
a non-trivial effect on
the dynamics of that machine.
But what it's meant to do is
"Simulate" Adiabatic Quantum
Computing using a quantum device.
Now, it's very incoherent,
but it's actually a pretty
amazing piece of machinery.
Again, really easy
to "Simulate" using
other methods like Quantum Monte
Carlo or Diffusion Monte Carlo
because we have the knowledge of
"Simulating" these using
different types of algorithms.
>> Okay. Thank you.
>> Thank you.
