>> Okay. Why don't we start?
Morning. My name is Jeongwan Haah
from Microsoft Quantum Team.
I'm one of the researchers
in the team and I'm going
to give you a brief overview of quantum computing: what it is, why we do it, and how we do it.
First, we would like to understand
what quantum computing is.
Computing, in general, means managing, processing, and communicating information for various purposes.
I don't want to define
what information is.
I just rely on your intuition.
So let me show you what one of the first general-purpose computers looked like.
It is the ENIAC, installed in Pennsylvania back in the 40s,
and it was designed
for the purpose of
calculating the trajectory
of a ballistic missile.
The first program it ran was
actually not about
that ballistic missile but about
some nuclear physics calculation
initiated by John von Neumann
and others for the Manhattan Project.
So it was calculating
some physical process actually.
Now, with the invention of the transistor, the machine got so much smaller and so much faster, and the physics
that enabled this
technological innovation
was based on quantum physics.
But quantum computing is not about that quantum physics; it is quantum at a more fundamental level. So what is that?
So to give you an idea, let me begin with probabilistic computing. What is probabilistic computing? Let me give you an example.
Say I give you a function defined on 100 variables, and I ask you to find its integral. How would you do that? It's 100-dimensional, so if you make a fine mesh with, say, 100 mesh points along each direction, the number of points you would have to sample would be 100 to the 100, an astronomical number; you shouldn't do that.
A more straightforward way is to randomly sample points and then take their average.
That will give you
a very reliable estimate of the
integral and for
all practical purposes,
that's the best way.
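The random-sampling approach just described can be sketched in a few lines. This is a minimal illustration in Python; the integrand and sample count are made up for the example:

```python
import random

def monte_carlo_integral(f, dim, n_samples=100_000, seed=0):
    """Estimate the integral of f over the unit cube [0, 1]^dim
    by averaging f at uniformly random sample points."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = [rng.random() for _ in range(dim)]
        total += f(x)
    return total / n_samples

# Example: integrate f(x) = sum(x_i) over [0, 1]^100; the exact answer is 50.
estimate = monte_carlo_integral(lambda x: sum(x), dim=100)
print(estimate)  # close to 50; the error shrinks like 1/sqrt(n_samples)
```

The point is that the cost depends on the number of samples, not on the 100-to-the-100 mesh.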
Another randomized algorithm: suppose I give you a black-box function and guarantee that it is a polynomial in, say, 100 variables. How would you decide that it is not identically zero?
Well, if I don't tell you
anything about a function,
then the only deterministic algorithm
that we know of is essentially
going through all possible inputs.
Well, if the variables are binary, then there will be two to the 100 inputs, so you shouldn't do that.
But the probabilistic method is to randomly sample input points and see whether the function evaluates to zero. Surprisingly, no efficient deterministic algorithm is known for this problem.
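This idea is known in the literature as the Schwartz-Zippel test; here is a sketch, where the black-box polynomial is a made-up stand-in:

```python
import random

def probably_nonzero(poly, n_vars, degree, trials=20, seed=0):
    """Schwartz-Zippel style test: evaluate the black-box polynomial at
    random points drawn from a set much larger than its degree.  A nonzero
    polynomial of degree d vanishes at a random point with probability at
    most d / |S|, so repeated zero evaluations mean the polynomial is
    almost surely identically zero."""
    rng = random.Random(seed)
    sample_set = range(10 * degree + 1)  # |S| much larger than the degree
    for _ in range(trials):
        point = [rng.choice(sample_set) for _ in range(n_vars)]
        if poly(point) != 0:
            return True   # certainly nonzero: we found a witness
    return False          # zero with overwhelming probability

# Hypothetical black box: p(x) = x1*x2 - x3, a degree-2 polynomial in 100 vars.
p = lambda x: x[0] * x[1] - x[2]
print(probably_nonzero(p, n_vars=100, degree=2))  # True
```

Note the asymmetry: a single nonzero evaluation is conclusive, while the "zero" answer is only probabilistic.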
Another practically useful randomized algorithm:
is that if I give you a factorization
of one matrix C into A times B,
then how would you check that
the factorization is correct?
Well, you could do the matrix multiplication, but in practice that costs close to n-cubed complexity, which is huge.
But a randomized algorithm can do it faster: you randomly sample an input vector and compare the outputs of the two sides of the equation. If you do it sufficiently many times, then you have high confidence that the equation is correct, and that's good enough for our purposes.
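This randomized check is known as Freivalds' algorithm. A small sketch with plain Python lists; the 2x2 matrices are just a toy example:

```python
import random

def freivalds_check(A, B, C, trials=20, seed=0):
    """Check whether A @ B == C without multiplying the matrices.
    Each trial picks a random 0/1 vector x and compares A(Bx) with Cx,
    which costs only O(n^2) per trial instead of roughly O(n^3).
    A wrong factorization survives one trial with probability <= 1/2."""
    rng = random.Random(seed)
    n = len(C)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(len(v)))
                           for i in range(len(M))]
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, x)) != matvec(C, x):
            return False  # caught a discrepancy: C is not A*B
    return True           # correct with probability >= 1 - 2**(-trials)

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]          # the true product A*B
print(freivalds_check(A, B, C))   # True
```

Twenty passing trials leave a failure probability below one in a million.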
So theoretically, we can picture probabilistic computing in this way: we start with a flat distribution over some number of variables, the algorithm transforms that distribution under classical rules, and we end up with a distribution that is hopefully concentrated around the desired solution. This is what's going on.
Now, observe that the probability space, the space of all probability distributions on n bits, has exponentially many entries, and in any probabilistic computation we do not keep track of this probability density function; we just draw samples.
A similar principle carries over to quantum computing. The difference is that you take a square root. If I had a bit, there would be two numbers, though essentially only one, because of the constraint that the probabilities sum to one; I can denote that distribution as a two-component vector of positive reals.
In quantum computing, we take the square root: the state lives in the space of square roots of probabilities, each entry a complex number whose modulus is between zero and one, with the rule that the squared moduli of all the entries sum to one. Here, "amplitude" is the technical word for the square root of a probability.
So that's a qubit, and it is apparent that if I have n qubits, then there will be exponentially many complex numbers describing that distribution.
However, in analogy with the
probabilistic computing,
we do not keep track
of each amplitude.
We shouldn't because
there are too many.
A fundamental feature of quantum physics relates the two models. If I start with some quantum amplitude vector and look at it ("look" is not quite the appropriate word, but it is the closest layman's term), I would get outcome zero with probability given by the first amplitude squared, and outcome one with probability given by the second amplitude squared.
In a sense, this fundamental property of quantum physics limits our ability to use it, because we are back to probabilities instead of amplitudes.
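The measurement rule just stated, outcome k with probability amplitude-squared, is easy to simulate classically for a single state vector. A sketch; the amplitudes below are arbitrary:

```python
import random

def measure(amplitudes, shots=10_000, seed=0):
    """Simulate the measurement rule: outcome k occurs with
    probability |amplitude_k|**2.  The amplitudes must be normalized."""
    probs = [abs(a) ** 2 for a in amplitudes]
    assert abs(sum(probs) - 1.0) < 1e-9, "amplitudes must satisfy sum |a|^2 = 1"
    rng = random.Random(seed)
    counts = [0] * len(amplitudes)
    for _ in range(shots):
        outcome = rng.choices(range(len(probs)), weights=probs)[0]
        counts[outcome] += 1
    return counts

# A qubit with amplitudes (0.5, sqrt(0.75)): roughly 25% zeros, 75% ones.
print(measure([0.5, 0.75 ** 0.5]))
```

Notice that after measuring we only ever see the probabilities, never the amplitudes themselves.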
To exploit that quantumness,
here are the elementary gates.
Unlike the classical circuit, where everything can be generated by NAND gates, there are several elementary gates in the quantum case, but not too many. For a complete set, I only need these few shown here; you can even omit one of them, for example, because it is an application of another one twice.
The one notable feature of these matrices is that they are all unitary.
Why unitary? Why do we need linear algebra here? Because of the fundamental feature that quantum measurement maps amplitudes to probabilities, we shouldn't measure the state, we shouldn't collapse it, we shouldn't intervene in the intermediate steps of the quantum computation.
Well, if you do so, then you are back to classical probabilistic computing and you are not exploiting the quantumness. So we shouldn't look at the state, and the only way to operate on it without looking is through unitary operations. Therefore, we have a unitary gate set.
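Since a quantum circuit is just unitary matrix multiplication on the amplitude vector, a single-qubit step can be sketched without any quantum hardware. The Hadamard and NOT matrices below are standard; the code itself is only an illustration:

```python
import math

# Common single-qubit gates as 2x2 unitary matrices.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]   # Hadamard
X = [[0, 1], [1, 0]]                           # NOT (bit flip)

def apply(gate, state):
    """Multiply a 2x2 gate matrix into a single-qubit amplitude vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

state = [1.0, 0.0]           # the |0> state: amplitude 1 on outcome zero
state = apply(H, state)      # equal superposition: both amplitudes 1/sqrt(2)
print(state)
# Unitarity preserves the normalization: sum |a|^2 stays 1 (up to rounding).
print(sum(abs(a) ** 2 for a in state))
```

Applying H a second time returns the state to |0>, something no classical coin flip can undo.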
I want to emphasize that, unlike how some popular social media portray it, quantum is not so mysterious, in the following technical sense.
In probabilistic computing, I am repeating myself, everything was handled by exponentially many positive real numbers that sum to one.
Taking the square root,
you'll end up with
exponentially many complex numbers
whose modulus squared sum is one.
Quantum circuit is nothing but
a unitary matrix multiplication
on that gigantic vector.
Therefore, a logician will say that anything that can be done on a quantum computer can be done on a classical computer; there is no in-principle difference. Anything that can be decided on a classical computer can be decided on a quantum computer, and vice versa.
However, to a computer scientist, not all finite things are equal. Some things are much harder because you have to go through many more steps to accomplish the job, and that is where quantum computing comes in.
So why do we care about the quantum? Because it can sometimes address classically intractable problems. I am giving you the two or three most famous examples here.
So there are quantum
algorithm primitives
that we use over and over again.
We are waiting for a new primitive, but so far, this is pretty much the complete list.
Probably the most famous one is the first: searching a database of N entries can be done in about the square root of N steps instead of looking at all N entries. That is a quadratic speedup.
A similar principle can be applied to the sampling problem, where your accuracy converges much faster. To achieve an accuracy of epsilon in your statistical measurement, you have to go through about one over epsilon squared samples; but in a particular quantum setting, that accuracy convergence can be made quadratically faster.
The Fourier transform, which in the classical world takes about N log N time on an N-component vector, becomes exponentially faster; however, you forget about the N-component vector and are left with a log N-qubit state.
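The quantum Fourier transform is the unitary version of the classical discrete Fourier transform. We can verify classically that the normalized DFT matrix is indeed unitary, and hence a legal quantum operation; the helper names here are made up for the sketch:

```python
import cmath

def dft_matrix(n):
    """Normalized discrete Fourier transform matrix (the classical analog
    of the QFT on log2(n) qubits): entry (j, k) is w**(j*k) / sqrt(n),
    where w is the primitive n-th root of unity."""
    w = cmath.exp(2j * cmath.pi / n)
    return [[w ** (j * k) / n ** 0.5 for k in range(n)] for j in range(n)]

def is_unitary(M, tol=1e-9):
    """Check that M times its conjugate transpose is the identity."""
    n = len(M)
    for i in range(n):
        for j in range(n):
            entry = sum(M[i][k] * M[j][k].conjugate() for k in range(n))
            target = 1.0 if i == j else 0.0
            if abs(entry - target) > tol:
                return False
    return True

print(is_unitary(dft_matrix(8)))  # True: the QFT on 3 qubits is a valid gate
```

The quantum speedup comes from implementing this N-by-N unitary with only about (log N)^2 elementary gates.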
Certain aspects of
linear equation can
be obtained exponentially
faster in the dimension.
If you have a matrix with condition number kappa, then classically you need about N times the square root of kappa operations to compute something about the solution. But quantumly, while the dependence on the condition number remains, the dimension dependence gets exponentially better.
But note that all these primitives apply only to data generated on the fly. They cannot be used on real data, for a simple reason: it takes linear time even to load the data. So there is a bottleneck.
However, there is an exponential speedup, even for user-defined data, when it comes to the simulation of quantum physics.
So remember that the ENIAC was first used to simulate classical physics.
Now, quantum computing is natural for solving problems that involve the simulation of quantum physics. This exponential speedup is very typical and natural there.
So what can we do with that?
One appealing example
is the following.
You may be surprised that the production of ammonia consumes a few percent of all global natural gas, or more than two percent of all global energy consumption, because this chemical process at the industrial scale requires hundreds of degrees of temperature and hundreds of times atmospheric pressure.
So to make that environment,
you need a lot of heat.
Surprisingly, there are bacteria at the roots of some vegetables that do this job at room temperature and natural atmospheric pressure, and after several decades of research, we actually do not know why.
Going slightly more in depth: the bacteria have a special enzyme, and in the middle of that enzyme lies this particular molecule, the iron-molybdenum cofactor, as it is called. It is not too complicated a molecule; it involves hundreds of atoms, nothing terribly complicated.
This molecule is believed to be the key to the process that takes a nitrogen molecule and combines it with hydrogen to produce ammonia.
To understand how this works, we need to compute the energies of various chemical configurations, and that computation is what limits us.
Even the fastest supercomputer is
not sufficient to
address this problem.
You may expect that experiments will tell us how this molecule works, but they are still crude. This process may involve several tens of steps, each of which is hard to compute, and we do not know how it happens.
If we understand how it happens, then there is a good chance that we can utilize that knowledge to make ammonia production much more efficient.
Some scientifically reasonable estimates tell us that even a very small-scale quantum computer, one with only about 200 qubits, may be able to help us understand how this molecule works.
Another famous example for
the application of quantum
computing is the factoring problem.
Almost all practical cryptographic protocols rely on the hardness of factoring.
If I give you a composite number and do not tell you its prime factors, then it is hard for you to figure out what those prime factors are.
Based on that, we build the public-key encryption schemes.
If you have 100 cores on a classical computer, then merely factoring a 1,000-bit number takes about a quarter of a year. If you increase that to 2,000 bits, then it would take a quarter of the age of the universe, extrapolating the current technology.
But if you have a quantum computer
that has thousands of qubits,
then that problem can be
solved in the order of days.
This is due to
the exponential scaling.
You see the initial gap here: for the smallest sizes, a quantum computer will not be of help because of the huge constant factor. I will get to that huge constant factor later.
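To see where the asymmetry comes from, note that multiplying two primes is instantaneous, while naively undoing the multiplication takes time growing exponentially in the bit length. A toy sketch; real cryptographic moduli are vastly larger than this:

```python
def trial_division_factor(n):
    """Naive factoring by trial division: the work grows roughly like
    sqrt(n), i.e. exponentially in the number of bits of n.  Shor's
    quantum algorithm instead runs in time polynomial in the bit length."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Multiplying two primes is one machine operation; recovering them
# by trial division takes about sqrt(n) steps.
p, q = 10_007, 10_009
print(trial_division_factor(p * q))  # [10007, 10009]
```

The best known classical algorithms are far better than trial division, but their running time still grows superpolynomially in the bit length, which is what the crossover numbers in the talk reflect.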
This poses a great concern to the world of cryptography: if the current technology will be broken by a quantum computer, what should we do?
Quantum physics helps again here. There is a private-key distribution protocol based on quantum physics, designed in such a way that any eavesdropping will be detected, which is guaranteed by the laws of physics. It inherently uses quantum correlation, the so-called entanglement. Robust implementations are still being developed.
To protect our information in the future post-quantum era, there are quantum-resistant classical crypto methods, with several candidates. They would replace the current public-key cryptosystems based on factoring. With the requirement that it should be interoperable with existing infrastructure, quantum key distribution is not considered in this proposal, and such purely classical algorithms are now being standardized by NIST.
I heard that it is at round two, where tens of protocols are competing with each other to find the best one.
So those were the major, or easiest to describe, applications of quantum computing. Now, how do we make one? What are the challenges, and where are we now?
There are many abstraction layers in the architecture of any computer.
There should be a hardware that
actually operates something,
and there should be
some programming support
to control that hardware.
On top of that, we have to think about what algorithms we can run.
At Microsoft, we have investments at all of these stages of how to make one.
In this talk, we'll be
focused on the hardware side.
You will hear more about the quantum
programming side today later.
So there are many qubits proposed and developed by various industrial companies. Some are based on the photon and its polarization.
Some are based on
the superconducting devices,
and you take a very particular
physical states in this device.
Some are thinking about quantum dots, where you isolate very tiny atoms or artificial atoms in your lab.
Some are thinking about
the atoms that are
trapped in an optical lattice.
We are thinking about
a topological qubit.
What is a topological qubit?
Now, that may sound
scary but not so much.
So let me tell you a
bit more about it.
So to motivate it, let me describe classical error correction. Let's talk about the classical error-correction overhead.
Here is a picture of a deep-space probe that we have to communicate with. The communication is fairly noisy; you cannot really hear a clear signal from a probe that is in deep space. It is not light-years away, but it takes about a few hours for light to travel to that probe.
So imagine that each bit you send is corrupted by noise: your bit is flipped with some small probability.
Then, you can correct those errors. You can fight the noise by making the message redundant: you send 000 in place of zero, and 111 in place of one.
We say that these are the physical bits, the ones actually sent, and those are the logical bits, the ones we meant to send.
Now, the receiver, which can be at the probe or here with us, decodes the potentially corrupted data. If I receive three zeros, or two zeros and a one, then I infer that the message was zero; in the other cases, I infer the message to be one.
That is reliable because the logical error probability, that is, the probability that I infer zero but the sender actually meant one or vice versa, gets smaller for a given physical error rate through this repetition procedure.
If you do that analysis, then the number of physical bits you have to send, as a function of the physical error rate for a given target logical error rate, scales like this.
So as long as your error probability
is less than half,
then you can send some information.
But the number of physical bits that you have to send increases quite rapidly if your physical noise is high.
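The three-bit repetition scheme can be sketched directly. Majority vote fails only when at least two of the three bits flip, giving a logical error rate of 3p^2(1-p) + p^3, which beats the bare rate p whenever p < 1/2. A minimal simulation, with arbitrary parameters:

```python
import random

def encode(bit):
    """Repetition code: one logical bit -> three physical bits."""
    return [bit] * 3

def decode(bits):
    """Majority vote: wrong only when 2 or 3 of the 3 bits flipped."""
    return 1 if sum(bits) >= 2 else 0

def logical_error_rate(p, trials=200_000, seed=0):
    """Monte Carlo estimate of the probability that majority-vote decoding
    returns the wrong logical bit, given physical flip probability p.
    Analytically this is 3*p**2*(1 - p) + p**3."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        noisy = [b ^ (rng.random() < p) for b in encode(0)]
        failures += decode(noisy)  # decoding a sent 0 as 1 is a failure
    return failures / trials

p = 0.1
print(logical_error_rate(p))          # about 0.028, versus 0.1 unencoded
print(3 * p**2 * (1 - p) + p**3)      # the exact value, 0.028
```

Repeating with five or seven bits suppresses the logical error further, which is exactly the overhead trade-off the figure shows.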
Exactly the same thing happens in the quantum world. Even though we are dealing with quantum information, those square roots of probabilities, we can fight errors by encoding our information into the correlations of quantum particles.
There exist quantum error-correcting codes, just as we have the repetition code classically, and a similar analysis gives us this figure.
Say you want to achieve a logical error rate of ten to the minus ten; here, the error rate is the probability that a qubit is corrupted in a given clock cycle, or per message, if you wish.
If we wish to achieve this very small logical error probability with a physical error rate of 0.1 percent, then you would have to deal with thousands of qubits: you need about 1,000 physical qubits just to encode one logical qubit.
But that number decreases dramatically as the physical error rate decreases.
Now, note the difference. In the classical world, the threshold below which we cannot send any information was one half; in the quantum case, it is orders of magnitude smaller.
That is the challenge we are facing, and that is why we do not have a general-purpose quantum computer yet.
So it is critical for scalability to achieve a low physical error rate; otherwise, the overhead from the error correction will be too large.
Well, that differentiates our approach from the more conventional approaches.
One of the prevalent technologies for building a qubit is based on superconducting circuits, and they have error rates around here, so you would need approximately a thousand physical qubits per logical qubit to make a general-purpose computer.
But a topological qubit, if we have one, is expected to have a much lower error rate, and hence a much smaller overhead from quantum error correction. That is why we are focusing on a good qubit.
That is very analogous to this picture. If you have a chalkboard and you write something (here, the writer was obviously counting something), the small particles of your chalk are attached to the blackboard by very weak chemical bonds or friction, and they can easily be washed away. There is no correlation protecting the marks, so they cannot fight the noise.
But if you count using knots, as the old Inca did, then the record can last for thousands of years, because as long as the string is unbroken, the fact that there is a knot is invariant.
So topological qubit conceptually
and at a very abstract level
does the same job.
So here is an actual example where that kind of robust correlation helps.
This is a figure from a 1980 paper in which they measured the so-called Hall conductance of a sample at low temperature and under a high magnetic field; they simply measured the resistance of the sample in a certain configuration.
What they found was a plateau in the resistance that is quite wide and resilient to external parameters. They tested various geometries and various other settings; they even altered the current or [inaudible], and they did not see any change.
This accuracy is so remarkable that it has long been used as a calibration standard. Whatever sample you make, however dirty it is, it does not really matter.
The correlation among the electrons gives you real robustness. We want a qubit that exhibits a similar phenomenon.
So there is a theoretical proposal indicating that if you fabricate a device by putting a very small wire on top of a very special material at low temperature and apply a certain magnetic field, then there will be some hidden states that we can use to build a qubit, and one experimental signature for that purpose has been observed.
So we are optimistic that we will be
able to find the material and setting
such that the many-body electrons
help us to build a qubit.
That's what we are focusing on.
That's how we are going to
build a quantum computer.
So here is a recap of what I have said this morning.
So do not be scared of quantum computing. It is just the square root of probabilistic computing, and to make use of that quantumness, we should remain in the space of amplitudes instead of probabilities; that is why we should use unitary operations instead of their classical analogs.
Naturally, this quantum setting can efficiently simulate quantum systems, a job that is intractable on a classical computer.
Therefore, quantum computer
will likely have
applications into quantum chemistry
and material science.
On top of that, there are some computational problems, including factoring, that become much easier on a quantum computer. The quantumness, however, is more fragile than its classical counterpart.
So, to overcome the overhead from the error correction, the physical qubit fidelity is very important, and we are focusing on a very good qubit instead of many.
Our program covers every aspect of that, from hardware to intermediate programming languages, to higher-level algorithms, and more.
You may wish to check out our websites, which further illustrate the conceptual explanations and our programming toolsets, and you will hear more about it later today.
I'm ready to take your questions.
Thank you.
Yes.
>> So how long do you think
it'll take before you have
a functioning topological qubit?
Is it one year, 10 years?
>> Well.
>> You're not allowed to say.
>> Yeah, I personally don't know.
It's still under investigation.
>> So IBM already has a functioning quantum system in their Cloud, but that's not based on topological qubits; that's magnetic resonance, I think, or I'm not sure.
So do you have
any thought that that's what
they're doing is any good?
>> It is an interesting device, and interesting in that you can actually implement some experiments and apply gates there. But it is only five qubits, or four; I forget the exact number. Maybe I shouldn't say five.
It's anyway a small number, and anything that can be done on their machine can also be done very easily on a simulator.
So scientifically interesting
and we'll see where it goes.
>> You had your example of the ammonia. How is it that a quantum model helps us here? Is it a steady state? Because it doesn't give us anything with respect to the time involved in molecular dynamics.
>> Actually, a good question. In fact, on a quantum computer it is much better to simulate dynamics than to estimate static properties. But the timescale on which the chemical reactions happen is far smaller than the other, thermal, timescale. So that is why we ask about the static numbers, and a quantum computer can help there too, because you can estimate the energy of the intermediate steps.
So a catalyst is like a financing unit. You will make a profit in the end, but to make that profit you have to invest too much, and that is why some processes are too slow in nature. The catalyst is like an intermediate financing facility, where you lend a small amount of money, make a small profit, and keep doing that to reach the eventual goal.
How we lend money and where to lend it depends on the molecular structure, and the typical method would be: "Okay, here is my catalyst. To understand it, let me attach an atom here. How much energy do I get?" You need to calculate that.
If I know the energy, then
I can calculate the rate
the chemical reaction happens along
that direction and
you do it many times.
Here the bottleneck is to
estimate that energy and
quantum computer will
facilitate that process.
You encode your molecule into
a quantum computer and let
it find the ground state
and you make a measurement.
The classical methods exist
but they are very crude.
>> So you're basically finding the global minimum of a high-dimensional function?
>> It is like that, yes.
But it's much better than
the combinatorial case because
it is supposed to be a natural thing.
So you can simulate
what nature does in
a quantum computer and you
find the ground state energy.
>> One more question. I read a lot about quantum-inspired algorithms. As people talk about quantum computing overtaking classical computing, especially for computations thought to have no fast classical solutions, quantum-inspired classical algorithms keep popping up. Is that something that you see in your work and your research? When looking for new algorithms, do you run across quantum-inspired ones?
>> I think you will hear more
about that in 10 minutes.
>> Okay.
>> So I defer that answer to that talk.
>> Okay. Thank you very much.
