[MUSIC PLAYING]
ROLANDO SOMMA: So
today I will talk
about more quantum
algorithms for systems
of linear equations.
After being almost 18
years in this field,
I'm still positive that
quantum computers will one day
be able to solve problems
and applications that
are beyond reach of
classical computers.
And quantum algorithm
design appears
to be the main reason
why there are so
many investments in
these technologies.
But anyone that's
been in this field
knows that building
new quantum algorithms,
improving upon
the existing ones,
and even finding
applications for them
is really hard, really hard.
So here we are looking
for needles in haystacks,
basically.
And today I will talk in
particular about quantum
algorithms for linear
algebra problems.
So some of the
ideas that I present
have been discussed before, but
I will go into more detail.
A couple of early
relevant works for my talk
will be these two
papers from 2017 and 2019.
And let me introduce to you
the linear systems problem.
This is a problem that
appears throughout science
and engineering.
We are given a matrix A
specified in some way.
This matrix is of dimension
N. You may think of this big N
as being really large, OK,
exponentially large maybe
in some problem size.
We're given some vector b,
again, specified in some way.
And I'll explain
that more later.
And the linear system
problem is basically
finding a vector x that solves
the equation Ax equal to b.
All right.
So classical algorithms
for this problem
typically take time, which is
polynomial in the dimension
of the matrix.
And one of the best known
general purpose classical
algorithms, conjugate
gradient, the scaling
goes linearly in N
and linearly in kappa,
which is known as the condition
number of the matrix, which
basically is a way to
quantify how far a matrix is
from being singular.
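For intuition, here is a small classical sketch of that baseline, using NumPy and a textbook conjugate-gradient loop; the matrix, sizes, and shift are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy symmetric positive-definite system (sizes are illustrative only)
N = 200
M = rng.normal(size=(N, N))
A = M @ M.T + N * np.eye(N)   # the shift keeps the matrix well conditioned
b = rng.normal(size=N)

# kappa = ratio of largest to smallest singular value; it measures
# how close A is to being singular (non-invertible)
kappa = np.linalg.cond(A)

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook CG: each iteration costs one matrix-vector product,
    and the iteration count grows with the condition number of A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x = conjugate_gradient(A, b)
```

Even this classical sketch makes the two cost drivers of the talk visible: the per-iteration cost scales with N, and the number of iterations with kappa.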
So the quantum
linear system problem
is a modification
of the classical linear
system problem.
So this becomes now amenable
to quantum computers.
So the idea is that
given the previous system
of linear equations, now
our goal is not really
solving the system but
preparing a quantum
state that encodes
information about
the solution of the system.
So this quantum state
x is proportional
to the solution vector of
the equation Ax equal to b.
So in more detail, I mean,
the problem is defined such
that we want to prepare a
quantum state that is epsilon
close to the exact state x.
So this epsilon
quantifies the error.
This distance is known
as the trace distance.
Basically, if I
were to prepare this state
on a quantum
computer, then the probability
of distinguishing it
from the true state
that I wanted to prepare
would be at most
of order epsilon.
So all known results
on this problem that have
been discussed today
refer to this formulation.
So why is this
interesting at all, right?
So, OK, well on one side linear
systems appear everywhere.
On the other, we know
that quantum computers
are known to provide
exponential quantum
speedups for many problems.
So it is natural
to ask and understand
what they can do in linear
algebra problems such as these.
And beyond linear
systems, I would
say that studying new
problems like this one
sometimes results in new
algorithmic primitives that are
used in other quantum algorithms.
And this has been indeed the
case for the linear systems
problem.
So this quantum
version of the problem,
however, is only
useful for computing,
for example, expectation values
on the solution of this system,
but not for obtaining
the full vector because that
would require complexities
at least linear in N.
And we want to avoid any
complexity that scales
with the dimension of the matrix.
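To make that distinction concrete, here is a small classical illustration (toy sizes, and a hypothetical observable) of what the normalized solution state does and does not give you:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy instance: |x> is the normalized solution of Ax = b
N = 8
M = rng.normal(size=(N, N))
A = M @ M.T + np.eye(N)         # Hermitian (here real symmetric), invertible
b = rng.normal(size=N)

x = np.linalg.solve(A, b)
ket_x = x / np.linalg.norm(x)   # the quantum state carries only this unit vector

# What the quantum algorithm can give you cheaply: expectation values <x|O|x>
O = np.diag(np.arange(N, dtype=float))   # some observable, hypothetical choice
expval = ket_x @ O @ ket_x

# What it cannot give you cheaply: all N entries of x, since reading them
# out costs at least linear-in-N many measurements.
```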
So to understand
quantum algorithms,
we need to understand
the complexity.
So many times we
resort to what we
call query complexity in which
we assume, for example, that we
have access to procedures
that provide information
about, for example, in this case,
the matrix A or the vector b.
This is the way that we
can specify the problem,
the instance of the problem.
So in this case, a query
for A, for example,
would compute a
matrix element of A.
A query for b would prepare
an initial state that's
proportional to the vector b.
So this kind of sets
the rules of the game.
The type of algorithms
that we'll build
will be based upon
these two procedures.
For simplicity, I will
assume that these procedures can
be implemented in
constant time, like
with a constant number
of two-qubit gates.
And I will not
discuss in detail
the inner workings
of these procedures.
So by setting this
query model,
then I'm setting the
rules of the game
for the type of
algorithms I will build
for this particular problem.
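As a toy picture of these two access procedures, here are classical callables standing in for the quantum oracles; the 2x2 instance is made up:

```python
import numpy as np

# Toy stand-ins for the two oracles (classical callables, not quantum circuits)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 1.0])

def query_A(i, j):
    """Oracle for A: a query returns the matrix element A[i, j]."""
    return A[i, j]

def query_b():
    """Oracle for b: 'prepares' the state |b> proportional to the vector b."""
    return b / np.linalg.norm(b)
```

The algorithms below are then counted in terms of how many such queries they make.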
So what are the known results
about the quantum linear system
problem?
Well, we know the
famous HHL algorithm.
Its asymptotic
scaling is of this order,
in addition to hidden
constants, which may
be important in applications.
But this was basically quadratic
in the condition number
and logarithmic in the dimension
of the matrix, which is what
made the algorithm exciting.
There was a later
improvement by Ambainis, who
introduced the notion of
variable-time amplitude
amplification and was able
to reduce the complexity
to something that was almost
linear in the condition number.
And this can be proven
optimal in the query model.
More recently, my paper
with Childs and Kothari,
we were able to reduce the
scaling in terms of precision
to something that
was poly-logarithmic.
So this was an
exponential improvement.
And I will discuss
this in more detail.
To this end, we
have to introduce
the notion of linear combination
of unitaries, which basically
replaces the phase estimation
step in the HHL algorithm.
And there's been a couple
of other algorithms based
on adiabatic evolutions.
This paper in 2018
is, again, inspired
by adiabatic evolutions:
a randomized algorithm.
We were able, again, to obtain
linear scaling in the condition
number.
So it's almost optimal.
There were other improvements
by An and Lin, where they
basically derandomized
the algorithm that we presented.
I should point out that
there's been other algorithms.
We heard today about
other approaches.
The core ideas of those
approaches are similar to
this idea of linear combination
of unitaries, in which
they aim at approximating
the inverse function,
for example.
So why are these results
important at all?
Well, so on one side,
it's nice to know
that we were able to
obtain optimal algorithms
for the quantum
linear system problem.
We knew about the lower bound.
But we didn't know whether
we could achieve it.
These improvements allow us to
prove some other true quantum
speedups, and I'll explain those
a little bit more later.
And for example,
the approach based
on linear combination
of unitaries,
or the related approach based
on quantum signal processing,
allows us to
reduce the complexity
in terms of precision, which
is very important if we need
to perform high precision
calculations at the end
on the state that we prepare.
And on the other side, this
adiabatic-inspired algorithm,
because it was so
simple, held a record,
at least until
last year, I think,
as the biggest
implementation solving
a linear system on a quantum
computer, just an 8x8 system.
But it opened the
possibility of dealing
with larger problems like this.
All right.
So as I mentioned, right,
I mean, on one side
we're claiming it is
an exponential speedup
because the
complexities are only
poly-logarithmic in the
dimension of the matrix.
But in reality, these algorithms
do not output the full vector.
OK.
So when looking
for applications,
this becomes challenging.
People have looked at these.
Here I mentioned a few examples.
We know that, for
example, we can
use these algorithms
for computing
the resistance of a network.
The speedup there is not
exponential; it's polynomial.
There's another application, for
example, computing the hitting
time of a Markov chain.
Again, the speedup we saw
there was polynomial, not
exponential.
And there's been applications
in machine learning and solving
certain linear differential
equations, where
the speedups are truly unknown.
OK, it will depend on the
type of [INAUDIBLE] what type
of speedup we can have.
So finding applications,
I mean, it's kind of--
finding applications is hard.
We kind of have the hammer.
But we need to look
for more nails.
All right.
So I'll give you a quick
review of the HHL algorithm.
And then I show how to improve.
All right, so let's assume
that we have a matrix A
with a spectral decomposition
given by eigenvectors vj
of the matrix and
eigenvalues lambda j.
All right, and then we'll assume
that kappa is my condition
number.
So the eigenvalues stretch
all the way from one over kappa
to one.
All right.
So if kappa is very large,
then this lowest eigenvalue
will be very small.
And the matrix will
be harder to invert.
So the HHL algorithm starts
by preparing the initial state
b that encodes the vector
b in the linear system.
OK.
Without loss of
generality, this
is written in the spectral
decomposition of
A. It uses the so-called phase
estimation algorithm to
perform a map in which now I
have a register of qubits
that gives me some eigenvalue
estimates.
OK, then I apply a one-qubit
conditional rotation
to perform the map, in
which I rotate the one qubit
depending on what the
eigenvalue was.
All right, I undo the
phase estimation step,
and at the end basically
I have something,
a quantum state that has two
branches: one branch, denoted
by zero here, which really
implemented something that
was proportional or approximate
to the inverse of the matrix
(the state that we
want), and something
that I call the bad part of
the state, which is labeled
by the ancillary qubit being one.
We can use a well-known
technique
called amplitude
amplification, which is
used in Grover's algorithm,
to basically get rid
of the bad part of
the state
and boost the
probability of getting
the right part of the
state, the one we want.
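In the two-dimensional good/bad picture, one round of amplitude amplification is just a rotation, which is easy to simulate classically; this sketch (with a made-up initial amplitude) shows the quadratic saving over naive repetition:

```python
import numpy as np

# The state lives in the 2D span of the 'good' branch (ancilla 0, holding
# the inverted state) and the 'bad' branch (ancilla 1).
a = 0.05                                   # illustrative amplitude; p = a^2
theta = np.arcsin(a)
state = np.array([a, np.sqrt(1 - a**2)])   # [good, bad] amplitudes

def aa_round(state):
    """One amplification round (reflect about bad, then about the initial
    state) rotates the state by 2*theta toward the good branch."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [-s, c]]) @ state

rounds = int(np.pi / (4 * theta))          # ~ pi/(4a) rounds suffice
for _ in range(rounds):
    state = aa_round(state)
p_good = state[0] ** 2
# Naive repetition would need ~ 1/a^2 = 400 trials to see the good branch;
# amplification needs only ~ 1/a ~ 16 coherent rounds.
```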
The complexity of
the HHL algorithm is
mainly given by how many rounds
of amplitude amplification
I need and what is the
complexity of the phase
estimation step. A
detailed analysis
gives the scaling that I
gave at the beginning, which
is almost quadratic in
the condition number
and poly-logarithmic in the
dimension of the matrix.
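Here is a classical mock-up of that pipeline on a toy Hermitian matrix: exact eigenvalues are used in place of phase estimation, so this illustrates only the map, not its cost, and the matrix, sizes, and constant C are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy Hermitian system; phase estimation would reveal eigenvalues lambda_j,
# and the conditional rotation puts amplitude C/lambda_j on the ancilla-0 branch.
N = 4
M = rng.normal(size=(N, N))
A = (M + M.T) / 2 + 5 * np.eye(N)      # Hermitian, comfortably invertible
b = rng.normal(size=N)
ket_b = b / np.linalg.norm(b)

lam, V = np.linalg.eigh(A)
beta = V.conj().T @ ket_b              # |b> = sum_j beta_j |v_j>

C = np.min(np.abs(lam))                # C <= |lambda_j| keeps amplitudes <= 1
good = V @ (beta * (C / lam))          # ancilla-0 branch: C * A^{-1} |b>
p_good = np.linalg.norm(good) ** 2     # success prob. before amplification

ket_x = good / np.linalg.norm(good)    # the state after postselection
```

The success probability p_good is what amplitude amplification then has to boost, at a cost of roughly one over its square root many rounds.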
So how can we improve
such an algorithm?
OK, so the first idea that
came by Ambainis in 2012
was based on this technique
of variable-time amplitude
amplification.
So rather than doing
amplitude amplification in one
step as the HHL
algorithm does, OK,
what variable-time
amplitude amplification does
is split the state
into branches depending on
what the eigenvalues are,
larger and smaller eigenvalues,
and do amplitude amplification
on each of the branches
sequentially.
OK, so this allows you to
go from a quadratic scaling
in the condition
number to something
that is linear, something
that I don't have time
to discuss in more
detail.
Another idea, which
is in our 2017 paper,
in order to improve upon
the precision scaling,
was to approximate the inverse
operator by something else.
OK, instead of doing
phase estimation,
for example, we can use
a Fourier approach,
OK, that approximates
the inverse of A
as a linear combination
of unitaries.
So these unitaries
here would correspond,
for example, to the evolution
under the matrix A
for some time.
One can show
that by using this Fourier
approach, that time is at most
logarithmic in the inverse
of the precision parameter.
And this is what
allows us basically
to prove an exponential
improvement in terms
of precision.
So other approximations
can be used here,
such as polynomial approximations,
as we saw before.
I picked the Fourier
transform as one of them
because it gives us the
interesting results.
But that paper also
contains approximations
based, for example, on
Chebyshev polynomials.
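The Fourier-type identity behind this approach can be checked numerically at the scalar level; discretizing the integral gives an explicit linear combination of phases exp(-i x y z), which become Hamiltonian evolutions exp(-i A y z) when the number x is replaced by the matrix A. The truncation and grid parameters below are illustrative choices:

```python
import numpy as np

def inv_fourier(x, ymax=100.0, ny=1001, zmax=8.0, nz=401):
    """Numerically check
        1/x = (i/sqrt(2*pi)) * int_0^inf dy int_R dz
              z * exp(-z**2/2) * exp(-1j*x*y*z),
    truncated and discretized; each term exp(-1j*x*y*z) is one 'unitary'."""
    y = np.linspace(0.0, ymax, ny)
    z = np.linspace(-zmax, zmax, nz)
    dy, dz = y[1] - y[0], z[1] - z[0]
    Y, Z = np.meshgrid(y, z, indexing="ij")
    terms = Z * np.exp(-Z**2 / 2) * np.exp(-1j * x * Y * Z)
    return ((1j / np.sqrt(2 * np.pi)) * terms.sum() * dy * dz).real
```

The y-truncation needed grows with 1/x, i.e., with the condition number, while the Gaussian tails shrink exponentially, which is the source of the logarithmic-in-1/epsilon behavior mentioned above.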
So once we do have this linear
combination of unitaries,
how is it that we are
going to implement it?
So there is this nice
quantum primitive
that we developed for
Hamiltonian simulation methods.
Basically, what it
does is map a state
to a linear combination
of two unitaries V1
and V2 by using this quantum
primitive that we have here.
So the operation B,
basically what it does here,
is rotate an ancilla such that
the state after that rotation
carries the coefficients
alpha and beta.
And we undo such
an operation at the end.
When we look at the
state at the end,
we have two components,
OK, two branches.
And one of the branches is
the state that we wanted.
So we may use, again,
amplitude amplification
to boost that amplitude up.
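A minimal classical sketch of this primitive for two unitaries, done with plain state-vector arithmetic (nonnegative coefficients assumed; the labels B, V1, V2 follow the talk):

```python
import numpy as np

def lcu_apply(alpha, beta, V1, V2, psi):
    """Apply (alpha*V1 + beta*V2)|psi> via the LCU primitive:
    B prepares ancilla amplitudes ~ sqrt(alpha), sqrt(beta); a controlled
    'select' applies V1 or V2; undoing B leaves the target combination on
    the ancilla-0 branch.  Assumes alpha, beta >= 0."""
    s = alpha + beta
    w = np.array([np.sqrt(alpha / s), np.sqrt(beta / s)])
    n = len(psi)
    full = np.kron(w, psi)                        # B|0> (x) |psi>
    sel = np.zeros((2 * n, 2 * n), dtype=complex)
    sel[:n, :n], sel[n:, n:] = V1, V2             # |0><0|(x)V1 + |1><1|(x)V2
    full = sel @ full
    B = np.array([[w[0], -w[1]], [w[1], w[0]]])   # rotation with B|0> = w
    full = np.kron(B.conj().T, np.eye(n)) @ full  # undo B on the ancilla
    return s * full[:n]                           # ancilla-0 branch, rescaled

# Example: 0.7*I + 0.3*X acting on |0> gives the vector [0.7, 0.3]
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
out = lcu_apply(0.7, 0.3, I2, X, np.array([1.0, 0.0]))
```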
All right.
So while we proved that
this linear combination
of unitaries approach has
optimal asymptotic complexity,
it still requires
many ancillary qubits.
This can be further
improved by, for example,
using the techniques of
quantum signal processing.
But in this case, it still
would require many ancillas.
So we resolved this issue
of having many ancillas
by providing a new quantum
algorithm, inspired
by adiabatic evolutions, and I'll
go through it fairly quickly.
So this algorithm is based
on a randomization method
that we developed with Boixo
and Knill in a 2009 paper.
The idea is similar to the
idea of adiabatic evolutions.
There is a Hamiltonian
path, an interpolating path,
such that the eigenstate
of the first Hamiltonian
can be mapped
along the evolution
to the eigenstate of
the final Hamiltonian.
And we chose these
Hamiltonians such
that the eigenstate
of the final one
is the desired state that
solves the quantum linear system
problem.
Rather than doing
these evolutions
in continuous time,
this randomization method
basically, what it
does, is pick
Hamiltonians along the path
and evolve with them
for a random time.
This simulates measurements.
OK.
And these measurements, basically,
due to the Zeno effect,
will transform one quantum
state to the next one
with high probability.
So by picking the
discretization of the path right,
then we can ensure that,
with high probability,
we will evolve
towards the state we
want to prepare here.
So we could show that, for the
Hamiltonians that we chose,
the minimum spectral gap
goes with the inverse
of the condition number.
And in fact, this set of
Hamiltonians
corresponds to
linear systems of increasing
complexity.
So at the beginning I'm
solving a very simple system.
At the end I'm solving
the system that I want to.
And in fact, the eigenstate
of the final Hamiltonian
is the solution of the
quantum linear system problem.
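The Zeno mechanism itself can be illustrated classically with density matrices; the Hamiltonians below are toy 2x2 choices, not the actual linear-systems construction from the paper:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
theta = np.pi / 3                      # toy 60-degree rotation of the eigenbasis
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
H0, H1 = Z, R @ Z @ R.T                # interpolate between these two

def ground(H):
    return np.linalg.eigh(H)[1][:, 0]  # eigenvector of the lowest eigenvalue

def zeno_step(rho, H):
    """Average effect of evolving under H for a random time: coherences
    between different eigenspaces of H are wiped out, as in a measurement."""
    vals, vecs = np.linalg.eigh(H)
    d = np.diag(np.diag(vecs.conj().T @ rho @ vecs))
    return vecs @ d @ vecs.conj().T

steps = 40
g0 = ground(H0)
rho = np.outer(g0, g0.conj())          # start in the eigenstate of H0
for k in range(1, steps + 1):
    s = k / steps
    rho = zeno_step(rho, (1 - s) * H0 + s * H1)

g1 = ground(H1)
fidelity = float(np.real(g1.conj() @ rho @ g1))   # close to 1 for many steps
```

With a fine enough discretization the state tracks the eigenpath, and the number of steps needed is governed by the minimum spectral gap, hence by the condition number in the linear-systems construction.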
When you look at the
scaling of this algorithm,
it scales almost linearly
with the condition number,
again, poly-logarithmically
in the dimension,
and linearly with the inverse
of the precision parameter.
And this technique,
as I said before,
can then be derandomized by a
purely adiabatic approach that
was provided by An and
Lin in their 2019 paper
and was discussed before.
All right.
So the nice thing is that
the asymptotic complexity
of this adiabatic-inspired
approach
is almost optimal if we are
looking at constant precision.
The algorithm is built upon
simple Hamiltonian evolutions.
I didn't discuss
the Hamiltonians.
But those Hamiltonians
are fairly simple.
And it doesn't need
complicated techniques
such as variable-time amplitude
amplification that
require many ancillary qubits.
In fact in here, only one
additional ancillary qubit
was needed.
And while the
complexity was linear
in the inverse of
the precision, it
can also be made
logarithmic in this quantity
by using one of the
many known methods,
for example, eigenpath
traversal or quantum state
filtering, as we had
in the previous talk.
So there's been other proposals
for this quantum linear system
problem.
OK, many of those
proposals were based
on applying variational and
related quantum algorithms
to this problem.
And some of the claims
are that these approaches
may be useful for noisy
quantum technologies,
for example, for NISQ devices,
may solve, for example,
the quantum linear
system problem,
and may not need quantum
error correction, and so on.
But when you take
a closer look
at such proposals,
it becomes evident
that on one side
they require a costly
optimization loop;
remember, these
variational approaches aim
at minimizing some
cost function.
All right.
So we have to have
some sort of feedback
in which the information
that we got from computing
some expectation value has to
be fed into the initial state
preparation, repeating
this many times.
At the same time,
and unrelated to this,
these cost functions have
to be computed at
very high precision.
And that precision has
to depend on parameters
such as the condition
number of the matrix.
All right.
So when you put
everything together
and you look at random
instances of this problem,
you can see that this has
an unknown or even poor
performance.
And the performance
could be even worse
than that from the algorithms
that I described today.
In fact, I would say that
these are
two of the main reasons that
basically will kill many
of the proposals for using
NISQ devices for many problems
in linear algebra.
And these are two of
the things that we
will have to really
look into when
we design quantum
algorithms that
are amenable to NISQ devices.
All right.
So I'm wrapping up.
And so, what I want
to say in concluding
is that quantum
computing is promising,
and we know that
there are quantum
algorithms for some
problems in linear algebra.
I described some
quantum algorithms
for solving problems
related to linear systems
with a number of improvements
in terms of precision,
condition number, and so on.
The complexity is
logarithmic in the dimension.
All right.
But the techniques
that I developed
can also be used in other
algorithms and problems,
for example, in
Hamiltonian simulation,
as was the case
for these techniques
that we developed here.
I also presented
a few applications
for this including
machine learning.
OK.
But it would be really
nice to have many more.
And again, as I said before,
I think we do have the hammer.
But it would be nice to
have more nails
for this problem.
All right.
Thank you very much.
