[MUSIC PLAYING]
SPEAKER: So this has many of the advantages that [? Shelby's ?] algorithm and framework have. But the difference is that it focuses on quantum linear algebra and quantum algorithms that can be cast in this language.
And it has the benefit
that it makes it easier
to design and
analyze algorithms.
And since this technique is based on and motivated by actual quantum physics insights,
the arising quantum circuits
are already very efficient,
which is a nice feature
of this framework.
And we wrote a paper with my coauthors Yuan Su, Guang Hao Low, and Nathan Wiebe, which is some sort of a superposition between a research paper and a survey paper.
And if you have more
questions about this technique
after my talk, you can find
out more in this paper.
So when you design
quantum algorithms,
it's very convenient to live in a fantasy world, where you have a large, fully fault-tolerant quantum computer and you don't need to worry about any practical details.
You can just focus on the gold
standards of quantum algorithm
designs and eke out exponential
speed-ups or other large
quantum speed-ups.
But these algorithms
that people design
when living in
this fantasy world
are sometimes hard to implement.
And of course, we eventually want to run these algorithms on actual quantum hardware.
So here one often faces the question, like, OK, how practical is your algorithm? Is it actually going to fit on the hardware which we have in the near term?
And if the algorithm
is very complicated,
then it can be really
challenging to analyze.
And it's a really hard
question to answer.
So it's useful to have a theoretical framework which already gives you efficient circuits right away. So I would like to introduce this framework by starting with a bird's eye view of quantum linear algebra.
And I would like to start, as motivation, with the HHL algorithm, which is one of the most influential quantum linear algebra algorithms, where we want to solve a large system of linear equations,
Ax equals b.
And it is very promising because
a quantum computer can nicely
work with exponentially
large matrices.
And therefore, we want to
phrase this problem in a way
that this is a quantum question.
So given the quantum
state proportional to b,
we would like to
prepare a quantum
state which is proportional
to the solution A inverse b.
And so if we look at this
problem from far enough,
we can see that we
have an input matrix A.
And we have some sort
of implementation
of this matrix,
which can either come
from some sparsity
assumptions or really
any sort of assumptions.
The only important
thing is that we
need to have a
quantum circuit, which
is, of course, a unitary matrix if we describe it as a [INAUDIBLE] operator. And we need this A matrix as the top-left corner of this unitary matrix.
And then our algorithm will be
simply to design a new quantum
circuit, U prime.
But the top-left corner is some function of this original matrix.
And in the case of this matrix inversion problem, we just want to apply the 1 over x function, which would invert the matrix.
And this can be achieved
using quantum singular value
transformation.
Anyone who has read the original paper knows that actually implementing this HHL algorithm in its original version is much more challenging. It uses phase estimation and other things that we can avoid this way.
And it turns out
that this rough view
that I just presented
you actually
fits many quantum algorithms.
You can view in the same way the
optimal Hamiltonian simulation
algorithm of Low
and Chuang, quantum
walks, fixed point, oblivious
amplitude amplification, HHL
and variants of it like
regression problems,
semi-definite program solving,
linear program solving,
and all sorts of other
machine learning problems.
So yes, it's widely applicable.
And now I would like to turn a little bit more to why we choose this so-called block-encoding, where we look at these matrices as the top-left corner of a unitary matrix.
This is a very useful point of view, because any matrix, any complex matrix with operator norm at most 1, can in principle be embedded into a unitary matrix like that. And indeed, there are many settings where you can efficiently construct a circuit which does this.
And the simplest case is
when A is a unitary matrix.
Then of course, it is a block-encoding of itself, since the matrix is already unitary.
But if your matrix is sparse with efficiently computable matrix elements, then you can also do such circuits. Or if the matrix is stored in some clever data structure in a quantum RAM, you can do it.
Or it's a density operator that
you can prepare purification
of.
Or maybe most interestingly
for physicists, you
have a POVM operator M, where you can somehow sample from a random variable whose mean is the trace of rho M. So it's like an expectation value that we can in some way measure.
Then you can also implement
the block-encoding
of this POVM operator.
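As a classical sanity check of the simplest statement here, this numpy sketch (editor-added, not part of the talk; the helper names are illustrative) embeds any matrix with operator norm at most 1 as the top-left block of a unitary, via the standard dilation U = [[A, sqrt(I-AA*)], [sqrt(I-A*A), -A*]]:

```python
import numpy as np

def unitary_dilation(A):
    """Embed a contraction A (operator norm <= 1) as the top-left
    block of a unitary: U = [[A, sqrt(I-AA*)], [sqrt(I-A*A), -A*]]."""
    n = A.shape[0]
    I = np.eye(n)

    def psd_sqrt(M):
        # principal square root of a positive semidefinite matrix
        w, V = np.linalg.eigh(M)
        return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

    top_right = psd_sqrt(I - A @ A.conj().T)
    bottom_left = psd_sqrt(I - A.conj().T @ A)
    return np.block([[A, top_right], [bottom_left, -A.conj().T]])

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A /= 2 * np.linalg.norm(A, 2)        # rescale so the operator norm is <= 1
U = unitary_dilation(A)
assert np.allclose(U.conj().T @ U, np.eye(6))   # U is unitary
assert np.allclose(U[:3, :3], A)                # its top-left block is A
```

This only checks the linear-algebra fact; whether the dilation has an efficient circuit is exactly the question the settings in the talk address.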
And the nice thing is that
once you have block-encodings,
you can combine them.
If you have matrices A-1 to A-m, then you can just implement a convex combination of those.
And if you have
two, matrix A and B,
then you can just
basically literally
take the product of the
corresponding unitaries.
And then you can get a
block-encoding of A times B.
So it's a bit more complicated than that, but it's still very simple.
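The product statement can be checked numerically. In this editor-added numpy sketch (the index bookkeeping is my own illustration), each block-encoding gets its own ancilla qubit; multiplying the two dilation unitaries and projecting both ancillas onto zero picks out the matrix product:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(d):
    # QR decomposition of a random complex matrix gives a unitary
    M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    Q, _ = np.linalg.qr(M)
    return Q

n = 2                       # system dimension
UA = random_unitary(2 * n)  # block-encodes A = its top-left n x n block
UB = random_unitary(2 * n)  # block-encodes B likewise
A, B = UA[:n, :n], UB[:n, :n]

# Give each encoding its own ancilla: full space = anc_B x anc_A x system.
UA_full = np.kron(np.eye(2), UA)    # UA acts on (anc_A, system)
# UB acts on (anc_B, system) and trivially on anc_A:
UB_t = UB.reshape(2, n, 2, n)
UB_full = np.einsum('isjt,kl->iksjlt', UB_t, np.eye(2)).reshape(4 * n, 4 * n)

prod = UB_full @ UA_full
# Projecting both ancillas onto |0> leaves the top-left n x n block = B @ A.
assert np.allclose(prod[:n, :n], B @ A)
```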
And so once we have this nice
view of block-encoded matrices,
we can do linear algebra with
them, addition, multiplication.
And we have quantum singular value transformation. And the main theorem about this is the following.
So suppose that we have a polynomial map P, which is a bounded polynomial on the interval minus 1 to 1, mapping it into minus 1 to 1. And it's an odd polynomial, meaning that only the odd powers have non-zero coefficients.
Then we have the following.
If we have a block-encoding of a matrix A, which always has a singular value decomposition, as depicted on this slide, then we can turn it into a new block-encoding with a new quantum circuit, where the top-left corner has a singular value decomposition with the same singular vectors, but the singular values are transformed according to this polynomial.
And so if the polynomial has degree d, then this quantum circuit essentially uses d rotations, interleaved in a clever way.
And this is the quantum circuit.
So it has basically d blocks.
And in each block, you apply
the unitary or its inverse,
which is the block-encoding.
And then in each block you apply a CNOT gate, then a rotation on a single qubit, and then another CNOT to follow it. And the only thing that depends on the polynomial itself is this angle sequence, phi-1 to phi-d, which is acting on the single qubit.
And a similar statement holds for even polynomials. And if you don't like to think about singular values, then you can also think about eigenvalues, in case the matrix is actually Hermitian.
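What the theorem does to the singular values can be checked classically. This editor-added numpy sketch applies the odd polynomial P(x) = x^3 (my choice for illustration) to the singular values of a matrix; for this monomial the singular value transform has the closed form A A-dagger A:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
A /= np.linalg.norm(A, 2)          # singular values now lie in [0, 1]

W, s, Vh = np.linalg.svd(A)        # A = W diag(s) Vh
# Singular value transformation by the odd polynomial P(x) = x^3:
# keep the singular vectors, apply P to each singular value.
P_sv = W[:, :3] @ np.diag(s ** 3) @ Vh
# For odd monomials there is a closed form: W diag(s^3) Vh = A A† A.
assert np.allclose(P_sv, A @ A.conj().T @ A)
```

The quantum circuit in the talk produces a block-encoding of exactly this transformed matrix, without ever computing the SVD.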
And so now I would like to show
how it solves the HHL problem.
If we have the matrix A, then it has some singular value decomposition. And the pseudoinverse, which is the generalized inverse of the matrix A, denoted by A-plus, has the decomposition where we need to reverse the order of the unitaries in this singular value decomposition. And we need to take the inverse of the diagonal matrix elements that are non-zero.
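This recipe for the pseudoinverse, reverse the unitaries and invert only the non-zero singular values, can be spelled out classically. An editor-added numpy sketch, checked against numpy's built-in pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))   # rank-2, 4x3

W, s, Vh = np.linalg.svd(A, full_matrices=False)
# Invert only the non-zero singular values (here: above a small cutoff)
# and reverse the order of the unitaries: A+ = V diag(1/s) W†.
inv_s = np.where(s > 1e-10, 1 / s, 0.0)
A_plus = Vh.conj().T @ np.diag(inv_s) @ W.conj().T
assert np.allclose(A_plus, np.linalg.pinv(A))
```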
So if this is the spectrum of the matrix, then we want to implement the 1 over x function on it. But this is not a bounded function, and we need to use bounded functions. It's not even continuous, so we can't do this in general.
But if we assume that the smallest non-zero singular value of the matrix is not too small, or in other words, that the effective condition number kappa is not too large, then we can just take a polynomial which only approximates this function on the relevant interval, above 1 over kappa. And then, using this as our approximating polynomial, we get by our quantum singular value transformation a very efficient algorithm, which works well in different scenarios and has all sorts of nice features.
And the nice thing is
that the same story
can be told if you just
use a different function
and approximate that
function with polynomials.
Then we get a totally
different algorithm.
For example, if we
use sine x, cosine x,
we get Hamiltonian simulation.
If we use an
exponential decay, it
gives rise to Gibbs sampling.
Or, as it turns out, singular value transformation leads to a generalization of Grover search, amplitude amplification, and quantum walks. As a matter of fact, if you look at these algorithms from this point of view, then it turns out that these algorithms are just doing singular value transformation according to a Chebyshev polynomial.
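The Chebyshev claim is easy to verify on a single qubit (an editor-added numpy check; the name W is illustrative): the d-th power of the "signal" rotation with cos(theta) = x has exactly T_d(x) = cos(d arccos x) in its top-left corner:

```python
import numpy as np

def W(x):
    # single-qubit signal rotation: e^{i arccos(x) X}, so cos(theta) = x
    s = np.sqrt(1 - x ** 2)
    return np.array([[x, 1j * s], [1j * s, x]])

x, d = 0.3, 5
top_left = np.linalg.matrix_power(W(x), d)[0, 0]
# The top-left entry is the degree-d Chebyshev polynomial T_d(x).
assert np.allclose(top_left, np.cos(d * np.arccos(x)))
```

Iterating the basic circuit d times therefore transforms the amplitude x by T_d, which is exactly the steep amplification curve described next.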
So you can see, as you use more and more iterations of the original circuit, then in this graph, where the horizontal axis is the amplitude of the original success probability, it quickly rises. So this really amplifies. Even if you have a very small amplitude, very close to zero, it goes to almost one. And as you have more and more iterations, the curve gets steeper.
But there is a problem with this approach, the so-called overshooting: if your original probability was a bit higher than you expected, then you might actually overamplify, and you don't get such a high success probability.
But from this point of view, it's very easy to fix this problem: instead of using the Chebyshev polynomial, just use an approximating polynomial which quickly jumps up to almost one. And then, using this polynomial, you can get a so-called fixed-point amplitude amplification algorithm, where there is no risk of overshooting.
So all of this happens using
this simple quantum circuit.
If we have a unitary, which potentially encodes a Hamiltonian or some Markov chain, then the same circuit accepts these single-qubit rotation angles, and depending on which angles we choose, we get very different algorithms.
If we use a specific sequence of angles, we get Hamiltonian simulation. If we change the angles, we might get the Gibbs sampler. If we again change the angles, we might get the quantum walk.
And it's the same circuit.
So it's very interesting how the single-qubit rotations change the landscape of the whole algorithm and the whole quantum circuit.
And it's related to the talks from yesterday, where we saw some similar effects, maybe with scrambling unitaries or in these quantum-gravity-in-the-lab experiments. It's all related to this phenomenon.
And so it sounds pretty
magical, I guess.
So I just wanted to give a little bit of insight.
And I believe that
[INAUDIBLE] will talk more
about these techniques.
The underlying technique
is called quantum signal
processing.
And I just would like to
connect the dots there.
So the question that
Low, Yoder, and Chuang
asked in their beautiful
paper from four years ago
was the following.
Suppose that you have a unitary which implements an x rotation by some unknown angle, which now, for simplicity, I denote by arc cosine of x. Then it has a matrix form like this.
And then you can ask the question: suppose that I can apply some z rotations with angles that I can control, then what sort of unitaries can I get as a result? And the characterization of this problem gave rise to the theorem that I described.
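The answer of Low, Yoder, and Chuang is that the top-left entry is always a polynomial of degree at most d in x. This editor-added numpy sketch (helper names mine) checks that numerically: for random z-rotation angles, sampling the top-left entry at d + 1 points pins down a degree-d polynomial that then predicts its value everywhere else:

```python
import numpy as np

def qsp_top_left(x, phis):
    """Top-left entry of e^{i phi_0 Z} W(x) e^{i phi_1 Z} ... W(x) e^{i phi_d Z}."""
    s = np.sqrt(1 - x ** 2)
    Wx = np.array([[x, 1j * s], [1j * s, x]])
    U = np.diag([np.exp(1j * phis[0]), np.exp(-1j * phis[0])])
    for phi in phis[1:]:
        U = U @ Wx @ np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
    return U[0, 0]

rng = np.random.default_rng(5)
d = 4
phis = rng.uniform(0, 2 * np.pi, d + 1)   # d signal rotations, d + 1 z-rotations
xs = np.linspace(-0.9, 0.9, d + 1)        # d + 1 points pin down a degree-d poly
vals = np.array([qsp_top_left(x, phis) for x in xs])
# Fit real and imaginary parts with degree-d polynomials.
cr = np.polyfit(xs, vals.real, d)
ci = np.polyfit(xs, vals.imag, d)
x_test = 0.123
pred = np.polyval(cr, x_test) + 1j * np.polyval(ci, x_test)
assert np.allclose(pred, qsp_top_left(x_test, phis))
```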
And the way it gives rise to this theorem is that we can view this, again, from a bird's eye perspective: the input is a matrix where the top-left corner is just a one-by-one matrix, the number x. We have some modulation, some z rotations. And the output will be a new quantum circuit where the top-left corner is P of x.
And if I just put such matrices next to each other many times, with x and y different numbers, then I get a two-by-two example. And if I look at the same matrix and just rearrange the tensor factors, then I get the diagonal matrix with entries x and y.
And of course, the mathematics works the same way.
And now you can
jump one more step.
Actually, any matrix
can be diagonalized
using the singular
value decomposition
and the same technique works.
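This jump from one number to a diagonal matrix can be seen numerically. In this editor-added numpy sketch (the direct-sum construction is my own illustration), each diagonal entry carries its own signal rotation; iterating the combined unitary transforms each entry independently by the same Chebyshev polynomial, which is exactly what extends the scalar trick through the SVD:

```python
import numpy as np

def W(x):
    # single-qubit signal rotation with cos(theta) = x
    s = np.sqrt(1 - x ** 2)
    return np.array([[x, 1j * s], [1j * s, x]])

x, y, d = 0.3, -0.6, 4
# Block-encoding of diag(x, y) on ancilla x system: W(x) acts on the
# sys = 0 sector and W(y) on the sys = 1 sector (a direct sum).
E0, E1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
U = np.kron(W(x), E0) + np.kron(W(y), E1)
assert np.allclose(U[:2, :2], np.diag([x, y]))   # top-left block is diag(x, y)

top = np.linalg.matrix_power(U, d)[:2, :2]
# Each diagonal entry is transformed by the same Chebyshev polynomial T_d.
assert np.allclose(top, np.diag(np.cos(d * np.arccos(np.array([x, y])))))
```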
So this is a very brief sketch of why this procedure may work.
And now, just as a summary, I would like to briefly sketch what types of speed-ups this framework can recover.
So there are
exponential speed-ups
that arise, for example, from
dimensionality of the Hilbert
space, which appeared in
the Hamiltonian simulation
algorithms.
Or from precise polynomial approximations, which improved the HHL algorithm, for example.
There are quadratic speed-ups, which stem from the fact that singular values are in general the square roots of probabilities, like in Grover search. Or singular values are easier to distinguish in some sense, which gives rise to amplitude estimation. And in a way, close-to-1 singular values are more flexible in this technique, which gives us the quadratic speed-up in quantum walks.
And there are really
many other applications
but for the sake of time,
I would like to stop here.
[MUSIC PLAYING]
