[MUSIC PLAYING]
NICHOLAS RUBIN: Hello, everyone.
Thank you again for joining us.
I'm excited to kick off this
series of lightning talks
providing updates on
our group's experiments.
It's probably not
surprising to many of you
who are familiar with
our group that one
of the very first experiments we
ran was a chemistry experiment.
I think it's actually fitting that we kick off our move to beyond-classical computation in chemistry with the bedrock of the field, doing Hartree-Fock theory on the Sycamore processor.
So this experiment
actually starts
a broader program which
is guided by the question,
are we on a path to quantum chemistry simulation advantage using a quantum computer?
So there's certainly
been a lot of buzz
around simulating chemistry and
experimental demonstrations.
I've listed some
of the experiments
below chronologically.
And these experiments
have been used
as a litmus test for hardware
and determining directions for our algorithm and error mitigation research.
But one thing I
want to point out
is that all these experiments
are smaller than six qubits.
And they just barely reach
the accuracies needed
to resolve chemical mechanisms.
And this isn't a criticism
of these experiments.
It's more of an
observation that they're
very difficult to get working.
And it's likely that
we'll need new ideas.
And so our perspective
on NISQ experiments
is to continue to use them to
inform research and validate
devices while focusing
on algorithmic primitives
that we'll need for broader
quantum physical simulation.
And so we know that instances of this are learning strategies, looking at efficiency, robustness, and extensibility. We know that with some circuits you'll run into a concentration-of-measure problem, and with others a barren plateau problem, so we want to learn ways around those.
Of course, we want
to benchmark devices
and understand how generic
or custom error mitigation
strategies will work on our
particular architectures
as we scale up.
And of course, we
want to understand
how far we are from something
that's classically intractable.
So the chemistry
experiment we designed
focuses on performance
limits and improvements
to the basis rotation
algorithmic primitive.
We studied this in
the context of VQE
because basis rotations coupled
with energy minimization
actually corresponds
to Hartree-Fock.
And more importantly,
these circuits actually
look quite close
to what you'll need
to do for simulating the
fully interacting model.
And my colleagues in
the next lightning talk
will discuss how
you can generalize
this particular circuit
to something a little bit
more exciting than
mean field theory.
So the basis rotation
that I'm talking about
is a coordinate transform of
the one particle Hilbert space.
In chemistry, this
corresponds to taking
linear combinations of orbitals
and forming a new basis.
And when we represent this
in second quantization, which
we can think of right
here, this corresponds
to the action of a unitary that's generated by an operator that's quadratic in the fermion operators. And so we're working with an efficiently simulable system that can be laid out on a linear array with matchgates.
So we have an
optimal compilation
for this unitary with
no Trotter error.
And it needs only nearest
neighbor coupling.
And I think most exciting is that we can compile these general two-qubit gates, these Givens rotation gates here, into things that we have readily available on the Sycamore processor: two square-root-of-iSWAP gates and three Rz gates.
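To make the basis-rotation compilation above concrete, here is a small numpy sketch (my own illustration, not the actual Sycamore compilation) that decomposes a single-particle rotation u = exp(kappa) into 2x2 Givens rotations acting only on adjacent rows, which is the nearest-neighbor structure the circuit exploits:

```python
import numpy as np
from scipy.linalg import expm

# Build a random real basis rotation u = exp(kappa), kappa antisymmetric
# (the quadratic generator restricted to the one-particle space).
rng = np.random.default_rng(7)
N = 4
k = rng.normal(size=(N, N))
u = expm(k - k.T)                        # real orthogonal rotation

# QR-by-Givens: zero each column below the diagonal, bottom-up, using
# rotations that mix only adjacent rows (nearest-neighbor coupling).
rotations = []
v = u.copy()
for col in range(N - 1):
    for row in range(N - 1, col, -1):
        a, b = v[row - 1, col], v[row, col]
        r = np.hypot(a, b)
        if r < 1e-12:
            continue
        c, s = a / r, b / r
        g = np.eye(N)
        g[row - 1, row - 1:row + 1] = [c, s]
        g[row, row - 1:row + 1] = [-s, c]
        v = g @ v                        # eliminates v[row, col]
        rotations.append(g)

# v is now diagonal (entries +-1); rebuild u from the rotation list.
recon = np.eye(N)
for g in rotations:
    recon = recon @ g.T                  # inverse of orthogonal g
recon = recon @ v
assert np.allclose(recon, u)
```

At most N(N-1)/2 rotations are needed, which is what makes the compilation optimal in depth on a linear array.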
I've drawn a cartoon
here with the circuits.
On the left, there's one
set of basis functions
for six hydrogens.
And on the right, there's a
new set of basis functions.
And this is just supposed to
depict that the circuit is
performing some basis rotation.
And like I said, if you
optimize these angles
and couple that with
energy minimization,
you get Hartree-Fock
theory back.
So the next component of VQE
is the energy acquisition,
so evaluating our function.
And in general, to measure
the energy of a chemical
Hamiltonian, we need
to know the expectation
value of one-body and
two-body correlators.
And these are elements of the one-body and two-body reduced density matrices, or the one-fermion and two-fermion quantum marginals.
Now, the basis rotations are an instance of fermionic Gaussian circuits. And we know that for fermionic Gaussian circuits, higher-order moments are directly computable from the second-order moment.
And thus, this leads
to a vast reduction
in the amount of
measurements that we
need to do for this
particular problem.
So we can just
focus on measuring
this particular object here.
And this is the one-body
reduced density matrix.
This gives us access to the
gradients of our circuit.
It lets us think
about error mitigation
and also gives us a quantity that's related to fidelity, a fidelity witness.
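The measurement reduction being described is Wick's theorem: for a fermionic Gaussian state, every 2-RDM element follows from the 1-RDM. A small numpy sketch (my own illustration, not the experiment's code) for a Slater determinant:

```python
import numpy as np
from scipy.linalg import expm

# For a Slater determinant (a fermionic Gaussian state), Wick's theorem
# gives the 2-RDM from the 1-RDM D_pq = <a†_p a_q>:
#   <a†_p a†_q a_s a_r> = D_pr D_qs - D_ps D_qr
rng = np.random.default_rng(0)
N, eta = 4, 2                        # orbitals, electrons
k = rng.normal(size=(N, N))
C = expm(k - k.T)[:, :eta]           # occupied rotated orbitals
D = C @ C.T                          # 1-RDM of the determinant

two_rdm = (np.einsum('pr,qs->pqrs', D, D)
           - np.einsum('ps,qr->pqrs', D, D))

assert np.isclose(np.trace(D), eta)  # particle number
assert np.allclose(D @ D, D)         # idempotent: pure Gaussian state
# Contracting the 2-RDM recovers eta * (eta - 1)
assert np.isclose(np.einsum('pqpq->', two_rdm), eta * (eta - 1))
```

So only the N-by-N matrix D needs to be measured, rather than the N^4 two-body correlators, which is the vast reduction in measurements mentioned above.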
So for the next part, I'm
going to explain a little bit
about what we did
and the results.
But then I'll end on just
some thoughts of where to go.
So we studied energies
and fidelities
of a chemical series, which is a natural thing to study when you're looking at the performance of systems.
And we evaluated various
error mitigations.
So what I've shown
on the slide here
is a hydrogen chain
series that simulates
bond dilation of four systems
of length six, eight, 10,
and 12 hydrogens.
And so the first
experiments that we do
is we solve for the best
rotation angles of this basis
rotation circuit classically.
We do that because
it's efficient to solve
Hartree-Fock, at
least when we start
close to the right answer.
And we run that circuit,
and we see what we get.
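The classical angle-solving step can be sketched in a few lines. This is a toy stand-in (a 6-site hopping chain rather than the real H6 Hamiltonian, and my own illustration rather than the experiment's code): the energy of a basis-rotated determinant under a quadratic Hamiltonian h is E(kappa) = Tr(h C C^T), with C the first eta columns of exp(kappa), and minimizing over kappa recovers the mean-field answer.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

N, eta = 6, 3
h = -(np.eye(N, k=1) + np.eye(N, k=-1))   # toy nearest-neighbor hopping

def energy(params):
    kappa = np.zeros((N, N))
    kappa[np.triu_indices(N, 1)] = params
    kappa -= kappa.T                       # antisymmetric generator
    C = expm(kappa)[:, :eta]               # occupied rotated orbitals
    return np.trace(h @ C @ C.T)

rng = np.random.default_rng(3)
n_params = N * (N - 1) // 2
best = min(minimize(energy, rng.normal(scale=0.2, size=n_params)).fun
           for _ in range(3))              # a few restarts to dodge saddles

# For a quadratic h the mean-field answer is exact: the sum of the
# lowest eta eigenvalues.
exact = np.sum(np.linalg.eigvalsh(h)[:eta])
assert best >= exact - 1e-9                # variational bound
assert np.isclose(best, exact, atol=1e-5)
```

On hardware the same angles parameterize the Givens rotation circuit, so the classically optimized kappa gives the starting point that the device then executes.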
On the top, I have
the six qubit data.
On the bottom, I
have a table that
tabulates all four systems.
I should note the H12
system is the largest
system that we studied.
It's about 10 times the number of gates of previous chemistry calculations and twice as many qubits. It has 72 square-root-of-iSWAP gates and 108 Rz gates.
And so if you just
run the algorithm
and you see what you get,
you get those orange dots
or those yellow dots.
And it looks qualitatively OK.
But it's fairly far from what we would expect from vanilla mean-field theory, which in this case is exact.
One thing I want to point out
is that the raw fidelities,
like I said, we have access
to this fidelity witness,
look actually quite close
to what you would get out
of the supremacy experiment.
So we can start to think
about adding error mitigation
and how to adapt your algorithm.
And this is generally
how one might
want to run this experiment.
We know, right away, that
T1 loss or particle loss
is a large fraction of the
error in quantum chemistry.
We can't lose electrons.
And so a natural mitigation procedure is post-selection. We developed a way to post-select the entire one-RDM.
Of course, that vastly
improves the fidelity.
It turns out, actually,
that there is another thing
you can do to mitigate T2.
And that's using
some work that I've
been pursuing, which
looks at the geometry
of Fermionic marginals.
In this case, we use
pure-state constraints,
which corresponds to
purification of the marginal.
And you can show that this
actually mitigates T2 errors.
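The pure-state constraint being described is idempotency of the 1-RDM. As a hedged sketch (McWeeny purification, D -> 3D^2 - 2D^3, is one standard iteration that imposes this; I'm using it purely as an illustration, not claiming it is the paper's exact projection method):

```python
import numpy as np

rng = np.random.default_rng(1)
N, eta = 6, 3
q, _ = np.linalg.qr(rng.normal(size=(N, N)))
D_exact = q[:, :eta] @ q[:, :eta].T        # ideal pure-state 1-RDM

# "Measured" 1-RDM: a small Hermitian perturbation mimicking
# decoherence-type error.
noise = rng.normal(scale=0.02, size=(N, N))
D_noisy = D_exact + (noise + noise.T) / 2

# McWeeny purification: eigenvalues flow to 0 or 1, so the result is
# again an idempotent (pure-state) marginal.
D = D_noisy.copy()
for _ in range(30):
    D = 3 * D @ D - 2 * D @ D @ D

assert np.allclose(D @ D, D, atol=1e-8)    # idempotent again
assert np.isclose(np.trace(D), eta)        # particle number restored
```

Projecting the noisy marginal back onto the pure-state manifold is what suppresses the T2-type errors that push eigenvalues away from 0 and 1.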
And then finally, if we layer all of our error mitigation strategies on top of the VQE algorithm (we want to validate that variational relaxation does work at this scale), we see about another order of magnitude reduction in error.
And so what's really nice about
this is that, in the smaller
systems, we're getting up
to three nines in fidelity
for the circuit.
I want to just flash to the H12 data here very quickly to show that nothing weird happens as we scale up. It's just that all the curves are shifted up a little bit in terms of error.
So in some sense, this is great.
It shows that layered
error mitigation works.
It works at the scale that
we need to do chemistry.
But even at two
nines of fidelity,
we're just reaching the
chemical accuracy required.
And so I think
this does suggest,
at least, new directions
and benchmarks
that we should be looking at.
So let me conclude
this lightning talk
with summarizing what I think
this experiment tells us.
First of all, we did the
biggest VQE experiment
by a large margin.
I think there's a lot to mine from this in terms of the type of experiment that you do. For example, we ran this as VQE, but you could easily run it in a phase estimation mode, or use it to learn how to calibrate these larger circuit blocks.
And I think, with cleverness, we can probably push to about 20 qubits with our current error rates.
But going beyond that
might be challenging.
The ansatz is interesting because it actually looks very close to what we know simulates the full model, namely the fermionic swap network simulation instead of just the Givens rotation circuits.
And for that, I think this
is a good starting point.
Like Hartmut said, it's a good stepping stone to getting to simulation advantage in chemistry.
In terms of pointing the
direction of error mitigation,
I think this does
show that looking
at Fermionic
marginal constraints
is a fruitful way to go.
And understanding how to do
that beyond something that's
simple like a Fermionic Gaussian
state is a fruitful direction.
And finally, I think this
confirms that ground state
calculations are
very hard to beat,
but chemistry is about dynamics.
And thus, looking at simulation targets like those might be a fruitful approach to achieving quantum advantage in chemistry.
And so with that, I hope
I stayed on schedule.
And I'll turn it
back to Marissa.
[MUSIC PLAYING]
