[MUSIC PLAYING]
KEVIN SUNG: Hi, I'm Kevin.
And I'll be reporting
on our effort
to implement a protocol
for certified random number
generation using
the Sycamore chip.
This project is being done
in collaboration with NIST.
So randomness is a
valuable resource
with many applications.
Usually, when we
need random numbers,
we just call random number
generator on our computers
and we don't worry too much
about whether the numbers are
truly random or unpredictable.
But there are some
situations-- for example,
in cryptography or lotteries--
where true randomness
would be desirable.
In classical physics,
there is no true randomness
because the physical
laws are deterministic.
On the other hand,
quantum systems
exhibit true randomness.
Here is a quantum circuit,
which, if executed correctly,
produces an unbiased coin flip.
The outcome of this experiment
cannot be predicted with
certainty even in principle.
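To make this concrete, here is a minimal sketch of that coin-flip circuit in Cirq (the choice of Cirq is mine for illustration; the talk doesn't specify tooling): a Hadamard gate followed by a measurement.

    import cirq

    # One-qubit "coin flip": H creates an equal superposition,
    # and measurement collapses it to 0 or 1 with probability 1/2.
    q = cirq.LineQubit(0)
    circuit = cirq.Circuit([cirq.H(q), cirq.measure(q, key='m')])

    result = cirq.Simulator().run(circuit, repetitions=10)
    print(result.measurements['m'].flatten())  # e.g. [0 1 1 0 0 1 ...]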
Now, the issue with this method
of generating random numbers
is that the user has no way to
tell that the alleged quantum
device is not backdoored by
the NSA, which is actually just
giving the user a bunch of bits
that it chose in advance.
What's needed is
a way to certify
that the random bits
were actually generated
by the claimed process.
It turns out that this
is actually possible.
There have been previous
proposals as well as
actual implementations.
While I won't review
those, I'll just
say that the use
case we have in mind
is that of a user accessing
our near-term quantum
computer through the internet
and being able to verify
the certification themself.
And previous proposals
don't work in this scenario.
Unlike some previous
proposals, our protocol
depends on a computational
hardness assumption.
This protocol was first
proposed by Scott Aaronson,
and I'm reporting on our effort
to implement it
with some slight modifications.
I'll also mention that
these protocols are more
properly referred to as
randomness expansion,
since they require a
small initial random seed.
The starting point
for our protocol
is our experiment
from last year, where
we performed a classically
intractable computation
on our Sycamore chip.
The task we performed
is easy to describe.
We generated a random quantum
circuit on 53 qubits
with 20 layers of single-qubit
and two-qubit gates.
Then we sampled the circuit
on the quantum computer
to obtain bitstrings.
This sampling task is easy
for a quantum computer,
but we believe it's hard
for classical computers.
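As a rough illustration (scaled far down from 53 qubits, and not the actual gate set used on Sycamore), random circuit generation and sampling might look like this in Cirq:

    import random
    import cirq

    # Toy random-circuit generator: alternating layers of random
    # single-qubit gates and fixed two-qubit gates. The experiment
    # used 53 qubits and 20 layers; this is scaled down.
    def random_circuit(n_qubits, n_layers, seed):
        rng = random.Random(seed)
        qubits = cirq.LineQubit.range(n_qubits)
        one_qubit_gates = [cirq.X**0.5, cirq.Y**0.5, cirq.T]
        circuit = cirq.Circuit()
        for layer in range(n_layers):
            # A random single-qubit gate on every qubit.
            circuit.append(rng.choice(one_qubit_gates)(q) for q in qubits)
            # Two-qubit gates on alternating pairs of neighbors.
            circuit.append(cirq.CZ(qubits[i], qubits[i + 1])
                           for i in range(layer % 2, n_qubits - 1, 2))
        circuit.append(cirq.measure(*qubits, key='m'))
        return circuit

    circuit = random_circuit(n_qubits=5, n_layers=8, seed=1234)
    samples = cirq.Simulator().run(circuit, repetitions=1000)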
To validate the
performance of our device,
we performed a test on
the bitstrings we got.
We simulated the
quantum circuits
to compute the ideal
string probabilities.
If our device was
working correctly,
then many of the bitstrings
should have
high associated probabilities.
Now of course, we couldn't do
this for the truly intractable
circuits, so what
we actually did was
tweak the circuits
slightly to make
them feasible to simulate, and
then validate our device
using those easier circuits.
And this gave us confidence
that our device was also
working correctly for
the original circuit.
I want to emphasize here that
the choice of qubit number,
depth, and circuit
tweaks allows us
to adjust the classical
difficulty of the sampling
task and its verification.
Now I'll give the
computational assumption
underlying our protocol.
Here is the problem:
given a random quantum
circuit, approximately
sample from its
output distribution.
The assumption we make is
that solving this problem
in a short amount of time
is only possible by actually
executing and sampling the
circuit on a quantum computer.
Furthermore, the
output can be verified
with statistical
tests, including
the linear cross-entropy
fidelity, which is
the average of the ideal
probabilities of the sampled
bitstrings, multiplied by the
Hilbert space dimension, minus 1.
It turns out that this is
an estimate of the fidelity
of the sampling.
Also note that for
large circuits,
the sampled bitstrings
should be unique.
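In symbols, F_XEB = D * <p(x)> - 1, where D = 2^n is the Hilbert space dimension and the average runs over the sampled bitstrings. A minimal sketch of that statistic (my own illustration, not the experiment's code):

    import numpy as np

    def linear_xeb_fidelity(ideal_probs, n_qubits):
        # ideal_probs[i] = p_ideal(x_i), the simulated probability of
        # the i-th sampled bitstring. The statistic is ~1 for perfect
        # sampling and ~0 for uniformly random bitstrings.
        dimension = 2 ** n_qubits
        return dimension * np.mean(ideal_probs) - 1.0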
The assumption of sampling
on a quantum computer
implies that true
randomness is generated.
Now I can explain the protocol.
First, the client generates
a number of random circuits.
Then, in each round
of the protocol,
the client sends one of
the circuits to the server
and demands a response within
a short amount of time.
Now, in this case,
the response is just
a list of bitstrings obtained
by sampling the circuit
on the quantum computer.
Next, the client chooses
a subset of the responses
and calculates the
bitstring probabilities
on a classical computer and uses
them to compute the fidelity.
Note that this step is
computationally expensive.
The client checks that the
fidelity is high enough
and does other statistical
tests on the bitstrings.
If the checks pass,
then the client
concatenates all the
bitstrings and runs them
through a randomness extractor,
which is like a hash function,
to produce near uniformly
random output bits.
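Putting the rounds together, here is a toy, self-contained version of the client's side of the protocol, reusing random_circuit and linear_xeb_fidelity from the sketches above. A local simulator stands in for the remote quantum server, the parameters are illustrative rather than the experimental ones, and SHA-256 is only a stand-in for a real seeded randomness extractor:

    import hashlib
    import time
    import numpy as np
    import cirq

    N_QUBITS, N_LAYERS, N_ROUNDS, REPS = 5, 8, 4, 500
    TIME_LIMIT = 10.0  # seconds; in practice, short enough that simulation can't keep up

    circuits = [random_circuit(N_QUBITS, N_LAYERS, seed)
                for seed in range(N_ROUNDS)]

    # Each round: send a circuit, demand samples within the time limit.
    responses = []
    for circuit in circuits:
        start = time.monotonic()
        result = cirq.Simulator().run(circuit, repetitions=REPS)  # "server"
        assert time.monotonic() - start <= TIME_LIMIT, 'too slow: reject'
        responses.append(result.measurements['m'])  # shape (REPS, N_QUBITS)

    # Offline spot check: simulate a subset of circuits and verify fidelity.
    for i in (0, 2):  # the chosen subset of rounds
        amplitudes = cirq.final_state_vector(circuits[i][:-1])  # drop measurement
        all_probs = np.abs(amplitudes) ** 2
        indices = responses[i].dot(1 << np.arange(N_QUBITS)[::-1])
        if linear_xeb_fidelity(all_probs[indices], N_QUBITS) < 0.5:  # illustrative threshold
            raise RuntimeError('fidelity check failed: reject')

    # Concatenate all bits and run them through an extractor.
    raw = np.concatenate([r.ravel() for r in responses]).astype(np.uint8).tobytes()
    print(hashlib.sha256(raw).hexdigest())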
Again, note that, unfortunately,
the verification step
is expensive, since it involves
simulating the circuits.
However, the key point
is that the server
is required to respond
in seconds or minutes,
while the client can do
the verification offline,
taking a lot more time.
In our implementation,
we focused
on exercising the software
infrastructure needed
to execute this protocol
through the internet.
In particular, we used
fully automatic calibration
of our processor.
Due to temporary limitations
in the automatic calibration
procedure, we have
not yet achieved
experimental
parameters that would
provide true certification
of randomness.
However, we're confident that
we'll achieve those parameters
in the future.
For now, I'm just going
to be candid with you
and tell you what we
did manage to achieve.
We generated circuits on
23 qubits with 14 cycles.
We sampled each circuit
a million times.
And in doing so, we achieved
an effective sampling rate
of 3.8 kilohertz.
This includes the
latency of communicating
through the internet.
We got a linear cross
entropy fidelity of 6.8%.
Now, this isn't
as good as what we
can achieve with
manual calibration,
but it will improve.
We estimate that the
amount of entropy generated
is approximately
the fidelity times
the number of bits from
the quantum computer.
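Plugging in the numbers quoted above (my arithmetic, not a figure from the talk):

    # Entropy estimate: fidelity * (number of bits from the quantum computer).
    fidelity = 0.068          # linear cross-entropy fidelity achieved
    n_qubits = 23             # bits per sample
    n_samples = 1_000_000     # samples per circuit
    entropy_bits = fidelity * n_qubits * n_samples
    print(f'{entropy_bits:.3g} bits of entropy per circuit')  # ~1.56e+06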
Now, this is a lot more than
Scott Aaronson will tell you,
because he prefers more
stringent assumptions
than ours.
Our approach will be published
in an upcoming paper.
To give a sense of
the experimental data,
I plotted here data from
sampling one of the circuits
a million times.
A million bitstrings
corresponds to a million
ideal probabilities, and
I've plotted the histogram
over these probabilities, scaled
by the Hilbert space dimension.
The orange histogram is
the experimental data
and the solid orange line is
the idealized distribution
of sampling with fidelity equal
to the linear cross entropy
fidelity of those bitstrings.
As you can see, the data matches
the prediction quite well.
The green and blue histograms
are from simulations.
The green corresponds to
fidelity 1, and the blue
to uniformly random bitstrings.
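Those three curves follow from the standard Porter-Thomas model for large random circuits (this is my sketch of the plot, not the experiment's plotting code): if x = D * p for a sampled bitstring, uniformly random bitstrings give density e^(-x), perfect sampling gives x e^(-x), and sampling with fidelity F gives the mixture F x e^(-x) + (1 - F) e^(-x).

    import numpy as np
    import matplotlib.pyplot as plt

    # Idealized densities of the scaled probability x = D * p.
    x = np.linspace(0, 8, 400)
    for fidelity, label in [(1.0, 'fidelity 1 (green)'),
                            (0.068, 'F = 6.8% (orange)'),
                            (0.0, 'uniform (blue)')]:
        density = fidelity * x * np.exp(-x) + (1 - fidelity) * np.exp(-x)
        plt.plot(x, density, label=label)
    plt.xlabel('D * p (scaled ideal probability)')
    plt.ylabel('probability density')
    plt.legend()
    plt.show()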
So the next steps
for this project
are to achieve experimental
parameters that
would allow truly
certified randomness, which
would be something like
53 qubits, 14 cycles,
at a fidelity of 0.8%, and a
sampling rate of 5 kilohertz.
Finally, I'd like to end with
the major open problem left
by our protocol, which is,
can the verification be
made less expensive?
In our protocol, verification
takes time exponential
in the number of qubits.
And tuning the circuits to make
verification easier necessarily
also makes it easier
for an adversary
to cheat the protocol by
simulating the circuits
instead of sampling them.
This is the main difficulty
we're currently encountering
in our implementation.
Nevertheless, this might be
the very first application
of a near-term quantum computer.
That's all.
Thank you.
