[INTRO MUSIC PLAYING]
MURPHY NIU: Hi, everyone.
I'm Murphy Niu from Google.
It's an honor to speak on behalf
of our TensorFlow Quantum team
and introduce you to an open
source library for quantum
machine learning.
Before I start sharing with
you the awesomeness of TFQ,
let's go down memory lane
to see how much I wish I'd had TFQ.
Three years ago, as a bored
physics grad student from MIT,
I came to Google for
a summer internship.
[INAUDIBLE] gave me a very
hard quantum control problem
of finding the analog control
waveform for realizing any
two-qubit gate while facing
realistic imperfections
such as leakage and
control uncertainties.
Conventional control
optimization methods
failed pretty badly,
so I saw there was no
reason not to try machine
learning while at Google.
But it wasn't easy at first.
After struggling with various
incompatibility issues
and getting stuck
innumerable times,
I also had to write my
own quantum simulator
and hyperparameter optimization.
So that experience
was exhausting.
And when I returned to
Google for another summer,
I decided to avoid
machine learning software
and instead write
a custom network
and optimizer with plain
tensor algebra in Python
for my next project.
The same summer,
Masoud came up to me
and asked, "What if
I gave you a platform
to generate and
interface quantum
data with classical
machine learning structures
seamlessly?"
I answered, "Hell yes!
When can I use it?"
Masoud said, "Wait for it."
So next year I joined
Google full time,
and I'm still waiting.
But Michael Broughton
is giving us hope
that we will get there soon.
And fast-forward to March 2020:
TensorFlow Quantum is launched.
So now it takes a physicist
just one piece of software,
TensorFlow Quantum, to do
machine learning research
with and for quantum systems.
With this unified
platform, we can
process both classical
and quantum data
using both classical
and quantum machine
learning architectures.
So, what do we mean
by "quantum data?"
One kind of quantum
data is present
in the fast-developing quantum
communication networks.
For example, the longest quantum
key distribution link transmits
information carried
by single photons
across continents
between Austria and China
through a satellite relay.
Another kind of quantum
data is obtained
through various quantum sensing
methods, ranging from NMR
and NV centers to Rydberg atoms
and many more, for quantum
imaging and realizing
optical memories.
Lastly, we also have measurement
and calibration results
obtained in a quantum computer.
For example, the
superconducting one at Google,
which is super cool.
Also, we have analog
waveforms sent
in through microwave cables that
Marissa and others carefully
arranged to control
our quantum computers.
These diverse kinds
of quantum data
can be fed into equally diverse
machine learning architectures
to extract useful information
and make predictions.
I listed some examples of
classical and quantum machine
learning architectures here,
which I won't have time
to go into in detail.
But the takeaway is that these
diverse data and machine
learning architectures can
now be united under one
framework, TensorFlow Quantum.
It is a software framework
for hybrid quantum-classical
machine learning
built on TensorFlow and Cirq.
We aim at enabling fast
prototyping, training,
inference, and testing of
quantum models for quantum data
to eventually facilitate the
discovery of new quantum
algorithms for NISQ devices
and error-corrected quantum
computers.
So, what is the
secret behind TFQ
that allows such a unification?
Fundamentally speaking,
quantum circuits
are no more than tensors, just
like classical neural networks.
So if we can seamlessly convert
quantum circuits to tensors
that are compatible with
TensorFlow data structures
and convert quantum measurement
outcomes to tensor contractions
of measurement
operators also specified
by tensors, or so-called "OPs,"
then quantum data and quantum
machine learning
architectures become
part of the overall
computational graph
and integrate easily
with classical machine
learning agents.
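The contraction idea above can be made concrete with a minimal numpy sketch. This is an illustration of the concept, not TFQ's actual internals: a one-qubit "circuit" is just a 2x2 unitary tensor, and a measurement outcome is the tensor contraction of the state with a measurement operator.

```python
# A one-qubit "circuit" as a tensor, and a measurement as a
# tensor contraction <psi| Op |psi>. Conceptual sketch only.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate as a tensor
Z = np.array([[1, 0], [0, -1]])                # measurement operator (op)

psi = H @ np.array([1.0, 0.0])                 # run the circuit on |0>
expectation = np.conj(psi) @ Z @ psi           # contraction gives <Z>
# For the resulting |+> state, this contraction evaluates to 0.
```

Because everything here is ordinary tensor algebra, the same quantities can live inside a differentiable computational graph alongside classical layers.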
So with this big
picture in mind,
here is a more detailed
look at the software stack.
At the top is the
humongous amount of data
we need for machine learning.
It can be classical
or quantum data
prepared by a quantum circuit.
The data are fed into the
machine learning architecture,
including both
traditional TensorFlow
layers and the new
quantum layer provided
by TFQ through Keras models.
The outputs specified
by the tensor operators
are passed down to
the hardware backend
via a TPU, CPU, or QPU,
through TensorFlow, Cirq,
or the qsim simulator, depending
on whether one requires
a simulation of the
quantum measurement
or an execution of the
quantum circuit in the lab.
So now let's see what
I would have done
with TFQ for my reinforcement
learning project
if I could travel
back to the past.
So no more struggling with
software compatibility issues
or writing your own
quantum simulators.
Moreover, with the
choice of a QPU,
you can optimize quantum
circuits in real time.
Now, you might be eager to
see how TFQ works in action.
Let me give you a taste
of it with real quantum
computers from Google,
learning the 1
over f noise of a device.
1 over f noise is
notoriously prevalent
among solid-state qubits.
It manifests as a
slow drifting term--
an error in the qubit
frequency parameterized
by this Hamiltonian, where
f represents the amplitude
of the [INAUDIBLE] noise.
E represents a specific kind.
So the problem I want to
learn is defined as follows.
Given a state prepared
in the x basis,
I evolve it under
a Z Hamiltonian H0
for a certain amount of
time and perform the measurement
of the X-basis
expectation values.
So this is usually called
the Ramsey experiment.
And given these
experimental results,
measured for different
amounts of time up to T,
I would like to predict the
measurements for future time
steps.
So a successful prediction
of this time sequence of data
will also mean a correct
inference of the 1
over f noise parameters.
It seems to be a simple
problem, but it's
difficult due to
the weak amplitude
and the long-term dynamics
of the 1 over f noise.
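The Ramsey sequence just described can be simulated in a few lines of numpy. This is a hedged sketch: the drift `delta` stands in for the slow 1/f frequency error, and all numerical values here are illustrative, not the device parameters from the talk.

```python
# Ramsey sketch: prepare |+> in the X basis, evolve under a Z
# Hamiltonian for time t, measure <X>. `delta` models the slow
# 1/f frequency drift (illustrative values).
import numpy as np

omega = 2 * np.pi * 1.0    # nominal qubit frequency (arbitrary units)
delta = 2 * np.pi * 0.03   # slow drift term from 1/f noise

X = np.array([[0, 1], [1, 0]])
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # |+> state

def x_expectation(t):
    # Evolve under H = (omega + delta)/2 * Z, then contract with X.
    phases = np.exp([-1j * (omega + delta) * t / 2,
                     +1j * (omega + delta) * t / 2])
    psi = phases * plus
    return np.real(np.conj(psi) @ X @ psi)

times = np.linspace(0, 5, 50)
signal = np.array([x_expectation(t) for t in times])
# Matches the closed form <X>(t) = cos((omega + delta) * t):
# inferring (omega + delta) from the sequence recovers the drift.
```

The prediction task is then exactly as stated: given `signal` up to some time T, continue the sequence, which implicitly infers the noise parameters.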
So now it's time to use the
magic of machine learning.
A recurrent neural
network is particularly
designed to represent time-
sequential data efficiently.
One of the most widely used
recurrent neural networks
is called long short-term
memory (LSTM), which explicitly
parameterizes the balancing act
of prioritizing long-term
versus short-term memory.
It's represented
in the lower right.
It looks kind of
complicated, but you
don't have to worry about it.
TFQ will take care of it.
All you need to do is to
call a native TensorFlow
function, colored
in this yellow box,
while specifying
the hyperparameters and input
size of the recurrent
neural network.
And then to train this LSTM
to learn the time-sequential
structure of your
experimental data, what's left
is to define a cost function
and to choose the optimizer.
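The steps above can be sketched with plain `tf.keras`. This is a toy stand-in, not the talk's model: a cosine replaces the Ramsey data, and the window and layer sizes are illustrative choices.

```python
# Toy LSTM that learns to continue a time sequence.
# A cosine stands in for the Ramsey data; sizes are illustrative.
import numpy as np
import tensorflow as tf

t = np.linspace(0, 20, 400, dtype=np.float32)
signal = np.cos(t)

window = 20  # feed `window` past points, predict the next one
X = np.stack([signal[i:i + window]
              for i in range(len(signal) - window)])
y = signal[window:]
X = X[..., None]  # shape (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),      # the recurrent layer
    tf.keras.layers.Dense(1),      # predicts the next point
])
# "Define a cost function and choose the optimizer":
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=3, verbose=0)
```

Swapping the cosine for measured expectation values is the only conceptual change needed to apply this to the experimental data.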
So you might be more ambitious.
Instead of using a
quantum simulator,
you might want to swap in
a real quantum computer.
And this is also very simple.
Just replace the Expectation
layer with SampledExpectation.
And don't forget to choose
your backend to be
the amazing quantum engine--
which Dave Bacon
will talk more about,
and will probably give you some
free quota if you ask--
and specify the number of
repetitions of the measurement
of each circuit.
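What that swap implies physically can be shown with a small numpy sketch. This is a conceptual illustration, not TFQ code: a real QPU estimates an expectation from a finite number of measurement repetitions (shots), so the result carries shot noise. The values here are assumptions for illustration.

```python
# Sampled vs. exact expectation: a finite repetition count gives
# a shot-noise-limited estimate of <Z>. Illustrative values.
import numpy as np

true_z = 0.3                 # assume the exact expectation is <Z> = 0.3
p_up = (1 + true_z) / 2      # probability of measuring the +1 outcome

rng = np.random.default_rng(0)
repetitions = 1000           # the repetition count you specify per circuit
shots = rng.choice([1, -1], size=repetitions, p=[p_up, 1 - p_up])
sampled = shots.mean()       # approaches true_z as repetitions grow
```

This is why the number of repetitions is a parameter you must choose: more shots reduce the statistical error, at the cost of more hardware time.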
And bam, this is fresh
data taken yesterday
on our quantum engine, where
the upper right is the training
performance with the blue part
as the input data to the LSTM
and the orange part
as the prediction.
And the lower part of the
figure is the testing result.
So that summarizes
my [INAUDIBLE] talk
for TensorFlow Quantum.
Although a meme is
worth a thousand words,
I hope that you will be
convinced to try converting
everything into tensors
through TensorFlow Quantum
and go out there to
solve big problems
and let us know how you like it.
Finally, I'd like to thank the
enthusiastic and hardworking
TensorFlow Quantum team.
Masoud started the
program back in 2018.
We probably wouldn't have
launched TensorFlow Quantum
without Michael leading the
engineering effort together
with amazing help from
the student researchers
through multiple internships
over the past two years,
and Ellen's amazing effort
on deploying TFQ eventually
as a TensorFlow product to
share with the broader public.
And it's still an ever-improving
open source library.
And that depends on
contributors like you.
So thank you for listening.
For the next step,
please go ahead
and check out Masoud's
full-length talk on TFQ
at the TensorFlow Dev Summit
and our TFQ website, which
has many amazing videos
and demo tutorials,
as well as the TFQ
white paper.
And if you have
any questions, feel
free to ask [INAUDIBLE]
and [INAUDIBLE].
And I will give it
back to Marissa.
[OUTRO MUSIC PLAYING]
