Good afternoon everyone, my name is
Milton Halem and I'm going to speak about
a dream of Richard Feynman, the
inspiration for quantum computing. We
call it the holy grail of quantum computing, the follow-on to the next generation of
computing. There are essentially two approaches to quantum computing. There's the quantum annealing approach, which is essentially a quantum reflection of simulated annealing, but instead of using thermal fluctuations it uses quantum fluctuations to reduce to a ground energy state. The second
approach to quantum computing is either a gate approach or a laser-driven ion approach and things like that, but that approach has the problem of assuming that you could really eliminate most of the quantum noise and anything that could affect it. So there's an intermediate step, the analog quantum computer, where people make use of gate technology but haven't yet developed the capability to moderate or mitigate the quantum noise. So they run with large rates of error, and it would probably take orders of magnitude more qubits than they actually have available.
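The classical method that quantum annealing mirrors - simulated annealing - is easy to sketch. Below is a minimal, illustrative Python version on a toy Ising problem; the cooling schedule, problem size, and function names are my own assumptions, not anything from the talk. The thermal acceptance rule `exp(-dE/T)` is precisely the part a quantum annealer replaces with quantum fluctuations (tunneling).

```python
import math
import random

def ising_energy(spins, J, h):
    """Energy of an Ising configuration: E = sum_ij J_ij s_i s_j + sum_i h_i s_i."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    for (i, j), coupling in J.items():
        e += coupling * spins[i] * spins[j]
    return e

def simulated_anneal(J, h, n, steps=20000, t_start=5.0, t_end=0.01, seed=0):
    """Classical simulated annealing: thermal fluctuations (the exp(-dE/T)
    acceptance rule) shrink as the temperature T is lowered, so the system
    settles toward a ground state. A quantum annealer keeps the same Ising
    objective but uses quantum fluctuations instead of thermal ones."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    energy = ising_energy(spins, J, h)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = rng.randrange(n)
        spins[i] = -spins[i]                  # propose a single spin flip
        new_energy = ising_energy(spins, J, h)
        d_e = new_energy - energy
        if d_e <= 0 or rng.random() < math.exp(-d_e / t):
            energy = new_energy               # accept the move
        else:
            spins[i] = -spins[i]              # reject: flip the spin back
    return spins, energy
```

On a small ferromagnetic chain (all couplings -1, no fields), this settles into one of the two all-aligned ground states.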
I want to draw your attention to the
system that I'm going to be talking
about. This is the D-Wave computer, it's
located at the NASA Ames Research Center.
It's a box 10 feet by 10 feet by 10 feet.
It's a computer which contains one chip.
That one chip - the more recent one that they just delivered - has 2,000 qubits. A qubit is a quantum bit, and the rest of the box is nothing more than the refrigerator, which is intended to cool the temperature on that chip down to about 15 millikelvin. So you're operating down at the quantum level. The system that they've put together consists of cells: each cell consists of 8 qubits, 4 by 4, totally connected, and there's something like 12 by 12 such cells, but not every one is working, so you see some missing. The IBM and Google
organizations, and I believe Fujitsu,
and there's an organization in the
Netherlands as well, are building or
attempting to build quantum systems as
well. And I won't even
mention the Chinese
effort in quantum computing because
[sheesh, 3 minutes and I've got 6 more charts]. The Chinese have just announced that they're going to set up a $10 billion quantum computing center. So very quickly,
why is NASA interested in quantum
computing? Well, we're pretty clear that
we're... that Moore's Law has been sort of
asymptoting, it's not doubling every two
years, etc. And NASA is now being consumed with data, both from its own satellites and from the satellites it's built for NOAA and other organizations. A new element arriving in science today is deep learning, and so we
want to be able to use computers that
really delve deeply into data analysis
to discover new information. The D-Wave quantum computer is the only operational quantum computer available today where you can have access to it 24 hours a day over the entire year. It's as stable as any of the other computer products you could go to any computing center for. And as I just pointed out, a lot of the major computer vendors - Microsoft, Google, IBM, Fujitsu, and others - are all rushing into the field, and Intel has even bought into an organization that is trying to develop a quantum computer. But NASA needs to know if
quantum computing will become a
disruptive mission technology, and we
already know that the Chinese have
launched a satellite that's already
doing quantum communication over
distances of about one to two thousand miles using quantum entanglement. Our approach was to actually see if we can apply quantum computing to a real scientific problem, and the problem we're trying to apply it to is this: given a collection of CO2 data, can we infer, using a quantum computer, the turbulent flux of CO2 into the land? That's the problem - inferring turbulence from measurements of surface CO2. So NASA launched a
satellite called the Orbiting Carbon Observatory, in July of 2014, and it's still delivering very useful data. In fact they have an improved version now that they just released. So we have three and a half years of CO2 data, we have land surface models that ingest it, and we have a neural net. [let me go to the last chart, okay] So we've tried to analyze the data, but here
is the key chart. We've taken
measurements from ground stations of CO2,
a number of meteorological variables, 
some long-wave outgoing radiation measurements, and in this top chart we've applied a feed-forward neural net and obtained RMS errors of about 4.6 for the training data; when we make our predictions it's down to the order of 3.8. We've done similar calculations on the D-Wave at Ames, using the same data and the same code. This was a C code, we implemented it on the front end of the D-Wave, and we've gotten very similar results after different epochs.
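The talk doesn't show the code itself, so here is a minimal sketch of the kind of pipeline being described: a one-hidden-layer feed-forward net trained by plain gradient descent, with RMS error compared between training and held-out data. The synthetic inputs, network size, and learning rate are my assumptions for illustration only - the actual experiment used OCO-2 and ground-station inputs and was written in C.

```python
import math
import random

def rmse(preds, targets):
    """Root-mean-square error, the metric quoted in the talk."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds))

class TinyNet:
    """One-hidden-layer feed-forward net trained by plain gradient descent."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        return sum(w * hi for w, hi in zip(self.w2, self.h)) + self.b2

    def train_step(self, x, target, lr):
        y = self.forward(x)
        err = y - target                              # d(loss)/dy for squared error
        for j, hj in enumerate(self.h):
            grad_h = err * self.w2[j] * (1 - hj * hj)  # backprop through tanh
            self.w2[j] -= lr * err * hj
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * grad_h * xi
            self.b1[j] -= lr * grad_h
        self.b2 -= lr * err

def run_demo(epochs=400, lr=0.03, seed=1):
    """Train on synthetic data; return (initial, final train, final test) RMSE."""
    rng = random.Random(seed)
    samples = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(100)]
    truth = lambda x: math.sin(x[0]) * x[1]           # stand-in target function
    train, test = samples[:80], samples[80:]
    net = TinyNet(2, 8)
    score = lambda rows: rmse([net.forward(x) for x in rows], [truth(x) for x in rows])
    initial = score(train)
    for _ in range(epochs):
        for x in train:
            net.train_step(x, truth(x), lr)
    return initial, score(train), score(test)
```

Running the training loop for more epochs drives the training and held-out RMSE down together, mirroring the "similar results after different epochs" comparison between the classical cluster and the D-Wave front end.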
So in conclusion, we've done the first
successful test of a feedforward gradient
descent implementation on the D-Wave; we've compared those results to an Intel i5/i7 cluster;
we've incorporated real OCO-2 data into
the neural net, and we've obtained
near-identical inferences; we validated
classical computer input versus the
D-Wave system. We had to run it remotely - we did the running from here in Maryland, at the University of Maryland, Baltimore County - and every time we send a thousand representations of the weighting coefficients that are being minimized, we have to wait several seconds for them to reach the machine, and then we have to take the data back and average it. So even using it remotely, we were able to actually do those neural net computations. Anyway, that's it, and sorry I over-ran.
