[MUSIC PLAYING]
RYAN BABBUSH: Good
afternoon, everyone.
Today it's my
pleasure to introduce
our speaker, Jarrod McClean.
Jarrod and I actually did
our PhD in the same lab.
And during grad
school, Jarrod was
somewhat of a pioneer in the
field of developing algorithms
for quantum computers
to simulate materials
and chemistry.
In particular, he's
the inventor of one
of the most popular approaches
to using near-term quantum
computers for this
purpose, which
is known as the variational
quantum eigensolver.
Jarrod is now the Luis W.
Alvarez Fellow at Lawrence
Berkeley National Laboratory.
And lately, he's been working
closely with the quantum AI
team at Google in order to
develop software which compiles
chemistry problems to
instructions that can be run
on near-term quantum computers.
In fact, he's a founder
of the open source package
FermiLib, which we just
released earlier this week.
So without further ado, I'd like
to introduce Jarrod McClean.
JARROD MCCLEAN: Thanks a lot
for the introduction, Ryan.
And it's my pleasure to be here
to talk a little bit about both
what I've been doing and
quantum computing in general.
So just a few words about the
level and content of this talk.
So I asked around
as to what level
these talks are
generally given at.
And they told me, well,
it might go on YouTube.
So it would be useful if
at least some part of it
appealed to a more general
audience, some part
appealed to, say,
computer scientists
or more general
computational folks,
and some part was geared
toward specialists at the end.
So I hope there's a little
bit of a gradient of this talk
that there's a little
something for everyone.
So if you're not so
familiar with quantum,
maybe it's a good start for you.
But you might be off to a
little bit of a slow start
if you're an expert.
So just that caveat
as I begin talking.
What I want to talk about
is quantum computation,
both in a general sense, and
for this problem that we think
it's going to be
very, very good for,
which is the discovery of
new materials and chemistry.
So I think it's appropriate
these days to start almost
any talk in quantum computation
with a discussion of why now?
So if you're like anyone
else in the field,
you've probably been looking
around and seeing news
about quantum computers,
seeing talk about it
for probably close
to two decades,
if you've been around
for any amount of time.
And if you graph the amount
of quantum excitement
over time that you
might have seen,
you'll see a brief peak
here at around 1994--
even though it was conceived
of in the '80s by Richard
Feynman--
where Shor's factoring
algorithm came out.
And if you don't
know what this is,
it was an algorithm that
promised to essentially break
RSA encryption and got everyone
in the government scared enough
that they started pouring money
into it, and attention followed.
And so people started looking
at this technology, saying,
wow, this can do
incredible things.
And you'll notice there is
a brief dip after that, when
people figured out that, wow,
these devices are actually
a little bit hard to make.
So there was a little
bit of a decline
as people went back
to their labs, refined
their ideas of what
they might want
to build these things from.
And you'll see that there's been
a pretty big increase recently.
And if you've been
looking around
at the news from
different companies,
from different universities,
and other places,
you'll see that there's a
pretty big increase starting
to happen right here, which
is related to the fact
that our qubits are
getting much better
and they're getting much
more manufacturable.
And people are
starting to think, wow,
this is something I need to
be ready for, because this
is coming.
And there's already been
predictions by this group
especially-- the Google group
is a leader in this quantum
information field--
that by the end of
perhaps this year or 2018,
we'll reach a landmark
known as quantum supremacy.
And what this
means is that there
will be some well-defined
computational task
that a quantum device can
do much faster than all
of the combined classical
resources on earth today.
And that will be a true
milestone that says,
wow, these devices are
really capable in practice
of something that we could not
do with the classical resources
we have.
And after that, there'll
be some kind of rush
to move from quantum supremacy
to practical applications.
And that's where I'm
going to focus:
getting over this
perhaps post-supremacy chasm
and really pushing this
technology forward by showing,
in the short term, what
practical applications, what
useful technologies
and algorithms
we can bring out of this.
And of course after that,
sometime in the future,
we're going to move towards
an error-corrected quantum
computation regime.
And I'll talk a little bit more
about what all these things
mean as the talk goes on.
But really, what I want
to focus people on
is that if we can put
these practical quantum
algorithms out there
on near-term devices,
then this is really going
to give us new tools
and push us forward
in the technology.
So where do we expect to win?
So the other thing you might
be interested in is, OK, great,
you've delivered me
a quantum machine.
Where do these machines-- where
do we expect them to be useful?
And how useful will they be?
So some of the problems that
people have talked about--
and I'll focus only
on a few of these--
are perhaps the simulation
of different reactions.
And what I mean
by that is perhaps
knowing whether a drug will
work or not before you actually
do the experiments
in a lab, or knowing
which drug will be produced
by that particular catalyst.
Or another one I'll
talk about a little bit
is calculating radar cross
sections of perhaps planes
or machines you might
be interested in,
or pattern matching,
or machine learning,
and of course, breaking
RSA encryption,
if you're a three-letter
agency and you're worried
about that kind of thing.
And there's a few images of
what people might have conceived
these might look like.
And what is the advantage
that we actually expect?
So what a quantum
computer is not
is just a machine
that runs everything
you have at the moment faster.
It's a new way of
looking at problems
that I'll talk a
little bit about that
can get you real complexity
theoretic speed up.
So people talk about polynomial
speed ups or exponential speed
ups.
So I just wanted to give you
a feel for what that might
look like if you achieved a
polynomial or an exponential
speed up.
So if you had a
quadratic speed up
for a certain
instance of a problem,
that might be a
reduction of, say,
a year down to two weeks or
something on that magnitude,
depending on the
underlying factors.
And the really
exciting one is if you
can get an exponential speed
up, which has the potential
to reduce some problems
down from something like 10
to the 82 years to something
more like 300 seconds.
So what does 10 to the
82 years even mean?
Well, the age of the universe
is something like 14 times 10
to the 9 years.
So you've really made something
that was essentially impossible
routine.
And that's the goal
of a quantum device.
That's why we're
so interested in it:
the potential
for these speed ups
to make the impossible
an everyday occurrence.
So now backtracking to
what this technology means.
So I've already used
the word quantum a lot.
And you've probably heard
it any number of times
if you've been at all
interested in this field.
I'm going to back up and
say, what is quantum?
What do we mean by that?
So if you're biased at all by
media marketing or anything
like that today,
you might conclude
that it's a buzzword that you
attach to any product you might
want to sell a few more of or
have it be a little bit cooler.
I'm going to take a more
physics-oriented approach
and say that everything
is quantum in a way.
So the universe is
governed by quantum laws.
But why haven't you seen
any of that in reality?
So something that we've
observed in day-to-day
life we often characterize
as classical.
So classical objects have
very predictable behavior
from your intuition, because
that's what you're used to.
So take these beanbag
holes, for instance.
You just throw a
beanbag through.
You watch its trajectory.
And it goes through the
hole as you might expect.
So there's nothing terribly
hard about that prediction.
But if you took
that same object--
and in fact, people
have done experiments
on things quite large, up to
buckyballs, which, granted,
is much smaller than a beanbag--
and you cooled it
down enough and you
had a precise enough
instrument to measure it,
you would find out that if you
put one particle through that--
so if you kept throwing that
beanbag towards these two
holes, you would find that
the beanbag was interfering
with itself, which is kind
of a strange effect to have.
So this is a classical double
slit experiment, which--
or a quantum double
slit experiment, rather,
that showed some of the original
properties of why we started
to think, wow, we need a theory
beyond just balls rolling down
hills.
And these effects that
we see-- a particle
interfering with
itself and, as a result,
occupying only
discrete levels-- make it look
like a very different
type of particle
than you had otherwise.
And when people started to
try to simulate and predict
these reactions or effects, they
said, wow, this is really hard.
What if we could actually use
this tool to our advantage
instead?
What if we could
use this difficulty
as the power of our computation?
But I'll get to that more later.
But effectively what I want to
define for you in a very loose
sense is that a quantum
system is some physical system
operated in a regime where
we actually need effects
like discrete energy levels
and particles interfering
with themselves to
accurately describe it.
In practice, that
often means something
that's very cold and
very well-controlled.
So I'm going to talk also
about quantum simulation.
And I think it's
interesting to go back
to the roots of simulation,
which are actually
in these devices called
orreries that have been dug up
from sites as early as 125 BC.
So a simulation in
some general sense
is that you have a model
you're interested in
and you'd like to push
it forward in time
and ask what that
model predicts.
So in the cases
of orreries, these
are models of solar
systems that you build
and you crank forward either
by hand or by a clock.
And you say do the orbits
of these fake planets
match the orbits
of my real planets?
And if the answer is
yes, then in some sense,
the model that
you've constructed
is somehow more correct.
So it would be very
difficult for those particles
that I just
described, which need
to interfere with each
other, to be built
with just classical balls.
So an idea that originated
the field
of quantum simulation, and
thus quantum computing,
was that Richard Feynman,
the famous physicist,
said, well, if you want to
do a quantum simulation,
you probably need to do
it with quantum particles.
And I'll describe a little
bit what that means.
But these blue balls here are
just any particular quantum
system you might have.
And your ability-- these
puppeteer strings on top
are the ability to
experimentally control
those much better than you could
in a different physical system.
And we'll use that
to examine what
quantum particles might do.
And what are examples
of quantum systems?
So I have a few
pictures here of ones
that people have proposed
for use in computation.
And generally, these
meet the requirement
that they're in this regime that
you need interference effects
and things like that.
And they're also highly
controllable systems
so that you can do
things that you wouldn't
be able to do in nature.
And I've pictured a
number of machines,
like superconducting qubits.
One here from the Google group,
one from the Siddiqi group,
quantum photon setups
and ion trap setups--
basically any number
of controllable systems
that you can bring to
this particular level
are potential building blocks
for these quantum devices,
each with their
strengths and weaknesses
that they might exhibit as you
start to manufacture them.
And quantum
simulation, of course,
was the first idea
of if you want
to study the effects
of a quantum system,
and this is a very
hard thing, then
perhaps you can just use another
analogous quantum system,
much like the planets in
the orrery, the example
that I gave before.
And this simulation idea led
almost immediately-- well,
not almost immediately--
into a more abstract
concept that I'll
start to introduce
now, which is qubits.
So it's a very big leap to
go from pushing ions around
in a trap with lasers
to an algorithm that
talks about factoring products
of two large primes.
There's a big leap there.
It's very hard to
imagine how ions resemble
these types of systems.
But a similar leap was made
in the original systems
as we moved from, say,
models of planets all
the way to digital computing.
So it's, of course, very hard
to imagine how a planet fits
in a digital computer,
but we've managed
to come up with abstractions and
encodings and discretizations
that have made this possible.
And really, the leap
from quantum simulation
to quantum computation
is this abstraction,
this model of
universality, that allows
us to code problems
we're interested in
into these sophisticated
devices and leverage
those powers of interference,
entanglement, and all
of these hard to
describe physical effects
to utilize them in a
more computational way.
And so just to talk about this
quantum computing abstraction,
you've probably heard
of it many times.
So if you imagine one of
these quantum systems that
I've described-- so a
physical system in this low,
controllable energy space--
and it occupies only the first
two discrete energy levels
that it's allowed
to, then we typically
call this a qubit.
So the qubit is a
quantum generalization
of a classical bit.
You may have heard this
explanation many times.
And I'll actually
debunk a little bit
of that in a second, which is
that a classical bit is 0 or 1,
and a quantum bit is
something like 0, 1,
or anything in between.
So you can have superpositions
of these objects.
You can have entanglement
between them, which are truly
important quantum effects.
And operations on these
bits are just called gates.
And we typically denote this
with these gate diagrams.
So you can read down
the lines, for example,
and just say, well, qubit one
gets acted on by these gates.
Qubit two gets acted on
by these gates, and so on.
So the details of exactly
what these operations do
are not terribly important
at this juncture, but just to
know that we have this
model of computing
that we fit our
arbitrary problems
into, much like the model
of digital computing,
which we've built so
much on in the past.
And just to give you
notational familiarity,
we often call these
generic states psi.
So you'll notice
with the qubit, I've
put these brackets
around the 0 and 1.
And with generic states,
I'll often put a bracket
around some state psi.
It just means some
number of qubits,
some number of these
physical systems put together
to make my quantum computer.
So I'm going to
consider a quantum
computer a collection of
one or many of these qubits
put together that I
can control in any way
that I'm interested in
within feasible resources.
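[To make this abstraction concrete for the computational folks, here is a minimal numpy sketch of a qubit, a couple of gates, and an entangling two-qubit circuit. This state-vector view is illustrative only: it is how a classical simulator represents the math, not how a physical device works.]

```python
import numpy as np

# A qubit state is a normalized 2-vector: |0> = (1, 0), |1> = (0, 1).
zero = np.array([1, 0], dtype=complex)

# Gates are unitary matrices acting on that vector.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard: superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Two qubits live in the tensor product space; n qubits need 2^n amplitudes,
# which is where the exponential size comes from.
psi = np.kron(zero, zero)

# "Reading down the lines" of a gate diagram: Hadamard on qubit one, then CNOT.
psi = CNOT @ np.kron(H, np.eye(2)) @ psi
print(psi)  # (|00> + |11>)/sqrt(2), an entangled Bell state
```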
So it behooves us for
a moment to debunk
a lot of the things that get
written in popular science
articles on quantum computing.
So I'd just like
to highlight some
of these for people that
are in computing to help you
understand that there's a
little bit more to the speed ups
than meets the eye when people
discuss these algorithms.
So one thing that
gets said a lot
is that it's faster
or better because it
can use an exponential
number of states.
So this is a nebulous statement,
because you don't really
know what "use" means--
you don't really know
what popular science writers
mean by this.
And I'd like to point out
that a set of classical n bits
can also be in an
exponential number of states.
So if you take it
at very face value
that you can just have zero,
one, or many things in between,
then there must be something
a little bit missing
from that statement and
little clarifications that
are needed, such
as entanglement,
or such as the number
of states that can
be occupied per given resource.
Another myth is that it's faster
or better because bits can
be 0 and 1 at the same time.
So it's, in fact, a
little bit nebulous even
to say bits are 0
and 1, or 0 or 1.
Quantum mechanics
often requires actually
a different theory of logic.
And more importantly, what
people often mean by this
is that when you can
dial between 0 and 1,
that somehow occupying all
those states in the middle
is more powerful.
But if that were all there
was to it, then an analog bit
might serve just as well
as a quantum one.
And a collection of analog
bits would be just as powerful
as a quantum device.
But that's not true.
And a lot of these
things are covered
in a very interesting
comic, which
debunks the third myth,
which is that work
is done by computing all the
answers in massive parallelism.
And so I want to highlight
this SMBC comic, which
is co-written by Scott
Aaronson, which I think
highlights many of
the aspects of this.
It's called "The Talk," where
a mother explains to her child
the important caveats to the
magic of quantum computing.
And a thing they write here
is the important thing for you
to understand is that
quantum computing
isn't just a matter of trying
all the answers in parallel.
So if you actually
thought that was the case,
you can look into
this comic and see
some of the details of why,
if you perform measurements,
you would then only
get one answer.
And really, you get the speed
up when all of these inputs
combine in just the right way to
give you just the right answer.
And so this is a
delicate matter.
But if you read through many
pop science articles on quantum,
you'll encounter these
arguments over and over again.
So it helps to know that
they're not completely true
or not the whole story.
So what are the challenges
in quantum computing
from both an algorithmic
and a design standpoint?
So these algorithms tend
to follow a simple pattern,
if you look at many of them.
So you prepare some
state of your qubits,
meaning I manipulate the qubits
in a particular way with laser
or microwave pulses
or things like that.
I evolve them forward under
my given gate sequence.
And I perform some
measurement at the end.
And I'm measuring out
a particular piece
of information.
I'm not characterizing
the entire quantum state.
Otherwise, there's no
way I would get out
a speed up from this
particular process.
And each one of these steps
has a number of challenges.
For example, in
preparing the state,
I'm often limited by the
number of qubits on my device.
When I think of a 64-bit
classical processor, that
doesn't mean the largest problem
I can fit in my processor
is 64 bits.
I have some ability to
take problems by chunks
and move along and
compute on them.
That's often not the case
with quantum algorithms,
where you depend on the
entanglement between all parts
to get your speed up.
And chunking problems
becomes much, much harder.
So we need larger devices than
we would conceive of otherwise.
There are also issues of
coherence time, which
is essentially the amount
of time that a qubit is
good for as you're
acting on it before
you'd like to move on
or you need to refresh
that particular device.
And it sets a time limit
on the number of operations
that you can actually perform.
And finally,
information extraction
can be fundamentally
different, which
I'll highlight in a
particular example after this.
And one solution to
all of these problems
is, of course, better hardware.
But what I like
to say is that we
have to meet hardware
designers halfway.
We need to co-design
better algorithms as well.
And what this means
is that in the past,
as with this VQE algorithm,
we've designed coherence-time-flexible
algorithms that
work with the coherence
time of the device.
And I think in the
future, we need
to worry about how
to build qubit-number-flexible
algorithms and to
improve this halfway point
where we couple quantum and
classical devices together,
because we really do have
well-defined classical
resources today
that hopefully we
can leverage in that process.
So this information
extraction point
I want to get back to with one
particular example of a quantum
algorithm that got written down.
So this algorithm
thinks about solving
linear systems of equations.
So if you're not
familiar with this,
you can just think
of this as how
do I solve for x in
this particular problem?
It appears in any
number of, say,
logistics problems,
machine learning,
everyday optimizations,
things like this.
And classically, when I say
I want to solve this problem,
it essentially amounts
to I want to write down
all the entries of x.
And that seems like a
reasonable thing to do.
A quantum algorithm came out
which was exponentially faster--
that enormous speed up I
talked about before--
at delivering the solution.
But it changed the definition
of solution a little bit.
Notice the brackets around
the x and the b now.
And what it meant was
the solution translates
to preparing a
state x from which
one can efficiently sample.
And so that's not
exactly the same thing
as writing down all the entries,
because if you wrote down
all the entries, you would lose
that advantage necessarily.
And so what I want to say
is that's not a bad thing
necessarily.
What it really is is
solving the problem,
not reproducing the
classical algorithm.
So if I'm out, and I'm Boeing,
and I'm looking at my plane,
and I'm trying to make
a stealth aircraft,
do I really care about all
of the entries of where
every single radar bounces?
Or do I care about an
aggregate cross-section
as fast as possible?
And if the answer is that I
care about this cross-section,
then really what I've
done is use a new tool
to solve this problem
rather than just translate
an old classical algorithm.
And this step was
necessary in order
to achieve this
quantum speed up.
And it's one that
I think you have
to keep in mind moving forward
when conceptualizing
what quantum computers are
good at.
Quantum computers
are not about taking
an old classical algorithm
and running it faster.
They're about using
a new set of tools
to solve the problem
that you're interested
in in a fundamentally new way.
And I think that's
highlighted very well
by this particular
comparison between
the classical and quantum
version of solving
a system of linear equations.
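[To illustrate that distinction with a hypothetical numpy sketch -- a classical cartoon of the idea, not the quantum algorithm itself: classically, "solve" means writing out every entry of x, while the quantum routine prepares a state |x> you can only sample from, which is enough when an aggregate property is all you care about.]

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
b = rng.standard_normal(8)

# Classical notion of "solution": every entry of x.
x = np.linalg.solve(A, b)

# What the quantum algorithm delivers instead: samples of an index i
# with probability |x_i|^2 / ||x||^2 (simulated classically here).
p = np.abs(x) ** 2 / np.sum(np.abs(x) ** 2)
samples = rng.choice(len(x), size=1000, p=p)

# Enough to estimate aggregate quantities, like which components dominate,
# without ever writing all of x down.
print(np.bincount(samples, minlength=len(x)) / 1000)
```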
So where do we think
early applications
are going to be highlighted
for these particular devices?
We think some of the
earliest problems
perhaps will not be breaking
codes or bringing down
the world financial system
or something like that.
They'll be in areas
like optimization.
So say quantum
approximate optimization
of logistics or other
multi-variate surfaces
that you might find in
any kind of problem.
They might be in some kind
of relational representation.
So you can think
of different kinds
of quantum neural networks
that link together variables
in a way you didn't
know how to do before.
And the one that
I'm interested in
and I'm going to talk
a little bit more about
is quantum simulation.
So this idea of how you
perform experiments that would
have needed to go into a
laboratory on a computer
beforehand.
And areas like chemistry
or perhaps even
high-energy physics,
where you can
predict what catalysis
would happen, or what goes on
inside the fusion of a
nucleus, or perhaps you
could look at what drugs
are disease-preventing,
are the types of simulations
that we want to look at.
And I'm going to
focus in on quantum
chemistry in particular.
So why do we want to
simulate quantum chemistry?
Or why do we want to simulate
chemistry in general?
What is the dream of
this field, essentially?
The dream is that
someone gives you
an idea of what a
molecule or a protein
or something in someone's body
looks like, or a material,
and just from that I
would like to understand
many things about it.
I'd like to understand
how it absorbs light.
I'd like to understand how it
complexes with other species.
I'd like to understand how
that molecule likes to talk
to surfaces and move around.
And from that
understanding, I'd like
to develop some
level of control.
So if I know why a
molecule absorbs light,
what functional
groups affect that,
maybe I can design new
photovoltaics, new solar cells
to put on my house.
If I know why certain species
complex within a protein,
or why a protein
folds or misfolds,
maybe I can design
an inhibitor that
prevents the onset of
certain types of diseases.
And if I know why
molecules complex
with these catalytic
surfaces, maybe
I can get platinum out of
my catalytic converters
and understand how
to lower the energy
consumption of these processes.
And these are things that
are all made possible
only by very high accuracy
simulations, which
are the kind that we're
going to be aiming for.
And this problem is
also an interesting one,
because you might say, well, all
those things you just described
sound like very lofty goals.
But where are you even
going to start from?
And it's an interesting problem,
because "the underlying laws,
physical laws necessary
for the mathematical theory
of a large part of physics
and the whole of chemistry
are thus completely known.
And the difficulty is only
that the exact application
of these laws lead
to equations much too
complicated to be soluble."
This was said by
physicist Paul Dirac.
And what he
essentially meant was
that if you could just solve
equations large enough--
so this very innocuous
equation that I have written
on the right hand
side here essentially
represents all of those things
that I was just talking about.
If I can find a way to code a
molecule into these equations
and solve this linear
eigenvalue problem,
then I'm going to
start to understand
how these molecules
absorb light,
how they complex with
other species, what
are the rates of
chemical reactions,
and do everything that
I just talked about.
And so that's kind
of the exciting dream
of quantum chemistry.
And it's one I think we can
achieve with quantum computers.
And one problem that
I want to highlight
for this in
particular-- so you say,
that's a very general type
of argument that you've made.
What specifically are
you going to look at?
And I want to highlight for you
one particular problem, which
is the production of
fertilizer from nitrogen. This
is a process that goes on all
over the world, all the time.
It's how do you take N2
and make it into ammonia.
It's this nitrogen
fixation problem.
And humans currently do
this at massive scales.
We do it for crops
all around the world.
And we use a process called
the Haber process, which
happens at 400 degrees
Celsius and 200
times atmospheric pressure
and currently uses 1% to 2%
of all energy on earth today.
And then we look over at our
friends in the plant kingdom
and the animal
kingdom and fungi,
and we say, well, how
do these guys do it?
They existed long before we
had fertilizer plants, building
all of these chemical processes.
And they manage to do it at
25 degrees Celsius and one
atmospheric pressure.
So room temperature and
atmospheric pressure,
essentially.
And people have managed to boil
down where the action is
happening in one particular
enzyme, nitrogenase,
to this FeMoco core,
as they call it.
And people have tried to study
this with current methods.
And it's beyond the reach of
all current classical methods
to really understand even
where the substrate attachment
happens, what the electronic
structure process is,
and how we can move
forward on this problem.
And what I want to propose
is that while there's
no clear path classically, I
think quantum mechanically,
with something like 150
to 200 logical qubits,
there's a straight path
forward to studying
the electronic structure
of this problem.
And so why is this
problem so difficult?
So I talked about
the fact that I only
needed to solve this one
very simple-looking equation
and all of these properties
would come to me as I imagined.
So the problem is
not in the set up.
It's in the dimension
that you need to solve.
So if you imagine that another
way of phrasing this problem
that I want to solve
is that I would
like to know where I should put
all the electrons in my system.
So this is called the
electronic structure problem.
And if you imagine that I
discretize things, as I always
need to do for a computer,
and put down m sites,
I can ask, how many
ways can I arrange each
of these number of electrons?
And if it's just one, I can
arrange it m number of ways.
And if it's two, it's m squared,
making some coarse arguments
about antisymmetry.
And if I go to just, say, 100
sites and, say, 80 particles--
so you can see that this
grows as m to the n,
and that's about the size
of molecule you might expect
for something like 100 sites
that I've pictured there,
much smaller than a protein--
the dimension of this
problem that I need to solve
becomes 10 to the 160.
So this is a number that's
very hard to get a feeling for.
So I've given you a
barometer for that,
which is the number of particles
in the universe is estimated
to be roughly 10 to the 80.
So that's as if every
particle in the universe
had another universe within it.
And I needed to account for
all of the particles in those.
And I need to solve
a problem that's
on the dimension of
that scale, which
seems totally intractable.
But I want to remind you
that this same difficulty is
the power that we're harnessing
when we use a quantum computer.
So we've essentially turned our
lemons into lemonade, in a way.
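[Python's arbitrary-precision integers make this coarse counting argument easy to check, using the 100 sites and 80 electrons from the slide:]

```python
# Coarse counting from the talk: m sites and n electrons give
# roughly m^n arrangements.
m, n = 100, 80
dim = m ** n
print(len(str(dim)) - 1)  # 160, i.e. the dimension is about 10^160

# Compare with roughly 10^80 particles in the universe: there is
# "another universe" of states per particle.
print(len(str(dim // 10 ** 80)) - 1)  # 80
```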
So I just want to take
a brief aside for everyone
who might not be familiar
with quantum to understand
another exponential object that
you might be more familiar with
and the relation between
that and this problem.
So you might be more familiar
with probability distributions.
Say you have
16 different places that you
might like to look
for lunch, and you
have your own set of
preferences, p1 for each store
that you go to, and someone
else has their own set
of preferences. If the two
of you don't know each other,
these probability
distributions factorize.
So even though this joint
distribution on the left
has a lot of size, the structure
allows you to kind of simplify
this problem so you only need
a linear amount of information.
However, if the two of you
definitely know each other,
whether you're
friends or enemies,
then a correlation
gets introduced
between those two things.
And it no longer factorizes.
And you can imagine if there
is many, many of these stores
and many, many of these people,
keeping that joint probability
distribution is
horribly complex.
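[A small numpy sketch of this lunch example, with made-up preference vectors: when the two people are independent, the joint table factorizes into an outer product, and once a correlation is introduced it no longer does.]

```python
import numpy as np

stores = 16
p1 = np.random.default_rng(1).dirichlet(np.ones(stores))  # my preferences
p2 = np.random.default_rng(2).dirichlet(np.ones(stores))  # your preferences

# If we don't know each other, the 16 x 16 joint table factorizes:
# only 16 + 16 numbers of real information.
joint = np.outer(p1, p2)

# If we're friends (or enemies), a correlation appears; with k people
# you are stuck storing all 16^k entries of the joint table.
joint_corr = joint.copy()
joint_corr[0, 0] += 0.05        # we always meet at store 0
joint_corr /= joint_corr.sum()

marg1, marg2 = joint_corr.sum(axis=1), joint_corr.sum(axis=0)
print(np.allclose(np.outer(marg1, marg2), joint_corr))  # False: no longer factorizes
```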
And the key caveat
of why quantum
is different from these types
of probability distributions
is that classically,
we know how to sample
from even these large
spaces using things
like Monte Carlo methods.
But that interference
phenomenon that I talked about,
and entanglement
mean that we can't
use some of these methods.
It's unheard of in
classical probability
that two people would
interfere with each other
and not be in a
store, for example.
And having to account
for these effects
makes it very hard to
simulate classically.
So how do we actually
simulate these problems
on a quantum computer?
So I've told you that there's
a lot of promise for doing so.
And I want to show
you how we actually
start to build these things.
So if we go way back to,
say, high school chemistry,
you might have seen
a model that looks
like some electrons rotating
around a proton and a neutron.
And then if you continue
taking chemistry,
you learn very quickly that
perhaps this model is not
so accurate.
It doesn't make many
predictions at all.
And if you took chem 101
in, say, university,
you might have found the
molecular orbital model,
which is often pictured as the
last vestige of these things.
And I want to tell you that this
molecular orbital model, which
predicts things like bond
order and where the spins are,
is exactly like that factorized
probability distribution
that I had before.
It assumes no explicit
correlation of the electrons.
It's like a naive
Bayes model, if you're
doing machine learning.
And it's not a
good enough model.
So it's simple, but
it's not good enough.
And to give you a picture
of what that might mean:
if I wrote down that
same model for the simplest
molecule, H2, which
is a hydrogen molecule,
and I tried to pull it apart,
which is this picture that I
have over to the
right, then I would
find that while the
shape is generally OK,
what I would predict is this
top dotted line up here,
which seems to be quite a
bit off from the exact line
underneath it.
So you come to the conclusion
that if chemistry has anything
at all to do with the making
and breaking of bonds, which
you feel it might
be, then this is not
a sufficiently good model.
And the reason for
this is electrons
actually care a lot about
where the other electrons are.
These correlation effects
make the wave function
very much non-separable
in these types of regions.
And you have to figure out a way
to build that into your model.
So how do we put
these things in?
So I don't want to belabor
the symbols here too much.
But essentially, we
take some model of space
and we chop that
model of space up.
So in much the same way, if
you drew a line on a computer,
you would eventually have
to go down to the pixels
and discretely draw each
one of them on your screen,
we have to do the
same thing here.
So we divide space up and we put
it down on some type of grid.
We choose a specialized grid.
But this is
essentially the same.
And the output of this is
this problem Hamiltonian
that you get here.
And the only thing I want
to emphasize about that
is that we classically
pre-compute it.
And it tells us everything
about the problem
that we'd want to solve.
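[This classical pre-computation step is exactly what FermiLib is for. Here is a minimal sketch in the style of OpenFermion, the package FermiLib later grew into; the coefficients are invented for illustration, since real ones come from integrals over the chosen discretization:]

```python
from openfermion import FermionOperator, jordan_wigner

# Hypothetical pre-computed coefficients for a tiny two-site problem.
hamiltonian = (
    FermionOperator('0^ 0', -1.0)        # site energies
    + FermionOperator('1^ 1', -0.5)
    + FermionOperator('0^ 1', 0.2)       # hopping term, plus its
    + FermionOperator('1^ 0', 0.2)       # Hermitian conjugate
    + FermionOperator('1^ 0^ 1 0', 0.3)  # a two-body interaction
)

# Map the fermion problem onto qubits so a quantum device can act on it.
print(jordan_wigner(hamiltonian))
```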
We often then go ahead and
solve that mean field problem
that I told you about to get a
decent starting point for where
we want to go.
And this defines this concept
of molecular orbitals for us,
these different
levels that things
can fill that are these
uncorrelated motions
of electrons.
And how do we build
in the correlations?
And the simplest
possible way is that I
could go back and ask about
that sort of joint probability
distribution.
If I could just enumerate every
single possibility of filling,
then that would be
one way to solve it.
It would, of course,
scale horribly,
like this universe-sized
solution that I had before.
But conceptually, it's the
simplest way of doing so.
And it's often called
exact diagonalization
or full configuration
interaction in chemistry.
And of course, there have been
many methods developed
over the years between this
uncorrelated mean field
and the exact solution.
And I won't belabor
this slide too much.
But the gist of it is many
of the classical methods
for those problems that we're
interested in are either too
costly or they don't
capture enough of the
correlations that we need.
So what we'd like is this
exact level of description,
but for the cost of a method
that's something more
like density
functional theory here,
or perhaps QMC, quantum Monte
Carlo, or something like that.
So exact solutions
of the quality we
need, but for the price
that we can afford to pay.
And so why do we think
quantum computers
might be good at this?
So a paper that came out in
2005 by Alán Aspuru-Guzik, who's
now at Harvard and was
my PhD adviser, as well
as Ryan's, showed that for some
instances of chemical problems,
if you put in a state
under certain assumptions,
then computing that energy,
which I just told you about,
only costs a time
that scales polynomially
in the system size, rather than
this universe-sized object.
And a key portion of this is
that you prepared the state.
You have to do some evolution.
But your measurement
only extracts
a little bit of information.
It doesn't read out that
whole state, because you don't
care about that whole state.
There are pieces
of that state you'd
like to know things about.
But your quantum
computer has allowed you
a way to zoom in on
the information you
want without being
burdened by the information
that you don't need.
And this led people to see
that, classically, this
might have an exponential
cost and quantum mechanically,
it seemed like a
modest polynomial cost.
The challenge, when we went to
put it on experimental devices,
was that it often required
many more resources
than were available in the lab.
So we tried to look towards
a different approach.
And we took a
co-design perspective.
So what do I mean by that?
So imagine the previous
perspective: someone
sits down with a problem.
They try to write down a
circuit specification that
optimally solves this problem.
And then there's a
big question mark,
because of this qubit
and coherence time
problem of: does it fit in
my quantum Blue Gene cube?
And if it does, it gives me the
answer to life and everything.
If it doesn't, then
perhaps it simply
doesn't run, which
isn't very useful.
So what we wanted
to do instead was
consider the task and
the current architecture
and try to find the
best solution possible.
So what this means is
combining the problem
and the architecture
that we have,
getting a circuit
sequence that's
compatible with these
two, and doing kind
of a classical feedback loop.
So the answer we get back
out might not be perfect.
So it might only be close to the
answer to life and everything.
But for a lot of chemical
applications, say,
does this react or not?
Is this a valid drug or not?
Where you're only interested
in some coarse answer that
depends on an
accurate energy, that
might be better than
you can do classically.
It might be good enough to make
that prediction that you're
interested in.
And to do this, you
need to ask, OK,
so I have a quantum computer
with limited resources.
How do I build to that device?
What is this device
best at doing?
What are the minimal
specifications
for which I can call
this a quantum computer?
And one is I would like to be
able to do an operation, which
if I've done all of
my state preparation,
I'd like to be able
to look at a qubit
and ask is this
qubit a 0 or a 1.
And I'd like to do that
over and over again
until I've decided what the
average value of that might be.
And I'd also like to look at
many qubits at the same time.
And I'd like to ask what
those correlated values are.
So this is something
that's efficient to do
on any prepared quantum state.
But in general, it might
be very hard to calculate
this expectation value
on a classical device,
depending on the state
that I've prepared.
So I've really boiled down
to some essence at least
one simple operation for
a prepared state that
would be very hard for
me to do classically.
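[Simulated in numpy, that minimal operation looks like this: repeatedly read each qubit out as a 0 or a 1 and average, and read several qubits in the same shot to get correlated values. The prepared state here is just a stand-in for whatever the device produced.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend the device prepared (|00> + |11>)/sqrt(2).
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)

# Each shot returns one bitstring, sampled with probability |amplitude|^2.
shots = rng.choice(4, size=10_000, p=np.abs(psi) ** 2)
bits = np.stack([(shots >> 1) & 1, shots & 1], axis=1)  # qubit 0, qubit 1

z = 1 - 2 * bits                    # measurement outcome 0 -> +1, 1 -> -1
print(z[:, 0].mean())               # average value of qubit 0: ~0 here
print((z[:, 0] * z[:, 1]).mean())   # correlated value of both qubits: ~+1
```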
So how am I going to use
this on my chemistry problem?
So I'm going to switch
formulations a bit.
And so that problem
that I showed you before
was written as an
eigenvalue problem.
And it turns out that
for exterior eigenvalues,
you can play a little
bait and switch.
And you can always write
an eigenvalue problem
as a minimization over this
kind of constrained unit vector.
And so I'm going to switch over
into that formulation and say,
now the problem I
would like to solve
is some minimization
of an average quantity.
And if I decompose
my Hamiltonian
in the way that's standard
for some of these systems,
then essentially by linearity,
for this expectation value
that I'd like to minimize--
which is equivalent
to my eigenvalue problem--
I have two tasks,
one of which is easy for
a quantum computer.
So I need to compute a bunch
of these little averages here.
So repeatedly
looking at each qubit
and telling me if it's 0 or 1.
And then I have
those inputs that I
got from the
discretization I chose,
how I divided my problem up.
And my classical
computer is very
good at adding a lot
of numbers together.
So why wouldn't I
use that resource
for this particular problem?
So I perform a bunch
of measurements
on my quantum computer,
which are easy.
I feed those to my
classical computer
and ask it for an update step.
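[In a sketch, with a made-up one-qubit Hamiltonian: the decomposition means the average I want is a weighted sum of little averages, each of which is a simple repeated measurement for the quantum device, while the weighted sum itself is trivial classical arithmetic.]

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

# Hypothetical decomposition H = sum_i h_i P_i from the discretization step.
coeffs, paulis = [0.5, 0.25], [Z, X]

# Some prepared state |psi(theta)>.
theta = 0.7
psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])

# Quantum computer's job: one little average <P_i> per term
# (computed exactly here; a device would estimate it from repeated shots).
averages = [np.real(psi.conj() @ P @ psi) for P in paulis]

# Classical computer's job: just add weighted numbers together.
print(sum(h * a for h, a in zip(coeffs, averages)))
```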
So this kind of
suggests a hybrid scheme
where I parameterize
my quantum state
with some classical
experimental parameters.
And I compute averages
using a quantum computer.
And I update that
state classically.
And pictorially,
this algorithm looks
like this, where I have a
quantum module, where I prepare
some state and feed it in,
read out these expectation
values, and add them together
on my classical computer.
And then I just loop that
algorithm back and back
and back until I've reached
some level of convergence,
which I think represents
how good that device can do.
So again, the
answer might not be
the best absolutely possible,
because you're constrained
by the device itself.
But it's perhaps better than you
could do on a classical device.
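[Schematically, the whole hybrid loop can be sketched as below. Everything here is a toy: a one-parameter state, a made-up two-term Hamiltonian, sampled averages standing in for the quantum module, and a crude grid scan standing in for the classical update step.]

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H = 0.5 * Z + 0.25 * X          # hypothetical problem Hamiltonian

rng = np.random.default_rng(0)

def quantum_module(theta, shots=2000):
    """Stand-in for the device: prepare |psi(theta)>, return a sampled <H>."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    # <Z>: measure in the computational basis.
    z = rng.choice([1, -1], size=shots, p=np.abs(psi) ** 2).mean()
    # <X>: rotate into the X basis, then measure.
    psi_x = np.array([psi[0] + psi[1], psi[0] - psi[1]]) / np.sqrt(2)
    x = rng.choice([1, -1], size=shots, p=np.abs(psi_x) ** 2).mean()
    return 0.5 * z + 0.25 * x   # classical computer adds weighted averages

# Classical outer loop: here just a coarse scan over the one knob theta.
thetas = np.linspace(0, 2 * np.pi, 200)
energies = [quantum_module(t) for t in thetas]
print(min(energies), np.linalg.eigvalsh(H)[0])  # close to the exact ground energy
```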
So this is interesting,
because if I showed you
the algorithm
before, just briefly,
which was this quantum
circuit, I just
described an algorithm
that looks effectively
like this, where I've removed
all of this interior part,
with its millions and millions
of quantum gates in it.
And so where is the
advantage coming from?
It has to come from that
state that I've input.
So it really raises
the question of what
are the interesting
quantum states to look at?
If you imagine the space
of all possible states,
somewhere inside it are
the ones I can reasonably
make on my quantum computer.
And within that,
there's the ones
that are easy to model
on a classical computer.
So I'm really looking at this
part in the blue space which is
not covered by the yellow here.
And where do those states live?
And I think that's one of the
most interesting questions
that a quantum computer is going
to be able to answer for us.
And conveniently,
I can then
define what states I want
to explore by the device
that I have.
So if you call it a quantum
hardware ansatz, or sometimes
sub-logical ansatz,
then you can imagine
that that parameter space you
explore is any of the knobs
that you can repeatably
turn on your device.
So if you can do
that operation twice,
you can call that some quantum
state you're interested in.
And you use the complexity of
the device to your advantage.
And the coherence
time requirements
are going to be set
by the device instead
of the algorithm.
It won't be the case that you
say, I need 10 million gates;
well, I can't do
10 million gates;
and you get nothing.
It's I have this amount of time.
And you get the best answer you
can within that window of time.
And of course, we have
some theoretical constructs
that I won't go too deeply into.
But we like to be able to design
these particular ansatzes as
well in a logical formalism.
So of course, I
haven't really talked
about what happens with
quantum errors yet.
So you can imagine that these
devices are not perfect.
I've alluded to this many times.
And if you look at the
level of the device,
you might imagine that there
are certain areas called
coherent errors.
So an example of this:
if I want to rotate
by some angle,
but every time I
rotate a little bit
too far, then that's going to
be a type of coherent error.
And this quantum device also
lives in a larger environment.
So it's always seeing
electromagnetic waves
coming in.
Maybe it's even feeling
the effects of temperature
bleeding in
from the outside.
And this is going to
cause random errors that
are more incoherent.
So these are types
of dephasing errors.
So how does this algorithm
perform in the presence
of these types of noise?
And one of the
things we conjectured
is that because of this
classical feedback loop,
that certain types
of coherent errors
would be corrected
by this procedure.
And we're lucky enough to have
participated in an experiment,
in collaboration here
with the Google group,
actually, that showed that
this was in fact the case.
So let me describe briefly
what this image shows.
So it's actually
a good depiction
of what a quantum algorithm
looks like in practice,
or at least a small scale one.
So to your left,
you have the picture
of both the hardware at the top
and the software at the bottom.
So the hardware at
the top are of course,
these transmon qubits that
come from John Martinis's lab.
And you see that the
software on the bottom
labels them as just
qubit zero and qubit one.
And as I apply these
quantum gates, or just
operations, to qubits, you
can see how that corresponds
to pulse sequences at the top.
And this goes through.
And you measure out just the
expectation values that I
was talking about over here.
And a classical feedback loop
goes to change our one parameter
z here in the middle.
So we use this kind of circuit
to study this bond dissociation.
So what this is a picture of is
if you take two hydrogen atoms
and you pull them apart,
what's the energy?
What's the resistance
to this pulling?
And we've studied
it by that algorithm
and by another called
phase estimation.
What we found was that
that feedback loop really
gave us something to the tune
of some type of coherent error
correction.
And we feel that the
smoking gun for this
was this plot that
essentially took this problem.
So this is the same
problem, but I'm only
plotting the errors now.
And the green dots--
whoops, the green dots here are
if I took the exact solution
of this problem.
So it's relatively
small at the moment.
And I can check what the
exact angle of that z
in the middle of the circuit
should have been, this z theta.
And I plug this into my device,
and if my device were perfect,
then I would have
gotten the exact result.
And what we found instead
was these green dots.
And then on the red is if you
run this variational feedback
loop, you find that the
errors drop, in some cases,
by over an order of magnitude.
And you get much better
and consistent results
across the curve.
And this is indicative
to us that it's
coming back to fix these
types of over-rotation errors.
Because if you wanted to
rotate by some angle theta
and you actually rotated by
theta plus delta,
this is really just a
labeling error in some regard.
What you care about
is the quantum state
you've produced rather
than the particular labels
that you've given it
at the end of the day.
So this is our smoking gun
for this type of quantum error
suppression.
But we also wanted to look
at generalizations to this.
Could we tackle
incoherent errors?
Could we look at other states?
And one thing that you notice
when you look classically
at these types of problems
is that eigenvalue problems are
defined in linear spaces.
And what we have here is
instead a parameterization
that explores something that
looks a little bit non-linear.
So if you imagine that
this is my parameter space,
this gray manifold here, and
I'm walking around that space,
it looks curved
similar to the way
you might have a neural network
parameterized by some weights
or something like that.
It's not exactly linear.
So if you want to
learn something
about the distribution,
and you want
to leverage the power
of linear algebra
that we've always used
in quantum mechanics
to tell us things like excited
states, interior eigenvalues,
and other things, you
want to cast this back
into a linear space, but one
that's relevant to the problem
that you're interested in.
So you can imagine
that it might be
possible to look at this
point that you're at
and expand it just around
that particular point.
Build a little flat plane where
I can do my analysis, even
though that point is both one
that I don't know much about
and one that I can't prepare
classically.
But I can build this
little flat plane
and learn something
about the action
of an operator within it.
So what I'm going to
do to do that is I'm
going to act with this
set of operators.
I'll choose a set
that determines
my little flat space.
And I'll act the
Hamiltonian on that,
because these are where
the energy eigenvalues are
coming from.
And I'll probe with states
also within that space.
So this will tell me how any
state within this little flat
plane that I've built
moves to another state
within that space.
And I'll do this to build a
matrix representation of that,
which will tell me how it
acts on all of these states,
even though I don't actually
know exactly what
those states are.
And moreover, I have to
build some representation
of the identity, which
is also the local metric.
But in doing so, I build now an
offline classical generalized
eigenvalue problem.
So this is again a coupling
between quantum and classical.
I've built some
offline problem based
on the measurements I
took on my quantum device
to improve what
I could have done
without the classical computer.
And this gives me some
estimate of my excited state
energies and also
something a little more.
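[In symbols, as a summary of the construction just described -- the notation here is mine, matching the words above: given the prepared state $|\psi\rangle$ and a chosen set of expansion operators $\{O_i\}$, the extra measurements estimate the matrix elements

$$
H^{\mathrm{sub}}_{ij} = \langle \psi |\, O_i^\dagger \hat{H} O_j \,| \psi \rangle,
\qquad
S_{ij} = \langle \psi |\, O_i^\dagger O_j \,| \psi \rangle,
$$

and the offline classical step solves the generalized eigenvalue problem $H^{\mathrm{sub}} c = E\, S\, c$, with $S$ serving as the representation of the identity, the local metric.]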
So how does this look for
a real problem pictorially?
So if you imagine that this
is the curve that I got before
of my hydrogen bond breaking,
assuming I got it equally well.
At each point on
this curve, I'm going
to do this expansion
that I talked about.
And this expansion
only corresponds
to extra measurements
on my system.
There's no additional
coherence time required.
And it looks like if my system
went through this decohering
channel over here, and
I prepare the state,
then these expansions
sometimes actually let
me go outside the
original set of states
that I was able to prepare.
And I solve this problem.
And I get out these excited
state energy and properties.
But what I also
find is that because
of that expansion
outside this cone
that I'm allowed to
occupy from decoherence,
that I can sometimes improve
the energies even of my ground
state and correct
for some of these,
quote unquote, "incoherent"
errors that are in my system.
And to see how this works, you
can build a very simple one
qubit example to do this.
So I just make some
one qubit Hamiltonian up,
and I imagine two
characteristic errors.
One is this first one
here, which corresponds to:
no matter what
angle I give this thing,
it always gives me
back the zero state.
So it's a nonfunctional
control, essentially.
And the other is my machine is
so bad that it gives me back 0
or 1 essentially randomly.
And this is how you
define these kind
of operators within this space.
And I'm going to choose
these sets of operators
that build my little planes.
And the first one
is the identity.
So that's just the
original algorithm.
And then in the
second set, I'm going
to include only bit flips.
And in the third set, or
set two, as I'm calling it,
you do, essentially, full
tomography on this qubit.
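[A self-contained numpy sketch of this one-qubit demonstration, with an invented Hamiltonian and the two error channels just described; the sets {I}, {I, X}, and the full Pauli set play the roles of the three operator sets on the slide:]

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

H = 0.5 * Z + 0.25 * X                  # made-up one-qubit Hamiltonian
exact = np.linalg.eigvalsh(H)[0]

# The two characteristic errors from the talk:
rho_reset = np.diag([1.0, 0.0]).astype(complex)  # broken control: always |0>
rho_mixed = I / 2                                # totally random: 0 or 1 uniformly

def qse_ground(rho, ops, tol=1e-9):
    """Subspace expansion: H_ij = Tr(O_i^dag H O_j rho), S_ij = Tr(O_i^dag O_j rho),
    then the lowest generalized eigenvalue of H c = E S c."""
    Hm = np.array([[np.trace(Oi.conj().T @ H @ Oj @ rho) for Oj in ops] for Oi in ops])
    Sm = np.array([[np.trace(Oi.conj().T @ Oj @ rho) for Oj in ops] for Oi in ops])
    s, U = np.linalg.eigh(Sm)                    # orthogonalize S, dropping null space
    V = U[:, s > tol] / np.sqrt(s[s > tol])
    return np.linalg.eigvalsh(V.conj().T @ Hm @ V)[0]

for name, rho in [("reset-to-0", rho_reset), ("fully mixed", rho_mixed)]:
    for label, ops in [("{I}", [I]), ("{I,X}", [I, X]), ("tomography", [I, X, Y, Z])]:
        print(name, label, "error:", abs(qse_ground(rho, ops) - exact))
# Bit flips fix the broken control exactly and the mixed state partially;
# full tomography fixes both, using only extra measurements.
```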
And what you find if you look
at the error in the lowest
eigenvalue, when you do
this, at the initial outset--
so this is just the original
un-expanded version,
you have quite bad errors.
So this is log 10 error
in the lowest eigenvalue.
If you add bit
flips, you perfectly
fix this case of a
nonfunctional control.
And you moderately fix
the mixed state error.
And if you do full
tomography, you
correct both cases with just the
measurements from the device.
So we went out and we did
an experiment based on this,
in collaboration with the
Siddiqi group at Berkeley.
And we found that if we did both
that type of expansion, which
I'm characterizing as the
linear response or lowest level
excitations, plus a few select
ones from the next level,
so a few two qubit flips as
well, then what you get out
are much improved ground states
and also the excited states.
So the excited state
curves are just
the ones you get out from these
extra measurements that you do.
And I'll highlight
that the yellow points
down here with the
larger error bars
are the uncorrected values.
And the other ones with
very small error bars
are what you got as
corrections to those.
So it improved both the energies
and the spreads of the energies
that we got out
in the experiment.
So we're very happy
about this, because we
feel like it's evidence
that for some problems,
there are application-specific
error correction modules, which
you can consider
building, rather
than general-purpose
error correction,
and that quantum simulation
might be a very fruitful area
to work in.
So I want to just
close with slides
saying this is an
exciting time for quantum.
This is just a short picture of
all of the different companies
and different groups
that have started
to do research in this area.
And if you looked at
this list even, say,
five or 10 years ago, it would
have been only a small fraction
of this list.
It's taking off
with dramatic speed.
We're getting more and
more qubits every day
that are better, and more and
more groups getting interested.
Our government is
finally getting on board,
assuming that the current
administration doesn't
mess things up.
But I think it's an exciting
time to be in this field,
both on algorithmics and
superconducting qubits.
I hope that I convinced
you that there's
at least one key problem we're
really driving towards that we
can work on.
And with that, I'd like
to just recap the things
that I've said, which
are if you go back,
what was quantum computing?
Quantum computing was
looking at these particles
while they're in this state
that requires interference
and discrete energy
levels, asking
what happens if we use those
to simulate other particles,
and what happens if we build
a computing abstraction on top
of that?
Can we do different or
new, interesting algorithms
that look nothing like
their classical counterpart?
And we saw some
examples of where
that was true-- for example,
this solution of linear
systems of equations, or this
ability to propagate
chemistry problems forward in time.
I hope I convinced you
that, in some ways,
coupling classical and
quantum devices lets
you do a little
bit more than you
would have just running on
the quantum device, at least
for the short term.
So we have very well-developed
classical computers,
and I think it's a mistake to
cut them out of the problem
entirely at the moment.
And we did this
interesting expansion
that I hope you're
excited about.
And that we're really moving
towards real problems.
Now that we have
devices in the pipeline,
I think we've finally exited
the time of quantum computing
when you work on algorithms
within your imagination,
for devices that might
never exist.
Now's the time to work
towards real problems
and real applications.
And I think we're
not that far away.
So with that, I'd like to
thank you all for listening,
and I'm happy to
take any questions.
AUDIENCE: Thanks, Jarrod,
for the nice talk.
Maybe as a little
addendum, I know
you guys have been working
on a new algorithm,
again in the spirit of
variational quantum algorithms.
But it uses
well-adapted basis sets
so that we have a
chance of running it
on one of the upcoming
near-term devices.
Can you talk a little
bit about that?
JARROD MCCLEAN:
Yeah, definitely.
So when I talked about
originally the chemistry
problem, I talked about how
you cut that problem up much
in the same way if you wanted
to draw a line on a computer,
you need to discretize it.
So there's a lot of choices
you can make in that regard.
And some of the ones that
we've made previously
have just been I guess I would
say carry-over or heritage
from classical computing.
And in fact, some of those
basis sets are so old,
they were inspired by
the need to do them
with a hand-crank calculator.
And so the question comes, was
that really the optimal way
to slice it up?
Especially if now you're dealing
with an entirely new type
of computing technology.
So some work that I've
been doing in collaboration
with Ryan Babbush here has
said that there are, in fact,
better basis sets that look
like these plane wave dual basis
sets.
And we'll have a paper
out on that soon; these sets
make an interesting trade off.
So the discretization is perhaps
a little bit less compact,
but the circuit
depth needed to run
them is expected to be
much, much smaller, almost
linear in the system size.
And so what that
means is that if you
have a device with
a lot of qubits,
but perhaps a modest gate depth
available or a modest coherence
time, then you have a
really good chance of doing
an early important problem
with that architecture,
rather than one that focuses
on compacting the qubits,
but requires a
very long runtime.
And I think that's
got a good opportunity
to be one of the first
applications that runs
on one of these real devices.
AUDIENCE: So at the end here,
you mentioned your feeling
that we shouldn't cut classical
computers out of the picture
and we should develop some way
of using classical computers
and quantum computers together.
So here at Google, obviously,
one of our strengths
is machine learning.
And there are even
some people at Google,
on the Google accelerated
sciences team, who are actively
thinking about how
they can use machine
learning to accelerate,
say, electronic structure
calculations.
So my question for
you is do you think
that there is any
role that, say,
neural networks and
sophisticated machine learning
can play to accelerate
quantum approaches
to electronic structure?
Is there any sort of
interplay between, say,
the variational algorithm and
machine learning paradigms
that might be interesting?
JARROD MCCLEAN: So I think
there's a lot of interplay that
can happen here.
And one interesting
connection is, of course,
that neural networks don't look
so different from certain types
of quantum circuits.
But that's kind of
a different topic
than using a classical
neural network to accelerate
this type of computation.
And so for example,
some areas that I
could see these
being beneficial in
is perhaps even
choosing the basis
set that you might
be interested in.
You might be able to learn
what discretizations are
best for a particular problem.
You might be able to
optimize the search procedure
of this particular problem.
So for example, I did
that subspace expansion
at the end, which
told me something
about the excited states that
are nearby this ground state.
So I can imagine that
a neural network might
be able to learn a different
optimal set of operators,
which corrects both for--
which not only looks
at the excited states,
but also hopefully
learns something
about the errors in my
real, physical system.
Because if you can create
just the right perturbations,
you can reduce the error quite
a bit in that particular model.
So I think there's a wealth
of unexplored areas as to
how, say, classical machine
learning could hook up
to a real quantum
experiment, especially ones
with classical feedback loops
and both accelerate it and push
it forward to new levels
that we haven't seen before.
AUDIENCE: So I have a
question about the qubits,
the actual numbers.
So you hear companies
like IBM, they
say they have 16
qubit computers.
And D-Wave has thousand
qubit computers.
But I guess all qubits
are not the same.
How do these numbers compare?
JARROD MCCLEAN: So there are
a lot of important factors
that you have to consider
when you look at comparisons
of qubit numbers.
In fact, I think
IBM has even started
proposing some metric
that includes both
the qubit number and
their coherence time
and the quality of operations
that you can perform on it.
I think they call it
the quantum volume.
And what that really
captures is that it's
important to look at a lot
of other factors as well.
For example, some machines
don't have arbitrary couplings,
so even if you wanted to solve
certain problems on, say,
the 1,000 or 2,000
qubits that you have,
you just cannot fit that
problem into your device.
And so it's important to look
at both the qubit number,
the coherence time, their
connectivity, and all
of these other factors when
comparing across this regime.
And I think by introducing
something like quantum volume,
I think IBM started a
good trend in that regard.
And we need more evaluations
of that metric, developments
on that metric, and for
people to actually report
with that metric, so we can do
comparisons and say this quantum
device outperforms that
one in some meaningful way.
So it's a good question.
AUDIENCE: Thanks.
So slightly going a little
further afield here,
I don't know if people are
interested in this or not.
But I was wondering,
so there are all
these different interpretations
of quantum mechanics, which
as far as I know, they're
all observationally the same,
though I've heard some people
claim that maybe some of them
aren't, exactly.
But for a first
approximation, they're
observationally the same.
So you've got the Copenhagen.
You've got many-worlds.
You've got the Wheeler
transactional interpretation.
I'm wondering, is
there one of these
that's more fruitful,
more perspicacious
for thinking about quantum
computing than the others?
As opposed to just your
personal preference,
is there one that you
make progress better
if you think of it that way?
JARROD MCCLEAN:
So I have to admit
that I'm not a leading
expert on what I
would call quantum fundamentals
or interpretations.
And I know many
people make arguments
in terms of, say, the
many-worlds theory
for this superposition
over all inputs.
And you need all
of the many worlds
to come together and agree
in just the right way.
I personally haven't
really delved
deeply into which
interpretations
aid in the development
of algorithms.
I've always, I guess, subscribed
to this Copenhagen argument
that you can build a
model that predicts
how the hands on a clock move.
But that might not
tell you anything
about how the gears are
constructed behind it.
So I don't think I have a
great answer for your question.
AUDIENCE: Because I
mean, it sums it up,
like you were saying with
the Scott Aaronson cartoon.
So the idea that these
things are massively parallel
is not exactly right, but it's
not exactly wrong, either.
JARROD MCCLEAN:
That's right, yes.
AUDIENCE: And the
many-worlds seems
like it's the natural way of
thinking of it as parallel.
But maybe it makes it too easy
to think of it as parallel.
I don't know.
JARROD MCCLEAN: Yeah,
well, the many-worlds
is certainly the most exciting
from a science fiction
standpoint.
I've always liked
it for that reason.
But in terms of
interpretation, I
agree that it has many appealing
aspects, at least in terms
of computations agreeing.
But it's hard-- as to the
specifics of your question,
I've never found one more
helpful than the other
for constructing an algorithm.
AUDIENCE: OK, thanks.
JARROD MCCLEAN: Yeah, sure.
[APPLAUSE]
