HARTMUT: Jeremy is the director
of the Center for Quantum
Photonics in Bristol, which
is sort of the-- well,
it's pretty fair
to say one of the,
if not the, world's most
important places for doing
photonic implementations
for quantum computing.
Jeremy is originally from Australia, where he got his PhD and also spent many years as a senior researcher on, of course, related topics: confined electrons, organic structures, superconducting devices, and of course, photonics.
And folks may know
that the UK just
announced-- I think it's
half a billion pounds--
UK pound-- dollars?
OK, so half a billion dollars
on the new Quantum Computing
Initiative.
And some of those
funds are probably
going to the
University of Bristol
to further the work you guys
are going to hear about.
And that, of course, allows them to scale up initial prototypes in interesting ways.
And that will breathe,
I think, additional life
into the overall field and also
will create broader interest
around quantum photonic devices.
So Jeremy, very interested to
hear what you're going to do.
JEREMY O'BRIEN: All right.
Thanks very much, Hartmut.
So firstly, thanks very
much for the opportunity
to be here and tell
you a little bit
about what we're doing in
Bristol in photonic quantum
technologies, generally.
I'm going to focus
on quantum computing
because I think that's probably
what you're most interested in.
But I'll touch on these other
areas on the way through.
Before I get going, I need to acknowledge
that there's quite a large
number of people at Bristol
and beyond that
have done the work
that I'm going to describe.
And I'll try to highlight
who they are as I go along.
So before I get
going, I just want
to put up a quick
advertisement for the Centre
for Doctoral Training in Quantum
Engineering, which Google
are part of whether
you like it or not.
Thanks very much to Hartmut for
giving us quite a bit of advice
and supporting this enterprise.
The ambition here is to train of the order of 50 to 100 graduate students
in quantum engineering
over the coming
eight or nine years.
And the focus is very
much on the engineering.
So I think it's all in the title. It's about training a new generation of engineers who understand the quantum physics but who are very much focused on delivering the technology.
So I guess the
message for you is
if you want to host some
very smart, young people
with this sort of mindset
who will have some pretty
substantial initial training,
then please get in touch.
And I think my
colleague, Mark Thompson,
who's the director of
the Centre will probably
be sending Hartmut a request
for projects pretty soon.
So just bear that in mind.
So in the usual way, at Bristol we're interested in these quantum technologies: in communications -- secure and other types of functionality that you get by harnessing quantum mechanics -- and in sensors, which reach the ultimate precision limits dictated by the laws of quantum physics. We're interested in simulations, in particular analog simulations that are distinct from a full-scale quantum computer, as well as the science that underpins that area.
And what I'm going
to mostly talk about
is quantum computation.
I'll mention some
of these highlights
that I've shown here
where we've made a demonstration of secure key growing from this sort of handheld device to a bank ATM machine, for example.
We've measured the concentration
of a blood protein using
entangled photons in a device
like this, which combines
those entangled photons
with a microfluidic device.
And we've done a simulation of a
helium hydride molecule, again,
using one of these
integrated devices.
And as I say, I'll touch on
those three things a little bit
as I go along.
But mostly I want to tell
you about our efforts
towards a full-scale
photonic quantum computer.
This is really my advertisement
for a photonics approach
to these things.
So I think it's fairly clear that there's no really feasible alternative to photons for communicating quantum information over any distance. Optical interferometers like this that operate in the classical regime are arguably the most powerful precision measurement tools we already have, and so enhancing their precision using photonics is a very natural way to go.
And I think it's
fair to say that it's
an historical fact
that photonics
has led the way in exploring the
foundations of quantum physics
and then more
recently in exploring
the fundamentals of
quantum information science
through violation of Bell inequalities, through entanglement of multiple systems, and through teleportation and so forth.
And so the argument would be that if you're doing analog-type simulations, photonics is a very appealing way to do that with much smaller scale systems.
And what I'm going to
make an argument for
is for photonics for a
full-scale, programmable,
digital, quantum computer.
And so I'll start just with
a reasonably brief background
on photonic quantum
computing and then tell you
about our particular
approach to doing that
and try to highlight some
of the current challenges
and the future challenges
to all of this.
So just briefly,
this is an article
that I wrote with
several colleagues
a few years back on
physical approaches
to the platforms for
quantum computing.
And, of course, we
couldn't get away
with publishing this paper
without the editors insisting
that we produced a
league table of all
of the different approaches.
And we were pretty
reluctant to do that.
And we were made
to do that, trying
to compare all these
different approaches.
And I guess the reason that we were reluctant is that this sort of table is not necessarily a meaningful thing to produce.
And the reason that there
is this sort of table
is because building
a quantum computer
has a list of requirements that
are pretty much contradictory
to one another.
Namely, you need these physical systems that have very, very low noise, which means that they're very beautifully isolated from their environment. And yet they interact extremely strongly with one another if you want to implement logic gates, for example, and they interact very strongly with your preparation and readout apparatus.
So there's these sort of
contradictory requirements.
And what you find is that
invariably things look good
on several of those requirements
and not so good on others.
So nevertheless, even back
then it was pretty clear to us
that actually what was
important was the ability
to operate these things
in a fault tolerant way
and actually realizing
architectures
for any of these approaches.
And that would be a more
meaningful comparison.
And I guess what
I'd argue now is
that we're moving
even further ahead.
And I think the
arguments now in my mind
are about manufacturability.
So can we really make
large-scale devices?
And this is a cartoon of a large-scale, photonic quantum computer where, unless you're a photonics specialist, most of it will be fairly meaningless.
But the point that I'd
like to make at this stage
is simply that all
of the elements
there are ones that are
familiar to photonics engineers.
They've been used in
telecommunications and other
applications for
years or decades.
Those same photonics
engineers would
be a bit daunted at the
scale of the whole thing.
But certainly the components
that are going into it
are familiar to them.
And what you don't see in there
is any physics breakthroughs
required.
You don't see any exotic
systems-- atomic scale
fabrication, or
milli-Kelvin temperatures,
or ultra-high vacuum,
or anything like that.
So it's a fairly
friendly environment.
And the actual fabrication
of these components
is done using the
same techniques that
are used in fabricating
microelectronics.
So that's the promise
of scalability
in terms of being able to
manufacture the components.
And then I think
there's a big challenge
in terms of assembly
and manufacture.
And I'll come back to give
you some details on that.
Now just this sort of
introduction, if you like,
to the photonic approach
to quantum computing.
I was at an ARO kick-off meeting
in Maryland last weekend.
I started the introduction
by telling them all
that they might have forgotten
what quantum computing is
since IARPA kicked us out of
their program several years
ago, which made them chuckle.
But I hope that they can also
see the promise of this now.
And obviously, they wouldn't
be funding us if they didn't.
So the encoding in the
polarization like this
is very appealing.
My colleague at
Bristol, John Rarity,
likes to say that the
lifetime of these qubits
is at least the
age of the Universe
because the microwave background
radiation is polarized.
Now I guess that's
a fair statement.
You'd need a fairly
big working space
to take full advantage of that.
But intrinsically low
noise is the point here.
So effectively a zero
temperature system
intrinsically, which
is very exciting.
We know a lot about the
polarization of light.
And everything we know about
the polarization of light,
for example, is
directly transferable
to the polarization
of single photons.
And, in fact, we can
manipulate the polarization
of photons in a similar way.
And it's often said that we
use the same off-the-shelf
components to do this.
And that's the advantage.
We do use the same
off-the-shelf components.
But I would argue
that you manipulate
a spin with the same
off-the-shelf components.
The difference here is that
you have a very, very powerful
means to calibrate these
things very precisely.
And that is simply to
send a bright laser
beam through your device with
exactly the same properties
except its intensity.
And then you can calibrate
your quantum systems,
your photonic systems,
very precisely.
There's plenty of other degrees
of freedom that you could use.
So here you can see how you
go from a polarization-encoded
qubit via a polarizing
beam splitter
to a path-encoded
qubit where you now
have a superposition of a photon
in this path or that path.
And, in fact, it's
this path encoding
that I'm going to
talk about pretty well
exclusively from here on in.
And, of course, there's
always a conservation
of trouble in life,
and in particular,
in quantum computing.
And the trouble comes
for photonic approaches
in the form of, how do
we answer this question?
So the flip side
of these photons
that don't interact
with their environment
is that they don't
interact with one another.
They don't interact with
anything very readily.
That's the challenge.
And this is the
proof by cartoon.
And at this point I'd
like to do a survey
and ask if you'd raise your hand
if you've done this experiment.
So that's four people.
That equals a record that was just set, I think, at Microsoft last week.
It's amazing how few people
have done this experiment
or are willing to admit that
they've done this experiment.
And I saw a few reluctant
hands raised there.
The reason is because it's
a pretty boring experiment.
You know what's going to happen.
All of those of you who
haven't done it already
know what's going to happen.
Nothing happens, right?
You should be a bit
surprised by that
if you think of it in terms
of beams of particles flying
at the speed of light
into one another.
But it does really tell
you the issue here.
In Australia where
I come from, this
is what we call a backyard
experiment because you go home
from the lab at night and
you do it in the backyard.
In the UK, no one
has a backyard.
And there's a lot of talk around
the state of science in the UK.
And I think this
is the real issue.
Of course, here in sunny
California you guys
have plenty of space and
backyards, and so on.
And that's the reason
for success, I'm sure.
So that's the proof by cartoon.
This is the sort of slightly
more sophisticated version
of it.
Imagine you wanted to realize
a controlled-NOT gate-- so
the quantum analog
of an XOR gate
where the logical operation on
the two qubits is shown there.
If we had a path-encoded
target qubit
where a photon in this top
rail represented a zero
and in this bottom
rail represented a one,
we could arrange an interferometer with a 50-50 beam splitter here and a 50-50 beam splitter here, where both of those devices transmit half the light that comes in and reflect half of it.
And so if we send a single
photon in this zero mode,
it goes into a superposition
of being in the top--
in the zero mode
and in the one mode.
It then interferes with
itself in such a way
that it comes out
here with certainty.
So there's constructive
interference for it
to come out here and
destructive interference for it
to come out here.
And similarly, a single photon
coming in this one input
here goes into the
minus superposition
inside the interferometer and
then interferes with itself
to come out the one state.
So that's nothing more
than classical interference
of waves described at
the single photon level.
And it's also nothing more than
just the identity operation.
It doesn't do anything so far.
It just maps a zero to a
zero and a one to a one.
And by linearity of
quantum mechanics,
the superposition of those two
stays the same superposition,
which is precisely
what you'd like
to happen if you're in the
top half of this table.
So if you have your control
similarly encoded, then very
loosely speaking,
you want nothing
to happen to your control.
And if the control's
in the zero state,
you want nothing to
happen to your target
as I've just described.
While if your control's
in the one state,
you want a bit flip operation.
And the way you'd imagine
doing that, potentially,
is to introduce a pi phase shift
or a half-wavelength change
in optical path in one arm
of this interferometer,
let's say the top arm,
conditional on there being
a single photon in
the control one mode.
And so that's job done.
That's how you realize
a controlled-NOT gate
on two photon qubits.
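[To make the identity-versus-bit-flip behaviour concrete, here is a minimal numpy sketch of that target interferometer; it assumes the real, Hadamard-like phase convention for the 50-50 splitters, which the talk does not specify:]

```python
import numpy as np

# 50-50 beam splitter on the two path modes, in the real (Hadamard-like)
# phase convention; other conventions differ only by fixed phases.
bs = (1 / np.sqrt(2)) * np.array([[1.0,  1.0],
                                  [1.0, -1.0]])

def mach_zehnder(pi_shift: bool) -> np.ndarray:
    """Two 50-50 splitters with an optional pi phase on the top (zero) arm."""
    phase = np.diag([np.exp(1j * np.pi), 1.0]) if pi_shift else np.eye(2)
    return bs @ phase @ bs

print(np.round(mach_zehnder(False), 3))  # identity: |0> -> |0>, |1> -> |1>
print(np.round(mach_zehnder(True), 3))   # (up to a sign) the bit flip X
```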
The problem, of course, is that
to realize this operation here,
you'd imagine using some
non-linear, optical material.
So a material whose
refractive index
depends on the intensity
of light in that material.
And what you'd be requiring is
that the intensity of this one
photon here would change
the refractive index such
that this other photon would
see an effective pi phase shift.
And it turns out that for
conventional, non-linear
materials, you'd need
something like 10
to the 9 meters of that material
for the intensity of one photon
to impart a pi phase
shift on the other photon.
Now I would argue that's
not completely impractical.
You might imagine a
spool of optical fiber
that was 10 to
the 9 meters long.
That would say something
about the clock
rate of your computer.
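[A back-of-the-envelope aside on that clock-rate remark -- my numbers, not the talk's, assuming a typical fiber group index of about 1.5:]

```python
# Transit time through ~1e9 m of nonlinear material (ignoring loss).
length_m = 1e9
c = 3e8            # vacuum speed of light, m/s
n_group = 1.5      # assumed group index, typical for silica fiber
print(length_m * n_group / c)   # ~5 seconds per two-qubit gate
```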
And unfortunately,
the transparency
of any such material
is not so high
that you'd have very much
probability of either
of those photons
coming out at the end.
And so this was the dead end
for optical quantum computing,
at least as I'm
proposing it here.
And the understanding was you'd need some sort of exotic system here, like an atom-cavity system, that would mediate an effective interaction between photons.
And at that stage, photonics was regarded as interesting in the context of communication and as an interesting proving ground for quantum technologies, but not an ultimately scalable approach -- until Knill, Laflamme, and Milburn came along and showed that the intuition that I've just sketched in some laborious detail on the previous slide is wrong.
And that, in fact,
you can implement
that precise
controlled-NOT operation
on two photonic qubits using
only a linear optical network.
So nothing fancy in there--
just mirrors and beam
splitters and so
on together with
auxiliary or ancillary
photons and photon detection.
So this cartoon represents
a controlled-NOT gate in which
you send in your control and
target photonic qubits together
with some other
photons that don't
encode any information going in.
And when you detect a single photon here and a single photon here, out will come your
control and target photons
with the appropriate
operation applied to them.
Now this was a surprise
because of the intuition
that I described.
And, in fact, it was a
surprise to the authors.
So Manny Knill and
Ray Laflamme set out
to prove the intuition
from the previous slide.
And as sometimes happens
rather beautifully in science,
they discovered something
far more interesting,
which is that that
intuition is wrong.
And the opposite is true.
You can do it.
And they went on
to present a recipe
for full-scale optical
quantum computing using
only the resources that
you see on the slide here--
so single photons,
linear optical networks,
and single photon detectors.
And I think at the
time-- so at that time,
I was working on
solid state approaches
to quantum computing, in
particular, phosphorus
and silicon approaches.
And I think it was pretty fair
to say that this was received
with, OK, well,
that's mathematically
proven to be possible.
But who really
believes that you're
going to make a computer
out of photons flying around
at the speed of light?
And actually, when you look at the details, the number of these auxiliary photons that you'd actually need was polynomial, and you get an exponential advantage.
So if you're a theoretical
computer scientist,
that's fine.
It's job done.
If you're the guy who's
got to build this stuff
and that polynomial is huge,
then that's pretty worrying.
And it was huge.
And I'd say that in the
intervening 15 years or so,
the situation has changed.
And in the first
phase, it changed
due to theoretical
developments, particularly
in the realm of
measurement-based or
cluster-state quantum
computing, which
reduced that overhead by
many orders of magnitude.
And then in more
recent times, I would
argue that all of the
components that you need
have been demonstrated in
isolation and in conjunction
with one another in small scale.
And there's a
promise of actually
being able to scale
these things up.
And so the summary
situation is if you
want to pursue a photonic
quantum computer,
there's going to
be a price to pay.
And you're going to need many more photons than you would need of other physical systems.
But my argument is
that's a small price
to pay relative to the
gain in scalability
and manufacturability,
which I'll talk about now.
I want to talk a bit more about this KLM scheme. And I should emphasize that I don't think anyone really expects to make a quantum computer pursuing this type of KLM, circuit-model approach.
But it's useful from a
pedagogical perspective.
The reason that people wouldn't pursue this approach is simply because the fault tolerance thresholds that we know of for topological cluster-state quantum computing are so good, and there's not an expectation that they'll be matched in the gate model.
So I'm just going
to briefly explain
to you what's going on in this
picture, what makes it work.
And then I'm going to
start moving fairly quickly
through the technical stuff.
So what's going on inside
there is quantum interference.
So you have a 50/50 beam splitter, as we encountered before, and we send a single photon into each input, as indicated here, such that they arrive at the beam splitter at the same time. And we ask ourselves, what's the probability for one photon to come out the top and one photon to come out the right?
And if we just naively looked
at that picture classically,
we would get a
probability of a half
because there's two
ways for it to happen.
And we just use our
usual probability theory.
But, of course, this is
an inherently quantum
mechanical system.
And we'd better apply the quantum approach to that, which involves summing indistinguishable probability amplitudes, which are complex numbers and can therefore interfere in ways that their classical probability theory counterparts cannot.
We take a mod squared to
get sensible probabilities
between zero and one at the end.
And what happens
here is precisely
that sort of
interference because we
get this phase
shift on reflection.
And these two amplitudes
completely cancel one another.
And so for me this is maybe
the simplest, uniquely quantum
mechanical phenomenon
to understand.
I have hopefully explained it to
you in just a couple of lines.
It's also about as
close as you can
get to doing quantum mechanics
with your hands in that you can
get into a lab and, for example,
you can change the arrival
time of this photon with
respect to this photon using
a micrometer to introduce
a delay, for example.
And what you'll see
is data like this.
This data that I'm showing you
is a quarter of a century old.
Nevertheless, graduate
students around the world
still celebrate when
they see data like this
because it's very
hard to generate.
And it's hard to
generate, typically,
because in writing
things down like this,
I'm implicitly saying that
these probability amplitudes are
indistinguishable from
one another in principle.
So no measurement allowed
by the laws of physics
could distinguish
those two amplitudes.
And as soon as that
statement is not true,
then this effect goes away.
So the reason that
I've labored the point
is that this is all
that's different in terms
of single photons
than everything
we know about bright light.
And it's this phenomenon that
drives a quantum computer
basically.
Now, you'd like to know
where the photons go.
And they go into a
coherent superposition
of both being in this path
and both being in this path.
And if this beam
splitter is not 50-50,
then this dip that we see
here at zero delay time
doesn't go all the way to zero.
But you'd still have a
probability for that to work.
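[Here is that amplitude bookkeeping in a couple of lines -- a sketch of mine using one standard beam splitter phase convention, which the talk does not pin down: the "both transmitted" and "both reflected" amplitudes add to T - R, so the coincidence probability is (T - R)^2, exactly zero for a 50-50 splitter and non-zero otherwise:]

```python
def coincidence_probability(R: float) -> float:
    """Probability that one photon exits each output port when one photon
    enters each input of a beam splitter of reflectivity R at the same time.
    Two indistinguishable amplitudes add: both transmitted (+T) and both
    reflected (-R, the sign coming from the reflection phases)."""
    T = 1.0 - R
    amplitude = T - R        # sum of the two indistinguishable amplitudes
    return amplitude ** 2    # mod squared gives the probability

print(coincidence_probability(0.5))    # 0.0   -- the Hong-Ou-Mandel dip
print(coincidence_probability(1 / 3))  # ~0.11 -- a non-50-50 splitter
# Distinguishable (classical) photons add probabilities instead:
# T**2 + R**2 = 0.5 for a 50-50 splitter, the level far from zero delay.
```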
So, for example, here I have
a one-third beam splitter
inside what is a linear
optical controlled-NOT gate.
And so, again, we've got our
control and target modes.
And, again, we've got
that interferometer
that we saw before.
And now quantum interference, as I've just described, between a photon in the control one mode and the target photon imparts that pi phase shift that we required.
You can see directly that this
gate doesn't work all the time.
A photon could come
out this top mode
or the bottom mode-- could
get two here or two here.
It works precisely when one
photon comes out in the control
and one photon comes
out in the target.
And this is how my colleague Geoff Pryde and I first
realized such a gate
in Andrew White's lab
using some tricks on making
interferometers stable
and so forth.
You'll note that
in this circuit,
there's a control and
target photon going in.
And I just told
you that it works
when I detect a photon here
and I detect a photon here.
That's not very useful
inside of a computer
because detection
of those photons
typically involves
their destruction.
And so it's hard to embed
that into a circuit.
And so a gate just like
the original KLM gate
was implemented in
Shigeki Takeuchi's lab
in a collaboration that we
had going for many years
where -- you probably can't do the translation directly, but it's the control and target photons coming in.
And these auxiliary photons
which you then detect.
And then your control
and target photons
are then free
propagating afterwards.
Again, this is just not
necessarily the way you do it.
The point I'd like to
make here is simply
that if you saw that circuit
in the lab or these ones, then
they would all look
pretty well the same.
And they would look like a
forest of optical elements--
mirrors, beam
splitters, and so on--
bolted to a one ton
vibration isolation table.
And this gate here might
consume several square feet
of table space.
So if that's your
transistor, you've
got a very big computer
at the end of the day.
And, in fact, if you
wanted to make a sensor,
deployment is
challenging if it's
bolted to a one-ton
vibration isolation table.
And actually it takes
really clever people
like Ryo Okamoto and Tomohisa
Nagata 6 to 18 months
to get these things working.
And graduate students don't keep working long enough that you could imagine making circuits many times more complicated than this.
And that's what
we've been working
on at Bristol over the
last eight years or so.
The first efforts were in
fiber, which I won't talk about.
I think they're important
in terms of communications
and making those same
logic gates all in fiber.
But I think in terms
of ultimate scaling,
you replace a forest
of optical elements
with a bowl of spaghetti
of optical fiber.
So the approach that
we have been pursuing
is this cartoon here of photons
in wave guides on a chip.
And I very much enjoyed my good
friend and colleague Thaddeus
Ladd who was then at
Stanford interrupting me
when I first showed this
picture at a workshop in Tokyo
to say, Jeremy, those wave
guides there are very lossy.
There's light scattering out
all over the place there.
I am a little
worried about that.
And I said, thank you, Thaddeus, for that question. And here for you is the low-loss version of the artist's impression of the wave guides, which he appreciated.
The idea is a simple one.
And that is to
basically fabricate
square, optical fibers on chips.
So in this case, in silica on silicon, where you have a slightly higher refractive index core of silica with a slightly lower index cladding there. And you guide light by total internal reflection, just as in a fiber.
And if you make the
dimensions right,
you can support just a
single transverse mode there.
Then you make your
beam splitters
by bringing two of
these wave guides
into proximity with one another such that the evanescent field from each wave guide couples to the other.
And by controlling the length
of that coupling region,
you can control what the
equivalent reflectivity is.
You could also do it by the spacing between them.
We use the length, which is
a more reliable way to do it.
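[The standard coupled-mode picture behind that statement, as a small sketch of mine rather than a formula from the talk: for a lossless, phase-matched directional coupler the power transferred goes as sin^2 of the coupling strength times the interaction length, so the length of the coupling region sets the equivalent beam splitter reflectivity. The coupling strength below is a hypothetical value for illustration:]

```python
import numpy as np

def coupler_reflectivity(kappa_per_mm: float, length_mm: float) -> float:
    """Fraction of power transferred between two identical, evanescently
    coupled wave guides (coupled-mode theory, lossless and phase-matched)."""
    return np.sin(kappa_per_mm * length_mm) ** 2

kappa = np.pi / 4   # hypothetical coupling strength, radians per mm
print(coupler_reflectivity(kappa, 1.0))   # 0.5 -> a 50-50 splitter
print(coupler_reflectivity(kappa, 2.0))   # 1.0 -> complete cross-over
```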
So using this approach
we've implemented
that exact, same CNOT gate
that I showed you before.
So control and
target modes-- here's
that interferometer formed
by the two 50/50 beam
splitters-- quantum
interference at the central one.
And the important
point of this was
that we saw that
quantum interference
that I described to you
at the start with a dip
where the visibility
of that dip was
100% to within very
small error bars.
And that's important
because that
would be a fundamental
limit to the performance
of the operation
of these devices.
We combined several of these gates into a compiled version of Shor's factoring algorithm for factoring 15.
And for those of you who don't know, you should read compiled not as everyone else would understand compiled, but as a euphemism for already knowing the answer when you construct the circuit for the algorithm. And I don't know how this abuse of language propagated. But that's what it is. So put inverted commas around compiled whenever you hear about a compiled Shor's algorithm.
We've been able to implement
one qubit operations
with very high fidelity
using these resistive phase
shifters on the chip
where you locally
heat the wave guide
underneath and thereby change
its refractive index.
We've implemented one qubit operations with 99.998% fidelity.
I like to put that
eight on the end there.
It's not quite five nines.
But it's pretty close.
And this technology
is great in every way.
It's very robust, reliable,
repeatable, et cetera.
It's slow.
And that's a challenge
that I'll come back to.
We've also explored, just
as an aside, models outside
of conventional
quantum computing.
So here's a device where we
implement a quantum walk.
And in this device,
you have 21 wave guides
in this central region here.
Instead of just two wave guides
that are evanescently coupled,
you have 21 wave guides
that are evanescently
coupled with one another.
And you can implement some
very interesting quantum
walks there.
And it's possible that you
might be able to directly do
some simulations of important
physical systems using that.
Another application of that, which I'll just mention in brief because maybe you've heard of this and you're interested, is this boson sampling problem, which has got quite a lot of interest recently.
And the idea is that if you take a unitary operation on modes, on optical modes -- let's say you have n modes -- and you send root n photons, of order root n photons, into those modes, then calculating where those photons come out, what the probability distribution is for those photons, is intractable on a conventional computer.
And I guess the reason that people are pretty excited by this is that once you get to of order 100 modes and 10 photons, that's about the limit of what my laptop could calculate, and you don't have to go very much further before it's just simply not calculable.
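[The source of the intractability is that each output probability involves the permanent of a sub-matrix of the unitary, and the only known ways to compute a permanent blow up quickly with the photon number. A small illustrative sketch of mine, brute force, for collision-free inputs and outputs:]

```python
import numpy as np
from itertools import permutations

def permanent(a: np.ndarray) -> complex:
    """Brute-force permanent: like the determinant but without the signs,
    so a sum over all n! permutations of the rows against the columns."""
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

def output_probability(u: np.ndarray, in_modes, out_modes) -> float:
    """Probability of detecting one photon in each of `out_modes` when one
    photon is launched into each of `in_modes` of the linear network `u`."""
    sub = u[np.ix_(out_modes, in_modes)]   # rows/columns picked by the modes
    return abs(permanent(sub)) ** 2

# Toy example: a random 6-mode network with photons in modes 0, 1, 2.
u, _ = np.linalg.qr(np.random.randn(6, 6) + 1j * np.random.randn(6, 6))
print(output_probability(u, [0, 1, 2], [1, 3, 5]))
```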
I won't talk about applications
or lack thereof of that thing.
I think it's pretty clearly
an interesting thing
to do to say I've
got a device that
will outperform a
classical computer.
That's not so far away.
Of course, once you cross that threshold where you can't use a conventional computer to tell you what the output should be, how on earth do you verify the output?
And our approach here is to reprogram -- let's say this unitary is reprogrammable, which I'll talk about in a little bit -- reprogram it to implement the same unitary as that quantum walk would implement. And for that it's straightforwardly calculable what the output should be.
And, in fact, what you see
in those sorts of quantum
walks is if you
have three photons,
you see this sort of clouding.
So this is where the photons
come out for 21 modes.
So on each axis,
each photon, then
you see this clouding
behavior here in contrast
to the classical behavior
where you see no such thing.
And I can't really display
four or five photon data
very easily.
But if you look at the
five photon data here,
then you can see a
clear contrast here.
So that's a sort
of verification--
an experimental verification
technique, I think,
which is quite interesting.
But what I'd really like
to talk to you about
is the problems and how
we've addressed them.
So this is the chip that
was used for the factoring
algorithm.
And you can see by inspection
a few problems here.
One is scaling.
So it factors 15.
And I just admitted
that it doesn't really
factor 15 at all.
So how would we scale that
up to do something useful?
You can see it doesn't have
any knobs or wires on it.
So it's not reprogrammable to do anything other than what it was fabricated to do. And actually, it's still too big. Right? If that's a couple of transistors, then you've still got a very, very huge computer at the end of it.
So, just briefly, how we've
addressed those things.
I want to just quickly
highlight that they're
relevant in these
other scenarios.
So I mentioned
this at the start.
This is a system that
we've-- this one here--
that we've now prototyped and
patented jointly with Nokia.
And to do that polarization control, it uses exactly the same wave guide architectures that I've described.
Of course, there the challenge
is to make it for a few cents
and fit into a very small
bit of existing chip ideally.
But the challenges are
the same in the sense
that you need to make those
components very small and very
highly functional.
And, again, I showed
you this measuring
of blood protein concentration
with entangled photons.
And, again, a similar
challenge if you
want to deploy those
sensors where you've
got these microfluidic
channels and wave guides,
you need to miniaturize
them in a similar way.
And, in fact, for anyone who's
still interested in science--
and I certainly am--
there are reasons
for pursuing these things to
explore the very foundations
of quantum mechanics,
for example.
And this is an
experiment where I
think we've shed some light
on the wave particle duality
conundrum that underpins all
of quantum physics and quantum
technologies.
Anyway, but this
is a technical talk
about addressing these issues.
And we've addressed them
in the following way.
So scaling-- I'm a simple-minded
experimental physicist
by training.
So I don't have grand
ambitions of making
exponential improvements
in anything.
But if I can make enough
factors of two improvement,
I might be able to turn the
impractical into the practical.
And this is an example
of that I'd say.
So here's how you
might implement
controlled unitary operations.
And as you're no doubt pretty
well aware if you're here
in this field, all a
quantum computer really
does is to control
unitary operations,
a lot of very big, controlled
unitary operations.
And so if we could do
those more efficiently
that would make
life a lot easier.
The idea is trivial almost.
Here's a control qubit.
You could imagine
as many as you like.
And here are some target qubits.
And you simply take your target qubits and, based on whether that control is in the 0 or the 1, you switch them into the red mode, which bypasses the unitary, or the blue mode, which experiences the unitary.
So I hope you can just see
immediately there that this
does indeed do the controlled
unitary operation where
that unitary could be a
black box that I gave you
where I didn't even tell
you what the unitary was.
And that's quite a saving
over the usual decompositions
that you would do to realize
their controlled version
of a particular unitary.
And it's applicable
to any system
where you have access
to these four levels
here in a controllable way.
And it's precisely that that circumvents a no-go theorem that you may be familiar with, which suggests that you couldn't do this.
And you can't do this if you
just stay in the qubit world.
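[At the matrix level, the thing being implemented is just the block structure below -- a toy sketch of mine, not the photonic circuit: identity when the control is 0, the black-box U when it is 1, with no decomposition of U ever needed. The rotation angle is a made-up example:]

```python
import numpy as np

def add_control(u: np.ndarray) -> np.ndarray:
    """Controlled-U on (control qubit) x (target space), as a block matrix:
    identity block for control |0>, the black-box U for control |1>."""
    d = u.shape[0]
    cu = np.eye(2 * d, dtype=complex)
    cu[d:, d:] = u
    return cu

theta = 0.7   # hypothetical angle, just for the demo
u = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.round(add_control(u), 3))
```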
We've used that in a Shor's factoring algorithm, factoring 21. The bigger number is not the news. The news is that we've done a sequence of logic gates such that we've got a probability distribution at the end that's non-uniform, which means it's distinct from noise.
So that's interesting.
We've used it in what
I think is a more
exciting simple
application, and that
is to implement the phase estimation algorithm which underpins Shor's algorithm and a lot of other important algorithms.
And in this case, we genuinely
didn't know the phase
before we ran the algorithm.
So it's a small-scale algorithm.
But it really
calculates something.
And I don't really have time to
talk about the details of this,
but we've also performed--
we developed a new algorithm
for quantum simulation,
for quantum chemistry,
together with Alán Aspuru-Guzik's group at Harvard.
And we've implemented
that on a chip.
And the headline, I
guess, is that instead
of doing this sort
of Trotterization
that you may be familiar with
where you have a very, very
long coherent operation with
many, many gate sequences,
the task is now simply to
prepare states, calculate
expectation values of Pauli
operators on those qubits.
Your Hamiltonian is
described as a sum
of simple products of
those Pauli operators.
And you can
therefore efficiently
calculate from those
expectation values
the energy of that state.
And then you can use a
classical feedback loop
to then simply variationally
modify your input state
and find the ground state.
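[A minimal classical mock-up of that loop -- a sketch of the idea only, not the chip experiment, with a made-up single-qubit Hamiltonian: write H as a sum of Pauli terms, estimate each term's expectation value for a parametrized trial state, and let a classical optimizer vary the parameter to push the energy down:]

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Pauli operators
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# Hypothetical Hamiltonian written as a sum of Pauli terms
terms = [(0.5, I), (-0.8, Z), (-0.3, X)]

def trial_state(theta: float) -> np.ndarray:
    """Ansatz |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta: float) -> float:
    """Sum of coefficient * <psi|P|psi>; on the chip each Pauli term would
    be a separate measurement setting rather than a matrix product."""
    psi = trial_state(theta)
    return sum(c * float(psi @ p @ psi) for c, p in terms)

result = minimize_scalar(energy, bounds=(0.0, np.pi), method="bounded")
print(result.x, result.fun)   # variational estimate of the ground state energy
```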
I think that's pretty exciting.
The question marks are over
whether the measurement side
of things can be
made fault tolerant.
So the distinction here is that the output is expectation values of qubits, which is not the usual sort of digital output.
Anyway, on with this story.
So obviously reconfigurability -- well, you need lots of wires. There's a bunch of wires going to a chip. Those wires connect to these eight phase shifters here. So you can see that I've now got these Mach-Zehnder interferometers.
But I've got a phase shift
in the middle and afterwards.
That allows me to
send a single photon
in this input, for
example, and prepare
any one qubit pure state at
the output in principle here.
Similarly here, we've got
the reverse over here.
And you've got one of
these controlled-NOT gates
in the middle.
And so by setting those phase
shifters to 1,000 random values
and then looking at the
probability distributions
at the output, you can see
how robust and reliable
this technology is because the
fidelity is very nicely peaked
right near one.
You can generate Bell states.
You can generate a sort of
continuum of entangled states
and perform a continuum of
Bell-state type measurements.
And you can write psi inside the Bloch sphere here if you trace over one of the qubits. So you can prepare an arbitrary one-qubit mixed state.
And at this point we should have a raging debate over whether psi is really the appropriate symbol to draw inside the Bloch sphere. I think I'm willing to accept that rho would be far more appropriate. But I would also argue that psi is a much more beautiful character to draw in there. If you'd like to draw rho inside the Bloch sphere, then please go to this web address.
Log into the device
that I've shown you.
Control those phase
shifters directly yourself
and draw the symbol rho with it.
In all seriousness, this
is available to anyone
with web access to log in and
start programming and using
this small scale device.
And if you have any ideas for things to do with it, including serious ones, and you don't like the GUI that has you dialing up the phase shifts with your mouse or whatever, then let us know and you can plug directly into the Python script that runs it.
It's sort of targeted at
school children and so forth.
Miniaturization--
there's a promising one
that's architectural.
And that is to replace those
two by two beam splitters with n
by n beam splitters
directly using
a so-called multi-mode
interference device.
And n by n beam splitters are
a very important operation.
And doing them directly
instead of composing them
into a whole lot of two
by two beam splitters
is a great real estate
saving on a chip.
An even greater real
estate saving on a chip
is to go from these silica
devices to silicon devices.
So in all of these silica
devices that I've shown you,
the size of the
device is largely
dictated by the
refractive index contrast
between the core
and the cladding,
which determines how tightly the
light is confined in those wave
guides.
And that, in turn,
tells you how fast
you can go around a corner.
So just as if you
take an optical fiber
and you bend it enough,
the light comes out.
So too if you go around a
corner too fast with these wave
guides, the light
will spill out.
And the minimum bend
radius in all the devices
that I've shown you so far
is of order 10 millimeters.
And that's why we have these
relatively large chips.
In these silicon devices
here, that minimum bend radius
is one micron.
And so the component
density increase
is then a million-fold in
going from these silica
devices to silicon devices which
is a much bigger step, in fact,
than going from the benchtop
to the first chip devices.
Silicon is appealing for
all sorts of reasons.
This is just an
aside that I like.
So here's your
silicon wave guide.
Your single photons -- or indeed bright light -- are propagating along here.
Here's some
micro-ring resonators
that are coupled
to that wave guide
with a Bragg grating wrapped
around the inside of it
so that light is then
emitted vertically.
And this could be a
solution to chip stacking
with photonic interconnects
or photonic vias
between them, whether that's in
the classical or quantum world.
And I think that's a really
important architectural issue,
is how we get light into
an orthogonal direction.
But what I want to spend a
few minutes talking about now
is the rest of the story.
So all I have talked about really
so far is light-- photons
flying around in the wave
guides on a chip.
But we, of course, have
to get photons in and out
of there typically using fibers.
We have to generate those
photons typically using
a process called spontaneous parametric down-conversion
whereby we send a bright laser
beam into a non-linear crystal
such that with very
low probability one
of those photons
in the laser beam
spontaneously splits
into two daughter photons
conserving momentum and energy.
Now that's a very nice
approximation to two photons.
And if you detect
one of the photons,
you know that the other
one's there with certainty
because they are born in pairs.
But it's totally
useless in terms
of making a scalable quantum
computer because it's
spontaneous.
And so you have no control
over when that event happens
or whether that
event happens or not.
And I'll come back to explaining
a solution to that in a moment.
We, of course, detect the photons using either semiconductor or superconducting single photon detectors, and do some sort of pre-processing.
And then, ideally, we feed back
what we learn onto the system
itself.
And so needless to say, in the work that I've shown you so far, the whole quantum optics lab hasn't shrunk to the scale of a chip, because all of this surrounding paraphernalia still fills it.
And what we'd like to
do is head towards this,
where we have all of those
components integrated
so those non-linear sources,
integrated single photon
detectors, fast
routers, and so forth.
And here's a particular
example of what
we'd like to do with that.
And that is a mechanism
or an architecture
for making those
non-deterministic
and therefore
useless single photon
sources into deterministic
single photon sources that
will do the job for
full-scale quantum computing.
The idea is a very simple one. And that is, let's say I have this source here. I send my bright laser beam in. I send a pulse in. And there's a low probability, let's say 10%, of producing a pair of photons in that given laser pulse or in any other laser pulse. If I have n of those sources all pumped with laser pulses in parallel, then the probability of none of them producing a pair of photons is negligible for any decent n. And then my task is simply to take one of the 7, or 12, or whatever it is per pulse, of the sources that produced a pair of photons and detect one of the pair, which tells me with certainty that the other one of the pair is there.
And then based on that,
use this n by one switch
to switch that photon
into this output.
And so you've turned then,
at least in cartoon form,
a 1 or 10 gigahertz
train of laser
pulses into a 1 or 10 gigahertz
train of single photons
with very high probability.
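[Rough numbers for that, using the 10% figure from the talk and assuming the sources fire independently: the chance that at least one of n parallel sources produces a pair in a given pulse is 1 - (1 - p)^n, which climbs towards one very quickly:]

```python
# Probability that at least one of n heralded sources fires in a given pulse,
# assuming independent sources each firing with probability p per pulse.
p = 0.1
for n in (1, 10, 50, 100):
    print(n, 1 - (1 - p) ** n)
# n=10 -> ~0.65, n=50 -> ~0.995, n=100 -> ~0.99997
```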
Now this looks like a brute
force engineering solution
to the problem.
And I totally agree that it is.
I also think it's a
very, very promising one.
And it's promising because
we have very nice control
over the photons.
We can generate beautiful
photons in this process.
Dispersion engineering and
phase matching engineering
allows us to produce
very nice photons that
interfere with one
another and are
suitable for these purposes.
And then, furthermore,
all of those things
are sort of mass
manufacturable and scalable.
That's the argument.
That's not to say that
there aren't plenty
of engineering
challenges in there.
But once you've got a
handful of these things,
then scaling up to hundreds
and thousands and so on
should be relatively
straightforward.
The final point is
that once you've
solved all the problems
of this, you've
in fact solved all the
problems for everything
that you need for a
full-scale quantum computer
because that contains all of
the elements that you need.
For the aficionados
in the audience,
this is your menu
where you choose
one of these approaches
for the sources.
You choose one of these
approaches for the detectors,
and so forth.
I won't go into that
in any sort of detail.
But the point is that there
are solutions out there
that have been demonstrated.
I'll talk about some of
the things that we've done
and some of the things that
other people have done.
Here's a ring resonator source
of photons in silicon wave
guides.
We've looked at
lithium tantalate
and also chalcogenide.
Plenty of people
around the world
have done all sorts
of demonstrations
of generating photons in
non-linear wave guides on chip.
The fast switches
for that n by one
router that I showed
you before-- well, I've
shown you these thermal phase
shifters which are too slow.
And in telecommunications, lithium niobate modulators that operate at 40 gigahertz have been in use for a long time.
And we've used that
same sort of technology
to rapidly manipulate path and
polarization of single photons.
In terms of detectors, we really
like these superconducting
detectors which
operate at around 3K
in a closed-cycle system.
And that's the only thing
that's unfriendly about them.
They have very low dark
counts, very low timing jitter,
and very high efficiency.
This is just a step
towards that vision
of multiplexing
where now I've got
two sources on a chip
with one another here.
And we see quantum interference
between photons generated
in those two sources in these
beams that are on the chip,
again with unit
visibility or unit
fidelity to within
very small error bars.
And I think that's a very
key step towards this getting
hundreds of these sources
running in parallel.
This is some work by other groups showing those superconducting detectors grown directly on gallium arsenide and silicon wave guides.
And the key point is
very high efficiency
because you can mode match
directly to those detectors.
This is some work from
Caltech showing very long,
low-loss delay
lines which could be
a very useful addition
to the toolbox.
You'll note that in the
pictures that I've shown you,
you've got a bright
laser beam coming
in one side and single
photon detectors sitting
at the other side.
So you need a lot of attenuation of that laser beam -- 100 dB of attenuation -- and, again, there are promising results out there for that.
And so back to this cartoon
that I started with.
All of these elements have now been demonstrated to the sort of performance levels that are required, in isolation and in some combinations.
And now the task is to maintain
those performance levels
as you integrate the whole
thing and manufacture
the whole thing.
To give you a flavor of how
the computation proceeds,
here's the physical
photons in wave guides.
And what you would
do is you'd attempt
to fuse them into
a cluster state.
And because you don't have these
deterministic interactions,
that fusion doesn't work
with unit probability.
And so you end up
with a giant cluster
state that has a
lot of holes in it.
But you're above a
percolation threshold,
which is a phase transition.
So you know with certainty
that in some given volume
you've got an entangled string
of qubits in each direction.
And then you essentially
renormalize and say, all right,
that block is my qubit.
And then I renormalize
into a Raussendorf lattice.
And from there
forwards, it's exactly
the same form of fault-tolerant,
topological, cluster-state
quantum computing as anyone
who's pursuing a gate model
is looking at.
So the price that
you pay photonically
is that you've got this
additional step here.
And then thereafter
it's the same.
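[A toy illustration of why being above a percolation threshold is the thing that matters -- my own sketch on a plain 2D grid, not the actual Raussendorf-lattice analysis: treat each attempted fusion as a bond that succeeds with probability p and ask how often a connected path spans a finite block. Below the threshold spanning is rare; above it, it becomes essentially certain, which is what lets you renormalize a block into a logical qubit:]

```python
import random
from collections import deque

def spans(p: float, n: int = 20) -> bool:
    """One sample of 2D bond percolation on an n x n grid: each bond is
    'fused' with probability p; return True if fused bonds connect the
    left column to the right column."""
    right = [[random.random() < p for _ in range(n)] for _ in range(n)]
    down = [[random.random() < p for _ in range(n)] for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    queue = deque((r, 0) for r in range(n))      # start from the left column
    for r, _ in queue:
        seen[r][0] = True
    while queue:
        r, c = queue.popleft()
        if c == n - 1:
            return True
        neighbours = ((r, c + 1, right[r][c]),
                      (r, c - 1, c > 0 and right[r][c - 1]),
                      (r + 1, c, down[r][c]),
                      (r - 1, c, r > 0 and down[r - 1][c]))
        for nr, nc, bonded in neighbours:
            if bonded and 0 <= nr < n and 0 <= nc < n and not seen[nr][nc]:
                seen[nr][nc] = True
                queue.append((nr, nc))
    return False

for p in (0.3, 0.5, 0.7):   # the 2D bond percolation threshold is p = 0.5
    print(p, sum(spans(p) for _ in range(200)) / 200)
```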
The final points
to make are-- well,
I'll firstly show
you how you might
imagine doing that
in a small cartoon.
So here is a bunch of multiplex
sources-- ring resonators.
You see four of them.
You should imagine 400 of them.
And here's eight of them.
Only four of them are depicted.
But you can see where
the other four would go.
You run that into a linear
optical network here.
And you take your
unentangled single photons
and you entangle them into
the star cluster state here.
The success or otherwise
of that circuit
is given to you by
the detection pattern
that you get at the output here.
And then you multiplex
this whole thing
until you're generating these
with near unit probability.
And then everything's
ballistic thereafter.
You simply attempt to fuse those
things into the cluster state.
You do that imperfectly.
But you're above a percolation
threshold through that cluster.
And then you
proceed as per here.
And you get these, the usual
very nice, fault tolerance
thresholds.
Probably there's a
couple of things to say.
I'll start with the
most important thing.
The physical depth-- so
you imagine this computer
then looks like a bank of single
photon sources on one side
and a bank of single photon
detectors on the other.
And in between
them is a big slice
of this cluster state
with the dimensions
in the other
directions determined
by the physical size of
your computer, the length
of the computation that you
do determined by how long you
run this computation for.
And the point is that
the physical depth
between the sources and
detectors, the number
of optical elements
that they go through,
is fixed as you scale
up your computer.
So the bigger you
make your computer,
the depth of that
thing remains fixed.
Now that's very exciting
for me because if I told you
that that depth
grew even linearly,
you'd probably be a bit worried
because the bigger you make
your computer, the more
loss you're going to have,
the more elements that
you have to go through.
And that's simply a function
of the local operations
that you need to
generate cluster states.
So that's very appealing.
The other thing is that the
error model for this system
is not yet known.
But there's good
reasons to suspect
that it might be quite
a bit more benign
than the normal error
models that we are used to.
And that is because of this
effective zero temperature
that I described before.
There's no intrinsic
coupling to a thermal bath.
So maybe those Pauli errors that
we worry so much about usually
are not applicable.
There's a prospect to turn
a lot of errors into loss.
Why would you want to do that?
Well, it turns out that
loss is the best thing that
can go wrong with
your quantum computer.
You can lose 50% of your
qubits and you can still
do full-scale quantum
computing as long
as everything else
works perfectly.
So if you can convert
everything into loss,
then you have these incredibly
favorable thresholds.
And how might you do that?
Well, let's say you have a
polarization encoded qubit
and it's depolarized
somehow, well, you
put it through a polarizer.
You've turned that
depolarization into loss.
Let's say you've got some
temporal jitter in your photon.
Well, you put it through
a spectral filter,
and you've turned that
timing jitter into loss.
Or if you have fast
enough detectors,
you can indeed do that
filtering temporally.
So there's great
promise, I think,
for turning these
things into loss.
The only thing that
I can think of that
can't be turned into loss in
that way, which proves nothing
I should add-- that just might
say something about my brain
rather than about reality-- is
dark counts on those detectors.
Now you might believe that dark counts are a more benign error that you could deal with more directly and more efficiently than other sorts of errors.
And, in fact, it might be that you could get dark counts to the 10 to the minus 20 level, where you wouldn't encode against them at all.
And I think that's
everything that I
want to say about that approach.
I think there's one more
thing, but it's escaped me.
Just a minute.
I've probably said
enough in any case.
And then, of course, this is the vision -- stolen directly from IBM -- of conventional computer chips and, in the future, where you replace copper with photonic wires.
And so with a simple bit of
Photoshop and relabeling,
you just then have your
photonic quantum computer
and all of the associated
microelectronics
sitting right alongside.
And that's deliberately
a little bit facetious,
but not completely facetious.
And so I'll just
finish by saying
that this is a way of
seeing the photonic quantum
computing stack.
You've got all these
components down here.
I'd say that there's been
plenty of work done down here,
plenty of work done up here.
And that the task
over the coming years
is really to worry about things
in this middle region in terms
of architectures.
And there's plenty of
interesting and exciting stuff
to be done.
All right.
Thank you.
[APPLAUSE]
AUDIENCE: So I would like to ask you about two different things. One, regarding the detection efficiency -- what the practical limitations of the existing detectors are, especially if you need to do photon number resolution. Also, the other question is regarding the generation with single photon sources. I would like to see what the scale up is with that kind of brute force method that you mentioned, in the sense of generating a large number of single photons at the same time, and making sure that you have enough time resolution between generation from those single photon sources.
JEREMY O'BRIEN: Sure.
AUDIENCE: Thanks.
JEREMY O'BRIEN: OK,
so on the detectors
there's reports
of 95% efficiency
with these
superconductors and there
may be even better already.
And there's
expectation that that
will get significantly better.
That's already well
within the threshold
for this loss-only regime
that you'd like to work in.
I'm reasonably convinced
that our task is
to do sufficiently
compelling demonstrations
with those
super-conducting detectors
that a semiconductor company
is sufficiently inspired
to then make semiconductor
detectors work
to the performance
levels required -- i.e., put the tens or hundreds of millions into the problem that would be needed.
The reason they're
not doing that
now is that there's no market
for single photon detectors,
right?
Quantum computing doesn't
even rate a mention.
There's a lot of biologists
doing microscopy.
There is not a big
market for these things.
So I think the
detector thing, again,
there's plenty of important
and interesting engineering
to be done.
But it's definitely a
solvable problem, I'd say.
And then the second question
was about the sources,
the multiplex sources.
And this brute force approach,
I guess, the promise of it
is all about
manufacturability in the sense
that if you're making
these wave guides using
a conventional process, if
not exactly the same process
that people in the
semiconductor industry
are using to make their
optical interconnects
and make that photonic layer
that I showed you right
at the end, then once you've got
a few of these things working
together, then scaling up should
be reasonably straightforward.
I've just shown you some sort
of hot-off-the-press results
of two of those sources
interfering with one another.
And I'm pretty optimistic.
In fact, I've
advocated to the guys
back in Bristol, why don't
you just quickly make
a circuit where
you have a hundred
of these sources in parallel
but you only wire up
three of them-- the first
two and the last one?
And then show me interference
between all three
of those sources.
And I'll be willing to believe that, with high probability, all of the sources in between will work, because I've used this reliable manufacturing process to generate them.
They don't like the idea.
I think they want to wire
up all hundred of them.
Does that answer your question?
AUDIENCE: I didn't
quite get the point,
you said on these
cluster-based [INAUDIBLE],
the circuit depth
doesn't grow with the--?
JEREMY: So the physical
depth doesn't grow.
And the actual circuit
depth-- so the computation
that you are performing--
is just how long
you run things for-- how long
you run the computation for.
On one side, you're generating photons at 10 or 100 gigahertz, let's say. And then you're entangling them into the slice of the cluster state. And for the actual conceptual cluster state that you're doing the computation on, it's the physical dimensions this way that determine that. And then this dimension is how long you run the computation for.
So it's a physical depth.
So there are a
number of elements
that you have to go through.
And the point is that
because that's constant,
that's just a clear,
fixed threshold
that you have to
get to no matter
how big you want to build
your computer in the other two
dimensions, which determines
the width of your computation.
So you'd be a bit worried if I
said the physical depth grows
quadratically with the
size of the computation.
Then suddenly, as
the computer gets
bigger, the probability
of actually exceeding
a threshold decreases.
Is that clear?
AUDIENCE: Well, I don't quite understand why the physical depth doesn't grow.
JEREMY: Why the
physical depth is fixed?
AUDIENCE: Yeah.
JEREMY: It's simply
because to generate
that cluster state just requires
these local interactions.
So maybe this is a
slightly clearer picture
or maybe it's not.
And here you imagine
you're entangling
more and more of these photons
on the right-hand side.
It looks to be backwards,
this picture, actually.
So, yeah, so you're entangling
more and more photons
onto the right-hand side.
And then you're measuring
them out of the cluster
on the left-hand side.
So those ones-- you
should just ignore
the gray ones that
are measured out.
And so to actually generate
that cluster state, as you make
the cluster state bigger
in two dimensions,
doesn't require any greater
depth in terms of operation.
And then once you've got the
cluster state, all you're doing
is doing measurements
on a sheet at a time
and feeding the output
onto the next one.
AUDIENCE: I see.
So you basically are
measuring the cluster state
as it is being generated.
JEREMY: Exactly.
AUDIENCE: Streaming the cluster.
JEREMY: Exactly,
yeah, so you never
have to see the whole
cluster state in existence.
AUDIENCE: Right, you're just
streaming for the clusters.
JEREMY: Yeah.
AUDIENCE: Thank you.
AUDIENCE: [INAUDIBLE].
JEREMY: So I guess none of
the components scare me.
What's most challenging, where
there's most work to be done,
is in that multiplex
source as I say.
And the state of the art is interfering
a couple of sources.
But the point is that doing that
in-- using scalable technology
to make them holds promise.
So that worries me a bit.
But I think the
thing that really
worries me is the
kind of assembly
and manufacture off the wafer.
So likely this
billion qubit device
doesn't fit on a single wafer.
My back-of-the-envelope
calculation suggests that you
might need something-- that this
thing here or this thing might
be this kind of dimension
in cross section.
So you've got a bank of single photon sources here running through to detectors at the other end.
So making a photonic
brick, if you like,
that doesn't scare me.
It excites me.
But it's not been done before.
And it's beyond what the
semiconductor industry might
do.
And assembling that thing
optically, electrically,
and mechanically represents
outstanding challenges.
But I guess the point
is that all the bits
and pieces themselves we can
stamp out on silicon wafers.
And I think it would be exciting
to understand just what you
could do on a single wafer.
Like, how big a scale
could you get to?
And could you take some
problem and really perform
that on there?
So there's that billion qubit device, let's say a 1,024-bit factoring machine -- so basically a reprogrammable, digital computer. But I think there are great prospects
for smaller-scale devices that
attack particular problems very
efficiently, and
where there's a sort
of a fit of the algorithm to
the hardware, if you like,
the hardware of photonics.
And I think this boson
sampling, whilst it isn't clear
that there is or ever will
be an application for it,
it's a very nice example
of fit to the hardware
because that thing does
work deterministically.
You've got
non-interacting bosons.
You just launch
photons into this thing
and ask the photons to do what
they naturally want to do.
Do what you do best.
Just fire through that thing.
And you've got an
example of things.
Now I think the whole circuit model of quantum computing is a bit the other way around, where we conceptually understand how a digital computer works.
And we map that onto a
quantum computer and say,
all right, well, you be qubits.
And we're going to have
logic gates and so on.
I think something
from the bottom
up would be pretty
exciting where
you target the problem
from the bottom.
AUDIENCE: [INAUDIBLE].
JEREMY: Yeah, yeah.
So this is-- as I say from here
forward is pretty standard.
So you just have
Raussendorf lattice.
And you get these
very nice thresholds.
And if there's a tweak or two
in improvement that comes along,
then we could generate a
different lattice and so on.
The other thing to say is that, like in this picture, this is not going to be the ultimate solution, because I've conceptually divided this up into generating single photons with near-determinism, then generating these star clusters with near-determinism, and then letting the thing evolve.
Well, I would bet a
reasonable amount of money
that that's not going to be
the most efficient way of doing
things.
It's a way for us to
understand it now.
But why artificially
draw a line down here
and draw a line down here?
You might as well
just say, all right,
I'm going to start
with a bunch of things
here and generate that thing.
Again, that could produce
orders of magnitude savings
by doing things like that.
And that's the sort
of theoretical work
that needs to be
done now in my view.
