[MUSIC]
>> Hello, everyone.
Welcome to AI Show.
Today's episode will
be very interesting
because we will be talking
about Quantum Machine Learning.
Alex, please tell us who
you are and what you do.
>> Hello, I'm Alex Booker,
Principal Researcher with
Microsoft Quantum Systems group.
I have been with the group pretty
much since its inception,
and after working through
some hard Quantum compilation
issues about three years ago,
we turned to exploring the domain
of Quantum Machine Learning.
>> Interesting. Quantum computing
is becoming this new buzzword;
everybody is super
interested in it.
So can you tell me, or explain to
me, why that is happening?
>> Well, in my opinion,
Quantum computing is
the most exciting agenda
of the 21st century.
A Quantum computer is
not a silver bullet.
It's based on the principles
of Quantum mechanics,
which are very hard to harness,
but it's very good at solving
rather hard problems that are
pretty much unsolvable with
traditional computation technologies.
To name just a few,
that would be advances
in material science,
cryptography, physics
simulation, and such.
>> Nice. What about machine
learning and AI?
>> So compared to the lifespan of
the Quantum computing industry,
Quantum Machine Learning
has a shorter history.
It started just over a decade ago
with a set of fundamental works,
where people figured out a way to
quantize familiar setups
in machine learning,
such as support vector machines,
classifiers, neural nets,
and Quantum recommenders.
So all of this,
at least in theory,
is available and working.
>> All right, in theory.
So when will we see Quantum
neural networks in practice?
>> Well, the original designs
are based on so-called
Quantum linear algebra,
and are only meant for very large,
very well error-corrected
Quantum computers,
which aren't coming
for a decade or two.
So in the shorter term,
the community has turned to a more
practical technology based
on things called variational
Quantum circuits and related ideas.
>> Can you tell me more
about the properties
of those variational
Quantum circuits?
>> So a variational Quantum circuit,
one can view it as a program
running on a Quantum computer,
but a smaller one,
and it's trainable.
Being small, it does two things.
First of all, it uses some of
the Quantum computing power,
what the Quantum
computer does best.
But a hybrid training
scheme also makes use of
the traditional computing
environment surrounding it,
which does all the bookkeeping,
takes care of pre-processing
the data and post-processing
the results, and together,
they do a good job of training
those moderate-size machine
learning solutions.
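To make that hybrid scheme concrete, here is a minimal sketch in plain Python (not the QDK's actual API, and all names are illustrative): a one-qubit variational circuit is simulated in closed form, standing in for the Quantum device, while a classical outer loop computes parameter-shift gradients and updates the circuit parameter.

```python
import math

def circuit_expectation(theta):
    # Stand-in for the quantum half: the circuit Ry(theta)|0>,
    # measured in the Z basis. A real device would estimate this
    # value from repeated measurements; here we use the closed
    # form <Z> = cos(theta).
    return math.cos(theta)

def gradient(theta, target):
    # Parameter-shift rule for d<Z>/dtheta, then the chain rule
    # for a squared-error cost (E - target)^2.
    shift = math.pi / 2
    d_expect = (circuit_expectation(theta + shift)
                - circuit_expectation(theta - shift)) / 2
    return 2 * (circuit_expectation(theta) - target) * d_expect

def train(target=0.0, theta=1.0, learning_rate=0.5, steps=100):
    # The classical outer loop: query the (simulated) circuit,
    # then update the parameter by gradient descent.
    for _ in range(steps):
        theta -= learning_rate * gradient(theta, target)
    return theta

theta = train()
```

Here the "bookkeeping" lives entirely on the classical side; only `circuit_expectation` would run on Quantum hardware.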
>> Interesting. Is it purely
theoretical at this point,
or do we have practical solutions?
>> So the Quantum community
has recently put a lot of effort
and hours into creating
various open-source
proposals and offerings,
where machine learning solutions
based on variational Quantum
circuits are reduced to practice.
This includes the most recent
addition to our Q# QDK world.
What we offer now is
an open-source Quantum
Machine Learning library,
which at this point, offers
just Quantum classifiers.
So you can download it, play with it,
and actually see how different
Quantum Machine Learning
training and generalization
are from traditional models.
>> Right, and Chris will demonstrate
this to us in a few minutes.
But before I let you go,
can you please share your opinion:
where do you think this will all
evolve, and what are our next steps?
>> So the main thrust,
in my opinion,
should be scaling out
the solutions that we have now,
which have been
reduced to practice,
running on simulators so far.
We need to make sure they
run well on actual hardware,
and more importantly,
that they scale out well,
meaning we have some certainty
that the power and precision of
those Quantum Machine
Learning solutions
grow in lockstep
with the growth of
Quantum computers as they
become available and
more and more powerful.
>> Got it. Well, thank
you very much, Alex.
That was super interesting,
and I hope to see you again on
this show, telling us
more interesting things about
Quantum Machine Learning. Thank you.
>> I will be looking
forward to that as well.
Thank you very much for having me.
>> Hi, Chris. Welcome to AI Show.
Alex just gave us a
really nice introduction
into Quantum Machine Learning.
So now it's your turn to show
us the practical side of things.
But before we do that,
please tell us who you
are and what you do.
>> Yeah. Thank you
for having me here.
I'm Chris Granade, a Research
Software Development
Engineer on the Quantum systems
team here at Microsoft.
Much of my day-to-day,
I work on putting
libraries out there,
such as the Q# Standard Library,
and also more domain- or
application-focused libraries,
such as the machine learning library
that I'll be talking about more.
>> Nice. All right. So let's jump in.
How can we use Quantum Computing
for Machine Learning applications?
>> So that's where the Quantum
Development Kit comes in.
The QDK provides a new language, Q#,
that you can use to write
Quantum programs to solve
problems that you care about using
the power of a Quantum computer.
>> I see, and what about the
development environment?
Do I need a special
Quantum IDE or something?
>> So let me pop on over to
Visual Studio Online so you can
see what a Hello World program
looks like in Q#.
You could also use
Visual Studio Code or Visual
Studio, as you prefer.
But if I look at a Hello
World program in Q#,
the thing that I really like
to point out here is that it
looks a lot like a Hello World
program in any other language.
It's just Hello World.
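For reference, a Q# Hello World program looks roughly like this (the namespace and operation names here are illustrative, not necessarily those of the QDK sample):

```qsharp
namespace HelloWorld {
    open Microsoft.Quantum.Intrinsic;

    @EntryPoint()
    operation SayHello() : Unit {
        // Just Hello World, as in any other language.
        Message("Hello, world!");
    }
}
```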
>> I don't see any
manipulation of the qubits yet.
So can you show an example of that?
>> Yeah. So let me look at
the Quantum random
number generator sample
provided with the
Quantum Development Kit.
One of the things that
really helps me understand
how the sample works is
to think of Q# in
the same way I might think
of code that I run on
a GPU or an FPGA as an accelerator.
I run a program here
on a Quantum device
to solve problems that I care about
when that's the best tool to use,
and I do that by
sending instructions to
that Quantum device, represented in
Q# by built-in operations.
So for instance, if I
ask the machine for
a qubit using a using statement,
I can then send instructions to
the device to do
things to that qubit,
such as the h instruction,
short for Hadamard,
that lets me prepare
some superposition,
or m, short for measure,
that measures the qubit and gets
back classical data that I can use.
The rest of that Q# program can
then be sent out to the languages
and tools that I might use in
the rest of my workflow,
such as Python or C#.
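A sketch of such a random bit generator in Q#, along the lines of the QDK sample described here (details may differ from the shipped sample):

```qsharp
namespace Qrng {
    open Microsoft.Quantum.Intrinsic;

    operation SampleRandomBit() : Result {
        // Ask the machine for a qubit with a using statement.
        using (q = Qubit()) {
            H(q);          // prepare an equal superposition
            let r = M(q);  // measure back classical data
            Reset(q);      // return the qubit to |0> before release
            return r;
        }
    }
}
```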
>> Very interesting. But does
it mean that I will have to
learn all of those Quantum gates
and operators to use
Quantum Machine Learning?
>> Thankfully, no.
Q# is a high-level language,
so you get things like
the ability to
pre-process your data and
train classifiers using Q#
library functions and operations
that represent the high-level
structure of the task that
you're trying to run.
So for instance, I can write out
applying a product kernel
to some features from
some training data, or I can
write out the rest of
my pre-processing:
how I want to sample
some training data to
make minibatches,
as well as what the structure of
the Quantum classifier
that I want to train is.
Once I have all that,
I can pass it to the
Q# operation, provided with
the Q# Quantum Machine Learning
library, that trains a classifier.
When I do that, I get
to control how the
training proceeds by
passing various options,
some of which are probably fairly
familiar from a traditional
machine learning workflow,
such as the learning rate
and the minibatch size.
Others, such as the
number of measurements,
reflect that my goal
here really is to
send the right instructions to
that Quantum device to prepare
a superposition that represents
the data that I'm trying
to classify during training.
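As a purely classical analogue of those options (a sketch, not the library's API), here is a minibatch training loop in Python where the learning rate and minibatch size appear explicitly; a Quantum trainer would add Quantum-specific options such as the number of measurements used to estimate each classification probability.

```python
import numpy as np

rng = np.random.default_rng(42)

def train_classifier(features, labels, learning_rate=0.1,
                     minibatch_size=8, epochs=50):
    # Minibatch logistic-regression training: the learning rate and
    # minibatch size play the same role here that they do among the
    # Quantum Machine Learning trainer's options.
    n, d = features.shape
    weights, bias = np.zeros(d), 0.0
    for _ in range(epochs):
        order = rng.permutation(n)  # sample the data into minibatches
        for start in range(0, n, minibatch_size):
            batch = order[start:start + minibatch_size]
            x, y = features[batch], labels[batch]
            p = 1.0 / (1.0 + np.exp(-(x @ weights + bias)))  # sigmoid
            grad = p - y  # gradient of the logistic loss w.r.t. logits
            weights -= learning_rate * (x.T @ grad) / len(batch)
            bias -= learning_rate * grad.mean()
    return weights, bias

# Toy, easily separable data, just to exercise the loop.
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)),
               rng.normal(2.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
w, b = train_classifier(X, y)
accuracy = ((X @ w + b > 0).astype(int) == y).mean()
```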
>> Since we are talking
about the data set itself,
maybe you can share a little bit
more about the problem
we're trying to solve.
So tell us about the data
and the data set itself.
>> Absolutely. So the sample
that you just saw trains
a classifier to label
a half-moon data set.
That's a synthetic data set
that's a little bit difficult
to break into classes,
so we can use it to benchmark
and understand how classifiers work,
and how they might be used for
more real-world or applied data.
What makes it hard is that
those two moons
interlock at their ends,
so that if I try to draw
a straight line anywhere
through that plot,
I'm going to chop off
one moon or the other and
wind up misclassifying that way.
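A small NumPy-only sketch (an approximation of the data set described, not the sample's actual data) shows why the half moons are hard for a straight line: even the best of many random lines misclassifies some points, because the moons interlock.

```python
import numpy as np

def make_half_moons(n_per_moon=100):
    # Two interlocking half-moons, in the spirit of the data set
    # described above (noise omitted for clarity).
    t = np.linspace(0.0, np.pi, n_per_moon)
    upper = np.column_stack([np.cos(t), np.sin(t)])              # class 0
    lower = np.column_stack([1.0 - np.cos(t), 0.5 - np.sin(t)])  # class 1
    X = np.vstack([upper, lower])
    y = np.array([0] * n_per_moon + [1] * n_per_moon)
    return X, y

def best_line_accuracy(X, y, n_trials=2000, seed=0):
    # Try many random straight lines and report the best accuracy
    # any of them achieves. Because the moons interlock at their
    # ends, no straight line classifies every point correctly.
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_trials):
        w, b = rng.normal(size=2), rng.normal()
        preds = (X @ w + b > 0).astype(int)
        acc = (preds == y).mean()
        best = max(best, acc, 1.0 - acc)  # flipping a line is still a line
    return best

X, y = make_half_moons()
best = best_line_accuracy(X, y)  # high, but strictly below 1.0
```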
>> All right.
I think we're ready to
look at the code running.
>> Awesome. Well, let me pop
over to a Jupyter Notebook
so that I can call the sample
that you just saw from Python.
That lets me use
things built into Python,
such as JSON functionality,
to load the training data
that I want to use.
There you see all of
the features broken down;
that's just the x and y
coordinates from the plot
you saw a moment ago.
Then I can set some
starting points for where
I want to begin the training,
and pass all of that to
the Q# operations you saw
in the sample before,
using the simulate method.
What happens when I call that is
that the Quantum Development Kit
will run that Q# operation
locally on my machine,
and simulate how it might work
on a real Quantum device.
That way, I can use a conventional
computer to understand, test,
and develop an algorithm,
and make sure that it works
the way I want before running it
on actual hardware.
When I do that, I get a
trained classifier back out,
and I can then do
the same thing with
a validation operation to
find that it has
about a 13 percent miss rate,
which is pretty good
given that data set.
I can also use standard
Python tools like NumPy and
Matplotlib to help me understand
what the classifier actually did,
and where those
misclassifications came from.
When I plot that,
you see that it missed two points.
One was on the very
tail of one of those moons,
which, as we saw earlier, is where
it gets the most difficult.
The other was where there was
just enough noise
to push one of those moons
too close to the other,
so it's a very difficult
point to classify.
That gives me the
understanding I need to
improve my training
procedure going forward,
as I look at more applied uses
of Quantum Machine Learning.
>> Well, this is very
interesting, and I'm curious:
if I want to play with it myself,
is it available for
download right now?
>> Absolutely. Everything
you've seen here today
is part of the
Quantum Development Kit.
You can check it all out,
open source, on GitHub,
and use it today.
>> Perfect. All right. Well,
Chris, thank you very much.
But before I let you go,
I want to ask your
opinion, or your vision:
where do you think it will
go in the next couple of months,
a year, a couple of years from now?
>> So that's a great question.
The thing that really
helps me when I
think about that is
how much of what you saw
here today came out of
the cutting-edge research taking
place on the Quantum Systems
team here at Microsoft.
Everything from the
classifier that you
saw to the way that we
train that classifier
came out of research
on the Quantum Systems team.
As that research proceeds,
we'll see new algorithms,
new approaches, new applications,
and we'll also see improvements to
existing applications, such as
the training that you saw today,
and that research will let us make
the Quantum Machine Learning library
better and put more
tools into your hands,
so that you can explore how
Quantum Machine Learning impacts
the things that you care about.
>> All right, Chris. Thank
you very much once again,
and I hope to see you soon with
more interesting news about Quantum
Machine Learning. Thank you.
>> I look forward to it.
Thanks for having me.
[MUSIC]
