The words cycle, bike, and bicycle
are all closely related, but how does a computer know?
Humans are very good at recognizing a face,
but for computers such recognition is typically hard.
The reason is that conventional computer programs
are built up in a strictly logical, step-by-step manner,
and this is not well suited
to many of these problems.
So instead of this standard style of programming,
we can take a different approach,
called machine learning.
Machine learning algorithms are more flexible
when it comes to dealing with situations
that are slightly different every time,
or when the data is noisy.
They are also really good
at inferring patterns from large data sets.
This makes them very useful for applications
such as self-driving cars or face recognition.
So machine learning is the discipline
of learning patterns from large amounts of data.
Instead of coding complicated algorithms
into computer programs,
the idea of machine learning is to show
a simple computer algorithm lots and lots of examples
of correct and incorrect behavior.
For example, if we want
to train a computer to recognize a cat,
we could show the computer many examples of cats.
This is a cat, this is a cat.
Hey, this is not a cat.
In this way, the computer starts to "learn"
from the data what the expected behavior is.
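The idea of learning a rule from labelled examples can be sketched in a few lines of code. This is a toy illustration, not a real image classifier: the two "features" and all data points are made up, and the learning rule is a simple nearest-centroid classifier.

```python
# Toy sketch of learning from labelled examples: a nearest-centroid
# classifier on two made-up features (hypothetical data, not real images).
import numpy as np

# Training examples: feature vectors with labels 1 = cat, 0 = not a cat.
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.85, 0.7],   # cats
              [0.1, 0.2], [0.2, 0.1], [0.15, 0.3]])  # not cats
y = np.array([1, 1, 1, 0, 0, 0])

# "Training": compute the average (centroid) of each class.
cat_centroid = X[y == 1].mean(axis=0)
other_centroid = X[y == 0].mean(axis=0)

def predict(x):
    """Label a new example by whichever class centroid is nearer."""
    d_cat = np.linalg.norm(x - cat_centroid)
    d_other = np.linalg.norm(x - other_centroid)
    return 1 if d_cat < d_other else 0

print(predict(np.array([0.8, 0.75])))  # close to the cat examples
```

The program was never told what a cat looks like; the decision rule is derived entirely from the labelled data, which is the essence of the approach.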
The requirements for these kinds of algorithms
are enormous computing power
and the ability to work with many data points.
Thousands, millions or even billions of data points.
This requirement of scale means that
machine learning techniques,
even though they have been known for decades,
have not been used
in real world situations until very recently.
How can quantum solve this?
As machine learning is a vision or an approach
rather than a single algorithm,
it is not immediately clear what use quantum
computing could have for machine learning.
However, upon closer inspection, it becomes
apparent that many machine learning techniques
rely for their functioning on more basic tasks,
such as solving systems of linear equations.
Such systems can be solved efficiently
using the HHL algorithm,
named after its inventors Harrow, Hassidim and Lloyd.
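For a classical point of comparison: the task HHL addresses is solving a linear system A x = b. Here is a minimal classical sketch with a made-up 2x2 system; on a quantum computer, HHL instead prepares a quantum state whose amplitudes are proportional to the solution vector.

```python
# Classical solution of A x = b, the task that HHL speeds up.
# The 2x2 system below is a made-up example.
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # a small, well-conditioned system
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)    # classical cost grows quickly with size
print(x)                     # x is approximately [2, 3]
```

The classical cost of this step grows with the size of the system, which is why an exponential quantum speedup here would matter so much for machine learning.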
However, as we will discuss in a moment,
there are significant challenges
in realizing this approach.
Another cornerstone machine learning technique,
called Principal Component Analysis,
can also be sped up significantly
on a quantum computer.
Principal Component Analysis can be understood
as identifying the most important parts
of a large matrix
and using these to describe the matrix.
If we return to our example of recognizing cats,
we could show a computer many examples of cats,
and use these images to create an average cat.
To reconstruct individual pictures,
we start by using the most important variable,
which is the one that gives the largest variance.
This could be for example the size of the cat,
but it could also be a combination of factors,
for example a combination of weight and size.
It could also be something else,
but the computer will find out by comparing all images.
The variable with the largest variance
is the first principal component.
After that we can take the second
most important variable, the third, and so on.
In doing so we end up with a set of equations
that can reconstruct a cat to a desired accuracy.
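The steps above can be sketched classically in a few lines. This uses made-up correlated data standing in for the "size" and "weight" variables; the principal components are found as eigenvectors of the covariance matrix, and the data is then reconstructed from the first component alone.

```python
# Minimal classical PCA sketch on made-up data (not real cat images):
# find the direction of largest variance, then reconstruct from it.
import numpy as np

rng = np.random.default_rng(0)
# 100 correlated "size"/"weight" samples.
size = rng.normal(0.0, 1.0, 100)
weight = 0.8 * size + rng.normal(0.0, 0.2, 100)
X = np.column_stack([size, weight])
X = X - X.mean(axis=0)                   # centre the data

# Principal components = eigenvectors of the covariance matrix.
cov = np.cov(X.T)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]        # largest variance first
pc1 = eigvecs[:, order[0]]               # first principal component

# Project onto the first component and reconstruct.
X_approx = np.outer(X @ pc1, pc1)
error = np.linalg.norm(X - X_approx) / np.linalg.norm(X)
print(f"relative reconstruction error: {error:.2f}")
```

Because the two variables are strongly correlated, one component already reconstructs the data to good accuracy; keeping more components would reduce the error further, just as described above.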
While this is a powerful technique for recognition,
it is still computationally demanding for classical computers.
The quantum algorithm for principal component
analysis is called the LMR algorithm,
named after Lloyd, Mohseni and Rebentrost.
This algorithm can provide an exponential
speed up and would therefore be highly valuable.
What are the caveats?
Because the advantages of quantum machine learning
mostly rely on the HHL algorithm
that can be used to solve linear equations,
all caveats related to solving linear systems
apply here as well.
Perhaps the most significant challenge
is the data input problem.
This is especially challenging
for machine learning algorithms,
since, by design, we are dealing with very noisy
and difficult-to-parse data.
This means that finding an efficient way
of storing this data in a quantum memory or QRAM
for later reference is highly non-trivial.
A similar encoding problem
holds for the LMR algorithm
used for quantum principal component analysis.
If we can overcome these challenges, there will be
many applications for quantum machine learning.
Application areas range from security,
such as quickly recognizing persons in large crowds,
to personalized advertisements,
medical applications such as diagnosing diseases,
and even accurate weather prediction.
