Quantum error correction is a theory of how to reverse
or undo noise and errors on quantum systems.
This theory was developed in the 1990s,
right when the field of quantum computing took off.
It was believed
that this theory was an absolute necessity
for quantum computers ever to be realized.
The reason that quantum error correction is
important is that we want quantum computers
to do large computations.
Such a computation or quantum circuit breaks down 
into many steps, namely single and two-qubit gates
and a measurement of all the qubits at the end.
If each gate fails to operate properly
with some probability, then these failures will
accumulate, leading
to a final answer which cannot be trusted.
And the more gates in the circuit, the higher
the chance that the final outcome will be
essentially garbage.
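To make this concrete, here is a small illustrative sketch, under the simplifying assumption (ours, not the lecture's) that each gate fails independently with the same probability p:

```python
# Illustrative sketch: assume every gate fails independently with
# probability p (a simplifying assumption for this example).
def circuit_success_probability(p: float, n_gates: int) -> float:
    """Probability that none of the n_gates gates fails."""
    return (1.0 - p) ** n_gates

# Even a rather good gate (p = 0.1%) is hopeless for large circuits:
print(circuit_success_probability(1e-3, 100))        # still close to 1
print(circuit_success_probability(1e-3, 1_000_000))  # essentially 0
```

For a million gates the success probability is astronomically small, which is exactly why the per-gate failure rate has to be pushed so far down.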
In addition, a qubit which does not undergo
any gate, tends to decohere and relax to a fixed,
typically thermal, state.
So how do we make the failure probability
per gate low, for example as low as 10^-15?
For none of the qubits under current investigation,
like the ones that you hear about in this MOOC,
do we have gates with failure rates of order 10^-15.
At best, the failure rate of a single 
or two-qubit gate is less than, say, 1%.
The past 20 years have seen a lot of progress
in making better qubits with longer lifetimes
and better gates, but to reach 10^-15 
by hardware advances just seems implausible.
Now, the idea of quantum error correction is 
to represent quantum information redundantly.
For example, 
instead of a single qubit, we use 7 qubits.
And these together constitute 
one new so-called logical qubit.
On this logical qubit we operate as follows.
First, we constantly monitor 
for errors on any of the 7 qubits.
We can't do this by measuring these qubits directly.
Why not? Because measuring them would destroy their state.
Instead, we will actually include ancilla 
or helper qubits, which we couple to the data qubits
and we will only read out the ancilla qubits.
And with this error information 
we infer what errors have happened.
In this way we can undo or reverse these errors.
But wait, we can't just correct any old error.
First of all, a qubit can undergo
two types of elementary errors:
there are bit flip errors and there are phase flip errors.
All other errors on a qubit are 
linear combinations and/or products of these errors.
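As a quick check of this fact: a bit flip is the Pauli X matrix, a phase flip is the Pauli Z matrix, and their product is the combined bit-and-phase flip error (the Pauli Y, up to an overall phase). A small sketch:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])    # bit flip:   |0> <-> |1>
Z = np.array([[1, 0], [0, -1]])   # phase flip: |1> -> -|1>

# The product of a bit flip and a phase flip is the Pauli Y error,
# up to an overall phase factor i:
Y = 1j * (X @ Z)
print(Y)
```

So a code that can correct both a bit flip and a phase flip on a qubit can also correct their product, and by linearity any single-qubit error.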
Now quantum error correction codes are designed
such that only errors 
on small subsets of qubits can be corrected.
For example, for the 7 qubits shown previously,
an error on any single qubit can be corrected,
but not an error on any subset of 2 or more qubits.
Error correction is then effective 
when errors on single qubits are much more likely
than errors on pairs of qubits.
In other words, likely errors will get corrected
and the less likely errors will remain.
So we improve matters.
By adding more redundancy, for example 
representing a single logical qubit by, say, 49 qubits,
one can correct larger subsets of errors.
And this means that the failure rate of the logical qubit,
which is determined by the errors 
which do not get corrected, can get really small.
With 10,000 qubits it may be possible
to have a failure rate of 10^-15,
particularly with a code called the surface code.
An attractive feature of the surface code is 
that its qubits can be physically placed
on a 2D planar chip and only local connections
are needed for error correction and logical gates.
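The rough scaling behind numbers like these can be sketched with a common heuristic, namely that the logical failure rate of a distance-d surface code behaves like A * (p / p_th)^((d + 1) / 2); the threshold p_th of about 1% and the prefactor A are rough assumed values here, not numbers from the lecture:

```python
def logical_error_rate(p: float, d: int,
                       p_th: float = 1e-2, A: float = 0.1) -> float:
    """Heuristic surface-code logical failure rate; p_th and A are
    rough assumed values, not measured numbers."""
    return A * (p / p_th) ** ((d + 1) / 2)

# A distance-d surface code patch uses on the order of 2 * d**2
# physical qubits (data plus ancilla qubits):
for d in (3, 11, 25):
    print(d, 2 * d * d, logical_error_rate(1e-3, d))
```

The point of the sketch is only the trend: below threshold, every increase of the distance suppresses the logical failure rate by a further constant factor, so modest growth in qubit number buys exponential improvement.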
But what quantum error correcting codes are
there and which one is in fact the best?
Well, there are many classes of codes 
and there is no one single answer to which one is best.
A code can in general encode k into n qubits.
Furthermore, the code has some distance d,
which means that it can correct almost up to d/2 errors.
The Steane code that we have seen has 
distance 3 and it can correct a single error.
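In this notation the number of correctable errors can be written down directly; the floor formula below is the standard coding-theory statement behind "almost up to d/2", not something spelled out in the lecture:

```python
def correctable_errors(d: int) -> int:
    """Number of arbitrary errors a distance-d code corrects:
    t = (d - 1) // 2, i.e. almost up to d / 2."""
    return (d - 1) // 2

# The Steane code is a [[7, 1, 3]] code: n = 7, k = 1, d = 3.
print(correctable_errors(3))   # a single error
print(correctable_errors(5))   # a distance-5 code corrects 2 errors
```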
So typically one would like to have
a high distance and a high ratio of k to n.
But more important in practice is in fact
how one gathers error information
and how one does that fault-tolerantly.
And this is what we will discuss in the next video.
