We've seen previously how quantum phases can be used to output the results of a quantum algorithm.
Now we're going to see a technique
for estimating these phases
that incorporates another quantum algorithm,
in order to ensure that we get a precise estimate.
Relative phases are important
and ubiquitous in quantum information.
We can produce one of these relative phases
using a unitary U by inserting an eigenstate of U,
which we call psi_k,
into a controlled U operation,
resulting in the corresponding phase phi_k
being kicked back onto the control qubit.
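As a concrete illustration, here is a small NumPy sketch of phase kickback; the unitary U, the phase phi_k, and the eigenstate psi_k below are toy choices for illustration, not anything specified in the lecture.

```python
import numpy as np

# Toy example (hypothetical U): an eigenstate |psi_k> of U with
# eigenvalue e^{i phi_k}, fed into controlled-U, kicks the phase
# phi_k back onto the control qubit.
phi_k = 0.7
U = np.diag([np.exp(1j * phi_k), np.exp(-1j * phi_k)])
psi_k = np.array([1, 0])                 # eigenstate with eigenvalue e^{i phi_k}

# controlled-U acting on control (x) target
CU = np.block([[np.eye(2), np.zeros((2, 2))],
               [np.zeros((2, 2)), U]])
plus = np.array([1, 1]) / np.sqrt(2)
out = CU @ np.kron(plus, psi_k)

# the control is now (|0> + e^{i phi_k}|1>)/sqrt(2); the target is unchanged
kicked = np.kron(np.array([1, np.exp(1j * phi_k)]) / np.sqrt(2), psi_k)
assert np.allclose(out, kicked)
```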
Alternatively, we can produce
a relative phase in a physics experiment.
For example, assume that we have a beam-splitter here
that splits an incoming light beam into two paths,
one of which passes through a piece of material
that shifts the phase of the light's wavefunction.
This can produce a physical state
with that relative phase on it.
Now we come to the question:
how do we learn the values of these phases?
If we input a plus state to the phase gate phi,
and then measure in the X basis,
the plus state transforms from 0 + 1 over root 2
to 0 + e to the i phi 1 over root 2,
which has coefficients 1 + e to the i phi
and 1 - e to the i phi in the +- basis.
This results in a probability of measuring the output
to be in the + state that is 1 + cos(phi) over 2.
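This outcome probability is easy to check numerically; a short NumPy sketch, assuming the phase gate is diag(1, e^{i phi}):

```python
import numpy as np

def prob_plus(phi):
    """P(measuring |+>) after the phase gate diag(1, e^{i phi}) acts on |+>."""
    plus = np.array([1, 1]) / np.sqrt(2)
    out = np.diag([1, np.exp(1j * phi)]) @ plus
    return abs(plus.conj() @ out) ** 2        # |<+|out>|^2

# agrees with the closed form (1 + cos phi) / 2
for phi in (0.0, np.pi / 3, np.pi / 2, np.pi):
    assert np.isclose(prob_plus(phi), (1 + np.cos(phi)) / 2)
```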
Let's take a look at the performance
of the cosine of phi measurement.
Now, phi is a point on a circle
and the cosine of phi is the x coordinate of phi,
just on the x-axis there.
Already we can see one of the problems
with doing a cosine of phi measurement on its own,
there's another angle phi' with the same cosine as phi,
making it impossible to tell them apart
with only these measurement results.
Now, if that weren't a big enough problem,
there's also the fact that,
if we have some imprecision 
on our measurement of cos(phi),
suppose for instance that we know
the cosine of phi is in this gray band,
but not where exactly,
then our estimates of phi get very imprecise
when phi is near 0.
This is in contrast with the situation
when phi is close to pi/2, or 90 degrees,
where the same amount of imprecision on cos phi
results in much less imprecision
on our estimate of phi,
a much smaller wedge seen here.
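Numerically, the wedge picture looks like this; the band width 0.05 and the sample angles are arbitrary choices for illustration:

```python
import numpy as np

def phi_width(cos_est, band):
    """Width of the interval of phi in [0, pi] consistent with
    cos(phi) lying in [cos_est - band, cos_est + band]."""
    lo = np.arccos(np.clip(cos_est + band, -1, 1))
    hi = np.arccos(np.clip(cos_est - band, -1, 1))
    return hi - lo

band = 0.05
wide = phi_width(np.cos(0.1), band)          # phi near 0: a wide wedge
narrow = phi_width(np.cos(np.pi / 2), band)  # phi near pi/2: a narrow wedge
assert wide > 3 * narrow
```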
So how do we fix this?
Well, we can run a second type of experiment
which involves preparing the +i state,
which is 0 + i 1 over root 2,
then we feed that through the phase gate,
and measure in the X basis as before.
If we do the same algebra to analyse this experiment,
we end up with a probability of measuring the + state
which now depends on the sine of phi,
which is the y coordinate of phi
in the diagram we drew previously.
Now, we have only one range of angles
that's consistent both with our imprecise estimates
of sine phi and the cosine of phi.
Also, note that the precision of the estimate
is independent of phi itself,
and remains the same as we travel around the circle.
This precision, the width of the interval for phi,
scales as one over the square root of s,
where s is the number of measurements.
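A Monte-Carlo sketch of the two-experiment estimator; the sign convention P(+) = (1 - sin phi)/2 for the +i input is an assumption that depends on exactly how the states are defined:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_phi(phi, s):
    # s runs with |+> input:  P(+) = (1 + cos phi) / 2
    cos_est = 2 * rng.binomial(s, (1 + np.cos(phi)) / 2) / s - 1
    # s runs with |+i> input: P(+) = (1 - sin phi) / 2 (sign is conventional)
    sin_est = 1 - 2 * rng.binomial(s, (1 - np.sin(phi)) / 2) / s
    return np.arctan2(sin_est, cos_est) % (2 * np.pi)

phi = 2.0
spread = {s: np.std([estimate_phi(phi, s) for _ in range(300)])
          for s in (100, 10_000)}
# the interval width shrinks roughly like 1 / sqrt(s)
assert spread[10_000] < spread[100]
```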
Now that's all well and good,
but you may wonder whether we can do better.
We can, in fact, but first we'll have to consider
a simplified case of phase estimation.
Let's focus on the phases 0 and pi.
In the + state experiment from earlier,
these angles produce a probability of measuring +
that's either 0 or 1 exactly.
As long as we're guaranteed that the phase
we're trying to estimate is one of these two values,
then one measurement suffices
to give us exact knowledge of the phase,
so we don't need to worry about precision.
To generalise to other, smaller angles,
we'll need some notation,
namely that of binary fractions.
Here we express phi as 2 pi times a number omega,
which is between 0 and 1.
Omega is a sum over these values b_k,
each of which can be 0 or 1,
times 2 to the minus k, for k ranging from 1 to infinity.
We can write this out as 0.b1b2b3 et cetera.
For example, one over 2 will now be denoted 0.1,
where normally you would use 0.1 to denote 1/10, in base ten.
Also, one eighth, 1/2^3, will be 0.001,
and we can express
other fractions like 3/8 as binary strings like 0.011.
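The binary-fraction notation is easy to reproduce in code; a short sketch:

```python
def binary_fraction(omega, n_bits):
    """Write omega in [0, 1) as the string 0.b1 b2 ... b_{n_bits}."""
    bits = []
    for _ in range(n_bits):
        omega *= 2
        bit = int(omega)          # b_k = 1 iff doubling crosses 1
        omega -= bit
        bits.append(str(bit))
    return "0." + "".join(bits)

assert binary_fraction(1 / 2, 3) == "0.100"
assert binary_fraction(1 / 8, 3) == "0.001"
assert binary_fraction(3 / 8, 3) == "0.011"
```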
In order to make full use of this notation,
we'll also need to note
that we can control the phase that gets kicked back
by applying multiples of phi, rather than phi itself.
This is true both in the case
where we use phase kickback on a quantum circuit,
and in the case where we're doing a physics experiment.
The main difference between these two cases is that,
in a quantum computer,
U to the Mth power
may be implemented more cleverly and in less time
than simply doing U M times in a row,
as you have to do
if you're running one of these physics experiments.
For this reason, let's assume from now on
that we're using a simulated U on a quantum computer
rather than a direct physical U in a physics experiment.
Now we're ready to estimate phases
that have one of four values:
0, pi/2, pi or 3 pi/2.
There are a few convenient facts to discuss.
First, we recall that e to the 2 pi i is 1.
Next, we use this fact
to derive that applying the four-valued phase twice
gives us a two-valued phase which is either 0 or pi,
depending only on the value of b_2.
This gives us an intuitive algorithm to follow
to learn these phases with these four values.
First, we apply the phase 2 phi to the + state,
and perform a single measurement
to learn the value of b_2.
Then, we prepare a new special input state
that cancels out the phase 0.0b_2,
and perform another measurement to learn b_1.
As long as we can adjust the phase on the input state
such that the output state always has a phase of 0 or pi,
one measurement is all we need
to determine that bit of the phase exactly.
Now, in order to estimate arbitrary phases,
we begin by choosing
a number of measurements that we'd like to do,
and we'll call it s again for simplicity.
We apply the phase 2^{s - 1} phi to +
to learn the last bit of omega that we're interested in,
called b_s.
Then, we adjust the phase on the input state
to get rid of the last bit on the output,
and we apply 2^{s - 2} phi,
so that the output phase is 0.b_{s-1}.
After this, we can continue cancelling
known bits of the phase by shifting the input state,
and cancelling unknown bits of the phase
that we're not yet ready to measure
by applying a multiple of phi,
until we've learned all the bits of omega that we wanted.
Since this returns
one of 2^s uniformly spaced values around the circle,
we get a precision that's one over 2 to the s,
much more precise
than the 1 over root s scaling we saw earlier.
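The bit-by-bit procedure can be simulated classically when phi is exactly s bits long, so that every measurement is deterministic; a sketch under that assumption:

```python
import numpy as np

def iterative_pe(phi, s):
    """Recover omega = phi / (2 pi), assumed to be exactly 0.b1...b_s."""
    known = 0.0                          # 0.b_{k+1}...b_s learned so far
    for k in range(s, 0, -1):
        # residual output phase after applying 2^(k-1) * phi and
        # shifting the input state to cancel the already-known bits
        theta = (2 ** (k - 1) * phi - np.pi * known) % (2 * np.pi)
        # theta is exactly 0 or pi, so one X-basis measurement gives b_k
        b_k = int(round(theta / np.pi)) % 2
        known = (b_k + known) / 2        # prepend b_k: 0.b_k b_{k+1}...b_s
    return known                         # estimate of omega, precision 2^-s

assert iterative_pe(2 * np.pi * (3 / 8), 3) == 3 / 8
assert iterative_pe(2 * np.pi * (5 / 16), 4) == 5 / 16
```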
And one of the final very special things
we can learn about this procedure
is that it can be implemented coherently,
using all of the phased plus states at once.
We take advantage of the fact that,
if we perform a Hadamard
on a state with a 0 or pi phase,
we get a state that's 0 if the phase is 0,
and 1 if the phase is pi.
Then, we can use these states
as the coherent controls to phase shifting gates
that perform the shifts we discussed earlier.
If we write out the entire circuit,
we see that its inverse would take a bit string as input,
and return a set of phased states
where the phases encode those bit values.
This is precisely the Quantum Fourier Transform,
which underpins many quantum algorithms.
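As a closing check, the QFT can be written down as a matrix and compared against the product of phased plus states; a NumPy sketch (the three-qubit size and the input x are arbitrary choices):

```python
import numpy as np

def qft_matrix(n):
    """N x N QFT, F[j, k] = exp(2 pi i j k / N) / sqrt(N), with N = 2^n."""
    N = 2 ** n
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)
assert np.allclose(F.conj().T @ F, np.eye(8))      # unitary

# F|x> factors into phased plus states, one per qubit:
x = 5
qubit = lambda m: np.array([1, np.exp(2j * np.pi * x / 2 ** m)]) / np.sqrt(2)
product = np.kron(np.kron(qubit(1), qubit(2)), qubit(3))
assert np.allclose(F[:, x], product)
```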
