After a week of mathematical distraction, I know you’re probably having withdrawal symptoms from quantum computation, and I have just the antidote.
Today, we study the details of quantum oracles, those unitary (and therefore reversible) transformations that simulate the classical Boolean functions which are the actual targets of
our algorithms. I can’t show how to build an oracle from an arbitrary function – that’s a craft that requires knowledge of the specific function.
There are lots of papers out there that address the construction of oracles for specific Boolean functions on a onesy-twosy basis.
What I will do is show you how we construct the matrix of the oracle – for any f.
While we’re at it, we’ll extend what we’ve already studied – single-valued functions – to the more general vector-valued functions, that is, those that have multiple Boolean outputs.
Finally, this is the right place to introduce the difference between relativized and absolute quantum speed-up over classical computing.  It’s a distinction that we must always
keep in mind when designing our algorithms. So, we start off by analyzing the small one-in, one-out functions, f, of which there are only four.  Not only do we get to cover every one in
depth, but the techniques we employ help us build oracles for more complicated functions. One of those four functions is the identity, f(x) = x.  From the truth table for the oracle, we
easily produce its matrix.
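To make the construction concrete, here’s a sketch in Python with NumPy of how the oracle matrix can be built from the truth table. The helper name `oracle_matrix` and the basis ordering |x⟩|y⟩ are my own illustrative choices, not anything fixed by the lecture; the rule itself is the standard one, U-sub-f |x⟩|y⟩ = |x⟩|y XOR f(x)⟩.

```python
import numpy as np

def oracle_matrix(f, n_in, n_out=1):
    """Matrix of U_f acting as U_f|x>|y> = |x>|y XOR f(x)>.
    Basis states are indexed as x * 2**n_out + y."""
    dim = 2 ** (n_in + n_out)
    U = np.zeros((dim, dim), dtype=int)
    for x in range(2 ** n_in):
        for y in range(2 ** n_out):
            col = x * 2 ** n_out + y            # incoming |x>|y>
            row = x * 2 ** n_out + (y ^ f(x))   # outgoing |x>|y XOR f(x)>
            U[row, col] = 1
    return U

# The identity function f(x) = x:
U = oracle_matrix(lambda x: x, n_in=1)
print(U)

# The x = 0 block is the 2x2 identity; the x = 1 block is sigma-x:
sigma_x = np.array([[0, 1], [1, 0]])
print(np.array_equal(U[0:2, 0:2], np.eye(2, dtype=int)))  # True
print(np.array_equal(U[2:4, 2:4], sigma_x))               # True

# And U is its own inverse, hence unitary:
print(np.array_equal(U @ U, np.eye(4, dtype=int)))        # True
```

Running this prints the 4x4 permutation matrix whose diagonal blocks are exactly the identity and sigma-x pieces discussed next.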
As you can see, it’s unitary – which, by the way, we suspected; after all, an oracle is always its own inverse, but it’s nice to confirm here. By standing back a bit, we can
actually resolve U-sub-f as being composed of two smaller 2x2 sub-matrices:  the identity and sigma-x, the Pauli-spin matrix for the observable S-x. This is more than just a
casual observation, as you’re about to learn. We’ll repeat this with a slightly larger function – still single-valued, but one that has a wider input – or data – channel. The process is the
same: build a truth table and construct the matrix.  Even so, we go very slowly and carefully through the steps, because with so many combinations it would be
easy to get lost otherwise.
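As a sketch of the wider-channel case – using AND of the two data bits as a stand-in, since the lecture doesn’t pin down a particular f – the same recipe gives an 8x8 matrix whose diagonal blocks are again just the identity or sigma-x:

```python
import numpy as np

# Hypothetical two-in, one-out function: f = AND of the two data bits.
def f(x):
    return (x >> 1) & (x & 1)

dim = 2 ** 3                     # two data qubits plus one target qubit
U = np.zeros((dim, dim), dtype=int)
for x in range(4):
    for y in range(2):
        U[2 * x + (y ^ f(x)), 2 * x + y] = 1

# Each diagonal 2x2 block is I where f(x) = 0 and sigma-x where f(x) = 1:
sigma_x = np.array([[0, 1], [1, 0]])
for x in range(4):
    block = U[2 * x:2 * x + 2, 2 * x:2 * x + 2]
    print(x, "sigma-x" if np.array_equal(block, sigma_x) else "I")
```

For this particular choice of f, the resulting matrix happens to be the familiar Toffoli (CCNOT) gate: only the x = 3 block, where both data bits are 1, is a sigma-x.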
Our diligence pays off, though, because at the end, we find that this necessarily larger matrix is still composed of only two moving parts: the 2x2 identity and the Pauli-spin matrix,
sigma-x. When we finally look at a vector-valued function – the smallest possible, one with two outputs – we see the pattern repeat with slight variations. The matrix consists of
small separable operators, whose 2x2 components are just identities or sigma-x. Such observations confirm, by the way, that U-sub-f is unitary, as required by the quantum circuitry.
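For the smallest vector-valued case, here’s one possible sketch with one data bit and two output bits. The choice f(x) = (x, x), with both output bits copying the input, is mine, purely for illustration; the point is that each block of the matrix separates into 2x2 identities and sigma-x factors.

```python
import numpy as np

# Hypothetical vector-valued function: one input bit, two output bits,
# f(x) = (x, x), i.e. both output bits equal the input bit.
def f(x):
    return (x << 1) | x          # two-bit value y1 y0

dim = 2 ** 3                     # one data qubit plus two target qubits
U = np.zeros((dim, dim), dtype=int)
for x in range(2):
    for y in range(4):
        U[4 * x + (y ^ f(x)), 4 * x + y] = 1

# The x = 0 block is I tensor I, and the x = 1 block separates as
# sigma-x tensor sigma-x: only identities and sigma-x, as promised.
I2 = np.eye(2, dtype=int)
sigma_x = np.array([[0, 1], [1, 0]])
print(np.array_equal(U[0:4, 0:4], np.kron(I2, I2)))            # True
print(np.array_equal(U[4:8, 4:8], np.kron(sigma_x, sigma_x)))  # True
print(np.array_equal(U @ U, np.eye(dim, dtype=int)))           # True
```

The last check confirms that this U-sub-f is its own inverse, hence unitary.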
We wrap up with the topic of relativized vs. absolute growth improvement. When we build a quantum circuit and its accompanying algorithm, we’ll be counting both the number
of gates needed, and the steps used to get the answer. That’ll account for the computational complexity, right?  Wrong.
Until we incorporate the complexity of the oracle, through which every qubit must travel, all those other computations
merely provide a lower bound for the big-O.  It could get worse if that oracle has a more costly growth rate. So how do we accommodate this inconvenient truth? With vocabulary.  If we
don’t know the oracle’s computational complexity, but we do know that our circuit around that oracle has a better growth rate than the classical algorithm – also measured around,
but not including, the function f – then we say we have a “relativized” speed-up. You see, the oracle will have the same time complexity as the function.  And a classical algorithm
will use the function at its core.  So we compare our solution minus the oracle with the classical solution minus the function. If it’s faster, we have relativized speed-up.
On the other hand, if we happen to know the complexity of the function and its oracle, then we do have complete knowledge of the whole enchilada.  Our big-O is going to be
accurate – for that particular function. In that case, we can make hard claims about the growth rates, and when we get speed-up, it’s called "absolute speed-up" over the classical.
With Simon’s algorithm, we’ll only get relativized speed-up, because we won’t know the function’s complexity, and some periodic functions happen to have exponential growth. The same will
be true of Shor’s period-finding algorithm. However, for Shor’s factoring algorithm, we will know enough about the function to claim that it has polynomial growth.  I’ll demonstrate that in
two weeks.  So when we design our polynomially fast algorithm around the oracle and compare it to the exponentially slow classical solution around the function, we’ll have absolute
speed-up.
