I’ve been throwing around terms like “exponential growth” and “time complexity” without formally defining them. This chapter defines these terms and gives us a solid, measurable criterion we can use to test an algorithm’s efficiency. There are 10 pages here, of which you really only need to read the first six. Your customized path is waiting for you, as always, in the table below.

Let’s make a distinction between problems and algorithms. If I ask three engineers to devise a solution to the same problem, they may come up with three totally different algorithms. One may be more efficient or faster than the others. My point is, what we’re going to measure is the algorithm, not the problem. Once you tell me your algorithm, I can tell you its efficiency -- or, to use a more precise term that we’ll define today, its “computational complexity.”

All the algorithms we’re interested in involve
some kind of inherent integer size, call it N. If we’re writing a search or a sorting algorithm, N would be the number of objects in the array to be searched or sorted.
A different problem may be to factor a large number, and the N for that algorithm would just be the number to be factored. Most of our algorithms’ sizes relate to how many inputs a
function has.  In that case, the N might be the number of inputs. Whatever our problem, we expect that the larger N is, the longer our algorithm will take to run. We’d like to
quantify this growth. How much longer does our algorithm require when we double N? How about when we triple or quadruple N?

I’ll use a simple linear search to make a point. If it takes five seconds to search through one million records, how long do you think it would take to search through two million? Twice as long, you say? Calm down. It’s not that easy.
Maybe the key we searched for when we had one million records was the last item in the array, so we had to go through all one million before we found it. But when we doubled N and searched for another key, that one may have appeared at the beginning of the array, and we found it instantly.
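To make that concrete, here is a minimal Python sketch of a linear search (my own illustration, with made-up record counts, not something from the chapter). Whether doubling N doubles the running time depends entirely on where the key happens to sit:

```python
def linear_search(records, key):
    """Scan the list front to back; return the index of key, or -1 if it's absent."""
    for i, record in enumerate(records):
        if record == key:
            return i            # stop as soon as the key turns up
    return -1

one_million = list(range(1_000_000))
two_million = list(range(2_000_000))

# Unlucky with N = 1,000,000: the key is the very last element, so all N records are examined.
print(linear_search(one_million, 999_999))

# Lucky with N = 2,000,000: the key sits at the front, so doubling N cost us nothing this time.
print(linear_search(two_million, 0))
```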
As you can see, it’s a challenging question to answer.  And there’s more than one way to answer it.
The most important answer for us is called “Big-O growth.” That means we always consider the worst case. In searching an array, we always assume
that the record to be found is in the worst possible position, say the last element in the array.
(Of course, the actual worst position always depends on the particular algorithm.) With that nuance, though, we can, indeed, say that our linear search algorithm will take twice as
long if we double N. The mathematical verbiage for this concept takes three lines. We compare our algorithm to some mathematical function, f, say f(N) = N^2 or f(N) = log(N) x N^3. We say that the algorithm is “Big-O of f” if the time the algorithm takes is bounded by f (details to be pondered when you actually read the definition).
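If you’d like to see that “bounded by f” idea in action before you reach the formal definition, here is a small sketch (again my own, with arbitrary sizes) that counts the comparisons a worst-case linear search makes. The count is bounded by f(N) = N, so the search is big-O of N, and doubling N does indeed double the work:

```python
def worst_case_comparisons(n):
    """Comparisons a linear search needs when the key sits in the worst spot: the last slot."""
    records = list(range(n))
    key = n - 1                     # plant the key at the very end
    comparisons = 0
    for record in records:
        comparisons += 1
        if record == key:
            break
    return comparisons

for n in (1_000, 2_000, 4_000):
    print(n, worst_case_comparisons(n))
# 1000 1000
# 2000 2000
# 4000 4000  -- worst-case work grows in lockstep with N.
```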
If our algorithm is big-O of the function f(N) = N^2, then our algorithm is said to be "quadratic." That’s not too bad. If it’s big-O of log(N), our algorithm is "logarithmic." That’s really good. If it’s big-O of 1, our algorithm is "constant." That’s the best. In that case, our algorithm doesn’t care how big N is -- it always takes the same amount of time. At the other end of the spectrum, if it’s big-O of 2^N, that’s really bad. It’s called "exponential."

The two main categories in quantum computing are exponential and polynomial, which covers essentially everything else. Our goal is to solve a problem whose classical algorithm seems to always be exponential (that’s called a hard problem), and to do so with a quantum algorithm that’s polynomial. Well, you’ll read about all this in those short six to ten pages.
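For a rough feel of how far apart those categories sit, here is one more illustrative sketch (the N values are arbitrary) comparing constant, logarithmic, quadratic, and exponential growth:

```python
import math

# Watch each category as N doubles: the polynomial columns grow politely,
# while the 2^N column explodes, which is exactly why an exponential classical
# algorithm makes a problem "hard" and a polynomial quantum algorithm is a win.
for N in (10, 20, 40, 80):
    print(f"N={N:3d}  constant=1  log={math.log2(N):4.1f}  "
          f"quadratic={N*N:5d}  exponential={float(2**N):.2e}")
```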
But there’s one final detail I’d like to preview here. That’s the distinction between time complexity and circuit -- or spatial -- complexity. We can always make our algorithm faster if we’re willing to build a larger hardware circuit that does much of the processing in parallel. So, to be fair, we need to consider both the time and the space complexity together. I’ll do that when I think it’s warranted, but we’re mostly going to be talking about time complexity, because all of our circuits will have spatial complexity of size N or N^2, very small when compared with the exponential complexity of the classical solution. You see, if the time complexity of an algorithm built around one of those circuits is polynomial, then adding in those circuit details won’t spoil our victory, because we’ll still end up with a big-O that’s polynomial, and that’s our main goal.
