[VIDEO PLAYBACK]
DAVID MALAN: --we're bound on how
long this algorithm might take, given
n steps.
So Big-O itself is a formal--
[END PLAYBACK]
DAVID MALAN: So this is
where we really start
to make things formal
in the class, and we
introduce asymptotic notation like
Big-O, Omega, and a little bit of Theta
as well.
And I think it's
helpful to give students
this vocabulary with
which to start to talk
about the running times of algorithms.
DOUG LLOYD: I have to say, when I've
taught section in the past,
I feel like this is a really
tricky topic to teach,
because it gets mathematical, and I
feel like people's eyes can just totally
glaze over.
DAVID MALAN: It is, and so this is why
we don't spend a huge amount of time
on the formal definition of
each of these, where you really
capture the constant factors in
the definitions of Big-O and Omega,
especially.
In years past, I think in 2007,
maybe '08, give or take,
I actually gave the formal
definition of all of these,
and it just felt like too much detail.
And I think we've benefited from a
more intuitive appreciation for what
each of these is.
And even then, I mean, I've
done a couple of angles here,
right, we did a bit of the math where
we show what the recurrence would end up
being, mathematically, and how
you can then factor everything out
and you can actually see
the higher-order terms,
the lower-order terms, and
then start striking them.
And I've never felt super
comfortable with how
we go about striking those lower-order
terms until I gave a proof by example,
which was certainly not a proof,
but when we do the example of,
like, if n is a million, and you really
start to see that a million squared
minus a million divided by two is really
so close, it seems, to just a million
squared.
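That back-of-the-envelope check is easy to reproduce. A quick sketch (in Python, purely for illustration) plugging in n = 1,000,000, showing why the lower-order n term gets struck:

```python
# For a step count like (n^2 - n) / 2, the n term is negligible
# once n is large -- the n^2 term dominates.
n = 1_000_000
exact = (n * n - n) // 2   # the full expression: (n^2 - n) / 2
dominant = n * n // 2      # keep only the n^2 term: n^2 / 2

print(exact)             # 499999500000
print(dominant)          # 500000000000
print(dominant - exact)  # 500000 -- tiny relative to ~5 * 10^11
```

The constant factor of 1/2 then gets struck as well, which is how you land on just O(n²).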
DOUG LLOYD: Right.
DAVID MALAN: And so with that
example, with that proof by example,
so to speak, I think I'm really
trying to get students to appreciate,
or at least just kind of
accept, like, OK, it's
not really that big a deal,
asymptotically, once n
is approaching a really, really,
really big number.
DOUG LLOYD: It's
tricky, though, I think,
to broach this topic in an intro class.
Like, this is-- there
are classes at Harvard,
here, where you can spend an entire
semester learning about Big-O notation.
DAVID MALAN: For sure.
But to be fair, I mean, we use it
at a fairly simple level, right.
I mean, we define them simply:
Big-O is an upper bound,
Omega is a lower bound, and Theta
is when they're one and the same.
But we also take the liberty of
generally only describing the cases
in which these are interesting to us.
So we almost always use Big-O,
if not always use Big-O,
to talk about really
worst case running times.
But you can use Big-O to talk,
of course, about average case
running time, best case running
time, and any number of other cases.
But for us, it's the intuition of, like,
applying an upper bound to the worst
case and, conversely, applying a
lower bound to the best case, that
allows us to have, I think, a pretty
reasonable, pretty accessible
discussion that's
still useful, too, when
it comes to the analysis of
some of the algorithms we
look at in class.
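Concretely, the same algorithm gets both bounds. Linear search, say, is O(n) in the worst case but Ω(1) in the best case. A minimal sketch (in Python, for illustration):

```python
def linear_search(haystack, needle):
    # Worst case: needle is last or absent, so all n elements
    # are checked -- O(n).
    # Best case: needle is first, so one check suffices -- Omega(1).
    for i, value in enumerate(haystack):
        if value == needle:
            return i
    return -1

print(linear_search([3, 1, 4, 1, 5], 4))  # 2
print(linear_search([3, 1, 4, 1, 5], 9))  # -1
```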
DOUG LLOYD: Yeah, it
allows us to explain things
like why bubble sort can be faster
or what that short-circuiting could
mean or translate to numerically.
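The short-circuit Doug mentions can be sketched like this (in Python, for illustration): a bubble sort that stops as soon as a pass makes no swaps, so an already-sorted input finishes after a single pass rather than the full n² worth of comparisons.

```python
def bubble_sort(values):
    """Bubble sort with early exit: stop once a pass makes no swaps."""
    values = list(values)  # sort a copy, leave the input alone
    n = len(values)
    for i in range(n - 1):
        swapped = False
        # Each pass bubbles the largest remaining element to the end,
        # so the last i elements are already in place.
        for j in range(n - 1 - i):
            if values[j] > values[j + 1]:
                values[j], values[j + 1] = values[j + 1], values[j]
                swapped = True
        if not swapped:
            break  # no swaps means the list is sorted; stop early
    return values

print(bubble_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```

That `break` is exactly why bubble sort's best case is Ω(n) even though its worst case is O(n²).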
DAVID MALAN: And I think
a higher-level class to
which we ultimately defer
can go into the nuances of
average case versus best case,
worst case, amortized
case, and so forth.
And that's perhaps where that
level of nuance belongs.
