We start by talking about one of the
simplest norms, called the Frobenius norm.
To motivate it, I'm going to link how matrices are
stored in memory to vector norms, and
then back to the Frobenius norm. Here's
the problem: a matrix is a two-dimensional
array of numbers, while memory tends to be
linear. So a typical way in which matrices
are mapped to memory is to store the first
column of the matrix first, then the next
column, and so on through the final column.
That's known as column-major order
storage, or column-major storage.
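As a small sketch of that idea (the helper name and the example matrix are my own, not from the lecture), here is how column-major flattening might look: entry (i, j) of an m-by-n matrix lands at linear index i + j*m, because all of column 0 is stored first, then column 1, and so on.

```python
def to_column_major(A):
    """Flatten a matrix (given as a list of rows) into a column-major list."""
    m, n = len(A), len(A[0])
    # Walk column by column, and within each column, row by row.
    return [A[i][j] for j in range(n) for i in range(m)]

A = [[1, 4],
     [2, 5],
     [3, 6]]  # a 3-by-2 matrix, written out by rows

print(to_column_major(A))  # column 0 first, then column 1: [1, 2, 3, 4, 5, 6]
```

Note that entry A[i][j] sits at position i + j*m in the flattened list, which is exactly the column-major indexing rule.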
Now, computing a norm of these numbers,
you can take your pick of vector norms,
and the 2-norm is a nice norm. So we could
say that to measure the magnitude of this
matrix, we will simply think of it as a
vector of numbers, and then we can
compute the 2-norm of that vector of
numbers. And with that, we end up
with a norm. Now, it would be tempting to
call that the matrix 2-norm, but it turns
out we're going to use the term "matrix 2-norm" in a slightly different way. This
is instead called the matrix
Frobenius norm, Fro-be-ni-us. Okay, and what
is it? It's just the sum of the squares
of the absolute values of the entries, and
then you take the square root of that.
So for this matrix, it's the square root of
1 squared, plus (-2) squared, plus 0 squared,
plus (-1) squared, plus (-1) squared, plus
3 squared, plus 2 squared, plus 1 squared,
plus (-1) squared, plus 1 squared.
Alright, very simple. Now, you would think
that since we get a vector norm when we
think about the entries as a vector, that
would automatically mean that the
Frobenius norm is indeed a norm. And
we're going to let you work that out.
