SPEAKER: And the reason I call it
strategic practice is kind of that--
they're going to be grouped by
theme, like these are problems
that help me practice this
topic, these are problems that
help you practice that topic and so on.
But just like, you know, as I said, I'm
a chess player and my favorite chess
tactics book, they'll start
out with like a chapter on pins
and then chapter on forks
and chapter on skewers,
so you're just practicing
individual chess tactics,
and then towards the
end, you get the chapters
that mix everything
together and you don't
know whether there's going to be a
fork or a pin or a skewer.
Well, you know, like on
an exam, I'm not going
to tell you whether it's a fork or
a pin or a skewer, but, you know,
by doing a lot of problems, you're
going to be improving your pattern
recognition skills.
So a lot of this course is
about pattern recognition
and that just takes practice, right?
You have to practice as much as you
can, so the more practice problems
you do, the better, right?
I'm not trying to torture
anyone with a lot of problems,
but as I said last time, this is a
difficult course and the best way
to learn it is just by
doing a lot of problems.
So anyway, those are there.
That also serves another purpose: if
you look at the solutions to these
strategic practice problems,
you can think of those as kind of
examples of what, ideally, I
would like to see on the homework.
So when you're writing
up your homeworks, what
I don't want to see is like the kind of
thing I see sometimes, like, you know,
you have like x plus 3 equals
4, and then you kind of decide
to subtract 3 from both sides--
x equals 1, and then you square
it for some reason.
1 squared is still 1, so that's
OK, then you add 7 to get 8.
Put a box around it,
then that's the answer.
That's what you should not do.
What you should do is actually
have words and sentences.
Just because this is a mathematical
class, doesn't mean you
shouldn't be using English and
explaining things as well as equations.
So I'd like you to be as clear
and detailed as possible in just
fully justifying your answers, right?
Like, if the answer is 42, well,
the TF knows that the answer is 42.
The question is, you know, what
reasoning led you to think that?
So clarity is a good word.
And I'd even say honesty.
I hate it when students kind of
like have a very, shall we say,
sloppy argument-- they think
the answer should work out to 12,
and so they just, like,
you know, put in some stuff
somewhere and write it in a very
messy way and come up with 12--
put a box around it.
That's very bad.
So if you don't understand something,
then I'd much rather that you--
well of course, try to talk
to people and figure it out,
but I'd rather you say you don't understand
it than try to make something up
like random gibberish.
I always think of it as like,
you know, if I were considering
hiring you to build a bridge for me,
and I'm thinking of three possibilities,
like either--
if you just tell me,
here's the specifications,
you do this, this, and
this, I'm not going
to have much confidence if you just
say that that's the answer, right?
I'm not going to have blind faith
in-- even if you've successfully
built bridges before, I want to
see why this is going to work.
And on the other hand, if you tell
me a bridge design and it collapses,
do you think I'd prefer that?
Or would I prefer if you tell me, you
don't know how to build the bridge?
Well, if you tell me you don't
know how to build the bridge,
then I would much
rather have that happen.
So clarity, honesty, and words.
You may be used, in other math
classes, to just writing, oh,
the derivative of x squared
is 2x, and maybe you
don't need to write any words, but
I like to see words, justifications,
and thinking, not just equations.
OK?
So that's the style of the homework
and, you know, you can look-- as I said,
you can look at the strategic
practice for some examples of what
that should look like.
Doesn't mean we're going to be
grading based on your grammar
and spelling and things
like that, and it
doesn't mean you have to
write a great American novel,
but it should be
something that you could actually
take your homework-- you should be
able to actually read it out loud
and it would actually sound like
English with some equations thrown in.
But even equations, you
can read out the equations.
In principle, you should be able to do
that, that's what it should look like.
Whereas if you did this, then
you just can't read that.
OK?
So homework's due in a week.
What else?
The review.
So math review handout.
I posted before, but I've
made a few updates to it,
so you can download a new
copy of the math review
and hopefully most of
that material is review,
but you should definitely take
a look at everything in there.
And then today at 2:00--
review sessions are
Fridays at 2:00 in Hall E.
Usually in these Friday
review sessions, we'll just
be reviewing the week's
worth of material,
but today's one, which
Bowe is going to do,
is going to be a math review, so kind
of dovetailing the math review handout.
So again, I mean, that's completely
optional, but some of you
may be rusty on your math
and may find that useful.
It's supposed to be videotaped.
I have no control over when the
video is posted, things like that.
Usually they're posted fairly
soon, but that's not up to me,
but anyway, that will be available.
OK.
Any questions by the way on
anything about the course
that we didn't get to yet?
Before we actually start?
OK.
Any other announcements?
Math reviews, strategic practice, OK.
Oh, and I also want to mention that--
so the homework's due at
the beginning of class
and there's really no leeway with
late homework, because first of all,
it's such a large course;
secondly, I'm going
to post the solutions to the homework
very soon after you turn them in.
But I do drop the two lowest
homework scores, so if it's late,
it would just become
one of your dropped two.
OK.
All right, so last time I was kind of
quickly mentioning some of the areas
where probability is used.
I wanted to just very briefly
continue my little list,
and then we'll start with the naive
definition of probability, which
was kind of the historical
roots of the subject,
but we'll quickly want to
move beyond the naive stage.
OK?
But just to continue my
little list a little bit.
So last time, I just mentioned very
briefly in physics, quantum mechanics,
it's all probability; genetics, you
can't do genetics without probability;
and some of the sciences--
econ, econometrics, and
game theory, and so on.
But I want to mention a few of the less
obvious applications, like history.
Well, you might think, what
does history and probability
have to do with each other?
I just want to mention very quickly
one really famous, beautiful example--
you can look up details online if you want--
Mosteller and Wallace did some beautiful
work studying The Federalist Papers.
They were trying to resolve--
so The Federalist Papers were crucial
documents in the history of the US,
you know, having to do with the
ratification of the US Constitution.
And there's a lot of disagreement
over the authorship-- you know,
who wrote some of them, so that's
an important historical question,
and so they were using
probability, Bayes' rule,
the things we're going to
be doing to address that.
And Mosteller was actually the founder
of the statistics department here,
and he's actually my grand advisor--
so my advisor's advisor, so I
like to talk about Mosteller.
I didn't get to meet him.
Actually, he died in 2006 and I came
here in 2006 so I just missed him.
But anyway.
You know, it is used in history,
and I wanted to mention--
when I mentioned history, it
reminds me that I was also
going to say something about pass/fail.
So if anyone wants to take the
course pass/fail, I do allow it.
The reason is flexibility:
I trust
you to decide what's best for you.
Obviously you have to--
if you're counting this
for concentration credit,
you'll have to check with your
concentration advisor about
whether that would be OK.
It's OK with me.
I don't usually recommend it, but I have
had some cases before of students
who were really afraid to take the
course because they didn't know if they
had enough math and they
did pass/fail, and the story
that this reminds me of here is one--
I had a history concentrator
a few years ago.
He had almost no math
background, just basic calculus--
like 1B, so he was scared to take this,
but he was interested in learning it.
So I let him take it pass/fail
and he really loved it,
and after that, every single elective
he took for the rest of his time
here was a stat course-- he
stayed a history concentrator
but did lots of statistics electives.
And now he's doing a PhD at
Stanford in applying statistics
to political science and history,
so there are examples like that.
So there are more and more applications
in history and government also.
You can check out IQSS at Harvard--
Institute for Quantitative
Social Science.
Just take a look at their web page-- if
you have any interest in social science
and government, look at the IQSS stuff.
There's a huge
amount of activity there
that's at the intersection
of statistics,
political science, history,
government-- all come together.
There are a lot of
applications-- an increasing
number of applications in social
sciences and even the humanities.
Last time I mentioned
finance very quickly,
and I wanted to especially
recommend STAT 123,
which is not offered this year, but it
will be offered next year and probably
the following year-- the prerequisite
is STAT 110 and it's a very interesting
course for many reasons, but--
I won't talk about it more
now, but it is worth--
if you're interested
in finance, then you
should definitely be interested in this.
If you're not interested in
finance, then don't worry about it.
All right.
And then one more is gambling.
And depending on how
cynical you are, you
could say that I've
just repeated myself,
but gambling is fun to talk about.
Gambling is sort of--
it's illegal in a lot of--
most forms of gambling are
illegal in most parts of the US
and it has kind of an unsavory reputation.
And I never know if anyone who has
strong moral objections to gambling
is going to be offended
if I talk about gambling.
It hasn't happened yet, but
let me know if you're offended
and I'll try to be
more careful about it.
But anyway, gambling-- I feel justified
in talking about gambling in STAT 110
because gambling is where
probability came from--
the historical roots of the
subject are exactly in games of chance--
gambling.
And so it gives me a chance to talk
about a little bit of the history.
It also gives us familiar, concrete
examples, like dice and cards
and coins and things like
that that people gamble with.
When I mentioned the math
prerequisite before, I
didn't mention the cards prerequisite.
You should basically know what a standard
52-card deck of cards looks like.
So if you've never
played-- you don't actually
have to know how to play
poker in this class,
but you should be familiar
with a deck of cards.
But you can easily learn that if you
haven't played any card games before.
So this is historically well-motivated.
And it's also just a source
of interesting examples
that we can easily explain without
getting into a lot of technicalities,
and then we can learn things.
So there are some early
examples, I'm just
going to mention Fermat
and Pascal in the mid-1650s.
And there are other examples as well,
but this is arguably the most important
kind of historical root of probability.
So Fermat, you've probably
heard of Fermat's Last Theorem.
He was a famous mathematician
and also a lawyer--
actually, he was a lawyer
and a mathematician
on the side, not the other way around.
Pascal was another
very, very famous guy.
You know, Pascal's Wager, Pascal's
Triangle, Pascal the programming
language before C, and so on.
Very, very famous person.
So anyway, you know, they didn't
have email back then obviously,
so they were writing very long
letters back and forth to each other.
And most of the letters have survived,
you can actually find them online
if you just look up, you know,
Fermat Pascal correspondence.
It's pretty interesting to read
their letters back and forth,
and they were just writing letters
back and forth to each other
analyzing different
gambling games, and it's
like, if you have this gambling
game, what's the probability
that this will happen?
What's the probability that that will happen?
And it was all completely
new at the time.
You know, no one had
mathematically derived these rules
and how to work with
probability, so they
were just developing it just by writing
letters back and forth discussing
gambling.
OK?
So we will discuss some
of those games that they
talked about when we get to it.
And then I mentioned life
at the end last time.
I'll come back to that.
Life.
I like to say statistics is
the logic of uncertainty.
Math is the logic of certainty,
statistics is the logic of uncertainty.
Everyone has uncertainty.
If you're 100% certain of everything,
then there's something wrong with you.
Everyone has a lot of
uncertainties and probability
and statistics are how we
quantify and update our beliefs
and deal with uncertainty.
So that's what this course
is going to be about.
It's going to be about
quantifying uncertainty.
All right?
So now we can get to the naive
definition of probability,
which was the origins of the subject.
So let me tell you what
a sample space is first,
then I'll give you the naive definition.
So a sample space--
sample space just means the set of all
possible outcomes of some experiment.
And we're going to talk about
experiments a lot in this class,
but you should interpret
the word experiment
in an extremely broad manner.
An experiment can be
just anything, right?
Do anything-- as long as there
are certain possible outcomes.
So before the experiment, you
don't know what's going to happen,
because there are
different possible outcomes
and you don't know which
one's going to happen.
You do the experiment and
then something happens.
OK?
So what I just said was
very, very general, right?
And I mean, that could describe
any number of situations.
So a sample space is the set of all
possible outcomes of an experiment.
And we can interpret
experiment however we want.
So this is a very general concept.
All possible outcomes of
an experiment-- and we
might say it's a random
experiment, but I'm not
going to use the word random right
now because we haven't defined it.
I'm just interpreting
this very, very generally.
OK.
And then we need one more concept,
which is that of an event,
and we'll come back to
this, but the earlier you
start thinking about events, the better.
An event is a subset
of the sample space.
And by the way, there's
also like a one-page handout
that's on the course web page called
Probability and Sets or something
like that.
One of the big
breakthroughs in probability
that made it possible to actually treat
this as a mathematical subject instead
of just something more like astrology
was the idea of using sets, OK?
So most of you have seen
like unions and intersections
and things like that, but if
you don't know much, you know,
like basic facts about
set theory like that,
I put a short introduction
into the review handout,
but you definitely need to be
comfortable with unions, intersections,
and complements in this course.
That was a huge breakthrough, because
before, people just kind of tried
to like solve probability problems
by just kind of writing down
some stuff that sounded
intuitive, or reasoning by analogy
and various heuristics.
Most of those heuristics
unfortunately turned out
to be completely wrong.
And we'll talk about some of those
famously wrong heuristics later.
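Since events are just subsets, the basic operations are unions, intersections, and complements; here's a minimal sketch of my own (not from the lecture) trying them out with Python sets for one roll of a six-sided die:

```python
# A minimal sketch of events as subsets, using Python sets for a single
# roll of a six-sided die (my own illustration, not from the lecture).
S = {1, 2, 3, 4, 5, 6}   # sample space: all possible outcomes
A = {2, 4, 6}            # event: the roll is even
B = {4, 5, 6}            # event: the roll is at least 4

union = A | B            # A or B occurs
intersection = A & B     # both A and B occur
complement = S - A       # A does not occur

print(union, intersection, complement)
```

Working with events this way, rather than by verbal intuition, is exactly the breakthrough described here.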
But even-- I should
mention Isaac Newton.
Sort of later than this--
roughly around the same time--
Isaac Newton also-- you know,
Newton is probably among the
top three most famous
mathematician-physicists of all time,
and there were gamblers who were asking
Newton gambling questions as well.
Because at the time, no one
knew how to do probabilities,
so if you were like a degenerate gambler
and you really needed to know the odds,
then you had to go to someone of
the stature of Isaac Newton, Fermat,
or Pascal to get an answer.
You know, that was 300 years ago,
and so one of the cool things
is that after a few
weeks of STAT 110, you'll
be able to easily do calculations
that 300 years ago, you'd
have to consult Isaac Newton about.
And Newton-- we'll talk
about some of Newton's stuff
probably in the next lecture, but
Newton did the calculation correctly,
but even Newton's intuition turned
out to be wrong for one of these.
It's just a gambling problem about dice.
His intuition was wrong.
So one thing that makes
this subject difficult
is that we're going to do a lot
of things that are deeply, deeply
counterintuitive to almost everyone.
And I think that makes
this a fun subject.
It's a lot of fun teaching
it, there are a lot
of paradoxes we'll talk about, a
lot of very surprising results.
So to me, that makes this
more fun than calculus.
When you take a calculus class, I've
never seen anyone shocked by anything.
I mean, the fundamental
theorem of calculus,
which we will need occasionally,
is a very cool result.
It links differential calculus--
so derivatives and integrals
are inverse to each other.
I mean, it's pretty cool,
but it doesn't amaze anyone.
It's not that counterintuitive.
Statistics is full of
counterintuitive stuff, which
just means you have to work
hard, you have to think hard,
and it will become more intuitive
the more you think about it,
the more problems you solve, OK?
But at first, a lot of this might seem
counterintuitive, even to Isaac Newton.
So that's why we need to be more
mathematically precise about it,
because our intuitions can easily
be completely wrong in probability,
so that's why we need to
make it more mathematical.
And then as I said, the breakthrough,
mathematically-speaking,
was to start thinking of
events as subsets, OK?
So I'm going to draw a lot of
Venn diagrams in this class.
Usually I'll call the sample space
capital S. And that-- it's just a set.
The elements of the set are possible
outcomes of the experiment, OK?
So if our experiment is
to roll two dice, OK?
Six-sided dice, then there
are 36 possible outcomes--
we'll actually get to where
the 36 comes from in a bit.
There's 36 possible
outcomes, and then this set
would consist of all those outcomes,
and then an event, let's call it A,
is just some subset.
OK.
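The two-dice picture can be written out directly as a set of outcomes with an event as a subset; this is a sketch of my own (using itertools.product to enumerate the ordered pairs), not part of the lecture:

```python
from itertools import product

# Sample space for rolling two six-sided dice: all ordered pairs of faces
# (my own sketch, treating outcomes as ordered).
S = set(product(range(1, 7), repeat=2))

# An event A is just a subset of S -- say, "the total is 7".
A = {(d1, d2) for (d1, d2) in S if d1 + d2 == 7}

print(len(S))  # 36 possible outcomes
print(len(A))  # 6 outcomes in the event
```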
And so there are a lot of--
we'll get into this more
later, but you should also
look at the handout later about
probability and sets, where we--
the purpose is to connect
intuitive ideas about events
and make that mathematically precise
using unions and intersections
and things like that.
OK?
So we'll get into that
more later, but I just
wanted to get this word out there
to help us now with the naive
definition of probability.
You could also call it the very
naive definition of probability.
So you can only use
this definition when you
have strong justification for doing so.
The definition just says that
the probability of an event A--
so this is our first use of a letter,
capital P. Throughout this course,
capital P means probability, so we
write P of A, where A is an event.
So I'm imagining we have some
experiment that we're considering,
we have this sample space, we have
a subset we're interested in--
A-- because we want to
know what's the chance,
what are the odds that some
particular event will occur?
That's the question we're going to be
considering throughout this course, OK?
So we want P of A.
How do we get that?
Well, that's a hard question, that's
what this entire course is about.
But the naive definition
would be to just say,
that's just the number of
favorable outcomes--
and by favorable, I
mean favorable to A--
divided by the number
of possible outcomes.
So the denominator is just
the size of the sample space,
it's the number of possible outcomes.
And the numerator is just: in how many
of those outcomes did A occur?
OK?
So for example, if we flip a coin--
flip a coin twice, there are
four possible outcomes, right?
Either the coin lands
heads on the first toss
and heads on the second toss, or
heads and then tails, or tails
and then heads, or tails and then tails.
So we have these four
different outcomes, OK?
Now suppose we want to know, what's the
probability that both tosses are tails?
Then according to this, it
would be one quarter, right?
Because we have one-- this
would be the favorable outcome,
there are four of them, 1/4, that's it.
So that's sort of like the high
school definition of probability
as well, to just count how
many possibilities there are,
how many of them did the thing
you want happen, and that's it.
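As a quick sketch of my own (not from the lecture), the naive definition on this example is just counting favorable outcomes over possible outcomes:

```python
from itertools import product

# The four outcomes of flipping a coin twice, assumed equally likely
# under the naive definition ('H' = heads, 'T' = tails).
outcomes = list(product("HT", repeat=2))

# Naive definition: favorable outcomes over possible outcomes.
favorable = [o for o in outcomes if o == ("T", "T")]
p = len(favorable) / len(outcomes)

print(len(outcomes), p)  # 4 outcomes, probability 0.25
```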
But notice, though, I didn't say
anything about, is it a fair coin?
Is it, you know--
and well, that's the question,
is what does it actually
mean for a coin to be fair?
I mean, we have to be careful
of some circularity here.
So if we say a coin is fair, we mean
heads and tails are equally likely, OK?
But even then, that's just talking about
one toss, and if we have two tosses,
well, what if the coin has some kind
of sticky property, such that if it lands tails,
it's more likely to land
tails again the next time?
There are all kinds of different
possibilities that we'll consider.
But the most naive
way to write this down
would be, OK, we have these four cases.
If we treat them all
as equally likely, then
if we want the probability of some
event, we just count how many of those
happen, divide by the number
of things, that's it.
That's the naive definition.
So it has a huge assumption.
It assumes that all
outcomes are equally likely.
And it also assumes that there
are finitely many outcomes.
I'll have to say finite sample space.
So if the outcome of
your experiment could
be like any real number or any integer,
then the denominator would be infinity
and then this is meaningless, OK?
So it has to be--
in order to apply, it has to be
that we have a finite denominator.
And the assumption is that
everything is equally likely, well,
that's a very, very
strong assumption, OK?
Now that's a reasonable
assumption in some problems where
we have some kind of symmetry, right?
Like if we roll a six-sided
die and we think, you know,
if all six sides are equally likely
just because it's a symmetrical cube,
then maybe it's reasonable to say
each one is 1/6, 1/6, 1/6, right?
But it could just as
easily be a loaded die
that's weighted towards one side
more than another and then, you know.
So taken to an extreme, this kind
of led to a dead end at some point
and got kind of ridiculed.
Like, if you take this
to an extreme, you
could say, well, what if I want to
know what's the probability that there
is life on Neptune, OK?
Well, I've never been there.
I don't know if there is
life on Neptune or not,
so either there is or there
isn't, that's two possibilities.
One of them has life, the other
one doesn't, so it'd be 1/2, OK?
So most people would agree,
that's a ridiculous argument.
Despite that, you can find many
examples in the media, in the news--
there's a Daily Show clip I really
like kind of making fun of this,
I'll post the link at some point.
And I've seen various examples
where people are taking seriously
arguments of that form,
or they're using the naive
definition with no justification.
There's no justification for using the
naive definition in that case, right?
And the situation gets even worse.
I just said, well, according to this,
the probability of life on Neptune
is 1/2.
Now what if I asked you instead,
what's the probability that there's
intelligent life on Neptune?
Well, again, either there is or there
isn't, so that would also be 1/2.
But there's something that seems--
something's severely wrong with-- you
know, shouldn't it be
strictly less likely
that there's intelligent life than
that there's any kind of life?
Or there should be a strict
inequality there, OK?
And that's not reflected.
So we'll quickly need to go beyond this.
However, as I said, this is
where the subject got its start--
in gambling-- so
it's still important.
It's important both to understand
how the subject evolved,
but it's also important
for a lot of problems
where we are able to
assume equally likely.
I'm just emphasizing the fact
that you need some justification,
or need to be very clear about
what you're able to assume.
And if you are able to assume that
all the outcomes are equally likely
and there's finitely many, then this
is the definition of probability,
it's perfectly good,
otherwise you can't.
OK?
So in order to be able to actually--
I mean, I did a very,
very simple example here,
I just wrote down the four
outcomes and we did it,
but for anything that's
more difficult than that,
it would become too tedious
to list everything out.
So therefore, the first major topic
in this class is, how do we count?
OK?
So I said that calculus
is a prerequisite.
Counting is not really a prerequisite,
so we're going to start--
I'm trying to make this as
self-contained as possible,
we start with counting.
So some basic principles of counting.
Because if we don't know
how to count, then we
would never be able to compute the
numerator and denominator there, right?
All right.
So there are a couple of
principles that we need.
First one, I don't really know
if it has a standard name,
but I just call it the
multiplication rule.
And it says-- it's a
pretty simple principle,
but it underlies most of
what we'll need for counting,
except for one other counting
method that we'll get to next time.
Multiplication rule says that
if we have an experiment--
again, I'm going to say
this kind of abstractly
because this is a general
principle, and then we'll
see examples of how to use this.
So if we have an experiment with,
let's say, n1 possible outcomes, OK?
So we do some experiment and there
are n sub 1 possible outcomes.
And then we do a second
experiment such that
for each outcome of
the first experiment,
then there are n2 possible
outcomes for the second experiment.
I'll just abbreviate experiment to expt.
There are n2 possible outcomes
for the second experiment--
that should be n2 here--
for a second experiment, et cetera,
do as many experiments as you want.
So I'll just put dot-dot-dot.
Let's say we're going to do r
experiments, and there are n sub r--
and so for each-- no
matter what happened
with the previous r minus
1 experiments, there are
n sub r possible outcomes
for the rth experiment.
OK.
Almost done stating this.
There are n sub r outcomes
possible for the rth experiment.
Then the conclusion-- so we sort of
have all these separate experiments
that we're doing sequentially
one after the other, OK?
Then overall, there
are n1 times n2 times
blah blah blah times nr overall possible
outcomes for the combined experiment.
The combined experiment consists of
doing all these smaller experiments
one after the other.
Overall possible experiment outcomes.
Formally, you can prove
this using induction,
but I would prefer that you
understand why this is true just
by thinking about it,
thinking about examples, OK?
And the way I like to visualize it is
just by drawing a simple tree diagram.
So I promised someone,
while we were waiting outside,
that I'd mention ice cream today.
I like to talk about ice
cream examples with counting
for reasons that I might mention.
Ice cream example-- this is
a very, very simple example,
but once you understand
this example completely,
then all this stuff I
wrote here becomes obvious.
Simple experiment, you
go and get ice cream
and suppose you have different options.
Suppose just for simplicity that
there are only three flavors--
chocolate, vanilla, and strawberry--
and suppose that there are
two different types of cone.
And so you go in, you choose
which type of cone you want
and which type of flavor
you want, that's it.
OK?
That's the experiment.
So that experiment consists
of two parts, right?
The first experiment is you
choose which type of cone you want
and the second experiment is you
choose which flavor you want, right?
So very, very simple.
So I would just depict it like this,
where let's say at the first branch,
you choose either C for a cake
cone or a W for a waffle cone,
and then once you've chosen
which type of cone you want,
then you can either choose
chocolate, vanilla, or strawberry--
CVS-- not a sponsor today, but
maybe it'll get sponsorship later.
Chocolate, vanilla, strawberry, there
we go-- that's the tree diagram.
Once you understand this tree, all this
abstract stuff should become obvious.
How many possibilities are there?
Well obviously there's 6.
1, 2, 3, 4, 5, 6.
Why is it 6?
6 equals 2 times 3.
Now notice also that 6 equals 3 times 2.
We could have-- they're not going
to force you to first choose a cone
and then choose the flavor.
You could choose the flavor
first and then choose the cone
and they should be able
to handle that, right?
So you can draw that tree for yourself,
it could have split three ways and then
two ways, there's still six outcomes.
That is all this is saying, right?
So you can imagine a massive tree
where it branches many times,
millions of branches, but if you
understand this simple little tree,
then you'll see where
that's coming from.
OK?
So you all know about
exponential growth.
6 is still a pretty
small number, but if you
imagine that there are
many branches and each time
it keeps branching different ways,
if you multiply 2 times 2 times
2 many times, it's going
to grow exponentially fast,
like 2 to the 10th power is 1,024.
So if we had 10 choices and each choice
we can only choose between two things,
there's still over 1,000 possibilities.
So these grow very, very,
very fast, and that's
why it's hopeless to try
to just list them out
except for the very simplest problems.
OK, so that's the multiplication rule.
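The ice cream tree above can be sketched in code; this is my own illustration (not part of the lecture), where itertools.product enumerates the combined experiment and the order of the choices doesn't change the count:

```python
from itertools import product

# Multiplication rule on the ice cream example (my own sketch).
cones = ["cake", "waffle"]                        # n1 = 2 choices
flavors = ["chocolate", "vanilla", "strawberry"]  # n2 = 3 choices

combos = list(product(cones, flavors))
print(len(combos))  # 2 * 3 = 6 combined outcomes

# Choosing the flavor first gives the same count: 3 * 2 = 6.
assert len(list(product(flavors, cones))) == len(combos)

# Counts grow exponentially: 10 binary choices already give 2**10 outcomes.
print(2 ** 10)  # 1024
```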
Well let's do one quick example.
Find the probability of a full house in
poker-- and I'll tell you what that is.
I'm assuming you know
what a deck of cards is,
but I'm not assuming you know the
term full house in a five-card hand.
So a standard deck of cards has
52 cards, and you get five cards.
And we're assuming that the
cards are completely shuffled
so that all sets of five
cards are equally likely, OK?
That's the assumption.
OK.
Then I have to tell you
what a full house is.
But if I'm going to use the
naive definition of probability,
I want to know, what's the
number of possible hands?
Well, that's 52 choose 5.
I think most of you have
seen this, sometimes
people write this as like 52C5,
you know, combinations and things--
this is a preferable notation.
I'll remind you what it is
in case you haven't seen it,
but hopefully most of
you have seen that before.
We'll be seeing a lot
of these in this course,
those are called binomial coefficients.
It's pronounced n choose
k and written like that.
And it's defined as n factorial over
n minus k factorial k factorial.
I'm assuming you've seen
factorials before, but if not,
you definitely should make sure
you know what factorials are.
We'll also define this as
0 if k is greater than n.
What this quantity is supposed to
represent is the number of ways--
if you have n people, how many ways can
you choose k out of the n people, OK?
So choose a subset of size k
where order doesn't matter.
So I'll say, the number
of subsets of size k
where order doesn't matter of a
group of n people or n objects.
So if k is greater than n,
it has to be defined as 0,
because you can't choose--
if you have 10 people, you can't
choose 11 of them, it's impossible, OK?
So if k is less than or equal
to n, it's equal to this,
and let me just quickly
tell you the reason why.
If we choose-- we know we need
to make some choices, right?
This follows almost immediately
from the multiplication rule,
so I'll just quickly justify this.
Let's choose the first person, OK?
So we have n people and
we want to select k of them, OK?
Pick the first person.
There are n choices, right?
Because you can pick anyone.
Now, and then the next person could
be anyone except the one you already
chose, so it's going to be n minus 1.
And then the next one is n minus 2.
And it goes all the way like that
until it goes down to n minus k plus 1,
because if k is 1, I want to stop at n.
If k is 2, I want to stop at n minus 1.
So that would be the answer if we were
picking people in a specific order, OK?
But these k people I just selected, I
could have chosen them in any order,
right?
So I have to divide this
by k factorial because I've
over-counted by that factor.
And that's actually the same thing
as n factorial over n minus k
factorial k factorial, because
if you write up these factorials,
all of the stuff is going to cancel
and this is what's left, right?
If you imagine-- n factorial is n times
n minus 1, n minus 2, all the way down to 1;
n minus k factorial is the same
thing, starting at n minus k--
cancel the common stuff, and this is what's left.
OK?
So that's where this thing comes from.
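The argument above is easy to check numerically. Here's a small sketch in Python (the values n = 10, k = 4 are just an illustrative choice; `math.comb` computes n choose k directly):

```python
from math import comb, factorial

n, k = 10, 4

# Ordered selections: n * (n-1) * ... * (n-k+1)
ordered = 1
for i in range(k):
    ordered *= n - i

# Dividing by k! corrects for over-counting the k! orderings
unordered = ordered // factorial(k)

print(unordered)                                          # 210
print(factorial(n) // (factorial(n - k) * factorial(k)))  # 210
print(comb(n, k))                                         # 210
```

All three agree, matching the cancellation argument: the ordered count is n!/(n-k)!, and dividing by k! gives n choose k.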
All right.
Now coming back to this
full house problem,
a full house is defined as having three
cards of one rank and two of another.
For example, three 7's and two 10's, OK?
That's called a full house.
So if we use the naive
definition, which is justified
if we assume that the cards
are completely shuffled,
the denominator is 52 choose 5, because
I'm just choosing 5 cards out of 52
with all possibilities equally likely.
Now let's get the numerator.
For the numerator, so I'll just say--
for a full house then, I'm just
going to write as an example,
three 7's and two 10's.
It really helps to just have some
concrete example in mind, some numbers
to think about.
So we want to-- now
what's the probability?
Well, first of all, I need to choose--
what do I have three of?
I wrote down 7's here, but that
could have been anything, all right?
There are 13 possibilities,
or you could say 13 choose 1.
Now I'm multiplying because I'm
using the multiplication rule, OK?
So in my mind, I'm imagining--
I'm not going to draw the whole tree
because I'd have to draw 13 branches,
but I'm imagining that
at the first branching,
I'm choosing any rank--
ace, 2, 3, 4, whatever.
Now in my mind, I'm imagining I chose
7, so I'm focusing on the 7 branch,
and then it's branching further.
I have 13 choices; in my example
I chose 7's, and I need three 7's.
Well, there are four
7's in a deck of cards
and I need to choose
three out of the four,
so I'm going to multiply by 4 choose 3.
Then I need to choose what the
other one is, so I wrote 10's here,
but it could have been anything.
There are 12 possibilities,
because it could
be anything except 7's in that case.
And then we need two
of those, 4 choose 2.
And that's it.
Now there are other ways
to write the answer, OK?
But I recommend thinking about it this
way, thinking in terms of the tree.
It's a more structured way to do it,
so you're less likely to make a mistake.
If you have some other method you
like for doing these problems,
you can try to do it and then
compute and see if it's the same,
but it helps to think
in terms of the tree--
think in terms of the
multiplication rule.
OK.
Well that's the probability
of a full house.
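The tree-based count translates directly into a short Python computation (a sketch for checking the arithmetic, not part of the lecture):

```python
from math import comb

# Numerator follows the tree: choose the rank we have three of (13 ways),
# choose 3 of its 4 suits, choose the other rank (12 ways),
# and choose 2 of its 4 suits.
full_house = 13 * comb(4, 3) * 12 * comb(4, 2)

# Denominator: all five-card hands, equally likely by assumption.
total = comb(52, 5)

print(full_house, total)        # 3744 2598960
print(full_house / total)       # about 0.00144
```

So a full house occurs in roughly 1 in 694 hands.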
So n choose k is what's
called a binomial coefficient,
but it's also an example
of a choice, right?
We're choosing k things out of
n where order doesn't matter.
So I want to quickly talk about
what happens if the order matters,
that kind of thing.
So what we're doing is sampling,
so I call this the sampling table.
So it's going to be a 2x2 table,
and we'll try to fill it in.
So sampling means we
have some population
of items or people or anything,
and we're drawing a sample.
So we're choosing k
objects out of n, and we
want to know how many
ways are there to do it.
And there are two possibilities.
Either we do it--
I'm going to draw a 2x2 table.
Either we sample with replacement--
replacement means, for example,
imagine we were conducting a survey
and we pick a person and ask
them a bunch of questions
and then we kind of put them back.
With replacement, we're allowed
to pick the same person again.
Without replacement would mean,
then the next person we pick
has to be someone different.
So there are two different
possibilities-- sometimes
we sample with replacement
and sometimes without,
so sampling with or without replacement.
And then the other possibility--
do we care about order or not?
So I'll say order matters
or order doesn't matter.
And we want to fill in this table.
Does everyone understand
the setup of the problem?
We have n objects or n people,
we're going to pick k of them.
In this case, this is with replacement,
so we pick one, put it back; pick one,
put it back, OK?
Until we have k of them.
And order matters, meaning if I pick,
you know, Fred and then I pick John,
that counts as a different possibility
from picking John and then Fred, OK?
So in this case, it's immediate--
how many possibilities there are
is just n to the k.
I don't need to do any
calculation for that, that's just
immediate from the multiplication rule.
There's n choices each time, OK?
So that's it.
Now this corner here,
that's what we just did.
Pick k people without replacement, order
doesn't matter, that's n choose k--
is the number of ways to do that, OK?
Now let's think about this one--
order matters, and we
pick without replacement.
Again, it's immediate from
the multiplication rule, OK?
So you don't have to memorize it.
I don't want you to memorize this
table, I want you to understand, like--
you should understand why this
is immediate, this is immediate,
this is immediate.
I didn't write it yet, but it's n
times n minus 1, blah blah blah,
all the way down to n minus k plus 1.
Because there are n choices
for the first person,
and then since I'm not replacing that
person, there are n minus 1 choices,
and so on, until we get
to n minus k plus 1.
Now this is closely connected to
this as we just showed on that board.
If you take this thing and divide it
by k factorial, you'll get this, OK?
And you should think about
why that makes sense.
OK, so these three boxes were
all very easy to fill in, right?
So hopefully this one will be easy.
Actually, it's not.
You can try to figure this
one out for yourself--
and I think it's good practice
to try to think about it,
but it's very, very difficult, at
least compared to these three, OK?
It's an order of magnitude more
difficult. It turns out
that the answer is n
plus k minus 1 choose k.
And for practice, you may want to try--
just choose some very
small values of n and k
and verify that this is
correct in a small example.
And if you enjoy, you know,
solving these kinds of things,
you could think about why this is true.
We'll prove this next time.
So these three should be obvious to you
once you understand the multiplication
rule.
This one is much more subtle but useful.
So for now, you know, you
should know this result,
but we'll prove it next time.
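If you want to verify all four cells of the sampling table for small n and k, as suggested, here is one way to brute-force it in Python (n = 5, k = 3 are just small test values; `itertools` enumerates each kind of sample directly):

```python
from itertools import (product, permutations, combinations,
                       combinations_with_replacement)
from math import comb, perm

n, k = 5, 3
items = range(n)

# Order matters, with replacement: n^k
assert len(list(product(items, repeat=k))) == n ** k

# Order matters, without replacement: n * (n-1) * ... * (n-k+1)
assert len(list(permutations(items, k))) == perm(n, k)

# Order doesn't matter, without replacement: n choose k
assert len(list(combinations(items, k))) == comb(n, k)

# Order doesn't matter, with replacement: n + k - 1 choose k
assert len(list(combinations_with_replacement(items, k))) == comb(n + k - 1, k)
```

Each cell of the table corresponds to one itertools function, and the counts match the four formulas, including the subtle n plus k minus 1 choose k case.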
So this basically summarizes
most of what we need for counting.
This is all you need for counting--
almost everything for the homework.
And there are-- at this point,
you can do most of the homework.
There are a couple of little
things we'll get to on Wednesday.
Normally I'll cover everything you
need by Monday, but in this case,
we have a holiday, so there's
a couple of loose ends.
Most of the homework you can do
already, so just get started.
OK, so have a good weekend.
