Today we’ll be looking at the Doomsday Argument,
a probability based argument that predicts
the end of humanity sometime in the next few
thousand years.
Also, since it applies to any alien civilization, we’ll see how this argument came to be seen as one of the possible solutions to the Fermi Paradox: the apparent contradiction between just how old and big the Universe is and the seeming absence of any other intelligent life besides ourselves.
There’s more than one Doomsday Argument,
but today we are looking at the version known
as the Carter Catastrophe.
We’ll be looking at what it is stating, how it came about, what it claims and why, and its strengths and weaknesses.
Also this video is something of a companion
video to last week’s episode on the Simulation
Hypothesis and forms a loose trilogy with
the video on Transhumanism and Immortality,
so you might want to watch those first, though it isn’t absolutely necessary.
As is often the case in these videos I’ll be using some more obscure terms at times, so coupled with my speech impediment, I’d suggest you turn on the closed caption subtitles now.
Also, since we will be talking about the Fermi
Paradox, one of the major interests on this
channel, you will occasionally see video links
pop up on the screen like we have now, where
we discussed some topic in more detail.
Clicking on those just pauses this video and
opens that in a new window.
Now the Doomsday Argument is one of those concepts most people hate, and what’s interesting is that their reasons for doing so are often quite different and often fairly logically unsound.
There’s a kneejerk response to the concept a lot of times simply because it is a Doomsday Scenario and it’s also statistical, but I think it is also because people often do a bad job explaining it and often cite related concepts instead of the actual argument and the assumptions it rests on.
It is also a very hard statistical concept to explain in an intuitive fashion,
as I just found out myself trying to craft
this script.
It’s incredibly easy to explain to anyone with a background in Bayesian analysis; trying to circumvent that led to a few drafts where I realized I’d spent about 20 minutes explaining statistics before even stating the basis of the argument, then another 10 explaining the Sleeping Beauty Problem, which is a somewhat creepy thought experiment involving dosing a young lady with sleeping pills and mind-altering drugs.
You knock her out, wake her up, ask her a question, flip a coin, and based on the results drug her with some memory-erasing agent and wake her again.
Now there’s a more disturbing version of
that thought problem that Nick Bostrom thought
up, and for that matter I developed a pretty
morbid one involving clones about a decade
back, unsurprisingly while I was stationed
in a warzone, which eerily resembles a recent
Doctor Who episode called ‘Heaven Sent’,
one of Steven Moffat’s best scripts in my
opinion, but we won’t be looking at either
of those today.
Now the Sleeping Beauty Problem is very relevant
to the Doomsday Argument but we’ll be mostly
bypassing it today, just covering it briefly
near the end. I mention it because it also reminded me that Bostrom had developed what I think is probably one of the most intuitively clear explanations of the Doomsday Argument, so I am going to borrow his explanation with some minor modifications to update it, and I will link his write-up on the matter in the video description below.
Bostrom is also very well known for the Anthropic Principle, which is very heavily tied into the Doomsday Argument, but another fellow who is linked up with that concept, and who actually formally named it, is Brandon Carter, from whom the Doomsday Argument’s alternate name, the Carter Catastrophe, gets its name.
Carter is not a philosopher like Bostrom; he’s a theoretical physicist who got his doctorate under the same professor as Stephen Hawking, the year before Hawking did, and the two developed quite a lot of black hole and general relativity concepts together with some others.
So Carter developed the Anthropic Principle, and in 1983 he gave a presentation on the Doomsday Argument as a thought experiment. He never published it, but it stirred up a lot of thought on the matter and a lot of future papers.
Let me lay out the concept with this thought
experiment, again borrowing heavily from Bostrom’s
explanation.
You have been placed in a building with a
hundred identical rooms without windows, just
a door and you’re locked inside.
You find a note telling you that there are
100 such rooms with a person in each one.
It also says that each door, on the outside
that you can’t see, has been painted either
red or blue.
And that you must guess correctly and only
get one guess.
Now at this point you don’t know anything
but that, and you’ve got a coin in your
pocket and figure you might as well flip it
to decide, because there’s a 50/50 chance
it will be right.
Even if the number of doors painted each color
isn’t the same.
Even if 90 doors are blue and only ten red,
even if all the doors are just one color.
Your coin has a 50/50 chance of being right
and is the best odds for success you’ve
got.
Whichever color your door is, you’ve got
a 50/50 chance of being right with the coin
flip.
Now another note is slipped under the door
and it tells you that, indeed, 90 doors are
blue and 10 are red.
This is new information. You don’t know what your door is, but if you choose to assume you yourself are a random sample of the rooms, and we’ll call this the Self-Sampling Assumption, then you should figure there’s a 90% chance your door is blue and a 10% chance it is red.
Such being the case, by selecting blue you have a 90% chance of being right, and while the coin flip would still give you a 50% chance, it would seem logical to pick blue on that 90% chance offered by the Self-Sampling Assumption, that you yourself are a random sample of the 100 observers.
Okay now let’s switch things up, one hundred
rooms again but this time they have a number
outside, 1-100.
This time your note tells you that the people running this took a list of 110 people and placed you in this room after flipping a coin.
They will not tell you what the result was
or what your room number is, but that on tails
they took the first 100 people on the list
and placed one person in each room, and on
heads they took the last 10 people on the
list and placed a person in each of the first
ten rooms, 1-10.
You are asked to guess if there are 10 people
or 100.
So now you know that if it came up heads, you must be in one of rooms 1-10.
You are after all here.
If it came up tails then you could be in any
of the hundred rooms.
It would seem to be coin-flip odds whether there are 10 or 100 people, and it would initially seem flipping a coin yourself was the logical move.
And indeed, once again that offers a 50/50
chance of being right based on what you know.
Yet at the same time if there are 110 people
on that list, wouldn’t it seem more likely
you were one of the 100, not one of 10?
100 being 91% of 110, isn’t there arguably
a 91% chance you are one of the 100, and only
a 9% chance of being one of the 10?
If you assume you were randomly picked off that list of 110, then simply being here yourself indicates it is more likely you are one of the 100; we’ll call that the Self-Indication Assumption.
There is, incidentally, a huge and constant
battle between supporters of these two assumptions,
SSA and SIA.
But let’s say I get my door open somehow
and can see the number 7 on the outside, but
before I can do anything else I’m chased
back into my room by a monster, and a new note is passed under saying ‘decide now: was the coin heads, 10 people, or tails, 100 people?’
Okay, I have a new piece of information.
I am in room 7.
What does this tell us?
I mean if I’m part of the hundred I could still be in room 7, someone had to be, but if I was one of the 10 I obviously could be too.
So how did I get here, was there a 50% chance
that it came up heads, 10 people, and a 1
in 10 chance I am in room 7, or a 50% chance
it came up tails, 100 people, and a 1% chance
I am in room 7?
Well, using Bayes’ Theorem we can crank that out real quick, and this time it will tell me there’s a 9% chance I am part of the 100, and a 91% chance I am part of the 10.
The exact opposite of the odds I got when I assumed I was more likely to have been part of the 100 on the list of 110.
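For anyone who wants to check that arithmetic themselves, here is a small Python sketch of the Bayes’ Theorem calculation for the room-7 example, using exactly the setup from the thought experiment:

```python
# Hypotheses: heads -> 10 people in rooms 1-10; tails -> 100 people in rooms 1-100.
prior_heads = 0.5
prior_tails = 0.5

# Likelihood of finding yourself in room 7 under each hypothesis,
# treating yourself as a random sample of the occupied rooms.
like_heads = 1 / 10    # room 7 is one of 10 occupied rooms
like_tails = 1 / 100   # room 7 is one of 100 occupied rooms

evidence = prior_heads * like_heads + prior_tails * like_tails
posterior_heads = prior_heads * like_heads / evidence  # chance there are 10 people
posterior_tails = prior_tails * like_tails / evidence  # chance there are 100 people

print(f"P(10 people | room 7)  = {posterior_heads:.0%}")   # 91%
print(f"P(100 people | room 7) = {posterior_tails:.0%}")   # 9%
```

So seeing the low door number shifts the odds from 50/50 to about 91/9 in favor of the small group.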
Now before we proceed I want you to take a
moment to decide how you feel on this.
Do you think, by seeing the #7 on the door,
that we should be assuming we are probably
in the 10, or do you think that’s flawed?
It’s not a trick question or anything.
I just want you to decide if you feel seeing
door #7 makes it more likely you are 1 of
10, and keep your answer in mind.
Okay, let’s step this over to the Doomsday
Argument now.
If you’ve been following the Fermi Paradox
videos then you might remember one of our
categories of solutions was the idea that
high-tech alien civilizations are rare because
they kill themselves off, and we looked at
a lot of those extinction scenarios in the
Fermi Paradox Apocalypse How video.
We’ll call this general set of scenarios
the Doom Soon group, and for our purposes
today it doesn’t matter what causes the
apocalypse anymore than it matters how you
got stuck on that list of 110 or where the
building actually is.
We’ve also explored on this channel the
concepts of colonizing our solar system and
our galaxy and of building giant megastructures
and dyson spheres and how in such setups you’d
expect immense populations where more people
are born every day than ever lived so far,
and how such civilizations could go on for
billions or trillions of years or even longer,
outliving even the stars.
Though in the end time and entropy catch up
to you and your civilization ends.
We’ll call this the Doom Late group.
In the Doom Soon group, if you sat down every person who ever lived and handed them a numbered card based on when they were born, then Adam and Eve get #1 and #2, and based on our most recent estimates you and I would be getting one just over 100 billion, and the last man born would get, say, 1 trillion.
In the Doom Late scenario we give out the
same cards and our actual number is going
to be disgustingly high, especially if we
throw in the Whole Brain Emulation folks operating under the Landauer Limit we discussed in the last couple of videos, since then you can get a total number of human lifetimes out of a galaxy well in excess of a trillion, trillion, trillion, trillion people.
But for simplicity I’ll just go with a million
trillion, or a quintillion people, shrinking
it by about 30 orders of magnitude, the difference
between a grapefruit’s mass and our sun’s.
So Doom Soon, about a trillion people will
ever live, and we’re about a tenth of the
way through that.
Doom Late, way more than a million trillion
will have lived, and we’d be among the tiny, tiny, tiny fraction that was born first.
Conceptually, Doom Soon, with its trillion
people, is like being one of the ten people
in our second, numbered door example.
And Doom Late corresponds to our 100 people
in that same example.
And I already told you that our estimates
put you in at about #100 billion on birth
order.
That corresponds to stepping out to see the
#7 on your door.
You are about #100 billion, and so am I; we’ve all stepped out and seen a low number on our door.
I asked you a little while ago how you felt
about seeing door #7, if you felt that made
it more likely you were part of that 10 room
group than the 100 room group.
If you did feel that was sound reasoning, and it’s pretty solid, then when you realize you are #100 billion in the human race it should seem, on the face of it, that you are way more likely to be part of the Doom Soon group of around a trillion, not the Doom Late group of a million trillion and way more.
I mean in one case it’s like finding out
that for a conference of ten people you randomly
arrived first, and in the other, I would say
it would be like finding out that you randomly
arrived first in an entire stadium except
that barely even begins to describe the level
of improbability of having been born this
early in even the most conservatively small
interstellar civilizations.
That would be more akin to the odds of winning the lottery several weeks in a row.
Only you didn’t win, you kind of lost.
Most of us would rather have been born nowadays
than in ancient times when life was a lot
harder and a lot shorter on average.
And as we discussed in the Transhumanism and Immortality video, most of us who are techno-optimists tend to feel that being born further ahead in time would probably be better.
That was one of our lines of reasoning against the Simulation Hypothesis last time, where we were faced with similarly huge odds against us being the people who actually lived through modern times rather than an Ancestor Simulation, and with the notion that advanced civilizations might feel it was unethical to subject us to these primitive and harsh modern times.
But keep this in mind, in the Doom Late Group,
you are trying to argue that of all the people
in their near endless trillions who will ever
live you and I just happened by freak luck
to have been born in that first fraction of
that first trillion.
Whereas in the Doom Soon Group we’re not
even arguing we’re in the first half.
A lot of times with the Doomsday Argument, when people roll through the math and get that 1-2 trillion figure, they misinterpret it as meaning that is how many people will live, when what it is actually saying is that that is about the most people who could live before the odds start getting ridiculous.
The odds favor it happening a lot sooner, probably within the next 100 billion people, since there have been 100 billion so far, and at current birth rates of 131 million a year that would be about the 28th century, sooner if the birth rate went up.
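That projection is simple enough to sketch in a few lines of Python; the constant birth rate is of course an assumption, since real rates will drift over the centuries:

```python
# Rough projection: how long until another 100 billion people have been born,
# assuming a constant rate of about 131 million births per year.
births_so_far = 100e9       # approximate humans born to date
births_per_year = 131e6

years_remaining = births_so_far / births_per_year
print(f"About {years_remaining:.0f} years")   # roughly 763 years, landing around the 28th century
```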
Now while that’s sinking in let me walk
you through a statistical example.
Some group of experts has calculated that there’s a 1% chance of us all being killed each year.
For our purposes it doesn’t matter which threat: it could be nuclear war, and a lot of people thought 1% would have been generous during the Cold War; asteroids; some new doomsday device that got its diagram leaked all over the internet and can be built by anyone with decent skill and a few thousand bucks; or some designer virus or plague, ebola’s big brother.
Doesn’t matter.
Their spokesman goes before a congressional committee and says, “1%, every year, a 1% chance we’ll be dead each year.”
So the oversight committee chair says, “Hey
wait, are you telling me there’s a 1% chance
we’re going to die every year?
That means there’s a 50/50 chance we’ll
all be dead in 50 years.”
And another committee person says, “And
either way we’ll be dead in a century.
1% a year, 100 years, gone.”
Now at this point the testifying expert says, “No sir, I’m saying there’s a fifty-fifty chance of this happening in about the next 70 years; the odds don’t add up that way.
There’s a 99% chance, or .99 chance, of us surviving each year, so the odds of us surviving 50 years wouldn’t be 50%, it would be .99^50, or 60.5%.
It doesn’t drop to 50% for 69 years, since .99^69 is about 50%, so 69, call it 70 years, a whole human lifetime.
And at a full 100 years it’s .99^100, or .366, a 36.6% chance we’ll still be alive and a 63.4% chance we’ll be dead, not 100%; it will never be 100%.”
“Well, I’m a gambling man,” says one of the committee, “how long before it drops to just 1%?”
“Well, that would be logarithm base .99 of .01, which is about 458 years.
It drops to 1 in 1,000, or .001, at log base .99 of .001, about 687 years, and if you want lottery odds of 1 in 10 million, sir, even that would only be about 1,600 years.”
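The expert’s numbers are easy to reproduce; here’s a short Python sketch of the survival curve under that assumed 1% annual extinction risk:

```python
import math

p_year = 0.99   # assumed 99% chance of surviving any given year

def survival(years):
    """Probability of surviving that many consecutive years."""
    return p_year ** years

def years_until(prob):
    """Years until the cumulative survival odds drop to `prob` (log base .99)."""
    return math.log(prob) / math.log(p_year)

print(f"After 50 years:  {survival(50):.1%} chance alive")    # 60.5%
print(f"After 100 years: {survival(100):.1%} chance alive")   # 36.6%
print(f"50/50 point:      {years_until(0.5):.0f} years")      # 69
print(f"1-in-100 point:   {years_until(0.01):.0f} years")     # 458
print(f"1-in-1,000 point: {years_until(0.001):.0f} years")    # 687
print(f"1-in-10-million:  {years_until(1e-7):.0f} years")     # 1604
```

Same math as the expert’s testimony: the odds compound down, they don’t just add.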
We have a statistical concept for degree of
belief, or certainty.
It’s like when you see all those polls that say something like 52-48% for some pair of candidates, and at the bottom it says +/-3%.
Someone says one candidate is up 4 points, but they’re actually in a statistical tie, because they could be separated by as much as 55-45, or it could go the other way and actually be 49-51.
But it could also be outside that window,
that margin, for error.
Usually we use a 95% confidence level for
such margins, meaning 19 out of 20 times the
result will be in that 3% window, but 1 in
20 it won’t be.
95% is a pretty common pick, and is one of the reasons that on many older Doomsday Argument write-ups you’d see 1.2 trillion: they were using an older estimate of 60 billion for the humans who had ever lived, where the newer estimate is 108 billion, and multiplying 60 billion by 20 gives you 1.2 trillion.
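That multiply-by-20 step falls straight out of the confidence level; here’s the same calculation in Python, with both the old and new birth-count estimates:

```python
# With 95% confidence, a randomly sampled birth rank n falls in the last 95%
# of all N births, i.e. n >= 0.05 * N, which rearranges to N <= n / 0.05 = 20n.
def doomsday_bound(birth_rank, confidence=0.95):
    """Upper bound on total humans ever born, at the given confidence level."""
    return birth_rank / (1 - confidence)

print(f"Old 60-billion estimate:  {doomsday_bound(60e9) / 1e12:.2f} trillion")   # 1.20
print(f"New 108-billion estimate: {doomsday_bound(108e9) / 1e12:.2f} trillion")  # 2.16
```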
In our last example there, the 1% a year chance, getting up to 95% certainty, where there’d only be a 5% chance we’d still be alive, would take about 300 years.
I could say I was 95% certain we’d be dead in 300 years.
Doesn’t mean we have that long.
Could be shorter, could be longer, but not a lot longer, because the odds of survival keep plummeting: from 1 in 2 at 70 years, to 1 in 20 at 300 years, to 1 in 100 at 460 years, to 1 in 1,000 at 700 years, and to only 1 in 10 million at 1,600 years.
If I asked most people if they thought there
was a 1% chance we’d kill ourselves off
in the next year, I’d wager I’d get about
an even mix of folks who nodded and said that
was about right with folks who thought it
was being generous and ones who thought it
was pessimistic on our survival.
So coupled to the Doomsday Argument’s premise that you were randomly born in time, and that it’s improbable you were born very early, this is a pretty strong probabilistic argument.
You weren’t born early; you didn’t get the bad luck to be born at the very dawn of mankind, because mankind won’t sprawl out into an empire over a billion suns lasting for billions of years.
You were probably born right around the middle
of the group, maybe tilted a bit early, maybe
a bit late, but not improbably so, and the
end is coming not that long off.
Now if you decided earlier that seeing door #7 meant you were probably one of ten people and not a hundred, that doesn’t mean you are stuck with the Doomsday Argument.
And most people hate the Doomsday Argument, I do, but if you dislike it, it is important to make sure your reasoning is on solid footing; we don’t discard a logically sound premise because it is distasteful, and it is pretty logically sound.
Fortunately there are counter-arguments, many
of them, some good, some bad, some close to
home, we’ll just cover a few.
For instance, in the Transhumanism video we were constantly talking about post-human states and immortality too.
You could argue a post-human isn’t human, that cyborgs or digital people or some souped-up Strong AI planet-sized computer brain is not a human. I’d argue otherwise, but it’s not a bad notion, and so the Doomsday Argument would hold even though we kept on going, in a way, after that.
And with genuine immortality, since there
are only finite resources, birth rates could
drop to a trickle or stop entirely as those
immortals decided they wanted to keep all
remaining matter and energy for their own
continued existence, and only replace the
few rare deaths or not even that, just dividing
up the deceased’s stockpiles for themselves,
so they could keep going even longer.
The flip side of that: you could say that if a transhuman or post-human is a human, then maybe our 108 billion estimate for people born thus far is being unfair, and we are excluding a lot of proto-humans, or even other primates and dolphins and elephants, or maybe even wider, back to the first critters, adding many millions if not billions of extra years of births and including a lot more from each year too.
Also, as I’ve indicated, the Anthropic Principle,
the Doomsday Argument, and the Simulation
Hypothesis are often linked up together, and
for good reason.
Consider the Simulation Hypothesis we discussed last time; inside that, we don’t know our Birth Order, our real Birth Order, at all.
Sure, the original folks of the 21st century were ranked around 100 billion, but for all we know we could be ancestor simulations running around the real year 4 billion AD, keeping some last immortal company while he sticks around to watch the sun die, the last dude still kicking around from the original 21st century, and he keeps replaying it, so we’re not birth order 100 billion, we’re somewhere up in the gajillions and just don’t know it.
We’re in his version of Groundhog Day.
We also don’t know that there will be a finite number of humans; we tend to assume so, but at the same time most of us tend to think the Universe, and I mean the whole thing, every place, be it alternate universes or alternate dimensions or whatever, the whole grand shebang if you would, is infinite in size and duration.
And if that were the case there would be an
infinite number of places we could call home
and if we had a way of getting there, you
could have an infinite number of people.
Counter-intuitively, like with the St. Petersburg
Paradox, while an incredibly large number
of humans as in the Doom Late Group would
imply it was improbable we were born this
early, an infinite number of people doesn’t
make it improbable at all.
If there’s an infinite number of us, then you are genuinely as likely to be born anywhere in the order as anywhere else.
Another counter-argument, a bit similar, goes back to the Self-Indication Assumption.
Near the beginning, I set up the reasoning that if you were on that list of 110 people, 10 of whom had even odds of ending up in 10 rooms and 100 of whom had even odds of ending up in 100 rooms, you were far more likely to have been in that group of 100 people.
Following from that, if there are a ton of people, as in the Doom Late version with its trillions of trillions of people, you are actually more likely to exist if there were a huge number of us, and so are more likely to exist in a universe in which there were a huge number of us, a Doom Late Universe, than in one in which few of us existed, the Doom Soon Universe.
Yes, it is statistically improbable to be born early in a Doom Late Universe, but it would also be statistically improbable to be born into a Doom Soon Universe rather than a Doom Late one, since there are so many more people you could be in the latter, so it kinda cancels out.
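That cancellation can be made exact with our numbered-door example; here’s a Python sketch combining the SIA weighting with the room-7 evidence:

```python
# Heads -> 10 of the 110 listed people get placed; tails -> 100 of them do.
# You find yourself placed, and you see room #7 on your door.
prior = {"heads": 0.5, "tails": 0.5}
placed = {"heads": 10, "tails": 100}
candidates = 110

joint = {}
for outcome in prior:
    p_exist = placed[outcome] / candidates   # SIA: chance you got placed at all
    p_room7 = 1 / placed[outcome]            # SSA: chance a placed person lands in room 7
    joint[outcome] = prior[outcome] * p_exist * p_room7

total = sum(joint.values())
for outcome, p in joint.items():
    print(f"P({outcome} | placed, room 7) = {p / total:.0%}")   # 50% each
```

The SIA boost toward the 100-person world and the SSA shift toward the 10-person world are exact reciprocals, so you land right back at the coin-flip prior.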
The self-indication assumption, SIA, along
with the Self-Sampling Assumption, SSA, can
be looked at a bit more clearly in the Sleeping
Beauty Problem, as I mentioned earlier, but
I’m opting not to cover that in detail today,
because I’ve noticed I tend not to be great
at explaining that.
Probably because I tilt heavily towards SIA
and am a ‘Thirder’, the enemy camp to
the ‘Halfers’ on the Sleeping Beauty Problem,
so I think I pre-bias my explanations.
Also like Bostrom, who came up with a third
position known as SSSA, Strong Self-Sampling
Assumption, I feel SSA needs to take into
account some other things.
If you do spend some time on the Sleeping
Beauty Problem, after you’ve absorbed it,
try contemplating it for scenarios where the
experiment doesn’t end after day two but
just keeps going on, or where you are using
tons of clones who all begin with the same
memory of events prior to the experiment beginning.
That can be a bit unnerving as you let your mind dwell on it, though, as we just keep drugging someone and erasing their memory, demanding to know how likely they think it is that this is the first time they’ve been woken up, or replace the scenario and question with tons of copies of a person and ask them how likely they think it is they are the original, especially if you’ve already gone through the Simulation Hypothesis video.
Now there are plenty more objections, but I’d rather you pushed the idea around in your head more before looking at those; the ones I just covered relate to our recent topics, so I wanted to cover them.
And one more: we’ve talked a lot about the Fermi Paradox on this channel and this video is in that context, but I haven’t mentioned it much in this video, and yet it does have a specific place.
Now you’re probably assuming that’s as a solution, that the Doomsday Argument applies to alien civilizations as much as to ourselves, and that suggests alien civilizations don’t last long and for that reason we don’t detect them.
And that’s how it often does get viewed,
but there is an additional implication.
If you extend your Self-Sampling Assumption
to include all intelligent beings, not just
humans, which is to say you could have been
born an alien, and we assume aliens are reasonably
common, then our birth order number approach
begins to fall apart, much as it did when
I suggested including our pre-human ancestors
or other smart animals like chimpanzees and
dolphins.
And that seems a nice fix since it means your
birth wasn’t really improbable, there were
probably clever aliens born billions of years
ago.
However, and especially in the context of the Dyson Dilemma, it still raises some Fermi Paradox problems.
If we do assume intelligent civilizations are prone to expanding outward and building Dyson Swarms around everything, then since this hasn’t happened yet, and since we are actually quite early in the Universe’s star-forming age, which is itself pretty short compared to the ages which could follow when life could still be going on, we would still be confronted with the freakish odds of being born this early, only now they are ratcheted up, as we are not just considering all possible humans who will ever exist, but also all possible intelligent beings who will ever exist.
So that’s where we’ll wrap it up for today,
hopefully you’ve got some fun new concepts
and ideas to mull around and better understand
this argument, its strengths, and its flaws,
and maybe have a new take on coin flips too.
Speaking of which, there’s about coin-flip odds I’ll be skipping next week’s video for time constraints.
If I don’t skip it, it will probably be to do one of our shorter upcoming topics, and if I do, one of the longer ones, so I won’t say what the next video will be yet.
If you haven’t already watched them, I’d
suggest going back and watching the videos
on Transhumanism and the Simulation Hypothesis,
or you can try out any of these video playlists.
If you enjoyed the video, don’t forget to
like it and share it, and subscribe to the
channel for alerts when new videos come out.
Questions and comments are welcome, I try
to get to as many as I can, and as the channel’s
been growing it’s attracted a lot of clever
and bright commenters you can mull over the
ideas with too.
So I hope you enjoyed the video, thanks for
watching, and we’ll see you next time!
