Okay.
Toby, you're working on a book at the moment.
Just to start off, kind of tell us about that.
Sure.
Yeah, I've been working on a book for a couple
of years now, and ultimately, I think with
big books - this one is on existential risk
- I think that they're often a little bit
like an iceberg, and certainly Doing Good
Better was, where there's this huge amount
of work that goes on before you even decide
to write the book, coming up with these ideas
and distilling them.
But I'm trying to write really the definitive
book on existential risk.
I think the best book so far, if you're looking
for something before this comes out is John
Leslie's The End of the World.
That's from 1996.
That book actually inspired Nick Bostrom to
some degree to get into this.
I thought about writing an academic book.
Certainly a lot of the ideas that are going
to be included are cutting edge ideas that
haven't been talked about really anywhere
before.
But I ultimately thought that it was better to write something at the really serious end of general nonfiction, to try to reach a wider audience.
That's been a really interesting aspect of
writing it.
And how do you define an existential risk?
What counts as existential risks?
Yeah.
This is actually something where even within
effective altruism, people often make a mistake
here, because the name existential risk, that
Nick Bostrom coined, is designed to be evocative
of extinction.
But it's really designed to say that there's
extinction risk, the risk of human extinction,
but there's also a whole lot of other risks
which are very similar in how we have to treat
them.
They all involve a certain common methodology
for dealing with them, in that they're risks
that are so serious that we can't afford to
have even one of them happen.
We can't learn from trial and error, we have
to have this proactive approach.
The way that I currently think about it is
as risks that threaten the destruction of
humanity's long-term potential.
Extinction would obviously destroy all of
our potential over the long term, as would
a permanent unrecoverable collapse of civilization,
if we were reduced to a pre-agricultural state
again or something like that, and as would
various other things that are neither extinction
nor collapse.
There could be some form of permanent totalitarianism.
If the Nazis had succeeded in a thousand-year
Reich, and then maybe it went on for a million
years, we might still say that that was just
an utter disaster that perhaps would be irrevocable.
I'm not sure that at the time it would have
been possible to do that with the technology
of the time, but as we get more advanced surveillance
technology and genetic engineering and other
things, it might be possible to have lasting
terrible political states and things like
that.
So it includes both extinction and also these
related areas.
And then in terms of your aims with the book, the change you're trying to effect?
Yeah.
One key aim is to introduce the idea of existential
risk to this wider audience.
I think that this is actually one of the most
important ideas of our time.
It really deserves a proper airing, trying
to really get all of the framing right.
And then also, as I said, to introduce a whole lot of new cutting-edge ideas: new concepts, the mathematics of existential risk and other related ideas, lots of the best science, all put into one place.
There's that aspect as well, so it's definitely
a book for everyone on existential risk.
I've learned a lot while writing it, actually.
But also, when it comes to effective altruism,
I think that often we have some misconceptions
around existential risk, and we also have
some bad framings of it.
It's often framed as if it's this really counterintuitive
idea.
There are different ways of doing this. A classic one involves saying there could be 10 to the power of 53 people who live in the future, and if there's only a... and things like this that make it seem unnecessarily nerdy, where you've kind of got to be a math person to really get any pull from that argument.
And even if you are a mathsy person, it feels
a little bit like a trick of some sort, like
some convincing argument that one equals two
or something, where you can't quite see what
the problem is, but you're not compelled by
it.
Whereas actually I think that there's room
for a really broad group of people to get
behind this idea.
There's no reason that my parents or grandparents
couldn't be deeply worried about the permanent
destruction of humanity's long-term potential.
These things are really bad, and I actually
think that it's not a counterintuitive idea
at all.
In fact, ultimately I think that the roots
of existential risk, and worrying about this,
came from the risk of nuclear war in the 20th
century.
My parents were out on marches against nuclear
weapons.
At the time, the biggest protest in US history
was 2 million people in Central Park protesting
nuclear weapons.
It was a huge thing.
It was actually the biggest thing at that
time, in terms of civic engagement.
And so when one can see that there's a real
and present threat that could threaten the
whole future, people really get behind it.
That's also one of the aspects of climate change: people perceive it as a threat to continued human existence, among other things, and that's one of the reasons that motivates them.
So I think that you can have a much more intuitive
framing of this, because the future is so
much longer than the present that some of
the ways that we could help really could be
by helping this long-term future, if there
are ways that we could help that whole time
period.
Okay.
And then looking to the next century, let's
say, where do you see the main existential
risks being?
What are all the ones that we are facing,
and which are the ones we should be most concerned
about?
I think that there is some existential risk
remaining from nuclear war and from climate
change.
I think that both of those are kind of current
anthropogenic existential risks.
The nuclear war risk is via nuclear winter, where the soot from burning cities would rise up into the upper atmosphere, above the cloud level so that it can't get rained out, and would then block sunlight for about eight years or so.
The risk there isn't that it gets really dark
and you can't see or something like that,
and it's not that it's so cold that we can't
survive, it's that there are more frosts,
that the temperatures are depressed by quite
a lot, such that the growing season for crops
is only a couple of months.
And there's not enough time for the wheat
to germinate and so forth, and so there'll
be widespread famine.
That's the kind of threat there.
And then there's climate change. They're both changes in the climate: nuclear winter is a cooling, and climate change is a warming.
I think that the amount of warming that could
happen from climate change is really underappreciated.
There's actually a talk simultaneous with
the second half of mine on this topic, by
John Halstead, who actually worked with me
on looking at some of these things.
I think that it's really underappreciated.
The tail risk, the chance that the warming
is a lot worse than we expect, is really big.
Even if you set aside the actually, I think,
serious risks of runaway climate change, of
big feedbacks from the methane clathrates
or the permafrost, even if you set all of
those things aside, scientists say that the central estimate for doubling CO2 in the atmosphere is three degrees of warming.
But if you look at the fine print, they say
it's actually from one and a half degrees
to four and a half degrees.
That's a huge range.
There's a factor of three between those estimates,
and that's just a 66% confidence interval.
They actually think there's a one in six chance
it's more than four and a half degrees.
So I think there's a very serious chance that
if it doubled, it's more than four and a half
degrees, but also there's uncertainty about
how many doublings will happen.
It could easily be the case that humanity
doubles the CO2 levels twice, in which case,
if you also got unlucky on the sensitivity,
that would be nine degrees of warming.
And so when you often hear these things about
how many degrees of warming they're talking
about, they're often talking about the median
of an estimate.
If they're saying we want to keep it below two degrees, what they mean is they want to keep the median below two degrees, such that there's
still a serious chance that it's much higher
than that.
If you look into all of that, there could
be very serious warming, much more serious
than you get in a lot of scientific reports.
But if you read the fine print in the analyses,
this is in there.
And so I think there's a lack of really looking
into that, so I'm actually a lot more worried
about it than I was before I started looking
into this.
By the same token, though, it's difficult
for it to be an existential risk.
Even if there were 10 degrees of warming, or something beyond what you're reading about in the newspapers, it would be extremely bad, just to clarify.
I've been thinking about all these things
as they relate to, could they directly be
an existential risk, rather than could they
lead to terrible situations, which could then
lead to other bad outcomes?
But one thing is that in both cases, both
nuclear winter and climate change, coastal
areas are a lot less affected.
There's obviously flooding when it comes to
climate change, but a country like New Zealand,
which is mostly coastal, would be mostly spared
the effects of either of these types of calamities.
Civilization, as far as I can tell, should
continue in New Zealand roughly as it does
today, but perhaps without low priced chips
coming in from China.
I really think we should buy some land in
New Zealand.
Like a hedge?
I'm completely serious about this idea.
I don't actually see why it would be.
Effectively, if we really...
I mean, we definitely should not screw this
up.
This is a really serious problem.
It's just that the question I'm looking at is, is it an existential risk?
Ultimately, it's probably better thought of in terms of the usable areas of the Earth. They currently don't include Antarctica, and they don't include various parts of Siberia and some parts of Canada, which are covered in permafrost.
Effectively, the usable parts would move a
bit, and they would also shrink a lot, if
we had this bad climate change.
It would be a catastrophe, but I don't see
why that would be the end.
Between those two, do you think climate change is kind of too neglected by EA?
Yeah, actually, I think it probably is.
Although you don't see many people in EA looking at either of those. I think they're actually very reasonable areas to look into.
In both cases, it's unclear why it would be the end of humanity, and people in nuclear winter research generally do not say that it would be.
They say it would be catastrophic, and maybe
90% of people could die, but they don't say
that it would kill everyone.
I think in both cases, they're such large changes to the Earth's environment, huge unprecedented changes, that you just can't rule out that something we haven't yet modeled happens.
I mean, we didn't even know about nuclear
winter until more than 30 years after the
use of nuclear weapons.
There was a whole period of time when these new effects could have happened, and we would have been completely ignorant of them if we had launched a war.
So there could be other things like that,
and in both cases, that's where I think most
of the danger of existential risk lies, just
that it's such a large perturbation of the
earth's system that one wouldn't be shocked
if it turned out to be an existential catastrophe.
So there are those ones, but I think the things
that are of greatest risk are things that
are forthcoming.
Okay, perfect.
Just a reminder to everybody, before we get on to the really scary stuff: if you want to ask any questions, just use the Bizzabo app, the poll feature, and I will feed them to Toby during the interview.
So tell us about these kinds of unprecedented technologies.
Yeah.
The two areas that I'm most worried about
in particular are biotechnology and artificial
intelligence.
When it comes to biotech, there's a lot to
be worried about.
If you look at some of the greatest disasters
in human history, in terms of the proportion
of the population who died in them, great
plagues and pandemics are in this category.
The Black Death killed between a quarter and 60% of people in Europe, which was somewhere between 5 and 15% of the entire world's population.
And there are a couple of other cases that
are perhaps at a similar level, such as the
spread of Afro-Eurasian germs into the Americas
when Columbus went across and they exchanged
germs.
And also, say, the 1918 flu killed about 4%
of the people in the world.
So we've had some cases that were big, really
big.
Could they be so big that everyone dies?
I don't think so, at least for the natural cases.
But maybe.
It wouldn't be silly to be worried about that,
but it's not my main area of concern.
I'm more concerned there with advances that
we've had.
We've had radical breakthroughs recently.
It's only recently that we've discovered even
that there are bacteria and viruses, that
we've worked out about DNA, and that we've worked out how to take parts of DNA from one organism and put them into another.
How to synthesize entire viruses just based
on their DNA code.
Things like this.
And these radical advances in technology have
let us do some very scary things.
And there's also been this extreme democratization, as it's often called, of this technology, but since the technology could be used for harm, it's also a form of proliferation, and so I'm worried about that.
It's moving very quickly. You'll probably all remember when the Human Genome Project was first announced. That cost billions of dollars, and now a complete human genome can be sequenced for $1,000. It's kind of a routine part of PhD work that you get a genome sequenced.
These things have come so quickly. And other things like CRISPR and gene drives, these really radical technologies, CRISPR for putting arbitrary genetic code from one animal into another, and gene drives for releasing it into the wild and having it proliferate, went from being invented by the cutting-edge labs in the world, the very smartest scientists, Nobel Prize-worthy stuff, to being replicated by undergraduates in science competitions in less than two years.
Just two years, and so if you think about
that, the pool of people who could have bad
motives, who have access to the ability to
do these things, is increasing massively,
from just a select group of people where you
might think there's only five people in the
world who could do it, who have the skills,
who have the money, and who have the time
and everything to do it, through to a thing
that's much faster and where the pool of people
is in the millions.
There's just much more chance you get someone
with bad motivation.
And there's also states with bioweapons programs.
We often think that we're protected by things
like the Bioweapons Convention, the BWC.
That is the main kind of protection, but there are states violating it: we know that the Russians have been violating it for a long time.
They had massive programs with more than 10,000
scientists working on versions of smallpox,
and they had an outbreak when they did a smallpox
weapons test, which has been confirmed, and
they also killed a whole lot of people with
anthrax accidentally when they forgot to replace
a filter on their lab and blew a whole lot
of anthrax spores out over the city that the
lab was based in.
There are really bad examples of bio-safety
there, and also the scary thing is that people
are actually working on these things.
The US believes that there are about six countries
in violation of this treaty.
Some countries, like Israel, haven't even signed up to it.
And also the BWC has the budget of a typical McDonald's restaurant, and it has four employees.
So that's the thing that kind of stands between
us and misuse of these technologies, and I
really think that that is grossly inadequate.
The Bioweapons Convention has four people
working in it?
Yeah.
It had three.
I had to change it in my book, because a new
person got employed.
How does that compare to other sorts of conventions?
I don't know.
It's a good question.
Those are the types of reasons that I'm really worried about developments in bio.
Yeah.
And what would you say to the response that
it's just very hard for a virus to kill literally
everybody, because they have this huge bunker
system in Switzerland, nuclear submarines
have six-month tours and so on?
Obviously, this would be an unimaginable tragedy for civilization, but there would still be enough people that, over some period of time, populations would increase again.
Yeah.
I mean, you could add to that un-contacted
tribes and also researchers in Antarctica
as other hard-to-reach populations.
I think it's really good that we've diversified
somewhat like that.
I think that it would be really hard, and
so I think that even if there is a catastrophe,
it's likely to not be an existential disaster.
But there are reasons why someone might try to push something to be extremely dangerous.
For example, as I said, the Soviets, and then the Russians after the collapse of the Soviet Union, were working on weaponizing smallpox and weaponizing Ebola. It was crazy stuff, with tens of thousands of people working on it.
And don't forget that they were involved in a mutually assured destruction nuclear standoff with a dead hand policy, where even if their command centers were destroyed, they would still ensure retaliation with all of their weapons.
There was this logic of mutually assured destruction
and deterrence, where they needed to have
ways of plausibly inflicting extreme amounts
of harm in order to try to deter the US.
So they were already involved in that type
of logic, and so it would have made some sense
for them to do some of these terrible things,
assuming that logic makes any sense at all.
So I think that there could be realistic attempts
to do that with it.
I should also say that I think this is an area that's under-invested in within EA. I would say that the existential risk from bio is maybe about half that of AI, or a quarter or something like that. So a factor of two or four in how big the risk is.
If you recall, in effective altruism we're not just interested in working on the problem that's the biggest; we're interested in where you can do the most good, what impact your work will have.
And it's entirely possible that someone would
be more than a couple of times better at working
on trying to avoid bio problems than they
would be on AI problems.
And also, the community of EAs who are working on it is much smaller as well, so one would expect there are good opportunities there.
But it does require quite a different skillset. Because in bio a lot of the risk is misuse risk, whether from lone individuals, small groups, or nation states, it's much more of a security-type area, where someone working in biosecurity might be talking a lot with national security programs and so forth.
It's not the kind of area where one wants free and open discussion of all of the different possibilities.
And one also doesn't want to just say, "Hey,
let's have this open research forum where
we're just like on the internet throwing out
ideas, like how would you kill every last
person?
Oh, I know, what about this?"
So we don't actually want that kind of discussion
about it, which puts it in a bit of a different
zone.
But for people who are actually able to not talk about things that they find interesting and fascinating and important, which a lot of us have trouble with, and who also perhaps already have a bio background, it could be a very useful area.
Okay.
And so even though EA in general is taking these risks more seriously than maybe most people, you think we're still neglecting this relative to the EA portfolio?
I think so.
And then AI, I think, is probably the biggest
risk.
Okay, so tell us a little bit about that.
Yeah.
You may have heard more than you ever want
to about AI risk.
But basically, my thinking about this is that the reason humanity is in control of its destiny, and the reason we have such a large long-term potential... if you think about the set of all the things that we could achieve if we really set our minds to it, how good the future could be... the reason our potential is so great is that we are the species that's in control.
For example, gorillas are not in control of
their destiny.
Whether they flourish or not, and I hope that they will, depends upon human choices.
We're not in such a position relative to any other species, and that's because of our intellectual abilities: both what we think of as intelligence, the problem-solving type of thing, and also our ability to communicate and cooperate, which is a key part of the story as well.
But these intellectual abilities have given
us the position where we have the majority
of the power on the planet, and where we have
the control of our destiny.
If we create some artificial intelligence,
generally intelligent systems, and we make
them be smarter than humans and also just
generally capable and have initiative and
motivation and agency, then by default, we
should expect that they would be in control
of our future, not us.
Unless we made good efforts to stop that.
But the relevant professional community, who
are trying to work out, how could you do that?
How could you guarantee that they obey commands
or that they're just motivated to help humans
in the first place, so you don't need to command
them?
They think it's really hard, and they have
higher estimates of the risk than anyone else.
That's the kind of situation we're in.
There's disagreement about it, but some of the most prominent AI researchers are attempting to build such generally intelligent systems. This is their goal, to build human-level or beyond general intelligence. It's not the goal of the whole community, but of parts of it. They're trying to do it, and they're also very scared about it.
There are a couple of people who say that this is a really fringe position in AI, but they're actually either just lying or incompetently ignorant, because they should notice that Stuart Russell and Demis Hassabis are very prominently on the record saying this is a really big issue.
So I think that should give us a whole lot of reason to expect that creating a successor species could well be the last thing we do.
And maybe we'd create something that also
is even more important than us, and it would
be a great future to create a successor.
It would be effectively our children or our
mind children, maybe.
But also, we don't have a very good idea how
to do that.
We have even less of an idea about how to
create artificial intelligence systems that
have themselves moral status and have feelings
and emotions and kind of strive to achieve
greater perfections than us and so on.
More likely it would be for some more trivial
ultimate purpose.
Those are the kinds of reasons that I'm worried about it.
Yeah, you hinted at this briefly, but over the next hundred years, let's say, what overall chance would you assign to some existential risk event, and how does that break down between these different risks you've suggested?
Yeah.
I would say something like a one in six chance
that we don't make it through this century.
I think that there was something like a one
in a hundred chance that we didn't make it
through the 20th century.
Overall, we've seen this dramatic trend towards
humanity having more and more power, often
increasing at exponential rates, depending
on how you measure it.
But there hasn't been this kind of similar
increase in human wisdom, and so our power
has been outstripping our wisdom.
The 20th century is the first one where we
really had the potential to destroy ourselves.
I don't see any particular reason why we wouldn't
expect, then, the 21st century to have our
power even more outbalance our wisdom, and
indeed that seems to be the case.
We also know of particular technologies that look like they could make this happen.
And then the 22nd century, I think would be
even more dangerous.
I don't really see a natural end to this until
we discover almost all the technologies that
can be built or something, or we go extinct,
or we get our act together and decide that
we've had enough of that and we're going to
make sure that we never suffer any of these
catastrophes.
I think that that's what we should be attempting
to do.
If we had a business as usual century, I don't
know what I'd put the risk at for this century.
A lot higher than one in six.
My one in six is because I think that there's
a good chance, particularly later in the century-
Lower than one in six?
Sorry.
I think the chance that we would not make
it through, the chance we suffer some kind
of existential catastrophe this century, is
about one in six.
The reason that it's only one in six, and
not, say, a half or something, is because
I think that there's a good chance that we
will get our act together.
Okay, cool.
Okay.
So if no one really cared, no one was really taking action, it would be more like 50/50?
Yeah, if it was pretty much like it is at the moment, kind of run forward, then yeah.
I'm not sure.
I haven't really tried to estimate that, but
it would be something, maybe a third or a
half.
Okay.
And then within that one in six, how does
that break down between these different risks?
Yeah.
Again, these numbers are all very rough, I
should clarify to everyone, but I think it's
useful to try to give quantitative estimates
when you're giving rough numbers, because
if you just say, "I think it's tiny," and
the other person says, "No, I think it's really
important," you may actually both think it's
the same number, like 1% or something like
that.
I think that I would say AI risk is something
like 10%, and bio is something like 5%.
And then the others are less than a percent?
Yeah, that's right.
I think that climate change and...
I mean, climate change wouldn't kill us this
century if it kills us, anyway.
And nuclear war, definitely less than a percent.
And probably the remainder would be more in
the unknown risks category.
Maybe I should actually have even more of
the percentage in that unknown category.
Yeah.
Well, let's talk a little bit about that.
There was a question from the audience as
well, of how seriously do you take unknown
existential risks?
I guess they are known unknowns, because we
know there are some.
Yeah.
How seriously do you take them, and then what
do you think we should do, if anything, to
guard against them?
Yeah, it's a good question.
I think we should take them quite seriously.
If we kind of think backwards, and think what
risks would we have known about in the past,
we had very little idea.
Only two people had any idea about nuclear bombs in, let's say, 1935 or something like that, a few years before work on designing the bomb first started.
It would have been unknown technology for
almost everyone.
And if you go back five more years, then it
was unknown to everyone.
With these issues about AI and, actually, man-made pandemics, there were a few people talking about these things very early on, but only a couple, and it might have been hard to distinguish them from the noise.
But I think ultimately, we should expect that
there are unknown risks.
There are things that we can do about them.
One of the things that we could do about them
is to work on things like stopping war.
Now, you mentioned Brian Tse is talking on this tomorrow, which sounds fantastic. I'm thinking here of, say, avoiding great power war, as opposed to avoiding every particular war. Some wars have no real chance of causing existential catastrophe.
But things like World War I and World War II or the Cold War were cases where they did have that plausibility, or at least World War II and the Cold War did.
I think the way to think about this is not
that war itself, or great power war, is an
existential risk, but rather it's something
else, which I call an existential risk factor.
I take inspiration in this from the Global Burden of Disease, which looks at different diseases and shows how much, say, heart disease causes mortality and morbidity in the world, adding up a number of disability-adjusted life years for that.
They do that for all the different diseases,
and then they also want to ask questions like
how much ill health does smoking cause, or
alcohol?
You can think of these things as these pillars
for each of the different particular diseases,
but then there's this question of cross-cutting
things, where something like smoking increases
heart disease and also lung cancer and various
other aspects, so it kind of contributes a
bit to a whole lot of these different things.
And they ask the question, well, what if you
took smoking from its current level down to
zero?
How much ill health would go away?
They call that the burden of this risk factor,
and you can do that with a whole lot of things.
Not many people think about this, though,
within existential risk.
I think our community tends to fixate on particular risks a bit too much. If someone's really interested in existential risk and they hear that someone works on asteroid prediction and deflection, they'll say, "That's really cool," and treat them as part of the kind of in-group or the team or something.
And if they hear that someone else works on
global peace and cooperation, then they'll
think, "Oh, I guess that might be good in
some way."
But actually, ask yourself: how much existential risk is there this century? I just said one in six. But if you asked, "What if we knew there was going to be no great power war?", how much would it go down from, say, 17%?
I don't know.
Maybe down to 10% or something like that,
or it could halve.
It could actually have a very big effect on the amount of risk.
And if you think about, say, World War II,
that was a big great power war, they invented
nuclear weapons during that war, because of
the war.
And then we also started to massively escalate
and invent new types of nuclear weapons, thermonuclear
weapons, because of the Cold War.
So war has a history of really provoking this, and I think it really connects with the risks that we don't yet know about, because one way to try to avoid those risks is to try to avoid war, since war has a tendency to drive us to delve into these dark corners of technology space.
So I think that's a really useful idea that
people should think about.
Yeah, the risk of being wiped out by asteroids
is in the order of one in a million per century.
I think probably lower.
Whereas, as I just said, great power war,
taking that down to zero instead of taking
asteroid risk down to zero, is probably worth
multiple percentage points of existential
risk, which is way more.
It's like thousands of times bigger.
While a certain kind of nebulous peace-type effort might have a lot of people working on it, and so might not be that neglected, I think trying to avoid great power war in particular, thinking about the US and China and Russia and maybe the EU and trying to avoid any of these poles coming into war with each other, is actually quite a lot more neglected.
So I think that there would be really good opportunities to try to help with these future risks in that way.
And that's not the only one of these existential
risk factors.
You could think of a whole lot of things like
this.
Yeah.
Do you have any views on how likely a great
power war is over the next century then?
I would not have a better estimate of that
than anyone else in the audience.
Okay.
Reducing great power war is one way of addressing these kinds of unknowns. Another way of making us slightly more robust might be things like refuges, or other kinds of greater detection measures, or backing up knowledge in certain ways.
David Denkenberger's work with ALLFED and
so on.
What's your view on these sorts of activities that are about ensuring that small populations of people, after a global catastrophe that isn't an extinction event, are able to flourish again rather than just dwindle?
It sounds good.
Definitely, the sign is positive.
How good it is compared to other kinds of
direct work one could do on existential risk,
I'm not sure.
I tend to think that, at least assuming we've
got a breathable atmosphere and so on, it's
probably not that hard to come back from the
collapse of civilization.
If you look at the history of this, and it
sounded like Rose is going to be talking about
history, I've been looking a lot when writing
this book at the really long-term history
of humanity and civilization.
And one thing that I was surprised to learn
is that the agricultural revolution, this
ability to move from hunter-gatherer, forager-type
life, into something that could enable civilization,
cities, writing, and so forth, that that happened
about five times in different parts of the
world.
So sometimes people, I think mistakenly, refer
to Mesopotamia as the cradle of civilization.
That's a very western approach.
It was actually independently... there are
many cradles, and there were civilizations
that started in North America, South America,
New Guinea, China, and Africa.
So actually, I think every continent except
for Australia and Europe.
And ultimately, these civilizations kind of
have merged together into some kind of global
amalgam at the moment.
And they all happened at a very similar time,
like within a couple of thousand years of
each other.
That happened basically as soon as the most recent ice age ended and the rivers started flowing and so on; around these very rivers, these civilizations developed.
So it does seem to me to be something that
is not just some kind of complete fluke or
something like that.
I think that there's a good chance that things would bounce back, but work to try to help with that could still be valuable, particularly doing the very first bits of it.
As an example, printing out copies of Wikipedia,
putting them in some kind of dried out, airtight
containers, and just putting them in some
places scattered around the world or something,
is probably this kind of cheap thing that
an individual could fund, and maybe a group
of five people could actually just do.
We're still in the case where there are a
whole lot of things you could do, just-in-case
type things.
I wonder how big Wikipedia is when you print
it all out?
Yeah, it could be pretty big.
You'd probably want to edit it somehow.
You might.
Justin Bieber and stuff.
Yeah, don't do the Pokemon section.
Okay, so one question from the audience, which touches on what is, I think, an important part of your book: what are the non-consequentialist arguments for caring about existential risk reduction?
And something that's distinctive about your
book is you're trying to unite various moral
foundations.
Yeah, great.
That's something that's very close to my heart.
And this is part of this idea that I think
that there's a really common sense explanation
as to why we should care about these things.
A lot of the case is that it's just not salient to people that there are these risks, and that's the reason they don't take them seriously, rather than because they've thought seriously about it and decided that they don't care whether everything that they've ever tried to create and stand for in civilization and culture is destroyed.
I don't think that many people explicitly
think that.
But my main approach, the kind of guiding
light for me, is really thinking about the
opportunity cost, so it's thinking about everything
that we could achieve, and this kind of great
and glorious future that is open to us and
that we could do.
And actually, the last chapter of my book
really explores that and looks at the epic
durations that we might be able to survive
for, the types of things that happen over
these cosmological time scales that we might
be able to achieve.
That's one aspect, duration.
I think it's quite inspiring to me.
And then also the scale of civilization could
go beyond the Earth and into the stars.
I think there's quite a lot that would be
very good there.
But also the quality of life could be improved
a lot.
People could live longer and healthier in
various obvious ways, but also they could...
If you think about your peak experiences, the moments that really shine through, the very best moments of your life, they're so much better, I think, than typical experiences. Even within human biology, we are capable of having these experiences, which are much better, much more than twice as good as the typical experiences. Maybe we could get much of our life up to these types of levels.
So I think there's a lot of room for improvements
in quality as well.
These ideas about the future really are the
main guide to me, but there's also these other
foundations, which I think also point to similar
things.
One of them is a deontological one, where
Edmund Burke, one of the founders of political
conservatism, had this idea of the partnership
of the generations.
What he was talking about there was that we've
had ultimately a hundred billion people who've
lived before us, and they've built this world
for us.
And each generation has made improvements,
innovations of various forms, technological
and institutional, and they've handed down
this world to their children.
It's through that that we have achieved this
greatness.
Otherwise, we know what it would be like.
It would be very much like it was on the savanna
in South Africa for the first generations,
because it's not like we would have somehow
been able to create iPhones from scratch or
something like that.
Basically, if you look around, pretty much every single thing you can see, other than, I guess, the people in this room, was built up out of thousands of generations of people working together, passing down all of their achievements to their children.
And it has to be.
That's the only way you can have civilization
at all.
And then the question is: is our generation going to be the one that breaks this chain, that drops the baton and destroys everything that all of these others have built?
It's an interesting kind of backwards-looking
idea there, of debts that we owe and a kind
of relationship we're in.
One of the reasons that so much was passed
down to us was a kind of expectation of continuation
of this.
I think that's, to me, another quite moving way of thinking about this, which doesn't appeal to thoughts about the opportunity cost that would be lost in the future.
And another one that I think is quite interesting
is a virtue approach.
Often, when people talk about virtue ethics, they're thinking about character traits which are particularly admirable or valuable within individuals.
I've been increasingly thinking while writing
this book about this at a civilizational level.
If you think of humanity as a group agent, the kind of collective things that we do, in the same way as we might think of, say, the United Kingdom as a collective agent and talk about what the UK wants when it comes to Brexit or some question like that, then I think we're incredibly imprudent.
We take these risks, which would be insane risks if an individual were taking them: relative to the lifespan of humanity, it's equivalent to an individual risking their whole future life to make the next five seconds a lot better or something like that. And we do this with no real thought about it at all, no explicit questioning of it or even calculating it out or anything; we just blithely take these risks.
I think that we're very impatient and imprudent.
I think that we could do with a lot more wisdom,
and I think that you can actually also come
from this perspective.
When you look at it, it does not look like
how a wise entity would be making decisions
about its future.
It looks incredibly juvenile and immature
and like it needs to grow up.
And so I think that's another kind of moral
foundation that one could come to these same
conclusions through.
Thanks.
Another question from the audience, then,
was about timelines on the development of
general artificial intelligence, or plug in
some precise definition there.
What are your views on that?
How has that changed over the course of writing
the book, if at all, as well?
Yeah.
I guess my feeling on timelines has certainly
changed over the last five or 10 years.
Ultimately, the deep learning revolution has gone very quickly, and in terms of the remaining things that need to happen before you get artificial general intelligence, there really aren't that many left.
Progress seems very quick, and there doesn't
seem to be any fundamental reasons why the
current wave of technology couldn't take us
all the way through to the end.
Now, it may not.
I hope it doesn't, actually.
I think that would just be a bit too fast,
and we'd have a lot of trouble handling it.
But I can't rule out it happening in, say,
10 years or even less.
Seems unlikely.
I guess my best guess for kind of median estimate,
so as much chance of happening before this
date as happening after this date, would be
something like 20 years from now.
But also, if it took more than 100 years, I wouldn't be that surprised.
I allocate, say, a 10% chance or more to it
taking longer than that.
But I do think that there's a pretty good
chance that it happens within, say, 10 to
20 years from now.
Maybe there's like a 30, 40% chance it happens
in that interval.
That is quite worrying, because this is a
case where I can't rely on this idea that
humanity will get its act together.
I think ultimately the case with existential
risk is fairly clear and compelling.
This is something that is worth a significant
amount of our attention and is one of the
most important priorities for humanity.
But we might not have been able to make that
case over those time periods, so it does worry
me quite a bit.
Another aspect here, which gets a bit confusing, and is sometimes confused within effective altruism, is this: try to think about the timelines that you think are most plausible, so you can imagine a probability distribution over different years for when it would arrive.
But then there's also this aspect that your
work would have more impact if it happened
sooner, and I think this is a real thing,
such that if AI is developed in 50 years'
time, then the ideas we have now about what
it's going to look like are more likely to be wrong.
Trying to do work now that involves these
current ideas will be more shortsighted about
what's actually going to even help with the
problem.
And also, there'll be many more people who've
come to work on the problem by that point,
so it'll be much less neglected by the time
it actually happens, whereas if it happens
sooner, it'll be much more neglected.
Your marginal impact on the problem is bigger
if it happens sooner.
You could start with your overall distribution
about when it's going to happen, and then
modify that into a kind of impact-adjusted
distribution about when it's going to happen.
That's ultimately the kind of thing that would
be most relevant to when you think about it.
Effectively, this is perhaps just an unnecessarily
fancy way of saying, one wants to hedge against
it coming early, even if you thought that
was less likely.
But then you also don't want to get yourself
all confused and then think it is coming early,
because you somehow messed up this rather
complex process of thinking about your leverage
changing over time, as well as the probability
changing over time.
I think people often do get confused.
They then decide they're going to focus on
it coming early, and then they forget that
they were focusing on it because of leverage
considerations, not probability considerations.
In response to the hedging, what would you
say to the idea that, well, in very long timelines,
we can have unusual influence?
So supposing it's coming in 100 years' time,
I'm like, "Wow, I have this 100 years to kind
of grow.
Perhaps I can invest my money, build hopefully
exponentially growing movements like effective
altruism and so on."
And this kind of patience, this ability to
think on such a long time horizon, that's
itself a kind of unusual superpower or way
of getting leverage.
That is a great question.
I've thought about that a lot, and I've got
a short piece on this online.
Can't remember what it's called.
The Timing of Labour Interests?
Yeah, that's it.
It's not a great name.
The Timing of Labour Aimed at Existential
Risk Reduction or something like that.
And what I was thinking about was just this
question about, suppose you're going to do
a year of work.
Is it more important that that year of work
happens now or that a year of work happens
closer to the crunch time, like when the risks
were there?
And you could apply this to other things besides existential risk as well.
Ultimately, I think that there are some interesting
reasons that push in both directions, as you've
suggested.
The big one that pushes towards later work,
such that you'd rather have the year of work
be done in the immediate vicinity of the difficult
time period, is something I call nearsightedness.
We just don't know what the shape of the threats is.
I mean, as an example, it could be that now
we think AI is bigger than bio, but then it
turns out within five or 10 years' time that
there've been some radical breakthroughs in
bio, and we think bio's the biggest threat.
And then we think, "Oh, I'd rather have been
able to switch my labor into bio."
So that's an aspect where it's better to be
doing it later in time, other things being
equal.
But then there are also quite a few reasons why it's good to do things earlier in time, and these include, as you were suggesting, growth. There are various kinds: your money in a bank or an investment could grow, such that you do the work now, you invest the money, the money's much bigger, and then you pay for much more work later.
Obviously, there's growth in terms of people
and ideas, so you do some work growing a movement,
then you have thousands or millions of people
try to help later, instead of just a few.
Also growing an academic field works like that. A lot of things do.
And then there's also other related ideas,
like steering.
If you're going to do some work on steering
the direction of how we deal with one of these
issues, you want to do that steering work
earlier, not later.
It's like the idea of diverting a river: you want to do that closer to the source.
And so there are various of these considerations that push in different directions, and they help you to weigh up the different things you were thinking of doing.
I like to think of this as a portfolio, in the same way as we think perhaps of an EA portfolio, what we're all doing with our lives.
It's not the case that each one of us has
to mirror the overall portfolio of important
problems in the world, but what we should
do together is kind of contribute as best
we can to humanity's portfolio of work on
these different issues.
Similarly, you could think of a portfolio
over time, of all the different bits of work
and which ones are best to be done at which
different times.
So now is better for thinking deeply about some of these questions, trying to do some steering, trying to do some growth.
And while direct work is often more useful to be done later, there are some exceptions.
For example, it could be that with AI safety,
you actually need to do some direct work just
to prove that there's a "there" there.
And I think that's effectively what direct work on AI safety is doing at the moment: the main benefit of it is actually that it helps with the growth of the field.
So anyway, there are a few different aspects
on that question, but I think that our portfolio
should involve both these things.
I think there's also a pretty reasonable chance,
indeed, that AI comes late or that the risks
come late and so on, such that the best thing
to be doing was growing the interest in these
areas.
In some ways, my book is a bet on that: saying that it would be really useful if this idea had a robust and well-thought-out presentation, and trying to present it in the right way, so that it has the potential to really take off and be something that people all over the world take seriously.
Obviously, that's in some tension with the possibility that AI could come in five years, or that some other risk, like bio risk, could happen really soon. Or nuclear war or something like that.
But I think ultimately, our portfolio should
go both places.
Terrific.
Well, we've got time for one last short question.
First question that was on the Bizzabo.
Will there be an audiobook?
Yes.
Will you narrate it?
Maybe.
I really think Nathan Labenz should narrate
all EA in that lovely caramel voice.
Okay, cool.
We had better wrap up, but thank you so much
for taking the time to come answer all these
questions, and will you be around?
I will.
I've got office hours for the next half hour,
and then I'll be around all day as well.
So try to grab me and have a chat if you're
interested in some of these things.
Great.
Well, let's thank Dr. Toby Ord.
