Many people believe that we are living in
the most peaceful period of human history.
John Lewis Gaddis coined the term "Long Peace" for the period since the end of the Second World War.
Steven Pinker further popularized the idea in his book The Better Angels of Our Nature, and explained the period by pointing to the pacifying forces of trade, democracy, and international society.
This first graph shows the percentage of time in which the great powers were at war.
500 years ago, the great powers were almost
always fighting each other.
Then, the frequency declined steadily.
The next graph shows the deadliness of war, and its trend runs in the opposite direction.
Although great power wars became fewer in number, they grew more damaging. That trend, too, did an about-face after the Second World War.
For the first time in modern human history,
great power conflicts were fewer in number,
shorter in duration, and less deadly.
Steven Pinker expects the trend to continue.
Now, not everyone agrees with this optimistic
picture.
Nassim Taleb argues that great power conflicts on the scale of 10 million casualties happen only about once a century.
Now, the Long Peace period only covers 70
years, so what appears to be a decline in
violent conflict could merely be a gap between
major wars.
In his paper on the statistical properties
and tail risk of violent conflict, he concludes
that no statistical trend can be asserted.
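To see why, consider a quick back-of-the-envelope check. This is my own illustration, not from the talk, and it assumes (with Taleb) that 10-million-casualty wars arrive as a Poisson process at a base rate of one per century:

```python
import math

# Assumption (not from the talk): major wars arrive as a Poisson
# process at Taleb's claimed base rate of one per century.
rate_per_year = 1 / 100   # one 10-million-casualty war per century
window_years = 70         # the length of the Long Peace so far

# Probability of observing zero such wars in the window,
# even if the underlying risk has not changed at all:
p_no_war = math.exp(-rate_per_year * window_years)
print(f"P(no major war in {window_years} years) = {p_no_war:.2f}")  # about 0.50
```

Under this toy model, a 70-year run with no major war has roughly even odds even if the underlying risk never declined, which is why a 70-year window cannot by itself establish a trend.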
The idea is that extrapolating on the basis of historical data assumes that there is no qualitative change to the nature of the system, whereas many people believe that nuclear weapons constitute such a change to the data-generating process.
Some experts seem to share a more sober picture.
In 2015, a poll was conducted among 50 international relations experts from around the world. 60% of them believed that risk had increased in the last decade. 52% believed that the risk of nuclear great power conflict would increase further in the next 10 years.
Overall, they gave a median 5% chance of a
nuclear great power conflict killing at least
80 million people in the next 20 years.
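For perspective, here is the simple expected-value arithmetic implied by those numbers. This is my own calculation, not part of the poll, and it treats 80 million as a lower-bound death toll:

```python
# Expected fatalities implied by the experts' median estimate:
# a 5% chance of a nuclear great power conflict killing at least
# 80 million people within 20 years.
p_conflict = 0.05            # median probability over the 20-year horizon
deaths_if_conflict = 80e6    # lower-bound death toll
horizon_years = 20

expected_deaths = p_conflict * deaths_if_conflict      # 4,000,000
per_year = expected_deaths / horizon_years             # 200,000 per year
print(f"{expected_deaths:,.0f} expected deaths ({per_year:,.0f}/year)")
```

That is on the order of 200,000 expected deaths per year from this single scenario, before counting any of the indirect risks discussed later in the talk.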
And then there are some international relations
theories which suggest a lower bound of risk.
The book The Tragedy of Great Power Politics proposed the theory of offensive realism. On this theory, great powers will always seek to achieve regional hegemony, maximize wealth, and attain nuclear superiority, so great power conflict will never come to an end.
Another book, The Clash of Civilizations, suggests that the ideological conflicts of the Cold War era are now being replaced by conflicts between ancient civilizations.
In the 21st century, the rise of non-Western
societies presents plausible scenarios of
conflict.
And then there's some emerging discourse on the Thucydides Trap, which points to the structural pattern of stress that arises when a rising power challenges a ruling one.
In analyzing the Peloponnesian War that devastated
Ancient Greece, the historian Thucydides explained
that it was the rise of Athens, and the fear
that this instilled in Sparta, that made war
inevitable.
Graham Allison, in his recent book Destined for War, points out that this lens is crucial for understanding China-US relations in the 21st century.
Okay, it seems that these perspectives suggest that we should be reasonably alert to the potential risk of great power conflict. But how bad would these conflicts be?
For the purpose of my talk, I first define the contemporary great powers: the US, the UK, France, Russia, and China.
These are the five countries that have permanent
seats, and veto power on the UN Security Council.
They are also the only five countries formally recognized as nuclear weapon states.
Collectively, they account for more than half
of the global military spending.
So we should expect conflict between great powers to be quite tragic.
In the Second World War, 50 to 80 million people died. By some models, such wars cost on the order of national GDPs, and are likely to be several times more expensive than that.
This also presents a direct extinction risk.
At a Global Catastrophic Risk Conference hosted by the University of Oxford, academics estimated a 1% chance of human extinction from nuclear war in the 21st century.
The climatic effects of nuclear wars are not
very well understood, so nuclear winter presents
a plausible scenario of extinction risk.
Now, it's also important to take model uncertainty into account in any risk analysis. One way to think about great power conflict is as a risk factor, in the same way that tobacco use is a risk factor in the global burden of disease.
Tobacco use can lead to a wide range of scenarios
of death, including lung cancer.
Similarly, great power conflict can lead to a wide range of different extinction scenarios.
One example is nuclear winter and the subsequent
mass starvation.
Others are less obvious, which could arise
due to a failure of global coordination.
Here let's consider the development of advanced
AI as an example.
Wars typically cause faster technological
developments, often enhanced by public investment.
Countries become more willing to take risk
in order to develop technology first.
One example was India's development of a nuclear weapons program after going to war with China in 1962.
Repeating the same competitive dynamic in the area of advanced AI is likely to be catastrophic. Actors may trade off safety research and implementation in the process, and that might present an extinction risk, as discussed in the book Superintelligence.
So now, how neglected is the problem?
To analyze this dimension, I propose a framework.
First, I make a distinction between broad
versus specific interventions.
By broad interventions I roughly mean promoting international cooperation and peace, for example by improving diplomacy and conflict resolution. Within specific interventions, there are two categories: conventional risks and emerging risks.
I define conventional risks as those studied by international relations experts and national security professionals: chemical, biological, radiological, and nuclear risks, collectively known as CBRN in the community.
And then there are some novel concerns arising
from emerging technologies such as the development
and deployment of geoengineering.
Now let's go back to the framework I used for comparison with the global burden of disease. A lower tobacco tax can lead to an increased rate of smoking. Similarly, the development of emerging technologies such as geoengineering can lead to greater conflict between great powers, or lead to wars in the first place.
Now, in the upcoming decades, I think it's plausible to see the following scenarios. Private industry players are already setting their sights on space mining; major space-faring countries may in the future compete for the available resources on the Moon and asteroids.
Military applications of molecular nanotechnology
could be even more destabilizing than nuclear
weapons.
Such technology would allow for targeted destruction during an attack, and would also create greater uncertainty about the capabilities of an adversary.
With geoengineering, every technologically advanced nation could change the temperature of the planet. Any unilateral action taken by one country could lead to disagreement and conflict with others.
Gene-editing will allow for large-scale eugenics programs, which could lead to bio-ethical panic in the rest of the world. Other countries might worry about their national security interests because of the resulting uneven distribution of human capital and power.
Now, it seems that these emerging sources of risk are likely to be quite neglected. But what about broad interventions and conventional risks?
It seems that political attention and resources
have been devoted to the problem.
There are anti-war and peace movements around
the world advocating for diplomacy and the
support of anti-war political candidates.
There are also some academic disciplines,
such as international relations and security
studies, that are helpful for making progress
on the issue.
Governments also have the interest to maintain
peace.
The US government has tens of billions of dollars budgeted for nuclear security issues, and presumably a fraction of that is dedicated to the safety, control, and detection of such risks.
Then, there are also some inter-governmental
organizations that put aside funding for improving
nuclear security.
One example is the International Atomic Energy
Agency.
Now it seems plausible to me that there are
still some neglected niches.
A report on nuclear weapons policy by the Open Philanthropy Project concludes that some of the biggest gaps in the space lie outside the US and US-based advocacy.
Another report, which comprehensively studied US-China relations and related diplomacy programs, concluded that some think tanks were constrained by the lack of a committed source of funding from foundations interested in the area.
Since most of the research is done on behalf of governments, and thus could be tied to national interests, it seems more useful to focus on public-interest philanthropy and nonprofits.
One example is the Stockholm International
Peace Research Institute.
With that perspective, it seems that the space could be more neglected than it appears.
Now, let's turn to an assessment of solvability. This is the variable I'm most uncertain about, so what I'm going to say is pretty speculative.
From reviewing the literature, it seems that there are some levers that could be used to promote peace and reduce the risk of great power conflict.
Let's begin with broad interventions.
First, you can promote international dialogue
and conflict resolution.
One case study: during the Cold War, five great powers (Japan, France, Germany, the UK, and the US) decided that a state of peace was desirable. After the Cuban Missile Crisis, they largely resolved their disputes through the United Nations and other international forums.
However, one could argue that promoting dialogue is unlikely to be useful if there is no pre-alignment of interests.
Another lever is promoting international trade.
The book Economic Interdependence and War proposes the theory of trade expectations for predicting whether increased trade will reduce the risk of war.
If state leaders have positive expectations
about the future, then they would believe
in the benefits of peace, and see the high
cost of war.
However, if they fear economic decline and
the potential loss to foreign trade and investment,
then they might believe that war now is actually
better than submission later.
So it is probably mistaken to believe that promoting trade in general is robustly useful; it is only useful under specific circumstances.
Within specific, conventional risks, it seems that work on international arms control may improve stability.
Recently, the nonprofit International Campaign to Abolish Nuclear Weapons brought about a treaty on the prohibition of nuclear weapons, and it was awarded the Nobel Peace Prize in 2017.
There is also a recent campaign to take nuclear weapons off hair-trigger alert.
However, the campaign and the treaty have not been in place for long, so their impacts are yet to be seen.
With the emerging sources of risk it seems
that the space is heavily bottlenecked by
under-defined and entangled research questions.
It's possible to make progress on this issue just by finding out what the most important questions in the space are, and what the structure of the space looks like.
Now, what are the implications for the effective
altruism community?
Many people in the community believe that improving the long-term future of civilization is one of the best ways to make a huge positive impact.
Both the Open Philanthropy Project and 80,000
Hours have expressed the view that reducing
great power conflicts, and improving international
peace could be a promising area to look into.
Throughout the talk, I have expressed my view through the following arguments. First, it seems that the idea of the Long Peace is overly optimistic, as suggested by diverse perspectives: statistical analysis, expert forecasting, and international relations theories.
Second, I have argued that great power conflicts
can be understood as a risk factor that could
lead to human extinction either directly,
say through a nuclear winter, or indirectly
through a wide range of scenarios.
Third, it seems that there are some neglected
niches that arise from the development of
novel emerging technologies.
I gave the examples of molecular nanotechnology, gene-editing, space mining, and geoengineering.
Lastly, I've expressed significant uncertainty about the solvability of the issue; however, my best guess is that doing some disentanglement research is likely to be somewhat useful.
Additionally, it seems that the EA community has a comparative advantage in working on this problem.
A lot of people in the community share strong cosmopolitan values, which could be useful for fostering international collaboration rather than attachment to national interests and identities.
The community can also bring a culture of explicit prioritization and a long-termist perspective to the field. Some people in the community are also familiar with concepts such as the Unilateralist's Curse, information hazards, and differential technological progress, which could be useful for analyzing emerging technologies and their associated risks.
All things considered, it seems to me that
risk from great power conflicts can really
be the Cause X that William MacAskill talks
about.
In this case, it wouldn't be a moral problem that we have not discovered; instead, it would be something that we're aware of today but have deprioritized for bad reasons.
Now, my main recommendation is that a whole lot more research should be done, and this is a sample of research questions. I hope this talk can serve as a starting point for more conversations and research on the topic.
Thank you.
Well, that's kinda scary.
Questions... we've got a few minutes, actually probably 10, so go ahead and fire them off through the Bizzabo app, and again the website london.eaglobal.org/polls.
I guess just starting with... well, I'll start
with a question on your expertise.
How much do you pay attention to current news,
like 2018 versus the much zoomed out picture
of the century timeline that you showed?
I don't think I pay that much attention to current news, but I also don't look at this problem just from a century-timeline perspective. I guess from the presentation I did, it would be something that is possible in the next two to three decades. I think that more research should be done on emerging technologies; it seems that space mining and geoengineering are possible in the next 10 to 15 years. But I'm not sure whether paying attention to everyday political trends would be the most effective use of effective altruists' time for analyzing the long-term trends.
Yeah.
It seems also that a lot of the scenarios you're talking about remain risks even if the relationships between great powers are superficially quite good. I mean, I'm just spitballing from the first row here, but it seems like the majority of the risk is not even in direct hot conflict, but in other things going wrong via rivalry and escalation. Is that how you see it as well?
Yeah, I think so.
I think the reason I said there seems to be some neglected niche in this issue is that most international relations experts and scholars are not paying attention to these emerging technologies. And these technologies could really change the structure and the incentives of countries. So even if China-US relations appear to be... well, that's a pretty bad example because things are not going that well right now, but suppose in a few years some international relations appear to be pretty positive; the development of powerful technologies could change the dynamics so much that people just didn't see it coming.
One question from the audience... you put up one example of this with the graphic that showed the causes of near misses, like a nuclear first strike for bad reasons. Is that the extent of the near-miss literature, or what other near misses have people investigated in the past?
I don't think I got the first part of the question.
Let me try to put it more simply; my fault.
Have there been a lot of near misses?
We know about a few of the nuclear near misses.
Have there been other kinds of near misses
where great powers nearly entered into conflict
but didn't?
Yeah.
I think one paper shows that there were almost 40 near misses; I believe that was put out by the Future of Life Institute, so people can look up that paper. In general, it seems that experts agree that some of the biggest risks from nuclear weapons come from accidental use, rather than deliberate and malicious use between countries. That might be something people should look into: improving detection systems and the technical robustness of reporting, and so forth.
One person is asking, it seems like one fairly
obvious career path that might come out of
this analysis would be to go into the civil
service and try to be a good steward of the
government apparatus.
What do you think of that and are there other
career paths that you have identified that
you think people should be considering as
they worry about the same things you're worrying
about?
Yeah.
I think that, apart from the civil service, working at think tanks also seems possible. And if you are particularly interested in the development of emerging technologies like the examples I have given, then it seems that there are some relevant EA organizations that would be interested; FHI would be one example. I think doing some independent research could also be somewhat useful, especially while we are still at the stage of disentangling the space and seeing what the most promising topics to focus on are.
What effect do you think climate change has
on the risk of great power conflict?
I think one scenario that I'm worried about is geoengineering. Geoengineering is like a plan B for dealing with climate change, and I think there is a decent chance that the world won't manage to deal with climate change in time.
In that case, we would need to figure out
a mechanism in which countries can cooperate
and govern the deployment of geoengineering.
One example: China and India are geographically very close, and if one of them decided to deploy geoengineering technologies, that would affect the climatic interests of the other.
So, disagreement and conflict between these
two countries could be quite catastrophic.
This is probably a good time to mention your
office hours because there's a lot of questions
here, and we will not be able to get to them
all.
Your office hours will be at 4:30 today?
Is that-
Yeah, and there's also a meet-up precisely
on this topic at 3:30 to 4:30.
3:30 and 4:30.
That's right.
And that should be in the program in terms
of the location.
But to keep going through questions as much
as we can, what do you think the role in the
future will be for international organizations
like the UN and other similar international
organizations?
Are they too slow to be effective, or do you
think they have an important role to play?
I am a little bit skeptical about the role of these international organizations, for two main reasons.
One is that these emerging technologies are being developed very quickly. If you look at AI, I think that nonprofits, civil society initiatives, and firms will be able to respond to these changes much more quickly than by going through all the bureaucracy of the UN, for example.
Another is that, historically, nuclear weapons and bio-weapons were mostly developed by states, but with AI, and possibly with space mining and perhaps gene-editing, private firms are going to play a significant role.
I think I would be keen to explore other models,
such as multi-stakeholder models, firm-to-firm,
or lab-to-lab collaboration.
And also possibly the role of epistemic communities of researchers in different countries: just getting them in the same room to agree on a set of principles. One example was the Asilomar conference that helped regulate biotechnology 30 years ago, and now we have a converging discourse and consensus around the Asilomar Conference on AI, so I think people should export these conference models in the future as well.
Those specific shared understandings relate to the next question: somebody points out that an important factor in the European peace since World War II has seemingly been a sense of European identity, and a shared commitment to it.
Do you think that it is possible or desirable
to create a global sense of identity that
everyone can belong to?
Yeah, this is quite complicated.
I think that there are two pieces to it.
One is that the creation of a global governance model may exacerbate the risk of permanent global totalitarianism, so that's a downside that people should be aware of.
But at the same time, there are benefits of global governance, in terms of better cooperation and security, that seem really necessary for regulating the development of synthetic biology.
So, a more widespread use of surveillance
might be necessary in the future, and people
should not disregard this possibility.
I'm pretty uncertain about the trade-off there, but people should be aware of it and keep doing research on this.
Probably the last question we'll have time
for right now, but again 3:30 for a meet-up,
and 4:30 for office hours, you can get more
time with Brian and ask additional questions.
But what is your vision for success? What is the most likely scenario in which global great power conflict is avoided?
Is that just managing the current status quo
effectively or does it really require a sort
of new paradigm or a new world order to take
shape?
I guess I am hopeful for cooperation based on a consensus about a future world of abundance. A lot of the framing in my presentation was about regulating and minimizing downside risks, but I think it's possible to foster international cooperation around a positive future.
Just look at how much good we can create with
safe and beneficial AI.
We can potentially have universal basic income.
If we cooperate on space mining, then we can go into space and access amazing resources in the cosmos. I think that if people come to share a view of the huge benefits of cooperation, and the irrationality of conflict, then it's possible to see a pretty bright future.
Well, we certainly hope for that.
By the size of the crowd here, and the number
of questions we've received, I know that you're
hitting on a topic that is really resonant
with a lot of EAs.
So thank you for bringing it to us.
How about a round of applause for Brian Tse?
Thank you very much.
Good job.
