What we're going to do is introduce ourselves briefly so you kind of know where we're coming from.
Then we've got two moots, which we have just now decided are the two moots we're going to talk about.
We'll chuck them up on the board and we'll
spend about half a session talking about one
and then half a session talking about the
other.
This is a session where we'd both love for you guys to toss us your questions right throughout, basically, so, yes, have your questions ready and we'll open it up pretty much straight after the intro.
A brief intro to myself. I'm currently based at the Future of Humanity Institute, and the work that I do specifically looks at the relationships between large multinational technology firms and governments, specifically the National Security and Defense components of governments in the US and China.
And the questions that I ask are about how these actors should relate to each other, cooperate, and coordinate, to steer us towards a future, or set of futures, with transformative AI that is safer and more beneficial than not.
My background is in engineering; I'm masquerading as an international relations person, but I'm not really that.
I do a fair amount in the global governance
space, in the IR space largely.
That's me.
Cool.
I'm Seth Baum; I was introduced as being with the Global Catastrophic Risk Institute, and as a think tank we try to sit in that classic think tank space of working at the intersection of, among other things, the world of scholarship and the world of policy.
We spend a lot of time talking with people
in the policy worlds, especially down in DC.
For me, it's down in DC; I live in New York.
I guess from here it would be over in DC.
Is that what you say?
You don't live here.
Sure.
Over in DC.
And talking with people in policy.
I work across a number of different policy areas: a lot on nuclear weapons, a little bit on biosecurity, and then also AI, and especially within the last year or two there have been some more robust policy conversations about AI.
The policy world has just started to take
an interest in this topic and is starting
to do some interesting things that have fallen
on our radar, and so we'll be saying more
about that.
Do you want to?
Yeah, sure.
So the two sets of institutions that we're going to chat about: the first is National Security and Defense.
We might focus on the US National Security and Defense, and have a bit of a chat about what makes sense to engage them on in the space of AI strategy, and how we should be thinking about their role in this space.
That's the first moot.
The second will turn to more international
institutions, the kind of multilateral groups,
e.g. the UN but not strictly so, and what
role they could play in the space of AI strategy
as well.
We'll kind of go half and half there.
Just so I have a bit of a litmus test for
who's in the audience, if I say AI strategy,
who does that mean anything to?
Ah, awesome.
Okay, cool.
Maybe we'll just start with getting Seth's
quick perspective on this question.
So the moot here is, this house believes that
in the space of AI strategy, we should be
actively engaging with National Security and
Defense components of the US government.
Do you want to speak quickly to what your
quick take on that is?
Sure.
So an interesting question here is engaging
with, say the US government especially on
the national security side, is this a good
thing or a bad thing?
I feel like opinions vary on this, maybe even
within this room opinions vary on whether
having these conversations is a good thing
or a bad thing.
The argument against it that I hear is essentially: you might tell them AI could take over the world and kill everyone, and they might hear 'AI could take over the world', and then go on to do harmful things.
I personally tend to be more skeptical of
that sort of argument.
The main reason for that is that the people who are in government and working on AI have already heard this idea before. It's been headline news for a number of years now; some people from our communities, including your organization, caused some of those headlines.
I feel like you're asking me to apologize
for them, and I'm not going to.
If one is concerned about the awareness of
various people in government about runaway
AI, you could ask questions like, was the
publication of the Superintelligence book
a good thing or a bad thing?
You could maybe make a case there in either direction-
Could we do a quick poll actually?
I'd be curious.
Who thinks the publication of Superintelligence was, on net, a positive thing? On net, a negative thing?
Hell yeah.
Doesn't mean that that's actually true.
Fair enough.
Just to be clear, I'm not arguing that it was a net negative, but the point is that the idea is out, and the people who work on AI, sure, they're mostly working on narrow, near-term AI, but they've heard the idea before.
They don't need us to put the thought into
their heads.
Now of course we could be kind of strengthening
that thought within their heads, and that
can matter, but at the same time when I interact
with them, I actually tend to not be talking
about superintelligence, general intelligence,
that stuff anyway.
Though that's more for a different reason, and that's because while they have heard of the idea, they're pretty skeptical about it. Either because they think it probably wouldn't happen, or because if it did happen it would be too far in the future for them to worry about.
A lot of people in policy have much more near-term time horizons that they have to work with.
They have enough on their plate already, nobody's
asking them to worry about this, so they're
just going to focus on the stuff that they
actually need to worry about, which includes
the AI that already exists and is in the process
of coming online.
What I've found, then, is that because they're pretty dismissive of it, if I talk about it they might just be dismissive of what I have to say, and that's not productive.
Versus instead if the message is we should
be careful about AI that acts unpredictably
and causes unintended harms, that's not really
about superintelligence.
That same message applies to the AI that exists already: self-driving cars, autonomous weapons.
You don't want autonomous weapons causing
unintended harm, and that's a message that
people are very receptive to.
By emphasizing that sort of message we can
strengthen that type of thinking within policy
worlds.
That's for the most part the message that
I've typically gone with, including in the
National Security communities.
Cool.
I've got a ton of questions for you, but maybe
to quickly interject my version of that.
I tend to agree with a couple of things that
Seth said, and then disagree with a couple
specific things.
I think generally my perspective on this is that there's a very limited amount of useful engagement with National Security today, and the amount of potential to do wrong via engaging with them is large, sufficiently large that we should be incredibly cautious about the manner in which we engage. That is a different thing to saying that we shouldn't engage with them at all, and I'll nuance that a little bit.
Maybe to illustrate the priors or assumptions that people hold when they're taking a stance on whether you should engage with National Security or not: I think people disagree on maybe three axes.
I said three because people always say three,
I'm not entirely sure what the three are but
we'll see how this goes.
I think the first-
I can take notes, right? We're supposed to actually use the... okay.
Yes, please.
I feel like we should actually use the whiteboard.
Keep talking.
So I think the first is that people disagree on the competence of National Security to pursue the technology themselves, or at least to do something harmful with information about the capabilities of the technology. I think some people hold the extreme view that they're kind of useless, and that there's nothing they can do in-house that is going to cause the technology to be more unsafe than not, which is the thing that you're trying to deter.
On the other hand, some people believe that NatSec at least has the ability to acquire control of this technology, or can develop it in-house sufficiently well, that an understanding of AI's significant capabilities would lead them to want to pursue it, and to pursue it with competence, basically.
I think that competence question is one thing that people disagree on, and I would tend to land on them being more competent than people think. Even if that's not the case, I think it's always worth being conservative in that sense anyway.
So that's the first axis.
The second axis, I think, is about whether they have a predisposition, or the ability, to absorb this kind of risk narrative effectively, or whether that's just so orthogonal to the culture of NatSec that it's not going to be received in a nuanced enough way, and they're always going to interpret whatever information they get with a predisposition to pursue unilateral military advantage, regardless of what you're saying to them.
Some people on one end would hold that they
are reasonable people with a broad open mind,
and plausibly could absorb this kind of long-term
risk narrative.
Some other people would hold that information they receive will tend to be filtered through the lens of 'how can we use this to secure a national strategic advantage?'
I would tend to land on us having no precedent
for the former, and having a lot more precedent
for the latter.
I'd like to believe that folks at DOD and NatSec can absorb, or can come around more to, the long-term risk narrative, but I don't think we've seen enough precedent to place credence on that side of the spectrum.
That's kind of where I sit on that second
axis.
I said I had a third, I'm not entirely sure
what the third is, so let's just leave it
at two.
I think that probably describes why I hold that engaging with NatSec can be plausibly useful, but for every one useful case, I can see many more reasons why engaging with them could plausibly be a bad idea, at least at this stage.
So I'd encourage a lot more caution than I
think Seth would.
That's interesting.
I'm not sure how much caution...
I would agree, first of all I would agree,
caution is warranted.
This is one reason why a lot of my initial
engagement is oriented towards generically
safe messages like, "avoid harmful unintended
consequences."
I feel like there are limits to how much trouble
you can get in spreading messages like that.
It's a message that they will understand pretty uniformly; it's just an easy concept, people get that.
They might or might not do much with it, but
it's at least probably not going to prompt
them to work in the wrong directions.
As far as their capability and also their
tendency to take up the risk narrative, it's
going to vary from person to person.
We should not make the mistake of treating
National Security communities even within
one country as being some monolithic entity.
There are people of widely varying technical
capacity, widely varying philosophical understanding,
ideological tendencies, interest in having
these sorts of conversations in the first
place, and so on.
A lot of the work that I think is important
is meeting some people, and seeing what the
personalities are like, seeing where the conversations
are especially productive.
We don't have to walk in and start trumpeting
all sorts of precise technical messages right
away.
It's important to know the audience.
A lot of it's just about getting to know people,
building relationships.
Relationships are really important with these sorts of things, especially if one is interested in a deeper and ongoing involvement in it.
These are communities.
These are professional communities, and it's important to get to know them; even informally, that's going to help.
So I would say that.
I tend to agree with that sentiment, in particular that building a relationship and gaining trust within this community can take a fair amount of time.
And so if there's any given strategic scenario in which it's important to have that relationship built, then it could make sense to start laying some paving blocks there.
It is an investment.
It is an investment in time.
It's a trade-off, right?
What's an example of a productive engagement
you can think of having now?
Say if I put you in a room full of NatSec people, what would the most productive version of that engagement look like today?
An area that I have been doing a little bit
of work on, probably will continue to do more,
is on the intersection of artificial intelligence
and nuclear weapons.
This is in part because I happen to also have a background in nuclear weapons, an area where I have a track record, a bit of a reputation, and where I know the lingo, know some of the people, and can do that.
AI does intersect with nuclear weapons in
a few different ways.
There is AI built into some of the vehicles
that deliver the nuclear weapon from point
A to point B, though maybe not as much as
you might think.
There's also AI that can get tied into issues of the cybersecurity of the command and control systems, essentially the computer systems that tie the whole nuclear enterprise together, and maybe one or two other things.
The National Security communities, they're
interested in this stuff.
Anything that could change the balance of
nuclear power, they are acutely interested
in, and you can have a conversation that is
fairly normal from their perspective about
it, while introducing certain concepts in
AI.
So that's one area that I come in.
The other thing I like about nuclear weapons is that the conversation there is predisposed to think in low-frequency, high-severity risk terms.
That's really a hallmark of the nuclear weapons
conversation.
That has other advantages for the sorts of
values that we might want to push for.
It's not the only way to do it, but if you
were to put me in a room, that's likely to
be the conversation I would have.
So if you were to link that outcome to a mitigation of risk as an end goal, how does their better understanding of AI concepts translate into a mitigation of risk, broadly speaking?
Assuming that's the end goal that you wanted
to aim for.
One of the core issues with AI is this question
of predictability and unintended consequences.
You definitely do not want unpredictable AI
managing your nuclear weapons.
That is an easy sell.
There is hyper-caution about nuclear weapons, and in fact if you look at the US procurement plans for new airplanes to deliver nuclear weapons, the new stealth bomber that is currently being developed will have an option to be uninhabited, to fly itself.
I think it might be remote-controlled. The expectation is that it will not fly uninhabited on nuclear missions; they want a human on board when there is also a nuclear weapon there, just in case something goes wrong.
Even if the system is otherwise pretty reliable,
that's just their...
That's how they would look at this, and I
think that's useful.
So here we have this idea that AI might not
do what we want it to, that's a good starting
point.
Sure, cool.
Let's toss it out to the audience for a couple
of questions.
We've got like 10 minutes to deal with NatSec
and then we're going to move on into multilaterals.
Yeah, go for it.
I didn't realize you were literally one behind
the other.
Maybe you first and then we'll go that way.
I was just in Washington, DC for grad school and had a number of friends who were working for think tanks that advise the military on technical issues like cybersecurity or biosecurity, and I definitely had this sense that maybe the people in charge were pretty narrow-minded, but that there's this large non-homogeneous group of people, some of whom were going to be very thoughtful and open-minded and some of whom weren't.
And that there's definitely places where the
message could fall on the right ears, and
maybe something useful done about it, but
it would be really hard to get it into the
right ears without getting it into the wrong
ears.
I was wondering if you guys have any feelings
about, is there a risk to giving this message
or to giving a message to the wrong people?
Or is that like very little risk, and it will
just go in one ear and out the other if it
goes to the wrong person?
I feel like you could think about that either
way.
Yeah, I'm curious to hear more about your
experience actually, and whether there was
a tendency for certain groups, or types of
people to be the right ears versus the wrong
ears.
If you've got any particular trends that popped
out to you, I'd love to hear that now or later
or whenever.
But as a quick response, I think there are a couple of things to break down there. One is what information you're actually talking about: what classifies as bad information to give versus good. Two is whether you have the ability to nuance the way that it's received, or whether it goes out and is received in some way, and the action occurs without your control.
I think, in terms of good information that I would be positive about good ears receiving, and a bit meh about more belligerent ears receiving: they couldn't actually do anything useful with the information anyway.
I think anything that nuances the technicality of what the technology does and doesn't do is generally a good thing.
I think also, with the element of introducing that risk narrative: if it falls on good ears, it can go good ways; if it falls on bad ears, they're just going to ignore it anyway. You can't actually do anything actively bad with information about there being a risk that maybe you don't have a predisposition to care about anyway.
I'd say that's good information.
As for the ability for you to pick the right ears for it to be received by, I'm skeptical about that. I'm skeptical about the ability for you to translate it reliably up the hierarchy so that it lands in a decision-maker's hands and actually translates into action that's useful.
That would be my initial response: even if that heterogeneity exists and it's a more heterogeneous space than one would assume, I wouldn't trust that we have the ability to read into it well.
I would say I find it really difficult to
generalize on this.
In that, each point of information that we
might introduce to a conversation is different.
Each group that we would be interacting with
can be different, and different in important
ways.
I feel, if we are actually in possession of some message that really is that sensitive, then, to the extent that you can, do your homework on who it is that you're talking to, and what the chain of command, the chain of conversation, looks like.
If you're really worried, go through people who you have a closer relationship with, where there may be at least some degree of trust; although, who knows what happens when you tell somebody something? Can you really trust me with what you say? Right? You don't know who else I'm talking to, right? And so on for anyone else.
At the end of the day, when decisions need to be made, I would want to look at the whole suite of factors; this goes for a lot of what we do, not just the transmission of sensitive information.
A lot of this really is fairly context specific
and can come down to any number of things
that may be seemingly unrelated to the thing
that we think that we are talking about.
Questions of bureaucratic procedure that get
into all sorts of arcane minute details could
end up actually being really decisive factors
for some of these decisions.
It's good for us to be familiar with, and have ways of understanding, how it all works, so that we can make these decisions intelligently.
That's what I would say.
Cool.
All right, so from what I understand, a lot
of people are new to this space.
What sort of skills do you think would be
good for people to learn?
What sort of areas, like topics, should people
delve into to prove themselves in AI strategy?
What sort of thinking is useful for this space?
That's a good question.
Should I start?
Yeah.
Okay.
That's a good question.
I feel that for those who really want to have a strong focus on this, it helps to do a fairly deep dive into the worlds that you would be interacting with.
I can say from my own experience, I've gotten
a lot of mileage out of fairly deep dives
into a lot of details of international security.
I got to learn the distinction between a fighter
plane and a bomber plane for example.
The fighter planes are smaller, more agile and maneuverable, and the bombers are big sluggish beasts that carry heavy payloads. It's the latter that have the nuclear weapons, and it's the former that benefit from more automation and faster, more powerful AI, because they're doing these really sophisticated aerial procedures and fighting other fighter planes, and that's... The more AI you can pack into that, the more likely you are to win. Versus the bomber planes, where it just doesn't matter; they're slow and they're not doing anything that sophisticated in that regard.
That's just one little example of the sort of subtle detail that comes from a deeper dive into the topic and that, in conversations, can actually be quite useful: you're not caught off guard, you can talk the lingo, you know what they're saying, you can frame your points in ways that they understand.
Along the way you also learn who is doing what, and get that background.
I would say it helps to be in direct contact
with these communities.
Like myself: I live in New York, not Washington, but I'm in Washington with some regularity, attending various events, just having casual conversations with people, maybe doing certain projects and activities, and that has been helpful for positioning myself to contribute in a way where, if I want to, I can blend in.
They can think of me as one of them.
I am one of them, and that's fine.
That's normal.
While also being here, and being able to participate
in these conversations.
So that's what I would recommend, is really
do what you can to learn how these communities
think and work and be able to relate to them
on their level.
An addition to that would be: try to work on being more sensible, is the main thing I would say.
It's one of those things where, a shout-out to CFAR for example and those kinds of methodologies... basically, I think the people doing the best work in this space are the people who have the ability to: A, absorb a bunch of information really quickly; B, figure out what is decision-relevant quickly; and C, cut through all the bullshit that is not decision-relevant but that people talk about a lot.
I think those three things will lead you towards asking really good questions, asking them in a sensible way, coming to hypotheses and answers relatively quickly, and then knowing what to do with them. Sorry, that's not a very specific answer: just work on being good at thinking, and figure out ways to train your mind to pick up decision-relevant questions.
CFAR would be a good organization for that, is that what you're saying?
CFAR would be epic, yeah.
We've got a couple people from CFAR in the
audience, I think.
Do you want to put your hand up?
If you're here.
Nice.
So, have a chat to them about how to get involved.
The other thing I'd say is that there is a ton of room for different types of skills, and figuring out where your comparative advantage is, is a useful thing. I am not a white male, so I have less comparative advantage in politics; I'm not a US citizen, so I can't do USG stuff; those are facts about me that I know will lead me toward certain areas in this space.
I am an entrepreneur by background; that leads me to have certain skills that maybe other people marginally don't have.
Think about what you enjoy, what you're good
at, and think about the whole pipeline of
you doing useful stuff, which starts probably
at fundamentally researching things, and ends
at influencing decision makers/being a decision
maker.
Figure out where in that pipeline you are
most likely to have a good idea.
Another shout-out to 80k, who do a lot of good facilitation of thinking about what one's comparative advantage could be, and help you identify it, too.
You mentioned the white male thing, and yeah
sure, that's a thing.
That was genuinely not a dig at you being
a white male.
No.
I promise.
It's a dig at all of you for being white males.
I just realized this is recorded, and this
has gone so far downhill I just can't retract
any of that.
We're going to keep going.
So, for example, if I was attending a national
security meeting instead of this, I might
have shaved.
Right?
Because it's a room full of a lot of people who are ex-military, or even active military, or come from more... much of the policy culture in DC is more conservative; they're wearing suits and ties.
Is there a single suit and tie in this room?
I don't see one.
It's pretty standard for most of the events
there that I go to.
Simple things like that can matter.
Yeah.
You don't have to be a white male to succeed
in that world.
In fact, a lot of the national security community
is actually pretty attentive to these sorts
of things, tries to make sure that their speaking
panels have at least one woman on them, for
example.
There are a lot of very successful women in
the national security space, very talented
at it, and recognized as such.
You don't have to look like me, minus the
beard.
In order to recap, I just wanted to-
Nice.
That's good to know.
It's always useful having a token women's
spot, actually.
All right, one last question on NatSec, then
we're going to move on.
Yeah?
What do you think about the idea of measurements of algorithmic and hardware progress, and of the amount of money going into AI, those kinds of measurements becoming public, and then NatSec becoming aware of them?
That's a really interesting question.
I'm generally very pro that happening.
I think those efforts are particularly good
for serving a number of different functions.
One is that the process of generating those metrics is really useful for the research community, to understand what metrics we actually care about measuring versus not. Two, measuring them systematically across a number of different systems is very useful for at least starting conversations about which threshold points we care about surpassing, and what changes about your strategy if you surpass certain metrics particularly quicker than you expected to.
I'm generally pro those things. In terms of... I guess the pragmatic question is whether you could stop the publication of them anyway, and I don't think you can.
I would say that even if you had the ability to censor them, it would still be a net positive to have that stuff published, for the reasons that I just mentioned.
I would also plausibly say that NatSec would
have the ability to gather that information
anyway.
Yeah.
I also don't necessarily think it's bad for them to understand progress better, and for them to be on the same page as everyone else, specifically the technical research community, about how these systems are progressing. I don't think that's a bad piece of information necessarily. Sorry, that was a really hand-wavy answer, but...
I feel like it is at least to an approximation
reasonable to assume that if there's a piece
of information and the US intelligence community
would like that information, they will get
it.
Especially if it's a relatively straightforward
piece of information like that, that's not
behind crazy locked doors and things of that
sort.
If it's something that we can just have a
conversation about here, and they want it,
they will probably get that information.
There may be exceptions, but I think that's
a reasonable starting point.
But I feel like what's more important than that is the question of, like, the interpretation of the information, right? It's a lot of information; the question is, what does it mean?
I feel like that's where we might want to
think more carefully about how things are
handled.
Even then, there are a lot of ideas out there, and our own ideas on any given topic are still just another voice in a much broader conversation.
We shouldn't overestimate our own influence
on what goes on in the interpretation of intelligence
within a large bureaucracy.
If it's a question of, do we communicate openly where the audience is mostly, say, ourselves, right, and this is for our coordination as a community, for example? Where, sure, other communities may hear this, whether in the US or anywhere around the world, but to them we're just one of many voices, right? In a lot of cases it may be fair to simply hide in plain sight. In that, who are we from their perspective, versus who are we from our perspective? We're paying attention to ourselves, and getting a lot more value out of it.
Again, you can take it on a case by case basis,
but that's one way of looking at it.
Cool.
We're going to segue into talking about international
institutions, maybe just to frame this chat
a little bit.
Specifically, the type of institutions that I think we want to talk about are probably multilateral, state-based institutions. That being the UN and the UN's various children, and those other bodies that are all governed by that system.
That assumes a couple of things: one, that
states are the main actors at the table that
mean anything, and two, that there are meaningful
international coordination activities... is
that the world?
What's our guess, what do we think that is?
Is that the world?
That's... my PhD in geography.
I'm kind of seeing chicken nuggets.
We have a whiteboard, we might as well use it for something.
South America is completely messed up, my
apologies to South America.
That's not even close, so I'm going to try
that again, keep talking, I'm sorry.
Also, New Zealand's not on the map unfortunately.
I'm from New Zealand!
The best country in the world.
Anyway, institutions that are composed of
state representatives and various things.
The question here is, are they useful to engage
with?
I guess that's like a yes or no question.
Then if you want to nuance it a bit more, what are they useful for versus what are they not?
Does that sound like a reasonable...
Yes.
Great.
While Seth is working on his artwork ...
South America, there we go, that's close enough.
I don't think that's any better, friend.
My quick hot take on that, then I'll pass
it over to Seth.
I'll caveat this by saying, well, I'll validate my statement by saying, that I've spent a lot of my academic life working in the global governance space.
That field is fundamentally very optimistic
about these institutions, so if anything I
had the training to predispose me to be optimistic
about them, and I'm not.
I'm pessimistic about how useful they are
for a number of reasons.
I think A is to do with the state-centric approach, B is to do with precedent about what they're useful for versus not, and C is the pace at which they move.
To run through each one of those in turn: I think a lot of these institutions held the assumption, and were built to rely on the assumption, that states are the core actors who need to be coordinated. They are assumed to have the authority and legitimacy to move the things that need to move, in order for this coordination to do the thing you want it to do.
That is a set of assumptions that I think
used to hold better, but almost certainly
doesn't hold now, and almost certainly doesn't
hold in the case of AI.
Particularly so with the actors that I think are neglected and aren't conceptualized reasonably in these international institutions: large firms, and also the military and security folks; that component of government doesn't tend to be the component that's represented in these institutions.
Those two are probably the most important
actors, and they aren't conceptualized as
the most important actors in that space.
That's one reason to be skeptical: by design, they aren't built to be that useful.
I think two, in terms of what they've historically been useful for: I think UN institutions have been okay at doing norm-setting, norm-building, non-proliferation stuff; they've been okay at doing things like standard setting, instituting these norms and translating them into standards that end up proliferating across industries.
I'll say particularly so in the case of technologies,
the standardization stuff is useful, so I'm
more optimistic about bodies like the ISO,
which stands for the International Standards
something, standards thing.
Organization, I guess.
Does that seem plausible?
That seems plausible.
I'm optimistic about them more so than I am
about like the UN General Council or whatever.
But, in any case, I think that's a pretty limited set of functions, and it doesn't really cover a lot of the coordination and cooperation that we want done.
And then third is that historically these
institutions have been so freaking slow at
doing anything, and that pace is not anywhere
close to where it needs to be.
One version of this argument is: if that's the only way you can achieve the coordination activities that you want, then maybe that's the best you have. But I don't think that's the best that we have.
I think there are arrangements between actors directly, and between small clubs of actors specifically, that will just be quicker at achieving the coordination that we need to achieve.
So I don't think we need to go to the effort
of involving slow institutions to achieve
the ends that we want to.
So, that's kind of why I'm skeptical about
the usefulness of these institutions at all,
with the caveat of them being useful for standard
setting potentially.
I feel like people at those institutions might
not disagree with what you just said.
Okay, the standards thing, I think that's
an important point.
Also... so the UN.
A lot of what the UN does operates on consensus across nearly 200 countries.
So yeah, that's not going to happen all that
much.
To the extent that it does happen, it's something
that will often build slowly over time.
There may be some exceptions, like: astronomers find an asteroid heading towards Earth, we need to do something now.
Okay, yeah, you could probably get a consensus
on that.
And even then, who knows?
You'd like to think, but... and that's a relatively
straightforward one, because there's no bad
guys.
With AI, there's bad guys.
There are benefits of AI that would be lost if certain types of AI couldn't be pursued, and it plays out differently in different countries and so on, and that all makes this harder.
Same story with like climate change, where
there are countries who have reasons to push
back against action on climate change.
Same thing with this.
I'd say the point about states not necessarily
being the key actors is an important one,
and I feel like that speaks to this entire
conversation, like is it worth our time to
engage with national and international institutions?
Well, if they're not the ones that matter,
then maybe we have better things to do with
our time.
That's fair, because it is the case right now that the bulk of AI work is not being done by governments.
It's being done by the private corporate sector
and also by academia.
Those are, I would say, the two main sources,
especially for the artificial general intelligence.
Last year, I published a survey of general
intelligence R&D projects.
The bulk of them were in corporations or academia.
Relatively few were in governments, and those, for the most part, tended to be smaller.
There is something to be said for engaging
with the corporations and the academic institutions
in addition to, or possibly even instead of,
the national government ones.
But that's a whole other matter.
With respect to this, though, international
institutions can also play a facilitation
role.
They might not be able to resolve a disagreement
but they can at least bring the parties together
to talk to them.
The United Nations is unusually well-equipped
to get, you know, pick your list of countries
around the room together and talking.
They might not be able to dictate the terms
of that conversation and define what the outcome
is.
They might not be able to enforce whatever
agreements, if any, were reached in that conversation.
But they can give that conversation a space
to happen, and sometimes just having that
is worthwhile.
To what end?
To what end?
In getting countries to work on AI in a more
cooperative and less competitive fashion.
So even in the absence of some kind of overarching
enforcement mechanism, you can often get cooperation
just through these informal conversations
and norms and agreements and so on.
The UN can play a facilitation role even if
it can't enforce every country to do what
they said they would do.
What's the best example you have of a facilitated
international conversation changing what would
have been the default state behavior without
that conversation?
Oh, that's a good question.
I'm not sure if I have a...
And if anyone in the audience actually has... yes.
Montreal Protocol.
That's so... do you want to expand...
I don't think that was not going to happen.
What was the example?
Montreal Protocol.
Do you want to expand on it?
So the Montreal Protocol for ozone.
Did you want to expand on that?
Yeah, it was a treaty that reduced emissions... They got a whole bunch of countries to reduce emissions of gases that were effectively destroying the ozone layer, and brought those emissions to very low levels, and now the ozone layer is recovering.
Arguably, without that treaty, like maybe
that wouldn't have happened.
I don't know what the counterfactual would
be.
Maybe.
Yeah, I think the Montreal Protocol is a good example. With the Montreal Protocol, there was a clear set of incentives; there were barely any downsides for any state to do that.
So put that alongside the Kyoto Protocol,
for example, where the ask was somewhat similar,
or similarly structured.
Off the record, she says, as this is being recorded live: I don't think the Kyoto Protocol was anywhere near as effective as the Montreal Protocol; it wasn't even close to achieving whatever its goals were on paper.
I think the reason was that, for the gases being targeted, there were very clear economic incentives for states not to mitigate them.
Insofar as the Montreal Protocol was a good example, it maybe just pointed at a really obvious set of incentives that were already rolling downhill anyway. But I don't know if it actually tweaked any of those, would be my response to that.
It is the case that some types of issues are
just easier to get cooperation on than others.
If there's a really clear and well-recognized
harm from not cooperating, and the cost of
cooperating is relatively low.
I am not as much an expert on the Montreal
Protocol but, superficially, my understanding
is that addressing the ozone issue just happened
to be easier than addressing the climate change
issue, which has just proved to be difficult
despite efforts.
They might have gone about the Kyoto Protocol in a rather suboptimal fashion, potentially, but even with a better effort, climate change might just be harder to get collective action on, given the nature of the issue.
Then likewise, the question for us is: what does AI look like?
Is it something that is easy to get cooperation
on or not?
Then what does that mean for how we would
approach it?
Yeah, and I think, if anything... if you were to put the Montreal Protocol on one end of the spectrum, I guess the important things to abstract away from that particular case study are that you had a very clear set of incentives to mitigate the thing, and basically no incentive for anyone to keep producing the thing.
So, that was easy.
Then somewhere in the middle is the Kyoto
Protocol where you've got pretty large incentives
to mitigate the thing because climate, and
then you've got some pretty complicated incentives
to want to keep producing the thing, and the
whole transition process is like hard and
whatnot.
And then we didn't have a sufficient critical mass of belief that it was important to mitigate the thing, so it just became a lot harder.
I think AI I would put on that end of the spectrum, where you've got so many clear incentives to keep pursuing the thing. If anything, because you've got so many different uses, it's just economically very tasty to pursue, not just for countries but for a number of other actors who want to pursue it.
You've got people who don't even believe it's
worth mitigating at all.
So I think, for that reason, I'd put it as
astronomically bloody hard to do the cooperation
thing on that side, at least in the format
of international institutions.
So I think the way to make it easier is to have a smaller number of actors, to align incentives, and then to make clearer, sort-of-binding mechanisms, for cooperation to have a shot in hell at working.
But it could depend on which AI we're talking
about.
If you would like an international treaty
to just stop the development of AI... yeah,
I mean, good luck with that.
That's probably not going to happen.
But, that's presumably not what we would want
in the first place because we don't need the
restriction of all AI.
There's plenty of AI that we're pretty confident
can be a net positive for the world and we
would not want that AI to be restricted.
It would be in particular the types of AI
that could cause major catastrophes and so
on.
That's what we would be especially interested
in restricting.
So an important question, and this is actually more of a technical computer science question than an international institutions question, but it feeds directly into this, is: which AI would we need to restrict?
With an eye towards say future catastrophe
scenarios, is it really like the core mainstream
AI development that needs to be restricted,
because all of that is a precursor to the
stuff that could get out of hand?
Or is it a fairly different, distinct branch
of AI research that could go in that direction,
such that the mainstream AI work can keep
doing what it's doing?
So there'll be some harms from it but they'll
be more manageable, less catastrophic.
How that question is answered, I think, really
speaks to the viability of this.
Yeah.
I guess what I'm skeptical of is the ability
to segregate the two.
Like, I don't think there are clear delineations, and if people have ideas for this please tell me, but I don't think there are clear delineations for separating civilian, peaceful, good applications from military applications, at least in technical terms.
So it becomes hard, if you want to design a thing, when you don't know what the thing is that you're targeting, where you can't even specify what you're trying to mitigate.
So that's something that I'm currently skeptical
of, and would love people to suggest otherwise.
Real quick, I would say it's not about civilian
versus military, but about whether-
Good versus bad.
But I'm curious to see people's reactions
to this.
Yes.
Yeah.
Tangential, but coming back to the... you were sort of suggesting earlier that the information asymmetry with national security sits very much on their side. That if they want the information, we're not keeping it from them; they're probably going to have it.
In a similar vein, do you think that in terms
of the UN and the political machinery, that
they're even necessarily going to have insight
into what their own national security apparatus
are working on, what the state of affairs
is there?
If that's sort of sitting in a separate part
of the bureaucratic apparatus from the international
agreements, how effective could that ever
even be if you don't have that much interface
between the two?
Does that...
Essentially like, how can you monitor and
enforce an agreement if you don't have access
to the information that... with difficulty.
This is a familiar problem, for example, with
biological weapons.
The technology there can also be used for
vaccine development and things of that sort.
It can cut both ways, and a lot of it is dual-use, that's the catchphrase, and because of that you have companies that have the right sort of equipment and don't want other people knowing what they're doing, because it's intellectual property.
So the answer is with difficulty, and this
is a challenge.
The more we can be specific about what we
need to monitor, the easier it becomes but
that doesn't necessarily make it easy.
We have five minutes left.
We have five minutes left.
Okay.
Something governments seem to hate is putting the brakes on anything that's making them money, tax money. But something they seem to love is getting more control and oversight of corporations, especially if they think there's any sort of reputational risk, or risk to them, and that the control and oversight is not going to impose any sort of economic slowdown or costs.
Do you think there's a possibility of framing
the message simply as, the countries should
agree that non-state actors get to be spied
on by states, and the states get some sort
of oversight?
And the states might all agree to that, even
if the non-state actors don't like it very
much.
And the non-state actors might be okay if there was no... if it seemed like it was toothless at the start. So maybe there's some sort of slippery slope into government oversight, to make things more safe, that could be started with a relatively low barrier.
Nice.
I like the way you think.
That's nice.
Yeah, I think the short answer is yes.
I think the major hurdle there is that firms
will hate it.
Firms, particularly multinational technology firms, which actually have a fair amount of sway along a number of different dimensions, just won't be good with it, and will threaten some things that states care about.
As someone who does AI research for a multinational
firm, I really do actually feel a lot of friction
when allowing certain sorts of code to cross
national boundaries.
So actually, I would like to say that state
regulation is making more of an impact than
you might realize, that there are certain
sorts of things, especially around encryption
protocols, where state agreements have made
a big difference as to what can cross state
boundaries, even with a lot of states not
being in on the agreement.
Just the developed nations as of 30 years ago all agreeing, "Hey, we're going to keep the encryption to ourselves," means that my coworkers in India don't get to see everything I get to work with, because there are protocols in place.
So, it does matter with international agreements, if you can get the laws passed in the first place.
Yeah, sure.
Any other examples aside from encryption,
out of curiosity?
I know the encryption side of it relatively
well but are there other-
Well, there's the privacy.
My American nonprofit organization had to
figure out if we needed to do anything to
comply with Europe's new privacy law.
You sound very happy about that.
I say nothing.
We are just about out of time, though, so maybe we should try to wrap up a little bit as far as take-home messages.
I feel like we did not fully answer the question of the extent to which engaging with national and international organizations is worth our time in the first place, nor the question of, like, are these even the key actors?
Superficially, noting we're basically out of time, I can say there are at least some reasons to believe they could end up being important actors, and I feel like it is worth at least some effort to engage with them, though we should not put all our eggs in that basket, noting that other actors can be very important.
Then, as far as how to pursue it, I would
just say that we should try to do it cautiously
and with skill, and by engaging very deeply
and understanding the communities that we're
working with.
I think the meta point to make as well is that these are very much... hopefully, as we've illustrated, there's very much a live debate on both of these questions.
It's hard, there are a lot of strategic parameters that matter, and it's hard to figure out what the right strategy is moving forward, and I hope you're not taking away that there are perspectives that are held strongly within this community.
I hope you're mostly taking away that it's a hard set of questions that needs a lot more thought, but more than anything it needs a lot more caution in terms of how we think about it, because I think there are important things to consider.
So, hopefully that's what you're taking away.
If you're not, that should be what you're
taking away.
All right, thanks guys.
