Here's a question for you: Imagine we've built
AGI.
So you wake up one morning, hop on Twitter,
word is floating around that X has built AGI.
Who would you want X to be?
You've got four options.
A: Alphabet, i.e., Google's parent company.
B: The US government.
C: Baidu, which is one of the leading AI companies in China.
And D: The Chinese government.
Now bear in mind this is a question about
who you would want to be in control of the
technology, not who you think is most likely
to get there.
And no, you are not allowed to say, "I don't
want any of these actors to develop AGI."
Alright, who wants A: Alphabet?
B, the US government?
C, Baidu?
And D, the Chinese government?
Alright who is over 50% confident of the answer
that you just gave me?
Nice.
Good Bayesians.
So the point here isn't that there is a correct
answer and I'm not going to tell you what
the answer should be.
The point here being that I think this is
one of the most important questions that we
need to be able to answer.
The question here being who do we want to
be in control of powerful technology like
advanced AI?
But also the question of who is likely to
be in control of that?
And these kinds of questions are critical
and really, really difficult to answer.
So what I'm going to do for you today is not
to answer the question.
What I'm going to try to do is to equip you
with a framework or a methodology for thinking
about how you can go about answering these
questions sensibly, or at least generating
hypotheses that kind of make some sense.
So the proposition is this: That you can frame
AI governance as a set of strategic interactions
between a set of actors that each have a unique
and really large stake in the development
and deployment of advanced AI.
The set of actors that I think are most important
and I'll spend time talking about today are
large multi-national technology firms who
are at the forefront of developing this technology
and states.
Specifically national security and defense
components of the state apparatus.
As a meta-point, because we love meta-points,
this is going to be a talk that demonstrates
how we can do tractable research in AI governance
and AI strategy, given information that we
have today, to figure out what futures could
look like, should look like, that are more
likely to be safe and beneficial than not.
So hopefully by the end of this you can feel
like there are some things that we can figure
out in this large landscape of questions that
all seem really large and uncertain.
So I'll take you through three things.
First I'm going to expand on this case for
why looking at actors and strategic interactions
is one of the most fruitful ways of looking
at this problem.
And then I'm going to take you through a toy
model for how you can think about strategic
interactions between firms and governments
in this space.
And then finally, I'm going to apply that
to a case study which gives you some meat
to the bones of what I'm talking about.
And we'll end with a few thoughts on how you
can take this forward if you're interested
in using this.
So in terms of the propositions, I think there
are three key reasons, and quite obvious ones,
for why focusing on actors is a good idea.
Number one, actors are part of the problem,
and a big part of it at that.
Specifically misaligned actors, whose divergent
goals can lead you to a suboptimal outcome.
The second is that actors are very much exactly
the people who are shaping the solutions that
we talk about.
So whenever we talk about what solutions to
AI governance look like, those are products
of decisions that actors are making.
Number three, I think we are less uncertain
about the nature of actors in this space than
we are about a bunch of other things.
And so gravitating towards the things that
we are more certain about makes a bunch of
sense.
So I'm going to run through these in turn.
Number one, ask yourself this question: Why
do we not assume that the safe development
and deployment of transformative AI is a given?
You would tend to come across two types of
answer to this question.
The first bucket of answers tends to be that
it's just a really, really hard technical
problem.
It's not easy to guarantee safety in the design
and deployment of your system.
Putting that bucket aside, the second bucket
tends to rely on you believing these three
statements.
Number one: That there are a number of actors
who are out there who prioritize capabilities
above safety.
Number two: You also have to believe that
these actors aren't incompetent.
If they were incompetent, we wouldn't have
to worry about them, but you have to be convinced
that there's at least a subset of them that
have the ability to pursue capabilities above
safety.
And that leads you to number three: Which
is that plausibly they could get there first.
So if you believe these three things, then
you believe that misaligned actors are going
to be at least part of this safe development
and deployment problem that we need to solve.
Reason number two why focusing on actors makes
sense.
We often talk about solutions, and if you
read a bunch of the research in this space
you'll have propositions floating around of
things like multilateral agreements, joint
projects, coordination things, etc.
The quite obvious thing to state here is that
all of these are products of actor choices,
capabilities, incentives.
Upstream of these solutions are a set of actors
that are haggling and tussling over what these
solutions should look like.
And so, analysis-wise, we should be focusing
upstream to try to figure out what solutions
are likely versus unlikely, what solutions
are desirable and undesirable.
And then, critically, how do you make the
likely thing the desirable thing that you
actually want?
Reason number three is because we are less
uncertain about actors.
Here are a couple of photos of my colleagues
who work in AI strategy.
There's a ton of uncertainty in this space.
And it's a bit of an occupational hazard:
you just have to be comfortable making some
assumptions that you can't really validate
at this point in time, given the information
that we have.
The point here isn't that uncertainty is a
bad thing, it's just kind of a thing that
we have to deal with.
The point here, though, is that I think, among
a number of things, we are less uncertain
about the nature of actors compared to a lot
of the other parameters that we care about.
The reasons being that A: You can observe
their behavior today, more or less.
B: You can look at the way that these very
same actors have behaved in the past in analogous
situations of governing emerging and dual-use
technologies.
And C: We've spent a lot of time across
a number of academic disciplines trying to
understand the environments that constrain
these actors, whether that's in economics,
policy, politics, legal situations, etc.
And so we have a fair number of models that
have been developed through other intellectual
domains that give us a good sense of what
constrains these actors and what supports
these actors' behaviors in that sense.
So three reasons why actors are a good thing
to focus on.
Number one: They're part of the problem.
Number two: They're part of the solution;
they design the solutions.
And number three: We have less uncertainty,
although still a fair amount of uncertainty,
about what these actors do, think, how they
behave.
So gravitating towards those interactions
between them makes a bunch of sense, as plausibly
an area that can tell you some stuff about
AI strategy.
I'm going to assume that you buy that case
for why focusing on actors is a good idea.
And we're gonna segue into actually talking
about the actors that we care about.
So here are a subset of who I think the most
important actors are that we need to think
about in the space of AI strategy.
Number one: You've got the US government.
Now the US government first really came out
in 2016 and said AI is a thing that we care
about.
It kicked off with the Obama administration
establishing an NSTC subcommittee on machine
learning and AI.
Subsequently, across 2016, there were five
public workshops hosted, there were requests
for information on AI, and that culminated
in a set of reports at the end of 2016 that
collectively made the case that AI is a big
deal for the US economically, politically,
and socially.
Since the change in administration, there's
been a bit of other stuff going on that's
distracted the U.S. government.
But what's not to be forgotten is that the
DOD sits alongside/within the U.S. government,
and they haven't lost focus at all.
So turning a little bit of a focus to the
DOD specifically, in 2016 as well, they commissioned
a bunch of reports that explored the applications
of AI to DOD's missions and capabilities.
And that set of reports made a case for why
DOD, specifically, should be focusing on AI
to pursue military strategic advantage.
AI was also placed at the center of the Third
Offset Strategy, the latest piece of military
doctrine that the U.S. put forward.
The last little data point is that in 2017
Robert Work established the Algorithmic
Warfare Cross-Functional Team.
And what the remit of that team is explicitly
is, to quote, "accelerate the integration
of big data and machine learning into DOD's
missions."
And that's just a subset of the data points
we have about how much the DOD cares about this.
So that's US.
Now we're gonna turn to the Chinese government,
who, in quite a different fashion, but in
similar priority, has placed AI at the center
of their national strategy.
Among many data points that we have, I'll
point out a couple.
We had the State Council's New Generation
AI Development Plan published in 2017.
And in that there was a very explicit statement
that China wanted to be the world's leading
AI innovation center by 2030.
In his report to the 19th Party Congress,
President Xi Jinping also reiterated the goal
for China to become a science and tech superpower,
and AI was dead center of that speech.
Turning again to the military side of China,
the People's Liberation Army have also not
been shy about saying that AI is a thing that
they really care about and really want to
pursue.
There are a number of surveys from the Center
for a New American Security that do a good
job of summarizing a lot of what the PLA is pursuing.
And as of February 2017, there were a number
of data points telling us that the Chinese
government was pursuing what they call an
"intelligentization" strategy.
Which basically looks like unmanned, automated
warfare.
And as you can imagine, AI plays a very central
technical role in helping them achieve that.
Last but not least, there was the establishment
of the Civil Military Integration Development
Commission.
And that's headed up by President Xi Jinping,
which signals how important it is to China.
And what that does, among a number of other
things, is make it incredibly seamless for
civil AI technologies to be translated through
to military applications as a state mandate.
So that's China.
And the last subset of actors I'll point to
are multinational technology firms.
And these are the folks who are conclusively
leading the way in terms of developing the
actual technology.
I'll point out a couple of the leading ones
in the US and China, and the reason there
being that A: They are the leading ones worldwide.
But B: Also there is something there about
them being US v. Chinese companies, and I'll
say a little bit more about that in a sec.
But you've got the likes of Alphabet, DeepMind
specifically, Microsoft, etc.
In China, you've got Baidu, Alibaba, and Tencent.
And these guys are all competing internationally
to be leading the way.
And they also have some interesting relationships
with their governments and their defense components
as well.
So these are the actors that we're talking
about.
What do we do with information about them?
How do we look at what they do, what they
think, how they act?
And how do we interpret that in a way that's
useful for us to understand the space of AI
strategy?
What I'm gonna do is give you a toy model
for how you can think about doing that, and
this is one of many ways you can be considering
how to model this space.
First you can break down each of the actors
into three things: their incentives, their
resources, their constraints.
Their incentives are the things that they're
rewarded for.
What behaviors are they naturally, structurally
incentivized to pursue?
And what behaviors consistently are rewarded
such that they keep pursuing them?
Resources.
What does this particular actor have access
to that other actors don't?
Whether that is money, whether that's talent,
whether that's hardware.
And constraints, finally, are the things that
constrain the behavior of these actors.
What do they care about that stops them from
doing the thing that's optimal for their goal?
That can be a lack of resource, that can also
be things like public image, and a number
of other things that any given actor could
care about.
So each individual actor can be analyzed as
such, and then you can start looking at how
they interact with each other in bilateral
relationships.
And the caricatured, simplified dichotomy here
is that you can get two types of relationships.
You can get synergistic ones or conflictual
ones.
Synergistic ones are the ones where you have
actors pursuing similar goals, or at least
not mutually exclusive goals, where there are
complementary resources at play, and/or one
actor has the ability to ease a constraint
for the other.
And so naturally you fall into this synergy
of wanting to support each other and cooperate
on various things.
On the other hand you can have conflicts.
So conflicts are areas where you've got divergent
goals, or at least goals that aren't fully
aligned, but that's not sufficient.
You also have to have interdependency between
these actors.
You need to have one depend on the other for
resources or one to be able to exercise constraints
on the other, such that you can't ignore the
fact that the other actor is trying to pursue
something that's different to what you want.
It's key to flag that synergy sounds nice
and conflict sounds bad, but you can get good
synergies and bad synergies, good conflicts
and bad conflicts.
An example of a good synergy is one where
you incentivize cooperation pretty naturally
between two actors that you want to cooperate
on something like safety.
An example of a bad synergy, which we'll talk
about in a second, is one where you incentivize
the pursuit of say, a somewhat unsafe technology
and the pursuit of that technology is rewarded
by the other actor.
An example of a good conflict could be one
where you introduce friction, such that you
slow down the pace of development or incentivize
safety or standardization because of that
friction.
An example of a bad conflict is one where
you can get race dynamics emerging between
two, for example, adversarial military forces.
So the point is just: don't fall into the
trap of thinking that synergies are always
good and conflicts are always bad.
And last but not least, if you really want
to go wild, you can look at a set of bilateral
relationships in a given context.
That's what I do.
I look at a set of bilateral relationships
in US and a set of bilateral relationships
in China, and try to figure out how this mess
can be structured and make sense and tell
you something about what's likely to occur
in that given, say, domestic political context
that I care about.
This is kind of all a little bit abstract,
so we're going to take a sec to concretize
this by looking at a recent case study, which
is the Google Project Maven case study.
For those who aren't familiar with what happened
here, the long and the short of it is that
in March 2018 it was revealed, against Google's
wishes, that they had become a commercial
partner in the DOD's Project Maven program.
Project Maven is a DOD program explicitly
about accelerating the integration of AI
technologies, specifically deep learning
and neural networks, to bring them into action
in active combat theaters.
Now, when we look at this case study, we can
try to put it into this framework and understand
a) which actors matter, b) what matters to
these actors, and c) how that's likely to
pan out, and then we can compare and contrast
to what actually panned out, and that can
tell you something about how these strategic
interactions end up mattering.
I'll also take a bit of a step back and say
this is an interesting case study for a number
of reasons, not least because it's a microcosm
case study of this bigger question of what
happens when a government defense force wants
to access leading AI technology from a firm.
That, in general, is a question that we actually
care a lot about, and we specifically care
about how it lands and what happens and who
ends up getting control of that tech.
So, when we're walking through this case study,
think about it as an example of this larger
question that is generally very decision relevant
for the work that we do.
The first actor we can think about is DOD.
Their incentives are quite clear.
They want to have military strategic advantage
in this particular case by pursuing advanced
AI tech.
The resources that they have is they have
a lot of money.
The constraint that they have is that they
basically don't have in-house R&D capabilities,
so they don't develop leading AI tech within
DOD.
That means that they have to go to a third
party.
Enter Google management, who make decisions
on behalf of Alphabet.
Their incentives, again caricatured but
plausibly somewhat accurate, are that they're
pursuing profit, or at least a competitive
advantage that will secure them profit in
the long run.
The resource that they mostly have is this
technology that they're developing in-house.
They have many constraints, but the one that
ended up mattering here, surprisingly, was
the public image constraint.
Google has a thing about doing no evil, or
at least not doing enough evil to get attention,
and that ended up being the thing that mattered
a lot in this case.
Last but not least, you've got Google engineers.
These are the employees of the company.
Their incentive, again a simplified caricature,
is that they want rewarding employment.
They want employment that is not just financially
rewarding, but that somewhat aligns with their
values as individuals, and with the reputation
that they want to have as a person.
Their resource is themselves.
In case you haven't caught up, AI talent is
one of the hottest commodities around, and
people will pay a stupid amount of money for
a good AI engineer these days; so by being
an engineer, you are that really good resource.
A constraint that they face is that they
don't have access to decision-making tables.
As an employee, you are fundamentally structurally
limited by what you can do or what you can
say, in terms of it affecting what this company
does or doesn't do.
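As a toy illustration (not part of the talk itself), the three actor profiles above can be sketched in code. The `Actor` fields, the labels, and the classification rule are all hypothetical simplifications of the incentives/resources/constraints framework, not a formal model:

```python
from dataclasses import dataclass

# Hypothetical encoding of the actor framework: each actor gets goals,
# outcomes it opposes, resources it controls, and needs (constraints only
# another actor can ease). All field values here are illustrative guesses.

@dataclass(frozen=True)
class Actor:
    name: str
    goals: frozenset      # outcomes the actor is rewarded for pursuing
    opposes: frozenset    # outcomes the actor will push back against
    resources: frozenset  # things it controls that other actors may need
    needs: frozenset      # constraints only another actor can ease

def relationship(a: Actor, b: Actor) -> str:
    """Caricatured dichotomy: both synergy and conflict require interdependence."""
    clash = bool(a.goals & b.opposes) or bool(b.goals & a.opposes)
    interdependent = bool(a.resources & b.needs) or bool(b.resources & a.needs)
    if interdependent:
        return "conflictual" if clash else "synergistic"
    return "independent"  # divergent goals alone aren't enough for conflict

dod = Actor("DOD",
            goals=frozenset({"Maven contract"}), opposes=frozenset(),
            resources=frozenset({"federal money"}),
            needs=frozenset({"leading AI tech"}))          # no in-house R&D
google_mgmt = Actor("Google management",
                    goals=frozenset({"Maven contract"}), opposes=frozenset(),
                    resources=frozenset({"leading AI tech"}),
                    needs=frozenset({"federal money", "AI talent"}))
engineers = Actor("Google engineers",
                  goals=frozenset({"value-aligned work"}),
                  opposes=frozenset({"Maven contract"}),
                  resources=frozenset({"AI talent"}),      # talent is scarce
                  needs=frozenset({"seat at the table"}))

print(relationship(dod, google_mgmt))       # → synergistic
print(relationship(google_mgmt, engineers)) # → conflictual
```

Note how, under this toy rule, divergent goals alone don't produce conflict: the engineers only constrain management because management needs their talent, which mirrors the interdependency point made earlier.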
Think for a sec about these actors and how
they're likely to interact with each other.
This is an exercise in asking: given that
you can observe this behavior by key strategic
actors in the AI space, what should we assume
is going to happen?
Is that a good or bad thing?
And what's the end outcome in terms of things
we care about, like control of this technology?
Because I'm running out of time, I'm going
to give you a spoiler alert, and these are
the two main bilateral relationships that
ended up really mattering in this case.
You had a synergistic one between Google management
and DOD.
This is quite an obvious one: DOD had a bunch
of money and wanted the tech; Google had the
tech and wanted the money.
So that's a kind of contractual relationship
that fell out pretty naturally.
What was particularly interesting, House of
Cards-y, about this one is that the contract
itself wasn't that large.
It was $15 million, which is pretty dang large
for most people, but for Google that's not
much.
What was key, though, is that engaging in
Project Maven helped Google accelerate the
authorization they needed to access larger
federal contracts.
Specifically, there's one on the horizon called
the JEDI program.
JEDI actually stands for something quite sensible,
the Joint Enterprise Defense Infrastructure;
it just happens that the acronym worked out
for them.
What that contract is about is providing cloud
services to the Pentagon.
That contract is worth $10 billion, which
even an actor like Google doesn't sniff at.
So engaging in Project Maven, by all accounts,
helped accelerate their authorization to be
an eligible candidate to vie for that particular
contract.
We'll revisit that in a second, but that one
is still live and that's a space to watch
if you're interested in this set of relationships
that we're talking about.
In any case, that's a synergistic one.
Then you've got the conflictual one, and this
emerged between Google management and Google
engineers.
Basically, Google engineers kicked up a fuss
and they were really upset when they found
out that Google had engaged in Project Maven.
Among the things they did was to start an
employee letter that was signed by thousands
of employees, notably by Jeff Dean, the head
of Google AI research, as well as a number
of other senior researchers who really matter.
The letter basically asked Google to stop
engaging in Project Maven.
Reportedly, dozens of employees also resigned
as a result of Project Maven, particularly
when Google Cloud wasn't budging and was
still engaged in it.
Google management actually knew this was going
to be a problem.
There were a number of leaked emails in which
the head of Google Cloud was very explicitly
concerned about the public backlash that would
occur.
Throughout the whole thing there were a number
of attempts by Google management to host town
halls and meetings to assuage the engineers,
and that didn't do much.
These two actors were in a somewhat conflictual
relationship: one wanted Google to pursue
the contract, one didn't.
The finale of this whole case study is that
in June Google announced that they would not
renew their Project Maven contract.
So Google would continue until the contract
ran out in 2019, but they weren't going to
take up the next round that they were originally
slated to.
In some ways this was surprising for a number
of people, and you can go all psychoanalytical
on this; there are a number of things you
can take from it that tell you something about
where power sits within a company like Google.
The cliffhanger, though, which is a space
that we need to continue to watch, is that
as a result of this whole shenanigan, Google
recently announced their AI principles.
In those they made statements like, "We're
not going to engage in warfare technologies
or whatnot," but there was also an out in
there that basically allows Google to continue
to engage in Pentagon contracts, e.g., this
JEDI program that they really want.
The cynic in you can think of this as a case
where Google basically just won: they assuaged
the concerns of their employees and looked
like they responded, but in practice they
can still pursue a number of military, or
at least government, contracts that they originally
wanted to pursue.
In any case, the meta point here is that you
can look at a case study like this, think
about the actors, think about what strategic
interactions they have with each other, and
it can plausibly tell you something about
how things are likely to pan out in terms
of control and influence of a technology in
this case.
Key takeaways to leave you with: a) there's
a case for looking at strategic interactions
as a domain from which you can get a lot of
information about AI strategy.
Particularly you can look at what's likely
to occur in terms of synergies and conflicts,
and what bottlenecks are likely to kick in
when you think about cooperation as a mechanism
you want to move forward with.
And not just descriptively: you can also think
about strategic interactions as a way of telling
you what you should be doing now to avoid
outcomes that you don't want.
If you can see that there's a conflict coming
up that you want to avoid, or a synergy coming
up that is a bad synergy, or will translate
into unsafe technologies being developed,
then you can look upstream and say, "What
can we tweak about these interactions, or
what can we tweak about the incentive structures
of these actors to avoid those outcomes?"
Finally, the meta point is that I've got a
ton more questions and hypotheses than I have
answers, and that's the case for every researcher
in this space.
And so, as Carrick mentioned, there's a bunch
of reasons to think this is a really good
area to dive into and if you have any interest
in doing analysis like what I described, or
to address any of the questions that were
on Carrick's presentation, please come talk
to us.
We'd love to hear about your ideas, and we'd
love to hear about ways of getting you involved
and getting you guys tackling some of these
questions as well. Cool.
Thanks very much.
All right.
Have a seat guys.
We've got some time for a little Q&A, and
again, through the Bizzabo app or on the website
at sf.eaglobal.org/polls would be the place
to submit your questions.
A number have already come in.
One question just for starters, a lot of the
AI talk at an event like this tends to focus
on AGI, that is general intelligence, but
I wonder if you think that this kind of governance,
and the dynamics that you're talking about,
becomes important only as we approach general
intelligence or if it becomes important much
sooner than that potentially?
Do you want to take this?
I think there's a set of things which I hope
are robust to look at regardless of what capabilities
of AI we're talking about.
The mindset that at least I approach it with,
and I'd say this is pretty general across
the Governance of AI Program that we both
work at, is that it's important to focus on
the high-stakes scenarios, but there is at
least a subset of the same questions that
translates into actions relevant for nearer-term
applications of AI.
I do think though, that there are some strategic
parameters that significantly change if you
assume scenarios of AGI, and those are absolutely
worth looking at and will to some extent change
the way that you analyze some of those questions.
I would also like to add that it depends a
little bit on what part of the question you're
looking at.
I think when we think in terms of geopolitical
stability, balance of power, and offense-defense
dynamics, near-term applications matter a
lot.
Trying to keep that sort of stable and tranquil,
as you potentially move from that up to something
like AGI, so that you're not already adversarial
or locked into these dynamics, is quite important.
A question about just the two nations that
you spoke about, the United States and China.
It seems that the cooperation between enterprises
and the government is much tighter and more
collaborative in China.
How do you think that... first of all, is
that a fair assumption?
And how do you think that affects where this
is likely to go?
Yeah, I think that's absolutely a fair assumption.
I think one of the key differences, among
many, but one of the most notable ones in
China, is that the relationship between their
firms, their government, and their military
is, I don't want to say monolithic necessarily,
but there's a lot more coherence in the way
those actors interact, and the alignment in
their goals is a lot closer than what you
get in the US.
In the US it's pretty fair to assume that
those are three pretty independent actors,
whereas in China that assumption is closer
to not being true, I think.
In terms of the implications of that, there
are a number of them.
The most obviously robust implications are
that the pace at which China can move with
respect to pursuing certain elements of its
AI strategy is a lot quicker, and a lot more
coherent.
I would also plausibly say that the Chinese
government have more capacity and more tools
available to them to exercise control and
influence over their firms than the US government
has over US firms.
That has a number of implications, which I
don't want to go on record to put on paper.
But you can use your imagination and figure
out what that will tell you about certain
scenarios of AI.
Is there any sort of... we clearly see some
power exerted by Google engineers in this
case.
It maybe is unclear exactly where things shake
out, but it's a force.
Right?
I mean people can leave Google.
They're eminently employable in lots of other
places.
Yeah.
Are there any examples or signs of that same
consciousness among Chinese engineers, who
might say, “Hey, I'm just not gonna do this.”
That's an excellent question.
I don't have a very clear answer to that.
There are a number of researchers, who are
working on getting answers to that, and will
have better answers than I do.
I'll particularly flag Jeff Ding, who's a
researcher at the Governance of AI Program,
who does excellent work on trying to understand
what the analogous situation looks like in
China.
There's Brian Z, who is potentially in the
audience.
Yo.
Brian!
Hey.
Brian is also working on this, and trying
to understand it better, as well.
So, yeah.
I can't comment on that necessarily, but there
have been fewer data points, is the one thing
that I conclusively can say.
What I will say is, there might be something
like a third option, where Chinese AI researchers,
again, who are quite employable, could go
to DeepMind or something that maybe seems
a little more neutral, if this is something where
they don't like the dynamic.
But I'm not sure this is something that has
actually taken off.
Or again, this might be something where having
something like an intergovernmental research
body, one that pursues science in a pure sense
and has international credibility, might be
quite useful.
It can provide an exit for people who, if
there is a race dynamic, aren't sure they
want to be engaged in it.
Lot of questions coming in through the app.
We'll try to do as many as we can.
It's just about time for our lunch, but we
can stay for a couple extra minutes.
One question on the possible role of patents
and intellectual property in this.
Do those rules have force, or not really?
In the United States, the US Department of
Defense has the right to use any patent and
pay just compensation, so you can't actually
use a patent to block DOD.
There's a special exemption for it.
Other IP, between firms, it's not uncommon
for Chinese firms to steal a lot of American
intellectual property.
I don't know how this would work, for example,
with the Department of Defense interacting
with a Chinese patent.
I haven't looked into that.
A number of people complimented you on the
model of focusing on these actors and then
the general paradigm.
But one somewhat challenging question is,
how much do you think individuals matter to
this analysis?
For example, in this Google case, at least
one questioner says, “Eric Schmidt, personally,
is a big part of this story”.
So, if you zero in on that one person, you
get these idiosyncratic possibilities, where
maybe if it was just one individual swapped
out, things could be quite a bit different.
What do you think about that?
Yeah.
That's an excellent question.
Schmidt definitely is a particular individual,
who had a lot of influence on this particular
case.
I suppose there's two ways to answer this
question.
One is that, yes, individuals matter, but I
lean towards thinking that people assume
individuals matter more than they actually
do.
I think fundamentally, a person like Eric
Schmidt is still constrained by the structural
roles that he has been given, in relation
to some of the institutions that matter.
So that's one answer.
I think two, even if individuals do have a
fair amount of influence in this case, if
we're talking about trying to do robust analysis,
it's a better analytical strategy to focus
on an aggregate set of preferences housed
in an entity that's likely to exist for a
while, or that you can reasonably assume
will have a role to play in AI strategy
and governance.
I think individuals tend to turn over too
quickly for it to be reasonable to place a
lot of analytical weight on them.
Schmidt again is a bit of an exception in
this case, because he's been around and dabbling
in both of these scenes, in terms of defense
and Google, for a fair amount of time.
I also think there are fewer individuals you
could point to, but a larger number of actors
whose behavior you can consistently observe
historically, which is really where the power
of the analysis comes from.
What I would also add is that, I think sometimes
it matters a little bit who the individual
is and what they're motivated by.
I think for most people, their values and
what motivates them are a little underspecified.
As a result, they can be pushed around by
the dynamics around them.
Whereas I think one of the reasons it makes
sense to have people who are motivated by EA
and altruistic considerations get involved in
government and in these firms is that they
can potentially be steadier, less subject to
the currents, and keep their eye on what part
of this is actually important to them.
I'm not sure most people have that bedrock.
So that's a great segue into our second to
last question, which is who has how much power
here, as you guys see it?
The questioner puts forward the hypothesis
that there's not that much talent in this
space.
The best talent is so scarce that maybe
that's where the most power really lies,
which would suggest
an evangelism opportunity, or a very specific
target for who you'd want to reach out to
with a particular message.
Do you think that's right?
How do you see the balance of power, as it
exists today?
I don't know the overall balance of power.
I do think it is the case, and it seems to
be the case, that researchers do have a lot
more power than they would in normal industries.
Which is why I think the Department of Defense
actually needs to cater to AI researchers,
in a way they haven't really ever catered
to cryptographers or other bodies where they've
gone in with their money and papered over
things.
With that being the case, I think if the AI
research community allows itself to act as
a political bloc with values it wants to
advance, then it will have to be taken
seriously.
The AI research community, generally, has
very good cosmopolitan values, they do want
to benefit the world, they don't have very
narrow, parochial interests.
I think having them treat themselves as a
political bloc, and maybe evangelizing to
them to do so, could be a fantastic lever
in this space.
One damper to put on that.
I promise I'm not a skeptic by nature, but
historically, I think research communities
haven't mattered as much as one would hope.
That's also true.
Looking at cases like biotechnology and
nanotechnology, where somewhat analogous
concerns popped up, you also had this
transnational research community vibe.
Not even just a vibe, actually, but
institutions, professional networks, and
whatnot that constitute that epistemic
community.
That has had limited influence on decisions
that are made by key actors in this space.
It hasn't had no influence.
That's absolutely not true.
There are some really good examples of this
transnational research community mattering
a lot, but those have been fewer and farther
between than one would hope.
I'd like to say something on both sides of
this, because I think you're right.
This is a difficult line.
There was an idea with the International Air
Force.
When people were proposing this they were
saying, “Aviators are the natural ambassadors.
They're in the sky.
They fly between countries.
They're so international, that of course they
would never bomb one another.
This wouldn't make sense.”
They were saying this immediately before WWI.
Then, just like that, it wasn't even a question.
So there's a thing where you can be captured,
again like cryptography and other areas.
To some extent, physics, during the Manhattan
Project.
Most of the physicists were not American.
They came over and still engaged in this
effort.
But also, afterwards, physicists drove a lot
of the push towards disarmament, towards
safety protocols, towards taking this quite
seriously.
They still are actually a really important
part of that, so it's a little unclear.
I think with AI researchers, given that they
do seem to have a somewhat coherent set of
values, and they are a small group, they might
be more on one side of this than some of the
others.
But yeah, I agree.
It's not a guarantee and it's not easy.
One can hope.
So maybe the last question then would be,
can you sketch somewhat of a vision, I'm sure
it's still a work in progress, for what rules
or governance regimes we should be trying
to put in place?
We've got certain bodies of researchers putting
forward statements.
We've got Google now putting forward some
pretty cosmopolitan values and principles.
But what is the framework that everybody might
be able to sign onto?
Do we have a vision for that yet?
I think it probably doesn't make sense to
try and have too substantive of a vision,
at this point.
I like the idea, and I'm sorry I'm being a
lawyer here for a second, of a procedural
vision.
This is the idea where you say, “What we agree
to is that everyone will have a say.
That we won't move until we've put this
procedure in place, and that we've taken into
consideration not just the actors who are
relevant in the sense of having control of
this, but the people who do not have much say
in this, or who don't have access to the
levers.”
To some extent, other moral considerations
like animal welfare, and the benefit of the
earth, and these things.
I also think that, mostly in terms of
substantive research, we're trying to push
towards something like this procedure and
coordination, with the hope that good
outcomes naturally fall out, more than
putting forward too much of a substantive
suggestion.
The exception to this is something like a
commitment to a common good principle, which
I think is almost the same as a procedural
thing, because it's underspecified in some
ways.
I agree entirely with that.
I would hesitate for folks thinking about
working in this space to try to drive towards
articulating specifically what the end goal
is.
As I alluded to, I think there's a lot of
uncertainty, so those things are more likely
than not to not be robust at this stage.
That being said, I think there are a number
of robust things, like the common good
principle and a common commitment to it.
For as much as I sounded like a wet blanket
on that, I am hopeful that research
communities are really important.
I think anything that can (a) boost the power
of that community, and (b) make it stronger
and more coherent in terms of encapsulating
a set of values that we want, is good.
Then as well, I think the other robustly good
thing is to acknowledge that states aren't
the only actors that matter in this case,
which is a pitfall we tend to fall into when
we're talking about international governance.
In this case, firms matter a lot, like a lot,
a lot.
And a robustly good thing is to focus on them,
and place them somewhat center stage, at least
alongside states.
And to understand how we can involve them,
in whatever the solution looks like.
Awesome.
Well this has been an outstanding hour.
You guys are gonna be available for office
hours after this?
Immediately after this, yes.
Alright, fantastic.
How about another round of applause for Jade
Leung and Carrick Flynn.
