PRESENTER: It's a great
pleasure to introduce
this session on ethics.
As you know, there's a very well-documented definition of consciousness, which is fine. For combining supervised and unsupervised learning, we defined kappa gamma, which is the definition of ethics.
So we ranked all the people
at CBMM based on ethics.
And the two most ethical people that we could get, Matt Wilson and Max Tegmark, will be leading the discussion.
[INAUDIBLE] informal
conversations.
We started to speculate
about the notion
that we are a homo
sapiens 1.0, and what's
going to happen with
homo sapiens 2.0.
It turns out that
I got it all wrong.
It's Life 3.0.
I just found out that
that's the new era.
This is a book that's come out.
MAX TEGMARK: On Tuesday,
it's coming out.
PRESENTER: By Max Tegmark, in which I think you discuss a lot of these issues.
I look forward to reading it.
I think it's going
to be fascinating.
MAX TEGMARK: It's
a great doorstop.
PRESENTER: So the
way this is going
to work is we're going
to have each of them
give a short introduction
and spelling out
some of the main issues,
some of the main positions,
and what they think about
some of the conversations
about ethics in AI.
And then, we'd like to open
this up for discussion,
for questions, and joint
brainstorming by all the people
here.
OK?
MATT WILSON: Yeah, the focus will
be on provoking conversation
since this is not about
us instructing you
on the ethics of AI.
It's just raising--
because we all
have an interest in AI,
its relationship to brains.
These are things that
we've thought about.
And it's not the first
time we've done this.
This is just a
little white paper
that I put up here that came
from a similar session in one
of the CBMM retreats
from last year.
And Max and I were also on this.
And so I thought I would
just kind of throw this
up here because it points
out some topics that we
might want to discuss.
And they were the kind of easy things-- autonomous vehicles, medical diagnostics, lethal autonomous weapons.
These are specific applications
and discussions of, how should
we approach these things?
What are the concerns?
Autonomous vehicles, should you
kill the dog or the two people
in the crosswalk?
I mean, I don't think those
are really the key issues.
I think the key
issues are those that
raise the questions of
our social responsibility.
One of the reasons I am here is that I also teach-- have for many, many, many years-- a regular course in responsible conduct in science, which asks these same questions involving cognitive science and neuroscience: what are our responsibilities, what are the things we have to think about in the application of our science to technology and its impact on society?
So I sort of pointed out
some of these things.
There are some
interesting little links
that are worth looking at.
Other people have
talked about it.
But these are some
of the topics that
came up in this discussion.
The first one was this
idea of transparency
and predictability.
You develop these AIs,
what is your responsibility
to know what they're doing and
how they're actually doing it?
You build an algorithm that--
for instance, I
remember back in the day
before there were
really AIs, algorithms
that would determine whether
you qualify for a bank loan.
You just punch in
some parameters,
and it tells you, yes, you get a
loan; no, you don't get a loan.
What was the basis
for that decision?
I mean, you don't know.
So it turns out a lot of these algorithms had implicit bias. And that is that they chose factors that might have worked, but they were weighted in such a way that people of modest economic means were excluded, and those that had plenty of resources were rewarded.
Now, that bias wasn't explicit in the algorithm. It's just the idea: well, it's an algorithm, so it must be unbiased-- clearly not the case.
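To make that concrete, here is a toy sketch of the kind of scoring rule being described, where every factor looks reasonable on its own but the weighting systematically favors wealth over actual reliability. All feature names, weights, and numbers are invented for illustration, not taken from any real lender:

```python
# Hypothetical loan-scoring rule: each factor "might have worked,"
# but the weights let wealth proxies dominate actual reliability.

def loan_score(applicant):
    return (0.5 * applicant["savings_balance"] / 10_000   # wealth proxy
            + 0.4 * applicant["years_at_address"]         # stability proxy
            + 0.1 * applicant["on_time_payment_rate"])    # actual reliability

applicants = [
    {"name": "modest means, reliable", "savings_balance": 800,
     "years_at_address": 2, "on_time_payment_rate": 1.0},
    {"name": "wealthy, unreliable", "savings_balance": 90_000,
     "years_at_address": 10, "on_time_payment_rate": 0.7},
]

THRESHOLD = 3.0
for a in applicants:
    score = loan_score(a)
    print(f'{a["name"]}: score={score:.2f} ->',
          "approve" if score >= THRESHOLD else "deny")
# The reliable low-income applicant is denied, the unreliable wealthy one
# approved -- a bias nobody wrote down explicitly.
```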
And so, to the extent that we have a responsibility to at least think about how these algorithms are actually going to operate when they're put out into the world, that's this question, the transparency question. Then there's morality in decision-making-- that's, how do algorithms choose who lives and who dies?
There's the liability question, which-- eh, I think that's an interesting question-- comes up with autonomous vehicles: a vehicle kills somebody, who's responsible? The driver? The algorithm? The person who wrote the algorithm? Or the manufacturer who put the algorithm into the car?
Then there's the development of artificial general intelligence, which we kind of think of as an objective, and then there are some concerns, things that come up, which I sort of point out here, which are interesting.
And they relate to these--
the topic I pointed out before,
and that's this predictability
and transparency.
And that is with these kind
of domain-specific AIs,
you design them to solve
a particular problem.
You might see the metrics for
how well they're doing it.
You may even have insights
on how they're doing it.
With general intelligence,
you don't necessarily
have that access to
transparency and predictability.
And so that's a concern.
And then when you extend that to the idea of super intelligence-- that is, a general intelligence that now exceeds the capability of humans, life 4.0-- now we have all these issues that kind of come up with regard to AI, and that is sort of the displacement of effort.
If they can do things
better than humans, should
we just concede
that they should--
the AIs should have
both the authority
and, through their demonstrated
advanced capability,
the preferred judgment when
it comes to making decisions?
And so that, you can think of-- I think of that more as the-- if there's a singularity, it's not that the singularity of AI has kind of turned on us, although in the end here, there's the sudden emergence of super intelligence, the non-human-like motivations, which you might think of as evil intent, and then non-human-like psyches.
I think, really, the
question becomes,
when do we cede control to
entities that have demonstrated
superior capability?
And we do this now.
We make these
judgments currently.
In a social context,
we have those that
are in positions of authority.
We cede control to them.
And largely it's based
upon this assumption
that if you centralize power
in a few capable, smart people,
you're going to get
better decisions.
Although, empirically, that is in question; that's probably not the best way to make optimal decisions. The best way is to actually distribute that kind of decision-making across a broad spectrum of people and abilities.
So that kind of question,
when do we let, or even
beg the super intelligences
to help us out, cure diseases,
solve our problems, and then
ultimately decide our futures?
And so I thought those were
some of the issues that came up.
And I'll turn it over to Max.
MAX TEGMARK: Great.
Thank you so much for the invitation [? to be here. ?] [? It's ?] [? fascinating. ?]
So I'm just going to add a few
more words to what Matt said
here.
Yeah, I think of the
goal of this session
here as taking all the
knowledge that you guys are all
developing about how
to understand and build
intelligence,
biological intelligence,
artificial
intelligence, and think
about what you actually want
to do with all this knowledge.
I'm generally a
pretty happy camper.
Partly, I blame it on
my wife, but it's also
partly because I'm
optimistic that we can create
an awesome future with
technology as long as we win
the race between the
growing power of technology
and the growing wisdom
with which we manage it.
Now this school has been
almost entirely focused
on the growing
power of technology,
how do you figure
out how stuff works?
How do you make it better?
This discussion here is
about the wisdom part.
How do you want to use this?
So I don't have a white paper,
but I have a whiteboard.
So I'm just going to
write a few words on it.
In terms of the issues
that we face here,
what to do with our
technology, you already
summarized it very well, Matt.
I'm just going to write some
keywords in larger font.
Obviously, in the
very near term,
there's all the stuff
related to jobs.
What kind of career advice
should we give our friends
and ourselves in the future?
Is it true that technology will increase inequality?
If so, what should
we do about it?
And so on.
Then, there is the question
of the legal system.
As Matt mentioned, can
we make our legal system
more efficient and fairer by
having robo judges or something
like that?
If so, how can we
make this transparent
so we actually understand
what our systems are doing?
Do we want to go
in that direction,
ceding a lot of
control to machines?
Then, there is the
question of weapons.
Even though most of the progress and research in AI so far has come from the civilian sector, I would say, by far, most of the money is actually going to the military sector right now.
The US government
was just talking
about putting in $13 billion
more into militarized AI
and so on.
And there is a United
Nations meeting in November
to discuss whether
one should try
to get an international
ban on killer robots,
just like we banned
biological weapons.
But it's very up in the air,
interesting ethical arguments
on both sides of that.
Then there is the
business about bugs.
How can we transform today's--
raise your hand if you've ever
had your computer crash on you.
And maybe it was a nuisance,
or maybe it was even funny,
but it's less funny, of
course, if what crashed
was the computer
that's controlling
your self-driving car,
your self-driving airplane,
or your power grid, or
your nuclear reactors,
or your nation's
nuclear weapons system.
How can we transform today's
buggy and hackable computers
into really robust AI systems
that we can really trust?
It's not a particularly ethical question whether we should try to make our systems more robust and less hackable-- everybody agrees with that.
But the interesting
ethical question
is, how much of a
priority should that be?
Until now, it's been
a very low priority.
That's why your operating
systems keep crashing.
Should we decide that we
should shift more of our effort
also into this wisdom
part and making
things robust and unhackable
and doing what we want?
Or is the balance kind
of fine the way it is?
I think that's a fun
ethical question.
As we look farther
forward in time
comes the question that you
mentioned there about AGI.
Should we ultimately try
to build machines that
are as smart as us or smarter?
And if so, what kind of society
do we want to create with them?
What do we want the role of--
who do we want to be in control?
Us or some sort of machines?
If it's some sort of
machines, in that case,
how can we make machines
learn our goals,
and adopt them, and retain them?
And whose goals anyway?
That's a very ethical question
which cannot just be left
to computer geeks like myself,
because it affects everybody.
So what kind of society, what kind of future do we want to create?
That's very much an
ethical question, I think.
I get a lot of students coming
into my office for career
advice.
And I always ask them, where
do you want to be in 10 years?
If you come into my office and you answer that question by saying, oh, I think I'll have cancer maybe, or I'll be unemployed, or I'll have gotten stabbed, I'd get really pissed at you.
Because that's not a healthy
strategy for planning out
your future, just thinking about
everything that could go wrong
and trying to run
away from it, right?
I want you to come in with
fire in your eyes and be like,
this is what I
want to accomplish.
Then, we can talk about all
the obstacles, and pitfalls,
and how we can
navigate around them
so you can accomplish
your dreams.
But we, as a society, do
exactly that ridiculous parody
of future planning.
You go to the movies and watch
something about the future.
It's almost always
dystopia, right?
Almost always this stupid
terminator robot or some sort
of disaster.
I see very few serious
attempts to actually envision
what kind of future we want.
And I think that's a really
interesting ethical question,
too.
So we were thinking, Matt and I,
that this is your hour to bring
up-- well, actually we
can do whatever you want--
maybe we can start with the
things that are a little bit
more in the near-term.
And then if you feel that it's
too boring because everybody
agrees, we can move
farther into the future
to really get the adrenaline going.
So the floor is yours.
AUDIENCE: I guess this is a more [? immediate, ?] maybe, kind of practical question.
But a lot of
arguments [INAUDIBLE]
artificial intelligence
[INAUDIBLE] not to say we're
going to make a
weapon with this,
but it'll be useful
[INAUDIBLE] a system
that can see and understand
the world better.
Do you think there's
a moral obligation
on the researchers who take that
money from those organizations,
knowing that their long-term
goal may be to eventually make
killer robots or a
system that [INAUDIBLE]??
Or not because the immediate
state of [INAUDIBLE]
MATT WILSON: I don't know.
What do you all think?
I mean, would you
take the money?
And why would you take it?
I mean, the way the
question is posed
is like, you have the choice.
Don't take it and then what?
Stuff doesn't get done
or somebody else does it?
What do you think?
How many of you
would take the money?
MAX TEGMARK: Maybe we could
make this a bit more fun
by adding some color to it.
Because it's not
really either or.
I think there's
sort of a spectrum.
For example, there
were some chemists
who built this gas, Zyklon B.
You probably heard about it.
And they knew full
well that this
was going to be used
in concentration camps,
but they produced it anyway.
They were actually sentenced to
death in the Nuremberg trials.
That's one end of the spectrum.
Would you take that
funding to do that job?
On the other end
of the spectrum,
suppose you get a job designing a little drone for Amazon's new 10-minute delivery.
Yeah, you know that that
technology can maybe also
be used to drop bombs
sometime in the future,
but in the short term, Amazon
is not at all in that business.
And would you take that money?
Would you just
decide to stop doing
any kind of technology
development at all
because bad things could happen?
That's sort of the opposite
end, I think, of the spectrum.
Where would you kind
of draw the line?
MATT WILSON: Well, it's
somewhere in between, actually.
So you can say there's the socially irresponsible-- you know, sentenced to death. And then, there's just sort of the socially agnostic: OK, this technology is just going to make my shopping easier, right? But then there's this idea of the socially engaged.
Then I have a responsibility.
This is going to be important.
This is going to
impact people's lives.
And the question
really becomes not
is the technology
going to be developed,
but how is it going
to be developed?
And who is going
to control this?
And so the question
is really, do I
engage and try to influence
the direction and impact
or do I disengage and
let someone else do it?
And so, I mean, I imagine for all of you, when I say, who would take the money-- you know, it wasn't about the money. Who would feel either that it's appropriate to engage, or maybe even that they have some sense of obligation:
Look, I have the ability
to move this thing
in the proper direction.
Maybe I should actually engage.
AUDIENCE: So along those lines,
[INAUDIBLE] someone else might.
So my decision might not
actually stop the progress
of [INAUDIBLE].
And these kinds of feelings,
which everyone has,
will, I imagine,
propagate progress
in this field, kind
of irrespective
of any of the conversations
we have today.
My question, I
guess, [INAUDIBLE]
let's say we come up
with some solution
we agree with for jobs, and 95%
of the world agrees with us,
but 5% doesn't and
continues to develop
AI in a way that isn't in
accordance with our ideas.
Any resistance to our moral
outlook on AI will, I imagine,
end up squashing our
well-intentioned restrictions.
So how do you create a moral system that stops people from developing [INAUDIBLE] think are bad, but that is executed in a morally responsible way?
Well, any researcher who
develops that kind of thing,
[INAUDIBLE]?
MAX TEGMARK: I
think you're raising
two separate questions which
are both really important.
What should we do
as individuals?
And then separately,
what should we
encourage our government to do,
or the world governments to do?
There's two separate things.
Certainly as individuals,
raise your hand
if you know of Tom Lehrer.
His song about "Wernher von Braun," where he goes: once the rockets go up, who cares where they come down? That's not my department, says Wernher von Braun.
That attitude [? built ?]
V2 rockets for Hitler
to hit London.
Then, he started
working for the US
instead to build our rockets.
It's somewhat common
among scientists.
One can take the point of view that, certainly as an individual, one is just not going to do something that one feels is morally wrong, even if someone else will take one's place.
I'm pretty sure you would not
have developed the Zyklon B
gas, even if you knew that some
other chemist was going to do
it if you didn't.
AUDIENCE: It's kind of an important question. I imagine those who develop weapons now, building killer robots, will justify themselves, and might be justified, in certain terms: like, if I build these well, it will cause less damage to non-targets.
MAX TEGMARK: Well,
for the Zyklon B case,
you couldn't even
make that argument.
This wasn't a defensive
weapon to defend
Germany against invasion
by other countries.
And similarly, if you're in charge of-- there was a big, big scandal-- not among US AI researchers, but among US psychologists
and the American
Psychological Association,
where it turned out that
some psychologists had
been working on torture,
helping the CIA with torture.
And in the end, there was
a revolution in the APA.
And they adopted a
position now that says
any psychologists who would do
this will just get disbarred.
So I know, in some cases, this is justified, because, oh, we have to defend ourselves. But there are many cases where it's not even-- where it's even simpler.
I'm not sitting here
telling anyone what to do.
But as an individual, everybody
should draw the line somewhere,
I think, and say
these are things
that I think are
just morally wrong,
and I'm not going to do
it, even if someone else
is going to do it.
Then the question for everybody
is, where is that line?
And I also think we can also
do something separate, which
is try to speak up and
persuade other people
that this whole thing
should be stigmatized.
So bio weapons went through
this whole thing, for example.
And then the US, and Russia,
and China, and all the others
signed a ban on
biological weapons, which
really helped stigmatize this.
Same thing happened
for chemical weapons.
This kind of activism was,
in both of those cases,
driven a lot by biologists
and chemists, by scientists.
And they mainly had an effect not by individually boycotting work for bio weapons labs, but just from speaking up and using the expertise they had.
So that's another
thing that you can
do if there's something
you feel strongly about.
So stigma can be powerful.
Because if it turns
out that whoever
is trying to do these
bad things can only
get third-rate
scientists to work
for them who couldn't
get another job,
then they're probably not going to be able to do as much harm.
MATT WILSON: Yeah, I don't know. I think that this idea that you have to choose, right-- either I choose to develop AI knowing that it might be applied in some manner that I find morally or ethically objectionable, or I don't engage, or I engage in some sort of proactive way-- I don't think that AI necessarily falls into that category.
I think it's closer-- you
think of like nuclear energy.
So you think there was
nuclear technology.
And there are ways of
developing it that are more--
that are easier or more
difficult to weaponize.
And so, in developing technology, the science can actually help to contribute to the development of a technology which maximizes the societal benefits while minimizing or, in some ways, mitigating, or enhancing the ability to oversee, the negative impacts of that technology.
And when thinking about AI, these questions of, like, transparency, for instance, and predictability-- these are things where you can choose to engage and develop a technology which possesses these attributes, which I think would simultaneously make it more attractive. If you had the choice of using a technology where you had no idea what it's going to do, or one that actually has been designed and engineered specifically to provide that kind of access, it's more likely the latter would be adopted.
And so, again,
you can contribute
to something which is less
likely to be easily abused.
And so that would be
a positive engagement.
You're contributing--
AI's gonna get developed.
You can contribute to
it in a way that will
help to advance the societal--
the positive societal benefits.
And I think this is one
of the things about AI,
unlike these other very
narrow technologies, where
it's very hard to
see how they could
be directed in a
beneficial way necessarily.
With AI, I think this is
one of the things that
makes it an interesting,
important topic for discussion
and debate.
And that is, in principle,
it could apply and infiltrate
everything that we do, literally
every single human activity
that we engage in could be--
right now, many of them
are impacted by AI.
So I don't know.
MAX TEGMARK: I agree with that.
AI, of course,
has very dual use.
It can be used for
so many good things
and also for causing problems.
And I think-- just to briefly paraphrase what you said there-- I think it's silly to ask, are you for AI or against it?
It's like asking, are you
for fire or against fire?
Well, I'm for using fire to keep
our homes warm in the winter.
I'm against fire for arson.
If I'm worried
about fire, I'm not
going to go campaigning
to ban fire,
but, if I really care
about it, maybe I
could invent the
fire extinguisher
or advocate for starting
a fire department
and things like this.
MATT WILSON: Fireproof
your building.
MAX TEGMARK: Yeah, there are a
lot of very constructive things
one can do as an
AI researcher which
will greatly increase the
odds that things go well here.
And finally, coming back to
your moral question there,
I think there's one thing
which you can all do,
where I think there's
absolutely no downside
but which many people
still don't do.
Just think about
these questions.
Whenever you're developing
some new technology,
just think a little bit about
what might the implications be.
Some people, almost
on purpose, just
aren't interested
in the implications.
They think it's cool to think or invent.
That, I think, is
taking it too far.
I think it's always
good to think about it.
And then, you can make your
mind up on a case-by-case basis.
AUDIENCE: All
right, so I hang out
in a community [INAUDIBLE]
sort of a moral obligation
or a most pressing
problem for humanity,
and they think it's AI safety.
We heard a talk a few days
ago about how it's actually
having-- or how AI's going
to take over all the jobs,
and that's the most
immediate problem.
[INAUDIBLE] we should
all be worried about.
What do you think is the
most pressing problem?
And are we all obligated
to go try to work on it?
MAX TEGMARK: First
of all, I think
there's no consensus as to what
the most pressing problem is.
Different people care about
slightly different things.
And I think it's good that it's
like that, where all bases get
covered, rather than
everybody is just barking up
the same tree.
Second, these things are
all quite interrelated, too.
I think what unifies
them all is that they all
involve technology.
And as I said, I'm optimistic.
I think our goal should
be to always make
sure the wisdom with which
we manage our technology just
keeps up with the pace at which the technology itself grows.
You mentioned AI, and what was
the other one you mentioned?
AUDIENCE: AI, [INAUDIBLE], and bioterrorism.
MAX TEGMARK: Bioterrorism.
So maybe an AI developer can actually help fight bioterrorism, for example.
Certainly if you want to
have a better grip on who
or what's happening in
the bioterrorism area,
there could very well be
machine learning techniques
you can use to detect that
something isn't right here
and someone is
planning something bad.
It's not even so simple that, just because you only care about bioterrorism, you should ignore the rest.
We've seen so many examples of
how different technologies play
together.
So I think, figure out
what you're excited about
and what you're good
at in life and do that.
It's a quite good algorithm.
But if other people are very
excited about other things,
you can always think
about how your work can
help with that, too.
MATT WILSON: Yeah, and I
think the safety concern--
I mean all technology is kind
of subject to these safety
considerations and concerns.
So it's hard to
imagine that AI would
be applied in a way in which
people said, well, we're not
going to worry about safety.
We have autonomous drones.
We're not going to worry
about whether or not they
crash into people or do crazy--
there would be a lot of concern.
There will always be that
kind of safety oversight,
whether it's nuclear
weapons, AIs, or bathtubs.
I mean, there are going to
be these safety concerns.
I think the real question
is whether or not
AIs will be capable
of subverting
that kind of concern.
Will they be designed in a way that doesn't allow us to actually determine or even influence whether or not they operate safely?
And that's where, I think, the
science and the engineering
of AI and our
obligation to ensure
that they are subject to that
kind of oversight, that's
where that kind of comes in.
And I think this is the--
your admonition to simply
not move forward blindly.
If you're thinking about--
OK, these things are
going to be used,
and we know that
everybody is going
to be concerned about safety.
We just want to
make sure that they
can be subject to all
of these oversights
through these mechanisms
of predictability,
transparency, and
that sort of thing.
It gets harder when you get up into the AGI and the super intelligence, where now they have the ability to circumvent oversight, because now they're not under. They are over. They are the overseers. That might be a question.
But even there, I think that that move is not going to be made suddenly, overnight, in a flash, the singularity, where, suddenly, these capabilities place us in a subordinate position.
MAX TEGMARK: Leaving aside
singularity and human-level AI
and stuff like that and
staying closer to the present,
let me just push back on one
thing that you said there
so we don't get too boring
and agree on everything.
I would make the claim
that we're actually
not paying enough
attention to AI safety,
even in the short term.
And I think the
reason for that is
that if you have any science--
it usually starts out in the
lab, people messing around,
trying to get
things to even work
and with no impact
on society at all,
and then it's not that
relevant to talk about safety.
And then, you get to a
point where it's really
having an impact on society--
like cars, or nuclear
weapons or whatever.
And then people are very
concerned about safety.
And AI, I think, is right on
the cusp of where it's beginning
to come out and have an effect.
And we haven't yet really fully
absorbed the safety engineering
mentality that a lot
of other fields have.
It's, I think, absolutely
pathetic how many big hacks
there have been recently, for
example, even in this country.
There were more than one billion Yahoo accounts hacked.
The US government had over a
million top security clearance
things hacked because
they were cleverly
stored at the information
server in Hong Kong.
I mean, that--
Jesus Christ.
It just shows that
people are not
making these things a priority.
We've also had a lot of really
unnecessary accidents which
were caused by just
really lousy software
or lousy user interfaces.
Even Three Mile Island sort of killed the US nuclear industry, partly. And it's partly because of a really lousy user interface, where there was a lamp that wasn't correctly indicating whether a valve was shut.
This sort of stuff,
I think, is endemic--
it's because we're stuck in an
outdated mentality of dealing
with safety.
I talked earlier about how we want to win this wisdom race and make sure that the power of our technology is always outpaced by the wisdom with which we manage it.
But we always used to stay ahead by learning from mistakes. That was a great strategy for less powerful tech-- we invented fire, screwed up a bunch of times, invented the fire extinguisher; invented the car, screwed up a bunch of times, invented the seat belt and the airbag.
So a lot of people just
figure, well, yeah, we're
just going to always
keep doing it like that.
But when the technology
gets powerful
beyond a certain
point, we don't want
to learn from
mistakes anymore when
it's nuclear weapons we're
talking about or maybe AGI,
right?
We obviously want to get
things right the first time
because that might be
the only time we have.
So then you want to switch
into this safety engineering
mentality that we had when we
sent Apollo 11 to the moon.
And that worked.
There were a ton of things that could have gone wrong for Neil Armstrong, Michael Collins, and Buzz Aldrin, but they didn't.
Why?
The safety engineering.
People thought through very
systematically everything
that could go wrong.
And then, they made
a plan, and made
sure it didn't happen, right?
Because people felt the
stakes were too high.
They wanted to get it right the
first time, and they succeeded.
This is the attitude
I think we should
be having also when we let AI
be in charge of our power grids,
our nuclear reactors, and more
and more of our infrastructure,
and more and more of our lives.
I think, frankly, we're being
way too flippant about it
so far.
And there is actually
a great interest
among researchers to
work on these questions
to make AI more
robust and unhackable.
This thing that we did when
we worked with Elon Musk
to give out AI safety
research grants,
there was a huge appetite.
We were kind of blown away.
We got 300 teams
from around the world
who asked for $100 million
to do this research, which
was much more than Elon had
said he would give, of course.
But now, we gave out money
to 37 teams around the world,
many in universities
like MIT, doing stuff.
And we were hoping that the
governments of the world
would sort of step up
and say, OK, there's
clearly a lot of AI researchers
who want to do this stuff,
let's just make this a standard part of our computer science funding-- AI safety research as its own field.
But so far, there's
almost zilch.
Now there are
billions of dollars
that just go into making AI more
intelligent, but very little
into this.
And I think there's a real opportunity for making this a priority.
I think it makes sense for
governments, in particular,
to fund this.
Because private companies,
it makes sense for them
to fund the stuff that they can
own the IP for and make profits
on in the near term.
But safety stuff,
that's more going
to be beneficial in
10 years or 20 years
and is going to
help everybody, you
don't want anybody to
own the patent on that
and not share it, right?
It's much better if that's done by university researchers, [? that's ?] government funded, and then made public so everybody can use these safety solutions.
MATT WILSON: So I think [? that ?] was a good concern-- a concern, really, about AI. Other people, other concerns? I mean, what are the things that-- [? Tyler? ?]
AUDIENCE: [INAUDIBLE] handed over responsibility and decision-making to algorithms, [INAUDIBLE]. And we just saw very clearly what happens when [INAUDIBLE] algorithms the task of making more money in financial markets [INAUDIBLE] global recession in 2007, 2008. Again, they did a very good job of [? optimizing short-term returns. ?]
They did a very bad job at
stabilizing the global economy.
And what we saw, as a consequence of that, was exacerbated inequalities, not just in the United States, but all throughout Europe, arguably all throughout the world.
And in a sense, the
people who were using AI
had a certain objective.
And they used AI to their benefit, [INAUDIBLE].
So my question is, in light
of that little experiment,
having seen how it played
out, in thinking about that,
not just in terms of
the capability of AI
but who is using
artificial intelligence
and for what purpose?
Getting also back to the question about [? DARPA ?] funds, why are we developing these algorithms and for what purpose?
How can we think about increasing the number of people who are trying to use artificial intelligence?
How can we diversify
the community
of users and
developers [INAUDIBLE].
And what can we
do as a community
to include more people
into this project?
MATT WILSON: I think those are-- I mean, those are really great issues. I mean, they're very broad-- the sort of question of oversight. And I think the financial collapse is a great example, where you have technology placed in the hands of a few without the complementary oversight-- so AIs were serving the interests of a few, and the safety mechanisms were simply not up to the task.
And so there, you can argue
either, well, the solution is,
don't let anybody have--
don't automate trading,
or you place into
the hands of people--
this is through the
government-- the ability
to apply comparable AI as
an overseeing mechanism-- so
better AI to oversee the
other distributed AIs.
That's the role of government.
Government is-- it's
the arm of the people
to ensure that our interests are
protected against the interests
of the few.
Now, the question of how you engage the public-- either you have the public that says, look, the government is going to do this, or the government is just some kind of dissociated entity-- no, the government is, again, the arm of the people.
And so people need
to know what AI--
how AI is being used,
how it's developed,
and this is just the
general responsibility
to educate people.
More people should be empowered
to understand, develop,
and advocate for AI.
And so I think part
of our small mission
here is to contribute
in some way
to the development and
dissemination of mechanisms
for broader education
in the formation of AI.
But this is a small enterprise.
I mean, we're hoping
to influence a few,
the 30 here will influence
another 30 and another 30.
And in some way, you
will get the word out,
and that's part of the mission.
But I think the efforts-- OpenAI and other efforts to try to empower people, to not place this technology in the hands of the few but make it available to the many-- I certainly embrace that.
AUDIENCE: [INAUDIBLE] was that you had Goldman Sachs willing to pay top dollar for [INAUDIBLE] students to write groundbreaking code that people in the Securities and Exchange Commission had no idea how to keep up with. Talk about not [? commenting ?] [? your ?] [? code: ?] there was a real structural disincentive to write transparent code--
MATT WILSON: Exactly.
AUDIENCE: --able to interpret
and then to regulate
[? price. ?] On that hand,
it was a big arms race.
MATT WILSON: Yeah, so it is the arms race. I think this idea of enforcing this transparency-- that could be something that we simply agree on: look, complex algorithms that are not transparent, the risk of their application to society is too great, and we simply won't allow that to occur.
And we could do that.
If you were going
to legislate AI,
it wouldn't be to
keep the technology
from being applied but to
apply standards for how
it's going to be applied.
And I think that this is where
it requires people to engage,
to show how it could be done.
If it just becomes some intellectual exercise-- sure, it would be great if it could be done-- but we don't have any examples of that because we don't have competent people doing it.
And all the examples you
give, it's not-- it's really--
these are questions
of competence.
It just says, this
is what happens
when you do not ensure that
competent people are actually
in charge of this.
MAX TEGMARK: Right.
Exactly.
MATT WILSON: And the solution
to that is more competence.
MAX TEGMARK: I agree with that.
So, three separate things you mentioned that I want to comment on. First of all, about transparency: I think there's so much value in that for so many reasons.
And in fact, in my
research group at MIT,
we're doing this big
project on, what I call,
intelligible intelligence.
So if you have, for example, a deep learning system that you've trained to do something really cool, can you develop an automated way to transform it into a system that does basically the same thing, but in a way that you can actually understand and therefore trust?
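As a rough sketch of what that could look like-- this is a generic distillation approach, an assumption for illustration rather than the project's actual method-- you can train an interpretable surrogate to mimic the opaque model and then measure how faithfully it tracks it:

```python
# Minimal sketch: distill a trained "black box" into a readable surrogate.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

# The opaque model: a small neural net trained on the original labels.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(X, y)

# The surrogate: a shallow decision tree trained to imitate the net's
# predictions (not the true labels), so it approximates the same function.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the readable model agrees with the opaque one.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"fidelity to black box: {fidelity:.1%}")
print(export_text(surrogate))  # human-readable if/else decision rules
```

If the fidelity is high, the printed rules show what the network is, to a good approximation, actually computing.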
If anyone in the future is interested in working on that kind of stuff, you should apply for my postdoc [INAUDIBLE] email me.
Second, you asked
this great question
about what we can do to engage
more people in these issues.
I've asked myself that question many times.
And one of the
answers I came up with
is, I should really write a
book that's more accessible
and not so bogged down
[? with ?] terabytes
and gigabytes.
MATT WILSON: That's a good idea.
MAX TEGMARK: So I did.
We'll see you on
Tuesday how it goes.
And third, the government-- I think you put it very well there when you said that we shouldn't think of the government as just other people, as an amorphous blob; the government is ultimately us.
You should think
seriously about, actually,
if you care about these things,
how many different countries do
we have represented
here, Gabriel?
AUDIENCE: I don't
know the number.
A lot.
MAX TEGMARK: A lot.
So you should seriously
consider working a little bit
for your own government.
I'm not saying you should give
up your awesome research career
and run for president or
whatever in your countries,
but you don't need to.
There are many,
many ways in which--
there's a huge appetite
in governments [INAUDIBLE]
to have people who really know
technical things, for example,
about technology and AI to work.
And you can often
rotate and do something
for one year, or two
years, often even
on a part-time basis,
some consulting
without torpedoing
your research career.
And it's incredibly valuable
for these governments.
These are typically
non-political appointments.
So you don't have to
worry about the ad
hominem attacks being directed
at your family or that sort
of stuff.
You can really, really help.
So instead of complaining
about governments,
see if you can help
your government get
more wisdom into it.
AUDIENCE: [INAUDIBLE]
[? concern ?] is [? the ?]
comparison with climate change.
So there's an almost unanimous consensus of scientists that climate change is something that's happening, and yet the government-- the US government--
MAX TEGMARK: This one, yeah.
AUDIENCE: --is not
heading in this direction.
And I think that the evidence-- how much of it is understandable by people that are not scientists-- is much stronger for climate change than for artificial intelligence.
And I guess the problem with climate change is there are big corporations that, in their own interests, don't want to protect this. But this [INAUDIBLE].
[INAUDIBLE]?
MAX TEGMARK: I absolutely
share your concern, of course.
Money and profit
interests are, of course,
very often used also to try
to affect the governments
in the countries
and not necessarily
to listen to scientists.
I think, in AI,
I personally know
a lot of the leaders
of leading AI companies
today who are actually quite
idealistic people generally.
And the biggest risk of getting something like climate change happening on the AI side, I think, is actually on the military side.
Because if you start getting an arms race in lethal autonomous weapons-- not drones that are remote controlled by a human, but little things that just decide by themselves who to kill--
that's going to be so lucrative
once the market starts
to be there, that there's going
to be huge lobbying for it,
and it'll become
very hard to stop.
That's why I think that the
best chance you have of actually
preventing that arms race is
before it's really started,
which is basically now.
If you wait long
enough, it's going
to become like climate
change, I think,
where they will have
so much lobbying
it becomes impossible to stop.
AUDIENCE: [INAUDIBLE] This
arms race, [INAUDIBLE].
These drones are basically
artificial intelligence.
So because it has
already started, then,
what do you think we should do?
[INAUDIBLE]
MAX TEGMARK: I know. I think we should really encourage these negotiations
that are going to be happening
in the United Nations
in November.
We had an open letter come out this last weekend.
You might have seen
it in the news, where
the CEO of Google DeepMind and
a whole bunch of other people
signed this letter, saying
we support the UN's attempts
to negotiate a ban on
lethal [INAUDIBLE] weapons
and keep AI mainly
a civilian effort.
I know it sounds very hard
to do this, but there were--
a colleague of mine
at MIT was very active
in the biological weapons ban.
And these people also kept telling him it's impossible, it'll never be done.
But we have a ban on
biological weapons,
and we have a ban now
on chemical weapons;
we have a ban on blinding
laser weapons; and so on.
And why do we have that?
Who was it that
really pushed it?
It was actually the scientists.
The bio weapons ban was really
driven by biologists initially.
And if you go ask your
parents, what are the first--
what do they mainly associate
biology with, is it medicines,
or is it bio weapons?
They're probably gonna
say medicine, aren't they?
And biologists are really,
really happy about that.
If you ask what people mainly associate chemistry with today, they probably think of new, cool materials and things like that.
They probably don't
mainly associate it
with chemical weapons.
Chemists are really
happy about that.
That's why they fought so
hard for that ban, right?
In 20 years, if you ask your parents what they associate AI with, I hope it's going to be all sorts of wonderful new things, not some sort of horrible lethal autonomous weapons.
So I'm optimistic that if the researchers who really know most about this really push hard for it, this can go the same way as bio weapons and chemical weapons and actually become banned. Then, [? the ?] AI [? community ?] can focus on the civilian uses.
MATT WILSON: Yeah, but I think this also raises an interesting question about-- there was sort of an advocacy point. And that is, how do you get the word out? And I think that, unfortunately, climate change is an example of a failed effort. And this is scientists thinking that they can engage in this kind of public advocacy, that they could change the dialogue just through their words.
And I think these emails from
these climate scientists, where
their feeble efforts to try
to manipulate public opinion
came out, I think that did more
to undermine climate science
and climate policy than anything
that any scientist has ever
done, just those emails.
And so it's this
idea that somehow we
can socially engineer
things through our influence
and involvement.
The way we engage policy in the
public is through information.
Do the things that
you're good at doing.
It's not influencing
public opinion.
It's informing public opinion.
It's informing policy.
It is engaging in
a way that says,
bio weapons are bad
because they can't really
be employed and controlled
because the underlying
nature of the science
is such that you
don't have the kind of
control you imagine you have.
And people will see--
I mean, this is the
noncynical view--
if you can make a genuinely
scientifically compelling
argument and
presentation, you have
to trust that that will
actually have impact.
And I think it's the
same thing with AI.
Just arguing, don't
worry about AI.
It's not going to
impact your job.
They're all going to be safe.
This is not going to work.
I mean, that's why we
have these sort of panels
to engage in a realistic way to
say, these are the real issues.
And the information
you get from scientists
is going to reflect the
genuine issues and concerns
and the knowledge that we have.
How can they be used?
How can they be controlled?
And what do we need to do?
MAX TEGMARK: I think
that's exactly right.
MATT WILSON: You
can't bullshit people.
MAX TEGMARK: We can't
bullshit people.
And I think if we start
bullshitting people--
MATT WILSON: It will backfire.
MAX TEGMARK: --it would
undermine our reputation
as scientists.
MATT WILSON: Exactly.
It backfires completely.
MAX TEGMARK: But the good news is that I think with lethal autonomous weapons, there are actually some scientific facts which are on our side.
I had the fortune to
actually ask Henry Kissinger
last winter about how
the biological weapons
ban came about.
And so there was a Harvard biologist who was persuaded that this was a good idea, and he persuaded Henry Kissinger. And then, he managed to persuade [? Brezhnev ?] and others that they should do this.
And the basic argument there was, the US was already top dog.
So if it ain't
broke, don't fix it.
Don't throw a wild
card into the game
that might cause all sorts of
unpredictable things to happen.
That argument is even more
true with these AI weapons.
It's very different if you compare AI weapons with nuclear weapons: nuclear weapons are really expensive to buy and really hard to make, because you need to get hold of all these-- you need to get hold of highly enriched uranium, or plutonium, or something like that, which means ISIS doesn't have nuclear weapons. AI weapons are not at all like that.
The superpowers, they know-- this is a fact we scientists can remind them of-- that if superpowers figure out a way of mass producing the bumblebee-sized killer drones that cost $50 each, where you can just program in, with Bluetooth, the face of whoever you want assassinated, and they go off and do that, or members of whatever ethnic group you want to kill-- if these things can be bought with $50 or $500 of bitcoin by anybody, it's just a question of time until North Korea makes them, and then they're on the black market everywhere. They're just going to become the Kalashnikovs of the future.
That's a scientific fact.
Good luck trying to ban
machine guns, and Kalashnikovs,
or trucks, or vans,
or whatever else is
being used by terrorists today.
It's hopeless.
And the superpowers-- we can
remind them, as scientists,
that this is the end point.
This is where the arms
race is going to end up.
It's going to weaken America.
It's going to weaken China.
It's going to weaken Russia--
and greatly benefit
ISIS, Boko Haram,
anyone with an ax
to grind, really,
who doesn't have a lot
of money, and doesn't
have the wherewithal to
develop their own tech,
they are gonna be
the big winners.
I think that's the main thing-- if we convey that scientific point, that the superpowers have it in their interests to really, really stigmatize this and clamp down on it-- that's why they're going to do it, not because of, as you say, any kind of manipulation, which we should not do.
MATT WILSON: Right,
don't do that.
AUDIENCE: So it seems that AI is [? getting ?] better and better. The question is, suppose one day AI can make better [? rational, ?] [? optimal ?] [? decisions, ?] whereas, at the same time, for the same problem, people, because of their irrationality, make a decision that is different from [INAUDIBLE].
And then there's the [INAUDIBLE] between our decision and the AI decision. The question is, is it more ethical to respect human rationality in decision-making? Or is it more ethical to go with AI and its optimal decision?
And this could even
go to our daily life,
[INAUDIBLE] say, I
want to do something,
and then my AI system is telling me, no, no, you should do other things. So is it better [INAUDIBLE] for me to make the decision, or should I listen to the AI system?
MAX TEGMARK: What do you think?
AUDIENCE: Hm?
MAX TEGMARK: What do you think?
AUDIENCE: Sometimes it's really hard. Because I know I may have made some irrational [INAUDIBLE]. And I know I'm irrational. So maybe I'll just go for AI. But then [INAUDIBLE] AI tells you, if you do this. But some people, because of irrationality, [INAUDIBLE]. Especially since you don't really know the future. Should we respect the human decision? Because [INAUDIBLE] or should we go for, say, the AI's [INAUDIBLE].
MAX TEGMARK: Or you could
also have a third option where
each individual gets to decide.
Like, if you're driving and
your GPS says, "turn left,"
it's ultimately your choice
whether to trust that and do
that or not.
AUDIENCE: Could you describe what you mean by irrationality a little bit more? What's built into this is that there is an objective function that we have-- like, the AI may be more rational in that it's better at maximizing some objective function. [? Sure, ?] [? I ?] find it difficult to find any way to arbitrate which objective function is the one to maximize. That seems like a preference. And so I'm not sure how you can even-- just speak about irrationality.
MAX TEGMARK: Yeah, so AI
will be developed in terms
of occupying [INAUDIBLE].
And then their [? cost ?]
[? function ?] may be even
[? with ?] [? your ?]
cost function.
AUDIENCE: But that's not
irrationality anymore.
It's now a preference over--
you need to define irrationality
in context to some goal.
if I like cake and
you like salad--
I prefer cake, you prefer salad,
neither of us is irrational.
If we both want to
be healthy, then we
can say which one is
the correct decision.
[INAUDIBLE]
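The point that rationality is only defined relative to an objective function can be made concrete with a tiny sketch; the utilities below are made-up numbers for illustration:

```python
# "Rational" = maximizing a given objective; change the objective,
# and the rational choice changes with it.
options = ["cake", "salad"]

taste_a = {"cake": 0.9, "salad": 0.2}   # one person's preferences
taste_b = {"cake": 0.1, "salad": 0.8}   # another's -- neither irrational
health  = {"cake": 0.1, "salad": 0.9}   # a shared goal both agree on

def rational_choice(objective):
    # The rational act under an objective is simply its argmax.
    return max(options, key=lambda o: objective[o])

print(rational_choice(taste_a))  # cake
print(rational_choice(taste_b))  # salad
print(rational_choice(health))   # salad -- only now is "cake" a mistake
```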
MATT WILSON: I mean, I think in that scenario, the point is that you have the right to choose, right? So AIs are informing. That's the best case. That's what we hope, that AIs help us make better decisions, but in the end, we choose. I think the real ethical concerns come when you don't get to choose.
And there are very simple cases where you can say humans engage in irrational behavior. I see this every single day: people crossing against the light.
We have a system, these little stoplights-- red light, green light-- that tells you when you should and should not cross the street.
It's just a suggestion.
It says, don't cross the street
because there's traffic coming.
Every single day, people
run across the street--
MAX TEGMARK: While
they're texting.
MATT WILSON: This is
completely irrational.
We put systems,
suggestive systems
to try to help you make a
better rational decision,
people defy that.
Now suppose we had an AI.
You think, this would
be a great idea.
This is sort of what cars do--
little AI policemen
that keep people
from crossing the street.
They see people,
and they stop them
from doing irrational things.
You don't get to choose;
the AI gets to choose.
You could say, well, that
makes a whole lot of sense.
But this would raise exactly
these kinds of concerns.
Because I can come up with
all kinds of scenarios
where, you know, I need
to out on the street
because there's a
little kid out there
and I want to keep him
from getting run over.
Or, my stroller just rolled
out there, I gotta save him.
I mean, these kinds
of things happen.
In the end, you can ask: could you have an AI that's capable of actually incorporating all of these things, the rational decision, against the simple objective function, which is just to keep people from crossing against the light? It seems obvious, but we have solved that by saying, I put the light up there, but, by God, if you want to cross, it's your decision.
And so that's sort of
how we operate things.
And so I think we always
will have that-- we
can have that discussion.
There aren't going to be AI policemen standing there keeping you from-- I hope not-- keeping people from crossing the street. It would be precisely for that reason.
And I don't know that we'll get to the point where we say, I think it's just better to let these guys decide what kind of control we should have, whether or not we should be able to make the decisions. Which is why I think a lot of the concerns over the singularity are probably a little bit misdirected, because it comes down to that: will we cede control to our AI overlords?
And certainly this is a
decision that we will make.
The AIs are not going
to make that decision.
MAX TEGMARK: I have
some questions for you.
You mentioned the S word there.
You've been asking about the singularity. You've been asking us a lot of the questions.
So I have some
questions for you.
We'll do them
very, very quickly,
with just the show of hands, OK?
So first of all, my
first question is,
do you think it's
possible, according
to the laws of physics,
to actually build
machines that are smarter
than us, that are better
than us at all cognitive tasks?
Raise your hand
if you think yes.
Raise your hand if you think no.
And raise your
hand if you didn't
raise your hand because
you're not sure or are tired.
OK, raise your hand if you think it actually will happen at some point, that we will build machines that can-- OK.
And raise your hand if you
think it will never happen.
OK.
And raise your hand if you
didn't raise your hand.
MATT WILSON: The
optimists out there.
MAX TEGMARK: Raise
your hand if you
think it's actually going
to happen in your lifetime.
Raise your hand if
you think it will not
happen in your lifetime.
OK, and raise your
hand if you didn't
raise your hand because you're
thinking about it and not sure.
OK, interesting.
OK.
Now raise your hand if you
would like it to happen,
if you would like us
to actually develop
human or superhuman-level AGI.
OK.
And raise your hand if you
would prefer that we don't.
This is really fun.
[LAUGHTER]
You have your debates now set
for over drinks at dinner,
right?
Raise your hand if you
didn't raise your hand
because you weren't sure
or still deliberating.
OK, so suppose it actually does happen that we actually create machines that are more capable than us at all tasks, basically super intelligence, that are way beyond us-- how would you like these to be controlled?
Would you like them
to be controlled--
who would you like
to be in charge?
Like, humans to be
in-- raise your hand
if you would prefer humans
to remain in charge.
AUDIENCE: Of the
super-intelligent machines?
MAX TEGMARK: Yeah.
So that if you
have, for example,
some basically
enslaved god machine,
[? and ?] [? its ?] [? boxed. ?]
You can use this thing
to figure out
everything for you,
but you're still in control.
AUDIENCE: But the superhuman definition was exactly what Matt posted up there, where it's better than humans in every domain?
MAX TEGMARK: It's
better than us--
OK, so let me flesh
that out a little bit.
I mean, suppose you have a
machine, it's in Building 46
in your office, and
it's better than humans
at all cognitive tasks.
Like, I could ask
it to write my book,
and it would write a better one.
I could ask it to give this
important presentation,
and it would simulate
a video image of me,
and it would speak much
more coherently than I do.
And it could do anything better.
But it's still in his office, and he has the power to unplug it. It's not connected to the internet. So it can't break out or take over the world or anything.
But if he asks it, hey, tell me about the stock market tomorrow, it'll give him all this advice, make him the richest professor on campus. And so it has all these capabilities, but he controls it.
So that's one option--
humans in control.
Another option would
be that you let
this AI be in control
in some way or another
and hopefully program it so that it shares our goals and takes good care of us somehow.
You can have other options, too.
Who would prefer
some version where
humans are still in control?
We haven't really
defined control, though.
MATT WILSON: Simple control: who controls the switch.
MAX TEGMARK: It's
in your office.
You can switch it
off if you want.
[INAUDIBLE] switch.
Who would prefer
that this machine,
since it's so much smarter
than us, it's in control?
AUDIENCE: [? Absolutely. ?]
MAX TEGMARK: OK.
MATT WILSON: [INAUDIBLE].
AUDIENCE: [INAUDIBLE] division
between the haves and the have
nots.
And so if you give all
the haves the ability
to have all these machines,
they can tell them everything.
And the have nots
don't have this ability
because they don't have
the financial resources.
And that's just gonna increase
this divide between the haves
and the have nots.
AUDIENCE: I disagree.
MAX TEGMARK: That's an
accessibility question.
That's the difference.
AUDIENCE: Then they [INAUDIBLE].
AUDIENCE: Yeah, but if they're better than humans in every domain, they can teach themselves to not have those biases.
[CHATTER]
MAX TEGMARK: Yeah, this
is a wonderful question.
My hidden agenda in asking these
questions towards the end
was not that I was
going to tell you
any kind of answer,
because I don't have it,
but rather to provoke
really awesome
after-dinner conversation.
Because it's a really good--
what both of you said
there is, I think,
really, really important.
MATT WILSON: Who should
have access to it, I think
that's probably the
biggest question; it
trumps all these
other questions.
MAX TEGMARK: It also makes
a big difference
whether it's the Dalai
Lama who controls it
or Adolf Hitler who
controls it.
And some great
after-dinner discussions.
AUDIENCE: How do these
question responses compare
to other groups you've done?
MAX TEGMARK: Because the
book has an online survey
with questions like this,
and so far, of the 100
responses or so,
it's interesting.
First of all, you
disagreed on everything,
which is what the other
respondents do. And
this is really interesting
because you know much more
about biological and artificial
intelligence than those other
respondents did, and you still
disagree about everything,
which tells me that these are
really interesting questions.
I can write down the
site if you want to look
at what other people have said.
I think, right now, it just
has some stuff about the book,
but come the weekend, it's going
to actually have the survey,
where you can continually
see the results and look
at people's comments
and why they think what they do.
MATT WILSON: I
mean, I don't want
to put anybody on
the spot, but I
think it's really interesting
when you ask the question, who
feels that these kinds
of superintelligence
should not be developed.
And there were some people--
anybody want to throw
out their thoughts?
I mean, you're in a course
whose objective, presumably,
is to advance the effort
to develop precisely such
intelligences, and you're here.
So any thoughts about
what are the fears?
AUDIENCE: [INAUDIBLE]
MATT WILSON: What's that?
AUDIENCE: [INAUDIBLE]
[LAUGHTER]
MATT WILSON: I'm just curious.
I mean, there are obviously--
these are perfectly
legitimate concerns.
MAX TEGMARK: Yeah.
MATT WILSON: And we are
all just cheerleaders.
AUDIENCE: [INAUDIBLE]
super optimistic about how
intelligence [INAUDIBLE].
I'm pretty sure-- and
I might be wrong--
that those intelligent
[INAUDIBLE]
are not going to be as
human-like as we now discuss.
I'm not sure that it's even
relevant to [INAUDIBLE].
Personally, I think that there
is something about [INAUDIBLE].
MATT WILSON: Well, I think
those applications are probably
easier.
[INAUDIBLE], who
should control it?
Like, in air traffic
control, I think
it's inevitable that with AI--
and this will happen--
the lives of
hundreds of thousands
of people will
be placed in the
hands of algorithms
that make sure the planes
don't crash into each other.
And if they do a better job than
people-- they don't fall asleep,
they don't mistakenly drag the
wrong dot to the wrong line--
we would all be ecstatic.
We would be-- it's like, yes,
that's exactly what we want AIs
to do--
keep planes from crashing into
each other and do a better job.
But that's because they're
not doing what people do.
I mean, they are doing
a job that people do,
but they don't have any of the
liabilities that people have.
They're not human-like.
They're focused on that.
MAX TEGMARK: Gabriel?
MATT WILSON: So I think
that's a good example.
And following up on the last
question that Max raised,
I know you, Max,
have been thinking
a lot about what kind
of society we want.
So can you say a
few words about--
there are a lot of concerns
about [INAUDIBLE].
You started by
saying, well, what
are the amazing
things that we can do,
and what's the amazing
future you can imagine?
What are the mental
challenges [INAUDIBLE]
technology and this know-how?
And you can't answer, what do
you think, because [INAUDIBLE].
MAX TEGMARK: Oh.
AUDIENCE: What kind of
society do you want?
How can AI get us there?
MAX TEGMARK: What kind of
society do I personally want?
[INAUDIBLE] is my
witness, actually.
So I wrote-- the book is a
series of thought experiments,
among other things,
about different--
as broad a spectrum
of futures as I
could think of from pretty
horrible ones to ones
that you might really like.
And you're my
witness, there was not
a single one of
them, even the one
that I tried to make
sound really cool,
that I didn't have at least
some serious misgivings about.
So--
AUDIENCE: [INAUDIBLE]
misgivings about [INAUDIBLE].
MAX TEGMARK: As well.
So I really feel I--
I would love to talk more
with more people about this
and get more ideas.
I think if billions of people
think about something together,
they can come up with better
ideas certainly than I can.
AUDIENCE: There's also
[INAUDIBLE], which makes
a huge difference [INAUDIBLE].
MAX TEGMARK: Yeah, that's true.
In the very short term, like
for the next three years or five
years, I would like to see the
National Science Foundation,
the big funding agencies
in other countries saying
that AI safety research is
an important research field.
It's going to be
a real priority.
It's going to be given a
serious chunk of funding, just
like other branches
of computer science,
and that we're no
longer going to view
it as acceptable to have
systems that just routinely get
hacked and crash, and
so on, because one thing
I can say for sure
I don't want is
to have my society controlled
by some machine that can get
hacked or just has bugs in it.
I mean, that I
absolutely don't want.
And if we can't even
keep Microsoft Windows
from getting
hacked, why should I
have any confidence in much
more sophisticated things doing
what we actually want?
So that's a very
short-term thing.
I would also very much
like a future in 10 years,
in 20 years, when you ask your
parents what they associate
AI with, they associate it with
this new cool cure for cancer
or all these new wonderful
positive technologies, not
with these new, horrible
autonomous drone assassinations
that are plaguing their city.
So I'd like to see
an international ban
on lethal autonomous weapons.
I would love it if
we could, in 10 years,
have the top two words
that people associate
with the legal system
be, say, efficiency and
fairness-- probably not
on most people's top two
lists today.
I think AI can help
a lot there.
And with jobs, what
do I hope there?
First of all, I hope,
in our education system,
we can start giving actually
really good career advice
to kids-- what sort of fields
they should actually go into
and what sort of things
they should avoid.
I think people are maybe
underestimating this a little.
They're a little bit
stuck in the past.
A big question I don't
have the answer to,
honestly, though, is--
actually, let me take
a step back here.
If you take a longer view, the
apocryphal Luddites, if they
existed, who supposedly
smashed looms because they
were afraid of weaving jobs
getting lost in the Industrial
Revolution in England, I think
they were too narrow-minded.
They were just obsessing about
a particular kind of job.
And now, we would look
back and say, well,
as long as there are
other jobs, don't obsess
about that particular kind.
Now, we might end up-- if we
can get machines that can do all
our jobs for $0.01 an
hour in electricity,
and of course we can't
compete with that--
then there will
be no jobs where
I can get paid more
than $0.01 an hour.
And many people think of
that as automatic doom.
I think that's too
narrow-minded also.
Why do we like jobs?
Basically, for three reasons--
income, sense of purpose,
and the social connection
that we get.
I think we can get all three of
those things without jobs, too.
I mean, suppose I told
you, Gabriel, and you,
Matt, that you don't have
to ever teach a course again
if you don't want to.
We're just going to keep paying
you for the rest of your life--
all of you, actually.
It's a paid vacation.
I bet many of you would
still have no problem--
MATT WILSON: I like teaching.
MAX TEGMARK: --finding
meaning in your life.
You would probably continue
doing just as much research as
before, if not more, right?
Another group who seem
to have no problem finding
meaning and purpose in
social groups without jobs
are children.
Did you feel that life was
meaningless when you were 5?
I don't think so.
So if we can create all this
wonderful wealth with AI,
one thing I really hope for
is that we can find a way
of sharing this wealth so
that everybody gets better
off, rather than all the
wealth going to me because
I own all the machines
and I don't share
anything with any of you,
and you all just starve.
Europe has more of a
tradition, actually,
of this idea that government
should take care of its people
and provide free
education, free health care,
and they're even doing some
basic income experiments now
in Finland, for example.
In the US, there's, as you know,
huge political resistance
toward any kind
of use of tax money
to help people who
have hit hard luck.
I would love to
see a future where
we can create a society
where everybody gets better
off because of this technology.
The resources are
obviously going
to be there thanks to this tech;
whether the political will will
be there is the
really big question.
MATT WILSON: I agree
with that completely.
I think the potential for
the generation of wealth
is clearly there--
enhancement of
productivity-- I mean, you
could put a robot in my office,
and it could do my job.
Really, the question is, who
should benefit from that?
And I think you
wouldn't think of that
as displacing my effort.
It should complement that,
and everyone should benefit.
So it's wealth distribution, who
benefits from the windfall that
will come from this technology.
MAX TEGMARK: Yeah.
MATT WILSON: I think the
question of the better society--
where do we see AI
making contributions--
comes down to exactly the
points that you make.
In biology, do you
think of bioweapons?
No, you think of medicines.
You think of those things
that actually advance
the well-being of people.
With AI, where
would it be applied?
It would be applied
in areas where people
are concerned
about the well-being
or safety of people.
Cars are the most
obvious example
because you have tens of
thousands of people every year
killed in accidents
that are largely
due to the insufficiency
of human intelligence
in navigating vehicles.
So if you have algorithms
that can do better,
it will save people's lives.
The example I gave with
air traffic control, where
you have lives
that are at stake--
these are obvious places to
have the AIs save people's lives.
Medical diagnostics--
even more people die as a result
of the fallibility of humans.
And if you can have
algorithms do a better job,
these are things that we
all would advocate for.
And of course, you would
want to address these concerns,
make sure they're safe,
they're transparent,
but that's the better life
through AI that I see.
AUDIENCE: Do you
think that there are--
we talked a bit about
human cost function.
Do you think there are any
inherent contradictions
within human cost functions
[INAUDIBLE] crystallized
in code?
[INAUDIBLE]
MAX TEGMARK: What do you mean
by a [? human ?] cost function?
AUDIENCE: I'm sorry?
MAX TEGMARK: What do you mean
by a human cost function?
Our values?
AUDIENCE: People have
different goals for AI.
And there's
disagreement about that.
But even with
a certain person,
if they could make an
AI that would magically
do all the things
that they wanted,
those things might not
actually be compatible.
MATT WILSON: Could
you give an example?
AUDIENCE: Say that we were
asking an AI to maximize
happiness for the
greatest number of people
and, at the same time,
do something else-- say,
preserve the Earth's
environment.
How do you capture these things?
And do you think that there--
since AI is a magnification
of our own ethical system,
our ethical system
has to be very good
for it to be magnified
without running into problems.
Do you think that there's
going to be any issue there,
where we say, this is what
we want, and the AI says,
OK, and then we get a
society that we're really not
very happy with?
Or the AI says, sorry,
that's impossible.
MAX TEGMARK: I think there are--
some parts of your question
are very hard to answer,
some are very easy.
The hard parts are, whose values
are we even talking about?
Is it your values,
David [INAUDIBLE],
or is it the leader
of ISIS's values?
It's not like we have a
great consensus on Earth,
even, as to what
direction we want to go,
even within a given country.
In this country, for
example, there are
strong differences of opinion.
For that reason, I
think we can't just
leave this conversation entirely
to AI researchers, because we
have to see what can [INAUDIBLE].
The easy part of your
question, I think,
is whether there are some
constructive things we can
already do, which are much
better than the status quo.
And I say, absolutely.
I think we tend to focus so
much on the differences we have
in values and forget that
there are a lot of things
that we pretty much all agree
on-- for example, what I
call kindergarten morality.
Like if you're an
aircraft manufacturer who
makes passenger jets, you do
not, under any circumstances,
want that airplane to fly into
a building or a mountain, right?
There is absolutely
no reason today
why it should be physically
possible for the pilot
to do that, yet that was
possible during September 11,
and it was also possible
for [INAUDIBLE]
this Germanwings pilot to
[INAUDIBLE] the autopilot,
fly at 100 meters
through the Alps,
even though the computer had
the whole topography map.
So if someone had spent
five minutes just thinking
about that, they could've
just put in a little AI system
that raises a red flag,
switches over to autopilot,
disables pilot input, and
lands at the nearest airport.
I think there are a lot of very
low-hanging fruit like that
that we can put into our
technology already, reflecting,
at least, those values that
pretty much everybody agrees
on, and we would
already be better off.
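
A minimal sketch of the kind of guard described here, in Python. The terrain profile, look-ahead horizons, and clearance threshold are made-up stand-ins, not any real avionics interface: before accepting a commanded trajectory, project it forward and refuse anything that would violate terrain clearance.

# Toy sketch (not real avionics) of a "kindergarten morality" guard:
# reject commanded trajectories that would fly into known terrain.

MIN_CLEARANCE_M = 300.0  # assumed minimum safe height above terrain

def terrain_elevation_m(x_km: float) -> float:
    """Toy 1-D terrain profile standing in for the onboard topography map."""
    return 2000.0 if 50.0 <= x_km <= 80.0 else 200.0  # a 'mountain range'

def project(x_km, alt_m, speed_kmh, climb_m_per_min, minutes):
    """Extrapolate position and altitude along the commanded trajectory."""
    return x_km + speed_kmh * minutes / 60.0, alt_m + climb_m_per_min * minutes

def command_is_safe(x_km, alt_m, speed_kmh, climb_m_per_min) -> bool:
    """Check terrain clearance at several look-ahead horizons (minutes)."""
    for minutes in (1, 5, 10):
        x, alt = project(x_km, alt_m, speed_kmh, climb_m_per_min, minutes)
        if alt - terrain_elevation_m(x) < MIN_CLEARANCE_M:
            return False  # would trigger: red flag, autopilot takes over
    return True

# Level flight at 3,000 m clears the toy mountains; a commanded
# descent toward them does not and would be refused.
print(command_is_safe(0.0, 3000.0, 600.0, 0.0))     # True
print(command_is_safe(0.0, 3000.0, 600.0, -250.0))  # False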
Also, a lot of industrial
accidents, for example,
happen just because
the machine is so dumb,
it doesn't realize that this
is a human and not an auto part
or something like that.
That's not something
that there's
a big ethical controversy about.
MATT WILSON: You had a question.
AUDIENCE: Yes, I had a question
going back to [INAUDIBLE].
MATT WILSON: Well,
I mean they can.
I mean, this is our
ongoing social obligation
to ensure that the
government serves the people.
The idea that somehow
you can just turn it over
to anything-- an
AI, a politician--
and then it's just going to go
the way that we all hope it
will, that's not going to work,
whether it's an AI or a person.
So I think it's
the same question.
We have to constantly work
toward equality,
and that's not an
AI-specific question.
MAX TEGMARK: Yeah.
MATT WILSON: This
is Max's engagement.
I mean, you've
got to-- you know.
MAX TEGMARK: It's a
great, great question.
I agree with what you
said there, [? Martin. ?]
If you look at the statistics
on inequality in the US,
for example, the GDP of the
US has, of course, grown
pretty steadily for
the last 100 years.
But if you look at the actual
income in real dollars
of somebody in the US without
a college education,
it's actually gone
down since 1970.
They're actually poorer now
than their parents were.
And this is not just in the US.
This is something
which I think is
a key part of the
explanation of why
Donald Trump won this
election, why Brexit happened.
You have a lot of
really angry people.
And I think it's
important to understand
that a key part of this anger
actually comes from the fact
that they really are
feeling that they're
worse off because they are.
Then, maybe
opportunistic politicians
will take advantage
of the anger.
But the anger is real.
Why is inequality growing?
Of course, there are a lot
of different opinions,
but my MIT colleague,
Erik Brynjolfsson,
who's an economist
in the Sloan School,
has done a lot of
research just showing
that, actually, technology,
in particular automation,
is a key driver of this.
Because what's happening is
that a lot of middle class jobs
are getting automated
away, and the people
have to switch to
lower paying jobs.
And he thinks that this
is just going to continue
and accelerate with AI.
So this is sort of
a wake-up call also.
I think if you want to have
a flourishing democracy,
you can't have too
much inequality
because, then, you just
get so much raw anger there
and people haven't had the
opportunity to even get
a good enough education
to really constructively
participate in
politics and stuff.
And you get big, big problems.
And rather than just twiddle our
thumbs until things get worse,
I think this is the
perfect wake-up call
to really try to
reduce inequality.
He and other
economists have a lot
of very concrete ideas for how
we should do this, which have
been, so far, widely ignored.
But I think we should
listen to them.
PRESENTER: Two last questions.
[INAUDIBLE]
AUDIENCE: [INAUDIBLE]
well, it's important
if you're talking about
things like regulating
AI, especially when you're
talking about [INAUDIBLE],
for example, which is
what we talk about now.
[INAUDIBLE] we have
been talking about it
as if it's some
clearly defined thing.
Where do you draw the line
between a simple machine
learning algorithm
[INAUDIBLE] and, say,
[INAUDIBLE] maybe
everybody would
be against using autonomous
drones that might decide
on their own who to
target and whatever,
but I guess machine learning
algorithms are already used
by the military [INAUDIBLE].
MAX TEGMARK: I can take
a first crack at that.
So in my book--
so there's a gazillion and
one different definitions
of intelligence by
different people.
But in my book, I define
intelligence in a very broad
way, simply as the ability
to accomplish complex goals.
So first of all, that means
it's not like you either
are intelligent or not.
There's a spectrum.
Second, you can't measure
it by a single number
and argue about whether a
chess-playing computer is
smarter than a [INAUDIBLE]
playing computer games;
they're each better at
one goal than the other.
It's a spectrum of abilities.
It can be broad, or
narrow, or whatever.
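
A toy way to see the point about a spectrum rather than a single number, with made-up task scores: give each system a per-goal ability profile, and neither of two specialized systems dominates the other.

# Toy illustration: intelligence as per-goal ability profiles, not a
# scalar. The tasks and scores here are invented for the example.

chess_engine = {"chess": 0.99, "go": 0.10, "driving": 0.0}
game_bot     = {"chess": 0.20, "go": 0.95, "driving": 0.0}

def dominates(a: dict, b: dict) -> bool:
    """True only if system a is at least as good as b at every goal."""
    return all(a[task] >= b[task] for task in a)

print(dominates(chess_engine, game_bot))  # False: it's worse at go
print(dominates(game_bot, chess_engine))  # False: no total ordering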
I don't think it's so
interesting to quibble about
whether something has
high enough intelligence
that it should be called AI,
but when it comes to weapons,
I think the million-dollar
question is rather:
is there a human in the
loop or not in the decision
to kill somebody?
And right now, there's
a lot of controversy
about the drone strikes
that the US is doing,
but there's always a human pilot
who's remote controlling it.
And there's still a human who
actually makes the decision.
But we're now getting very
close to crossing this line
and developing all
sorts of weapons
where there's no
human in the loop.
It just goes off and
kills a bunch of people
according to some algorithm.
AUDIENCE: [INAUDIBLE]
how much [INAUDIBLE]
like, the human can still be
in the loop, but less involved.
And I guess that's also
dependent on how much you
trust the decisions--
MAX TEGMARK: Yeah, that's also a
very important issue you raise.
Because suppose the
human is in the loop,
but the machine
presents what seems
like a really compelling
case for doing the attack
and gives you five
seconds to decide.
It's kind of like when my
GPS tells me to turn left.
I'm like, OK.
And we've already had some
unfortunate outcomes of this.
For example, there was a US
warship in the Persian Gulf,
where the computer
said, you're being
attacked by an Iranian fighter
plane descending towards you.
And the captain, Captain
Rogers, gave the order
to shoot it down.
He shot it down.
It turned out-- raise your
hand if you've actually
heard about this.
Yeah.
It turned out to be an Iranian
civilian jumbo jet with about
300 people who all died.
And that's obviously
not the sort
of outcomes you want to
see more of in the future.
So if there is a
human in the loop,
that has to be a human
who's properly in the loop
and is actually given correct
information and given the time
to make the right
decisions in that case.
Otherwise, it's, as
you say, useless.
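
A hypothetical sketch of what being "properly in the loop" could mean operationally; the function names and the two-minute threshold are illustrative assumptions, not any real doctrine. The gate shows the human the underlying evidence and refuses authorizations made faster than a minimum deliberation time.

# Hypothetical sketch: a decision gate that only counts as "human in
# the loop" when the human saw the evidence and had time to think.

import time

MIN_REVIEW_SECONDS = 120.0  # assumed floor for a considered decision

def request_authorization(evidence: dict, ask_human) -> bool:
    """Present the raw evidence, then require an unhurried decision."""
    started = time.monotonic()
    for key, value in evidence.items():
        print(f"{key}: {value}")  # the human sees the data, not a verdict
    approved = ask_human()        # blocks until the human answers
    if time.monotonic() - started < MIN_REVIEW_SECONDS:
        return False  # an answer this fast isn't meaningful judgment
    return approved

# Example: a five-second rubber stamp is rejected regardless of answer.
# request_authorization({"radar_track": "descending aircraft"}, lambda: True)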
PRESENTER: Last
question, and it has
to be a positive
one [INAUDIBLE]..
AUDIENCE: How do you
define the weapons?
[LAUGHTER]
[INAUDIBLE] by the government
to find people [INAUDIBLE]
MAX TEGMARK: Those are
very hard questions.
And Gabriel wants us to end
on a positive note here also.
I think the advocacy that
we in the AI community
are doing for a ban on lethal
autonomous weapons, what we're
actually saying
is, we don't want
to micromanage the
whole process and try
to solve all these difficult
questions from the get-go.
Rather, if negotiations
actually take place
at the UN, they are going to
discuss exactly your questions.
Like, how do you
define exactly what--
how do you do verification?
How do you enforce it?
That's why negotiations
are called "negotiations,"
because there will be a big
process with, hopefully,
very, very thorough discussions
about all these issues
to try to come up with
something really good.
At this point, what the AI
research community is mostly
saying is just, come on, guys.
Start that process.
Start those negotiations.
PRESENTER: Are you
sure it's positive?
Super positive?
AUDIENCE: [INAUDIBLE].
For the near future,
the cognitive inferences
that machines are able
to make are more or less
a direct function of the
kinds of data [INAUDIBLE].
And if you feed a
machine biased data,
you will get biased inferences.
If you feed a language learner
a corpus of data which
is gender-biased
or racially biased,
you will get gender- and
racially-biased results.
Specifically, you're
talking about [INAUDIBLE].
I am really excited
about the idea.
It would be hard to create
worse laws than the ones
that we have now.
But for the foreseeable
future, the statistics
of a training set will directly
relate to the statistics
of the inferences that we make.
So how do we begin to
create the kind of data
that we need to create
the kinds of learners
behind the inferences that will
make the decisions that we feel
are going to lead to more
optimistic futures [INAUDIBLE]?
What kind of public-private
partnerships
are necessary to make
those things [INAUDIBLE]?
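
A tiny, self-contained illustration of the questioner's premise, with made-up loan-style data: a model that merely learns training-set statistics reproduces whatever bias those statistics contain.

# Toy example (invented data, not a real system): "training" is just
# counting label frequencies per group, so the training set's bias
# comes straight back out at inference time.

from collections import Counter

# Hypothetical history: group A was approved far more often than
# group B for reasons unrelated to merit.
training_data = (
    [("A", "approve")] * 90 + [("A", "deny")] * 10
    + [("B", "approve")] * 30 + [("B", "deny")] * 70
)

def fit(data):
    """Learn per-group label frequencies from the training set."""
    counts = {}
    for group, label in data:
        counts.setdefault(group, Counter())[label] += 1
    return counts

def predict(model, group):
    """Predict the majority label seen for this group in training."""
    return model[group].most_common(1)[0][0]

model = fit(training_data)
print(predict(model, "A"))  # 'approve'
print(predict(model, "B"))  # 'deny' -- the historical bias, replayed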
MAX TEGMARK: Say something
quick and optimistic,
and I'll say something
quick and optimistic.
MATT WILSON: No, I mean,
I think that this all
comes back to transparency
and predictability.
You have to know what's actually
going into this in the first
place-- the first thing to know
is that it is a function
of the data that drives it.
And so that will drive
those policies.
I can't tell you what
those policies should be,
but clearly these are
the kinds of policies
that people need to
be thinking about
and concerned about-- that
it's not a black box where you
just push stuff in and get
optimal answers out,
that all of these things
should be knowable
and are subject to our
control and scrutiny.
So the
optimism is, we don't
have to turn this over to,
you know, some blind authority.
We have the ability
to control it.
We should control it, and I
believe we will control it.
MAX TEGMARK: All
right, and I'll just
close by saying that I think
every single way in which 2017
is better than the Stone Age
is because of technology.
Everything I love
about civilization
is the product of intelligence.
So if we can amplify our human
intelligence with machine
intelligence wisely--
MATT WILSON: I agree.
MAX TEGMARK: --then I think
we have a huge potential
to help humanity flourish
like never before.
So let's do the best we can
to create as good a future
as we can.
PRESENTER: OK, let's thank our--
[APPLAUSE]
