SPEAKER: Tim.
Welcome to Google.
TIM URBAN: Thank you.
[LAUGH]
[APPLAUSE]
SPEAKER: On behalf of
Google and all the Googlers
here and those streaming from
around the world, welcome.
It's great to have you.
TIM URBAN: Thank you very much.
This is great.
I've wanted to do a Google
Talk for a long time,
so this is very exciting.
SPEAKER: Great.
Because knowing
that you were coming
and that you tend to get
immersed in a talk, in a topic,
we wanted to get you a small
token of our appreciation.
It's a book called
"How Google Works."
[LAUGHTER]
And I think it's about
as long as your posts
on superintelligence.
[LAUGHTER]
TIM URBAN: Oh, there's pictures.
SPEAKER: Oh, there are pictures.
All right.
I didn't think there were
stick figures in there.
TIM URBAN: This is
Eric Schmidt dancing.
[LAUGHTER]
Good.
This is exciting.
Thank you.
SPEAKER: So this talk was
hosted for a group called
the Singularity Network.
And for a bit of
background, it's
about a 2,000 Googler mailing
list, which frequently
goes on 600-post tangents
about what strong artificial
intelligence really means,
or whether your computer is
going to take over the world.
And I'm pretty sure thousands
of hours of paid Google time
have been wasted on the page,
looking for circularity
in the simulation argument
or accurately grading
all of Ray Kurzweil's
predictions,
or trying to see if
the Chinese room really
could exist in China.
It's sort of a hub--
TIM URBAN: It's an
anxious group of people.
[LAUGH]
SPEAKER: It is a hub for
procrastination about topics
that I've heard that you're
pretty interested in.
So we're really, really
excited to have you.
TIM URBAN: Yes.
Well, great.
SPEAKER: So today, for
today's presentation,
I want to talk about artificial
intelligence and, you know,
how you came to understand it.
I mean, to start,
I'd love to hear
how you were introduced
to the topic,
and why you really
started digging into it.
TIM URBAN: Yeah.
Well, it was kind of
one of those things
that I kept hearing about.
And I kept hearing,
like, smart people
I otherwise respected
talking a lot about it.
It's like Burning
Man, a little bit.
[LAUGHTER]
And I'd be like, really?
You're really talking about AI?
This was a few years ago.
Now, of course, I
think more people
are aware that this
is an important topic.
But I finally said, OK.
I need to figure
out what's going on,
and whether this is exciting
or scary or both or neither.
And so I started reading
various kind of books out there.
I read Nick Bostrom's
"Superintelligence" book, which
is kind of one of the only books
that's, like, simultaneously
riveting and, like,
mind numbingly boring.
[LAUGHTER]
Like, it's an incredible
feat that he's
accomplished with that book.
And I read a bunch
of Kurzweil stuff
and, you know, I think some
other kind of excited people
like Peter Diamandis.
And just, there's
a crowd of people
that are very optimistic.
And then I read a bunch
of other things-- articles
and videos, nothing that fancy.
I read a bunch of PDFs that
come out of universities.
I watched a bunch
of YouTube videos.
And the way I do
research is instead
of worrying about finding a
professional source that's
definitely going to be the most
legitimate, accurate source,
I just read and watch
and take in, like,
a ton of stuff, most of which,
taken alone, probably should
not be trusted.
But eventually, by the time
you kind of get to the end,
you're like, OK.
I at least have a sense
of what people are saying.
And with AI, there's not really
any definitive source anyway.
And so I just did that
for about a month.
Nothing too crazy.
And then I wrote kind of a big
summary of what I had learned.
That was the beginning
of me working
on trying to understand AI.
SPEAKER: Was there anyone you
read that you were like, wow,
this guy is crazy?
TIM URBAN: Kurzweil's a little--
[LAUGHTER]
I mean, I'm hoping he's right.
I mean, according to him, I'm
going to be able to switch
my senses on and off and to
be in a true virtual-immersed
second reality, and we're going
to replace my bloodstream with
something different, and--
SPEAKER: Nanobots.
TIM URBAN: What?
SPEAKER: Nanobots.
TIM URBAN: Yeah.
Oh, the nanobots, of course.
He lives with a lot of nanobots.
And we are going to be the AI.
We're going to
merge with the AI.
We are going to be
superintelligent ourselves.
And it's not just him.
I've heard a lot of
people talk about that.
It sounds insane.
But then the people
that are kind
of more the doomsday-type
people, they also sound insane.
And I kept looking for
someone who's like,
don't listen to any
of these people.
It's just software.
Nothing crazy is
going to happen.
It's going to get
very intelligent.
Nothing that big or that life
changing's going to happen.
And I didn't find
anyone saying that.
So I said, OK.
So at this point,
now I trust everyone.
Because there's just--
everyone sounds insane.
Clearly something
weird's going to happen.
So, yeah.
I mean, as I said.
It's almost hard
to find anyone who
really writes about AI
that doesn't sound insane,
to be honest.
[LAUGH]
SPEAKER: So at Google, we
have a few different ways
of developing
machine intelligence.
I think neural nets is one
of the most common here.
And when digging
through your posts,
you mentioned three ways that we
could potentially get AGI, ASI.
You talk about whole-brain
emulation, genetic algorithms,
neural net inspired.
From your research,
was there any
that you thought would
be most likely to reach
artificial intelligence?
TIM URBAN: No.
I have no idea-- I have
absolutely no idea.
I am not a good person to
ask, and most people in this
room have a better idea.
Honestly, my job is to figure
out what people are saying.
And the thing is,
for other topics
I've written about, I've
written about Elon's companies
and cryonics, and
other things, and I
feel more confident
there, because there
was a lot more consensus.
And so I can basically
read enough consensus,
and now I can take
the conviction
that the thinkers
have, and I can say,
now I can speak with
that conviction.
So I can steal their conviction.
But when you read
a bunch of things,
and experts are saying
opposite things,
then I have no
self-confidence now.
[LAUGHTER]
Right?
So it's like, if I
at one point thought
that neural nets were an
absolutely great way to do it,
I've now talked to three
people who have said,
that is just absolute nonsense.
So now I don't want
to say anything.
But in the last
month, I've talked
to the head of a pretty
respected AI company,
Vicarious.
And three or four other people,
like, of the same-- MIRI.
I had a chance to talk
to Luke Muehlhauser.
Muehlhauser, something.
Whatever.
Like, the smartest dude ever.
And someone who's
working at OpenAI.
And you know, again, it's
really, like, dramatically
differing opinions.
So then when I'm confused
about something and I say,
I'm doing talks on
AI-- and I don't
know what to say about
this, because I now
have lost my conviction about
whether or not that's actually,
like, objectively true.
And I ask them about it, and
I think I get a clear answer,
and then I ask someone else,
and it's something different.
So again, at this
point, I'm excited.
Because I always think,
like, if it's good,
if the good story comes true,
of course that's exciting,
you know?
Everyone's happy about that.
We get to, like,
slide down rainbows
and we get to-- we
don't have to die,
and all the problems are solved.
And the bad story,
I'm a little like,
it's kind of cool to be
here for the apocalypse.
[LAUGHTER]
Right?
At least it's like, it's kind
of awesome to see it happen.
So I'm kind of like, OK.
If AI kills all of us, then
at least that's interesting.
And selfishly, I'm already
kind of like, then,
I'll probably be kind of older.
Like, life isn't that fun
anymore at that point.
[LAUGHTER]
And for people who are babies
now, that's a little less fun.
SPEAKER: So you're
sort of hinting
at the idea of an
intelligence explosion.
TIM URBAN: Yes.
SPEAKER: And this
is something I think
Kurzweil and Bostrom
both basically think
is very possible.
You talked about death
and living forever.
Why does this intelligence
explosion lead to that?
TIM URBAN: Because--
so I always like to,
as just an illustrative tool,
draw an intelligence staircase.
Which is not, of
course, scientific,
but there is a scale
of intelligence.
And so an ant has
less cognitive ability
than a chicken who has less
than an ape or a chimp, who
has less than we do.
And so the thing that
blows my mind about that
is that we're not at
the end of evolution.
We didn't, like, hit
the finish button.
Like, we're in the
middle of evolution.
And you know, whether
evolution itself
is no longer the thing that will
create higher intelligence-- it
could have been that
for 3.8 billion years,
evolution was the thing.
And literally in our
lifetimes, that changes.
And now biology building
artificial intelligence
becomes the new thing.
And self-improving AI
becomes the new way
that the smartest thing on
the planet gets smarter.
Regardless of whatever the mode
is, the fact that the concept
of something that is as
much smarter than we are
as we are than chimps--
[LAUGH]
--not well articulated.
You know what I'm
saying though-- is just
so crazy to me.
Like, a chimp is
really intelligent.
We're, like, almost
identical DNA.
I mean, that is so
incredibly close.
If there was an alien from
some other dimension that
was assessing life on Earth, and
they saw a chimp and a human,
they would say, these are
essentially the same species,
as far as we're concerned.
And yet, not only
could a chimp not
build a skyscraper
or an airplane.
When a chimp sees
something like that,
it just assumes
it's part of nature.
It can't even understand
that we built that.
You couldn't even explain to the
chimp that that's not a bird.
That's something we
made, that airplane.
So then take something
that's smarter than us by
that same amount-- and that's
just one step up-- not only
could we not do what it can do.
Not only can we not understand
what it can understand.
We can't even get
that it built that.
Like, it can try to explain
it to us all at once,
and we won't be
able to grasp what
it's trying to explain to us.
That to me is so crazy.
And so that's a
little bit above us.
Of course, how
smart is that thing?
If you talk about a human
level AGI that's coding itself,
it's a computer scientist,
and it's about as intelligent
as we are.
But you know, it's a really
good computer scientist,
like many humans.
Then it gets to
that level where it
can do stuff we can't even
understand that it did.
How good a computer
scientist is that?
I mean, it's going to be able
to build things and improve
itself so dramatically quickly in a
way that we literally won't even
be able to understand.
Now that itself--
so that, of course,
leads to an insane
intelligence explosion.
I always talk about how
we have words for stupid.
We say 85 IQ, we
would call stupid.
And 140 IQ, we
call really smart.
But what if something
has a 14,000 IQ?
We can't even
begin to understand
what that means on Earth.
Species that are more
intelligent have power.
Our higher intelligence
gives us power
over every single species that's
less intelligent than we are.
Even something as
smart as chimps,
we can just put them in a cage.
What are you going to do?
[LAUGHTER]
We have tasers.
Good luck with that, chimp.
We can poison its food.
We have endless,
endless tranquilizers.
We have endless ways to
just completely own them
in every way that a
species can be owned.
And so if you just continue
to extrapolate that,
well, intelligence equals power.
When something is so
dramatically more intelligent,
then we have to assume
it's more powerful.
In which case, everything we
attribute to God, God's power--
you know, we look at
books like the Bible,
I mean, well, that kind of power
is actually potentially real
when something gets
more intelligent.
And then this is
where, of course,
I have to drop in that,
well, but I've also
heard this thing, which
an example here is,
this is something I've seen
people on both sides of,
and I'm constantly going
back and forth myself.
But what I just
said about something
a little more
intelligent than we are,
not only can we not
do what it can do.
We can't even understand
that it did it even if it
tried to explain it to us.
I've heard people argue
that that's not true.
I think there's a book by
David Deutsch, I've heard.
And Elon Musk thinks this,
and a few other people--
that actually, that's not true.
That we are, at this point,
computers.
We're weak computers,
but we're computers.
And once you're a
computer, nothing
is not explainable to us.
There's no such
thing as something
that we couldn't understand.
It's not an analogy.
We've crossed some magical
black and white line
that we are now
on this other side
where that'll never happen.
And that we are conscious,
and that things-- once we're
conscious, there's
nothing that is
an equivalent of us to animals.
Again, these are super smart
people who believe this.
And I've heard a
lot of people who
do believe this kind of thing.
To me, that sounds insane
just because biology
isn't black and white.
We're in the middle
of evolution.
To me, what we
call consciousness
is just what it feels like to
be human-level intelligent.
And if something were
more intelligent,
they would have a
consciousness that's
a different nature, that we
don't even begin to understand,
that we can't grasp.
That's how I see it.
But again, there's
20 of these where
it's like a fundamental
disagreement in the tech
community and in the
philosopher communities
about this kind of thing.
So it's hard to say.
It's hard to even imagine
what that thing more
intelligent than
us would be like.
SPEAKER: It's really hard to
imagine what it would be like,
and it's even harder to
imagine what it's going to do.
And so I think most
of us are probably
familiar with the paperclip
argument and the idea
of anthropomorphization.
Could you talk a little
about AI motivation and sort
of what you wrote about that?
TIM URBAN: Yeah.
Well, a lot of what
I wrote about it
came from Nick
Bostrom's thinking,
because it just
made a lot of sense
that anthropomorphizing is a
real amateur error when you're
thinking about this stuff.
And if you made a
spider superintelligent,
bad, bad times for everyone.
[LAUGHTER]
But it's not going to
suddenly be Mr. Rogers,
and want to be kind and
thoughtful and empathetic,
because it's still in
its nature, in its DNA.
It's still a spider.
And so it's still going
to want the kind of-- it's
going to have that core
drive still in many ways.
It'll get more complex.
Maybe it'll develop some version
of its own kind of empathy.
But to expect that it's going
to have something similar to us,
our version of empathy and
our values of valuing humans
in particular is very
specific to our brains, which
evolved over a
long period of time
to specifically be
this way so we'd
be good tribe members
in a human tribe
on the Earth in this
period of time-- I mean,
it's really specific.
And to expect that even a
spider-- which is biology,
so that still has a lot
more in common with us.
To expect that even a
different kind of biology
would end up there,
end up anywhere
near the kind of
specific values we
needed to have and the
specific form of empathy
and understanding of life that
we needed to have-- the chance
of that is very low.
So then when you
talk about something
that's not a human
at all, I just
think it's crazy to
assume it's going
to be empathetic and altruistic
once it gets to that level,
because those are human
brain hormonal things.
Those are not just automatic
when something gets powerful,
when something gets intelligent.
And so therefore, then
you have the problem
of trying to program values.
And you have so many
different problems of this.
One is if you had a good
person, really nice,
good human from the year 900 try
to program a superintelligent
being with knowing what's right
and wrong, the thing would be,
like, tearing the
skin off of infidels
of some kind that don't
look like it, or whatever.
And they'd be thinking,
I'm doing good.
I'm doing good for God.
I mean, it's really different.
We now would say, nope, nope.
Don't put that guy in
charge of anything.
[LAUGHTER]
We have learned a lot.
We have a much more nuanced
truth about humanity now,
and we are going to
do it with our values.
But again, thinking just along
the same kind of analogy,
there's a future that will look
at us and just think, oh, man.
They are tribal.
They are animal.
These were really
primitive people.
So it's hard for us to program
anything in a permanent sense
right now, and
assume that that's
going to be a good thing.
Secondly, if you ask 10
people on Earth right now,
OK, list what's right and wrong.
How should we
program this thing?
You're going to get
10 different answers.
ISIS has a really good
idea of what they think
the AI should be programmed as.
Not because-- you know,
you can call them evil.
But really, they just
think that they're
doing-- they think they know
a right and wrong that we
don't understand.
So you have problems there.
Even if we all agreed,
even if we were eternal
in our understanding,
so nothing changed,
you have major problems.
Because you can't just
program something to
do good, because we don't
want the thing to do good.
We want the thing to
do good for all things,
but humans for sure.
We don't want it
to say, well, OK.
I'm going to do good now.
And look at all this
beautiful life on the planet.
And, well, there's
one species that's
killing a lot of the others.
Let's get rid of them.
[LAUGHTER]
We don't want that.
So we want it to
do good kind of.
We want it to also
make sure it's, like,
being a selfish
dick when it comes
to this species in relation to
all the others, for example.
And then, of course,
you know, again, we
have disagreements on Earth.
So it is just such a
nightmare to even try
to figure out what we
would want it to do
in a way we could all agree.
Then you have the
problem of, you
can try to program
something a certain way
and try to program
something extremely nuanced
like human empathy,
and hope that then
when it becomes
dramatically powerful,
that what we think
we did is actually
what will carry out into
this now all-powerful being.
I think Bostrom and some others
believe that the initial coding
will stick.
He says, it doesn't matter
how smart a human gets.
A human still cares
about self-preservation
and reproduction, and
basic human things.
That's not going to change.
That's at our core, and
everything that happens
comes down to that.
And so a lot of these
thinkers believe
that even when something
becomes superintelligent,
that those core goals
and that core motivation
would still be what it is.
And we have to hope
we get it right.
But then I've heard
others that think,
that's not necessarily true.
We should not assume
that something far more
intelligent than we
are wouldn't begin
to change its own core makeup.
So again, this is when
I kind of just go,
I'm happy this isn't my problem.
I'm happy that I am
not one of the-- I
don't have to figure this out.
I get to sit back and watch
the show, and kind of root
for one way.
Because it's just so
incredibly daunting.
Not that people shouldn't
be going for it.
I'm happy that other people
are doing stuff like OpenAI.
Which, even that,
I've heard people say,
that's the best thing
that's ever happened.
And some other people say,
that is the worst thing that
could possibly happen,
that Elon and those others
are totally off base with that.
So it's hard to know anything.
But what I do know is that a
lot more money and a lot more
resources and a
lot more thinking
is currently going into AI
development than AI safety.
Because AI development's
a lot more fun, exciting,
and profitable.
You're going to win glory
with AI development,
not with AI safety.
And that's one thing when it's
the American or other kind
of developed country
companies working on it.
But what happens when we
begin to get into an arms race
with it?
Even if we have good
intentions, immediately, safety
goes out the door, and we
start to think we have to win.
Because if we don't win,
they're going to win.
And we'll take our chances
with whatever we develop.
So an arms race is probably the
worst thing that could happen.
And so, yeah.
Now I'm just like a
crazy old man just, like,
going off on endless tangents.
Because this is-- but, yeah.
[LAUGHTER]
SPEAKER: I want to talk a little
about the reactions to those.
Because I knew the community
around "Wait But Why"
is a really big thing.
I can see that your post
was first on our LISTSERV
on February 11, 2015,
which was about three weeks
after the first one came out.
I'll show you the
reactions later.
But how have people responded to
your work on superintelligence?
TIM URBAN: You
know, it's a range.
The people who don't know
stuff like me, the people who
were like me before I
started researching,
we're all just like, whoa.
I need to forward
this to everyone.
I can't believe this.
The same exact reaction I had
when I was reading this stuff.
And then among experts,
I've gotten a few reactions.
There's someone being like, yes.
Finally, someone is saying this.
Well, everyone should read this.
And to the other side of
the spectrum being like,
no one should read this.
This is so misguided
and full of errors.
And then there's somewhere
in the middle, which
is the patting me on the head
and being like, you know,
I wouldn't put my
name on it, but yes.
This is fine for--
[LAUGHTER]
--the layman to read.
This will do the job fine.
Again, this is not to
be taken that seriously.
But it ranges.
And I've gotten, again, I've
gotten some people that are
experts that actually really
think that it's a good thing
for a layman to read,
and others that don't.
So I don't know.
I don't know.
But again, those people
I'm talking about
are the tiny minority.
Most people were
people like me that
were just totally introduced
to this for the first time.
When they think of AI, they
thought of "The Terminator."
I mean, it was
really simple stuff.
Because what do
we know about AI?
We know what movies tell us.
There's a million
fiction movies about AI,
and that's what people
base their info on.
And there's some stuff in
those movies that is accurate
or that is at least
based in science,
and other stuff
that's totally not.
And so I think for
that reason alone,
just to introduce
this to people,
to correct some
simple misconceptions,
like the concept that, you
know, AI is anthropomorphic.
And to understand that the
danger isn't that the AI turns
on us because it wants
power, because that's
anthropomorphizing, but that it
builds a new beautiful house,
because that's what
it's working on.
And oh, there's an ant
hill underneath the house.
Not my problem, and that's
where the ant hill is, so.
SPEAKER: So one of the people
who's a fan of your work
is Elon Musk, like you mention.
Can you talk about
working with Elon
and doing the intense
four-page interview,
and are you ready to
join his colony on Mars?
[LAUGHTER]
The metaphorical
"Mayflower," I guess.
TIM URBAN: I would say there's
a 70% chance that I step
on Mars at some point in life.
[LAUGHTER]
SPEAKER: Wow.
TIM URBAN: I think that's fair.
I think that's reasonable,
the more you kind of learn
about where things are headed.
So that's exciting.
And I think there's
even a higher chance
that I and most
people in this room
end up in space at some point.
I think it's going to be
very normal for humans
to go to space, to take their
9-year-old for his birthday
to space on these little
space tourism things.
That's going to, I think,
become pretty common,
and it's going to
be really cool.
I saw something
this morning tweeted
by one of the twin
astronauts, one of those two.
I actually met one
of them in person.
And I--
AUDIENCE: You did?
Which one?
TIM URBAN: I met Mark.
And I was like, I want
to say that I was reading
all your tweets, and thank you
for being out there for a year.
But maybe I should be talking to
you about your wife's heroism,
because you're the other one.
And so I just said, thank
you for your service.
[LAUGHTER]
Anyway, so he tweeted some
thing about these balloons,
this company that has these
huge balloons bringing things
into space.
It's going to be cool.
Again, very off topic.
So back to Elon.
Yeah.
Elon read the AI post, and
I think a couple others,
and he tweeted a post or two.
And then, yeah.
And then his person got in touch
and kind of said, you know,
he'd like to talk to you about
maybe doing some writing stuff.
So I had, like, the
most stressful phone
call of my life.
And I'm in my apartment, in
my pajamas, on my headphones,
and just pacing
around being like--
[LAUGHTER]
But we had a nice convo
after that.
How do I say this--
he's super awkward.
I'm kind of awkward.
And so we had an awkward-off
for the first five minutes.
[LAUGHTER]
An old Western awkward-off.
And I actually had a chance
to talk to him many times.
And each time, we would begin
with a five-minute old Western
awkward-off.
[LAUGHTER]
It takes some time to settle in.
And then I read
Ashlee Vance's book,
and he said that, yes,
for the first-- like,
after five minutes, he
finally settles down.
And I was like, yes.
OK.
So it's not just me.
This is, like, his thing.
But, no.
It was really awesome
to get to just
have a chance to talk to him.
And he's so not like he should
be, given that he's Elon.
Like, he should
be more reserved,
and he should just have a lot
of-- certain kind of gravitas.
But he's not.
He just seems like he's
someone who's never
changed from who he always was.
He'll talk to me on
the phone like he'll
talk in an interview, like he'll
talk to a press conference,
like he'll talk to
some friends at SpaceX.
He just kind of is
always who he is.
And right away, was just
talking about all the things he
wishes people knew more about.
Why lowering the cost
of space travel's
important and becoming a
space-faring civilization.
Why ultimately Mars is not
just a cool thing to do,
but really important.
Why electric vehicles
and accelerating
the advent of sustainable
energy is critically important.
And so it was the biggest
no-brainer in history
to dedicate time to this.
It's a hero of mine
who also is taking
on the most important issues
with the raddest possible
companies.
Very obvious person to want
to dedicate a lot of my time
to helping and working
on it in any way I could.
So, yeah.
Then I had a chance to
go out to the factories
and meet some of the
different executives,
and really talk to them.
And there was a lot of trust.
We kind of had this agreement.
Because this is not, like, an
old organization who just says,
well, this is how
we do press here.
They kind of said, yeah.
Talk to people.
And they're not
all press-approved,
so just let us know stuff
they say, and make sure.
And it was great.
They basically told everyone,
like, trust this guy.
Tell him whatever.
I don't know why, but they did.
[LAUGHTER]
And I honored that by-- I kept
my journalistic integrity,
because I said I was publishing
this on "Wait But Why,"
and I was going to be able
to do it the way I wanted.
But I said, for quotes,
I am happy to send them
to you, to let you see if
you want them all in there.
Like, this is not some
weird journal rules,
like, journalism rules
that I just don't even
know what they are.
And I don't care.
Why have everything
be so adversarial?
It's just like, sure.
Check them.
If you want to change your
quote, like, you said.
If it's not what you want to
say, like, say something else.
It's not like, ha!
That was the honest thing,
and now you're lying.
You're still you saying stuff.
[LAUGHTER]
And the truth is, they
changed almost nothing at all.
There were tiny little
things, like the bandwidth
of their satellite internet.
They didn't want their
competitors-- I mean,
the tiniest thing.
So it was great.
It could not have been
a more fun project.
It got totally out
of hand, lengthwise.
It was a four-post,
95,000-word thing.
But yeah.
It could not have been more fun.
And I learned so much
about all those industries,
about entrepreneurship, about
how technology progresses
and moves in general.
I had to study the past a lot.
I had to study
electrical, you know,
revolution in the
1880s and Henry Ford,
and everything he did.
And I understand how airplanes
came about, and all that.
So it was just super fun.
And also, probably
most importantly,
I got to examine
how Elon thinks,
which is totally the secret.
I mean, I'm so convinced that
he's smarter than everyone here
and richer and more
ambitious and more insane,
harder working,
all those things.
But honestly, if that was it,
there would be more Elons.
I mean, it's his
incredible ability
to really genuinely reason
from first principles in almost
everything he does
that's important.
That is a theme with all those
icons in history of all kinds,
art and entrepreneurship and
politics and whatever, who
end up really changing things.
And so just to see that
up close and to examine it
with actual quotes--
he has this quote.
When he was, like,
seven or something--
this is from the
Ashlee Vance book
where he says something
like he would go to a girl
during recess who would
say, I'm scared of the dark.
And he'd say, I used to
be scared of the dark,
and then I realized
that dark is just
the absence of photons in the
visible light wavelengths,
500 to 700 nanometers.
[LAUGHTER]
And so when we see that, we
hear that, we say, oh, look.
You know, like, of course,
he's being a little weird,
but he's of course
being rational.
And the cute kid is
scared of the dark.
We all know the dark
isn't actually scary.
And good for Elon
for, like, adorably
seeing it like an adult.
And yet he says a quote 30, 40
years later in some interview
that I watched on YouTube
where he says, you know,
I don't know why people are so
scared of starting a company.
Really, what's the worst
that's going to happen?
You're not going to die.
You're not going to starve.
Like, what's the worst
that's going to happen
if more people start a company?
And I'm thinking,
that's the same quote.
That's the visible
photon absence
of-- that is the same quote.
And the only difference
is he's the only one being
the adult there, and we're all
the little girl who's saying,
I'm scared of the dark.
And I had so many
epiphanies like that,
looking at simple
things about the way he
talks and things he does.
I say, that is the
whole key right there.
That is the answer.
So that ended up
being the fourth post
of the series, just
about how he thinks,
which has totally changed
the way I at least attempt
to grow in my thinking.
SPEAKER: One of the
things that Elon
won't touch on publicly,
and actually Google either,
is cryonics.
But you've recently
written a post about it.
And in fact, you said, you've
quit cryo-procrastinating,
and that you registered
yourself, right?
TIM URBAN: Well, OK.
So what happened is--
[LAUGHTER]
No, I really-- so I
made an appointment.
[LAUGHTER]
And I talked to Alcor, and I've
talked to a life insurance guy.
And I have the plan
worked out, and I
am going to be-- I will
pledge to be signed up fully
in six months.
I'm good to die in six months.
[LAUGHTER]
But then what happened was I
was close, and he was like, OK.
So here is like 17 attachments.
Just print those out
and fill those out.
And you need, like, weird
scans of other things.
And you need to just make sure
you have this information,
then fill those out,
things with the boxes.
And I just suddenly
was like, yeah, OK.
I'll do that in the fall.
So I am
cryo-procrastinating now.
But the real daunting
thing is like, I don't even
know how to begin.
Do I have to call?
How do I trust this company?
That work, I've done already.
So that's why I have
faith that I will do it,
because I've done all of
the really icky things.
And I just need to
finish the deal there
with a pretty cheap monthly
premium life insurance policy,
and I'm like, good.
Covered.
Like, everyone should do that if
living a long time sounds fun.
Not that it'll
definitely happen,
but it definitely will not
happen if you don't do it.
[LAUGHTER]
SPEAKER: In the post, you
talk about the difference
between being dead
and mostly dead.
Could you talk a
little bit about why
that matters for cryonics?
TIM URBAN: Yeah.
So dead is not nearly
as easily defined a word
as we all think it is.
50 years ago, someone
collapses on the street.
Their heart stops beating.
They're not breathing.
They're dead.
They're declared dead, and
that's the end of them.
Today if that happens, they're
rushed to the hospital.
We have resuscitators
of all different kinds.
And we can often
get them going again,
and then figure
out what was wrong.
And very often, those people
live another bunch of decades.
So they weren't dead.
They were unable to be saved
with technology 50 years ago.
So again, the analogy
applies today.
There's clinical death, which
is when your heart stops beating
and you stop breathing.
Again, it's been a little
bit since I wrote the post.
I think that's the
exact definition.
And that's clinical death.
The difference between
clinical death and legal death
is that with clinical death,
there usually could be
procedures that could save you--
we know that you are
potentially saveable.
But if you have a
do-not-resuscitate contract
that you've signed--
say, someone knows
they don't have
much time left,
they have a terminal illness
and they're suffering--
then once the heart stops,
that's kind of a way
for the hospital to legally say,
let's let this go.
And the person would
like to let it go.
That's clinical death.
Legal death happens once
the heart has stopped
and breathing has stopped
for, I think, five minutes.
Maybe five or six minutes.
I forget what the
exact number is.
And at that point, no
matter what they try to do,
you can't be salvaged, because
you have now become brain dead.
And brain death is the
definition of legal death.
So again, today
there's clinical death,
which we can legally treat
as death-- it's not
assisted suicide.
And then there's legal death,
when you're officially done.
There's nothing we can do.
Now what cryonicists say--
and these people are--
I was super suspicious
going into this.
Are these, like,
Scientologist-type people?
Are these going to be
people that are-- you know.
These are hardcore scientists.
These are completely, utterly
rational, reasonable people
who don't attack the
people who disagree.
They say, I understand why
you think that, and here's
our thinking.
They're great.
The Alcor website FAQ is
just such a smart place.
And what they say is,
after five minutes,
you are unable to
be saved in 2016.
But you very well might be
able to be saved and restored
in the future.
And that illness you have--
where even if you could
be resuscitated,
maybe you'd only have a few
weeks of suffering anyway--
well, maybe in a hospital
in 2040, that illness
is no problem.
We can fix it, and we can
also easily resuscitate you
and restore you.
Even if you're brain
dead today, they
explain that brain
death is not actually
that your brain
is beyond repair.
It's that the state
of your body is such
that when blood
gets going again,
it will damage the
cells in your brain.
So there's no way
to bring you back
without then
irreparably killing you.
Again, 2040, 2060.
Those hospitals, they might
say, this is no problem for us.
We easily have technology
to help this person.
So what cryonicists want
to do-- because they
do consider you still
very much alive,
just beyond hope today--
is put you on biological pause
by vitrifying you.
Not freezing you, because
then the ice crystals
would destroy your cells.
Vitrifying, which is putting
you into a glass-like state
where essentially the activity
of the atoms in your body
has just slowed so much
that they can't move.
And they actually put
antifreeze solution
in your veins so that it will
not crystallize into ice.
So you end up vitrified
in this biological pause.
And they're incredibly
honest and fair about saying,
there's a good chance that
the way we're doing this today
will have messed something
up, and in the future,
even if they had the technology,
they won't be able to save you.
Or that that technology
will never happen.
Or that you'll be there, and
reanimating vitrified people
just won't be a top priority
of the people of the future.
We don't know.
But they believe that it
will be, because they say,
is it a priority to help someone
who's in a coma get out of it?
Of course.
And not because the doctor cares
about you or is a good person.
Because it's their job.
And they think that once people
start being able to be revived,
a vitrified person is
just as much a living
patient as anyone else.
So anyway, they say,
this might not work.
But at least it gives you a shot.
And it's putting a bet
on the future technology.
And you know, if
you look at the past
and how much we would have
blown the minds of people
in centuries past had we
brought them to today,
I don't want to bet
against the future.
The future is going to be rad.
It's going to shock us.
So I'll take my chances.
Sure.
Like, here's my vitrified body.
See what you can do.
It's better than the
alternative, which
is really shitty.
[LAUGHTER]
SPEAKER: Many
people in this room
are aware of the potential
impact of superintelligence.
We talked about it.
In fact, many of us are
on the bleeding edge
of its development.
What message would
you pass along
to people who are developing
artificial intelligence right
now?
TIM URBAN: Honestly,
I want to be like,
oh, don't forget about safety.
But I think those people are
aware of the safety problem
already.
It's not like
they're not-- these
are really intelligent
people, and I
think that they're aware of it.
And I think that they
are trying to do it
in the best way possible.
I think they know that
this is going to happen.
Someone's going to do it.
The human species
is going to do this.
You can't stop the species as a
whole from building technology
that it wants to build.
So I would just say,
like, good luck.
Keep going.
[LAUGHTER]
Seriously, I think they're
doing awesome stuff,
and I just hope
it works out well.
And someone's going to do it.
And yeah.
SPEAKER: Thanks, Tim.
TIM URBAN: Yeah.
[APPLAUSE]
SPEAKER: Now if you
don't mind, we'd
like to move into questions
from the audience.
TIM URBAN: Yes.
AUDIENCE: Who should we
invite to speak next?
TIM URBAN: I mean, you're
asking at a weird time.
I'm about to do a big VR post,
very much, like, in this world.
But at the moment, I'm
working on a post that
is completely unrelated
to technology,
and it has to do with,
like, our culture
kind of censoring freedom
of speech a little bit,
and me being scared
as a blogger to write
about all kinds of
important social issues
that I shouldn't be
scared to write about.
And as, like, a liberal guy,
I'm scared of the liberals here.
And so I would say someone
like Sam Harris or, you know--
AUDIENCE: Who's also
[INAUDIBLE] this topic.
TIM URBAN: Yeah.
Right.
And Sam Harris also is
awesome when he talks
about AI or any of this stuff.
He's just one of
these people that's
interested in
interesting things.
But he's my intellectual hero.
He really is, like, just
super logical and super brave.
SPEAKER: This question
was submitted.
Could you talk about learning
new subject areas well enough
to write about them?
Are there signs
you personally use
for gauging the soundness
of your own understanding
of the new area?
What are they?
TIM URBAN: Yeah.
So I very much am
clear about what
I am, which is not an expert.
I will never try
to be an expert.
I'm not going to try to
teach experts anything.
What I'm trying to do is
learn enough that I could
sit down at, like, a
table with friends
and just be like, OK.
Listen to this thing I've been
reading about for 40 hours.
And by the time it's done,
they're like, oh, OK.
I'm with you.
Like, I want to go
from here to here.
If the expert's at the ceiling,
I want to go from here to here,
and then get a lot of people
that are curious like me here
up there with me.
And so for me, that
is often simply
reading smart
people saying stuff.
And the thing that I'm
good at that sometimes they're
not is taking what they're
saying, especially if
there are multiple views,
and building some kind
of memorable framework
that puts them all together
in something someone can read.
It's one thing to read
it and understand it then.
But maybe six months
later, you still kind of
have it in your head, because
the framework or the terms,
or something about the
way it was articulated,
helped you build that permanent
framework in your head.
So that's my job.
And sometimes it's easier.
I found cryonics to be easier,
because the cryonicists are all
saying the same thing.
And then there's a lot of
people who don't understand it
who are not to be
taken seriously.
Ironically, the
cryogenicists-- the ones
who work with cold
temperatures for things
like preserving organs
and other things
like that-- they hate cryonics.
It's like the nerd
war of the cold.
It's like this odd situation.
But again and again, I
would read quotes from them
and just immediately say,
oh, you're being like
a lying politician.
You're just slandering them.
So I would just rule them out
as interesting intellectuals.
You know, and I would find--
I look for real dissent.
Because the thing I don't want
to do is read a viewpoint
when there's a big, very
valid dissenting viewpoint,
miss it, and then present
the first one very strongly.
If there is a valid
dissenting point,
I want to read both
and then present both.
So for me, it's just
making sure that I've
identified what the various
relevant viewpoints are.
And then once I feel like I've
read a bunch of all of them
and I'm not being
propagandized by one of them,
then I'm ready to go.
I don't need to go
much further than that.
AUDIENCE: So there
are some concepts
that make it hard
to think accurately.
Superstition, for example, makes
it hard to do science right.
And I'm wondering, I
was struck by how often
you use the word insane in
talking about AI researchers.
I'm wondering if it's possible
that there's some concept that
is contagious and sticky
and pervasive in AI
thinking-- perhaps
consciousness-- that makes
it very hard to think sanely.
And so when we ask
AI researchers today,
it's like asking a medieval
theologian about science.
TIM URBAN: Yeah.
I was having a discussion
the other day about, what
are the things that are going to
be really offensive in 50 years
that we're just doing
now without realizing it?
One of the things
someone mentioned
was, like, the word
crazy and insane
might be, like, highly
offensive in 50 years.
Anyway, since I've
thrown it out there, like,
20 times in this conversation,
I'll mention that.
I think, to me, the
thing that strikes me
is, oh, there could be
a total delusion here--
smarter future people,
or a smarter
future something,
might look at us today
and say, oh, they were totally
missing something critical.
I think that's
what you're asking.
AUDIENCE: Something has to be
disbelieved in, like miracles,
in order to think
accurately about the way
the scientific world works.
TIM URBAN: It may have
to do with consciousness.
It might have to
do with-- there's
a big distinction--
again, I have trouble
with the concept
of consciousness.
But a distinction between if
we do believe in consciousness,
and then, if AI can become
conscious-- if whatever
codes that in our brain
isn't some magical thing,
it's just neurons firing
in a certain pattern.
So if we can replicate
that, we actually
might view it as not
even necessarily bad
if it wins and, like, we
end up out of the picture.
Because it's more important
than we are in that case.
We're more
important-- if someone
kills a dog, that's
bad, but way less bad
than if they kill a human.
And because humans are smarter,
they have a higher capacity
for suffering and for joy.
And so we consider
ourselves more important.
[LAUGH]
He's looking at
me like I'm crazy.
But if--
[LAUGHTER]
--if we get to a
superintelligent AI that
is conscious, I
think in the future
we could look at
it as very myopic
that we thought that we
were this critical piece
of the puzzle that
we needed to last.
We may look and say, the
thing that is much smarter now
is more important.
We were an important
link in that chain.
Right now, it's very hard
for anyone to think that.
We're such humans.
So that's one thing.
And then if it's not
conscious, then it
becomes this really upsetting
thing, that it's smarter
and then it got rid of us.
Because we were actually the
highest real consciousness.
And the thing that's
higher than us is now
just kind of a mindless thing,
and that becomes a huge shame.
Still looking at
me like I'm crazy.
But I would say that probably
we're full of those things
because we're talking about a
higher level of intelligence.
That's not something we
even have the capacity
to understand.
SPEAKER: On your blog
"Wait But Why," you've
written about a variety of
high-impact, futurist subjects,
ranging from AGI to cryonics
to the future of energy.
In the context of powerful
emergent technology,
how do you think about
the future of society?
What areas are you
most optimistic
or pessimistic about?
TIM URBAN: We're always in
a battle as a society,
just like each of us is in
an internal battle.
We're like a transitional species.
If you drew a line from really
tribal, almost subhuman beings,
before we were fully
human, to wherever
we would be going if AI
hadn't come along and disrupted
evolution-- wherever
we would end up
as a species of total
prefrontal cortex,
rational decision makers,
totally in charge
of their thoughts, coming
from a totally rational place--
we're in the middle.
We're battling with that new
prefrontal cortex we've got,
and this new rational
cognitive ability
we have that other
animals don't have.
And then there's
this complete animal
that it's stuck inside of.
And so society as a
whole is somewhere
between humans as a tribal
and warring and just
very small-minded
animal species,
and a species of the
future that has just
gotten way beyond that.
And so I think as
technology emerges,
the thing is, you don't have to
be fully rational as a species
to develop amazing,
powerful technology.
And that's kind of
the scary thing.
Like, I wish we could
get there as a species
first before we started
building a lot of these things.
It's dangerous for
an animal species
to have that much
power, because we
have very short-sighted, very
weird, selfish motivations.
So I think that it is worrisome
as we create more technology,
that the technology's still
controlled by an animal
species.
But I would say a
lot of it relates
to what I just mentioned.
I think that we need to continue
to be a marketplace of ideas
as a society
through all of this,
and resist the urge to
get too much on a team
and really become hate-filled.
Which, again, my
thinking these days
is that it's not just
a thing of the right,
as a lot of leftists
like to believe--
it's really on both sides.
It's become highly tribal.
And I think that that
makes it very hard
for us to move forward in
a really productive way.
AUDIENCE: Earlier you
made some brief allusions
to the emerging field
of artificial morality.
Now some of the
more recent findings
in the psychology
of morality say
that the reason we can have
such different morals today
than our ancestors did 2,000
years ago, despite having
essentially the same genes,
is that our morals are
a function of primary wants and
desires and the environments
in which we find ourselves.
If that turns out to be a good
way to do it with AI as well--
as opposed to the companionship,
ambition, and curiosity
that drive humans-- what
do you think AI should want?
TIM URBAN: It's interesting,
our environment plus our nature
is what determines how our
morality will shake out.
It's weird, because
AI, again, it's
tempting in that question to
anthropomorphize and think,
well, what would it
want in its environment?
AUDIENCE: Oh, no.
What I'm talking about is, what
should we program it to want?
TIM URBAN: Yeah.
Well, I think we would
want to think-- it's like
the [INAUDIBLE] thing.
What would we want
were we better?
And I think you have to bring in
philosophers for something
like this.
It goes beyond scientists
and engineers in many ways
to try to figure out what we
would want if we were better.
But even better is to kind of
let something more intelligent
than us figure out what we
would want if we were better.
Because we're not better,
so we don't want that.
[LAUGH]
And to continue to
understand humans
better than humans
understand humans,
and to understand
what humans would
be like in a perfect utopia
of plentiful resources
when everyone is acting
totally rational.
What would we want then?
And basically, let
it figure it out.
SPEAKER: Sometimes blog posts
can only achieve so much.
Have you considered
forming a group
or taking on a job
that would have
more direct impact
on ethical issues
like the development
of superintelligent AI?
TIM URBAN: I've thought
about different ways
to kind of scale "Wait But Why."
And the jury's still
out for what I think
the best way to do that is.
Right now, I kind of
think that I'd rather
write a post that inspires a
bunch of 18-year-olds at MIT
who are smarter than I am
to go into those things,
and then switch to
a different post.
I think I can actually
have a lot of impact
that way, by kind of inspiring
or educating or exciting
a bunch of people who are more
capable than I am of actually
figuring out these answers and
putting their brains into it.
So I think right now, that's
how it's scalable:
it can scale by
basically, indirectly,
directing more energy
toward these things.
And I always fear that
if I hired people and started
bringing in more of a team,
I'd spend my time managing,
and we might lose a little bit
of what right now is making
an impact on "Wait But Why."
So I'm still figuring
it out, but I
hope to be able to scale
this in certain ways
in the future.
Yeah.
SPEAKER: Tim, thanks so much.
It's been an absolute honor
to be here with you today.
[APPLAUSE]
