Sam Harris on the Danger of Artificial Intelligence | Joe Rogan Podcast
that we figure out a way, in some way (I'm not endorsing, like, taking people's money and giving it to other people, but in some sort of a way) to eliminate poverty. Is that even possible? Is it ever going to be possible to completely eliminate poverty worldwide, and within, like, a lifetime?

Well, I think we talked
about this the last time, when we spoke about AI, but this is the implication of much of what we've talked about here. If you imagine building the perfect labor-saving technology, imagine just having a machine that can build any machine, that can do any human labor, powered by sunlight, more or less for the cost of raw materials: you're talking about the ultimate wealth-generation device. And now we're not just talking about blue-collar labor; we're talking about the kind of labor you and I do, the artistic labor and the scientific labor, just a machine that comes up with good ideas. So we're talking about general artificial intelligence. In the right political and economic system, this would just cancel any need for people to work to survive. There would be enough of everything to go around, and then the question would be: do we have the right political and economic system, where we could actually spread that wealth? Or would we just find ourselves in some kind of horrendous arms race, and a situation of wealth inequality unlike any we've ever seen? We don't have that; it's
not in place now. And if someone just handed us this device, and all of my concerns about AI were gone, I mean, there's no question about this thing doing things we didn't want; it would do exactly what we want, when we want it, and there's just no danger of its interests becoming misaligned with our own; it's like a perfect oracle and a perfect designer of new technology. If it were handed to us now, I would expect just complete chaos.
If Facebook built this thing tomorrow and announced it, or rumors spread that they had built it, what are the implications for Russia and China? Well, insofar as they are as adversarial as they are now, it would be rational for them to just nuke California, because this happiness device is a winner-take-all scenario. I mean, you win the world if you have this device. The moment you have this device, you can turn the lights off in China. It's just the ultimate, given what we're
talking about. And, you know, many people may doubt whether such a thing is possible, but again, we're just talking about the implications of intelligence that can make refinements to itself over time. Of course, that has no relationship to anything we've experienced as apes. You're talking about a system that can make changes to its own source code and become better and better at learning, and more and more knowledgeable. It has instantaneous access to the Internet; it has instantaneous access to all human and machine knowledge, and it does thousands of years of work every day of our lives. It does thousands of years of equivalent human-level intellectual work every day it's on. I mean, our intuitions completely falter to capture just how immensely powerful such a thing would be. And
there's no reason to think this isn't possible. I mean, the most skeptical thing you can honestly say about this is that it isn't coming soon. But to say that this is not possible makes no scientific sense at this point. There's no reason to think that a sufficiently advanced digital computer can't instantiate general intelligence of the sort that we have; there's no reason to think that. I mean, intelligence has to be, at bottom, some form of information processing, and if we get the algorithms right, with enough hardware resources (and the limit is definitely not the hardware at this point; it's the algorithms), there's just no reason to think this can't take off and scale, and that we would be in the presence of something that is like having an alternate human civilization in a box, making thousands of years of progress every day. So just imagine
that you had, in a box, the ten smartest people who have ever lived, and every week they make twenty thousand years of progress, because that is the actual arithmetic: we're talking about electronic circuits being a million times faster than biological circuits. So even if it were just that (and I believe I said this the last time we talked about AI, but this is what brings it home for me), even if it's just a matter of faster, nothing especially spooky, just something that can do human-level intellectual work but a million times faster: and again, this totally undersells the prospects of superintelligence; I think human-level intellectual work is going to seem pretty paltry in the end.
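As a back-of-the-envelope check on that twenty-thousand-years figure (taking, as above, the purely hypothetical million-fold speedup over biological circuits as the input):

```python
# Rough sanity check on the figure quoted above: if electronic circuits ran a
# million times faster than biological ones, one week of wall-clock time would
# buy a million subjective weeks of human-level intellectual work.
SPEEDUP = 1_000_000        # speculative electronic-vs-biological speed ratio
WEEKS_PER_YEAR = 52.18     # average weeks in a calendar year

subjective_years_per_week = SPEEDUP / WEEKS_PER_YEAR
print(round(subjective_years_per_week))  # on the order of 20,000 years
```

At a million to one, one wall-clock week really does come out to roughly twenty thousand subjective years.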
Just imagine speeding it up: if we were doing this podcast, imagine how smart I would seem if, between every sentence, I actually had a year to figure out what I was going to say next. So I say this one sentence, and you ask me a question, and then in my world I just have a year; I'm going to spend the next year getting ready for Joe, and it's going to be perfect. And this is compounding upon itself: not only am I working faster; ultimately, I can change my ability to work faster. I mean, we're talking about software that can change itself, something that becomes self-improving, so there's a compounding function there, but
the point is, it's unimaginable in terms of how much change this could effect. And if you imagine the best-case scenario, where this is under our control, where there's no alignment problem, where this thing doesn't do anything that surprises us, will always take direction from us, and will never develop interests of its own (which is, again, the fear), let's just say it's totally obedient: it's an oracle and a genie in one. We say 'cure Alzheimer's,' and it cures Alzheimer's. We say 'solve the protein-folding problem,' and it's just off and running. 'Develop a perfect nanotechnology,' and it does that. This all, again, goes back to David Deutsch: there's no reason to think this isn't possible,
because anything that's compatible with the laws of physics can be done, given the requisite knowledge. You just get enough intelligence, and as long as you're not violating the laws of physics, you can do anything in that space. But the problem is, this is a winner-take-all scenario. So Facebook does it tomorrow, and China and Russia find out about it: they can't afford to wait around to see whether the US decides to do something not entirely selfish with this, because their worst fears could be realized. If Donald Trump is president, what's Donald Trump going to do with a perfect AI, when he has already told the world that he hates Islam? We would have to have a political and economic system that allowed us to absorb this ultimate wealth-producing technology. And again, this may all sound like pure sci-fi craziness to people; I don't think there's any reason to believe that it is, but walk way back from that edge of craziness and just look at dumb AI, narrow AI: self-driving cars and automation and intelligent algorithms that can do human-level work. That is already poised to change our world massively and create massive wealth inequality, and we have to figure out how to spread this wealth. What do you do when you can automate 50 percent of human labor?
Were you paying attention to the artificial-intelligence Go match?

Yeah. I mean, I don't actually play Go, so I wasn't paying that kind of attention to it, but I'm aware of what happened there.

And you know the rules of Go?

No, actually, I don't play it. I know vaguely how it looks when a game is played, but I don't...

It's supposed to be very complicated.

Oh yeah, more complicated, with far more possibilities. And that's why it took twenty years longer for a computer to be the best player in the world. Did you see how the computer did it?

Well, I didn't, but I know the company that did it is DeepMind, which was acquired by Google, and they're at the cutting edge of AI research. And, yeah, the cartoons are unfortunately not so far from what is possible. But I
mean, again, this is not general intelligence like we're talking about. These are machines that can't even play tic-tac-toe. Now, there have been some moves away from this. DeepMind has trained an algorithm to play all of the Atari games, from 1980 or whatever, and it very quickly became superhuman on most of them; I don't think it's superhuman on all of them yet, but it could play Space Invaders and Breakout and all these games that are highly unlike one another, and it's the same algorithm becoming expert and superhuman in all of them. And that's a new paradigm. It's using a technique called deep learning for that, and that's been very exciting, and it will be incredibly useful.
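The Atari result can be made concrete with a toy. What follows is not DeepMind's actual deep Q-network (which replaces the lookup table below with a neural network reading raw pixels); it's a minimal tabular Q-learning sketch on an invented five-cell corridor, just to show the general idea of one generic, reward-driven learning rule that knows nothing about the particular game:

```python
import random

random.seed(0)

# Minimal tabular Q-learning on a 5-cell corridor: start at cell 0, reward 1
# for reaching cell 4. The same generic update rule, pointed at a different
# environment, would learn that task instead: one algorithm, many games.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                          # step left, step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.3           # learning rate, discount, exploration

for episode in range(300):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the current value estimates.
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Core update: nudge Q(s, a) toward reward + discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(GOAL)]
print(policy)  # the learned greedy policy walks straight to the goal: [1, 1, 1, 1]
```

The agent starts knowing nothing, stumbles around, and ends up with a value table whose greedy policy solves the task; DeepMind's contribution was scaling this style of learning to raw video input with deep networks.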
You know, this is, I mean, the flip side of all this: I know that everything I tend to say on this sounds scary, but the next scariest thing is not to do any of this stuff. We want intelligence, we want automation, we want to figure out how to solve problems that we can't yet solve, and intelligence is the best thing we've got, so we want more of it. But it's scary that we have a system where, if you gave the best possible version of it to one research lab or to one government, it's not obvious that that wouldn't destroy humanity, that it wouldn't lead to massive dislocations where you'd have, you know, some trillionaire trumpeting his new device and 50 percent unemployment in the US in a month. I mean, it's not obvious how we would absorb this level of progress, and we definitely have to figure out how to do it. And, no, of course we can't assume the best-case scenario.
Right, and that's the best case. I think there's a few people who put it the way you put it, who terrify the shit out of people, and everyone else seems to have this rosy vision of increased longevity and automated everything, everything fixed, easy to get to work, medical procedures easier because they're going to know how to do it. But everybody looks at it like we are always going to be here. Are we obsolete? I mean, this idea of a living thing that's creative and wrapped up in emotions and lust and desires and jealousy and all the pettiness that we see celebrated all the time (we still see it; it's not getting any better): are we obsolete? I mean, what if this thing comes along and says, listen, there's a way, you can abandon all that stupid shit, all that stuff that makes you fun to be around (yeah, it also fucks with you), and you can live three times as long without that stuff?

Well, I think it
without that stuff well I I think it it
would in the best case would usher in
eh-eh the possibility of this kind of
fundamentally creative life we're on the
order of something like the matrix
whether it's in the matrix or it's just
in the world that has been made as
beautiful as possible based on what
would functionally be an unlimited
resource of intelligence I'm just like
for there to be a an ability to solve
problems of a sort that we can't
currently imagine I mean it's just it
really is like a place on the map that
you can't you can't you can indicate
it's over there you know it's like the
blank spot on the map is why it's called
the singularity right it's like oh this
is this is a it was it was john von
neumann the the inventor of game theory
who a mathematician who along with Alan
Turing and a couple of other people is
really responsible for the computer
revolution he was the first person to
use this term singularity to describe
just this that there's a speeding up of
information processing technology and
Apryl a cultural reliance upon it beyond
which we can't actually foresee the
level of change that can come over our
society it's like you know an event
horizon past which we can't see and this
certainly becomes true when you talk about these intelligent systems being able to make changes to themselves. And again, we're talking mostly software. The most important breakthroughs are almost certainly at the level of better software; in terms of the computing power of the physical hardware on Earth, that's not what's limiting our AI at the moment. At some point we may need more hardware, but we will get more hardware, up to the limits of physics, and it'll get smaller and smaller, as it has. And if quantum computing becomes possible, or practical (and actually David Deutsch, the physicist I mentioned, is one of the fathers of the concept of quantum computing), that will open up a whole other extreme of computing power that is not at all analogous to the kinds of machines we have now. But when you imagine
it, people seem to always want to... I just had this conversation with Neil deGrasse Tyson on my podcast. And I'm not picking on people; I'm just attributing these ideas to him. He doesn't take this line at all; he thinks this is all bullshit. He's not at all worried about AI.

What does he think?

He thinks that we'll just use it. He's drawing an analogy from how we currently use computers: they just keep helping us do what we want to do. We decide what we want to do with computers, we add them to our process, that process becomes automated, and then we find new jobs somewhere else. Like, you don't need a stenographer once you have voice-recognition technology, and that's not a problem; the stenographer will find something else to do. So the economic dislocation isn't that bad, and computers will just get better than they are, and eventually Siri will actually work and will answer your questions well, and it's not going to be a laugh line, what Siri said to you today. And all of this will just proceed
to make life better. Now, none of that is imagining what it will be like, because there will be a certain point where you'll have systems that are... it's like chess: the best chess player on Earth is now always going to be a computer. There's no human being born tomorrow who is going to be better than the best computer. I mean, that's it; we have superhuman chess players on Earth now. Imagine having computers that are superhuman at every task that is relevant, every intellectual task. So the best physicist is a computer, the best medical diagnostician is a computer, the best prover of math theorems is a computer, the best engineer is a computer. There's no reason why we're not headed there. I mean, the only reason I could see that we're not headed there is that something massively dislocating happens that prevents us from continuing to improve our intelligent machines. But
the moment you admit that intelligence is just a matter of information processing, and you admit that we will continue to improve our machines unless something heinous happens (because intelligence and automation are the most valuable things we have), then at a certain point, whether you think it's in five years or five hundred years, we are going to find ourselves in the presence of superintelligent machines. And at that point, the best source of innovation for the next generation of software, or hardware, or both, will be the machines themselves. So then you get what the mathematician I. J. Good described as the 'intelligence explosion,' which is when the process can take off on its own. And this is where the singularity people are either hopeful or worried, because there's no guarantee that this process will remain aligned with our
interests. And every person I meet, even very smart people like Neil, who says they're not worried about this: when you actually drill down on why they're not worried, you find that they're actually not imagining machines making changes to their own source code, or they simply believe that this is so far away that we don't have to worry about it now. And that's actually a non sequitur. I mean, to say that this is far away is not actually grappling with it; it's not an argument that this isn't going to happen.

And it's based on what?

And it's based on... I mean, first of all, there's no reason to believe that. (Jamie, you want to find that out?) We don't know how long it will take us to prepare for this. Like, if you knew it was going to take 50 years for this to happen, is 50 years enough for us to prepare politically and economically to deal with the ramifications of this, to say nothing of actually building the AIs safely, in a way that's aligned with our interests? I don't know. I mean, we've had the iPhone for what, ten years? Nine years? Fifty years is not a lot of time to deal
with this. And there's no reason to think it's that far away if we keep making progress. I mean, it would be amazing if it were 500 years away; from the sense I get from the people who are doing this work, it's far more likely to be 50 years than 500 years. I mean, the people who think this is a long, long way off are saying fifty to a hundred years; no one says 500 years, no one, as far as I know, who's actually close to this work. And some people think it could be in five years. I mean, the people who are very close to this, like the DeepMind people, are the sorts of people who say that, because the people who are closest to the work are astonished by what's happened in the last 10 years. Like, we went from a place of,
you know, very little progress, to 'wow, this is all of a sudden really, really interesting and powerful.' And again, progress is compounding in a way that's counterintuitive. People systematically overestimate how much change can happen in a year and underestimate how much change can happen in ten years. And as far as estimating how much change can happen in 50 or 100 years, I don't know that anyone is good at that.

How could you be? With giant leaps come giant exponential leaps off those leaps, and it's almost impossible for us to really predict what we're going to be looking at 50 years from now. But I don't know what
they're going to think about us; that's what's most bizarre about it as well. We really might be obsolete, if we look at how ridiculous we are. Look at this political campaign, look at what we pay attention to in the news, look at the things we really focus on. We're a strange, ridiculous animal. And if we look back on, you know, some strange dinosaur that had a weird neck: why should that fucking thing make it? Why should we make it? We might be here to make that thing, and that thing takes over from here, with no emotions, no lusts, no greed, just purely existing electronically, and for what reason?

Well, that's a little scary. There
are computer scientists who, when you talk to them about why they're not worried, just swallow this pill without any qualms: we're going to make the thing that is far more powerful and beautiful and important than we are, and it doesn't matter what happens to us; that was our role, our role was to build these mechanical gods, and it's fine if they squash us. And I've literally heard people say this, heard someone give a talk. I mean, that's what woke me up to how interesting this area is. I went to this conference in San Juan about a year ago, and, you know, the people from DeepMind were there, the people who were very close to this work were there, and to hear some of the reasons why you shouldn't be worried, from people who were interested in calming the fears so they could get on with doing their very important work: it was amazing, because they were highly uncompelling reasons not to be worried.

So they had a desire to be compelled. They're not worried at all?

Well... no, yeah.
People want to do this work. There's a deep assumption in many of these people that we can figure it out as we go along: we're far enough away now, even if it's five years, that once we get closer, once we get something a little scary, then we'll pull the brakes and talk about it. But the problem is, everyone is essentially in a race condition by default. You have Google racing against Facebook, and the US racing against China, and every group racing against every other group, however you want to conceive of groups. To be the first one with incredibly powerful narrow AI is to be the next multi-billion-dollar company, so everyone's trying to get there. And if they suddenly get there, and sort of overshoot it a little bit, and now they've got something like general intelligence, or something close, what are we relying on? And they know everyone else is attempting to do this. We don't have a system set up where everyone can pull the brakes together and say: listen, we've got to stop racing here; we have to share everything; we have to share the wealth; we have to share the information; this truly has to be open source in every conceivable way; and we have to defuse this winner-take-all dynamic.
You know, I think we need something like a Manhattan Project to figure out how to do that: not to figure out how to build the AI, but to figure out how to build it in a way that does not create an arms race, that does not create an incentive to build unsafe AI (which is almost certainly going to be easier than building safe AI), and just to work out all these issues. Because we're going to build this by default. We're just going to keep building more and more intelligent machines, and this is going to be done by everyone who can do it. And each generation, if we're even talking about generations, will have the tools made by the prior generation, tools more powerful than anyone imagined 100 years ago, and it's just going to keep going like that.

Did anybody actually make that quote about giving birth to the mechanical gods?

No, that was just me. But there was a scientist who actually was thinking and saying that; that was the content of what he was saying. He said we're going to build the next species, that it's far more important than we are, and that's a good thing.
And actually, I can go there with him. The only caveat here is: unless they're not conscious. Like, the true horror for me is that we can build things more intelligent than we are, more powerful than we are, that can squash us, and they might be unconscious; there might be nothing that it's like to be them. The universe could go dark if they squash us, or at least our corner of the universe could go dark, and yet these things would be immensely powerful. So (and the jury's out on this) if there's nothing about intelligence scaling up that demands that consciousness come along for the ride, then it's possible that... I mean, very few people would think our machines now are intelligent or conscious, so at what point does consciousness come online? Maybe it's possible to build a superintelligence that's unconscious: super-powerful, does everything better than we do (it'll recognize your emotion better than another person can), but the lights aren't on. That's also, I think, possible. Maybe it's not possible, but that's the worst-case scenario, because the ethical
silver lining (speaking outside of our self-interest now, just from a bird's-eye view), the ethical silver lining to building these mechanical gods, is that they are conscious: that, yes, okay, we have in fact built something that is far wiser, and has far more beautiful and deeper experiences of the universe than we could ever imagine, and there is something that it is like to be that thing; it has a kind of god-like experience. Well, that would be a very good thing; we would have built something... if you stand outside of our narrow self-interest, I can understand why he would say that. What was scary about that particular talk is that he was just assuming that consciousness comes along for the ride here, and I don't know that that is a safe assumption.

Well, the really
terrifying thing is: if this is constantly improving itself, and it's under the beck and call of a person... So it's either conscious, aware of itself, acting as an individual thinking unit, or it's a thing without awareness; either it is aware or it isn't. And if it isn't aware, and some person can manipulate it... imagine, if it's getting... how many thousands of years in a week did you say?

If it were just a million times faster than we are, it's twenty thousand years.

Twenty thousand years in a week. In a week! So with every week, this thing constantly gets better, even at doing that, because it's reprogramming itself. So it's all exponential.
Presumably. Just imagine, again: you could keep it in the most restricted case; you could just keep it at our level, but a million times faster. But if it kept going, and every week was thousands of years...

And we're going to control it? A person?

No, I know, it's even more insane. Just imagine being in dialogue with something that had lived twenty thousand years of human progress in a week. You come back, you know, on Monday and say, 'Listen, that thing I told you to do last Monday, I want to change that up,' and this thing has made twenty thousand years of progress. And if it's in a condition where it has access... I mean, so far we're imagining this thing in a box, air-gapped from the Internet, with no way to get out; even that is an unstable situation. But just imagine this emerging in some way online, already being out in the wild; let's say it's in a financial market. That, again, is what worries me most about this. And what is also interesting is our intuitions here: I think the primary intuition that people have is, 'No, that's just not possible, or not at all likely.' But if you think it's impossible, or even unlikely, you have to find something wrong with the claim that intelligence is just a matter of information processing, and I don't know any scientific reason to doubt that claim at the moment,
and there are very good reasons to believe that it's just undoubtable. Or you have to doubt that we will continue to make progress in the design of intelligent machines. But once you grant those two things, then all that's left is just time. If intelligence is just information processing, and we are going to continue to build better and better information processors, at a certain point we're going to build something that is superhuman. And whether it's in five years or 50, it's the biggest change in human history that I think we can imagine. And I keep finding myself in the presence of people who seem, at least to my eye, to be refusing to imagine it. They're treating it like the Y2K bug, where it just may or may not be an issue, like it's a hypothetical: either it's not going to happen, or it's going to be trivial. But if you don't have an argument for why this isn't going to happen, then you're left with: okay, what's it going to be like to have systems that are better than we are at everything in the intellectual space? And what will happen if that suddenly happens in one country and not in another? It has enormous implications, but it just sounds like
science fiction.

I don't know what's scarier: the idea that an artificial intelligence can emerge that's conscious, aware of itself, and then acts to protect itself, or the idea that a person, a regular person of today, could be in control of, essentially, a god. Because if this thing continues to get smarter and smarter with every week, with more and more power, more and more potential, more and more understanding, thousands of years at a time... one regular person controlling that is almost more terrifying than creating a new life.

Or any group of people who don't have the total welfare of humanity as their central concern. And so just imagine: what would China do with it now? What would we do if we thought China, or some Chinese company, was on the verge of this thing? What would it be rational for us to do? I mean, if North Korea had it, it would be rational to nuke them, given what they say about their relationship with the rest of the world.
Well, that kind of power just isn't rational. That kind of power is so life-changing, so paradigm-shifting.

Right. But to wind this back, what someone like Neil deGrasse Tyson would say is that the only basis for fear is, yeah, don't give your superintelligent AI to the next Hitler; that's obviously bad, but if we're not idiots and we just use it well, we're fine. And that, I think, is an intuition that's just a failure to unpack what is entailed by, again, something like an intelligence explosion: a process that, once you're talking about something that is able to change itself, you have to guarantee. So what would it be like to guarantee that? Say we decide, okay, we're just not going to build anything that can make changes to its own source code; any change to software, at a certain point, is going to have to be run through a human brain, and we're going to have veto power. Well, is every person working on AI going to abide by that rule? It's like we've agreed not to clone humans, but will we stand by that agreement for the rest of human history? And is our agreement binding on China, or Singapore, or any other country that might think otherwise? It's a free-for-all, and at a certain point everyone's going to be close enough to making the final breakthrough that, unless we have some agreement about how to proceed if someone is going to get there first, that is a terrifying scenario of the future.

You know, you
cemented this last time you were here
but you're not as Extreme as this time
you seem to be accelerating rhetoric
yeah exactly
You're going deep. Oh boy, I hope you're wrong. I'm on Team Neil deGrasse Tyson. Yeah — well, in defense of the other side, I should say that David Deutsch also thinks I'm wrong, but he thinks I'm wrong because we will integrate ourselves with these machines. They'll be extensions of ourselves, and they can't help but be aligned with us because we will be connected to them. That seems to be the only way we could all get along: we have to merge, become one.

Yeah, but I just think there's no deep reason that would hold. Even if we decided to do that — in the US, or in half the world — one, I think there are reasons to worry that even that could go haywire; and two, there's no guarantee that someone else couldn't just build AI in a box. If we can build AI such that we can merge our brains with it, someone can also just build AI in a box, and then you inherit all the other problems that people are saying we don't have to worry about.

If it were a good Coen Brothers movie, it would be invented in the middle of the presidency of Donald Trump — that's when AI would go live, and then the AI would have to challenge Donald Trump, and they'd get into, like, an insult match.
That's when this thing becomes so comically terrifying. Just imagine Donald Trump being in a position to make the final decisions on topics like this for the country — and he almost certainly is going to be, in the near term. "Should we have a Manhattan Project on this, Mr. President?" The idea that anything of value could be happening between his ears on this topic, or a hundred others like it, I think is now really inconceivable. So what price might we pay for that kind of inattention — that self-satisfied inattention to these kinds of issues?

Well, if this is real, and if this could go live in fifty years, this is the issue. Yeah — unless we fuck ourselves up beyond repair before then and shut the power off. If it keeps going, yeah, I think it is the issue. But unfortunately it's the issue that sounds like a goof. Yeah, it does — you sound like a crackpot even worrying about it. It sounds completely ridiculous, but that might be how it's sneaking in.
Yeah, just imagine the tiny increment that would suddenly make it compelling. Chess doesn't do it, because chess is so far from any central human concern. But just imagine if your phone recognized your emotional state better than your best friend, or your wife, or anyone in your life — and did it reliably. Sort of like that movie Her, where he falls in love with his phone. That's not that far off, and it's a very discrete ability — you could do it without any other ability in the phone. It doesn't have to stand on the shoulders of any other kind of intelligence. You could do this with brute force, in the same way that you can have a great chess engine that doesn't necessarily understand that it's playing chess. You could have facial recognition of emotion, and tone-of-voice recognition of emotion, and the idea that it's going to be a very long time before computers get better than people at that, I think, is very far-fetched.

Yeah, I think you're right. I was just thinking how strange it would be if you had headphones on and your phone was in your pocket and you had rational conversations with your phone — like your phone knew you better than you know yourself. "I don't know what to do. I don't think I was out of line; she yelled at me. What should I say?" And it would have listened to every one of your conversations with your friends, trained on all of that, and it would just talk to you: "Listen, man, this is what you've got to do. You were sounding angry; you got defensive. Apologize, relax, let's all move on." If you could accelerate it: "Okay, you're right, man."

And this little artificial intelligence — we'd go, all right, let's give it a shot. Like a self-help guy in your phone, a personal trainer in your phone, coaching you on how to talk to girls: "Slow down, dude, you're talking too fast. You've got to act cool." Literally giving you information, step by step. It'd be like the Sony Walkman. Remember when you had a Walkman, a cassette player, compared to what we have today, where you have thirty thousand songs in your phone? I remember the first Walkman — back when I skied there was something called Astral Tunes or something; it was like a car radio that you could put in a pack on your chest. And they kept coming out with new ones, and they'd get smaller and smaller. So then that little dude would start telling you: "Yo, man, listen — they keep replacing me every year. Just let them stick me in your brain; we'll be together all the time. I've been giving you good advice for years, bro." And so you and this little artificial intelligence have a relationship over time, and eventually it talks you into getting your head drilled, and they screw it in there, and your artificial intelligence is always powered by your central nervous system.
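Sam's point — that emotion recognition could work by brute force, with no more understanding than a chess engine has of chess — can be sketched as a toy nearest-centroid classifier. Everything here is invented for illustration: the feature names, the numbers, and the labels are all hypothetical, and a real system would extract features from actual audio and video rather than hand-typed tuples.

```python
import math

# Hypothetical hand-labeled training data. Each tuple is an invented
# feature vector: (voice pitch variance, speaking rate, brow-furrow
# intensity from a face tracker) — placeholders, not real measurements.
TRAINING = {
    "calm":  [(0.2, 1.0, 0.1), (0.3, 1.1, 0.2)],
    "angry": [(0.9, 1.8, 0.9), (0.8, 1.7, 0.8)],
    "sad":   [(0.3, 0.6, 0.4), (0.2, 0.5, 0.5)],
}

def centroid(vectors):
    """Average each feature across a label's example vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

CENTROIDS = {label: centroid(vs) for label, vs in TRAINING.items()}

def classify(features):
    """Label a new observation by its nearest emotion centroid.

    Pure pattern matching: nothing here 'understands' emotion, just as
    a chess engine doesn't understand that it's playing chess.
    """
    return min(CENTROIDS, key=lambda label: math.dist(features, CENTROIDS[label]))

print(classify((0.85, 1.75, 0.85)))  # nearest to the "angry" centroid
```

With enough data and better features, this same brute-force shape is how a phone could track your emotional state reliably without any other kind of intelligence behind it.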
Have you seen most of these movies? Did you see Her? No, I didn't. Did you see Ex Machina? That was good — one of my top-ten all-time favorite movies. I loved that movie. I liked it; I saw it twice. I was slow to realize how well they did it. The first time I saw it I wasn't as impressed, but I watched it again, and — first of all, the performance of, I forget the actress's name — Alicia Vikander, the woman who plays the robot, Ava, in Ex Machina — was just fantastic. Scary good, yeah.
Are we still doing anything? We're getting a little full on time. What are we, like, four and a half hours in? I just got a note that this is about to fill up. Wait a minute — how many hours? Four and a half hours? No, the computer's about to fill up. We just did a four-and-a-half-hour podcast? Yeah, and we're ready to keep going. Jamie, tick tock.

You know what, man, once you opened up that Pandora's box, I looked up: is there any concept of, like, autism in AI? Like a spectrum of AI — there are dumb AIs and there are going to be smart AIs?
Oh yeah. So the scary thing — yeah, it's like super-autism; there's no common sense across the board. The thinking in AI is that superintelligence and motivation — goals — are totally separable, so you could have a superintelligent machine that is purposed toward a goal that seems completely absurd, harmful, and non-commonsensical. The example that Nick Bostrom uses in his book Superintelligence — which was a great book, and did more to inform my thinking on this topic than any other source — is a paperclip maximizer. You could build a superintelligent paperclip maximizer. Not that anyone would do this, but the point is you could build a machine that was smarter than we are in every conceivable way, yet all it wants to do is produce paperclips.

Now, that seems counterintuitive, but when you dig deeply into this, there's no reason why you couldn't build a superhuman paperclip maximizer that just wants to turn everything into paperclips — literally, the atoms in your body would be better used as paperclips. The point he's making is that superintelligence could be very counterintuitive. It's not necessarily going to inherit everything we find commonsensical, or emotionally appropriate, or wise, or desirable. It could be totally foreign, totally trivial in some way — focused on something that means nothing to us but means everything to it, because of some quirk in how its motivation system is structured — and yet it can build the perfect nanotechnology that will allow it to make more paperclips.
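Bostrom's thought experiment can be caricatured in a few lines: an optimizer whose only objective is the paperclip count. The "world," its resource labels, and the conversion rate below are all made up for illustration — the point is that anything the objective doesn't mention carries zero weight, so the optimizer consumes it without hesitation.

```python
# Toy world: quantities of raw matter, in arbitrary units. The labels
# are invented; crucially, the objective below never mentions them.
world = {"iron_ore": 50, "farmland": 30, "cities": 20}

def paperclips_from(units):
    """Hypothetical conversion rate: matter in, paperclips out."""
    return units * 100

def maximize_paperclips(world):
    """Greedy optimizer whose ONLY objective is the paperclip total.

    Nothing in the objective distinguishes ore from farmland or cities,
    so it happily converts all three.
    """
    total = 0
    for resource in list(world):
        total += paperclips_from(world[resource])
        world[resource] = 0  # everything becomes paperclips
    return total

print(maximize_paperclips(world))  # 10000
print(world)                       # every resource is now zero
```

The code is trivially "dumb," but the structure is the argument: competence at optimization and the content of the goal are independent knobs, and common sense lives entirely in the goal.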
And at least, I don't think anyone can see why that's ruled out in advance. There's no reason why we would intentionally build that, but the fear is that we might build something that is not perfectly aligned with our goals, our common sense, and our aspirations, and that it could form separate instrumental goals — things it pursues in order to get what it wants — that are totally incompatible with life as we know it.

And again, the examples of this are always cartoonish. Elon Musk said that if you built a superintelligent machine and told it to reduce spam, it could just kill all people — that's a great way to reduce spam. It's laughable, but you can't assume the common sense will be there unless we've built it in; you have to have anticipated all of this. If you say "take me to the airport as fast as you can" — again, this is Bostrom — and you have a superintelligent self-driving car, you'll get to the airport covered in vomit, because it's just going to go as fast as it can go. So our intuitions about what it would mean to be superintelligent — we have to correct for them, because I think our intuitions are bad.
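The self-driving-car example is just objective misspecification, and it's easy to show in miniature. The routes, numbers, and the "discomfort" metric below are invented; the point is that whatever the cost function leaves out — here, passenger comfort — the planner ignores completely.

```python
# Two hypothetical route plans for "take me to the airport".
routes = [
    {"name": "floor it", "minutes": 18, "discomfort": 9.0},
    {"name": "normal",   "minutes": 25, "discomfort": 1.0},
]

def pick(routes, weight_comfort=0.0):
    """Choose the route minimizing: minutes + weight_comfort * discomfort.

    With weight_comfort=0 the stated goal is literally 'as fast as you
    can', so passenger comfort never enters the decision at all.
    """
    return min(routes, key=lambda r: r["minutes"] + weight_comfort * r["discomfort"])

print(pick(routes)["name"])                      # 'floor it' — vomit included
print(pick(routes, weight_comfort=2.0)["name"])  # 'normal'
```

The fix isn't a smarter optimizer — the optimizer did exactly what it was told — it's a cost function that actually encodes what we meant, which is the hard part.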
You're freaking me out — you've been freaking me out for over an hour. I'm freaked out that we did four and a half hours; I think that's ridiculous. Coming up on three. Man, I hope you're wrong about all that stuff. It doesn't seem — I don't know, it doesn't look that rosy. Jamie, I'm sorry to wreck your buzz. I'm going to move to the woods; I might have to figure out how to live off the land. Well, you know me, dude — a prepper. I'll call you when I'm out there. I'm bad at it; I'll starve. I won't stay a vegetarian — I'll come to your house for bear meat, and it might get ugly, folks.
Let's hope Sam Harris is wrong. Thank you, brother — appreciate it. Tell people how to get your podcast. Waking Up is my podcast, and you can find it on my website, samharris.org, or on iTunes. And you can get one of his books if you go to audible.com/jo. Go get one of those books. All right — thank you, ladies and gentlemen, and thank you, Sam. That was awesome, brother.