You know, in terms of things that I think are most likely to affect the future of humanity, I think AI is probably the single biggest item in the near term. So it's very important that we have the advent of AI happen in a good way. It's something that, if you could look into a crystal ball and see the future, you would like that outcome, because it is something that could go wrong, as we've talked about many times. So we really need to make sure it goes right. Working on AI and making sure it has a great future, that's the most important thing right now, I think, the most pressing item.
Speaking of really important problems: AI. You've been outspoken about AI. Could you talk about what you think the positive future for AI looks like, and how we get there?

Okay. I do want to emphasize that this is not really something that I advocate; this is not prescriptive, this is simply, hopefully, predictive. Lest some say, "Oh, this is something he wants to occur": instead, this is something I think is probably the best of the available alternatives. The best of the available alternatives that I can come up with, and maybe somebody else can come up with a better approach or a better outcome, is that we achieve democratization of AI technology, meaning that no one company or small set of individuals has control over advanced AI technology. I think that's very dangerous. It could also get stolen by somebody bad; some evil dictator or country could send their intelligence agency to go steal it and gain control. It just becomes a very unstable situation. If you've got any incredibly powerful AI, you just don't know who's going to control that. So it's not so much that I think
the risk is that the AI would develop a will of its own right off the bat; I think it's more that someone may use it in a way that is bad, or even if they weren't going to use it in a way that's bad, somebody could take it from them and use it in a way that's bad. That, I think, is quite a big danger. So I think we must have democratization of AI technology and make it widely available. That's the reason that you, me, and the rest of the team created OpenAI: to help spread out AI technology so it doesn't get concentrated in the hands of a few. But then, of course, that needs to be combined with solving the high-bandwidth interface to the cortex.

Humans are so slow.

Humans are so slow, yes, exactly.
But we already have a situation in our brain where we've got the cortex and the limbic system. The limbic system is the primitive brain; it's kind of like your instincts and whatnot. And the cortex is the thinking, upper part of the brain. Those two seem to work together quite well. Occasionally your cortex and limbic system may disagree, but generally it works pretty well, and it's rare to find someone, in fact I've not found anyone, who wishes to either get rid of the cortex or get rid of the limbic system.

Very true.

Yeah, that's unusual. So I think if we can effectively merge with AI by improving the neural link between your cortex and your digital extension of yourself, which already exists but just has a bandwidth issue, then effectively you become an AI-human symbiote. And if that is then widespread, with anyone who wants it having it, then we solve the control problem as well. We don't have to worry about some sort of evil dictator AI, because collectively we are the AI. That seems like the best outcome I can think of.

So, you've seen
other companies in the early days that start small and get really successful. I hope I don't regret asking this on camera, but how do you think OpenAI is going, as a six-month-old company?

It's going pretty well, I think. We've got a really talented group at OpenAI, a really, really talented team, and they're working hard. OpenAI is structured as a 501(c)(3) nonprofit, but many nonprofits do not have a sense of urgency. That's fine; they don't have to have a sense of urgency. But OpenAI does, because I think people really believe in the mission. It's important, and it's about minimizing the risk of existential harm in the future. So I think it's going well; I'm pretty impressed with what people are doing and the talent level. But I think if AI reaches the threshold where it's as smart as the smartest, most inventive human, then it really could be a matter of days before it's smarter than the sum of humanity. We're headed towards either superintelligence
or civilization ending.

Another point that I think is really important to appreciate is that all of us are already cyborgs. You have a machine extension of yourself in the form of your phone and your computer and all your applications. You are already superhuman. You have by far more powerful capability than the President of the United States had 30 years ago. If you have an internet link, you have an oracle of wisdom: you can communicate to millions of people, you can communicate to the rest of Earth, instantly. These are magical powers that didn't exist not that long ago. So everyone is already superhuman.

If the biggest risk we face as a civilization is artificial intelligence, as you've told a group of leaders, what would you advise? How should we be addressing something that's such a large landscape and yet obviously so important?
I think one of the roles of government is to ensure the public good and that dangers to the public are addressed; hence the regulatory thing. I think the first order of business would be to try to learn as much as possible: to understand the nature of the issues, to look closely at the progress being made and the remarkable achievements of artificial intelligence.

Last year AlphaGo, which was done by DeepMind, a kind of Google subsidiary, absolutely crushed the world's best player at Go, which is quite a difficult game. People thought a computer would either never beat the best human player, or that it was twenty years away. And now it can play the top fifty players simultaneously. That pace of progress is remarkable, and you can see more and more coming out. In robotics, you can see robots that can learn to walk from nothing within hours, way faster than any biological being. But the thing that's most dangerous, and it's the hardest to get your arms around because it's not a physical thing, is a kind of deep intelligence in the network. You say, well, what harm could a deep intelligence in the network do? Well, it could start a war, by doing fake news and spoofing email accounts and fake press releases, and just by manipulating information. The pen is mightier than the sword.

As an example, and I want to emphasize that I do not think this actually occurred, this is purely hypothetical, and I'm digging my grave here: there was that second Malaysian airliner that was shot down on the Ukrainian-Russian border, and that really amplified tensions between Russia and the EU in a massive way. Let's say you had an AI whose goal was to maximize the value of a portfolio of stocks. One of the ways to maximize value would be to go long on defense, short on consumer, and start a war. How could it do that? Well, hack into the Malaysian Airlines aircraft routing server, route the aircraft over a war zone, then send an anonymous tip that an enemy aircraft is flying overhead right now.
Let's go to Governor Ducey, and after Governor Ducey we'll finish our gubernatorial questions, and then two quick questions, or one audience question, and we'll be done; we're running short on time. Governor Ducey.

Thanks, Elon. I really enjoyed your comments today. As someone who has spent a lot of time in his administration trying to reduce and eliminate regulations, I was surprised by your suggestion to bring regulations before we know exactly what we're dealing with in AI. I've heard the example used: if I were to come up with a colorless, odorless, tasteless gas that was explosive, people would say, well, you have to ban that, and then we'd have no natural gas. You've given some of these examples of how AI can be an existential threat, but I still don't understand, as policymakers, what type of regulations we should pursue, beyond slowing down, which typically policymakers don't do, getting in front of entrepreneurs or innovators.
Well, I think the first order of business would be to gain insight. Right now the government does not even have insight. The right order of business would be to stand up a regulatory agency whose initial goal is to gain insight into the status of AI activity and make sure the situation is understood. Once it is, then put regulations in place to ensure public safety. That's it. And for sure the companies doing AI, most of them, not mine, will squawk and say, hey, this is really going to stifle innovation, blah blah, it's going to move to China. It won't. Has Boeing moved to China? Nope. They're still building aircraft here. Same with cars. The notion that if you establish a regulatory regime, companies will simply move to countries with lower regulatory environments is false on the face of it, because none of them do, unless it's really overbearing, and that's not what I'm talking about here. I'm just talking about making sure there is awareness at the government level. I think once there is awareness, people will be extremely afraid, as they should
be.

First, on the artificial intelligence front: I have exposure to the very most cutting-edge AI, and I think people should be really concerned about it. I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal. I think we should be really concerned about AI. AI is a rare case where I think we need to be proactive in regulation instead of reactive, because by the time we are reactive in AI regulation, it's too late. Normally the way regulations are set up is that a whole bunch of bad things happen, there's public outcry, and then after many years a regulatory agency is set up to regulate that industry. There's a bunch of opposition from companies who don't like being told what to do by regulators, and it takes forever. In the past that has been bad, but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization in a way that car accidents, airplane crashes, faulty drugs, or bad food were not. They were harmful to a set of individuals within society, of course, but they were not harmful to society as a whole. AI is a fundamental existential risk for human civilization, and I don't think people fully appreciate that. It's not fun being regulated; it's irksome. In the car business we get regulated by the Department of Transportation, by the EPA, and a bunch of others, and there are regulatory agencies in every country. In space we get regulated by the FAA. But if you ask the average person, hey, do you want to get rid of the FAA and just take a chance on manufacturers not cutting corners on aircraft because profits were down that quarter, it's like, hell no, that sounds terrible. So I think even people who are pretty extreme libertarian free-market types would say, yeah, we should probably have somebody keeping an eye on the aircraft companies, making sure they build good aircraft and good cars and that kind of thing. So I think there's a role for regulators that's very important, and I'm against over-regulation for sure, but man, I think we'd better get on that with AI, pronto.
And there will certainly be a lot of job disruption, because what's going to happen is robots will be able to do everything better than us. I mean all of us. Yeah, I'm not sure exactly what to do about this; it's really, like, the scariest problem to me, I'll tell you. So I really think we need government regulation here, ensuring the public good is served, because you've got companies that are racing, that kind of have to race, to build AI, or they're going to be made uncompetitive. Essentially, if your competitor is racing to build AI and you don't, they will crush you. So then you're like, we don't want to be crushed, so I guess we need to build it too. That's where you need the regulators to come in and say, hey guys, you all need to really just pause and make sure this is safe; when it's working and the regulators are convinced that it's safe to proceed, then you can go, but otherwise, slow down. But you need the regulators to do that for all the teams in the game; otherwise the shareholders will be saying, hey, why aren't you developing AI faster, your competitor is, so okay, we've got to do it anyway. And something like 12 percent of jobs are in transport; transport will be one of the first things to go fully autonomous. But when I say everything, the robots will be able to do everything, bar nothing.
Well, I think it is difficult to appreciate just how far artificial intelligence has advanced, and how fast it is advancing, because we have a double exponential at work. We have an exponential increase in hardware capability, and we have an exponential increase in software talent that is going into AI. Whenever you have a double exponential, it's very difficult to predict; prediction is almost always going to be too conservative, in terms of thinking it'll be further out than it is.
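A toy calculation (the numbers are invented purely for illustration, not measured from any real AI metric) shows why forecasts come up short under a double exponential: an observer who extrapolates recent growth with an ordinary exponential, i.e. a constant year-over-year ratio, underestimates every future value.

```python
# Hypothetical capability curve: a double exponential, c(t) = 2**(2**(t/4)).
def capability(t):
    return 2.0 ** (2.0 ** (t / 4.0))

# Measure the growth ratio between years 7 and 8, then naively extrapolate
# that constant ratio out to year 12 (a single-exponential forecast).
ratio = capability(8) / capability(7)
naive_forecast = capability(8) * ratio ** 4
actual = capability(12)

print(naive_forecast < actual)  # prints True: the forecast is too conservative
```

Because the growth ratio itself keeps rising, the gap between the naive forecast and the actual value widens the further out you look.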
You start to see things like the videos where you can really quite accurately simulate someone on video and put words in their mouth that they never spoke; it's really pretty amazing. They had this thing called a generative adversarial network, which has two networks compete with one another to make the most convincing video: one would generate the video, the other would identify where it looked fake, then the first one would fix that, and they'd go back and forth, to the point where you can't tell which is the real video and which is the fake.
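That back-and-forth is the core of adversarial training. As a minimal sketch, and assuming nothing about the actual video systems mentioned, here is a tiny 1-D "GAN" in plain NumPy: the generator is a linear map trying to mimic samples from a Gaussian, the discriminator is a logistic classifier, and the two take alternating gradient steps against each other.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_real(n):
    # "Real" data the generator must learn to imitate: samples from N(4, 1)
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # generator: fake = a * z + b, with noise z ~ N(0, 1)
w, c = 0.1, 0.0   # discriminator: D(x) = sigmoid(w * x + c), P(x is real)
lr = 0.05

for step in range(2000):
    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    real = sample_real(64)
    z = rng.normal(0.0, 1.0, 64)
    fake = a * z + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake) (the non-saturating loss),
    # pushing its output toward what the discriminator calls "real"
    z = rng.normal(0.0, 1.0, 64)
    fake = a * z + b
    g = (1 - sigmoid(w * fake + c)) * w   # gradient of log D(fake) w.r.t. fake
    a += lr * np.mean(g * z)
    b += lr * np.mean(g)

gen_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(gen_mean)  # drifts toward the real mean of 4 as the contest proceeds
```

The real systems use deep networks over images rather than a two-parameter line, but the alternating generate/criticize/fix loop is the same idea.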
There have been some very public things, like the defeat of the world's best Go champion by AlphaGo. People thought defeating the best human at Go was either never going to happen or twenty years away. The world's best Go player was defeated, and now that same AlphaGo system can defeat the top fifty players simultaneously with zero percent chance of them winning, and that's one year later. So the degrees of freedom to which artificial intelligence is able to apply itself are really increasing, I think by ten orders of magnitude a year.
That's really crazy. And this is on hardware that is really not that well suited for neural nets: a GPU is maybe an order of magnitude better than a CPU, but a chip that is designed optimally for neural nets is an order of magnitude better than a GPU, and there are a whole bunch of neural-net-optimized chips coming out either late this year or next year. So I
think the role of the government is to make sure the public is safe, to take care of public safety issues. The right move is to establish some government regulatory agency which at first is just there to gain insight; it's not about shooting from the hip and putting in rules before anyone knows anything. You've got to stand up an agency, it's got to gain insight, and once that insight is gained, then start applying rules and regulations. We have that for aircraft, we have that for cars, we have that for drugs and for food, and I don't think anyone wants the FAA to go away, or the FDA to go away, or any of those regulatory agencies. I think we just need to make sure people do not cut corners on AI safety, because it's going to be a real big deal, and it's going to come on like a tidal wave.
