- The following is a conversation
with Elon Musk, Part 2.
The second time we spoke on the podcast,
with parallels if not in
quality then in outfit
to the, objectively
speaking, greatest sequel
of all time, "Godfather Part II".
As many people know, Elon
Musk is a leader of Tesla,
SpaceX, Neuralink and the Boring Company.
What may be less known is
that he's a world class
engineer and designer,
constantly emphasizing
first principles thinking in taking on
big engineering problems
that many before him
would consider impossible.
As scientists and engineers, most of us
don't question the way things are done.
We simply follow the
momentum of the crowd.
But revolutionary ideas
that change the world
on the small and large scales happen
when you return to the fundamentals
and ask: "Is there a better way?"
This conversation
focuses on the incredible
engineering and innovation done
in brain computer interfaces at Neuralink.
This work promises to help
treat neurobiological diseases,
to help us further
understand the connection
between the individual neuron
and the high-level function
of the human brain
and finally, to one day expand
the capacity of the brain,
through two-way communication
with computational devices,
the internet and artificial
intelligence systems.
This is the "Artificial
Intelligence Podcast".
If you enjoy it, subscribe
on YouTube, Apple Podcasts,
Spotify, support on Patreon
or simply connect with me
on Twitter @Lexfridman,
spelled F-R-I-D-M-A-N.
And now, as an anonymous YouTube commenter
referred to our previous
conversation as the quote,
"Historical first video of two robots
"conversing without supervision"
here's the second time,
the second conversation
with Elon Musk.
Let's start with an easy
question about consciousness.
In your view, is consciousness something
that's unique to humans,
or is it something
that permeates all matter, almost like
a fundamental force of physics?
- I don't think consciousness
permeates all matter.
- Panpsychists believe that,
there's a philosophical--
- How would you tell?
(both laugh)
- That's true, that's a good point.
- I believe in scientific method,
don't wanna blow your mind or anything
but the scientific method is like
if you cannot test the hypothesis
then you cannot reach a meaningful
conclusion that it is true.
- Do you think consciousness,
understanding consciousness,
is within the reach of science
or the scientific method?
- We can dramatically
improve our understanding
of consciousness.
I would be hard pressed
to say that we understand
anything with complete accuracy
but can we dramatically improve
our understanding of consciousness?
I believe the answer is yes.
- Does an A.I. system in your view
have to have consciousness in order
to achieve human level or
superhuman level intelligence?
Does it need to have some
of these human qualities
like consciousness, maybe
a body, maybe a fear
of mortality, capacity
to love, those kinds
of silly human things?
- There's the scientific method
which I very much believe in
where something is true to the degree
that it is testably so and otherwise
you're really just
talking about preferences
or untestable beliefs
or that kind of thing.
So it ends up being somewhat
of a semantic question
where we're conflating a lot of things
with the word intelligence.
If we parse them out and say,
are we headed towards the future
where an A.I. will be able
to outthink us in every way,
then the answer is unequivocally yes.
- In order for an A.I. system that needs
to outthink us in every
way, it also needs to have
a capacity to have
consciousness, self-awareness
and understand--
- It will be self-aware, yes.
That's different from consciousness.
I mean to me, what
consciousness feels like,
it feels like consciousness
is in a different dimension
but this could be just an illusion.
If you damage your brain
in some way, physically
you damage your consciousness,
which implies that consciousness
is a physical phenomenon, in my view.
The thing that I think
is really quite likely
is that digital intelligence will be able
to outthink us in every
way and it will soon
be able to simulate what
we consider consciousness
to a degree that you would not
be able to tell the difference.
- And from the aspect of
the scientific method,
it might as well be consciousness,
if we can simulate it perfectly?
- If you can't tell the difference,
it's sort of the Turing test
but think of it more of sort
of an advanced version of the Turing test.
If you're talking to a
digital super intelligence
and can't tell if that is
a computer or a human,
like let's say you're
just having a conversation
over a phone or a video
conference or something where it
looks like a person, makes
all of the right inflections
and movements and all the small subtleties
that constitute a human
and talks like a human,
makes mistakes like a human
and you literally just can't tell is this,
are you video conferencing
with a person or an A.I.?
- Might as well be human,
so on a darker topic,
you've expressed serious concern
about existential threats of A.I.
It's perhaps one of
the greatest challenges
that our civilization faces
but since, I would say,
we're kind of optimistic
descendants of apes,
perhaps we can find
several paths of escaping
the harm of A.I., so if I can give you
three options, maybe you can comment
which do you think is the most promising?
So one is scaling up
efforts on A.I. safety
and beneficial A.I.
research, in hope of finding
an algorithmic or maybe a policy solution.
Two is becoming a multi-planetary species
as quickly as possible and
three is merging with A.I.
and riding the wave of that
increasing intelligence,
as it continuously improves.
What do you think is the most
promising, most interesting,
as a civilization,
that we should invest in?
- I think there's a tremendous amount
of investment going on in A.I.
Where there's a lack of investment
is in A.I. safety and there should
be in my view, a government agency
that oversees anything related to A.I.
to confirm that it does not represent
a public safety risk, just
as there is a regulatory
authority like the Food
and Drug Administration,
one for automotive safety
and the FAA for aircraft safety.
We're generally coming to the conclusion
that it is important to
have a government referee
or a referee that is
serving the public interest
in ensuring that things are safe
when there's a potential
danger to the public.
I would argue that A.I. is unequivocally
something that has
potential to be dangerous
to the public and therefore should have
a regulatory agency, just as other things
that are dangerous to the
public have a regulatory agency.
But let me tell you, the problem with this
is that government moves very slowly.
Usually the way a regulatory
agency comes into being
is that something terrible happens,
there's a huge public
outcry and, years after that,
there's a regulatory agency
or a rule put in place.
Take something like seatbelts.
It was known for, I don't
know, a decade or more?
That seatbelts would have a massive impact
on safety and save so many
lives and serious injuries
and the car industry
fought the requirement
to put seatbelts in tooth and nail.
That's crazy and I don't know,
hundreds of thousands
of people probably died
because of that and they said
people wouldn't buy cars
if they had seatbelts,
which is obviously absurd.
Or look at the tobacco industry
and how long they fought
anything about smoking.
That's part of why I helped make that movie,
"Thank You for Smoking",
because you'll see just how pernicious
it can be when you have these companies
effectively achieve regulatory capture
of government, they're bad.
People in the A.I. community refer
to the advent of digital
super intelligence
as the singularity.
That is not to say that it is good or bad
but that it is very difficult to predict
what will happen after that point
and that there's some probability
it will be bad, some
probability it will be good.
We obviously want to
affect that probability
and have it be more good than bad.
- Well let me ask, on the merger with A.I.,
about the incredible work
that's being done at Neuralink.
There's a lot of
fascinating innovation here,
across different disciplines going on.
So the flexible wires,
the robotic sewing machine
that responds to brain movement
and everything around
ensuring safety and so on.
So we currently understand
very little about the human brain.
Do you also hope that
the work at Neuralink
will help us understand more
about the human mind, about the brain?
- Yeah, I think the work at Neuralink
will definitely shed a lot
of insight into how the
brain, the mind works.
Right now, just the data we have regarding
how the brain works is very limited.
We've got fMRI, that's
kind of like putting
a stethoscope on the
outside of a factory wall
and then putting it all
over the factory wall
and you can sort of hear the sounds
but you don't know what the
machines are doing really.
You can infer a few things
but it's a very broad brushstroke.
In order to really know
what's going on in the brain,
you have to have high precision sensors
and then you wanna have
stimulus and response
like if you trigger a
neuron, how do you feel,
what do you see, how does it change
your perception of the world?
- You're speaking to physically,
just getting close to the
brain, being able to measure
signals from the brain, will give us,
open a door inside the factory?
- Yes, exactly, being able to have
high precision sensors that tell you
what individual neurons are doing
and then being able to trigger a neuron
and see what the response is in the brain.
So you can see the consequences
of, if you fire this neuron,
what happens, how do you
feel, what does it change?
It'll be really profound
to have this in people
because people can
articulate their change.
Like if there's a change in mood,
or if they can tell
you that they can see better
or hear better or be
able to form sentences
better or worse or their
memories are jogged
or that kind of thing.
- So on the human side,
there's this incredible,
general malleability,
plasticity of the human brain.
The human brain adapts, adjusts and so on.
- [Elon] It's not that
plastic, to be totally frank.
- So there's a firm
structure but nevertheless
there is some plasticity
and the open question
is if I could ask a broad question,
is how much of that
plasticity can be utilized?
On the human side, there's some
plasticity in the human brain
and on the machine side,
we have neural networks, machine learning,
artificial intelligence,
it's able to adjust
and figure out signals so
there's a mysterious language
that we don't perfectly
understand that's within
the human brain and then
we're trying to understand
that language to
communicate both directions.
So the brain is adjusting, a little bit,
we don't know how much and the machine
is adjusting, where do you see,
as they try to sort of reach together,
almost like with an alien
species, try to find a protocol,
a communication protocol that works.
Where do you see the
biggest benefit arriving,
from on the machine
side, or the human side?
Do you see both of them working together?
- I should think the machine
side is far more malleable
than the biological
side, by a huge amount.
It'll be the machine
that adapts to the brain.
That's the only thing that's possible,
the brain can't adapt
that well to the machine.
You can have neurons start to regard
an electrode as another neuron,
because to a neuron, a pulse is a pulse
and so something else is pulsing,
so there is that elasticity
in the interface,
which we believe is
something that can happen
but the vast majority of malleability
will have to be on the machine side.
- But it's interesting, when you look
at the synaptic plasticity,
at the interface side,
there might be like an
emergent plasticity,
'cause it's a whole nother,
it's not like in the brain,
it's a whole nother
extension of the brain.
We might have to redefine what it means
to be malleable for the brain.
So maybe the brain is able to
adjust to external interfaces.
- There'll be some adjustment to the brain
because there's gonna be something reading
and stimulating the brain and
so it will adjust to that thing
but the vast majority of the adjustment
will be on the machine side.
It just has to be that
otherwise it will not work.
Ultimately, we currently
operate on two layers.
We have sort of a limbic,
primitive brain layer
which is where all of our
impulses are coming from.
It's sort of like we've got, we've got
like a monkey brain with
a computer stuck on it.
That's the human brain
and a lot of our impulses
and everything are driven
by the monkey brain
and the computer, the cortex is constantly
trying to make the monkey brain happy.
It's not the cortex that's
steering the monkey brain,
it's the monkey brain steering the cortex.
- Like the cortex is the
part that tells the story
of the whole thing, so
we convince ourselves
it's more interesting than
just the monkey brain.
- The cortex is what we
call human intelligence.
That's like the advanced computer,
relative to other creatures.
The other creatures do not have,
really they don't have
the computer, or they have
a very weak computer, relative to humans.
It sort of seems like,
surely the really smart thing
should control the dumb thing?
But actually, the dumb thing
controls the smart thing.
- So do you think some of the same kind
of machine learning
methods, whether that's
natural language processing applications
are going to be applied
for the communication
between the machine and the brain?
To learn how to do certain
things like movement
of the body, how to process
visual stimuli and so on?
Do you see the value of
using machine learning
to understand the language of the two-way
communication with the brain?
- Yeah, sure, absolutely.
I mean, we're a neural net
and the A.I.'s basically a neural net.
So it's like a digital
neural net will interface
with a biological neural net and hopefully
bring us along for the ride, you know?
But the vast majority of our
intelligence will be digital.
So think of like the
difference in intelligence
between your cortex and your
limbic system is gigantic.
Your limbic system really
has no comprehension
of what the hell the cortex is doing.
It's just literally
hungry or tired or angry
or sexy or something, you know?
And that communicates
that impulse to the cortex
and tells the cortex to go satisfy that.
A massive amount of thinking,
like truly stupendous
amount of thinking has gone into sex.
Without purpose, without procreation
which is actually quite a silly action
in the absence of procreation.
It's a bit silly, so why are you doing it?
Because to make the limbic
system happy, that's why.
But it's pretty absurd really.
- Well the whole of
existence is pretty absurd
in some kind of sense.
- Yeah, but a lot of computation
has gone into how can I do more of that?
With procreation not even being a factor?
This is, I think, a very important
area of research by NSFW.
- An agency that should
receive a lot of funding,
especially after this conversation.
- I propose the formation
of a new agency (laughs).
- Oh boy, what is the most exciting,
or some of the most
exciting things that you see
in the future impact of Neuralink?
Both on the science and engineering
and societal broad impact?
- So Neuralink, I think
at first, will solve
a lot of brain-related diseases.
So could be anything from
like autism, schizophrenia,
memory loss, like everyone
experiences memory loss
at some point as they age.
Parents can't remember their kid's names
and that kind of thing,
so there's, I think,
a tremendous amount of
good that Neuralink can do
in solving critical damage
to the brain or the spinal cord.
There's a lot that can be done
to improve quality of life
of individuals and those
will be steps along the way
and then ultimately,
it's intended to address
the existential risk associated
with digital super intelligence.
We will not be able to be smarter
than a digital super
computer, so therefore
if you cannot beat 'em, join 'em
and at least we will have that option.
- You have hope that
Neuralink will be able
to be a kind of connection to allow us
to merge, to ride the wave of
the improving A.I. systems?
- I think the chance is above 0%.
- So it's not on zero, there's a chance.
- Have you seen "Dumb and Dumber"?
- Yes (laughs)
- [Elon] So I'm saying there's a chance.
- You're saying one in a
billion or one in a million,
whatever it was on "Dumb and Dumber".
- You know, it went from
maybe one in a million
to improving, maybe it'll
be one in a thousand
and then one in a hundred
and then one in 10.
Depends on the rate of
improvement of Neuralink
and how fast we're able to make progress.
- Well I've talked to a few folks here
that are quite brilliant
engineers, I'm excited.
- I think it's fundamentally
good, giving somebody back
full motor control after they've
had a spinal cord injury.
Restoring brain
functionality after a stroke.
Solving debilitating genetically
oriented brain diseases.
These are all incredibly great I think
and in order to do these,
you have to be able
to interface with the
neurons at a detail level
and you need to be able
to fire the right neurons,
read the right neurons
and then effectively
you can create a circuit,
replace what's broken
with silicon and
essentially, they'll end up
with the same functionality
and then over time,
we develop a tertiary
layer, so the limbic system
is the primary layer, then the cortex
is the second layer and, as I said,
the cortex is vastly more
intelligent than the limbic system
but people generally like the fact
that they have a limbic
system and a cortex.
I haven't met anyone who wants
to delete either one of them.
They're like okay, I'll
keep them both, that's cool.
- The limbic system's kinda fun.
- Yeah, that's where
the fun is, absolutely
and then people generally don't
wanna lose their cortex either.
Right, so they like having the
cortex and the limbic system
and then there's a tertiary layer
which will be digital super intelligence
and I think there's room for optimism,
given that the cortex is very intelligent
and the limbic system is
not, and yet they work together well.
Perhaps there can be a tertiary layer
where digital super intelligence lies
and that will be vastly more
intelligent than the cortex
but still co-exist
peacefully and in a benign
manner with the cortex and limbic system.
- That's a super exciting future,
both on the low level engineering
that I saw is being done here
and an actual possibility
in the next few decades.
- It's important that
Neuralink solve this problem
sooner rather than
later, because the point
at which we have digital
super intelligence,
that's when we pass the singularity
and things become just very uncertain.
Doesn't mean that they're
necessarily bad or good
but the point at which
we pass the singularity,
things become extremely unstable.
So we want to have a human brain interface
before the singularity, or
at least not long after it
to minimize existential risk for humanity
and consciousness as we know it.
- But there's a lot of fascinating,
actual engineering of low level problems
here at Neuralink that are quite exciting.
What--
- The problems we face at
Neuralink are material science,
electrical engineering, software,
mechanical engineering, micro fabrication.
It's a bunch of engineering
disciplines, essentially.
That's what it comes down
to, is you have to have
a tiny electrode so it
doesn't hurt neurons
but it's gotta last for
as long as the person.
So it's gotta last for decades
and then you gotta take that signal
and you got to process that signal locally
at low power, so we need a
lot of chip design engineers,
because we gotta do signal processing
and do so in a very power-efficient way.
So that we don't heat your brain up,
because the brain's very heat-sensitive.
And then we gotta take those signals
and we gotta do something with them
and then we gotta stimulate back
so you can have bidirectional communication.
So somebody who's good
at material science,
software, mechanical engineering,
electrical engineering,
chip design, micro fabrication,
those are the things we need to work on.
We need to be good at material science
so that we can have tiny electrodes
that last a long time
and that's the tough thing,
the material science
problem is a tough one
because you're trying
to read and stimulate
electrically in an
electrically active area.
Your brain is very electrically active,
electrochemically active,
so how do you have
say a coating on the electrode
that doesn't dissolve over
time and is safe in the brain?
This is a very hard problem.
And then how do you collect those signals
in a way that is the most efficient?
Because you really just
have very tiny amounts
of power to process those signals
and then we need to automate
the whole thing, like LASIK.
If this is done by neurosurgeons,
there's no way it can scale
to large numbers of people.
And it needs to scale to
large numbers of people
because I think ultimately
we want the future
to be determined by a
large number of humans.
- Do you think this has a chance
to revolutionize surgery, period?
So the neurosurgery and
surgery all across--
- Yeah, for sure, it's
gotta be like LASIK.
If LASIK had to be
done by hand, by a person,
that wouldn't be great.
It's done by a robot
and the ophthalmologist
kinda just needs to make sure
your head's in the right position
and then they just press a button and go.
- So Smart Summon and
soon Autopark take on
the full, beautiful mess of parking lots
and their human-to-human
nonverbal communication,
I think it has actually the potential
to have a profound impact in changing
how our civilization
looks at A.I. and robotics
because this is the
first time human beings,
people that don't own a Tesla,
may have never seen a Tesla
or heard about a Tesla,
get to watch hundreds of thousands of cars
without a driver.
Do you see it this way?
Almost like an education tool
for the world about A.I.?
Do you feel the burden of
that, the excitement of that,
or do you just think it's
a smart parking feature?
- I do think you are getting
at something important
which is, most people have
never really seen a robot.
And what is the car that is autonomous?
It's a four-wheeled robot.
- It communicates a certain
message with everything
from safety to the possibility
of what A.I. could bring,
to its current limitations,
its current challenges,
to what's possible.
Do you feel the burden
of that, almost like
a communicator, educator
to the world about A.I.?
- We're just really trying to make
people's lives easier with autonomy
but now that you mention
it, I think it will
be an eye-opener to people about robotics
because they've really never seen,
most people have never seen a robot
and there are hundreds
of thousands of Teslas.
It won't be long before
there's a million of them
that have autonomous capability
and they drive without a person in it
and you can see the evolution
of the car's personality
and thinking with each
iteration of autopilot.
You can see it's uncertain about this,
now it's more certain, now it's moving
in a slightly different way.
I can tell immediately if
a car is on Tesla autopilot
because it's got these
little nuances of movement.
It just moves in a slightly different way.
Cars on Tesla autopilot,
for example, on the highway
are far more precise about being
in the center of the lane than a person.
If you drive down the highway and look
at where cars are, the human driven cars
are within their lane.
They're like bumper cars, they're
moving all over the place.
Car on autopilot, dead center.
- Yeah, so the incredible work
that's going into that neural network,
it's learning fast, autonomy's still very,
very hard, we don't actually know
how hard it is fully, of course.
But you look at most problems you tackle,
this one included, with an exponential lens
but even with an exponential improvement,
things can take longer
than expected, sometimes.
So where does Tesla currently stand
on its quest for full autonomy?
What's your sense, when can we see
successful deployment of full autonomy?
- Well on the highway
already, the probability
of intervention is extremely low.
So for highway autonomy, with
the latest release especially,
the probability of needing to
intervene is really quite low.
In fact I'd say for stop and go traffic,
it's far safer than a person right now.
And so the probability of an injury
or an impact is much, much lower
for autopilot than a person.
And then with navigating autopilot,
it can change lanes,
take highway interchanges
and then we're coming at
it from the other direction
which is low speed, full autonomy.
In a way this is like how
does a person learn to drive?
You learn to drive in the parking lot.
The first time you learned to drive
probably wasn't jumping onto Market Street
in San Francisco, that would be crazy.
You learn to drive in the parking lot,
get things right at low speed
and then the missing piece
that we're working on
is traffic lights and stop streets.
Stop streets, I would
say actually are also
relatively easy because you kind of know
where the stop street is; worst case
you can geo-code it and
then use visualization
to see where the line
is and stop at the line
to eliminate the GPS error.
So actually, I'd say it's probably complex
traffic lights and very windy roads
are the two things
that need to get solved.
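The stop-street idea a moment earlier — geo-code a rough position, then let vision find the actual line to eliminate GPS error — can be sketched as a tiny sensor-fusion step. This is a toy illustration with made-up error figures and function names, not anything from Tesla's actual stack:

```python
def fused_stop_line(geocoded_x, vision_x, gps_sigma=3.0, vision_sigma=0.2):
    """Fuse a geo-coded stop-line distance (coarse, meters of GPS error)
    with a vision-detected distance (fine, ~20 cm error) using
    inverse-variance weighting. Distances are meters ahead of the car."""
    if vision_x is None:
        return geocoded_x  # no detection: fall back to the map position
    w_map = 1.0 / gps_sigma**2
    w_vis = 1.0 / vision_sigma**2
    return (w_map * geocoded_x + w_vis * vision_x) / (w_map + w_vis)

# The map says the line is 42 m ahead; the camera sees it at 40.5 m.
# The fused estimate sits almost exactly on the vision measurement.
print(round(fused_stop_line(42.0, 40.5), 2))  # → 40.51
```

Because the vision measurement is far more precise, it dominates the estimate — the "stop at the line, not at the GPS point" behavior described above.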
- What's harder, perception
or control for these problems?
So being able to perfectly
perceive everything?
Or figuring out a plan, once
you perceive everything,
how to interact with all the
agents in the environment?
In your sense, from a
learning perspective,
is perception or action harder
in that giant, beautiful,
multi-task learning neural network?
- The hardest thing is having
accurate representation
of the physical objects in vector space.
So taking the visual input,
primarily visual input,
some sonar and radar and then creating
an accurate vector space representation
of the objects around you.
Once you have an accurate
vector space representation,
the planning and control
is relatively easy.
Basically, once you have an accurate
vector space representation, it's kinda
like a video game, like
cars in Grand Theft Auto
or something, like they work pretty well.
They drive down the
road, they don't crash,
pretty much, unless you crash into them.
That's because they've got an accurate
vector space representation
of where the cars are
and then they're rendering
that as the output.
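As a toy illustration of the point above — once perception has placed objects in vector space, control reads like video-game logic — here is a minimal sketch with hypothetical names and numbers, not Tesla's actual planner:

```python
from dataclasses import dataclass

# Toy "vector space" representation: each perceived object becomes
# a position/velocity entry, the way a game engine tracks cars.

@dataclass
class TrackedObject:
    x: float   # longitudinal distance ahead of us, meters
    v: float   # speed, m/s

def plan_speed(ego_speed, lead, target_speed=30.0, time_gap=2.0):
    """Pick a speed: match the lead car when it's inside our
    time-gap distance, otherwise cruise at target_speed."""
    if lead is None:
        return target_speed
    safe_gap = ego_speed * time_gap  # distance covered in time_gap seconds
    if lead.x < safe_gap:
        # Too close: match the lead car's speed (never below zero).
        return max(0.0, min(target_speed, lead.v))
    return target_speed

# With objects in vector space, control is a simple game-style loop:
lead_car = TrackedObject(x=25.0, v=20.0)
print(plan_speed(ego_speed=30.0, lead=lead_car))  # → 20.0 (follow the lead)
print(plan_speed(ego_speed=30.0, lead=None))      # → 30.0 (open road, cruise)
```

The hard part — turning pixels into that `TrackedObject` list — is exactly the perception problem described above; the planning step that follows is comparatively simple.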
- Do you have a sense, high level,
that Tesla's on track to being able
to achieve full autonomy?
- Yeah, absolutely.
- And still no driver
state, driver sensing.
- We have driver sensing
with torque on the wheel.
- That's right, by the way,
just a quick comment on karaoke.
Most people think it's
fun but I also think
it's a driving feature. I've
been saying for a long time,
singing in a car's really
good for attention management
and vigilance management.
- That's right, Tesla karaoke is great.
It's one of the most
fun features of the car.
- Do you think there's a connection
between fun and safety sometimes?
- Yeah, you can do both at
the same time, that's great.
- I just met with Ann Druyan,
the wife of Carl Sagan,
who co-wrote "Cosmos".
- I'm genuinely a big fan of Carl Sagan,
he was super cool and had a
great way of putting things.
All the clashes of civilizations,
everything we've ever known and done,
is on this tiny blue dot.
We also get too trapped in there,
there's like squabbles amongst humans
and there's nobody thinking
of the big picture.
People take civilization and
our continued existence
for granted; they shouldn't do that.
Look at the history of civilizations.
They rise and they fall and now,
civilization is globalized
and so civilization,
I think now, rises and falls together,
there's no geographic isolation.
This is a big risk,
things don't always go up.
That should be, that's an
important lesson of history.
- In 1990 at the request of Carl Sagan,
the Voyager 1 spacecraft
which is the spacecraft
that has traveled farther into space
than anything human-made,
turned around
to take a picture of earth
from 3.7 billion miles
away and as you're talking
about the pale, blue dot, that picture,
the earth takes up less than
a single pixel in that image.
Appearing as a tiny blue dot,
as a "pale, blue dot"
as Carl Sagan called it.
So he spoke about this dot of ours in 1994
and if you could humor me, I was wondering
if in the last two minutes, you could read
the words that he wrote
describing this pale blue dot.
- Sure, it's funny, the universe appears
to be 13.8 billion years old.
Earth is like four and
a half billion years old.
You know, in another
half billion years or so,
the sun will expand and probably
evaporate the oceans and make
life impossible on earth.
Which means that if it had
taken consciousness 10% longer
to evolve, it would never
have evolved at all.
Just 10% longer.
And I wonder, I wonder how many dead
one planet civilizations there
are, out there in the cosmos.
That never made it to another planet
and ultimately extinguished themselves
or were destroyed by external factors.
Probably a few. It's only just possible
to travel to Mars, just barely.
If G was 10% higher, it wouldn't really work.
If G was 10% lower, it would be easy.
We can go single stage
from the surface of Mars,
all the way over to the Earth.
Because Mars is 37% of the
Earth's gravity, thereabouts.
We need a giant boost to get off Earth.
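Those figures check out against standard textbook values (the constants below come from public references, not from the conversation): surface gravity is g = GM/r² and escape velocity is v = √(2GM/r):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Approximate mass (kg) and mean radius (m) from standard references
bodies = {
    "earth": (5.972e24, 6.371e6),
    "mars":  (6.417e23, 3.390e6),
}

def surface_gravity(mass, radius):
    """g = G * M / r^2, in m/s^2."""
    return G * mass / radius**2

def escape_velocity(mass, radius):
    """v = sqrt(2 * G * M / r), in m/s."""
    return math.sqrt(2 * G * mass / radius)

g_ratio = surface_gravity(*bodies["mars"]) / surface_gravity(*bodies["earth"])
print(f"Mars surface gravity: {g_ratio:.0%} of Earth's")  # ~38%, "37%, thereabouts"
print(f"Escape velocity: Earth {escape_velocity(*bodies['earth'])/1e3:.1f} km/s,"
      f" Mars {escape_velocity(*bodies['mars'])/1e3:.1f} km/s")
```

The roughly factor-of-two gap in escape velocity (about 11.2 km/s vs 5.0 km/s) is why single-stage return from Mars is plausible while leaving Earth needs a giant booster.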
Channeling Carl Sagan,
"Look again at that dot.
"That's here, that's home, that's us.
"On it, everyone you
love, everyone you know,
"everyone you've ever heard of,
"every human being who ever was
"lived out their lives.
"the aggregate of our joy and suffering,
"thousands of confident religions,
"ideologies and economic doctrines,
"every hunter and forager, every hero
"and coward, every creator
and destroyer of civilization,
"every king and peasant,
every young couple in love,
"every mother and father, hopeful child,
"inventor and explorer,
every teacher of morals,
"every corrupt politician,
every 'superstar',
"every 'supreme' leader, every saint
"and sinner in the history
of our species lived there
"on a mote of dust,
suspended in a sunbeam.
"Our planet is a lonely speck in the great
"enveloping cosmic dark.
"In our obscurity, in all this vastness,
"there is no hint that help
will come from elsewhere
"to save us from ourselves.
"the Earth is the only world known
"so far to harbor life,
there is nowhere else.
"At least in the near future,
"to which our species could migrate."
This is not true (laughs).
Mars is possible.
- And I think Carl Sagan
would agree with that.
He couldn't even imagine it at that time.
So thank you for making the world dream
and thank you for talking
today, I really appreciate it,
thank you.
- [Elon] Thank you.
