- The following is a
conversation with Noam Chomsky.
He's truly one of the
great minds of our time
and is one of the most cited scholars
in the history of our civilization.
He has spent over 60 years at MIT
and recently also joined
the University of Arizona
where we met for this conversation,
but it was at MIT about
four and a half years ago
when I first met Noam.
My first few days there I remember
getting into an elevator at Stata Center,
pressing the button for
whatever floor, looking up
and realizing it was
just me and Noam Chomsky
riding the elevator, just me
and one of the seminal figures
of linguistics, cognitive
science, philosophy,
and political thought in the
past century if not ever.
I tell that silly story
because I think life
is made up of funny
little defining moments
that you never forget for
reasons that may be too poetic
to try and explain, that was one of mine.
Noam has been an inspiration
to me and millions of others.
It was truly an honor for me
to sit down with him in Arizona.
I traveled there just
for this conversation,
and in a rare, heartbreaking moment,
after everything was set up and tested,
the camera was moved and
the recording button was accidentally
pressed, stopping the recording.
So I have good audio of both
of us but no video of Noam,
just a video of me and my
sleep deprived but excited face
that I get to keep as a
reminder of my failures.
Most people just listen
to the audio version of the podcast
as opposed to watching it on YouTube,
but still it's heartbreaking for me.
I hope you understand
and still enjoy this
conversation as much as I did.
The depth of intellect that Noam showed
and his willingness to truly listen to me,
a silly-looking Russian
in a suit, was humbling
and something I'm deeply grateful for.
As some of you know, this
podcast is a side project for me
where my main journey and dream
is to build AI systems that
do some good for the world.
This latter effort
takes up most of my time
but for the moment has
been mostly private,
but the former, the podcast,
is something I put my heart
and soul into, and I hope you feel that
even when I screw things up.
I recently started doing ads
at the end of the introduction.
I'll do one or two minutes
after introducing the episode
and never any ads in the middle
that break the flow of the conversation.
I hope that works for you
and doesn't hurt the listening experience.
This is the Artificial
Intelligence podcast.
If you enjoy it, subscribe on YouTube,
give it five stars on Apple Podcasts,
support it on Patreon, or simply
connect with me on Twitter
@lexfridman, spelled F-R-I-D-M-A-N.
This show is presented by Cash App,
the number one finance
app on the App Store.
I personally use Cash App
to send money to friends,
but you can also use it to buy, sell,
and deposit Bitcoin in just seconds.
Cash App also has a new investing feature.
You can buy fractions of a stock,
say $1 worth, no matter
what the stock price is.
Brokerage services are provided
by Cash App Investing,
a subsidiary of Square and member SIPC.
I'm excited to be working with Cash App
to support one of my favorite
organizations called FIRST,
best known for their FIRST
Robotics and LEGO competitions.
They educate and inspire
hundreds of thousands of students
in over 110 countries
and have a perfect rating
on Charity Navigator which
means the donated money
is used to maximum effectiveness.
When you get Cash App in
the App Store or Google Play
and use code LexPodcast, you'll get $10
and Cash App will also
donate $10 to FIRST,
which again is an organization
that I've personally seen
inspire girls and boys
to dream of engineering a better world.
And now here's my conversation
with Noam Chomsky.
I apologize for the absurd
philosophical question,
but if an alien species
were to visit Earth,
do you think we would be able
to find a common language
or protocol of communication with them?
- [Noam] There are arguments
to the effect that we could.
In fact, one of them was Marv Minsky's.
Back about 20 or 30 years ago he performed
a brief experiment with a
student of his, Daniel Bobrow.
They essentially ran the
simplest possible Turing machines,
just freely, to see what would happen.
And most of them crashed,
either got into an infinite loop
or were stopped. The few that persisted
essentially gave
something like arithmetic.
And his conclusion from that was
that if some alien species
developed higher intelligence
they would at least have arithmetic.
They would at least have what
the simplest computer would do.
And in fact, he didn't
know that at the time,
but the core principles
of natural language
are based on operations
which yield something
like arithmetic in the limiting
case, in the minimal case.
So it's conceivable that
a mode of communication
could be established based
on the core properties
of human language and the
core properties of arithmetic
which maybe are universally
shared so it's conceivable.
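To make the kind of experiment described here concrete, the sketch below is a minimal illustration of my own in Python, not a reconstruction of Minsky and Bobrow's actual setup: it enumerates all two-state, two-symbol Turing machines, runs each on a blank tape for a bounded number of steps, and tallies how many halt, fall into a detectable loop, or keep running, the survivors being the machines whose behavior tends to look like simple counting.

```python
from itertools import product
from collections import defaultdict

STATES = (0, 1)        # two working states; 'H' is an extra halting state
SYMBOLS = (0, 1)       # binary tape alphabet
MOVES = (-1, 1)        # move the head left or right
MAX_STEPS = 200        # arbitrary cutoff for "keeps running"

def all_machines():
    """Yield every 2-state, 2-symbol transition table.
    A table maps (state, symbol) -> (write, move, next_state)."""
    keys = list(product(STATES, SYMBOLS))
    actions = list(product(SYMBOLS, MOVES, STATES + ('H',)))
    for combo in product(actions, repeat=len(keys)):
        yield dict(zip(keys, combo))

def classify(table, max_steps=MAX_STEPS):
    """Run one machine on a blank tape and classify its behavior."""
    tape = defaultdict(int)   # unbounded tape of 0s
    state, head = 0, 0
    seen = set()
    for _ in range(max_steps):
        if state == 'H':
            return 'halted'
        config = (state, head, tuple(sorted(tape.items())))
        if config in seen:
            return 'looped'   # exact configuration repeated: an infinite loop
        seen.add(config)
        write, move, state = table[(state, tape[head])]
        tape[head] = write
        head += move
    return 'still_running'    # the survivors still going at the cutoff

counts = defaultdict(int)
for table in all_machines():
    counts[classify(table)] += 1

print(dict(counts))   # how many of the 20,736 machines fall in each class
```

The cutoff and the loop check here are arbitrary choices; the point is only the shape of the experiment: enumerate the simplest machines, let them run freely, and see what the few survivors do.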
- [Lex] What is the
structure of that language,
of language as an internal
system inside our mind
versus an external
system as it's expressed?
- [Noam] It's not an alternative.
It's two different concepts of language.
- [Lex] Different.
- [Noam] It's a simple fact
that there's something
about you, a trait of yours,
part of the organism that is you, that determines
that you're talking English
and not Tagalog, let's say.
So there is an inner system.
It determines the sound and meaning
of the infinite number of
expressions of your language.
It's localized; it's not in your foot,
obviously, it's in your brain.
If you look more closely it's
in specific configurations
of your brain and that's essentially
like the internal
structure of your laptop.
Whatever programs it has are in there.
Now, one of the things
you can do with language,
and it's a marginal thing in
fact, is use it to externalize
what's in your head.
I think most of your use
of language is thought,
internal thought, but you can do
what you and I are now doing.
We can externalize it.
Well, the set of things
that we're externalizing
are an external system, they're
noises in the atmosphere,
and you can call that language
in some other sense of the word,
but it's not a set of alternatives.
These are just different concepts.
- [Lex] So how deep do the roots
of language go in our brain?
- Well--
- Our mind,
is it yet another feature like vision?
Or is it something more fundamental
from which everything else
springs in the human mind?
- [Noam] Well in a way it's like vision.
There's something about
our genetic endowment
that determines that we have a mammalian
rather than an insect visual system.
And there's something
in our genetic endowment
that determines that we have
a human language faculty.
No other organism has
anything remotely similar.
So in that sense it's internal.
Now, there is a long tradition
which I think is valid
going back centuries to the
early scientific revolution
at least that holds that language is
sort of the core of
human cognitive nature.
It's the source, it's the
mode for constructing thoughts
and expressing them and
that is what forms thought
and it's got fundamental
creative capacities.
It's free, independent,
unbounded and so on.
And it's undoubtedly, I think, the basis
for our creative capacities
and the other remarkable
human capacities that lead
to the unique achievements
and not so great
achievements of the species.
- [Lex] The capacity to think and reason.
Do you think that's deeply
linked with language?
Do you think the internal
language system is essentially
the mechanism by which we
also reason internally?
- [Noam] It is undoubtedly the
mechanism by which we reason.
There may also be other,
there are undoubtedly
other faculties involved in reasoning.
We have a kind of scientific faculty.
Nobody knows what it
is, but whatever it is
that enables us to pursue
certain lines of endeavor
and inquiry and to decide what makes sense
and doesn't make sense and
to achieve a certain degree
of understanding in the
world that uses language
but goes beyond it just as using
our capacity for arithmetic
is not the same as having the capacity.
- [Lex] The idea of capacity,
our biology, evolution,
you've talked about it defining
essentially our capacity,
our limit and our scope.
Can you try to define
what limit and scope are,
and the bigger question,
do you think it's possible
to find the limit of human cognition?
- [Noam] Well that's an
interesting question.
It's commonly believed,
most scientists believe
that human intelligence
can answer any question in principle.
I think that's a very strange belief.
If we're biological organisms
which are not angels
then our capacities ought
to have scope and limits
which are interrelated.
- [Lex] Can you define those two terms?
- [Noam] Well, let's
take a concrete example.
Your genetic endowment, it determines
that you can have a
mammalian visual system
and arms and legs and so on
and therefore become a
rich, complex organism,
but if you look at that
same genetic endowment
it prevents you from
developing in other directions.
There's no kind of experience
which would lead the embryo
to develop an insect visual system
or to develop wings instead of arms.
So the very endowment that
confers richness and complexity
also sets bounds on what can be attained.
Now I assume that our cognitive capacities
are part of the organic world
therefore they should
have the same properties.
If they had no built-in
capacity to develop a rich
and complex structure we
would understand nothing,
just as, if your genetic endowment
did not compel you to
develop arms and legs,
you would just be some kind
of random amoeboid creature
with no structure at all,
so I think it's plausible
to assume that there are limits,
and I think we even have some
evidence as to what they are.
So for example there's a classic moment
in the history of science
at the time of Newton.
From Galileo to Newton,
modern science developed
on a fundamental assumption
which Newton also accepted,
namely that the world, the entire universe
is a mechanical object and
by mechanical they meant
something like the kinds of artifacts
that were being developed
by skilled artisans
all over Europe, the
gears, levers, and so on.
And their belief was, well the world
is just a more complex variant of this.
Newton to his astonishment
and distress proved that there
are no machines, that there's
interaction without contact.
His contemporaries like
Leibniz and Huygens
just dismissed this as
returning to the mysticism
of the Neo-Scholastics and Newton agreed.
He said, "It is totally absurd.
"No person of any scientific intelligence
"could ever accept this for a moment."
In fact, he spent the rest of his life
trying to get around it somehow
as did many other scientists.
That was the very criterion
of intelligibility
for say Galileo or Newton.
A theory did not produce
an intelligible world
unless you could duplicate it in a machine,
and he showed you can't,
there are no machines at all.
Finally, after a long
struggle that took a long time,
scientists just accepted
this as common sense,
but that's a significant moment.
That means they abandoned the search
for an intelligible world
and the great philosophers
of the time understood that very well.
So for example, David Hume,
in his encomium to Newton,
wrote that he was the
greatest thinker ever and so on.
He said that Newton unveiled
many of the secrets of nature,
but by showing the imperfections
of the mechanical philosophy,
of mechanical science,
he showed that there are mysteries
which ever will remain, and
science just changed its goals.
It abandoned the mysteries.
It can't solve them, so it puts them aside.
We only look for intelligible theories.
Newton's theories were intelligible;
it's just that what they described wasn't.
Well, Locke said the same thing.
I think they're basically right and if so
that showed something about
the limits of human cognition.
We cannot attain the goal
of understanding the world,
of finding an intelligible world.
This mechanical philosophy,
Galileo to Newton,
there's a good case that can be made that
that's our instinctive
conception of how things work.
So if, say, infants are tested with things
where this moves and then that moves,
they kind of invent something
that must be invisible,
in between them, that's
making them move and so on.
- [Lex] Yeah, we like physical contact.
Something about our brain seeks--
- [Noam] Makes us want a world like then
just like it wants a world that has
regular geometric figures
so for example Descartes
pointed this out that
if you have an infant
who's never seen a triangle
before and you draw a triangle
the infant will see a distorted triangle
not whatever crazy figure
it actually is, you know,
three lines not coming quite together
or one of them a little
bit curved and so on.
We just impose a conception of the world
in terms of perfect geometric objects.
It's now been shown that
it goes way beyond that,
that if you show on a
tachistoscope, let's say,
a couple of lights
shining, you do it three
or four times in a row
what people actually see
is a rigid object in motion
not whatever's there.
We all know that from a
television set basically.
- [Lex] So that gives us
hints of potential limits
to our cognition?
- I think it does,
but it's a very contested view.
If you do a poll among scientists
they'll say impossible.
We can understand anything.
- [Lex] Let me ask and
give me a chance with this.
So I just spent a day at a
company called Neuralink,
and what they do is try
to design what's called
a brain-machine, a brain-computer
interface.
So they try to just do thousands
of readings in the brain,
be able to read what
the neurons are firing
and then stimulate back, so two-way.
Their dream is to expand the capacity
of the brain to attain information,
sort of increase the bandwidth
at which we can search
Google, that kind of thing.
Do you think our cognitive
capacity might be expanded,
our linguistic capacity,
our ability to reason
might be expanded by adding
a machine into the picture?
- [Noam] It can be expanded
in a certain sense,
but a sense that was known
thousands of years ago.
A book expands your
cognitive capacity, okay,
so this could expand it, too.
- [Lex] But it's not a
fundamental expansion.
It's not totally new
things could be understood.
- [Noam] Well, nothing that goes beyond
our native cognitive capacities
just like you can't turn the visual system
into an insect system.
- [Lex] Well, I mean
the thought is perhaps
you can't directly but you can map.
- [Noam] You could be we know
that without this experiment
you could map what a
bee sees and present it
in a form so that we could follow it.
In fact every bee scientist does that.
- [Lex] Uh-huh, but you
don't think there's something
greater than bees that we can map
and then all of a sudden
discover something,
be able to understand a quantum
world, quantum mechanics,
be able to start to make sense of it?
- [Noam] You can, students at MIT study
and understand quantum mechanics.
- [Lex] (laughs) But they
always reduce it to the infant,
the physical, I mean they
don't really understand--
- [Noam] Not physical,
that may be another area
where there's just a
limit to understanding.
We understand the theories,
but the world that it describes
doesn't make any sense.
So you know the experiment,
Schrödinger's cat
for example, you can understand the theory,
but as Schrödinger pointed out,
it's not an intelligible world.
One of the reasons why Einstein
was always very skeptical
about quantum theory: he described himself
as a classical realist
and wanted intelligibility.
- [Lex] He has something in
common with infants in that way.
So back to linguistics,
if you could humor me,
what are the most beautiful
or fascinating aspects
of language or ideas in linguistics
or cognitive science that you've seen
in a lifetime of studying language
and studying the human mind?
- [Noam] Well, I think the
deepest property of language
and puzzling property
that's been discovered
is what is sometimes called
structure dependence.
We now understand it pretty well,
but it was puzzling for a long time.
I'll give you a concrete example.
So suppose you say, "The
guy who fixed the car
carefully packed his tools."
That's ambiguous: he could
fix the car carefully
or carefully pack his tools.
Now suppose you put "carefully" in front:
"Carefully, the guy who fixed
the car packed his tools."
Then it's carefully packed,
not carefully fixed.
And in fact you do that
even if it makes no sense.
So suppose you say, "Carefully, the guy
who fixed the car is tall."
You have to interpret it
as carefully he's tall
even though that doesn't make any sense.
And notice that that's
a very puzzling fact
because you're relating carefully
not to the linearly closest verb
but to the linearly more remote verb.
Linear closeness is an easy computation,
but here you're doing a much more,
what looks like a more
complex computation.
You're doing something that's taking you
essentially to the more remote thing.
Now if you look at the
actual structure of the sentence,
where the phrases are and so on, it turns out
you're picking out the
structurally closest thing,
but the linearly more remote thing.
But notice that what's linear
is 100% of what you hear.
You never hear structure.
So what you're doing, and
incidentally this is universal,
all constructions, all languages,
and what we're compelled
to do is carry out
what looks like the
more complex computation
on material that we never
hear and we ignore 100%
of what we hear on the
simplest computation.
And by now there's even
a neural basis for this
that's somewhat understood,
and there's good theories
but none that explain why it's true.
That's a deep insight
into the surprising nature
of language with many consequences.
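To make the contrast between linear and structural proximity concrete, here is a toy sketch of my own in Python; the bracketing is a hypothetical, simplified constituent structure for the example sentence, not an analysis from the conversation. It computes both notions of "closest verb" to the fronted adverb and shows that they disagree: the linearly closest verb is "fixed," while the structurally closest, least embedded verb is "packed," which is the one the adverb is actually understood with.

```python
sentence = ["carefully", "the", "guy", "who", "fixed", "the", "car",
            "packed", "his", "tools"]

# Simplified constituent tree: (label, children...); leaves are words.
tree = ("S",
        ("Adv", "carefully"),
        ("NP",
         ("Det", "the"), ("N", "guy"),
         ("RelClause",
          ("Pron", "who"),
          ("VP", ("V", "fixed"), ("NP", ("Det", "the"), ("N", "car"))))),
        ("VP", ("V", "packed"), ("NP", ("Det", "his"), ("N", "tools"))))

def verbs_with_depth(node, depth=0):
    """Yield (verb, depth) pairs; depth measures how deeply embedded the verb is."""
    if isinstance(node, tuple):
        label, *children = node
        if label == "V":
            yield children[0], depth
        for child in children:
            yield from verbs_with_depth(child, depth + 1)

# Linear notion: the first verb you reach scanning rightward from "carefully".
linear_closest = next(w for w in sentence[1:] if w in ("fixed", "packed"))

# Structural notion: the least deeply embedded verb in the tree.
structural_closest = min(verbs_with_depth(tree), key=lambda pair: pair[1])[0]

print("linearly closest verb to 'carefully':   ", linear_closest)      # -> fixed
print("structurally closest verb:              ", structural_closest)  # -> packed
```

The point of the sketch is only that the two computations pick out different verbs, and that speakers reliably use the structural one even though linear order is all they ever hear.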
- [Lex] Let me ask you about
a field of machine learning
and deep learning, there's
been a lot of progress
in neural network-based machine learning
in the recent decade.
Of course, neural network
research goes back many decades.
- [Noam] Yeah.
- [Lex] What do you think are
the limits of deep learning,
of neural network-based machine learning?
- [Noam] Well, to give
a real answer to that
you'd have to understand
the exact processes
that are taking place, and
those are pretty opaque
so it's pretty hard to prove a theorem
about what can be done
and what can't be done.
But I think it's reasonably clear,
I mean, putting technicalities aside
what deep learning is doing
is taking huge numbers
of examples and finding some patterns.
Okay, that could be interesting
and in some areas it is
but we have to ask here
a certain question.
Is it engineering or is it science?
Engineering in the sense of
just trying to build something
that's useful or science in the sense
that it's trying to understand
something about elements
of the world. So take a Google parser.
We can ask that question, is it useful?
Yeah, it's pretty useful.
I use Google Translate,
so on engineering grounds
it's kind of worth having, like a bulldozer.
Does it tell you anything
about human language?
Zero, nothing, and in
fact it's very striking.
From the very beginning
it's just totally remote from science
so what is a Google parser doing?
It's taking an enormous text,
let's say The Wall Street
Journal corpus and asking,
how close can we come to
getting the right description
of every sentence in the corpus?
Well, every sentence in the corpus
is essentially an experiment.
Each sentence that you produce
is an experiment which is,
am I a grammatical sentence?
Now the answer is usually
yes so most of the stuff
in the corpus is grammatical sentences,
but now ask yourself, is there any science
which takes random experiments
which are carried out
for no reason whatsoever and tries
to find out something from them?
Like if you're, say, a
chemistry PhD student
and you want to get a thesis, can you say,
well, I'm just gonna
mix a lot of things
together, no purpose, and
maybe I'll find something?
You'd be laughed out of the department.
Science tries to find
critical experiments,
ones that answer some
theoretical question.
Doesn't care about coverage
of millions of experiments.
So it just begins by being
very remote from science
and it continues like
that so the usual question
that's asked about, say, a Google parser
or some parser,
is how well does it do on a corpus?
But there's another
question that's never asked.
How well does it do on something
that violates all the rules of language?
So for example, take the
structure dependence case
that I mentioned, suppose
there was a language
in which you used linear
proximity as the mode
of interpretation, then deep learning
would work very easily on that.
In fact, much more easily
than on an actual language.
Is that a success?
No, that's a failure.
From a scientific point
of view that's a failure.
It shows that we're not discovering
the nature of the system at all
'cause it does just as well or even better
on things that violate the
structure of the system,
and it goes on from there.
It's not an argument against doing it.
It is useful to have devices like this.
- [Lex] So yes, neural networks
are kind of approximators.
Look, there are echoes of
the behaviorism debates, right,
behaviorism.
- More than echoes.
Many of the people in deep learning
say they've vindicated it.
- (laughs) Yeah.
- [Noam] Terry Sejnowski for
example in his recent book
says this vindicates Skinnerian behaviorism
and it doesn't have
anything to do with it.
- [Lex] Yes, but I think there's something
actually fundamentally different
when the data set is huge,
but your point is extremely well taken.
But do you think we can learn, approximate
that interesting, complex
structure of language
with neural networks that will somehow
help us understand the science?
- [Noam] It's possible,
I mean, you find patterns
that you hadn't noticed, let's say.
Could be. In fact, it's very
much like a kind of linguistics
that's done, what's called
corpus linguistics:
suppose you have some language
where all the speakers
have died out but you have records.
So you just look at the records
and see what you can figure out from that.
It's much better to have actual speakers
where you can do critical experiments,
but if they're all dead you can't do them
so you have to try to
see what you can find out
from just looking at
the data that's around.
You can learn things.
Anthropology is very much like that.
You can't do a critical experiment
on what happened two million years ago
so you're kinda forced to
take what data's around
and see what you can figure out from it.
Okay, it's a serious study.
- [Lex] So let me venture into
another whole body of work
and philosophical question.
You've said that evil in society
arises from institutions,
not inherently from our nature.
Do you think most human beings are good,
they have good intent or
do most have the capacity
for intentional evil that
depends on their upbringing,
depends on their environment, on context?
- [Noam] I wouldn't say
that they don't arise from our nature.
Anything we do arises from our nature.
And the fact that we
have certain institutions
and not others is one mode
in which human nature
has expressed itself.
But as far as we know, human nature
could yield many different
kinds of institutions.
The particular ones that have developed
have to do with historical contingency,
who conquered whom and that sort of thing,
so they're not rooted in our nature
in the sense that they're
essential to our nature.
It's commonly argued
these days that something
like market systems is
just part of our nature,
but we know from a huge amount of evidence
that that's not true, there's
all kinds of other structures.
That's a particular fact of
a moment of modern history.
Others have argued, going back
to the roots of classical liberalism,
that what's sometimes called
an instinct for freedom, an instinct
to be free of domination
by illegitimate authority,
is the core of our nature.
That would be the opposite of this.
And we don't know, we just
know that human nature
can accommodate both kinds.
- [Lex] If you look back at your life,
is there a moment in
your intellectual life
or life in general that jumps from memory
that brought you happiness
that you would love to relive again?
- [Noam] Sure, falling
in love, having children.
- [Lex] What about ideas? You have put forward
into the world a lot of
incredible ideas in linguistics,
in cognitive science. Are there ideas
that just excited you
when they first came to you,
moments that you would love to relive?
- [Noam] Well, I mean,
when you make a discovery
about something it's exciting like say
even the observation
of structure dependence
and on from that the explanation for it,
but the major things just
seem like common sense.
So if you go back to, take your question
about external and internal language.
You go back to, say, the 1950s,
language was regarded almost entirely
as an external object,
something outside the mind.
It just seemed obvious
that that can't be true.
Like I said, there's something
about you that determines
you're talking English
not Swahili or something.
But that's not really a discovery.
That's just an observation
of what's transparent.
You might say it's kind of like
the 17th century, the
beginnings of modern science.
They came from being willing
to be puzzled about things
that seemed obvious.
So it seems obvious that a heavy
ball of lead will fall faster
than a light ball of lead,
but Galileo was not impressed
by the fact that it seemed obvious,
so he wanted to know if it was true.
He carried out experiments,
actually thought experiments,
he never actually carried
them out, which showed
that it can't be true, you know.
And out of things like that,
observations of that kind,
you know, why does a
ball fall to the ground
instead of rising, let's say?
It seems obvious till you
start thinking about it
'cause why does steam rise, let's say.
And I think the beginnings
of modern linguistics
roughly in the 50s are kind of like that,
just being willing to be
puzzled about phenomena
that looked from some
point of view obvious.
And for example a kind of doctrine,
almost official doctrine
of structural linguistics
in the 50s was that languages
can differ from one another
in arbitrary ways and
each one has to be studied
on its own without any presuppositions
and in fact there were
similar views among biologists
about the nature of
organisms, that each one,
they're so different when you look at them,
that they could be almost anything.
Well in both domains it's been learned
that it's very far from true.
There are very narrow constraints
on what could be an organism
or what could be a language.
But these are, you know, that's
just the nature of inquiry.
- [Lex] Science in general, yeah, inquiry.
So one of the peculiar things
about us human beings is our mortality.
Ernest Becker explored it.
In general do you ponder
the value of mortality?
Do you think about your own mortality?
- [Noam] I used to when
I was about 12 years old.
I wondered, I didn't care
much about my own mortality,
but I was worried about the
fact that if my consciousness
disappeared would the
entire universe disappear.
That was frightening.
- [Lex] Did you ever find
an answer to that question?
- [Noam] No, nobody's
ever found an answer,
but I stopped being bothered by it.
It's kind of like Woody
Allen in one of his films.
You may recall he goes to
a shrink when he's a child
and the shrink asks him,
"What's your problem?"
He says, "I just learned that
the universe is expanding.
"I can't handle that."
- [Lex] (laughs) And
another absurd question is,
what do you think is the
meaning of our existence here,
our life on Earth, our
brief little moment in time?
- [Noam] That's something we
answer by our own activities.
There's no general answer.
We determine what the meaning of it is.
- [Lex] The actions determine the meaning.
- [Noam] Meaning in the
sense of significance
not meaning in the sense that
chair means this, you know,
but the significance of your
life is something you create.
- Noam, thank you so
much for talking today.
It was a huge honor, thank you so much.
Thanks for listening to this
conversation with Noam Chomsky,
and thank you to our
presenting sponsor Cash App.
Download it, use code LexPodcast.
You'll get $10 and $10 will go to FIRST,
a STEM education nonprofit
that inspires hundreds
of thousands of young minds to learn
and to dream of engineering our future.
If you enjoy this podcast,
subscribe on YouTube.
Give us five stars on Apple Podcasts,
support on Patreon, or
connect with me on Twitter.
Thank you for listening and
hope to see you next time.
