ANNA: Welcome to the fourth
lecture of the lecture
series, "God and Computers:
Minds, Machines, and
Metaphysics."
The first lecture was
about the human factor,
given by Paul Penfield.
Then we heard about
animals by Marc Hauser--
animals and consciousness,
animals and ethics.
Last week, we heard about
brains, and brain function,
and where spirituality
might be localized in them,
or whether it can be
localized at all.
And today, now, we
hear about robotics.
Before we can start about
that, just briefly, one thing--
if you happen to
have this brochure--
I have also some left.
The discussion meetings at
Harvard Divinity School--
the first one is on Monday, at
12 o'clock, at Harvard Divinity
School--
45 Francis Avenue-- run
by Harvey Cox and me.
So you are very welcome to bring
your questions and discussions
you cannot address today to
this discussion group on Monday.
The other dates you will all
find in that little brochure.
And now I'm very pleased
and happy to welcome
Rodney Brooks.
And with the introduction, it's
always a little bit difficult
if you actually have to
introduce your own boss.
So I am not quite
sure how to do that.
But Rod is currently the
director of the Artificial
Intelligence Lab here, at MIT.
He holds a PhD in Computer
Science from Stanford,
where he was mostly
interested in vision.
And when he came
to MIT, in 1981,
he was still mostly interested
in classical AI problems,
but, slowly, his
focus began to shift.
Since AI had failed
to reach its goal
of human-like,
intelligent machines,
Rod broke with most assumptions
of his field and started anew.
Based on the work of
Maturana and Varela--
Varela, by the way, will give
his talk in two weeks here--
Rod started to
perceive intelligence
as being in the world, as
interaction, as embodiment.
He then built creatures--
insect-like robots--
which were far better at
navigating an unstructured world
and natural environments
than anything which
had been built before.
For his work, he
received numerous awards,
became, in 1996, a
fellow of the American
Association for the Advancement
of Science, and is,
today, accepted as the
so-called father of embodied AI.
And recently, he even
became a movie star.
His movie, Fast, Cheap,
and Out of Control--
named after a research
paper of his--
is currently running
at Kendall Theatre
and I can only highly
recommend you all to go there.
Rod's latest attempt to
rebuild intelligent creatures
goes a little bit
beyond insects,
hence the title of his
talk, "Artificial Humanity."
And so I'm very happy, and
glad, and pleased to welcome you
here.
BROOKS: Thank you.
Well, this talk is a little
different from the ones
I normally give--
even less technical
detail than normal.
And the title is
"Artificial Humanity,"
but it's a non-linear talk
because, to be honest,
I couldn't figure out how
to fill up a whole hour.
So I've got some
little diversions
along the way, which
aren't necessarily
straight along the line here.
But the question is, to
me, whether we can ever
have humanity in
robots-- whether we
can think of them in the same
way we think of other humans.
And that's all tied up with
lots of beliefs we have
and lots of
insecurities we have.
I know that, in the
last few hundred years,
we've seen mankind's
retreat from specialness.
Galileo and others
around there let
us know that the
Earth was no longer
the center of the universe.
And so this piece of rock
that we're standing on
wasn't particularly special.
And then Darwin
came along and we
saw that humans and animals
have common ancestors.
Their origins are
not that different.
Crick and Watson, with DNA
and the mechanism of life,
means that humans and
yeast are pretty similar.
And you see, sort of,
that we're retreating here
from all these special things.
And each of these retreats has
been met by a lot of argument
and a lot of fear.
And as you get further
and further down here,
and as you get further and
further south in the US,
not everyone has accepted
all these things.
Then if you look back in
the middle of the '50s, you have
[INAUDIBLE], McCarthy,
Newell, Simon, et cetera--
and Turing, of course.
Human thought is the
same as computation
and, hence, fits on machines.
So the thought process,
the logic process,
was sort of taken away from us.
And as we've understood
biochemistry better and better,
we find that we're
collections of tiny machines.
And where the essence
of humanity is,
is harder and harder to find.
And very recently,
just this year, we
started to see that human
flesh and body plans
are subject to
technological manipulation.
Was it over the weekend?
Yeah.
I think, over the
weekend, there were
the stories about
the headless frogs
that were grown in England.
And then a whole raft of news
stories-- that Lewis Wolpert,
in London, responded to--
about whether we could now
start building headless humans
as a way of growing new
organs and all the fears
that that produced.
So we're sort of retreating
to be less and less special.
And each of those retreats
comes with quite a bit
of consternation, quite
a bit of argument.
And the one I want to
talk about today is
whether robots can feel, love--
and since God is in
the title of this--
worship, and have souls.
So as Anna pointed
out, I've been playing
with humanoid robots recently.
This is a version of our robot,
Cog, from a little while back.
It was built here at the AI Lab.
And our goal was to build a
robot that develops and acts
in the world in the same way
that humans develop and act
in the world.
And why human?
Well, we wanted it to
have similar sensory motor
experiences, very
much influenced
by Mark Johnson and--
who's the guy at Berkeley?
Women, Fire, and
Dangerous Things.
AUDIENCE: Lakoff.
BROOKS: Lakoff, et cetera.
But I have to admit, this
particular motivation
is the one that worries
me most, the one that
makes me think that,
well, maybe we're
engaged, in this exercise, in
a bit of cargo cult science.
Do people know about the
cargo cults in New Guinea?
This is one of the
little diversions.
Anyone not know about
the cargo cults?
OK.
So during the Second World
War, both the Allied troops
and the Japanese troops
had lots of fighting
going on in New Guinea.
And they would come, and they
would clear out some land,
and build a control tower.
And then these silver
birds would come down
and disgorge lots of supplies.
And so after all
the troops left,
a lot of the native
tribes started
flattening out areas of land,
building bamboo control
towers, and sitting up there,
waiting for these silver birds
to come down from the sky.
And so I do worry a little bit
that in building this robot
to have similar sensory
motor experiences, maybe
we're engaging in that sort
of cargo cult, as it's called,
because it doesn't really
have a lot of the stuff
that humans have.
But one thing that has
certainly turned out to be true
is that humans interact
with these robots
in natural sorts of ways.
They can't help themselves.
And this is not something
we've studied in great detail,
so it's rather anecdotal.
And I'll only give
anecdotal incidents today.
But humans just can't help but
interact with these machines
that look like humans,
in human-like ways.
And that, I think, is
the key to answering
the question of whether robots
will eventually have souls.
And that's what I'm going
to try and get to today.
People just find themselves
interacting with it,
like a human.
Let me show you the first
video, just to show you Cog
doing some stuff.
And compared to the
Star Wars movies,
it's a little disappointing.
But it took us a long time.
So I'm supposed to press--
do I have something up there?
Yeah.
So here, you see there's an
old version of the robot head
up there.
It keeps putting
things in its hand.
It's attracted by the motion.
It's looking at things.
We built the arms with series
elastic actuators, developed
by Gill Pratt and
Matt Williamson,
which enabled us to
interact with this robot.
This is the head system.
You see the eyes
saccading, from place
to place, very rapidly,
at about human speeds.
There's a wide angle lens and a
narrow angle lens in each eye.
This is saccading to motion.
You see the eye is rapidly
moving from place to place.
This is smooth pursuit,
where the eyes are smoothly
following something.
These are all the basic
sorts of visual operations
that people do.
You can't move
your eyes smoothly
from side to side unless
you've got something to track,
it turns out.
This is the vestibular system,
simulating the inner ears.
Here, it's not switched on.
As we move the head around,
the eyes just waggle around all
over the place.
When we switch on
the inner ears,
now the eyes are stabilized, as
we move the head unpredictably
and they stay locked
onto one position.
And that's very important to
you, to stabilize your vision.
Now here, you see the eyes
saccade somewhere and then
the neck tries to
get the eyes back
into, roughly, the center
of their range of motion.
And an efference copy
signal is sent to the eyes
to compensate for the motion.
When we put this in,
the robot started
to feel much more human--
the way it looked at us.
Here's Matt
Williamson-- designer.
This is with his old arm.
He's just showing some
basic, infant-like reflexes.
This is the withdrawal reflex.
It feels the touch and
pulls its arm back.
Infants have the grasp reflex,
the withdrawal reflex, et cetera.
And in order for Matt to
get his Master's thesis,
I made him prove that the arm
was safe to interact with.
And this is some new arms.
We've got a bunch of
new stuff with this,
but I'm not going
to show that today.
It's not relevant.
This is a tape we made up for
AAAI a couple of months ago.
This shows you some physical
coupling through the system.
The body is important, just as
the physical bodies of humans are.
And recently, Matt
has had Cog playing
with Slinkys and [INAUDIBLE]
feeling pendulums
swinging back and forth.
This is Cog learning to reach--
learning hand-eye coordination.
It saccades its
eyes to some point,
and then reaches out
its hand, and moves its hand
to see how close it got
to where its eyes were
looking, and then
learns how to move its hand.
So it sits there
for a few hours,
reaching out to places it looks.
This is its point of view.
You'll see the saccade
happen in a second.
And during the saccade, we
have to suppress the motion
detection, just like happens
in the human visual system.
And then it puts its hand out
and it moves its hand around
to see where its arm ended up,
so it can learn how to operate.
And after about three
hours of learning,
here, Brian [INAUDIBLE]
gives it a motion cue.
It reaches out.
The hand wasn't
working at this point.
Now he's going to wave.
Watch the eyes up there.
And you'll see the eyes,
in a second or two,
saccade over to that motion.
There, they saccaded.
Now it's going to try
and reach out to that.
This is the result
of its learning
for a few hours with no prior
knowledge of the kinematics,
or dynamics, or whatever.
It's all learned.
And here's an
interesting case where
Cynthia Ferrell, one of
the designers of the robot,
was just trying to engage
some motions of the arm.
And when we looked
at the videotape,
we saw that the robot
and she were playing--
taking turns.
And we weren't planning
on putting turn taking
into the robot for quite a while.
But she, even as a
non-naive observer,
couldn't help but get into
that dynamic of playing--
oh, what happened?
Getting into that dynamic of
playing a game with the robot.
And that's going to be an
important point later on.
Now here, I'll do another aside.
We're building this
humanoid robot.
There are lots of other people
building humanoid robots,
particularly in
Japan, right now.
It's become quite an industry.
And there's been some pretty
surprising developments
recently for robotics
people in the world.
Honda Motor Corporation
has had a secret project--
it's just like a
James Bond movie--
for 10 years, developing
this robot called P2.
P1 was just a pair of
legs with a box on it.
It looks pretty weird.
But they were completely
secretive about this.
No one knew what was going on.
Last October, I started
hearing a couple of whispers.
And then, in December,
they announced it.
It's 250 kilograms--
the first biped that
can walk without
a tether, but it's
mostly mechatronics rather than
AI research, at this point.
But MITI-- the Ministry
of International Trade
and Industry--
was talking about a plan to
build lots of humanoid robots
and give them to universities.
And Honda didn't
want to be left out
of the action, given that
they had put $100 million
into this over 10 years.
They announced that
last December. And then,
about three or
four weeks ago,
they had their new model out--
the P3-- which is smaller,
weighs about 110 kilograms
with a 30 kilogram payload.
And so this was done in
just nine or 10 months.
They've now got 100 people
working on this project,
building humanoid robots.
And Honda has suddenly
become a major robot company
in the eyes of the Japanese
public, through this project.
There's some talk about
having the P2 or P3
at the opening of the world--
what do you call it?
What's the world soccer
tournament thing?
AUDIENCE: The World Cup.
BROOKS: The World Cup.
Having it out there on the
field, kicking the ball.
It won't work, if the
ground is soggy, by the way,
because the algorithms that
they use are rather fragile.
But lots of other
people in Japan
are also building
humanoid robots.
This is Waseda University with
a couple of their humanoids.
They now have 100 people working
in their humanoid research
laboratory.
And there are other humanoid
research laboratories.
There's the ones at ETL, there's
the ones at Tokyo University,
the HARP Lab, et cetera.
So it's become a big industry.
And there's going to be lots
and lots more humanoid robots.
Most of these Japanese
robots, at this point,
don't have much in the
way of intelligence
or emotional models,
et cetera, although one
of the robots at
Waseda does have
a model of the amygdala, the
hippocampus, and a whole bunch
of other inner, emotional
sections of the brain.
Perhaps they're trying
to put that together.
That was an aside.
Let's get back to
whether we're going
to be able to build robots
with humanity in them,
whether we're going
to respect them.
You just saw a
videotape of our robot.
But I need to come clean
a little bit, I think,
in this talk.
I haven't been to
any of the talks
previously in the series
because I've been out of town.
But Anna tells me that,
at the first talk, when
Paul Penfield spoke,
someone complained
that they wanted to hear
what an atheist had to say.
And she pointed out that I was going
to be the atheist coming along
to talk.
So my assumptions
are that there is
a completely mechanistic
explanation for everything
in the universe.
I believe we're all
made up of gazillions--
and I chose that word because
it just seemed large--
mindless, soulless robots--
molecular machines.
The little, biochemical
molecules are these machines.
We are made up of robots,
as [INAUDIBLE] pointed out.
And I believe that every human
interaction, in principle,
can be reduced to such
mindless explanations.
Now is this how I live my life?
No, I don't live
it this way at all.
I don't think of my
kids as, oh, they're
just those little, mindless,
gazillions of machines.
I think of them as one
big, mindless machine.
That's not true.
I operate at a totally
different level.
And I must admit, I was
perplexed, for a long time,
by scientists who
are also religious.
For a long time, it
just didn't seem, to me,
to make sense that
someone could be
religious and, at the
same time, be a scientist.
I couldn't see that duality.
And it was only fairly
recently that I realized
that that's exactly what I do--
I do live in this
dual world, where
I have this level
of explanation,
but, for my everyday
life, I operate
at a very different level.
And although they
may deny it, I think
all atheists operate
this way and I
think all scientists
operate this way--
having different parallel belief
systems that they operate with.
So I'm atheist, but I'm not--
I've got to be careful
to choose my words here.
No, I won't use them.
I'm one of the good atheists.
Now related to
this, it seems to me
that most scientists engage
in a certain cultural
constructivism.
All researchers base their
work on some, usually
unstated, dogmatic beliefs.
And this gets a lot of people
upset because a lot of people
believe that they're
in search of truth
and there is one truth.
But I think that when you push
people who speak that way,
many of those beliefs are in
the scientific method, which
has a certain religious aura
about it when a lot of people
talk about it.
Or they believe, as I do,
that humans, ultimately, have
mechanistic explanations.
Either way, when
it gets down to it,
we can't explain it
beyond that in any logical way.
We base our scientific lives
on some set of assumptions.
And I've observed the challenges
to these implicit beliefs
are often met with hostility.
And for those of you who've
been on the mailing lists
around here, you
certainly saw that
with the announcement
of God and Computers,
the course that goes along
with this series of talks.
So those two slides
are just sort
of saying where I
come from in this.
Oh, boy.
That sounds awfully
'70s, doesn't it?
So can we have
artificial humanity?
Can we ever build robots
where we, people--
our sort of machines, as
distinct from those sort
of machines--
will all agree that these robots
are things to be empathized with,
whether they're things we
should pity, when appropriate--
whether there will ever be
an appropriate time to pity
a robot--
actually I did.
Do you remember--
was it RoboCop?
Before RoboCop sort
of got invented,
they had this legged machine
that sort of blew everyone
away.
Do you remember that?
And they chased it out, and
they chased it down the stairs,
and it fell down, and
its legs were quivering.
I was building walking
robots at the time.
I really felt for that robot.
So whether we should protect
these robots, when necessary--
when the bad graduate student
is going to switch it off,
we should stop them.
And then, ultimately,
whether they
should be equal before the law.
And, as we all know, being
equal before the law is
very different from being
equal before our hearts.
And that's an even
bigger step to get to.
So can we have
artificial humanity?
From a religious point
of view, I would think,
if we had artificial
humanity, we
would like to say that
these things had souls.
I can talk about
this, as an atheist.
So I believe that, in the same
sense that humans have souls,
it would be impossible to
build humanoid robots that
have human-level interactions
without them having souls.
But I use my definition
of a soul, which
we'll get to a little later.
And we won't even have to
try to give them souls,
they'll just have them.
And they'll have them because
of the way we feel about them.
That's a sort of
a tricky question.
So a simpler question is,
can a robot be afraid?
Now I think people are willing
to say we can make robots
act as if they're afraid.
We've seen robot
actors, in movies,
act as if they're afraid.
We can make robots
that seem to be afraid.
We can make robots that
simulate having fear.
And a while back, people used
the same sorts of caveats
when they talked about
reasoning systems in computers.
But I think, today,
most AI researchers
are willing to say that
robots or programs can reason
about facts, they
can make decisions,
and they can have goals.
These "simulate," "act as if," and
"seem" have been replaced by "can."
But I'm not sure that AI
researchers, in general,
are willing to say that robots
can be afraid, because I think
AI researchers are still--
they've got that specialness.
They've been pushed
back, by their own work,
into giving up a lot of stuff.
But this visceral fear
that we all know and feel,
are we willing to
say that a machine is
going to have that sort of
real, visceral sort of fear?
Or is that something it
will just seem to have?
It will just be a bunch
of, heaven forbid,
C programs doing stuff.
Well, what I want to claim is
that, ultimately, the robots
will be viscerally afraid.
And it's a matter
of us accepting
that rather than any great
technological breakthroughs
necessary.
So in this sense, this is
a little disappointing talk
because I'm going
to say we don't need
technological breakthroughs.
It's sort of the ultimate
cop-out talk.
So I want to examine a few
other people who talk about some
of these similar
sorts of issues,
although not exactly this issue.
And I think there are some
generic ways in which they
go wrong.
One way they go wrong is
through an implicit rejection
of mechanistic
explanations, where
that rejection is often denied,
but, nevertheless, it's there.
And I'll show you an
example in a minute.
And they conserve the
specialness of humankind
that way and they rationalize
it as a scientific argument,
but, in fact, they're
doing something sneaky.
They're getting rid of a
mechanistic explanation
by dressing it up as though it
is a mechanistic explanation.
And I'll give you an example.
And the other thing,
which is much more common,
is amongst AI researchers.
It's an adoption of
a higher mechanism.
Researchers want to maintain a
purely mechanistic explanation,
but they can't face
reductionism to current models
of the universe, so they invent
or wish for-- and I'll show you
examples of both--
some super mechanism that
would be rather special.
And we don't know about it
yet, so that sort of maintains
the specialness of humans.
First case-- implicit rejection
of mechanistic explanation.
And the great example is
John Searle, from Berkeley.
So here's John Searle.
And the web is great.
I just went out, and
looked, and I found
John Searle-- a picture of him.
And he even fit the clip art.
So Searle makes this argument--
suppose he's in a room,
and he's got some
instructions to follow,
and someone feeds
him a piece of paper
with some Chinese symbols on it.
And he doesn't know Chinese.
I don't know Chinese.
I found these on the web, too.
I have no idea what this
says, but that fits the story.
And I have no idea
what the output says.
It should be related,
but who knows.
Anyway, so Chinese
symbols come in.
And this is a question--
he follows the rules,
and he outputs some answer
by following these rules,
mechanistically,
and he says, see,
John Searle still
doesn't know Chinese.
He can act as if
he knows Chinese,
but it's not the same
as knowing Chinese.
But what he misses is
that the whole system
does know Chinese--
John Searle, and his
pencil, and his paper,
and his book of rules, and his
procedure that he's running
does know Chinese.
But he wants the Chinese
to be in his head
or, otherwise, the system
doesn't know Chinese.
And I think his argument maps
pretty well to the argument
that, well, horses
can transport things,
but a car can't transport
things because, when
you look inside a car,
there's no place where
the transportation is happening.
There's wheels, and there's
gasoline, and some engines,
and there's no
horses there either.
So it can't be
transporting stuff
like a horse transports stuff.
And this is the same
argument he makes,
because he wants to find the
understanding in a component.
But he doesn't insist that
the same thing happens
for humans or for animals
because he says, well, animals
are animals.
Recently, there was
a TV program where
they interspersed
something I said
with something Searle said.
They interviewed me.
I didn't know Searle was
going to be in the program.
And they got me
to say that if it
walks like a duck, talks like
a duck, smells like a duck,
it's a duck.
And Searle came
on and said, if it
walks like a duck, talks like
a duck, smells like a duck,
it ain't a duck,
because a duck's a duck.
And he says the same thing about
intelligence-- intelligence
is intelligence.
It's only in humans, therefore
it cannot be in other machines.
See?
I proved it.
Because he sort of flushed away
that mechanistic explanation,
but he denies that
he flushes it away.
Roger Penrose, on
the other hand,
at Oxford, he wants
everything to have
a mechanistic explanation.
He actually misunderstands
Gödel's theorem and Turing
computability.
And I think he does that
because he wants to maintain
the specialness of humans-- that
humans can prove theorems that
mere machines can't.
And if you read his
description of this,
there is a
misunderstanding there.
Gerald Edelman, by the
way, makes the same mistake
in his analysis
of the human mind.
So this is not uncommon--
looking at Gödel's
theorem and saying we're
better than Gödel's theorem.
So his conclusion is that
people can't be computers,
but he wants everything
to be mechanistic.
So people are more complicated
than ordinary machines
and consciousness is
mysterious, but he really
wants mechanistic explanations.
So what's he to do?
Well, he's a physicist, so he
says, well, quantum mechanics
is more complicated
than ordinary physics
and is mysterious--
hey, they must be
the same thing.
And really, I
don't think there's
anything more in his argument.
It's this wishing for
some other explanation.
David Chalmers, now at UC
Santa Cruz, a philosopher,
comes at it a slightly
different way.
He talks about consciousness.
And he's quite serious.
He did organize that
consciousness extravaganza
in Arizona last year,
but he is serious--
besides that.
And he talks about
mass, force, et cetera,
and natural physical kinds.
You can't reduce
them to simpler things.
They're just things
in the universe.
They're natural kinds.
They're stuff.
And his argument
is consciousness
is yet another natural kind.
It can't be reduced
to something simpler.
Therefore, there's no
need or way to explain it.
So he's maintained
the specialness
by having consciousness as
being some natural kind.
Well, that may be,
but I sort of doubt it,
because all the
other natural kinds
display some sort of
observable interaction.
So, if consciousness
were a natural kind,
we would expect telekinesis,
or some interaction
between consciousness
and other stuff, which
didn't quite fit.
We don't see that.
So I suspect that that's
not going to work.
Now, of course, I
think everyone's
guilty of this in other ways.
I'll tell you my own version
of this, my own folly.
If you look at an
engineered system
and you look at a
biological system,
people don't make mistakes
telling them apart.
Even young kids, pretty
much, don't make mistakes.
They get fooled by some things,
when they're very young,
but as they get a little older,
they get pretty sophisticated.
They don't make
mistakes and think
that's a live animal,
when it's really
a robot or something like that.
They can make that distinction.
Robustness, the generality,
the adaptability,
the domain of the performance--
it all gives it away.
So, it seems to me,
biological systems
are still fundamentally
different from almost all
our engineered systems.
So my version of looking
for the higher thing
is something I call the juice.
I really believe
this, by the way.
Is there something
different in life?
Not an essence of life,
in the normal sense.
My belief is we're
not going to have
to go outside of current
day physics or chemistry.
But in the same way
the idea of computation
changed what we
could think about--
before Turing came along
and formalized computation,
you could think
about certain things.
And after, you could think
about a whole bunch more things.
That notion of
computation wasn't
a change in the
universe, but it enabled
us to think about new things.
If you took a late 19th
century mathematician,
you could teach them
the fundamental ideas
of computation in a few days.
And they wouldn't be holding
their heads, saying, oh my god,
this can't be, this
is so foreign to me.
It wasn't very foreign.
It was just another idea that
fit on top of 19th century
mathematics.
But it completely enabled
thinking about "how to" knowledge.
So my version of
what we're missing
is there's some,
conceptual juice--
I call it-- waiting
for us to discover it--
some different way of
thinking about organization
and processes of
complex systems that
are in all these biological
systems, at all sorts
of different levels.
Whether it's at
the molecular level--
in the maintenance of a cell--
at the neural level,
at the genetic level,
or at the immunological level.
Some sort of organizing
stuff that's in there--
sort of like computers
inside mechatronic devices.
That's there, but
we just haven't
got a way of describing it yet
or a way of thinking about it,
so we never think to put it
in our artificial systems.
So this is my version of this.
By the way, when I first talked
about this-- two years ago,
in Switzerland, at a workshop--
that night, a graduate
student from Oxford
was sitting at the
dinner table with me.
And he said, oh, yeah, I wasn't
surprised to see you talk
that way today, in the talk.
I think those are the
sort of ideas a lot
of scientists have when they're
in the sunset of their career.
So that was an
aside, that last one.
Where am I going?
Let's go back a couple slides.
Where am I getting to?
I'm getting to-- what will
it take for us to be willing
to say that robots
can be afraid--
they can be viscerally afraid?
And I gave those
examples of what I think
is wrongheaded thinking
to see where we might go
wrong in thinking about that.
And one thing, it
seems to me, is
that, if we're really going to
be thinking about robots being
afraid, we're going to have
to have some sort of models
of emotions.
If we're to empathize
with robots,
we may need to be able
to identify, physically,
with them.
And that's why I built
this human shaped robot.
But we may need to be able to
identify emotionally with them,
too.
So can robots have emotions?
And there's been
quite a bit of work--
many people in this room
have done work on this--
in putting emotions
into robots or at least
into software agents.
And I just want to go
through and briefly recap
what I think are sort of
three different versions
of emotional models--
surface level
emotions, subsurface
emotions, and emergent emotions
are three different ways of
putting emotions into systems.
The surface emotional models
seem less satisfying, somehow,
in terms of things
having visceral emotions,
visceral feelings.
The primitive types are
directly the emotions--
happiness is a number.
Jim Albus even had love as
a 4-bit number in a paper
in Systems, Man, and Cybernetics,
only a couple of years ago.
And then external
events excite or depress
certain emotional levels.
These sorts of models
have been around
since the '60s, by the way.
And there may be lateral
inhibition mechanisms,
so that you can't be happy
and sad at the same time.
If you're really happy,
it pushes sadness down.
If you're really sad, it pushes
happiness down, et cetera.
And then there's some
external reflection
of the surface emotions.
And, in fact, that's
maybe all there is.
And that's what
this emotional model
is-- it's just a surface model.
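As a minimal sketch only, here's roughly what a surface model like that could look like in code-- the emotion names, the numbers, and the inhibition rule are all illustrative assumptions, not Cog's or any actual robot's implementation:

    # A surface-level emotional model: the primitive types are
    # directly the emotions, and each one is just a number.
    class SurfaceEmotions:
        def __init__(self):
            self.levels = {"happiness": 0.5, "sadness": 0.5, "fear": 0.0}

        def on_event(self, emotion, delta):
            # External events excite or depress an emotional level.
            level = self.levels[emotion] + delta
            self.levels[emotion] = min(1.0, max(0.0, level))
            self._inhibit()

        def _inhibit(self):
            # Lateral inhibition: you can't be happy and sad at once.
            h, s = self.levels["happiness"], self.levels["sadness"]
            self.levels["happiness"] = max(0.0, h - 0.5 * s)
            self.levels["sadness"] = max(0.0, s - 0.5 * h)

        def display(self):
            # The external reflection of the surface emotions--
            # and maybe that's all there is.
            return max(self.levels, key=self.levels.get)

    robot = SurfaceEmotions()
    robot.on_event("happiness", 0.4)   # someone plays with it
    print(robot.display())             # -> "happiness"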
Then the subsurface models--
here, there are some
inner drives and needs
that are the fundamental
types, and the emotions
sort of come out of
those inner drives
and needs being
satisfied or not.
So the level of satisfaction
of these drives or needs
determines the mood or
emotional state of the system.
And then the emotional
state may excite or inhibit
certain classes of
behavior of the robots.
And it may change
the way it operates.
And then there might also
be some explicit, external
reflections of emotions
designed just to exhibit that
to the external world.
So you know what
sort of mood the robot
is in, you know what sort
of behaviors it's probably
going to engage in, and so you
know how to interact with it.
A robot or a software agent--
if it's busy, it's harried,
it's looking over the net
for some dumb question
you asked it, don't ask it
another one right now.
So that's the subsurface model.
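Again, just as a sketch, here's roughly how a subsurface model might differ-- the drive names, set points, and thresholds are made up for illustration:

    # A subsurface model: the fundamental types are inner drives
    # and needs; the emotional state falls out of how satisfied
    # they are, and it gates classes of behavior.
    class Drive:
        def __init__(self, name, setpoint=0.5):
            self.name, self.setpoint, self.level = name, setpoint, setpoint

        def satisfaction(self):
            # How close the drive is to where it wants to be.
            return 1.0 - abs(self.level - self.setpoint)

    def mood(drives):
        # The satisfaction of the drives determines the mood.
        s = sum(d.satisfaction() for d in drives) / len(drives)
        return "content" if s > 0.7 else "harried"

    def allowed_behaviors(current_mood):
        # The mood excites or inhibits classes of behavior--
        # and tells an observer how to interact with the system.
        if current_mood == "harried":
            return ["finish-current-task"]   # don't ask it another question now
        return ["finish-current-task", "accept-new-request", "play"]

    drives = [Drive("social-contact"), Drive("stimulation")]
    print(mood(drives))                            # -> "content"
    drives[0].level, drives[1].level = 0.0, 0.2    # it's been ignored
    print(mood(drives), allowed_behaviors(mood(drives)))   # -> harried ...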
And then there are
emergent emotional models,
where there are still
internal drives and needs--
by the way, evolution put
these internal drives and needs
into us.
We're putting them
in our robots.
And changes in
these are reflected
in changes in excitation
or inhibition of behaviors
of the robot.
And this is not aimed at
showing an emotional response,
it's aimed at getting the
robot to satisfy the drives
or needs, which are
directing the robot to do
what it was designed to do.
So these behaviors that
get excited or inhibited,
they're all associated with the
primary mission of the robot
rather than as some explicit,
external, emotional display.
So there are no
emotional variables,
in the sense of having happiness
as a 4-bit number or whatever.
There are no explicit,
external displays,
but the observer attributes
the emotions to the systems.
And these are deeper
sorts of models.
They are much harder to do.
And no one's really,
I don't think,
done a successful
one at that level.
Here, in the lab and
around the place,
various people are working
on versions of this.
We have a pet robot.
This is Cynthia
Ferrell's child robot.
It's got eyebrows,
and ears, and stuff--
now it has lips, but I don't
have a photo of that yet--
to give emotional responses.
This is over at my
company, IS Robotics.
It's a robot called IT,
which had a very
surface-level emotional model.
And I want to show you
another emotional robot
from IS Robotics, which is
sort of a subsurface, almost
emergent level, emotional model.
This is a doll.
The neat part about
this is we think
we can manufacture it
for very little money.
But let me
show you this one.
And this is an advertisement
tape for the company,
so there's a little
bit of hype here,
which you'll excuse, please.
[VIDEO PLAYBACK]
[MUSIC PLAYING]
- This is Bit, our
interactive baby doll.
Like [INAUDIBLE], Bit has
invested intelligence which
allows for interactive play.
He understands bouncing,
sucking his thumb,
tickling, hugging, patty-cake,
and much, much more.
Like a real baby, Bit
not only has his moods,
but expresses them very well,
using his face and voice.
Bit understands the difference
between play, soothing
activity, and things
he doesn't like,
which all contribute
to his mood state.
The more you play,
the happier he gets.
But if you keep
playing for too long,
he will get tired and cranky.
[BABY CRYING]
Keeping Bit happy
is not always easy.
But to solve all
your problems, just
give him his favorite thing
in the world-- his bottle.
[BURPING]
[END PLAYBACK]
BROOKS: That's all been
done on an 8-bit processor.
It's not a lot of
computation behind there,
but it's engaging,
at some level,
by having this emotional model.
So why have these
emotional models?
It gives humans feedback
and, perhaps, it gives
other animals feedback.
Maybe the dog wants to
know when the garbage
robot is in a bad mood and
should keep out of the way.
It's intuitive,
high level feedback
on what the machine is
trying to do currently,
what its current set of
goals are likely to be.
That's the surface level.
But it also, inside,
provides a mechanism
for focusing the behavior of the
machine in some coherent way,
when it's got lots of competing
pressures on what it's trying to do.
And this is not too
dissimilar from what
Damasio talks about with
the frontal lobes, et cetera.
Let me show you
another videotape.
The important message here is
that the observer can really
get sucked in fairly
easily, I think.
This is Cog again.
And this is the work
of Robert [INAUDIBLE],
who's somewhere in the audience.
And what we've got--
just a second.
One of the other things
we're doing in Cog
is trying to localize sound.
So we see motions, we
saccade to the motions,
we learn that sort of mapping.
And as in the superior
colliculus in the human,
besides the visual motion
map that's coming in,
the oculomotor map, there's
also a sound map, an aural map.
And we learn
that coordination
between hearing a sound
and being able to saccade
to where the sound is.
When you put that on
this robot, suddenly it
starts to appear to
be a lot more engaged.
So it's saccading to
where it hears sound.
Now, hopefully,
there's going to be--
okay.
Now watch its eyes.
You can watch the
videos in the back.
As they talk, it sits
there, looking back
and forth between the two
people having a conversation.
And to a third
observer, seeing that--
look, it's there,
back and forth--
it seems to be engaged
in understanding what's
going on in that conversation,
even though it's not
understanding, in
any deep sense.
But we, as human observers, are
willing to grant the license
that it is doing this
higher level thing.
Emotional content may often
be in the eye of the observer.
And the level of
engagement may often
be in the eye of the observer.
And the observer, as we saw
with the earlier videotape--
where Cynthia Ferrell was
playing with the eraser with
the robot, back and forth--
becomes a component
of the dynamics
of the behavior of the robot.
And we, as humans,
seem to be built to--
we sort of can't help but
give the benefit of the doubt
to systems that
look biological
and to assume that
they're engaged
much more than they maybe are.
So as our systems
become more complex,
the engagement
will be longer term
and the illusion will
be shattered less often.
Let me give you a quote
from Sherry Turkle's latest
book, Life on the Screen.
Sherry came over to our lab.
And this was, actually,
a couple of years ago.
And when she walked in-- this
is from her book-- she said,
Cog noticed me soon
after I entered its room.
Its head turned
to follow me and I
was embarrassed to note
that this made me happy.
I found myself competing
with another visitor
for its attention.
At one point, I felt sure that
Cog's eyes had caught my own.
By the way, the colors here are
mine, not hers, in the book.
My visit left me shaken-- not
by anything that Cog was able
to accomplish-- because it
was just a few, fairly simple,
feedback loops--
but by my own reaction to him.
For years, whenever I'd
heard Rodney Brooks speak
about his robotic
creatures, I'd always
been careful to
mentally put quotation
marks around the word.
But now, with Cog, I'd
found the quotation
marks had disappeared.
Despite myself-- and
this is the important thing.
Despite myself--
she's a skeptic--
and despite my continuing
skepticism about this research
project, I'd behave as though in
the presence of another being.
So, it seems to me, that
as we build these systems,
whether or not we
put juice in them--
because that's just my personal
belief that they need this
juice--
they will appear, more and
more to us, to have emotions.
Maybe they've got this emergent
emotional thing going on, maybe
that's been designed into them,
to force that to happen to us.
And we will get
ourselves to the point
where we want to
attribute to the systems
things such as fear.
When viewed in its
complete context,
a robot can be just
as afraid as a person.
Well, we're all willing to
say a person can be afraid.
A chimpanzee-- can
they be afraid?
Yeah.
A dog?
They certainly
seem to be afraid.
Birds?
Yeah.
Lizards?
Beetles?
What about dust mites?
Do dust mites get afraid?
And they don't have
the same stuff as us.
I think it gets down
to the level of empathy
that we're willing to
give these systems.
So it's going to take us
another intellectual leap
to get beyond our own fears
of this lack of specialness
in order that we can finally
admit to such possibilities.
Some people will claim
that even a chimpanzee can
have no feelings, no fear, but
I think the most common belief
these days is
somewhere around here--
that's where fear, and
pain, and things go away.
So once machines can be afraid,
they will certainly have souls.
Once we're willing to
admit they're afraid,
then I think we'll have
to be willing to admit
they have souls, in just the
same way people have souls.
See, this is my
little trick here,
because I don't really
believe people have souls.
So in the same way you all
believe that people have souls,
you'll have to believe
that about machines.
But here's a question--
AUDIENCE: I think you were
getting emotionally involved
there.
BROOKS: I was.
But I do worry.
I do worry.
What can machines
legitimately be afraid of?
Biological systems have
their information content
locked into their cells.
Or they have, until recently.
This may change for us, too.
But in robots, that
information content
can be located offboard,
at least the equivalent
of the genetic material.
Darwinian evolution
had this urgency
to make us be self-preserving.
Because there wasn't a data bank
over there which had our stuff.
If we got killed, if we died, then
our biological gene material
died.
So Darwinian evolution had
to invent the need for fear
and, incidentally, souls.
But our robots are
not necessarily based
on Darwinian evolution.
It's Von Neumann evolution.
I made that up.
But it seems to me that
we can reproduce the robot
from this master copy.
It's a different
sort of evolution
than Darwinian evolution.
So robots may not have to
be afraid, in the long term.
And they could be nasty.
If they don't have to be
afraid, if they don't have
to be self-preserving,
they could be
rather nasty, different aliens.
But if we need to nurture them,
if they're complex systems,
and if we build robots
that need nurturing
to develop-- as a lot of
people are starting to work on.
Because, if we believe in
embodiment, that's
the only way we're going to
get the right information
content inside the head of
the robot-- which
will be very dependent on that
robot, its physical embodiment,
and the world it
interacted with to get there.
We'll feel we have some
investment in them.
We'll instill them
with fear, so they
don't go off, and do silly
things, and get hurt.
And their culture
that they develop
will, irrationally,
just as ours--
as I was talking about earlier,
this dual belief system--
continue to instill that
fear in future generations.
And finally, we will have built
worthy successors to ourselves.
And I'll take
questions, I guess.
AUDIENCE: I'd like to bring
up two examples in the history
of physics that
[INAUDIBLE] you would
be replaced by a few of you.
One was Descartes'
vortex cosmology.
He thought the planets could be
explained by interactive gears.
And then Newton came along
with his gravitational theory,
so it blew that
out of the water.
And then you also have
the ether hypothesis
of electromagnetic waves, where
people understood acoustics--
that had to have a medium.
So electromagnetic waves had to have a medium.
And, as a matter of fact,
that was replaced by a field.
And so I was just wondering
whether, maybe, your idea
of juice might be
equivalent to replacing
a mechanistic viewpoint
by something more--
BROOKS: Yeah.
Maybe.
Although I think David
Chalmers' fundamental type
of consciousness is also
a field-type replacement.
But I just don't see
any evidence for it.
The only evidence I have
for wanting this juice,
which I believe is some
mathematical construction,
is that I don't see the current
explanatory power of the gears
being enough to do it.
Yeah.
So maybe.
AUDIENCE: This is
very interesting.
But the robots seemed
to lack one thing--
and that's subjectivity--
to make them human.
BROOKS: What do
you mean by that?
AUDIENCE: The thing that's
going on inside of us right now.
BROOKS: So why can't
the robots have that?
Is it because it's special?
Because only humans
can have subjectivity?
If we assume a
mechanistic explanation,
then we can put that
mechanism in there.
We may be missing a
few technical details,
like the juice, but I don't
see anything, in principle--
AUDIENCE: What's going on in the
mind of the robot [INAUDIBLE]?
BROOKS: That's the
John Searle argument.
AUDIENCE: What?
BROOKS: That's a
John Searle argument.
That's the special stuff
that makes people special.
And since the robots don't have
that, they cannot be people.
It's that same,
circular argument
that John Searle makes.
You want it to be
special for humans,
and you're defining
it to be special,
and concluding it's special.
AUDIENCE: The most
wonderful thing about myself
is that I'm so [INAUDIBLE].
BROOKS: You're just a bunch of
little molecules messing about.
And it's only because I've got
this other, dualistic notion
that I don't just
come and squash you.
And I don't see
why we can't feel
the same way about our machines
and have that dual nature.
You want to have the specialness
be a little box in there.
It's the homunculus
argument again.
I don't buy it.
But I don't expect us to
be able to agree on that.
I don't think John
Searle and I can ever
agree on that,
because he's stuck
in his set of dogmatic
beliefs and I'm stuck in mine.
Yeah?
AUDIENCE: I'm wondering if one
of the subtle, implicit-- maybe
explicit-- values
of a computer is
things such as
efficiency, validation
of what they're doing.
BROOKS: Do you use
Microsoft software?
It's too cheap.
I can't.
AUDIENCE: And
finding explanations.
The mechanism is put together
to find explanations,
which, if so, would mean that--
BROOKS: I think when we're
building such complex systems,
we can't, any longer, look
for the explanatory level
that we used to be able to.
AUDIENCE: Then it would
be that, if not defined,
an answer would then mean
that it's no longer needed.
At such time, no further
explanations are needed.
But then a machine
that just says, now
that we have all
explanations, our only goal
is to self preserve.
That is it.
So either it is just
self preserve or explain.
There are no values that
can be attached [INAUDIBLE].
BROOKS: Oh, yeah.
We could build in that
it should be nice to people
and clean up their trash.
We can build that in as a drive.
AUDIENCE: Oh, okay.
Then you're saying that one
of the things that you built
into the [INAUDIBLE] thing is
the necessity of human
beings--
which then would mean that
they wouldn't be replaced.
BROOKS: Yeah.
They might change
things, after a while,
but we could make ourselves feel
good for a while by doing that.
Up there.
AUDIENCE: Yeah.
The thing is that the
process of evolution--
for individuals in
evolution-- is to reproduce,
carry on your DNA.
BROOKS: I think that's
an emergent property
in the system.
AUDIENCE: Oh, you
think that's emergent?
I was going to say that
if people just reproduce,
then they shouldn't
be afraid to die
because their important
information has been passed on.
BROOKS: Oh, yeah.
But I feel like I have to look
after my kids for a while--
until we get them through
college, stuff like that.
AUDIENCE: So once they go to
college, you can [INAUDIBLE].
BROOKS: Yeah.
In fact, there's
been recent papers
about why human women
live so long after menopause.
And there's been a
bunch of recent papers
about the grandmothers
who aren't done with
their children-- they
can continue and help--
AUDIENCE: One of the
things that robots
may have a different
[INAUDIBLE],
even though all
their information now
was something else.
Maybe they would have some other
reason to want to be preserved.
BROOKS: We could
build that into them.
AUDIENCE: No.
Even with that, it
might be emergent,
like it is for us
to be afraid after--
BROOKS: Yeah.
Once you get a
complex system, what
the emergent consequences will
be is sort of hard to predict.
So yeah.
I think I agree with you.
I think I agree with
you, but I'm not sure.
AUDIENCE: Rod?
BROOKS: Yeah?
AUDIENCE: It seems
to me you might
have pulled off a trick here
in answering the questions.
BROOKS: Rats.
You found me.
AUDIENCE: Essentially
by redefining emotions--
BROOKS: Oh, now you're trying
to pull off a subtle trick.
Go on.
AUDIENCE: [INAUDIBLE].
Let's see.
If emotion is something
I attribute to you--
it's what I experience-- and
if emotion is a handy way
to describe your behavior,
it becomes a [INAUDIBLE].
Fear is a [INAUDIBLE]
for certain--
BROOKS: For a whole
bunch of stuff.
AUDIENCE: It's
observable behavior.
Yes, that's fine.
If that's what emotion
is, then that's fine.
And in that sense-- to
get back to the question--
I think the argument that
comes is this issue of emotion
as an experiential phenomenon.
I'm the only one who can
report on my experience.
So when I say, I know what
fear is, I'm saying something
about an internal experience.
And I can't even say that
I think you experience fear
because, to the extent
that I'm talking
about the internal
experience part that--
BROOKS: Which goes back
to this, perhaps, cargo
cultish sort of thing of
building the physical body,
so it has the same
sorts of experiences,
because then we think it's going
to turn out to be the same.
And we'll be happier that, in
fact, it is experiencing fear.
AUDIENCE: And I agree
with your suggestion,
early on, that maybe you're taking
a cargo cult kind of approach
to this and then justifying it
by redefining the [INAUDIBLE].
BROOKS: Probably
the version I used
was based on it not
being cargo cult stuff.
Yeah.
I think we're still right
in building the human form.
I think that's why
dolphins are aliens to us.
We don't really understand
them because they're not
experiencing the
world in the same way.
So I may be wrong.
AUDIENCE: It's not about
building humanoid robots,
it's about answering the
question of whether robots
are to have emotion.
You've taken the
definition of emotion
and put it outside, as an
external definition phenomenon.
And by an external
definition, sure, we
can make it happen,
because we will
use that as an abbreviation.
And we see it in
all sorts of things,
regardless if it's
related to robots.
We attribute emotions to
things-- my car is really
moody this morning.
BROOKS: No.
But my car will
do transportation.
It won't do it in the same
way as a horse does it.
And so it's a level of
abstraction argument.
If we're willing to accept
that you can simulate something
with a different
level of abstraction,
without going all
the way down, then I
think you have to admit
the possibility that this
might work.
To guarantee it is
a much harder thing.
AUDIENCE: We didn't intend to
turn this into a [INAUDIBLE].
BROOKS: Our offices
are a few feet apart,
so he never gets the
chance to do this.
AUDIENCE: We're actually in
opposite corners of the office.
It depends on whether
you're talking
about the internal
phenomenon or the external.
It isn't just a
level-of-abstraction argument.
BROOKS: Okay.
AUDIENCE: Well,
actually, [INAUDIBLE].
Because you said
there's a redundancy.
If you thought about
a two way dialog--
just think about it.
Is there any dialog
that doesn't have
two [INAUDIBLE] it comes from?
BROOKS: I've been involved
in some, on both sides.
AUDIENCE: With this definition
of emotion as being one--
BROOKS: With the time here--
there was a hand back there,
for a while.
AUDIENCE: So what
do you hope to learn
from building humanoid robots?
Do you think your
juice will, somehow,
be non-homeomorphic
with folk psychology?
That you'll get some other
level of explanations?
I mean, we use psychology
to predict human behaviors--
BROOKS: I think the juice is
at a totally different level
from folk psychology.
But the question of what I hope
to achieve by building Cog--
a couple of things.
Certainly, I don't expect
that that, by itself, is going
to make the juice pop out.
The juice is an
intellectual thing
that is a totally different
sort of investigation.
By building the humanoid robot,
we get to do a few things.
We get to play with people's
scientific, psychological
theories, and try and
implement them on the robot,
and find out how they fall
down, what pieces are missing,
because most psychological
theories are, actually,
built in isolation.
They connect to a few conference
papers around here and there,
but they don't have to connect
to all the other pieces.
There's a lot of hand
waving that goes around.
So by trying to
put them together,
we find out what's missing.
We also find out
the conclusions that
may be made in the
experiments-- where someone
has an experiment, and
they have a certain result,
and they say, therefore,
it must be the case
that inside this six
month old infant's head is
this particular set of stuff.
If we can reproduce
that experiment
without putting that stuff
inside the head of the robot,
then we can get
a negative result
for the psychological theory.
So there's the negative
psychological results
and then there's
the thing of trying
to find out where the
missing parts are,
as we put stuff together.
AUDIENCE: And the
robot is alive?
BROOKS: Yeah.
That's exactly what I said.
AUDIENCE: Is the robot alive?
Or is it just behavioral--
BROOKS: It's as alive as
you are, in the future.
Yeah?
AUDIENCE: This
feeling that we have
that we're special is an
expression of our narcissism.
And we need therapy
to learn to recover from it.
But as we create
these robots, we're
going to have therapy
tailored to them.
BROOKS: Well, very few of us
actually have that therapy.
And we're getting along.
AUDIENCE: I mean, there's
something about the human that
is, basically, narcissistic.
And so, in order to
create a human robot,
wouldn't it have to be, in
some instances, narcissistic?
BROOKS: That's an
interesting point.
I haven't thought about that.
There was a question
back there, somewhere.
I don't know.
How long are we supposed
to be going on here, Anna?
ANNA: You can go on for five
or 10 minutes, if you want to.
BROOKS: Maybe people
want to leave.
Yeah?
AUDIENCE: One would think
that, to answer the question
of whether robots are afraid,
you would have to answer:
how do you know
when you're afraid?
Say, from a
scientific viewpoint,
scientists would say, how
do we know what we know?
How do we know
when we're afraid?
BROOKS: We make
certain assumptions.
They are those built-in,
dogmatic assumptions
we're not willing to admit to.
And that's how we have
our scientific explanation
to ourselves.
I don't see why that can't
be the same for the robots.
Push, you had your hand
up for a while there.
AUDIENCE: Yeah.
I just wanted to
complain about the juice.
I just can't stand it.
[INAUDIBLE]
BROOKS: This is this old guy
who thinks that, maybe, there's
just one critical thing missing.
And it's just like the
idea of computation,
but it's something different.
And if only we had that, then--
AUDIENCE: [INAUDIBLE].
BROOKS: Oh, yeah.
But one as good as computation
would help us for a long time.
Yeah?
AUDIENCE: Computers--
or at least
[INAUDIBLE] computers--
at a very granular level,
[INAUDIBLE], are deterministic.
Do you feel that human beings,
with neurons and synapses,
where molecules and
neurotransmitters
decide whether or not
the signal's [INAUDIBLE]?
Are you saying that you
feel that humans or living
things are deterministic?
BROOKS: Well, you
certainly see that, even
in a lot of the
basic, biochemistry,
there's a lot of quantum
tunneling and stuff like that.
So is that deterministic?
That's a tricky question.
I think this deterministic thing
though is a bit of a canard.
It's certainly what Gerald
Edelman got stuck on.
And he said the problem
with these digital computers
is they're deterministic.
So he had a robot--
this was when he was at
NYU, before he moved out
to San Diego--
and had a Cray
supercomputer to run it.
And he put a pseudo
random number generator
on the computer.
So it was no longer
deterministic.
He writes this in the book.
And therefore, now, it's
beyond normal computation.
But it was a standard, pseudo
random number generator.
Actually, I think
Lynn Stein's talk
is going to be about
that topic, right, Anna?
Yeah.
So I'll put it off until
later in this series.
ANNA: Three weeks.
BROOKS: Three weeks.
Yeah?
AUDIENCE: Can you talk about
fear and whether robots
should [INAUDIBLE]?
Perhaps they need to
because everything's
going to be downloaded
to an external source.
For a lot of theology, that
would be equivalent to saying,
download somebody's soul.
And then one of the
problems with this idea
that the soul could
be downloaded,
separate from the body, you're
creating a body [INAUDIBLE].
Are you sure that it's okay
to just download the software
portions of the robots?
BROOKS: Well, no.
In fact, my embodiment approach
sort of argues against that.
But there's certainly
a lot of people in AI--
Marvin Minsky, Hans
Moravec, to name some--
who talk about downloading
the information
content of the human brain
into some other form,
and, therefore,
continuing existence.
And this is in their quest
for eternal life, which
is yet another dogmatic,
religious sort of belief.
AUDIENCE: So wouldn't your robot
have a sense of preservation
of its physical [INAUDIBLE]?
BROOKS: I don't know.
When we're not under the
pressures of evolution,
we might be able to tweak
things in a different way
and not have that.
It depends, it seems
to me, on how much
of the physical experiences
the robot has in the world
to develop itself.
It depends on the details
of its particular mechanical
instantiation.
Tricky questions.
I don't know.
Yeah?
AUDIENCE: I've done it
many times on [INAUDIBLE].
And it's always down or off.
BROOKS: Yeah.
AUDIENCE: And for a person,
that would be the cruelest kind
of torture-- to be paralyzed.
So I wonder, if
you really intend
that you're treating
this thing that way,
why do you turn it off so much?
BROOKS: No, that's a good point.
I think we will have
succeeded in making
it something when we feel
bad about switching it off.
We don't feel bad about
switching it off right now.
That little
doll that you saw--
you give that to a kid
and, first off, they're
scared of it.
But after someone
shows it to them,
they start playing with it and
they're very careful with it.
But one of the adults
or one of the engineers
who built it-- like
me-- come up to it,
and we just grab it by its
feet, and wave it around,
and show how upset it gets.
So it depends on how
engaged you become with it.
What we've built
so far with Cog,
it's not engaging enough to
make any of these sorts of things
happen.
But, in principle,
I don't see why--
given my dogmatic belief of
a mechanistic explanation
for everything, I don't
see why that cannot happen.
But we're not
there yet, I agree.
Yeah?
AUDIENCE: What do you
think [INAUDIBLE]?
BROOKS: I think I'd be so happy.
I mean, that would be wonderful.
AUDIENCE: Aren't you
then acknowledging
that there is a creator
that's just being denied?
BROOKS: Sorry?
AUDIENCE: Aren't you
then acknowledging
that there is a creator that
is then just being denied?
BROOKS: Cog isn't created.
Yeah.
But just because Cog does it,
doesn't mean that I'm doing it.
AUDIENCE: Yeah.
I guess my question
or thoughts on that
is kind of in two parts.
One is that you keep talking
about the dogmatic assertions.
And I guess my
question is, is there
anything that
would lead us to be
able to go past our dogmatic
assertions [INAUDIBLE]?
And, in particular, if
we give up this idea
that, somehow,
humans are special,
what are the
implications for that
for how we run our society?
BROOKS: Well, that's
what I pointed out.
I think many people--
scientists and all of us--
do have these dual systems
that we operate in.
And so I'm not
worried about that,
because I feel like I,
for instance, am already
operating in that dual mode.
So I don't see that
as a great problem.
AUDIENCE: So you're saying that
we'd live [INAUDIBLE] lives
and that's simply the
way it is going to be?
BROOKS: Yeah.
AUDIENCE: And what will
be the implications
if we have artificial
[INAUDIBLE] in that regard?
Do we accrue them rights
or do we just kind
of say, well, we
made you or whatever?
BROOKS: I think that gets
to this point of us being
careful how we build them.
And I think I used up the
extra 5 or 10 minutes.
ANNA: I just want to interrupt.
I'm really sorry that I have
to interrupt at this point.
Rod, thank you so much.
It was a wonderful talk.
