[applause]
Hi, my name is Melody and my concentration
here at Gallatin is the history and
philosophy of intelligence.
So today, I'm going to talk to you all a little bit
about both human intelligence and
artificial intelligence, and the goal is
that by the end of my talk, I'll be able
to explain to you why this, a Roomba, is
actually really intelligent.
So I'm going to start off by asking all of you to
think of someone or something that you
think might be intelligent.
Chances are you thought of
someone like this: Albert Einstein.
You might have thought about the world's
first computer programmer, Ada Lovelace.
There's a chance you might have also
thought about the brain.
Or if you went down a more
futuristic path, there's also a chance
you might have
thought about some sort  of robot.
Well, the thing that all of these pictures I just showed you,
and most traditional definitions of intelligence, have in common
is the fact that they all relate to the mind.
This is a commonly held assumption that most people
have today, and it was certainly the case that this was a
commonly held assumption that most
people had
when artificial intelligence as a field was just coming together.
So let me give you a little bit more historical context.
The field of AI as a scientific research field is actually relatively new.
It didn't really come around until the mid-1950s
and early 1960s, and the term "AI" was used
for the very first time in 1956 during a
conference that was held at Dartmouth College.
This is also really interesting because
this is the same decade that is immediately following
the Second World War, and while people normally
associate the Second World War with advances
in atomic weapons research, the fact is that there
were a lot of scientific and technological advancements
during this war that were really instrumental in
forming artificial intelligence as a field later on.
So let me give you a few  examples.
On our left here we have Alan Turing,
who's really well known for his work with
British intelligence.
He was instrumental in helping the British
decode German ciphers produced by
the Enigma machine,
but, at the same time this is also when
he starts to come up
with this idea of a Turing machine, which is a hypothetical machine,
meaning he never actually built one,
that consists of just two parts.
One is an infinitely long tape divided into cells,
and the second part is just a table of rules.
And it's this hypothetical machine that he came up with
during this time that serves as a basis for
all modern day computers that we have now.
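To make that idea concrete, here is a minimal sketch of a Turing machine in Python: just a tape and a table of rules, nothing more. The particular rule table shown (one that flips every bit of its input and then halts) is my own illustrative assumption, not any specific historical machine.

```python
def run_turing_machine(tape, rules, state="start", head=0, max_steps=1000):
    """Simulate a Turing machine: a tape plus a table of rules.

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left), +1 (right), or 0 (stay).
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")  # "_" stands for a blank cell
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# A tiny rule table: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}
print(run_turing_machine("1011", flip))  # -> 0100
```

Despite its simplicity, this tape-plus-rules scheme is enough to compute anything a modern computer can, which is why it serves as the theoretical basis for them.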
On our right we have John von Neumann,
a mathematician who was really instrumental in helping
with things like the Manhattan Project.
But he also published this book called
The Computer and the Brain, in which he
is beginning to explore this analogy
that's starting to show up with comparing
those two things together.
And unsurprisingly, as computer models developed
and improved over time, following the war, this analogy
that von Neumann starts to explore in his book,
between the computer and the brain,
becomes more and more dominant
and popular, especially with AI researchers.
AI researchers during this time were
really focused on a goal-based sort of intelligence,
so they really began to see the computer and the brain
as two different types of logic machines
that followed a really similar three-step process
when trying to solve problems.
For humans the three steps are
perception, reasoning, and action.
And then, for computers, it's also a really similar
three-step process, where it's getting
input of data, processing that data,
and then getting some sort of output.
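As a rough illustration of that parallel (my own toy example, not an actual program from the period), the three-step loop can be written as a literal input-process-output pipeline:

```python
def perceive(raw_input):
    # Step 1: take in data from the world.
    return raw_input.strip().lower()

def reason(percept):
    # Step 2: process the data into a decision.
    return "greet" if percept == "hello" else "ignore"

def act(decision):
    # Step 3: produce some output, i.e. behavior.
    return {"greet": "Hello there!", "ignore": "..."}[decision]

print(act(reason(perceive("  Hello  "))))  # -> Hello there!
```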
Now, the AI research that was happening during
the 50s, 60s, and even 70s was really
centered around three different institutions,
which were Carnegie Mellon, MIT, and Stanford.
And one of these computer programs
that came out of the three institutions
was this program called SHRDLU.
SHRDLU was developed by Terry Winograd at MIT
in the late 1960s and early 1970s, and SHRDLU
consisted of two different parts.
As one of the earliest natural language processing
computer programs, it had, one, a microworld,
which is what you can see behind me now,
consisting of blocks of different shapes,
colors, and sizes. The second part of SHRDLU is
a robotic arm that it could control to move
these blocks around within the microworld.
So, you could be sitting at a computer at
MIT and you could have a conversation with SHRDLU.
And you could type in,
"SHRDLU, can you move that red block and place it on top of the green one?"
SHRDLU would respond saying whether or not
it could, and if it could, it would use its robot arm to do so.
Despite the fact that there was so much
initial excitement and interest in
programs like SHRDLU, AI researchers began to realize that they
would actually run into a few problems
with programs like this.
And the first of these problems is the fact that SHRDLU
failed to generalize beyond its microworld.
When it's trying to make a
plan in order to move the blocks, it
works totally fine, but if you wanted to
add a few more blocks, or if you wanted to
add objects within its microworld that
are more than just blocks, researchers
realized that even with the most minor of adjustments,
the complexity of the code that they needed to write
actually increased exponentially, and at some
point, they realized that they would hit a wall.
It's just too hard to make the code more and more complicated.
The second issue is that programs like these fail to be flexible.
And when I say flexible, I really mean that they fail
to adapt to changes in their environment.
So in the example we were using before, where SHRDLU
had to move that red block and place it on top of the green block,
if for some reason you were to go into the program
and just manipulate it so that the red block
was misplaced slightly from its original location, SHRDLU
would still try to reach for that block in the same place
that it thought it was, and when it didn't find it there,
it would just crash. It would have no idea
what to do. It couldn't adapt to
real-time changes in its environment.
So what can we learn from programs like
SHRDLU
and other AI programs during this time?
Well, I think it actually shows us that
intelligence involves much more
than just the mind.
In reality, intelligence also has an
aspect to it that has to do with being flexible.
You need to be able to adapt to your environment
in order to be intelligent.
But the really fascinating thing about humans
that programs like SHRDLU don't need to worry about
is the fact that we have bodies.
We have physical bodies that we need to have
control of in order to manipulate things in our environment.
But, at the same time, while we do have
these bodies, we're also constrained by them.
So, if intelligence also has to do with learning this fit
between our bodies and our constantly
changing environment, how can we study
how this ability
is developed over our lifetime?
Well, at the Infant Action Lab, a motor development lab
right here at New York University,
we found that a hammering task where we ask participants
to just pick up a hammer and pound on
a peg, lets us do just that.
Let me explain to you what I mean.
So in the back here we have the hammering task
that we showed the adults and the kids,
and when adults reach to pick up the hammer
to pound on the peg, they usually reach
for it
using some sort of overhand grip.
This is what we call a habitual grip,
like you can see in the video behind me,
because this is something that you can
do almost instinctively;
you don't really need to put much thought into doing it.
And in this case, this is also the most
efficient grip to use, because it allows
for the easiest transition
into actually using the hammer to pound down the peg.
But, if we do something as simple as just
changing the direction of the hammer
that we place in front of you,
the most efficient grip to use in that scenario changes as well.
So in this case, instead of using that overhand grip
that I showed you earlier, the best grip to use
is actually an underhand grip instead,
and this shows that adults understand
that there was a change in their
environment, and they can adapt their
behavior to it. But, in the same hammering task
that we show to adults, kids even up
to four years of age have actually shown
great levels of inefficiency in doing it.
Some kids will pick up the hammer the
same way that the adults do,
switching between the overhand grip and the underhand grip.
Some kids will only sometimes change to an underhand grip.
And then some kids, like the one you can
see in the video behind me, will always use
that overhand grip, regardless of what direction
you put the hammer in front of them.
So now that we know that 4-year-olds show
great levels of variability, what are the reasons,
what are the sources of this inefficiency
and these differences between the kids?
Well, we used the same hammering task,
and our lab just added a bunch of different
technologies to it. So our lab is run by
Dr. Karen Adolph, and during the task,
what we did was, we actually added three different technologies.
We added a head-mounted EEG cap
so we could record their neural activity.
We also had them wear these head-mounted
eye-tracking glasses, so we could see
exactly where they were looking.
And finally, we put the kids in a motion tracking sleeve
so we can see exactly when and how
their hand is moving at all times.
And when you put it all together it actually looks a
little something like this. I know it might seem a little
intimidating to see the kids do this
task, but I can promise you that all the
4-year-olds actually had a lot of fun.
We told them that they were robots
during the task, so that's actually how
we got each of these kids through sixty
trials with the hammering, which is really amazing
even for a motor development lab.
So what turned out to be different?
What did we find was different
between the kids who hammered efficiently
and those who didn't? Well, surprisingly
we actually found that there were
differences in multiple areas.
He's smiling, right? He's having fun.
With the EEG cap we found that there were
different patterns of neural activity
across the trials that were performed by
kids who would behave efficiently and
those who didn't.
With the glasses we actually found that kids looked at
different things too. The kids who used
the efficient grip would look particularly at the hammer or the peg
before moving, whereas the kids who were
inefficient would look pretty
indiscriminately at all parts of their
environment.
Finally with that motion tracking sleeve,
we found that kids actually moved
differently too. The kids who would
pick up the hammer efficiently usually
reached for the grip in one smooth,
continuous motion toward the hammer,
whereas the kids who were inefficient at
hammering
sometimes would go all the way up, then go down,
sometimes would just start down, and they would do all sorts
of erratic motions in between as well.
So what can we take away from the study?
Well, going back to that working
definition of intelligence
that we had before, I think that it actually shows us
that there's another layer for our definition
that we hadn't thought about earlier.
And that is that intelligence, or the ability
to behave intelligently, or in our case, efficiently,
isn't just the result of one simple process or quality that
you have; it's not an all-or-nothing kind
of thing.
Instead, the ability to do something that
we see as trivial as just picking up a hammer
and pounding down a peg is actually the result of multiple
systems coming together at once,
and working in a timely fashion for you
to be able
to succeed at something as simple as that.
The 4-year-olds in our task not only had to pay attention
to the task, but they had to pay attention to the right
things in the task. They had to be able
to process
all that information into a working motor plan,
and then finally, they even had to have the fine motor skills
in order to put all of that together
and pick up the hammer the way that they
intended to in the first place.
So, now that we have this definition of
intelligence here that is far more
complex and complicated than
just having to do with the mind,
why did I show you that picture of a Roomba in the first place?
What does that have to do with my talk
and hammering? Well that is a great question.
Not only can you dress up a Roomba
like your favorite Star Wars character
and have it roam around your house,
or post videos on YouTube of your cat riding one around
while wearing a shark costume.
But Roombas actually represent a lot of
those qualities
that we were talking about a little earlier.
For instance, Roombas do sort of
come with a kind of mind, and when I say
a sort of mind or a kind of mind, I mean
that a Roomba doesn't necessarily come with a
blueprint of your house;
it doesn't know that you have your cat's bed in your bedroom,
or that your living room is 100 feet wide.
But what it does come with is a very simple set of behaviors
or responses that it should display when
it comes across different stimuli that
it can find in your house.
For instance, a Roomba knows that if it's
roaming around and it comes across a wall,
it needs to slow down and turn its
wheels at a certain angle and at a certain speed
so it goes in a different direction and doesn't
just crash. It also knows that if its sensors
find something in front of it, it needs to slow
down so its vacuum actually has time
to do what it's supposed to do,
which is vacuum your house.
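Those stimulus-response rules can be sketched as a purely reactive control loop. This is a hypothetical illustration of the behavior-based idea, not iRobot's actual firmware, and all the sensor names and speed values here are made up:

```python
import random

def roomba_step(sensors):
    """One tick of a reactive controller: no map of the house,
    just stimulus -> response rules checked in priority order."""
    if sensors["bumped_wall"]:
        # Back off slowly and turn away so we don't just crash again.
        return {"speed": 0.2, "turn_degrees": random.uniform(90, 180)}
    if sensors["object_ahead"]:
        # Slow down so the vacuum has time to do its job.
        return {"speed": 0.3, "turn_degrees": 0.0}
    # Default behavior: cruise straight ahead.
    return {"speed": 1.0, "turn_degrees": 0.0}

print(roomba_step({"bumped_wall": False, "object_ahead": True}))
```

The point is that nothing in this loop looks like a plan or a blueprint; the apparent intelligence emerges from a few simple rules meeting a complicated house.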
And it's through these really simple interactions
with the world, and learning how its body should function in
its environment, that these really small
systems within the Roomba,
which control individual parts of it like its wheels,
its sensors, or its vacuum, can work
in tandem and create some really complex
behaviors
that are way more intelligent
than just simple vacuum cleaning.
So, now that I've walked you through this and we've talked
about why a Roomba shows those characteristics of
intelligence that we talked about, I also
think that it's really important to bring up
the fact that those weren't the
only factors that we could have been
discussing. We could have talked about
so many other things
when trying to define intelligence,
like creativity, pattern recognition, spatial skills,
or maybe even empathy. And those are all
really important characteristics that we've discovered
over time that make us, as humans,
intelligent, and that we most certainly
need to apply when trying to create artificially intelligent
agents now, and most importantly, in the future as well.
Thank you.
[applause]
