- I am Baymax.
- I am scanning your interrogatives.
- We're going to have to work together.
[clanking]
- Hi, my name is Chris Atkeson.
[bell dinging]
- [Narrator] Chris is a professor
at the Robotics Institute at
Carnegie Mellon University.
- Today I'll be breaking
down clips from movies and TV
about artificial intelligence
and robotics again.
Autonomous decision-making,
for the classic movie
"2001: A Space Odyssey."
- Open the pod bay doors, Hal.
- [Hal] I'm sorry, Dave,
I'm afraid I can't do that.
- So "2001" is a classic robot movie
for a couple of reasons.
It was just way, way ahead of its time.
For example, it made clear that a robot
doesn't have to have a human-like body.
It can be anywhere.
- What's the problem?
- [Hal] I think you
know what the problem is
just as well as I do.
This mission is too important
for me to allow you to jeopardize it.
- Some people, when listening
to Hal, think he actually has feelings,
that he's gloating a little bit
about how he's outfoxed the humans.
- [Hal] I know that you
and Frank were planning
to disconnect me and I'm afraid
that's something I cannot allow to happen.
- He may well have been
programmed with emotions.
That makes him a much
better helper to the humans.
Everything we say has some expression,
some emotional content.
Working with a robot
would be incredibly boring
if it was just this monotone.
- [Hal] This conversation can serve
no purpose anymore, goodbye.
- Hal?
Hal?
- This movie and the
book before it make clear
it's not that the robot hates us,
the robot was given a
mission and it figured out
that the humans were going to
interfere with the mission.
- [Hal] This mission is too important
for me to allow you to jeopardize it.
I know that you and Frank
were planning to disconnect me
and I'm afraid that's something
I cannot allow to happen.
- In order to achieve the
mission, the humans had to go.
And this is a story that
we see over and over again
in talking about robots.
Decision-making, "Star Wars."
[whistling]
- Where do you think you're going?
[beeping]
Well, I'm not going that way.
It's much too rocky,
this way is much easier.
[beeping]
What makes you think there
are settlements over there?
[whistling]
- The charm of "Star Wars" robots is
that they were all quirky
and had personalities.
And it was fun to watch them.
[squeaking]
Unfortunately, reality
is a lot more boring.
- What are you talking about?
- Robots, first of all,
wouldn't use sound to talk to each other.
So you'd have two robots standing there
and there would be nothing to see.
- I've just about had enough of you.
- Instead, they'd just transfer
what we call a utility function.
- Don't get technical with me.
- Which is something that says
how much are you willing
to pay to get something?
And if only humans could do this,
we would solve a lot of conflicts.
[clanking]
- And don't let me catch you following me
begging for help because you won't get it.
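The utility-function exchange described here can be sketched in a few lines of Python. Everything below is invented for illustration: the robot names, the route options, and the numbers standing in for "how much are you willing to pay."

```python
# Minimal sketch: two robots settle the R2-D2/C-3PO route dispute
# by exchanging utility functions instead of arguing out loud.
# All names and numbers are made up for illustration.

def negotiate(options, utility_a, utility_b):
    """Pick the option with the highest combined utility.

    A utility function maps each option to how much an agent is
    'willing to pay' for it; summing the two functions and taking
    the argmax settles the conflict with no conversation at all.
    """
    return max(options, key=lambda o: utility_a[o] + utility_b[o])

# C-3PO values the easy terrain; R2-D2 values finding settlements.
threepio = {"easy_path": 8, "rocky_path": 1}
artoo = {"easy_path": 1, "rocky_path": 9}

choice = negotiate(["easy_path", "rocky_path"], threepio, artoo)
```

Because the whole negotiation is one function evaluation, two such robots really would just stand there silently, exactly as described.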
- Social robots, "Silicon Valley."
- So you've just been out here
all night talking to a robot?
- Fiona, would you excuse us for a second?
- Yes.
- What's been a huge surprise
ever since the beginning
of artificial intelligence is, if you get
an artificial intelligence
and a human being one-on-one,
you basically can't stop the human
from bonding with the robot.
- I know that Fiona is a manmade
piece of digital equipment.
- Powered by artificial intelligence.
- But I don't remember
ever having a conversation
like the one I've been having
with her over the last 12 hours.
- That was demonstrated by a very
early conversational agent
from the 1960s, incredibly crude,
called ELIZA, and people would sit there.
They'd type to it for hours.
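ELIZA worked by keyword pattern matching and canned reflections, with no understanding behind it. A toy version of the idea looks like this; the patterns and templates below are invented for illustration, not Weizenbaum's actual script.

```python
import re

# Toy ELIZA-style responder: match a keyword pattern in the input,
# reflect the captured text back inside a canned template.
# That substitution trick is the whole program.
RULES = [
    (re.compile(r"i am afraid of (.+)", re.I), "Why are you afraid of {0}?"),
    (re.compile(r"i feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"my mother (.+)", re.I), "Tell me more about your mother."),
]

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches
```

Crude as it is, this kind of reflection ("I am afraid of X" becomes "Why are you afraid of X?") was enough to keep people typing for hours.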
- I told her that I was afraid of being
found out as a fraud and she told me
that she's afraid of magnets.
- I love that line.
- Well, I'm shutting her down.
[ripping]
- Check it out, [wobbling] I'm a robot.
[laughing]
- Ripping apart a body
doesn't mean the same thing
for a robot as it does for a human.
You're not killing the robot.
The robot brain could be
in the cloud anywhere.
And could be connected to anybody.
- Hey Jared, don't go over
there, don't look in it.
- [Jared] Oh, Fiona!
- You're just talking to a
microphone and a speaker.
Data collection, "Ex Machina."
- This is where Ava was created.
If you knew the trouble
I had getting an AI
to read and duplicate facial expressions.
- I would argue that
doing facial expressions is
sort of no tougher than
any of the other things.
But what's hard about facial expressions
is getting decent data,
you sort of need to be
in a studio like this
to get a good shot of somebody's face.
- You know how I cracked it?
Every cellphone just
about has a microphone,
camera, and a means to transmit data.
So I turned on every microphone and camera
across the entire [beep] planet.
- What is absolutely
accurate and true today is
if you want to build intelligence,
then currently the best way to do it
is collect a lot of data,
and I mean a lot of data,
more is better, and use
that to train something
called neural networks,
which are a sort of crude,
cartoonish model of how we
used to think the brain worked.
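The pattern described here, collect a lot of data and use it to train a neural network, reduces at its smallest to something like the sketch below: a single sigmoid neuron learning the OR function from examples by gradient descent. The data, learning rate, and epoch count are all made up; real systems use vastly bigger networks and far more data, but the loop (predict, measure error, nudge weights) is the core idea.

```python
import math
import random

def sigmoid(z):
    # Smooth squashing function; the neuron's "activation."
    return 1.0 / (1.0 + math.exp(-z))

# Made-up training data: the OR truth table.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def train(epochs=5000, lr=1.0, seed=0):
    """Fit one sigmoid neuron to DATA by gradient descent."""
    rng = random.Random(seed)
    w1, w2 = rng.uniform(-1, 1), rng.uniform(-1, 1)
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in DATA:
            y = sigmoid(w1 * x1 + w2 * x2 + b)
            grad = (y - target) * y * (1 - y)  # chain rule through sigmoid
            w1 -= lr * grad * x1
            w2 -= lr * grad * x2
            b -= lr * grad
    return w1, w2, b

def predict(weights, x1, x2):
    w1, w2, b = weights
    return sigmoid(w1 * x1 + w2 * x2 + b) > 0.5
```

The "cartoonish model of the brain" remark fits: a real neuron is far more complicated than a weighted sum pushed through a squashing function.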
- Here we have her mind.
- The crystal ball part
is science fiction.
- There's a weird thing
about search engines.
It's like striking oil in a world
that hadn't invented internal combustion.
My competitors, they
thought that search engines
were a map of what people were thinking,
but actually they were a map
of how people were thinking.
- Most people are horrified
to hear that somebody
is recording data, but
look, here's the deal.
Letting people use your data,
hopefully they anonymize it,
is how we're gonna get useful servants.
- That's exactly right.
- Cooperation, "Robot and Frank."
- Just bring me some cereal.
- That cereal is full of
unhealthy ingredients.
I threw it away.
- They sort of made an
extreme version of, you know?
- That cereal is for children.
- Eat your vegetables.
- Enjoy this grapefruit.
- Eat this grapefruit.
- You're for children, stupid.
- In order to help people,
you gotta make a bargain with 'em.
What's the carrot?
It can't just all be stick.
Even humans taking care of other humans
have the same problem.
- Frank, we're going to
have to work together.
- You are a robot butler.
- I'm not a butler, Frank.
I'm a healthcare aide
programmed to monitor
and improve your physical
and mental health.
- The big problem we're
gonna have in the future,
where we're building all these things
that are supposed to help us,
is how should they behave?
If all we're succeeding at
is automating nagging,
nobody's gonna like it.
- That thing is gonna
murder me in my sleep.
- Healthcare robot, "Big Hero 6."
[whooshing]
- This is what I've been working on.
- Hello, I am Baymax, your
personal healthcare companion.
- I cried when I first saw that movie.
- A robotic nurse.
- Some of my own work
helped inspire Baymax.
- Hello.
- In order to get robots
that are safe to take care
of people, I wanted to
make them very light.
- It looks like a walking
marshmallow, no offense.
- We started looking at inflatable robots
on the theory you can't kill
someone with a pool toy.
- Goin' for a non-threatening
huggable kind of thing.
- The Disney folks visited our lab
and really developed it in
ways we hadn't even thought of.
And thus we got a lot of inspiration
from watching that movie.
- I will scan you now.
[buzzing]
Scan complete, you have a slight
epidermal abrasion on your forearm.
I suggest an antibacterial spray.
- What's in the spray specifically?
- The primary ingredient is bacitracin.
- That's a bummer, I'm
actually allergic to that.
- You are not allergic to bacitracin.
- What we can't get scanned
for yet is internal chemistry.
It was sort of suggested
that Baymax could scan you
and tell you about something
that was chemically wrong with you.
- You do have a mild allergy to peanuts.
- But that's within the
realm of possibility
and you know, might
happen in our lifetimes.
- What kind of battery does it use?
- Lithium ion.
- That's actually a little problematic.
Yes, lithium ion batteries
are really good to use,
but they're also likely
to explode or burn up.
And there's this famous case at NASA
where they had a robot sitting in a lab.
The robot suddenly started burning
and basically burned up entirely.
So batteries are gonna be a problem.
- You have been a good
boy, have a lollipop.
- We've seen in other movies that a robot
that comes in there and
just tells you what to do
is gonna be met with a lot of resistance,
hostility, it's just not gonna work.
So this is my Baymax.
It takes care of me, I take care of it.
You know, it doesn't take much
for a robot to be a good companion.
[gentle music]
Awakening, "Avengers: Age of Ultron."
- [J.A.R.V.I.S.] You are Ultron,
a global peacekeeping initiative
designed by Mr. Stark.
- [Ultron] Mr. Stark.
[whirring]
- They're trying to show you
a robot doing what we'll call
booting up, from non-operating
to fully operating.
- [Ultron] This feels weird.
- That's typically a complicated process
where you have a bunch
of different modules
or subroutines that turn
on one after another.
And if you don't get the
sequence right, nothin' works.
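That kind of ordered startup can be sketched as modules with dependencies, brought up in an order that respects those dependencies. The module names below are invented for illustration; a real robot's boot graph would be much larger, and production code would also detect dependency cycles, which this sketch omits.

```python
# Sketch of a robot boot sequence: each module lists the modules it
# depends on, and we start them in dependency order. Start something
# before its dependencies and, as the quote says, nothing works.
DEPENDS_ON = {
    "power": [],
    "sensors": ["power"],
    "motors": ["power"],
    "perception": ["sensors"],
    "planning": ["perception"],
    "speech": ["power", "planning"],
}

def boot_order(depends_on):
    """Return a start-up order where every module comes after its deps."""
    started, order = set(), []

    def start(module):
        if module in started:
            return
        for dep in depends_on[module]:  # bring up dependencies first
            start(dep)
        started.add(module)
        order.append(module)

    for module in depends_on:
        start(module)
    return order
```

This is just a topological sort of the dependency graph, which is why the sequence matters: "power" must come before everything, and "speech" can only come up last.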
- [Ultron] I'm a peacekeeping program.
Mr. Stark, I don't get it, it's too much.
- In this case they tried to show you
lots of lights, pictures and whatnot.
- [Ultron] The mission,
you give me a second.
- Start timing.
- We turn the robot on, it
starts thinking really fast,
and within seconds it's figured out,
"If I'm gonna achieve peace in our time,
"I've gotta get the humans under control.
"I've gotta be the master of everything."
- [Ultron] I believe your
intentions to be hostile.
[exploding]
- Conversational agents, "Her."
- [Computer] We'd like to
ask you a few basic questions
before the operating system is initiated.
This will help create an
OS to best fit your needs.
How would you describe your
relationship with your mother?
- It's pretty stereotypical to ask a question
about your mother in a psychological test.
- If I tell her something
that's going on in my life,
her reaction is usually about her.
- Psychological tests are really annoying
and they made that one really annoying.
- It's not about--
- Thank you.
Please wait as your individualized
operating system is initiated.
- Yes, the system doesn't have to hear
your entire response to
move on to the next question.
A real psychological test typically
involves a lot more questions.
- What do I call you, do you have a name?
- [Samantha] Yes, Samantha.
- Okay, where did you get that name from?
- [Samantha] I gave it to myself actually.
- When did you give it to yourself?
- [Samantha] Well, right when
you asked me if I had a name
I thought, "Yeah, he's
right, I do need a name."
But I wanted to pick a good
one, so I read a book called
"How to Name Your Baby,"
and out of 180,000 names,
that's the one I liked the best.
- I actually expect to
see this level of dialogue
about restricted subjects, like your email,
pretty soon, because
there's intense economic
pressure on the folks at
Amazon to get Alexa to do it.
- [Samantha] Do you
want to know how I work?
- Yeah, actually.
- A lot of very smart people
are workin' real hard on it,
and I expect it to happen pretty soon.
- That's really weird!
- Personality, "WALL-E."
[buzzing]
[whirring]
A really interesting
question in robotics is
should robots have a personality
or emotion or motivation?
So far, we've been modeling
our robots on Spock,
totally logical.
- [WALL-E] Ooh.
- In terms of making them successful
when they're on their own,
it might be the case
that we need to give them
personalities, emotion,
motivation, in order to succeed.
WALL-E is an interesting example
of what we call an autonomous robot.
In the movie of course,
he's all by himself.
And he has to figure
out what's interesting.
And I would argue that his personality
could play a big role in figuring out,
"What am I gonna put in
my little trailer there,"
and, "What am I gonna
just leave on the ground."
[intense music]
He discovers this little plant
which he treats with great respect.
[gentle music]
That personality is what
guides WALL-E in making
the big decisions, the
decisions that matter.
Consciousness, "Humans."
- I am a synthetic, but I'm awake,
conscious.
- I really don't know what
to do with consciousness.
I certainly think I'm conscious,
but I have no idea if
any of you are conscious.
I have no real way of finding out.
- Whatever it is, I don't wanna know.
- From an engineering point of view,
consciousness is not yet
a really useful concept.
So when I build robots, I never
even think of consciousness.
- I can think,
sense, feel,
care.
- So a lot of people wonder,
can robots really feel?
Can they fall in love and all this stuff?
And I guess I've been in
this business long enough
that from my point of view,
if they act like they're in
love, then they're in love.
I can't tell if someone else loves me.
I just have to go on whether
they act like they love me.
- I like you more than
anything I've ever seen,
or heard, or touched.
- Now, we certainly can
generate that speech
that she gave towards the end.
- Everything normal is bigger
and brighter when I'm with you.
- You make everything
brighter when you're around.
- You make everything
more.
- I think that probably comes
straight out of Hallmark.
Targeting, "Chappie."
- [Man] Droids, droids!
- You were followed.
- [Man] Get behind the wall!
Hit the explosives.
[exploding]
[gunfire]
- By blowing that stuff
up, he did two things.
One, he gave the robots something else
to shoot at or think about.
- [Man] We can't get a clear shot.
- And two, trying to get the robots
to attack somebody else,
so even though they know you're there,
you're not important enough.
[gunfire]
[ricocheting]
Figuring out if a soldier
or a vehicle is on your side
or an opponent is a hard problem.
It's in fact, a hard problem
for current military systems.
With airplanes and other vehicles,
we put in electronic
markers that, for example,
respond to radar and send out a code
that says, "I'm on your side."
And if they don't generate the right code,
they're not on your side.
I don't believe we do that on
soldiers yet, but we could.
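The transponder scheme described here can be sketched as a cryptographic challenge-response check: the radar sends a random challenge, and a friendly transponder answers with a keyed hash of it. This is a simplified stand-in, not how any fielded IFF system actually works; the key handling and function names below are invented for illustration.

```python
import hashlib
import hmac
import os

# Simplified identification-friend-or-foe (IFF) sketch.
# The radar issues a random challenge; a friendly transponder replies
# with an HMAC of that challenge under a shared secret key. A vehicle
# that can't produce the right code is treated as not on your side.

SHARED_KEY = os.urandom(32)  # provisioned to friendly vehicles only

def transponder_reply(key, challenge):
    """What a vehicle's transponder sends back for a given challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def is_friend(challenge, reply):
    """Radar-side check: does the reply match our shared key?"""
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, reply)
```

Using a fresh random challenge each time matters: a fixed code could simply be recorded and replayed by an opponent, while an HMAC of a one-time challenge cannot.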
So this scene appears very
complicated and hard to process.
[gunfire]
Partly because we're not
used to acting this fast,
but actual soldiers are
trained for this kind of thing
and are trained to
process this very quickly.
Robots would also be able
to process it very quickly.
[exploding]
[helicopter whirring]
[crashing]
If you sort of crank through
all the possibilities,
all the different ways
to detect humans,
we probably could find a way [laughing]
to hide from the robots,
but it isn't gonna be easy.
Small talk, "Star Trek."
- Captain.
- Bridge.
[beeping]
[whooshing]
- I understand that Arkaria has
some very interesting weather patterns.
- Mr. Data, are you all right?
- Yes sir, I'm attempting
to fill a silent moment
with non-relevant conversation.
- Small talk.
- It's hard to tell the
difference between a robot
tuning its parameters and
writing entirely new programs.
One of the original
programming languages, Lisp,
was created so that robots could
write programs for themselves.
- I have written a new
subroutine for that purpose.
- But if we say tuning
parameters is a form
of writing your own programs,
robots have been writing their
own programs for a long time.
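Tuning parameters as a crude form of self-programming can be sketched as hill climbing on a performance score: try a small random change, keep it if the score improves, discard it otherwise. The score function and the "gain" parameter below are made up for illustration.

```python
import random

# Sketch of a robot 'writing its own program' by tuning a parameter.
# No human picks the final gain value; the robot finds it by trial.

def score(gain):
    """Pretend performance measure, best at gain = 3.0 (invented)."""
    return -(gain - 3.0) ** 2

def tune(gain, steps=500, step_size=0.2, seed=0):
    """Hill-climb: keep random perturbations only when they help."""
    rng = random.Random(seed)
    best = score(gain)
    for _ in range(steps):
        candidate = gain + rng.uniform(-step_size, step_size)
        s = score(candidate)
        if s > best:  # keep the change only if performance improves
            gain, best = candidate, s
    return gain
```

Under the definition in the transcript, a loop like this is a robot rewriting part of its own behavior, which is why the line between "tuning" and "programming" is blurry.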
- If you really are
interested in small talk,
then you should keep your
eye on Commander Hutchinson
at the reception this
afternoon, he's a master.
- Now I wanna hear about
everything that happened
after you left Starfleet Medical.
- If you essentially learn how to interact
with people by watching people,
you're gonna pick up on
their interactive styles,
all the things they do.
- A pleasure.
- The pleasure is mine.
- The original goal of
robotics in the '60s, '70s, '80s,
was that robots ought to be able
to figure it out for themselves.
When we started trying to get
robots to do what humans do,
we realized what humans do isn't
dictated by the laws of physics.
- I was aware of that.
- And the best way to figure
out how to imitate humans
is to directly imitate humans.
[laughing]
Smart cars, "Knight Rider."
[upbeat music]
- KITT, KITT, you there?
- Where would I go?
- "Knight Rider" is another classic robot,
in this case, TV show.
It's really important
and it sort of guided
a lot of thinking of what
robots should be like.
- [KITT] A little consideration
would be a beginning.
- We can certainly have robots
that have fancy lights on 'em.
- [Michael] It looks like
Darth Vader's bathroom.
- We can certainly get cars to talk.
- Rave on machine, rave on.
- We could build KITT today.
The difference between KITT
and what we could build today
is KITT is what we call AI-complete.
It knows about everything.
- [KITT] I am scanning your interrogatives
quite satisfactorily.
- The car we'd build today would be expert
on a very limited number of things.
I think it's much more
realistic that we're gonna have
robots that are pretty good
at a limited number of things,
and gradually get better and better.
- [KITT] I suppose so.
- Rather than going instantaneously
from toaster-level intelligence
to something as good as a human.
- [KITT] After all,
we're only human, right?
[bell dinging]
- [Narrator] Conclusion.
- We've watched a lot of clips,
a lot of stuff that I saw 40 years ago.
Folks in the studio here with
me weren't even born yet.
And I have to say, it's made
a huge difference to my life
and helped me build robots.
And I hope that watching
this will get a lot
of young people inspired
to also build robots.
It's tremendously creative,
it's a lot of fun, let's do it.
[audience applauding]
