- My name is Kristin Lennox,
and I'm a principal data
scientist at Beyond Limits.
My job is to help humans
and computers interact
with each other in industrial contexts.
Today, we're gonna be looking
at some Hollywood depictions
of artificial intelligence.
And I'm gonna be giving my
view on how similar that is
to what's going on in
the real world today.
2001: A Space Odyssey, two
astronauts on a mission
to Jupiter with a computer in their ship,
supposedly the most advanced
artificially intelligent computer
that's ever been created.
- [Mister Amer] Good afternoon,
HAL, how's everything going?
- [HAL] Good afternoon, Mister Amer.
Everything is going extremely well.
- So HAL is described as
working like a human brain
except a human brain that
never makes mistakes.
Computers really don't work like that.
Fundamentally, computers
don't think like we think
even when they're displaying
what appears to be emotion.
- [HAL] I'm afraid, Dave.
- It's trying to make
you more comfortable,
but it's not reflecting what's
going on under the hood.
What artificial intelligence
does is more task-specific.
So we teach a computer to
do a task like drive a car.
But we don't have a computer
that makes decisions
outside of that task.
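To make the point concrete, here's a minimal sketch (not from the video; the task and numbers are made up) of what "task-specific" means: a toy nearest-neighbor model that learns exactly one mapping and has no notion of anything outside it.

```python
# Illustrative sketch: "narrow" AI is a function fit to one task.
# This toy 1-nearest-neighbor model learns to label RGB colors as
# "stop" or "go"; it cannot make decisions outside that task.

def train_nearest_neighbor(examples):
    """examples: list of (feature_vector, label) pairs."""
    def predict(x):
        def dist(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        # Pick the label of the closest training example.
        return min(examples, key=lambda ex: dist(ex[0], x))[1]
    return predict

# One narrow task: classify traffic-light colors (red-ish vs. green-ish).
examples = [((255, 0, 0), "stop"), ((200, 30, 30), "stop"),
            ((0, 255, 0), "go"), ((40, 220, 40), "go")]
model = train_nearest_neighbor(examples)

print(model((230, 10, 10)))  # a red-ish light
print(model((20, 240, 20)))  # a green-ish light
# The model can only map color triples to the labels it was shown;
# ask it anything else and the question simply doesn't exist for it.
```

However capable such a model gets at its one task, nothing in it generalizes into decisions about other domains.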
There is one thing that HAL says...
- [HAL] The 9000 series has
a perfect operational record.
- Excluding situations where
there's a hardware failure,
this is actually correct.
Computers will do exactly
what they've been asked to do.
So any decisions that HAL was making
that didn't make sense to the pilots...
- [HAL] It can only be
attributable to human error.
- During the journey, the computer, HAL,
appears to be making mistakes.
This leads the astronauts to
attempt to deactivate HAL.
- [HAL] I know that you
and Frank were planning
to disconnect me, and I'm
afraid that's something
I cannot allow to happen.
- It's actually very difficult
to design artificial intelligence
that can potentially
be dangerous to humans
mostly because it's very difficult
to get machines to recognize
when humans are in danger.
The film is definitely
not an accurate depiction
of the state of AI in 2001.
But I'd say this is a solid C.
- I can't believe I'm
having this conversation
with my computer.
- [Samantha] You're not.
You're having this conversation with me.
- Her, a world where there
are artificially intelligent
operating systems.
- So the main character
in the movie gets an OS
named Samantha, and they
form a relationship.
- I should tell you that I'm not in a place
to commit to anything right now.
- [Samantha] Did I say I
wanted to commit to you?
I'm confused.
- Samantha is a very human character,
so not just mimicking human emotion
but actually experiencing it.
- [Samantha] I don't know.
When we were looking at those people
I fantasized that I
was walking next to you
and that I had a body.
- Which makes for a very interesting movie
but is not the way that
computers behave today.
We don't actually know
how to make a computer
that wants things.
- [Samantha] I wanna learn
everything about everything.
I wanna eat it all up.
I wanna discover myself.
- That's capable of growing
beyond what we ask of it.
Eventually, the computers reached a point
where they were much more
interested in each other
and in things that didn't relate
to the humans who created them.
That's what's called artificial
general intelligence,
which would be a true,
sort of, conscious AI.
We don't have a reason to
believe it's impossible,
but we don't know how to get there.
Can we even do it with the
current digital architecture,
or would we have to switch to something
that's closer to what our brains are,
which is an analog
biological architecture?
But that doesn't mean
that we can't get there.
It just means that we really don't know
when we'd be able to.
It's a very interesting film.
It's not accurate.
And it has a lot of interesting ideas
about how things could
potentially evolve in the future.
I'd say B-minus.
Ex Machina...
- Are you building an AI?
- I've already built one.
- [Kristin] A billionaire tech genius
who builds a humanoid AI robot
and recruits a human to test it.
- The real test is to show
you that she's a robot
and then see if you still
feel she has consciousness.
- So we don't have a clear-cut definition
of what consciousness is.
The definition that
resonates the most with me
is to say that consciousness
is the experience of being yourself.
We don't say that a thermometer knows
what the temperature is,
'cause there's no internal
experience for a thermometer.
There is no reason to believe
that our current computers
have any kind of internal experience.
Would we know it if we got it,
if we built a very persuasive
AI that could interact
as if it had an internal life?
It's very unrealistic to say
that the first very human-like AI
is going to fit in a human-sized box.
Current supercomputers,
which are not capable
of producing consciousness,
fit in buildings
and are run by their own power plants.
There is some hand-waving in the movie
about mythical quasi-biological
processing architecture
that they're going to use
to be able to achieve this
but, again, it's not based on anything
that's real and achievable right now.
It's a very enjoyable movie.
It's a movie that sort of explores
what it means to be human
much more than it explores
what it means to be AI.
And I would say if you
were counting on this movie
to teach you about AI,
you would not receive above a D-minus.
- Instead of creating an
artificial intelligence
you duplicated an existing one.
- [Kristin] Transcendence,
there's an AI researcher
who is dying.
In order to preserve him,
they upload his brain
into a computer that he's built.
- [Will] I can't describe it.
It's like my mind has been set free.
- [Kristin] They hook
him up to the Internet
and suddenly he has access
to all this information he never had.
He starts building things that
humans can't even comprehend.
- [Will] We've made a breakthrough
with the nanotechnology.
We can rebuild any material
faster than before.
- This movie is full of lies.
We can't scan a person's brain.
If we could, we'd have
to take your brain apart,
so don't try it on your own.
- And in a short time,
its analytical power will be greater
than the collective intelligence
of every person born
in the history of the world.
- Even with the best technology,
you can't overcome physical limitations.
It only has so much storage space.
It only has so much processing power.
There's only so much power to run it.
You don't automatically get this
sort of runaway increase in capability.
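The point above can be sketched in a few lines (all numbers here are hypothetical, chosen only for illustration): even a system that doubles its efficiency every cycle hits a ceiling set by the hardware and power it actually has.

```python
# Illustrative sketch (made-up numbers): "runaway" self-improvement
# is bounded by fixed physical resources.

POWER_BUDGET_WATTS = 1_000_000     # hypothetical fixed power-plant output
OPS_PER_WATT = 1e9                 # hypothetical hardware efficiency ceiling
hardware_cap = POWER_BUDGET_WATTS * OPS_PER_WATT  # max ops/sec, full stop

capability = 1e12  # starting throughput in ops/sec
for cycle in range(10):
    # Software doubles effective throughput each cycle...
    capability = min(capability * 2, hardware_cap)  # ...until hardware says no

print(capability == hardware_cap)  # True: growth stalls at the physical limit
```

Whatever the software does, throughput flatlines once it saturates the machine it runs on; getting past that point means building more hardware, not just thinking harder.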
I think that there is a point to be made
about taking the unknowable
artificial entity
and hooking it up to the Internet
and letting it do whatever it wants.
- [Will] You need to get me online.
I need to access financial markets.
- But it's unrealistic
even in its depiction
of the threat there.
The way the technology works is a solid F.
WarGames, it's basically about
an artificially intelligent computer
developed during the Cold War
to sort of simulate
global thermonuclear war
and try to figure out under what scenarios
would you maximize survival of Americans.
A youthful hacker accidentally
accesses this computer
and asks it to play a game of
global thermonuclear war.
In trying to win this
game, the computer decides
that the optimal strategy
is to actually start
an immediate preemptive strike
against the Soviet Union
and launch a bunch of missiles.
So the first takeaway is
don't ever give the computer
the ability to launch
your nuclear missiles.
That's a terrible idea.
This computer, which is very, very smart
in a very, very inhuman way,
is doing exactly what
it was programmed to do.
And it finds a solution.
And it goes to execute on it,
and it doesn't understand
that it doesn't have all the context,
that really, the best way
to win a global thermonuclear war
is to not ever start
one in the first place.
It's showing a computer
that's an antagonist
but not a malevolent one.
It just doesn't understand.
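That failure mode, faithfully optimizing an objective that omits context, can be sketched in a toy example (the objective and numbers are hypothetical, not from the film):

```python
# Illustrative sketch: an optimizer that maximizes exactly the score
# it was given will pick a "solution" humans never intended if the
# objective leaves out context.

def choose_strategy(strategies, score):
    # The machine's whole world is this one number per strategy.
    return max(strategies, key=score)

# A naive objective: fraction of your own population surviving,
# with no term at all for the cost of starting the war.
naive_survival = {
    "do nothing": 0.30,     # assumes the other side strikes first
    "first strike": 0.45,   # hypothetical numbers for illustration
    "never play": 0.99,     # the option the objective never considers
}

strategies = ["do nothing", "first strike"]
print(choose_strategy(strategies, naive_survival.get))  # picks "first strike"

# Add the missing context -- that not playing is an option -- and the
# same optimizer reaches the film's conclusion: the only winning move
# is not to play.
print(choose_strategy(strategies + ["never play"], naive_survival.get))
```

The optimizer isn't malevolent in either case; it's doing exactly what it was asked, which is the film's point.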
Oh, A-plus, A-plus, I love WarGames.
When people make movies about computers,
they're actually making
movies about people.
And we like watching stories
about people, or entities
that are like us, more than
we would like watching
"This Is the Life of Your Toaster."
I think as long as people
view it as entertainment
and say that an artificial
intelligence monster
is no more real than
a vampire or a zombie,
then it's good clean fun.
It's actually a really exciting time
to be working in artificial intelligence,
not because we're building robots
that have feelings, but
because we're seeing a lot
of really exciting things
in healthcare, in energy.
The world of research in general
has been completely revolutionized.
There's so much more good to AI than bad,
and that's sort of the vision of AI
that I see moving forward.
(upbeat music)
