When people wonder what AI is,
they think about robots.
Robots.
Let's make a distinction.
A robot itself is hardware.
AI is software.
What is Artificial Intelligence?
Systems that mimic,
perhaps even go beyond,
our own ability to think
and to process information.
AI determines who we want to date,
whether we get a certain kind of loan,
operating at all levels of our economy:
in the political realm,
in the judicial system,
in employment practices,
in the educational system.
All of these decisions help to determine
how well or how poorly our lives go.
And increasingly, AI is
involved in those decisions.
We are right to be concerned
when we are just now
learning to think about
the social and ethical
implications of using AI.
[Kirk] So many of our own stories
are about the relationship
between the creation and the creator.
So we're at this
inflection point right now.
Suddenly all of this
data, these algorithms,
and the computational power is available.
That's really been the breakthrough.
Things are becoming real
that we've talked about,
we've dreamt about for decades.
AI that can do a very
specific, narrow task:
recognize a voice,
navigate a car, translate a language.
And if this is going to
be the ubiquitous tool,
we're doing ourselves a disservice
by not also educating
the general population
on what is really going on right now.
We have so many immediate
concerns about the kind of AI
that will be developed that
we really need to devote
most of our educational efforts
to those kinds of problems. 
Artificial intelligence uses
a massive amount of data
to recognize certain kinds of patterns
that are correlated with
a successful result.
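The description above is the essence of machine learning: estimating, from past data, which patterns correlate with a successful outcome. A minimal, purely illustrative sketch, with toy data and feature names invented here (no real system works on records this simple):

```python
from collections import defaultdict

# Toy training data (invented for illustration): each record is a
# single feature value plus whether the outcome was "successful".
records = [
    ("repaid_before", True), ("repaid_before", True),
    ("repaid_before", False), ("defaulted_before", False),
    ("defaulted_before", False), ("defaulted_before", True),
]

# "Training" here is just estimating, per feature value, the rate
# at which that value historically co-occurred with success.
counts = defaultdict(lambda: [0, 0])  # value -> [successes, total]
for value, success in records:
    counts[value][0] += int(success)
    counts[value][1] += 1

def predict(value):
    """Predict success if the historical success rate exceeds 50%."""
    successes, total = counts[value]
    return successes / total > 0.5

print(predict("repaid_before"))     # seen succeeding 2 times out of 3
print(predict("defaulted_before"))  # seen succeeding 1 time out of 3
```

Note that the model's "knowledge" is nothing but these historical frequencies, which is exactly why bad or biased data, as discussed next, flows straight through into the predictions.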
Hidden in these vast pools
of data that we're now using
are remarkable opportunities
to run off the rails.
So we have to be careful
about the data we give it.
Data.
Data can be bad because it's
not been properly labeled,
because it's inaccurate,
because it has baked into it
certain human biases that are
naturally a part of society.
One clear example of data
bias is the use of algorithms
to make decisions about
the fate of prisoners.
If you have enough data
points, you should be able to
judge a person's
likelihood of re-offending.
But the problem is,
[John] if there is bias
hidden in the data,
your outcome in the judicial
system may be very different
from that of someone who is
upper middle class and white.
Why is it projecting higher risk scores
onto black defendants?
It often turns out that the
algorithm itself isn't feeding
on any data that's explicitly
labeled in terms of race.
A zip code doesn't tell you
anything about a person's race.
What the machine learns is,
ah, people in this zip code
are more likely to be
criminal, when in fact,
it may just be that they're
more likely to be black or poor
or any other category that is
a subject of biased practices.
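The proxy effect described above can be sketched with entirely synthetic data (the zip codes, groups, and numbers below are invented): the model is trained only on zip codes, never on group membership, yet because zip code correlates with group in the data, the learned risk scores end up differing by group anyway:

```python
from collections import defaultdict

# Entirely synthetic records, invented for illustration:
# (zip_code, group, reoffended). Group is NOT given to the model;
# it is kept only so we can audit the outcome afterwards.
history = [
    ("11111", "A", True),  ("11111", "A", True),  ("11111", "A", False),
    ("11111", "B", True),
    ("22222", "B", False), ("22222", "B", False), ("22222", "B", True),
    ("22222", "A", False),
]

# "Train" a risk score per zip code: the observed reoffense rate.
stats = defaultdict(lambda: [0, 0])  # zip -> [reoffenses, total]
for zip_code, _group, reoffended in history:
    stats[zip_code][0] += int(reoffended)
    stats[zip_code][1] += 1

def risk(zip_code):
    reoffenses, total = stats[zip_code]
    return reoffenses / total

# Audit: average risk score assigned to each group. Group A lives
# mostly in zip 11111, so it inherits that zip's higher score even
# though group membership was never an input to the model.
group_scores = defaultdict(list)
for zip_code, group, _reoffended in history:
    group_scores[group].append(risk(zip_code))

for group, scores in sorted(group_scores.items()):
    print(group, sum(scores) / len(scores))
```

Auditing by a protected attribute that the model never saw, as the last loop does, is one of the few ways to detect this kind of laundered bias.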
These technologies will actually reinforce
some of the worst aspects of our society,
which is really a great
tragedy because the technology
has the potential to actually
do exactly the opposite.
Machines aren't racist.
Machines aren't sexist.
Machines aren't ableist or classist.
We are.
Our biases not only get embedded
in this one program, but
get scaled out rapidly.
Silicon Valley has been a magnet
for all the world's best,
but it also tends to be
dominated by white males.
AI has a white guy problem.
It's going to be crucial for us
to have different kinds of diversity
to be part of the development of AI.
Who builds these AI systems
and what data we are
feeding them are vital questions.
What I see is a generation
that takes all kinds
of life instructions from
this device whether it is
where to go to eat Korean
barbecue or who to marry.
It all comes magically out
of the palm of your hand,
from an algorithm that
you know nothing about.
Who's behind that screen?
Who's on the other end of the wire?
What are the motivations
of that algorithm?
People no longer even think to
ask; it's just so convenient.
[Kirk] That ability to predict
and then modify our behavior,
perhaps even without us fully
understanding why or how.
If you want the machine to
behave in an ethical manner,
you have to know why it does what it does.
Part of the transparency--
Transparency.
--is to be able to explain the logical steps
that it went through to arrive
at a certain recommendation.
When we are training these
machine learning algorithms,
we're supplying them with a
million pictures of cats.
And we just say, "That's
a picture of a cat."
We don't say, "That's a picture of a cat,
"because of the ears and the tail,
"the whisker and the nose," and all that.
The machine itself starts to
recognize that such and such
is a cat and such and such is not a cat.
We don't actually know how
the circuit has reached
that particular set of conclusions.
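That label-only supervision can be sketched with a toy perceptron (the two "feature" numbers standing in for images are invented here): we supply only cat/not-cat labels, never the reasons, and what the machine ends up with is a bare set of weights, not a human-readable explanation:

```python
# Toy perceptron, illustrative only. Each "image" is reduced to two
# invented feature numbers; the label says cat (1) or not-cat (0),
# and nothing else -- no reasons are ever supplied.
examples = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.9), 1),  # cats
    ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.3, 0.2), 0),  # not cats
]

weights, bias = [0.0, 0.0], 0.0

def predict(x):
    """Classify as cat if the weighted sum crosses the threshold."""
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

# Training nudges the weights whenever the label disagrees with the
# prediction; the machine infers its own decision rule from labels.
for _ in range(20):
    for x, label in examples:
        error = label - predict(x)
        weights[0] += 0.1 * error * x[0]
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print([predict(x) for x, _ in examples])  # matches the labels
print(weights, bias)  # just numbers -- not an explanation
```

Even in this tiny case, the final weights say nothing like "because of the ears and the tail"; at the scale of millions of parameters, that opacity is the explainability problem the speakers are describing.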
Hot dog, not hot dog, whatever.
That's funny.
Fire missile, not fire missile.
That's not funny.
[John] The particular
technologies being rolled
out are so complex,
and they operate on such large data sets,
that the function of the algorithm
largely eludes the designers.
And maybe that's part of the regulation
that will be built around AI.
If you cannot explain what
this program has done,
it just should not be
released to humankind.
And there is this movement
starting where more and more
people say, "Well, we need
to be able to audit this.
"We need to be able to
understand what's going on."
We're about to bump into things
that we really have not thought
about as a human species.
The notion that you can interact
with a bot on the internet
and it can affect your propensity
to vote one way or another
is a tremendous threat to democracy.
Autonomous weapons are
a terrifying concept.
Drones with smart algorithms
being turned into weapons.
There is a legitimate fear that the power
of artificial intelligence
will be disproportionately
put in the hands of the already powerful.
Should we give it to the rich,
'cause they can afford it?
There is a popular view that
over the next couple of decades
we'll have no jobs at all.
It's not a binary like,
"Do I have a job in the future or not?"
It's, "How is my job gonna change?"
This isn't a bubble.
AI is going to become a bigger and bigger
and bigger part of everything.
And this might be the way
that we get to the stars,
get to the bottom of the ocean.
Cure cancer, fix climate change.
The world is getting better
and these technologies are enabling that.
At its best, it's going to
solve all of our problems.
And at its worst, it's going to be
the thing that ends humanity.
We must develop responsible
and safe practices
before the power gets too
unmanageable for us to control.
Whose responsibility is it?
Is it corporations?
Government?
Is it society?
It's really all of us.
There would probably be
scientists and technologists
who would say, "I'm not gonna
let any consideration stop me
"from my investigation
of the world we live in."
We all have to together say,
"Is that a road we want to go down?"
Human responsibility for human power.
[Curt] You shouldn't be okay
with us making all the rules.
The new Manhattan Project
is not AI development;
the new Manhattan Project is AI safety.
The more that they take on characteristics
of our own cognitive processes,
the more that raises our
responsibility as their creators,
as their trainers, to
constantly be vigilant
and ask ourselves, "How
should I be doing this work?"
Technology by itself is not good or bad.
If AI is done right,
it's going to make us all better humans.
