[MUSIC]
Hello, everybody.
>> Hello.
>> My name is Jennifer Widom, and
I became the dean of Stanford's School
of Engineering about six weeks ago.
>> [APPLAUSE]
>> Thank you [LAUGH].
[APPLAUSE]
>> And
I have to say, this is my first time
introducing a live radio show, nobody told
me that was part of the job description,
but that's okay, I don't mind.
I think it's a really exciting opportunity
tonight, and thanks everyone for
coming out.
I was also told to announce that this
show will be broadcast
on Saturday at 8 AM,
Sunday at 9 AM on
the Sirius XM Insight channel.
And it's also on the website,
engineering.stanford.edu.
And, I believe,
on Facebook Live we have thousands
of other people around the world,
who are listening at the same time as you.
So, this is the first live taping of
this show, The Future of Everything, but
there have been a number of
broadcasts prior to this one.
But let me back up a little bit, and just
say something about the motivation for
the show.
So, one of the best things I get to do
as Dean of Engineering is learn about
all the amazing research that's
going on around the school.
We have about 270 faculty and
thousands of students who
are working on just amazing things.
If I had to put it in one sentence,
I would say, we do fundamental research that attacks very
difficult and important societal problems.
And both of those are equally important.
The fundamental research part, and
the fact that we apply that research
to tackle problems that the world
cares about and needs to address.
Examples, you'll probably think of
the same ones, energy, the environment,
healthcare, cities of the future,
cyber security.
We recently had a long range
planning process where we identified
ten questions that the School
of Engineering can be
part of answering together with
our colleagues across campus.
So, the purpose of this
radio show is to highlight
the research that we're working on.
Again, that addresses these societal
challenges, and specifically to do so
from different disciplines and
different perspectives.
How many of you have heard
any of the previous shows?
Have some people listened to them?
Okay, quite a few.
There was one with law professor
Hank Greely on The Future of Sex and
Reproduction.
John Dabiri is a new faculty member
in Mechanical and Civil Engineering.
He talked about renewable energy and
technology inspired by nature.
Some of you might have come to the
Intersections event a couple weeks ago and
heard John Dabiri talk there.
Chemical engineering professor Zhenan Bao
talked about flexible electronics and
synthetic human skin.
So, in this show we're going to have
two very engaging faculty members.
One will be Professor of
Computer Science Fei-Fei Li, and
she will be talking about
artificial intelligence.
And Chris Gerdes who's a professor of
mechanical engineering will be talking
about autonomous vehicles.
But before you meet them, I'm going
to introduce your host Russ Altman.
So-
>> [APPLAUSE]
>> Thank you.
>> I'm gonna say a few things about you.
I'm not sure you were
supposed to come out yet.
>> Well, I like to listen.
>> [LAUGH]
>> Okay.
All right, [LAUGH] well,
you have some fans, that's for sure.
You have some fans in the front.
Okay, so when he's not a radio host, and
by the way, when I heard he was a radio
host, I thought,
what perfect person to be a radio host.
But when he's not,
he is a professor of bioengineering,
of genetics, of medicine,
of biomedical data science.
And by courtesy of my own department,
computer science.
And a past Chair of Bioengineering,
so quite-
>> I treasure all of those appointments.
>> [LAUGH] Your business
card must have small font.
>> [LAUGH]
>> Okay.
>> [LAUGH]
>> In terms of research,
I've long been familiar
with Russ's research.
Again, I'm a computer scientist,
I work with data.
And he applies computing and
informatics technologies to
problems relevant to medicine.
Pretty much exactly what I would describe,
fundamental research [INAUDIBLE], whoops.
[CROSSTALK] [LAUGH] Fundamental
research that's applied to
important societal problems.
He has an AB from Harvard College,
I think I've heard of that place,
in Biochemistry and Molecular Biology.
Then he saw the light and came to
Stanford, for both a PhD and an MD degree.
It tells me the years here,
but I'll leave those out.
So, [LAUGH] without further ado,
please join me in welcoming Russ Altman
and The Future of Everything.
>> [APPLAUSE]
>> Thank you very much.
>> [APPLAUSE]
>> Thank you, Jennifer.
Thank you, Dean Widom.
Let's give it up to Dean Widom.
Thank you very much, yeah.
>> [APPLAUSE]
>> And, hello Cubberley, I love Cubberley.
So, you just heard introductions, so
I can cancel most of what I was gonna say.
I am the host of The Future of Everything,
with Russ Altman.
And I just want to tell
you how we got here.
What's up, why are we doing this radio
show, you heard about it, what is it for,
let me tell you my story.
So, Sirius XM, they're our partner,
must be somewhere, yeah, there it is.
They have shows like this,
they have in fact channels, one with NYU
focused on medical stuff, people call in,
get medical advice, and that's been,
I think, a success for them.
And then they did one with Wharton,
the School of Business at
the University of Pennsylvania.
And, again, that's business stuff, and
they said, we need to do science and
technology.
And to our fortune at Stanford, they
were smart, and they called up, I guess,
our Communications Office at Stanford and
said, we do this with Wharton.
We do this with NYU.
Would you be interested potentially
in having a Stanford type channel?
And more importantly,
could you imagine any faculty member
who would ever wanna do this.
>> [LAUGH]
>> So, I guess, because I give talks to
alums now and then, the Communications
Office might know me from that.
And I've done a couple of TED-type talks.
I guess, my name popped up.
And I remember this very clearly.
I got an email saying, Russ,
would you ever be interested in hosting
a radio show about science and technology?
So, I remember getting that, and
I remember my one line response
was I am intensely interested.
>> [LAUGH]
>> And so, okay,
then they brought me into the studio and
they said, we need to see if you suck.
So we-
>> [LAUGH]
>> This is Sirius Radio.
We're not gonna use profanities tonight,
but if we wanted to, we could.
>> [LAUGH]
>> Ask Howard Stern, so they had me come
in, and they had me do a practice
interview with one of my colleagues, for
whom and to whom I will always be
grateful, because I was kind of clueless.
And they said okay, that's pretty good.
Now, we need you to do one more thing.
I said fine, what do you want me to do?
They said, we need you to come into
the studio and can you just talk.
Whoops, this is radio, so
I need to fix my mic here.
It's not a mic.
I need to learn the words in radio,
as well.
It's an earphone.
>> [LAUGH]
>> Hey, somebody say something to me.
Yeah good, okay, we got it!
Do I have to start from the top?
I'm not going to start from the top.
Okay!
So, they said that was pretty good, Russ.
But can you can come into the studio and
just talk for 30 minutes?
And I said, well why, yeah, but why?
They said, well you know in radio
sometimes there is dead time.
And we have to make sure that you
can talk if you need to talk.
So I came into the studio,
I brought a couple of stories that
I thought were mildly entertaining.
And suffice to say that at 35
minutes they were going okay, cut!
>> [LAUGH]
>> And then you're hired, you can talk.
So it's a fun job because
really I'm talking
to my colleagues in this
type of semi-formal setting.
Really finding out why they do
what they do and what they do.
So that was great.
You've heard that we've
had about 14 interviews.
These are available,
of course it's aired on SiriusXM.
They're also available at
stanfordradio.stanford.edu.
But this is our first live show and
I'm very, I believe the word is stoked.
Now, a little crowd participation.
Dean Widom did this a second ago but
I wanna do a little bit more.
Who has even listened to one second
of this show previous to today?
Thank you very much,
who has listened to an entire episode?
Be honest, okay.
Who is pretty sure they've
listened to every episode so far?
God bless you.
>> [LAUGH]
>> He's one of my executive producers.
>> [LAUGH]
>> [LAUGH]
>> Okay, well, by the end of today you
will have heard one complete episode,
and so congratulations.
I think with that we are ready to do this.
So let's do it, thank you very much.
>> [APPLAUSE]
[SOUND]
>> Stanford Radio presents the Future of
Everything in front of a live audience at
the Cubberley Auditorium at
Stanford University, on SiriusXM Insight.
>> Everything is changing.
There's science and
technology coming down the pipe that you
need to understand and be ready for.
>> Featuring the director of the Stanford
Artificial Intelligence Lab, Fei-Fei Li.
And the director of the Center for Automotive
Research at Stanford, Chris Gerdes.
>> [SOUND] The future of everything.
>> Now, here's your host Russ Altman.
[MUSIC]
[APPLAUSE]
>> Thank you and
welcome to Cubberley Auditorium
at Stanford University.
You know, since the first computer, since
the dawn of computation in the 1950s,
we have been fascinated by the idea of
having computers display
some kind of intelligence,
similar to what humans display.
If you go to the source of all knowledge,
the Internet, and check
the definition of intelligence,
here are a couple of my favorites
from the dictionaries online.
The ability to learn or understand or
deal with new or trying situations.
Okay, alternatively, ability to acquire
and apply knowledge and skills.
So, artificial intelligence
has come to mean intelligence
as manifested by computers.
It is artificial, presumably,
because it is not the natural kind of
intelligence that we humans display.
And actually you could
argue some animals as well.
Now a key criterion is
often the requirement for
a broad and flexible intelligence.
A computer program that does
a specific thing very well.
We say, well, that's impressive, good job.
But it's not really intelligent in
the sense that a broad range of inputs can
come in and it can do okay.
So, the ability to respond to
a variety of challenges and succeed or
fail gracefully is also kind of
worked into this definition.
We think about HAL,
in 2001: A Space Odyssey,
which most of you who are less than 40
have no idea what I'm talking about,
the Star Trek computer and the Galaxy Quest
computer, and all the ones on TV shows.
Now my PhD, even though I'm a physician
and I studied biology, was actually
in an artificial intelligence lab,
in the Computer Science Department here.
And we worked on specialized systems
that used logic and rules to try to
achieve high performance in medical and
biological scientific tasks.
And there was a lot of hype,
this was in the mid 80s.
A lot of hype and a lot of expectations,
and I think you could say that those
expectations in the mid 80s,
early 90s were not met.
And in fact, we stopped using
the word artificial intelligence,
because it became a little bit of
a joke among our colleagues, who said,
yeah, that was very artificial and
not very intelligent.
In fact we started calling
it computational thinking.
But in the last few years AI,
as I will now call it,
AI, artificial intelligence,
has made incredible progress.
It's in our lives every day.
And our colleagues have created systems
that function in these broad categories.
Like speech recognition, image
interpretation, predicting behaviors,
controlling cars.
That'll be for the next segment,
and also controlling other devices.
Well my guest today, as
you've heard, is Fei-Fei Li
of the Stanford Computer Science
Department, and a real expert in AI.
Particularly in the area
of image analysis.
Fei-Fei would you join me?
Please give it up for Fei-Fei Li.
>> [APPLAUSE]
>> So,
I'm not even sure we're aware
of all the ways that AI is or
is about to affect our lives.
Or what it might lead to in the long term.
So to start out, can you describe to
us some of the current uses of AI and
the ones that are really about to happen?
>> Yeah, sure.
So first of all, thank you Russ for
inviting me to this.
You said you can talk non-stop for
30 minutes.
I was really sweating.
>> [LAUGH]
>> That's not my skill. So,
yes, in a way, artificial intelligence, for
someone who has been in this field
of technology for almost 20 years,
is really the computation and
computing of data in intelligent ways.
It is already everywhere.
You mentioned speech recognition,
Siri or Echo,
Google Home, these are all AI products
from the speech recognition area.
In image analysis and computer vision,
which is the home field of my research.
We're already seeing a lot of
progress in photo tagging,
whether it's Google Photo or Facebook.
In medical imaging,
there is a flurry of work recently
by places like Stanford or Google,
where scientists are recognizing
cancers from medical images
with ability like doctors.
And self-driving cars, which we'll be
talking about with Chris later, are really also
very much a result of multiple
areas of AI research.
>> So that's fascinating, and let me ask
you what happened in the last ten years?
Can we put our finger on any one or
two factors, or
was this just gonna happen all along?
Tell me a story about how these
technologies just kind of seemed
to pop up out of nowhere.
>> Yeah, I can tell you a story.
So I joined the field of AI, if you
count the first day of the PhD as the beginning of
entering a field, in the year 2000.
And like you said,
AI was a little bit of a dirty word.
And we didn't talk about it. I was in
computer vision and machine learning,
and that was the time that was
the dawn of machine learning:
statistical methods for understanding data,
and doing modeling.
And it was already in hindsight
the thawing of the ice,
where things are starting to change.
So starting with my generation in the late 90s,
early 2000s,
a lot of the work in the field of AI was
building the theoretical foundations and
machine learning tools to process and
analyze data.
But something else was quietly
happening outside of the laboratory.
And again,
that thing later played one of the biggest
roles in the rebirth of AI or
renaissance of AI.
And that was the Internet.
[LAUGH] I remember-
>> I've heard of it.
>> Yeah [LAUGH] I remember
as a first year or
second year graduate student at Caltech,
so down south in California,
we started to hear from our Stanford and
Berkeley friends that there
was this easier search engine
than AltaVista, called Google.
[LAUGH] So in the first ten
years of the 21st century,
the world was experiencing an explosive
growth of the Internet Age and
the Information Age.
So, as the AI scientists
were building the tools,
the machine learning tools,
the world was starting to
be filled with data,
especially data from cyberspace.
And also, as the sensors and
devices started to pick up the pace,
the ways of capturing data
also started multiplying.
So quietly these two things, along with
the progress of computing and hardware,
the Moore's Law that was
carrying the Information Age,
really started to converge in a way
that most people didn't expect.
So towards the very end of the first
decade of the 21st century,
around 2010, 2011, 2012,
the field of AI was suddenly
playing with a lot of data,
using algorithms that had gone through
generations of trial and error,
and starting to see the amazing effects.
One of the key moments, I think,
that a lot of you have heard of,
especially in Silicon Valley,
the key moment of deep learning
and neural networks, was 2012.
>> Hold that thought we're going
to talk about 2012 in one moment.
>> Okay, All right.
>> But
first I have to tell you that
this is the Future of Everything.
I'm Russ Altman and I'm talking to Fei-Fei
Li, director of the Stanford AI Lab and
the Stanford Vision Lab.
We're talking about AI and
what happened in 2012.
>> Yes, 2012. So five years before that,
I was a young assistant
professor at Princeton.
And I was working in the field of
computer vision and image recognition.
And I was frustrated with
the slow progress of the field.
Because vision is so rich for you and me.
We open our eyes, we understand the world.
Every single pixel has its color,
its meaning, its context.
We see hundreds and thousands and
tens of thousands of objects.
We talk to each other,
we socialize, all through our vision.
Yet the field of computer vision
was recognizing a few objects,
like airplane if it's sideways or
[LAUGH] human faces if it's frontal.
>> That would not count as AI.
>> Right [LAUGH] and I was thinking,
what can we do to change that?
And I started a little bit of
a crazy project called ImageNet.
And the project was inspired by
something that the natural language and
linguistics field had started, called WordNet.
And it was by a Princeton linguist,
George Miller.
When I arrived at Princeton,
he was very senior, he was in his 90s.
George Miller, by hand,
put together an English
lexicon data set ontology
of nearly 100,000
English terms in a structure
that's called WordNet.
And I looked at that structure, and
I was talking to my linguist friends and
we had this idea.
Can we populate the world of
images with that structure and
create a huge data ontology that
can feed into our algorithms.
So I started that ImageNet project and
then, long story short,
it took three years of-
>> And
this was mostly taking
advantage of internet images?
>> Right, so
what we did is we downloaded billions and
billions of images from Google,
Yahoo, at that time.
There were other search engines then
that you've never heard of now, and [LAUGH]
also, we were cleaning it by hand.
Like labeling the images, sorting them.
And I told my graduate student at
that time, by the calculation of
where we wanted to go for ImageNet,
it was going to take him 19 years to graduate.
>> That is not good.
>> He didn't appreciate that.
[LAUGH] So I think he was depressed,
and I was worried.
And then we heard something
we've never heard of.
We heard of a thing called
Amazon Mechanical Turk, and that was 2007.
Amazon Mechanical Turk was
rolled out in late 2006.
>> This is a service where you
pay small amounts of money for
people all over the world
to help you with a task.
>> Exactly, and at that time we
decided we were gonna farm out this
entire task of image labeling
to the entire world.
Long story short,
around 2008 we were employing
about 50,000 workers from 167 countries
online to help us with ImageNet.
And after three years,
we got a data set that the field
of AI had never seen before.
It was 50 million images
organized in this entire WordNet
structure, or ontology, and then-
>> Ontology
is like an organization of all your
knowledge into bins and categories.
Just since we've used
that word a couple times.
>> That's why you're a radio host.
[LAUGH]
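To make concrete what crowdsourced labeling involves, here is a minimal sketch in Python of majority-vote label aggregation. This is not the actual ImageNet pipeline, which used more sophisticated quality controls; the image and the worker votes below are hypothetical.

```python
from collections import Counter

def aggregate_labels(worker_labels):
    """Pick the majority label for one image from multiple crowd workers.

    worker_labels: list of label strings submitted by different workers.
    Returns (winning_label, agreement), where agreement is the fraction
    of workers who chose the winning label.
    """
    counts = Counter(worker_labels)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(worker_labels)

# Hypothetical votes from three workers for one candidate image:
print(aggregate_labels(["golden retriever", "golden retriever", "labrador"]))
# ('golden retriever', 0.666...)
```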
>> Yes, and
I wanna get to your health applications.
>> Okay, so quickly.
So 2012, once we have ImageNet
we opened it to the world.
And we wanted to invite the whole
world's AI machine learning scientists
to start working on the problem of
image processing, image classification.
During that time I had already
moved to Stanford.
We hosted a Stanford-University
of North Carolina
challenge every year,
called the ImageNet Challenge.
And then we were seeing some progress.
2012 is the year that Geoff Hinton
from the University of Toronto, and
his student, used an old algorithm.
This is what's shocking to a lot of
people who are not in the field of AI,
who probably have heard of AI or
deep learning just now.
It was a method
with a long tradition, developed in
the 70s and the 80s, called the neural network.
Specifically a convolutional neural network,
and
they submitted their results to
the ImageNet Challenge in September.
When we got the result, I have to
confess as an organizer, I was shocked.
In half the-
>> You thought they were cheating.
>> It was Geoff Hinton.
He wasn't-
>> Not Geoff.
>> [LAUGH]
>> I trust Geoff.
I was so just amazed by the result.
I remember my student who was running
the challenge called me at night and
said Fei Fei, we've got a winning
algorithm that we didn't expect.
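For readers curious what a convolutional neural network looks like in code, here is a toy sketch in Python using PyTorch, a library that postdates 2012. It is nothing like the network Hinton's group submitted; it only illustrates the idea of stacked convolution and pooling layers feeding a classifier over, say, 1,000 ImageNet categories.

```python
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    """A toy convolutional network: two conv/pool stages, then a classifier."""
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # RGB -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 112 -> 56
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyConvNet()
dummy_batch = torch.randn(1, 3, 224, 224)  # one fake 224x224 RGB image
print(model(dummy_batch).shape)            # torch.Size([1, 1000])
```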
>> So let me fast forward, because I want
people to hear the story of the kinds of
things you're trying now, really
derived from that image analysis work.
Tell me what you're doing
in the healthcare setting,
because it's just surprising.
>> So yes, so fast forward. As AI is
showing its power in image recognition,
in human activity understanding, I started
to think really deeply about AI's reach
into application areas, and
to me one of the most important,
deeply personal areas that I
care about is health care.
So I started a collaboration
with Professor Arnie Milstein.
He's also a professor in your school,
the School of Medicine.
His life's work is looking
at improving health care quality and
reducing cost.
And we started to really bond and
spark some ideas about using
AI to play the guardian
angel role in healthcare,
in treatments, in patient care.
So here's an example.
As we speak now, the Lucile Packard
Children's Hospital at Stanford
has an entire hospital
unit that has our sensors,
our depth sensors, installed
in the hallways and
patient rooms, to help doctors and
nurses monitor hand hygiene practice.
It turns out poor hand hygiene is one
of the biggest causes of hospital-acquired
infection, which is a top
killer of patients in hospitals.
>> We're gonna talk more about
the health applications.
We'll have more with Fei Fei about
the future of artificial intelligence and
these exciting applications,
on the future of everything,
here on the campus of Stanford University
on Sirius XM insight 121.
We'll be back in a second.
>> [APPLAUSE]
>> So, Fei-Fei, before the break
you were saying that this problem of hand
washing is something that's amenable to this approach.
So can you just take us through, are you
actually taking pictures of people and
checking whether they're
washing their hands or not?
>> No, it's more nuanced than that.
First of all, the most important thing
is to protect everybody's privacy, so
we are not taking pictures.
We're using depth sensors.
What is a depth sensor?
This is a LIDAR sensor, like the one your
self-driving car technology
will be using to see if the car
is running into obstacles.
>> LIDAR is a laser type radar.
>> Right.
This is also the same sensor if
you play video games like Xbox,
that can see your gestures.
So we're using the same sensor
that can sense human movement and
gesture, and
then we put it in the hospital unit.
And now we can, for the first time ever,
we can track continuously,
the hand hygiene practice
of the practitioners.
Just in contrast, before this technology,
what hospitals used,
is what they call human monitors.
So they would hire someone,
a nurse or a trained-
>> Typically quite annoying.
>> [LAUGH]
>> And they make mistakes.
They don't see enough things.
They have short attention span and
all this.
And they cannot be there all the time.
And now our sensors can
continuously feed data about
how doctors and
nurses are washing their hands.
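The actual hospital system involves far more than this, but as a toy illustration of how depth data differs from photographs, here is a sketch in Python that checks whether anything, such as a hand, comes within reach of a hypothetical sanitizer dispenser in a depth frame; the frame stores distances rather than appearance, which is part of how privacy is preserved.

```python
import numpy as np

def presence_at_dispenser(depth_frame, roi, threshold_m=1.0):
    """Return True if something is closer than threshold_m inside the
    region of interest (roi) around a dispenser.

    depth_frame: 2D array of distances in meters (no photo, no identity).
    roi: (row_start, row_end, col_start, col_end) of the dispenser region.
    """
    r0, r1, c0, c1 = roi
    return bool(np.any(depth_frame[r0:r1, c0:c1] < threshold_m))

# Hypothetical 480x640 frame: background 3 m away, a hand 0.6 m away.
frame = np.full((480, 640), 3.0)
frame[200:220, 300:330] = 0.6
print(presence_at_dispenser(frame, (180, 240, 280, 350)))  # True
```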
>> Now I know that one of the other
applications that you are at least thinking
of, and I don't know if you've acted on,
is the idea of home care for the elderly.
>> Yes.
That's-
>> Talk to me about what might be
possible there.
>> Yes, so the aging society is one of
the biggest problems across the whole globe,
America, Japan, and
Europe, in many countries.
So one of the very important goals for
having a quality aging life is
independent living,
because it's important for the seniors.
But how do we ensure the quality
of independent living?
And how do we help the friends and
families to take care of elders who want
their independence and privacy, but
also need to be taken care of when needed.
So in a collaboration
with the On Lok organization,
which is a San Francisco-based
senior home for
low-income seniors, we have sensors now
installed in some of their test units to
start piloting the idea of using these depth
sensors to look at the living
behaviors of seniors.
And to help the early detection
especially of things like dementia,
lack of social activity, nutrition intake,
sleep patterns, and all-
>> So this is based on your assumption
that the motion patterns in the room-
>> Yes
>> Will be able to be associated with-
>> Yes
>> Important behaviors that a doctor or
a health care provide would care
about knowing on a day to day,
even minute to minute basis.
>> Exactly.
This is why I call it a guardian angel.
It's quiet.
It's continuous.
It doesn't interrupt your life,
but it's there for
you, and providing the help when needed.
>> This is the Future of Everything.
I'm Russ Altman, and
we're talking with Fei Fei Li,
the director of the Stanford AI Lab and
the Stanford Vision Lab.
We started with ImageNet.
We went to its applications in hand
washing and now we're watching our
parents and grandparents, and soon
ourselves, develop dementia basically.
>> [LAUGH]
>> I think that-
>> If you put it that way.
[LAUGH]
>> Yes, forgive me.
So I don't wanna leave
this conversation without
discussing the societal challenges,
the nontechnical challenges to AI.
>> Yes.
>> It's a lot to ask you as
a technologist to also be worrying about.
>> You had mentioned privacy earlier in
the discussion about hand washing.
How do these teams that you work with,
these interdisciplinary teams,
how do you think about the decisions that
society has to make about when AI is okay,
and when we need to perhaps stay away
from it, or is that even a conversation?
>> It absolutely is a conversation.
When I became the Director
of the Stanford AI Lab,
the first thing I did, about three years
ago, was to start a series called AI Salon
that welcomes AI students and faculty and
speakers to join us to discuss,
biweekly, the societal issues.
As a technologist, I really believe in
the benevolent responsibility
of technology, and it's really important,
especially for something as powerful as AI.
I think that it's an important
moment in human history,
that we think about these things.
Be it privacy, ethics, bias,
alternative facts, data.
[LAUGH] And diversity, all these issues,
I think are really, really important.
>> So I don't mean to put you on the spot,
but I do.
>> [LAUGH]
>> That's okay.
>> Is our society,
those of us who are not the technologists,
do you believe that our governmental and
non-governmental organizations are paying
sufficient attention to these issues, or
are we gonna be caught with new
technologies that we're not ready for?
>> So, first of all,
human nature, human civilization
will never stop innovating;
that's in our blood,
in our DNA. And
I'm an optimist in the sense that I
watch the progress of human civilization.
I think every time we create a technology,
there is a strong positive force in
our society to want to use this technology
for the better of human society.
But I don't think we should
be completely naive about it.
Technology is in the hands of people, and
Shannon Vallor is a neighborhood
philosopher at Santa Clara University.
I love what she said.
Machine value,
there's no separate machine value.
Machine value is human value,
so I think it's so
important we talk about this, it's so
important we talk about AI today.
I want to talk about AI responsibly.
I want to talk about AI informed, and
I wanna talk about AI with a diverse and
inclusive voice.
So that's what I care about.
>> Well, thank you, and that is a great
way to end this, and I'm sorry,
we could go on for much longer.
Fei-Fei and I have worked
together on one other project,
I guess earlier this last year-
>> Last year, yeah.
>> Was, Stanford co-hosted
a White House Summit on AI,
the future of AI, and it included both
the technical and the social issues.
And it really gave us hope that
the governmental leaders and
staff within the government,
bureaucrats, were thinking about this.
And were putting together programs for
making sure that society is ready for
these technologies, so
we can maximize the benefit and minimize
what we call in medicine adverse effects.
So thank you very much for coming.
>> Thank you Russ.
>> Thank you so much.
>> Yeah, thank you.
>> [APPLAUSE]
>> In a few minutes,
we'll be talking about the future of
autonomous vehicles with Chris Gerdes,
the director of the Center for
Automotive Research at Stanford.
But first this is a big one,
I was fortunate enough to ride in
one of Chris's driverless cars.
Let's hear how that went.
[MUSIC]
So here we are in Willows California,
thanks for meeting us,
we were here bright and early.
We have before us I believe, Marty,
the car, and what's the deal with Marty?
>> So Marty is our electrified
self-driving drifting DeLorean.
So Marty is part of a project to figure
out how we can control cars up to the very
limits of their capabilities.
So race car drivers, drivers who are
involved in drift events tend to use all
the friction between the tire and
the road to either be fast or dramatic.
We wanna learn to do the same thing.
And we're gonna be safe.
>> So today we're gonna do
some drifting maneuvers.
We're gonna drift around
this first cone over here.
>> When's the last time
you checked the airbags?
>> There are no airbags.
>> Okay.
>> There is,
however, a-
>> Cut!
>> I think I should disclose
now that I get very carsick.
We're gonna get in, do some
doughnuts, we're gonna burn some rubber,
and we're gonna see how Russ does and
how the drive goes.
Time to helmet up.
All right, [NOISE] we're ready to go.
>> Dukes of Hazzard.
>> All right, all right,
I now have no play.
This is a very cool car.
I have no idea what I'm
about to experience.
I do have a plastic bag from one of my
producers, which I will use if I need to.
This is the future of everything.
>> All right, you ready?
What I'm gonna do is lift my foot
off the brake and put us into
autonomous mode, at which point
Marty will zoom forward to begin testing.
Here we go.
Okay, Marty is now going.
We are accelerating,
we're heading right towards a.
No, we're not heading right
towards anything, actually.
We're now spinning around and
accelerating.
[SOUND] [INAUDIBLE]
orange cone [INAUDIBLE].
[SOUND]
>> Whoa!
We did it.
Terra firma.
Excuse me.
Okay, that was awesome.
So I can vouch that he never
touched the steering wheel.
Everything you saw was done
by some computer somewhere.
I think we did some science, it's hard
to believe because I'm a little shaky
right now but we did some science,
we learned some things.
And there you have it.
[MUSIC]
>> [APPLAUSE]
>> So
I go to Lake Tahoe a lot from
the Menlo Park, Stanford area, and
it's about 215 miles from my house.
And it takes three and a half or
so hours to get there.
And I like to ride my bicycle on
Stanford campus and around town because
it is just miserable, rain or shine, it
is miserable to drive on Stanford campus.
So when I think of self-driving cars,
I think of those two scenarios.
The long boring drive
where I wanna fall asleep.
But, as I actually get up into
the mountains, it gets curvier, and
then in the last few months, icier.
And it's very scary.
Two or three times this winter,
I've just had to let go of the car and
let it finish its slide.
Totally out of control.
Not like what you just saw.
And then, when I drive around
the university, it's totally different.
It's not that it's boring.
It's that I'm positive that I'm
gonna run over students and
that's really bad for
many reasons that I won't go into.
So when I thought about self-driving cars,
it was about, getting around town, long,
boring drives, sign me up.
But what I did not think
about was doing donuts and
drifting in a very cool
DeLorean named Marty.
By the way, if the Marty-DeLorean
thing doesn't make any sense to you,
just Google Marty and
DeLorean, it's a big deal.
>> [LAUGH]
>> It has something to do with
Back To The Future.
The thing that isn't clear in this video
that I wanna stress is, first of all,
before they put me in the car, Chris
demoed to me what the car was gonna do.
So they had the car do its thing.
And at the end of that, he showed me
the skid marks and they made a perfect,
black circle around one of those
cones that we were driving around.
And after my drive with Chris, we got out
of the car, and he pointed out again,
of course, he was very modest about
this but I thought it was amazing,
that the skid marks from the drive we
had just done, were almost perfectly
superimposed on the skid marks from
the first test drive that he had done.
Now, this was significant because
the weight had changed in the car.
As Chris taught me, the tires are entirely
different tires at the end of one drive,
because you burn so much of the rubber
that it's an entirely different vehicle,
and yet their control systems made
these perfectly superimposable.
So that made me realize, and we're gonna
talk about this, that the self driving
cars may be most remarkable not for
recapitulating the things that I can do.
Like drive to Tahoe or try not to
hit Stanford students on campus.
But for helping me when I'm in situations
where I have no clue what to do,
no training, no credentials.
And in fact, in the last six months it's
happened to me three times where I'm just
sliding around.
Well, it happened four times, but
one time was under total control.
And three times where I was just
going along for the ride and
was just glad that I
didn't go over a cliff.
So my guest today,
Chris Gerdes as you know is from
the Stanford Mechanical
Engineering Department.
He hosted me up in Willows, and
I'd like to welcome him up to the stage.
>> [APPLAUSE]
>> Thanks for coming.
Thanks for coming, Chris,
and thanks for hosting us.
That was an awesome day.
>> My pleasure,
great to have you out at the track.
>> It was really awesome.
This was in Willows, California.
You might have to Google that.
So when will these driverless
cars be available?
And how will we roll them out into
a mixed environment with humans and
self-driving cars on the same road?
How can we expect that all to unfold?
>> Well, I think if you go about three or
four blocks off of campus you're
likely to run into one today.
So these cars are already
out on the street.
And they're being tested
in pretty similar ways.
So the cars you see today will have
a human driver in there as a backup.
And that's a really great safety feature
because humans are fantastic at getting
context, at being able to
handle unusual situations.
And so what you see are cars
being developed in the streets of
different cities and
generally in fairly limited environments.
I think what you're gonna see with
the rollout strategy is that companies
are gonna get a lot of experience
on a very few streets.
And then eventually say, we feel confident
enough in this where we're gonna let
the vehicles go out
without a safety driver.
And I think that's gonna start with
a few streets, in a few cities and
expand out from there.
>> So I know that you served your country,
literally, doing a brief stay, or not so
brief, at the Department of Transportation
earlier, in 2016, and before.
Is there a plan? Is the government,
kind of the same question
we talked to Fei-Fei about.
Is the government on top of this?
Or is the industry leading and
hoping that the government will catch up?
>> So
I think the government is on top of this.
One of the things that we did on my stint
there at the Department of Transportation
was to produce the national
policy on automated vehicles.
So there is a federal policy
on automated vehicles.
And it handles a complicated chicken and
egg problem.
Which is a lot of people say we should
have government standards, we should
have some sort of regulations, but that
assumes you know what the solution is.
You can't have standards before you
know the best way of doing something.
And we're still trying to figure
out what are good ways of
handling automated vehicles?
So the idea was to do voluntary guidance.
To have some guidelines for
states that wanna be active.
To have some guidelines for
manufacturers and
say, these are the 15 points that you
need to address in a safety assessment.
We wanna make sure that you're addressing
these things before you go out
on the road.
So address those up front.
Give us the information in a letter and
then go out and test.
And let's let the testing of these
vehicles in a safe environment generate
the data which can be used to then
develop good data-driven decisions and
regulations.
>> Have you, or the industry, or
the government, or anybody
come up with kind of phases?
Is there gonna be a phase where there has
to be a human who can grab the wheel?
I've read that some companies are taking
the, we don't want any wheel,
we don't want a steering wheel in the car.
So does it get down to
that level of detail or
are you allowing kind of a thousand
flowers to bloom and see how it goes?
>> Yeah, the whole idea behind
the guidance is based on this concept of
an operational design domain.
And the operational design
domain basically says,
tell us what this car is supposed to do.
And then tell us how you've handled
consumer education around that,
how you've handled fallback systems and
safety systems.
So you may say, I'm designing a car
that doesn't have a steering wheel,
that actually doesn't have brakes,
or accelerators, or
any sort of driver controls.
At which point, the question would be,
okay, if something goes wrong,
what is your backup solution?
Some companies may say, well,
we want to remote monitor that.
Or you may say I am developing
an automated vehicle that's going to have
a safety driver.
It's going to rely on the operator
of the vehicle to take
over in these circumstances.
Then the question will be, okay, how do
you know they're going to be alert and
attentive enough to do that?
So the whole idea is define what
the vehicle is capable of doing.
And it may be only capable of operating on
clear days below 25 miles
an hour on these seven streets.
But that's okay.
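As an illustration of what declaring an operational design domain might look like in practice, here is a hypothetical sketch in Python. The field names and values are invented for the example Chris gives, clear days, below 25 miles an hour, a handful of streets, and do not reflect any actual regulatory format.

```python
# Hypothetical operational design domain (ODD) declaration.
# Field names and values are invented for illustration only.
odd = {
    "max_speed_mph": 25,
    "weather": ["clear"],                 # no rain, snow, or fog
    "time_of_day": ["daylight"],
    "allowed_streets": [
        "street_1", "street_2", "street_3",
        "street_4", "street_5", "street_6", "street_7",
    ],
    "fallback": "pull over and stop, then notify a remote monitor",
}

def trip_within_odd(speed_mph, weather, street):
    """Check whether a requested trip falls inside the declared ODD."""
    return (
        speed_mph <= odd["max_speed_mph"]
        and weather in odd["weather"]
        and street in odd["allowed_streets"]
    )

print(trip_within_odd(20, "clear", "street_3"))  # True
print(trip_within_odd(20, "rain", "street_3"))   # False
```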
>> So you might have those kinds
of proscriptions in the approval?
It'll be approved subject to these uses.
We do this for
medications actually all the time.
>> Yeah, at this point it's voluntary
guidance, but the idea is that they would
submit a safety assessment letter and
have this discussion.
>> So this is The Future of Everything.
I'm Russ Altman, and we're talking
with the director of the Center for
Automotive Research at Stanford,
Chris Gerdes.
>> Chris, what are the challenges,
technically, that you and your students
are lying awake at night
worrying about these days?
That was pretty impressive, but
I'm pretty sure you're not done, so
what are the current challenges?
>> Yeah, so I think you saw an example of
being able to push the limits of the car's
capabilities, and
as you said in your open.
The same sorts of physics, the same sort
of math that we use to solve that problem,
applies to the exact same problem of,
I've run out of friction
because of an icy road.
This is just more dramatic,
and visually appealing, and
gets you invited to radio shows.
So that's one of the reasons
we're studying it, but
essentially at it's core
is the same problem.
And we think there's still a few more
years of research available to us in this.
But then honestly we think that we can
give automated vehicles the skills
of the very best humans, maybe even
better then the very best humans,
in controlling a car under any situation.
So there's still some research to be done,
but we're pretty confident in that.
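The physics Chris is referring to can be summarized by the friction circle: the combined longitudinal and lateral force the tires can produce is capped by the friction coefficient times the load. Here is a minimal sketch in Python; the friction values are rough, illustrative numbers only.

```python
import math

def within_friction_limit(accel_long, accel_lat, mu, g=9.81):
    """True if the commanded acceleration stays inside the friction circle.

    accel_long, accel_lat: longitudinal and lateral acceleration in m/s^2.
    mu: tire-road friction coefficient (roughly 0.9 on dry asphalt,
        perhaps 0.1-0.3 on ice; illustrative values only).
    """
    return math.hypot(accel_long, accel_lat) <= mu * g

# Braking at 4 m/s^2 while cornering at 6 m/s^2:
print(within_friction_limit(4.0, 6.0, mu=0.9))  # True on dry pavement
print(within_friction_limit(4.0, 6.0, mu=0.2))  # False on ice
```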
One of the areas that we think is,
really, an open issue,
is understanding how these vehicles
will interact with humans.
And so when we think about an automated
vehicle now going out on the road.
What are our expectations of it?
How safe does it need to be,
how do I quantify safety,
what are the proper interactions?
You mentioned going on campus
around Stanford students.
You have some ways of doing that.
You make eye contact.
You may have some particular expectations
of knowing students on their phone
are more likely to walk out in front
of you than students who aren't.
You'll have all of this context.
How do we program that
in an automated vehicle?
And will people interact with it in
the same way that they interact with human
driven vehicles.
>> So does your group also
worry about those higher level,
kind of intentional
programming of the vehicle?
Because what we saw was really a feat of
physics and dynamic, unstable systems.
But then on top of that you
have to kinda give it goals,
and one goal is just get
me from here to there.
But another goal is,
I don't wanna kill anybody, and
am I gonna have to set up my car to say,
if it's either me or
a pedestrian, make it be me?
>> Well, can you facepalm on the radio?
Does that get through?
>> He just facepalmed.
>> I hear this all the time.
There seems to be this misperception out
there that automated vehicles are driving
around, making these moral judgments
of who is more worthy of living.
>> [LAUGH]
>> Hit Altman.
Run him over.
>> Yeah, exactly, so we've programmed it.
So and it's no wonder people
have this idea in mind.
There's, in fact, even a website,
the Moral Machine website at MIT, that
is attempting to crowdsource this.
So you can literally say, do I run over
the five cats or the five dogs? Does it
matter that the dogs are actually crossing
with the light and the cats are not?
Or do I kill the five criminals
who are obeying the law and
crossing with the light?
Or do I run over the two physicians,
the homeless person, and the two cats
that are crossing against the light?
If you think I'm making this up, no.
Check it out, these are real cases.
And I think it's created this mindset,
I mean it's like Stephen King's Christine.
That we have these automated
vehicles going, who lives, who dies?
And in reality, I think what we're doing
is we're programming cars to avoid
these sorts of collisions.
And in fact the arguments at
cocktail parties around here are,
can you believe these self-driving cars
are going 25 miles an hour down the road?
And I can't get past them.
>> I must say, I've had my first
self-driving car road rage incidents.
Because those of you who live, I'm
sorry I'm gonna take some of your time.
>> No, go for it.
>> I was driving on El Camino, and I can
deal with one slow Google self-driving
car, but there were two of them.
>> [LAUGH]
>> Taking up both lanes on El Camino,
and they were going the exact speed limit.
Which was, let me just be honest,
15 to 25 miles per hour less
than I felt safe going.
>> [LAUGH]
>> And so,
I decided on the spot to give the Google
guys some data about road rage.
>> [LAUGH]
>> By the way I have a very old car, and
so I don't care what happens to it.
I got as close to the car as I could get,
six inches.
And followed it to say, I hope I can
really mess up their algorithm, so
they can go back and figure out we
really should have made it go faster.
So, it was enraging, and I'd like to
claim an early road rage incident.
>> Well, so I probably should
then go back to my question of,
what are the big technical challenges?
It's you.
>> [LAUGH]
>> [APPLAUSE]
>> I think my producer wants me to say
that this is the Future of Everything and
I'm talking to Chris Gerdes.
Even though you wouldn't be able to tell,
because I've been talking too much.
He's the director of the Center for
Automotive Research.
Go on.
>> Great, so
I wanna come back to this
road rage that you described.
And this is exactly the sort of ethical
discussion that we have to have.
So, I can program a car to be ridiculously
safe and to crawl down the street.
So I know what the limits are, and this
is where our research has evolved from.
Okay, can we get the car to do everything
it's physically capable of doing?
Great, now how do we be proactive?
How slowly can we go down that street and
still avoid that pedestrian?
And the answer is,
well I could go very, very slowly,
but then I've got people
being driven into road rage.
So, how do we answer those questions?
Those are the real ethical
questions around self-driving cars.
Not, who kills who?
It really is a question of, what are our
values around safety, legality, mobility?
How do these things
play off of each other?
If you look at
the California Vehicle Code,
you'll see that you're not supposed to
cross that double yellow lane divider.
But almost everybody in the hills
around here, that I've seen,
will do it to give extra
space to a bicycle.
Well, so do we program the automated
vehicle to do what humans do?
Or do we program it to follow the law?
These are the sorts of ethical questions
that I think we need to answer, so
that societal acceptance of
these vehicles is quite high.
>> So that forces me to conclude,
and tell me if this is right,
that you have to put together incredibly
interdisciplinary teams on your projects.
This cannot just be about
mechanical engineers.
Or do you have incredibly,
ethically aware mechanical engineers?
Who does this work, and who is doing it?
>> Yeah, so we have a lot of
mechanical engineers in my group, but
we work very actively with
other disciplines as well.
So, we've had philosophers
here as visiting faculty.
Yeah, Patrick Lin from Cal Poly
has been here on leave with us,
to study some of these issues.
There's a number of other philosophers
that we've worked with as well.
We had a program in the legal
aspects of automated driving,
which brought Bryant Walker Smith here for
a few years.
He's now a professor at
the University of South Carolina, but
remains a collaborator.
>> So, I think a lot of us are pretty
excited about this, but we're nervous.
Are there things that are showstoppers?
In other words, you've described
that there are these problems, and
there are ways for us to approach them.
Are there things on the road map?
I'm sure you have in your head
a road map of what needs to happen.
Which are the big ones that could
really be a problem, and be a no-go for
self-driving cars?
I mean, you say they're already out there,
but I guess there could be a point of no advance.
Like, we're not gonna go
further than where we are.
Or are there actually no such barriers,
and
it really is gonna be a pretty smooth
transition into this new world?
>> Well, I think there are barriers and
I think really what society thinks of
these vehicles, how society as a whole
responds, will be very important.
Because automated vehicles
will not be perfect.
There will be accidents.
In fact, that's one of the questions
we're trying to answer.
Right now, humans compensate for
other humans' mistakes.
And, to what extent does the automated
vehicle have to do that?
>> Like anticipating that he's gonna
swerve, she's gonna swerve, so
I better swerve?
>> Exactly, or even if a pedestrian
steps out in front of you.
It may not be physically possible to stop,
so what does the automated vehicle do?
What is its best effort?
With humans we judge whether or
not that was reasonable.
Does the same standard apply?
And so I think how these
things are programmed, and
what happens with these initial accidents,
are going to be extremely important.
Who can talk about this?
And how do we make sure that the benefits
are spread throughout society?
Right now there are a lot of very
expensive sensors on these vehicles.
But there's no reason, a few years from
now, that the sensor package can't get
down to a couple of thousand,
or even a few hundred dollars.
So, you start thinking about that.
I can have an automated vehicle, let's
say it costs me $40,000 to manufacture.
I only get 150,000 miles out of it.
It can carry four people in
a shared mobility concept.
I'm talking probably about $0.10
a mile transportation cost.
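A quick back-of-the-envelope check of that figure, using only the numbers given in the conversation and ignoring energy, maintenance, and insurance, shows how the per-passenger capital cost alone lands in that ballpark.

```python
vehicle_cost = 40_000      # dollars to manufacture, as stated above
lifetime_miles = 150_000   # miles before the vehicle is retired
passengers = 4             # riders sharing the vehicle

per_vehicle_mile = vehicle_cost / lifetime_miles     # ~$0.27 per mile
per_passenger_mile = per_vehicle_mile / passengers   # ~$0.07 per mile

print(round(per_vehicle_mile, 3), round(per_passenger_mile, 3))  # 0.267 0.067
```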
Now, if I am suffering
from a lack of transportation,
there is a study from Harvard that showed
that in fact, access to affordable
transportation was one of the biggest
barriers to people escaping poverty.
If I could get transportation from my
house to my workplace at $0.10 a mile,
that is a game changer.
And so, I think it's really
important that we think about,
how are these benefits
spread through society now?
>> So you said the workplace, and
that gets me to this one issue that
everybody wants to know about.
Are we gonna be putting too
many people out of jobs?
And I think you could think
about truck drivers, and
you could think about taxi drivers.
What's the deal there?
Should we be worried about that?
Is it not the problem that
people are worried about?
In promoting this show,
I went on to the Sirius truckers channel.
And all they wanted to talk about
was the three and a half, or
four million jobs that self-driving
cars were gonna take away from them.
>> Yeah, and so
I think that this is a real issue, and
I think we should be talking about it.
I think, in the near term,
there may be some opportunities for
improving some of these jobs.
So look at trucking, for instance.
There's a shortage of truck drivers.
Mental health issues are actually
40% greater in long haul truck
drivers than in the general population.
These are tough, tough jobs.
But now, if you think about if I'm
actually driving one truck, and
perhaps I have other trucks
following me that are now automated,
the productivity can increase.
I still have a human in there for handling
certain situations that are difficult,
for handling loading and unloading.
And maybe I can design this in such a way
that that's actually a more enjoyable job
in the logistics or such,
that this person gets home and
sleeps in their bed every night.
Because they can hand
off to somebody else.
So, I think there are ways of doing this.
It's likely not going to happen
unless people put effort into it, but
I do think there are ways
in the near term.
To make some of these jobs
dependent upon driving and
transportation more fulfilling,
and perhaps easier on people.
In the long run, I do have concerns
about this, because I think,
in my experience talking with people
who drive trucks, who drive taxis,
this represents a fairly important
rung on the economic ladder. And
I almost never talk to somebody
who's a taxi driver or driving for
companies like Uber or Lyft who
isn't doing something else as well.
And so, if you eliminate those
opportunities, what have we done?
I don't know but
it's an important question to ask.
>> So we're talking with Chris Gerdes.
This is The Future of
Everything with Russ Altman and
we're at Stanford University and SiriusXM.
Should we think about self-driving
cars as basically autonomous robots?
And are there rules for
how robots should behave
that we can borrow from?
>> Well, so
one of the classic sets of rules for
robots are Asimov's
three laws of robotics.
So a robot should not harm a human being,
or
through inaction allow them to be harmed.
And then, the robot should obey orders
given to it by a human unless that
would conflict with the first law,
for instance.
We've looked into those.
It turns out that as sort of a code of
ethics or a complete set of programming
that doesn't work, and that, in fact, was
kinda the whole point of Asimov's stories.
He had these breakdowns as these
laws attempted to be applied.
It's really impossible unless we incite
even more road rage on your part to really
have a robotic vehicle
that cannot cause harm.
Because I can jump out in front of it.
When it simply doesn't
have enough time to react.
And I really wanna give some thought
as to whether or not human override
is really the thing that we want and
this is an interesting thing.
So out on the race track at Thunderhill,
when we've taken our self-driving
race car out there, there have been a couple
of times when I've seen something that I
wasn't quite sure about, and
I hit the big red button, and in the time
it took me to take control,
the car ended up going off of the track.
Because I couldn't take control quickly
enough and get myself into that situation.
So in retrospect, I would have been better off
to just ride it out, and stop it later.
This sort of sense that I could
do something was strong, but
actually misguided in that case.
And so, I think there are some laws but
not necessarily ones that we want.
And this is a subject of some
really amazing research right now.
>> I wanna switch a little bit to
the business of cars, because I think
the car business is something that many of
us learn about because a lot of us have
bought cars, and we know that there are
traditionally the three big American ones,
the three big American, Detroit-based companies.
There are these great companies in Europe,
great companies in Japan.
Are they all paying equal
attention to this opportunity?
Or are we going to see a new
vanguard of companies rising up?
Because you hear about Google and Apple.
I do not think about Google and
Apple as car companies.
But there's rumors that
they care about this.
So is this gonna be a re-shuffling of who
does cars in America or in the world?
>> I think, absolutely.
You're gonna see a reshuffling.
You see a lot of Silicon Valley
startups looking at this.
You see a lot of established
tech companies look at this.
You see every major automotive OEM
figuring out how to play this,
and what's interesting is
over the course of the last,
maybe 9 to 12 months, you start to
hear the same message emerging from
all of these players that we're moving
towards a future of shared mobility.
And the future is going to be around
providing mobility as a service,
as opposed to selling individual units.
So even the major OEMs are on board
with this sort of general thrust,
and they're acquiring companies,
building out their own units, or
trying to find some sort of hybrid design
where they can move quickly into these
fields, while at the same time leveraging
the resources of a large manufacturer.
>> Just so I can understand,
that means that it's possible that I will
never actually own a self-driving car,
I may use them as part of some kind of
pooled resource, is that what that means?
>> That's right.
Certainly in the short term if you think
about it from an economic standpoint.
If I am going to provide a self-driving
car that makes sense to you as a service,
I can actually put a lot of sensors on it.
So as long as I don't have
to have a human driving it,
I can make an economic argument
that this is going to pay off.
I don't have to get something at
a price point that you would purchase.
I just have to develop
something that can provide you
miles of mobility at
a price that is competitive.
>> So this has been fantastically
interesting, thanks so much.
We've heard that this is
the definition of disruptive.
We have had, for many,
many decades, a model about cars.
You buy them, you drive them,
they wear out, you get rid of them.
You drive them. But now we're thinking
about how these players will change,
how the capabilities of the cars,
as you saw and
heard, will change, and even
how the economic model will change.
So this is a brave new world for
cars, and it's happening right now.
So thank you very much, and thanks for
your work and sharing it with us.
>> Yeah, my pleasure.
>> [APPLAUSE]
>> So
this has been great and
we're now gonna invite Fei-Fei back up and
we're gonna have questions
from the audience.
So there are two microphones, one in
the middle of each aisle on the sides.
We're gonna do a little bit of a shuffle,
Fei-Fei, meet Chris. Chris, meet Fei-Fei.
The lights are up and
we'd really like to know what's on your
mind about these two discussions we had.
Just to remind you, we talked first about
AI, especially about the revolution
in ability to process images,
and then sensor data for
the care of elderly, and then we talked
about autonomous self-driving cars.
So do we have questions from the audience?
We'll go back and forth.
Please get up and go to the microphones.
I'm gonna be switching back and
forth between microphones.
Yes?
>> Yes, right now we have carpool lanes
for electric vehicles and for carpools.
Do you think we might have, someday, a
transition where we have similar lanes for
automated vehicles, that might be able to
move in a pack together, where they're
going down the freeway faster
with their bumpers closer together, so
that we can make better
use of the lane?
>> So that was actually the automated
highway system concept, and
that was what got me started in this.
I did my PhD work working
on a system like that.
I think people have in general
moved away from that because of
the infrastructure demands, and the fact
that if I'm going to do that I need
to have the ability to have that access
to that space on the highway and
cars specifically design for that purpose.
There are some efforts, in fact, I helped
cofound a company, Peloton Technologies,
that is looking at platooning trucks.
But a lot of the focus is on being
able to do that with automated
vehicles in mixed traffic
on ordinary lanes.
So there would be
the possibility of doing that.
It would be a lot easier, I think,
than a lot of the problems
that people are looking at.
But it would be hard,
I think, to argue for
the right of way in many cases for that.
>> Hi, thanks for
all the information, this is great.
The question I have is that the approach,
or the feeling I get, is that this is
inevitable, and I feel like the
frothiness about artificial
intelligence from the 70s was
the ability to see a block in a room and
move around the carpet,
and then it never got there.
And two questions.
One is, selfishly, for
all the people sort of driving,
if it is inevitable,
when do you think this would happen?
Is it a couple of years,
is it like 10 years, or 20 years?
I'm, sort of, thinking selfishly about
retirement, and not wanting to drive.
And, the other thing is, are you
worried that maybe it's not inevitable?
And that this is going to tap out.
And what we're going to effectively get is cars that just don't drift out of lanes
and don't run into each other, but
you're pretty much still driving.
>> You want to take a shot [INAUDIBLE]
>> I think that's [INAUDIBLE]
>> Okay, so-
>> [LAUGH]
>> In terms of when this will happen,
I don't know.
As I mentioned, I think this is going to
happen on a few streets in select cities
and roll out from there.
I do think that if you look at
software and the economics,
once somebody is confident sending
a car out without a driver in it,
the proliferation of this will be
much faster than people expect.
Because I can reproduce software at virtually zero cost. As for whether it may not happen, actually, I think the greater risk is that the technology happens, but we don't sort of organize this well enough.
We don't organize public policy in
a way to make sure that this technology
leads us to the society
that we wanna have.
And I think that's the real risk.
I think the technology will come, but
the question is, how do I make sure that's
accessible, affordable,
sustainable transportation for everyone?
And not simply more automated vehicles
clogging up the existing roadway?
>> On the left here?
>> Yeah, hi, I'm Jeerod,
my question is for Fei-Fei.
So the concept of this mass monitoring,
unintrusive ways to kind of
monitor certain situations and
habits of people, very interesting.
I have been, for the longest of time,
researching something in terms
of using personal devices.
Like your mobile phone has so
many sensors already built onto it.
My dad suffers from Parkinson's, so
I've been kind of trying to figure out
between his quarterly
visits to the doctor.
That's the only monitoring that a doctor
does, every three months, right?
Now if you have a sensor built into
your phone, how fast he's walking?
How fast did he swipe
his phone this morning?
What's the vibration in his voice?
Is there anything down this
lane that you've heard of?
I'm just trying to figure out if there
are teams already working on this concept?
If there's some way I can collaborate and
maybe work with them?
So you probably know what's
happening in this field.
That's what I wanted to hear from you.
>> So what you're suggesting is really
in the realm of wearable devices for
healthcare.
And from companies to universities, there are a lot of people looking at that.
>> So the question is, again,
to do with the amount of data
that sensors can gather.
And how you apply artificial intelligence
to this, to kind of derive trends and
figure out whether the dosages are right.
And this is like daily monitoring, and
then you apply that with
artificial intelligence.
There's so much more that can happen.
>> Yes, definitely, I mean, there are already a lot of start-ups mushrooming everywhere, using either mobile data or wearables, Fitbit, Apple Watch, this kind of data to monitor some aspect of health, be it nutrition, diet, fitness, and other habits.
So it's already happening, there is
ongoing research also even at Stanford.
I know we have a newly established center dealing with wearable devices.
And we can talk offline about
the specific technology, but
this is very much on people's mind.
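As a hedged sketch of the kind of between-visit trend monitoring the questioner describes, here is a small Python example; the metric (daily walking speed inferred from phone sensors) and the thresholds are illustrative assumptions, not a validated clinical tool or any specific product.

# Sketch of passive trend monitoring between quarterly clinic visits.
# Flags a sustained drop of a daily phone-derived measure below its baseline.

from statistics import mean

def flag_decline(daily_values, baseline_days=30, recent_days=7, drop_fraction=0.10):
    if len(daily_values) < baseline_days + recent_days:
        return False                          # not enough history yet
    baseline = mean(daily_values[:baseline_days])
    recent = mean(daily_values[-recent_days:])
    return recent < (1.0 - drop_fraction) * baseline

# Example: a month of stable walking speed (m/s), then a gradual decline.
history = [1.2] * 30 + [1.15, 1.1, 1.05, 1.0, 0.98, 0.97, 0.95]
print(flag_decline(history))  # True -> worth surfacing to the care team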
>> Thank you.
>> Hi, I'm just curious about the neural
nets training other neural nets.
Is there research in that area or
any applied work?
>> Absolutely. In fact, if you look at Google's products today, almost every one of them, Google Photos, the Google search engine, Google advertising, all of them involve a neural network of some sort. So the newer name for that field is deep learning, but it's rooted in neural networks. And it really is infiltrating many, many areas of applied AI.
>> In terms of neural nets
training other neural nets,
I've heard about these
adversarial systems.
Could you describe those
because they're quite amazing.
Where they basically have a little battle.
>> Yes, so the latest is the generative adversarial network. You have one network that generates samples, and another network that tries to achieve, for example, an image classification task using these generated samples and improve its performance. And there's been a flurry; the year 2016 in deep learning is the year of adversarial networks.
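For readers who want to see the mechanics, here is a minimal toy sketch of the adversarial setup in Python with PyTorch; in the canonical formulation the second network is a discriminator that tries to tell generated samples from real ones, and everything below, the architectures, the 1-D toy data, and the hyperparameters, is an assumption for illustration only.

# Toy generative adversarial network on 1-D data: a generator makes samples,
# a discriminator tries to tell them from real data, and each trains against
# the other. All sizes and hyperparameters are arbitrary illustrative choices.

import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 3.0     # "real" data drawn from N(3, 0.5)
    fake = gen(torch.randn(64, 8))            # generated samples

    # Discriminator step: push real toward 1, generated toward 0.
    d_loss = bce(disc(real), torch.ones(64, 1)) + \
             bce(disc(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call its samples real.
    g_loss = bce(disc(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(gen(torch.randn(5, 8)).detach().squeeze())  # samples should drift toward ~3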
>> Yeah, it's a very exciting name with
the two networks battling it out, yes?
>> Hi, so my question was about,
we talked at the early part of
the show about the definition of AI.
And one of the accepted definitions
is the Turing test, right?
So 30% of the time, if you can't distinguish it from a real human,
then for all intents and purposes,
it is an intelligent system.
>> Just for background, the Turing test,
very briefly, is you have a curtain.
There's an agent on the other
side of the curtain.
You can't tell if it's a human or
a computer.
That means that computer
has passed the Turing Test.
>> 30% of the time.
>> I've simplified it, but
that's the idea, yes, go ahead.
>> Yeah, 30% of the time, so the question
is, would we get to a higher threshold?
So for example, pick a number, 50%,
70%, 80%, so that's one question.
And what would lead to that,
what would lead to real intelligence?
We talked about phones and
medical applications.
I just heard on the BBC the other day
about an application called Babylon
out of the UK.
It's an application powered by AI that you speak to like you're speaking to your doctor, right?
And it uses NLP, and deep learning,
and all these other things.
But the question is,
where does the Turing test, the benchmark,
the AI, how far would we push it?
It's only recently, I think it was only three or four, five years ago, that a computer sort of passed for a real human, or an AI passed the Turing Test, in other words.
They've been doing it regularly, but I'm
just curious what you thought about it.
What will actually lead to it,
what kind of deep learning?
You talked about GANs you talked about
CNNs, you talked about [INAUDIBLE].
There's other things going on,
obviously, so
just wanted to get your thoughts,
thank you.
>> So first of all, as an AI technologist, I take the Turing test much more figuratively than literally. I think Turing's biggest contribution, in addition to being one of the greatest computer theorists, is really this inspiration of what a thinking machine is.
His very specific setup, that curtain with a machine and a human behind it and the natural language test, I think is really only one aspect of machine intelligence.
Whether we use the 30% threshold or
the 80% threshold, itself,
is not necessarily suggesting
a full intelligence.
In fact, one of the greatest joys of working in the field of artificial intelligence, to me, is that it constantly forces me and my field to think about what exactly intelligence is.
And this is still an ongoing question.
One of my favorite ways to compare humans and machines is that machines are really,
really fast, accurate, and stupid.
Humans are really, really slow,
inaccurate, and brilliant.
Because the intelligence that's packed into our brains by evolution is extremely complex.
It's not just computing for
a specific task.
It's socialization, it's compassion,
it's emotion, it's creativity.
It's intention,
it's a whole package of that.
So we do not have a Turing test or
any test that measures that
complexity of intelligence.
So that's one important
clarification I want to make.
>> So Brad Templeton actually has
proposed a test for autonomous vehicles.
He actually said you can tell the vehicle is truly autonomous when you ask it to take you to work and it drives you to the beach.
>> [LAUGH]
>> Sign me up for that one.
>> [LAUGH]
>> So
as an expert on autonomous vehicles,
what would you speculate?
Would you think it's an easier problem to build autonomous planes, and commercial ships, and trains?
>> That's a really good question and
in fact when I was in the Department of
Transportation I worked
with all of the agencies.
There are some interesting proposals.
Rolls-Royce has a video online about a fully automated ship, a container ship run from shore, and what those operations might be like, which is actually pretty detailed in terms of how they diagnose problems with a team on shore.
So that is a big capital investment, and
the shipping industry is not doing
particularly well financially.
So there are some issues,
barriers there, financially,
but in some ways it is an easier problem
to remotely monitor and automate that.
With aircraft,
there are some amazing designs for
light aircraft that people
have looked at that would be,
in fact, fully automated and
would not require a pilot's license.
Some of these are electric, with a 60- to 70-mile range.
I think there's real sense there
as a transportation solution.
There are big barriers with our
existing regulations for that, and
that was one of the points that I drove
home when I was there, was that we should
be talking not only about unmanned drones and manned ground vehicles, but we need to think about things that would be occupied but uncrewed in the air.
>> Hi, many of us have felt or experienced
an increase in traffic in the Bay Area.
So I like that you were talking about,
let's just not put a whole bunch
more autonomous cars on the road.
So what kind of work is being done on
systems to help coordinate carpooling our kids to school, getting people to work, getting elders to their appointments?
And how are they going to work with
the people that are developing the self
driving cars?
>> So there are a number of
communities that are looking at
different ways to bring technology
to solve real problems and
one of the things that's nice to see is
sort of starting to put the need first.
And then look for the technology second.
And one of the great examples of this is the Smart City Challenge, which the Department of Transportation sponsored.
So they went out to cities and
basically said, we'll give you
$30 million in a grant form.
Bring us your ideas.
And the winning city was Columbus, Ohio.
And one of the aspects that was really
compelling in their application was they
said, we've got this neighborhood,
the Linden neighborhood in Columbus
with an infant mortality rate which simply
should not exist in the developed world.
It's incredibly high.
So how do we tie transportation together with medical appointments for people who live there? How do we follow up if people don't make it? How do we make sure that they're arranging their transportation at the same time they make their appointment? And can we actually demonstrate measurable impact on this statistic in a few years to show the outcomes of it?
And I think that's an example of the type
of thinking which is really necessary to
sort of get that need out there, and then
figure out how can technology meet that.
And it's something I
like to see much more of.
>> Chris, following up on your comment on human-assisted driving, it seems over the next ten years the greatest benefit of this technology will be to reduce the accident rate among human-driven cars.
So as an engineer,
I've watched these systems develop,
lane assistance and so forth.
Can you help me understand why the one thing that isn't talked about is speed control, which is technically easy and would be the one thing that could really reduce the accident rate among human-driven cars?
>> That's a really interesting point
as we start to think about vehicles and
we think about, well, how do I introduce
an automated vehicle out on the highway?
Is it going to follow the speed
limit when nobody else is?
Do I allow the automated
vehicle to go faster?
Do I get serious about speed limits?
We had a visit a couple years ago
from the ambassador from France and
he was sharing a story that, France
apparently decided to raise their speed
limits and then enforce them strictly.
And he said it was quite a surprise
going back and realizing that, in fact,
that sort of buffer no longer existed.
So that was an interesting example of
a country that had decided to do that.
So I think there are a number of
approaches as we think about how to get
automated vehicles out there.
Not sure that speed is necessarily
the most popular solution,
even though it would save lives.
>> Okay, but my reflection was, why don't we put these on human-driven cars?
>> Yeah!
>> Well,
that's the one area they don't talk about.
>> Well, actually with heavy trucks,
they are talking about that.
So that is, in fact, a rule currently in the rulemaking process, I believe, about speed limiters for heavy vehicles.
>> Just to set expectations for
the people who've been waiting patiently,
we probably only have time for
one or two questions.
So I just wanna throw that out there.
Please.
>> This is a very simple question and
this question could be answered by
Fei-Fei or Chris, or both of them.
The question is about developing
humanoid robots that can drive
the conventional automobile or
any other vehicle.
Is anyone doing that?
Anyone developing humanoid robots that
can drive conventional automobiles?
>> I'm aware of people doing a lot of
humanoid robots and also cars that drive.
>> I am not aware of places
doing humanoid robot driving for
the sake of driving and
I'm trying to think why would you do that.
>> [LAUGH]
>> So they can get out of the car and
make you dinner.
>> I was thinking they
could take us to the beach.
>> So here's a shameless plug.
>> We're going to have
a guest on the show,
obviously not tonight,
talking about humanoid robots.
Professor Oussama Khatib from Stanford.
So we'll be addressing the issue of what
they can and can't do in future episodes.
[CROSSTALK]
>> And there was a DARPA challenge
actually like that,
with the idea of creating a robot
that could do some of these things.
>> Right.
>> But from the standpoint of automating
vehicles, I think that actually
makes the problem much harder
than controlling the car directly.
>> Thank you.
>> Yes, I'll be brief.
Could you talk about the current
state of computer vision and
the trajectory in the sense that,
do you think it's enough for computer vision to supply the sensing for automated vehicles and not need LIDAR?
So I guess, in short, is Tesla right or
are the rest of them right?
>> [LAUGH]
>> Well Tesla-
>> At least two opinions on the-
>> Tesla did have an accident.
So when you talk about vision, you're really talking about RGB and the visible spectrum, and I think one of the very nice things about that is it's very cheap and it's very broad. But it does have limits, just like human vision: on foggy days, with heavy rain and snow, or in some particular lighting situations, you have a lot of issues.
>> And the same limitation might
happen in this type of sensor.
So my personal take on self-driving cars, or any machine that needs to sense, especially when the stakes are so high, is a hybrid approach. Of course, a lot of the push in terms of sensor technologies for self-driving cars is to reduce the cost of LIDAR, because LIDAR is extremely expensive right now. And I think cameras come in handy to supplement, but my personal hunch is that we're going to see the cost of LIDAR come down, and the working self-driving cars will have hybrid sensors.
>> I think, obviously,
you're not gonna get traffic lights
with anything other than a camera.
So cameras are sort of a part of any
automated vehicle that's gonna
go in urban environments.
There are a lot of interesting advances with very high-resolution radar. A number of companies are working on this, as well as solid-state LIDAR.
So the sensor technologies that we have
to choose from today are probably not
going to be the palette that we have
to choose from five years from now, and
the cost is coming down dramatically.
So at some point, you have to kinda ask,
but if it costs $100 or $200 and
I can bring in multiple sensors and
add redundancy, why not have everything?
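As a hedged sketch of what a hybrid, redundant sensor suite buys you, here is a toy Python late-fusion rule; the rule and every threshold are assumptions for illustration, not how any production vehicle actually fuses its sensors.

# Toy late fusion across a hybrid sensor suite: accept an object if any one
# sensor is very confident, or if at least two sensors weakly agree, so a
# modality degraded by fog or glare can be covered by the others.

def fuse(detections, strong=0.9, weak=0.5):
    if any(conf >= strong for conf in detections.values()):
        return True
    weak_hits = sum(1 for conf in detections.values() if conf >= weak)
    return weak_hits >= 2

# Clear day: the camera alone is enough.
print(fuse({"camera": 0.95, "lidar": 0.40, "radar": 0.30}))   # True

# Fog: the camera degrades, but LIDAR and radar still agree.
print(fuse({"camera": 0.20, "lidar": 0.60, "radar": 0.55}))   # True

# A single weak return with nothing corroborating it.
print(fuse({"camera": 0.10, "lidar": 0.55, "radar": 0.20}))   # False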
>> Hi, so you talked about some of
the decision dilemmas that an AV faces,
like whether to kill group A or group B.
>> I said that that's not
a dilemma it actually faces.
>> [LAUGH]
>> To me,
all the situations that I've read about
seem a little bit more hypothetical.
What about a more practical situation
of four AVs on a four way stop sign?
>> Well, that's one that Google has
talked about as being one that requires
a little bit of negotiation because it's
one of those situations where we tend to
use subtle cues.
For instance, if your algorithm is
programmed to wait for the other cars to
actually stop in California,
it may be waiting for a long time.
>> [LAUGH]
>> So
that's one of these areas
where social aspects come in.
I completely agree,
there's some really interesting
scenarios that are more realistic
in terms of decision-making.
One that I like to use is if
you have a van that's blocking
the view of an intersection,
and you have a green light.
How quickly do you go, knowing that there could be a pedestrian there?
These are situations that come up once
you start to pay attention to them,
probably multiple times every time
you drive around an urban area.
So I think there's a lot where we
can actually talk about decision
making without creating
fanciful scenarios.
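As a hedged sketch of how that occluded-intersection scenario can be framed as a concrete decision, here is a small Python calculation; the deceleration, reaction time, and visible distance are assumptions for illustration, not parameters from any real vehicle.

# Toy framing of the occlusion decision: cap speed so the car could stop
# within the distance that is actually visible past the blocking van.
# Assumes stopping distance = v * t_reaction + v^2 / (2 * decel).

from math import sqrt

def max_safe_speed(visible_distance_m, decel=6.0, reaction_time_s=0.2):
    a, t, d = decel, reaction_time_s, visible_distance_m
    # Positive root of v^2 / (2a) + v * t - d = 0.
    return (-t + sqrt(t * t + 2 * d / a)) * a

# Suppose the van hides everything closer than about 12 m from the crosswalk.
v = max_safe_speed(12.0)
print(f"{v:.1f} m/s (about {v * 2.237:.0f} mph)")  # roughly 10.9 m/s, ~24 mph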
>> I guess, maybe on a broader level, maybe the question can be posed this way: are the autonomous vehicles, right now, still trying to make decisions assuming that everybody else is a human driver, or are they already thinking about other AVs on the road?
>> I think right now they're trying to
make decisions with the expectation
that the other drivers on the road
may act in a broad variety of ways.
So if the other drivers on the road are going to be automated and act in a much more predictable way, the problem gets easier.
But until we get there people
are having to solve the harder problem.
>> We're gonna keep on
going until my producer,
Bryan, tells me in my ear to stop.
So keep going.
>> Hi, so obviously autonomous cars aren't
gonna become universal immediately.
There will be a transition in which human
drivers will have to drive on the roads
alongside autonomous drivers, and we have
like these small gumdrop-shaped cars from Google, and as Russ mentioned, they go 25 miles an hour on El Camino, which has a speed limit of 35, which is infuriating.
>> Thank you.
>> [LAUGH]
>> You may have more time.
>> [LAUGH]
Also, there was the story of the Tesla accident, which was caused by, and I may have the story wrong, the driver being over-confident in Tesla's Autopilot system, which is not quite at the level of maturity that he thought it was.
So could you comment on
how we as humans can
coexist with these robot drivers
until they become our overlords?
>> [LAUGH]
>> There's no loaded question there.
>> No, actually I was just reminiscing about my son, who when he was younger said, Dad, when the Transformers come to Earth, I think they'll think you're okay because you work with robot cars.
So how do we coexist with this?
Well, I think there's
a couple of questions.
One is sort of sharing the road.
The other is actually sharing
control of an individual vehicle.
And this is an area where I think
it's really important that there be
clear understanding of what is a vehicle
capable of, what are its responsibilities,
and what is the driver capable of.
And in
the Federal Automated Vehicle Policy,
there actually is a very large section
on that sort of consumer education,
and have you done enough to make sure that
people are aware of the limitations
of the system and what it can't do.
So it is a tricky balance when you
try to share control with that, and
making sure that the vehicle is aware of
that, and the human is aware of that,
is extremely important.
With respect to sharing the road, I think,
yes, making sure that these vehicles will
drive with us in a manner that we think
is appropriate, is really important.
Otherwise, you'll have people
starting to pass legislation about
not letting vehicles on certain roads,
or things like that.
And so, this is an area where
I think proactive discussion
is necessary otherwise we're gonna
have all sorts of states and
localities doing different things and
a bit of a mess.
>> [CROSSTALK] Speaking of overlords, please.
>> Sorry, I just want to add one thing.
I generally believe, with the speed of technology improving, that the age of humans and machines co-working and coexisting together has begun. And this is more reason to invest in more basic science research, from technology to laws to moral philosophy and ethics and all this, to really give us guidance in terms of how humans can coexist with machines.
>> [APPLAUSE]
>> Very good.
My overlord has told me that we have
the last two questions on the left where
you've been waiting the longest.
And then, I apologize to those on
the right and to number three on the left.
But please.
>> Hi, I'm Nick, and I had a question for Chris, which is that one of the aspects of autonomous vehicles that seems to me to be a little bit up in the air is insurance. Specifically, I feel like with autonomous cars it seems somewhat unclear whether in the future we'll have insurance for the passengers in a shared vehicle, whether we'll insure the manufacturer of the vehicle, whether we'll insure the programmers of the vehicle, or the software maker, so I'm curious to hear your thoughts.
>> State Farm is drooling.
>> [LAUGH]
>> Well, actually I gave a talk there and
they said, one of the questions asked was,
is this going
to mean that we're writing basically
one insurance policy per auto maker?
And if so,
which agent gets that commission?
>> [LAUGH]
>> Which I thought was a great question
that sort of gets to the heart of how this
upends our thinking about auto insurance.
So right now,
it is an individual policy thing,
because we're insuring individual risk.
There really is no need to insure that individual risk
once you have an automated vehicle.
It really is shifting from individual actions to product liability.
And you already have some companies
that are developing these systems
really stepping up and saying,
if we develop these products,
we will actually be responsible for them.
And if you talk to several people within
the industry, who've worked in this,
they say, the auto industry is used to
getting sued for their products already.
This isn't really such a change for us.
The product does more, but this is a
litigation environment in which we've been
operating for a long, long time.
>> And just to follow up on that,
for this year,
I'm actually on sabbatical working at Google Cloud, where we work with a lot of vertical industries, including insurance and financial industries.
One thing that's fascinating to me as a technologist to observe is that a lot of these deep vertical industries, like financial institutions, are being rapidly disrupted by new technologies, and this is something that we don't experience on the consumer side.
But the fourth industrial
revolution of automation and
intelligent computing is
making changes everywhere.
>> Final question.
>> Hi, good evening, my name is Ankush.
I guess I'm gonna have the last word.
This question is for Fei-Fei,
and I'm gonna [CROSSTALK].
>> [LAUGH]
>> So my question is around computation.
So really good artificial intelligence requires lots of computation.
So we are currently in the world
of classical computers, CPUs,
now we have advanced to GPUs.
Google is doing TPUs; they call them Tensor Processing Units.
And now we are looking at
the advent of quantum computing.
So my question to Fei-Fei is, what kind of AI applications do you think quantum computing would enable?
>> Boy, that is a tough question [LAUGH]. So I am not an expert in quantum computing, so this is from my shallow understanding.
Quantum computing itself is a means to
increase the capacity of computing.
And if you look at today's AI algorithms,
there is a huge limit in
terms of the ability for
the computing hardware to
process the amount of data and
to do inference or
learning in the right amount of time.
So just a very,
very straightforward answer for
that is, quantum computing,
if successfully developed and deployed,
will significantly increase the computing capacity available to AI applications.
But there is more to computing
than just quantum computing.
The computers we use these days follow the von Neumann architecture.
But as devices start to really percolate
and just dominate the whole world,
computing on device in very fast time,
small space, low energy,
and all this is actually driving very
new thinking in computing architecture.
And that's another very exciting
area that will be coming, and
it's intimately related to
the inference of AI applications.
So we're seeing multiple threads going on,
on the computing front.
>> Well, with that,
I would like to draw this show to a close.
I want to thank Fei-Fei Li,
Chris Gerdes, and the audience for
your participation today.
>> [APPLAUSE]
>> Thank you, Russ.
>> [APPLAUSE]
>> Thank you so much.
>> [APPLAUSE]
>> And
as I like to say when I teach, this
ends the formal portion of our program.
Thank you very much, have a good evening.
>> [APPLAUSE]
