[MUSIC PLAYING]
DAVID FREEMAN: Hello,
everyone, and welcome.
My name is David Freeman.
I'm the editorial
director of NBC News MACH.
And I'm also the moderator
for today's panel,
which is entitled
"Self-Driving Cars,
Pros and Cons for
the Public's Health."
So our panelists today
are, to my immediate right,
Jay Winsten, who is the Frank Stanton Director of the Center for Health Communication at the Harvard T.H. Chan School of Public Health and associate dean for health communication; John Leonard, vice president of research at the Toyota Research Institute; and Deborah Hersman, president and chief executive officer of the National Safety Council.
And also joining us is Peter Sweatman, who is the co-founding principal at CAVita, a consulting firm in California, I believe.
So we're streaming live on the
websites of The Forum and NBC
News MACH.
We're also streaming
on Facebook.
This program will have
a brief Q&A at the end.
And you can email questions
to theforum@hsph.harvard.edu.
You can also participate
in a live chat.
This is happening on The
Forum's site right now.
So let's dive in.
There are reports all over about how we can all look forward to this incredible world in which autonomous vehicles interact smoothly with each other and safely with the environment.
And they're going to give
us new levels of convenience
and safety, and save perhaps tens of thousands of lives every year.
And we're seeing the
beginnings of that world.
There are a bunch of companies
who are developing autonomous
vehicles, as we all know.
And some are running
pilot programs
in cities across the country.
But how accurate are those
rosy predictions, especially
in light of the
recent fatal crashes?
A couple of recent crashes
involving self-driving cars--
one, Uber and one, Tesla.
So let's take a look at what
happened with a couple of clips
from NBC News.
- This is the first pedestrian
death by an autonomous vehicle
on a public road.
Today Uber suspending testing
of self-driving vehicles
in Tempe, Pittsburgh, San
Francisco, and Toronto.
The company saying, "Our hearts
go out to the victim's family.
We're fully cooperating
with Tempe police."
JASON LEVINE: We are definitely
moving much too quickly
in terms of getting these
things on the road in a location
where you have pedestrians,
where you have bicyclists,
where you have people pushing
their children in strollers.
- Tesla has confirmed that
one of its vehicles involved
in a fatal crash last
week in California
was on Autopilot right
before the accident.
Tesla says the
38-year-old driver
had received several warnings
to put his hands on the wheel
before his car slammed
into a highway divider.
He died at a nearby hospital
shortly after the crash.
The NTSB is still investigating.
DAVID FREEMAN: So, Jay, I wonder
if you could talk about that.
We see that these investigations, or at least some of them, are still ongoing.
But they do point to these
possible safety concerns.
And I wonder if you can put
that in kind of a public health
context.
What are you thinking about
the safety and benefits
of the technology?
JAY WINSTEN: I think
what we're dealing
with now is kind of a
combination of hope on the one
hand and the hype on the other.
On the hope side, there are
about an estimated 1.25 million
traffic fatalities throughout
the world each year
and millions more serious
life-changing injuries.
If we could only get those pesky drivers out from behind the wheel, in principle, we could prevent all of that mayhem.
On the other hand, on the hype
side, I think both the media
and some of the
manufacturers and developers
have been going a little too far
in setting public expectations, especially in the short and medium term.
For starters, it isn't true
that the average person
in their garage is likely
to have a self-driving car.
That's not the business
model most of the companies
are pursuing.
Rather, there are going
to be commercial fleets
of self-driving vehicles.
There'll be automated taxis
without human drivers.
There'll be shuttles
around college campuses
and around particular areas,
such as maybe the Sunset Strip
or, likewise, retirement
communities, et cetera.
And there will be highly
automated vehicles,
which is very different
from self-driving cars.
Pretty soon,
actually, we'll have
a lot of cars on the road
that can handle situations
like highway driving, especially
on divided highways in daylight
hours and good
weather, et cetera.
But it's going to be a long time
before we get far past that.
It'll probably be
decades before we
have a critical
mass of autonomous
vehicles on the road.
DAVID FREEMAN: So, Peter,
do you agree with that?
Where are we now
in the technology?
JOHN LEONARD: Well, I think
of self-driving as the space
race of the 21st century.
There's just so much
investment from not just
the traditional auto
companies and their suppliers,
but also the big tech companies
and a lot of exciting startups.
But there are different
paths to autonomy.
And we have work to do with
educating the public better.
One path, which we call Chauffeur, is a Level 4 system, where the car does all the driving and the human doesn't really have to intervene at all. And that's the approach that Google and their spin-out Waymo are taking.
And in that technology,
the car really
does all the driving, at least
in a limited domain, a limited
area.
In what's called a Level 2
system, for example, the Tesla
Autopilot or GM Super
Cruise, there the human
has to monitor the car.
And the challenge there is
can the human pay attention
sufficiently.
Especially as these
things get better,
paradoxically, you
need to intervene less.
And that means you're less
watchful to take over.
But the rate of progress
has been stunning.
And the societal problem of
traffic injuries and fatalities
is so great that inaction
would be criminal.
Safety is a great motivator
to push this technology.
But we just have to
figure out the path of how
to do it in a way that maximizes
the benefits for safety,
and also increasing mobility
for the disabled, the elderly,
and so forth.
DAVID FREEMAN: So excuse me [INAUDIBLE]. Let me ask Peter; I want to direct the same question to you.
Where are we now?
And what are we looking
forward to here?
PETER SWEATMAN:
Thank you, David.
Yeah, I guess the
intent of automation
is to replace most or all
of driver functions, that
is, perception and control
functions, exercised
by the driver.
And I think the point
to get straightaway
is there's a tremendous
diversity in the technologies
that are being deployed and
the situations that they're designed for. So you have to consider the range of driver actions that may or may not be required, and then think about the locations out in the roadway system where these systems operate.
We hear about systems that
operate in freeway conditions
and so on.
But could, for example,
an automated vehicle
navigate roadworks, a
complicated set of roadworks,
something like that?
So anyway, there's a fairly orderly approach to this through the five levels of automation that have been defined by the SAE.
And John already kind of
alluded to the top levels there,
four and five.
And that really means
that all driving functions
can be performed by the
machine most of the time
or all of the time.
The interesting
thing about it is
that under the current
federal guidelines for safety
evaluation, and
this is voluntary,
the manufacturer nominates
an operational design domain
for that product.
So there's incredible
diversity in a wide range
of technologies.
That's number one. Number two is that this rollout will occur in stages. We currently have model deployments and trials.
There are going to be two big phases to think about here, what I'd call a transition phase, when the density of highly automated vehicles is going to be less than 50%.
At the moment,
it's less than 1%.
And then eventually, we'll get
to a high density deployment
phase, where we have
more than 50% deployment.
During that transition
phase, there's
going to be quite a bit
of adaptation and possibly
countermeasures required.
But when we get to
full deployment,
we're probably going
to see design changes
in the infrastructure, the
way our systems operate.
Already safety
[INAUDIBLE] eventually,
this is going to be
much, much, much safer.
But there could be new risks
in the transition phase
that we need to deal with.
And I would say that the ability to access data to tell us what's going on in real time is important. Are we developing some kind of virus in our traffic system, through all this technology, that we're not able to anticipate?
So we need real time data
to keep an eye on that.
And I want to give a shout-out
to connected technology.
The USDOT's invested more than a billion dollars over the past decade or more on so-called [INAUDIBLE] technology.
And there's no doubt that highly
automated vehicles will also
be connected.
So I guess to answer
your question, David,
it's early days.
And there's a huge amount of
diversity in the technology.
DAVID FREEMAN: OK.
Well, let's go back.
I want to see--
John's brought a clip here from
what's going on with Toyota.
We got some cars coming around
the track using the Guardian
technology.
JOHN LEONARD: Could I
say something first?
Just to sort of set it up.
DAVID FREEMAN: Oh,
yeah, go ahead.
No, go.
JOHN LEONARD: So I mentioned
earlier, the Chauffeur
and the Guardian, the
sort of Level 4 approach
and the Level 2 approach.
In the Level 2 system, where the car can maybe handle 99% of situations and the human has to monitor it to take over for, say, the other 1%, the way that's set up, people aren't good at monitoring the technology.
But we think there's
a third path where
you have human
driving, and this is
a bit like current
active safety systems,
but bringing in the full arsenal
of techniques from perception,
planning, prediction of
highly automated driving,
so that if the human got into a dangerous situation, the autonomy could take over.
So we call this parallel
autonomy or also
the sort of Guardian system with
the notion that the autonomy is
guarding you.
And if you could have a car
that could stay on the road,
could not hit things,
could not get hit,
you could basically take
a big factor, a big set
of accident scenarios away.
So we have a video clip of
just a demonstration we did
last fall to show this concept.
DAVID FREEMAN: OK.
Let's take a look at the clip.
- Now we're going to
demonstrate our Guardian system.
We're going to
emulate what happens
when a driver falls asleep.
Guardian can tell if I'm paying attention using a camera that's part of the dashboard.
The camera can even
see through sunglasses
in order to see what the
driver's eyes are doing
or if their head is
moving into a position
to indicate that they're
not paying attention.
So, Ryan, whenever you're
ready, why don't you go ahead
and pretend to fall asleep.
[BEEP]
And now Guardian has stepped in.
It's driving the car for you.
And now it will offer, at some
point, to give it back to you.
Why don't you go
ahead and take it now.
One of the most
frightening things
that can happen on the highway
is when a car in front of you
switches lanes to avoid debris.
You have very little
time to react,
because your view is blocked
by the car in front of you.
We have sensors that can
see significantly better
than a human driver can see.
The Guardian is going to take over when a car switches lanes in front of us in order to avoid debris.
[BEEP]
Here, that car switches lanes.
The Guardian decides we
have to switch lanes, also.
And we avoid having a crash.
[BEEP]
Now Guardian has offered
to hand back control.
[BEEP]
And Ryan is taking back control of the car.
So today you've
seen demonstrations
of two basic technologies
that the Toyota Research
Institute is doing research on.
This is all part of
TRI's work to eventually
build a car that can never
be responsible for a crash,
regardless of what
the driver does.
JOHN LEONARD: One thing
I'll say about that
is we added a second
steering wheel to the car
on the right-hand side.
And you might say,
if the goal is
to get rid of the
steering wheel,
why did we need a
second steering wheel?
But we wanted to be
able to experiment
with this human
interaction while still
having a backup safety driver.
So it's really--
we're just sort of
trying to explore the space
of these different styles of interaction.
DAVID FREEMAN: So
it's interesting.
And those technologies look so
incredible and so promising.
But I noticed that you've written down the word transition.
I wonder about that, too.
Debbie, your perspective.
It seems like there's all
different kinds of technology
that could be deployed
or are being deployed
or going to be deployed.
What's your perspective on
how we're going to get there?
DEBORAH HERSMAN:
Yeah, so I think
this is a really exciting time.
I think the technology that
we see in the auto industry is
going to evolve more in
the next 10 to 20 years
than it has in the last 100.
And so I think we're on the cusp
of something really exciting.
From a public
health perspective,
though, we lose more
than 100 people every day
in the US on our roadways.
This is an epidemic that
we need to be addressing.
And technology can
really help us solve some
of these longstanding problems.
And so I've got a
couple of slides
if you could pull
those up for us.
So really our question is, how
do we get to zero fatalities
when it comes to the roadways?
Is it possible?
What's it going to
take to get there?
And can technology be a
catalyst to make that happen?
And you've heard Peter and John
talk about different levels
of automation.
And so for those who are
not familiar with the SAE
or Society of Automotive
Engineers levels of automation,
Level 1 is really
where we are today,
and Level 4 and 5 are those
fully automated vehicles,
where you can think about no
steering wheel, no brake pedal.
But we have a long
way to go before we
get to full fleet penetration
of Level 4 and Level 5.
Jay really talked about it.
We're not going to have these
in our garage as a consumer
anytime soon.
There's a lot of irrational exuberance around the idea that we're going to have fully automated cars.
So let's talk about what
some of the challenges are.
And if we could move
to the next slide.
When we talk about
the opportunities
that we see with
these vehicles, they
don't come without challenges.
Any time we level
up or we change
or we do something different
or we have new technology,
there are going to
be things that we're
going to have to deal with.
Some of those are
known, and some of those
are unknown, depending on where
we are in the kind of lifetime
of rolling out that technology.
So the first one is really about feature inconsistency across makes and models and brands. In California, there are over 50 different companies that have applied for operating authority to test automated vehicles.
They're not all doing
it the same way.
There is no single standard.
Even when we talk about our
large OEMs, our large auto
manufacturers, they
do things differently.
They have different warnings.
Some are haptic.
Some are visual.
Some are audible.
Which ones are best?
How do we make sure
consumers don't get confused?
Performance standards.
The technology is
evolving much faster
than the regulatory environment
can keep up with it.
And so we don't
have set standards
for how things should
operate and how they should
interact with the driver.
Validation.
Even higher levels
of automation require
human beings to validate them.
And so how do we
make sure that we
don't have errors
or introduce errors
into the design or the
programming of these systems?
The human-machine interface. This has been a longstanding issue when we look at highly automated environments, whether you're talking about nuclear plants or aviation: you've got to keep the human being in the loop. That means understanding what is going on in that environment. And because our cars are starting to look more and more like cockpits these days, do people understand the features?
The University of Iowa did a study: 40% of drivers have been startled or surprised by something that their car did.
So how do we keep the human being in the loop, so that they're educated, they know what's going on, and they don't become over-reliant on the technology?
They begin to trust it.
It works really well.
So then they get bored.
They get distracted.
They do other things.
And then there's
lots of conversations
about other challenges,
whether it's cyber,
whether it's data protection.
And I know we're
going to get into some
of those things a
little bit later.
But this is a brave new world.
There's a lot going on.
And there's a lot
of things that we're
going to have to deal with
as we take advantage of this.
But what's the benefit
of all this technology?
And if we could jump
to the next slide.
I would say, it's safety, lives
saved, injuries prevented.
And when we talk about those
40,000 fatalities every year
on the US roadways,
how do we get to zero?
We've worked really hard
to change human behavior.
We're still the
same mark one human
being that we've
been for a long time.
We're still making some
of the same mistakes.
30% of our roadway
fatalities involve
alcohol-impaired drivers.
We know this is a problem.
We know we should be
doing things differently,
but we're not.
And so when we
look at technology,
how does technology help
us break through some
of those longstanding
problems, and protect us,
sometimes from ourselves?
Lives saved is
what the technology
is all about when it comes to
organizations like the National
Safety Council.
We don't want people
to die on the roadways.
We're losing too many.
And so many more
are injured and have
life-altering,
debilitating injuries.
We've got to do better.
DAVID FREEMAN: So thanks very much for talking about this transition. The technology is amazing, but it does sound a bit complicated and confusing.
And I'm wondering, and maybe, John, this is for you first, what do you see? Is this going to be a gradual adoption of all these new technologies?
Or is there going
to be some sort
of swift change at some point
where everything switches over?
JOHN LEONARD: I think
it'll take a while.
And there have been studies showing that new technologies, like electronic stability control, even once they start being offered in the marketplace, can take a very long time to reach a large fraction of vehicles.
And another thing, I think,
is that people love to drive.
People are good at driving.
And it's hard for
me, personally,
to imagine a future where
there is no human driving.
I think that we're
going to have to have
the mix of highly automated
and human driven vehicles.
And that's going to be
around for a very long time.
DAVID FREEMAN: I've
always kind of assumed
that it was going to be
illegal to drive at some point.
JAY WINSTEN: We can't afford the
luxury of letting humans drive.
DAVID FREEMAN: But
why wouldn't we?
Why won't human drivers be
needed if these technologies
can be perfected?
JAY WINSTEN: Well,
I've had folks
in Silicon Valley
say, well, people used
to ride horses on the roads.
And now they go to horse farms
for doing it for pleasure.
They'll go to racetracks.
I don't know.
I think there's
something emotional
and in our psyche about the sort
of connection with the machine
that we have when we're
driving that I think--
like, I love cars.
I love driving.
And I think a lot
of people are not
going to want to give that up.
And the challenge, I think,
is to bring the safety
in to both the human driven and
the fully automated regimes.
That's where I feel, personally.
DAVID FREEMAN: Peter,
I wonder, what's
your perspective on that?
Is there going to be a gradual
shift to this or will something
happen dramatically at some
point in the next few years?
PETER SWEATMAN:
Yeah, I think there's
going to be a process of
kind of connecting the dots.
We've got these different trials
going on in different parts
of the country.
We need to learn
something from that
and then be able to build
it out on a bigger scale.
And we need to think about
some of the early adopters.
And certainly, one seems to
be shared mobility services,
ride-hailing services
like Uber and Lyft.
So obviously, there's a clear
economic case there for them.
So there's a big push
to go in that direction.
But I think the infrastructure
side's important,
because we need to think about where each type of technology can operate safely.
And I think we've got
a long way to go there.
We heard about and talked about the five levels of automation for vehicles.
But maybe there's something equivalent that needs to be developed on the infrastructure side. Given the incredible diversity in the capability of these vehicles, we need some consideration of how the infrastructure works.
And unless we have
some data, we're
not going to be in a
very strong position.
And I think over the
past two or three years,
we've seen quite a reluctance
on the part of manufacturers
to share data.
So that's going to be
an important issue.
DAVID FREEMAN: Well,
let's talk about that.
I'm sorry.
Go ahead.
JAY WINSTEN: No, I
just wanted to say,
we ought to bring
regulation and public
policy into the discussion.
Because how are we going
to capture that data?
And how are we going
to monitor improvements
on the safety front?
And what criteria should be set that a vehicle needs to meet before it's unleashed on the roads without any human driver in the vehicle at all?
California is, I think, doing
a first-rate job in obtaining
the kind of transparency
that's needed to track what's
happening with each of the
companies that have permits
to operate in that state.
But we have a long way to go.
Should it be a private
sector initiative
to help establish at
least minimum standards?
We almost need a
medical kind of checkup
before a vehicle is
allowed on the road.
It needs a vision
test to make sure it's
going to see what
it's supposed to see,
unlike that Uber
vehicle in Arizona that
resulted in a fatal crash.
We need a mental acuity
test to make sure
that it knows how to
process and analyze data,
and anticipate future actions.
And it needs a neurological
test to test its reflexes.
The reason that you
test on public roads
is to learn and
improve performance.
So we sure don't want to wait until we're nearing perfection before unleashing these vehicles on the road.
But there ought to be
some minimum criteria that
are agreed upon that'll set a
baseline that all vehicles need
to meet and all
developers need to meet.
DAVID FREEMAN: So are
we all test subjects
in a giant experiment?
Debbie, what's your
perspective on that?
DEBORAH HERSMAN: Well, I
would say, I agree with Jay.
It's a little bit like the
Wild West out there right now.
We've mentioned that
over 50 companies
have applied for operating
authority in California.
So there's a lot of people
that are doing things.
They're doing
things differently.
They're not all transparent
about what they're doing.
And, in fact, only two companies, Waymo and GM, have filed with the US Department of Transportation this voluntarily disclosed information about operating autonomous vehicles on US roadways.
But even when you
read that, there's
not a whole lot of
proprietary information
that's contained in
these, because they're
public documents.
And so to some extent, they read much like marketing materials, saying that they're going to be safe and this is what they're focused on.
But we've seen in
many situations,
particularly NTSB
investigations,
that there are some challenges.
And we need to really start
drilling down to address this.
The problem is the regulatory
framework that exists today
isn't ideal for an environment
where technology is evolving
so rapidly that regulations
cannot keep pace with
what's going on.
And if we try to put the
technology that's coming out
into the current
regulatory framework,
where it takes three
to five years to get
a regulation passed,
that technology is going to be obsolete, and we're going to be capping its potential.
We've got to figure out how to do this differently.
DAVID FREEMAN: Has
someone dropped the ball
in terms of regulation?
What needs to be
done right away?
DEBORAH HERSMAN: I'd say, Jay
really raised a good point.
How do we create a
voluntary environment?
But I would say,
the challenge is
there's so much
self-interest here
and there's so much
competition that I think
a lot of the
manufacturers don't want
to engage in an environment
where they're setting minimum
or baseline standards, or
sharing what's going on,
because they're doing things,
they're doing it differently,
and there's not a tremendous
amount of transparency
at this point in time.
And I'd say, from a
public perspective,
that's the challenge.
We're used to a regime
and a world where we said,
you have to do things
in this format.
They have to perform
to this standard.
You have to show
these test results.
We don't have anything
right now that
really fits what the model is.
And we're not
evolving fast enough.
We're giving people
voluntary guidelines.
But as I mentioned,
only two companies
have submitted to the
Department of Transportation
their program.
And again, there's not
a lot of information
even when they do submit.
And so I think we've
got to figure that out.
DAVID FREEMAN: Well, let me
just take a step back a bit.
Is everyone in agreement here
that these autonomous vehicles
are going to save tens
of thousands of lives--
get to zero, as you're saying?
Are we really going
to see that, where
the lives-- people are not dying
on our highways and our roads?
Jay?
JAY WINSTEN: No,
not anytime soon.
And, in fact, during
the transition period
to autonomous
vehicles, the situation
might actually become worse, in the sense that the more highly automated driver-assistance systems are integrated into the car, the more complacent the driver becomes.
And when it's time
for the driver
to take back control, the
handoff from the machine
to the human driver,
they're not prepared.
They lack the
situational awareness.
And the more sophisticated the vehicles become, the more difficult it becomes to take back control, because they've taken over so much and you've lost your situational awareness.
So it may actually get
worse before it gets better.
I think we are going to see
a significant, over time,
reduction in fatalities
and injuries.
But there'll be additional fatalities, probably fewer than the number prevented. That is, some people will be killed and injured by autonomous vehicles who never would have been without them. There'll be different kinds and causes of crashes.
But over time, the curve
will go downward for sure.
DAVID FREEMAN: So, John,
are we right to focus
on the regulatory issues, but
also this idea, the handoff,
this kind of uneasy
balance between humans
and the self-driving technology?
JOHN LEONARD: Well, I think
a lot about the handoff issue
for sure.
In technology, there's something
called Amara's law, which
is that we tend to overestimate
the impact of a technology
in the short-term
and underestimate it
in the long-term.
Think about the internet, cell phones, mobile devices, and so forth.
And so the hope
I have is that we
can deploy some of the highly
automated features in a way
that we gain the experience of
being out there in the world,
but more under a
human driven paradigm,
so that we're
augmenting the driver,
but also capturing the data.
We're in the age where he or she who has the data wins. That's why there are competitive forces to keep the data proprietary: modern machine-learning algorithms thrive on data.
And so the challenge is, how do
you deploy the systems in a way
that you get a safety benefit?
And I certainly would not like to see an increase. I want the numbers to go down. But ultimately, it's that experience and learning and data that will produce the more dramatic improvement in the longer term.
DAVID FREEMAN:
This is for Peter.
I'm wondering, are there
misconceptions out there
among the American motorists
about what these technologies
are going to do for us?
Or do we kind of have a
reasonable understanding?
PETER SWEATMAN:
No, I don't think
we do have a reasonable
understanding.
I think [INAUDIBLE] has
shown that folks don't really
understand what these
technologies are.
That's not surprising, because
as I tried to say earlier,
there's a huge range
in these capabilities.
[INAUDIBLE]
infrastructure are we
talking about that operation.
And the survey showed that less than 50% of people really want to ride in a so-called driverless vehicle.
So we've got a long way to go.
And we tend to
think that education
can go a certain [INAUDIBLE]
in helping this situation.
But really it's going to be
individuals' direct experience
with automation that's going
to kind of win them over.
And so, for example, if they
have a positive experience
with ride-hailing services
that are automated,
then they kind of naturally
will learn to appreciate that.
They don't need to understand the underlying technology.
They just know
that they like it.
And we've always said that this technology can't be forced [INAUDIBLE] on consumers or on the community.
People [INAUDIBLE]
like it or accept it.
They have to love it.
Otherwise, this is such
a huge transformation,
it will never happen.
So that's the
challenge right now,
to increase the love
for this technology.
DAVID FREEMAN: One thing--
oh, did you want
to say something?
JAY WINSTEN: Oh, no.
DAVID FREEMAN: Well,
I wonder about, you
talk about the importance of
these different data streams,
and all these
autonomous vehicles are
going to be producing lots
and lots of data streams.
And privacy must be a concern.
What are the threats that self-driving cars might bring with these data streams?
Who gets to own the data?
What nefarious purposes could
people find for the data?
JAY WINSTEN: Imagine
what Putin could do.
DEBORAH HERSMAN:
Well, I would offer
that Alan Mulally, who headed
up Boeing and then Ford,
said, "the data
will set you free."
And I think when
it comes to safety,
and again, I want to focus on
this area of how the data can
help us, it is so
important for there
to be a process for
great data sharing,
because we learn so much.
You don't have to have
the fatal crash to learn.
You have a lot of close calls
that you can learn from, too.
And look at the experience of, say, the aviation industry. In the 1990s that sector came together, manufacturers, labor, operators, regulators, and they said, let's share data in a way that lets us see what the outlier events are and what the big trends are, so that we can find out, not just for one airline or one type of aircraft, what we need to pay attention to.
And so when we're talking
about automated vehicles
on our roadways
and the technology,
we don't necessarily have to have one company learn a lesson while the other five keep making the same mistakes because we're not sharing it.
We've got to find a way
from a safety perspective
to have data be shared.
And we've got to make sure that
it's available and accessible
to investigators, to insurance
companies, and law enforcement,
people who might need access to
it to understand what happened.
And I think that's
where we could really
benefit from having some
requirements or standards.
Right now many people are
relying on the manufacturers
to give them the Rosetta
stone to be able to download
data to understand what it is.
And so if we live
in that environment
where everything's
secret, we are not
going to be learning lessons
until it's after the fact,
until it's too late.
There's a lot of value in
predictive data sharing.
JAY WINSTEN: Think of the model
of the FDA in drug licensing.
The manufacturers
share proprietary data,
and it's kept within the agency.
So the transparency
doesn't have to be
public at the level of data.
It can be closely held.
DEBORAH HERSMAN: That's right.
And as an example, when I worked
at the NTSB for 10 years,
we did have requirements
for sharing of data with us
as we were doing the
investigation, but we also
had requirements to protect it,
so that it wasn't shared.
And I think there are instances
where there is tremendous benefit
in being able to create
an environment that's safe,
one that doesn't create a
competitive threat for people
but does give us those
safety benefits that we're
so desperately going to need
as this becomes
a more widespread issue.
Again, we don't want Tempe to
learn a lesson or Pittsburgh
to learn a lesson,
and San Francisco
and other municipalities
not to have information.
And so what's
happening right now
is a lot of small testbeds in
a lot of small environments
that we're going to try
to deploy nationwide.
We need to have a
broader understanding.
DAVID FREEMAN: John,
what's your perspective,
as someone who's involved with
the industry, with Toyota?
JOHN LEONARD: Well, let's see,
so I'm also an MIT professor.
So I'm on leave with the
Toyota Research Institute.
And to the MIT professor
in me, data that drives
research in the public interest
is just really important.
I think there are certainly
complex issues with data
sharing and proprietary nature.
Part of me thinks--
one thing is that
data doesn't have
to be about negative incidents.
You could harvest information
from situations where
the technology intervened
to good benefit.
Also, the kind of near
misses that you mentioned.
And also, maybe individuals
can donate their data
if they choose in the
interest of the public good.
In the same way we
might have organ donors,
we might have people that are
willing to give up information
for the benefit of society.
Maybe there's a new
way to think about sort
of this public
health conversation.
Because the benefits
could be just so great,
if we think really how
compelling the problem is,
and we just don't talk about it.
DAVID FREEMAN: I just
want to say something.
You're talking about
public health, which
is why we're here, of course.
It's easy to think
in terms of reduction
in fatalities and injuries.
But what about downstream
or unintended consequences?
We were talking
before about workers--
drivers will be displaced,
professional truckers,
and so on will be displaced.
What are the things that you
all worry about more, not
fatalities and injuries, but
public health issues downstream
from the adoption of
these technologies?
Anyone?
JAY WINSTEN: Well,
there are over 3 million
professional drivers in
the US, truck drivers,
taxi drivers, et
cetera, who stand
to lose their jobs
over the longer haul,
or at least their successors do.
And so there are going
to be dislocations.
There are issues of--
there are benefits and
costs on the pollution side.
By having fewer
cars on the road,
potentially, there
could be benefits.
But on the other hand, if you've
got all of these automated
vehicles continually
circling around waiting
for someone to call for a ride,
we could actually end up--
along with the mix
of regular vehicles,
we could end up with
more cars on the road,
and the problem could get worse.
One other thing I
wanted to throw quickly
into the equation,
how many lives
could be saved simply through
the widespread adoption
of highly automated
driver assist systems?
Could we get there
with a marketing
campaign to get everyone
to want to use them
and to pay for them as part
of the cost of the car?
Forget about autonomous
vehicles, just highly
automated driver assist
systems, could that
get us 90% of the way
there or 40% or 10%?
Or who's done that analysis?
JOHN LEONARD: Debbie has.
JAY WINSTEN: I haven't seen it.
DEBORAH HERSMAN: Yeah.
So there actually are some
great studies in this space.
And Carnegie Mellon in
2016 released a study
that if we had automatic
emergency braking, lane
departure warning, and blind
spot monitor on every car,
we could save
10,000 lives a year.
The Insurance Institute
for Highway Safety
also had an estimate, about
10,000 lives saved per year
with those three technologies
plus adaptive headlights.
So people have looked at
these technologies that are
commercially available today.
But the challenge is that
for many years
they've been options on
higher-level trim lines
or on higher-end vehicles.
And so we don't want
safety just to be
for those who can afford it.
We want safety to
be for everyone.
And I would say, I'll just share
a personal experience with you.
I have a 2005 minivan.
And my husband and I went to
go replace that just a couple
of months ago.
We have three boys.
And it's a very
dirty, stinky Toyota--
or a 2005 minivan.
But we were looking
at actually getting
a used car, a used
minivan, 2015 or 2016
model year just because
of the price point.
And when we went and we
test drove the vehicle,
it didn't have all
those safety features
that I've been
talking so much about.
And I said, oh my gosh,
this is like, I've
got to hold the
mirror up and say,
am I willing to go for
the 2018 model which
has standard on every trim line
some of these technologies?
And I said, you know what?
I have one 17-year-old
that's driving,
I've got a 15-year-old
with a learner's permit,
and I've got a 12-year-old who's
going to be right behind him.
And these technologies are
going to be the things that'll
save my kids' lives.
And you know what?
It was one of
those moments where
we say, how do we accelerate
this technology into the fleet?
And how do we turn
it over faster?
Because cars now actually last
longer than they ever have.
So each decade they're
lasting about a year longer.
So the average age
of a car in the US
now is 11 and 1/2 years old.
And so those kind of
curves for adoption
are pretty well understood.
And so for electronic
stability control,
which is a technology that is
awesome and has saved thousands
of lives-- it was mandated
in all vehicles in 2012--
we're still not going
to get full penetration
of the fleet having electronic
stability control for decades.
DAVID FREEMAN: So are we going
to have a growing rich/poor
divide in terms of the vehicles
that are out there on the road?
Is that what's going to happen?
JOHN LEONARD: Well,
I think the good news
is that you're seeing some
democratization of the technology.
Self-driving is defining
this frontier of really
ambitious work, this race
to get better and
better sensors.
And that means the sensors
that can provide these
near-term benefits
have actually become cheaper.
Cameras are cheaper.
Radars, too.
And so you can see them becoming
more pervasive in fleets,
including the lower cost
vehicles.
DAVID FREEMAN: I just
have one quick question.
And then we'll have some
questions from the audience.
But I just wondered, I
assume everyone's talked
about the trolley problem here.
And then the idea
that the software
is going to be programmed
to determine who's
going to be saved in
terms of when an accident
situation arises.
Will some cars protect
the drivers and other cars
will protect other people?
Or ultimately,
will the algorithm
become kind of similar
across all the models?
Or are different cars going
to have different weights
to who gets saved in
difficult situations?
Maybe that's for you.
JOHN LEONARD: Do
we have an hour?
DAVID FREEMAN: Oh, yeah.
JOHN LEONARD: So I
think the goal is
to avoid any sort of
A versus B situation,
that the car should have--
my aspiration is to have enough
foresight to just stay out
of those situations.
But it's really difficult,
and a very difficult
question to talk about.
It has many dimensions.
DAVID FREEMAN: OK.
All right.
Well, let's go and
do some of these.
We go to some questions
from the audience.
JOHN LEONARD: Or did
you want to add to that?
DAVID FREEMAN: Oh.
I'm sorry.
Sorry.
DEBORAH HERSMAN: Well, I would
just say, one of the challenges
now as we talk about
outside, inside the vehicle,
we're actually doing a
really bad job protecting
our most vulnerable
road users that
are outside of the vehicle.
And so pedestrian
fatalities in the US
went up by 10% last year.
And so when we talk
about these technologies,
it's not just about
inside the vehicle.
It's about pedestrian
detection, and being
able to stop or slow down when
you detect a pedestrian, not
just a car.
And so I think it's
important that we
keep all those things kind
of on the plate and say,
this is what we expect
from this technology,
that it's not just going
to recognize a vehicle,
but it'll recognize a
motorcycle, a bicyclist,
or a pedestrian.
I just read this morning
in the local paper
here that there was a teen
driver who had a learner's
permit who ran over and killed
a woman, hit another couple
and killed their dog, and
they're both hospitalized.
This was in the paper
here this morning.
And after striking them,
the driver hit a building.
A drug recognition expert
determined that the driver
was likely impaired by drugs.
When we talk about
advanced technology,
it's to prevent those
kinds of things,
those catastrophic events.
And those three individuals
that were struck, and the pet,
they were pedestrians.
And so that's the
kinds of deaths
that we're seeing on the rise.
What can we do better?
DAVID FREEMAN: All right.
Questions from the audience.
Yeah?
LISA MIROWITZ: We have
a lot of questions.
But while we're on
that topic, I'll
ask one that came in about it.
The death of the pedestrian
in that Uber accident
really raises ethical issues
about risk and safety.
How will these
cars of the future
make the right
choices on the road?
And how much can we trust
their decision-making process?
How can they be
programmed, for example,
to make the right
choices morally about who
to protect in a crash?
This is more of a question
about moral choices
that the technology would make.
DEBORAH HERSMAN: I would
say, I think sometimes
when we ask a question
like this, human beings,
like the situation
that I just described,
they don't make good choices.
And so when we talk about--
John talking about
the A/B choice, when
we're in that split
second decision
and trying to figure out whether
we hit the car in front of us,
or whether we change
lanes, and maybe we
don't know if there's
a car over there
that we're going to hit
or run off the road.
And we are not really
great in the split
second making those choices.
And a lot of times those
aren't active decisions.
We're not actually going
through the weighting.
We're making a reaction.
And so, I would say,
it's really important
to make sure the design and
the programming is correct.
But I'd say, compared to what?
And I'd say, right now if you
compare it to human beings who
drive under the influence
of alcohol or drugs
or get fatigued or
drive distracted,
sometimes I think when we
are asking these questions,
we're not actually holding
the mirror up and saying,
what good choices are
we making, and how do we
incentivize human beings
to make better choices?
Not just in the split second.
Because getting in the car
when you're impaired by alcohol
isn't a split second choice.
That's a decision that gets made
before you even start the car.
And so, I would say, that's
not about the programming
and the design technology.
But I think that
question is inherently
unfair in many situations,
because human beings do not
make good choices when
they get behind the wheel.
That's why 94% of
fatalities are attributed
to human error or human choice.
And so I think we're hoping
machines will be better
than us, and they will
be able to discern
some of those
situations in a more
hierarchical decision-making
process than we do.
LISA MIROWITZ: Thank you.
DEBORAH HERSMAN: I'll turn it
over to a technology expert,
though.
JOHN LEONARD: Well, let's see,
it's a challenging question.
But I think one of the
benefits is that a machine
can have greater context.
Say you're driving on a winding
road: there can be this almost
foresight and prediction,
using the history of many
cars that drove on that road
before to say, basically,
the car's going too fast,
and to slow down
before those situations happen.
In general, we strive to
get really strong detection
performance with very
few false alarms.
And there are statistical
ways to characterize
how our algorithms work.
And I think all
manufacturers or groups
doing this are aspiring to
get really strong performance
with the ultimate aim of using
almost kind of superhuman
perception, a long sensing
range, a very accurate
discrimination, to
try to avoid getting
in those situations in a sense.
LISA MIROWITZ: Great.
Thank you very much.
Thank you.
Here's a question
from Dana Plude,
deputy director at the
National Institute on Aging.
"The question of self-driving
cars comes up frequently
and seems to hold
promise for allowing
elderly adults to
maintain independence
for daily activities.
But how reliable
are such vehicles
in inclement conditions,
rain, sleet, snow?
And are such vehicles
operational in rural areas
where many elderly live?
My understanding is
that autonomous vehicles
follow routes that are
previously mapped out,
which seems unlikely
in non-urban areas."
And we've also had
a similar question
about the promise
of the technology
to help people with
physical disabilities,
if you could address that.
JAY WINSTEN: Well,
let me just tell you
that the challenge
in rural areas, where
a lot of elderly people
reside, is that the business
models of fleets of
automated vehicles
may not support their
availability in smaller
villages, because there isn't
an adequate number of customers.
So there's going to be an
issue about urban versus rural,
and the reach of these systems.
The question is kind
of worrying about some
of the current concerns about
inclement weather and snow
on the roads, et cetera.
All of that will be
addressed over time.
And so a lot of that
will go away as an issue.
So if I were him, I
wouldn't worry about it yet,
because the vehicles
aren't here yet.
And that's the kind of thing
that's being worked on.
DAVID FREEMAN: What if
maybe in rural areas,
people will continue to
own self-driving cars,
and urban areas will
just use fleets.
Is that what is
likely to happen?
JOHN LEONARD: Well, I
think you can distinguish
between the mobility
as a service market
versus a privately
owned vehicle market.
And they do have very different
sort of cost structures.
And not to get--
I don't really
know the specifics.
But you can imagine that in a
mobility as a service system,
say, here in Boston, you could
have more expensive sensors
and have a higher
utilization rate.
And the hope is in the same
way that the active safety
technology trickles down
to the lower cost vehicles,
that hopefully the sensors
will get less expensive, and so
forth, that you would get
the technology into privately
owned vehicles.
But certainly,
weather is a concern.
And you can imagine
a model where
the systems are available when
the weather's more favorable,
and just not available
on a day when it snows,
and that somehow
people adapt to that.
LISA MIROWITZ: Great.
Thank you.
I'm just going to take
one more from online.
We have a lot of questions.
And you can see them all
if you go on our chat.
I want to have our audience
have a chance to ask a question,
too.
But this one's for
Peter from Jeff Larson.
Peter, what is industry doing
to, quote, "increase the love,"
unquote, for these technologies?
Jay Winsten went to Hollywood
to increase the love for things
like designated drivers.
Shouldn't industry be doing
something similar now?
I think this is a
question for both of you.
Maybe you start with Peter.
PETER SWEATMAN: Yeah,
thank you very much.
Well, as I said
before, I think having
direct experience [INAUDIBLE]
makes the difference.
And if these technologies are
in the high levels of automation
[INAUDIBLE] rolled out
in mobility services.
And on the matter that was just
discussed, about elderly people
having access to these kinds of
vehicles and new technology,
I want to give some credit
to the so-called smart cities
movement.
[INAUDIBLE] as being one of the
exciting things that's happened
over the past couple of years.
If you look around the country
at the smart cities programs,
a lot of them are very concerned
about disadvantaged people
not having access to
hospitals and so on.
And a big part of
these early deployments
is trying to address
those issues.
And if you look at it, one
of the early applications
is low-speed passenger
shuttles, maybe
up to 20 miles an hour, that
are able to carry a number
of people on a fixed path.
So there is quite a
bit going on there.
And it's that direct
experience that's
going to change people's
impression of automated
vehicles.
JAY WINSTEN: Yes.
But I think in addition to
that, there's education
and awareness and the modeling
of behavior, which
we were able to achieve through
the designated driver campaign.
160 prime time TV episodes
had the characters
choosing a designated driver
to model that behavior.
Imagine "The Big
Bang Theory" today.
They finally go out
and get a new car.
And what they've gotten
is a highly automated car.
And they're dealing with
those systems, et cetera.
You could start a
lot of conversation
and you can generate news
coverage out of that.
And you combine that with
the marketing efforts
of the industry as
a whole, and you
can help to move the needle
in terms of public acceptance
and excitement and love, not
only for the shiny object
of the autonomous vehicle, but
also for the highly automated
driver assist systems
that are available today.
I think we shouldn't lose
track of where we are today.
We've still got to be hammering
away at drunk driving
and distracted driving and
drowsy driving and the like.
LISA MIROWITZ: Thank you.
I think we're
running out of time.
But audience, our
panelists will be here,
and you can come up and ask
them questions after the event.
DAVID FREEMAN: OK.
So I wonder, before we wrap
up in a couple of minutes,
I wonder if we can go
around and each of you
can say what's your greatest
fear is for this new technology
and what your greatest hope is.
Debbie, you want to go first?
DEBORAH HERSMAN: My
greatest hope is easy.
That's lives saved.
My greatest fear is that
we won't roll it out
in a responsible way, and that
will result in setting us back,
so that the technology isn't
adopted to save those lives.
DAVID FREEMAN: John.
JOHN LEONARD: I'm with Debbie.
My greatest hope is
really that there's
a dramatic reduction
in accidents,
and not just for vehicle users,
but the vulnerable road users
are really important.
And I guess I have a fear--
I mean, I think
incidents need to be
taken extremely seriously.
But there could be a
fear that during
the short-term phase, when the
benefits don't fully accrue,
we don't keep in mind
the more massive long-term
benefits that will come
with time.
So we have to balance the
short- and the long-term.
JAY WINSTEN: Yeah.
And I agree with both of you.
And I think that an
aggressive well thought out,
comprehensive
communications effort
will be crucial to
bring the public along,
so that increasingly they don't
overreact to a single crash.
DAVID FREEMAN: Are we
overreacting to the--
I mean, a fatality's a fatality.
Are we overreacting to these
things that have happened
recently, these accidents?
Is that a concern?
JOHN LEONARD: I think
they're very serious.
And so every human
life is precious.
DEBORAH HERSMAN: I think
we under-react to the 100
fatalities that occur
every day on our roadways,
and don't prioritize
saving those lives.
Everyone heard about
the Southwest fatality
that occurred in aviation.
That was the first US domestic
commercial aviation fatality
in nine years.
But yet we kill 100 people
a day on our roadways,
and that's not front page news.
That's the shame of it.
DAVID FREEMAN: Peter, your
greatest hope and greatest
fear?
PETER SWEATMAN: Well,
I agree with Debbie.
I think the definitive
reduction of serious injuries
and fatalities is really
why we're doing this.
And I think to the
credit of this country
that all of this
technology is being
pursued with that objective.
I guess my biggest fear
is that the data is
going to remain locked up.
We really need a
new [INAUDIBLE],
because it's going
to be like aircraft.
The pressures are going to
be humans [INAUDIBLE] plane.
How do we understand safety
on highly automated vehicles?
It's through the data.
So we need to get [INAUDIBLE].
DAVID FREEMAN: OK.
All right.
All right.
Well, I think we're out of time.
So thanks, everyone.
That's the end of
the panel today.
Thanks to everyone
in the audience.
Special thanks to our
panelists here, and to Peter.
And I just want to let
you know that if you want
to see the on-demand
video of this event,
it will be posted, I think by
Monday, on The Forum's website.
So thanks very much.
Thank you.
[APPLAUSE]
