Stanford University.
Right now, the technology for automated vehicles is moving really fast. But to actually integrate them into society, there are a lot of judgment calls that human drivers make subconsciously, intuitively. We have to program vehicles to make those same kinds of decisions in the same kinds of scenarios, and it's really important to do so in an ethical, morally responsible way.
Programmers are going to decide, ahead of time, how these autonomous vehicles maneuver. And human drivers will often violate traffic laws in order to maintain vehicle safety and occupant comfort.
If you are programming an autonomous vehicle and it encounters an obstacle in the middle of its lane, what do you do? A human driver would go around the obstacle and cross the double yellow line, assuming the oncoming lane is clear. But do we program the autonomous vehicle to decide, ahead of time, to break the law?
One option is to treat the double yellow line as a strict, hard constraint, in which case the vehicle will have to come to a complete stop to avoid hitting the obstacle. Another option is to minimize how much the vehicle violates the double yellow line, which means veering very close to the obstacle, very uncomfortable for the occupant in the passenger seat. The third option is to enter the oncoming traffic lane, giving more space to the obstacle as you maneuver around it.
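To make that concrete, here is a minimal sketch of how those three behaviors can fall out of one planner. Everything in it, the function names, the distances, and the candidate offsets, is an illustrative assumption of mine, not the actual planning code: the law can either be imposed as a hard constraint or folded into the cost as a weighted penalty.

```python
# Minimal sketch: pick a lateral offset around an obstacle by minimizing
# numerical costs. All names and numbers are illustrative assumptions.

OBSTACLE_EDGE = 1.2   # obstacle's left edge, meters from lane center (assumed)
LANE_LINE = 1.5       # distance to the double yellow line (assumed)
MIN_GAP = 0.5         # minimum acceptable clearance to the obstacle (assumed)

def trajectory_cost(offset, w_clearance, w_violation, w_comfort):
    """Cost of swerving `offset` meters to the left of our lane center."""
    clearance = offset - OBSTACLE_EDGE
    if clearance < MIN_GAP:
        return float("inf")                    # too close to the obstacle: ruled out
    violation = max(0.0, offset - LANE_LINE)   # meters past the double yellow
    return (w_clearance / clearance            # safety: reward space to the obstacle
            + w_violation * violation          # legality: penalize crossing the line
            + w_comfort * offset ** 2)         # comfort: penalize hard swerves

def plan_offset(w_clearance, w_violation, w_comfort, law_is_hard=False):
    """Choose the cheapest candidate offset; None means stop behind the obstacle."""
    candidates = [i * 0.1 for i in range(1, 40)]        # 0.1 m .. 3.9 m
    if law_is_hard:                                     # option 1: never cross
        candidates = [o for o in candidates if o <= LANE_LINE]
    cost = lambda o: trajectory_cost(o, w_clearance, w_violation, w_comfort)
    best = min(candidates, key=cost)
    return None if cost(best) == float("inf") else best
```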
The car itself moves based on algorithms, and those algorithms have constraints as well as costs. We tune them to tell the vehicle how far away to stay from an obstacle, or how close it's allowed to get. The car itself doesn't care. In a way, we can translate human comfort and human safety into these numerical constraints and costs.
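Continuing the sketch above, tuning those weights (the numbers below are made up purely for illustration) reproduces the three scenarios:

```python
# Option 1: law as a hard constraint -> no acceptable path, come to a stop.
plan_offset(1.0, 10.0, 0.1, law_is_hard=True)   # returns None

# Option 2: heavy penalty on crossing -> barely cross, hug the obstacle.
plan_offset(1.0, 10.0, 0.1)                     # returns ~1.7 m (0.2 m over the line)

# Option 3: heavy weight on clearance -> swing wide into the oncoming lane.
plan_offset(5.0, 0.5, 0.1)                      # returns ~3.3 m
```

In this toy setup, which maneuver the car takes is entirely a matter of which weights were chosen ahead of time.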
The vehicles are going to be the ones making the decisions now, so we need to somehow translate social behavior, ethical behavior, into what happens once the vehicle finally takes full control.
For more, please visit
us at stanford.edu.
