Welcome everybody.
Welcome to the panel on
teaching ethics in AI.
My name is David Autor.
I'm a Ford Professor of
Economics in the economics
department here.
I'm a labor economist.
So I'm not an AI
expert, but I work
a lot on the impacts
of technology
on employment, on wages, on
skill demands, on inequality.
And so, at least in my mind,
a lot of the ethical
questions also
relate to what the
technology is designed to do
and what its
repercussions will be
for both workers and consumers.
So I hope that that perspective
can be useful in the ensuing
discussion.
But my main role here
is to introduce the four
distinguished panelists.
So I'm going to introduce
them one at a time.
They'll each speak
for 10 minutes.
And then we'll have
time for about
20 minutes of Q&A. So
the first speaker will
be Hal Abelson, who's
the Class of 1922
Professor of
Electrical Engineering
and Computer Science at MIT.
He has many awards
and distinctions.
He was designated as one of
MIT's six inaugural Margaret
MacVicar Fellows, which is an
award for teaching excellence
in our undergraduate curriculum.
He's been active since the
1970s in using computation
as a conceptual
framework for teaching.
And his widely used
textbook on computer science
embodies the view that
computer language is primarily
a formal medium for expressing
ideas about methodology,
rather than simply a
way to get computers
to perform operations.
So please, Hal.
We welcome your comments.
Here's the clicker.
So the adventure whose start
we are celebrating today
is really about MIT
embracing the reality
that computation is
permeating everything we do,
not only the
technical, but as you
heard the last time, the
artistic and the social
and all of the
opportunities and challenges
that that represents.
And part of that is the way that
our society and people making
policy and people
making the rules
deal with those
opportunities and challenges.
And the thing I
want to say today
is that MIT can be a world
leader in information policy.
Not only do we have the
opportunity to do that,
but some people say MIT has
the obligation to do that.
We can be a world leader
in information policy.
There is an enormous
need for that.
If you look at the 535
voting members of Congress,
you see nine of them
with degrees in science,
three in engineering.
It's incredible when you think
of the importance of this
in society and the way that
our government leaders are
equipped to deal with that.
So what does it take
for MIT to do this?
There are really
three pillars of this.
One is to do research:
take our top-quality research
and do some that is focused
on real, practical challenges
of policy right now.
The second is doing work together,
engaged with people
who are making policy
to translate our
insights and our ideas
into things that can have
a real impact on what's
going on in industry and
government right now.
And the third, which is
almost the first thing
that President Reif said
when he talked about this,
is educating students.
To use President Reif's
word, bilingual: educating
bilingual leaders who have
facility with both
the social and policy
and artistic side and
the technical side.
You saw some beautiful examples
of that in the last panel
where Michael Cuthbert
talked about what
he's doing in music.
So let me give examples
of what this is.
We can produce real insights
into policy challenges
and work with leaders.
Let me just give you two examples
of the kind of work we're doing.
You've probably all heard
about the controversies
over surveillance.
You probably remember
the terrorist act
in San Bernardino, which led
the FBI to effectively demand
that Apple build back
doors into its iPhones.
One of the things that
we did is we got together
a bunch of leaders, technical
leaders in cryptography
and security, and wrote
an analysis and a paper
about why that is
a bad idea, why
building in those
back doors really
undermines the security
of the whole internet.
And that paper, which we did
and then went and spoke
to a lot of people about,
got a lot of notice in Congress
and in the Senate
hearings on that.
And it's still
referenced in Congress.
But of course, as
you know right now,
that's an active
debate that's going on
not only in Washington, but
in lots and lots of countries.
And we have the opportunity
to bring our technical insight
into that policy debate.
Just to give you
another example,
you've all heard people talking
about network neutrality.
Well, Dave Clark, who
is a senior research
scientist at CSAIL and
one of the real founders
of the internet protocols,
worked with measurement points
in the internet and
did some analysis.
Because one of the
network neutrality issues
is whether there is enough capacity,
whether there is going to be
congestion on the internet.
And Dave did some real analysis
to show that the congestion
comes not from the basic
provisioning of how much capacity
we have on the network,
but basically from where
the switching and interconnection
points are located.
And then he gave seminars
on that in Washington, DC.
Again, it changes the
way in which people
are talking about this debate.
Second pillar is active
engagement with policy leaders.
So these are some of the
visitors we've had to MIT--
the head of GCHQ, which
is the United Kingdom
analog of the NSA, the
head of the US NSA,
the Massachusetts Attorney General,
the head of the European Data
Policy Society.
And this is partly visiting,
but partly really talking
with these people
about the issues
and giving them
some sense of what
it looks like from a
technology perspective
to think about these issues.
That's a second pillar of the
way we can be engaged at MIT.
And then third, there
is the question of how
you educate these
bilingual students.
There are programs
that MIT ought
to be considering about how you
really do that kind of thing.
One of the ones that
we're doing right now
is a joint course with the
Georgetown University Law
School.
And what's happening is we
have combined teams of MIT and
law students, where the project
is to take some area,
like facial recognition for
which there's a lot of interest
and a lot of thought, and for
the teams to work together
to create model legislation
for dealing with these issues.
And the critical
thing is that you
have the MIT students
and the law school
students actually working
together to solve some problem.
And what they
learn is that there
are different--
there are people who
have different modes
of thinking that bring
different insights and
different kinds of intuitions
that they don't have.
And that's absolutely
critical in thinking
about how you educate these
bilingual communities.
One of the things that comes
out of that is not only
what we might call term papers;
these projects have actually
led to published research.
So you're seeing real
research and publications
that have come out of this
work that's part of leadership.
And then that
leadership is not only
about our faculty
and our researchers
and everyone else, but
it's also our students.
Let me just give you
two examples of that.
This goes back to 2007.
And what you're seeing
is the discovery
that you can predict
people's sexual orientation
by looking at the sexual
orientation of their Facebook
friends.
So over on the right, if you
look at gay men,
you can predict that
sexual orientation
by looking at their
Facebook friends.
And you're seeing
different populations
like female straight
and female homosexuals.
And David called me last night.
And he said, what do you mean
that was a breakthrough result?
Everyone knows it.
Well, you know what?
In 2019, everybody knows that.
This is work that was done in
2007 by MIT undergraduates.
And it was a breakthrough
result. It really was.
People didn't know
that thing then.
And why is it important?
Because for example,
we talk about privacy
and personal information.
That observation
completely upends
the way we think about privacy.
Because for example,
if I am gay,
and I choose to out myself,
that gives a tremendous amount
of information about my friends.
So we have an outdated
view of privacy.
The regulatory structure has
not begun to cope with that.
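The kind of inference being described can be sketched in miniature. This is a hypothetical illustration, not the actual method of the 2007 student work: the function, the toy graph, and the simple majority-vote rule over labeled friends are all invented here to make the privacy point concrete.

```python
# Hypothetical sketch: inferring an undisclosed attribute from the
# disclosed attributes of a person's friends, via a majority vote
# over the friends who have made the attribute public.

def infer_from_friends(graph, labels, person):
    """graph: dict mapping person -> list of friends.
    labels: attribute values that a subset of people disclosed.
    Returns the majority value among labeled friends, or None."""
    disclosed = [labels[f] for f in graph.get(person, []) if f in labels]
    if not disclosed:
        return None  # no labeled friends, nothing to infer
    return max(set(disclosed), key=disclosed.count)

# Toy network: "dana" discloses nothing, yet becomes predictable
# because most of her friends disclosed value "A".
graph = {"dana": ["ann", "bo", "cy"]}
labels = {"ann": "A", "bo": "A", "cy": "B"}
print(infer_from_friends(graph, labels, "dana"))  # -> A
```

The point the speaker makes falls out directly: my own disclosure changes what can be inferred about everyone connected to me, which is why an individual-consent model of privacy breaks down.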
And let me end with one more
that's even more famous.
It is almost
impossible today to be
talking about bias on the
internet without a reference
to Joy Buolamwini's master's
thesis and her work,
where what she did starting
four or five years ago
was documenting that facial
recognition programs are much,
much less accurate on females
than they are on males
and much, much less accurate
on dark skinned people
than on light skinned people.
And then when you
do the intersection,
and you talk about do
these programs work
on dark skinned females,
there are enormous problems
in the technology
that's rolling out.
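The evaluation practice being described, reporting accuracy per demographic subgroup and per intersection rather than one aggregate number, can be sketched as follows. This is a minimal invented illustration, not the actual Gender Shades methodology, and the toy data is made up.

```python
# Sketch of disaggregated evaluation: accuracy per subgroup, where a
# subgroup can be an intersection such as (gender, skin tone).
from collections import defaultdict

def subgroup_accuracy(records):
    """records: list of (group, correct) pairs; group may be a tuple
    for intersectional subgroups. Returns accuracy per group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    return {g: hits[g] / totals[g] for g in totals}

# Invented toy results for two (gender, skin tone) intersections.
records = [
    (("male", "lighter"), True), (("male", "lighter"), True),
    (("female", "darker"), False), (("female", "darker"), True),
]
print(subgroup_accuracy(records))
```

A single aggregate accuracy over these four records would be 75 percent and would hide the gap that the per-subgroup numbers expose.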
There was actually a bit of
a controversy in Joy's work
getting published and Amazon
objecting to it last month.
But the point I'm making is
that was a master's thesis
that was started in 2015.
And that has now become the
common currency, the reference
point by which people are
talking about this.
So it's not only our faculty.
And it's not only
our scientists.
We have the opportunity even to
make our students real leaders
in doing significant work.
So just to wrap up, we
can do policy leadership.
It requires
policy-relevant research.
It requires active engagement.
And it requires really
taking seriously
that our students can become
this bilingual community.
Thank you.
[APPLAUSE]
Thank you very much, Hal.
That was inspiring.
So I want to introduce
now Barbara Grosz, who
is the Higgins
Professor of Natural
Sciences in the School of
Engineering and Applied
Sciences at Harvard University.
She has made groundbreaking
contributions
in natural language processing.
And importantly as well, she
co-founded Harvard's Embedded
EthiCS program, which
integrates the teaching
of ethical reasoning
into core computer
science classes.
And so she is going to discuss
the importance of considering
ethics from the start in
the design of computer
systems and the computer
science curriculum
and describe how we can
go about doing that.
Thank you, David.
Do we have the first slide?
Oh, I have to do it.
No.
Let's go back.
There we go.
OK, so my title tells you
where we're going to--
I'm going to wind up at
the end of my 10 minutes.
I want to start by talking
about how we got there.
And I'll start that with--
interesting.
There we go.
OK, how I got started
and interested in ethics.
So I should say first
that I went to Cornell
as an undergraduate.
And I used as much of the
philosophy and history of art
I took there as I
do of mathematics,
so just a reflection
on the last panel.
So I began hearing
from friends when
the voice personal assistants
on phones came out,
people who were interested
in why they did what they did
and why they failed
where they failed.
And I noticed something
really important.
You would think a phone
personal assistant that
could answer one
of these questions
could answer all of them.
This particular
sequence is from a class
I started teaching in
2015, Intelligent Systems
Design and Ethical Challenges.
Turns out that systems can
answer the first question.
You'll notice that the
answer to the second question
seems a bit odd.
The system seems
confused about a flu shot
being a particular
kind of business,
like you get hammers at hardware
stores or it's a musical group.
I didn't figure this out.
A student had to
explain it to me.
But even more
pressing, if you ask
where to get a
sprained ankle treated,
it tells you which
web page will tell you
how to treat it,
which might seem funny
for a sprained ankle.
But if it were a
stroke, it could
be a matter of life and death.
So I thought it was
important for our students
to begin to learn about
the ethical consequences
of the products that they make.
And the first fall that I
taught this course, the cover--
I think this was in October--
the cover of New York Times
Magazine talked about
the Barbie doll that
was available and announced
itself as your daughter's best
friend.
You'll notice-- and
this is right out
of the New York Times.
So I was very lucky.
It became part of the class.
So what you just saw was the
first assignment in class.
This was part of the
second assignment, which
was for this and a
few other examples
to write an argumentative essay
about whether this product should
have been made or not.
Now you'll notice here that
the system, the Barbie doll,
doesn't understand
the word nothing.
That's kind of true of
most language systems.
They don't do so
well on negatives.
Doesn't understand
the negative behind
destroyed and doesn't
really understand
anything about dialogue being
a collaborative behavior.
The kid at this
point throws it away.
So here's the problem.
This isn't just funny.
It raises an ethical challenge.
This is a toy for three
to eight-year-olds
that's teaching
kids that you don't
have to listen to what
somebody is saying to you,
that you just follow
your own script.
So I studied, as
David hinted at,
I studied a lot of
natural language dialogue.
People, adults,
don't follow scripts.
You know this from the customer
care people you interact with.
Four-year-olds,
they really don't.
So something's wrong here.
And I taught this class on
intelligent systems design
and ethical challenges.
And then I realized something.
In my research on
collaboration and teamwork,
I argued that it was
really important to design
collaboration into systems
from the beginning.
And it turns out
it's really important
to think about ethics from
the beginning of system design
as well.
And that is what led me to
segue from this course, which
was really on AI and ethics,
into thinking we needed ethics
throughout the computer
science curriculum
so that students would think
about designing systems
not just that were efficient
or had all of the properties
that Aaron talked about
in the last panel,
but also had considered some
of the ethical consequences
of what the design did.
And some people are now
talking, in the AI world,
are talking about
building systems that
can do ethical reasoning.
That would be great.
But we still can't
really do reasoning
about the physical
world well enough.
And if you spend any time
with philosophical ethics
or even with policy,
you see how hard it is.
So it's a little hard to
automate what we really
don't know how to do.
And that's fundamentally
because there
are values and tastes at
the base of this, that it
takes argument to deal with it.
So I'm advocating that
rather than trying
to build systems that
do the reasoning now,
we teach our students
to do the reasoning.
I also have advocated
in other settings
that it's important for industry
to make this a priority.
But that's another talk.
So here's an example
from my class
that really motivated
this need to integrate
throughout the curriculum.
And I'm going to let you read.
This is-- so in the
intelligent systems design
and ethical challenges
class, every class session
has the students involved
in some activity.
The class was Tuesday, Thursday.
On Tuesday, we had
a long discussion
about the Facebook emotion
contagion experiment
and the need for people,
even in industry,
who are not required
to do IRBs to think
about what they're doing when
they have human subjects.
This is a class where 140 students
compete to get into 24 slots.
They have to write
essays about why
they want to be in the class,
what they'll contribute,
what they'll take away.
So they're committed.
The assignment, the activity,
has them planning something
for a social network company.
And it includes
in it asking them
to list the data they would
collect on their users.
This is 48 hours after the
Facebook emotion contagion
experiment.
I go to the board.
And I start writing down what
data they want to collect.
And it's good.
I'm looking at the board.
So they don't see that I
don't have a straight face.
And I turn around.
And I say, how many of
you thought about ethics
in coming up with this list?
None of them.
I ask on the assignment, why?
And they say: because we know
we're supposed to make money
for a company and write efficient
code and beautiful code,
but nobody tells us we
have to think about ethics.
So they care about it.
But it's not in
front of their faces.
It's somewhat like implicit
bias in hiring decisions.
It's cognitively not present.
The idea of Embedded EthiCS
is that we should have ethics
throughout the computer
science curriculum
so that students
learn about thinking
about the ethical
consequences of their designs
at the same time that they're
learning about efficient code,
correct code, elegant code.
What's up here is the
list of 14 of the courses
that we have integrated this in.
This semester, we're
doing 12 courses.
We grew from four courses to 12.
So one of the things I advise
people who want to do this
is start small, build up a
consensus, and then get bigger.
In order to do this, you need
philosophers brave enough
to tackle technology and
computer scientists interested
and brave enough to
tackle philosophy.
The way this works
is that we have
graduate students in philosophy,
advanced graduate students who
have a reputation
for good teaching,
work with faculty to identify
a technical topic in the course
that raises an ethical issue.
They then design a class
session around that,
including philosophical
readings and an assignment.
The class has an activity.
That's very important
for students
to engage in using the
ideas that they get.
The assignment's crucial so
students know it's serious.
It's important-- why do
we have philosophers?
Well, it turns out they have
a lot of expertise in ethics.
And they also understand
that such notions
as justice and fairness
are social constructs.
And you can't simply define them
in a way that's mathematically
pleasing.
You need
to think about values.
You need to think about
choices among values and so on.
So I want to
just end by saying,
I usually argue that
to be truly smart,
a system has to be designed
with people in mind.
And that's one of the
changes between 1950 and now.
Teamwork is needed everywhere.
And ethics is everybody's
responsibility.
I would like to see all of us
in computer science pulling
together as opposed
to breaking apart.
I'll just end by saying the
students really like this.
The faculty are
wildly enthusiastic.
They've started to learn
ethics themselves, which
pleases them no end.
And it's now spreading over into
how they think about research,
both students and faculty.
[APPLAUSE]
Thank you very much.
And fittingly enough, our
next speaker is a philosopher.
David Danks is the L.L.
Thurstone Professor
of Philosophy and
Psychology and the Head
of the Department of Philosophy
at Carnegie Mellon University,
at CMU.
He integrates ideas from
philosophy, AI, and policy
into his work.
And he studies
the policy impacts
of artificial
intelligence and robotics.
And let me also say,
I didn't realize
philosophers could be so young.
I thought you had to
be at least in your 60s
to be a philosopher.
So very impressed
by this as well.
So thanks very much
for your comments.
Thanks.
I am not nearly as
young as I look.
I age rapidly now
that I'm the head
of a department of philosophy.
If you ever want to think about
what herding cats really looks
like, try and get a
bunch of philosophers
to agree on something.
This actually follows,
I think, really nicely
on what Barbara was
saying and not just
because I endorse the idea of
a jobs program for philosophers
by having ethics
everywhere, but also
because I think we need
to broaden the way that we
think about ethics.
Sometimes, I think
people think about ethics
as the domain of folks like
Kant, Aristotle, John Rawls
from just up the road.
And instead, I think we need
to think about ethics more
in a kind of big tent way.
Ethics is about how you
ought to act in order
to realize your values.
And if we take that broader
view, see if I can--
all right.
There we go.
One of the first
things you quickly
realize if we have a somewhat
broader view of the nature
of ethics is that there's
a complete spectrum
of ethical challenges.
Some of them are really
easy ethical challenges.
You shouldn't steal from people.
Now you might think how is
that an ethical challenge?
Of course I shouldn't
steal from people.
But it's absolutely
something that
is about how you ought
to act to realize
your values in the world.
Now you don't have to take a
fancy philosophy class for it.
You don't need the language of
deontology or consequentialism
or any of these sorts of things.
All you need to have is
what a colleague of mine
refers to as grandmother
ethics, the ethics
that your grandmother
would have taught you.
On the other end of the
extreme, at the other end
of the spectrum, we
have the truly difficult
ethical conundrums
where there's really
no answer that is going to
be satisfactory to everybody
and where you do need these
very complex frameworks in order
to think through them.
Now one of the things that I think
comes up a lot in technology
is that the challenges
we face in technology
are somewhere in the middle.
So this is the one I personally
am quite fond of talking about,
which is when we
think about autonomous
vehicles, the ethical challenge
of whether the vehicle should
follow the speed
limit in all cases
or should minimize the
probability of an accident.
Now those are both values that
I would conjecture everyone
in this room endorses.
And you can't have
both all the time.
Sometimes, the
safest thing to do
is to go faster than
the speed limit.
Now the technology
doesn't tell you
how to resolve that tradeoff.
The technology doesn't care.
The cars that are
being developed
in Pittsburgh by Uber and Aurora
and Argo and those companies,
they don't care whether they
always follow the speed limit
or they always drive to minimize
the probability of an accident.
That is a human choice that
actually goes and shows up
in design and development.
It doesn't show
up after the fact.
It's not an ethical module
that we plug in afterwards.
But it's rather something that,
if you want to get the vehicle
to drive one block,
you'd better figure out
how to reconcile these, it
turns out, conflicting values.
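One way to see how that human choice "shows up in design and development" is to sketch it as an explicit weight inside a planner's cost function. This is a hypothetical illustration of the design decision being described, not how any real autonomous-vehicle stack works: the speeds, the risk numbers, and the weighting scheme are all invented.

```python
# Hypothetical sketch: scoring candidate speeds by combining two
# values -- obeying the speed limit and minimizing accident risk --
# with a human-chosen weight. Risk numbers here are invented.

def choose_speed(candidates, speed_limit, risk, weight=0.5):
    """candidates: list of speeds; risk maps speed -> accident risk in [0, 1].
    weight trades off legality (near 0.0) against safety (near 1.0)."""
    def cost(s):
        over_limit = max(0, s - speed_limit)  # penalty for exceeding the limit
        return (1 - weight) * over_limit + weight * risk[s] * 100
    return min(candidates, key=cost)

# Merging onto a highway where going 70 is safer than holding to 65.
risk = {60: 0.08, 65: 0.05, 70: 0.02}
print(choose_speed([60, 65, 70], speed_limit=65, risk=risk, weight=0.9))  # safety-weighted: 70
print(choose_speed([60, 65, 70], speed_limit=65, risk=risk, weight=0.1))  # legality-weighted: 65
```

The technology, as the speaker says, does not care: the same code drives either way, and it is the value of `weight`, chosen by people, that resolves the conflict.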
Now in order to really
tackle these kinds
of ethical challenges,
I want to suggest
we need to really do three
different kinds of operations.
The first is we need to
know what kinds of questions
we should be asking.
The second is how do we
answer those questions
about tradeoffs, about the
ethical constraints we have.
And then third, what should
we do with those answers.
That's the how ought we
act to realize our values.
And these are questions that
you can do on the research side.
We're actually, at CMU,
engaged in some projects
with the private sector trying
to teach frontline coders how
to ask the right questions.
So we're not trying
to teach them
how to answer the questions.
We just want to get them to
ask the questions before things
get implemented in code.
But what we're also
doing is trying
to realize this in our
educational programs.
And those of
you who were here
for Farnam's talk
this morning may recognize
these overlapping circles.
They seem to show up a
lot, this idea that there
are many different disciplines
that we have to put together.
We now have over a
dozen classes at CMU
that explicitly adopt this kind
of multidisciplinary approach
to identifying the
questions to ask,
the answers to the questions,
and what those answers imply
in terms of action and policy,
and another about dozen classes
that are implicitly using this.
They aren't necessarily
explicitly talking
to the students about the
multidisciplinary kind
of approaches.
But they're absolutely
doing it. There's a databases class,
for example, that now has
a week on implicit bias
and algorithmic bias.
And so what I wanted to do, in
the remaining half of my time
here, is just briefly talk
about one of the classes,
one I know very well
personally, which
is called AI Society
and Humanity, which
is a course that is offered
in the philosophy department
and in the public policy school
jointly at Carnegie Mellon
and aims to sort
of realize this vision
of trying to have students
who are able to bring together
the insights from all of
these different disciplines.
Now the problem that we
realized very quickly
when we were trying to design
the course is that, inevitably,
we're going to have
to do collaboration.
There's no way-- there are very
few people in the world who
can bring the insights of all
those different disciplines
to bear at once.
And to the best of my knowledge,
none of them are 18 years old.
Typically, it takes
18 to 20 years
of professional experience
to be able to do this.
And we're dealing
with people, many of whom
have 18 to 20 years of
life under their belt, as
opposed to training.
And so we found that
there are a number
of sort of specific issues.
There's a lack of
relevant knowledge.
The computer sciences
students don't
know very much philosophy.
The philosophers don't know
very much computer science
at the end of the day.
And so what we've done
is structure things
around trying to create
multidisciplinary
collaborations.
Now the problem is if you
do that, the students are
coming from different
disciplinary backgrounds.
Every discipline has its own
set of questions that they ask.
If you show a technology
to a computer scientist,
a psychologist, an
economist, a philosopher,
and ask them, what's the most
important question we should
be asking about
this technology, you
will get not four
different answers.
You'll get about 12
different answers
because different
disciplines just
ask different kinds of
questions about the world.
So we have realized
that in the class,
we have to foreground
that issue.
We have to be very
explicit about defining
disciplinary approaches in
terms of the questions they ask,
which is strange
to the students.
The students think about
disciplines in terms
of the methods that are used.
And so what we're
trying to do is
get them to step even one
stage back and understand
that it's different
questions that
often define different
disciplines, not just
the different methods.
And of course, finally,
this has come up earlier,
this notion of bilingualism.
Our students are, as I suspect
most undergraduates are,
largely monolingual in
the sense that they know
how to talk computer science.
They know how to
talk philosophy.
They know how to talk sociology.
And words are used in
very different ways
in the different disciplines.
When a computer scientist
talks about a loss function,
they simply mean
the thing that is
optimized against in the
learning of some algorithm.
When a philosopher and ethicist
talks about a loss function,
they have something
very different in mind.
They have in mind
the actual harms
that accrue to somebody
because of a policy
or actions that are implemented.
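The two senses of "loss function" just described can be made concrete side by side. The function names and numbers below are invented for illustration; the contrast between the two usages is the point.

```python
# Computer-science sense: the quantity a learning algorithm optimizes.
def training_loss(predictions, targets):
    """Mean squared error, the thing minimized during model training."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

# Ethicist's sense: the actual harms a deployed policy imposes on people.
def harm_to_people(decisions):
    """Total harm accrued by affected individuals under some policy;
    each decision carries an invented per-person harm score."""
    return sum(d["harm"] for d in decisions)

print(training_loss([0.9, 0.2], [1.0, 0.0]))       # small optimization error
print(harm_to_people([{"harm": 3}, {"harm": 0}]))  # yet real harms can still accrue
```

A model can score very well on the first quantity while the system it powers scores badly on the second, which is exactly why the two disciplines talk past each other when they share the word.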
And so again, much
as with foreground
in the issue of
different questions,
we have to foreground the issue
of terminological differences.
How is it that the very
same words, the same chirps
and whistles or
sequences of characters,
can be used in such
radically different ways
in the different
disciplines, and making sure
that the students start to
become a little bit more
multilingual over the
course of the class.
So what that resulted
in was a class
in which the final
projects, what
we geared the students
towards, were small group
analyses of a current or near
future technology that required
them to bring
multiple perspectives,
multiple disciplinary
perspectives,
to bear as part
of that analysis.
The part the students all
really hated at the outset
and loved at the end is they did
not get to pick their groups.
We assigned them to groups
deliberately forcing
them to interact with
students from other majors.
Now part of the way we were
able to do that is this
was a first time class
taught in philosophy.
I didn't ask Agustín
what it is here at MIT.
At Carnegie Mellon, that
usually means you'll
get 10 to 12 kids in the class.
We had 60 coming from
14 different majors,
coming from six of the
seven different colleges.
The only college that
wasn't represented
was the business school,
which is graduate only.
It's a little bit unfortunate
that none of the MBAs
came over to take this class.
We're going to try and
remedy that next fall.
And so we did have the
disciplinary diversity
to put the students
together into these groups.
And they studied everything
from, as you can see,
AI doctors to Google Duplex
to autonomous weapons
and had to do the hard work
of actually understanding
the technology--
but there was a technologist
in every group--
and bringing to bear notions
from design and ethics
and the humanities-- but
there was a humanities student
in almost every group--
and integrating that
together with insights
from the social sciences.
But there was a social science
major in almost every group.
And so the students really
had to learn how to do this.
So what did we get out of this?
What are the sort of morals
that we had as a takeaway,
besides the fact that
teaching a first time project
based course with 60
students is not a good idea?
I'm jealous that
you only have 24.
Yeah.
That's the problem
with being young.
You make dumb
decisions sometimes.
So what are the key morals
that we took away from it?
The first is to
focus on consumption,
not production outside
of one's major.
I think sometimes
there's this drive
to turn computer scientists
into ethicists and ethicists
into computer scientists.
And that's great
when you can do it,
but it's also a bar that
we don't necessarily
have to reach.
The ability to consume the
products of another discipline
is all that's required
for collaboration.
And so-- and that's
a much lower bar.
And it's the sort of thing
that is actually achievable,
even just in one semester.
The second is that you
have to explicitly teach
the disciplinary patterns.
You don't get to just count
on the students knowing
what they are.
And of course, you have
to teach collaboration.
We often throw our students
into collaborative teams
and say, go do something great.
And the students flounder.
But they don't want to
say anything because that
would admit weakness.
And so we've also
realized we have
to explicitly teach these
kinds of collaborative skills.
So I think those
are hopefully morals
that might help with
some of the efforts
that I'm sure are
underway here at MIT.
Thanks.
[APPLAUSE]
Thank you very much.
That was terrific.
So I just wanted to
finally introduce
Solon Barocas, who is a
researcher at Microsoft
Research in New York City
as well as an
assistant professor
in the Department of Information
Science at Cornell University.
And he works on ethical
and policy implications
of artificial intelligence.
He is also a co-founder of
the Fairness, Accountability,
and Transparency in
Machine Learning workshop
and later helped set that up
at the ACM as well.
So we are privileged to
have him speak to us today.
[APPLAUSE]
Hi everyone.
Let's see if that's right.
It's a pleasure to be here.
I have to say that it's very
heartening and encouraging
to have been someone in
the past few years working
on these types of topics
and see an institution
like MIT devote
itself so explicitly,
so fully, to a new
college that is concerned
with these kinds of
societal questions
and these basic
ethical questions.
And I really do hope
there's an opportunity
here to integrate
ethics and policy
questions into the
foundations of computing.
And rather than kind of
repeating some of the things
that my august colleagues
have discussed already,
I'll be trying to focus a little
bit on what is the substance
and scope of ethics
in computing,
what do we actually care about,
and how might we actually think
about the relevant ways
to approach those issues.
So rather than asking
questions about, for instance,
how to structure the way we
teach it (should we teach this
in an integrated way, which I
think we all generally agree
would be preferable to
a standalone class?),
I'm going to be asking,
what is the orientation we
take even when we design
it in these different ways.
And some of this will come
from work that I'm undertaking
with some colleagues at
Cornell, at Berkeley,
at University of
Washington based on an NSF
grant through its culture--
what is it?
It's the Cultivating
Cultures of Ethics in STEM.
And this work, at least
on the Cornell side,
is devoted to trying
to do a kind of survey
of the state of ethics teaching
within computer science.
So how is it that people are
actually teaching these things?
It's an empirical question
that we're trying to answer.
We've done some
preliminary work.
And some of the things
I'll describe here
grow out of that
preliminary work.
But we're in the process of
developing a sample frame.
And we'll attempt to make
a statement in a
year or two about
how people are actually
going about this in practice.
OK.
So oh, very nice.
OK, so what I'll try to
describe is by no means
an exhaustive list of ways
to approach teaching ethics
in computing, but I think
it describes what can be
a quite diverse set of
orientations that really
mean quite different
things for what we're going
to be training our students in.
And so one example to begin--
and this will be a list
of about five or six entries--
is something like
professional responsibility.
And in a way, this is
most analogous to what
we already have in many
other engineering fields,
and even in law and medicine,
where there's a set of
basic expectations about
what you're going to do
as a responsible, decent person
in that vocation.
And then in sort of
engineering fields,
this obviously takes the
form of explicit expectations
around things like you
don't build buildings
that are going to fall down.
You are not disloyal or
dishonest to your clients.
You make sure that
you're going to specify
the conditions under
which this artifact is
going to work and so on.
But what I think is relevant
to mention here too
is that these are also things
that figure into licensing.
A condition of
being in the profession
is that you abide
by these things.
And so there's been a
lot of discussion about
whether computer
science, as a field,
should not only learn from
these adjacent disciplines,
where holding a license
and abiding by these rules
is a condition of
being in the profession,
but should actually
adopt arrangements where,
if you fail to abide by
these responsibilities,
that license can be revoked.
And so there are
interesting conversations
around whether things like
the ACM code of ethics
should itself become
something that's not just
meant to be a sort of
generic guidance for people
in computing, but
become something
more of a kind of
condition of entry, right?
You're not actually able
to be a computer scientist
unless you meet the expectation
that this code lays out.
This is in contrast to some
of the things we've already
discussed, I think, which
is sort of approaching this
as a matter of research ethics.
And I think this has become
a very pressing problem
in the sense that, increasingly,
the kinds of things
we're doing with
computational tools
involve actually studying humans.
So this is no longer computing
in isolation from humans.
But we're actually
building artifacts that
directly interact with people.
We're often using
data about people.
And so things that we once
saw as purely technical
projects are now increasingly
seen as things that
directly impact human subjects.
And obvious examples
of this include
studying the kind of
data we now produce
when we use all sorts of
digitally mediated platforms.
What's interesting about this
approach is that while there's
a lot to learn--
and it's important that
computer scientists increasingly
recognize that they are engaged
in human subjects research--
the expectation here is that
you be ethical with respect
to your research
subjects in the sense
that you have obligations
to those people who
are involved in the research
that you're undertaking.
Notably though,
this is not a way
of thinking about your
broader impact to society.
The principles we
have in research
ethics are not the
principles that say,
don't conduct this research
because of its impact.
It's a principle that
helps us understand
how to be faithful
in our obligations
to research subjects, the
people directly involved
in our studies.
And so while these are
important principles,
they don't necessarily take
us as far as we might want.
A different thing
that's happened--
and I think I've been involved
in some of this work--
is for normative issues
to simply become questions
of technical research.
So just like
security and privacy
are now firmly embedded
in computer science
as technical topics,
the past few years
have seen questions
of fairness become
yet another normative issue that
is increasingly treated
as just a technical problem
that we can work on.
And I can say from my own
experience teaching these topics
that this has been
very compelling.
No longer are
normative issues things
that have to be taught
to you by philosophers.
Instead, they are
actually something
that you can think about
in purely technical terms.
And you can make your own
kind of contributions,
rather than having to see
this as something that you'll
be educated in by someone
outside the field.
And this, obviously,
creates lots of opportunities
by making these into
tractable technical problems.
They become something
that can be pursued,
I think, very effectively
in practice and in industry.
But potentially, they also
make things much more narrow.
And so what is amenable to
sort of obvious computational
thinking ends up being
the things we work on,
which might be quite
different than what
philosophy in other areas
would want us to think about.
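To make concrete how a normative notion like fairness gets
recast as a technical quantity, here is a minimal sketch in
Python (with hypothetical function and variable names) of one
common formalization, the demographic parity gap. It is only
one of many competing definitions, which is part of the
narrowing described above.

```python
def demographic_parity_gap(predictions, groups, group_a, group_b):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups: parallel list of group labels for each prediction
    """
    def positive_rate(g):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(preds) / len(preds)

    return abs(positive_rate(group_a) - positive_rate(group_b))


# A classifier that approves 3 of 4 applicants in group "a"
# but only 1 of 4 in group "b" has a gap of 0.5.
gap = demographic_parity_gap(
    [1, 1, 1, 0, 1, 0, 0, 0],
    ["a", "a", "a", "a", "b", "b", "b", "b"],
    "a", "b",
)
```

The point is that once fairness is reduced to a number like
this, it becomes tractable to optimize, but the choice of
which number to use is itself a normative question.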
And we heard at the start about
this idea that computing itself
can also inform law and policy.
And I think there's
a complement to that,
which is that we can
also obviously train
computer science students
in that law and policy.
So in order to be able to
contribute to law and policy,
they obviously have to
understand the relevant law
and policy.
And here this can take a
couple of different forms.
It can be of the style
where we actually want
people to understand
the fundamental normative
commitments and principles that
are embedded in the law
and policy that we have--
so not just teaching people
how to abide by the
law, but understanding
why those laws exist at all,
what they are trying to achieve,
and then how that actually
might structure their work
as computer scientists.
And I would say that this is
sort of in contrast to things
where we might actually
just instead want
to make sure that we're
sending people off
into the world where
they know how to comply
with those regulations.
So there is certainly
a kind of style
of education, which is about
making sure that people
actually know how to
behave under the rules,
rather than necessarily helping
them think about the higher
level principles that the law
and policy might be focused on.
And here, this might even
take a kind of technical form
as well, where a lot of the
tools that we've now developed
to try to realize
certain policy goals
are themselves actually
difficult to use.
So an example of this
might be something
like differential privacy,
a very powerful tool
to sort of guarantee certain
promises for anonymity.
But at the same time,
something that you
can't necessarily
use very effectively
without appropriate training.
And so here, you can
imagine education
being not a matter of
teaching differential privacy
as a research question, but
rather teaching it as something
that you need to know something
about in order to use it
effectively in practice.
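As an aside on why such tools need training to use well: the
basic Laplace mechanism behind many differential-privacy
systems is simple to state, but choosing the sensitivity and
the privacy parameter epsilon correctly is where expertise
comes in. A minimal sketch in Python, assuming a counting
query (sensitivity 1) and hypothetical function names:

```python
import random


def laplace_noise(scale: float) -> float:
    """One draw from a Laplace(0, scale) distribution.

    The difference of two i.i.d. exponentials with mean
    `scale` is Laplace-distributed with that scale."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def private_count(records, epsilon: float) -> float:
    """Release len(records) with epsilon-differential privacy.

    A counting query has sensitivity 1: adding or removing one
    person's record changes the true answer by at most 1, so
    the noise scale is sensitivity / epsilon = 1 / epsilon.
    Smaller epsilon means stronger privacy, noisier answers."""
    return len(records) + laplace_noise(1.0 / epsilon)


noisy = private_count(range(100), epsilon=0.5)  # true count 100, plus noise
```

Even in this toy version, getting the sensitivity wrong or
spending the privacy budget carelessly across repeated queries
silently voids the guarantee, which is exactly the kind of
trap that appropriate training is meant to avoid.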
And then, the final thing I'll
point out is that, of course,
there's also a
style of education
here that is not even focused
on individual decision makers,
where the goal isn't to sort
of ensure that the students we
send off into the world
might be able to make
good decisions for themselves,
but instead to help people
understand how computing sits
in relation to other structures,
institutional
structures in society.
And I think this is
actually critical.
Because in the framework
of ethics, I think we can
often see it as a matter
of individual choice,
when many of the issues that our
students are going to confront
when they go off
to these jobs are
ones in which individual
choice will matter,
but where structural
considerations are going
to be incredibly significant,
basic things like the market,
the demand to meet
the expectations
of your shareholders,
the political context,
the historical context.
These things will often
shape the direction
that computing takes
in ways that are not
so easily addressed by
the individual decisions
that our students might make,
no matter the ethical training
they might receive.
And so I think the
point of trying
to provide this
kind of survey is
to show that there's really
quite a bit of disagreement
about what we're even
trying to accomplish
when we say that we're
going to be teaching ethics
in computing.
And we can draw on
many different types
of traditions and orientations
to realize different goals.
But I think the
challenge for the future
is to be able to think
about what comes--
what are the kind of
benefits and disadvantages
of these different approaches,
how to integrate them
in ways that try to get the
most out of the ones which
are relevant and maybe
try to sidestep some
of the limitations of others.
And here I think
the hope, anyway,
is that we can learn
from each other.
And the goal of this
empirical project
is not just to be able to make
a kind of descriptive statement
about the way
people are teaching,
but to get a sense of what
seems to be working well,
what is the kind of wide
diversity of approaches
that people have developed
such that we can actually
learn from the kind of
experimentation that's
happening at this early
stage in the field.
So thanks very much.
[APPLAUSE]
OK, so now we're going to take
questions from the audience.
And so if you have a question,
please raise your hand.
We'll walk around
with microphones.
And please direct--
I will try to direct the
question to the panel
to make sure everyone
has an opportunity
to express their thoughts.
Good morning.
Professor Grosz and
several of the others
stress the benefits
of integrating ethics
into a computer
science curriculum.
And I'm curious whether
there is either A, data,
or B, case studies, that educate
us on that, that is to say,
where teaching of
ethics has been
done in one fashion
in one area, it's
been another fashion
in another area,
and we can see a
difference of outcome.
Obviously, there's some
history in medicine
and other parts of science.
So any insights on that?
Please go ahead.
So I just want to highlight
how important that question is
and stress that one
of the big challenges
here is that we're not
testing just what happens
in an individual course.
What we'd like to see
is what the impact
is over the long term.
So for example, in the
Intelligence Systems Design
and Ethical Challenges course,
one of my favorite remarks
on the student
evaluation at the end
was, now I understand what
I would think is right
and how I might
persuade my manager
to change what my project is.
And that's really what
we're trying to do.
And it's an open
research question
how you can measure three
and five years down the pike
what the impact has been.
I also want to say, I
think it's really a both-and,
or all-of-the-above, process.
The kind of course that
David talked about is really
important and gives a depth
that's different from the
depth you get from
this pervasiveness.
But I think the pervasiveness
is really important to habituate
students to thinking about it.
There are studies that
show, in other contexts,
that just a one-off course,
whether it's a required
computer science and
ethics course or an optional one,
doesn't really work, because
it makes the ethics seem
like it's an afterthought.
So you can have
these rich courses.
But the reason we went
the pervasive route is we
think it's important to make
this part of what students
think about all of the time.
So the bottom line answer is
it's still an open question,
as far as I know.
David, Solon, do either
of you want to contribute?
Just to echo that, I mean,
there is a lot of work on this,
as Barbara said, in
other disciplines.
So especially in
engineering ethics,
it's a pretty robust finding
that you have to have it
integrated into the curriculum
to get anything
more than about a one-semester
change in behavior.
I will say one of
the other things
that we're trying to do
is not keep it restricted
to the academy, so trying to
find outreach and engagement,
both professional education,
executive education,
but also these kinds of
interventions in companies
to help breed and
cultivate exactly
the sort of pervasiveness of
the issues of asking questions
about values and interests
at every stage in a process.
Excellent.
Next question, please.
Yes?
Hi.
My name is [INAUDIBLE],,
reporter for [INAUDIBLE]..
And there's been an
interesting debate
because this is so structural,
as several of you pointed out.
And now, the stakes
are so much higher
for the kind of questions
being talked about today,
compared to about 10 years ago.
The one complaint that
I'm hearing concerns even
the sources of funding for
this kind of research, and even
[INAUDIBLE] the sources of the
funding of the college itself.
How does someone begin ethically
when those structures are--
when the funding sources
are going to point, maybe,
in one direction or another?
And how do you create a system
where you can insulate yourself
from that, even in
a place like MIT,
as colleges around
the country have
to deal less with public support
and more and more with funding
[INAUDIBLE] private [INAUDIBLE]?
Hal, would you like to
respond to that one?
Well, I don't know about the
immediate short term answer.
But in terms of what's
practical and what's
important either to government
or industry funding,
you couldn't possibly
overemphasize
the importance of putting
this in their education.
So if you look at a
company like Google,
from the most practical way
you could possibly imagine,
one of Google's issues is do
people trust its products.
And that's real dollars
and cents in the strongest
way you could possibly say.
And they're starting to
recognize that the way you do
that is to have their employees
trained in exactly the kinds
of things that the two
panelists on the left
have been talking about.
So they're coming along.
I don't think there's going
to be a long term funding
problem in this.
It's as practical as anything
else in computer science.
Solon, yeah.
Yeah, and this is a
really important question.
And I think maybe I would
parse it in two different ways.
I think one is sort of the
independence of the institution
itself.
Academic institutions
increasingly
are relying on sources of
funding besides the state.
And there are interesting
and important questions
to ask about how that might be
affecting how we teach
and what we teach.
But there's also
a question about an orientation
toward teaching ethics
that stresses individual
responsibility in the face
of what continue to be
profound structural reasons why
the tech industry might move
in a certain direction.
And I think we need
to sort of grapple
with both of those problems.
And my sense is
that we haven't yet
really thought nearly enough
about what kind of education
we would want to give people
to become more reflexive not
just about their own behavior,
but about the particular
conditions which explain
why the tech industry often
moves in a certain direction.
And that may even involve
suggesting to people
that they pursue completely
different occupations,
or that there are
other ways to engage
in the world to effect
structural change--
not necessarily being
a frontline engineer who
has ethical commitments
that they intend to
realize, but occupying
a different position, maybe
doing more direct
political advocacy.
And again, I think there's
an important question
to ask, how do we
train those people,
or is it even appropriate to
imagine that computer science
departments are the place
where people are being trained
to do that kind of work?
Great.
OK.
OK.
Please go ahead.
So I'm going to try not
to antagonize everybody.
But so--
Could you introduce
yourself as well please?
What's that?
Could you please introduce
yourself as well?
Oh, Mike Spencer.
So you gave stealing as
an example of something
that everybody would agree with.
But of course, that's
not really true.
I mean, like Robin Hood
or Elizabeth Warren
believe in stealing from the
rich to give to the poor.
Donald Trump believes in
stealing from everybody as long
as it aggrandizes himself.
China believes in stealing
intellectual property.
In fact, they don't
really believe
that intellectual
property should be
an entity that could be stolen.
And I think Richard Stallman
probably thinks the same way.
But it seems to me that
nobody really has--
the idea of ethical
training would
be to look at the largest
consequences of what
you're doing if you can-- and
you can't really predict that.
But you can try.
And then take your own
moral ideas and what's good
and what's not good
and apply that.
But I think the hard
part is figuring out what
these global consequences are.
So what would you
say about that?
Barbara, would you like
to respond to that?
Yeah, and I'm going to wrap
up my answer with an answer
also to the previous question.
So first, academia has
a really important role
to play here if it holds on to
its values of open publishing
and also if we in computer
science hold onto open source.
I take heart from
what's happened.
I saw the whole cognitive
science revolution.
I think by starting with
undergraduates and graduate
students, we create
different kinds
of people going in the world.
More directly to your
question-- and this goes back
to Solon's answer--
first of all,
one of the things
we make clear to students
is that there isn't always
a single right
answer, because there
are values that may
conflict, and that they
need to have a discussion.
If you talk-- I just had
a conversation yesterday
with a Justice on the
California Supreme
Court who's interested in AI.
And he said, you
know at some point,
it comes down to
us arguing values.
It's not just data
that we process,
even though it's nice to
have the processed data.
We need to have a
conversation in this country
and in the world that
brings together people
in industry, academics from
the social sciences
as well as from philosophy
and ethics, computer scientists,
and government.
And we as citizens--
I mean, the citizenry
has to take
responsibility as well.
So that's not going to get
resolved by computer science
alone.
But I, again, have optimism
that if we teach our students
how to have those
conversations and how
to talk about the ethical
issues-- not just in terms of
"it doesn't feel right,"
but to understand
why you don't steal--
then they might understand
when maybe it's OK.
And I think that also
reflects on regulations.
If you understand why
there's a regulation
or why there's a policy, you're
more likely to follow it,
or if you think it's mistaken,
to want to try to change it
rather than to just subvert it.
Question from Randy Davis.
Randy Davis, I'm a professor
in computer science
here in artificial intelligence.
I was very struck
by Solon's comments.
And I think we need a term
for what you were talking about:
how, if you look at an
issue like fairness
from the perspective
of an engineer,
you may get a narrowing
of perspective.
I think that's
terribly important.
It also has to be
balanced out against what
I think is a very useful
consequence of that, which
is every time I've
talked to somebody who's
upset about fairness, it's very
hard to get them to tell me
what they think
fair actually means.
How would it operate?
What would be a fair outcome?
And you find out there's an
enormous amount of fog there.
So maybe this is just a comment
endorsing the whole mindset
here, but to the
extent that we can
put those two
perspectives together,
that would be wonderful.
We also need a term
for what you're
talking about, the
engineering-ization of ethics,
where you get too
narrow a view.
Be interesting to
be able to name it.
David, would you like
to comment on that?
I just want to
quickly say, I think
that's yet another
reason that we
need to be doing this in
a multidisciplinary way.
Fairness has a couple
thousand years of
thinking behind it.
I'm not-- I don't mean this
to impugn what you said.
I've been at conferences
where people say,
you know, I wonder what
the nature of agency is.
It would be great if
somebody thought about that.
And as a philosopher, you
sit there and think, hi.
We've been doing this for
a few thousand years now.
It doesn't mean we've
got the right answers.
But we've at least
got frameworks
for how to think about and talk
about some of these things.
But I agree that the real
power comes when we then
merge that with the
unbelievable tools that
have come out of engineering,
computer science, AI.
Just a quick comment
that maybe one
of the most important
things we do is
teach students to talk across
disciplines and backgrounds
because it's those
conversations that
are going to have to happen
broadly in the public for us
to resolve these issues.
Great.
Go ahead.
Nicholas Ashford from MIT.
In one sense, this is just
a subset of a lot of things
we've asked about in the
deployment of science
and technology, when we argue
that it provides guidance
for both personal and
institutional behavior, which
forces us to ask: is the goal
of the activity being met?
Is it efficient, which
is an economic question?
And is it fair?
And John Rawls was mentioned.
And there's a
distinct difference
between utilitarian ethics and
what John Rawls talks about.
But there is a second
issue, which really becomes
extraordinarily important.
And that is are we working
on the right problem?
The easiest thing to do,
with expanded
computational facility,
statistics, and algorithms,
is more information
creation and much less
critical thinking.
What does the
information tell us?
Is it the right question?
And I would say there
is a deficit in asking,
why are you trying to
get this information?
What use is it?
And are you taking
efforts away from asking
the more important
questions about what
the information gives you?
I see that that's missing.
OK.
Hal, do you want to--
are we asking the
right questions
and how do we teach people to
focus on the right questions?
Gosh, I don't even
know where to start.
I think that's
terribly difficult.
And I think one of the things
we're involved with at MIT
is saying, what should
our curriculum look like?
How should it be
changed specifically
to address the kind of issues
that you're talking about?
And I don't know
what the answers
are going to come out to be.
But the perspective
you're bringing
is just incredibly important.
So I don't know the answers.
But at least we're starting
with the questions.
Barbara wanted to--
One quick response.
One of the things we
aim to teach students
is to ask not only could
I build this, which
is what we've tended
to ask in computing,
like, wow, it actually
works, but should I build it,
and should I build it
the way I'm building it?
And of course, you have
the same question in law
when you have to think
about not just what
the law does and
is intended to do,
but the unintended consequences.
Great.
We have time for
one more question.
So I wanted to go back to
this comment about stealing
and maybe [INAUDIBLE]
raised the issue.
I wouldn't describe Elizabeth
Warren's desire to tax
people as stealing.
But that did remind
me that there
are some communities in very,
very underprivileged places
of the world or underprivileged
people in communities
where they do actually
think stealing is OK.
And I guess, more generally,
I should raise
the issue that what's
OK to do and what's
not OK is super cultural.
And different sub-communities
view different things as OK.
And just in case this
needs convincing,
presumably you all
feel a difference
between speeding and
driving across a red light.
Why?
They're both traffic laws.
But somehow, we
know that one is OK.
And everyone does that.
And if you get
penalized, it's whatever.
And the other one is a law
that no, no, you don't.
You just don't cross in the
red light, at least not--
maybe in Boston you do--
but most of the world
you do not, so--
Yeah, I had the
order reversed here.
I thought--
These are cultural things.
And different cultures,
different communities,
the very underprivileged
in different countries
view this differently.
And I wonder, in
your teaching of ethics,
how do these different
cultures come in?
So in a couple of
different ways.
So one is to emphasize asking
questions about why are we
doing certain things.
What are we trying to achieve?
Who is being served by this?
Whose interests and
goals are being served?
Because as you said,
one of the quick things
that you come to
realize is how much
technology serves the
interests and values of only
a subset of the population.
I'm sure you have
those issues in Boston
as much as we have
them in Pittsburgh.
And a second thing to do is to
recognize the role of policy
in this.
When you live in a society
that exhibits value pluralism,
to use the jargony term,
that's what government's about.
That's what political
philosophy, at its heart,
much of it is about how do you
have a legitimate government
that actually is
able to be responsive
when not everyone agrees
about what we ought to do?
And so bringing in these policy
questions, the policy debates,
the how do we get
the regulation that
will advance the common
good, whatever that might be?
How do we even figure out
what the common good is?
Simply getting students--
particularly, to be frank,
those coming from the
technical fields--
to ask these questions
opens up a whole new
set of conversations
that these students, I find,
are very willing to then engage
in, and to learn how to engage
in them in a constructive way.
OK, well, that's
a terrific answer.
Any final words
from any panelists?
Otherwise, I'm
going to set you--
