- Joy.
Latanya.
- Darren.
- What a joyous occasion this
is to have this black girl
magic, this brilliance,
enveloped in this room.
Dr. Latanya Sweeney wouldn't
tell you this about herself,
because she's a woman
of great modesty,
the first African American woman
to receive a PhD in computer
science at MIT.
[APPLAUSE]
- Joy Buolamwini
would not tell you
that the PhD in computer science
that she will shortly receive
is an additional credential
stacked on top of that Rhodes
scholarship and the room
full of credentials that
reflect the ways in which
excellence is personified
in our community.
Both of you are pioneers
in a space where
women like you are
often invisible, are not
present in the room.
And yet you have demanded
that the doors be open.
And you are giving
insight and shedding light
on the pernicious effects
of something most of us
believe to be simply unbiased.
I mean, the promise of AI
is that at last we
can have objective measurements,
evaluations, systems
that move us from the
injustice of the analog world
to justice in the digital world.
So can we have justice in
this new digital world?
Latanya, when you came
to the Ford Foundation
and blew the roof off of the
building with the presentation
you did, demonstrating
how the effects of racism
manifest in simple exercises
of aggregating names,
could you tell us a
little bit about how we
see that manifest online?
- Sure.
Actually, that story started
with when I had first
arrived here at
Harvard, and I was being
interviewed by a reporter.
And the reporter wanted to
see an article I had written.
So I went and typed
my name into Google.
And up popped the
link to the paper,
but also some ads that implied
I had an arrest record.
And the reporter
said, forget the ad.
Forget the article.
Tell me about the time
you were arrested.
And I said, well,
I wasn't arrested.
And he says, then why does
your computer say you were?
And so we went back and
forth for a little bit.
And I clicked through the link,
even paying the fee, all just
to show that the company
had no arrest record for anyone
named Latanya Sweeney.
But that started me
taking two months,
typing in the names of real
people, trying to understand
how this came to be.
And I did hundreds of
thousands of searches
across the United
States and learned
that the company had
actually put down ads
on the names of all real
Americans or real adults,
rather, who they believed
lived in the United States.
But if your name was given
more often to a black baby
than a white baby,
an ad would pop up
implying you had
an arrest record.
But if your name was given
more often to white babies,
it didn't.
And the difference was huge.
It was like an 80%
to 20% difference.
Now, discrimination
in the United States
isn't illegal per se, but we
do have protected groups
in certain situations.
And one of those
groups is blacks.
And one of those
situations is employment.
And the argument that I made was
that when you apply for a job,
someone will look online to see
what information is about you.
And this put African
American and black applicants
at a tremendous disadvantage.
Because right away
the computer was
sort of implying something about
them that often wasn't true.
That turned out to be
exactly what was needed
to open, in the
Department of Justice,
a civil rights investigation.
I'm a computer
scientist by training.
And it was the first time any
of us thought in terms of,
oh my gosh, this
computer is racist.
[LAUGHTER]
And how did this come to be?
And so that was sort of the
start of a real awakening
that now we see it
engaged in so many ways,
that the pursuit of
technology is not
exempt from these
same ills that we
find in other parts
of our society
and may be even
more potent today.
- But before you,
this had not happened.
So why didn't some white
guy computer scientist
figure this out?
- Well, first of all, if he
was searching for his name,
he would have gotten
a nice neutral ad.
So he may not have
been sparked by it.
So this speaks
directly to the idea
of me being who I
am in that situation
and having one of those
black sounding first names.
- And so what can we
extrapolate from this?
Because I know that some of the
work that you have continued
to do has looked at the
predictive analytics that
are being used around
which major decisions
are being made that
impact people's lives
far beyond employment.
- Yeah.
I took time off from Harvard
to be the chief technology
officer at the Federal
Trade Commission.
And one of the things
that became very clear
is how technology was allowing
very specific types
of fraud, very specific ways
to disenfranchise people,
to really exist.
And that was everything from,
if you're on the internet--
so for most households
in the United States,
everyone's most
frequently visited
websites are the same first 10.
But after number 10,
they deviate greatly
specifically based
on whether or not
you have a child, your
income, your education level,
your race, and your interests.
And the more you get into
a community that you feel
is more like you,
the more you trust.
And those are the places
where huge frauds happen.
And so that was kind of this
interesting relationship
we began to learn over
and over again at the FTC
around how people trust their
social networks and so forth
and the internet
and how they can
be manipulated against them.
That became part of, when
I came back to Harvard,
our investigations
with students.
I teach a class here called
Tech Science To Save the World.
And we began looking
through 2016.
How would old ways
in which people
were disenfranchised from
voting show up in technology?
And the work showed
many discoveries.
But one of them was that we were
the first to show that 36 voter
registration websites had
vulnerabilities.
And this year we
taught the class.
And we were able to point out
a vulnerability in the 2020
census, which will go online.
These things matter, because
they're subtle in the sense
that, if somebody
disenfranchises
you online, you still
show up at the polling place,
except you're not
in the poll book.
So they give you a
provisional ballot.
So you think you voted,
but in many states
the vote doesn't count.
Or in the census, a miscount
determines the number
of representatives we have in
the House of Representatives,
and therefore can tilt
the balance of Republicans
and Democrats.
So these things, in some
ways, tend to be small,
but the manifestations
of them are huge.
- So Joy, you have started
an organization called
the Algorithmic Justice League,
a new civil rights organization
for the 21st century.
And you have also brought
Amazon and IBM to their knees.
[LAUGHTER]
I mean, it is you
who shamed them
on the front pages of The
New York Times and in the media
by calling them out,
by calling them out
on the ways in which they were
making millions of dollars
selling facial
recognition programs
and other products that
were actually flawed.
And your research demonstrated
that they were flawed.
But it sounds like they didn't
want to hear that from you.
- Well, with the
Algorithmic Justice League,
I started it because I was
working on an art project that
went awry.
And so I'm sure everybody
in this audience
has heard of the white
gaze, the male gaze.
Well, to that I
add the coded gaze.
And the coded gaze is a
reflection of the priorities,
preferences, and also prejudices
of those who have the power
to shape technology.
So I was working
on an art project
that used face detection.
So when I looked at a mirror,
it would say, hello, beautiful.
Or it would put a
lion on my face,
so I could become Serena
Williams just for fun.
I'm at the Media Lab.
We do these kinds
of explorations.
[LAUGHTER]
So as I was working on this
project in a class called
Science Fabrication, which is
about visioning what might be
and trying to see if
you can manifest it now,
I noticed there was a problem.
The face detection
software I was using,
it worked fine for
my friend's face.
But when it came to my face,
I ran into a little problem.
[LAUGHTER]
But I got an assist,
right, so literally
coding in a white mask.
I mean, [INAUDIBLE]
already said it,
but I didn't think it would be
so literal when it happened.
[LAUGHTER]
And so I had the opportunity to
share this on the TED platform.
And in that talk, this is
when I talked about launching
the Algorithmic Justice League.
Because I'm thinking, well, if
they can't get our faces right,
what else could be going wrong?
And I also noticed I
had something in common
with the women of Wakanda.
And so when the Black
Panther came out,
I decided to run their faces.
Some of them were not detected;
some of them were misgendered.
But then I decided to test
out age classification, age
estimation.
So those red columns
you're seeing
are under the age header.
It's verified,
black don't crack.
We see it here.
[APPLAUSE]
But it really became
more serious in terms
of thinking towards justice when
I read a report from Georgetown
Law showing that 1 in 2 adults,
over 130 million people,
has their face in a face
recognition network that
can be searched
by law enforcement,
without a warrant, using
technology that hasn't
been audited for accuracy.
This is one of the reasons
why we audited Amazon,
because they're selling to
law enforcement right now.
They're trialing this
technology with the FBI.
Now, some people
are also saying,
look, not being detected, that's
not the worst thing, right?
Maybe we got a windfall.
But for me it wasn't just
about not being detected.
What happens when
you're misidentified?
So in the UK, where
they've actually
done performance
metrics, they showed
that they had false positive
match rates of over 90%,
more than 2,400 innocent
people being falsely matched
and even cases of women
being matched with men.
Last week, an African
American teenager
sued Apple for $1
billion, because he'd
been misidentified through
some of the facial analysis
and recognition technology
that's out there.
So because this technology
is actually in the real world
and can change people's
lives in a material way,
that's why I started the
Algorithmic Justice League.
And that's why I've been
challenging large tech
companies.
- And recent research, which
we were emailing about a couple
of weeks ago, truly
bowled me over.
So talk about the
results of the research
around autonomous vehicles
and people of color.
- Oh.
So--
[LAUGHTER]
--you probably know
where this goes.
Let me back up really quickly.
So my MIT research was
called Gender Shades.
And what I did along
with Dr. Timnit Gebru--
and you see us posing for
Bloomberg 50 right there,
looking fierce with our
co-founder of Black in AI.
What we were showing was that
if you looked at skin type
as a way of evaluating
facial analysis technology,
you would find different
kinds of disparities
than if you just looked at race.
So other researchers
took that idea
and said, OK, let's apply
it to self-driving cars.
And let's look at pedestrian
tracking technology.
So using a methodology
similar to the one
that was developed
in Gender Shades,
they tested it on the cars.
Turns out they're less accurate
for darker skinned individuals
when it comes to tracking.
So with the promise of self-driving
cars, autonomous vehicles,
literally not being seen
has real-world consequences.
[LAUGHTER]
- So like-- wait.
So let's just be really clear.
So in this new digital
world, if you are black,
you are more likely to be run
over by the autonomous vehicle?
- We got to be careful--
[LAUGHTER]
--extra careful.
So that's why going white
face sometimes, just so that--
[LAUGHTER]
[INAUDIBLE]
- But that's the irony is
that in this new digital world
we may literally have
to wear white face.
- Yeah.
That is the new irony, sadly.
- Well, I would like
to say I do think
computer science can do better.
[LAUGHTER]
- Says the PhD from MIT.
So tell us how we make
that happen, Dr. Sweeney.
- So, look, technology
design is really
sort of the new policymaker.
And these decisions
are really a reflection
of people building technology
in their own image.
AI has always been this
idea of building machines
in your likeness.
And as they're
building AI, what
is like them is being
overfitted to the fact
that they're often
white men in their 20s.
- And this is something I call
the problem of pale male data
sets.
[LAUGHTER]
- Pale male data sets.
- Pale, male, and sometimes
stale, but often male data
sets.
[LAUGHTER]
OK.
So when I was doing the
research for Gender Shades,
I started looking at all of
these data sets of faces.
And I looked at data sets that
were used as gold standards.
And what came up
time and time again
was the overrepresentation of
lighter skinned individuals,
the overrepresentation of men
and the underrepresentation
of women, and especially
women of color.
So if you're thinking
about AI and machine
learning as one of the
ascendant approaches,
machines are learning from what?
Data.
So in this case,
data is destiny.
And if we have pale
male data sets,
we're destined to fail
the rest of society,
whether it's on our streets
because we can't detect
different kinds of
individuals, whether it's
in a health care setting
where people are trying
to detect things like melanoma,
or see if you can infer things
like early signs of dementia.
So the lack of representation,
what I call power shadows,
ends up in our data sets
and our evaluation benchmarks
as well.
- So Latanya, you were
the CTO at the FTC.
What does government
need to do about this?
Is there a role for
government in this?
I mean, in the old analog world,
we had the Civil Rights Act.
We had the EEOC.
We had a regulatory regime that
protected the public interest.
We have yet to define what
the public interest is
in this new digital world.
- Well, a lot of
the work that we
do, I take the fact that
people fought very hard--
and we saw a lot of that in the
scenes that were shown earlier--
for the rights and the
regulations that we have now.
As technology rolls
out, it dictates
how we are going to live
our lives by what technology
allows us to do or
doesn't allow us to do.
And what people
don't seem to realize
is that every democratic
value is up for grabs
by what technology
allows or doesn't allow.
And so it's been
incredibly important to be
able to produce technologists
sort of in the public interest,
a group of technologists who are
interested in understanding how
to find these unforeseen
consequences to shore up
journalism, to shore
up our regulators,
and just help us really apply
the laws and regulations we
have to technology and
also to help technologists
do their job better.
For many people in
high tech, I don't
think this was ever intended.
For many of them, it really
is an unintended consequence.
So there's also a call or
a need for technologists
to do their job better in the
high tech companies as well.
- But we know that,
for example, one
of the reasons I've
come to know you
is because at the
Ford Foundation
we have been working on this
new field of public interest
technology.
Because just as there
needed to be a field
created of public
interest law in the 1960s,
we need to think about
what the public interest is
in this new digital world.
And in fact, it's
the private sector
who has determined the bounds
of what is public and private.
And in the Zuckerberg hearings,
we witnessed, I think,
the interaction of capitalism
and democracy, and
democracy lost.
- Yeah.
- Because there was no one
sitting behind those
congresspeople passing them
notes, giving them
questions to interrogate
the tech executives.
Because most of the
capacity in this
space is in the private sector.
And so one of the
things we have to think
about is how do we train a
generation of public interest
technologists, like
yourselves, who
are going to fight
the fight for justice
in this new digital world?
So, Joy, from your standpoint,
what's needed most at this time
to protect the public interest?
- It's a big question.
And it's not just one thing.
I still believe that, as we're
talking about public interest
technologists and
as we're thinking
about how computer scientists
and policymakers can shape
the future, we
have to also remind
ourselves of the importance of
the artists and the storytellers.
For the work that
I've done thus far,
I really believe that
part of the reason
it's gained attention is
because of that visual
of coding in a white mask.
There were FBI experts who did
a facial analysis test before,
but they didn't take the
approach of calling it out.
I also think that how we're
trained as computer scientists
has to change, so that there
is a sense of responsibility.
We had a doctor up
here earlier who
was very courageous in
standing up for Flint saying,
we take an oath.
We don't do that as
computer scientists.
We think we can create the
world, we can break things.
And until it's
actually confronted,
we don't actually have
to make any changes.
And so I think
changing how we learn
to be computer scientists
will be a huge part of it,
but not thinking that computer
scientists or technologists can
solve it alone.
- So you see the role of
the arts and humanities.
So do you see a new
curriculum being needed,
President Bacow, here
and at other places?
- Absolutely.
And I'd like to talk
about how, let's say,
looking at the social sciences
influenced my own work.
So with Gender Shades,
we went through.
We made a new data set,
et cetera, and so forth.
And what we were able to
show is that the current way
we're taught, the way we think
about the curriculum,
is to look at data and
information in aggregate.
And so if you see the
aggregate performance
for some of these
companies, it seemed OK.
So then we said,
let's break it down.
And let's look at what the
implications are for gender.
And we see gaps.
Let's look at what the
implications are for skin type.
And we see gaps.
But what I was able to do was
then bring in Kimberlé Crenshaw
and say, there's something
we can learn as computer
scientists from what she did
with anti-discrimination law,
saying that single-axis
analysis is not enough.
And so what happens
when you marry that
with computer vision?
Well, this is what we got.
We provided a new
kind of perspective
of looking at the data.
And here we see that for
one group, the pale males,
you have 100% performance.
And then for another
group, women of color,
right, you have the
worst performance.
And when we disaggregate that,
we got to error rates as high
as 47%.
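[The disaggregated evaluation she describes can be sketched in a few lines of Python. This is an illustrative sketch only: the helper function and the example records are hypothetical, and the numbers are invented to mimic the pattern she outlines, where an acceptable aggregate score hides a much worse subgroup score. They are not the Gender Shades results.]

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Compare aggregate accuracy with accuracy broken down by
    intersectional subgroup (gender x skin type). Each record is a
    (gender, skin_type, correct) triple."""
    total = sum(1 for _, _, ok in records if ok) / len(records)
    groups = defaultdict(list)
    for gender, skin, ok in records:
        groups[(gender, skin)].append(ok)
    by_group = {g: sum(oks) / len(oks) for g, oks in groups.items()}
    return total, by_group

# Invented predictions: the aggregate number looks fine even though
# one subgroup fares far worse than the others.
records = (
    [("male", "lighter", True)] * 50                                   # perfect
    + [("female", "lighter", True)] * 18 + [("female", "lighter", False)] * 2
    + [("male", "darker", True)] * 17 + [("male", "darker", False)] * 3
    + [("female", "darker", True)] * 5 + [("female", "darker", False)] * 5
)
total, by_group = disaggregated_accuracy(records)
print(f"aggregate: {total:.0%}")          # a seemingly acceptable 90%
for group, acc in sorted(by_group.items()):
    print(f"{group}: {acc:.0%}")          # darker-skinned women fall to 50%
```

[The aggregate metric and the single-axis breakdowns (gender alone, skin type alone) would all mask the worst-case subgroup; only the intersectional grouping exposes it.]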
So as a computer scientist
sitting in my body
as somebody who is
also reading Crenshaw,
I'm able to then
provide new insights
into what we're doing with
computer vision and computer
science.
[APPLAUSE]
- Dr. Sweeney,
are you encouraged
by what you are seeing in the
classroom here at Harvard?
- Oh, my gosh.
So the Save the World Class--
you know, students
want to do good.
And they want the work that they
do to really matter and change
the world.
And the class has
really touched the lives
of a lot of the students.
They've gone out.
They've done amazing things.
They've gotten
Facebook to fix bugs.
They've gotten Airbnb to
address price discrimination.
They were the first to point out
problems in the Affordable Care
Act.
I mean, the list
of accomplishments
that these students have
done goes on and on and on.
And we do have to thank
the Ford Foundation, too.
Because the Ford
Foundation has given us
the funds to allow the
students to explore
these unforeseen consequences
wherever they may be.
And the students, they
literally mean to save the world.
And it's been phenomenal.
It's made a big difference.
I just want to say also
that the space of problems
is huge, from
algorithms being used
not just in our homes, but
also determining what you're
going to see on your
social media feed,
to determining
sentencing and recommendations
around recidivism,
all of which
show unfairness
and bias.
And so the amount
of work is huge.
So some of it is a matter of
shoring up and giving knowledge
to those who have the power
to help us make the change.
- And Joy, final
word, what do you
have to say to this audience of
people who are assembled here,
because we care about justice
in America and in the world?
And we don't all understand
this new technology.
In fact, it's a little
frightening to some of us.
Should we be frightened?
- We should be
working together
so there's less to fear.
And I hope that all
of you will join me
in moving towards
algorithmic justice,
because we've entered the age
of automation overconfident
and underprepared.
You see in this
chart behind me all
of the areas in which
automated decision making is
starting to enter our lives.
So it's up to us,
we who are here
and also in the live stream,
to be asking questions.
If you're going
for a job interview
and they're using AI to
make a determination,
ask what's going on.
Also, share your stories.
We have Bias in the
Wild reports that
are submitted to the
Algorithmic Justice League
where people are like,
my Snapchat [INAUDIBLE],
whatever it might be, you know?
[LAUGHTER]
So I think it's really
important that people
feel they have a voice
and don't feel like,
oh, if I'm not a technologist,
if I don't have PhDs from MIT,
I can't be part of
this conversation.
But that's not true.
We need to move towards
participatory AI
where those who
are at the margins
are actually centered when
it comes to decision making
around the technology
that's shaping our lives
and shaping society.
- Ladies and
gentlemen, give it up
for Latanya Sweeney
and Joy Buolamwini.
[APPLAUSE]
- And Darren.
- And Darren.
[APPLAUSE]
