A detective turns over a picture of a guy
and says — I look, and he’s like, “So,
that’s not you?” I look. I said, “No,
that’s not me.” He turns another paper
over. He says, “I guess this is not you,
either.” I pick that paper up and hold it
next to my face, and I said, “This is not
me.” I was like, “I hope you all don’t
think all Black people look alike.” And
then he says,
“The computer says it’s you.”
The thing we must keep in mind about Robert Williams’ case is that this is not an example of one bad algorithm. Just like instances
of police brutality, it is a glimpse of how
systemic racism can be embedded into AI systems
like those that power facial recognition technologies.
It’s also a reflection of what study after
study after study has been showing, studies
I’ve conducted at MIT with Dr. Timnit Gebru,
and Deb Raji, studies from the National Institute of Standards and Technology, showing that
on 189 algorithms — right? — you had a
situation where Asian and African American
faces were 10 to 100 times more likely to
be misidentified than white faces. You have
a study, February 2019, looking at skin type,
showing that darker-skinned individuals were
more likely to be misidentified by these technologies.
But I want to also point out that while we
are showing examples of misidentifications,
there’s the other side. If these technologies
are made more accurate — right? — it doesn’t
mean that accurate systems cannot be abused.
So, when you have more accurate systems, it
also increases the potential for surveillance
being weaponized against communities of color,
Black and Brown communities, as we’ve seen
in the past. So, even if you got this technology
to have better performance, it doesn’t take
away the threat to civil liberties. It doesn’t take away the threat to privacy. So the
face could very well be the final frontier
of privacy, and it can be the key to erasing
our civil liberties, right? Take the ability to go out and protest: you have chilling effects when you know Big Brother is watching. Oftentimes
there is no due process. In this case, it was only because the detective said, “Oh, the computer must have gotten it wrong,” that the use of the technology even came to light. But oftentimes people don’t
even know these technologies are being used.
And it’s not just for identifying someone’s
unique identity. You have a company called
HireVue that claims to analyze videos of candidates
for a job and take verbal and nonverbal cues, with models trained on current top performers. And so,
here you could be denied economic opportunity,
access to a job, because of failures of these
technologies.
So, we absolutely have to keep in mind that
there are issues and threats when it doesn’t
work, and there are issues and threats when
it does work. And right now when we’re thinking
about facial recognition technologies, it’s
a high-stakes pattern recognition game, which amounts to gambling. We’re gambling with
people’s faces. We’re gambling with people’s
lives. And ultimately,
we’re gambling with democracy.
So, talk about the agencies that you understand
are using this. I mean, you’ve mentioned
this in your writing — Drug Enforcement
Administration, DEA; Customs and Border [Protection],
CBP; ICE. Explain how they are using them.
Right. And in addition to that, you also have
TSA. So, right now we have a Wild Wild West
where vendors can supply government agencies
with these technologies. You might have heard
of the Clearview AI case, where you scrape
3 billion photos from the internet, and now
you’re approaching government agencies,
intelligence agencies with these technologies,
so they can be used to generate investigative
leads — right? — or they can be used to
interrogate people. So, it’s not a situation
where there is transparency about the scope
and breadth of its use, which is another situation
where we think about due process, we think
about consent, and we think about the threats of surveillance.
So, Joy, you have written, “We Must Fight
[Face] Surveillance to Protect Black Lives.”
If you can talk about the calls of the Black
Lives Matter movement, people in the streets
to defund police departments, to dismantle
police departments? How does facial recognition
technology fit into this?
Absolutely. So, when we talk about defunding
the police, what we have to keep in mind is,
when resources are scarce, technology is viewed
as the answer to save money, to be more efficient.
And what we’re seeing is that the technologies
that can come into play — right? — can
become highly optimized tools for oppression
and suppression. So, as we’re thinking about
how we defund the police, how we shift funds
to actually uplift communities, invest in
healthcare, invest in economic opportunities,
invest in educational opportunities, we have
to also understand that as a divestment from
surveillance technologies.
