[music]
Ladies and gentlemen, the show 
will begin in five minutes.
Please take your seats.
Ladies and gentlemen, the show will begin in three minutes.
Please turn off your cell phones
and 
silence your electronic devices.
Thank you.
Hello, everybody.
Welcome to the third annual AI Now Symposium. Somehow we've had three. It's been an extremely big year, and this is our biggest gathering yet. Tonight we talk about what's been happening in AI, we acknowledge good work, and we map some paths forward. We have a stellar lineup for you tonight.
They are going to be addressing 
ethics, accountability, and 
organizing.
Our first panel is going to be on facial recognition and surveillance. The second panel is going to look at the problems of rising inequality, austerity, and politics in AI. Finally, we're going to look at research and organizing and how they can work together to develop stronger accountability and address some of the vexing issues that we've been facing in 2018. Speaking of which, we're going to give you a tour of what happened this year. AI systems have been increasing in power and reach against a pretty stark political backdrop.
Meanwhile, there have been major shifts and upheavals in both the AI research field and the tech industry at large. To capture this, we've decided to do something a little different: we're going to visualize all of the stories that happened this year. I want to warn you, this is overwhelming. It is designed to be. It's been a pretty endless parade of events.
In any other year, Cambridge Analytica would have been the biggest story. This year it is just one of many. Facebook alone had a royal flush of scandals. Let's go through them briefly, just a sample.
A huge data breach in September, multiple class actions over discrimination, potential violations of the Fair Housing Act in May, and hosting massive networks of fake Russian accounts all year round. We saw Mark Zuckerberg himself and others testifying. And Facebook was by no means the only one.
News broke in March that Google was building AI systems for the Department of Defense's drone project, known as Project Maven. This kicked off a wave of employee organizing across the industry. Then in June, when the Trump Administration introduced the family separation policy that forcibly removed kids from their parents, employees from Amazon and Microsoft asked their companies to end contracts with I.C.E. It was also revealed that I.C.E. had tampered with its own risk assessment algorithm so that it would produce only one result: 100% of immigrants in custody would receive the recommendation "detain." Zero would get "release."
Meanwhile, this was a big year for the spread of facial recognition. We saw Amazon, Facebook, and Microsoft launch facial recognition as a service. We also learned that IBM was working with the NYPD and secretly built an ethnicity detection feature to search for people's faces based on race. They did that using police camera footage of thousands of people in New York, none of whom knew their images would be used for this purpose. All year we saw more and more AI systems being used in high-stakes domains with real consequences.
Back in March we had the first fatalities for drivers and pedestrians from autonomous vehicles. In May, a voice recognition system in the UK that was meant to detect immigration fraud accidentally canceled thousands of people's visas. In July it was reported that IBM Watson was producing inaccurate and sometimes dangerous cancer treatment recommendations.
All of these events have pushed a growing wave of tech criticism focused on the unaccountable nature of these systems. Some companies, including Microsoft and Amazon, have made public calls for the regulation of technologies like facial recognition. That's a first, although so far we haven't seen any real movement from Washington. I'm waiting on that one. So that's a tiny sample of what has been a hell of a year. Researchers like us, who work on the social implications of AI, have basically been reckoning with the scale of the challenge that we now face. There's so much work to be done. But there are also some really positive changes too. We've seen the public discussion about AI mature in some significant ways.
Six years ago, when I was reading papers from people like Kate and Cynthia on the topic of bias and AI, these were outlier positions. Even two years ago, when some of you were at the first AI Now Symposium, this was not mainstream. Now it is. There are many ways in which AI can reflect bias. Like Amazon's machine learning system for resumé scanning, which was shown to be discriminating against women: it was down-ranking resumés for containing the word "women."
Back in July, the ACLU showed that Amazon's Rekognition tool incorrectly identified 28 members of Congress as criminals. Research has also shown that facial recognition performs less well on darker-skinned women. We're thrilled that a co-author of that research will be joining us on stage, along with Nicole Ozer, who drove the ACLU project. Overall, this is a big step forward. People now recognize bias as a problem. But the conversation has a long way to go.
It is already bifurcating into different camps. In column A, we see technical fixes for bias, a kind of solutionism. In column B, we see attempts at ethical codes that will do the heavy lifting for us.
In just the last six months, speaking of column A, IBM, Facebook, Microsoft, and others have released toolkits that promise to mitigate these issues using technical methods to achieve fairness. That work is necessary and important.
But these toolkits can't fix the problem alone, because at this point we are being sold technical methods as a cure for social problems. They are sending in more AI to fix AI. We saw this logic in action when Facebook, quizzed in front of the Senate, repeatedly pointed to AI as the cure for its algorithmic problems. There has been a big reaction to this over the year. What should be built? What should we not build? Who should make those decisions?
Google published its AI principles, and ethics courses emerged with the goal of teaching engineers to make ethical decisions. But a study published yesterday called into question the effectiveness of these approaches. It showed that software engineers do not commonly change behavior based on exposure to ethics codes. Perhaps this shouldn't surprise us. The current focus on individual choice, thought experiments, and technical definitions is too narrow. Instead, more oversight and public input is required.
Or, as Lucy Suchman, a pioneering thinker in human-computer interaction, put it: while ethics codes are a good start, they lack any real public accountability. We are delighted she's joining us on the final panel.
In short, ethics principles can help, but we have a long way to go before they can grapple with the complexity of the issues in play. The biggest as-yet-unanswered question is: how do we create sustainable forms of accountability? This is a major focus of our work. We are trying to address these kinds of challenges by looking at AI in this larger context, beyond a purely technical focus, to include a wider range of voices and methods.
So, Kate, how do we do this?
Let me share with you: we have seen five themes, and we thought we would give you a quick tour tonight. First of all, there's a lot to be learned by looking at the underlying material realities of an AI system. Last month we published a project called Anatomy of AI. This is the result of a yearlong collaboration between myself and Vladan Joler, in which we traced how many resources are required to build a device that responds when you say, "Alexa, turn on the lights." Starting with that phrase, we traced all of the environmental extraction processes, from mining and smelting through logistics and container shipping, to the international networks of data centers needed to build AI systems at scale, all the way through to the final resting place of so many of our consumer AI gadgets: buried in Ghana, Pakistan, or China. When we look at these material realities, I think we can begin to see the true resource implications that this kind of large-scale AI requires for our everyday convenience.
But in doing this research, we discovered something else: there are black boxes on top of black boxes on top of more black boxes. It is not just at the algorithmic level. I think this is why the planetary resources needed to build AI at scale are so hard for the public to see. But seeing them is really important if we're going to develop real accountability.
Second, we are continuing to examine the hidden labor behind AI systems. Lots of scholars are working on this issue right now, including people like Lilly Irani, and we are delighted to have Astra Taylor with us, who coined a term for systems that seem to be AI but can only function with a huge amount of human input. Often when we think about the people behind AI, we might imagine a handful of highly paid dudes in Silicon Valley. This isn't the whole picture. As Adrian Chen showed, more people work in the mines of content moderation than work at Facebook and Google. AI takes a lot of work, and as we're going to hear tonight, most of it goes unseen.
And third, we need new legal approaches to contend with automated decision-making. That means looking at what's working and at what's needed. Accountability rarely works without liability on the back end. There have been some breakthroughs this year. Europe's data protection rules, the GDPR, came into effect in May. New York City announced its automated decision task force, the first of its kind in the country. California just passed the strongest privacy law in the U.S. Plus there are hosts of new cases taking algorithms to court.
We recently held a workshop where we invited public interest lawyers who are representing people who think they've been unfairly cut off from Medicaid benefits, or who lost their jobs due to biased systems, or whose prison sentences have been affected by skewed risk assessments. It focused on the question: how do you build more due process into the safety net? Later tonight you are going to hear from Kevin De Liban about his groundbreaking work. We have also published work designed to give the public sector more tools to ask whether an algorithmic system is appropriate to use.
And then there's shoring up community oversight. Rashida Richardson will be talking more about that later tonight.
-This brings us to the topic of inequality. Popular discussion often focuses on hypothetical use cases and promises of future benefit. But AI is not a common resource available equally to all. Looking at who builds these systems, who makes the decisions about how they are used, and who is left out can help us see beyond the marketing. These are some of the questions that Virginia Eubanks wrote about, and we're happy she will be joining us tonight. The power and insights that can be gleaned from AI are further skewing the distribution of resources.
These systems are so unevenly distributed that they may be driving greater forms of wealth inequality. A new report from the U.N. said that while AI could be used to address major issues, there's no guarantee it will align with the most pressing needs of humanity. The report notes that AI systems are increasingly used to manipulate human emotion and spread misinformation. Tonight we have the U.N. Special Rapporteur on extreme poverty and human rights, Philip Alston, joining us to talk about his groundbreaking work on inequality in the U.S.
If last year was a big moment for recognizing bias and the limitations of technical systems in social domains, this coming year is a big moment for accountability. A lot is already happening. People are starting to take action. There are new coalitions growing. We're really excited to be working with a wide range of people, many of whom are in the room right now,
including legal scholars, journalists, health and education workers, organizers, and civil society leaders.
AI research will always include the technical, but we're working to expand its boundaries, emphasizing interdisciplinarity and the perspectives of those on the ground. That's why we are delighted to have speakers like Sherrilyn Ifill and Vincent Southerland, each of whom has made important contributions to this debate. Because genuine accountability will require new coalitions: organizers and civil society leaders working with researchers to assess AI systems and to protect the communities who are most at risk.
-So tonight we offer you a different kind of AI symposium, including people from different disciplines to support these sectors and build more accountability. That's why Meredith and I founded AI Now in the first place, and it has really driven the work that we do. AI isn't just tech. AI is power, it is politics, and it is culture. So on that note, I would like to welcome the moderator of our first panel of the evening, on facial recognition: Nicole Ozer. Don't forget to submit your questions for any of our panelists. Just go to the Twitters and use #AInow2018. That's true for people in the room and everybody on the live stream. Hello, we see you. Not really.
(Laughter).
Please send your questions in.  
We'll look at them.  Now on with
the show.
(Applause).
>> Good evening, everyone. I'm Nicole Ozer. I'm the Technology and Civil Liberties Director at the ACLU of California. I've led cutting-edge work in the courts, with companies, with policymakers, and in communities to defend and promote civil rights in the digital age. Our team has worked to create landmark privacy laws like CalECPA, and we developed surveillance oversight ordinances in Santa Clara and Oakland.
We've worked to expose and stop social media surveillance of Black activists on Facebook, Twitter, and Instagram. And we have started a national campaign to bring attention to the very real threat of face surveillance. Tonight we are talking about face surveillance, and of course I can't think of a more timely topic.
For some quick background: the ACLU has long opposed face surveillance. We've identified it as a uniquely dangerous form of surveillance and a particularly grave threat because of what it can do to secretly track who we are, where we go, what we do, and who we know; because of how easily it can be layered onto existing surveillance technology, and how business and societal incentives combine to make it ever more dangerous once it gets a foothold; and because of how it feeds off and exacerbates this country's history of bias and discrimination.
Professor Woody Hartzog wrote: imagine a technology so potently dangerous, so inherently toxic, that it deserves to be rejected, banned, and stigmatized. That technology is facial recognition. And consent rules, procedural requirements, and boilerplate contracts are no match for the structures and incentives of exploitation. This past winter, our team discovered that the future is now.
Face surveillance that had been thought of as hypothetical was being quietly and actively deployed by Amazon for local law enforcement. Amazon's Rekognition product promised to locate up to 100 faces in a single picture and run real-time surveillance across millions of faces. The company was marketing its use to monitor crowds and "people of interest," and wanted to turn officer-worn body cameras into real-time surveillance. When we discovered the use of this technology, we were shocked to find that nothing was in place to stop it from being used as a tool to attack community members, to target protesters, or to be used by I.C.E., which we know has been staking out courthouses and schools and walking down the aisles of buses to arrest and deport community members.
Organizations came together last spring to blow the whistle and to start pushing Amazon to stop providing face surveillance to the government. Our coalition's call was quickly echoed by institutional shareholders, by 150,000 members of the public (some of you may be among them), by hundreds of academics, and by more than 400 Amazon employees themselves.
The ACLU also reinforced the public's understanding by running our own test of Amazon Rekognition. We used the default matching score that Amazon sets for its own product, which we know law enforcement has also used. The result? Amazon Rekognition falsely matched 28 members of Congress, and disproportionately falsely matched members of color, including civil rights leader John Lewis.
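A minimal sketch (not the ACLU's actual code) of how such a one-to-one comparison can be run against Amazon Rekognition at its default 80% similarity threshold. The bucket and image names here are hypothetical.

import boto3

client = boto3.client("rekognition")

# Compare one face photo against another at Rekognition's default
# similarity threshold of 80%. Bucket and file names are hypothetical.
response = client.compare_faces(
    SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "portrait.jpg"}},
    TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "mugshot.jpg"}},
    SimilarityThreshold=80.0,
)

# Anything returned in FaceMatches at this threshold counts as a "match",
# which is how a legislator's portrait can be paired with an arrest photo.
for match in response["FaceMatches"]:
    print(f"match reported at {match['Similarity']:.1f}% similarity")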
Members of Congress have been asking Amazon for answers, answers they largely have not gotten. The reality is we only know the tip of the iceberg about how the government, particularly in the current political and social climate, is gearing up to try to use face surveillance to target communities. And we don't know what companies large and small are doing, or not doing, to protect community members. That brings us to tonight's timely discussion. I want to thank AI Now.
Their research is helping to inform some of this very important work, and I want to thank them for the immense privilege of being here tonight to discuss this critical issue with Timnit Gebru. She's a research scientist who studies the ethical considerations underlying data mining and the methods available to audit it. She's also a co-founder of Black in AI, where she's working to increase diversity in the field and reduce the negative impacts of bias in training data. Timnit's PhD is from Stanford, where she studied computer vision in the AI lab. We also have Sherrilyn Ifill, the President and Director-Counsel of the NAACP Legal Defense Fund. She's the seventh person in its history to lead the legal rights organization.
For many, Sherrilyn needs no introduction. So many of us know and admire her as a legal thinker, an author, and a true powerhouse in bringing to light challenging issues of race in American law and society. So we're in for a wonderful conversation tonight, one to help us all dig deeply into understanding the broader social context. This is not about solving a technical matter: decisions about the future of this technology, and how it is and is not used, matter profoundly to the future of who we are as communities and as a country. And we'll focus on the power that we have, that we can continue to build, and that we'll need to wield to push back aggressively on threats to the safety and rights of communities. So with that, let's get started. I have the first question for Sherrilyn. Face surveillance is a relatively new technology, but it isn't being developed in a vacuum. How do you think the threat of face surveillance fits within this country's history of violence and discrimination?
Thank you. Thank you for inviting me and for recognizing the importance of getting our hands around this critical issue, and thank you for teeing up the first question in this way. I think much of our conversation about technology in this country happens as though technology, and AI in particular, is developing in some universe separate from the one you and I know we live in, a universe riddled with problems of inequality and discrimination.
So here we are in 2018. It is four years after we all watched Eric Garner choked to death. It is four years after Michael Brown was killed in Ferguson. It is three years after Freddie Gray was killed in Baltimore, three years after Walter Scott was shot in the back in a park in North Charleston. It comes at a time of mass incarceration, a phrase that everyone knows.
The United States incarcerates the most people in the world, an overwhelming percentage of them African-American and Latino. It comes at a time in which we are segregated at levels that rival the 1950s, in our schools and where we live. And it comes at a time of widening income inequality, some of the widest that we've seen in this country since the 1920s.
And into that reality we drop this awesome power that allows us to speed up all of the things that we currently do, take shortcuts, and theoretically produce efficiencies. Think about facial recognition technology in the context where we most often talk about it as a threat: law enforcement and surveillance. Who here thinks the biggest problem of law enforcement is that, you know, they need facial recognition technology?
Why, as a matter of first principle, do we think this is something that's needed for law enforcement? Why is this something we would devote our attention to? Why would we devote our public dollars to the purchase of this technology when we recognize all of the problems within law enforcement? What we do is deposit these technologies into industries and governmental institutions that have demonstrated they are unable to address deep problems of discrimination and inequality, problems that literally destroy lives. Not just the killing of people, which is bad enough, but actual destroyed lives.
We drop it into a period of racial profiling. We drop it into stop-and-frisk. We are part of the team that sued the NYPD over stop-and-frisk here in New York, and we are diligently monitoring that decree. Now we have a technology that purports to assist police in doing this kind of law enforcement activity.
When we combine it with things like the gang database in New York, which we've been trying to get information about: New York City has a gang database. There were about 34,000 people in it. They have reviewed it and dropped some folks out; I think it is down to between 17,000 and 20,000. They clearly made mistakes, since they were able to drop 10,000 people. The gang database is somewhere between 95 and 99% African-American, Latino, and Asian-American. It is 1% white. We've asked the NYPD to tell us the algorithm or the technique they use to put people in the gang database and how to get out of it. If I discovered I was in it, how could I get out? They still haven't provided us with that information. We just filed suit last week.
Now try to imagine marrying facial recognition technology to a database that theoretically presumes you are in a gang, so that your name pops up in the database. We know that in the age of the Internet, even when you are scrubbed out, the record still exists. We're unleashing a technology that has the ability to completely transform, forever, the lives of individuals.
We do work around employers misusing criminal background checks. Some of that work demonstrates that your arrest record stays with you forever. You have employers that won't hire anyone who has an arrest. We are talking about a class of unemployable people, a class of people who are branded with a criminal tag.
We have a number of school districts that have invested in facial recognition, to flag students who are suspended, or anyone carrying one of the ten most popular guns used in school shootings, when they come in the door. Very often the students flagged aren't the ones who are suspended. Now we're going to brand students within the school. So that's the context in which this technology is coming to us.
To me it is a very chilling context. Yet we talk about facial recognition technology, and all of the other efficiency algorithms and AI technologies, as though they exist, were created, or can be evaluated separately from the very serious context I just described.
-And in terms of the historical context, I often think of the 1958 Supreme Court case NAACP versus Alabama, in which the NAACP was able to maintain the privacy of its membership list. In that case the Supreme Court recognized the vital relationship between privacy and the ability to exercise First Amendment rights, to be able to speak up and protest. So, you know, in the current political context, how afraid are you, how worried should we be, about the impact of surveillance on civil rights and activist movements?
We should be worried. When we hear the way the President or the Attorney General talks about protest groups, and the creation of the "Black Identity Extremist" category, and then consider that this technology can be mounted on police cameras and that police can be taking this kind of data from crowds of people... Think about how the crowds of people who came out to protest the confirmation of Brett Kavanaugh were characterized by the President and some of the Republican leadership.
Imagine those kinds of protesters and activists being subjected to facial recognition, being included in some kind of database that identifies them with these kinds of words. Think about what this means for young people. We're in a movement period in this country, in which young people are engaged and protesting. Now we're going to monitor them?
The recognition in NAACP versus Alabama, the reason the NAACP did not have to give up its membership list, is that the court recognized the possibility that revealing who these individuals were would subject them to retaliation within the local community and chill their freedom to fully exercise their First Amendment rights. They have the same right as anyone else to be out in the public space. We have to talk about that. So much of this is about the contested public space in this country, and the way in which we now want to privatize everything, because you have to believe that if you step into the public space, you will automatically be surveilled.
And the last thing is just to go back to the point about us being so segregated. It is not just recognizing the face; it is also evaluating it. We know that police officers tend to assign five additional years to African-American boys; they see them as being older than they are.
Who says that people who have grown up so segregated are in a position to evaluate or tell people apart, or to know whether someone is a threat or is dangerous? Once we go down this road without recognizing the ways in which, in America, many of the people who would be using this technology are ill-equipped to evaluate someone's face, to recognize and differentiate between two black people or two Latino people... Anyone who has been asked, "Why aren't you happy? Why don't you smile more?" knows that somebody can look at you and get your emotion completely wrong.
We should recognize that this technology is not just going to click and say, that's Sherrilyn Ifill. It is going to do more than that. It is going to try to evaluate my intentions in the moment. Maybe we're out protesting. Many of us understand this is different: while you can leave your cell phone at home and not be tracked, we can't leave our faces at home.
On a more complex level, what do you think it is about face recognition that potentially makes it different, more dangerous, or riskier than other AI technologies?
>> I think you touched on many of the things I wanted to say. The first is the fact that, for example, it doesn't just recognize you; it evaluates you. Let's think about your emotions. Rana, who started an emotion recognition company, said your emotions are some of the most private things you possess. Even if the technology worked perfectly, that would be terrible, if you could just walk around and people could see your emotions.
But it also doesn't work perfectly, and people trust algorithms to be perfect. I might be perfectly happy, and somebody can say I'm dissatisfied. Every day I learn something new about where these automated facial analysis tools are being used. I read recently, I forget the name of the company, that they were using automated facial analysis instead of time cards. And then they were talking about the potential to do emotion recognition in aggregate, so they can tell over time whether their employees are dissatisfied.
This is pretty  scary, right?
So it is a combination of things. The fact that there are some things that are very private to us, that we want to keep private. The fact that, as my research with Joy shows, automated facial analysis tools have high error disparities for different groups of people. And the fact that, at the same time, people trust them to be perfect. I think those three in combination are dangerous.
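A minimal sketch, with hypothetical data, of the disaggregated evaluation behind findings like these: instead of one aggregate accuracy number, compute the error rate separately for each demographic subgroup.

from collections import defaultdict

def error_rates_by_group(y_true, y_pred, subgroups):
    # Error rate computed separately for each subgroup rather than
    # as a single aggregate number.
    errors = defaultdict(int)
    totals = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, subgroups):
        totals[group] += 1
        errors[group] += int(truth != pred)
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical labels: the aggregate error rate (37.5%) hides a
# twofold disparity between the two subgroups.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 1, 0]
subgroups = ["lighter"] * 4 + ["darker"] * 4
print(error_rates_by_group(y_true, y_pred, subgroups))
# {'lighter': 0.25, 'darker': 0.5}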
>> So speaking of that research, there's been a lot of talk about accuracy, or inaccuracy, and about improving the technology's function overall. How does some of that conversation miss the bigger picture around face surveillance?
-I think the fact that we showed high error disparities could start the conversation, the same way the ACLU could show there were high error rates for some of the members of Congress, right? But that doesn't mean you should have perfect facial recognition being used against mostly black and brown people, like you said. These two conversations have to happen in tandem. For example, with emotion recognition, again I'll bring up Rana, she's the only person I've talked to about this, she started her company to help autistic kids, people with autism. I've talked to people who want to work with older people who have dementia and use this kind of emotion recognition technology. That could be something good, right?
In those cases, you don't want high error rates and disparities. The conversation about accuracy should happen. Similarly, there are other computer-vision technologies being used for melanoma detection. Again, you don't want the AI technology to fail on a very dark skin tone and have someone get misdiagnosed. That conversation should happen. But the solution is not just to have perfect accuracy in facial recognition.
-You both have drawn particular attention to government use, I think for a reason. I wanted to explore with you the distinction: can it be drawn effectively between government use and corporate use? Or do some of these things really bleed and blend together in terms of civil rights and civil liberties? This has been on my mind this week. Some of you may have seen the press about Facebook revealing this week that it thinks some photos have been scraped by Russian face surveillance firms.
This was an issue that we were particularly concerned about at the ACLU when the Cambridge Analytica story broke. Around the time Facebook dealt with the data it had provided to third-party apps, it started to change its privacy settings, and it got rid of certain protections for photos. We've been attentive to the fact that public photos could become a really attractive space for scraping face surveillance data. What do you all think: can and should we address these issues separately? Or do they bleed and blend together, and should we look at the bigger issues?
-I think on the privacy front, they blend together. Part of the difficulty of this work is that the pathway in is usually one or the other, right? So the pathway in is usually: this is a business. I'm running Amazon or running Facebook. It is wonderful. The owners and shareholders are making lots of money. People are using the technology for whatever reason they want, making a personal decision to use it. That's supposed to cover a multitude of sins. It does not. We know the two are often very close. It gets very close particularly in times of national security high alert, like post-9/11, when you had the telephone companies handing over information for surveillance.
We know there's a symbiotic relationship. The government relies on corporations to develop technology for it to use for a variety of reasons. I don't think there's some place where it is benign and some place where it is evil. The technology itself is like a monster that, once unleashed, is very hard to put back in the box. The problem, I think, is where government stops acting like government and starts acting like it is just another corporation, or another client of a corporation.
The government's responsibility is to protect the public. That's why the conversation about regulation is so important; that's the government's role. It can't act just like another consumer of a product or client of a corporation. It is supposed to hold the public trust. I think what we're seeing is the government falling down on the job, being so scared and tentative, buying the story that we have to leave these folks alone because they are the brilliant ones doing all of the wonderful technology stuff, and that if we regulate them we'll smash their creativity.
We once thought it was crazy that the government would require us to put a seat belt across our waists. We didn't want to do it. My feet were on the floor; we sat kids on the floor.
-I wrote a paper.
-It is true. You could fit as many people in the car as you had in the family. My father was outraged; he thought it was discrimination against people with big families. Now you can't imagine it. We create these boogeymen. The government has to reengage and not just be another client.
-I'm so happy you brought this up. We wrote a paper with Kate Crawford that had case studies, and the automobile industry was one of them. It took many years to legislate that you have to have seat belts. Even when they were doing crash tests, they did them with dummies modeled on male bodies, and you ended up with car accidents that disproportionately killed women and children.
-Right. It is not the car itself; there are certain uses that are dangerous for society, and there have to be some interventions there. I always think of civil rights and civil liberties as protected either by friction or by law. Decades ago it wasn't possible to monitor everyone using face surveillance. Now the technology has advanced, and much of the friction around what the police and the government can do has been eviscerated. The question is what types of protection are going to be built up to protect the public and communities.
We at the ACLU have called on Congress to pass a federal moratorium; we have to think through the implications. We have been part of a large coalition pushing Amazon and other companies to stop providing this technology to the government because of some of the threats and dangers we've talked about. As I mentioned, over 450 Amazon employees have themselves spoken out in writing about this. Just today, an Amazon employee called Jeff Bezos out.
Yesterday, when Bezos was on a panel, he acknowledged that the tech could be misused, but suggested the company has no responsibility; instead we should leave it to society's eventual "immune response" to address the real threats to community members.
In contrast, Google has new AI principles that specifically say they will not pursue technologies that gather information for surveillance in violation of international norms or human rights. With the last couple of minutes, I'd like both of you to answer one or both of these: What do you think the responsibility of companies is? What should they be doing? And what should lawmakers be doing?
Corporations consist of people. People have values. People can agitate and change the course of things. I think people in corporations need to remember that they have values and that it is their responsibility to advocate. I'll just leave it there.
I think the government has to take the long view. It holds, in many ways, the responsibility of communicating history to the corporations and companies that are developing technologies and setting up the Internet spaces that have been created. We know what has happened in the physical public space in this country. We know that most of the civil rights movement was a fight over dignity in the public space.
That's a lesson, right, to communicate as you think through how you are going to engage a new technology. The same is true for facial recognition technology: if we think about racial profiling, it is the government's obligation to recognize that those kinds of pitfalls exist and to compel corporations to adhere to some kind of regulatory scheme that guards against what we know are the excesses of every system. There are certain themes that recur in American life; racial discrimination is one of them. The idea that we're going to create a new technology and not have to worry about it is absurd.
That's a good segue into an audience question. Is it already too late? Aren't we already on camera everywhere we go?
-My thought is that companies are already watching people.
I don't know everything; I'm learning new things every day. Joy had told me about a company that's interviewing people on camera and doing emotion recognition on them, and then giving verbal and non-verbal cues to the employers who are their customers. I didn't know that existed until she wrote an op-ed about it. Every day I'm learning something new. We're unleashing this technology everywhere without guardrails, without regulation, and without any sort of standard.
I don't think it is too late. People wear seat belts now, right? It has become standard. I don't think it is too late, but we have to move fast.
-Tens of thousands of people were killed in cars, and it wasn't too late when we got seat belts. You just do what you have to do at the time that you can. But I do think we have to jump ahead of it. What's dangerous about all of this is how deeply embedded it can become; that's part of what we don't know. It is so easy to slip in and embed in a variety of contexts, the employment context and the law enforcement context, and then it is hard to get it out. That's why I think there's a sense of urgency, that we have to move very, very quickly.
-I agree.
The ACLU has been working to blow the whistle and bring a lot of national attention to this, and there's been such a great response; people have been moving with haste to address the issue. I think about Kate and Meredith talking about what happened in the last year. I work at the ACLU; I never think it is too late.
There's a long arc of history, and I think that we as people can really work together to influence that history and to make sure that civil rights and civil liberties are protected. You know, historically there has always been a gap between an advance in technology and the protections that follow; it takes some time for those protections to get in place. We should not just leave it to an "immune response." We have to push that response and make sure it happens, both in companies and by lawmakers.
-It makes me think about Tasers. Taser companies are doing a lot of facial recognition technology in law enforcement now. Tasers were supposed to stop the use of lethal force. They were greeted as something that was going to be great: now police officers didn't have to kill you. But they didn't get at the discrimination, or the use of excessive force, or the brutality. You might not die; that was the theory. Even that is wrong. The Supreme Court just denied review in a case in which our client was tased to death.
As terrible as that is, let's leave it to the side. It is about the dignity issues and all of the issues that surround law enforcement. Just switching the technology doesn't get at the problem. I think in that sense, it is not too late. We keep coming back to this again and again: we're nibbling around the core issue. We have deep problems.
Ultimately, accuracy alone doesn't get us there. But seeing the level of discourse this year versus last year, it is a pretty big difference. There's now a workshop on these issues at the main computer vision technical conference. People are starting to know. I think there's a glimmer of hope.
Action and involvement by everyone in this audience matter. Our ability to change the narrative and the trajectory depends on how many people speak up. I think we have done some work tonight to reinforce that. I want to thank the panelists for joining us, and I want to thank you all for coming. Now it is time for the first spotlight of the evening. Thank you very much.
Now we have our first spotlight. This is where we invite people whose work we admire and punish them by asking them three questions in seven minutes. It is a high-speed, high-stakes game. I couldn't be more excited to be sitting here with Astra Taylor. She's an author and has a new film, "What Is Democracy?", that's opening in January. Just around the corner.
And she also coined, I think this is a really useful term, "fauxtomation." What is it?
-We have to be clear: it is "fauxtomation."
I've been writing about these issues and thinking about labor and debt, and I wanted to come up with a term that would name this process, because what passes for automation isn't really automation. I'll give a definition: fauxtomation is the illusion, deliberately maintained, that machines are smarter than they are. You gave some great examples in the introduction. We can think about all of the digital janitors cleaning the Internet and making it a space we want to be in.
They are egregiously underpaid. We can think of Amazon Mechanical Turk, with its slogan "artificial artificial intelligence." The same issues are at play: it exposes the fact that the "digital assistant" is a terribly underpaid human being doing a mind-numbing task.
I was standing in line ordering lunch, talking to the human being behind the counter, and the man next to me was clutching his phone: how did the app know my order was done 20 minutes early? The girl looked at him and said, I sent you a message. He was so willing to believe it was a robot, that some all-seeing artificial intelligence system was overseeing his organic rice bowl, that he couldn't see the human labor in front of his own eyes. We do that all the time. We're not curious about the process. We're so ready to devalue and underestimate the human contribution. I think that's really dangerous.
-Where does this come from? Who has the most to gain from the myth of the perfect automated system?
-Automation is a reality; it is happening. But it is also an ideology. The point is to separate those two things, to be very up front about when the ideological component comes into play. Somewhere right now an employer is saying: someone, or something, is willing to do your job for free, right? It is the idea that, sort of inevitably, humans will be obsolete.
A former McDonald's CEO helped take out an ad in the Wall Street Journal: if you people ask for $15, robots are going to replace you. Later he wrote another piece: it has happened. He cried some crocodile tears. But when you watch the video of how the troublesome workers have been done away with, it is not anything like automation. It was just customers doing the work, putting their orders into iPads. That's not automation. It's Newark Airport, any time you want to buy something.
-The less catchy version: capitalists are making investments in robots to weaken workers and replace them.
-It is less catchy.
-If somebody can make that catchy, we can co-brand the revolution.
-Exactly. One of the things I love is that you walk us through the history of automation, but really the feminist history of it.
-Yes. We're led astray by the idea that in the robot future everything is done for us. Instead of looking at it that way, the people who can give us insight are the socialist feminists, because women have a long history of being told that domestic technologies will save them. There's a book about how labor-saving devices actually ramped up the cult of domestic cleanliness: the tools created more and more work. But the socialist feminists offer a deeper insight than that. They were wrestling with the question: what is work? Capitalism grows and sustains itself by not paying for as much of it as possible.
Capitalists don't want to pay the full value of work. One layer is the assembly line, where you are involved in monetary exchanges. Underlying that is all of the work that's done to reproduce daily life and make the workers who can work those jobs for wages. Women have always been told their work doesn't matter and doesn't deserve a wage, because it's never been compensated.
There's something there for us. We're told there's going to be a future where there's no work for humans to do. That insight was made palpable to me when I was at a lecture by Silvia, the amazing scholar who also features very prominently in my film "What Is Democracy?". A grad student, we were talking about reproductive labor and the value of it, asked: aren't we heading to a future where there will be no jobs? You know, the reserve army of labor, the image that we would all be sitting there with nothing to do, on the margins.
Silvia's response was bracing: don't let them convince you that you are disposable. Right? Don't believe that message. And, you know, I think there's a really valuable point there. If the automated day of judgment were really at hand, they wouldn't have to invent all of these apps to fake it.
-Not only did you go from McDonald's to Silvia, but you did it in seven minutes. Can we applaud Astra? That was amazing. You did it. Next up we have our panel on inequality, which is chaired by Vincent Southerland of NYU. Please welcome him and our panelists.
(Applause).
Good evening. I'm Vincent Southerland. I'm the executive director of the Center on Race, Inequality, and the Law at NYU, and I also serve as the criminal justice lead for AI Now. Because of both of those roles, I'm thrilled to be part of this conversation with our panelists, both of whom are at the forefront of work being done on AI in a time of rising austerity and turmoil. Help me in welcoming Philip Alston and Virginia Eubanks.
Philip, let me start with you. Your report on extreme poverty in the United States was the first in such a venue to really include AI in the conversation about inequality. Why was it important for you to do that in this report?
-My focus is on the human rights dimension of these issues. In the AI area we're accustomed to talking about inequality and a range of other issues, but I don't think the human rights dimension comes in very often. I tend to see things in macro and micro terms. If you are looking at inequality, that's a macro focus: what sort of major government policies, or other policies, can we adjust in order to improve the overall situation?
But if you take a human rights focus, then you are really going down to the grassroots. You are looking at the rights of the individual who is being discriminated against, usually for a whole range of different reasons, or who has simply been neglected. I think one of the problems is that there's neglect on both sides: the AI people are not focused on human rights, and there's a great tendency to talk instead about ethics, which is undefined and unaccountable. And on the human rights side, there's a tendency to say this stuff is all outside our expertise and to not really want to engage with it.
So in my report on the United States, done at the end of last year, I made a big effort to try to link the issues of inequality, human rights, and the uses of AI.
-Great. I have a question.
-I want to respond to Philip first. This report was so important. It was so important for movement organizing, for the poor people's movement, to have this kind of vision of the United States.
The 43 million poor and working folks who are really struggling to meet their needs day to day are finding that these new tools often create more barriers for them rather than lowering those barriers. One of the things that I think is so important about what Philip just said is that, particularly in my work on public services, these tools often get integrated under the wire, in a way. We see them as just administrative changes and not as the consequential political decisions they actually are. We absolutely have to reject the narrative that these things are just creating efficiencies and optimizing systems.
We are making a profound political decision when we say we have to triage and these tools will help us make the decision. That's already a political choice, one that buys into the idea that there's not enough for everyone. We live in a world of abundance; there's plenty for everyone. I think that's important to point out. I wanted to respond. Thank you.
Applause is good. What does that look like on the ground? What are the types of things that you've seen over the last year? And what do you see going forward that scares you when you think about automation and technology?
-Philip and I had a conversation about this earlier today. We talked about how important it is to listen to the people who are facing the consequences directly. We tend to talk about these systems as if the harm might come in the future, in an abstract way. But the reality is these systems have been integrated into public assistance since the early 1970s. They are having effects on people right now, in really profound, material ways. So for folks who aren't familiar with my book "Automating Inequality": what I did, over the last eight years but more intensely the last three, is look at the way new automated decision systems are being integrated across public service programs in the United States. I look at three cases in the book.
One is an attempt to automate and privatize all of the welfare eligibility processes in Indiana. Another is the housing system in Los Angeles. And the third is a statistical model that's supposed to be able to predict which children might be victims of abuse or neglect sometime in the future, in Allegheny County, which is where Pittsburgh is, in Pennsylvania.
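A generic sketch, not the Allegheny tool itself, of how a predictive risk model of this kind works: fit a model on historical administrative records, then turn a new referral's predicted probability into a risk score. The feature names are hypothetical; note that they are proxies for poverty and prior contact with public systems, which is exactly where the social programming enters.

from sklearn.linear_model import LogisticRegression

# Hypothetical historical records:
# [prior referrals, years on public benefits, prior jail contact]
X_train = [[0, 0, 0], [1, 2, 0], [3, 5, 1], [0, 1, 0], [4, 6, 1], [2, 3, 0]]
# Past "substantiated" outcomes; these labels inherit any bias in
# past case-working decisions.
y_train = [0, 0, 1, 0, 1, 1]

model = LogisticRegression().fit(X_train, y_train)

# Score a new referral and rebin the probability into a 1-20 risk score,
# the kind of number a screener would see.
probability = model.predict_proba([[2, 4, 1]])[0][1]
risk_score = min(20, max(1, round(probability * 20)))
print(f"risk score: {risk_score}/20")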
What I saw is that despite some incredible potential to integrate services, lower barriers, and provide easier access to social services in this country, these tools are built on what I think of as the deep social programming of the United States: deep economic division and a deep, long history of murderous racial discrimination. So rather than creating tools that ease the burden, we're creating tools that divert people from the resources they need.
Resources they are legally entitled to, and that they need to survive and protect their families. These tools often criminalize people as part of the process of deciding who is deserving enough to get access to basic human rights. And in the end, all of the data is used to create crystal balls, to predict who is going to be risky in the future in order to deny them resources.
In the last year, one of the things that stands out most to me as really important to pay attention to: if you look at the 2019 federal budget, the Trump Administration budget, it says they are going to save $188 billion by increasing data collection and analysis in middle-class programs. I look at welfare and child services, which touch poor and working people. Now they are talking about disability, Social Security, and unemployment.
So it very much seems like poor and working people have been the canaries in the coal mine, the experimental population for these tools, and now they are looking to implement them on everyone.
Yeah. What about you, Philip? What types of things have you seen that give you cause for concern?
I echo what Virginia said. I think it is important to recognize that to the extent we take a social welfare system, a system for social protection, and think we can simply put AI-type techniques on top of it to make it more efficient and effective, we're doubling down on injustice in a great many ways. Because the system is based on racial discrimination, based on gender discrimination, based on discrimination against nationalities, and so on: a whole range of different problems which are not being addressed. And the people promoting the efficiency motif most are those who want to slash the programs.
So I think there's an affirmative responsibility on AI people not to say: hey, we're just tech people, what can we do for you, where do we put our product? You've got to start thinking proactively about how to build protections in and point out to those who are hiring you what the existing problems are. And the second point, of course, is very straightforward, as Virginia said: what we're going to see is that this is going to come to you. What happens in housing can affect your apartment. After the midterm elections, consider that what the administration has been doing in the last two years is building up a massive deficit.
We know there's only one way available to address that deficit: cutting middle-class entitlements. And all of the pioneering experiments being done on the poor right now are soon going to be ratcheted up. That's the only place the savings can come from.
You are going to pay for the tax cuts. We're going to see it in all of the services available to us on a regular basis. I see a year that has been more or less a lead-up: tax cut on tax cut. And that can only go in one direction. You get absolutely miserable public services and social services, and a much greater burden placed on women in particular, because they are the ones who always have to pick up the slack when the State pulls back. And there's just going to be a huge push to say: it is unavoidable, we're sorry, it's got to happen.
-Right.
You mentioned earlier taking on a human rights framework. What would that look like in AI?
-Talking to an American audience, one has to be sensitive about what we, we in the rest of the world, call social rights.
-I've been accused of these.
-If we take that as a starting point, you have a whole range of non-discrimination issues. You have equal protection and due process. All of these rights are heavily implicated. It is pertinent to go back to the point that I made earlier: ethics are completely open-ended. You can create your own ethics. But human rights you can't. They are in the Constitution and in the Bill of Rights. There are certain limits, and they have been interpreted by courts. Until we bring those into the AI discussion, there's no hard anchor.
One of the things I heard over and over again from the administrators and designers of these tools is: these systems are necessary systems of triage. We just don't have enough resources. We have to make these really difficult decisions, and these tools help us make them more fairly.
Again, this is one of those political decisions I was talking about at the top of our conversation. Triage is actually really bad language to use to describe what we're doing, because triage assumes that more resources are coming. If there aren't more resources coming, we're not triaging. We're not diagnosing; we're rationing. We're automating the rationing of resources and of people's access to our shared wealth.
I think it is incredibly important to think beyond these values of efficiency and optimization, to some of the principles that are enshrined in economic human rights, in the Universal Declaration of Human Rights. As an American, I can say I don't care whether we signed on the dotted line.
As a political community, we're allowed to say there's a line nobody goes below, for any reason. Nobody starves to death. Nobody sleeps in the street or in a tent. No family is broken up because they can't afford a child's medication. That's pretty baseline. We need to get there, and get the tools right, around what we are doing to each other.
I'm wondering how this fits into a larger historical trend. Can you speak to that at all? Is this the first time it's ever happened where technology has squeezed people?
-I use the metaphor of the digital poorhouse.
The tools that I'm seeing in public services are more evolution than revolution, and the deep social programming of the tools goes way, way back in history. The reason that moment is really important: there was a huge economic depression, the 1819 depression.
Economic elites got really freaked out. Poor people were demanding things like food and houses. They did what economic elites do: they commissioned a bunch of studies. The question the studies were supposed to answer was, what's the problem -- poverty, a lack of resources, or pauperism? The latter was the problem.
The dependence on public benefits, not poverty itself. They built poorhouses on that premise. The trade-off was: if you want to request public services, you have to move into the poorhouse to receive them. You have to give up your right to vote, your right to hold office, your right to marry, and often your children.
The death rate was something like 30% annually. A third of the people who went in every year died. This was no joke.
This was a really horrifying institution. And I think this is the moment in our history when we decided that the first and most important job of social service programs is to do this kind of moral diagnosis -- to decide whether or not you are deserving enough of help -- rather than to move everyone forward together, which is happening in many places around the world, but not here. That feels to me like the programming that underlies the new tools.
If we don't address it by instituting some sort of equity gears, we are going to amplify it and speed it up.
>> I'm wondering, where does race fit into all of this? I know we touched on it a little bit. I'm curious: how does race exacerbate the problems? Do you want to start?
All right.  Thanks.
I think there's a close relationship between race and attitudes to welfare. There have been a lot of studies done. As soon as you talk about poverty, people have a vision of a black family: they are the ones who are poor and trying to live off of us whites, and we're not going to let that happen.
I think the narratives are carefully interwoven. The drive to stigmatize welfare -- what I prefer to call social protection -- is motivated by racial stereotypes which are fairly constant.
One of the issues that Virginia and I talked about briefly earlier is that when I started doing the research, I started to look for indicators of class in the U.S. And by that I mean -- I boiled it down very simply to looking at whether income statistics have been matched with different groups and so on. Of course there's very little of that. Suddenly race comes in there. That's the sole factor. To the extent that other people are poor, that's not the main focus.
That's good in some ways, because the heritage of racial inequality is so deep and so powerful. It is also bad, because it is again stereotyping: this is something for the black community and not for the whole community. That's part of the whole them-and-us mentality which, of course, is central to the whole welfare area.
Those people, mainly black, are 
not contributing.  They are not 
taxpayers.  I, of course being 
white, am a taxpayer.  I'm not 
going to put up with supporting 
these people.
We've got to move beyond all of 
the narratives.
Yeah. It is interesting, because both of you, in your work, make the distinction between deserving and undeserving beneficiaries. I think that tracks along racial lines. I'm curious about your responses.
>> One of the most important places where race comes in -- and it is so important to keep our eye on this. Phillip was talking earlier about how often these tools get rationalized: there's not enough, we need to be more efficient. But the other reason that administrators and designers give is combating bias, particularly racial bias.
It is crucial to acknowledge that the public service system has had a deep and lasting problem with racial inequality. We blocked all people of color from receiving benefits from 1935 to 1970, when they fought back and won. One of the things that folks will tell you about why we should move to the tools is that it makes it possible to identify discriminatory patterns of decision making and pull them out of the system. It is a bias-fighting machine. The problem with that narrative is the assumption that there's no bias in the computer -- and this is an audience that knows that we build our biases into machines just as we build them into our children.
There's also a problem with the way bias gets defined in the automated decision systems that I've been working on. Bias there was understood as individual racial choice, not as a systemic and structural factor.
Let me give you a concrete example. In Allegheny County, home of the screening tool, they, like every other place in the United States, have a serious problem with racial disproportionality. Black and biracial children make up about 18% of the youth population but a much larger share of the children in foster care.
One of the uses of the tool is to keep an eye on the intake screeners who receive reports of abuse and make sure they are not making discriminatory decisions. The problem is that the county's own research shows that almost all of this disproportion, all of this discrimination, is entering at a totally different point in the process. It is not entering at the point where the caseworkers are screening calls; it enters at the point where the community is calling in about families.
So the community reports black and biracial families three and a half times more often than white families -- 350%. Once a case gets to intake, there's a little bit of additional discrimination that's added: screeners screen in 69% of cases around black and biracial families.
But the reality is that the great majority of the disparity is coming in from community referral. That's not necessarily a data-amenable problem. That's a cultural problem: a problem around what a good family looks like, and in the United States that family looks rich and white. And one of the problems with the tool is that if you are removing front-line discretion, you are removing the workers' ability to correct for the massive misrepresentation that comes in.
We're using the idea of 
eliminating 
individual and irrational bias 
to allow 
this vast, structural bias to 
sneak in the back door of the 
system.
I think that's really, really 
dangerous.
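To see why the referral stage dominates, here is a toy calculation in Python. Only the 3.5x community-referral ratio and the 69% screen-in rate for black and biracial families come from the discussion above; the baseline counts and the white screen-in rate are invented purely to make the arithmetic concrete.

# Toy model of where racial disparity enters a two-stage screening
# pipeline: community referral first, then intake screening.
# Only the 3.5x referral ratio and the 69% screen-in rate are taken
# from the discussion above; the other numbers are hypothetical.

referrals_white = 100.0               # hypothetical baseline count
referrals_black = 100.0 * 3.5         # community refers 3.5x as often

screen_in_black = 0.69                # rate cited above
screen_in_white = 0.65                # hypothetical comparison rate

screened_black = referrals_black * screen_in_black   # 241.5 cases
screened_white = referrals_white * screen_in_white   # 65.0 cases

overall_disparity = screened_black / screened_white  # ~3.7x
print(round(overall_disparity, 2))                   # 3.72

# Of that ~3.7x overall disparity, 3.5x is already present before
# the screening tool ever runs: the algorithm sees a skewed input.

Under these assumptions, eliminating screener bias entirely would still leave a 3.5x disparity, which is exactly the structural point being made.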
-That brings me to another question. Is there a way to use these automated tools for good? Like with homelessness in L.A. County -- the "match.com of homeless services." That's not how you describe it; that's how they describe it. Is there a way to use the systems for good, so to speak? If so, what do we have to do? Or, a better way of asking the question: what do solutions look like? Phillip, do you have any thoughts?
-My problem is, as someone said to me in L.A., the solution to my problem doesn't lie in tech; it lies in a house. And it is true. In other words, the basic political decision to provide more money for housing is absent. We're not prepared to pay for the losers -- the down and out, the ones who refuse to work, the dirty people -- to get any sort of housing. We aren't that generous. That's it. You have to start with that, and that's where I think we are.
No amount of good intentions -- such as the coordinated entry system: we're really going to do this scientifically, we're going to bring in every possible factor so the decision maker knows how to distinguish -- gets around that. You have to address that underlying political problem and not think that tech can solve it.
So, a similar quote from Gary Blasi -- I think this is one of the best lines in the book.
You know, I think I often find 
myself in rooms where it feels 
like what people want is a  
five-point plan for building 
better technology.  I think 
there's a lot of room to do that
work.  I think that's important 
work.
I am also really comfortable spoiling that notion by saying that we have some really deep work to do. I think we really have to change the story around poverty in the United States. The narrative that we tell is that poverty is an aberration, something that happens to just a small number of people. It is not. 51% of us will be below the poverty line. Two-thirds of us will need welfare.
Creating space for people to see themselves within the identity of poor is an important part of the work. That's incredibly difficult work. You also have to address race, gender, and migration. You have to do all of those things at the same time.
It is super hard work.
It is  change-making, critical, 
unsettling work.
I think we need to move more toward a universal system rather than means-tested ones.
One of the gut-check questions I tend to ask engineers and designers is: does the tool increase the self-determination and dignity of its targets? And if it were aimed at anyone but poor and working people, would it be tolerated at all? If the answer to that is no, then it is unacceptable from the point of view of democracy.
Right. Now we are going to transition to questions from the Twittersphere. Maybe folks want to hear from you, Phillip, and you, Virginia, as well.
From my vantage point, there's an extraordinary similarity. I did a country visit here at the end of last year. At the end of this month I'm doing a similar mission to the United Kingdom. I'm currently totally immersed in UK public policy and the role that AI is playing in that. I also happen to be Australian.
I see exactly what's happening 
in Australia.  In those three 
countries which are similar in 
many ways but also very 
different in others, we see 
exactly the same sort of trends.
We see the same phenomenon in terms of treating welfare recipients -- I shouldn't call them that; people, like most of us, who at some stage need forms of social protection that only government can provide -- I see them being demonized and stigmatized.
I see the policy which keeps  
saying 
the solution is to get out and 
work.  It is employment.  It is 
not welfare.  It is not 
government assistance.
What we're seeing is that even in a full-employment economy -- and most of those three are -- the sorts of jobs being created are not just very precarious, but very often by design do not offer enough income for those people to survive. There are tens of thousands of American military people who are on food stamps. There are a million and a half retired military people on food stamps. I met with workers who work full time but need food stamps.
So the narrative that is trying to demonize these people is really problematic. I just want to say one other thing, which is: when you asked what is the role of AI, Virginia and I both answered that you have to go back to basics. You have to have a serious commitment to welfare before AI can do much. That's not the answer for this audience; AI is going to be there.
You can either exacerbate the 
trends that are going on, or you
can call attention to them.
You can build in ways of highlighting what the difficulties are. That's what needs to be done as much
done as much 
as the more creative and 
innovative ways 
that you are currently working 
on.
One of the ways that I tend to 
try to describe the solution is,
you know, we 
have a tendency to think that 
designing 
in neutral is designing fairly.
But, in fact, designing in 
neutral 
gives us no gears to deal with 
the actual hills and valleys and
twists and 
turns of the landscapes that we 
live in.
It is like building a car with 
no gears and sitting at the top 
of the hill in San Francisco and
being surprised when it crashes 
at the bottom of the hill.
For me it is about building those equity gears into the system from the beginning, on purpose, bit by bit and byte by byte.
I think a big part of that is really speaking directly to the folks who are going to be most impacted by the systems. I think their voices are too rarely in rooms like this -- the folks that see themselves as targets of the systems. I think they have the best information about them, and I think they are also the most likely to be accountable for good, long-lasting solutions.
The idea that people have solutions for their own problems. Just one quick thing to wrap up: it seems like automation undermines the empathy that we need to drive home these social problems. Would you agree with that?
One of the things I talk about in the book is that these systems act as empathy overrides. They are the release valve allowing us to outsource our most difficult decisions as a political community to technology. Again we see that as an administrative solution, not a political choice. I think we need to sit with the fact that we're making these choices based on the assumption that there's natural austerity, that there's not enough for everyone and there's nothing we can do. We have to challenge that, recognize that we're making political choices, and move on from there.
Any last words?
I think Virginia's book is superb in terms of the stories that she tells and brings from the ground. That brings in a dimension which is otherwise neglected. You can privatize a lot of social provision, but you are turning it over to companies that want a set of boxes to tick. It is not: tell me, Virginia, how is your husband doing? What's the problem with the children? Is that getting any better? How can I factor that in? That's what social protection and welfare are all about. They can't be done by automation.
Please join me in thanking our two wonderful panelists.
-(Applause).
Our next speaker is Kevin de Liban. I got to know your work through your litigation around algorithms. I was hearing about the amazing cases that you've been working on.
I thought maybe we would start, with our time on the clock, by asking about the extraordinary case you've been working on around Medicaid in Arkansas.
-Legal aid attorneys are in the trenches helping those who can't afford lawyers with their day-to-day needs.
It could be things like health and Medicaid. They come to us when there's no other option. What we were seeing in early 2016 was that we started getting an inordinate number of calls from people complaining of the same issue.
Medicaid will pay for a caregiver for someone who has physical disabilities -- help with eating and turning and getting out of bed. People said: I've been getting eight hours a day of care for 15 years, and somebody just came and said the computer says I can only get five hours a day of care.
The state of Arkansas had instituted an algorithm to decide how much in-home care people were going to get. The best case is 5.5 hours of care. For someone who has cerebral palsy, that means you are lying in your own waste. You are getting pressure sores, because nobody is there to turn you. It didn't make a lot of sense.
Staying at home is not only better for people's dignity, but it is better for the bottom line, because it costs less than nursing home care.
-Tell me how you started to investigate this. How do you look into it?
-All we knew was that clients were saying the nurses who came out said the computer did it.
We finally got the algorithm.
I have no background in computer
coding.
It was just a lot of cozy time 
with the algorithm in the 
evenings.  Very fun.
-Did you teach yourself to code?
-No, but I could read it. It's a bunch of if-then statements. I got to the point where, if somebody gave me an assessment, I could figure out where they would fall in the algorithm.
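To make that concrete, here is a minimal sketch in Python of the kind of if-then structure an advocate can trace by hand. Everything in it is hypothetical: the field names, scores, and cutoffs are invented for illustration and are not the actual Arkansas assessment instrument; only the 5.5-hour ceiling echoes the cap described above.

# Hypothetical sketch of a rule-based care-hours tool: nested
# if-then branches mapping assessment scores to allotted hours.
# Field names and cutoffs are invented; this is not the real
# Arkansas instrument.

def allotted_care_hours(assessment: dict) -> float:
    """Trace the branches the way an advocate might by hand."""
    mobility = assessment["mobility_score"]  # 0 (independent) to 6 (bedbound)
    eating = assessment["eating_score"]      # 0 to 6
    if mobility >= 5 and eating >= 5:
        return 5.5   # highest tier the tool allows
    if mobility >= 3 or eating >= 3:
        return 4.0
    return 2.0

# Given a client's assessment, you can predict their tier:
client = {"mobility_score": 6, "eating_score": 6}
print(allotted_care_hours(client))   # 5.5 -- the cap, regardless of need

The point is that such a tool is deterministic: the same inputs always land in the same branch, which is what made it possible to predict where any given assessment would fall.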
What does accountability look like for these systems?
-It is not just lawyers in suits carrying big sledgehammers -- though that's a big part of it. We knew that no judge is going to tell the state that you have to provide eight hours a day of care, which is barely enough. No judge is going to say you have to provide eight hours or ten or whatever it might be. They might say you can't cut people this way, but they are not going to build policy. We knew the limits from the start. That's an important thing.
We used the litigation as a rallying point for a massive public education effort that engaged the people most affected. We put out educational information. We did all sorts of presentations. We produced videos of our clients' lives, with their consent and approval.
All of this information, actively pushed through social media and traditional media and so forth, ended up empowering our clients to take it and run with it. They were calling legislators, doing change.org petitions, and doing some mutual aid sharing.
Once you have the people most affected, complemented by litigation and complemented by policy analysis and everything else, you've got some sort of structure to make sure that substantive justice prevails -- that the people, my clients, some of whom are watching, get the care they need. Not that we just default to some sort of procedural-fairness posture.
Excellent. How do you think lawyers and researchers can work together?
-This is key. I can figure out how the algorithm works and how to open it up, but I can't validate it. And that's where the researchers come in. Is it validated? Is the software correct?
Are all of the projections and underlying assumptions on point? That's the information I was lacking, and it limited my legal challenge to the more procedural bases that I thought I could win -- because I didn't know how to prove, or have the expertise to prove, right away that the algorithm is crap, that it doesn't do what it was supposed to do. I could track the ways it was arbitrary.
-You are burying the lede here. How did the case resolve?
-Okay. It is a win.
All of us do-gooders have very limited ability to appreciate wins. We know the next terrible thing is right behind the stage curtain right over there. Not Meredith. But --
(Laughter).
We invalidated the algorithm. Then the state wanted to bring it back: we've been using this illegal thing for so long that we have no other way to do it other than using the illegal thing. They said, we're not going to provide people services if you don't let us do this. The court let them reinstate it for two months. But that is not where it stands. We didn't want it for the people of Arkansas, and it is gone. That algorithm is dead.
What is the state going to do in January? Another algorithm. We're hoping that at this point not only are we smarter, but the state has learned some lessons. Now we're in a position where we've got more resources, we've got knowledge, and we have active community members to whom we can go and transmit the message: do right by people. You are not going to wear us out, and you are not going to outwork us. We're not going away. We're smarter than you. And we're coming for you.
(Laughter).
(Applause).
It is such a pleasure to hear 
about this work.  It's been 
extraordinary to meet you and so
many of the public interest 
lawyers.  Can you please give a 
big round of applause to Kevin?
(Applause).
I hate to say it, but we're facing the final panel for tonight. We could keep going for many hours, but we're going to close out with one last panel. It is about the relationship between research, activism, and accountability. Would you please welcome Meredith Whittaker, my co-founder in all things, who will be chairing it.
>> Hello. Good evening. It is a delight to be here on the last panel. We're looking at research, organizing, and accountability. We have a lot of work to do. You can spend some cozy time with an algorithm, but that alone is not going to give you the social implications. That speaks to the need to join the people on the ground living the impact of these systems with an understanding of how the systems are designed and, you know, the intentions behind them.
So I am just delighted to be 
here with, you know, two people 
who I think 
really exemplify this kind of 
socially engaged work.
Lucy Suchman is a pioneering researcher, one of the architects of human-computer interaction. She spent 20 years at Xerox, where she defined that field and focused on the technical aspects of where humans meet machines. Her recent work looks at autonomous weapons and the danger of an AI-enabled military, and she is engaged with organizing around those topics.
I'm really  delighted to be 
joined by our own Rashida 
Richardson.
She joins us from the ACLU and is our policy director. She looks at the lived experiences of these systems and at figuring out ways to empower activists, organizers, and civil society leaders to push back against some of the problems that the systems cause. So it is great to be with you both. Before we get started with questions, I just want to remind you: #AINow2018. We'll be taking one of your questions at the end of the panel.
With  that, Lucy, I would love 
to invite you to get us started.
Your work looked at the context of human interactions with these systems -- where does the best-laid code go wrong when it meets the fresh air? That really shaped the field's understanding of how we live among these systems. What lessons do you think we can learn from your approaches as we strive to create better accountability on the ground?
First of all, thanks to Kate and
Meredith and AI Now and also to 
all of you.  This has been a 
marathon session. It is
fabulous that you are still with
us.
So, I mean -- Meredith, you characterized me as an architect of HCI. I was a very accidental researcher in the area of human-computer interaction. I went to Xerox as a PhD student in anthropology; it would take way too long to tell you how I ended up there. It was at one of the moments of AI's ascendance, and of the idea of AI and interactive computing.
I was really intrigued by the ideas of intelligence and interactivity as they were being reworked through a kind of computational imaginary there. I guess a lot of my work since then has been about trying to articulate both the incredibly intimate relations that we have with our machines and also the differences that matter between humans and machines.
And I've been, in that context, tracking developments in AI and robotics. And I guess for me there's a really important distinction between projects in humanoid robotics, which are attempting to create machines in the image of the individual autonomous human subject, and developments more recently that are really tied to -- you know, Moore's Law is always cited as the thing that's going to take us inevitably to the singularity. It is really about speed and storage capacity.
I think it is the speed of 
computation, the storage 
capacity, and the extent of 
networking which has made the 
real difference.  There we're 
talking about data analytics and
a lot of the things we've been 
talking about tonight.
When we come to humanoid robotics -- the AI and robotics that I've been tracking quite carefully -- I would argue that practically no progress has been made.
And I think the reason for that is the difficulty of actually incorporating into technologies what we think of as, basically, knowing what's going on around you. The idea of context -- which isn't, you know, a container that we're in and have to recognize. It is something that we're actively, in an ongoing and dynamic way, co-creating together.
And that sense of interactivity continues to escape projects in AI and robotics. Maybe we can come back to talk about weapon systems; I think that's extremely consequential.
I've started to engage with what the military refers to as situational awareness. It is fundamental -- in particular, all military operations presuppose the identification of a legitimate target, of the enemy.
And that's where these problems that we've been talking about come in; that's the thread that connects the systems we've been engaging with. Who is being identified as an imminent threat, and on what grounds?
-Yeah. It gets to the heart of the question of who actually has situational awareness. Rashida, I would love to turn to you there. Obviously this is going to require a lot of different perspectives, and you've worn a lot of different hats. You came to AI Now from the ACLU; you've been engaged in social justice work and now in AI research around policy issues that relate to the deployment of these systems.
How do you see the joining of these perspectives -- the movement and the research being done -- given you've occupied so many of these positions?
-A lot of what we're talking about is the application of technology to complex social, political, and economic issues.
I think there's a temptation to treat researchers and advocates as separate rather than collaborative. Both have something to bring to the table; neither should be treated as a blank slate that needs to be taught something.
There are a few things to keep in mind with both groups, and I'll end with what they can do. The first is that I think it is important for both -- I'll say we; I guess I wear both hats -- for us to be both introspective and honest about what we know and what we don't know. Because I think there are a lot of, sort of, blind spots and inherent biases that we have from our individual positions of power and privilege that lead to a lot of the unknowns.
No one wants to be honest or open to criticism about that. I also think that all of the work, whether it is research or advocacy, needs to be grounded in reality. Both Kevin's talk and the panel before it showed that the harms and consequences of the issues we're talking about are very real. They impact real people. There's also a temptation to think about the issues only theoretically. That's extremely harmful right now.
It sort of breaks the group apart. The work needs to be collaborative in order to inform advocacy that is collaborative and diverse.
That's a great answer. We're seeing a lot of movement pushing back -- we've met a lot of people who are pushing back against instances of harm. We're also seeing tech workers organizing. That's something I've been involved in and contributing my research to. Lucy, I know you've been engaged in it as well. I would love to, sort of, you know, go back to history a little bit; right?
Because while this recent wave of concerned, ethical employees at big tech companies has been fairly surprising and heartening to many of us, it is not the first. I would love for you to tell us a little bit about Computer Professionals for Social Responsibility, which can be seen as a predecessor to what we're seeing now. What lessons does that movement have for organizing around social issues today?
-Sure. I went to Xerox in 1978. And in 1981 an electrical engineer sent out a message to an e-mail list called anti-war up-arrow -- we used the up arrow to differentiate distribution lists from individual persons. He sent out the message to anti-war up-arrow. He had worked on the SAGE system, the Semi-Automatic Ground Environment, which was developed and operated from the 1950s into the 1980s.
This was the NORAD early warning system against incoming Soviet missiles during the Cold War. He had worked on SAGE, and he was looking at what was happening in the development of launch-on-warning systems within nuclear weapons systems. He was tremendously concerned about launch on warning. It was very destabilizing in terms of the sort of so-called logics of mutually assured destruction; it was really crucial in that respect.
But also, based on what we knew from having worked on the SAGE system, we saw how dangerous and inherently unreliable a hair-trigger warning system was. There were arguments about the inherent unreliability of such systems which carry forward to our situation today.
So, a few years later, we founded Computer Professionals for Social Responsibility with people at PARC and Stanford.
We basically tried to make the technical arguments about the inherent dangers and unreliability of launch on warning. Interestingly, in 1983 there was a kind of companion: Ronald Reagan's strategic defense program, Star Wars. And now we have JEDI; we have returned to the sequel. Alongside it came the Strategic Computing Initiative, basically an AI initiative with something for each of the branches of the armed services. And with two others I wrote a piece for the Bulletin of the Atomic Scientists where we critiqued the Strategic Computing Initiative. That was published in 1983. It could pretty well be recycled today and it would still apply.
Sadly. I would love to stay on this for a moment with you, Lucy, just from a practical perspective. I'm sure there are people in the audience who work for companies that are confronting these issues. What works? What would you advise people to do who are oftentimes feeling real tensions between themselves and their employer?
-Right.
Our actions were not directed at Xerox itself. We were working in a lab with strong ties to universities -- a place where the academy and industry, in those networks, were very closely joined. So we weren't threatening our employer directly. That's a really important point. It was, I think, very much through the alliance with those academic networks that the effort really grew.
And, you know, it is really heartening to see what's happening with the tech workers, and with Google, and your incredibly important work in relation to the organizing against Project Maven. I was part of the group that put together a kind of academic, scientist letter in support of that effort. So those coalitions, I think, are great. Of course, you know, we didn't really have the kind of network-based organizing tools you have; we had e-mail lists. That was about it.
So the advent of the web -- you know, the way that things now travel, from a letter that gets posted on the web, to being picked up by the media, to turning into the online op-ed -- has really accelerated and facilitated that as well.
-Yeah.
In the case of Maven, the workers had visibility into what these systems were. Rashida, a lot of your work is grappling with systems that are invisible by design -- we talked about the black boxes on black boxes -- yet these systems are still deeply, powerfully affecting people's lives. I would love your insights on how you do work around those systems and how you marshal the communities who can grasp their many dimensions.
-Transparency and accountability -- I would add that this work is hard. It's been hard in part because there's so much we don't know.
One way is to engage in the research with advocates. The issues vary by jurisdiction. How much we know may vary based on your local laws or procurement processes. One of the issues is trying to understand what's going on. As far as strategy and solutions, I think that needs to be informed by the local context.
And that work -- I'll use New York as an example, not only because it is our hometown, but because it is a good example. In your intro with Kate, you mentioned the New York task force that's looking at government use of automated decision systems. We've been involved in that work from getting the legislation passed through to when the task force was announced. Just so the audience has an understanding: the task force has to write a report by the end of next year with recommendations on a number of issues -- like what should the public know about where and how these systems are being used; if a group or individual is harmed, what should redress look like; and more technical questions, like how can we, or can we, archive the systems, especially given that municipal budgets are not ever-expanding.
In that work we constantly met -- and I think this was a common stream both within government and within civil society -- the sense that this is an important issue. But I think everyone kind of struggled with how you actually address these issues. And I don't want to sort of fall into the trap of punting to the task force until they can figure it out in December of 2019.
So what we ended up doing is trying to do some of that hard research and work over the summer, engaging people from researchers to average New Yorkers -- I just talked to my family or people on the train about these issues -- trying to figure out what the redress standard should be.
If we're not grappling with what could work and what doesn't work, we're not going to get very far a year from now. That process was helpful; it allowed us to engage others. We listed a group of both individual and organizational experts for the task force to use. That was a useful process.
Even though we did list three pages of people, I remember one person pointing out: hey, under child welfare you listed a lot of great lawyers and people who understand family law, but you have no one representing the parents' voices. That was just an omission -- you only know within your network. It was a great opportunity to get more voices at the table and more perspectives, and to understand what is going on here and what's the best way to address it.
-Yeah.
What was the most illuminating 
insight?  Is there something 
that someone told you that you 
wouldn't have figured out if you
didn't engage across the 
coalition?
-I think no one has the right answer. Often, to try to figure out the redress issue, the place people go is legal theories on disparate impact. There are a lot of different theories. U.S. law is regressive; none of the prevailing ones are the ones I would choose. But keeping that in mind and trying to be idealistic, I looked at some of the prevailing standards under housing discrimination laws. Others are in employment law, and the EEOC standards.
I talked to employment and 
housing lawyers.  What don't you
like?
I've never -- we're so used to working within regressive frameworks that it is hard to imagine alternatives. That was the struggle in the process: how do I imagine a solution that is not within the oppressive society that I live in? The answer is, I don't know -- but we tried. I think it is about forcing that conversation and trying to imagine alternatives, and sort of being resistant to the idea that "that's not practical" is a good counterargument.
-Yes. I love that point. It is a great on-ramp to return to the topic of autonomous weapons, the negative example of the excess and danger of this technology. Lucy, you've worked so much on the issue. You've pushed back against autonomous weapons; you've organized against them and researched them. I would love you to connect the issue of rejecting these systems, or holding them accountable, with the broader issues of accountability. What lessons can we learn about fighting the frightening and oppressive assumptions that are wrapped into the development of these systems?
-Yeah. We haven't talked about militarism, but these things are incredibly joined up. To me -- I think it was Phillip Alston who mentioned the us/them mentality -- this is about, as I said, the ways in which technology has been framed as the avenue for security, the possibility of a sort of perfect defensive system. That was the Star Wars vision, the fantasy, and now -- I mentioned JEDI, the joint enterprise system that will join up all of the U.S. military operations around the world, of which there are, as you know, an overwhelming number.
So this is the premise: that these systems are going to give us the ability to discriminate between us and them, the good guys and the bad guys. Who constitutes a threat?
That runs through all of the issues of sorting and classification and discrimination that we've been talking about. And in the case of lethal autonomous weapon systems, the idea is -- this is an issue that's being debated at the U.N. in the context of what's called the Convention on Certain Conventional Weapons.
These are weapons considered to be indiscriminate; land mines would be an example. The critical functions that have been identified by the International Committee of the Red Cross and others who are campaigning around this are target identification and the initiation of attack. The idea is that we should ban the automation of those two critical functions.
We should not create weapons where we think we have delegated to the weapons system sufficient discriminatory capacity that it can identify who constitutes an imminent threat.
To me that's directly tied to the Project Maven work. It is responsive to the vast network of drone surveillance that the U.S. military has created, which is producing quantities of video footage that are completely unusable because of their massive size.
And so the idea is that there will be a kind of triage -- we heard that word before -- a kind of algorithmic automation of the first pass of analysis of the video footage. There will be certain classes and categories of objects -- vehicles, buildings, persons, or configurations of persons -- that will then lead, the claim is, to the handing off of those things to human analysts. And this will make the whole system more efficient.
Those of us who have been following the U.S. drone program -- and again Phillip Alston has played a really crucial role here -- know that the so-called precision of those systems is a complete fallacy.
You know, there's evidence that, for example, in the U.S. drone program in Pakistan from about 2006 to 2014, for about 75% of the people killed by that program we don't know who they were. About another 20% were positively identified as women or children, and therefore civilians.
And there's a 5% remainder -- the people we actually know were killed, where there's an argument that they were a legitimate target. We're talking about a massively inaccurate program, and the proposal now is to automate it. Really bad idea.
And I think the consistent thread --
-Yeah.
-I think the consistent thread is profiling. Profiling of various kinds. The crudest forms of profiling are the basis for these purportedly cutting-edge technological systems. They are informed by the absolutely crudest and in many cases most long-standing forms of discriminatory stereotyping, which have been with us for a very long time.
-Yeah. And they are so glossy. I want to ask Rashida: how do people push back?
-I think there's a tendency to think that people don't understand the issues. I think most people -- especially those who are affected not only by these systems but by their prior forms, government systems with humans making biased decisions -- understand the problem. I think it is about being open to the fact that people could know the solutions to their own problems, and respecting the intelligence, experience, and expertise of everyone, so that they can engage and have a place at the table to come up with solutions.
-Yeah. I just pulled a question from the audience, and I would love your thoughts on it: what does it mean to organize tech workers who already have so much privilege? Are people actually putting their jobs on the line?
-Okay. Great question. Great question. I mean, first of all, part of my response would be that not all tech workers have a lot of privilege. If we include among tech workers the people who Virginia Eubanks was talking about -- and the Tech Workers Coalition incorporates many of those people -- then, treating tech workers really broadly, it's a highly heterogeneous group. But it is a legitimate question. Some tech workers who are speaking out are risking their jobs. You could argue they are relatively well positioned to go and find other jobs. And then, of course, you know, I think we all risk in proportion to the amount of privilege or precarity that we have otherwise in life.
And so, yeah, I think it is a 
legitimate question.
But I also think again that, you
know, 
there are some really  
interesting 
possibilities for the 
mobilization of 
the -- of a coalition of workers
inside big tech companies.
-I think tech workers are in a unique position to have influence, but I'm hesitant to put all of my faith in the tech community as the leaders of some resistance. If we're being honest about how homogeneous the group is: they don't understand -- we all don't understand -- a lot, which was my first response. They don't understand the history or the social, political, and economic context of the ways the tools they are creating will be implemented.
I think some of the advocacy is great. You have people who are in higher positions of power, with greater access to privilege and power, who can move the ball forward and move it in some direction. But I don't want to think that's what's going to change things. I think it needs to come from everyone else who is affected by these systems and from those who bring other perspectives to the table. I don't think any one group is going to produce the revolutionary change that we really need. I think it is also upon tech workers to realize they have certain power and influence, but they are not the ones who should be deciding what is ethical and what is not, what is moral and what is not.
-Yeah.  Hard agree.  I think 
with that we're going to wrap it
up.  Thank you so much.  It was 
lovely to have you on stage  
here.  I'm going to stay up 
here.
Thank you.
I will remain for a moment and 
invite 
Kate
Crawford, co-founder.  Thank  
you.  All that's left from us 
are some closing words.
A big thanks to the speakers.  A
big thanks to NYU.  A big thanks
to everybody in the community 
who has backed the effort, 
offered support, and guidance to
set up an institute here.  I 
want to thank the John D. and Catherine T. MacArthur Foundation for their support. And a particular thanks for the amazing work on the visualization tonight -- that was Varoon.
I want to thank our first cohort of fellows and the production team, particularly Emily and Kate and the Good Sense team and the NYU facilities team.
We are really grateful as well 
to the volunteers and friends 
and everyone who 
helped and contributed so much 
energy 
and enthusiasm and made this 
happen.
Finally, a colossal word of thanks to our director of operations and lead producer in everything, Mariah Peebles. We couldn't do any of this without you.
-I'm also going to throw in a final thanks to Meredith for being amazing and an extraordinary person to do this work with. The final thanks is going to go to you. This year has thrown some of our visions into question, but we also see the enormous potential for action and change. This community, here in the room and on the live stream -- you have a powerful role to play right now. Stay in touch. We're on Twitter. We have old-school mailing lists. Thank you for being part of it. Have a great night.
Thank you.
