Can you hear me? Fantastic!
Thank you very much! Well it's a particular
honour to be asked to present here, not only
because this is one of the finest universities
in the Netherlands but also because The Hague is the
birthplace of Raymond van Barneveld! Any fans
from the Barney Army in? Whey! Nice one! Barney Army!
Right, so – the talk is
called 'Bots Don't Kill People, People Do'
– let me introduce myself, my name’s Oliver
Thorn, I’m English. I am the second of your
all-male panel today. And I’m a professional
actor, but I also lead a double life because
when the British government decided they wanted
to triple the cost of attending university
I decided that was slightly unfair so I started
filming myself in my bedroom telling the internet
what I'd learned in my lectures today. Five
years later that is now Philosophy Tube, a
YouTube channel. It's got just under 200,000
subscribers and that pays all my bills and
that's my job when I'm not acting. And that’s
the capacity in which I've been asked to speak today.
I have to say when I was asked to speak at
a conference on the future of AI in warfare
I was very sad because that meant that somebody
(me or you or the organisers?) found it
easier to imagine a future in which AI is
involved in warfare than to imagine a future without warfare.
So over the course of this short talk I'm
going to - hoping to - challenge and dispel
some of the popular assumptions about artificial
intelligence and also about the ethics of warfare itself.
First I’m going to be explaining two myths
about artificial intelligence and why they're wrong.
Then I’m going to be talking about warfare
and how we can foreground an ethical approach to it.
And then finally I’m going to tie the two strands together and talk about AI's role in obscuring ethical problems.
But before we go any further - not to jump
on Paul [previous speaker] - we're gonna need
some kind of working definition of AI, which
isn't really a very easy thing to come by.
But the military and “defence” industries
are already referring to “autonomous systems."
Now as I’m going to be explaining shortly,
I believe that we need to, as a matter of
urgency, retire the word “autonomous”
when we talk about AI: I think it's very misleading.
But it’s the word we’re stuck with for
now, so I’m just going to keep putting it in scare quotes.
So to understand “Autonomous” systems
it helps to contrast them with Automated Systems.
An automated system is a very simple machine.
If X then Y. Same input, same output.
An “Autonomous System” works probabilistically;
it compares the inputs it receives to a database
in order to figure out what it should do based
on the goal you have previously set it.
Now either you have to give it the database before you turn it on, or you have to tell it how to gather
the data it needs in order to operate. You also
have to tell it its goal. The role of the
human being in telling the system what its
database is and the goal it must achieve is
absolutely vital – please bear that in mind
as we go on. Because they operate probabilistically,
an “autonomous” system can give you different
outputs; they don’t always give you the
same result – which means they can be quite useful in situations where there's a lot of uncertainty, for instance in a war.
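If you like code, here's roughly that contrast in a toy Python sketch – everything in it is made up for illustration, not taken from any real system:

```python
import random

# An automated system: if X then Y. Same input, same output, every time.
def automated_thermostat(temperature_c):
    return "heater on" if temperature_c < 18 else "heater off"

# An "autonomous" system: a human supplies a database of examples and a goal;
# the system compares its input to the database and acts probabilistically.
class ToyAutonomousSystem:
    def __init__(self, database, goal):
        self.database = database  # supplied by a human (or gathered per human instructions)
        self.goal = goal          # set by a human

    def decide(self, observation):
        # Score each stored example by how many features match the observation.
        scores = {label: sum(a == b for a, b in zip(features, observation))
                  for label, features in self.database.items()}
        labels = list(scores)
        # Choose probabilistically: the same input can yield different outputs.
        weights = [scores[label] + 1 for label in labels]
        return random.choices(labels, weights=weights)[0]

system = ToyAutonomousSystem(
    database={"cat": (1, 0, 1), "dog": (0, 1, 1)},  # human-chosen examples
    goal="label the input",                          # human-chosen goal
)
print(automated_thermostat(15))  # always "heater on" for 15
print(system.decide((1, 0, 1)))  # usually "cat", but not guaranteed
```

Notice where the human is in that sketch: choosing the examples, choosing the goal, choosing the matching rule. Keep that in view.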
So let's jump right in. Two myths about AI.
Myth 1: the myth of objectivity. A lot of
people think that algorithms and AI systems
are unbiased and objective. They are not.
They are absolutely not. An AI system can
inherit all of the biases of its creator,
including racism, classism, ableism, transphobia – these problems do not go away just because you introduce a robot in there.
And to illustrate this we're gonna have a
fun example. We’re gonna design a social
media platform. It’s gonna be bigger than
YouTube, it's gonna be bigger than Twitter,
it's gonna be bigger than Facebook. It's gonna
be YouTwitFace. Now I'm gonna need a volunteer.
Can someone put their hand up? Somebody to
be a Venture Capitalist. Hello! What's your name?
STUDENT: Anna!
Thank you very much! You're gonna be our venture
capitalist. You're gonna fund YouTwitFace.
Can you pick a number between one and twenty
please?
Fourteen? Thank you, you've just given us
14 million dollars! To develop YouTwitFace.
It's gonna be fantastic: thank you very much.
So guys, welcome! We are
the YouTwitFace team: we're the team of the
future, we're the team of today, we're the
team of yesterday. Just grab a beanbag anywhere:
it's Silicon Valley so it's a particularly
casual brand of evil. Today our task is to
figure out how things get trending on YouTwitFace.
So every time you log in to social media you've
probably seen something that looks like this?
The trending news box, right? Our task for
today is to write a system, an algorithm,
that determines what appears in that box.
Easy peasy, right! All we have to do is write
a program that says, "What are the most popular
words being used on the platform right now?
Stick 'em in the box." Done! That's what's
popular. That's what's trending. Right? No.
Do you know what two words are always the
most popular on every social media network?
'Lunch' and 'weather.' All the time! They're
always trending. If you write an algorithm
that just takes the most popular words and
sticks them up there it's gonna be #Lunch
and #Weather all day, in every time zone.
So okay. What we do is we take the program
and we say, "Alright, find the most popular
words, if it's lunch or weather filter it
out, if not, stick it up there." Congratulations.
You’ve just created an algorithm that is
biased against reports on climate change.
Everybody who talks about climate change,
who posts about it, who links to it, who tells
people about the research they've just done,
is gonna mention the word weather. You've
just created a trending news algorithm
that has a conservative political bias. Without even meaning to. A trending algorithm has three jobs: it has to gather the data, it has to filter it, and
then it has to publish it. At all three stages
a human being has to say, "Do this; don't
do that," so at all three stages bias can
creep in. And this actually happened to Facebook
a few years ago: they got in trouble for this,
it was a similar scenario.
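And if you want to see those three jobs in code, here's a deliberately tiny sketch – the posts and the banned-word list are made up – showing a human choice entering at each stage:

```python
from collections import Counter

# Made-up posts for illustration only.
posts = [
    "new climate research: extreme weather events are getting worse",
    "lunch was great today", "what's for lunch?", "lovely weather today",
    "our climate report is out: weather records broken again",
]

BANNED = {"lunch", "weather"}  # a human decided these words "don't count"

def gather(posts):
    # Stage 1: gather. A human decided raw word counts define "popular".
    return Counter(word for post in posts for word in post.split())

def filter_words(counts):
    # Stage 2: filter. A human decided to drop 'lunch' and 'weather' -
    # which also quietly suppresses climate posts, since they mention weather.
    return {w: n for w, n in counts.items() if w not in BANNED}

def publish(trends, k=3):
    # Stage 3: publish. A human decided how many items make the trending box.
    return sorted(trends, key=trends.get, reverse=True)[:k]

print(publish(filter_words(gather(posts))))
```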
Now we’re very used to the idea that a newspaper or a news network can have a political bias
- like Fox News, right, really right wing!
And if you're very clever you might already
know that in fact every newspaper and every
news media station has some political bias
one way or another. But for some reason a lot of us think that an algorithm is “objective.” So why do we think that?
Well let’s return to being the YouTwitFace
team. Now as the YouTwitFace team, we've gotta
give you your money back! Cause you invested
so much money. So how do we as a social media
platform make money? We need to make the platform
friendly to advertisers, right. Advertisers
use trends in order to get their ads in front
of more eyeballs. Now how do you think advertisers
are gonna react if we admit that our algorithm
is biased, i.e. not accurate? Not a good look.
And we could hire a whole diverse range of
people to jump in at all three of those stages
and help us kinda correct the bias or at least
add a bit of diversity. But that’s gonna
be really expensive. Oh and we can't tell
anybody what the algorithm actually does because
it's our private intellectual property and
some rival social media platform could steal
it. So that's how we arrive at a situation
where massive private companies like Facebook
and Twitter determine what news people get
to see, how they see the world, in a biased
way with no accountability or oversight because
everyone involved has a really strong incentive
to pretend that this is objective. But it isn't.
Let’s take another example, a far more serious
one. My Venture Capitalist friend. I know
that YouTwitFace, didn’t quite blow up the
way I wanted it to, but I've actually just
got a brand new idea, it's gonna be really
cool; I’m gonna pitch it to the Mayor of
the city. Who’d like to be the mayor? Can
I get a volunteer? Anybody? Who's gonna be elected?
Thank you very much! You're the new Mayor
of The Hague. Thank you very much. Madame
Mayor, I've got this brilliant idea right.
It's an algorithm that can tell police officers
where they need to patrol based on all the
crime reports from given areas. So what we're
gonna do is I'm gonna take all the arrest
records and all the crime records for the
whole city, I'm gonna plug them into an algorithm,
find the patterns, and the algorithm is gonna
say something like, “Send a few more police
officers down Brown Street cause there’s
been a bunch of burglaries there recently.”
Right? So if you give me a taxpayer's grant
to develop this algorithm - you and me are
gonna get our money back - I make a bit of
money, crime's gonna go down because the police
are gonna be where they need to be, the police
are saving time, people are gonna be safer,
I'll even employ some of the local people
to help me build the software. So I'm creating
jobs, I'm creating wealth for the city. Madame
Mayor, does that sound like a good idea? Sounds
like a brilliant idea. I call it “Smart Policing."
Right? It's gonna be brilliant.
Sounds great! Until you realise that the police
in our city disproportionately stop black
people. And disproportionately arrest black
people for the same crimes as white people.
And disproportionately interpret the innocent
actions of citizens of colour as resisting
arrest, thereby legitimising greater police
force. And that white citizens of our city
are more likely to call the police on citizens
of colour for doing absolutely nothing. How
many of you saw that very thing
happen in America recently? Two black guys
hanging out in Starbucks, waiting for a friend,
and a white lady got "nervous" and called
the police. The police turned up; the men were arrested,
for doing nothing? All of that data is gonna
wind up in the algorithm. And the algorithm's
gonna tell us, "You need to send more police
into predominantly black neighbourhoods."
We’ve just made racist policing faster,
cheaper, and more efficient.
Now as the developer of Smart Policing, am
I going to admit that? No. If I even realise it.
Are you going to admit, Madame Mayor,
that you just unleashed a racist computing
service on the city? No. The police sure as
hell aren’t going to admit that they’re
institutionally racist. Oh and by the way,
because this algorithm is my private property
it’s not going to be released to the public.
So the citizens are not allowed to know or have
any accountability over the system that is
being used to police them. Yet everyone involved
is gonna tell you it's great. It's working.
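If the mechanism isn't obvious, here's a toy version of that feedback loop in code – every number is invented – with two neighbourhoods that have the same real crime rate but different arrest records:

```python
# Toy predictive-policing loop. All numbers invented for illustration.
# Two neighbourhoods with the SAME underlying crime rate, but neighbourhood B
# starts with more recorded arrests because it was policed more heavily.
true_crime_rate = {"A": 0.05, "B": 0.05}   # identical by construction
recorded_arrests = {"A": 10, "B": 40}      # the biased historical record
patrols = {"A": 5, "B": 5}

for year in range(5):
    total = sum(recorded_arrests.values())
    for hood in patrols:
        # The "smart" step: send patrols in proportion to past arrests.
        patrols[hood] = round(10 * recorded_arrests[hood] / total)
    for hood in patrols:
        # More patrols means more arrests RECORDED, not more crime committed.
        recorded_arrests[hood] += round(patrols[hood] * true_crime_rate[hood] * 100)
    print(year, patrols, recorded_arrests)
# Patrols pile into B and the arrest gap keeps growing, so the output
# "confirms" the original bias - faster, cheaper, and more efficient.
```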
In case any of you think this is sci-fi,
this technology is already being tested in Los Angeles and in Santa Cruz. Myth Number 1 - AI is not objective.
Myth Number 2: the Myth of Autonomy. AI systems
are not autonomous. Anybody who tells
you they are is probably trying to sell you
one. Autonomous means having the freedom to
act independently. AI systems do not act independently.
They do what they are told to do.
So as an example: you’re driving along,
you're driving really fast, when suddenly
a kid runs out into the road. You've got a
split-second choice to make: do you hit the
kid, or do you swerve and hit the car next
to you? Now for today don't worry about which
choice is necessarily the correct one, but
the point is that with a human driver a decision
just gets made - split second. Also, as a
human driver, whichever decision you choose,
you can be taken to court, and the people
taking you to court might not win, but you
can be required to stand up in public and
be accountable for your decision.
Now swap the scenario, let's say that it's
not a human driver it’s a "self-driving"
car. An “Autonomous Vehicle.” The decision
is not made on the spot; the decision was
already made by some programmer ages ago who
programmed it - not directly in a kind of
'If, then' way - but who programmed it to
interpret the world in a certain way, to gather
data in a certain way, to have certain goals.
Maybe not even one programmer, maybe a whole
team of programmers. But nevertheless some
human being is responsible for the "decision"
that gets made. Here's the real kicker - oh,
that's a slogan to remember, AI systems are
not autonomous, they are unsupervised. Here's
the real kicker - if a "self-driving car" swerves
and hits someone, who do we
take to court?
Can we take the programmers to court? Can
we take the software company, the car company
to court? Because if not then some software
company has just been given the unaccountable
power of life and death over everyone its
product encounters.
Elaine Herzberg was 49 when she was hit and
killed by one of Uber’s “autonomous vehicles”
in Arizona in March 2018. Her family has since
settled with Uber out of court, so at present
there is no legal precedent for deciding questions
like this one, as far as I know. Some human
being is causally responsible for this woman’s
death. Now whether that amounts to criminal
responsibility we may never know. But of course,
companies that make the AI systems have a
very strong incentive to tell you that they
are autonomous because that means they're
not responsible. But they're not autonomous.
They're just unsupervised.
So to recap: two myths about AI. They are
not objective. They are not autonomous. With
these things in mind, let’s turn our attention
to war. And like before, we’re going to
need a definition of warfare, and fair warning,
this is gonna get pretty bleak but I’d like
to lead you to a definition of warfare in
a bit of a roundabout way:
I’d like you to imagine that you wake up
tomorrow and you check the news or you check
Twitter or you check your phone and you see
that the worst possible crime has been committed.
Don't worry too much about what exactly it
is or rigorous definitions of the word 'worst,'
just imagine whatever would give you the most
horrible feeling, the most sickening, the
most repulsive crime you can think of. It's
been done. Now in August I became the proud
uncle to a baby girl, she's absolutely gorgeous,
she's a joy, she's a light in my life. So
for me I imagine the worst crime - the thing
that makes me feel worst - is somebody harming
a child, and I just scale up from there. So
I imagine that I wake up tomorrow and I check
the news and not just one child but every
child in my country is dead. That’s about
14 million children in the UK, about 4 million
here. Everyone under the age of 18 is gone.
Maybe some of you have younger siblings; maybe
some of you are under 18 yourselves. Some
virus, some nanomachine, something, has just...
No-one's going to work; everyone's calling
in saying they can’t get their kids to wake
up. Public transport's not running. The Prime
Minister's not even made a statement because
y'know, they've got their own family; no statement
can even be cobbled together. The whole economy’s
ground to a halt. There's just screaming filling
every street and silence in every school and
every playground.
And now imagine that we caught the guy who
did it. It was deliberate: some mad scientist
engineered the virus, engineered the
nanomachine, whatever it was that did it. It was deliberate.
We caught him. And the question is what do
we do with him? And I’d like to remind you
of something: neither my country, nor this
one, has the death penalty. There is literally
nobody who has the power to put that man to
death. To take away his life. No-one can do
it. To kill him is legally murder. And even
in countries that do have the death penalty
we can only apply it after a fair trial. So
here's the multi-million dollar question:
on whose authority do we kill foreigners?
When we go to war – we kill people. And
we don’t just kill them: we kill them without
trial. The worst possible criminal - the worst
criminal - is entitled to a fair trial and
a jail cell, even after everything that man's
done, but you don’t even have to be a criminal
if you’re foreign. You don't even have to be arrested. No trial, no charges - we’ll blow the shit out of you.
So this is the definition of warfare I’d like
you to keep in mind for today, and hopefully
when you leave here as well. Warfare - the
state-sanctioned killing of foreigners without due process.
I'd like to say a few things about this definition before we move on. First is that I’m aware that warfare involves more than just killing. I've
spent enough time around soldiers to know
that their job isn't just shooting people
dead. And other definitions of warfare could
capture other aspects of it. This isn't meant
to be comprehensive, I just want to foreground
the ethically contentious bit of warfare in
your minds, which is killing people.
Secondly, I interpret the word ‘sanctioned’
here to be an active word. To be actually
doing something, not just sitting back and
condoning the death of foreigners without
trial, but actively getting involved in the
process. Maybe through an invasion, as my
country and I believe this one did in Iraq,
maybe by sending money and weapons as the
Reagan Regime did to fascist governments in
Latin America, or maybe as my country is
currently doing sending "military advisors"
to Yemen. All those I would say are included
in an active definition of warfare.
Thirdly, we should all I hope be aware that
plenty of state-sanctioned killing of actual
citizens goes on without trial as well. You
are probably aware of the extrajudicial killings
of black people in the United States; it seems
to be happening a lot. You may also be aware
that in my country since 1990, 1400 people,
predominantly people of colour, have been
killed following contact with the police,
and not a single police officer has been charged.
Those, I would say, are all morally horrific,
but they don't come under the scope of what
we're calling warfare today. And lastly, I don't
wanna overstate the ability of a fair trial
to find what is just and good, I'm sure we're
all aware that fair trials can sometimes be
biased in their own way. I'm just trying to tease out a particular contradiction that we'll get to shortly.
So – the state-sanctioned killing of foreigners
without trial. This definition and the thought
experiment of the worst possible criminal
makes it pretty clear that when it comes to
killing people it’s one rule for citizens
and it's another rule for everyone else. And
in this way, we can see that Citizenship is
the first weapon in a war. Citizenship determines
who may live, and who may be killed.
And as just one example among many, I’d
like to tell you in brief the story of Bilal al Berjawi.
Bilal was British. He was born in 1984 and
he went to school in St John’s Wood, in
North London. In 2010, he was stripped of
his British citizenship by then Home Secretary
Theresa May - he may not actually have become
stateless; I forgot, I should have edited that
out of my PowerPoint. But he was stripped
of his British citizenship. Theresa May, you
may know, is our current Prime Minister. On
the 21st of January 2012, Bilal was in Somalia,
and he called his wife in London to congratulate
her on the birth of their third child.
A few hours later he was killed by a drone
strike by the Obama Regime.
Now the United States government had been
watching Berjawi since 2006; they even gave
him his own codename, ‘Objective Peckham,’
and he was on one of their kill-lists. So
it wasn't an accidental killing; they were
hunting for him and they found him.
And here we see that citizenship is the first
weapon. With his citizenship gone he was no
longer under the protection of the British
government. And the Home Office of my country
has declined to comment on whether they knew he was gonna be killed when they stripped him of that citizenship.
Now I don’t want to comment on Berjawi’s
character today, and I don't wanna comment
on his political beliefs, but what's relevant
for us right now is that he was never convicted
of a crime. He was detained several times
but released. Now the files that the United
States government have on him claim that in
2006 he was sending money to Al-Qaeda, which
is a criminal offence in the UK – funding terrorism.
They had the evidence for it, they say, but
he was never tried. He was never found guilty.
And I remind you that even if he had been,
the UK doesn't have a death penalty. The worst possible criminal gets a trial. And yet Berjawi was executed.
Berjawi’s case, and the many others like
it, show us that citizenship is the first
technology of warfare and they throw the issues
of AI in warfare into a quite bleak context
actually. And I don't wanna spoil too much
where I'm going with all this but a lot of
people think that using AI in warfare is like,
"Oh no, we're gonna make decisions about who
lives and dies just based on like a totally cold,
emotionless process with no appeal and no
oversight - people just don't have any rights
anymore because they're just blips on a computer
screen." Well actually we already do that.
There’s a pair of philosophical concepts
I’d like to introduce you to, called Social
Death and Civil Death. Here I’m gonna quote from Lisa Guenther and her fantastic book "Solitary Confinement."
“Social death is the effect of a (social)
practice in which a person or group of people
is excluded, dominated, or humiliated to the
point of becoming dead to the rest of society.
Although such people are physically alive,
their lives no longer bear a social meaning;
they no longer count as lives that matter.
The social dead may speak, act, compose symphonies,
or find a cure for cancer, but their words
and deeds remain of no account.”
During warfare, we treat foreigners – not
just enemy citizens but often civilians as
well - as the Social Dead. Arguably actually
I think we treat certain groups of foreigners
as Socially Dead all the time, but we’re
talking about warfare today. Social Death
is what enables us psychologically to move on to the next stage, Civil Death, which is what makes killing them legal:
“Civil death is a legal fiction; it refers
to someone who has been (legally) positioned
as dead in law. Their body may be alive and
their mind sharp, but they have been deprived
of the legal status of a person with civil
rights such as the rights to own or bequeath
property, to vote, to bring a legal case to
court, and so on.”
And before Leonard [next speaker, legal expert]
absolutely leaps out of his chair, I'm not
talking absolutely literally here. This is
the legal position that the people we kill
in wars are thought to occupy. But not always
literally: the casualties of war are not literally
declared legally dead before we kill them,
they’re just not fully thought of as legal entities,
as rightsholders, or at least not quite to
the same extent as we are. And we can see
that Berjawi was shunted through both of these
categories before he was killed. He was Socially
Dead already to almost everyone. The newspapers
in my country practically celebrate extrajudicial
killings like his. And other than his parents,
and his widow, and his children – who are
all still alive and presumably at this stage
not great fans of the British and American
governments – who weeps for Bilal al Berjawi?
And he was Civilly Dead as well: we are sitting
four miles away from the International Court
of Justice and the International Criminal Court.
Do any of us really think that Barack Obama and
Theresa May will ever be tried for their role
in the death of Bilal Al Berjawi? No. No-one's
even gonna try. It just isn't even gonna occur.
Here we see Social and Civil Death work to make killing without trial not just acceptable, but normal. Even expected.
Note, by the way, that Citizenship is not
the only technology of Social and Civil Death.
I mentioned already the extrajudicial killings
of people of colour in the United States; in
that case, arguably, Race can become a technology
of Social and Civil Death. Being a prisoner
can. And of course, slavery is the classic
example.
Killing foreigners without trial is very common.
And I think it's very worrying. I worry that
it paints a very flimsy picture of what human rights are.
As an example, I'd just like you to focus
your mind’s eye on the Right to Life. The
Right Not To Be Arbitrarily Killed. The fact
that foreigners can in fact be killed without
trial paints a picture of this human right
according to which our right to life is not
a morally necessary foundation for any
human society, but rather a concession from
governments to only certain people, a privilege
that can be revoked at any time without accountability
or oversight. The right to life as something
that flows from governments to people, rather
than something that people demand of their
governments as a condition of governance.
And what we’ve come down to is this: are
justice and human rights moral ideas, or are
they purely legal ones? And it’s fitting
that we are sitting four miles from the International
Court of Justice. What is that building for
exactly?
And this is where you can choose your own
adventure a little bit depending on whether
you think that justice and human rights are
legal ideas or moral ones. If you believe
that justice and human rights are purely legal
ideas, then the killing of foreigners without
trial presumably holds no problem for you.
Because Citizenship is the legal tool that
we use to determine who can be killed and
who can’t. The Authorization for Use
of Military Force, which was passed by
the United States Congress on the 14th of
September 2001, gives the United States government
the legal right to kill anybody, anywhere,
anytime, in any country, without trial even
in countries with which the United States
is not formally at war, as long as they are
the “associated forces” of al-Qaeda, which,
apparently, though it has never been proven
in a public court, included Bilal al-Berjawi,
and that's all there is to it. It's just a
legal matter, morality doesn’t come into it.
There’s a great line from the Joseph Heller novel Catch-22: “Catch-22 says they have a right to do anything we can’t stop them from doing.”
However, if you believe that justice and human
rights are in fact moral ideas, that killing
Citizens without trial isn't just illegal
but actually morally wrong, and furthermore
you believe that all humans have equal moral
rights, then the killing of foreigners without
trial presents a very serious problem, because
it’s actually impermissible. Not just bad,
not just, "Oh we should probably avoid doing
that," but actually impermissible: do to it
even once would be wrong. Now notice by the
war that these two conclusions - War is Lawful
and War is Impermissible - are consistent;
you can believe both. But you can't live by
both. You can't go to war and believe that
justice and human rights are moral ideas.
Now whenever there’s a terrorist attack
in my country by somebody who isn’t white
people like to ask the question, “Where
was he radicalised?” "Where did he come
to the conclusion that this was an okay thing
to do?" I’d like to flip that on its head
and ask, “Where were we radicalised?”
When did we decide this was okay, when did
we all collectively just accept the idea that
the right to life, the right to just not be
killed, just doesn't apply to foreign people.
It's fine. Where was Theresa May radicalised?
Where was Barack Obama radicalised? Where
were the US military staff who killed Bilal
al Berjawi radicalised into believing that
they had the moral right to kill a man thousands
of miles away who hadn't been convicted of any crime?
Now it's time to tie the two strands together:
we’ve talked about AI and we’ve talked
about warfare. The writer Evgeny Morozov cautions
that once we create a piece of technology
to address some problem it becomes very hard
to question the nature of that problem.
And as an example - our Venture Capitalist friend?
Thank you very much. We're now going to come
up with a new piece of technology together -
and the Mayor of the city, thank you. Now
I know YouTwitFace didn't quite blow up and
I know that I inadvertently created a racist
computer that messed up the police force but
this time I’ve got a really good idea right.
And it's gonna be really excellent. And Madame
Mayor this one's gonna be great for you too.
I’ve invented an app that you can give to
homeless people and it monitors how much space
is in a homeless shelter, and it tells the
homeless people where the nearest shelter
is, how much space there is, how much it's gonna cost to stay there, and it helps them find a bed for the night.
I call it S.A.R.A – Shelter And Rest for All.
We're gonna be doing good for the city. We're
gonna be helping homeless people out. Madame
Mayor I know we’ve got that really big sporting
event coming up – it’s the darts final,
Raymond van Barneveld’s gonna be in town
- and you wanted to clear all the homeless
people off the streets because it doesn't
really look good for the city, so this is
gonna be great for that! You give me a taxpayer's
grant to develop this technology, I'm doing
good for the city, right? Doesn't that sound good! Can anyone spot the obvious problem with that? Anyone? Shout it out.
LECTURER: Homeless people with cellphones?
OLLY: Homeless people don't have cellphones?
That's actually a very common misconception!
A lot of homeless people do have smartphones.
The idea that - no, it's true! The idea that
homeless people don't have smartphones is
based on a misconception that homeless people
just leap out of the ground, poor, and have
nothing, but actually people are made homeless.
And that's a hint.
This app does not address the causes of homelessness.
At all. Systemic inequality, exploitative
landlords, the institution of private property
itself, the fact that you Madame Mayor have
been selling off social housing to build luxury
flats because one of your major campaign donors
was a big property developer? None of that
is addressed at all. This app sells you, literally
sells you a nice, slick, easy technological
solution that stops you actually thinking
about the nature of the problem. Remember
that police app we talked about earlier on
that tells the police where to go? Same story.
Doesn't address the causes of crime, at all,
in fact it stops you thinking about it. Oh
and remember, everyone involved from the developer
to the Mayor to the people funding it has
a very strong incentive to tell you that this
app is great and it's working. Just look at
how much homelessness has decreased!
And this could be a big problem when it comes
to warfare. If we create an AI system that
can ‘Identify A Target’ – not even kill
anyone, just identify them – like Bilal
al Berjawi – and say, “Look! There he
is!" then we have created a system that hard
codes the presumption of guilt. That says,
"This guy's on a kill list! What do you wanna
do?" - even if the killing is done by a human
being, that system doesn’t stop to ask – "Why
is this guy on a kill list? Why do we have
kill lists? What are the factors that contribute
to war and terrorism, and how are we feeding
into them? Is there something else we could
be doing here? Why are we spending money on
killing people instead of this?" Doesn't ask
any of that. Sells you, and again I mean literally
sells you – private companies are spending
a lot more on developing tech like this than
state militaries are - sells you a nice easy
solution that pushes you towards a legalistic
interpretation of warfare. That pushes you
towards thinking that this is not a moral
issue. That it's just about whether it's lawful.
In fact you'll see that happen all the time:
if you challenge the police, if you challenge
border agencies, if you challenge anybody
and say, "This thing you've just done is really
morally wrong!" they'll respond to you by
saying it's lawful. But that's not the issue,
is it? It's not about whether it's lawful.
It's about whether it's right. So in sum Artificial
Intelligence has the potential to make warfare
faster, cheaper, and more efficient. But we
shouldn’t be trying to make warfare faster, cheaper, and more efficient. We should be trying to eliminate it.
Now I don’t want to end on a downer, so
my last slide is entitled – What Can You Do?
And again, you can choose your own adventure
here cause this depends on whether you think
that human rights and justice
are moral ideas or legal ones. The first thing you
can do if you believe it is a moral issue
is you can call your representatives. You
could tell them that if they want your vote
they need to commit to nonviolent foreign
policy that doesn't kill anyone without trial and that you expect them to support peaceful foreign policy.
You can protest. It's a great way to meet
people. Get involved. Learn about military
involvements, get out on the streets and protest
if you think this is a moral thing.
And this applies whether you think it's a
legal matter or a moral matter. Challenge
hypocrisy whenever you see it. When you feel
safe of course, and in a compassionate way,
but if somebody stands up and says, “Our
country cares about human rights, our country
cares about due process,” just ask them,
“Do we? Do we really? All the time? For
everyone?" When somebody says, “Oh this
foreign country," - earlier on we had a joke,
a casual joke about bombing Kim Jong Un - somebody
says, "This foreign country, they don't respect
human rights!" just ask, “Do we? On what
grounds do we claim the right to police them for it?”
Resist the myths about AI. Resist the Public
Relations attempts to sell you on it by telling
you that it's objective and telling you that
it's autonomous. Again, they're just trying
to sell you on it. Elon Musk wants you to
think that he's cool, because if you think
that he's cool you're not thinking about the
fact that he closed down one of his factories
and put thousands of people out of work because
they voted to unionise. Whether you think
it's a legal matter or a moral matter, just
don't believe the PR.
Lastly, resist attempts to put people in Social
Death. It's by listening to people and cultivating
empathy that we can do that. Listen to prisoners,
listen to sex workers, listen to communities
being aggressively policed, listen to citizens
of countries that your country doesn’t like
and is bombing, listen to undocumented immigrants in detention centres. Resist always
the attempts to dehumanise and to reduce people,
because it isn't AI that's gonna be doing
that. The people who make AI are relying on
you to do that for them. Thank you.

If we've got time for some questions I can
take them now but if we're running on I can
definitely do it in the workshop, whatever you want.
My Venture Capitalist friend! Feeling guilty
about your investments?
ANNA: Hi, thank God I don't have that much
money to spend and play golf with! I'm Anna,
as you know. My question is - so I've learned
in one of my classes that the systems, the
algorithms, are designed by people. Is the
concept of the moral good, is that a subject
for being implemented in algorithms? Can um,
can um, let me see...
OLLY: Could you program morality into an algorithm?
ANNA: Yeah, can you program morals? Perhaps is my question.
OLLY: You could give it a go, certainly!
STUDENT: How?
OLLY: Isaac Asimov had his three laws of robotics,
right, saying like, 'Don't harm a human' was
one of the first ones, right. That doesn't
make it immune to bias though, unfortunately.
That doesn't make it totally bulletproof.
Because the kicker is gonna be - for instance
if you tell an algorithm "Do not harm a human
being" the person who gets to define what
'harm' is has a great deal of control over the situation.
Some people would say that arresting people
and putting them in prison and solitary confinement
isn't harming them, but other people would
say that solitary confinement is in fact a
form of torture. So it's how you decide what
morals are. So certainly, yes, you could give
it a go. But you're still gonna be vulnerable
to bias. Remember it's not objective.
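To put that in code terms, here's a trivial sketch – both 'definitions' of harm below are invented – of why whoever writes the harm predicate is really writing the morality:

```python
# A toy "don't harm a human" rule. The rule is trivial; all the moral
# weight sits in the human-written definition of harm.
def harms_a_human_v1(action):
    return action in {"shooting"}                          # one programmer's view

def harms_a_human_v2(action):
    return action in {"shooting", "solitary confinement"}  # another programmer's view

def robot_may_do(action, harm_definition):
    return not harm_definition(action)

print(robot_may_do("solitary confinement", harms_a_human_v1))  # True
print(robot_may_do("solitary confinement", harms_a_human_v2))  # False
```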
STUDENT: Hi. I have a question. You said challenge
hypocrisy, right, whether legalistic or moral.
But do you think eliminating hypocrisy about
moral issues will improve the treatment of
the ones affected by that hypocrisy? Will
it not detach us from the issue itself? Will
it not change into a situation where you start
to fight fire with fire, if you are no longer
trying to give a moral
explanation for your actions?
OLLY: I'm not entirely sure I know what you
mean, but in any case I don't believe that
eliminating hypocrisy is actually possible.
I think we can challenge it, and that challenging
it could in some contexts have very good results.
But I think to eliminate hypocrisy and ambiguity
and human beings being flawed and messing
up - I think that's - in order to do that
you'd have to create, well it would be impossible.
We'd have to create a pretty interesting society.
So I don't think that we can eliminate hypocrisy.
But I do think that there can be fruitful
results in challenging it especially if somebody
is being a hypocrite in order to sell you
on something. If somebody's saying, "Our country
cares about human rights (and that's why we're
going to buy a whole load of weapons from
this really big private company and go and
bomb these other people!)" then questioning
the hypocrisy of that can stop you being sold
on something. Cause it's very easy to say,
"Yeah! I agree about human rights!" and someone
goes, "Great! So let's bomb all those people!"
and you go, "Wait, what? What have I just
signed up for?" So I think that kind of challenging
hypocrisy can be quite useful.
STUDENT: Well you answered it, so thank you.
STUDENT: Hi! Here!
OLLY: Hello!
STUDENT: In your opinion, could you name one
good thing about AI?
OLLY: Ooo! ... Well yeah, it makes things
faster, cheaper, and more efficient. But on
its own I don't think that those are always
good things. I'm not sure that anything is
necessarily like, always good in all circumstances.
So...I'm not sure, I think the question's
possibly a little bit too abstract for me
to answer. But I imagine that the AIs that
run my phone and my laptop are very useful.
And you know, put people into space. Help
distribute resources more evenly and justly.
I imagine that we could do a lot of good with it.
But it's deciding where we point them
that's the question. I hope that helps.
[Sees that the questioner is previous speaker
Paul Verhagen]
Oh no!
PAUL VERHAGEN: I'm sure we could have an interesting
discussion. I'm just curious to hear your
thoughts on the concept of the so-called casus
belli, the cause for war, the legal construction
by which someone goes to war. Because as you
rightly argued there are interesting legal constructs
and moral constructs here, yet war is also to some extent a legal construct. I'm just curious to hear your thoughts on
that.
OLLY: Yeah, well, as the designated ethics
lecturer I focused on ethics today. I'm sure
Leonard can tell you more about the legal
aspects than I could. I don't know, I'm relieved
that I've never been in a position where I've
been asked to legally justify a war. Yeah,
I struggle to think of, I struggle to think
of a cause, a legal cause... because when
you have, like, a state passing laws and deciding
who it's okay to kill then it becomes about
who has power within that state. So yeah,
that's a difficult issue and I'll cop to not
really being able to say anything that informative about it right now.
MALE STUDENT: Yeah, um, I have a few points to
make once again. The first one worth bringing
up is your argument that algorithms aren't
objective or rather that it is a myth that
they are objective. I've heard this argument
many times. The most interesting situation I came
across so far was an article by Wired Magazine
titled, 'Can AI Be Racist?' And I think that's
a really stupid question to ask, okay. The
reason the article was written was there was
this AI research centre that was training
its AI on image recognition and they fed it
all the information, right, and for some reason,
it is not known why because of the black box
problem that we have with AI, the AI - personally
I find this funny, it's a little bit unfortunate
- the AI classified black people in the images
as gorillas. Okay.
OLLY: As what sorry?
MALE STUDENT: Gorillas. As in like, the animal.
Now that's unfortunate but when asked why
that was the case the researchers didn't know
what to say because they didn't actually know
why the AI did that, right. So similar to
the example from earlier with the huskies
[referencing the previous lecture], right,
for some reason it classified it as a wolf
when really it isn't. Anyway, point aside,
my point is to argue whether an algorithm
can be objective or whether it can be biased
is a semantically silly question to ask. It
doesn't know what's going on, it's just following
the procedure of physics. Does that make sense?
OLLY: Do you believe that racism is about intent?
MALE STUDENT: Well...
OLLY: Do you believe that bias only matters when people intend to do it?
MALE STUDENT: Well intent certainly is extremely
important in the combination. I know that
Postmodernists for example, radically disregard
intent, which in my mind is an absolutely
insane thing to do but regardless of that
given the topic, which is objectivity of
algorithms, I'm saying you're asking the wrong
question - it's not about objectivity or bias
it's about mistakes, right, it's about problems.
So the AI in the example that I gave or in
the earlier example, the husky, it's not about
whether the AI is biased against - what was
it? He classified it as a wolf, right? Yeah,
he classified it as a wolf but it was actually
a husky. Oh, is the AI biased against huskies?
No it's not, it just doesn't know what's going
on and it's making a mistake and the scientists
behind this need to figure out what's going
on. Alright. So that's my first point.
OLLY: So again, do you believe that it's about intent? Do you believe that bias must come from a place of intent?
MALE STUDENT: Uh, I think that it's an important factor yes.
OLLY: Would you deny that unconscious bias can exist?
MALE STUDENT: Oh yes, unconscious bias is utter nonsense and I can prove that to you. The
creators themselves have shown that because
unconscious bias is actually not called unconscious
bias, that's not its scientific term; its
actual scientific term is implicit association.
Right? Now I forget the names but I could
give them to you of the two scientists - they
developed the implicit association tests at
Harvard I think in the 1970s, 1980s? And what
the test does not fulfil is test-retest reliability.
Test-retest reliability is an absolutely
fundamental concept in any scientific area
and in this case also in war to some extent,
but point being - and to make it quickly because
I don't want to bore the audience with other
irrelevant scientific facts - unconscious
bias is nonsense. It's been disproven, it
doesn't fulfil test-retest reliability. In any case...
OLLY: That's fine, because what I said doesn't really require that. It just requires that
the systems humans make reflect them.
MALE STUDENT: Sorry?
OLLY: The points I was making don't really
require that, they just require that the systems
human beings make reflect them.
MALE STUDENT: That's what I'm saying. Right, so
it's not racist, it's not biased, it's just
a mistake. In this case, maybe a mistake-
OLLY: Do you think that racism and bias aren't
mistakes? That they have to be deliberate?
MALE STUDENT: Yeah I'll try and cut it short, right?

But no that's not what I'm saying. I mean this is the third time I was trying to [???].
But the other thing is you said something which
I thought was interesting, which was
the distinction between autonomous and unsupervised.
Okay, and I agree this is a very complicated
topic, certainly from a legalistic
point of view: what do we classify as autonomous
or not? So the question is, I'm sitting in my
car, I'm one of those fancy people that has a Tesla, let's pretend, and I'm driving down the highway and I put the autonomous mode
on, the self-driving mode on, is the car autonomous
or is it unsupervised?
OLLY: It's unsupervised. That's the point.
MALE STUDENT: You would say it's unsupervised.
But I'm clearly supervising it, I'm just not driving it.
OLLY: I think maybe you've taken what I was
saying to be applicable in slightly more of
a specific scenario? What I was trying to
- the point I was trying to make is that when
people say AI systems are autonomous, I wasn't
trying to say that the word 'autonomous' like
is never, never applicable or that like, 'unsupervised'
must always be substituted for it in like
1-1 relationships. I was just saying that
it might be illuminating to think of them
as being unsupervised from an ethical point of view.
MALE STUDENT: Okay. Sure.
LEONARD VAN ROMPAEY: Thank you very much for
this interesting presentation. Also very entertaining.
I think that's something that can be commended
because often those conferences can get a
little bit dull, we just talk about principles.
That was a really nice way of rounding things
up. I'm going to make a few comments and ask
a few questions and get a little bit maybe
challenging. Are you making an argument against
AI or against the negative externalities of
AI? Because I think that a lot of the challenges
you mentioned that are current and very accurate
in terms of discrimination are things that
will slowly get tackled as technological development
progresses. There is a whole sub-field of
ethics in AI that develops principles, computational
principles, to implement morals inside the
machine, to give [???] to systems, I mean
this is something that a lot of researchers
are doing now. So I would say that a lot of
the challenges you mentioned are, y'know,
they might not be so static and incurable.
And also I wanted to ask are you making the
argument that law and morals are mutually exclusive
principles? Because you tend to portray law
as a purely formalistic element that is only
there to legitimise the power of the strong, and
to a large extent you're right, there are a
lot of studies done to show this, but a lot of
- the law is also a codification of social
norms and of cultural morals. And...
OLLY: Can I talk about them one at a time, cause I'm starting to forget what the first one was, sorry!
LEONARD VAN ROMPAEY: On that point, I'm coming
to my last comment, is that there are laws
in war. There are laws for warfare and there
are laws for the maintenance of peace and
security, this is why we have the UN Security
Council. So my first question was are you
making an argument against AI or against its negative externalities? And the second is-
OLLY: Okay, sorry, can I just do them one
at a time cause otherwise I'm gonna forget them!
I'm making an argument against killing
people. With whatever tools we have for that,
however fancy. So if that involves AI then,
yeah. What I'd like us all to remember is
that warfare is killing people. Like earlier
on Paul said that when you drop a bomb on
a tank you've achieved a strategic objective?
As far as I'm concerned if you drop a bomb
on a tank someone's gotta send a letter to
that guy's Mum. That's what I'm making an
argument against. And the second one?
LEONARD VAN ROMPAEY: The second one is it
seemed to me you were making the argument
that law and morals are mutually exclusive, could
you maybe expand a bit on that?
OLLY: Yeah, that's really cool! So I was using
- I guess I was using law and legal as sortof
folk concepts? I definitely think that moral
principles can impact how laws are made. But
what I think is that AI and to a certain extent
a lot of other systems as well can push us
towards thinking of law as being without moral
content. Can push us towards saying, "This
is lawful and therefore acceptable," when
actually we might want to say, "It is lawful.
But we still shouldn't do it."
STUDENT: I actually have a question about
the idea that war is necessarily impermissible.
If that is the case would a violent, a slightly
violent revolution overthrowing a tyrannical
regime also be impermissible?
OLLY: Hmm. Good question. I think I've got
to say I don't know. I think the answer is
possibly too - the question is possibly slightly
too abstract - I might have to decide on the
day. But I would say that, I would hope that
that slightly violent revolution would understand
that human life is important and has value
and that eliminating it is not to be done
lightly and is not to be done as a goal in
itself. But good question!
ORGANISER: We've got time for one more question,
more or less.
OLLY: We've had a lot of dudes speaking and it's a lot of dudes speaking on the panel
as well so are there any people of other genders who have questions?
MALE STUDENT FROM BEFORE: I hope I don't have to
apologise that I'm a man, for me asking questions,
and I certainly hope nobody else ever has
to apologise for their gender when asking
questions. I don't think gender matters, does it?
OLLY: How like a man to say.
MALE STUDENT: Hm?
OLLY: Nothing.

MALE STUDENT: So it does matter? Cause I don't
think gender matters more than your opinion?
WOMAN AT THE BACK: Oh my God...
OLLY: I will go on record as saying I do think
that gender matters.
MALE STUDENT: I mean that's quite sexist, isn't it?
OLLY: Nope.

ORGANISER: We've got two minutes; we can't go into a heated debate,
or a detailed debate about sexism and gender at the moment.
MALE STUDENT: Yeah, unfortunately earlier I was cut off but I really want to help you
understand that point that you couldn't quite
complete during my last question which was
the whole autonomous and... um, what word did you use?
OLLY: Unsupervised.
MALE STUDENT: So, can you explain to me why you distinguish between autonomous and unsupervised;
I didn't quite get that.
OLLY: Okay -
MALE STUDENT: How do you decide whether something
is - so you see an AI, you say, "Okay no,
that's not autonomous, that's unsupervised,
contrary to what everybody else says." Alright,
so how do you make that decision? I don't
understand how.
OLLY: Okay, so I think again you might have
misunderstood what I was trying to do and
I'm sorry if I didn't make it clear but what
I was trying to do is introduce people to
a new term and with that a new way of looking
at it because I think that we often quite
uncritically accept the idea that AIs are
autonomous, i.e. make decisions independently
of humans. What I was trying to introduce
people to is the idea that they in fact do
what they are told to do, even if they produce
unexpected results as well, and if some human
is involved in creating them then a degree
of moral responsibility persists. So I wasn't
really trying to put together, like, a formula for
deciding what is autonomous and what is unsupervised;
I was trying to help people appreciate that
by using the word 'unsupervised' the moral
responsibility can persist even when we are
told, by people who have a vested interest
in telling us, that it's autonomous. I think
'autonomous' carries a lot of baggage that
we can see more clearly if we put it down. I hope that helps.
ORGANISER: Thank you Mr. Thorn.

