[MUSIC PLAYING]
SPEAKER: Please join me in
welcoming Jamie Susskind.
Thank you.
[APPLAUSE]
JAMIE SUSSKIND: Thank you all very much.
There's a story that's told of
an encounter that took place
in the 19th century between the
great Prime Minister William
Gladstone and the
scientist Michael Faraday.
And Faraday was showing Gladstone his work on electricity, which Gladstone hadn't seen before.
And the prime minister was
looking at it curiously.
And he said to Faraday,
well, what does it do?
What use is it?
And Faraday gave an
explanation as to what
he thought the scientific
implications of it were
and how it marked a great
advance in that respect.
But Gladstone wasn't convinced.
And he kept asking,
more and more rudely,
well, what use is it?
What use is it?
And eventually, Faraday turned
around to the prime minister,
and he said, well,
sir, in due course,
I'm sure you'll find
a way of taxing it.
What that story
shows, to my mind,
is a phenomenon that's
the same today as it
was in the 19th century,
which is that there are lots
of Gladstones in
the world who know
a lot about politics, but
not much about technology.
And equally, there
are a lot of Faradays
in the world who know a lot
about science and technology,
but don't immediately see
the social implications
of their work.
And to my mind, the
Gladstones and the Faradays
are remaking the
world that we live in.
They're the most important
people on the planet
just now, when it
comes to politics.
And I want to start, if I
may, with just four examples
of simple technologies
that are emerging
that we'll all have heard of.
The first is a self-driving car.
I want you to imagine
you're taking a journey
in a self-driving car.
And you ask that
vehicle to speed up,
to go over the speed limit.
The vehicle refuses.
You ask it to park illegally
on a double-yellow line,
just for a moment so you
can nip into the shops.
The vehicle refuses.
In due course, a police car comes along, sirens blaring, asking you to pull over.
For whatever reason,
you don't want the car
to pull over, at least not yet.
But it does against your will.
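To make that concrete, here is a minimal sketch, in Python, of how such rule enforcement might look in code. Every name, limit, and rule below is invented purely for illustration, not taken from any real vehicle:

```python
# A minimal, hypothetical sketch of a rule-enforcing vehicle controller.
# None of these names or limits come from a real system.

SPEED_LIMIT_KMH = 50

def request_speed(requested_kmh: int) -> int:
    """Grant the requested speed only up to the coded legal limit."""
    return min(requested_kmh, SPEED_LIMIT_KMH)

def request_parking(zone_is_legal: bool) -> bool:
    """Refuse to park anywhere the code marks as illegal."""
    return zone_is_legal

print(request_speed(80))                     # -> 50: the car refuses to speed
print(request_parking(zone_is_legal=False))  # -> False: no stopping on the double yellows
```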
Now I want you to imagine you
are using a virtual reality
system, one which
enables you to experience
things which otherwise would
be inaccessible to you.
And you ask that system to
let you, for whatever reason,
experience what
it was like to be
a Nazi executioner at Auschwitz
or to perform a particularly
depraved sexual act which
society would condemn,
by and large, as immoral.
The system refuses.
Now let's think about a development which took place just a couple of months ago in relation to chatbots, where a Babylon system was said to have passed the Royal College of General Practitioners' exam with a better score than the average human candidate.
Imagine living in a
world where chatbots
are not just better at
talking about medicine
and diagnosing conditions,
but are better at talking
about politics than
the rest of us as well.
And finally, think
of the stories
that we've all heard of the soap
dispensers that won't dispense
soap to people of color because
they've only been trained
on white hands, the voice
recognition systems that won't
hear women because they've
only been trained on men's
voices, the passport recognition
system in New Zealand that
declined to issue a passport to a man of Asian descent because it said that his eyes were closed in his photograph.
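Here is a toy sketch, in Python, of how that kind of failure arises: a threshold tuned only on one group's data fails silently on everyone else. All numbers and names are invented for illustration:

```python
# A toy model of the training-data problem: a sensor threshold tuned only on
# light-skinned calibration samples. All numbers are invented for illustration.

def tune_threshold(calibration_samples):
    """Pick a reflectance cutoff from the calibration data alone."""
    return min(calibration_samples) * 0.9

light_skin_samples = [0.80, 0.85, 0.90]   # the only hands the device ever saw
threshold = tune_threshold(light_skin_samples)

def dispenses(hand_reflectance: float) -> bool:
    return hand_reflectance >= threshold

print(dispenses(0.82))  # True: works for the group it was tuned on
print(dispenses(0.40))  # False: silently fails for darker skin it never saw
```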
The problems I've just described were previously, and too often still are, seen as technical problems.
But to my mind,
they're political.
The self-driving car
example is an example
of power plain and simple--
a technology getting
us to do something we
wouldn't otherwise do
or not to do something we
would otherwise have done.
The virtual reality example
goes right to the heart
of the question of liberty.
What is permitted in society?
What should be
permitted in society?
And what should be forbidden?
The chatbot example goes
to the heart of democracy.
In a world where deliberation
takes place increasingly
by machines, or
could do, what place
is there for us in the
systems that govern our lives?
And finally, the examples
of the soap dispenser
and of the voice
recognition system
and of the passport system go
to the heart of social justice
because they deal
with how we recognize
each other in society, how
we rank and sort each other,
and how we place each other
in the great chain of status
and esteem.
Power, freedom,
democracy, justice--
these concepts are the
currency of politics.
And increasingly, I argue
in "Future Politics,"
they're the currency
of technology.
And what I say is that, like
it or not, social engineers--
forgive me-- software
engineers such as yourselves
are increasingly becoming
social engineers.
You see, the two are just the same in my mind now.
I struggle even to
distinguish them.
Technology, I say,
is transforming
the way we live together.
And what I hope to
do in my talk today
is briefly sketch out how I
think it might be doing that.
But the overarching
thesis is clear.
The digital is political.
We can no longer be blind to the social and political implications of things which in the past were seen only as consumer products or as commercial or technical matters.
I thought I'd begin
by outlining the three
main trends in technology,
which lead me to the conclusions
that I reach in
respect to politics.
And you don't need me to
spend much time on these,
but I'll just rattle
them off anyway.
The first is increasingly
capable systems.
In short, we are developing systems-- call them artificial intelligence, call them what you will-- that are increasingly able to do things we previously thought only human beings could do.
And they can do them as well as us, and in some cases better, whether it's lip reading, transcription, mimicking human speech, or detecting lung cancers and predicting survival periods.
Almost every game that we've invented, computers now play as well as or better than human beings.
And the simple thesis is
that progress isn't going
to slow down any time soon.
Some people say it's increasing
at an exponential rate--
so increasingly capable systems.
But the second point of importance is not just that our systems are increasingly capable, but that they're increasingly everywhere.
We live in what's being called
the era of the glass slab
where, principally, our
interaction with technology
takes place on computer
screens, or iPads,
or phones through the
medium of a glass slab.
But what's said is
that, in the future,
technology will be dispersed
around us in our architecture,
in our utilities at
home, in our appliances,
in our public spaces,
even on our clothes
and inside our bodies--
the so-called internet of
things or ubiquitous computing.
Which means that, increasingly, sensors, processing power, and connections to the internet will be distributed all around us in items and artifacts that we previously wouldn't have seen as technology.
So the idea of the glass slab
will gradually fade away.
The distinction between online
and offline, real and virtual,
meatspace and cyberspace
will lose some of its meaning
and, certainly, lose a
lot of its importance.
So we've got increasingly
capable systems
and what I call increasingly
integrated technology.
And finally, we have an
increasingly quantified
society.
Now, what's said is
that every two days,
we generate more
data than we did
from the dawn of
civilization until 2003.
And it's predicted
that by 2020, there'll
be about 3 million books worth
of data for every human being
on the planet.
This is obviously unprecedented.
And what it means is that,
increasingly, what we say,
what we think, how
we feel, where we go,
who we associate with,
what we like and dislike--
almost every aspect of
our lives, in some sense,
will be captured,
recorded as data,
stored in permanent or
semi-permanent form,
and made available
for processing.
Looking at the crowd in
this room, a lot of this
may seem natural
and normal to us
because it's what
we've grown up with.
And all of these trends
have been increasing
through our lifetime.
But to my mind, it marks
a pretty substantial shift
in the state of humanity.
It could be as profound for us as the scientific revolution or the agricultural revolution.
Because it's only just started.
We're only 5 or 10 seconds into this stuff, in historical perspective.
And if you think about what
might be around the corner
10 or 20 years down
the line, then it
would be mad to assume that
the consequences for politics,
for how we live together,
wouldn't be profound.
Because we've never had to live
alongside non-human systems
of extraordinary
capability before.
We've never known what it's
like for digital technology
to be integrated seamlessly
into the world around us.
There's never been
a human civilization
where every facet of its
social and private life
has, in some way, been
recorded and stored as data.
And our duty-- whether we're
Gladstones, or Faradays,
or just citizens--
is to try and understand
what the implications of that are for the future of politics.
And so I thought what I'd do
today is just go through four
of the most basic concepts in
politics-- power, democracy,
freedom, and justice--
and say how I think that
the digital is political
and how your work as
software engineers
will increasingly make
you social engineers too.
People often say that
big tech companies
have a great deal of power.
And it's true, they do.
And that's only likely to
increase in the future.
But I think there's often
a conceptual confusion
that people come
across, which is
that they mistake
purely economic power
for political power.
And I don't think the
two are the same thing.
In politics and
political science,
it's said that a very
basic definition of power
is the ability to get people
to do things they wouldn't
otherwise do or not to do things
they would otherwise have done.
And let's adopt
that as our working
definition for a moment.
Now, I suggest that
technology, digital technology,
is capable of exerting
power in one of three ways.
The first is in the way that we
saw with the self-driving car
example at the beginning,
which is basically
that, whenever we use
a technology, whenever
we interact with it, we
are subject to the dictates
of the code of that technology.
So when you use an online
platform or a piece
of software, you can't
ask it to do something
that it's not programmed to do.
It can only do what
it's programmed to do.
And to take another
prime ministerial example
that often springs to mind--
when Gordon Brown was prime
minister, he went to the US,
and President Obama gave him 25
DVDs of classic American films.
This was, for some reason,
seen as a great insult
to the British people,
in and of itself.
But if that was insulting,
what then happened
when the prime minister
went and sat down at home,
popcorn in hand, was that the
DVDs wouldn't play because they
were coded for US DVD players.
And the digital rights
management system on those DVDs
simply forbade it.
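A toy sketch, in Python, of the idea behind region-coded playback may help; real DVD rights management is far more involved, and every constant here is invented for illustration:

```python
# A toy sketch of region-coded playback, loosely in the spirit of DVD region
# management. Real DRM is far more involved; these constants are invented.

REGION_FREE = 0
PLAYER_REGION = 2  # a player sold in the UK
DISC_REGION = 1    # a disc pressed for the US market

def can_play(player_region: int, disc_region: int) -> bool:
    """Allow playback only if the disc is region-free or matches the player."""
    return disc_region in (REGION_FREE, player_region)

print(can_play(PLAYER_REGION, DISC_REGION))  # -> False: the gift won't play
```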
Now, we know about
that technology,
and we understand
why it's happened.
But to the untrained eye,
it looks like a glitch.
But it's not a glitch.
With technologies, we can only do what the code-- what the programmers-- says we can do. It's a very simple fact about technology.
And this was
acknowledged very early
on when we started using computers and the internet.
And people started saying
well, that means code is law
or, at least, code is like law.
But things have developed
since then, quite recently.
The first is that we used to think of the code inside our technology as a kind of architecture-- people used to talk about software architecture. And the language we use reflects that-- platforms, and portals, and gateways-- as if it were a metaphor for physical architecture.
That's no longer going to
be the case in the future.
Increasingly capable
systems means
that the code that
animates our technology
is likely to be dynamic.
It might be capable of learning
and changing over time.
It might be remotely
changeable by its creators.
Or it might change of its own accord.
So the code that used to control us in the early days of the internet and cyberspace was more like dumb architecture; in the future, it's likely to be far more dynamic.
The second big change is that
code is no longer just power
or law in cyberspace.
It's in real space, too.
And that's because of
increasingly integrated
technology.
When we go around our
daily lives and interact
with technologies,
we can't shut down
or log off like we might have
been able to in the past.
If that distinction between real and virtual, or online and offline, or cyberspace and meatspace does dissolve-- if people are right about that-- then code is going to be able to exert power on us.
Technology is going to
be able to exert power
on us all the time.
And there's no way of
getting away from it.
So that's the first
way that I would say,
simply, technology can
be used to exert power.
The second and third ways are more subtle. The second is through scrutiny.
The more you know
about someone--
what they like, what they
fear, what they hate--
the easier it is to influence them.
It's the basic premise
behind all online advertising
and all political
advertising as well.
If it's the case that society
is becoming increasingly
quantified, that all of our thoughts and feelings and inner lives are becoming better known
to those who make and
govern technologies,
then it'll be easier
to influence us
because they'll have more
information about us.
It's a simple point.
There's a deeper and
more subtle way, though,
that people gathering
information about us
allows them to exert power.
And it's the
disciplinary effect.
When we know we're
being watched,
we change our behavior.
We police ourselves.
We're less likely to do things
that society would think are
sinful, or shameful,
or wrong, or that
might land us in hot water.
Google's not a bad example, because one of the things that Google apparently does is report to the authorities people who search for things related to child pornography.
The dissemination of that fact, in itself, is likely to change-- and does change-- the way that people behave.
So the second way that
technology exerts power
is by gathering
information about us, which
can be used to influence us
or by causing us to discipline
and police ourselves because
we know that information
is being gathered about us.
And the third is the most
subtle of all and possibly the
most powerful of all.
And I call it
perception control.
We, all of us, rely on
other people or other things
to gather information
about the world,
distill it into something
sensible and comprehensible,
and present it to us
in a digestible form--
a kind of filtering.
Otherwise, all we'd
know about the world
is what we immediately perceive.
Now increasingly, we
rely on technologies
to do the work of filtering for
us, whether it's when we go out
and look for information, such
as in a search function, or when information is gathered and brought to us in a news feed.
Increasingly, we're subjecting
our immediate sensory
perception to technologies as
well with augmented reality--
over our eyes, over our ears,
over our bodies in haptic form
or virtual reality too.
And those who control the
flow of information in society
exert a great deal of power.
Because you know that the
best way to stop people
from being upset
about something is
to stop them knowing
about it at all.
Or the best way to get
people angry about something
is to tell them over and
over that it's disgusting
and that it's wrong and
that it has to be punished.
And the work of filtering,
presenting to each of us
the world beyond
our immediate gaze,
is increasingly done
by technologies.
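Here is a minimal sketch, in Python, of that filtering idea: a scoring function, with weights somebody chose, decides which stories fill the few slots on a screen. The stories, fields, and weights are all invented for illustration:

```python
# A minimal sketch of filtering as power: a scoring function, with weights
# somebody chose, decides which stories fill the two slots on a screen.

stories = [
    {"headline": "Council raises parking fees", "relevance": 0.9, "engagement": 0.2},
    {"headline": "Celebrity feud escalates",     "relevance": 0.1, "engagement": 0.9},
    {"headline": "New hospital wing opens",      "relevance": 0.7, "engagement": 0.4},
]

def score(story, w_relevance=0.3, w_engagement=0.7):
    # Whoever sets these weights shapes, in part, what the reader perceives.
    return w_relevance * story["relevance"] + w_engagement * story["engagement"]

feed = sorted(stories, key=score, reverse=True)[:2]  # only two slots on screen
for story in feed:
    print(story["headline"])  # the parking-fee story never appears
```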
And so when I say that
technology is powerful,
I'm usually referring to
one of those three things--
the ability to force
us through code to do
something, the ability to
gather information about us,
the ability to control the
way we perceive the world.
And there's nothing
necessarily nefarious or wrong
with any of these.
It's just a helpful,
I think, way
of thinking about how technology
can change the way people
behave, how it can exert power.
The other important implication of technology, however, flowing on from how it exerts power over us, is how it affects our freedom.
Now, the great debate that
we've all heard for 20 years
is how increasing
technologies of surveillance
will potentially lead to states
and maybe tech firms having
too much power over people
because they watch us
the whole time and are
capable of regulating us.
That's an important debate.
It's not the one I want to
necessarily talk about today.
Because I think that the effects
of technology on our freedom
are actually a little
bit more subtle.
So I would ask the people in the room to ask themselves: have you ever streamed an episode of "Game of Thrones" illegally, or taken a second helping from the Coke machine even though you'd only paid for one, or dodged a bus fare, here or abroad, by jumping on a bus without paying for a ticket and jumping off again?
74% of British people admit
to having done these things.
It's not because
they're all scoundrels.
It's because there is this
hinterland in the law where
people are allowed to get
away with things from time
to time without being punished
as long as it's not constant,
as long as it's not egregious.
That's why so many people do it.
I suggest, in a world of
increasingly capable systems
and increasingly
integrated technology,
those little bits of naughtiness
will become much more
difficult. Whether it's because your smart wallet automatically deducts the bus fare when you jump on the bus, or because the "Game of Thrones" episode just becomes impossible to stream as the digital rights management technology gets so good, or because you need face recognition software to get that second helping of Coke.
And if you think
that's petty, you
should know that, in Beijing's
Temple of Heaven Park,
facial recognition
software is already
used to make sure that
people don't use more
than their fair share
of toilet paper.
And if that's the world
that we're moving into,
then that hinterland of
naughtiness-- the ability
to make little mistakes
around the edges,
like getting your self-driving
car to go over the speed
limit or park illegally--
becomes a lot more difficult.
I think that has implications for freedom.
The more profound
implication for our freedom,
though, is what I call
the privatization of it.
Increasingly, we use
technologies to do the things
that would traditionally be
considered freedom-making,
whether it's freedom of speech--
an increasing amount of
important political speech
takes place online
on online platforms--
whether it's freedom
of movement in the form
of self-driving cars
or whatever it is that
comes next, whether it's
freedom of thought, the ability to think clearly and rationally, which is obviously affected by the systems that filter our information for us.
The good news about
technology, obviously,
is that our freedom can be
enhanced by these technologies.
The interesting point, though, is that, whereas for most of human history questions of freedom were left to the state and were considered political questions to be decided on by the whole of society, nowadays they're increasingly privatized.
What you can do on a
political speech platform,
what you can do with
a self-driving car,
how Facebook or Twitter
filters the news that you see--
these aren't decisions that you and I take-- well, maybe you-- they aren't decisions that most of us take. They're taken privately.
And they're done by
tech firms, often,
acting in what they
perceive to be the best
interest of their consumers.
But they're
ultimately, just now,
a matter of private
decisions taken by tech
firms and their lawyers.
And I think we need to think
through quite carefully what
the implications of this
are, just in political terms,
looking at the long
run of human history.
Because what it first
means is that tech firms
take on quite a significant
moral burden when
they decide what
we can and can't
do with their technologies.
That was previously a
matter of societal debate.
So the VR system I
think is a good example.
When you get a Virtual
Reality system that
is supposed to be
customizable in some way
or give you lots of
different experiences,
should it be up to you,
the individual user,
to decide which experiences you
want, depraved or otherwise?
Should it be up
to the tech firm?
Should it be up to
society as a whole?
The traditional answer given by human beings is that society, as a whole, sets the limits of what is right, what is moral, and what is forbidden. Right now, we don't live in that world.
The second thing is that, obviously, through no fault of their own, tech firms are not answerable to the people in the same way that the governments that set laws are.
The third difference between
a tech firm and a state
is that, in the state,
the law develops
over time in a public
and consistent way that
applies to everyone.
Whereas, tech firms
do things differently.
Google might have a different
policy towards hate speech
than Twitter does, a different
policy than Facebook does.
And some people would say that's
a good thing-- for reasons
I'll come on to in a second.
And others would
say it's a challenge
to the development, the overall
moral development of society,
of shared values between us all.
Just to take two
examples that have
troubled political philosophers
since time immemorial--
one is the question
of harm to self.
Should we, as grown-up adults,
be able to harm ourselves?
So if I ask my self-driving
car to reverse over my head,
should it do that because
it's my autonomous decision
that I'd like it to do that?
Or my automated cooking
system in my kitchen--
if I want it to make a curry
for me that's so spicy that
it's likely to send me to
hospital, but it's my choice--
should it do it?
Or should systems be
designed to protect us?
The idea that systems,
beyond our control,
should be designed to
protect us might seem anodyne
in this room.
But to John Stuart
Mill and Jeremy Bentham
and other philosophers
like that on whom
our legal system and its
principles are often based--
that would have been anathema
to them for the same reason
that suicide stopped being
illegal not so long ago.
Because people are
generally thought
to be able to do things which
harm themselves and should
be free to do that.
Even more so, the
question of immoral acts--
there are very few laws left on
our statute books which stop us
from doing things
which are considered
immoral or disgusting.
In the privacy of
your own home, you
can do almost any sex
act apart from something
which causes very serious
harm to yourself or to others.
And so you can anticipate, in the future, free speech and free action campaigners arguing: if I want to simulate sex with a child on my virtual reality system, in circumstances where it causes no harm to anyone else, I should be allowed to do that.
And actually, a
governing principle
of English law for
centuries has been,
if something doesn't
harm other people,
you should be free to do it.
Now, there might be
disagreements in the room
about whether that's
right or wrong.
The interesting point for me is
that, right now, that decision
is not going to be taken by the state.
It's going to be
taken by companies.
And that marks quite a profound
shift, I think, in the way
that politics is arranged and
the way that political theory
needs to proceed.
Now, in the book--
I won't bore you
with this too much--
I try to outline a series of
doctrines, of ways of thinking,
that can help us to think
clearly and crisply about
what's at stake when
we limit and don't
limit people's freedom.
So I've got this idea of
digital libertarianism,
which some people
are going to adopt,
which is the idea that,
basically, freedom is freedom
from any form of technology.
If I don't want to have
technology in my house,
I should be free not to have it.
There should be no requirement to have smart devices or smart utilities.
And any piece of code
that restricts my freedom
is unwanted.
More likely is that
people will adopt
a position of what I call
digital liberalism, which
is that the rules that
are coded into technology
should try to maximize
the overall freedom
of the community, even
if it means minimizing
the freedom for some.
A particular doctrine,
which I think
will appeal to free marketers,
I call digital confederalism,
which basically means
that any company should
be able to set its own rules
so long as there's always
a sufficient number
of different companies
so you can switch between
them according to your choice.
People will say, that's the
way to maintain freedom--
lots of different
little subsets.
Digital moralism-- the idea
that technology should encourage
us to be better people.
Digital paternalism-- the
idea that technologies
should protect us from ourselves
and our own worst instincts.
Or digital republicanism--
for centuries, humans
have demanded that, when
power is exerted over them,
that power should
not be unaccountable.
That power should be
answerable in some way,
even if that power is
exerted benevolently.
It's why the
American and English
Revolutions, to a certain
extent, both happened.
It wasn't just
people's frustration
that the monarch
was behaving badly.
It's that they could
behave badly at any point.
So a freedom which relies on
the benevolence of someone else
is no kind of freedom at all.
And digital
republicanism, therefore,
means that, in any
technology, whenever
power is exerted
over you, you should
be able to have a say in it.
You should be able
to customize it,
to edit it according to your
principle of the good life,
to your vision of
what's right for you.
These are all ideas that
are new and strange,
but I think we're going to
have to grapple with them,
whether we're Gladstones
or whether we're Faradays,
if it's right that so
many of our freedoms
are now going to be in the
hands of technologies and people
who make them.
Democracy-- we all
know the ways in which
technology has
affected democracy
as we currently experience it.
It's changed the
relationship between citizens
and other citizens,
allowing them to organize, as in the MoveOn, Occupy, and Arab Spring movements.
In some places, it's
changed the relationship
between the citizen
and the state,
enabling a more collaborative
form of government--
e-petitions, online
consultations.
It's definitely
transformed the nature
of campaigning between
party and activist
and between party and voter.
Activism is now, obviously, almost entirely organized online-- at least its central organization is.
And Cambridge Analytica, the Brexit referendum, and the 2016 American election show that, increasingly,
big data and the
technology surrounding it
are used to pinpoint each of us
based on psychological profiles
or profiles of what we like
in order to influence us
in a particular way.
Now, everyone gets very
upset about this stuff
or very excited about it.
And I think it's right to.
But it's ultimately
an example of what
I call faster-horses thinking.
The reason I call it that is because, when Henry Ford, the automobile pioneer, was asked what people had told him they wanted, he reportedly replied: faster horses.
It's sometimes difficult
for us to conceive,
in politics, of systems
that are so radically
different from our own.
And instead, we just
think of technologies
as augmenting or supercharging
what we already have.
And so the changes that I've
just described to democracy
are all profound.
But they don't change the
nature of democracy itself.
They work within the system
to which we are presently
accustomed.
And I wonder if that's going
to be sustainable or true
within our lifetime.
I suggest there'll be
four challenges to the way
that we currently
think about democracy.
The first is the one that I
described in the introduction.
If bots get to the stage where they are good enough to debate in a way that is more rational and more persuasive than us-- or even if they don't-- and a lot of political speech takes place on online platforms, how on Earth are we supposed to sustain a system of deliberation in which you and I have a meaningful say when, every time we speak, we're shot down or presented with 50 facts to the contrary?
Now, remember that,
in the future,
bots aren't going to be
disembodied lines of code.
They'll have human faces.
They'll be able, if the sensors are there, to detect human emotion. They'll be persuasive and real-seeming.
So deliberation, which has been part of our concept of democracy since ancient Greece,
could be completely disrupted
by a technology that's
already underway.
No one really talks
about that that much.
I think that's something that
could be a problem within 10
or 15 years.
And that's pretty profound.
The second big challenge is that we're now entering a time where it's easily foreseeable that we could have full direct democracy-- where, basically, using a smartphone or whatever replaces it, we vote on the issues of the day directly, with no need for politicians.
Or wiki democracy, where we edit the laws ourselves-- some model of that. It's not at all technically infeasible in the course of our lifetimes.
We need to reopen the debate about whether that's desirable.
How much democracy is
too much democracy?
Why is democracy valuable
in the first place?
I don't think we're
ready for that debate.
I don't think it's one we've even started having.
It wouldn't surprise me at all if a natural offshoot of the populist movements that we see just now is a demand for more direct accountability for political decisions-- people voting using the devices in their pockets.
Data democracy-- it's going to
become increasingly weird that
we consider a system legitimate
on the basis that we put a tick
in a box once every five years--
an almost inconceivably
small amount of data
is used to constitute the
government of the day.
I think there's a theoretical and philosophical challenge to be made about the abundance of data, which really reflects the lives that we actually lead, and the role that it should play in legitimizing governments.
That is to say, if a government
doesn't pay attention
to the data that actually
exists about its people,
how can it really be said to represent them?
It's an interesting question. It's one that we haven't got to yet.
I suspect it will
rise in salience.
And the final question is
going to be about AI democracy.
It's not one to dismiss. As we entrust artificial intelligence systems with more and more valuable things-- trading on the stock market, robots conducting operations, one even being appointed to the board of a company in Singapore-- we might ask: what role should AIs play in public policymaking, in the decisions made by public policymakers?
Which areas of politics
would we be better served
with systems taking the
decision on our behalf,
perhaps, according to principles
that are agreed democratically?
Or should we each have an
AI system in our pocket
which votes on our behalf 10,000 times
a day on the issues of
the day based on the data
that it has about
us and what it knows
about our preferences
and our lived experience?
We're just at the cusp
of these questions.
But the system of democracy
that we have is a very old one.
And it would very
much surprise me
if faster horses was all we got,
if the disruption we've already
seen to democracy
was the last we
saw of democratic disruption.
That would seem to me to
be against the grain of how
the digital really is
becoming political.
Final concept-- social justice.
When political theorists
talk about social justice,
they tend to mean one of two things.
First is distribution-- how
should benefits and burdens
be distributed in society?
Equally? According to some principle of merit, to the best? Disproportionately, to the most needy?
These are all arguments
that philosophers have had
and politicians have
had for generations.
And in the past,
they were settled
by the market, which
distributed goods among us
and by the state, which
kind of intervened
and regulated the
distribution of those goods.
Increasingly, it's
algorithms that
are being used to
distribute goods in society.
72% of CVs-- or resumes,
for an American audience--
are never seen by human eyes.
The systems that make
decisions about who gets jobs
have profound distributive
consequences for who does well
and who doesn't in society.
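Here is a minimal sketch, in Python, of the kind of keyword screen that can decide whose CV a human ever sees; the keywords and the rule are invented for illustration:

```python
import re

# A minimal sketch of a keyword screen deciding whose CV a human ever sees.
# The required keywords and the rule are invented for illustration.

REQUIRED_KEYWORDS = {"python", "degree"}

def passes_screen(cv_text: str) -> bool:
    words = set(re.findall(r"[a-z]+", cv_text.lower()))
    return REQUIRED_KEYWORDS <= words  # every required keyword must appear

applicants = {
    "A. Jones": "Self-taught Python developer, ten years of experience",
    "B. Smith": "Computer science degree, Python and Java",
}

for name, cv in applicants.items():
    verdict = "seen by a human" if passes_screen(cv) else "rejected unseen"
    print(name, "->", verdict)  # the self-taught candidate never gets a reader
```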
Mortgages, insurance,
a whole host
of other distributively
important things
are affected by algorithms.
For example, the fact
that algorithms now
trade on the stock market
has caused a ballooning
in the wealth that flows to
people who use those automated
systems--
mostly banks.
That has distributive
consequences.
So what political
philosophers typically thought
of as a question of
political economy--
the market and the state--
that question of social
justice is increasingly
entrusted to the people
who write those algorithms.
That's the first way
that technology is
going to affect social justice.
But there's more to justice than
just the distribution of stuff.
When we see the slave kneeling
at the feet of the master,
or the woman cowering
before her husband,
or the person from a black
or minority ethnic community
having insults hurled
at them, the injustice
there has nothing to do with
the distribution of stuff.
It's what's called an
injustice of recognition
where we fail to accord
each human being the dignity
and respect that they deserve.
Now, in the past, it was
really only other people
who could disrespect
us in this way.
In the future, as we've seen,
it can be systems, as well.
If you think of the frustration you feel when your computer doesn't work today, imagine what it's going to be like when one doesn't recognize your face because it's the wrong color, or doesn't hear your voice because you're the wrong gender, or doesn't let you into the nightclub because your face doesn't meet the specifications that the club owner has set.
Technology is increasingly used
in questions of recognition.
And I think that's of profound importance for social justice.
The other way that
technology affects justice
is that it ranks us.
Today, we all know what the
currency of social status is.
Increasingly, it's likes,
it's retweets, it's followers.
People who, half a
century ago, would not
have held high
status in society,
now hold high status in society.
And the reason they do is that a particular set of algorithms, designed by people like you, decides what the key factors are.
Who's in and who's out?
Who's up and who's down?
Who's seen and who is unseen?
Who's great and who's nothing?
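A toy sketch, in Python, of algorithmic status ranking makes the point: change the weights, and you change who's up and who's down. All names, numbers, and weights are invented:

```python
# A toy sketch of algorithmic status ranking. Change the weights and you
# change the social hierarchy. All names, numbers, and weights are invented.

people = {
    "poet":      {"followers": 200,     "retweets": 10},
    "prankster": {"followers": 900_000, "retweets": 5_000},
}

def status(metrics, w_followers=1.0, w_retweets=50.0):
    # The choice of weights is the politics of recognition, in miniature.
    return w_followers * metrics["followers"] + w_retweets * metrics["retweets"]

ranking = sorted(people, key=lambda name: status(people[name]), reverse=True)
print(ranking)  # ['prankster', 'poet'] under these weights, not by any human judgment
```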
Now, there's nothing inherently
nefarious about this, nothing
inherently wrong with it.
But it used to be that only
people, and our social norms,
and occasionally laws like the
Nuremberg laws or the Jim Crow
laws, which specifically
discriminated against people,
were the things that decided
the politics of recognition.
Now that's done by technology.
And it's increasingly
in the hands of people
who aren't politicians and who
aren't necessarily philosophers
either.
So just stepping back--
power, democracy,
freedom, justice--
these used to be words
that just politicians
and political philosophers used
in their day-to-day discourse.
I say that they have to be words
that software engineers use
in their day-to-day discourse
and that tech firms know
and are familiar
with and understand.
I'd like to close
with two quotes that
have always stuck out to me.
The first is this-- and
you might have heard it--
"The philosophers
have only interpreted
the world in various ways;
the point is to change it."
The second is this--
"We're not analyzing the
world; we're building it."
And essentially, they
mean the same thing.
What they say is you can
talk, and you can think,
and you can debate.
But the real people
who create change
are those who go out and do it.
The first quote
is from Karl Marx,
it's from his "Theses
on Feuerbach" in 1845.
It was a rallying cry
for revolutionaries
for more than a century
after it was published.
The second quote is
from Tim Berners-Lee who
couldn't be more different from Karl Marx in his politics, his temperament, or, indeed, his choice of facial hair.
But the point's the same--
the digital is political.
Software engineers are
increasingly social engineers.
And that's the
message of my talk.
Thank you very much.
[APPLAUSE]
SPEAKER: Thank you
very much, indeed.
We do have time for questions.
AUDIENCE: So what do you think
of the increasing tendency
of governments to abdicate
responsibility to tech firms
to make decisions?
The classic example, in
the last week I think,
the EU has said we want tech
firms to make the decision
and take things
down within an hour.
Do you think that's
a good trend?
JAMIE SUSSKIND:
I'm not sure what
you mean by the abdication
of responsibility.
AUDIENCE: Or the
delegation, if you want--
where the government could
choose to regulate, but instead chooses to say: you must decide.
JAMIE SUSSKIND: The
message I have is this.
If it's the case that
there's going to be--
that tech firms are going
to be taking decisions that
are of political
significance, in due course,
people are going to expect to
know what those are, to demand
transparency, to
demand accountability,
to demand regulation.
Tech firms essentially
have two choices--
not mutually exclusive.
They can try to get it right themselves and articulate why they think they're getting it right: to set out clearly the way their algorithms work, insofar as that's possible in the market system, and to justify them by reference to principles of justice, democracy, or freedom.
The more of that that is done
privately and willingly by tech
firms, the less likely
it is that the state
is going to come barging
in and start regulating.
And we've actually seen that.
I think tech firms
are increasingly
becoming answerable to the
unhappiness or the perceived
unhappiness of their
consumers about the way
that things are working.
But I think if the state
just came trundling in
and started regulating,
the tech firms
would say the same as any private corporation has said since the invention of the state, which is: I can't believe these fools at the center of government are trampling all over matters that they don't understand--
these Gladstones.
But we have to find a compromise
between the Gladstones
and the Faradays-- the people
who know a lot about tech
and the people who know
a lot about politics.
And I think if tech firms
assume responsibility,
they're less likely to face
regulation which they consider
to be dumb or ill-informed.
AUDIENCE: Thank you.
So when you said--
and I think it was in the first
quarter or half of the talk
regarding the
privatization of policy
through the use of
tech in these firms--
where does open source fit
into this and free software
and that whole movement?
Because one would argue--
and I think a lot of
people would probably
agree with me--
that open source is a political movement of tech-- one that arose before it was really known in the private world that tech would become political.
So where does that fit
into this whole picture,
and how does it
change the equation?
JAMIE SUSSKIND: It's
a great question.
And the answer is
it obviously doesn't
fit into the very simple
dichotomy that I gave.
But I think it's
also fair to say
that, although the open-source
movement has become incredibly
important in many respects, most
people don't know what it is.
Most people, when
they use technologies,
don't have the opportunity
to customize or edit
those technologies
or to understand
the rules that govern them.
If more tech were open source, that would definitely resolve some of the tension around what appear to be private entities exercising a kind of public power, because they'd be using code that can at least be seen and understood by its consumers.
I just don't think
it yet characterizes
a lot of the technologies
of power that I described.
AUDIENCE: OK, thank you.
AUDIENCE: Thank you.
I'm also going back to the
issue of privatization.
And I think, in
some ways, we could
argue that there's
a benefit here
that, with an increased number
of actors making decisions,
we get pluralism, and
that's not a terrible thing.
But I think that maybe--
I wonder if you can reflect
on whether this claim
of privatization is as
solid as you suggest.
A lot of these technologies
were funded by public bodies,
by the state.
And I wonder if we need
to revisit the genesis
of a lot of these technologies.
Because we often forget that
these were funded by taxpayers,
and that they're not strictly
private architectures
or private systems.
JAMIE SUSSKIND: I think it's a
really valuable and important
point.
There are two
reflections I would make.
The first is that the fact that a technology derives from public investment doesn't necessarily
mean that the public
retains a degree of control
or transparency
or accountability.
It's the use and the application
of the technology that
matters for the political purposes that I'm describing, rather than their genesis.
The second point, which maybe I didn't make strongly enough in my speech, is that, a lot of the time, the alternative
to technologies being
controlled privately
is technologies being
controlled by the state.
And there are huge,
enormous risks with that.
The modern state is already the
most powerful and remarkable
system of control that
humans have ever invented.
The idea of endowing the
state, through regulation
or nationalization or whatever
it is as some people suggest,
with further power in the
form of awesome technologies
of surveillance, of force,
and of perception control
is not something that I
would welcome inherently.
So actually, the big
political tension I say,
for the next half
century or so, is
going to be how
much of this stuff
is best left to the custodians
in the private sector acting
responsibly?
And how much should be brought
under the aegis of the state?
But it's certainly not a simple dichotomy of state good, regulation good, privatization bad.
I'm not just saying that
because I'm at Google.
I think the argument
is often forgotten
by those who
criticize tech firms,
that the state can act in a
pretty heavy-handed way when
it comes to technology as well.
There's a balance to be struck.
AUDIENCE: I guess you probably partly answered my question just now.
But my question is, in a
similar sense about the-- like
what option does the
regulator even have?
And I'm thinking now
of a global scale.
So the status quo as I see it--
and tell me if you disagree--
regulation is always playing
catch-up with technology.
And the question
is, if the regulator
wants to turn this around, what
option would they even have?
Because if one
country would start
trying to invert
this and basically
try to have regulation be the
default, and as a technologist,
you would basically have
to seek an exception
for every single thing you want
to do rather than what is now--
like, technology companies
invent new paradigms
that affect society, and
then regulation catches up.
So obviously, if one state
started trying to invert this,
tech firms would probably
move away from that country
and do their
innovation elsewhere.
And there would always be sort
of islands of deregulation
as there are islands of tax
harbors and that kind of thing.
So if you think it from
that point of view,
what's your view on that?
JAMIE SUSSKIND: Well, you've
identified two problems
that the regulator faces.
One is you're always behind.
The technologies
are invented first,
and then you're kind
of playing catch-up
to try and understand their
implications and, if necessary,
regulate them.
The second is the problem of
multinational corporations.
If you're just one
country, it's very hard
to set a rule that
others don't follow,
which might place you at
some kind of disadvantage
economically or commercially and
incentivize that firm to leave.
There are other problems too, like the problem that regulators sometimes don't have the best people; the best people are in the private sector.
I know that with the
regulation of finance,
for instance, that's
a consistent problem.
So there's no doubt that
the task for regulators
is formidable.
What are their options?
Well, they've got
to do their best.
Tech firms, I think,
shouldn't just
see it as a matter
of we'll do whatever
we like until we're regulated.
I think the whole
system would function
better if purely commercial
considerations didn't just
motivate the policy
set by tech firms.
And increasingly, they don't.
I wouldn't, for a second,
suggest that they always do.
The problem of international
movement of capital
or of competitive
advantage is a tough one.
The EU is actually not a
bad counterpoint to that.
The GDPR-- say what you like about it-- is a kind of regulation, and it applies to every country in Europe.
And that makes it easy for them
to act in concerted fashion.
I would welcome that. I see technology, like climate change, as one of those issues that benefits from international collaboration and cooperation.
Part of the way we think about it, though, is as an economic problem-- as if the power of tech companies, or the problems that can be caused by technology, were just matters of economics.
And this is actually
part of the mindset
that I want to try
and change, which
is, we have to start seeing
them as political problems.
And I would hope and encourage,
for the part of states,
that they don't
deregulate or create
Wild Wests out of a desire to
attain an economic advantage.
Countries do do that, though.
There's just no doubt about it.
So I hold my hands up, and I
say the task of the regulator
is formidable.
I think there's so little
regulation just now, though.
And technologies are
becoming so much more
persuasive and so
much more powerful,
something will be done.
As I said earlier, the
more that tech firms
are involved in that
proactively and sensibly
the better it will be
for them, for the states,
and for the people
who use the systems.
AUDIENCE: Hi.
I have a question more about
the concentration of power
and the accountability which
people demand after that.
Increasingly, companies like Google or Facebook have become public utilities, where we use search or a social network on a daily basis.
And that is the
concentration of power.
Do you think, in 20
years from now, we
will see a Google
or a Facebook that's
held accountable,
maybe, inside the state,
and we actually vote on
how that's regulated?
JAMIE SUSSKIND:
Well, I certainly
don't think nationalization,
public ownership
of things like Google or
Facebook would be a good thing.
I'm also not sure about "public utility"-- I think it's the best word we've probably got just now to describe the kind of status that these companies have within our modern economy and our modern society, but I don't think it accurately describes it.
Most public utilities
don't exert power over us.
We rely on them.
We rely on the water company,
the electricity company,
but they don't get us to do
things we wouldn't otherwise
do.
They don't influence elections.
They don't determine
matters of social justice
or what is and isn't permitted.
So I think the public-utility analogy is helpful only up to a point.
Do I think that,
in the future, it's
possible they would
be nationalized
or part of the state?
I guess so.
I don't think it would be sensible.
But again, I think the
regulatory environment,
the regulatory future,
is up for grabs.
AUDIENCE: So you talked briefly about how people still have this mindset of faster horses when it comes to technology. What are the hallmarks, and on what time scale do you expect the public mindset to shift from thinking of technology as just a step change to seeing it as a revolutionary change?
JAMIE SUSSKIND: It's a
really interesting question.
And I'm not going to give
you a defined timescale
because I think, again,
it's up for grabs.
I think what I try to do in my
book is to sound the foghorn
and say we need to
think about this stuff,
not just as consumers,
but as citizens.
We need to not think about
it like faster horses
but to see the fundamental
revolutionary change.
A, some people are going to
disagree with that thesis.
B, a lot of people aren't
going to be interested in it.
They're just going to be
interested in interacting
with technology as consumers,
which is what most of us
do most of the time--
that looks cool.
This is a cool new function--
without necessarily seeing
the huge broader picture.
So I don't have an
answer to the question
as to when I expect, if at all,
public perception of this stuff
to change.
I do think that
market forces are
likely to result in the
transformations I described.
So insofar as the political classes are paying attention,
I think, easily
within our lifetimes,
we're going to see the big
question of politics change
from what it was in
the last century,
which was, to what extent
should the state be
involved in the
functioning of the economy?
And to what extent should things
be left to the free market?
That was, like, the
big ideological debate
of the last century.
I think the big ideological
debate of our lifetime
is, to what extent should we
be subject to digital systems,
and on what terms?
And I see, over the
course of our lifetime,
the debate shifting that way
because I see it as almost
inevitable if the technologies
develop in the way
that people predict they will.
AUDIENCE: You said
a few times, you'd
like to see technology
companies, technologists, get
more involved in politics.
In a lot of people's
heads, that's
equated with
lobbying, which tends
to be seen as a bad thing.
Can you talk about maybe
some of the positive ways
you can see technologists
or technology companies get
involved in politics?
JAMIE SUSSKIND: In fact, I
think that analogy perfectly
demonstrates the change in
mindset I think we need.
Powerful companies in
the past-- say, like,
the great monopolies of
the early 19th century--
had power in the
political process.
But they exerted it
indirectly through lobbying
and through campaign finance.
What's different
about technology
is that it affects us directly.
If you're a tech
firm, you don't need
to go through the
government in order
to exert power over people or
to affect democracy or affect
freedom or justice.
That's what's so profoundly
different about technology.
And so I say that people
who work in tech firms
do work in politics
because their inventions,
their algorithms, their
systems are the ones
that are actually changing
the way that people live
and changing the way
that we live together.
So it's not that Mark Zuckerberg should run for president. It's that Mark Zuckerberg is already, in a sense, some kind of president, because he affects all of us
in ways that he should
know about more.
And so he should
take that power,
as I'm sure he does,
responsibly and seriously.
So what I don't want
people to go away thinking
is I'm saying that
we need technologists
to step into the
political process more,
although, there
should definitely
be constructive engagement.
The point is that, if you work
in technology, in a sense,
you already work in politics.
AUDIENCE: So what's the
positive improvement
that you'd like to see?
JAMIE SUSSKIND: The positive
improvement I'd like to see
is the Tim Berners-Lee idea
of philosophical engineers.
He's the one who said,
we're not analyzing a world;
we're creating it, and we're
philosophical engineers.
Well, sometimes.
The arc of a computer
science degree is long,
but it doesn't necessarily
bend towards justice.
Just like people who
know a lot about politics
shouldn't be assumed to
know a lot about technology,
I think that people
who work in technology
should have a good grounding
in the values and principles
that they are--
whether they know it
or not-- embedding
in their work.
And that's why I wrote
the book in many ways.
It's a book about
tech for people
who know a lot about politics.
It's a book about
politics for people
who know a lot about tech.
AUDIENCE: Hi.
So my question is, considering
that, in private companies,
the end goal or the incentive
is usually to make their users
happy--
which is starkly different from
what the state cares about,
which is to promote the general
well-being of their citizens--
it's hard for me
to think of things
like filtering content
as an exertion of power
rather than an enabler of their
users to exercise their freedom
as they would like.
And so I guess my
question is, when you're
saying that technologists should
be these social engineers,
do you think that requires a
fundamental shift in what we're
prioritizing in adopting this
more paternalistic approach
towards, oh, we think this
would be good for our users
rather than, this is what the
evidence shows our users like?
JAMIE SUSSKIND: Again,
a great question.
And if I may, I'll unpick it.
Do I think there needs to
be a change in priorities?
My first answer would be to
dodge and say I don't know.
Because most of the
algorithms that you describe
are not made public.
And if you look at what
Jack Dorsey said to Congress
the other day--
and one can applaud
him for bringing it
to the public's attention-- he
basically said we got it wrong.
600,000 accounts,
including the accounts
of some members of
Congress, were wrongly
deprioritized from
political discourse
at quite a sensitive time.
The answer to that, to my mind,
would be a Twitter algorithm
that people are capable
of understanding
and that people are capable
of critiquing rather
than a one-paragraph
explanation from Twitter which
says what their policies are and
says "and many other factors"
at the end.
We, the users, are not in a position either to know whether the algorithm actually embodies the values that are stated or, to a certain extent, what those values are.
So the first thing-- and one of the things I talk about in my book-- is that the more transparent companies are, the more people will be comfortable, and justifiably comfortable, just as they are with governments that become more transparent, that the people who exercise power over them are doing it in a responsible way, even if it's just a small amount of power.
The second thing I would
say is that you correctly
identify that the
intentions of the state
are different from
the intentions
of a private company operating
within a market system.
The difficulty with the market-system approach to tech, with just letting the market do its job, is, first of all, that you get monopolies.
And so even if I don't like Facebook, if I want to be on a social-networking system, there's no point moving to one which is just me and my mum, even if it's superior in loads of respects, because there's a network effect there, and Facebook has dominated it.
The second-- and it relates back to the first-- is that we don't always know the principles on which companies are competing.
The difference between
the way that news
is ranked in one system and
news is ranked in another system
is apparent only
from what we see.
But we don't always
know what we don't see.
And so I think it's hard to say that people are fully empowered to make decisions like that if, A, they don't have a choice, because there's a monopoly, and, B, they aren't shown the full basis of the choice they have to make.
You are right, though, that a pluralist system, where people have a choice of moving between systems of competing values according to their own values, would definitely be one solution to the problem of what might be perceived to be too much concentration of power or too much unaccountability.
That's one answer.
SPEAKER: We do have
more questions,
but we're unfortunately
out of time.
So thank you again very
much, Jamie Susskind.
[APPLAUSE]
