[MUSIC PLAYING]
ROBERT ELLIOTT SMITH: I won't be
telling this group anything new
when I tell them
that algorithms play
a significant role
in the infrastructure
of our lives, a critical
role in the infrastructure
of our lives.
And they do a lot
of things for us,
the most basic being they
organize the information
that we consume, from news
to items from our friends,
the way they're presented,
the order they're presented
in to us, and the things
that are presented
are determined largely
algorithmically.
In some ways, I think the job
that's been most displaced
by algorithms thus far
would be the job of editor
or information curator that
used to exist in magazines.
That job is largely
handled algorithmically.
But for some people,
certainly other jobs
have been done by algorithms.
A lot of people, their work
is assigned algorithmically--
Uber, delivery people.
The jobs that they
do sequentially,
sometimes on a minute-by-minute basis,
are determined algorithmically.
And of course, on Mechanical Turk,
people are actually being
assigned jobs algorithmically
and acting as
algorithms themselves
in an odd sort of way.
Certainly, one of the things people are most familiar with is that who you might sleep with or marry is recommended algorithmically, and your friends as well.
And certainly,
more profound roles
are emerging, like the role
that algorithms are playing
in criminal justice, predictive
policing, where algorithms
determine, for the
consumption of police,
areas that may be
likely for crime,
and in justice,
sentence recommendations
and parole
recommendations are often
helped by algorithms that
provide information to judges.
So certainly, they're having
profound effects on our lives.
Oh, I love this one in the middle down here. If you didn't see it in the news, some of you will know about the Chinese social scoring system-- a big data analysis that, basically, is going to give you a score in China to say, effectively, are you a good citizen of China or not?
And it's based on a
variety of things,
including basically your
social media presence,
but also things
that you do in life.
It gives you access to things
like travel and high quality
goods that you can buy.
You get social perks
by having a high score.
But one of the things they're experimenting with is Chinese policemen wearing augmented reality glasses that will help them identify potential offenders.
So effectively, they'll
do facial recognition
and then label them with
their social credit score.
And if it's a low score,
then they'll know,
hm, there's somebody we
might be concerned about.
So a lot of people would find these things concerning, but certainly, there are far more concerning things going on in the algorithmically mediated infrastructure of our lives.
For instance, we've all
seen the divisive ads
that appear in social
media that are fed to us
and that algorithms
are manipulated
to feed to us in a
targeted fashion,
so that greater political
polarization comes to exist.
And I think we're all
somewhat concerned about that.
But there are things that
jump up that some of you
will know about that
have made the news
and been a bit disturbing,
one of them being
the unprofessional hair controversy, which I imagine you people here at Google will be aware of: a few years ago, if you googled "unprofessional hair," you got pictures of black women.
And oddly, if any of you have
not done this experiment--
I do it every once in
a while-- you still
get pictures of black women.
You guys have fixed the
problem, but the problem
has not been fixed because now
you get pictures of black women
because this story existed.
It's made a social
feedback loop,
and these social feedback
loops are a big part of what
I'm talking about today.
And another one you'll notice is the controversy over Microsoft's Tay Twitter bot, which probably most of you know about: they put up a Twitter bot that was supposed to learn to say things from other Twitter users.
And very quickly, it learned
to say, amongst other things,
"We're going to build
a wall, and Mexico
is going to pay for it."
So that tells you about
the time that it came out.
And it also learned to
say, "Hitler was right.
I hate the Jews."
And they had to take it
down within 24 hours.
Another controversy that you
guys would be familiar with
is the are women evil,
are Jews evil controversy.
Carole Cadwalladr in "The Observer" discovered that if you typed in those phrases, Google's suggestions came up with "are Jews evil" and "are women evil?"
This has been corrected
for, obviously.
However, you still get
some pretty funny things.
I've given this talk
in a lot of places.
I gave it up near Liverpool at the Blue Dot Festival, and I used "are Scousers," and "are Scousers thieves" immediately came up.
And that really works
for the audience.
And then the other
one was the Welsh.
And you have to look down quite a ways there, but if you put in "are the Welsh"-- now, this was at the Number Six Festival-- "inbred" is one of the suggestions.
Now, as you guys know,
these are all things
that people actually search for.
But again, I'm talking
about feedback loops.
Things that are a bit more disturbing-- obviously, the labeling of a black couple as gorillas that occurred a number of years ago, which has since been worked on, is a disturbing phenomenon, apparent algorithmic racism that we need to be concerned about.
And this woman down at the
bottom is Joy Buolamwini.
I always mispronounce her name.
But she's a researcher
who discovered
that many of the facial
recognition algorithms that
exist don't see her face,
unless she puts on a white mask,
and then they do.
And she also discovered
the rather disturbing fact
that a while ago, if you
typed in black girls,
you got pictures of porn.
Black girls generated pictures
of porn as search results.
So we have to think about why this is happening. Why are these things that look like racism happening in the algorithmically mediated infrastructure of our entire world?
And there are several theories.
One theory is that big
data is a scary mirror.
Effectively, we've held
a mirror up to ourselves,
and we found out
we're kind of racist.
And I'm not saying
that's not true.
There is some truth
to that, for sure.
Another thing that's
popular in the media
is that it's the alt-right
programmer ecosystem,
that there are people
out there manipulating
the algorithms in order
to make certain kinds
of political things happen.
That's certainly
not untrue as well.
There's some truth to that.
And certainly, lack of
diversity in computing
is an issue as well.
But that's not the thing
I'm here to talk about.
Theory three is the thing
I'm here to talk about,
and that's that there's
something about algorithms.
There's something
about algorithms
that leads them to propagate
a certain kind of way
of looking at people.
And I know something
about certain ways
of looking at
people because this
is where I'm from, all right?
I'm from America, obviously.
I've been British
for quite some time,
but this is my country of birth.
And this map is colored based
on the results of the 2016
presidential election by county
across the United States,
blue being Democratic,
red being Republican.
This makes it look like
America's majority Republican,
but that's only by landmass.
It's not by population.
If you resize this map--
and you can find maps online
that will do this-- resize
this map based on population,
you'll see that it's
about half and half.
That's because America's population is concentrated on the west coast and the east coast.
So the center shrinks,
and you get about half
and half of blue and red.
But the place I'm particularly from is here: Alabama.
I'm from Alabama.
I came here 22 years
ago from Alabama.
And you guys, some of you
would have come to the talk
last month by the author Lewis Dartnell of "Origins: How the Earth Made Us."
Interesting, interesting talk.
And he showed you this diagram.
And what he was talking about
is this Democratic voting
band here is what we in
America call the Black Belt.
And the reason it exists
is because geophysically,
that area--
Dartnell told you
this-- geophysically,
that area is where certain
soil conditions exist
that grow certain crops very
well-- in particular, cotton.
And therefore, this
was the concentration
of slavery in the South,
a high concentration.
And these counties are still
majority black counties.
But there's one outlier, and
it's that blue one up here,
right?
The high blue dot in Alabama.
That's Jefferson County,
and that's where I'm from.
I'm from Jefferson
County, Alabama.
I'm from Birmingham, which is
the biggest city in Alabama
now.
And it's not a part of the slavery narrative-- well, it is, but it wasn't a city that existed during slavery. It's a post-slavery city.
And the reason it came into existence is that Birmingham, Alabama is the only place on Earth where you can get all the ingredients to make high-quality steel in a single place.
The reason it's
called Birmingham
is obvious then, right?
It's from your Birmingham.
It's a steel town.
It's a new Birmingham
in America.
And it was founded in the
late 19th, early 20th century.
And it drew in people
from all over the world.
Eastern European Jews
came to Birmingham
to run retail businesses.
Italians came to Birmingham
to run the grocery business.
Scottish people came who had lost their jobs in the coal gas industry outside Glasgow, because coal gas became a redundant technology with electrification. So they all lost their jobs, and they came across and went to places like Birmingham.
There's a big
Scottish settlement.
And those are my ancestors.
My ancestors are people
who lived outside Glasgow
and mined coal there,
and then lost their jobs
and came to Birmingham, Alabama.
And also, of course, there was a black community there, "post-slavery." And I say "post-slavery" in quotes because, owing to laws in Alabama, they were worked like slaves in the mines in many ways as well.
But the thing I
want to point out
is that there was a great
deal of ethnic diversity
in Jefferson County.
It's one of the most ethnically
diverse counties in the South.
However, diversity and mixing
are not the same thing.
Now while Jefferson
County, Alabama
had a great deal
of diversity, it
had very little in
the way of mixing.
And that's because
of Jim Crow laws.
Jim Crow laws are laws
that restricted things
like people using the same
water fountains, people sitting
in the same area on buses,
people using the same dressing
rooms to try on clothes.
And even in Birmingham,
the most segregated city
on the earth in the time of
the civil rights movement,
you were not allowed to
play any board games or card
games with a person of a
different color by law.
You couldn't play
checkers with a black guy
if you were a white guy.
So Jim Crow laws
were a huge part
of the history of the South.
And they're what prevented
whites and blacks,
in particular, from mixing.
But it's also a part of
my history in some ways.
When I was born, my parents
had to cross barricades
to get to the hospital because
a protest had broken out,
organized here at the
16th Street Baptist
Church in Birmingham,
Alabama, a real stalwart
of the civil rights
movement that organized one
of the most successful
civil rights
protests of the civil rights
era, the Children's Crusade,
where children
left their schools
and marched in the streets
to end the Jim Crow laws of Birmingham, Alabama in particular.
And they were, of
course, subjected
to Bull Connor's water cannons.
Protesters were subjected
to outright violence
in the streets.
And kids even had attack dogs sicced on them.
And this was going on
while I was being born.
I was actually being born while
these protests were going on.
Now, 14 days after
my birth, this
led to the end of the Jim
Crow laws in Birmingham.
There was so much
public embarrassment
from this terrible event that
the merchants and leaders
of Birmingham settled and
basically, on a legal basis,
you could try on clothes
in the same place.
Black and white people could try
on clothes in the same place.
Black and white people could
drink from the same water
fountains.
Black and white people could
sit in the same areas of a bus.
And that's because
of these children.
Now, then I started growing up.
So I grew up in a world
post-Jim Crow laws.
That's not to say I
grew up in a world
where there was mixing between
the races in Birmingham,
Alabama, believe me.
This is me as a little boy
with my sister and her doggie.
And this is me
when my grandfather
took me to see the bootlegger.
And bootleggers, for British people, are people who make illegal moonshine. And those are the two fellows who ran a still that my grandfather used to get his liquor from.
And so as you can see, I was a bit of a weedy kid. And I was very bookish. And that didn't do me any good at my first school.
My first school was this place.
I always like to show that
picture to British audiences
because the fact that there's
a giant blazing American flag
and eagle on the front
of an elementary school
is strange to the
British consciousness.
This was my school,
EB Erwin High School.
And when I was there, I
was a horribly bullied kid.
I had a really
hard time as a kid.
I just didn't fit in very well.
But there were people who
were coming into my school who
fitted in even more poorly
because the government had
ruled that it wasn't enough to
get rid of the Jim Crow laws.
You had to bring
people together.
You had to make people mix
because they were not mixing.
Schools were still segregated by the way people lived. Effectively, there were white suburbs and black suburbs.
So kids from an area called
Airport Heights, effectively
a ghetto outside Birmingham's
airport, were put on buses
and brought to my school.
And that had a profound
effect on my life.
I remember one day I was pushed down in the lunch line and had to go to the back of the line. And at the back of the line were all the black kids-- the black kids were always at the back of the line to get lunch.
And a little girl named Laretha Jackson said to me, "You always go around looking at your feet.
If you don't hold
your head up, people
are going to beat you
down your whole life."
And that changed me.
That changed my life.
It made me a stronger person. It's the most important piece of advice I've ever gotten.
It made sure I'd
never be a racist.
Because this fear I had of
black people-- as a bullied kid,
when black kids
came in, I thought
those kids were going
to beat me up, right?
Because they were
scary black kids.
They didn't care
about beating me up.
They had more
important things to do.
In fact, the bullies
concentrated on them
and stopped [INAUDIBLE] on me.
So in many ways, that really made my life-- that little girl, those buses, those laws that changed Birmingham from an unmixed city to a mixed city changed my life. So I owe a lot to the Bluebird Jefferson County school bus that basically brought all of that to me.
And it's more than
that because I ended up
being on these buses
myself soon after that.
Soon after that,
I was bused away
to this school,
my second school,
which is an old 1930s
building in Birmingham.
It happens to sit in the middle
of a black ghetto itself.
But it was a ghetto on the
other side of the mountain where
rich kids lived.
And this school was
primarily occupied
by kids from a different
social class than me.
I was a lower middle class kid.
Most of the kids
who went there were
the sons of lawyers,
and doctors,
and university researchers.
And so I was bused to this school which, ironically, though it was in the middle of a black neighborhood, had one black student out of 250 kids.
Now, this is in a county that is 40% black.
Now, why did this have so
few black people in it?
Because this is the
Jefferson County School
for the gifted and talented.
And the reason that I was
brought to that school
is I scored rather
high on an IQ test.
So ironically, this took me from the first change in my life, where I realized that mixing between racial groups taught me something, to a place where mixing between class groups taught me other things.
But it all came
through the IQ test.
Now, it's ironic that the
IQ test has played such
an important role in my life.
Because now I'm a faculty member
at University College London.
In many ways, the
reason that the IQ
test is such an important
algorithmic part of our society
and has been for a
century, more or less,
is because of things
that happened at UCL.
And I'll get to that by telling
you a little history story.
So there's a lot of history
stories in the book.
This is one of the primary ones.
Anybody know who this guy is?
Probably not.
This is Darwin, but
not Charles Darwin.
This is Erasmus Darwin,
Charles Darwin's grandfather.
And he was the guy who
wrote about evolution,
social evolution, the
belief that society
evolved through the action
of selection in effect.
Erasmus Darwin believed in that.
He also believed
biology worked that way.
So he had the evolution
idea before Charles Darwin--
in fact, 100 years
before Charles Darwin.
But there were some steps
that had to take place
to get from there to science.
And the steps were this.
First, Adolphe Quetelet was a guy
who basically had looked at the
bell curve, the bell curve that
came out of Carl Gauss' work in
astronomy as a way of dealing
with errors in
scientific measurement.
And what he had looked at
is he had thought about this
and said, OK,
people should vary.
There should be errors
around an ideal for people.
And that ideal, he thought, was kind of the gladiator-- the ideal soldier.
And he took measurements
of chest girth
of Scottish soldiers
and basically showed
that they were distributed
on a bell curve.
And he thought of the
middle as being this ideal,
and anything higher or
lower was a deviant, right?
And he used the term deviant.
And deviation is a term we
use scientifically quite a lot
around things like
bell curves now.
But "deviant" was basically the term he used.
So that was the first idea.
Variation of humanity around
an ideal was the first idea.
The second idea came from this man, Thomas Malthus, here in London-- a cleric who basically believed that evolutionary utopianism, the idea that we'd evolve to a better state, was a non-Christian, bad idea.
He basically believed that
resource limitations would
ultimately lead to people
basically overproducing
and then there being a
steady state misery that
kept us from going any further.
So effectively, there was a cap on how much society could evolve.
Now, ironically, Charles Darwin took these two ideas-- the idea of variation around an ideal and the idea of resource limitations limiting the advancement of humanity-- and turned them on their heads.
He took variation from Quetelet
and said there's always
variation biologically.
Mutations cause variation.
He took resource
limitations from Malthus
and said resource limitations
mean that not everybody gets
to reproduce.
You put those two things
together, and what you get
is advancement.
He didn't look at the
mean as being the ideal.
The mean was the current state.
You could advance forward
through the resource
limitations, and that
invented the theory
of biological evolution.
Now that was biological
evolution, not
his grandfather's idea
of social evolution.
However-- and I do want to point out that this is the idea of natural selection that Darwin came up with, which is the foundation of evolution.
Now, this is Herbert Spencer, who was a great thinker of his time-- largely forgotten now, but a very important intellectual of his day.
And he took this idea
back to Erasmus Darwin
and said society evolves, too.
There's this arrow of perfectibility that we're moving up as we evolve human beings, as we evolve society.
So the idea of Social Darwinism comes from Herbert Spencer.
And he also changed the wording from natural selection. He changed it over to this idea of survival of the fittest-- the idea that there is a fittest, and we're optimizing towards it. That comes from Spencer.
Darwin really liked this idea.
Darwin put it in the fifth edition of "On the Origin of Species," and it became dogma, effectively, that there is survival of the fittest.
Now, the fittest is a bit
of a tautological argument,
but it does bring in this
idea that we're evolving
towards better human beings.
We're on this unipolar
optimizing track, all right?
So that was then taken up by Darwin's half-cousin, Francis Galton, who was one of the people behind the founding of UCL.
And Francis Galton invented a
new idea, a distinctly British
idea, called eugenics.
This is the Office of
the Eugenics Society
here in the UK.
The first eugenics department at any university was at UCL-- the UCL Eugenics Department.
And UCL has a great
history in eugenics.
Wonderful liberal
institution, UCL.
It was the first place you could get a college degree in England and not be in the Church of England. Most people don't know that. Up until UCL's foundation, you could not get such a degree in England-- and Scotland, I believe. Ireland ran ahead of us slightly on this.
But basically, a great tolerant
progressive institution
that very much believed
that eugenics was the future
of a progressive society.
And the Eugenics Society
was a very progressive idea,
a very liberal idea.
Members included Karl Pearson,
of course, Neville Chamberlain,
John Maynard Keynes,
Margaret Sanger.
Many people believed
that eugenics was a way
to improve the lot of humanity.
And that went along
just fine for a while.
And I love this
diagram that I found.
This is from a eugenics text.
And you'll note-- I talked about this optimizing line-- we have here the progression of humanity forward, right, along the skulls of people, measuring how their cranial capacity contained bigger and bigger brains as you moved up this scale.
And even though some
of the larger skulls
might be down here
in this scale,
the cranial capacity
was smaller.
So effectively, those were supposed to be people who were intellectually inferior.
You note that down
here at the end,
we have the kind of
European Caucasoid man,
and then you have the
Greek statue, just
like Quetelet, the Greek
statue of the ideal that we're
trying to evolve towards.
All of that craniology stuff
was brought down by this woman.
And let me tell you,
it's hard to find
a picture of a woman scientist
from the turn of the century.
This is a picture
from a tea party.
She is listed as
third from the left.
It's a tea party with Karl
Pearson, the famous scientist.
And there are a few
women there from science,
and this is Alice Lee, who did
a very important piece of work.
You've heard of Karl
Pearson, I'm sure,
Pearson correlation
coefficient, but you haven't
heard of Alice Lee, have you?
And some of you may have, but I really doubt it.
But Alice Lee, what she did
was so incredibly clever.
UCL had the largest collection
of human skulls in the world.
They had collected human
skulls for eugenics research
to basically say,
look, here are African
skulls, here are
European skulls.
Look, one has a bigger cranial
capacity than the other.
And what she did is this: by measuring cranial capacity directly-- filling skulls up with sand, emptying them out, and weighing the sand-- and by using [INAUDIBLE] calculations on external measures of the skulls, she basically inferred two formulae that said how much cranial capacity there was, based on those external measurements.
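Her two formulae were, in modern terms, a regression: predict cranial capacity from external measurements. Here's a minimal sketch of that idea in Python, with entirely invented measurements and coefficients standing in for her data:

```python
import numpy as np

# Invented stand-in data: external skull measurements (length,
# breadth, height, in cm) and the capacity measured directly by
# the sand-filling method (in cubic cm).
rng = np.random.default_rng(0)
n = 200
length = rng.normal(18.0, 1.0, n)
breadth = rng.normal(14.0, 1.0, n)
height = rng.normal(13.0, 1.0, n)
# Assume capacity tracks the bounding volume, plus measurement noise.
capacity = 0.5 * length * breadth * height + rng.normal(0, 30, n)

# Fit a formula predicting capacity from the external measures
# by least squares.
X = np.column_stack([length, breadth, height, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, capacity, rcond=None)

# Now capacity can be estimated for a skull -- or a living head --
# from external measurements alone.
new_skull = np.array([18.5, 14.2, 13.1, 1.0])
print("predicted capacity:", new_skull @ coef)
```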
She verified it very thoroughly,
and then what she did
is she took all
the anatomists who
believed in the
craniology theory
and listed off how
big their brains were.
And at a conference in Glasgow,
she brought down craniology.
And craniology ended then.
But that was not the end of scientific eugenics research-- not by a long shot.
I'm going to digress from
eugenics for a minute
and leave England
and go to France.
This is Alfred Binet, the
founder of the IQ test.
The reason he brought in the
IQ test is very interesting.
The French and the
English both delayed
educating their children
to an advanced age
long after most of
Europe and America
had started educating their
children into their teens.
Effectively, for some
reason, France was late.
For some reason,
England was late.
France was late because
of the controversy
between the Catholic
Church and the government.
England was late because of
class issues around education.
But nonetheless, when Binet came
along and studied intelligence,
he basically found that intelligence was very complex. He talked about the chess player; he said feelings, images, emotions-- all of these things are involved in intelligence. And though he thought intelligence was very complex, he came up with a test that generated a bell curve to classify these children.
But he was very
careful in his lifetime
to say all he was trying
to do, really, was say,
kids who do poorly on this
test are probably kids who
need some more help in school.
And he and his partner, Simon, spent their entire lives saying the IQ test shouldn't be used to categorize people. It should be something that's used to help people who are on the low end; otherwise, we should ignore its results.
However, that's not
how things turned out.
These two gentlemen--
back to England again--
this is Karl Pearson
and Charles Spearman.
And some of you will know them.
If you're algorithmists,
you'll kind of
know that they're the
guys who developed
factor analysis, and
principal component analysis,
and a number of other
statistical techniques
that are commonplace in the
algorithms that we use today.
And the reason those algorithms
are used, you'll know,
is to reduce complicated data
sets down to simpler features.
And what they did is this: just as in France, children had started being educated to a higher age, so they had a lot of exam scores that they could look at.
And what they were trying to do is use PCA, really, to identify a principal axis in these exam scores that indicated that there was a G factor-- a general intelligence factor-- that underlay all academic success.
And then effectively,
intelligence
could be brought down
to a single number
that they implied was genetic.
And that was the goal
of this research.
In many ways, most of the
statistical techniques
we use in algorithms today
for the reduction of data
have origins that go
back to the beginning
of the eugenics movement.
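To see the mechanics of that move, here's a minimal sketch with made-up exam scores: a shared aptitude term makes the subjects correlate, a dominant first principal component falls out, and the temptation is to read that one axis as "g" and reduce each student to a single number:

```python
import numpy as np

# Made-up exam scores for 500 students across 5 subjects. A shared
# "aptitude" term makes the subjects correlate, as exam scores do.
rng = np.random.default_rng(1)
aptitude = rng.normal(0, 1, (500, 1))
scores = 60 + 10 * aptitude + rng.normal(0, 5, (500, 5))

# Principal component analysis via SVD of the centered data.
centered = scores - scores.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

# The first component dominates; reading it as a "general factor"
# collapses each student to one number along that axis.
print("variance explained:", np.round(explained, 2))
g = centered @ Vt[0]
print("first student's 'g score':", round(float(g[0]), 2))
```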
So that worked out fine in Britain, where the eugenics movement was largely a progressive idea-- not so much in America.
This is Henry Goddard, who
took the idea of the G factor,
right?
He took the idea of the G factor
and said, well, I'll take that,
and I'll combine it with this
French test, the IQ test, which
had been brought
over to America.
And then I'll use it
to categorize people
for purposes of
limiting immigration
into the United States.
Because after the
potato famine, there
had been a great
influx of Irish people
who were basically thought
of by both the English
and the Americans as
genetically inferior.
This is something
we've all forgotten.
It was very
commonplace to believe
that the Irish were
genetically inferior
in England and in America.
And Italians were coming over, too, and they were thought of as kind of a bit dark, and a little inferior, too.
So they had this
idea that they had
to weed these
inferior people out.
And what they did it
with was the IQ test.
And in fact, Goddard used
this to categorize people,
and he came up with some
very interesting categories.
Goddard invented the word moron. The word moron did not exist in English before Henry Goddard.
So he basically
took the IQ test,
and he came up with definite
areas of the IQ test range.
There were idiot, imbecile,
moron, normal, gifted.
And that was used not only to exclude people for purposes of immigration; it was used to sterilize people well into the '60s and '70s, OK? People were sterilized based on their inferior results in these categorizations well into that era.
So I want to point
out that this is
all about simplifying
and generalizing
around quantitative values.
What we've done is
taken human beings
and simplified them and
generalized them using numbers.
And that's something that
has a rather unsavory history
in America.
This is a eugenics bulletin board, of a sort, that says some people are born to be a burden on the rest. The Americans really made negative eugenics into a real thing. I mean, they were the kings of negative eugenics up until the time that it was taken up by the Nazis.
And the Nazis-- here's the burden of the racial groups on the Aryan man-- took up the same sort of idea.
And of course, the
Nazis continued this
in a brutal and horrible way.
They were still using the
skull measurement technique,
even though Alice Lee
had largely disproved it.
And they also enacted laws that were basically based on the idea that some people are down at the inferior end of the scale and [INAUDIBLE] at the superior end of the scale.
The Nuremberg laws, which
most Americans won't realize
were advised by
this man, Laughlin,
who was one of the guys who came
up with racial anti-mixing laws
in the United States.
The United States had laws against miscegenation-- marriage between the races-- for longer than any country that's ever been on Earth, the longest time. And it was into the '60s. You couldn't marry a black woman in America in certain states into the '60s.
It's still a shocking thing. And yet, this is how we think of America with the Nazis, right? America knocking out the horrible Nazi menace. And we think of that about Britain as well: we brought down the Nazi menace.
We banned eugenics
from consciousness.
Eugenics disappeared as a word that people in polite society would use and think about as a way of social engineering, because the Nazis had made it so bad that it became unpalatable.
The one thing that
survived was the IQ test.
The idea of data analysis
to categorize people
persisted well into the modern
era and persisted to this day.
As some of you know, these topics are not going away in the slightest.
So the idea of data analysis
to categorize people is a live
idea and an idea with a history
that cannot be disconnected
from racism.
So the question is, is that
the indicator that there's
something about algorithms?
Well, I believe it is,
and I'm going to say this.
Algorithms are prejudiced.
Algorithms are prejudiced.
And this is a bold
statement that I make,
because I believe it's true.
And it's true for
the following reason.
Let's think about
what prejudice means.
Prejudice means to prejudge,
and how does one prejudge?
One prejudges by simplification
and generalization.
Now, those of you
who work in AI will
know that AI
algorithms are designed
to simplify and generalize.
That's their job.
And that's a good thing.
Simplifying and generalizing
about complex things
can be very, very valuable.
But it has to be mediated,
particularly when
it's about people.
Now, most of you will know that the algorithms doing various things out there are largely based on the idea of optimizing some value.
There's always something
you're trying to maximize.
And these things--
simplifying, generalizing,
and optimizing value
are core principles
for a scientific approach
to basically trying
to manage data and
understand data.
There's nothing wrong with that.
But its history has had a
dramatic impact on society
when it's applied
to human beings.
And it is being applied
to human beings,
and I want to think about,
for just a minute, digress
and think about what it's
doing on social networks.
Now, this is based on some work I did with some students at UCL: we did some analytic work on the dynamics of social network opinions that are being influenced by simplifying, generalizing, value-driven algorithms.
So there's a little
social network for you.
Now, the results I
want to talk about
are all analytical results.
So for arbitrarily
large networks,
these things are true.
But I'm showing you a
little illustrative picture.
So here's their network,
and the greenish blobs
are rational people in
this social network who
are connected to some other
entities in the network.
The colored balls are the ones that hold one opinion in a binary social controversy-- say, the election of a single candidate, or opinion on a particular political movement, or opinion on a particular scientific issue. The opposite opinion in that binary issue is held by these largely white marbles.
And so what happens
in the social network
is everybody broadcasts
their opinion, right?
Everybody broadcasts
their opinion.
But who you hear from depends
on who you're connected to.
And if you're a
rational actor, you
make decisions based
on what you hear,
which is what a rational
person should do.
What happens in the
dynamics of this?
Oh, by the way, I have to say,
these colored and white balls,
they're not rational.
They're motivated reasoners.
They always emphasize
the same opinion.
They've decided, and that's it.
So what they do is they influence and they recruit. Because if you look at the way I've progressed this graph, if you're surrounded by a majority from one point of view, you get recruited in to be sort of a quasi-motivated reasoner, right?
You've come to act as if you're
one of the brightly colored
balls or one of the
light colored balls.
And eventually, what
that means is everyone
is recruited in
one way or another.
Now, we've shown
analytically that this
leads to a polarization, where
effectively, in a binary issue,
you're always going to have
one side of the network going
one direction, one side
going the other direction,
and it's a steady state.
You'll converge
to a steady state,
and it's very difficult to get
opinions to move once you've
converged in this steady state.
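A toy simulation reproduces the flavor of that result. This is a minimal sketch, assuming a ring network, a few stubborn motivated reasoners on each side, and rational agents who simply adopt their local majority opinion:

```python
import random

random.seed(42)
N = 100  # agents on a ring; each hears two neighbors plus itself
opinion = [random.choice([-1, 1]) for _ in range(N)]

# Motivated reasoners: fixed opinions, one small cluster per side.
stubborn = {i: -1 for i in range(0, 5)}
stubborn.update({i: 1 for i in range(50, 55)})
for i, o in stubborn.items():
    opinion[i] = o

for _ in range(5000):
    i = random.randrange(N)
    if i in stubborn:
        continue  # motivated reasoners never update
    votes = opinion[(i - 1) % N] + opinion[(i + 1) % N] + opinion[i]
    opinion[i] = 1 if votes > 0 else -1  # adopt the local majority

# The ring settles into two opposing blocs anchored on the stubborn
# seeds -- a polarized steady state that further updates won't move.
print("".join("+" if o > 0 else "-" for o in opinion))
```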
So in some instances, in this little example that I've given you, this is divided: the coloreds and the whites. Now, that's a bit cute of me, to refer back to the water fountains in Birmingham, but I want to say this isn't cute.
It's been shown by the
Pew Research Center
that the likelihood
of seeing news that's
relevant to issues of race
is much higher for people
of color in America
than it is for people
who consider themselves white.
So effectively, the information
polarization that goes on
is a kind of digital segregation
that is actively going on.
So the question is, what
can we do about this?
If I'm right that this simplifying, generalizing, algorithmic way of looking at the world is causing some of the problems we're seeing online, then what can we do about it? And what I'd say we can do about it is look at the facts and design our algorithms to overcome some particular historical biases and assumptions.
I want to talk a little bit
about how I think about that.
One is that we have this assumption, from Darwin, really-- and from Erasmus Darwin, really-- that there's this unipolar optimization going on in the world of some measure of value. Economics brings this idea in as well. And at some [INAUDIBLE] down here, humans are making decisions up to this quality.
And algorithmically,
we should be
able to get AIs to make a
higher quality decision,
and this unipolar view is built
into a lot of the zeitgeist
of people's thinking.
And that view was
broken up for me
when I did some work back in
the 1990s with this woman, Stephanie Forrest, who did a lot of work in what was called at that time computer immunology.
And she was looking at the
real live immune system
and simulating it with
evolutionary algorithms.
And I, being an arrogant
young man with a PhD,
looked at her experiments
and said, that can't work.
Evolutionary algorithms are optimizing algorithms. They're going to drive you to a single antibody. The idea of a diverse set of antibodies coming out of the kind of simulation she was using had to be wrong. She had to have stopped the algorithm too early. It had to converge.
That was because of
my optimizing look,
my unipolar value
optimizing look
at the way I thought
evolution worked.
Turns out we did some studies. And in the real immune system, effectively, it's more like this: you've got this non-unimodal, not bell-curve-looking kind of distribution of the effectiveness of antibodies. And effectively, because of various effects that happen in real-world immune system evolution, it will diversify.
And that's when I
made a realization
that survival of the fittest
isn't what evolution does.
Survival of the
fittest is an idea
that embeds the idea that
we're maximizing value
through evolution.
And in fact, even in
genetic algorithms,
you can show that
what's really going on
is an effective balancing of
diversity with advancement.
And I took that work.
I took those ideas.
And I did some of the earliest
computational creativity
work for the Air Force and
NASA to basically figure out
how to learn fighter combat
maneuvers from simulation.
And it was a very
effective thing to do,
and it worked really well.
I kind of built a
career on this idea.
But this is an idea
based on the idea
that evolution should have
more than one thing in it.
And this is a diagram from some
evolutionary algorithms that
were worked on a long time
ago by my PhD supervisor, Dave
Goldberg.
And what he basically showed, both analytically and in this graph empirically, is that if you look at algorithms that do evolution, what you find out is that there is an area-- this middle area is where they work.
They fail.
And in particular, this
axis is really the axis
of survival of the fittest.
This is the axis of driving
towards the observed best.
If you do only that
drive, you end up out here
in a failure region.
I've kind of made a
simpler diagram of this.
Out here in this failure region,
evolution does not take place.
Survival of the
fittest is not enough.
The other thing
you need is mixing.
And this is the re-simplified diagram.
One of the most gratifying
things about the book
thus far has been
somebody took this diagram
and made it better.
A guy sent this
out on a blog piece
that I really appreciated.
And what he really saw was this: out here in this region, we end up with fragile systems.
If you drive out towards
the observed best,
you end up with brittle
fragile systems.
In order for interesting
evolution to occur,
you need to be in that
band that balances mixing
and survival of the fittest.
Now, what I'm trying
to say is this.
This is a technical effect.
This is an effect we can
explore as scientists.
This is an effect we can
build into our algorithms--
the balancing of
mixing and survival
of the fittest; the balancing of driving towards what we think the observed best is, in some metric we're using to optimize an algorithm we've built, against a metric that basically says how diverse the outputs we're giving to human beings are.
By concentrating on
the idea that there's
this edge of chaos
phenomenon, we
can actually come up with
algorithms that do something
that is productive evolution.
And algorithms are showing
us that's what we should do.
That's the right
look at evolution.
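As a concrete illustration, here's a minimal genetic algorithm sketch with the two knobs that diagram is about: selection pressure (tournament size) driving towards the observed best, and mixing (here just mutation; crossover would be the fuller story) preserving variation. The fitness function is an arbitrary toy:

```python
import random

random.seed(0)
L, POP = 20, 60

def fitness(g):
    return sum(g)  # toy objective: count of 1-bits

def evolve(tournament_k, mut_rate, gens=100):
    pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
    for _ in range(gens):
        new = []
        for _ in range(POP):
            # Selection pressure: bigger tournaments drive harder
            # towards the observed best.
            parent = max(random.sample(pop, tournament_k), key=fitness)
            # Mixing: per-bit mutation keeps variation alive.
            new.append([b ^ (random.random() < mut_rate) for b in parent])
        pop = new
    best = max(map(fitness, pop))
    distinct = len({tuple(g) for g in pop})
    return best, distinct

# All pressure, no mixing: collapses to copies of an early genotype.
print("high pressure, no mixing:", evolve(tournament_k=10, mut_rate=0.0))
# Balanced: keeps advancing while staying diverse.
print("balanced:", evolve(tournament_k=3, mut_rate=0.02))
```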
And what we're designing
online in the massive networks
that we're building is
an evolutionary system.
And we need to
emphasize that idea
that diversity preservation
is an important part of what
we need to build.
So what I'm talking about is a science-led shift in perspective: away from the simplifying, generalizing, value-driven survival-of-the-fittest ideas that descend from this past of ours, which has taken social science and turned it into an optimization problem, and towards the idea of diversity preservation with mixing.
And I want everyone
here to think about,
in all the different
algorithms they
implement that do different
things, what that means.
What does it mean to not have the answer that comes out of an algorithm be the pat answer that says, here's the maximum value, as far as I can tell-- and instead to give diverse alternatives as answers and to try to promote diversity actively, in order to have a more functional algorithmic infrastructure for the future, one that may end some of these problems of apparent intolerance that seem to pop out of the feedback loops between people and algorithms?
And I think if we do this, we can be the agent of change-- the way this school bus was the agent of change in my past, and the agent of change for a whole generation.
So that's my talk.
I'd be glad to answer
some questions.
[APPLAUSE]
Yeah, have we got
one back there?
AUDIENCE: To phrase this
in a way that is perhaps
close to the meat of the matter
for a company like Google--
ROBERT ELLIOTT SMITH: Yes.
AUDIENCE: --which
fundamentally--
to steal a phrase from Mark
Zuckerberg-- we sell ads.
That suggests that things
that optimize, for example,
saying, oh, this is
the perfect ad that
will get the maximum response is
actually the wrong thing to do.
And in fact, you want to say,
this may not be the perfect ad,
but it's an ad in
the general area
of what we're interested in.
ROBERT ELLIOTT SMITH: I
think that's exactly right.
And I know that companies like
Google and all the big tech
companies are concerned about
their social responsibility.
They've become so
important that they
should be concerned about it.
And I'm glad that they are.
And I believe that they are.
I think that these principles,
the idea of offering diversity,
instead of just a value
maximizing outcome,
are principles they can adopt
before the government comes
in and starts trying to regulate
these industries the way
that they regulated the
broadcasters in the broadcast
era.
So I think that, yeah, those kinds of principles need to be brought into every aspect of the big tech movement, really.
And I think
responsibly, it can be
done by the
corporations themselves
before the government starts, in an ill-informed way, imposing regulation.
And I'm also talking
to people in government
about this as well.
Because when regulation does come in-- and it will-- I want to make sure that it comes in in some way that isn't draconian and doesn't strangle innovation.
AUDIENCE: Well,
first, thank you.
That was amazing.
Kind of playing
devil's advocate,
thinking really far
ahead, like, how do we
do this in a way
that doesn't end up
looking like the affirmative
action thing where you say,
you need X number of
this kind of people
and X number of
that kind of people?
And how would this look
like in a best case where
it is somehow
natural, and organic,
and representative
of us in a good way?
ROBERT ELLIOTT SMITH: Yeah,
that's a great question.
I think this idea of
diversity is not--
the problem with the kind of eugenics view of the world is a simple-minded view of what's best. A simple-minded view of what diversity is, is just as bad, right?
In your particular
algorithm where
you're serving ads,
for instance, what does
it mean to be a diverse ad?
It's not I need more people
of certain orientations
or certain colors in it.
It's something that's
more complex than that,
that we as scientists
can study, all right?
I think our simple-minded notions of diversity now are artifacts, ironically, of the simple-minded ideas we have of superiority and inferiority.
So studying what diversity means in a particular algorithmic context is a careful design job for algorithmists.
So when I use the phrase
diversity and mixing,
mixing is pretty clear.
I think getting people to
mix, that's pretty clear.
Diversity can have these sort
of pablum meanings, which
I think are not really
what I'm on about.
Nonetheless, hey, look. The reality is, in 1987, when I started in computer science, what was the fastest growing field for women in education? Anybody want to guess? Computer science was the fastest growing. And it was at 40%, which was reflective of university students overall-- about 60% of university students were men and about 40% were women.
So it was effective
parity, right?
It's 18% now.
It's 18%.
That's got to change.
Something has gone
horribly wrong.
And I think some of this
stuff has to do with that.
There's a whole chapter in the
book about the role of women
in computing, and how important
women have been to computing,
and how little it's recognized.
So I hope you read that chapter.
AUDIENCE: My question is probably a simplification/follow-up to the previous two questions. The graph that you had, with the ideal band for general improvement-- is that something that you think can be quantifiably measured?
ROBERT ELLIOTT SMITH: Yes.
AUDIENCE: Because the appeal of the previous sort of single-axis value optimizing function was that it can be very easily measured.
And you can measure your
improvements on that axis.
ROBERT ELLIOTT SMITH:
Yeah, I do think
that the edge of chaos effect
can be quantitatively measured.
I do think it has to
be adaptively measured
because I think any
quantification we come up
with is probably going to
dissolve the way that market
information effectively
dissolves into a market.
So I think that we're
looking at a problem
of continuous adaptation.
Nonetheless, I think there are technical metrics for, am I at a productive level on that graph? Am I able to be in that band?
I do think this is a research
question that can be explored.
And people who've been
doing edge of chaos type
work in complex
systems have already
been doing that not in the
context I'm talking about.
So I do think there is a
technical angle of research
for this work.
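As one assumption about what such a metric could look like: track population diversity, say as mean pairwise Hamming distance, alongside the quality score, and watch whether the system is still in the productive band:

```python
from itertools import combinations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def diversity(pop):
    """Mean pairwise Hamming distance over a population of bitstrings."""
    pairs = list(combinations(pop, 2))
    return sum(hamming(a, b) for a, b in pairs) / len(pairs)

# Monitoring rule of thumb: if diversity collapses towards 0 while
# quality has plateaued, the system has likely left the band.
pop = [(0, 1, 1, 0), (0, 1, 0, 0), (1, 1, 1, 0)]
print("diversity:", round(diversity(pop), 2))
```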
AUDIENCE: I had a quick
question about your statement
that algorithms are prejudiced.
ROBERT ELLIOTT SMITH: Yes.
AUDIENCE: I guess
I was wondering,
it seems like a
lot of the way we
use the word prejudice, at
least in modern society,
seems to indicate that
there's an assumption that's
powering the simplification
and generalization.
And I was wondering,
how do we remove
those kinds of same
assumptions from bleeding
into the algorithms
that we produce?
ROBERT ELLIOTT SMITH: That's
a fascinating question.
Thank you.
Here's the thing: I'm a believer that all representations are biased.
It's impossible to have--
and I mean that in the
technical sense of bias,
not in the kind of
social sense of bias.
Representations induce bias.
My favorite quote, which
is a mantra in the book,
is George Box's quote, where
he said, "All models are wrong.
Some models are useful."
So in that sense, when you
simplify or generalize,
when you come up
with a representation
for a complex problem,
you are inducing biases.
And that can't be avoided.
The only thing you can do is--
like Box said, some
models are useful.
So you have to kind of
be aware of those biases,
and then adapt around them, and
be able to basically say, OK,
this is a representation.
I got this result. It's a
representational artifact.
Now I need to change things.
So effectively, I'm
a great believer
that you can't take the
human out of the loop.
Because the great
things human beings
do is they deal with the deep
uncertainties of the world
adaptively in an
ongoing fashion that
becomes an evolving system.
So I guess, the
question was can you
remove the biases that
sit behind the prejudging?
I guess my answer
is no, you can't.
They will change continuously.
And we have to be involved
in that process continuously.
I hope that answers
your question.
AUDIENCE: A few of us here actually work using [INAUDIBLE] to build some predictive models, essentially to predict and prevent deterioration of patients in hospitals-- so, work directly in health care. And one of our main drivers is obviously patient outcomes and how we can save lives at the end of the day, right? To put it in simplistic terms. How should we apply your takeaway about the [INAUDIBLE] maximizing maximum utility and survival of the fittest in that kind of environment?
ROBERT ELLIOTT SMITH:
That's a good question
and a deeply technical
question that we probably
have to consult on
to really understand.
However, I'll say this: a long time ago, I did some work doing genetic-algorithm-based routing for military vehicles.
And one of the
things I found out
is the commanders didn't
like a black box outcome.
And the reason they didn't like
that is because they'd say,
I'm going to be flying this
missile through this territory,
and I've had a model now of what
I think the threat environment
is.
And the commanders at that time
were very aware of the fact
that models are just models.
And what they wanted
was diverse solutions.
So they wanted the
algorithm to tell them
not a probability distribution
around a single route.
They wanted it to say, this route looks good, and this route looks good.
They wanted different
routes, so they
wanted a diversity
presented to the human being
for the final decision.
Because then they
used their gut, right?
They used their gut and
said, OK, the algorithm
says that's the best
route, but we're
going to fly this one,
because I don't believe it.
I think in general, when making recommendations to human beings, this idea of providing a diverse set of outcomes from the algorithm matters. Because as you know, algorithms, with a tweak here or there, will go somewhere different, right? So those false minima in optimizing a deep learning neural network-- those minima that, under your criteria for optimizing the algorithm, come out to be false-- may be valuable.
So I think ultimately,
out of algorithms,
you should come out
with a diversity
of solutions that's
manageable to the human mind
and explicable.
And I think that's the
way I would try to do it,
is never give a human a black
box decision from an algorithm
because they're
going to believe it,
and they're not going to
understand the underlying
assumptions you made
along the way, which could
be fatal in some instance.
I mean, none of us can
foresee the unforeseen future,
and no algorithm can either.
So you need to give
people the ability
to have adaptive solutions.
And I think some of
that is giving them
a diversity of solutions.
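One way to operationalize that, sketched here with invented candidate routes and scores, is to pick the alternatives greedily, trading the model's score off against similarity to what has already been picked:

```python
# Hypothetical candidates: (name, model_score, feature_vector).
candidates = [
    ("route A", 0.95, (0.0, 0.1)),
    ("route B", 0.94, (0.1, 0.1)),  # nearly identical plan to A
    ("route C", 0.80, (0.9, 0.8)),  # lower score, very different plan
    ("route D", 0.78, (0.1, 0.2)),
]

def dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def diverse_top_k(cands, k=2, tradeoff=0.5):
    """Greedy pick: score minus a penalty for crowding earlier picks."""
    picked = [max(cands, key=lambda c: c[1])]
    while len(picked) < k:
        def value(c):
            nearest = min(dist(c[2], p[2]) for p in picked)
            return tradeoff * c[1] + (1 - tradeoff) * nearest
        remaining = [c for c in cands if c not in picked]
        picked.append(max(remaining, key=value))
    return [c[0] for c in picked]

# Instead of route A plus its near-clone B, the human sees genuinely
# different alternatives to weigh with their own judgment.
print(diverse_top_k(candidates))  # ['route A', 'route C']
```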
AUDIENCE: So I want to know how practical it actually is to make our algorithms diverse and not prejudiced. Because based on just my experience-- for example, if you're working on a feature, making it do 1, 2, 3 is OK. Then making it do 1, 2, 3, 4, and 5 is really hard.
ROBERT ELLIOTT SMITH: Yeah.
Well, in the evolutionary computation community, there's a technical way that you do it-- there's something called niching that's done in GAs. And the way you really do it is you come up with a way of saying, if my population is all looking too similar, you lower their productivity-- in a Malthusian sort of way, in a Thomas Malthus sort of way-- by basically saying everything's too similar. And when you do that, you get the pushback effect that leads to balance.
And in those immune system simulations that I talked about, the interesting thing that came out of them was the reason you get a diverse set of antibodies: when antibodies cling to an antigen, they block other antibodies from clinging to that antigen-- they've absorbed the resource. That causes the fitness of that kind of antibody to drop. But it never drops so much that you wouldn't get the sticking in the first place.
So you get this effect-- and this is what happens in real evolution-- you get resource consumption pressuring away from uniformity of solution.
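The textbook version of that pressure is fitness sharing: each individual's raw fitness is divided by a niche count, so a niche that too many individuals crowd into becomes less rewarding. A minimal sketch of the idea:

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def shared_fitness(pop, raw_fitness, radius=2):
    """Classic fitness sharing: divide raw fitness by a niche count."""
    shared = []
    for g in pop:
        # Individuals within `radius` of g crowd its niche; closer
        # neighbors count more.
        niche = sum(max(0.0, 1 - hamming(g, other) / radius) for other in pop)
        shared.append(raw_fitness(g) / niche)
    return shared

pop = [(1, 1, 1, 0), (1, 1, 1, 0), (1, 1, 1, 0), (0, 0, 1, 1)]
raw = lambda g: sum(g)  # toy objective again
# The three clones split their niche's reward; the lone genotype keeps
# its full fitness, so uniformity itself is penalized.
print([round(f, 2) for f in shared_fitness(pop, raw)])  # [1.0, 1.0, 1.0, 2.0]
```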
So in general, my advice-- and we'd have to look at it technically for any particular application-- but in general, the advice is this: put in evolutionary pressures that drive away from homogeneity, and then let the emergent effect come out, right?
And the emergent
effect generally
is a balancing effect.
But I think for a
particular application,
we have to look at it in
deep technical detail.
But I think that's
the general advice.
AUDIENCE: I think you made a great point about the reasoning for why algorithms are inherently biased-- because of the simplification. But I guess my question is, what is your opinion on that for, like, humans? I guess you mentioned how humans are different.
But one could argue that humans
also, when making decisions,
often try to make
a rational decision
by formulaically making
the pros and cons
and simplifying things.
So do you think it's
actually different?
ROBERT ELLIOTT SMITH:
Humans absolutely simplify
and generalize in order to
make every decision they make.
What has to come into that is--
think of it this way.
You probably know people-- I know people-- who treat people in horribly simplifying, generalizing ways. And they're not my friends-- people who make judgments about other people based on, really-- I'd say gender is the number one. I mean, I've met lots of men in my life who basically treat women and men profoundly differently because they're making that simplifying judgment.
We have a tendency to
simplify and generalize.
We have morals and
ethics that come in
to cancel out our ability to
do that, to basically go in
and say, look, I'm not going
to judge this person based
on the color of their skin
or based on their gender,
even though I might--
look, I grew up in a horribly
racist, horribly sexist place.
Those things are in me, right?
I can't get away from it.
Constant vigilance is the only thing I can have: I have to remind myself, don't judge people based on the way you were brought up as a kid.
And I hope that that's
become so ingrained
now that I don't have
to think about it.
I would say it's more
difficult for a man
growing up in our world
to not view people
on the basis of gender than
it is to overcome racism.
And I think it's a
constant vigilant struggle.
You have to constantly
tell yourself.
So yeah, people are simplifying.
That's why we have moral codes.
That's why we teach children
don't simplify people.
Don't judge people just because
they look different from you.
You have to teach children that.
Yeah.
SPEAKER: We are unfortunately
running out of time.
So once again, thank you,
Robert Elliott Smith.
[APPLAUSE]
