In 1916 Lewis Fry Richardson, who’d been
a meteorologist before the war,
was an ambulance stretcher bearer
on the western front.
He was a Quaker pacifist, so he didn’t want
to be part of the fighting
but he wanted to help.
He’d also been working on the
weather before the war
and he thought it would be possible to predict
the weather by calculating it.
That is, by treating it like a
mathematical problem.
So he’d gathered lots of data
from across Europe
based on the weather
on a single morning.
Then he spent months with a pencil
and a piece of paper calculating
what would happen over the
course of that day.
It took him about four months to conduct
a single one-day weather forecast
for the whole of Europe, but
he was broadly correct.
He was the first person to treat this section of
the natural world as something that
could be reduced to data, computed and
projected into the future.
That’s what we’ve based a huge
amount of 20th century thought on.
What I would call ‘computational thinking’,
the belief that the world
can be reduced to data,
can be modelled,
can be completely understood.
It descends from the enlightenment,
the belief that if we could only know
more about the world
we’d have more control over it.
Unfortunately it seems increasingly apparent
that that belief is failing us.
It’s one of the central paradoxes
of our age
that we know more and more
about the world,
more and more information
is available to us,
and yet the world seems mostly characterised
by division,
by fundamentalism, by these competing, incredibly
toxic opinions.
More information, more data
about the world isn’t helping
us to resolve these divisions in any way.
You can see this even in the weather,
that thing that Lewis Fry Richardson
first calculated and turned into
something that was computable.
Even our weather predictions
are starting to fail us now.
We can gather so much information about
the world, but the world is outrunning us.
Particularly as a result of climate change
the weather is becoming so chaotic that
our ability to predict it is actually diminishing.
Over the last century we’ve extended forecasts
to about a week or ten days ahead.
That horizon is now coming
back towards us.
We face a future in which we have ever
more data about the world, and yet
we know ever less about it.
That is a core character
of a New Dark Age.
There’s this idea that if we could gather
more and more data about the world
then we’d have this complete
overview of it. That’s what Big Data is.
It’s this view that you can get so much
information that you have a total view.
And you produce a kind of
perfect model of the world.
Then, instead of looking at the world you
look at the model of it, which tells you everything.
But actually the last 100 years of computation
tells us that,
every time, the model isn’t good enough.
When we try to use it instead of the
world it continues to fail us.
Big data is always, always, insufficient.
And often overwhelming too,
so, it feels like we have no control.
So, it both doesn’t work, and it demoralises
us.
—
Recently there was an analysis of a
piece of software in the US
which was supposed to help judges
with sentencing decisions.
So when someone was convicted in
court this computer would suggest
how long they should go to jail for.
And after quite a lot of analysis it seemed
to show that actually this piece
of software was racially biased.
It was giving people of colour
longer, more punitive sentences
than it was giving to white people.
This surprised a lot of people who believed
that software is somehow neutral,
that technology is a levelling force,
that it makes us more equal and
allows us to make better, more
equitable decisions about the world.
Unfortunately that’s really not the case,
not least because the only thing
software can look at is
what we’re doing already.
So we’re building expert systems
based on our own history.
And unfortunately our own history is massively
racist and prejudiced
in a bunch of other ways.
As a result, what is incredibly
necessary in this field
is a massive democratisation
of these technologies.
Rather than being built purely by technologists
with a particular expert skill
but not so much social and historical
knowledge, these technologies need
to be opened to wider and wider
access so they’re actually
more representative of a wider
population and range of possibilities.
—
In 1997 there was this amazing
chess game
between Garry Kasparov
and Deep Blue.
Probably the greatest chess player
humanity has ever produced
versus a computer that IBM had
built specifically to beat him.
And when he loses it’s regarded as a
terrible thing for human intelligence
and understanding and ‘the machines
are going to take over’ and all of that.
But what’s interesting is we do know
how Deep Blue beat Kasparov,
we understand the process
that happened there.
This was a very powerful machine,
it could think many moves ahead
and store all the outcomes in a
huge index, it could just look them up.
It out-powered Kasparov but
it didn’t out-think him.
What’s deeply strange is that there
was a kind of repeat of this last year
in a match between Lee Sedol
and AlphaGo.
Playing a different game, playing Go, which
requires a different kind of thinking.
There’s a moment in the third game
when this strange computer
plays a move against Lee Sedol
that surprises everyone.
Sedol actually leaves the room
for about half an hour,
and the commentators go silent, because
they don’t understand
why it’s made that move;
it seems completely weird and crazy.
AlphaGo goes on to destroy
that game, to win utterly.
Now that move is regarded as one of
the most extraordinary moves
in the history of Go.
But AlphaGo works by what we call
‘machine learning’,
which is a type of artificial intelligence,
and the way that it works means we don’t
understand why the machine
made that move. We’ll never understand how
AlphaGo came to make the decision
to make that move.
Unlike the game with Kasparov where
we can follow step-by-step the processes
that Deep Blue made in order to
make its decisions,
we’re now entering a time when
machines are making decisions
and we don’t understand why or
how they came to those decisions.
Which puts humanity in a
very strange existential place
where our machines are not
just thinking ahead of us,
they’re thinking so radically differently
from us that we’re never going to be able
to follow their thought processes.
This whole other form of intelligence is
starting to emerge, and while
this may be a herald of a new dark age,
it might also be a time of increased augmentation
where we’re actually
able to collaborate with this
intelligence in new ways.
But it’s really a political question as
well.
This isn’t a question just for technologists,
this is a question for all of us.
What do we want these
technologies to do?
How can we understand them and
actually put them to our service,
rather than having them
used against us?
It’s weird that we seem to live in a world
where most of us don’t really have
a sense of how most other
things around us work.
That’s a relatively new condition.
You ask most people how the postal system
works and you get
a relatively reasonable explanation.
You write a letter, you put it in an envelope,
you write the address
on the front, you put a stamp on,
you put it in a box, someone comes
and takes it to the other place.
That’s a totally reasonable explanation.
You ask most people how email works, and that
whole thing kind of falls apart.
You suddenly realise that so many of the systems
that we encounter every day
are completely mysterious to us.
They behave almost like a kind of magic, or
they appear to.
The thing is, they’re not magic
and they have ways of working,
but if we don’t understand how those systems
function, if we don’t understand
how they all interconnect,
if we don’t understand
who has power in those systems,
then we have no way of affecting them
and we’re essentially without agency,
with no power to direct them.
There’s a very strange thing that
happens with advanced technologies,
particularly with networked technologies.
They get hidden away behind glass.
They become hidden within themselves and we
no longer have access to them.
But something equally extraordinary happens
the other way around
which is that if you do start to
understand them a little bit,
if you can read a little bit of the way in
which these technologies function,
all of these power relationships become incredibly
clear and readable
and can actually be addressed
in new ways.
We often think of the internet as
some sort of magical far away place
where all this stuff just sort of happens
and beams down to us.
But in many ways it’s super physical,
there’s big buildings on the edge of cities
filled with computers whirring away
and generating loads of heat and
taking in loads of electricity.
And there’s cables that run under
the ocean connecting everything up.
If you look at a map of where the
internet’s fibre optic cables go
you’ll see that they largely trace out
the routes of former empires.
So all the fibre optic cables from Africa
still route back to their former colonial
powers.
Loads of the ones from South America
still go back to Spain.
This is because in many ways imperialism
didn’t stop with decolonisation,
it just moved up to the
infrastructure level.
So if you’re capable of seeing
some of these technologies
you can work out where the power
still lies and start to address it.
A few years ago I was doing a project
that was tracking secret deportation flights
which are charter flights that go out
in the middle of the night
from airports around London
carrying deportees home.
These were quite hard to find,
but they were there,
you could go out looking for them,
you could stand in fields and see them.
One day I was standing in a
field watching one of these flights
when I got talking to someone else
who was also looking at the skies.
And they were looking at the same planes
as I was and they were looking at the
same sky, and yet they were seeing something
totally different.
They were seeing a huge, vast conspiracy
involving the release of chemicals
from planes to drug people, or to confuse
them, or to change the weather.
This is a huge conspiracy theory
called chemtrails.
Which is so vast and
all-encompassing
that it might be considered
the first folk literature of the internet.
In fact, conspiracy theories
seem to be
the most powerful narrative
form of our time.
I think that’s because the world has become
so extraordinarily complex.
It’s incredibly difficult now to write
simple stories about the world,
which are what we all yearn to hear.
That desire for simple stories lies
behind the desire for conspiracy theories
but also the rise of populist politics
and fundamentalisms.
As more and more information
is made available to us,
the world actually becomes more confusing.
We fall back onto these simple narratives
that often result in misunderstandings,
and even violence, because they’re
unable to accommodate the world as it is.
Conspiracy theories are in a sense one
of these symptoms of a new dark age
where we constantly demand
to be given single answers
to these immensely complex
global problems.
And until we figure out ways
in which we can regard the world
as something that’s ongoing, something that’s
a process of constant negotiation
rather than the provision of computational
solutions to every single problem,
we’re continually going to
run into these deep conflicts,
debates and violent arguments.
So we’re in this position
where we’ve completely undermined
any possible trust in traditional
sources of authority,
whether that’s politics or the media.
But we’ve also spent
the last fifty years
undercutting our own
understanding of the world
by making technology and the systems around
us ever more opaque.
So we’re in this sudden position
of having no authority
but also no ability to make critical judgements
ourselves.
We’re at this absolutely crucial moment
where we need to rapidly
develop our tools of
understanding and
think about the world in an entirely different way
to make up for this shortfall of
authority on the one hand,
and a complete
collapse of understanding
on the other.
