Imagine that tomorrow,
Google announces
that they have invented
a machine learning system
that can tell fact from fiction,
that can determine truth
from lie.
And this is magical
thought-experiment land,
so in this thought experiment,
the machine learning system
is right all the time.
And then they say that
they've hooked it up
to the Google search results
and YouTube recommendations,
so now, when you search
for vaccinations,
only the scientific
consensus comes up.
All the anti-vax stuff is
down page six, page seven,
where no one's going to look.
YouTube videos on
conspiracy theories
start to just be recommended
less and less and less
and the ones debunking them
keep coming up at the top.
It still matters whether the
search result is engaging
and relevant and
well-referenced,
but now it also matters,
objectively, whether it's true.
And then you realise that this
new system, this algorithm,
it disagrees with you on
something really important,
and it produces a result
that you find abhorrent.
Now, I struggle to
find an emotive example
for this that wouldn't also be
too sensitive for an audience
like this, so as this is the
Royal Institution in London
and it is September 2019,
imagine this machine learning
system concludes that
it would be a good idea
to have a no deal Brexit.
[LAUGHTER]
OK, now that-- that
was-- don't applaud.
That was a cheap shot.
That was a cheap shot.
And there may be people in here
who genuinely believe that.
So for those people
please imagine
the extreme opposite conclusion,
that this system believes
it will be a good idea
to dissolve the United
Kingdom into Europe entirely.
Look, this is a system that,
in this thought experiment,
does determine objective
truth, and it
disagrees with one of your
fundamental core values.
What's more likely,
A, that you are
going to just go,
oh, well, OK, I guess
I'm going to be
heard less and less,
and I guess we'll just have
to deal with that, or B,
that you decide that instead,
you need to shout louder
and that the algorithm is wrong?
So for the next
hour I want to talk
about the state of play
for science communication
and broadcast
communication in general
for an audience that has
wildly different ideas
and areas of knowledge on
how that works right now.
I want to talk about why some
science communication seems
to go everywhere and a lot of
it doesn't, and to give advice
for people who are
trying to reach out
and to broadcast their
truth, whether that's
for small, individual
groups putting out content,
or whether it's big
corporations and charities who
are trying to do outreach.
That's also going
to include a dive
into the concept of
parasocial relationships,
those odd, one-way
relationships that
appear in other media a lot.
Because if you
want to understand
how to reach an
audience, then you
need to understand what that
audience is looking for.
And I want to talk
about one particular set
of things that are increasingly
governing the media we consume
and everything in our
lives, algorithms,
the ones that recommend the
videos you watch on YouTube,
the search results on Google,
the order that your Twitter
feed appears in, and
basically everything that
includes recommendations
on social media these days.
And from the corporation's
point of view,
which advertising to show you.
Now, there are a
couple of caveats here.
I have generalised this talk
as much as I possibly can.
I've run it past quite a
few folks in my industry,
but I am speaking from
a position of success.
I'm lucky enough to have 1.9
million subscribers on YouTube
at the minute.
There we go.
Didn't quite get to
2 million in time.
[LAUGHTER]
Now, subscribers is not a
particularly useful metric.
That's more a function
of how long you've
been on the platform
and how many
one hit wonders you've had.
It's more honest to say that an
average science communication
video from me gets somewhere
between a quarter of a million
and a million views.
And those might be about
linguistics, which is
what my degree is actually in.
They might be about the
basics of computer science,
where I'm self-taught
but checking my script
with experts.
Or they might be about
infrastructure and science
and interesting things
in the world, which
is where I go out
on location and I
hand over to people who know
what they're talking about.
Now, my degree is in linguistics
with a research master's
in Educational Studies,
but ultimately, I
am in this position
because I spent
15 years throwing things at
the internet before something
worked.
I was extremely lucky that the
thing that turned out to work
was science communication, was
going out and telling the world
about things I'm interested in.
I am even luckier
that it turned out
to involve filming on location.
In the past few years,
I've been lucky enough
to experience zero gravity.
I have gone to the Arctic.
I have flown with
the Red Arrows.
And yes, that is mostly
a brag, and mostly--
[LAUGHTER]
--just an excuse to show
the best photo of me that
will ever be taken in my life.
[LAUGHTER]
Have you ever looked at a
photo of yourself
and thought, it's all
downhill from here?
Because it is.
One more caveat.
For those of you who've been to
a Royal Institution discourse
before, you'll know there
are generally two types.
There is one where a
researcher with a PhD
and an associate professorship
talks about their research,
and there is one where
someone in arts and culture
shares their experience.
This is more of the latter.
Some of what I say is going
to be opinion and not fact,
and hopefully this audience will
be able to tell the difference.
There are points in here
where I explicitly say,
I do not have all the answers.
And I also want to add one
conflict of interest note
as well.
My company gets a
lot of its revenue
from the adverts that go on and
around YouTube videos, which
means that my rent is indirectly
paid by Google Ireland.
I can't imagine why
it's in Ireland.
No-- no reason at all why they
set up there instead of the UK.
Not a word of this
discourse has been
passed by anyone in Google.
They don't even
know I'm doing it.
I'm not employed by them,
but ultimately, like, I am--
while I'm willing to
irritate that company
and bite the hand
that feeds me, they
are indirectly paying my rent.
I can try to represent the folks
who have not been as lucky,
who the algorithm
has turned against,
but I am quite happy
with them right now,
and a lot of other
people aren't.
So anyway, that's the plan.
This is the state of
science communication
in the English-speaking world
at the end of the second decade
of the 21st century.
And to understand
how it works, we
need to start with
the algorithm.
Algorithm, in this
context, means something quite
different from what people
with a lot of experience
in mathematics and computer
science might think it does.
The algorithm is referred
to in the singular, which
is the almost
anthropomorphized name that's
given to this collection of
machine learning systems.
I went to a conference for
science communicators last
year, and after about
three or four hours,
we realised that we had to ban
the word from conversation,
because while a lot of folks
from YouTube would just
endlessly froth about it, anyone
not on the platform just found
it confusing and messy.
And we would not shut up.
So when I talk
about the algorithm,
I am talking about this almost
magical black box of code.
And the idea is that you
set up this black box
and then you provide it with a
list of human-curated examples,
and it works out their
distinguishing features,
provides some sort of
categorisation system,
and then as you throw
novel examples at it,
it categorises them and
learns from feedback.
And those
distinguishing features
may be completely novel or
completely unknown to humans.
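To make that concrete, here's a minimal sketch,
in Python, of the kind of loop being described:
train on human-curated examples, categorise a
novel example, then update from feedback. The
features, labels, and model are invented
placeholders, not anything Google or YouTube
actually uses.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical, hand-made features for a handful of human-curated examples.
# In a real system the distinguishing features are learned, not listed like this.
curated_features = np.array([
    [0.9, 0.1],   # labelled "cat"
    [0.8, 0.2],   # labelled "cat"
    [0.1, 0.9],   # labelled "not cat"
    [0.2, 0.7],   # labelled "not cat"
])
curated_labels = np.array([1, 1, 0, 0])  # 1 = cat, 0 = not cat

black_box = SGDClassifier()
black_box.partial_fit(curated_features, curated_labels, classes=[0, 1])

# Throw a novel example at it and it categorises it.
novel_example = np.array([[0.85, 0.15]])
print(black_box.predict(novel_example))

# Feedback: a human says the label should have been "not cat", so it learns.
black_box.partial_fit(novel_example, np.array([0]))
```

The point is only the shape of the loop: curated
examples in, a guess out, feedback folded back in.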
So one of Google's
recent AI projects
was looking at
retinal photography.
And this recent paper
claims that, with
uncanny accuracy, it was
able to look at retinal photos
and work out gender--
sex with 97% accuracy,
age within four years,
and better than chance
at smoking status,
blood pressure, major
adverse cardiac events.
Eye doctors currently cannot detect
any of those things themselves.
They're not really sure
how the machine did it.
Now, it's right to be sceptical
of some of those claims.
Maybe there's a difference
in the metadata.
Maybe the retinal
photography machine--
I guess that's
technically a camera--
is set up or focused
differently depending
on some of those attributes.
But the paper does
a pretty good job
of covering their bases there.
And maybe a human
could also be trained
to pick out those
differences, it's
just that nobody's
bothered when you can just
look at the chart
next to the patient.
But the simplest black box
machine learning system
is essentially
categorising pictures.
You give it a load
of pictures of cats
and you give it
a lot of pictures
of things that are not
cats, and then you ask it,
is this new picture a cat?
Which sounds like it's
going to be useless
unless you're trying to design
a filter for adult content
and you're sending it
pornography and not
pornography, and the aim is
to have a classifier that
can look at a photo
it's never seen before
and work out whether you should
show that to all ages or not.
Of course, it's not that simple.
There are stories after
stories after stories
of machine learning
systems that have
failed through bad training
data or, more likely,
biased training data.
And this is slightly
outside my ballpark.
I know Hannah Fry covered
this in her lecture
here a while ago.
But I have a very--
I have an example that's
very close to my heart.
YouTube uses the
machine learning system
to try and detect whether videos
are suitable for advertisers
to place their adverts next to.
It was rolled out a little
bit too fast, and before it
was entirely ready,
because YouTube
had one of the many little
scandals that they have,
and they needed to do something
to reassure their advertisers.
So this is a highly
abridged summary
based on unofficial
conversations and innuendo
and scuttlebutt.
I'm breaking no NDAs
here, but the story
goes that they provided
the machine learning system
with a big block of videos
that were definitely
100% safe for
advertisers, and then
they gave it a big block
that were definitely not.
And they told the system
to be fairly conservative,
because it was only a
first line of defence.
If it deemed your
video unsuitable,
you could send it
off to human review.
And this is, in my industry,
a fairly controversial thing
to say, but I don't think
that's an unreasonable solution
to a very difficult problem.
YouTube has 500
hours of content--
500 hours of video.
I'm not saying content.
YouTube has 500 hours of
video uploaded every minute.
That's a human lifetime,
every single day.
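That figure is easy to check from the upload
rate just quoted; the same sum gives the "82
years of video content a day" that comes up
later in this talk.

```python
hours_uploaded_per_minute = 500                        # the stated upload rate
hours_uploaded_per_day = hours_uploaded_per_minute * 60 * 24
years_uploaded_per_day = hours_uploaded_per_day / (24 * 365)

print(hours_uploaded_per_day)         # 720,000 hours of video uploaded every day
print(round(years_uploaded_per_day))  # roughly 82 years' worth: a human lifetime
```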
It is not unreasonable to have
a machine learning system be
the first line of defence as
long as there's a human review
behind it.
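As a rough sketch of that machine-first,
human-second pattern: the scores, the threshold,
and the videos below are all invented for
illustration; this is not how YouTube's actual
system works.

```python
# A deliberately cautious first pass, with humans making the final call.
def advertiser_safety_score(video: dict) -> float:
    """Stand-in for the black box: a probability that the video is ad-safe."""
    return video["score"]  # in reality this would come from a trained model

REVIEW_THRESHOLD = 0.9     # conservative: anything not clearly safe gets held back
human_review_queue = []

def first_line_of_defence(video: dict) -> str:
    if advertiser_safety_score(video) >= REVIEW_THRESHOLD:
        return "monetised"
    human_review_queue.append(video)   # a person makes the final decision
    return "held for human review"

for video in [{"title": "A", "score": 0.97}, {"title": "B", "score": 0.55}]:
    print(video["title"], "->", first_line_of_defence(video))
```

The cautious threshold means the machine mostly
errs on the side of sending things to a person,
which is the trade-off being described here.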
And nowadays, it's
working fairly well
with some high
profile exceptions,
but the problem,
so I'm told, was
that there was a bias
in the training data.
Videos-- sorry,
channels, people,
talking about LGBT
issues were more
likely to talk explicitly about
sex in some of their videos.
Not all of them.
Not the majority of them.
But enough that the machine
learning system figured out
there was a correlation between
people talking about LGBT stuff
and people talking
about explicit sex.
Again, only in a small
number of videos,
but enough that when the machine
learning system found something
to be about being gay, it viewed
it as more likely to be unsafe.
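Here's a toy, invented demonstration of that
failure mode: if a topic happens to co-occur
with genuinely unsafe content in the training
set, a simple classifier learns the correlation
rather than the cause, and perfectly clean
videos about that topic come out looking riskier.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Invented training data. Features: [mentions_topic, has_explicit_content].
# Label: 1 = not advertiser-safe, 0 = advertiser-safe. The explicit content is
# what actually makes a video unsafe, but in this set it co-occurs with the topic.
X = np.array([
    [1, 1], [1, 1], [1, 1],          # unsafe, and about the topic
    [0, 1],                          # unsafe, not about the topic
    [1, 0], [1, 0],                  # perfectly safe videos about the topic
    [0, 0], [0, 0], [0, 0], [0, 0],  # perfectly safe, off-topic
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])

model = BernoulliNB().fit(X, y)

clean_on_topic  = model.predict_proba([[1, 0]])[0][1]
clean_off_topic = model.predict_proba([[0, 0]])[0][1]
print(f"P(unsafe), clean video about the topic: {clean_on_topic:.2f}")
print(f"P(unsafe), clean video off the topic:   {clean_off_topic:.2f}")
# The first number comes out higher: the model has learned the correlation,
# not the cause, so clean videos about the topic get penalised.
```

Nobody told the model the topic was unacceptable;
it just learned the statistics it was given.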
Now, the YouTube CEO said
in a recent interview,
"We work incredibly
hard to make sure
that when our machines
learn something--
because a lot of our decisions
are made algorithmically--
that our machines are fair."
I know some YouTube employees.
I'm friends with some
YouTube employees.
I believe that they work
incredibly hard to minimise
that bias.
But it's still there.
And algorithmic bias
is a major concern
for every single
machine learning system.
The systematic biases
in the wider world
have already found their
way to social media
without machine
learning being involved.
Of the top 10 earning creators
on YouTube right now--
well, as of 2018, as of
last year, of the top 10,
all of them are male.
And I'm well aware that I'm in
the Royal Institution giving
this talk.
One of the reasons
that I got the audience
in the first place,
one of the reasons
I ended up standing
here, is because I'm
a white guy with a
British accent that
sounds authoritative.
Trying--
[LAUGHTER]
Trying to make sure that
artificial intelligence doesn't
inherit these systemic biases
is an incredibly difficult job.
And it's one for Hannah
Fry and her crew,
and not for someone who
got a linguistics degree.
[LAUGHTER]
When YouTube handed over
that recommendation engine
to machine learning, they set
it to increase watch time.
That's what they told everyone.
If people stuck around
watching your video all
the way to the end, and
it was 20 minutes long,
then it was viewed as good
by the system, at which point
they fell foul of
Goodhart's law.
"When a metric
becomes a target, it
ceases to be a good measure."
So people made longer
videos, and they
put all the important
stuff at the very end,
forcing people to watch
all the way through,
so worse videos were
being recommended.
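A toy illustration of that, with invented
numbers: rank purely on minutes watched and a
padded 20-minute video beats a tight 5-minute
one that viewers actually finished.

```python
# Invented numbers, purely to illustrate Goodhart's law as described above.
videos = [
    {"title": "Tight 5-minute explainer", "length_min": 5,  "avg_fraction_watched": 0.95},
    {"title": "Padded 20-minute version", "length_min": 20, "avg_fraction_watched": 0.40},
]

for v in videos:
    v["avg_minutes_watched"] = v["length_min"] * v["avg_fraction_watched"]

# Target the metric: rank by minutes watched per viewer.
by_watch_time = max(videos, key=lambda v: v["avg_minutes_watched"])
# What viewers arguably preferred: rank by how much of the video they stuck with.
by_completion = max(videos, key=lambda v: v["avg_fraction_watched"])

print("Watch-time winner:", by_watch_time["title"])   # the padded 20-minute video
print("Completion winner:", by_completion["title"])   # the tight 5-minute video
```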
So now YouTube's
official line is
that they reward high
quality videos that
keep people on platform.
Now, that may not be
videos on the same channel
by the same creator
in the same genre.
In 2017, Jim McFadden, who was
then technical lead for YouTube
recommendations, he talked
about the new engine
they had that came from a
department wonderfully called
Google Brain.
"One of the key things
it does," he says,
"is it's able to generalise.
Whereas before, if I watched
this video from a comedian,
our recommendations were
pretty good at saying,
here's another one just like it.
But the Google Brain
model figures out
the other comedians
who are similar
but not exactly the same-- even
more adjacent relationships.
It's able to see patterns
that are less obvious."
And as for which videos
to recommend, well,
one of the new
model's basic ideas
is that if someone comes to
YouTube because of your video,
tick, that's a good thing.
And if someone does not leave
YouTube because of your video,
does not stop watching,
that's a good thing.
So the black box
takes in those signals
and it works out, what's going
to keep people on our platform?
What's going to keep
them watching the videos,
and again, more importantly
for Google, watching
the adverts in between.
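Very crudely, you can picture that black box as
a score built from signals like the two ticks
just described. Everything below, the signal
names, the weights, the numbers, is invented for
illustration; the real systems are neither this
simple nor public.

```python
# A caricature of "take in those signals and work out what keeps people
# on the platform", not YouTube's actual model.
def keep_on_platform_score(video: dict) -> float:
    came_for_it = video["sessions_started_here"]   # tick: someone came to the site for it
    stayed_after_it = video["sessions_continued"]  # tick: someone did not leave after it
    return came_for_it + stayed_after_it           # equal weights, purely illustrative

candidates = [
    {"title": "Careful documentary", "sessions_started_here": 120, "sessions_continued": 300},
    {"title": "Clickbait compilation", "sessions_started_here": 400, "sessions_continued": 900},
]
ranked = sorted(candidates, key=keep_on_platform_score, reverse=True)
print([v["title"] for v in ranked])  # the clickbait comes out on top
```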
Incidentally, apparently,
Google also
serves more adverts
to people who
are more tolerant of adverts.
[LAUGHTER]
If you'd like less--
if you'd like two
adverts to appear
less often before your
video, skip them more.
[CHUCKLES] That was bad
advice to give from someone who
makes his money from that.
[LAUGHTER]
Because you've got to remember,
all these big companies,
Google, Facebook, Twitter,
they are essentially
advertising companies.
Almost all their revenue
comes from being the greatest
marketing advertising
targeting company
that the world has ever seen.
Their ideal is that every advert
is perfectly targeted to you.
And as all of you will be aware,
they haven't got there yet.
But it turns out if
you reward videos
that keep people on platform,
then what you end up with
is conspiracy theories
and clickbait.
And yes, all, all of the
companies that have algorithms
are working on fighting
disinformation,
because they are aware that
it is a public relations
disaster for them.
But they are doing
it in English first.
I'll get back to that later.
From a creative
perspective, the algorithm
is often seen as being a
bit like a Skinner Box.
It's an operant
conditioning chamber.
It is a food dispenser
that might give you
some money if you
tap the lever enough.
Google and YouTube
and Twitter will never
tell you what that
algorithm is looking for,
because every bit of
information they give away
means that there is more
opportunity for people
to abuse it and send spam.
But from my perspective, the
pellets come out at random,
and we all develop
superstitions, and we all
keep pushing the lever.
I was lucky enough this
year to have a conversation
with the head of product
for recommendations
at Google, the person in
charge of the algorithm.
And it seems like
the idealised version
of what they want for that,
the sort of recommendations that
appear next to a video, is that--
it's a bit like-- and this is
something that shows my age.
It's a bit like a TV Guide.
It's a bit like the Radio Times.
Every single channel
should always
have something on it that
is interesting to you,
all the time.
It should be a transparent
glass layer between the audience
and what they want to see.
And the question is, as they're
weighing up all those videos,
how much do they put their
finger on the scale?
How much do they
make that TV Guide
be for the worthy, honest,
truth-seeking version of you,
and not the clickbait
conspiracy version of you?
Because ultimately,
yes, sometimes you
want to watch a documentary,
but sometimes you
want to watch someone trip
over and hurt themselves.
Like, You've Been Framed
existed for a reason.
And I'm not just
talking about YouTube.
I'm talking about Twitter.
I'm talking about every single
algorithmic system out there.
If all they're
recommending is quick,
short dopamine hits that
just get you in, get you out,
then that's not a long
term survival strategy.
That spirals down into the
lowest common denominator,
which ultimately
hurts the world.
But if they don't have
some of that in there,
then people are going
to go elsewhere.
They're going to go to the
company that does have that.
If everything is painstakingly
verified and educational,
then only a minority of
folks are going to watch it.
Finding that balance,
finding that solution--
I said there were going to
be analogies to older media.
That's what TV
commissioners still do.
It's what the people programming
the YouTube algorithm are
trying to do.
And let's be clear, it's
an unsolvable problem.
There is not some
magical equilibrium
in the middle that
will make this work.
It's about finding a balance.
It is about finding
the least worst option.
You cannot have a successful
platform that is all clickbait,
but you also can't have a
successful platform with no
clickbait, because either way,
advertisers are going to leave.
Viewers are going to tire of it.
And it's not
sustainable, long term.
There have been plenty
of investigations
into the effects of the
algorithm, plenty of research,
formal and informal, that showed
how you could very, very easily
go from something apolitical,
and then click again and find
something just
slightly political,
and then click again and then
find something a little bit
clickbait, but still honest,
and then maybe something that's
about moderately conservative
politics that's a little bit
untrue, and then
on the next click,
find something about why
Hillary Clinton is evil
and Donald Trump is the
greatest thing to ever happen
to the universe, or vice versa.
The most notable,
recent investigation
into online radicalization
is this deep dive
by The New York Times.
And like I say, the companies
are working in English first,
because The New York Times
looked into radicalization
in Brazil, and it includes one
of the most sobering paragraphs
I've seen in a while.
"Right-wing YouTubers had
hijacked already-viral Zika
conspiracies, and added a twist.
Women's rights
groups, they claimed,
had helped engineer
the virus as an excuse
to impose mandatory abortions."
Quoting Zeynep Tufekci, who
was referring to research
in the Wall Street Journal.
"Videos about vegetarianism
led to videos about veganism.
Videos about jogging
led to videos
about running ultra marathons.
It seems as if you
are never hardcore
enough for YouTube's algorithm.
It promotes, recommends,
and disseminates videos
in a manner that appears to
constantly up the stakes.
Given its billion
or so users, YouTube
may be one of the most powerful
radicalising instruments
of the 21st century."
I worry quite a lot about
how complicit I am in that.
Ben McOwen Wilson, he's
YouTube's UK managing director,
had an interview.
And obviously, he
disagrees with that.
He says that the platform
reduces the spread of content
designed to mislead people and
raises up authoritative voices.
Note that he didn't say
that those voices were true
or that those
voices were correct.
He said they were authoritative.
There's an old
Jonathan Swift quote,
and it took quite
a lot of research
to prove that this actually
was a Jonathan Swift quote.
"Falsehood flies, and the
truth comes limping after it."
Put up your hands.
Who saw this tweet?
A few people?
Yeah, there's about 10
or 12 in the audience.
I mean, look, it
did great numbers.
The idea that with Marmite, you
lay the bottle on its side
to get it out.
That did great numbers.
Who here saw the confession
and the retraction?
[LAUGHTER]
All right, about
half as many people.
Good.
Sure, that's pretty
much harmless.
By the way, that is
now the Marmite guy.
That's what he's known as.
Everyone assumed that that had
been fact-checked by someone,
because it's not that
important, right?
But everyone always
assumes that,
particularly if it agrees
with our preconceptions.
I mean, the long
term solution is
to teach information literacy
in schools, but as a--
[CHUCKLES] as a real
world education policy,
that's about as
useful as saying,
we should reduce our
carbon emissions.
It's true.
How do we get there?
I don't have the answer to that.
Douglas Adams in his novel Dirk
Gently's Holistic Detective
Agency came up with the idea
of a fictional bit of software
called Reason, which-- it's
a program which allowed you
to specify in advance which
decision you wanted it to reach
and only then give
it all the facts.
And the program's job--
the program's job, which it
was able to accomplish with
consummate ease, was simply to
construct a plausible series
of logical sounding steps
to connect the premises with
the conclusion.
In the novel, it was sold to
the Pentagon, highly classified,
and it explained why they
spent so much on Star Wars.
Missile defence.
If you would like to be
convinced of a thing,
YouTube and Twitter and
all the other networks
will happily find you
people to convince you.
If you've lost
your faith in God,
then there will be dozens
of evangelical preachers
from all sorts of denominations
who will happily bring you back
into the fold, and
dozens of angry atheists
who will make sure
you stay out of it.
Take your pick which
way you want to fall.
If you want to know
who's to blame for what's
wrong with the world
right now, then I
can find you hundreds
of people who
will get you apoplectically
angry at billionaires
or immigrants or both.
Whichever way you
want to go, there
will be someone authoritative
to tell you about it.
So how do we define
that authority?
There used to be gatekeepers.
Or at least, it seemed like
there used to be gatekeepers.
Science communication was
in magazines and television
and radio, and it
meant that there
wasn't any significant peer
review system out there,
but you knew that
there were researchers
at the back end and
standards and quality
to keep these things
fairly accurate.
There were professionals.
There was David Attenborough.
There was James Burke,
and a lot of other men--
and they were all men--
who were all given authority,
with the BBC to back them up.
Which is certainly true,
but for every Planet Earth
or Connections,
somewhere on the channel
there would be an Ancient
Mysteries or an In Search Of
or Ancient Aliens.
The Bible Code became a
phenomenon in the 1990s.
This was the idea that there
were certain hidden messages
in the scriptures that could
be tracked, if you just
went through it exactly
the right number
of letters in a row.
The Daily Mail loved that.
They put it right on
their front page, a splash
above the headline.
Not because it was
true, but because they
knew it'd sell newspapers.
We had gatekeepers.
We definitely did
have gatekeepers,
but I'm not convinced that
from the public's perspective
it was all that different
to what we have today.
Authority online often comes
from having an audience.
You know, why am I standing
here giving this talk today?
It's not because I've done 15
years of painstaking research
into this stuff and
that I am now presenting
my doctoral thesis to you.
It's not because I have a
particularly broad range
of knowledge or
depth of knowledge.
It's because I've worked with
the Royal Institution before.
They, you know-- you
know I can present.
I wouldn't pretend to know
your motivations, Shaun,
but I think it's
safe to say that I'm
partly here because you
knew you could sell tickets
and you knew you'd get
some clicks online from it
for having me
associated with this.
I'm not saying that's all of it.
I'm saying that was
probably a consideration.
There is a reason that every
major British documentary
about space for
the last 15 years
has been presented by
Professor Brian Cox.
[LAUGHTER]
Except it's not just
documentaries about space.
That's Wonders of Life.
That was about natural history.
I'm sorry, I'm just
going to take a moment.
Oh, good.
There he is.
[LAUGHTER]
That joke would've landed a
lot better if that had come up
at the right time.
[LAUGHTER]
Can we just take a
minute to appreciate
how bad the Photoshop
job is, by the way?
The lighting doesn't match.
He's got a halo around his
head, and that right arm
isn't even cut out properly.
It's an official
BBC Press Photo.
Anyway, this was billed as
a physics-based approach
to natural history.
And yes, it did cover some
pretty advanced physics
concepts, but let's be
honest, from a television
commissioner's perspective,
it's a bit of a reach.
It was almost
certainly an excuse
to provide Brian
Cox on television
to the audience who wanted
him, to get those ratings in.
I'm sure it was well-researched.
I'm sure they did a
really good job of it.
But Wonders of Life with
an extremely qualified
dual biologist and
astrophysicist no one's heard of
would not have had as
many people watching it.
It would've been much
harder to commission.
How much is it
worth to get someone
less qualified but
known to the audience
rather than the best
person for the job?
Sister Wendy was not
some great art historian.
Her degree was in English.
But in the 1990s, she presented
five BBC series on art history
because audiences liked her
and would listen to her.
In her obituary,
The New York Times
said, "Her insightful,
unscripted commentaries
connected emotionally
with millions."
Danny Beck, a Norwegian
neuroscientist,
recently said this.
"Your desire for a platform or
interest in several sciences
should not supersede your
responsibility and ethical duty
to not speak on topics
you are not qualified to.
There exist others who know
better and can do the job.
Consider amplifying
their voice instead."
And I agree with that, which
is why the rest of this lecture
will be present-- no, it won't.
It won't.
[LAUGHTER]
But just for a moment,
you believed me,
and how did you
feel about paying
to come here and see that?
Like the early videos
in my educational series
were the stereotypical guy
who hasn't done his research
and thinks he knows everything
spouting unsourced facts.
There are still a heck
of a lot of people
who stand in front
of a green screen
or use voiceover
and stock footage
and provide no citations
and no references
and just ask people
to trust them.
I learned quickly
not to do that,
but the trouble is, it's sort
of what the audience want.
A loyal audience, whether
it's on YouTube or Facebook
or Twitter, does not want to see
someone else's voice amplified.
If I hit the retweet
button on Twitter,
I just take someone
else's message wholesale,
with their face, and send
it out to my audience,
and few people will pass it onwards.
If I quote tweet it,
if I take their message
and I add a bit of useless
commentary around it
with my face, that
goes much further.
Way more people interact.
Way more people pass it on.
YouTube gives creators
retention statistics.
We get to see, on average, where
people stay and leave a video.
I will note that that
chart does go up to 120%.
[LAUGHTER]
Couple of reasons for that.
One is that sometimes people
go back and watch a bit again.
That counts twice.
That can go above.
But also, it's because
YouTube's graphing API just
isn't good at percentages.
[LAUGHTER]
There's a big drop off in the
first few seconds up there.
It's as people go, oh, I don't
like that, or I don't like him,
or this isn't what I thought.
There's a big drop-off
at the end, what's
now called the end card and
was once called the credits.
But apart from that,
just steady slope
downwards as people get bored.
And the algorithm will
look kindly on you
if that slope is less steep.
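For anyone curious how a chart like that can
read more than 100%, here's a minimal sketch
with invented watch sessions: a rewatched
stretch is counted again, so it can climb above
the number of viewers.

```python
# Invented watch sessions for a tiny 10-second "video", to show how a
# retention curve is built and why rewatching pushes it past 100%.
video_length = 10  # seconds

sessions = [
    [(0, 10)],          # watched straight through
    [(0, 4)],           # got bored and left early
    [(0, 10), (2, 5)],  # watched it all, then rewatched seconds 2 to 5
]

views_per_second = [0] * video_length
for segments in sessions:
    for start, end in segments:
        for second in range(start, end):
            views_per_second[second] += 1

for second, views in enumerate(views_per_second):
    retention = 100 * views / len(sessions)
    print(f"second {second}: {retention:.0f}%")
# Seconds 2 and 3 come out at 133%: the rewatch counts twice,
# which is how the chart can go above 100%.
```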
But if I go
somewhere on location
and I hand over to
experts and experts
carry most of the video,
that graph will be steeper
and retention will be lower.
People will get
bored more quickly,
and they'll show the video less.
So how much do you
pander to your audience?
How much do you tell them what
they want to hear, particularly
when on YouTube
there is literally
a dollar amount
attached to each video
and attached to each of you?
How much do you make
what your audience
wants at a social cost?
You know, it's not
just the viewers
being sent down a rabbit
hole of radicalization.
It's also the creators.
When you look at clickbait that
brings in numbers and money,
it can be very, very tempting
to just cut your losses,
double down on the
clickbait, and accept that,
well, make money
while it's coming in.
Make hay while the sun shines,
because in a couple of weeks,
it could all go away.
But the light that burns twice
as bright burns half as long.
In 1988, Jimmy Cauty and
Bill Drummond, The KLF,
wrote How to Have a Number
One the Easy Way, the Manual.
They'd had a
novelty pop song hit
the top of the charts
a few months earlier,
and The Manual was a
tongue-in-cheek guide
to success in the
music industry,
how to achieve a number one with
no money and no musical talent,
which they said themselves
that they'd managed.
Almost all of it
is out of date now.
There's a wonderful section
a couple of chapters in which
says that by the
mid-90s, someone in Japan
will have invented
a machine that
lets you do this all from home
without a recording studio.
That was right.
But at the very
start, there's a little bit
that is as
true now as it was
when it was written: "The
majority of number ones
are achieved early in the
artist's public career,
before they've been able to
establish reputations and build
a solid fan base.
Most artists are never able
to recover from having one,
and it becomes the
millstone around their necks
to which all subsequent
releases are compared.
Either the artists will be
destroyed in their attempt
to prove to the world
there are other facets
to their creativity, or they
succumb willingly and spend
the rest of their lives as a
travelling freak show peddling
nostalgia for those now
far-off carefree days."
The Cheetah Girls.
They wrote that 10 years
before the Cheetah Girls.
They wrote that 10 years,
20 years before YouTube
came along, and it's exactly
true for people online.
A number one does
not create a career.
It can kill a career.
If you've just got
one hit, all the world
will want to see
is that one thing.
A single video does not
make you a YouTube star.
A single tweet does not
get you a book deal.
But to be honest, that hasn't
happened in five years anyway.
A number one just
makes you the person
who repeats that catch phrase
until everyone is tired of it.
What you want to do is build
up a catalogue of minor hits
over time, get a little bit
of respect, hone your trade,
learn your craft,
and then once you've
got a bit of an audience, then
you can start aiming upwards.
Bands and singers in
the music industry
are actually a
really good analogy
for how the internet
works right now.
After all you know
Brian Cox does
arena tours and world show--
arena shows and tours
around the world.
For the folks out there who are
hoping to communicate science
to the world, it is
like starting a band
or launching a music career
or going out solo and singing.
It will not pay the rent for
the first few years or decades,
or maybe even ever.
And if you are one of the
lucky ones, if you make it,
it will not set you up for life.
And if you are an organisation
trying to get your message
out-- well, I used to think that
people had a built-in corporate
baloney detector.
I used to think that
anything that was put out
by an institution or by
an advertising agency
was doomed to fail just
because it wasn't authentic.
And that's not true.
That's not true at all.
You remember what
I said earlier?
500 hours of video per
minute, not watched, uploaded.
You know, 82 years of video
content a day, and most of it
took almost no time
and effort to produce.
But every one of those videos
has more or less the same
chance as that project that your
organisation spent six months
and a million dollars on.
The most popular recurring
series that I do right now
is very simple.
That's sped up, obviously.
It is a 10 minute,
one take monologue
to the camera about
computer science.
The camera does not move.
The camera does not cut away.
There are none of those jump
cuts that a lot of people use.
I can now completely
understand why they do them.
It's a lot easier.
There are occasional
graphics, sometimes,
but mostly it's just
me and the camera.
And that breaks all the rules
established by television
when I grew up.
It breaks all the rules
that corporate media types
think that they need
to make this work.
It's not about spectacle.
It's about people.
To explain that, I need to talk
about parasocial relationships.
The term parasocial
relationship was
invented by these two,
Donald Horton and Richard--
Wohl?
"Vohl?"
Should really have
fact checked that.
[LAUGHTER]
The term means that
there is a difference
between the spectator, the
viewer as we now call them,
and the performer, what
we now call the creator.
The spectator is
emotionally invested
in the person on screen,
but the person on screen
has no idea that the
spectator exists.
And if they do, there's
such a power imbalance there
that it couldn't
possibly be a friendship.
Parasocial relationships
are not a new thing,
and neither is turning
them into money.
Any celebrity that ever
had an official fan
club with a membership fee
was doing exactly that.
Actually, in the late
'80s, early '90s,
there was a fad for celebrities
to set up phone numbers that
were their personal number
or their personal voicemail,
and some of them made
a lot of money from it.
That's Corey Feldman
and Corey Haim.
And if you think I didn't
track down the advert
to play it halfway through to
get everyone's attention back,
you're completely wrong.
[VIDEO PLAYBACK]
- You can listen to their
private phone messages
and get their personal number
where you can leave them
a message of your own.
$2 the first minute,
$0.45 each additional minute.
Ask your parents
before you call.
[END PLAYBACK]
Yeah.
[LAUGHTER]
Haven't tested the number.
I don't imagine
it works anymore.
But those celebrities
didn't have Twitter.
Twitter is effectively
someone's personal number.
If you know that a celebrity
is always on their phone,
always typing on their computer,
sending their thoughts out
to the world, well,
why not reply?
Send them a message.
They might notice it.
They might actually
reply to you, personally.
You know, you might get
attention from them.
And suddenly it's not a weird
parasocial relationship.
They're your friend.
More than that, because
it's on social media,
you can try to get
that attention.
You can try to get that
over the top fandom.
But it's also performative.
It can also be competitive.
All the fans can now see
each other doing this.
I have a couple of
friends who have
that terrifying
Beatlemania-esque fandom,
that kids screaming
at Take That concerts,
except they're not just
folks around magazines
now in small groups.
They're not kids who
come for a concert
and scream at their
idols and then disperse.
They have notifications on for
when their idol tweets so they
can get the first reply in.
They have group chats
where the only thing
the people in them
have in common
is that they're all
fans of this person.
If you've ever wondered why
kids would rather sit and watch
a stream of someone
playing a video game
rather than just play
the game themselves,
it's not about the game.
It's about the
person playing it.
A stream of a video game on
its own isn't interesting,
but someone you know or
someone you think you
know playing a video game,
just hanging out with a friend,
with a chat next to it.
In that chat, that's a lot
of friends hanging out.
You're all just
hanging out together,
except that one of you has a
lot more power and influence
than the others and often is
indirectly asking for money.
You'll notice that I'm not using
any slides during this bit.
I don't want to call out any
particular individuals for what
is basically just hustling.
Patreon and Twitch subscriptions
and YouTube memberships,
all these tools they
have to raise money
for individual people are
not inherently a bad thing.
Patreon has meant that
science communicators are
able to support themselves
despite the fact
that sometimes their content
is not advertiser-friendly.
So people who are talking about
sex education or mental health
or ancient weapons can all
get money for their content
and perhaps even hold-- not have
to hold down a separate job,
because individuals
out in the world
have thought this
should exist, and I
am willing to donate
money to make that happen.
Animators, writers,
podcasters, the sorts
of people who work
incredibly hard
and are only able to make
what they do because people
have chosen to support them.
That is a brilliant thing.
But when it starts
to get unsettling,
when it starts to get
a little bit weird,
is when it becomes not about
supporting someone's craft,
but about selling friendship.
And if you think it's weird
when I put it that way,
yeah, you should.
It's really weird.
If you watch one of the
really popular video
game streamers on Twitch,
which-- the platform that is
just video game
streaming, you'll
see that they're
almost always talking.
They're watching the chat.
They're reacting to the
messages that are coming in.
They're reading them out loud.
They're replying to them.
They're calling out the
names that they've seen.
They're greeting people
who've been hanging around
in that group for
a long, long time.
They are being friendly and
open and on for hours at a time,
performing exhausting
emotional labour.
They will thank anyone
who sends them a tip,
or even better yet, subscribes,
because in a world where
Netflix cost $13 a
month, subscriptions
to a single person on Twitch
can be $5 or $10 or $25 a month.
And depending on
what you pay, that
might affect what
perks you get back
and what attention you get.
And if someone does repeat
their subscription, join again,
then they can choose to
announce it to the whole stream,
a little animation saying how long
they've been subscribed for.
You'll see people on
Twitch say, hey, so-and-so,
thanks for being part of
the cult for four years.
That's literally language I
heard while researching this.
That seems normal to anyone
embedded in that culture.
Now, if you're a
science communicator
it won't be quite that much.
But it might get you
behind the scenes access.
You know, you might get to see
someone's videos early and put
some comments in, or get
your name in the credits.
It might give you access
to a special members
only private chat room, or
if you're giving someone
maybe $50 a month,
maybe there'll
be a private video
chat with the creator,
just for the folks who are
spending that much money.
Maybe that is OK.
Maybe that's the way that
social norms are going now,
and I'm the old guy looking
at that going, what on earth
are the kids up to?
That might be the case.
But it can be essentially
selling friendship.
And again, it
doesn't have to be.
There are people
who use it purely--
a lot of people who
use it purely as a way
of funding their work.
But here is some of
the advice that Patreon
gives on how to get
more people signed up
for your monthly subscriptions.
"Bring your audience along
for the ride by sharing pics,
videos, and anecdotes
from your life.
Get vulnerable, within reason.
An emotional connection
to you as the creator
can be key in converting
a fan to a patron."
An emotional connection
can be key in converting
a fan to a patron.
There is a very,
very blurred line
between being a fan
of someone's work
and being a fan of someone.
And that's a line I find
really uncomfortable,
because in part,
my brain doesn't
do parasocial relationships.
It never has.
Maybe there's something
wrong up here.
But there is, to me,
a huge distinction
between I'm a fan
of X's work and I'm
a fan of X. I know
linguistically they
can be shortened, but those are,
to me, very separate concepts.
I like the work of Derren
Brown, mentalist, magician.
I have shamelessly cribbed
some of the techniques
that I use in talks,
some of the tricks I--
not like magic tricks, but
some of the rhetorical tricks,
shamelessly from his live shows.
The idea of setting
something up at the start
and letting the audience forget
it and then bringing it back
at the end and revealing it
is the key to the whole thing.
I have blatantly ripped off
Derren Brown more than once.
I thoroughly enjoy his work,
and he's a great entertainer.
But I don't give a
damn about the man
himself, because
I don't know him.
He's a stranger.
Parasocial relationships
and everything about the way
that these one-way,
one-to-many relationships work,
blur that line in the
service of greater profit.
I have a strong memory from
when I was a kid, maybe
about that tall, being asked
in school to write something
about a personal hero.
And I didn't have any.
With the benefit
of adult hindsight,
obviously the cop-out option
was to talk about my parents,
but as the kids around me wrote
about sporting heroes or actors
or whoever, I just
sat there stumped.
And it wasn't until I was much
older that I realised that, to most
people, liking someone's work
and liking someone are the same
thing.
And that was blindingly obvious
to most people, I'm sure,
but to young me, it
was a revelation.
There.
That has revealed something
personal and vulnerable
within reason.
That's helping create the
emotional connection--
[LAUGHTER]
--between me and the audience.
Tick.
[APPLAUSE]
The collection buckets will
be by the door on the way out.
[SIGHS] Guys, it's like a cold
breeze just came into the room.
That was brilliant.
So why have I covered
that in so much detail?
What does all that have to do
with science communication?
It was worked out very early
on in television history
that the people who were
good at getting an audience
were not the people who went,
(LOUDLY) ladies and gentlemen.
They were the people who
went (SOFTLY) hey, hello.
They weren't the
people who went,
(LOUDLY) how are we
all doing tonight?
They were the people who looked
down the camera and said,
(SOFTLY) how are you?
It's the difference between
talking to the audience
and talking to the viewer.
There is a difference between
a nature documentary with stock
footage and a voiceover and
a David Attenborough nature
documentary.
There is a difference
between Wonders of Life
and Brian Cox's Wonders of Life.
Sound quality, factual
accuracy, video quality,
they all matter, but
not nearly as much
as having someone on screen who
the audience can connect with.
My friend Doctor Simon Clark
vlogged his PhD at Oxford.
For more old school
people in the audience,
vlogged essentially
means video diaried.
Simon now has a doctorate
in atmospheric physics.
In 2018, he stopped making
personal videos about his life
in favour of science
communication,
and he wrote this post
about why and how.
And I sort of quote
a little bit from it.
"The motivation of
watching someone struggle
with the monumental task
of researching a PhD
was mostly what attracted
viewers to watch.
I was the product.
By that, I mean that my
lived existence on earth
was a commodity, something to
be bottled, refined, and sold."
Simon's recent science
communication videos
are really good, but some of his
audience didn't stick around.
As he changed from
talking about his life
to talking about
his work, he has
had to build a new
audience who are
interested in that
post-academia career of his
and about the subjects
he's interested in now.
He's getting there.
He's doing really well.
Doctor Clark is in the
minority, because he's qualified
to talk about a subject.
For every one of him,
there are countless people
speculating or repeating
misguided facts
or just flat-out lying or
trying to shill essential oils
by claiming they cure cancer.
You would hope that it's the
people like Doctor Clark that
would be authoritative, but
often that's not the case.
Authority frequently comes
from having an audience,
and having an
audience comes all too
often from that parasocial
emotional connection
with people.
If you are going to try to
talk science to the world
as an anonymous voice or a
corporation just saying words,
it doesn't really matter how
well-cited your sources are
or how groundbreaking
your research is.
You have to tell people about
the human story that's in it,
that's preferably your own.
And to a certain extent,
you have to be parasocial.
So you may think, well, OK, on
television we had gatekeeping.
We did have that.
There were certain
standards, surely.
It may well have been about
parasocial relationships.
There were occasional failures,
but at least they were people we
could relate to, even if they
weren't technically qualified.
And again, to be clear,
most of the people who
are doing this today
are extremely qualified,
and even if they're not,
there is a whole team
researching behind the scenes.
But it's still the medium that
gave us Most Haunted and Ghost
Hunters.
And even Most
Haunted, you weren't
watching because you were
interested in ghosts.
You were interested in
watching the presenter [YELPS]
at something that wasn't there.
And the online
world is often seen
as this uncontrolled,
unmediated place where anyone
can say anything about
anyone, but the last few years
have shown us that
that's also not the case.
Which brings me to
the last main part
of this, which is about
echo chambers and Nazi bars.
The final piece of the
puzzle, working out
why some lucky broadcasts
go around the world
and some don't, is talking about
the people who passed them on.
It's the groups in
which someone can
choose to amplify your
voice or condemn it,
where it's passed
on, person to person
to person, group
to group to group.
And some things are
passed on because they're
interesting or entertaining and
for no more reason than that,
but often it's because they
support existing views,
because it's reinforcing
the in-group,
or because it's
diametrically opposed to
those in-groups of
viewers, and they
can bond over despising it.
Up there are the two extremes
of online moderation.
Let's talk about
the Nazi bar first.
This is what happens
when a lot of sites
are set up to be this sort
of bastion of free speech,
where anything legal--
and by that, they usually mean
legal under United States law--
anything legal is free to post.
And you see this
set up by the sort
of well-meaning libertarian
tech bros out of Silicon Valley.
Reddit, which is one of the
major hubs for discussion,
at least among tech
savvy Americans,
is perhaps the
canonical example.
They were set up as this
perfect bulwark of free speech.
If it is legal, Reddit
will let you say it.
Small groups in there
might have their own rules,
but other than that
it is a meritocracy.
The best ideas will
rise to the top.
At which point, inevitably,
the Nazis moved in.
And that's not just a label.
I'm not just slandering people
with right-wing views there.
I mean literal
modern day neo-Nazis.
And unlike most-- like
many European countries,
the US does not have a
law against incitement
to religious or racial
hatred, so that was legal.
So Reddit let them play.
Free speech, they said,
could be countered
with more free speech.
As you might expect, that
lasted until it started
to affect their bottom line.
When advertisers were finally
starting to have trouble
with them, there was
finally a crackdown on the
overtly, staggeringly racist
discussion there.
This was the quote from one
of Reddit's co-founders, and
it's kind of astonishing.
"We didn't ban them
for being racist.
We banned them because we had to
spend a disproportionate amount
of time dealing with them."
[LAUGHTER]
This isn't exclusive
to Reddit, by the way.
Facebook's moderation has been
similarly lax and inconsistent,
and somehow they've
mostly got away with it.
The inevitable
conclusion of let anyone
say anything is that the
worst people, having finally
found a place that
will let them in,
start to drive out the
more careful and cautious.
So the discussion swings
a little bit more towards
their views, which means
more moderate people leave,
so it swings a
little bit that way,
and the cycle continues and
continues and continues until
eventually you realise that
either the worst people survive
or maybe the moderators might
want to kick out the Nazis.
This is the analogy to the
Nazi bar, the local pub.
Might be the greatest
place in town,
but if they let the Nazis
meet in the basement,
you're not going to
want to go in there,
or at least you're
not going to tell
your friends you go in there.
In 2015, Reddit conducted
a survey of its users.
They found the number one
reason that users do not
recommend the site, even
though they use it themselves,
is that they want to
avoid exposing friends
to hate and offensive content.
So let's look at the other
extreme, the echo chamber.
For better analysis
of this, I would direct you
to the work of Walter
Quattrociocchi and the folks
he works with from the
Laboratory of Data Science
and Complexity at Venice's
Ca' Foscari University.
I've mispronounced
one of those words.
I don't know which.
In an echo chamber, there is
definitely not free speech.
No dissent is allowed.
You see that in places
like Facebook groups
for multi-level marketing
schemes and anti-vaxxers
where everyone has to buy
in to the group's philosophy
or else be branded
a shill or a hater.
Anyone with a
dissenting opinion is
shouted down by a large
crowd, all of whom
support each other
in their views,
whether that view happens to
coincide with reality or not.
And if all dissent
is banned, you
end up with a similar problem.
The most obsessed, the most
extreme radical believers,
chase out the people
who aren't so sure,
so the discussion,
on average, starts
to tolerate more extreme
obsession and less dissent.
And the cycle continues and
continues and continues.
If those two failure modes
sound similar, they sort of are.
Both of those
extremes are harmful.
And note, I am not talking
about political alignments here.
I am talking about the
extreme edges of the policies
that either allow anyone
to say anything or allow
no dissent whatsoever.
And every major company
that enables discussion
has to pick where they sit,
somewhere along that scale.
So this here's a post from a
Florida-based natural medicine
clinic that is selling
homoeopathic vaccines.
I've anonymized them as best
I can for obvious reasons.
Their phone number
shouldn't be up here.
I think all of us here can
agree that this is dangerous,
and there will be wide-ranging
views on whether that
should be legal.
I will say, they did
get part of it right.
Those are definitely
safe for ages above 5.
There is a concept on
Twitter called the ratio.
It's the number of replies
you get, to the number of likes,
to the number of retweets.
If your retweets
number is biggest,
you have made something that
has resonated and should
be signal boosted and
sent out to the world.
If your likes number's
biggest, you've
said something heart warming
or personal or emotional
that people want
to sympathise with,
but maybe don't want to
send on to their friends.
If your replies
number is largest,
like at the bottom
of that tweet,
you've probably said
something that a lot of people
disagree with.
And a response like that
is called getting ratioed.
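The rule of thumb just described, written out as
a toy function; the counts are invented.

```python
def twitter_reception(replies: int, likes: int, retweets: int) -> str:
    """A caricature of 'the ratio': compare the three counts on a tweet."""
    biggest = max(replies, likes, retweets)
    if biggest == retweets:
        return "resonated: people are passing it on"
    if biggest == likes:
        return "sympathetic: people agree but aren't sharing it"
    return "ratioed: a lot of people are pushing back"

print(twitter_reception(replies=132, likes=4, retweets=2))   # ratioed
print(twitter_reception(replies=3, likes=40, retweets=200))  # resonated
```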
That homoeopathic vaccine
tweet was thoroughly ratioed.
Those 130 replies are all people
mocking it or occasionally
clearly laying out why
homoeopathic vaccines
are a bad thing, along with
the scientific consensus.
I think we can agree at
the Royal Institution,
it's probably a good idea
for homoeopathic vaccines
to get some pushback.
But here's the problem.
132 people suddenly replying
to that company who usually
gets literally zero
replies to anything
is algorithmically
indistinguishable
from a mass abuse pile on
targeting a vulnerable person.
If some awful person with a
moderately sized following
says, hey, I hate that guy,
go and mock him,
then machine learning systems
cannot tell the difference
between abuse and a company
getting pushback for selling
homoeopathic flu shots.
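To see why those two cases look identical from
the outside, here's a deliberately naive pile-on
detector that only sees reply counts, which is
roughly the position an automated system is in;
the accounts and numbers are invented.

```python
# Both situations trip the same alarm when all you have is counts.
def looks_like_a_pile_on(usual_replies_per_day: float, replies_today: int) -> bool:
    return replies_today > 20 * max(usual_replies_per_day, 1)

accounts = [
    ("vulnerable person being mobbed", {"usual": 1, "today": 140}),
    ("clinic selling homoeopathic vaccines", {"usual": 0, "today": 132}),
]
for name, a in accounts:
    print(name, "->", looks_like_a_pile_on(a["usual"], a["today"]))
# Both print True: on reply volume alone, the signals are the same.
```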
Any policy decision
that is designed
to reduce abuse on Twitter to
stop vulnerable people being
mass targeted, which
is vital and needed,
is also going to help
the snake oil peddlers.
I don't know where to draw that line.
Policy decisions about
community standards
are often seen about
drawing that line somewhere
between the echo chamber
and the Nazi bar,
but there's this idea that
if the company just nudges
the line a little bit that way
and nudges it a bit that way,
there will be a perfect solution
that keeps everyone happy.
I'm really sorry to say
it, but it's not true.
Echo chamber and Nazi bar
are not two polar opposites.
They are both on a gradient, and
they overlap in the middle.
You cannot choose
the best option.
You can only choose
the least worst.
Slight tangent, but I
think that's due in part
to the centralisation of
the web that's happened
over the last 20 years or so.
In the early 2000s, every
discussion site out there
was on a different server
run by a different person,
maybe in a different
country with completely
different rules.
And that way, it sort
of reflected the systems
we've got now in real life.
You know, one of
those sites might well
allow vulgar abuse and
back and forth argument.
Another of those sites might ban
someone for even mild swearing.
And this is how it works in
the real world, you know?
There is a big difference
between the conversation
that football fans have,
chanting as they go into the stadium,
and the conversation in
the hallowed halls of the Royal
Institution.
I mean, I assume
there is, Shaun.
I haven't been here after
the last Christmas Lecture.
I guess it gets a little bit messy.
But that difference in register
and social norms is
true for a lot of the bubbles
within Twitter and
Facebook and YouTube.
You can have community norms
within smaller subgroups,
but all those subgroups
are centralised
on platforms run by enormous,
mostly American corporations.
So the community standards have
to be standardised through all
of them, and they
have to be what
the corporation or their
advertisers or VC backers
will support.
You know, you cannot
establish massive,
cross-platform policies that
allow every type of discussion.
Or in the case of
YouTube comments,
any type of
discussion whatsoever.
The problem is that while
the communities can be small,
the platforms they're on are
too big and too centralised.
Because, you know, some people
just kind of go on there to talk.
It's like, ah, it's a
nice little coffee shop.
Friendly space.
Some people I know.
You know, I just
talk to my friends.
And then suddenly, boom,
you know, they get this attack
from these other people
who use Twitter as this
massive shouting-match forum.
They will search out anyone
using this particular hashtag,
anyone with these
opinions, and they
will shout at them,
because that's
the way they use the platform.
And a few people try
to federate this.
It's called-- there's
a network called
Mastodon, which is
basically Twitter
on several different servers.
You know, they all have
their own different rules
and they all talk to each
other, but running a server
is complex and expensive.
And joining Twitter
is free and easy.
Why would you not do that?
There is something called
Discord, which is the closest,
I think, there is to
those old web forums,
those old bulletin boards.
Each discussion
section is locked off
and private, and just separated
from the rest of the world.
Sounds like a great plan.
Sounds brilliant.
[LAUGHTER]
Sounds great.
And you know, it might work
for a lot of small groups,
but it still has a
single, federated--
excuse me, not federated.
A single sign on across
the whole network.
Unintended consequences are
rife if you try and play
about with this stuff.
YouTube recently had
an algorithm change,
which, you know, tried to
raise up authoritative voices.
Suddenly, if you were watching
videos about climate change,
then you might be sent
to someone like the Royal
Institution's videos, which are
about the scientific consensus,
which means that suddenly, all
the people who were climate
deniers, who were already
entrenched with their views,
were being sent to videos
like the Royal Institution's.
And suddenly, underneath
each of those videos,
there are ill thought out,
unscientific comments.
And as a viewer, you can just
scroll down a bit and go,
ah, yeah, these are my people.
They're the ones that are right.
Just like you might
if the algorithm had
determined that your
fundamental beliefs were wrong.
Which brings us
back to the start.
I'm pretty sure that the person
running that homoeopathic flu
clinic genuinely
thinks that they
are doing good for the world.
Their fundamental beliefs
are at odds with reality,
but that's never stopped
people believing things.
People may take what is
just a saline injection,
and they may think
that it's effective,
and they may spread flu, and
that is, in the worst case, lethal.
And I'd argue that in a perfect
world, those tech companies
that facilitate that discussion
have a moral imperative
to reduce or remove
messages like that.
But it can't be that clear
cut, because we aren't perfect.
The people running them
sure as hell aren't perfect.
Now, ideally, the
algorithms for Facebook
or for any other YouTube--
or for YouTube, or
for any other company,
they would be able to think
a little bit further ahead.
They'd be able to increase
long-term profits as they go.
At least, that's what the
corporations would like.
They'd be able to
understand public relations,
and they'd be able to
work out what to do.
And from humanity's
perspective, the ideal algorithm
would be helping humanity
survive long term.
It would suppress conspiracy
theories and fake news,
but it would allow enough
entertainment and nonsense
that we still pay
attention to it.
In 1950, Isaac
Asimov wrote a story
called "The Evitable Conflict."
It became the last
chapter of I, Robot.
And it was about
giant supercomputers
that run the world's economies.
They were called the machines.
And in the story, they're not
being perfectly efficient.
They're not perfectly designed.
They're making small
errors here and there.
It turns out-- spoilers
for a book in 1950--
spoilers, they are programmed
to protect humanity.
And they know us better
than we know ourselves.
Little nudge here,
little nudge there,
then humanity is less
likely to destroy itself,
and the people who believe that
the machines are doing that
are more likely to be seen
as conspiracy theorists.
We don't have machine
learning systems like that.
We don't have that sort of
artificial intelligence.
Not yet, anyway.
And you know, we can't
tell a computer program,
here are the odds of humanity
surviving into the next century
and beyond.
Improve them.
If we ever do have a
machine learning system
like that, the, sort of,
super intelligence that
could, in theory, control
broad strokes of humanity,
then the world will
be about to change
so much that fake news will
be the least of our worries.
If anyone ever does
figure that out,
I can only hope that its goal
is not to maximise the profit
of one company.
But until someone does
work out an algorithm that
can do all that, it's up to us.
And I know this is a really
corny note to end on,
but like, we are that system.
It's up to us to fact check
things before we pass them on.
It's up to us to create
things that are honest,
that don't smudge the truth.
It's up to the few of us
who create and train
those algorithms to
understand the biases
and make sure we're not creating
conspiracy rabbit holes.
And it's up to those
of us who create things
to manage those commercial
demands of clickbait and drama
against honesty and truth
and help the world not
turn into a horrible pit.
The only algorithm for
truth that we have right
now is ourselves.
My name's Tom Scott.
Thank you very much.
[APPLAUSE]
