[♪ INTRO]
Sometimes when you’re browsing your social
media feed, you might find an account or a post that just seems...off.
Maybe it sounds like a janky product endorsement,
or maybe the wording seems kinda funny.
Quite often, these are bots — bits of computer
code designed to operate an account and maybe masquerade as a real person.
They’re all over the place, especially on
sites like Twitter,
where some estimates suggest that up to 15% of active accounts are actually bots.
You’ve probably heard they’re shaping
our online experience and could even have real-world consequences
— but how true is that?
And why do we fall for them in the first place?
It turns out that for something so rooted
in computer science, the success of bots has a lot to do with, well, psychology.
To understand the influence of Twitter bots,
it helps to look at the world of artificial intelligence, or AI.
For decades, researchers in this field have
been trying to convince people their computer programs are actually other humans.
A common benchmark of how human-like an AI seems is called the Turing test.
In it, someone interacts with both a computer and a person through text alone, and has to tell which is which.
So far, we're not really at the point of AIs reliably passing.
Over on the main SciShow channel, we talked
a bit about why this is so difficult from
a computer science standpoint, and we’ll
link to that video at the end of this one.
But even though there’s no such thing as
a perfect AI, some programs have done relatively well.
And obviously, tons of people mistake Twitter
bots for humans all the time.
In competitions and otherwise, the most successful
programs seem to trick us by managing our expectations.
In other words, they’re designed to look
like people we wouldn’t expect to be perfect linguists.
For example, one successful early bot posed
as a seven-year-old child, so spelling and
grammar mistakes seemed normal.
And in 2014, one of the best AIs at a competition at the University of Reading posed as a Ukrainian teenager.
That way, it got some slack when it didn’t
get things like pop culture references.
Managing expectations works in competitions,
and it can explain why Twitter bots fool us, too.
Many bots, especially the ones that don’t
self-identify, rely on the fact that you probably
aren’t looking for them when you go online.
After all, if you notice tweets with bad grammar
or weird hashtags — well, that’s just Twitter.
So the bots slip under your radar.
But that’s not the only psychological trick
in their toolbox.
To pick up human followers, many accounts
— especially advertising bots — take advantage
of another well-studied phenomenon in social
psychology: the norm of reciprocity.
This says that if someone does something nice for you, you tend to be a little motivated to do something nice in return.
So, if you get a new follower, you might follow
them back without thinking too hard about
whether they’re a person or a program.
Then, the bot can just wait until it has a
few thousand followers — and you've forgotten
about it — to start spamming your feed.
And it seems to work.
Researchers have found that, although people
don't tend to reply to these tweets directly,
they do regularly retweet them and spread
their messages.
This could be because the bots are posting something noteworthy, or just because they've targeted users who are likely to be interested.
Generally, this isn’t a big issue.
Whether or not you recognize them for what
they are, a lot of bots just exist to serve you ads or share animal pictures.
But there are also more worrisome programs
you might’ve heard about, which seem to
be trying to spread misinformation and influence
politics.
And those are trickier to understand.
Despite concerns, it’s really hard to measure
if these bots have offline consequences — like
if they actually sway anyone’s vote.
It’s a topic of active research and debate.
Either way, all that political spam probably
isn’t a good thing, and many social networks
like Twitter are working hard to keep bots
from skewing the political climate in one direction.
Still, like we said earlier, bots aren't all
bad news.
Some share self-care tips, others help with customer service, and some post legitimately useful information.
Some new ones can even help — at least,
to a degree — with mental illness.
In 2017, some psychologists followed in the
footsteps of one of the very first chatbots: ELIZA.
This bot was designed to be a bit of a parody
of the therapists who would just say
"That's interesting... tell me more."
Armed with new research about cognitive behavioral
therapy, or CBT, this team took ELIZA a step further.
They built a new chatbot named Woebot, designed to walk users through some of CBT's tools and strategies.
Generally, CBT is a kind of therapy that focuses on changing your emotions by challenging your patterns of thought and behavior.
But you also need, like, a person to be the
therapist.
Or apparently… a chatbot.
In a randomized controlled trial of 58 people,
the team found that those who interacted with
Woebot for two weeks reported a significant
reduction in depression symptoms on a standardized
self-report scale compared to a control group.
Even though they all knew it was... just a
robot.
The conversation probably didn’t go exactly
like it would with a human therapist, but
it looks like it was close enough to be effective.
So even though some can cause problems, bots
aren’t all bad, and we’re moving in some
interesting and helpful new directions with
them.
If you really don’t want to be fooled by
a bot, you can look out for accounts that
use repetitive wording and post too many links
or hashtags.
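To make those warning signs concrete, here's a toy Python sketch that flags an account whose recent tweets are highly repetitive or unusually heavy on links and hashtags. The thresholds are arbitrary illustrations, not values from any real bot-detection system.

```python
# Toy bot-flagging heuristic based on the warning signs above:
# repetitive wording and too many links or hashtags.
def looks_botlike(tweets: list[str],
                  min_unique_ratio: float = 0.5,
                  max_links_per_tweet: float = 1.0,
                  max_hashtags_per_tweet: float = 2.0) -> bool:
    if not tweets:
        return False
    words = [w.lower() for t in tweets for w in t.split()]
    # Low vocabulary diversity suggests copy-pasted, repetitive posts.
    unique_ratio = len(set(words)) / len(words) if words else 1.0
    links_per_tweet = sum(t.count("http") for t in tweets) / len(tweets)
    hashtags_per_tweet = sum(t.count("#") for t in tweets) / len(tweets)
    return (unique_ratio < min_unique_ratio
            or links_per_tweet > max_links_per_tweet
            or hashtags_per_tweet > max_hashtags_per_tweet)

spammy = ["Buy now! http://example.com #deal #sale"] * 10
print(looks_botlike(spammy))  # True: near-zero vocabulary diversity
```

Real bot-detection research relies on far richer signals than this, of course.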
But a lot of more sophisticated ones do go
undetected, and even researchers have a hard time telling the difference.
So chances are… you’re probably interacting
with some bots from time to time online, whether you want to or not.
But as long as you keep in mind some of the
ways they can trick you into engaging with
them, you can pick the ones you want to interact
with.
And that’s definitely a start.
Thanks for watching this episode of SciShow
Psych!
If you’d like to learn more about how Twitter
bots work and why it’s so hard to program
them to sound like humans, you can watch the
first part of this episode over on the main SciShow channel!
You can find it at youtube.com/scishow.
[♪ OUTRO]
