(upbeat music)
- Thanks so much for coming.
I really appreciate you
guys taking the time
to go through this with me.
So my name is Mike Ambinder.
I'm an experimental psychologist at Valve.
This is how I spend most of my time.
I'm actually not a clinical psychologist.
Gabe may require some work,
but that's not my area of expertise.
I'm an experimental psychologist,
so I work on applying knowledge
and methodologies from
psychology to game design,
and so thinking about how
knowledge and human behavior
can impact the choices that we're making.
So, really quick overview for those of you
unfamiliar with psychology,
it's essentially just
the study of human behavior
and its influences,
and what psychologists do,
we look for regularity
or patterns in behavior.
Humans react to certain situations
in somewhat predictable
ways and psychologists try
and figure out what those ways
are and why it's happening.
On the game design side of things,
I'm sure everybody in this room
has their own definition of game design.
For the purposes of the
talk, just so we have kind of
a shared context, I'm gonna
define game design here
as a series of constraints
and choices and systems
presented to a player,
and typically, you know,
these constraints and choices
and systems and so forth
induce a behavior or
response, and the hope is that
we can take what we know about psychology,
about the predictable ways
humans might react or respond,
and apply it to game design
and induce various responses
and particular responses.
So here's kind of a roadmap for the talk.
I'm gonna go through kind
of seven distinct topics
that should all hopefully, you know,
contain somewhat novel
or surprising information
about the ways in which
players are reacting
and so we'll talk about attention
and how preferences are
made, a few cognitive biases.
I'll talk about the fallibility
of internal reflection,
our ability to rationalize and explain
why we do the things that we do.
I'll give you an example
of how we make use
of a phenomenon known
as cognitive dissonance
to reduce player toxicity in DOTA.
I'll talk a little bit
about player agency.
I have a couple slides there,
and then end with just a
discussion of motivation.
How do we keep people playing our games?
How do we keep people engaged
in the products that we're making?
So attention and its failings.
This is maybe a somewhat suggestive title,
but the thinking is that with attention,
the important point to
keep in mind is that
we attend to far less of the
world than we think we do.
If I say what's your area of focus,
if you hold out your arm
and hold up your thumb,
the width of your thumb at arm's length,
that's how much of your environment
you are actively attending to.
We're very good at switching
and moving attention pretty quickly
and people in our games
have to do this quite a bit,
but the area of focus is this
much of your environment.
We're still aware of things outside of it
and things can capture our attention.
I guess the point I wanna make is that
if this is the area of
focus, players in our games
might not see as much of the
world as we think they do.
In fact, we do attend
to far less of the world
than we think we do.
Our brain does a very good job of creating
kind of a stable
representation or illusion
of a stable representation of
the world, and we trust it,
but we're not as aware of
the world as we think we are,
and focusing attention is effortful.
We ask players in our
games to focus attention,
engage in battles, search for
things, navigate and so forth,
and focusing attention is effortful,
and I'm gonna walk with you kind of
a pretty specific instantiation
of that in a little bit.
And so I wanna talk about attention
because there are implications
here for game design,
so let's go through, we have
things we probably all know,
but just to get a background
for how attention can work.
So there are various ways
attention can be captured.
Sudden appearances of things
will capture attention.
We do this in games all the time.
Color changes will capture attention.
Looming motion will capture attention.
There's an evolutionary reason for this.
If an object is suddenly increasing
the amount of your field of
view that it's taking up,
it might be something worth orienting to.
Like, it could be a threat
and so we typically are
very oriented to looming
motion and size changes as well.
Again, we make use of
all these in our games
and none of this is
meant to be surprising.
I'm just kind of focusing
on the background.
The other point I wanna make is that,
so it's not only the physical
properties of a stimulus
that can capture attention, its color,
its size, its motion and so forth.
It's also the attentional
goals that you have matter.
So I guess as an example,
psychologists call these attentional set.
I'm using the words attentional goals
just 'cause I think it's
a little bit clearer.
If you're in a room and
you're looking for your friend
and your friend is tall
and blond, you're gonna be
more likely to notice people
who are tall and blond
and less likely to notice people
who are perhaps short and have dark hair.
So this is an example
of an attentional goal
and it does scope what
things you will attend to
in that environment and
people make use of this
in our games as well.
If you're searching for
gold coins, you're gonna be
oriented towards gold
objects in the game, right?
So when attention is focused,
as it often is in our games,
players will miss and can miss
very salient and surprising things.
Let me show you this happening in a game.
So this is a video.
This is Counter-Strike, one of our games.
There's two teams, trying
to kill each other.
We have one player left on either team
and this video actually
happened two weeks ago
so it was very fortuitous timing.
I'll let you guys watch
and see what happens.
We should have sound here.
You can hear it? Okay.
- [Player] He's there,
he's here, I told you, man.
Look, left, left, there,
there, left, there.
To the right, flowerpot, flowerpot.
Down, down, down, behind you, there.
Oh, my God, he's there, dude.
He's there, look. (bleep)
Oh, my God, Yuha.
Oh, my God, you're such
a (bleep) idiot, Yuha.
- Okay, we had to tone down the language
'cause this is a family presentation.
So this guy who was playing
is good at Counter-Strike.
If you were watching,
he's clearing corners,
like his crosshairs are always looking
for where he thinks, at
approximate head height,
where he thinks people are going to be,
but what he ended up missing was
there was a guy in the
center of his screen,
just crouched down, right at
the center of his viewpoint,
and he didn't see him because he didn't
expect that person to be there.
You know, he was playing Counter-Strike.
Like, there are certain
areas where people hide.
They don't often hide in plain sight,
and so his attentional goals were like,
I'm gonna look in all the common
places somebody could hide
and not actually see them
where they actually were.
So this is the same finding
as the gorilla video, right?
When attention is focused
or attention is guided in some way,
he's using predictable cues he's acquired
as a skilled Counter-Strike player,
you can miss very salient things
including the very thing
that you're looking for.
So the implications
here are that, you know,
very salient objects can
be hidden in plain sight.
So if you understand the
attentional goals of your player,
you can create surprises when
attention is focused elsewhere
and then conversely, don't be surprised
when they miss very obvious things.
Understand what the attentional goals are
and maybe the cues you're
giving to your player
are guiding them to miss something.
If they're focused on attacking an enemy,
they're not gonna be focused on looking
for cues to an exit, for example.
So, you know, think about the goals
you're priming in your player
and when they are missing obvious things,
try and understand why,
and you know, make use of this.
Like you can create pretty compelling
and novel, surprising experiences
when attention is focused.
Yeah, and I guess one topic I
can't really talk about now,
but the notion of like introducing
gradual changes is pretty important.
Yeah, I talked about sudden appearances
where something just appears.
If you have something
slowly fade into view,
just because there's not
a strong change signal,
people are gonna be more
likely to miss that,
so the notion of gradually
altering an environment,
very difficult for people to notice,
so definitely something
you guys can take note of
and I'd be happy to chat
more about that afterwards.
So we talked about ways in
which attention can fail.
Let's move on to ways in which
preference can, maybe not fail,
but let's talk about, I guess,
the ways in which preference is constructed.
So by preference, I mean,
what are the things you like?
I think lots of us, we
would like to believe
that we tend to like
things based off some,
you know, at least internally,
some objective estimation of, you know,
I guess our own reactions to a thing,
you know, a sense of enjoyment.
We kind of calibrate towards that.
The point that I'm gonna
kinda lead you guys towards is
that preference is probably
more arbitrary than deliberate
and this has consequences
for game designers.
So why do players choose
to favor a particular game
over another, or a particular
strategy inside of a game
or a character or weapon
or level or a game mode?
Is it all just, hey, everyone is doing
their own kind of internal
calculations about what's best
or are other things at play?
You guys can probably guess
by all the leading questions
I'm gonna ask in this talk;
other things are at play.
So, here's a character list in DOTA.
DOTA's one of our games.
You have 112 characters,
two teams of five,
and each player picks one character.
So whole bunch of characters.
So what if I said rank
your favorite heroes?
Tell me, which heroes
in DOTA do you prefer?
You would say Anti-Mage is first
'cause obviously he's the best DOTA hero.
Legion Commander second,
Pudge, Templar, Naga,
Skywrath and then Windrunner,
all the way down the line.
So you would give me a
ranking of heroes in DOTA.
That's all well and good.
What psychologists have realized
is that if you do something else,
if I say, here are the heroes
that everybody else likes.
Here's the average collective preferences
of the playerbase at large.
They're gonna be different.
Well, so this is what you said.
Here's a different set of heroes.
Here's what the playerbase thinks.
Bloodseeker, Tusk, Ember and so forth.
What happens then? I'm just like,
okay, now you have the playerbase's rankings.
What are your rankings now?
Your rankings change.
This happens in a variety of contexts.
Psychologists have studied
this kind of a bunch
and it's somewhat, I think,
surprising, the result.
Our preferences are influenced
by what other people think.
I mean, that is okay, but the notion
that what I actually like will change
simply because you told
me what other people like,
that's a really important
thing to keep in mind
'cause we do this in
our games quite a bit.
We give players cues to
what other people like,
what we as game designers
like about the game,
and players will react to that,
and preferences will change
and adapt accordingly.
So here are a few examples
of the ways in which
we as game designers do this in our games
and cue players in ways that end up
producing a change in preference.
So at the top is just a capture
of the tutorial mode in DOTA
and we have three heroes up there,
Dragon Knight, Sniper and Shadow Shaman.
Because we said, hey, new players,
you should play these heroes,
that gives them cues that, hey,
the developers favor
these heroes in some way
and so that is going to affect
my estimations of their quality.
On the, I guess the bottom left panel,
so this is pro players playing in a match
and the heroes they're using,
and we show this to
players in the DOTA client,
and if you see pro players
using various heroes,
that is also social proof;
that is also giving you cues
as to what heroes you should like
and what your preferences should be.
On the bottom right is
the map selection screen
in Counter-Strike, and top
left corner is Dust Two,
then Train, Mirage, Nuke and so forth.
We're saying, hey, play Dust Two.
Like we're giving you an implicit guide.
That's the map you're gonna notice first.
We as game designers are
cuing you to like Dust Two,
and that may end up
actually influencing people
to like Dust Two more than they would have
if the map screen was randomized.
So the implications here are
that social proof will anchor preference.
So what other players think
will anchor preference.
What we as game designers
convey to players in our games
will anchor and can anchor preference,
so how you display
information to players, like,
has a profound impact
on what they actually
end up liking about your game.
We'd like to hope that, you know,
kind of objective assessments
of quality will emerge,
but people look to cues to
guide their decision making.
We always wanna make
it easier on ourselves
and our brain does this automatically,
and so if, hey, the game
designer's giving me
cues as to what I should like,
okay, I'm gonna start liking them more.
Another point I wanna make is
that players will choose
the default option.
This is a well-known bias in psychology.
If you have a selection,
you know, of four choices,
whatever is listed as the default,
people in general, not
just players in our games,
but people in general
are more likely to choose
the default option 'cause it's a cue
to guide decision making,
and so like this happens
outside of games and it definitely
happens in game as well.
And so, just going back to
the map selection screen,
if you truly wanted to understand
a map's true popularity
or quality, randomize its
placement in your game.
People might like it and play it the most
because it's at the top of your map list.
It's not always practical
to do this change,
but just understand that if
you have a certain structure
in your game, it's going
to scope preference.
It's going to influence preference,
and so if you're truly
after objective assessments,
figure out ways to kind
of remove the social cues
that you're providing.
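To make that concrete, here's a minimal sketch of the randomized-placement idea: show each player the map list in a random order so position stops anchoring preference, log picks, and compare selection rates. The map names are from the talk, but the helper functions and the pick-log format are hypothetical.

```python
import random
from collections import Counter

# Hypothetical sketch: every player sees the maps in a different order,
# so the top-left slot no longer inflates any one map's numbers.
MAPS = ["Dust II", "Train", "Mirage", "Nuke", "Overpass"]

def display_order(rng):
    """Return a per-player randomized ordering of the map list."""
    order = MAPS[:]
    rng.shuffle(order)
    return order

def selection_rates(picks):
    """Fraction of sessions in which each map was picked."""
    counts = Counter(picks)
    return {m: counts[m] / len(picks) for m in MAPS}
```

With a fixed list, Dust Two's top-left slot inflates its numbers; with per-player shuffling, any gap that remains between maps is a much cleaner popularity signal.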
Okay, so that was talking about ways
in which preferences can,
at least, maybe not fail,
but are heavily influenced,
so we're gonna talk about
a few cognitive biases,
which are just, essentially,
systematic ways in which the brain tends
to err when making decisions.
So I'm gonna say we're neither as smart
nor as rational as we think we are,
not in a demeaning way, but
just in a more realistic way.
Like, you know, our brain
is using heuristics,
for efficiency's sake, to
help us navigate our world
and to go through life,
and the point here is
that not everything, not all those factors
that influence our
assessments of situations
reaches conscious awareness.
So we're conscious of a lot of things.
There's a whole lot of processing going on
below that level of conscious awareness.
Your brain is doing a lot of work,
and so psychologists have
studied for a while now
the ways in which our
brain is kinda biased
in certain predictable
ways and the hope is
that we can then use these
biases to make better games,
and so I'm just gonna talk about,
I have a list at the
end, but I'll talk about
a couple biases in more detail.
So the first is anchoring.
We make decisions and evaluations
of situations comparatively,
so we're always looking
for some benchmark,
some threshold of comparison to use,
and what happens is we tend to anchor
to an initial piece of
information that's presented,
and the important point for psychologists
and why this is somewhat surprising
is that this anchor, what we anchor to,
this piece of information
doesn't have to be related
to our actual decision, and
so let me give you an example.
This is what I did.
I went around Valve
headquarters, I guess, last week,
and asked my coworkers
for the last two digits
of their social security number,
so they were giving me zero
zero all the way up to 99.
Some people were a
little worried about that
and I was like, I don't need
the other seven, just the last two.
I then divided the
responses into two groups.
So I said if you're low numbers,
zero to 49, you're in group one.
If you're the latter 50, the high numbers,
50 to 99, you're in group two.
Then asked, how many
heroes are there in DOTA?
I didn't ask anybody on the DOTA team,
but I asked everybody else this.
So you're in group one, group two,
how many heroes are there in DOTA?
There should be no difference
in the estimates that were given, right?
Like, on average, people
should hopefully converge
on, I guess, a single number.
What happened was this.
Group one, the low group,
estimated 100 heroes.
Group two estimated 115.
Group one had anchored to a lower number,
their lower social security number,
and gave me lower estimates.
Group two anchored to a higher number,
their higher social security number,
and gave me a higher estimate.
The actual answer's 112,
if you guys are curious,
but just the notion here
that irrelevant information
can actually impact your decision making
and your assessment of a situation.
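If you wanted to tally an informal experiment like that one, the analysis is just splitting respondents by anchor and comparing group means. A minimal sketch, with invented sample data chosen to mirror the talk's 100 vs. 115 result:

```python
# Invented sample data: (last two SSN digits, estimated hero count).
# Values are made up so the group means match the talk's 100 vs. 115.
sample = [(7, 95), (23, 100), (41, 105), (62, 110), (88, 115), (99, 120)]

def split_by_anchor(responses):
    """Split respondents into low (00-49) and high (50-99) anchor groups."""
    low = [est for digits, est in responses if digits <= 49]
    high = [est for digits, est in responses if digits >= 50]
    return low, high

def mean(xs):
    return sum(xs) / len(xs)

low, high = split_by_anchor(sample)
# With this made-up data: mean(low) -> 100.0, mean(high) -> 115.0
```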
So this is a silly
example, in making a point
that irrelevant information
can play a role in what we do
but what happens when relevant
information is provided?
So this is the map selection screen
from Counter-Strike again.
In the upper lefthand corner, Dust Two.
It also has an expected wait time of 1:12.
Next to it is Train, with an
expected wait time of 4:48.
So Dust Two at 1:12, Train
at 4:48, Mirage at 2:47.
So what happens here?
Is anchoring happening here?
Say I wanna play Train
and the expected wait time is 4:48.
What happens if I have
to wait six minutes?
I've anchored to 4:48, and
it's a six-minute wait time,
so I had to wait a minute and
12 seconds longer than I thought.
That's a negative experience.
What if that expected wait
time had said eight minutes
and then it took you six, so you got in
two minutes quicker
than you were expecting?
Right, so in both cases,
you're waiting six minutes.
In the first case, it says 4:48;
in the second case, it says eight minutes.
The eight minute situation
leads to a better experience.
Displaying a higher
number will likely cause
players to be happier in the long run,
which is somewhat counterintuitive.
So we don't wanna systematically
overestimate our wait times.
That doesn't seem like a great
thing, but there actually
could be positive
consequences for our players.
These are accurate wait times.
We're not playing around
with any, you know,
we're not doing any manipulations here,
but because we're saying,
hey, these are honest,
we might actually be creating
more negative experiences
for our players than we
would if we said, okay,
let's build in a buffer and
bump up the wait times.
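The arithmetic behind that buffer idea can be sketched like this; the sign convention (positive means a pleasant surprise) is an assumption for illustration:

```python
# Times in seconds. The player's experience tracks the gap between the
# displayed estimate (the anchor) and the actual wait, not the actual
# wait alone. Positive = got in early; negative = waited past the anchor.
def wait_delta(displayed_s, actual_s):
    return displayed_s - actual_s

actual = 6 * 60  # a six-minute actual wait in both scenarios

honest = wait_delta(4 * 60 + 48, actual)  # shows 4:48 -> -72 s (feels late)
buffered = wait_delta(8 * 60, actual)     # shows 8:00 -> +120 s (feels early)
```

Same six-minute wait in both cases; only the anchor changes which side of zero the experience lands on.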
So that's one example of anchoring.
There's another.
Say I wanna play Train, but I look at Dust Two.
Man, that 1:12,
that sounds pretty good,
so all of a sudden, initially,
I'm waiting on Counter-Strike
and I wanna play Train.
If it's a reasonable
amount of time, I'll play.
So at 4:48, I have to decide
if that's worth my while.
Now Dust Two is at 1:12,
so all of a sudden,
like, my frame of reference changes.
Like, the actual comparison
I'm using changes
and I'm anchored to this lower
wait time, this 1:12.
So now I have to decide,
do I wanna play Train
in four minutes and 48 seconds
or do I wanna play Dust
Two in a minute and 12 seconds?
So the information we're
presenting to players
impacts their decision making
and changes how they evaluate a situation,
and so their desire to
play Train in a vacuum
is not the same as their
desire to play Train
when anchored to the 1:12
wait time for Dust Two.
So yeah, that's just kind
of highlighting the point
that all of a sudden, the
axis of comparison shifts
from playing Train or not to playing Train
at a particular time versus playing
another map at a particular time.
So anchoring was one cognitive
bias; here's another.
So what we refer to as
framing, and simply speaking,
framing means that the manner of presentation
of a choice affects the response.
So I can ask you the same question,
same concept, but in two different ways,
and you're going to respond differently,
or you're likely to respond differently,
and one way to think about this
or one finding that comes out of this
is that you can frame the same question
as avoiding a loss or acquiring a gain
and people tend to be averse to losses
and oriented towards gains.
So let me give you an example.
When WoW first came out,
folks at Blizzard were a little concerned
about people playing for too long,
so what they did was they said, okay,
you can play for a few
hours at basic playtime
and get 100% XP gain,
and you know, on average,
you might earn 1,000 XP per hour,
and then after, let's say,
I'm making up numbers now,
say after three hours,
it's reduced playtime
and we'll give you, you
know, you can still play
but we'll give you reduced
XP, so they wanted to
not get people burned out on the game.
They wanted to encourage people to stop,
so they were like, play for three hours,
you get 100% XP, and then
afterwards you get half XP,
so on average, 500 XP per hour.
Players did not like this,
so what did Blizzard do?
They rescaled things.
They said, alright, we're
gonna give you bonus XP,
200% XP for the first three hours.
You're still gonna earn on
average 1,000 XP per hour.
Then after the three hours is over,
we're gonna give you the base XP of 100%
and earn on average 500 XP per hour.
Players liked this a lot more, right?
The actual XP gain was identical, right?
Nothing changed about the system.
The only thing that changed
was the framing: saying
you're in possession of a gain
and the gain is going away
(you're getting a bonus,
and the bonus is going away)
versus you're getting a penalty
and we're applying the penalty now, right?
So the actual XP gain was identical;
what differed was simply how the XP gain was
presented to players, right?
They were getting a bonus
versus suffering a penalty.
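A quick sketch showing the two framings are numerically identical; the 1,000 XP/hour rate and the three-hour cutoff follow the talk's made-up numbers, and the schedule itself is simplified:

```python
# One schedule computes both WoW framings from the talk: a higher rate
# before the cutoff hour, a lower rate after. Rates (1,000 then 500
# XP/hour) and the three-hour cutoff are the speaker's invented numbers.
def total_xp(hours, rate_early, rate_late, cutoff=3):
    early = min(hours, cutoff) * rate_early
    late = max(hours - cutoff, 0) * rate_late
    return early + late

# Framing A ("penalty"): 100% XP, then a 50% penalty -> 1000/hr, then 500/hr.
# Framing B ("bonus"): 200% bonus XP, then base XP -> 1000/hr, then 500/hr.
penalty_framing = total_xp(5, 1000, 500)
bonus_framing = total_xp(5, 1000, 500)  # identical rates, different label
```

Five hours of play earns the same total either way; only the label on the early rate differs.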
So I'll go through these really quickly
and this list is, I don't
wanna talk at you guys forever
just by reading things, but the point here
I just wanna make is
that there's a whole host
of biases like this that come into play
in our decision making and
our evaluations, our thinking,
and so being aware of them
can help hopefully make us
lead to smarter decisions,
so really quick.
The recency bias, we tend to think about
what happened most recently
when assessing a situation.
Confirmation bias happens
in politics a lot,
but we tend to seek out information
that confirms our beliefs
and avoid information
that would dis-confirm our belief.
The false consensus effect is the notion
that we think that everyone
else is more likely
to agree with us than they actually are.
You may have noticed this phenomenon.
Hindsight bias, we tend to
view things that happened
as more likely in hindsight
than they actually were.
So after something happens, we're like
oh, yeah, of course it was gonna happen
and we kind of rationalize our
decision making around that,
but that's not actually the case.
The endowment effect is kinda interesting.
We tend to value things more
highly when we own them.
So if I say, how much are you willing
to pay for this in-game item?
You might say five dollars.
As soon as you have the five dollar item,
you immediately value that item higher
and you might say I'm only
willing to sell it for seven.
It's a common finding across
a wide range of things.
There's ways that can be played with
when you're thinking about in-game items
and various things that people
will possess in your games.
The last three are kind
of interesting as well.
So the mere exposure
effect is simply this.
The more times you see something,
the more favorably you tend to view it,
so this is an argument
in favor of marketing.
If you see something more often
compared to something you
haven't seen, you're going to
view the thing that you saw
more often more favorably.
It's just the mere exposure effect.
The mere exposure leads to a
greater liking for a thing.
The bias blind spot is
simply that you believe
that other people are
more likely to suffer
from these biases than you are,
so it's a cognitive bias
about cognitive biases.
And then the last one is
an interesting one as well.
So the peak end rule is this.
When we're evaluating an experience,
like how much I like this game,
how much I like this movie,
how much do I like
listening to this album,
and so forth, we don't say,
what was my average feeling
across the entire experience?
That's not what we do.
We take a shortcut.
And so what we do is we say,
how did I feel at the
highest point, at the peak?
And then how did I feel at the end?
How did I feel at the peak?
How did I feel at the end?
And then you average
those two together,
and that's your
assessment of the experience.
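The peak-end shortcut versus a true average can be sketched like this; the moment-by-moment ratings are invented purely for illustration:

```python
# Invented moment-by-moment enjoyment ratings (0-10) for one session.
session = [3, 5, 9, 6, 4, 8]

def peak_end(ratings):
    """Remembered evaluation under the peak-end rule."""
    return (max(ratings) + ratings[-1]) / 2

def true_mean(ratings):
    """What an average over the whole experience would give instead."""
    return sum(ratings) / len(ratings)

# peak_end(session) -> (9 + 8) / 2 = 8.5, well above the ~5.8 true mean.
```

A strong peak and a strong ending dominate the memory, which is why sticking the landing matters so much.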
So implications for us as game designers,
it may be tough to know
when a game's peak is,
but you better be really, you know,
it's really important to nail the ending,
to stick the landing 'cause
that will have a profound impact
on people's assessments of your game.
It'd be great if people
just said, oh, yeah,
across the whole experience,
this is what I felt.
That's not really what people do.
Some people can do that
well, but in general,
when you're thinking about an experience,
you're like how did I
feel at the high point
and how did I feel at the end?
And that's how you assess things.
So implications for game design
for these cognitive biases.
Obviously, decisions can be influenced
and shaped in somewhat predictable ways.
For the two biases that I talked about,
understand what players are anchoring to.
So understand the basis for comparison,
and you realize that decisions
and optimizations will shift
depending on what anchor
you're showing to players.
And then for the framing
side of things, you know,
be aware of the reference frame.
How you present things to players matters.
If you frame something
as a gain or a loss,
that will have a profound impact
on how they react to things
so tend to err on the side
of framing things positively,
giving bonuses as opposed
to penalties and so forth.
Just understand that the same situation
can be presented to players
in a variety of ways.
We have choices about how we do that
and that will impact how
people react to them.
And here's maybe the most
practical piece of advice
I can give you guys out of this talk:
using anchoring in everyday life.
Always say the first
number in a negotiation.
If you want a 75K salary
and your boss wants to offer you 50,
if you say 75, you guys are gonna
anchor to that higher number.
If he or she says 50,
you guys will anchor to that lower number.
So whoever says the first number
sets the anchor for the negotiation.
So just something to keep in mind
when you're dealing with things in everyday life:
think about what the anchor is
and what you want it to be,
and try and make use of that accordingly.
So choice blindness and
internal reflection.
Choice blindness is something
I'll define on the next slide,
but really, this section is
just about internal reflection.
How do we determine why
we do the things that we do?
So yeah, what I just said.
How reliably do we know
why we do what we do?
And, you know, as a consequence
or maybe more specifically,
how reliable is the feedback
we receive from players?
When players tell us why
things are the way they are,
how reliable are those assessments
of our internal monologues?
So let me give you an
example of a phenomenon
in psychology that touches on this.
This is called choice blindness.
I'm gonna ask you to choose
one of two alternatives.
I'm gonna say, hey, here are two people.
Tell me which one is more attractive,
you know, person A, person B.
Here are two jams, tell me
which one tastes better,
jam A or jam B.
Which of these two gambles
are you more likely to choose?
Which of these two moral judgments
are you more comfortable making?
Essentially just a choice
between two options.
And then I'm gonna distract you.
I'm gonna say, like, oh,
do a crossword puzzle,
watch a video, sit quietly,
play on your phone, do whatever,
distract you for a few minutes.
I'm then gonna ask you
to justify the choice
of the alternative that
you did not choose.
If you chose person A, I'm
gonna give you person B
and tell me, why did you choose person B?
If you chose jam B, I'm
gonna give you jam A,
and tell me, why did
you like jam A better?
More than half of you are
going to give me a reason
why you made a choice that
you did not actually make.
This has been replicated a
bunch, and again, you know,
somewhat of an artificial experiment
but it does illustrate an important point.
It's that we're not great
at explaining why we do what we do.
Some of us are, and if
we sit and be reflective,
we can get to pretty good places,
but we're often cued in
various ways by things.
We latch onto convenient
explanations for things
and so if somebody says, yeah,
this is the choice that you made,
you end up having to construct a rationale
for a choice you did not make,
and we are very good at
constructing rationales.
So the implications here,
be wary of self reports.
Incredibly useful, whether
it's an email you get
or feedback you get on forums
or information you get from play tests
where you're asking players
why they did what they did.
Just understand that this is not something
we as humans are great at.
Some of us are good, and you
can get a lot of viable data,
but the point here, be
wary of self reports
and use them as cues to
actually look for behavior.
You'll notice I used the looming motion
to capture attention;
hopefully, that worked.
This is a really important point.
Let self reports give you insight
into what might actually be happening,
then go find a behavior to see
if it's actually happening.
If people are saying a
weapon isn't balanced,
go look at the data and see
if it's actually balanced.
If people don't like a map,
then go see if they're
playing the map or not.
Some of these questions
are really easy to answer
and digging into a measurable behavior
is fairly straightforward,
but sometimes it's more
complicated and more tricky,
but the point I wanna make
is, just understand
that what we think is happening
up here is not always
an accurate estimation
of what's actually happening,
so whenever you can, use
self reports to guide you
and then look for a measurable behavior.
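As a sketch of "self report as cue, behavior as evidence": if players report a weapon as overpowered, you might compare its win rate against the average before acting. The stat field names and the two-percentage-point threshold here are assumptions for illustration, not anything from the talk:

```python
# Hypothetical per-weapon stats; field names and the 0.02 threshold are
# assumptions for illustration, not an actual balance pipeline.
stats = {
    "rifle_a": {"wins": 530, "matches": 1000},
    "rifle_b": {"wins": 500, "matches": 1000},
    "smg_c":   {"wins": 470, "matches": 1000},
}

def win_rate(s):
    return s["wins"] / s["matches"]

def is_outlier(weapon, all_stats, threshold=0.02):
    """True if the weapon's win rate exceeds the average across all
    weapons by more than `threshold`."""
    avg = sum(win_rate(s) for s in all_stats.values()) / len(all_stats)
    return win_rate(all_stats[weapon]) - avg > threshold

# Players report rifle_a as unbalanced; here the data happens to agree.
```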
So our next topic, I know we're
kinda jumping around a bit.
There's a whole bunch in
psychology, so I'm hopeful
that all these have utility
to you guys in the end.
So cognitive dissonance
and player toxicity.
And I'll define cognitive
dissonance on the next slide.
Player toxicity, you know,
if anybody's played online,
which I assume most of you have,
you might notice that sometimes
people are not so nice.
Sometimes, there are
instances of toxic behavior
and it would be great if we could
figure out ways to reduce that.
So let me define cognitive
dissonance and then talk about
how we can maybe use cognitive dissonance
to help reduce online toxicity.
So for cognitive dissonance,
when thoughts and behaviors
are inconsistent or
opposing, discomfort arises.
So you can think about,
I think of myself as a charitable person.
If I'm walking down the
street and I see a panhandler
but I don't give them any money,
so I'm not charitable in that sense,
I might suffer dissonance later, right?
I have the notion of myself
as a charitable person
but I have this action, or
this inaction, essentially,
where I wasn't charitable,
and that creates conflict.
That's one example.
This can happen in a variety of contexts.
When we do experience that discomfort,
we seek to reduce it by altering
one of the antagonistic
thoughts or behaviors.
And so if we can induce
dissonance in our players,
maybe we can use that to
change behavior in our games,
and so let me walk you through
an example of how we did that.
So here's a chat log in DOTA
that you guys have probably never seen.
This is not how most of
the conversations go.
Right, so it would be great
if all of the conversations
went like this.
That's not always the case.
They don't always tend to go like this,
but it'd be great if they did, okay.
So what happens? Well, okay,
let's talk about that; I guess
there's one slide
on why people tend to
be more negative online,
and this could be a whole
talk in and of itself,
and we can definitely
discuss this afterwards.
First, you know, there's the anonymity.
DOTA requires a significant
time investment to play,
so the stakes are high.
There are many decision points
in the game where you can
latch onto an opponent's
mistake or a teammate's mistake,
and lash out accordingly.
There's a phenomenon known
as the Dunning-Kruger Effect,
which, if you say it pithily, is
that incompetent people don't know
that they're incompetent.
A more general way of saying it is
that we are, unsurprisingly,
not so great at self-assessment,
at judging our own level of quality
compared to other people,
and so I might think
I'm a better DOTA player than my teammates
and blame them for mistakes
that I actually ended up making.
And a whole bunch of
other factors as well.
Not really the focus here,
but just understand that
trying to tackle any one of these things
can maybe have an impact
on reducing toxicity.
So in DOTA, if somebody is
a dick, you can report them.
You can click on their
name in the scoreboard
and this dialog will pop up.
I know, I think it's tough to read,
but essentially, it
just says report player
and you can select a category:
communication abuse, intentional
ability abuse, or intentional feeding,
and then you can report them.
So this is, you know, I talked
about measuring behavior.
We can look at the number of reports
as an indication of how toxic
the community is, for example.
One thing we used to do in
DOTA was at the end of a match,
we would give you a survey and say,
hey, please rate your
enjoyment of the match.
One to five stars, you
don't have to answer
if you don't want, but if you did answer,
one to five stars, please rate
your enjoyment of the match,
and we could use that to, you know,
try and get an overall
temperature of the player base,
correlate it with
specific behaviors in a match
and try and understand better, you know,
why people had a good
time or had a bad time.
And that's useful, but not
the focus here, though.
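As a hedged sketch of that kind of analysis, with invented ratings and report counts rather than real DOTA data, correlating a self-reported rating with a measured behavior might look like:

```python
from statistics import mean

# Hypothetical data: post-match enjoyment ratings (1-5 stars) alongside a
# measured behavior from the same matches (here, times the player was
# reported). All numbers are invented for illustration.
stars   = [5, 4, 4, 2, 1, 3, 5, 2]
reports = [0, 0, 1, 2, 3, 1, 0, 2]

def pearson(xs, ys):
    """Plain Pearson correlation, no third-party libraries."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson(stars, reports)
print(f"enjoyment vs. reports: r = {r:.2f}")  # strongly negative here
```

A strong negative correlation like this would suggest matches with more reports were rated as less enjoyable.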
After we did this for a while,
we ended up adding two more questions.
So here's the first.
Teammate cooperation,
please rate the cooperation
your teammates displayed
in the last match.
One to five stars, so you can say, hey,
I had good teammates, I had bad teammates.
That's great, we can make use of this data
in various ways like
feed it into matchmaking.
I'll be honest with you guys.
I didn't care about the answers
to this question at all.
The only reason we asked this question
was so we could ask the next one.
Your cooperation.
Please rate the cooperation you displayed
towards your teammates in the last match.
So what happens here?
We have a self-serving bias.
We wanna rate ourselves highly,
so I'd love to say I was a good teammate
and give myself five stars.
If I was a dick in the game,
I know that I can't honestly do that,
so I have two conflicting
notions in mind.
I wasn't a very good teammate,
but I wanna rate myself
as a good teammate.
That induces cognitive dissonance.
So the hope was that if we did this,
people would experience dissonance
if they wanted to rate themselves highly
but knew that they weren't
worthy of such a high rating,
and they would adapt accordingly,
hopefully moderate their behavior,
and then the next time
they saw the survey, they
could answer honestly.
So we did this; this was
the only change we made
on the behavior side of
things at that point in time.
What we saw was this.
About 137,000 fewer reports per day,
a twelve and a half percent decrease,
simply by adding this survey.
So this is across millions of players
and millions of games
going forward, right?
Simply just by adding a survey,
two survey questions essentially,
to induce cognitive dissonance.
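Just to put those quoted figures in context, here's the back-of-the-envelope arithmetic they imply:

```python
# Back-of-the-envelope check of the numbers quoted above: if ~137,000 fewer
# reports per day is a 12.5% decrease, the baseline was roughly 1.1 million
# reports per day. (Figures are from the talk; the arithmetic is the point.)
decrease = 137_000
fraction = 0.125
baseline = decrease / fraction
print(f"implied baseline: {baseline:,.0f} reports/day")
print(f"after the survey: {baseline - decrease:,.0f} reports/day")
```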
So takeaways here, inducing
dissonance can definitely
lead to meaningful behavior change.
Doesn't need to be a
constant manipulation,
meaning you can do it subtly
and it's important to think
about the attitudes you're
priming in your players,
and so if you can prime them
to want to act in various ways,
as we did with the survey, then sometimes
they will act in ways that
make everybody happier,
137,000 times a day happier.
So I have a few slides on player agency,
just 'cause I think
it's an important topic.
I don't wanna dwell too much on it,
but I feel like the
point needs to be made.
I think this might be something
that most folks are aware of
but, you know, when
we do lose sight of it,
it can have consequences for our players.
We like to feel like
we can exert control
over our environment, in any situation.
I wanna feel like I have some semblance
of self-determination, which is how
psychologists refer to it.
We want our actions to have an impact,
and so, as designers, when you design,
how much agency are you
giving the players?
It's an important point to keep in mind.
It's a useful question to ask
whenever you're making a decision.
So here's the report dialogue again.
You know, somebody's a dick
in DOTA, you can report them.
If you report somebody in DOTA
and they get a bunch of reports,
we'll ban them from matchmaking
or put them in low priority
or prevent them from
communicating in games.
We'll give them a penalty.
When that happens, we let
you know that we did that.
Thank you, we've recently taken action
against one or more players
you've previously
reported for bad conduct.
So you had an action, reported someone.
We took action on your action
and then let you know about it,
so we're giving players agency.
There was an action and
there was a consequence.
So you reported a player as the action.
There was a consequence;
the player got banned,
and here is evidence of that consequence.
Somebody you reported, we took
action against, we banned.
This is a really important point.
We could've just had the report dialogue
and made use of it and banned people.
Closing the loop was incredibly important.
It gave players a sense
of agency, made them feel
like they could have an
impact on their environment.
I don't like dealing with negative players
and so I wanna be able to change that.
This is evidence that you
had a way of changing that.
We're giving you evidence
that, yeah, you can change
the thing about your environment
that you do not like.
That is giving players agency.
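Here's a minimal sketch of that report-then-notify loop; the threshold, function names, and player names are all invented for illustration, not DOTA's actual system:

```python
from collections import defaultdict

# A minimal sketch of "closing the loop" on reports. The threshold, the
# penalty, and all names here are hypothetical.
REPORT_THRESHOLD = 3  # hypothetical: act once this many reports accumulate

reports_against = defaultdict(list)  # offender -> list of reporters

def penalize(offender):
    # Apply the penalty, then notify every reporter: their action had a
    # visible consequence, and that evidence is what creates agency.
    return [f"{reporter}: we've taken action against a player you reported"
            for reporter in reports_against[offender]]

def report(reporter, offender):
    reports_against[offender].append(reporter)
    if len(reports_against[offender]) >= REPORT_THRESHOLD:
        return penalize(offender)
    return []  # not enough reports yet; no notices go out

report("ana", "flamer")
report("ben", "flamer")
notices = report("cal", "flamer")
for notice in notices:
    print(notice)
```

The design point is the notification step: banning alone works, but telling the reporters closes the loop.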
The other point I wanna make
is that small amounts of agency
can be just as valuable as
large amounts of agency.
It's not always practical to give players
large amounts of agency in various ways.
Agency is more of a binary thing,
more of a dichotomy as
opposed to a continuum.
As long as players feel like I have
a small amount of agency,
that's all they need.
So a somewhat silly
example of agency:
think about in Madden,
when the weather for a game
is the same weather as
what you're experiencing
in your current location at the current time.
My environment is impacting
my game's environment.
It maybe doesn't feel like agency,
but that's actually giving players agency.
They're having an impact
on the environment
and they're changing things.
Small things like that
can have a profound role
in how players perceive things.
So the final topic I
wanna get to, motivation.
So why are people playing our games?
Motivation, broadly speaking, is kind of
the mechanism that drives behavior,
and the broader question,
hopefully of interest for us,
is how do we keep players
engaged with our games?
So there are, broadly speaking,
two classes of motivation
or two kinds of motivation.
The first is, you know, experiences
that are intrinsically satisfying,
that are internally driven.
Like, I'm doing it because I
just genuinely enjoy it.
I'm using the happy Heavy in
Team Fortress as my metaphor
for intrinsically enjoying something.
The other approach is the use
of extrinsic rewards, right?
So getting items in a game,
having a level bar fill up,
unlocking achievements, opening
a case and getting a reward.
Any reward, anything that
you're given as a consequence
for doing an action will
be an extrinsic reward.
Something that's happening
externally to you
and you are doing the activity to acquire
that thing that is external.
So here are the points that are
useful to think about:
intrinsic versus extrinsic,
internally motivated
versus external rewards.
Intrinsic behaviors tend to last longer.
I'm doing something 'cause
I genuinely enjoy it,
I'm gonna do it more often.
I'm less likely to stop doing it.
They're more difficult to extinguish,
and I'm gonna rate it higher
and it'll lead to greater enjoyment.
If I have a friend who is a good artist
and I say, make me a dream catcher,
she makes it for
me and gives it to me.
She enjoys making them.
She's gonna rate that experience higher
than if I said I'll give you
$100 for that dream catcher.
All of a sudden, I've given
her an external reward
for that activity, and she's
not gonna enjoy it as much.
So intrinsic behaviors
have these admirable qualities.
Extrinsic rewards are very useful
for shaping behavior, for getting players
to do various things in our games,
but the important point to keep in mind
is they risk shifting the
motivation for playing.
If I'm doing something intrinsically
and then, all of a sudden,
I'm given a reward
for doing that same activity,
my motivations for doing
the activity might shift
to acquire the extrinsic reward,
and in that case, it may not last as long
and I might be more
likely to stop doing it
and I'm not gonna enjoy it as much.
It's an important point to keep in mind
that motivations can and will shift.
So intrinsic behaviors
have these great qualities.
How do we foster intrinsic motivation?
How do we get people to
enjoy things intrinsically?
And so there's a variety of theories
on how to foster intrinsic motivation.
The points I'm gonna talk about
are generally accepted as
doing a good job in this area,
so we'll just go through them.
I talked about agency in the last section,
and it's fundamental
to somebody intrinsically
enjoying something:
this feeling that they
have autonomy and agency.
Autonomy is essentially the ability
to control one's own fate,
and agency is the ability
to act on, you know,
to perform actions that
contribute to owning one's fate,
so maybe those are superfluous terms,
but the notion is that I have the ability,
I'm able to make choices that
can impact my environment,
impact my situation and will
let me do what I want to do.
So give players
autonomy and agency.
You should also give them
the ability to progress
on some axis of performance.
When we're performing an
activity or an action,
we wanna get better at it.
We wanna be aware that
we're getting better at it,
whatever we're spending time on.
So make sure that any skill
progression that's happening
is apparent to the player,
and give them feedback
on that performance, on
that skill progression.
This is what you did poorly,
this is how you get better.
So understanding how I can
get better at something
and what I did well,
that's incredibly important.
So have skill progression
and have feedback on
that skill progression.
Let people know, hey, you're
at this point on the path.
To get to here, do this.
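As a toy illustration of that "here's where you are, here's what gets you to the next step" feedback, with invented skill tiers and advice strings:

```python
# A toy sketch of "you're at this point on the path; to get to here, do
# this." The tiers, thresholds, and advice are invented for illustration.
TIERS = [
    (0,   "Recruit", "learn one hero's abilities inside out"),
    (100, "Regular", "practice last-hitting in a solo lobby"),
    (250, "Veteran", "start watching the minimap every few seconds"),
]

def progress_feedback(skill_points):
    """Tell the player where they are and what gets them to the next tier."""
    current = max(t for t in TIERS if skill_points >= t[0])
    later = [t for t in TIERS if t[0] > skill_points]
    if not later:
        return f"{current[1]}: top tier reached"
    nxt = later[0]
    return (f"{current[1]}: {nxt[0] - skill_points} points to {nxt[1]} "
            f"- next step: {current[2]}")

print(progress_feedback(140))
```

The shape is what matters: name the current position, quantify the gap, and pair it with one concrete action.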
The final point to keep in mind is that,
and I'm reluctant to use
generalizations, but I will here.
A large percentage of what
we do as humans is driven
by the desire to make
positive social comparisons.
We wanna look good to other people,
so give your players opportunities
to look good to other people.
Leaderboards are a canonical
example of this happening.
I am able to show how
good I am to other people.
All of these factors lead
into intrinsic motivation.
So what are the implications
for game design?
When you can, work on satisfying
the intrinsic needs of a player.
Use extrinsic rewards
to incentivize behavior
where you need to, but just understand
that you may shift the
player's motivation,
and there are consequences to that.
So this is the final
slide I have in the talk.
Maybe you can guess where this is going,
but I'm gonna ask you the question anyway.
You have two sets of four discs:
these discs on the left and
these discs on the right,
and so the question that we ask people is:
which set of discs appears brighter?
Anybody can answer if
you want or we can...
Right, yes, so most people,
I think about 90% of people,
will say the discs on the right,
and the actual answer is
that these are identical.
These four discs in each
picture are identical.
They're the same discs, and
you can look pixel by pixel
and match this up and I'm
happy to send you guys
or show you guys the actual image
and you can spend as much
time on it as you need to.
The point here is that
the context matters,
so the background is
shifting your perception,
and this has kind of been the point
I've been trying to make
throughout the talk.
The manner in which you
present things to your players,
the context in which things
are presented to your players
can have profound impacts
on the way they see things,
what they like or don't like,
how they make decisions,
or what their internal reflection is.
Context matters, and so
the hope of this talk
was to convey the point
that if we're more aware
of the influences on behavior,
we can hopefully end
up making better games.
So thank you so much for
spending time with me, guys.
I greatly appreciate it.
(upbeat music)
