[MUSIC PLAYING]
RICHARD SHOTTON: Well thank
you very much for your time.
I want to spend
the next 45 minutes
or so talking about why I
think behavioral science is
so relevant for advertisers.
I want to discuss some of
the biases that have been
discovered by psychologists.
I want to discuss
some of the objections
to behavioral science.
Because you might be thinking,
look, if this stuff is really
as good as some
people say it is,
why aren't all brands using it?
But before getting
into the discussion,
I thought the best
place to start
was maybe a more personal
story, so how I first became
interested in this topic.
And unlike most things in life
that you just gradually drift
into, there was a
very specific moment
when I first realized the
value of behavioral science
for brands.
And it was in a slightly
strange circumstance,
being stuck in a
cab on the way back
from what had frankly been a bit
of a car crash client meeting.
The meeting, which was in
2004, was with the NHS,
and we had been trying to
encourage people to give blood.
And the meeting had
gone awfully because we
were way off our targets.
But then on the way back,
I was reading a book.
And in the back of that book,
I stumbled across the story
of Kitty Genovese.
Has anyone heard of
the story of Genovese?
So we've got one person.
Any others?
OK, a few people.
But for everyone else, Genovese
was a bar worker in New York.
And in the early hours
of March 13, 1964,
she had locked up her bar,
driven home to Queens,
and then parked about 100
yards from her front door.
Unfortunately, on the
walk to the front door,
she was spotted by a man
called Winston Moseley.
And over a period
of about 15 minutes,
Moseley stalks, stabs,
and murders her.
Now within a few days,
this was front page news
on "The New York Times."
Now that might not
sound surprising.
After all, it's a
pretty brutal murder.
But remember how violent
New York was in the 1960s.
There were 636 other murders
that year, and none of them
made the front page of
"The New York Times."
The reason this murder made
the front page was that
supposedly--
I do stress supposedly.
Supposedly, there were 37
people who witnessed it,
and none of them did
anything to intervene.
They didn't go down and help.
They didn't even
call the police.
In "The Times'" opinion,
this was just another example
of the city going to the dogs.
How could a
defenseless person be
attacked on the streets of New
York despite so many witnesses?
But two psychologists, Bibb
Latane and John Darley,
thought that "The Times"
had come to completely
the wrong conclusion.
It wasn't that no one
helped despite there
being so many witnesses.
It was that no one
helped because there
were so many witnesses.
They argued there was a
diffusion of responsibility
among the crowd.
Now rather than just claiming
this from logic alone,
the psychologists set up
a number of experiments
to try and prove their point.
They would stage emergencies--
so for example,
getting a colleague
to pretend to have
an epileptic fit--
and then they would
monitor whether strangers
came to that colleague's aid.
And they set up the emergencies so
that they were witnessed by either
an individual or a group.
And their key finding was
that people were up to twice
as likely to come
to a stranger's aid
if they were on their own
compared to being in a group.
Now jump back to 2004, when I first
read about this bias while we were
trying to work with the NHS.
This, I thought, was a wake-up call
that, bloody hell, we had been
falling victim to this bias.
We'd been going out and
asking everyone to donate.
And just as Latane
and Darley suggested,
most people were ignoring us.
Most people were
thinking, why should I
go through the time,
the hassle, the pain
of donating when I know loads
of other people have been asked?
So recognizing that the bystander
effect might be a key blocker,
I spoke to the creative agency,
and to a lovely strategist down at
DLKW called Charlie Snow,
and I said, look, Charlie.
Why don't we try and
tweak the creative?
Why don't we stop saying blood
stocks are low in England,
please donate, and
why don't we start
saying blood stocks are low
in Bermondsey, please donate?
Blood stocks are low in
Birmingham, please donate.
Blood stocks are low in
Basildon, please donate, trying
to make a slightly more
tailored appeal to build up
that sense of personal
responsibility.
So it was a very, very
simple tweak to the creative.
But most importantly,
two weeks later, we
get the cost per
response results back,
and we see they've improved
by about 10% or 12%.
Now that, to me,
was a revelation,
that this body of work--
and remember this
is back in 2004.
This body of work,
I felt, wasn't
being discussed
in media agencies,
that we were
dismissing psychology
and other related
disciplines as being abstract
and otherworldly, not fit for
solving commercial problems.
But that tiny example
suggested to me
that maybe we were
missing a trick.
So I've spent the
last 15 years or so
trying to immerse
myself in the topic,
and get to know as many
behavioral biases as possible,
and work with brands
to apply them.
And my argument to
brands about why
this is such an important
topic would essentially
be threefold, that this
stuff is phenomenally
relevant to what we do.
On a day to day basis, brands are
trying to get people to pay
a premium, to switch from
a competitor, to buy more regularly.
All of that is behavior change.
So why would you not draw on a
120-year history of the science
of behavior change?
What could be more relevant?
But relevance isn't
the only argument.
I think the other big
strength is its range.
There's a really worrying trend
in marketing at the moment
that people are trying to
find single solutions to solve
all marketing problems,
whether that's
brand purpose or another
kind of flavor of the month.
And the danger with having
these single approaches
is that marketing
is far too varied
to have a single solution.
The danger, if you have one
approach, is that you end up
force-fitting the client's
individual problem into the single
tool that you have.
In contrast, behavioral
science is much better
because it's not a big grand
theory you have to subscribe to
in its entirety.
It's this large varied
body of experiments.
And we can pick the bias,
pick the experiment,
pick the effect that
we think is most
relevant for our
particular client,
and then apply that one.
And then the final reason,
perhaps the most important
reason, is the robustness
of the discipline.
Still too many
decisions in marketing
are based on the opinion
of the highest paid
person in the room or
the most eloquent person.
Behavioral science is a
significant step forward
because it's based on the
peer-reviewed evidence of some
of the leading psychologists
from around the world,
whether that's current figures
like Kahneman and Thaler,
or historic figures like
Aronson and Skinner.
And best of all,
all their work is
available in the public domain.
If we think we've found a
psychological insight that
is relevant to one
of our clients,
we can take their
methodology and we
can rerun it to
make sure it works
for our particular problem.
And I thought it might
be useful to just go
through a couple of
those experiments.
Now when I first started
doing these experiments,
they were, frankly,
crap, because I started
experimenting on my colleagues.
And maybe I should have realized
the 20-somethings in a media
agency in central London are
not necessarily representative
of the population as a whole.
But that's where I began,
and the type of experiment
that I used to do--
I'll just give you one example.
This was for Armani perfumes.
The type of experiment--
I'd set up a stall for Armani
in the reception one morning,
and I'd laid out a
load of perfumes.
So there was Armani,
Calvin Klein, Chanel,
and a few others.
And as people came in,
I said to them, look.
Can you smell these perfumes?
Here's Armani.
It costs 40 quid.
What do you reckon?
Rate it out of 10 and put a
few adjectives to describe it.
And then I moved them
to the next perfume
to the next perfume
to the next perfume.
But once about 50 people had done
that, I waited for that first group
to go and for the laggards
to arrive, and then did
almost the same spiel to them.
So that time, I said, look.
Here are these perfumes.
Can you rate them?
Can you describe what you think?
Here's Armani.
Do this one first.
It costs 80 quid from Boots.
So not 40 quid, 80 quid.
So all the other questions
about Chanel, Calvin Klein,
all that was just a ruse.
That data was irrelevant;
we chucked it away.
What I really wanted to know
was how people rated the perfume
at different price levels.
And what happens is,
as in many areas,
people experience what
they expect to experience.
It's not just about smelling
the physical or chemical
constituents of the perfume.
So part of that
expectation is the brand.
Part of it's the packaging,
but part of it's the price.
And when people thought
the brand was expensive,
they were much more
likely to rate it highly.
So people were more than
twice as likely to rate
it seven out of 10.
Even the adjectives they
used to describe it changed.
So once we'd got this data, shoddy
as it was, we went down to see
the client and said, look.
You're lucky enough to work
in a market where it's not just
your quality that drives the price.
It actually works the
other way around as well.
Your price drives perceptions
of your quality.
So you could apply this in
a very literal minded way.
You could just stick
your prices up,
but that might hit the cold,
hard reality of the till.
Or you could change how
you allocate your budget.
Because what they were doing up
until then, like most brands,
was sticking 99.9% of their budget
behind their mass market,
main selling, quite
reasonably priced perfume.
They weren't putting much budget
at all behind their Armani Prive,
which sold for 150 quid,
because, using logic,
it hardly sold any units.
So our argument was,
put loads of your spend,
a disproportionate amount, behind
your really expensive perfume.
We know that price drives
perceptions of quality.
Therefore, people will admire
your brand as a whole more.
Therefore, you'll end
up selling more perfume.
So a very, very simple
experiment done quickly,
cost almost nothing, and
a very practical output.
But as I mentioned,
it's hardly scientific
to be testing your colleagues.
And I'd probably still be testing
on my colleagues today if they
weren't, luckily for me,
quite a cynical bunch.
And I ran so many of these
experiments over the first year
or so that my colleagues became
deeply suspicious about most
things I did.
So I would send out genuine meeting
invites about genuine client
problems, and half
the people I emailed
would say something
like, look, Shotters.
We know there's
no bloody meeting.
This is one of your
silly experiments.
Can you stop bothering us?
Some of us have got
proper jobs to do.
So I'd polluted my pool
of experimental subjects,
so I then had to go out
into the real world, which,
luckily for me,
means I've actually
got a reasonable database
of valid experiments.
And I want to take you through
just one more experiment.
And this was probably
back about 2010.
We were working with quite a few
retailers at the time,
and one of the dilemmas they had
was what to do with
contactless terminals.
So the issue was
that many of them
had introduced them in London.
Those terminals
had reduced queues,
so they were good in one
sense, but they were also
very expensive.
And many shops thought that they
weren't really worth the price.
So they rolled
them out in London
and hadn't rolled
them out elsewhere.
But a colleague and I--
brilliant researcher
called Claire Linford--
we thought there might be
another reason for introducing
these contactless terminals,
so we put together
a very simple experiment.
We went and stood outside
little coffee shops and delis
in central London, and we tried
to stop people and ask them
three questions.
Now the key word in
that sentence is tried.
If any of you have ever
done any market research,
you'll know that people in central
London do not want to stop
out of goodwill and give you
five minutes of their time.
So the first time we tried
this, it was a debacle.
Three hours, we got one
person stopping out of pity.
So then we went
back to the office
and we started brainstorming.
What cheap incentive
could we give
people that would encourage
them to stop and talk to us?
It had to be cheap.
I had no authority
to be doing this,
so this was all going
through expenses.
What do you reckon the best
thing that we found was?
It cost a pound, the best thing
for getting people to stop.
Any ideas?
AUDIENCE: Cookie.
RICHARD SHOTTON: A cookie?
Do you know what?
We tried food and drink.
People are remarkably
suspicious of a slightly
disheveled stranger trying
to hand them free food,
so that did not work.
We tried--
AUDIENCE: Charity donation?
RICHARD SHOTTON: We never tried
charity donations actually.
That shows where my mind goes.
AUDIENCE: Fake police ID?
RICHARD SHOTTON: Sorry?
AUDIENCE: Fake police ID?
RICHARD SHOTTON: We did
not try that either.
At least, I'm not willing
to admit that on camera.
The next thing we
tried was pound coins.
You know, actually handing
people free cash in the street
is not the best thing either.
I think people
feel they are being
called slightly cheap if they
take a grubby pound coin.
The best thing by far
was scratch cards.
If you give someone
a pound scratch card,
it is far more effective
at stopping someone
than a pound coin.
And I think you're giving them
the dream of winning a million
quid, not insinuating
that they're cheap.
So once we'd got our
incentive sorted out,
we went back outside
these stores,
stopped people, and asked
them three questions.
How much have you just paid?
What means of payment
did you use?
And can we see your receipt?
And so we compared
what they thought
they'd spent with what
they'd actually spent,
and there was a
very clear pattern.
When people were
spending with cash,
about 3/4 knew
what they'd spent.
But those who didn't,
overestimated their spend.
Credit or debit using chip
and pin, 2/3 of people
knew exactly what they'd spent.
But those who didn't, were
as likely to underestimate
as overestimate.
And then with contactless,
less than half of people
could accurately
remember, and they tended
to underestimate their spend.
So we saw a swing of
about 15% between memory
when spending with cash
and memory when spending
with contactless.
So we went back to the
retail teams and said,
look, you know that value,
from all your tracking,
is hugely important in getting
people to go back to the shop.
But if you're going
to be pedantic,
it's not value that matters.
It's memory of value.
And you can either change that
memory by reducing your prices,
which will decimate your margins,
or you can introduce
contactless terminals.
People remember
you as good value
and therefore, they're
more likely to return.
So that might sound like
an anachronistic finding.
Pretty much everywhere
has contactless terminals
these days.
The key point is the
underlying principle.
And the underlying principle,
that the same price
can be made to appear better
or worse value dependent on how
you display it, holds
in a remarkable number
of circumstances.
Over the next few weeks, when
you're out at a restaurant,
have a look at the menu.
Increasingly, restaurants are
taking off the pound signs
because they're aware of--
well, they might just do
it for design reasons,
but they might also be
aware of psychological work
by Sybil Yang, who's shown that
if you remove the pound signs
or dollar signs, people become
8% less price sensitive.
You're reducing
that pain of payment
because you're putting
a bit of distance
between the delivery of the
service and handing over money.
So again and again,
we see different ways
of displaying the same price can
change the perception of value.
Even something as basic as
a service provider changing
the unit of time they talk about.
I've done work with colleagues
where we've shown, again and again,
that a price described as a pound
a day is seen as much better value
than 365 quid a year
or 30 pounds a month.
People put too much emphasis
on the cash figure and too little
emphasis on the unit of time.
So it's almost like they
think six times four is not
the same as four times six.
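To make the arithmetic concrete, here is a minimal sketch using the three framings quoted above, converted to a common per-year cost. The figures are the ones from the talk; the point is that near-identical yearly totals are judged very differently depending on the unit of time.

```python
# The three price framings quoted above, converted to a common
# per-year cost. They come out almost identical, yet "a pound a
# day" is typically judged the best value.
frames = {
    "a pound a day": 1.00 * 365,
    "30 pounds a month": 30.00 * 12,
    "365 quid a year": 365.00,
}
for label, per_year in frames.items():
    print(f"{label}: {per_year:.0f} pounds a year")
```

Note that 30 pounds a month works out to 360 pounds a year, within about 2% of the other two framings.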
But having spent a
lot of time talking
to brands about these
biases, showing them
these experiments, what
I've tended to find
is people come back with
the same objections.
And I thought it
might be worth going
through some of those
objections and saying
why I think they're not valid.
And the first objection maybe
is one that you're feeling.
You might be thinking,
well, I don't
think I would be
affected by those biases.
You know, I don't think
I would be affected
by whether I was paying
with contactless or cash,
so why would my very
sophisticated consumer
be affected?
But just because
our intuition says
these tweaks are too
small to have an effect,
it doesn't mean it's
actually the case.
And when I first tried
to persuade marketers
that the tiniest of tweaks
could make a large difference,
I use lots of
academic experiments,
lots of case studies.
But that didn't seem
to affect people,
and they still went back
to their own intuition.
So what I've tried to do is,
before I go and see people,
I send them a survey.
And then I'll go through
a couple of questions
from that survey,
and the results
will show, hopefully quite
clearly, that we're all
affected by these biases.
So hopefully, some of you filled
in that survey that Gerald shared.
The results are blended with
other people I've asked this week
to get a nice, big, robust sample.
I want to take you through
two quick questions.
The first question was,
how good at your job
are you compared to your peers?
I only gave you two
potential answers.
You can either say above
average or below average.
So if there were a
statistician here,
they would say the answer
should be quite obvious.
Roughly 50% will say below.
Roughly 50% will say above.
Of course, if there were
a psychologist here,
they would say that is
never going to happen.
There is a well-known
bias of overconfidence.
People predictably
think they are
better than they actually are.
So knowing that, what
proportion of you do you think
said you are better than
your peers at your job?
Any guesses?
AUDIENCE: 99%.
RICHARD SHOTTON: 99%.
I love your cynicism.
It wasn't quite that bad.
Any others?
AUDIENCE: 70?
RICHARD SHOTTON: 70.
Well, we can split the
difference pretty much, 89%.
You know, that's certainly not
the highest I've ever seen.
The first time I ever did it,
so I know exactly who answered it,
was at the Newsworks conference,
and a full 96% of them thought
they were better than average.
That's the highest benchmark.
But it is a well-proven finding.
I would suggest that if you hear
a finding from someone purporting
to be interested in psychology,
a finding that seems too good
to be true and that only
one experiment backs up,
you should be a little skeptical.
Because it's important
these experiments replicate,
and overconfidence is one
that is a very robust finding.
Two of my favorite studies--
there's one from K. Patricia
Cross in which she goes
and finds psychology lecturers.
And even people
who are very well
aware of the bias
of overconfidence,
even 90% of them think they are
better lecturers than average.
There's another
study, Svenson, that's
slightly more ridiculous.
He goes and finds drivers who've
been laid up in car accidents.
They're still in hospital.
And even these people,
who are demonstrably
bloody awful at driving,
even a majority of them
think they are better
drivers than average.
So the point of this is to say,
look, we don't necessarily have
full introspective insight.
Our intuitions are not
necessarily accurate.
Or in the words of a
brilliant psychologist
called Timothy Wilson, we
are strangers to ourselves.
So just because you
feel these tweaks are
too small to influence you,
it doesn't mean it's the case.
And the second, related finding--
and this is one of the broader
findings of behavioral science
and social psychology.
If we don't have good insight
into our genuine motivations,
as Wilson suggests
and lots of experiments back up,
if we aren't able to explain
our real reasons for buying
a product, why, as an industry,
do we still spend hundreds
of millions of pounds interpreting
quantitative survey data
at face value?
I think there's a real danger
that it leads us to believe
shoppers are far more logical
and rational, and far more likely
to make thought-through decisions,
than they actually are.
But that wasn't the
only question I asked.
The second question was a
bit more of a puzzling one.
It was a question about apples.
And the question that time
was, how many calories
do you think an apple
has, more or less than 50?
So people had to
guess more or less,
and then they had to
fill in an open text box.
And they made a specific guess,
and the average guess was 75.
But whilst that's a
reasonable answer,
if anyone purporting to be
interested in psychology
sends you a survey, you
should be a little suspicious.
There's always
going to be a twist.
And the twist, in this
case, was only half
of you were asked that way.
Half were asked,
how many calories
do you think there are in an
apple, more or less than 150?
So not 50 like the
first time, 150.
So most people say
less, and then they
had to make a specific guess.
What do you reckon the average
answer in that case was?
AUDIENCE: 100.
RICHARD SHOTTON: 100.
Not far off at all, 113.
So you've got a swing of
about a 50% increase based
on this tiniest of tweaks.
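As a quick check of the swing just quoted, using the two average guesses from the survey:

```python
# Average calorie guesses reported above: 75 with the "50" anchor,
# 113 with the "150" anchor.
low_anchor_guess = 75
high_anchor_guess = 113
swing = (high_anchor_guess - low_anchor_guess) / low_anchor_guess
print(f"swing: {swing:.0%}")  # about a 50% increase
```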
Now psychologists,
again, would say
this is completely predictable.
There is a very well
known bias called
anchoring discovered by Kahneman
and Tversky in the mid-1970s.
And they describe anchoring as
the idea that if you throw out
a number at the beginning
of people's estimates,
they cannot help but
be influenced by it.
And Tversky's argument
was, most questions in life
are like this apple one.
There isn't a specific answer.
What there is is this zone
of reasonable answers.
So if you throw
out a big number,
people know it's too high.
So take 150 in our case,
they know it's too high,
but they take it
as a starting place
and they begin adjusting down.
And they stop once they
hit the top of that zone
of reasonable answers.
So in our case, 110, 115.
But then if people see
a low number like 50
at the beginning, they take
that as the starting place.
They know it's too low.
They adjust upwards,
but they only
adjust up to the bottom of that
zone of reasonable answers.
So you get a bizarre
situation in which,
even though everyone knows
the number is irrelevant,
because they see it
at the beginning,
because they take it
as a starting place,
because they don't
adjust enough,
it affects their final estimate.
Now, you may be coming up with
other objections in your mind.
You might be thinking,
well, what can we really
tell from a slightly
artificial survey?
You probably weren't
paying that much attention
when you did the survey.
You probably had lots
of other things to do
and there was nothing at
stake when you did it.
So you might be thinking,
well would these biases still
have the same effects when
shoppers have cash at stake?
Surely they have a
much greater motivation
to think things
through more logically.
But whilst that seems like
a reasonable objection,
these biases actually have as
much effect in the real world
as they do in the lab.
And perhaps the best
way to show that is
to think about what is
the most effective ad
campaign of all time.
Now that is a subjective
opinion, so over to you.
What do you think
the most effective ad
campaign of all time is?
Any guesses?
AUDIENCE: [INAUDIBLE].
RICHARD SHOTTON: Oh.
OK, excellent answer.
Brilliant answer, so I
reckon we've got that one.
And one more?
AUDIENCE: Breakfast is the
most important meal of the day?
RICHARD SHOTTON: Oh, OK.
OK.
Any others?
AUDIENCE: Gillette?
RICHARD SHOTTON: Gillette, OK.
So for Gillette, who here
has used a Gillette product
in the last week?
A sea of hands up.
So considering the margins on it,
that's pretty impressive
penetration.
What about, and hardly a
scientific sample [INAUDIBLE],,
who voted Leave?
Seemingly a very
ineffective campaign.
More seriously, great
as those campaigns are,
I think there's another campaign
that is even more effective.
So stick your hand up
and keep your hand up
if you've ever been married.
Stick your hand up.
OK.
Now did you buy or
receive a diamond ring?
Keep your hand up if you did.
OK.
We probably should do
it the other way around.
Did anyone not get
a diamond ring?
One person.
One person is not sure.
AUDIENCE: I bought one but--
RICHARD SHOTTON: Yeah,
but there's complications.
Let's not dig into this.
So maybe 50 people answered,
about 95% of people
bought a diamond ring.
I would argue that is the
most successful ad campaign
ever, because there is nothing
natural about getting a diamond
for an engagement ring.
In the 1940s, people
were as likely to buy
rubies or sapphires or
emeralds as they were diamonds.
But then in 1947, a
brilliant copywriter,
Frances Gerety, working
for the NW Ayer agency,
writes the line "A diamond
is forever" for De Beers.
And she manages to
fuse in people's minds
the link between the
enduring nature of true love
and the durability of the stone.
So it's an amazing campaign
because people pretty much
think there's no other choice.
But great as that line is-- and
it might be one of the best
lines ever written
in advertising--
the Ogilvy ad man, Rory
Sutherland, says, look,
it's not even the best
line run by De Beers.
The best line
written by De Beers
is one based on anchoring.
It's the, you should
spend a month's salary
on your diamond ring.
Think how ludicrous that is.
Why would you listen
to a salesman who
has a very strong, very obvious
vested interest in selling
you something expensive?
Yet just as anchoring
suggests, they throw out
the number of a month's salary.
People don't quite
spend a month's salary,
but they adjust
upwards and start
spending two or three weeks of
their salary on a diamond ring.
It has made De Beers
billions of pounds.
But De Beers, being
quite a canny company,
certainly didn't stop there.
In the 1970s, they started
going out and saying
in their ads, apologies,
we've made a slight mistake.
We've accidentally
been saying you
should spend a month's salary.
We meant to say you should
spend two months' salary.
And amazingly, just
as anchoring suggests,
people don't quite
spend eight weeks,
but they start
spending six or seven.
And again, it
doesn't stop there.
In the 1980s, De Beers
launched in Japan.
There's no heritage
of diamond rings
for engagement gifts in Japan.
There's no social norm
about what to spend,
so De Beers go out and say
you should spend three months'
salary on your diamond ring.
And now, the Japanese spend more
on their engagement rings than
pretty much any other
country in the world.
Now that has literally
made De Beers billions.
In 1939, they were
selling $23 million
worth of diamonds in the States.
By 1979, it was $2.1
billion of diamonds.
If you strip out inflation, it's
a 19-fold increase in sales.
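A rough sanity check on those figures is sketched below. The sales numbers are the ones quoted in the talk; the CPI values are my approximate assumptions for US inflation over the period, so the real multiple is only ballpark, but it comes out in the high teens, consistent with the roughly 19-fold figure quoted.

```python
# Rough sanity check of the quoted De Beers figures. Sales numbers
# are from the talk; the CPI values are approximate assumptions,
# so the inflation-adjusted multiple is only ballpark.
sales_1939 = 23e6            # dollars of diamonds sold in 1939
sales_1979 = 2.1e9           # dollars of diamonds sold in 1979
cpi_1939, cpi_1979 = 13.9, 72.6   # assumed approximate US CPI levels

nominal_multiple = sales_1979 / sales_1939
real_multiple = nominal_multiple / (cpi_1979 / cpi_1939)
print(f"nominal: {nominal_multiple:.0f}x, real: about {real_multiple:.0f}x")
```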
It's arguably the most effective
ad campaign of all time,
and it's partially
based on a bias.
But of course, some of
you might be coming up
with more objections.
You might be thinking,
well frankly, that stuff
was in the 1970s and the 1940s.
Would it really work today?
No, we've got the internet.
All the world's information is
available at our fingertips.
We've got millennials.
Surely things are different now.
But it's just not true.
The fundamentals of human nature
are the same as they ever were.
And I think the best way to show
that is to think about a more
recent campaign that has
almost been as successful as De
Beers'.
And this recent campaign is
based on a bias called price
relativity.
So it's similar to anchoring,
but with a few differences.
And price relativity
essentially means
consumers have no
fixed conception
of what is good or bad value.
They do not walk around
mentally computing
how much they're prepared to
pay per unit of happiness,
whether that's for
a trainer or a beer.
Because that type
of cross-category
calculation would be
ludicrously complex.
What people tend to do instead is
replace a very complex question
with a simpler question
that is almost as good.
And the simpler question in this
instance is, how much did I
pay for something similar?
If I'm now being
asked to pay more,
this new thing is bad value.
If I'm now being asked to pay less,
this new thing is good value.
That should interest marketers
because it means that value
is relative, not absolute.
And so if marketers can
change the comparison set,
they can change
consumers' willingness
to pay by orders of magnitude.
Many brands have done
this in the last 20 years.
Red Bull is a great example.
Craft beer is an example.
But I still think the best
example is probably Nespresso.
Now if you think back to
when Nespresso launched,
if a lesser team than
Nestle had done it,
I think they'd have launched it
in something like this.
They'd have put their
coffee in 1/2 kilo bags,
and then they'd have sold that
coffee in Tesco or Sainsbury's.
But if they had done so--
and assuming they charge
the same per gram price
they do now--
how much do you think
a 474-gram bag,
a standard bag of coffee,
would cost?
Any guesses?
AUDIENCE: 20 pound.
RICHARD SHOTTON: 20 pound.
I mean, that is a
ridiculously large number,
but it's not quite
ridiculous enough.
Any advances on 20?
AUDIENCE: 34?
RICHARD SHOTTON: 34.
That's a very suspiciously
accurate number.
34 quid, brilliant.
So 34 quid.
I would argue there
is no way on earth
any consumer in their right
mind would go to Tescos,
push aside a 6-quid
bag of Douwe Egberts,
and take home a 34-pound
bag of Nespresso.
And it wouldn't just be the case
that this felt expensive.
It would feel so
expensive, it would
feel almost morally wrong.
But of course, Nespresso
didn't do that.
They launch in pods.
A pod gives you a
cup-size serving.
And as soon as consumers
think of cups of coffee,
their comparison
set is no longer
Cafedirect or Douwe Egberts.
It's suddenly Costa
Coffee or Starbucks.
And that makes the 47 pence that
Nespresso want for a Lungo pod
remarkably good value when you
compare it to the 2.90 pounds
that Costa want for a flat white.
But 47 pence for a pod,
34 pounds for a bag,
it's exactly the same per-gram
price, but one feels a rip-off,
the other a bargain.
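The per-gram equivalence checks out under a reasonable assumption about pod weight. The pod weight of roughly 6.5 grams below is my assumption; the prices are the ones from the talk.

```python
# Comparing the two per-gram prices quoted above. The ~6.5 g pod
# weight is an assumption; the prices are the ones in the talk.
bag_price_pounds, bag_grams = 34.00, 474
pod_price_pounds, pod_grams = 0.47, 6.5

bag_per_gram = bag_price_pounds / bag_grams
pod_per_gram = pod_price_pounds / pod_grams
print(f"bag: {bag_per_gram * 100:.1f}p/g, pod: {pod_per_gram * 100:.1f}p/g")
```

Both come out at about 7.2 pence per gram, which is why the 474-gram bag would have to cost around 34 pounds.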
Again, a company's
made billions of pounds
from the creative but simple
use of a very well known bias.
Now I wouldn't want to suggest
that behavioral science is only
of use in those grand moments
of launching a diamond ring
or launching a coffee pod.
There are also arguments
that it can be used
for more tactical approaches.
So here, you've got
M&S using it, I think.
They have the offer
of dining for 10 quid.
Are a couple of microwave
meals good value for 10 quid?
I don't know, maybe, maybe not.
But what I certainly
know is 10 quid
feels remarkably good value
compared to the 50 quid
that PizzaExpress want
if you go for a meal.
So by switching
that comparison set,
they change consumers'
willingness to pay.
But on to the final objection.
Some of you might be
thinking, well, maybe I'm
affected by these
biases, maybe consumers
are affected by
these biases too.
But do we really need behavioral
science to get to these ideas?
Couldn't we just use
common sense and logic?
Are we just dressing up this
common sense in academic gowns?
Now I think that's a flawed
attack for two reasons.
Firstly, if all
behavioral science was,
was this wonderful catalog
and compendium of insights
into human nature, and it allowed
us to get to ideas quicker,
that would be a
great achievement.
But behavioral science
is far more than that.
It's not just stuff that
we could get to logically.
There are some very
counter-intuitive ideas.
And I want to take
you through one
of those counter-intuitive
ideas now.
But before we do it, I want
to do a quick live experiment.
So it's a live experiment based
on the work of an Australian
called Adam Ferrier,
and what I'm going to do
is flash up two pictures
of some cookies.
And all you have to
decide is whether you
want to eat the one on the
left, your left, or the one
on the right.
Simple?
OK.
So stick your hand up
if you would prefer
to eat the one on the left.
OK.
And the right?
So you can keep me
honest in the back.
What do you reckon?
80/20?
75/25 in favor of the left?
Yeah, roughly?
So that is in line
with the population figure.
Our result was a bit more extreme.
Jenny Riddell and I got
a national representative
sample of 626 people,
and we asked them
which they preferred.
66% plumped for
the one on the left.
Now if you look at them
a bit more closely,
you can start to see there isn't
a huge amount of difference.
The one on the left is
the original cookie,
rough and flawed.
The one on the
right has just had
those imperfections removed.
So it's a slightly
embarrassing chart
to present because I was so
awful at using PowerPoint,
I didn't know how to make
it perfectly circular,
so I had to get a graphic
designer to do this for us.
I think this might be the most
expensive experiment I've ever
run, so please appreciate it.
So the point of this
is it's one example
of showing that if you
remove the imperfections
from a product, it can
become less appealing.
Now there are specific reasons
why that might happen in food,
but it is not just a
finding relevant to food.
Back in 1966, this
man, Elliot Aronson,
a professor of psychology
at Harvard University,
ran a classic experiment.
He recruited a colleague.
He gets his colleague
to take part in a quiz.
He gives him the
answers to the quiz,
so the guy gets 90% of
the questions right,
wins the quiz by miles,
looks like a genius.
But then as he's
leaving, he stands up
and he makes a
small blunder, what
the Americans call a pratfall.
So he stands up and he spills
a cup of coffee down himself.
All of this has been
recorded by Aronson,
and then Aronson
takes that recording
and plays it to people.
And he plays it in
one of two variants.
Either they hear the entire
incident, great performance
and mistake, or just
the great performance.
Then when Aronson
questions people
about how appealing
the contestant is,
people find the contestant
who had exhibited the flaw
significantly more appealing.
So he terms this
the pratfall effect,
the idea that if people or
products exhibit a flaw,
they become more appealing.
And it's certainly not
the case that this
is just an artifact of an
artificial lab experiment.
In 2015, Northwestern
ran a huge experiment
looking at product reviews.
111,000 product reviews,
and they crossed
the rating of that review--
one being awful, five being
brilliant-- with the likelihood
to purchase the product.
So they looked at 22
product categories.
I've just got one up
here, salon hair care,
but all the categories showed
roughly the same pattern.
So as the review gets better,
as logic would suggest,
likelihood to
purchase increases.
But then at some stage
between 4.2 and 4.5 out of 5,
depending on the category,
likelihood to purchase peaks.
And then if the reviews get any
better, they begin to decline.
So the psychologists argued that
consumers thought perfection
was too good to be true.
They did not trust
claims of perfection.
And if you think back to the
history of the most successful
ads of all time, it's
interesting how many
have used this insight.
If you just look
at the best ones--
1959, DDB, they go
out and say ugly is only skin
deep for the VW Beetle.
They continue that
campaign with a line--
Bob Levenson here.
They tell people
the car is slow.
So there's a wonderful line.
If you can't read
it, it says, "you
can tell them from
Volkswagens because a VW won't
go more than 72
miles an hour, even
though the speedometer shows
a wildly optimistic top speed
of 90."
So they told people
their cars were slow.
Same agency three
years later, in a line
written by Paula Green for Avis,
they say they're unpopular.
Then Listerine go out
and say they taste awful.
The taste you hate twice a day.
Guinness say they're slow.
Stella say they're expensive.
Cream cakes say they're
highly calorific.
Again and again, the best
advertising admits a weakness.
More recently, it's a bit of
a trend seemingly in America
rather than here
for brands to admit
they have poor reviews
amongst some people.
This is from a
resort, which wants
to position itself as for
experts with thrilling slopes.
So it puts up a review from
Greg in Los Angeles, one star.
"I've heard Snowbird
is a tough mountain,
but this is ridiculous.
I felt like every trail was
a steep chute or littered
with tree wells.
How is anyone supposed
to ride in that?
Not fun."
Or taking it to
the final extreme,
you've got work by Hans Brinker
budget hotel in Amsterdam,
in which they go out and
say their service is awful.
Now what I think these
advertisers have realized
is there's three big reasons
why you should admit flaws.
The first is that consumers do
not trust advertising claims.
They think that we're either
partial with the truth
or they might mistakenly
think advertisers lie.
So by admitting a flaw,
you tangibly demonstrate
your honesty, and
then your other claims
become more believable.
You've got over one of the
biggest hurdles in message
believability.
Secondly, thinking back
to that Northwestern data,
consumers don't
trust perfection.
It's too good to be true.
They know from bitter
experience there
are always trade-offs in life.
And therefore, if you
don't go out and say
where your flaw is, it's
not that consumers think
there isn't a flaw, they
still think there's a problem,
they are just uncertain about
where that problem lies.
And the danger is they may think
it lies somewhere important.
So my favorite example, going
back to Rory Sutherland,
he's argued the pratfall
effect is responsible partly
for the success of
budget airlines.
Think back to when
budget airlines launched.
It was a very bizarre offering.
One day you can fly to
Madrid for 100 quid.
The next, it's 10 quid.
If they hadn't gone out and
said their service was awful,
consumers might have
thought the cheap price came
at the expense of safety.
But by going out and saying
how bad their service was,
consumers could understand the
reason for the cheap price.
They felt that deal was
fair, and therefore they
were happy to fly.
And then the third and
final reason, I think,
is when you look at
these greatest brands,
you clearly see that
they aren't just
picking a weakness randomly.
They're picking a weakness
very, very carefully.
They're thinking,
what weakness can
we use whose mirror strength
backs up our core reason?
So VW go out and
say they're ugly,
their underlying
message being, well we
don't care about
aesthetic fripperies.
We care about
engineering excellence.
Listerine go out and
say, yes, we taste awful.
But what would you expect
from a potent medicine?
Even Hans Brinker budget hotel.
I think what they're doing
is saying, yes, we're a dive,
but by God, you're going
to have a good time.
The best brands,
again and again,
choose a weakness that
mirrors their strength.
But those examples, I think,
should pose a question.
If there is so much academic
evidence, real-world evidence,
case-study evidence that
flaws are so powerful,
why is it that the use of
the pratfall effect, this
admitting of flaws,
is so vanishingly
rare in advertising?
Because it's no good just
looking at the best performing
advertisers.
We've gone back to 1959,
we've picked a dozen adverts.
I'm sure between us, we
could get a dozen more.
But there have been
tens of thousands
of ads in the
intervening 60 years.
And if you take a more
representative sample--
I've done this.
If you take a representative
weekend sample of papers
and go through them looking for
the pratfall effect, then
even with a very, very loose
definition of the pratfall
effect, I could still only find
less than 0.05% of ads using it.
If I was strict,
none of them were.
So why is such a well-proven
technique so ignored?
And I think the reason lies in
the work of this man, Stephen
Ross, who was a professor
of finance at MIT.
And Ross coined a concept
called the principal-agent
problem, which explains a
lot of problems in marketing.
The principal-agent
problem is the fact
that there is a divergence
of interest
between the principal, that is
the business or shareholder,
and the agent, that is
the marketer or employee.
The principal wants long-term,
sustainable, profitable growth,
whereas the agent, yes,
they want that to a degree,
but what they also
want, unspoken,
is safe career progression.
And what the pratfall
effect does brilliantly
is give you the best
chance for the principal.
It gives you the best chance of
long-term sustainable growth.
But what it doesn't
necessarily do
is give you the best chance
of safe career progression.
Think to some of the
examples I went through.
Think about Stella Artois.
Imagine you were the marketing
director who came up with that.
You worked with an agency that
coined the phrase, reassuringly
expensive.
Imagine this alternate universe.
You run that, the
campaign flops,
and any campaign can flop.
I think you'd be fired.
Because the CEO probably
doesn't subscribe
to behavioral science but
to some straw-man version
of economics, and
therefore he would
think you were a bloody idiot.
You told people your
product was expensive,
of course that dampened demand.
But that, to me, makes this
the most interesting of biases.
Because if we know one
thing about advertising,
it's that what is
distinctive is memorable.
And because of the selfish
motivations of agents,
this approach will always
be distinctive.
And therefore, if you can
persuade the organization
you're working with to
use the pratfall effect,
the odds are their competitors
won't, because they won't
be subscribing to
behavioral science,
and therefore your
advertising can be
distinctive in the long term.
Now it's not, to be
completely honest,
just these business reasons
why I love the pratfall effect.
The other more
personal reason is
I love presenting about
the pratfall effect
because it essentially gives
you a get out of jail free card.
I worked with Jamie
for ages, and he's
seen me on the football pitch.
He knows that I'm
essentially a bit of a klutz,
and it's probably a
50/50 chance whether I
was going to trip over my
books or spill my water
in a 45-minute presentation.
And the brilliant thing
about the pratfall effect
is you can make any
mistake you want.
You can forget all the
slides, and then you
can pretend it was a devious
Machiavellian technique
to make yourself more appealing.
And then the final reason
I love this bias is,
as mentioned, I've just written
a book called "The Choice
Factory," and that has
pretty much been a walking
example of the pratfall effect.
Everything that can go
wrong has gone wrong.
So the publisher got the
wrong price on the book
so all the books had to go
back to the printers again.
They didn't manage to get enough
stock in time for the launch
day, so it sold out and no one
could buy the bloody thing.
Even stuff that
started off very well
has had a sting in the tail.
So having no
advertising budget, what
I decided to do
to try and promote
this was to get the book,
send it to people I admired,
like Dave Trott or Rory
Sutherland or Steve Harrison,
get them to
write a nice blurb,
and then send
that to magazines.
And I kind of hoped they might
write something about it.
So one of the reviews I got
was from Mark Ritson, professor
at Melbourne Business
School, and he wrote this.
If you've ever read his columns,
you know it's quite acerbic,
so in his style, he starts off
slagging off other advertisers.
He says that they are a
cacophony of overstatement,
and kindly says I have a
balanced voice in contrast.
So that sounds great.
I'll send that to
some websites and see
what publicity I can get.
So I sent it to a
site called M+AD,
along with others,
which is the New Zealand
version of "Campaign."
Couple of days later, I go back
to have a look at the site,
and they had run this.
"A cacophony of overstatement."
So I emailed the
editor and said,
I don't quite think
that's a fair reflection
of the spirit of the quote.
And he [INAUDIBLE]
back and said,
we thought it
sounded quite good.
And English is their
first language.
What is going on?
What is the state of
modern journalism?
So I went home.
I was ranting to my wife about
this, the kind of unfairness
of it all.
And my wife, who's the far
cannier of the pair of us,
and a copywriter, said,
look, there's actually no
point in getting angry.
It's not going to
do you any good.
Why don't you play these
people at their own game?
Why don't you send me the
other reviews you've got,
and I'll see what I can do.
So I sent my wife some reviews.
I sent her one from
Martin Sorrell.
Martin, sort of suspiciously,
within about two minutes
of receiving the email,
said that he would
take a look at the book.
So my wife said, why
don't we edit that down
and stick it on the front cover.
Martin Sorrell
says, take a look.
But to my eternal
shame, I am a coward.
And I thought maybe, you
know, I might one day
want to move jobs.
You probably should
edit that bit.
Maybe I want to
move jobs one day,
and perhaps annoying the
most powerful man in media
was not a good idea.
So I didn't run that
on the front cover.
Instead, I ran out a
Rory Sutherland quote
that's maybe more appropriate.
So I hope that was useful.
The key thing I would
like to stress, though,
is that this is barely scraping
the surface of behavioral science.
There are literally
hundreds of experiments,
hundreds of biases identified.
Because there is such a
range, whatever problem you
are facing, there will be a
bias that maybe can't quite
solve the issue you
have, but it will at least
give you a different
angle and help solve it.
Thank you very much.
[APPLAUSE]
