Welcome to United Audiobooks.
With no bullshit and no time wasted, let's dive into the actual point.
Thinking, Fast and Slow, by Daniel Kahneman.
Kahneman was born in Tel Aviv in 1934 and
spent his childhood years in Paris, France.
He and his family lived in Paris when it was
occupied by Nazi Germany in 1940, and they
spent most of the war attempting to avoid
internment.
With the exception of his father, who died
of diabetes in 1944, his family survived.
The family then moved to British Palestine
in 1948, just before the creation of the state
of Israel.
Kahneman studied psychology at the Hebrew
University of Jerusalem, graduating in 1954,
and then served in the psychology department
of the Israel Defense Forces.
In 1958, he traveled to the United States
to earn his PhD in Psychology from the University
of California, Berkeley.
Kahneman then became a lecturer in psychology
and collaborated with Amos Tversky to study
judgment, decision-making, and prospect theory.
Kahneman was ultimately awarded the Nobel
Prize in Economics in 2002 for his work on
prospect theory.
Kahneman's work on prospect theory is built
on the history of behavioral economics, particularly
the work of Swiss scientist Daniel Bernoulli,
who created utility theory.
This theory, which stood the test of time
for nearly 300 years, argued that the value
of additional money (its utility) depends on
the amount of money someone already has: equal
proportional gains in wealth carry equal utility.
Therefore, a gift of 10 ducats has the same
utility to someone who already has 100 ducats
as a gift of 20 ducats has to someone who
already has 200 ducats.
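Kahneman's summary gives no formula, but Bernoulli's theory is conventionally modeled with logarithmic utility, under which equal proportional gains in wealth yield equal gains in utility. A minimal sketch of the ducat example under that (assumed) log form:

```python
import math

def utility(wealth):
    """Bernoulli's utility of wealth, modeled here as logarithmic."""
    return math.log(wealth)

# A 10-ducat gift to someone holding 100 ducats...
gain_poorer = utility(110) - utility(100)
# ...versus a 20-ducat gift to someone holding 200 ducats.
gain_richer = utility(220) - utility(200)

# Both gifts raise wealth by the same 10%, so their utilities match:
print(abs(gain_poorer - gain_richer) < 1e-12)  # True (both equal ln 1.1)
```
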
But in the book, Kahneman shows how Bernoulli's
theory is flawed: it doesn't take into account
the difference in utility between a gain and
a loss.
This crucial omission becomes the basis of Kahneman's
own theory, prospect theory, which argues
that people value losses more than they value
gains.
Kahneman begins this survey of heuristics-and-biases
psychology, Thinking, Fast and Slow, by introducing
two "systems" of cognition: one fast, the
other slow.
Throughout the book, Kahneman shows how the
two systems collaborate and describes many
circumstances in which fast, intuitive thinking
reliably fails.
Introduction
Kahneman begins by describing the goal of
his book: to give people a richer vocabulary
for discussing and detecting errors in judgment.
He offers a brief history of his own professional
interest in the psychology of judgment and
decision making, illustrated by some examples
of the successes, and the failures, of human
intuition.
Finally, Kahneman provides a high-level outline
of Thinking, Fast and Slow, which begins by
detailing the workings of two complementary
"systems" of cognition and describes the heuristics,
or rules of thumb, on which those systems
rely.
In the "Origins" section of the Introduction,
Kahneman discusses his research and thought
partner, the late Amos Tversky, at length.
Tversky's contributions were central to all
of Kahneman's work and success.
Part 1.
Chapter 1: The Characters of the Story
Kahneman introduces the two systems alluded
to in the introduction.
System 1 is the automatic, intuitive set of
thought processes by which people often make
decisions, sometimes without conscious awareness.
System 2 is deliberate and rational, but it
is also lazy, often superficially endorsing
whatever intuitive judgment System 1 comes
up with.
Moreover, the use of System 2 involves a focusing,
and thus a narrowing, of attention.
When System 2 is engaged on a problem, a person
becomes much less aware of anything not immediately
relevant to solving the problem.
The systems come into conflict whenever a
person must do something counterintuitive,
such as "steer into the skid" on an icy road
or make sense of seemingly contradictory visual
data.
Chapter 2: Attention and Effort
System 1, Kahneman says, is "at the center
of the story," whereas System 2 is a mere
"supporting character."
This is in part because System 2 usually eschews
effort, only getting involved in a decision
if and insofar as it needs to.
To give the reader a reference point for System
2 exertion, Kahneman describes, and invites
the reader to try, a simple but challenging
arithmetic exercise called the Add-1 task.
In experiments, he says, performance of the
Add-1 task has been shown to render people
"effectively blind" to irrelevant stimuli.
Mental effort (the intense and sustained engagement
of System 2) also produces a suite of physiological
effects, including dilated pupils and an elevated
heart rate.
Kahneman's introduction begins with an apparently
pessimistic tone, not the "can-do" motivational
rhetoric one might expect from a popular psychology
book.
People, he suggests, are not very astute self-critics,
especially when it comes to their own reasoning
processes.
They are, however, quite adept at (and in
many cases fond of) criticizing others.
Thus, Kahneman holds out the hope that readers
of his work can become more skilled at this
sort of criticism, helping to detect the blind
spots in other people's decision making.
In this sense, the work is not as pessimistic
as it may first appear: cognitive self-awareness
simply needs to be recast as a collective
or communal enterprise.
Kahneman's mission to improve the quality
of "gossip" is a roundabout way to improve
society's overall quality of thinking.
At the end of Chapter 1, Kahneman is careful
to point out that the two "systems" are really
aliases for different ways of thinking.
They are not distinct physical regions of
the brain, separate neural pathways, or anything
quite that concrete.
In part, Kahneman chooses the names System
1 and System 2 because these terms have some
currency in other psychological literature.
More pragmatically, he adopts them because
they are memorable and concise, two traits
that lie at the heart of many cognitive heuristics
having to do with language.
He treats System 1 and System 2 as characters
in a story because, as later chapters will
underscore, a story with characters is much
more easily understood and assimilated than
a non-narrative exposition of dry facts.
The structure and vocabulary of Thinking,
Fast and Slow embody some of its author's
fundamental insights about learning and attention.
One of the optical illusions presented in
Chapter 1, the Müller-Lyer illusion, has
a long and interesting history of its own.
This is the famous illusion of the two arrowlike
figures, one with two heads pointing out (<-->)
and one with two tails (>--<).
The lines themselves are identical in length,
but almost everyone perceives the line in
the two-headed arrow (<-->) as shorter than
the corresponding line in the two-tailed one
(>--<).
This perception is remarkably hard to shake
even if one is familiar with the illusion.
The effect was first documented by German
psychiatrist Franz Carl Müller-Lyer (1857–1916)
in 1889 and has been the subject of many explanatory
efforts.
Some theories emphasize the fact that human
vision evolved to deal with three-dimensional
environments, making it easily fooled by flat
images.
Other explanations rest on the idea that the
brain's first, unconscious judgment of length
is not sophisticated enough to distinguish
the arrow shaft from its tails.
For Kahneman's purposes, the Müller-Lyer
illusion is noteworthy mainly because it shows
the persistence and automaticity with which
System 1 can make a mistake.
No matter how many times one views the illusion,
it is still likely to seem as though one line
is shorter than the other.
It takes conscious restraint (the exercise
of System 2) to recognize that one's intuition
is making a systematic error in judgment.
Chapter 3: The Lazy Controller
Complex, methodical System 2 thinking demands
self-control, particularly if such thinking
is performed under time pressure.
Kahneman notes some exceptions to this general
trend: in a pleasant and well-studied psychological
state called flow, all one's attention goes
into the activity at hand, and no effort is
needed to stick to the task.
Still, in general, being cognitively busy
leaves one with less willpower to resist such
temptations as junk food or impulsive spending.
This state of diminished self-control is known
as ego depletion.
Keen to avoid such an expenditure of willpower,
System 2 seldom contradicts the intuitions
of System 1 unless an obvious discrepancy
prompts further investigation.
Children who possess a strong and unusually
active System 2, who are capable of delaying
gratification and exercising self-restraint,
are likely to perform better on intelligence
tests later in life.
Chapter 4: The Associative Machine
System 1, Kahneman suggests, works largely
by association.
Once an idea has been "activated" (for instance,
by reading a word), System 1 spontaneously
searches for related and compatible ideas.
Much of this associative work happens unconsciously,
as can be observed in studies of so-called
priming effects, patterns of behavior and
cognition that appear when a subject is primed
with a particular stimulus.
Students asked to solve a crossword puzzle
featuring elderly-themed words (e.g., "gray"
or "wrinkle") will move more slowly when walking
down the hall afterward.
Kahneman cites several other studies in which
a seemingly innocuous or irrelevant stimulus
produced a conceptually related effect on
a subject's thoughts or actions.
Chapter 5: Cognitive Ease
Next, Kahneman expands on a concept discussed
briefly in Chapter 3.
In deciding whether System 2 should be tapped
to evaluate a decision, he says, System 1
relies on a perception of cognitive ease or
its opposite, cognitive strain.
The more strained System 1 is, the less effective
its intuitions are, and the more likely System
2 is to be called in to consciously address
the problem at hand.
The causes of cognitive ease, however, include
some wholly incidental features that have
no bearing on whether a problem is easily
solved by intuition, or whether a given proposition
is likely to be true.
Presenting a statement in a clear, bold font,
for instance, makes it seem more familiar
and less cognitively burdensome, prompting
System 1 to see the often incorrect intuitive
answer as more plausible.
Problems presented in a small, difficult-to-read
font, by contrast, activate System 2 and lead
more participants to reject the incorrect
intuitive answer suggested by System 1 and
to arrive at the correct answer.
The rhyme-as-reason effect is another example
of how cognitive ease can mislead: rhyming
sayings are "judged more insightful" than
phrases with near-identical meanings that
do not rhyme.
Mere exposure to just about any stimulus,
provided it is not noxious, can lead a person
to later associate that stimulus with feelings
of familiarity and ease.
When influenced by such feelings, an individual
will rely even more heavily than usual on
System 1, whether or not this reliance is
warranted by the nature of the cognitive problem
to be solved.
Kahneman mentions the concept of flow only
in passing, perhaps because his focus is on
the ordinary human experiences in which distractions
abound and paying attention is effortful.
The study of flow belongs to a broader domain
called positive psychology, which investigates
the psychological underpinnings of happiness
and well-being.
In Thinking, Fast and Slow, Kahneman, though
by no means a pessimist, is more interested
in the ways that otherwise successful intuitions
can fail.
Nonetheless, the literature on the topic of
flow is extensive, dating back to about the
same time as Kahneman's own early studies
on heuristics and biases.
The founding figure in this domain of research
is Mihaly Csikszentmihalyi, a Hungarian American
psychologist whose theories are drawn from
extensive, worldwide interview and survey
data.
Csikszentmihalyi proposes the existence of
a cognitive sweet spot of sorts, in which
a person's skills are perfectly matched to
the challenges they face.
In this respect, Csikszentmihalyi's work contrasts
notably with Kahneman's, which tends to unearth
mismatches between humankind's cognitive tools
and the problems it must solve.
Csikszentmihalyi first introduced his ideas
to a popular audience in Beyond Boredom and
Anxiety (1975), but his best-known book is
Flow: The Psychology of Optimal Experience
(1990).
Nearly three decades after it was first published,
Flow continues to be widely read and is a
frequent set text in psychology, design, and
cognitive science courses.
Willpower is another concept worthy of a deeper
dive.
The lack or exhaustion of willpower (what
Kahneman and others call ego depletion) can
intensify cognitive biases and suppress System
2 thinking.
Willpower has also, somewhat more controversially,
been proffered as an explanation for the vastly
different academic outcomes among children
with similar educational backgrounds.
Toward the end of Chapter 3, Kahneman cites
a classic study of willpower in young children:
the famous "marshmallow test" or the "Stanford
marshmallow experiment" conducted by Walter
Mischel.
Kahneman describes a version of the study
that involved Oreos instead, but regardless
of the snack chosen, the basic experimental
protocol was the same.
The experimenter would offer a child one marshmallow
(or cookie, pretzel, etc.) and promise to
double the reward if the child could wait
15 minutes.
To endure the wait, the children often attempted
to distract themselves or to ignore the marshmallow
by turning around or covering their eyes.
About a third of the subjects managed to wait
the full 15 minutes without eating the marshmallow.
As a group, these children were found to have
better educational outcomes later in life, as
measured, for instance, by standardized test
scores.
Some aspects of the "marshmallow test" experiment
and its interpretation have since been called
into question.
In a June 2018 article for The Atlantic, sociology
professor Jessica McCrory Calarco suggests
that home environment, rather than innate
willpower, may play the main role in explaining
why some children can delay gratification
and others cannot.
However, Mischel has continued to endorse
the willpower explanation, for example in
his popular book The Marshmallow Test (2014).
Chapter 6: Norms, Surprises, and Causes
Kahneman explains that System 1 is constantly
engaged in "maintain[ing] and updat[ing] a
model" of the world, and of "what is normal
in it."
Against the backdrop of these mental norms,
some events stand out as surprising, but the
mind quickly adapts to surprises and fits
them into the overarching pattern.
Thus, an event System 2 knows to be rare can
seem familiar and expected to System 1 simply
because it matches up with a past experience.
Faced with two consecutive surprises, System
1 works to weave them together into a pattern
that makes neither event surprising.
In doing so, System 1 often comes up with
causal explanations, in which agency, blame,
and intention are imputed even to inanimate
objects.
Such causal reasoning works adequately much
of the time, but it works against the grain
of any attempt to reason statistically.
When phenomena have no clear single cause,
or must be considered in aggregate, statistical
reasoning (a System 2 specialty) is necessary.
Chapter 7: A Machine for Jumping to Conclusions
The net effect of many System 1 intuitions
is a tendency to jump to conclusions.
"Conscious doubt" and the toleration of uncertainty
require the deliberate, effortful engagement
of System 2.
Without such conscious scrutiny, people are
prone to confirmation bias, in which evidence
that fits into preexisting beliefs is given
more weight than contradictory evidence (which
may be dismissed entirely).
This bias can take the form of a halo effect,
a "tendency to like (or dislike) everything
about a person."
Fundamentally, Kahneman says, these biases
spring from the fact that System 1 is tasked
with fitting available information into a
coherent storyónot with seeking out more
information to challenge the story or fill
in gaps.
Kahneman gives a colorful name to this basic
cognitive tendency: "What You See Is All There
Is," or WYSIATI for short.
In later chapters, Kahneman will refer back
to WYSIATI as a causal factor in many different
types of biases.
One way to get a sense of the System 1/System
2 dichotomy is to think of the classic video
game Tetris, in which players must fit together
a series of falling blocks.
System 1 is like a novice Tetris player, concerned
only with the blocks currently on screen.
Its goal is to take the information in front
of it (the blocks) and fit that information
into as consistent and plausible a pattern
as possible.
Because order and simplicity are its overriding
goals, System 1 recoils from anything that
doesn't fit; more "blocks" are not necessarily
better from its point of view.
More information is welcome if it fits nicely
into the preexisting pattern (in cognition,
this is called confirmation bias) and is decidedly
unwelcome if it disrupts the pattern.
In fact, anyone who has played Tetris also
has a good visual model for cognitive ease
and cognitive strain, two concepts introduced
in Chapter 5 and repeated throughout the book.
The slow pace, few blocks, and ample maneuvering
room at the beginning of the game create a
condition of cognitive ease.
The neatly fitted blocks on the screen symbolize
that easeful mental state.
As the pace quickens, blocks pile up, and
things get more difficult to fit together,
cognitive strain sets in.
The haphazardly stacked blocks are a virtual
mirror image of a mind struggling to integrate
large amounts of potentially contradictory
information.
An expert Tetris player does not just think
one block at a time.
Patterns are imagined in advance, contingencies
are planned for, and opportunities are identified.
Thinking about what blocks might show up,
however, is a System 2 activity, both in Tetris
and in life.
The saying "out of sight, out of mind," though
accurate enough as a general observation about
human cognition, is especially true of
System 1.
Chapter 8: How Judgments Happen
System 1, Kahneman asserts, understands the
world in terms of basic assessments: simple,
approximate, intuitive readings of the current
situation.
These assessments tend to sort events and
objects into crude categories: aversive or
attractive, threat or opportunity.
A stranger on the street, for example, is
instantly and unconsciously assessed as either
"friendly" or "hostile" based on physical
build, facial expression, and so forth.
However, there are some rather severe limits
to the problems that can be solved by such
assessments.
System 1 is good at estimating averages, for
instance, but very poor at estimating sums,
or what Kahneman calls sum-like variables.
Unfortunately, statistical and probabilistic
reasoning rely on the ability to compare such
variables.
From the point of view of System 1, an easier
task is intensity matching, in which corresponding
values must be assigned to two variables with
different dimensions.
The seriousness of a crime, for example, can
be intuitively matched to the severity of
the punishment, just as the loudness of a
noise can be matched to the brightness of
a color.
Like other traits mentioned in this chapter,
the tendency toward intensity matching simplifies
everyday decision making but greatly complicates
the task of attempting to think statistically.
Chapter 9: Answering an Easier Question
A heuristic can be thought of as a way of
substituting a more easily answered question
for the one being asked.
Kahneman calls these the heuristic question
and the target question, respectively.
For example, if the target question is "How
happy are you with your life these days?"
many people base their answer on the heuristic
question "What is my mood right now?"
This specific substitution is an example of
the mood heuristic, in which people rely on
an assessment of their current mood as a "shortcut"
to answering a much more challenging question.
It is easy and tempting to simply ask "How
do I feel about it?" rather than weigh the
pros and cons of a situation or a course of
action.
The mood heuristic pervades many aspects of
social, economic, and political life.
Kahneman focuses on its implications for electoral
politics.
A candidate whose appearance inspires feelings
of confidence, he observes, is likely to be
seen as more electable than an unprepossessing
candidate with a more impressive track record.
Aware of this fact, campaign strategists often
go one step further and attempt to tell the
public how a candidate's looks should make
them feel.
An example of this occurred in the runup to
the 2004 U.S. presidential election, when
the associates of then-president George W.
Bush tried such a strategy with respect to
Democratic challenger John Kerry.
"He looks French," said Commerce Secretary
Don Evans, evidently attempting to yoke Kerry's
physical appearance to anti-French sentiment
at the onset of the Iraq War.
This was the same era of U.S. political history
that produced the euphemism "freedom fries."
Little effort was expended to explain just
what Kerry's facial features and mannerisms
had to do with France.
Product marketing is another area in which
the mood heuristic is widely deployed.
Witness, for instance, the extensive holiday
advertising efforts undertaken by Coca-Cola,
Folgers, and other consumer brands.
In producing costly, high-profile advertisements
featuring Santa Claus and Christmas lights,
these companies are not merely hoping to boost
sales for the few weeks in December when such
ads are usually run.
Rather, they are cultivating positive emotions
around their products with the aim of promoting
brand loyalty year-round.
They are attempting to create what Kahneman
previously called a halo effect (Chapter 7),
in which a product's positive associations
(good times, holiday cheer) ripple outward
to influence impressions of more relevant
features, such as taste or effects on health.
Indeed, it's difficult to find a soft drink
advertisement that says anything of substance
about the actual product, and hard not to
find a soft drink advertisement that appeals
to the mood heuristic instead.
Part 2.
Chapter 10: The Law of Small Numbers
Because System 1 tends to reason causally
about events, it is easily fooled by small
samples reporting extreme results.
It is a basic statistical truth that small
samples are more likely to display extreme
outcomes: one is much more likely to see "all
heads, no tails" when flipping four coins
at once than when flipping eight.
System 1's inability to account for this fact
is what Kahneman and Tversky wryly termed
"the law of small numbers."
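The coin-flip claim is easy to check arithmetically; this quick sketch shows how much likelier the extreme "all heads" outcome is in the smaller sample:

```python
# Probability that every flip of a fair coin comes up heads:
p_four_coins = 0.5 ** 4    # 1/16  = 0.0625
p_eight_coins = 0.5 ** 8   # 1/256 = 0.00390625

# The smaller sample produces the extreme outcome 16 times as often.
print(p_four_coins / p_eight_coins)  # 16.0
```
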
Chapter 11: Anchors
Next, Kahneman introduces the anchoring effect,
in which people are found to be extremely
suggestible in making numerical estimates.
The discovery of such an effect is one of
Kahneman and Tversky's most important joint
contributions to the psychological literature.
Exposure to a number (even one known to be
randomly chosen) will influence a person's
estimate of the height of a redwood tree,
Gandhi's age at death, or the year George
Washington was inaugurated.
In each case, the respondent has some information:
redwoods are very tall; Gandhi was old, but
not hundreds of years old, when he was assassinated;
and George Washington could not have become
president before 1776.
Yet a respondent primed with a high number
("Was Gandhi more or less than 144 years old
when he died?") will still give a higher estimate
than one primed with a low number.
This is similar to the anchor-and-adjust heuristic,
in which an anchor is chosen early in the
reasoning process, and any additional information
is used to make slight adjustments.
Chapter 12: The Science of Availability
The availability heuristic is one well-studied
means by which System 1 estimates frequencies.
To decide how frequent or likely something
is, people often rely instead on how easy
(or difficult) it is to think of examples.
This heuristic is at work when, for instance,
spouses overestimate their own contributions
to household chores, so that the total of
the two estimates is greater than 100%.
Examples of chores one has done oneself are
easier to come up with than chores done by
one's partner.
The impression of cognitive availability can
itself be manipulated by asking for more or
fewer examples.
People asked to list 12 examples of their
own assertive behavior have a hard time filling
the list, and often come away with the impression
that they are not very assertive after all.
The "law of small numbers," introduced in
Chapter 10, is named as a riff on the law
of large numbers, an actual theorem from probability
theory.
The law of large numbers states that the more
often a random event is repeated, the more
consistent the average result will be.
If one bets on a single number, say 7, on
a roulette wheel, then one will either win
or lose with every individual spin.
Although there are 38 spaces on an American
roulette wheel, a win and a loss are the only
possible outcomes of a given spin from the
bettor's point of view.
Over a hundred or a thousand such spins, however,
the fraction of winning spins will gradually
approach 1/38, the actual probability of landing
on 7 on any single spin.
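The convergence described above can be sketched with a short simulation (the seed and spin counts are arbitrary choices for the illustration):

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

def win_fraction(spins):
    """Fraction of spins landing on a chosen number among 38 pockets."""
    wins = sum(1 for _ in range(spins) if random.randrange(38) == 7)
    return wins / spins

# With more spins, the observed fraction of wins settles toward the
# true single-number probability, 1/38 (about 0.0263).
for n in (100, 10_000, 100_000):
    print(n, win_fraction(n))
```
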
Interestingly, as Kahneman points out in Chapter
10, even trained researchers often choose
sample sizes based on judgment rather than
calculation, leaving their experiments underpowered.
This tendency is part of the root of the reliability
or replication crisis.
Both the availability heuristic (Chapter 12)
and the so-called anchor-and-adjust heuristic
(Chapter 11) fit into the pattern Kahneman
described in Chapter 9 ("Answering an Easier
Question").
In a typical anchor-and-adjust scenario, a
person begins with a difficult question that
invites, or even requires, a precise answer
("How much money should I save per month for
retirement?").
The question is made much easier if one has
an anchor to grasp onto, even if the anchor
is an arbitrary and artificial number.
A similar process is at play when availability
is substituted for frequency.
Figuring out the prevalence of bears in a
certain national forest, for example, might
require considerable research.
The question "Do I know of anyone who has
seen a bear in this forest?" is far easier
to answer.
The availability heuristic also has implications
for medicine, both epidemiologically and in
clinical practice.
In the United States, for instance, people
frequently overestimate the likelihood of
a spider bite being dangerous because the
names of two highly venomous spiders, the
black widow and the brown recluse, are familiar
to many Americans from childhood onward.
Species of harmless spider, if learned
at all, understandably make less of an impression
and are less "available" when one thinks of
a spider bite.
Likewise, many rare conditions are highly
"available" in the minds of the public because
such conditions are featured on such TV shows
as House, ER, and Grey's Anatomy.
In Chapter 12, Kahneman describes the opposite
effect: people display a casual attitude toward
disease prevention when the disease in question
is not highly salient.
People with no family history of heart disease
will underestimate the likelihood of developing
it themselves.
People whose friends do not get an annual
flu shot are less likely to decide to do so
themselves.
Chapter 13: Availability, Emotion, and Risk
Here, Kahneman fills out the previous discussion
of the availability heuristic.
Judgments of availability, he says, are skewed
by media coverage, which is "itself biased
toward novelty and poignancy."
People are much more likely to hear about
a fatal accident or a homicide in the news
than they are about a death attributable to
diabetes or asthma.
Consequently, people exaggerate the likelihood
of events they have come to fear while downplaying
the likelihood of events that get less media
attention.
It follows that when experts and policymakers
attempt to quantify risk, they may initiate
an availability cascade.
Biases come to be magnified in the public
imagination, making it harder for moderate
voices or conflicting information to be heard.
Chapter 14: Tom W's Specialty
Kahneman now introduces the representativeness
heuristic and the related concept of a base
rate.
The representativeness of an event or person
is the degree to which they seem typical ("representative")
of a category.
A preference for "neat and tidy" environments,
for example, is often seen as representative
of librarians.
The heuristic aspect lies in people's tendency
to trust representativeness over, or instead
of, pertinent statistical information.
Kahneman gives the example of an experiment
in which he and Tversky asked respondents
to guess the major of a graduate student named
Tom W. The brief synopsis of Tom's personality
made him seem representative of computer scientists,
and most respondents guessed Tom's field of
study accordingly.
In doing so, they ignored the base rate: the
small proportion of graduate students in computer
science as compared to other fields.
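The base rate's pull can be made concrete with Bayes' rule. The figures below are hypothetical (the chapter reports neither a base rate nor a likelihood), chosen only to show the mechanism:

```python
# Hypothetical numbers: suppose 3% of graduate students study computer
# science, and Tom's sketch is 4 times as likely to describe a CS
# student as a non-CS student.
base_rate = 0.03
likelihood_ratio = 4.0

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
prior_odds = base_rate / (1 - base_rate)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

# Even a strongly representative sketch leaves CS improbable (about 11%).
print(round(posterior, 3))
```
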
Chapter 15: Linda: Less Is More
The representativeness heuristic, however,
does not merely distort probabilistic estimates:
it can sometimes lead people to make completely
illogical guesses.
In the famous "Linda" experiment of the 1980s,
Kahneman and Tversky told subjects about a
young woman named Linda who "is thirty-one
years old, single, outspoken, and very bright.
She majored in philosophy.
As a student, she was deeply concerned with
issues of discrimination and social justice,
and also participated in antinuclear demonstrations."
After reading this short descriptive paragraph,
subjects were asked to determine which was
more likely: "Linda is a bank teller," or
"Linda is a bank teller and is active in the
feminist movement."
Overwhelmingly, but illogically, respondents
chose the latter, a tendency Kahneman calls
the conjunction fallacy.
The chapter concludes with a survey of other
studies demonstrating this intuitive but erroneous
"less-is-more" thinking.
In describing the political effects of the
availability heuristic, Kahneman cites the
work of two other scholars in the field, legal
scholar Cass Sunstein and psychologist Paul
Slovic.
Sunstein and Slovic have nearly opposite stances
on the role to be played by experts in assessing
and communicating risk.
Sunstein, popularly known for his book Nudge
(cowritten with Richard Thaler, 2008), argues
that experts should carry out their analyses
as objectively as possible.
He says they should always avoid succumbing
to pressure from an often uninformed and emotional
public.
Slovic, who specializes in the study of risk,
asserts that (to use Kahneman's phrase) "the
public has a richer conception of risks than
the experts do."
Further, Slovic argues that the fears experienced
by the public are themselves a form of suffering
and should be taken seriously, even for that
reason alone.
An exposition of Slovic's ideas can be found
in The Irrational Economist (2010) and The
Feeling of Risk (2013).
Kahneman, who has collaborated professionally
with both men, diplomatically avoids pronouncing
a "winner" in the debate.
The conjunction fallacy, illustrated in Chapter
15, deserves some further unpacking.
The reason so many people get the Linda problem
wrong lies in the tendency to conflate plausibility
with probability.
As Kahneman observes, this is an easy mistake
to make, but the two concepts are logically
distinct.
"Linda is a bank teller and is active in the
feminist movement" makes for a more plausible
story than "Linda is a bank teller," but the
probability of "bank teller" must be higher,
or at least no lower, than the probability of
"bank teller and feminist."
The combination of two criteria (in this case,
bank teller and feminist) is known as a logical
conjunction.
The difference can be easier to see if the
problem is posed in numerical terms.
Suppose there are 80 bank tellers in the city
where Linda lives.
Then some subset of those 80 people are also
feminists.
It may be the case that all 80 bank tellers
are feminists; it may be the case that none
of them are.
However, one thing is clearly impossible:
if there are 80 bank tellers, there cannot
be more than 80 feminist bank tellers.
The statement "90 out of 80 bank tellers in
the city identify as feminists" is readily
recognized as absurd: there cannot be more
feminist bank tellers than there are bank
tellers overall.
It follows that the probability of Linda being
a feminist bank teller must be no higher than
her probability of being a bank teller in
the first place.
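The set-subset logic above can also be checked mechanically. The sketch below uses a simulated population with made-up rates (not data from Kahneman's experiments) and confirms that the conjunction can never be the more probable event:

```python
import random

random.seed(0)

# Hypothetical population: for each person, record whether she is a
# bank teller and whether she is a feminist. The rates (10% and 40%)
# are invented purely for illustration.
population = [(random.random() < 0.10, random.random() < 0.40)
              for _ in range(10_000)]

p_teller = sum(1 for teller, _ in population if teller) / len(population)
p_teller_and_feminist = sum(1 for teller, feminist in population
                            if teller and feminist) / len(population)

# The conjunction picks out a subset of the bank tellers, so its
# probability can at most equal, never exceed, the single criterion.
assert p_teller_and_feminist <= p_teller
print(p_teller, p_teller_and_feminist)
```

However the invented rates are varied, the assertion always holds; that is the conjunction rule the Linda problem trips people on.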
Chapter 16: Causes Trump Statistics
Kahneman now draws a further distinction between
statistical base rates and causal base rates.
Statistical base-rate information is framed
in broad terms: "85% of the cabs in the city
are operated by the Green Cab Company."
Causal base-rate information is more specific
and seems, in its presentation, to suggest
a direct connection to the individual case
at hand: "Green cabs are involved in 85% of
accidents."
When both types of base-rate information are
present, the statistical sort is much more
likely to be thrown out or underweighted,
because System 1 has trouble fitting it into
a narrative.
More depressingly, in Kahneman's view at least,
people "quietly exempt themselves" from statistics
that fail to accord with their self-image.
Chapter 17: Regression to the Mean
The concept of a "jinx" is widespread in sports.
If an athlete has an outstandingly good first
day at a tournament, she is expected to perform
less well on day two.
The sportscasters covering the event will
advance all sorts of causal explanations for
the drop in performance: the athlete was nervous
because of the higher expectations, she was
exhausted from an unusually strenuous effort
on day one, and so forth.
What's really going on, Kahneman says, is
somewhat less sensational.
Almost any set of outcomes, from test scores
to inches of daily rainfall, will follow a
distribution in which extreme events are rare
and "average" events are common.
Thus, there is a strong statistical tendency
for any extreme event to be followed by a
less extreme one, a phenomenon called regression
to the mean.
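Regression to the mean can be demonstrated with a short simulation. Assuming, purely for illustration, that each day's result is a stable skill level plus independent luck, the top performers on day one score markedly closer to the overall average on day two:

```python
import random

random.seed(1)

def performance(skill):
    # One day's result: stable skill plus day-to-day luck.
    return skill + random.gauss(0, 10)

# 100,000 simulated athletes with skills centered on 50.
skills = [random.gauss(50, 5) for _ in range(100_000)]
day1 = [performance(s) for s in skills]
day2 = [performance(s) for s in skills]

# Select the top 1% of day-1 results and compare their day-2 average.
cutoff = sorted(day1)[-1000]
extremes = [(d1, d2) for d1, d2 in zip(day1, day2) if d1 >= cutoff]

avg_day1 = sum(d1 for d1, _ in extremes) / len(extremes)
avg_day2 = sum(d2 for _, d2 in extremes) / len(extremes)
print(f"day 1 average of top performers: {avg_day1:.1f}")
print(f"their day 2 average: {avg_day2:.1f}")
```

No jinx is involved: the day-2 scores fall back toward the mean simply because an extreme day-1 score was partly luck, and luck does not repeat.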
Chapter 18: Taming Intuitive Predictions
Understanding regression to the mean allows
one to identify, and correct, predictions
that do not take such regression into account.
An extremely tall child is statistically likely
to become a rather (but not extremely) tall
adult.
An extremely precocious kindergartener is
likely to become a high achiever in college,
but not to the same extreme level witnessed
in early childhood.
In each case, the statistically favorable
outcome lies between the one piece of extreme
information (height or intelligence at one
point in time) and the mean, to which some
regression is expected.
Extreme predictions may be exciting (they
capture the imagination, and getting them
right is very gratifying), but they are seldom
correct.
Where accuracy is important, it pays to temper
one's predictions to account for regression.
The "disheartening" results Kahneman cites
in Chapter 16 come from the work of social
psychologists Richard E. Nisbett and Eugene
Borgida.
Several of Nisbett's later studies and publications
follow up on a theme closely akin to Kahneman's
assertions that "causes trump statistics."
In The Person and the Situation (1991), cowritten
with Lee Ross, Nisbett argues for a view of
social behavior known as situationism.
From this perspective, behavior in any setting
is assumed to depend largely on the situation
itself, rather than on a person's inherent
disposition or personality.
Some support for situationism can be found
in the experiment mentioned in this chapter:
left alone with someone who is having a seizure,
most individuals would try to offer assistance
or at least call for medical help.
However, each of those same people will be
much less likely to take action if there are
several people who could offer aid.
The change in situation overrides any individual
disposition to offer help.
The connection back to Kahneman lies in the
fact that situational factors lend themselves
to being described in statistical terms: "only
20% of people faced with this problem solved
it correctly."
Dispositional factors, meanwhile, are naturally
viewed as more inherent to a person: they
tend to be understood as causal, not statistical,
information.
To assert that people overemphasize dispositional
traits and underemphasize situational ones
(as Nisbett does) is closely akin to saying
(as Kahneman proposes) that people would rather
see themselves as individuals than as statistics.
However, popular works by Nisbett seem to
challenge Kahneman's half-joking claim that
"teaching psychology is mostly a waste of
time."
In Mindware: Tools for Smart Thinking (2015),
Nisbett attempts a survey of cognitive psychology
similar in some respects to Kahneman's Thinking,
Fast and Slow.
The books differ considerably, however, not
just in the examples they cite, but in their
overall tone and orientation.
Nisbett explicitly presents principles from
psychology and philosophy as "tools" the reader
can learn to utilize.
Kahneman, much more cagily, positions Thinking,
Fast and Slow as a book to enrich the vocabulary
of "water cooler" gossip.
His modest hope, so he says, is to make the
criticism of inevitable bad decisions more
eloquent and insightful.
Part 3.
Chapter 19: The Illusion of Understanding
System 1, as previous chapters have suggested,
tends to reduce experiences to a coherent
narrative.
This tendency carries with it the risk that
the past will come to seem inevitable, that
the narrative will seem as though it could
not have played out any other way.
People often come to believe they knew all
along what would happen in an inherently unpredictable
situation, and they tacitly "revise the history
of [their] beliefs" to match what actually
ends up happening.
Thus, a person may go from believing (before
the fact) that President Nixon would not resign
to "having always known" that Nixon would
resign.
This is not, Kahneman says, mere dishonesty
to save face: people genuinely forget that
they ever held the ultimately falsified belief.
A vast body of popular literature, including
many business books, springs from the attempt
to find simple causes of success and failure
after the fact.
Chapter 20: The Illusion of Validity
Attempts to predict the future are often beset
by the illusion of validity, in which a demonstrably
useless tool or method is nonetheless assumed
to have some special value for forecasting
what will happen.
People will continue to defer to a particular
test, interview, or other metric even when
they know rationally that the test is "little
better than [a] random guess."
Closely related is the illusion of skill,
in which the technical sophistication of a
practice (say, financial analysis of stocks)
is assumed to mean that the practice will
be effective.
Stock traders, Kahneman observes, believe
deeply in the efficacy of their own skill
even when they fail to outperform (or even
significantly underperform) the market as
a whole.
It is not the experts' fault, he says, that
the world is difficult to predict, but it
is their responsibility to be honest about
the limits of their predictive powers.
The term illusion of skill is in some ways
a misnomer.
Often, the people who fall prey to such an
illusion are indeed highly skilled.
The stock traders Kahneman singles out in
Chapter 20, for instance, are probably quite
adept at a variety of tasks related to their
jobs, such as finding and interpreting the
major financial information about a publicly
traded firm.
The illusion is not that people are skilled
when they really aren't, but that their skills
are effective in scenarios where they in fact
make no difference.
For example, someone may be quite skilled
at memorizing facts about the Harry Potter
universe.
Such a person would likely be effective in
winning a themed trivia competition but ineffective
at summoning spirits or making broomsticks
fly.
The difference is that Harry Potter fans have
the opportunity to receive consistent and
legible feedback about the limited applicability
of their knowledge.
Nobody gets the experience of almost levitating
a broomstick.
The stock traders, on the other hand, are
exposed to a flood of feedback, making it
difficult to separate the effects of skill
from those of random chance.
There is, in many real-life scenarios, no
warning bell to signal when a person has crossed
into a domain where their skills will be ineffective.
The distinction applies equally to the illusion
of validity.
The interviewers who assess candidates' fitness
for officer training may be highly skilled
in many respects.
Those skills, such as listening carefully
or getting people to speak in a candid and
relaxed fashion, are not themselves illusory.
Instead, the illusion lies in an overestimate
of those skills' effects: in where the skills
can be applied, and in how much of a difference
they make.
Chapter 21: Intuitions vs. Formulas
Because of the various systematic biases in
human judgment, Kahneman proposes that algorithms
and formulas can often do a better job than
an expert's intuition.
However, he acknowledges that there is a widespread
"hostility to algorithms" in areas where the
outcome may have a moral significance, for
instance in health care.
After describing his own experiences in developing
a scored survey for assigning soldiers to
service branches, Kahneman invites the reader
to "do it yourself."
He offers some tips for developing a quantitative
scoring system to be used in a situation in
which intuitive judgment would be the norm.
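As a rough illustration of what such a do-it-yourself system might look like (the traits, scale, and candidate below are hypothetical placeholders, not Kahneman's own list), the idea is simply to rate a fixed set of independent dimensions and commit to the total:

```python
# A minimal sketch of a quantitative scoring system of the kind
# Kahneman recommends over purely intuitive judgment. The six traits
# and the 1-5 scale are illustrative assumptions.
TRAITS = ["reliability", "sociability", "technical skill",
          "communication", "punctuality", "motivation"]

def score_candidate(ratings):
    """Sum of per-trait ratings, each on a 1-5 scale."""
    assert set(ratings) == set(TRAITS), "rate every trait, nothing else"
    assert all(1 <= r <= 5 for r in ratings.values())
    return sum(ratings.values())

# A hypothetical candidate rated trait by trait.
candidate = dict(zip(TRAITS, [4, 3, 5, 4, 2, 5]))
print(score_candidate(candidate))  # 23
```

The discipline lies less in the formula than in agreeing, in advance, to let the total drive the decision rather than a global intuitive impression.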
Chapter 22: Expert Intuition: When Can We
Trust It?
Although he maintains that the public's faith
in expert judgment is sometimes misplaced,
Kahneman does not rule out the existence of
expert intuition in some specialized domains.
He tells of a research program undertaken
with a colleague, psychologist Gary Klein.
Klein is generally seen as an opponent of
the heuristics-and-biases view of decision
making.
In their research, Kahneman and Klein explored
the limits of a decision paradigm called recognition-primed
decision.
This model treats expert intuition as the
ability to recognize and act on patterns that
might not be apparent to a layperson or a
novice.
Such recognition, Kahneman suggests, is at
work when a chess grandmaster glances at a
board and can tell immediately who will win
the match, or, to use an example from Klein,
when a fire captain orders his crew to evacuate
just before a building starts to collapse.
However, such intuitions can only arise in
environments that are "sufficiently regular
to be predictable" and can only be developed
through long and consistent practice.
In fields in which those conditions are lacking,
expert intuition is likely to disappoint.
Kahneman's work with Klein is an example of
adversarial collaboration, a practice he briefly
and somewhat self-effacingly mentions in Chapter
22.
In fact, Kahneman champions this form of collaboration,
in which someone seeks out and works side
by side with the colleagues whom he or she
most strongly disagrees with.
Klein, as hinted in Chapter 22, is a proponent
of a somewhat different model of decision
making than the heuristics and biases studied
by Kahneman.
His preferred theoretical framework, called
naturalistic decision making or NDM, overlaps
in some respects with heuristics and biases
but places greater emphasis on the successes
of the decision-making process in the face
of adverse conditions.
Their joint paper, "Conditions for Intuitive
Expertise: A Failure to Disagree" (2009),
surveys both the commonalities and the differences
between the two psychologists' views of decision
making under uncertainty.
For Kahneman, adversarial collaboration stands
as an alternative to the traditional "critique-reply-rejoinder"
process in which social scientists frequently
voice their disagreements.
This process takes the form of letters to
the editor or articles in an academic journal,
with one scientist critiquing another's work,
the original author replying, and the critic
then issuing a rejoinder to the reply.
Kahneman's skepticism regarding the merits
of this approach comes, at least in part,
from the often sarcastic and sometimes outright
hostile way in which critique-reply-rejoinder
communication is carried out.
Coauthoring a paper naturally takes more time
(Kahneman and Klein's joint effort took several
years), but the result is all but guaranteed
to be more substantial and less combative
than a series of one-sided polemics.
A more thorough explanation of adversarial
collaboration and its merits can be found
in Kahneman's short autobiography at the Nobel
Foundation website.
Chapter 23: The Outside View
In this chapter, Kahneman tells of his experiences
as part of a team tasked with drafting a textbook
on judgment and decision making.
The team's internal estimates of the time
to complete the project averaged about three
years, even though other textbook-writing
projects of similar length and complexity
had tended to take seven or more years.
In fact, the book took nine years to complete:
well within statistical expectations but far
longer than the team members themselves had
predicted.
Kahneman takes this anecdote as an illustration
of the need to consider the outside view (the
dispassionate, neutral, statistically informed
view) when planning a project.
The inside view, which is overreliant on the
specifics of the situation, is appealing but
almost certainly inaccurate.
Those who unrealistically favor the more optimistic
inside view are committing what Kahneman calls
the planning fallacy, allowing the detailed
nature of their own forecasts to trump the
relevant statistical information.
Chapter 24: The Engine of Capitalism
An "optimistic bias," Kahneman says, is not
an altogether bad trait to possess.
Optimists tend to live longer, be happier,
and be more proactive in solving problems
within their control.
At the same time, optimism is indeed a type
of cognitive bias, and it can be very costly
at times.
One consequence of optimism in business is
the phenomenon of competition neglect, in
which entrepreneurs assume that their decisions,
irrespective of their competitors' actions,
are the ultimate determiner of success or
failure.
Other types of optimism-induced overconfidence
are evident in finance, in medicine, and in
day-to-day life.
As a means of (partly) counteracting optimistic
bias, Kahneman passes along a suggestion from
Gary Klein, who advocates conducting a premortem
of any major plan before putting it into action.
The goal of such an exercise is to imagine
how the plan might fail despite the best intentions
of all involved.
Gary Klein, here credited with the idea of
the premortem, is mentioned in Chapter 22
as a main scholarly "adversary" of Kahneman's.
The two collaborated on a joint work, described
in Chapter 22, as a means not so much to resolve
as to clarify their differences.
Klein's major popular work is Sources of Power:
How People Make Decisions (1998).
As might be expected from a leading researcher
in the field of naturalistic decision making
or NDM, Sources of Power emphasizes the successes
of the mind's intuitive decision-making process,
whereas Thinking, Fast and Slow charts the
often-amusing and sometimes disastrous failures.
In part, the researchers' different emphases
can be chalked up to the different populations
they have tended to study.
Kahneman, as he acknowledges in Thinking,
Fast and Slow, has focused on cases where
there is often too much "noise" to accurately
size up a situation, such as the trading of
individual securities on the stock market.
Klein's favored subjects are professionals
in areas such as firefighting, where expert
judgment is demonstrably rewarded and errors
can often be identified in real time.
In any case, Klein's book makes for an interesting
compare and contrast when paired with that
of his "adversarial collaborator."
The concept of the premortem has its roots
in the more familiar term postmortem (Latin
for "after death").
Used as a noun, postmortem is synonymous with
"autopsy": a surgical examination of a corpse
to try to determine how the person died.
The term is often figuratively applied to
such metaphorical "deaths" as the failure
of a project or the breakdown of a mechanical
system.
In each case, the principal goal is typically
to identify the "cause of death."
What component failed in the system?
Where did the project plan jump the tracks?
The goal of the premortem, as Klein explains
in a Harvard Business Review (2007) article
on the subject, is to perform this kind of
thinking in advance, achieving a kind of "prospective
hindsight" about the problems within, and
outside of, the planners' control.
Part 4.
Chapter 25: Bernoulli's Errors
In Part 4, Kahneman broaches the subject of
behavioral economics, an area he credits Tversky
with introducing him to.
He begins by describing two "species" from
behavioral economics: Econs, who are perfectly
rational and consistent; and Humans, who have
all the biases and inconsistencies of real
human beings.
Behavioral economics, the subject of the next
few chapters, is concerned with refining economic
models to address ways in which Humans differ
from Econs.
First, however, Kahneman lays the groundwork
of decision making in traditional economics.
In expected utility theory, a person's decisions
are assumed to maximize utility, or the benefit
derived from the choices available.
This includes gambles: if a person prefers
apples to bananas, then theoretically, that
person will also "prefer a 10% chance to win
an apple to a 10% chance to win a banana."
Some principles of this theory can be traced
back to Swiss mathematician Daniel Bernoulli
(1700-82), who observed that people react
to relative, not absolute, changes in their
wealth.
This is just one way in which expected utility
(the "psychological value or desirability"
of a good) does not perfectly align with monetary
value.
Despite its brilliance, Kahneman says, Bernoulli's
utility theory overlooked a key aspect of
subjective value: its dependence on a reference
point.
It is, he asserts, the change in a status
that matters, not the status itself.
In particular, people will place greater weight
on losses than on gains.
Faced with a sure gain or the chance of a
greater gain, many will prefer the sure thing,
but faced with a sure loss or the chance of
a greater loss, most will prefer the gamble.
Chapter 26: Prospect Theory
In their 1970s research, Kahneman and Amos
Tversky attempted to account for the gaps
between expected utility theory and the psychology
of real-life decision making.
They concluded that people in general experience
loss aversion, meaning people would rather
avoid losses than seek gains.
This aversion can even be quantified as a
ratio: typically, the two researchers found,
people were twice as averse to loss as they
were attracted to gain: a 50% chance to win
$200 balances out a 50% chance to lose $100.
At the same time, sensitivity to loss diminishes
as the losses get larger, so that losing $200
is not "twice as bad," psychologically speaking,
as losing $100.
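These two regularities are often expressed as a value function. The sketch below uses parameter estimates commonly cited from Tversky and Kahneman's later work (an exponent of 0.88 and a loss-aversion coefficient of 2.25); the exact numbers are illustrative, not figures from this book:

```python
# Prospect-theory value function with commonly cited illustrative
# parameters: diminishing sensitivity (exponent < 1) and loss
# aversion (losses scaled up by LAMBDA).
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def value(x):
    """Subjective value of a gain or loss x relative to the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

# Loss aversion: losing $100 hurts roughly twice as much as winning
# $100 pleases.
assert abs(value(-100)) > 2 * value(100)

# Diminishing sensitivity: losing $200 is not "twice as bad" as
# losing $100.
assert abs(value(-200)) < 2 * abs(value(-100))
print(value(100), value(-100), value(-200))
```

The S-shaped curve this function traces, steep for losses and flattening at the extremes, is the mathematical core of prospect theory.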
These insights combine to create a characteristic
decision-making pattern: when faced with a
win-or-lose gamble, most people will be very
risk averse, but when faced with only losing
choices, people will risk a greater loss to
have a chance of not losing at all.
Though he admits it is not a perfect characterization
of economic decision making, Kahneman describes
prospect theory as a marked improvement over
the earlier expected utility model.
Prospect theory, though it includes many useful
refinements of utility theory, is not immune
from criticism.
Kahneman himself points out some phenomena
that are evident to a casual observer, but
for which prospect theory, at least in its
early form, offered no explanation.
Such "blind spots" include disappointment,
regret, guilt, and the anticipation of those
emotionsóall of which play a clear role in
economic decision making.
A great deal of follow-up research has gone
into identifying the many emotional reactions
that demonstrably do shape decisions but are
not addressed by prospect theory.
The emerging field of neuroeconomics promises
to employ neuroscience in the same way "traditional"
behavioral economics has utilized psychology,
developing a still more sophisticated model
of human decision making under uncertainty.
A related issue for prospect theory might
be described as "domain creep."
Initially, Kahneman and Tversky intended prospect
theory as an account of individual economic
decision making under a fairly restricted
set of circumstances.
Like the rational-agent theory to which it
responded, prospect theory was never meant
to assert a uniform set of rules that could
be applied to all human judgments and decisions.
It would be misguided, even absurd, to apply
the tenets of rational-agent decision theory
to human beings in general; the range of human
behaviors is simply too broad to support the
assertion that people are logical and single-mindedly
selfish 100% of the time.
Likewise, it would be a mistake to treat prospect
theory as a skeleton key to the human psyche
when its authors originally proposed it within
the narrow scope of wins, losses, and gambles.
Still, the appeal of prospect theory is such
that it has been used to explain all sorts
of not-quite-economic phenomena, including
the resolutions of political crises and the
results of elections.
These are, to use a pharmaceutical analogy,
off-label uses of the theory, with little
grounding in Kahneman and Tversky's initial
experiments or those of their successors.
The popularity of Kahneman's ideas (not only
prospect theory, but also the heuristics-and-biases
model in its broadest sense) seems to ensure
that they will be overgeneralized well beyond
their true explanatory value.
Consequently, Kahneman continues to fight
an uphill battle to correct misconceptions
about when and how his theories apply to real-world
phenomena.
Chapter 27: The Endowment Effect
Further complicating the picture is the endowment
effect: people tend to overvalue what they
have, once they have it.
A person who is equally happy to receive either
a raise or some added vacation time will,
once they receive the raise, often be unwilling
to trade it back for the vacation time.
This effect occurs at all scales, from coffee
mugs and chocolate bars to rare antiques and
vintage wines.
The possessor of a good, provided it is "held
for use" and not merely a commodity or currency,
will overrate the value of the good when considering
a sale or exchange.
Kahneman cites several experiments in which
this effect is established and investigated.
Chapter 28: Bad Events
Yet another twist comes from what Kahneman
calls negativity dominance: the tendency to
see the one angry face in a crowd of smiles,
but not vice versa.
There is, Kahneman suggests, an evolutionary
reason to give threats greater priority than
opportunities, since an opportunity means
nothing if one does not survive to enjoy it.
Loss aversion, as discussed in Chapter 26,
is one instance of negativity dominance; another
is the aversion to falling short of a goal.
Hence, Kahneman argues, the better performance
of golfers when putting for par than when
putting for a birdie.
Negativity dominance has frustrating consequences
for negotiations, whether they be economic
or political: both parties feel that what
they are giving up is more valuable than what
they are getting in return.
Chapter 29: The Fourfold Pattern
A final piece of the prospect theory puzzle
comes in the form of two complementary effects:
the possibility effect and the certainty effect.
The former alludes to the tendency to overweight
the mere possibility of an unlikely event,
as seen in lottery ticket buyers around the
world.
The latter alludes to the premium paid for
the last few percentage points between an
almost-sure thing and a true certainty.
Both effects pose a further challenge to expected
utility theory, as Kahneman proceeds to show
by recounting a famous economic puzzle posed
by Maurice Allais (the "Allais paradox").
Kahneman now returns to the basics of prospect
theory (see also Chapter 26) and describes
the "fourfold pattern" of decision behavior
in the face of gains and losses.
When people have a high probability of a gain,
they fear disappointment and are generally
willing to accept a smaller, sure-thing payment
rather than gamble, but when they face a high
probability of a loss, they are typically
willing to take their chances.
The pattern is reversed for low-probability
events: a small chance of a large gain is
preferred to a sure-thing settlement, while
a smaller sure loss is preferred to a small
chance of a large loss.
Together, these four effects illustrate the
interaction of loss aversion and diminishing
sensitivity (both seen in Chapter 26) with
the certainty and possibility effects.
The chapter closes with some illustrations
drawn from the world of litigation.
To understand the fourfold pattern and its
relationship to prospect theory, it may help
to review an example in more detail.
Four different people (Alice, Bob, Cathy,
and Dylan) are each offered a gamble to consider:
Alice is asked to choose between a 95% chance
of winning $5,000 and a guaranteed payment
of $4,500.
Bob must decide between a 95% chance of losing
$5,000 and a guaranteed loss of $4,500.
Cathy faces a choice between a 5% chance of
winning $5,000 and a guaranteed payment of
$500.
Dylan's two options are a 5% chance to lose
$5,000 or a guaranteed loss of $500.
Here is how prospect theory would predict
each of the four subjects to respond, along
with a brief explanation of the psychological
principles at work:
Alice will likely accept the $4,500 sure-thing
payment, even though the expected value of
the gamble is higher ($4,750 to be exact).
The "lost" expected value can be chalked up
to the certainty effect: Alice would much
rather be sure of winning than "almost sure,"
and she is willing to "pay" $250 in expected
winnings to eliminate the uncertainty.
Bob will likely reject the sure $4,500 loss,
again even though the expected value of the
gamble is a worse loss ($4,750).
Loss aversion, combined with diminishing sensitivity
to loss, accounts for Bob's likely tendency
to overweight the slim chance of losing nothing
in the gamble.
Cathy will likely reject the $500 sure-thing
payment and prefer the gamble, even though
the gamble has an expected value of only $250
($5,000 × 5%).
If she indeed chooses the gamble, she is illustrating
the possibility effect by giving disproportionately
large weight to a small chance of winning.
This is the scenario Kahneman invokes to explain
"why lotteries are popular."
Dylan will likely accept the $500 sure-thing
payment, even though the gamble brings an
expected loss of only $250.
The possibility effect is at play here as
well, making the slight possibility of losing
$5,000 loom larger than it would to an Econ.
Kahneman would liken Dylan's situation to
a person buying insurance, paying a premium
up front not to have to worry about "an unlikely
disaster."
Together, as the above four gambles show,
loss aversion and diminishing sensitivity
(Chapter 26), plus the possibility and certainty
effects (Chapter 29), account for substantial
differences between the Humans of prospect
theory and the Econs of rational-agent theory.
An Econ, or a sufficiently disciplined and
well-capitalized trader, would likely make
the exact opposite choice in each of the four
scenarios, trusting the expected value to
reward such decisions in the long haul.
The Human/Econ contrast seen here thus helps
to explain how large classes of economic activity
(the casino business and the insurance market,
for instance) arise seemingly in spite of
purely rational principles.
As such, the fourfold pattern illustrates
the benefits of extending the rational-agent
model to include even a few simply stated
behavioral principles.
Chapter 30: Rare Events
In general, Kahneman says, people tend to
"overestimate the probabilities of unlikely
events" and "overweight unlikely events in
their decisions."
Several psychological mechanisms contribute
to this tendency, including the availability
heuristic and the tendency to prefer cognitive
ease to cognitive strain.
The broadest explanation, however, is that
the vividness of rare events tends to give
them a disproportionate share in decision
making.
The mind's focus on vivid outcomes can lead
to denominator neglect: in the phrase "1 out
of 100,000 children will be disabled as a
result of the vaccine," the emphasis tends
to fall on the "1" rather than the "100,000."
The more concretely the probability is represented
(e.g., "1 child in 100,000" rather than "0.001%
of children"), the more pronounced such overemphasis
will be.
"When it comes to rare probabilities," Kahneman
soberly concludes, "our mind is not designed
to get things quite right."
Chapter 31: Risk Policies
The discussion now turns to ways in which
risk can be approached more systematically.
Narrow framing of a problem (approaching it
as a set of unrelated one-off choices) can
lead to suboptimal outcomes in many cases.
Broad framing is required to see how choices
interact, to get a "big picture" that transcends
individual losses and gains.
To illustrate the distinction, Kahneman asks
why someone might reject a single, 50/50,
"win $200 or lose $100" bet but accept the
option of making 100 such bets.
If a broad frame is adopted, he says, it is
obvious that the 100 bets are massively favorable
to the gambler.
Yet if each gamble is viewed as a distinct,
isolated event, loss aversion kicks in: the
thought of losing $100 is often more painful
than the thought of winning $200.
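The arithmetic behind the broad frame is easy to verify: each bet is worth +$50 on average, and over 100 independent bets the chance of ending with any net loss at all is tiny. The exact figure can be computed from the binomial distribution:

```python
from math import comb

# Each 50/50 bet: win $200 or lose $100.
ev_single = 0.5 * 200 + 0.5 * (-100)  # +$50 per bet on average
ev_total = 100 * ev_single            # +$5,000 expected over 100 bets

# With w wins out of 100, the net result is 200*w - 100*(100 - w),
# which is negative only when w <= 33.
p_net_loss = sum(comb(100, w) for w in range(34)) / 2 ** 100

print(f"expected profit over 100 bets: ${ev_total:,.0f}")
print(f"probability of a net loss: {p_net_loss:.5f}")
```

Loss aversion makes each bet feel unattractive in isolation, yet the aggregated gamble is overwhelmingly favorable, which is exactly the case for adopting the broad frame.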
In a rare moment of direct advice to the reader,
Kahneman points out that life itself contains
many "small favorable gambles."
Accepting them as a matter of course, he adds,
will lead to a better result than rejecting
each one individually out of loss aversion.
This is an example of a risk policy: a commitment
to handle a particular risk the same way every
time, in order to come out ahead in the long
run.
Chapter 32: Keeping Score
Another quirk of human reasoning is mental
accounting: the tendency to reckon up money
and other resources in separate "accounts"
rather than as a lump sum.
Such mental accounts are, Kahneman observes,
"a form of narrow framing," but they help
in making sense of the world.
One adverse consequence of mental accounting,
however, is that it opens the door to the
sunk-cost fallacy.
People who spend money for a specific purpose
(to see a ball game, for instance) will often
"throw good money after bad" if an added expense
arises in connection with the event.
Stockholders will sell winners rather than
losers so as to close out their position as
a gain rather than a loss.
Regret and the anticipation of regret, along
with feelings of moral responsibility, further
complicate the effort to "keep score" of finite
resources.
In Chapter 32, Kahneman introduces the sunk-cost
fallacy, a potentially confusing consequence
of humankind's mental-accounting tendencies.
The basic idea is that a nonrefundable expense
is a sunk cost: if there is no way to recover
it, then the cost should have no effect on
subsequent decisions, at least from the viewpoint
of economic rationality.
The fallacy (the failure of logical reasoning)
lies in treating sunk costs as an investment
to be recouped or salvaged, not merely as
a cost to be accepted.
The classic illustrations, including those
presented by Kahneman, involve a monetary
cost, but sunk costs and their follow-on expenses
can also take the form of effort or inconvenience.
If one pays for an all-you-can-eat buffet
only to find the food is no good, the price
of the meal is a sunk cost, and there is nothing
to be gained by stuffing oneself to "get one's
money's worth."
A person who stays in an unfulfilling relationship
on the grounds that they have "invested" time
in the relationship is, likewise, succumbing
to sunk-cost thinking.
Business, psychology, and general-interest
periodicals have published numerous articles
on sunk-cost phenomena, and many of them, such
as journalist Jamie Ducharme's Time article
"The Sunk Cost Fallacy Is Ruining Your Decisions.
Here's How" (2018), have a self-help tinge.
"If you've ever let unworn clothes clutter
your closet just because they were expensive,"
explains Ducharme, "or followed through on
plans you were dreading because you already
bought tickets, you're familiar with the sunk
cost fallacy."
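The economically rational rule can be sketched as code. The function and numbers below are invented for illustration (they are not from the book): the decision compares only future benefits and costs, and the sunk amount is accepted as a parameter purely to show it plays no role.

```python
def should_continue(future_benefit, future_cost, sunk_cost=0.0):
    """Economically rational rule: compare only what lies ahead.

    sunk_cost is accepted only to make the point explicit: it does
    not appear in the comparison, because the money is gone either way.
    """
    return future_benefit > future_cost

# The buffet example: the price already paid is sunk. If eating more
# food one dislikes has negative value, the rational move is to stop.
print(should_continue(future_benefit=-5, future_cost=0, sunk_cost=20))
```

The point of the sketch is the signature: a decision procedure that is rational in Kahneman's sense simply has no use for the `sunk_cost` argument.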
A related phenomenon, though with a probabilistic
twist, is the gambler's fallacy: the belief
that one is "due for a win" after sustaining
or witnessing many losses.
The opposite of the sunk-cost fallacy (the
tendency to consider each new expenditure
on its own merits) has also been given a name:
the bygones principle, as in "let bygones
be bygones."
To complicate matters further, it is not always
logically incorrect to keep past costs in
mind when considering new ones.
For this reason, researcher Christopher Olivola
(quoted in Ducharme's article) prefers the
term "sunk cost effect."
"That effect becomes a fallacy," Olivola argues,
"if it's pushing you to do things that are
making you unhappy or worse off."
In some situations, social factors may make
it reasonable to honor sunk costs even though
there is no monetary incentive to do so.
Ryan Doody, whose research involves the intersection
of philosophy and economic decision making,
argues as much in his 2017 paper "The Sunk
Cost 'Fallacy' Is Not a Fallacy."
Despite the provocative title, Doody's main
contention is a rather mild one: sometimes,
it is worthwhile to honor a sunk cost simply
so that one can later offer a flattering account
of one's actions.
In many situations, there is value, even economic
value, in being known for perseverance rather
than as a quitter.
Chapter 33: Reversals
A preference reversal arises when people make
one choice when presented simultaneously with
a pair of options ("B over A"), but another
choice if the two options are considered in
isolation ("A over B").
Kahneman cites such reversals as another feature
of human economic reasoning not adequately
explained by the rational-agent model.
The solution to the apparent paradox, he says,
is quite simple: people's assessments of value
are based on the context in which the question
is asked.
A six-year-old boy who is 5 feet tall is "tall"
relative to other boys his age, and a 16-year-old
boy who is 5 feet 1 inch tall is "short"
for his age, but in a direct comparison it
is obvious that the latter boy is taller.
The joint evaluation of the two boys produces
a different type of comparison than the single
evaluation of each boy relative to his age
group.
In economic decisions, too, this kind of context-based
reversal can be observed: a fine that is high
by one agency's standards may be a pittance
by the standards of another agency.
Chapter 34: Frames and Reality
Kahneman explores the concept of framing effects,
in which the mere wording of a decision problem
substantially changes people's preferences.
He reports that people are much more likely
to recommend a procedure with a 90% survival
rate than one with 10% mortality.
They will endorse a program that "saves 200
lives out of 600" but reject one that results
in 400 deaths.
These effects are important, Kahneman says,
because often there is no underlying preference:
the frame itself determines what moral intuitions
people bring to bear.
"An important choice," he remarks, "is controlled
by an utterly inconsequential feature of the
situation."
These two chapters supply some of Kahneman's
heaviest ammunition against rational-agent
theory, the view that asserts an underlying
rationality and consistency in human behavior.
Though Kahneman does not deny the effectiveness
of this view as an approximation (e.g., in
economics), he finds too many departures from
rationality in day-to-day decision making
to endorse the rational-agent model.
Importantly, this is not the same as saying
that humans are irrational, or that their
decision-making rules are totally divorced
from logic.
Indeed, Kahneman is careful not to use the
word irrational, which he views as an excessive
and potentially misleading descriptor of the
biases in human judgment.
Still, there is something not strictly rational
about a system of decision making that can
be swayed simply by changing the wording of
a question, or by considering options in isolation
rather than side by side.
The preference reversals described in Chapter
33 can be found well outside the laboratory.
In marketing research, a similar phenomenon
is well known as the decoy effect, wherein
a high price comes to seem lower if a "decoy"
price is provided for comparison.
The classic example is movie theater popcorn:
if a small serving of popcorn sells for $4
and a large serving goes for $8, people will
generally choose the smaller serving.
Introducing a medium-sized serving for $7
drastically changes consumers' preferences:
when the large popcorn is framed as a $1 "upgrade"
over the medium, the extra expenditure suddenly
seems worthwhile.
From the viewpoint of joint and single evaluations,
this may seem like an unusual case, since
the small and large servings are being evaluated
jointly in both cases.
The peculiarities of the decoy effect can
be traced to several principles already presented
in Thinking, Fast and Slow.
Consistent with the laziness of System 2,
people faced with three options will not perform
a separate joint evaluation for each pair
of options.
Rather, they will tend to anchor on the medium-sized
serving and compare it with the adjacent options.
From there, they are likely to conclude that
going from medium to large gives them the
most "bang for their buck."
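The reframing in the popcorn example is just arithmetic. A minimal sketch, using the hypothetical prices given above:

```python
# Hypothetical popcorn prices from the example above.
prices = {"small": 4, "medium": 7, "large": 8}

# Joint evaluation of small vs. large: the upgrade costs $4.
print(prices["large"] - prices["small"])   # 4

# Anchored on the medium decoy, the same large serving is
# reframed as a $1 upgrade, which suddenly seems cheap.
print(prices["large"] - prices["medium"])  # 1
```

Nothing about the large serving has changed; only the comparison point has, which is exactly the anchoring behavior described above.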
Chapter 35: Two Selves
This chapter introduces the two characters
who will star in Part 5: the experiencing
self and the remembering self.
The experiencing self is present in the moment,
undergoing pleasure or pain.
The remembering self recalls experiences after
the fact and makes decisions based on those
memories.
Memories themselves are prone to persistent
biases, such as an overemphasis on the best
(or worst) moments of an experience and, more
troublingly, a neglect of duration.
"We want pain to be brief and pleasure to
last," Kahneman observes, but duration neglect
ensures the remembering self will not make
choices accordingly.
Chapter 36: Life as a Story
Kahneman observes that stories are often defined
by their endings.
This is as true, he maintains, of operas as
it is of a person's lifetime.
Citing research on perceived quality of life,
Kahneman shows how "peaks and ends matter
but duration does not."
That is, a person who lives 60 extremely happy
years and then dies suddenly is consistently
judged to have had a better life than a person
who lives a further five "slightly happy"
years.
When planning a vacation, likewise, the remembering
self is in the driver's seat: many people
would not bother to make a trip from which
they could not bring back happy memories.
Chapter 37: Experienced Well-Being
Kahneman surveys some standard approaches
to measuring happiness, methods he regards
as overreliant on the remembering self rather
than the experiencing self.
He presents his own efforts to measure an
individual's U-index, a term he and his colleagues
used to denote the percentage of time spent
in an unpleasant state.
Such an index, he suggests, can also be applied
in aggregate as a loose measure of a population's
well-being.
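The U-index lends itself to a simple computation. The sketch below captures the idea (the episode data and the binary "unpleasant" rating are invented for illustration; this is not Kahneman's actual survey instrument):

```python
def u_index(episodes):
    """Fraction of total time spent in episodes rated unpleasant.

    episodes: list of (duration_in_minutes, unpleasant: bool) pairs.
    """
    total = sum(duration for duration, _ in episodes)
    unpleasant = sum(duration for duration, bad in episodes if bad)
    return unpleasant / total

# A made-up day, broken into episodes.
day = [(480, False),  # sleep
       (60, True),    # commute, rated unpleasant
       (480, False),  # work, on balance not unpleasant
       (420, False)]  # evening
print(u_index(day))   # about 0.042, i.e. ~4% of the day
```

Averaging such per-person indices over a sample gives the aggregate population measure Kahneman suggests.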
Things that profoundly affect well-being in
the moment, Kahneman reports, may have a smaller
or even opposite effect on overall life satisfaction, and
vice versa.
Chapter 38: Thinking About Life
Finally, Kahneman digs deeper into the issues
inherent in any attempt to measure life satisfaction.
He reviews some experiments in which life
satisfaction measures are shifted considerably
by trivial occurrences (e.g., finding a dime)
as well as by major events (e.g., a recent
marriage).
Collectively, Kahneman describes these results
as evidence of the focusing illusion, in which
people give exaggerated importance to whatever
they are thinking about at the moment.
In considering the purchase of a new car,
for example, people routinely overestimate
how big a role the car will play in their
overall happiness.
This illusion compromises predictions about
what will bring future happiness.
These chapters lay out a partitioning of the
self that is different from, but complementary
to, the System 1/System 2 dichotomy used in
earlier parts of the book.
Both the experiencing self and the remembering
self are prone to the kinds of errors Kahneman
associates with System 1 (the "fast" system).
In fact, duration neglect and the peak-end
rule (both cited in Chapter 35) contribute
to the perpetuation of a fundamental System
1 pattern.
Memory, like the mind in general, prefers
to see the world in terms of averages, rather
than in terms of sums.
The "overall unpleasantness" of an experience,
if such a thing can be said to exist, would
surely depend to some extent on the episode's
duration.
However, System 1 does not think in such terms.
Instead it rates a tense and awkward 15-minute
meeting as "more unpleasant" than a mildly
boring two-hour seminar.
Consequently, the remembering self takes much
more care to avoid short, highly unpleasant
experiences than to avoid long, mildly unpleasant
ones.
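The contrast between the two selves can be made numerical. Assuming made-up per-minute discomfort ratings for the meeting and seminar above, a minimal sketch contrasts the peak-end rule (roughly, the average of the worst moment and the final moment) with a duration-weighted total:

```python
def remembered_unpleasantness(ratings):
    """Peak-end rule: memory keeps roughly the average of the worst
    moment and the final moment, ignoring duration entirely."""
    return (max(ratings) + ratings[-1]) / 2

def experienced_unpleasantness(ratings):
    """Duration-weighted total: what the experiencing self endured."""
    return sum(ratings)

# Per-minute discomfort ratings on a 0-10 scale; illustrative numbers.
tense_meeting = [8] * 15    # 15 awful minutes
boring_seminar = [2] * 120  # 120 mildly dull minutes

print(remembered_unpleasantness(tense_meeting))    # 8.0
print(remembered_unpleasantness(boring_seminar))   # 2.0
print(experienced_unpleasantness(tense_meeting))   # 120
print(experienced_unpleasantness(boring_seminar))  # 240
```

The remembering self rates the meeting far worse (8.0 vs. 2.0), even though the experiencing self absorbed twice as much total unpleasantness in the seminar (240 vs. 120): averages over sums, exactly the System 1 pattern described above.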
Thrill-seeking behavior arises from the same
cognitive quirks: under duration neglect,
a half-hour of euphoria counts for more than
a moderately pleasant weekend at the beach.
Kahneman's discussions of life satisfaction
and the challenge of measuring it (Chapters
37–38) are rooted in the fundamental heuristics
and biases presented throughout Thinking,
Fast and Slow.
One example is the availability heuristic,
which Kahneman touches on (though not by name)
in Chapter 38 as a source of difficulty in
coming up with a "score" for one's happiness
with life.
First introduced in Chapter 12, the availability
heuristic is the tendency to substitute availability
(the ease with which one comes up with examples)
for frequency.
A person using the availability heuristic
to think about life satisfaction will resort
to "available" images and events from recent
experience rather than making a systematic
walkthrough of job, career, relationships,
and so forth.
Also relevant here is the representativeness
heuristic, in which the frequency of an event
is judged by how typical or "representative"
the event is.
Duration neglect and the peak-end rule are,
in a sense, a pattern of representativeness-based
thinking in the specific domain of memory.
The peak-end rule ensures that the highlights
(or lowlights) of an experience come to represent
the whole thing, and duration neglect means
that a preponderance of equally "representative"
moments are simply ignored.
That's it for today, guys. See you all in another
book.
