KENT WALKER: So good
afternoon, everybody.
Welcome to Authors at Google.
We have Professor Josh Greene
from Harvard with us today,
the author of "Moral Tribes."
I am hugely excited to have
Professor Greene with us.
He has been somebody whose work
I've been following for years.
He has been doing
a lot of thinking
about the genetic and
biological and moral bases
of a lot of our approaches
to complicated philosophical
questions.
This is interesting
in the abstract,
but it's also interesting with
regard to a lot of the things
we're working on at
Google, whether that be
artificial intelligence
or driverless cars,
or trying to figure
out how to get people
comfortable with new
technologies more generally.
His book-- we have copies
in the back for people
who are interested
in buying a copy,
and Professor Greene has agreed
to stay around and sign them
if you'd be interested--
goes through a whole variety
of different trolley
car problems,
and I'm sure he'll
talk more about that,
and other ways of testing
our own moral intuitions
and trying to figure out what feels rational and rationally based, and what are some of the other roots that lead us to the moral conclusions we draw-- conclusions we think of as common morality, but that may in fact be in tension with the way we approach other kinds of rational questions in our lives.
So with that, let me turn
it over to Professor Greene.
He's probably going to
talk for about half an hour
and then leave the rest of
the time open for questions,
so I'd encourage you to
be thinking of questions
and really turn this
into a conversation.
[APPLAUSE]
JOSHUA GREENE: Thanks.
Well, thank you so
much for having me.
Really, it's a great honor to
be here, and I'm really excited.
I'm glad we're having a
Q&A because I'm really
excited to hear your thoughts.
Often, when I give a talk--
other people do the same thing--
you try to find a connection
with the audience.
But somehow, I feel like saying,
"I use your search engine
all the time"
doesn't really cut it
in terms of connecting
with this group.
But I do think that
there is a higher level
connection between
what I'm up to
and what Google is
about more generally.
So you're trying to organize
the world's information,
and that is largely a
technical enterprise,
but there are important
social and moral questions
that go along with that--
questions about oughts
and questions about whys.
And this is really where
my own thinking comes in,
which is to say I'm
trying to organize
our thinking about
social problems
and about moral
problems in a way that
can be accessible to all, in a
way that can be made universal.
And that's a really
challenging thing
because we have deeply felt disagreements
about what's right
or what's wrong.
And so the question is,
what kind of common ground
can we find if we're
going to resolve
the moral questions
that divide us
within nations and
across nations?
So to begin with this problem
of moral disagreement--
and we'll talk a little
bit later, perhaps,
if we have time, about some of the issues that specifically arise here.
So for example, this
week, the ruling
in Europe about the
right to be forgotten
versus the greater
good of free speech
and freedom of
information, or the issues,
as Kent said, arising
from how to program a car
to deal with life and
death situations, who lives
and who dies.
Fascinating questions, and I'm
delighted to hear your thoughts
about this.
I'm going to start by taking you back a couple of years, almost three years, to the run-up to the last presidential election.
This was one of the
Republican primary debates,
and the Republican nominees
were very much, and still are,
opposed to the Affordable Care
Act, opposed to Obamacare.
And Wolf Blitzer of CNN, who
was moderating the debate,
asked a question of Ron Paul.
And it was a very interesting question.
It was a very awkward question.
He said, suppose there's a guy
who says, you know, I'm young,
I'm healthy, I don't really
need health insurance,
so he decides not to
buy health insurance.
And then something
terrible happens to him,
he ends up in a
coma, and he needs
intensive care for six months.
And he says to Ron Paul,
so who should pay for that?
And Ron Paul, being
a good politician,
instead of answering
that question
answered a different question.
He said, well, he should
have bought health insurance,
but Blitzer wouldn't
let him off the hook.
He said, OK, we all agree he should have bought health insurance, but he didn't.
What should happen?
Should we let him die?
Tough question for a politician.
And while Paul was
thinking, people
in the audience at the
Republican primary, some
of them shouted out,
yeah, let him die.
And Paul couldn't quite bring
himself to agree with them
or disagree, and what he
said was very interesting.
He didn't say let him die.
He didn't say that
the government
should take care of him.
He said friends, family,
perhaps the person's church
should come to that
person's rescue.
And I think that illustrates
a couple of interesting things
about the conservative side
of the political spectrum,
because you see
two strands there.
One is a kind of individualism,
that is, we're not all in this
together.
You make your choices, you're responsible for yourself, and if you make bad choices, you make your bed and you lie in it-- or your grave, as the case may be.
And then there's also what you might call a kind of tribalism, which is to say there is a collective that
has an obligation to you, but
it's not the larger government.
It's your friends, it's your
family, it's your church.
Now, around the same time,
bursting on the scene
was my home state's
Elizabeth Warren,
and she came to a lot
of people's attention
when this video of her speaking
in someone's living room
went viral, and it was like
the opposite of Ron Paul.
She was responding to the idea that people have a right to their fortunes
and the government shouldn't
be taking things away
and so on and so forth.
And she said, look, if you
built something wonderful,
you made a factory, you
have a successful business,
you deserve your
rewards, but there's
something you have to keep in
mind, which is that you moved
your goods to market on the
roads that the rest of us
paid for, and you were able to
hire workers in your factory
because the rest of us
paid to educate them,
and you were safe in your
factory because of the police
forces and firefighters that
kept your building safe.
So because of all those
things that you received,
which you may not be
thinking about-- you're
focused on your own efforts--
but because of those things,
you owe something
back to society
and owe something
back for the next kid
to come along with a good idea
for his or her own factory.
Now, commenting
on these remarks,
Rush Limbaugh said
that Elizabeth Warren
is a parasite who
hates her host,
and Ron Paul himself had
perhaps not quite as extreme
but similarly
unfavorable responses.
I think those two
moments really nicely
capture the political
differences that we have here
in the United States,
at least a lot of them,
and I think that these are
paralleled on the global stage
as well.
And what I'm trying to do
is to understand, how can we
find common ground between
those two points of view
and other points of
view that are, perhaps,
even more alien to
us here in this room?
So I study morality, and I think that what morality is ultimately about is cooperation, and that what cooperation is really about is altruism.
It's being willing to
sacrifice something of your own
in order to benefit somebody
else, and if what you sacrifice
is smaller than what
the other person gets
and someone else is willing
to do the same for you,
then we can all
end up better off
as a result of our cooperation.
And so I think of morality as
a kind of cooperation device.
For me, I think the
best illustration
of the problem of cooperation
goes back to a famous paper
by the ecologist
Garrett Hardin called
"The Tragedy of the Commons."
Anyone familiar with the
tragedy of the commons?
I see nodding, yeah.
So for those of you
who are uninitiated,
"The Tragedy of the
Commons" goes like this.
You have a bunch of herders
who share a common pasture,
and these are rational,
self-interested herders,
and they think to
themselves, well,
should I add more
animals to my herd?
And the herder
thinks, well, when
I have more animals,
that's good,
more to sell when
I go to market.
What's the downside?
Well, not much.
They're just grazing
on this common pasture.
And so they say, yes, I'll
add more animals to my herd,
and all the herders add
more and more animals.
And at some point,
there are more animals
on the pasture than the pasture can support, the grass is all gone, all the animals die, and everybody ends up worse off.
That's the tragedy.
And what's paradoxical about
it is that each of the herders
was doing what was in
his or her best interest,
regardless of what
other people were doing,
and yet everybody acting
in their own best interest
can collectively make
everybody worse off.
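To see the paradox concretely, here is a minimal sketch in Python; the carrying capacity and the grass function are illustrative assumptions, not numbers from Hardin or the talk:

    # Grass per animal falls as the commons fills up (illustrative).
    def grass_per_animal(total_animals):
        return max(0.0, 1.0 - total_animals / 200.0)

    def my_payoff(my_animals, others_animals):
        return my_animals * grass_per_animal(my_animals + others_animals)

    # One herder's view: adding an animal is privately profitable...
    print(my_payoff(26, 75) - my_payoff(25, 75))  # ~0.37 > 0: add it
    # ...but if every herder keeps reasoning this way, the pasture
    # is grazed to nothing and everyone earns zero.
    print(my_payoff(50, 150))                     # 0.0: the tragedy

Each marginal animal pays its owner in full while the grazing cost is spread across everyone, which is exactly why individually rational choices sum to collective ruin.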
So what's the solution?
Well, as I said, I think
that morality is essentially
the solution.
The problem of cooperation
is about the tension
between me and us.
And what morality is
essentially saying, at its most
basic level, is you
can't just care about me.
You have to care
about other people.
You have to care
about the broader us.
It's not just about yourself.
But simply saying
that doesn't resolve
a lot of the most
difficult moral problems.
So you say, OK, you're
not allowed to steal,
you're not allowed
to lie in general,
you're not allowed to
kill people in general.
Fine.
That's pretty
universal, but there
are a lot of open questions.
So here we have these
herders on the pasture.
Are they going to have
collective health insurance
in case their sheep get sick?
Is it OK for you to
fight off somebody
who's trying to take your
sheep with an assault weapon?
There are all kinds of questions
about the terms of cooperation
that don't get resolved simply
by saying you can't just
care about yourself.
You have to be moral, you have
to be social in some sense.
And so this is where my sequel
to Hardin's parable comes in.
So now imagine you
have this large forest,
and around this forest, you
have many different tribes
of herders.
And these herders are all cooperative, but they're cooperative in different ways.
So at one extreme
end of the forest,
you've got your
communist herders.
And they say, not only are we
going to have a common pasture,
we're just going to
have a common herd.
Everything's going to be
in common-- free lunch
at the Google Cafe-- and that
way, the tension between me
and us is resolved.
At the other extreme, you have
your free market capitalist
herders who say, not
only are we not going to have
common herds, we're
not going to have
a common pasture.
We're going to
privatize the pasture,
we're going to divide
it up into little plots,
and everybody has
ownership of their bit.
Our cooperation will consist
not in sharing material goods
but in respecting each
other's property rights.
And you can imagine
tribes that differ
in a lot of other
ways as opposed
to just being more individualist
or more collectivist.
Tribes are organized
by different leaders.
They're organized by
different religions.
This god over here, this
holy book, this leader
says no black and white sheep
together in the same enclosure,
and this one says it's OK
for women to be herders,
and this one says it's not. Is it OK to be a gay herder, or whatever it is?
All these tribes can differ
in terms of the basic rules
that organize their societies--
more collectivist, more
individualistic, more
intrusive into people's
personal lives when it comes to
things like sexuality or not.
So, a lot of different herders with different beliefs and different values-- tribes.
And now imagine one
hot, dry summer,
this forest in the middle of
this ring of different tribes
burns down, and
then the rains come
and there's this lovely
green pasture in the middle.
And all of the different tribes
look at that green pasture
and think, hey, nice pasture,
and they all move in.
And the question is,
what's going to happen?
They're all moral in
some sense, they're all
cooperative in some
sense, but they
cooperate on different terms.
And not only that, all
of their respective ways
of being cooperative, of being a
decent person within a society,
seem intuitively
obvious to them.
It just seems
obvious to this tribe
that women should be
allowed to own sheep,
and it just seems obvious
to this tribe over here
that women should not, that
that's not a thing for a woman
to do, or whatever it is.
They're all coming
into this common space.
This is essentially
the modern world.
That is, moral problems
in the modern world
are not simply
about me versus us,
about selfishness
versus morality.
They're about us
versus them, that
is, our interests
versus their interests,
and our values
versus their values,
our system for being
a cooperative people
versus their system for
being a cooperative people.
And so if we're
going to figure out
how to have an organized,
universal morality,
we have to deal with this
higher order problem.
Borrowing from Hardin and
his "Tragedy of the Commons,"
I call this the tragedy
of common sense morality.
There are different versions
of moral common sense
that enable different tribes
to solve their smaller tribal
moral problems.
But then we come together
in the modern world,
and now we have
this larger problem.
Just as a moral society
on a small scale
is one that allows individuals
to get along and form
a group, a modern morality,
a universal morality
is going to be one
that's going to have
to govern those new pastures
with many different tribes
all living together.
In some sense, this has been
the enlightenment project
for moral thinking for the
last few hundred years,
but the problem hasn't really been solved.
And I'll try to say a
little bit about why I think it hasn't reached a satisfactory conclusion yet.
So the first idea
is the distinction
between these two different
kinds of moral problems,
the basic moral
problem of me versus us
and the higher order moral
problem of us versus them.
Now I want to turn to a
different set of ideas
and get inside the head
and inside the brain
and talk about two different
kinds of moral thinking,
and really two different
kinds of thinking in general.
A lot of you are probably
familiar with Daniel Kahneman's
wonderful book,
"Thinking Fast and Slow."
That basic framework, the idea
that we have, on the one hand,
intuitions, gut
reactions that we
can use to think about
problems, and then
also have a capacity for
slow, deliberate reasoning.
I think this is a central
idea, an essential idea,
for understanding morality.
My preferred metaphor for the fast and slow idea is a digital SLR camera.
So on the one hand, you have
your automatic settings.
You take a picture of a
mountain from far away, you
put it in landscape
mode and point and shoot
and you've got your picture.
And your camera also
has a manual mode
where you can adjust
everything by hand.
Now, it's not that one of these
ways of taking photographs
is inherently better or
worse than the other.
It's that they're good
for different things.
Your automatic settings
are good for problems
that the manufacturer
has anticipated.
So it does the
thinking in advance
and then all you have to
do is point and shoot.
It's very efficient.
The problem with
automatic settings
is that they're
not very flexible.
They're good for what they're
good for and not much else.
Manual mode is very flexible
but it's not very efficient.
You have to twiddle the knobs.
You have to know
what you're doing.
And so the way the camera
navigates this trade off
between efficiency and
flexibility, which I'm sure is something that all of you who do any computer programming have thought a lot about,
is by having these
two different modes,
where you've got
your point and shoot
and you've got your manual mode.
And the human brain really has
the same basic design strategy.
We have automatic settings
that are efficient but not
very flexible, good
most of the time
but not good for
everything, and then we
have a manual mode, which is
our capacity for deliberate
reasoning that enables
us to think flexibly
about novel problems.
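For the programmers in the room, this design strategy is the familiar fast-path/slow-path pattern; a toy sketch, where the preset table and the scenes are made-up examples:

    # Fast, inflexible presets for anticipated cases;
    # slow, flexible computation for everything else.
    AUTOMATIC_PRESETS = {
        "landscape": "small aperture, low ISO, focus at infinity",
        "portrait": "wide aperture, shallow depth of field",
    }

    def manual_mode(scene):
        # Stand-in for deliberate reasoning: work it out from scratch.
        return "hand-tuned settings for " + scene

    def choose_settings(scene):
        # Point and shoot when a preset fits; think it through when not.
        return AUTOMATIC_PRESETS.get(scene) or manual_mode(scene)

    print(choose_settings("landscape"))      # efficient: cached answer
    print(choose_settings("solar eclipse"))  # flexible: novel problem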
So with this
framework in mind, I
want to go back to the
original moral problem,
the tragedy of the commons.
So the laboratory version of
the tragedy of the commons
is something called
the public goods game.
So you bring people
to the lab-- let's
say you have four people--
you give everybody $10,
and then people can either keep
their $10 or they can put some
or all of it into a common pool.
Whatever goes into
the common pool
gets doubled by the experimenter
and then divided equally
among all four people.
Now, if you are completely
selfish, what do you do?
You keep your $10 because then
you get your $10, plus you
get your share of whatever
other people put into the pool, which subsequently gets doubled by the experimenter.
If you're completely us-ish,
if what you care about
is the total good, the
total payoff, then you
put all your money
in because that's
what maximizes the amount that
gets productively doubled.
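In code, the payoff structure he describes looks like this; the four players, $10 endowment, and doubling of the pool are from the talk, the rest is a sketch:

    # Public goods game: contributions are pooled, doubled,
    # and split equally among all players.
    def payoffs(contributions, endowment=10, multiplier=2.0):
        share = sum(contributions) * multiplier / len(contributions)
        return [endowment - c + share for c in contributions]

    print(payoffs([10, 10, 10, 10]))  # [20.0, ...] everyone doubles up
    print(payoffs([0, 10, 10, 10]))   # [25.0, 15.0, 15.0, 15.0]
    print(payoffs([0, 0, 0, 0]))      # [10.0, ...] nobody gains

The free rider always comes out ahead of the cooperators within his own round, which is what makes the us choice a genuine sacrifice.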
So you put people in the lab and
you face them with this choice.
Do you do the me
thing, keep your money,
or do you do the us
thing and put your money
into the common pool?
What we were interested
in is what, if anything,
is the role of slow
versus fast thinking
in this kind of social dilemma?
So you might think
on the one hand,
people are intuitively selfish.
You think, I want my
money, but then you stop
and you think, well, but really,
I should be good to the group
and put more money in.
Or you might think it's
the other way, that we're
intuitively good, or intuitively
cooperative, but then we think,
wait a second.
I don't want to
get screwed here.
I'm going to keep my money.
Or maybe there's
no tension at all.
People just have
different preferences
and they just do
what they prefer,
but there's no
internal competition.
So in this set of experiments
done with David Rand and Martin
Nowak, we had people
make these decisions
in one set of experiments
under time pressure.
And the idea is putting
people under time pressure
is going to favor
the fast thinking
over the slow thinking.
And what we found is
that when we put people
under time pressure, they
contributed more money.
And when we told people
you have to stop and
think for at least
10 seconds about this decision,
they put in less money.
So at least in this
context, it seems
like the fast thinking is
what's making us cooperative
and the slow
thinking, if anything,
is making us less cooperative.
And in fact, this result
is consistent with a lot
of different results, some from
my lab and some from others,
basically telling us that
we have social instincts.
We have social
emotions that enable
us to solve that basic moral
problem of me versus us.
And you can think
of these as falling
into four categories
in a two by two matrix.
So we have positive emotions
and negative emotions.
You can think of those as
sort of emotional carrots
and emotional sticks, and we
can apply them to ourselves
and we can apply
them to other people.
So an emotional carrot
applied to me, if I love you,
I care about you,
you're my friend
or I have goodwill
towards you, that
impels me to be cooperative.
I might also think,
I'll feel guilty,
I'll feel ashamed,
if I keep my money
and everybody puts
their money in.
So it's a negative feeling
that makes me be cooperative.
And then I have feelings
that apply to you.
If you put your money in,
you'll have my gratitude.
And if you don't
put your money in,
then you'll have my
scorn and my contempt.
So these different
kinds of social emotions
enable us to be cooperative,
to get in that state
where people are not
just thinking about me
and they're putting at least
some of their resources
towards us.
So I think part one, lesson one,
is that fast thinking, at least
in some contexts--
this is not universal,
and I'll get to
this in a second--
does pretty well at solving
that basic moral problem.
But the basic moral problem, as I said,
is not the only moral problem.
That's the tragedy
of the commons.
We also have the tragedy
of common sense morality.
And this is where, I think,
our moral intuitions often
go astray.
And particularly when we're
dealing with modern contexts
and we're dealing with people
who are not in our groups, who
we think of as
the other, this is
where things go
especially badly.
I think one of the most
chilling demonstrations of this,
actually, comes from your own
Seth Stephens-Davidowitz who
was an economics PhD student
at Harvard until not long ago.
And he's analyzed Google search
data and shown that in places
where there's high search volume
for searches involving the "n"
word, those are places
where Obama did especially
poorly in the 2008 election, and
I believe in the 2012 election
as well.
That is, basic animus towards a racial out-group can have an effect-- well, you may think it's a good effect or a bad effect in that particular election, but I hope you'll at least agree that racial animus is not a good thing.
Coming back to the
tragedy of the commons
and the public goods game,
we did this in our lab.
Some of this was online,
some of this was in Boston.
Benedikt Herrmann and colleagues
have done these public goods
games around the world.
And they did repeated versions
of the public goods game
where people play,
they put their money in
or they don't, and then
they have the opportunity
to reward or punish the
other people with whom
they're playing.
So let's say you cooperated,
you put your money in,
and somebody else didn't, you
can pay $1 to the experimenter
and they'll take away
$3 from that person.
So it's the economic
equivalent of bapping somebody
on the head with a stick.
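As a sketch, using the $1-to-take-away-$3 ratio just described (the starting payoffs are carried over from the free-rider example above):

    # Costly punishment: the punisher pays 1 to remove 3 from the target.
    def punish(payoffs, punisher, target, cost=1, penalty=3):
        result = list(payoffs)
        result[punisher] -= cost
        result[target] -= penalty
        return result

    # A cooperator (15 after the round) baps the free rider (25):
    print(punish([25, 15, 15, 15], punisher=1, target=0))
    # [22, 14, 15, 15] -- punishing costs the punisher too, so who
    # punishes whom reveals a lot about a group's norms.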
And what they found is
that people play this game
very differently in different
places around the world.
You put a bunch of strangers
together in Copenhagen,
let's say, and people
right from the beginning,
they put a lot of money in,
and it stays high throughout,
so they walk away
with a lot of money.
There are other places
like Melbourne, Australia,
and Chengdu in China, where
people put in a decent amount
to begin with and
then some people
punish the people who
are not cooperating
and then cooperation
goes up, and by the end,
it looks like lovely Copenhagen.
And then there are other places,
like Athens, where people don't
put very much in
from the beginning,
and then over time, even though people have the opportunity to punish, cooperation still stays low.
And they were very surprised
to see what was going on here
because we think of punishment as the cooperators punishing the non-cooperators,
but in places like Athens,
the non-cooperators were
punishing the cooperators.
You're giving us money and
I'm punishing you for it.
Why would they do that?
Well, they interviewed them
afterwards and they said,
I don't like this game.
I don't know who you are.
I don't know who
these people are.
I don't like this
whole thing and I just
want to let everybody
know, don't mess with me.
I'm not going to play
your little game.
Now, I've been to Athens.
The people there are very nice.
It's not like these people
are jerks, so what's going on?
And I think it's that for them,
cooperation is more tribal.
It's about who you know.
It's about personal
relationships.
And the idea of coming
into some antiseptic space
with some experimenter
you've never heard of before
and these strangers and
laying your money on the line
and trusting them, that was
very uncomfortable for them.
So what's interesting
about this is
you give people the exact
same opportunities in, say,
Copenhagen and Athens,
not even that far apart,
and they walk away with much
more money in Copenhagen
than they do in Athens, at least playing
the game this way.
And it's because of those
automatic settings that people
in Copenhagen have the
gut reaction that says,
I can trust you,
and other research
suggests that it really
is largely about trust.
You have a feeling that says,
I don't know who you are,
but I can trust you.
We're all in this
together in some sense.
Whereas in Athens
and other places,
they don't have that feeling
as much for strangers,
for people who they
don't know personally.
This is another way in which
our gut reactions can fail us.
Different case coming from
the philosopher Peter Singer.
Suppose you are walking
along and there's a pond.
There's a child who's
drowning in this pond,
and you could save
this child's life,
but you're wearing
your fancy Italian suit
that you just got and you're
going to ruin your suit if you
have to wade into
this muddy pond.
How many of you think it's OK
to let the child die because you
don't want to ruin your suit?
How many of you think
that would be terrible?
A lot of hands going up.
Different case.
You're home one day.
You get a letter
from Oxfam or UNICEF
saying, please make a donation.
The money that you
give us can very likely
save somebody's life, a child on
the other side of the world who
badly needs food or medicine.
And you say, well, I'd
like to help these people,
but I've got my eye
on this nice suit,
so I'm going to save
my money for that.
Now, how many of you
think that you're
a terrible person if you spend
your money on luxuries that you
don't really need but that are
nice to have when you could use
that money to save
someone's life?
A little bit, right?
We all go, I see that, but
how many of us live that?
Very few of us.
What's going on here?
Well, philosophers have
argued about this kind
of case for a long time.
Very clever people
say, well, for example,
in the case of the child who's drowning,
you're the only one who
can help, whereas people on the
other side of the world, there
are a lot of people
to help them.
So you have a special duty
to help the person here,
but those other
people over there,
you don't have a duty to help.
And then another
clever philosopher
comes along and says, well, what
if there were a lot of people
standing around this pond
wearing fancy Italian suits
and they're not helping?
Now is it OK for you
to let the child drown
because other people
are standing there
in their suits doing nothing?
You say, gosh, that
doesn't sound right.
And this goes on
and on for decades.
What I wanted to do and what
I did with a student named
Jay Musen is try to turn
this thought experiment
into an actual
scientific experiment.
So in our version, which comes
from a philosopher named Peter
Unger, you're vacationing in
this lovely but poor country.
You have your little
cottage up in the mountains
overlooking the coast, and
there's a terrible typhoon that
hits, and there's
devastation along the coast.
Unfortunately, it's much like the situation in the Philippines not long ago.
And you can help.
The best thing for you to do is
not to go down there yourself
but to just make a donation
to, say, the Red Cross.
And your internet still
works in your little cottage
and you can make a donation.
And we asked people, do
you have an obligation
to give some money to help
the people down below who
have been devastated
by this typhoon?
And most people-- about, I
think, 68% in our sample--
said you have an obligation
to help in that case.
We gave a different group of
people the following case.
Same situation with the typhoon,
but you're not over there.
Your friend is over there.
Instead, you're at
your computer at home,
and your friend has a
smartphone and is showing you
everything that goes on.
You can see, you
can hear everything
your friend sees and hears,
and you can help just as much.
You could make a donation
to the Red Cross online
just as much as your friend can.
You're just farther away.
Do you have an
obligation to help?
These people didn't
see the first case.
They only saw this case.
About half as many people,
about 34% in our sample,
said you have an
obligation to help.
It's not a perfect
experiment, but it's
a pretty well
controlled experiment.
It seems like the difference
is this physical distance.
Just imagining that it's
far away versus nearby
seems to make a
very big difference.
Now, morally, it's pretty
hard to defend that,
but psychologically
and biologically,
it makes a lot of sense.
We evolved to solve the
tragedy of the commons.
We evolved to solve moral
problems within a tribe
where we're interacting
with people in person,
and anyone you might
help is someone
who might come help
you later, and anyone
you might hurt or
allow to die is
someone whose family might be very upset.
It makes sense that we
have moral buttons that
get pushed by people who
are right in front of us.
But it wouldn't make any
biological sense for us
to have buttons that are
pushed from the other side
of the world.
We didn't evolve to
cooperate with statistically
indeterminate strangers on
the other side of the world.
But today, we have the capacity
to save the lives of people
on the other side of the world.
We also have the capacity
to ruin the lives of people
on the other side of the world.
And so the modern world
gives us these opportunities
for good or ill that perhaps
our automatic settings are not
up to.
Do I have time for
one more example?
So another example about
thinking about helping people.
This is a brain imaging
study done by Amitai Shenhav
and myself, where we asked
people questions about,
say you work for
the Coast Guard,
you're in this rescue boat.
You're going to save
someone who's drowning
and then you get a radio
signal that says, wait,
there are these people
who are drowning
in this other location.
You can turn around
and save them instead.
And this, I think, may feel
in a certain sense very much
like a Google car problem.
Do you change course
and go save the five?
Now, you want to know,
how many lives can I save,
and what are my odds of actually saving them?
So it could be 10 people but
a 50% chance of saving them.
Do I let the one person go in
order to save those people?
You need to make
that calculation.
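The calculation itself is ordinary expected value; a minimal sketch, with the 50% and ten-lives numbers taken from the example:

    # Expected lives saved: probability of success times lives at stake.
    def expected_lives(prob_success, lives_at_stake):
        return prob_success * lives_at_stake

    stay = expected_lives(1.0, 1)   # the one person you can surely reach
    turn = expected_lives(0.5, 10)  # ten people, 50% chance of saving them
    print(turn, stay, turn > stay)  # 5.0 1.0 True -- change course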
So we had people make
these kinds of judgments
while having their
brains scanned,
and our main scientific interest
was in, what in the brain
is keeping track of
the probability, what
are my odds of saving them?
What's keeping track
of the magnitude,
how many lives could I save?
And what's putting those
two pieces of information
together in order
to make a decision?
I'll spare you some of the
neurobiological details,
but the gist of it is
that the system that we're
using to take these variables
and put them together and make
a decision really
is the same system
that rats and that monkeys use
to make decisions about choice
under uncertainty.
If you're an animal
and you're deciding,
do I forage and get the
easy leaves over here
or do I go for the
nice ripe fruit that's
much farther away, and much
more uncertain and much more
dangerous to get
there, what do you do?
Now, one interesting thing that we and other people have noticed when it comes to saving people's lives
is that you get these sort
of diminishing returns
in terms of the numbers.
So you save one person's life,
that feels like a big deal.
Two people's lives, well,
that's maybe twice as important.
But once you get
50, 100, 150, 1,000,
it just feels like a lot.
It doesn't feel very different.
Insensitivity to quantity is
the technical term for this.
Now, why would that be?
Well, based on what I told
you from the brain imaging
experiment, it actually
makes a kind of sense.
That is, if this is a system
for placing values on choices,
on options, that we
inherited in modified
form from our primate ancestors
and our even earlier mammalian
ancestors, it
makes sense that we
get these kinds of diminishing returns.
If you're a rat, you
don't have a fridge.
The goods in your
life, once you've
eaten your lunch, that's it.
Having more food isn't going
to do you that much good.
You could maybe share it
with a couple of other rats
or something like that,
but the value of goods
falls off pretty quickly
as the quantity increases.
And it would make sense that we
would have a neural valuation
system that follows
that pattern.
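A sketch of what such a valuation curve might look like; the logarithm is purely illustrative, since the talk claims only diminishing returns, not a particular functional form:

    import math

    # Insensitivity to quantity: felt value grows much more slowly
    # than the number of lives (or goods) at stake.
    def felt_value(n_lives):
        return math.log(1 + n_lives)  # illustrative concave curve

    print(felt_value(1))     # ~0.69: the first life feels like a lot
    print(felt_value(2))     # ~1.10: the second adds noticeably less
    print(felt_value(100))   # ~4.62
    print(felt_value(1000))  # ~6.91: 10x the lives, barely more feeling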
But now we're using it
to think about things
like saving people's lives.
It didn't evolve for
that purpose, right?
And so morally, it
doesn't make sense.
Why should we say that the hundredth life you can save is worth any less than the first life you can save?
But at least to this
system within your brain,
it does make a difference.
It's as if that system says: I'm morally full.
I've saved enough
people, 100 lives.
I'm done.
It's not worth much
more after that.
Now, part of you does engage
in this kind of thinking
and part of you, as you're doing now, can chuckle at that
and say, gosh, that's kind of
ridiculous that we do that.
And that's, I think, your
fast and your slow thinking.
You have an intuitive way of
thinking about these problems,
and then you can
step back from that
with an understanding of
what your brain is doing
and say, gosh, that's dumb.
That doesn't really make sense.
And it's not just dumb.
It's consequential.
The fact that we think this way
can have important implications
for our willingness and ability
to save the lives of strangers
on the other side of the world.
So to recap a bit of
what I've said so far,
the basic moral
problem, me versus us,
selfishness versus
morality, we've
got pretty good intuitions
for that, for getting along
in a tribe.
But when it comes
to the modern world
where we are trying to
get along with people
from different tribes
with different values
and we have
opportunities, like you
can save the life
of some stranger
on the other side of
the world, something
that your brain never evolved
to do, our moral instincts,
our intuitions,
our fast thinking
seems-- at least
to me, and I think
you could make a pretty
good case that this
is true for a lot of
people-- to fall short.
And so then the
question is, all right,
how do we solve
that larger problem?
And this is the problem
with which I began.
How do we solve the tragedy
of common sense morality?
How do we, the people
of the world, people
of different political parties,
even within our own nation,
reconcile our
different moral values
and try to come up with
some kind of system
that we can all live by?
So as I said, this has been
the Enlightenment project
for morality.
The whole second
half of the book
is trying to answer
this question.
And I'm now down probably
to my last five minutes,
so I'm going to try to be
quick and just give you
a flavor of what I think
the answer is like.
So there is a philosophy
which I'm sure some of you
have heard of called
utilitarianism.
Worst named philosophy
ever, but the idea behind it
actually makes a lot of
sense, especially if you're
thinking about the
world's problems
in this global kind of way.
And I'm going to first
give you an example
of utilitarian progress and then
say how I think we got there.
So Jeremy Bentham, who was
the original utilitarian
philosopher, wrote what may be one of the first defenses of what we now call gay rights.
This was in the 18th century
at a time when being gay
was punishable by death.
And in this passage
that I love, he
said, I've been
tormenting myself
for years to try to find a
reason why it makes sense
to treat being gay in
this way, because it
does seem wrong to me, like it does to so many people around me.
But upon the principle of
utility, I can find none.
And so he came to the
conclusion that maybe being gay
is not so bad, and this was in
the end of the 18th century.
I may have said
19th century before.
18th century.
How did he do that?
What philosophy was he applying?
Basically, his
philosophy says, you
should try to promote
the greater good,
and this really has
two parts to it.
One is that everybody's
well being counts the same.
So it's not just
love thy neighbor,
treat thy neighbor as
you treat yourself.
It's everybody counts, and
that's a pan-tribal idea.
And then the question is,
well, what's the measure?
What is it that matters
for each person?
And the conclusion
that he came to,
and others since have affirmed,
is that what ultimately matters
is the quality of
people's experience.
And the argument for
this goes like this.
Take anything that
you care about
and keep asking, why
do you care about
that until you run
out of answers.
You say, you came to work today.
Well, why'd you come to work?
Well, I enjoy work and I
also need to make money.
Well, what do you need
to make money for?
Well, I need a place to live.
Why do you need a place to live?
Why can't you just
wander around?
Well, it gets cold at night.
I want a place
where I can sleep.
Well, what's wrong
with it being cold?
Well, it's just unpleasant.
What's wrong with
it being unpleasant?
It's just bad.
That's where you
run out of answers.
And the idea is that
there are a lot of things
that we care about, but
when you keep asking,
why do you care about that,
ultimately, it comes down
to the quality of somebody's
experience, which you can,
I think, somewhat accurately
but somewhat misleadingly,
call someone's happiness.
And so what Bentham said is we
should be maximizing happiness
impartially.
Everyone's happiness counts
the same and happiness
is what ultimately matters.
And that's what led him
to the conclusion to say,
you know what, maybe
being gay is fine.
I have this automatic
setting, this point and shoot
moral reaction, like all
the other people around me--
not all the other people, but most of the other people around me in 18th century England-- that says this is a terrible thing to do. But I've got this philosophy that I use to sort of measure the worth of things.
And that led Bentham and
others since to the conclusion
that there's really actually
no problem with being gay.
And in fact, Bentham and his
successor, John Stuart Mill,
I think got it right about every
major moral and political issue
of their day.
They were among the first
opponents of slavery,
they were among the
first proponents
of free speech and free markets.
I think the way
they were able to do
this is with slow thinking,
by putting their gut
reactions aside and asking, what
good or bad does this actually
do for the world?
Now, this is a very
controversial philosophy,
and I think it's
widely misunderstood.
And a lot of what's
controversial about it
is that it seems to get the
wrong answers in certain cases.
So a famous case that I've spent
a lot of time thinking about,
you have a trolley headed
towards five people
and you can save them
but the only way you
can save them is by
pushing this big person off
of this footbridge
that you're standing
on, and into the trolley's path. He gets killed by the trolley, but the five people will live.
Is that OK?
No, you can't jump yourself.
Yes, this will work.
Even with those
assumptions, most people
still say that feels wrong.
And this seems to be a powerful
argument against this idea
that we should be
promoting the greater good.
Sometimes it's wrong to
kill one person in order
to save five people.
At least that's how it feels.
This is a complicated
set of questions.
My sense is that
it's good that we
are uncomfortable
with killing people.
The world would be
a much worse place
if we all felt
totally comfortable
going around pushing
people off of footbridges.
And so it's good that we
have that automatic setting.
But then for any automatic
setting that we have,
it will always be
possible to contrive
a situation in which the greater good is in conflict with what that gut reaction, that automatic setting, says.
And I think, again,
this connects
with a lot of the
kinds of problems
that, for example, someone
designing a driverless car
has to think about.
What are the
trade-offs that you're
willing to make when
it comes to safety?
Would you have a car
drive into one person
deliberately if it was a
way of saving more lives
and so on and so forth?
These are things we can talk
more about in the discussion.
I want to leave you
more with a problem than
with my specific solution.
Once again, we
have gut reactions
that do a good job of dealing
with everyday social life,
of moral life within the
tribe, but the problems
that we face as a nation and as
a world are more complicated.
They're not the
kinds of problems
for which our brains,
biologically or culturally,
were necessarily designed.
And so at the very least, I want
to leave you with this thought,
that when it comes to the
morality of everyday life,
it's probably a good
thing to think fast.
Your gut reactions,
more often than not,
are going to make
you a good person.
But when it comes to
global morality, when
it comes to the
complicated problems
that we face as people
within larger nations
and nations within
a larger world,
our gut reactions
can't all be right.
We have different gut reactions
about the problems that
divide us, and
we're going to need
some kind of moral system,
some kind of meta-morality,
to guide us as we deal
with those problems.
So looking forward to
hearing your thoughts,
and thanks again.
[APPLAUSE]
AUDIENCE: I'll take
your last point.
How do we get to
that meta-morality?
And I think even that seems
like a huge concession
for some people,
saying, look, obviously
what you have isn't scalable.
For me, I always struggle
with this in the public policy
conversation.
It's like, just because this
makes you uncomfortable,
that's not sufficient for a nation
of 330 million people.
How do people get
to that concession?
JOSHUA GREENE: Right.
I was just talking
about this with Kent.
My hope is that self-knowledge can be empowering.
When you just have your gut
reactions and all you know
is what they're
telling you, then
you're just going to
trust your gut reactions.
But if you have an
understanding of where these gut
reactions come from
and how they work,
you can recognize
that they're useful
and also recognize that
they have limitations, just
like the gut reactions
on the camera.
Let me take this as an opportunity
to say a little bit more
about the trolley case,
where most people
think, gosh, it sure
feels wrong to promote
the greater good.
So one experiment we
did a few years ago,
we gave some people
the version where
you can push the guy
off the footbridge.
In this version, about 30%
of people said that it's OK.
So most people said
it's not OK to push
the guy off the footbridge.
We gave another group
of people this version.
Instead of you're on the
footbridge next to the guy
and you can push
him, you can hit
a switch that will
open a trap door
and drop the guy
onto the tracks,
and that way save
the five people.
Now about twice as many
people say that it's OK.
A lot of what we're
specifically sensitive to
is something like the difference
between pushing somebody
with your hands versus
hitting a switch.
And on the one hand,
we can recognize
it's good that we're
emotionally averse to going
around shoving each other.
We don't want a world in which
people are cool with that.
But at the same
time, we don't want
that to be an obstacle to the
greater good in the long run.
The trolley case they give you is a fanciful kind of case,
but these things
really matter when
it comes to medical
ethics, when it comes
to something like abortion or
physician-assisted suicide.
The American Medical
Association's stance
is that if it feels like
the footbridge case,
essentially, if it's
intentional and it's active,
then a physician is not allowed
to end someone's life, even if the patient wants it, because that
feels like shoving somebody
off the footbridge.
Whereas you can
withhold treatment
or can give somebody
pain medication
to ease their pain knowing
that it will kill them,
so it's a kind of side effect.
These things I think actually
do matter for public policy.
I'm just trying to
illustrate the answer
to your question, which
is that once we understand
how quirky our intuitions are
and how contingent they can
be-- contingent
in the sense of I
grew up in this kind of culture
versus that kind of culture--
I hope that that knowledge
can help us see, again, where
they're likely to be helping
us and where they're also
likely to be leading us astray.
AUDIENCE: But for
any individual,
how do we get them there?
JOSHUA GREENE: Ah.
Well, this is
essentially what I try
to do as a scientist
and a teacher.
I teach classes, write
books, give talks like this.
This audience is
probably the audience
that needs to hear this kind
of thing least, in a sense,
because you are so
sophisticated and systematic
in your thinking, but I think
it's a long, slow process.
My hope is that a kind of
experimental social science
will make its way into the
basic curriculum of an educated
person.
When you're in high school,
you learn about physics
and you learn about chemistry
and you learn about biology,
and you learn about the kinds
of experiments that led us
to a detailed knowledge
of how these things work
at a mechanical level.
And then social studies,
it's much more broad.
It doesn't feel like science.
My hope is that
30 years from now,
maybe sooner, when
kids go to school,
that human behavior
will be very much
a topic of scientific study.
And I think that if we grow
up with an understanding
that the cognitive
processes, the neural
processes that lead
to our behavior
are sensible and adaptive and
intelligent in a lot of ways,
but also inflexible and
limited and problematic
in a lot of ways, I think if
that's just part of an educated
person's common sense,
I think that would
make all the difference.
AUDIENCE: So you mentioned
two ways of thinking,
like the default
mode of morality
and the slow
understanding, but have you
studied the paralysis that
comes from slow understanding
or logical thinking?
So for example, in
the trolley example,
if I'm really trying to
understand the situation
and trying to do greater good, I
need to know those five people,
are they good for
the world or not?
Are they fanatic
terrorists who are
trying to bomb the trolley
in the first place,
or is it like a fat Gandhi here?
Have you studied that
paralysis that comes with that?
JOSHUA GREENE: In a
very limited sense,
that is, when you give people
a difficult moral dilemma,
they take longer.
So in a boring sense,
I have measures
of what you might
call paralysis.
I don't think this is an
insurmountable problem.
I think that any sensible
system for making decisions,
especially at the
policy level, is
going to be taking into
account the consequences
of the available options.
And when you're dealing
with big problems
and non-hypothetical
problems, the amount of time that you can spend collecting evidence about the consequences, and worrying about whether or not your evidence is as good as it can be, is infinite.
So what I would say
is this is a problem
if you take this approach.
I've decided to
rename utilitarianism
as deep pragmatism,
because I think it actually
gets at what it is in
a more transparent way.
If you take this pragmatic
approach to decision
making where you say
it's fundamentally
about the consequences
for all concerned,
there's no bright
line where you say,
OK, now I have all
the information
I need and I can
make my decision.
But the only people who
can avoid that problem
are dogmatists.
The only people who can avoid
that problem at the highest
level are people
who say, I don't
need to know what all the
consequences are going to be.
Some people think of it as
a fault of deep pragmatism,
as I now call it,
that it requires
this impossible information
gathering exercise,
but I think it's a virtue
because it requires it
explicitly, because it puts the
difficulty and the complexity
of the problem in the foreground
instead of in the background.
Instead of trying to have a
simple principle that you just
live by but that ignores
the consequences,
you say, look, these
are difficult problems.
And it leads to a
kind of humility
as well because you know that
no matter how hard you've
studied the problem,
there are always
going to be things that
you haven't accounted for.
So I think it's just the
reality of a complex world that
gives us that problem, and
anyone who thinks that they can
do away with it is just
selling a bill of goods.
AUDIENCE: Over the
centuries, we've
had a gradual
progress in what I see
as the expansion of the scope of
what we consider to be people.
It was progress when Irishmen were allowed to be people.
Black people, women,
now gays are just
beginning to be
acknowledged as people
worthy of treating the
same as anyone else.
Does that judgement
occur in the fast system,
and what promotes
people expanding
their scope of who
is considered people?
JOSHUA GREENE: I think it starts
out slow and ends up fast.
That is, it takes someone
like Jeremy Bentham
to say, why shouldn't gays have
the same rights as everybody
else when it comes to
their personal lives?
Or it takes someone like
John Stuart Mill or his wife,
Harriet Taylor Mill, to
say, why shouldn't women
have the same political freedoms
and education as everybody
else?
And both of them were ahead of their time-- Bentham, on gays, kept his work private.
But the people who first
come out with these ideas,
they're viewed as crazy.
But there are a few people
who say, you know, you've
got a point there.
And then more and more people come around-- this is what some have called a moral cascade.
It starts with a
small number of people
who have this crazy
idea, and then
a larger and larger and
larger circle of people
take it seriously.
And the more people
take it seriously,
the less of a moral
adventurer you
have to be to join the club.
And so over time,
what happens is
it starts out as a kind
of theoretical argument.
Hm, why shouldn't
we be giving money
to poor people on the other
side of the world, which
seems crazy?
And then as these ideas
gain currency and as people
are immersed in them,
it becomes just part
of their common sense.
I remember seeing attitudes towards gays change in my own lifetime.
So in high school or whatever,
it was common for people
to call someone a fag.
It was just the general
expletive or whatever.
It was not a gay
friendly environment.
Now I'm over that.
I shudder to think what
kind of emotional residue I may have from
that time period,
but I think my son, who goes
to public school in Cambridge,
Massachusetts, he
doesn't have that at all.
In his kindergarten class,
they have a unit on families,
and some families
have two mommies,
and some families
have two daddies,
and it's all completely fine.
And so he just doesn't even
have the slightest feeling
that there's something
wrong with this.
And so it starts out as this
kind of intellectual argument,
and then over time becomes
just part of common sense.
Linking your question
together with your question,
I have two quotes in the
beginning of the book.
One is from Anton Chekhov, which says-- pardon the gendered nature of this-- man will become better when you show him what he is like.
That's the strategy.
And then the answer
to your question
is the second quote, which
I got from a fortune cookie
from a noodle shop in
Princeton, New Jersey,
which is "the philosophy
of one century
is the common
sense of the next."
And that's, I think,
a good summary
of the answer to
those two questions.
AUDIENCE: Once you determine
that our intuitive responses
might not be the
best for determining
the extent of our moral
obligations to others,
how do you propose we
set about determining
that boundary?
Are you in favor of Peter
Singer's giving until it hurts,
or are there other
philosophers who
have answered the question of
the extent of how far we should
go?
JOSHUA GREENE: The short answer
is I agree with Peter Singer,
but not so much necessarily
in giving until it hurts.
I think the right
way to think about it
is, what is the
best policy for us?
How can we do the most
good, taking into account
our natural psychological
limitations and biases?
So we didn't evolve to be
perfect altruists where we're
concerned about the well being
of strangers on the other side
of the world as much
as we're concerned
about our own well being or
the well being of our family.
And so you might say, well,
instead of having a birthday
party for my son or daughter,
I should just take that money
and give it to UNICEF
or something like that.
But I think that's going to
be exceptionally hard for me,
because I really
care about my kids,
and we all have our
personal commitments.
The good news is
that I don't think
we have to give until
it hurts all that much.
That is, if all of us
in the affluent world
would just give a couple
percent of our income
towards effective
charities, that
would make an
enormous difference.
I don't want to say that
would alleviate all poverty,
and I think there are a
lot of challenging problems
about what's exactly
the right way to do it,
but I actually don't think that
it requires that much pain.
Now, you might say, well, why
not keep giving until it hurts?
Well, there are two ways
to think about that.
One is I can try to do
a lot of good myself,
but if I make a saint
out of myself where
I'm living this
impoverished life
and giving all my resources to
other people, people look at me
and say, wow, you're inspiring,
that's really impressive,
and then not really be inspired and not really do something themselves.
Whereas if you could say, look,
I'm a person just like you
and I mostly care about myself
and my friends and my family,
but instead of giving
nothing or almost nothing,
I give this much.
And someone else could
look at that and say,
you know what, I can
do the same thing.
And so in the long
run, I actually
think that promoting a
sustainable culture of altruism
is probably a better strategy
than trying to be a hero.
Being a hero, you
give more now, but it
doesn't light the fire that can
get things going more broadly.
To put the answer another way: if you think in the long, long run and think about how human cultural dynamics work, then by giving until it really hurts you can do a lot of good now, but I don't think that's going to be the long run answer.
I think the long run
answer is everybody just
willing to care a
little bit more.
KENT WALKER: Round of applause
for Professor Josh Greene.
Thank you very much.
[APPLAUSE]
