Transcriber: Anna Travinskaya
Reviewer: Denise RQ
A friend of mine told me
that when giving a talk
it is important to speak in a low voice.
Since one passion of mine is beatboxing,
I thought I would try it like this:
(beatboxing) Ladies and gentlemen,
welcome to the deepest TEDx talk so far.
Or better not!
Anyway, as for my main passion,
I've been thinking about the world,
and what I want to do in it.
This made me realise two things.
First, the world is really messed up.
Much more than I would have thought.
Be it world poverty,
suicide and depression rates,
the suffering of non-human animals,
or worrying forecasts about the future.
Many things don't seem
to be alright at all.
Most of us grow up in a bubble
where things seem
to be OK the way they are.
But once you start questioning
the status quo,
it suddenly becomes glaringly obvious
that the way things are is a catastrophe.
Fortunately, the second thing
I have learned is brighter.
I realized that individuals
can make a huge difference
if they ask the right questions
and if they put in some effort.
So this is what my talk will be about.
Many altruistically
motivated people
start out with some concrete idea
or some favourite cause,
which they then try to pursue.
It is rare that people
systematically think about
where they can make the most difference.
And given all
that's going on in the world,
and all the things one could do,
it is hard to even know
where to start looking.
So, the feeling of being overwhelmed
or incapable of making
a meaningful difference is understandable.
Are we even able to make a difference?
I think we are.
In order to see why,
I'm now going to present
a helpful approach.
Let's call it
'Impact through rationality.'
So first off I am going to define
the two elements of that concept:
impact and rationality.
What does having an impact mean?
It refers to how the world is different
as a consequence of our decisions.
And by decisions I don't just mean ones
that explicitly feel like decisions,
like choosing what apartment
to rent or what job to take.
By decisions I mean basically everything.
Throughout our lives, every step we take,
and every step we don't take,
is a decision and determines our impact.
But we don't want to have
just any kind of impact,
we want to have a positive one, of course.
Despite diverging opinions in ethics,
there is one core value
that everybody agrees on:
the reduction of unnecessary suffering.
This may not cover everything,
but it serves as a good focal point
of what we, as altruists, want
to accomplish as our shared goal.
And when we're talking about goals,
rationality comes into play.
Or at least it should!
Because it is the science of figuring out
how to best achieve our goal,
or in other words, how to win
at whatever game we are playing.
As we all know, humans
don't always act rationally.
In cognitive psychology,
a systematic deviation
from goal achievement
is called a cognitive bias.
Now, let's have a look
at some particular biases
which play an important role
in ethical questions.
One of these is called
scope insensitivity.
Imagine there's an oil spill
that kills 2,000 birds.
Now, put an imaginary price tag
on those 2,000 birds
corresponding to how much
you'd be willing to pay to save them.
What price tag would you put on them
if it were 200,000 birds?
If the actual goal is to help these birds,
then we should expect a linear increase
in the amount of money donated.
Instead, if we ask people
about the cases in isolation,
we see that the sum stays
about the same for all numbers of birds:
around 80 dollars.
Our brains are not equipped
to deal with large numbers.
Even though each bird matters in itself,
our brains only have
so much caring ability
that at some point, the number of victims
stops affecting the intensity
of our emotions.
But if we want to help effectively,
we can't rely on our biased,
intuitive responses.
We need to take numbers into account.
Numbers count because individuals count.
So, do we actually want
to help these birds
- in which case our money should scale
with the number of birds -
or do we, in a more selfish way,
just want to feel good
about having done something?
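The mismatch in the bird example can be made concrete with a quick sketch. This is a hypothetical illustration: the flat ~$80 response is the figure from the talk, while the per-bird rate derived from it is my assumption about what consistent, linear valuation would look like.

```python
# Scope insensitivity: stated willingness to pay stays flat,
# while valuing every bird equally should scale linearly.

FLAT_RESPONSE = 80.0  # dollars offered, roughly regardless of scale

def consistent_price(n_birds, per_bird=80.0 / 2000):
    """Price implied by valuing each bird equally
    (per-bird rate derived from $80 for 2,000 birds)."""
    return per_bird * n_birds

for n in (2_000, 20_000, 200_000):
    print(f"{n:>7} birds: consistent ${consistent_price(n):>8,.2f} "
          f"vs. stated ~${FLAT_RESPONSE:,.2f}")
```

At 200,000 birds, consistent per-bird valuation implies $8,000, yet stated responses remain near $80.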
So, what's your goal in life?
What is it that you'd like to see
written on your gravestone?
It seems to me that this has to be
the most important question
we can ask ourselves.
Still, we often don't spend
an adequate amount of time
thinking about it.
In order to figure out
what we really care about,
let's go through the following
thought experiment.
Would you push a button
that causes two things:
one, you receive $100
in your bank account.
Two, a child in Africa gets infected
with a horrible disease.
Most people would not push that button,
and this suggests
that they are not selfish.
What about the second scenario?
Again, pushing the button
will accomplish two things.
One, $100 gets subtracted
from your bank account.
And two, a child in Africa
will be prevented
from being infected
with a horrible disease.
Not pushing this button
results in you keeping $100,
and a child becoming sick.
The choice in both scenarios
is basically the same.
In your head, you decide
which future world
will come into existence.
You can either choose a future world
where you have $100 more,
and a child is sick.
Or you can choose a future world
where you have $100 less,
and a child is healthy.
If we don't push the button
that would give us $100
and cause a disease in a child,
that shows that the health
of a child is worth more to us
than $100 at our own disposal.
But in that case, it would be irrational
not to give up $100
in order to ensure the health of a child.
It doesn't make a difference to others
whether we actively cause
something bad to happen,
or whether we omit to prevent
something bad from happening.
Intuitively, it might seem like
there is a real difference here,
but rationally considered, it's a bias,
the omission bias.
Put into practice,
how can we rationally approach
the question of how to help
people most effectively?
Organizations such as GiveWell are working
on scientific charity evaluation.
The differences in cost-effectiveness
between charities
can be enormous.
Let's take interventions
against HIV as an example.
This graph shows
that the available interventions
differ massively in terms
of how much they achieve
per $1,000 invested.
It would be irresponsible not to consult
and produce this scientific knowledge
when spending our money.
For instance, if we want to buy
a television and we find out
that the same product is priced
at $500 in one store and $5,000 in another
the rational thing to do is clear.
Unfortunately, similar situations
in the charitable sector
are usually not handled in the same way.
Luckily, however, there are
a number of organizations
pursuing a cost-effective approach.
In Switzerland for example,
there is the organization EACH,
which stands for
Effective Altruism Switzerland.
Trying to reduce
the most suffering in the world
requires us to take a broad perspective,
search for the biggest sources
of suffering out there,
and find out how exactly
we can prevent them.
Therefore, I want to focus on an issue
which seems very important
and has rarely been mentioned:
the suffering of non-human animals.
Personally, I don't have a dog,
or a kitten, or a child, or any other pet.
And I don't plan to have one.
But the issue is not
whether I like them or not.
The issue is whether they are suffering,
and whether I can do something about it.
I simply don't want to look away anymore,
I don't want to ignore the facts
when the fate of innumerable creatures
depends on what I do.
We slaughter 60 billion
land animals each year.
That's 7 million every hour.
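The hourly figure follows from the annual one by simple arithmetic; a quick check (the 60-billion figure is the one given in the talk):

```python
# Convert 60 billion slaughtered land animals per year
# into an hourly rate (ignoring leap years).
animals_per_year = 60_000_000_000
hours_per_year = 365 * 24  # 8,760 hours

per_hour = animals_per_year / hours_per_year
print(f"{per_hour:,.0f} animals per hour")  # ~6.85 million, i.e. roughly 7 million
```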
Let's imagine that mentioning
this number was just a bad joke,
and factory farming didn't exist.
Would you object if I proposed an increase
in culinary diversity that required, say,
scientific experiments that killed
7 million animals an hour?
And how is factory farming
different from that?
Considering the victims,
it seems to me that
not helping to end factory farming is
as unethical as newly introducing it.
Some may feel, however,
that the well-being of animals
should not count for as much
as the well-being of humans.
So let's take a step back
and look at the reasoning
for caring about suffering
in the first place.
It seems to me that my suffering matters
not because I belong
to a particular group,
such as human animals.
My suffering matters
because it is suffering,
because it is the sort of conscious state
one intrinsically desires to get out of.
This is why it makes rational sense
to draw the line at sentience.
A sentient being is one for which
it feels like something to be that being.
Things that are not sentient,
such as plants, very probably,
or rocks, certainly,
cannot be harmed like
sentient beings can be harmed.
Historically, our circle of concern
has grown step by step.
African-Americans
only received full legal rights
after serious
efforts and protests.
The fight for women's rights
has taken a long time, too.
It was only in 1990
that the WHO declassified homosexuality
as a mental disorder.
And yet, none of these social problems
have been fully solved even today.
So the historical record teaches us
that discrimination is hard to spot
and hard to get rid of.
Our circle of concern
didn't grow by itself,
it had to be stretched actively
by brave change makers.
And if you look at where
this expanding circle has arrived,
then, it seems to me,
the discrimination against non-human animals
should be the moral issue of our time.
Sometimes, I wonder,
what people 40 years from now
are going to think about us.
I won't be surprised if they look back
on us and ask themselves:
"How could they possibly
have supported these torture facilities
all over the globe?"
Jeremy Bentham summed up
the crucial points
as early as 200 years ago.
"A full grown horse, or dog,
is beyond comparison
a more intelligent as well as
a more conversable animal
than an infant of a day, a week,
or even a month old.
But suppose the case were otherwise,
what would it avail?
The question is not,
"Can they reason?" nor "Can they talk?",
but "Can they suffer?"
This famous quote points to the so-called
'argument from species overlap.'
One might say that humans
matter more than other animals
because they are more intelligent,
but that's a bad argument
because some humans
lack many cognitive capabilities
and still deserve
full moral consideration.
For good reason, of course.
The same is true for other criteria
like the ability to speak,
or the ability to reciprocate.
If we acknowledge
that these are false criteria
for treating non-humans differently,
what arguments remain?
What are we going to tell the pig
as we unnecessarily slit its throat?
Does it have the wrong number of legs,
the wrong DNA,
the wrong skin color and shape?
We can't seriously mean that.
This reasoning would be
just as biased as discrimination based
on other outward criteria
such as sex or race.
The ethicist Peter Singer has popularized
the term 'speciesism'
to refer to this type of discrimination
against non-human animals.
At this point it is important to note
that opposing speciesism doesn't mean
that we should treat
all animals exactly equally.
For instance, it is not speciesist
to deny pigs the right to vote,
just as it's not ageist
to deny babies the right to vote.
So treating beings
of different species differently
is not speciesist if there are
relevant criteria for doing so.
If the suffering of a sentient animal
counts at least somewhat,
then the alarming prevalence
of animal suffering all over the globe
should be reason for devoting time
and resources to this important cause.
There are a number of organizations
working in this area.
In Switzerland, for example,
there is the organization Vegan Politics,
which tries to find
the most effective ways to help animals.
And animal suffering could generally be
a very cost-effective cause.
A single person who stops eating animals
saves about 1,000 animals
from a dreadful destiny.
Estimates suggest
that, as a lower bound,
a single dollar donated
to effective vegan charities
prevents about 100 days of suffering
on a factory farm.
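Multiplying such per-dollar rates through a budget shows why effectiveness comparisons matter; a minimal sketch, where only the 100-days-per-dollar lower bound comes from the talk and the other rates are invented purely for contrast:

```python
# Days of factory-farm suffering prevented per dollar.
# Only the first rate is the talk's estimate; the others
# are hypothetical, for comparison.
days_per_dollar = {
    "effective vegan charity (lower bound)": 100,
    "hypothetical average charity": 10,
    "hypothetical weak charity": 1,
}

budget = 50  # dollars donated
for name, rate in days_per_dollar.items():
    print(f"${budget} to {name}: {budget * rate:,} days of suffering prevented")
```

The same $50 prevents 5,000 days of suffering at the first rate but only 50 at the last, a 100-fold difference from the choice of charity alone.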
Do you remember the button?
But still,
identifying and comparing
potentially highly effective causes
is a very hard task.
Therefore, much more research
is needed on that front.
The Foundational Research Institute,
or FRI for short,
is a quickly growing organization
working on this issue,
with new researchers ranging from
physicists to psychologists joining in.
They reduce the uncertainties we face
and try to make
the biggest possible difference.
One thing is certain though:
with our decisions, here in our heads,
we decide about
how much horrible suffering
there will or won't be in the world.
Let us really imagine what this means
in terms of the responsibility
it bestows on us.
Let us have 'Impact through rationality,'
and therefore, let us be
the change makers of our time
by expanding our circle of concern.
Let us care about
the lives of all the beings
that themselves consciously
care about their lives.
And let us do it rationally,
and thus, effectively.
Thank you.
(Applause)
