[MUSIC PLAYING]
ADRIANO MANNINO: And thanks for
your interest in this topic,
"Effective Altruism,
Philanthropy
as a for-profit endeavor,"
and I'll shortly
delve into what that
means, for-profit thinking
and charity, how that can
be meaningfully combined.
Yeah, let's start off
with a definition of what
effective altruism is.
It's a practical philosophy
and social movement,
which you may have heard of,
that uses empirical evidence
and reason to determine the
most effective ways to benefit
others in global society.
Now, that may sound good, but
it doesn't tell us that much,
of course.
In any case, there are two
conceptual components--
altruism and effectiveness.
Altruism is essentially the
conceptual opposite of egoism,
so it means that we're sort
of practically oriented
in our action towards
helping others as well
and not just considering
our self-interest.
And effectiveness means
that we're trying to,
of course, optimally
achieve our goals,
including our altruistic
goals, so we're
trying to maximize the
probability of goal
achievement.
Now, why should we be altruistic
at all, one might wonder?
I mean, in economics,
there are various prevalent
theories, of course.
But there's a big strand, at
least in Western economics,
saying, well, you know, we should be selfish utility maximizers.
So that might
raise the question,
why be altruistic at all?
Why be interested in
benefiting others, too?
And there are various thought
experiments and arguments
to justify this
practical orientation.
So imagine you're
in the unfortunate
position of a firefighter faced
with two burning buildings,
one big, one small.
And let's say there are a
hundred people being trapped
at the moment in
the big building
and just one person being
trapped in the small building.
And you also received
the information
that unfortunately it
won't be possible to save
all of these people, so you're
faced with this moral dilemma.
You either can save these 100
people in the big building
or the one person in
the small building.
That's the moral
dilemma you're in.
And you know, for
simplicity, let's
say that you have, in
either case, a 100% success
probability, so you'll succeed
at saving these 100, or the one
person, depending on
the choice you make.
Now, of course, there's
always a third option as well.
If you're the firefighter,
you could just
decide to go have a
beer instead and not
be bothered with the
situation, so this
would be the non-altruistic
choice, of course,
and maybe the selfish choice,
the purely selfish choice.
Just go have a beer and
ignore the moral catastrophe,
the suffering and the
imminent deaths there.
But if we agree that option three is not an option, as most people do when faced with moral catastrophe, then we need to choose between options one and two.
And yeah, I mean,
it's a tragic--
it's a sad choice
because somebody
is going to suffer
and die anyway,
but it seems that, clearly,
in such a situation
we would opt for
the lesser evil,
and the lesser evil here seems
to be saving the 100 and not
the one.
So here we'd probably go for
option one and save 100 people.
And so if we agree, sort
of, with the background
reasoning here, then that's
an argument for two things.
One, that we don't just have
selfish or self-interested
goals.
We have at least some altruistic
goals as well, at least
in situations of
moral catastrophe.
And two, we care
about the numbers.
So when helping
others, it doesn't just
matter that we are helping.
It does also matter that we
try and help the greatest
possible number of people.
There are further arguments, which you also may have heard of, originating in practical philosophy, such as the drowning child thought experiment, going back to philosopher Peter Singer.
So imagine you're walking
past a lake or a shallow pond,
and you realize that a
child, a small child,
is drowning in there.
You look around, and you can't
see any parents or anyone else
that would be willing
and able to help,
and you realize that
this child's life depends
on what you do now.
So you can walk into
that pond, or that lake,
and just save the child.
There are no complications,
no risks or dangers involved
to yourself, so you could
just walk in and save a life.
Well, it's not entirely true.
There is a little complication,
and it's the following.
You're wearing pretty expensive
clothes and expensive shoes.
Maybe you happen to be on your
way to an important meeting.
Let's say your expensive
suit and shoes cost whatever,
a total amount of $1,000, and
you realize that, you know,
if you're now running or
jumping into that pond
and lake at no risk
to yourself, you're
going to ruin the
clothes and the shoes.
Let's also assume that there
will be no replacements,
so you'll just have to buy a new
suit and new shoes for $1,000.
So then the question
is, would you
save the life under
these circumstances,
in that situation,
or would you not?
Unsurprisingly, the
vast majority of people
say they would save
the life, of course,
if they didn't really
existentially need
these $1,000.
Of course, you can construct
a situation where you're like,
yeah, I'm going to
be starving if I'll
have to spend $1,000 in addition
to what I'm spending anyway.
But if you're still going to be
comfortably off, if that's just
going to mean a little
less luxury in your life,
then most people say,
yeah, in that case,
it's really a no-brainer that
they would save the life.
And this also shows
that we do indeed,
or most people do indeed, have
important altruistic goals
as well, and they would
be willing to sacrifice
quite a bit of money in order to
achieve these altruistic goals
to reduce the
suffering of others
and save the life of others,
at least if that doesn't
put themselves in sort
of an existentially very
uncomfortable or
dangerous situation.
So if we follow the reasoning
behind these two thought
experiments, the firefighter
and the drowning child, that
may have pretty significant
practical implications
because we can then
ask, well, OK, now
we do know that there are also
burning buildings out there.
There's a lot of
suffering out there.
A lot of people are dying, maybe
from preventable causes that we
could do something about.
So let's say, if we have
in our bank account $1,000,
or maybe many times as
much, that we could donate,
and we would still
be comfortably off,
why don't we do it?
So I mean, if it's a
no-brainer to save the drowning
child in that situation,
then one can ask, well,
wouldn't it also be just equally
rational, and a no-brainer
actually, to donate a lot more
if we're not already doing it?
And then, of course,
try and figure
out where this money could go
the longest way because it's
important to save as
many people as possible
as we saw in the first
thought experiment.
And that's precisely what the
effective altruism movement
is trying to do.
Of course, if we follow
the spirit of these thought
experiments, then one action
point is trying to check out
the material-- the philosophical
material, the scientific,
the economic, the
empirical material--
on strategic do-gooding.
Where should we donate
our time or our money
if we want to make a difference?
Then, one standard action point
within the effective altruism
movement has been trying
to donate at least 10%
of one's income.
Now, when I first heard of this
idea, donate 10% of my income,
I was like, yeah but
that's like a lot.
And yes, I do have
altruistic goals,
but I'm not a moral saint.
Isn't this too much
of a challenge?
Isn't this sort
of overdemanding?
But then when you actually
reason about it more, and check
out the happiness psychology and
happiness economics behind it,
many people conclude that it's actually quite unlikely. So if you live in a rich country and earn a good salary, it's quite unlikely that you'd be worse off when donating 10% of your income each month.
And quite the opposite,
there's quite solid
happiness economic, and
happiness psychological,
research suggesting
that donations often
tend to make the
donors happier as well.
So it can be a win-win, and this
is why many effective altruists
have decided to donate at
least 10% of their income.
And it's quite astounding what
can be achieved in this way.
So currently, on a
global level, there
are a few thousand
people at least
that strongly identify
with effective altruism.
I mean, maybe already
tens of thousands
that maybe loosely identify with the tenets,
but a few thousand that have
decided to donate 10% or more
and have taken a pledge.
This pledge is not
legally binding,
but it's sort of personally
and maybe socially binding--
a pledge to donate
10% over a lifetime.
And in aggregate, these
pledges are already worth
several billion dollars in
sort of promised donations,
and these donations
are now happening,
so it seems possible.
And the movement is only
just starting, essentially,
so it seems possible to compete
with billionaires, actually,
with this strategy,
even if we're not
billionaires ourselves.
The other main action point
is that effective altruists,
of course, don't just try to
donate money but also time.
There is an organization called 80,000 Hours, which
provides career advice to people
interested in making as much
of an altruistic
difference as possible.
On average, we work about
80,000 hours in our lifetime.
And of course, both in terms of the personal stakes and in terms of the stakes for the world at large, the altruistic stakes, this is a huge decision.
What are we going to do
with these 80,000 hours?
And it turns out that sort of
standard, traditional career
advice that people
have been receiving
if they were interested in
making an altruistic difference
isn't very useful
or very rational.
So I mean, if you poll people and ask them
for examples of standardly
altruistic careers,
many will, for instance, say
doctor in the developing world.
That's a standard intuition.
But consider this: if the goal is to make as much of a difference as possible, you need to work on something that wouldn't happen otherwise.
So if you take the
job of a doctor,
and if it's the case
that if you had not
taken that job somebody
else would be doing it,
then maybe you're
not making much
of a counterfactual
difference actually.
And in some cases, it can
be a much better strategy
to, say, decide to go and
earn as much money as possible
and then give quite a
large fraction of that,
for instance, in order to enable
several other people to become
doctors in developing countries.
So with that sort of strategy,
often called earning to give,
it could be possible
to make a much
bigger counterfactual
difference and also
to multiply one's impact.
And of course, that's
just one consideration.
These considerations could
go in either direction.
It depends on what the
biggest bottlenecks are.
I mean, if money isn't
a big bottleneck,
then probably
earning to give won't
be that promising a
strategy, so it depends
on the specifics, of course.
But it turns out that
traditional altruistic career
advice often isn't very useful.
Now, another
question, of course,
is well, what are the biggest,
metaphorically speaking,
burning buildings out there?
A traditional
focus of do-gooders
has been just turning to
local problems, problems
in our own society,
and there are sayings
like, charity begins
at home and so on.
But if we strive to make the
biggest possible difference,
that's not a very
plausible focus area
because, at least if we live in
rich and developed countries,
it's quite hard to actually save
a life for, say, a few thousand
dollars, and most of the
time totally impossible.
So if we look into our
health care systems,
it usually takes several
hundred thousand dollars
in order to save a life,
and health care economics
can calculate that
pretty precisely.
But because of this dynamic of
diminishing marginal utility
of money, it tends
to be a lot easier
to save lives, and reduce
suffering and advance
society in poorer countries.
So a more plausible focus
area would, for instance,
be looking to the
refugee crisis.
I mean, and we can roughly look
at the victim counts, people
that are suffering.
It turns out about 50 or 60
million people are currently
categorized as
refugees, but it seems
that there are much bigger
burning buildings still.
For instance, global
poverty, extreme poverty,
still affects about
800 million people.
The standard definition
of extreme global poverty
is an individual living
on less than $2 per day.
That's purchasing power
parity, of course.
And that means, usually means,
permanent undernourishment,
suffering from diseases
that are actually
totally treatable
because you can't
afford any medical
treatment and so on.
So global poverty and health is
a really big burning building
as it were, a really big cause area.
Another sort of general cause
area that effective altruists
tend to be interested in is
global catastrophic risks
because, of course, when
we're dealing with risks
of a potentially
global scale, well,
literally everybody on earth
could be affected, and not
just the present generation but
also all future generations.
That's also a
consideration important
to many effective altruists.
So if we think that
the numbers count,
that the victim
counts are important,
then it seems that, at least
if civilization goes on,
most of the people that will
ever live will of course
live in the future.
And this could then lead to
an argument of the sort--
well, of course the present
generation is intrinsically
important and certainly very
instrumentally important,
but all the future
generations taken together
could be enormously and
overwhelmingly important
if there's ways to positively
affect their well-being.
And of course, some
global catastrophic risks
are environmental,
some are political,
international warfare,
say, some are technological
and there's also, of
course, an overlapping
area between all of these--
I mean, nuclear war of
course, at the intersection
of technology and
international politics.
But there are technological
risks of various kinds
and also some stemming from
our action or our omission.
For instance, one cause area some effective altruists
are interested in is
biotechnology in general,
so biosecurity.
Of course, here there
are global risks
from just harmful action if
you consider synthetic biology,
the possibility to create,
artificially create,
bacteria or viruses that
could cause pandemics,
so that's certainly a risk.
But then some risk
also could sort of
stem from our
omission, our failure,
to advance certain technologies.
A broad cause area some
effective altruists
are interested in
is transhumanism,
also thinking about the
human condition from a far
future perspective, a
biologically informed
perspective, and some
subcause area there
is trying to fight, and
ultimately eliminate,
aging, for instance.
So if we take a far future
perspective, and let's
say it's technologically
possible to sort
of fight diseases and the
diseases of old age to a point
where we're really
no longer aging
in a meaningful
biological sense.
So if that is possible,
it seems like, if we
take sufficient technological
action to bring it about,
that could be a huge benefit.
And in a sense maybe, of course,
you know, that's a separate,
long discussion-- to what extent our biological condition also represents a catastrophe in a way--
but this is a case where
technology, of course,
through the
advancement of medicine
has already brought
us huge benefits
and where failure
to act now to invest
in the right kinds of,
maybe, sort of very
visionary and utopian
research could also
mean that certain catastrophes
go on for longer than would
be necessary.
So eliminating aging is maybe a speculative cause area, but one that some effective altruists have been interested in.
Another cause area
is animal suffering.
I mean, this is also a long
philosophical and empirical
debate.
So if we suppose that
animals are conscious,
too, can suffer as
well, and if, say,
intelligence is not super
relevant for moral status--
and that seems to be what
we believe for humans.
I mean, when humans
are concerned,
we're usually not saying,
well, you know these humans,
say children, small children,
are less intelligent,
therefore their suffering matters less.
If we're not going to
go for such an argument,
then we might reason,
well, OK, you know,
animals might be far less
intelligent than we are,
but if there's good evidence
that they can suffer as well,
their numbers are also
enormous, and maybe
there's something we can do to
reduce their suffering as well.
So some effective altruists
have taken that perspective
and are, therefore, trying to
do effective work in that cause
area.
And there's more cause
areas, of course,
but that's just
a rough overview.
And as you can see,
unsurprisingly, I
mean, the world is a
hugely complex place,
especially also when it comes
to trying to improve it.
So that's just sort of one
consideration-- what are
the biggest burning buildings?
But then, of course, if we want
to know more precisely and more
specifically what the
highest-impact opportunities
are for us specifically,
more considerations
will be relevant.
So scope of the
problem, that's sort
of the size of the
burning building,
is just one relevant
consideration.
I mean, another consideration,
needless to say,
is the solvability or the
tractability of a problem.
So if you have two problems,
one big, one small,
and if it turns out that the big
problem is just not solvable,
not realistically solvable,
then, well, yes, you know,
working on it
won't be effective.
And it could be more
effective to work on a smaller
problem with high solvability.
So that's definitely another
relevant consideration.
A third one is
neglect in society,
and this is based on the
sort of economic assumption
that activist resources, so
donations of time and money,
also tend to have
diminishing marginal returns.
So if, say, you're
one of the very--
if there's a big problem
out there in society,
and you happen to be
one of the first people
to help address
it, then the chance
is much higher that
you're going to be
able to make a big
difference, that maybe you're
going to be able to
make contributions
of time, and money, of ideas,
that wouldn't happen otherwise.
But if the problem isn't
highly neglected in society,
so if there are already huge
numbers of people addressing it
both in civil society, and in
politics, maybe in research,
and you're just one additional
person there contributing
money or time, then the
probability of course
will be much lower
that you're going
to make a huge difference that
wouldn't happen otherwise.
So neglect in
society is definitely
another relevant consideration.
Last but not least, personal
fit and comparative advantage,
also of groups, and
organizations and then
of individuals, is
relevant as well.
So I mean, yeah, if you
have a big problem that's
highly solvable and
neglected in society,
but it's just a very bad
fit-- you don't really
have any required
skill; you can't really
make a huge contribution to
that problem-- maybe it's still
better to work on
something else.
So that's definitely also
an important consideration--
your interests, your skills,
your motivation, of course.
You know, if you're
an effective altruist,
a core thing to work on is
also minimizing the probability
that you're going to burn out at
some point, because that would
of course hurt your impact
big time, so all of these
should factor into the
overall assessment,
and that can, of course,
get hugely complex but also
very interesting.
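As a rough illustration of how these four considerations can be weighed against each other, here is a toy Python sketch. The multiplicative scoring rule, the 0-10 scales, and the sample numbers are all illustrative assumptions, not a formula from the talk:

```python
# Toy heuristic: combine the four considerations multiplicatively, so that
# a near-zero score on any single factor (e.g. terrible personal fit)
# sinks the overall score. Scales and ratings are illustrative assumptions.

def cause_score(scope, solvability, neglectedness, personal_fit):
    """Each factor rated on a rough 0-10 scale."""
    return scope * solvability * neglectedness * personal_fit

# Hypothetical ratings for two candidate focus areas.
causes = {
    "global health": cause_score(scope=8, solvability=8, neglectedness=6, personal_fit=7),
    "local charity": cause_score(scope=3, solvability=7, neglectedness=2, personal_fit=9),
}

best = max(causes, key=causes.get)
print(best, causes[best])
```

The point of the multiplicative form is exactly the one made above: a big, solvable, neglected problem can still lose out to something else if the personal fit is very bad.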
Now let's zoom into
specific cause areas,
and look at some more
examples and also, then,
specific data that enables
us to make cost-effectiveness
and impact evaluations.
In terms of global health--
of course, that's
strongly related
to general global poverty--
there are various diseases
that affect a huge number,
and kill a huge number,
of people every year
and every day.
So for instance, the
so-called big three-- malaria,
tuberculosis, HIV--
kill many more people each
and every day than, say,
political violence, and
oppression and warfare
have tended to kill in a year.
Now that's another
thing, of course.
I mean, I'm not saying
that political action
can't be effective.
I mean quite the opposite.
So I mean, even in terms of
trying to fix global health,
ultimately, if you can sort of
go for a concerted political
action, systemic action, that
can be hugely effective--
there are also risks to that--
but it does seem like many do-gooders sort of prematurely jump into politics because that's been a traditional focus.
Yeah, we need to address something politically
or we need to
address specifically
political problems, but
actually, if we just
look at the victim
counts, it's not obvious
that this should be the focus.
So as I mentioned, if we
consider these big diseases,
the victim count at least
has tended to be much higher.
The solvability-- the medical, technological solvability or preventability-- also seems very high, at least in principle.
That's also often a huge
complication with politics.
It's so messy, and it's
unclear whether your campaign
will succeed and so on.
Neglect in society also
comparatively high.
So of course, a lot of money
is going into medical research,
but the bulk of
the money is going
into research that aims
to address diseases
that are prevalent in Western
societies and rich societies.
Why?
Well, one reason is that you
can make a huge profit there
because they're going to
sell the treatment to people
in rich countries, whereas
it's much harder to make
a great profit addressing
malaria which predominantly
affects the poorest people.
But in terms of achieving
our altruistic goals,
if we care about helping
as many people as possible,
this can be a great focus
area of course-- you know,
trying to address just the
diseases that tend to affect
the poorest of the poor.
Now, interestingly, empirical
studies, empirical data,
show that there are
massive differences in cost
effectiveness, so in terms
of amount of lives saved,
amount of suffering reduced,
between different global health
interventions.
And so the title of the talk
said that effective altruism
is sort of trying to
approach altruism and charity
in a for-profit way.
And I didn't mean to say
that we're attempting to make
a monetary profit-- not at all--
but it's sort of the mindset
of trying to maximize
something, maximize a profit.
But here, of course,
the profit is
being defined in terms of lives
saved and suffering reduced.
And interestingly,
this hasn't really
been the traditional
focus of charity,
because in traditional
charity what
gets emphasized is sort of more the emotional side of it, right?
I mean, maybe you
have a relative
that died from cancer, and
of course that affects you
deeply emotionally, and
very legitimately of course,
and then you're
emotionally moved
with your sort of
immediate compassion
to do something about that.
And effective altruism isn't
trying to counter that.
It's just saying, yes,
let's take our compassion.
Let's take our emotions.
Let's take our
heart, but let's also
combine that with our
head, with our rationality,
our optimization
power, that we are
of course standardly
applying when
it comes to actual
for-profit investments,
you know, in terms of monetary and personal profit.
When we're investing there
for a monetary profit,
it's totally obvious
that of course we're
going to be interested
in the data,
and we're going to be highly
interested to know, well,
you know, if I'm going to invest
into that particular option,
what's the probability
of success and so on.
How much am I going to achieve?
And this is precisely
the kind of mindset
that effective altruism tries to
apply to the domain of altruism
and charity as well.
Let's look at HIV for instance.
So this is data from the World Health Organization. We have various interventions that one could go for--
anti-retroviral therapy,
condom distribution, treatment
and then, at the bottom,
prevention through education
for high-risk groups and
mass media education.
And you can see the
estimated impacts
based on randomized
controlled trials
in many cases can vary a lot.
As you can also see by the
varying shades of the bars,
the uncertainty is
also quite huge.
So these shades there, the fainter parts, represent the uncertainty, the intervals that we should probably rationally have based on the studies.
So the uncertainty is
huge, but we can also
see that, despite this
uncertainty, which we should
of course factor in, the
expected differences in impact
are still enormous.
And so if we follow the
initial thought experiment
of the firefighter and
agree that the numbers count
in altruism as
well, then of course
it's crucial to factor
this information in when
making donation choices.
Another example is
malaria prevention,
and this is actually
a cause area,
or a subcause area, that
does particularly well,
tends to do better, at
least at the moment.
I mean, this can
also change based
on the situation on
the ground changing,
the available evidence
changing, but malaria prevention
is a standard intervention
that's currently
recommended by effective
altruist organizations
in the space of world
poverty and global health.
So with organizations such as
the Against Malaria Foundation,
you can purchase and distribute one bed net for just $5, and this is going
to protect two people that
can then sleep under these
nets, and this offers
really strong protection.
Of course, most
infections happen at night
when people are sleeping.
And yes, there are dozens of randomized controlled trials here, which is really good evidence--
I mean, this is not
physics of course.
It's not a hard science
by any standard,
but I mean this is backed
by really good evidence,
comparatively really good
evidence in that space.
And so of course, if you
scale up these numbers,
if you donate $100,000, then
you can protect 40,000 people.
That means many villages or a
whole football stadium, say.
And donating $100,000 is quite feasible for most people, actually, in Western, in rich societies. I mean, you don't need to donate $100,000 in one go, but if you donate $5,000 or $10,000 a year, well, then you're going to get up there.
And so it's quite amazing
what we can actually
do even just as individuals.
The estimated cost per
life-year saved is just $150
with that kind of intervention,
so it's pretty amazing.
With a donation of $150
that doesn't affect
many, or most, people in
rich societies at all,
we can actually
give a poor person
one more healthy year of life.
Then, of course, we can calculate,
based on that, the average
cost of saving a whole life.
Now of course,
strictly speaking,
we can't save a
life at the moment
because everybody
is going to die,
so we'd need the elimination of aging through biotechnology, but
what health care economists do
is they talk about the
cost of one life saved
as equivalent to saving
30 years of healthy life.
So that's usually what
health care economists mean
when they say, saving a life.
So 30 years-- but yeah,
this is the rough cost
here for one year, one
healthy year of life.
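The arithmetic above can be sketched as follows. The figures are the illustrative ones quoted in the talk ($5 per net, two people per net, $150 per healthy life-year, 30 healthy years per "life saved"), not current charity estimates:

```python
# Back-of-the-envelope cost-effectiveness arithmetic, using the figures
# quoted in the talk (illustrative numbers, not current charity estimates).

COST_PER_NET = 5           # dollars per insecticide-treated bed net
PEOPLE_PER_NET = 2         # people sleeping under one net
COST_PER_LIFE_YEAR = 150   # dollars per healthy life-year saved
YEARS_PER_LIFE = 30        # convention: one "life saved" = 30 healthy life-years

donation = 100_000
nets = donation // COST_PER_NET
people_protected = nets * PEOPLE_PER_NET
print(people_protected)         # 40000 people, as in the talk

cost_per_life_saved = COST_PER_LIFE_YEAR * YEARS_PER_LIFE
print(cost_per_life_saved)      # 4500 dollars per 30-year-equivalent life
```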
There are various organizations
doing sort of in-depth charity
research, collecting
data, analyzing
it doing the relevant
calculations,
and the leading
organization currently there
is GiveWell, GiveWell.org.
And yes, they are
extremely transparent.
You can check out all of
their research on the website
if you're interested,
and they're coming up
with top recommendations
for where
to donate based on their
calculations every year,
and of course, there's
some fluctuations
if the evidence changes.
Generally speaking,
global health
seems to be a really
promising cause
area because there are
these huge medical short-run
benefits.
And then, as some studies suggest, there also tend to be long-run societal benefits.
So one randomized controlled trial suggested that kids that were able to sleep under bed nets and were protected from malaria missed far fewer school days and, later on, tended to earn 20% more than kids that didn't have the privilege to sleep under these bed nets.
And another advantage is that--
I mean, these are some
worries that are often
discussed in these debates.
Well, you know, but in terms
of development aid, of course
there have been many
failures as well.
So many people are, and actually
rightly so, quite cynical
about the impact
of development aid.
And indeed there have
been many failures,
but I'd say, well, that's
just an additional argument
for effective altruism, for
being serious about the data,
and trying to figure
out what actually helps
and what may harm people.
So even if we believed that sort
of all of the development aid,
maybe also the state-sponsored
development aid
that's coming from
Western societies,
was maybe net
neutral, or maybe even
net negative, just
statistically speaking there
must be at least some
interventions that
are positive, right?
So if we assume that all of the interventions that we've sponsored in terms of development aid were net neutral on average, then probably there
is some distribution where
some interventions are harmful,
many are neutral-ish and
some are really good.
And what effective
altruism is about
is sort of trying to find and
isolate the really good ones
and scale them up.
And of course, you know, an
advantage of working in health
is that this seems to
be maybe the paradigm
example of a universal good.
I mean, even if people's
cultural preferences vary
a lot, and people's conceptions
of the good life vary a lot,
well, you know, everybody
needs to be healthy in order
to achieve any goal, so this is
kind of a universal goal that
seems safe to promote.
Now, taking this further, what
about uncertainty and risk?
I mean, yes, it's
totally legitimate
to ask, well, what's
the probability
that a certain anti-poverty
intervention fails
to have the intended
effect, or maybe there's
even a probability that it
will have a harmful effect?
And of course, we should factor that in as well,
and we can start, you
know, at least in theory,
to address that sort of issue
by modifying the initial thought
experiments.
I mean, what if you
have a 50% probability
to save a drowning child?
You know that maybe you're not going to make it in time.
Time is short.
Maybe the child will still
die despite your effort.
I mean, in that case,
if the situation
is either not do anything
or do something and have
a 50% chance of success,
or even maybe just
a 10% chance of saving that
life, we would probably
still say, yeah, it's worth it.
And then, of course
we can introduce
further complications.
What if we have, let's say,
an 80% chance to save the life
but a 1% chance to cause harm in some way.
Maybe there are other
children around,
and maybe we would,
whatever, you know, I mean,
potentially push a child
into the lake or whatever.
I mean, complications
could happen
with harmful,
unintended side effects.
And then we can
try and reason, OK,
under what circumstances, with
what probability distributions,
would we still go for it?
And we can also
consider dilemmas like,
what if we have a 10% chance to save a hundred lives versus a 100% chance to save five?
What would you do?
What should you do?
And what are the
thresholds maybe?
So maybe we could start
with a 100% probability
to save a certain
number, go down to 90%
and compare it to another
decision possibility.
And a standard
framework for addressing
this kind of situation is
the expected value framework,
so just taking the
probability times the stakes.
So here it's a 10% chance times 100 versus a 100% chance times 5. Expected value would recommend the first option, since 0.1 times 100 equals 10, which is greater than 1 times 5 equals 5.
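The expected-value comparison just described can be written as a minimal Python sketch (numbers from the example above):

```python
# Expected value: probability of success times the number of lives at stake.
# Numbers are from the example in the talk.

def expected_value(p, lives):
    return p * lives

risky = expected_value(0.10, 100)  # 10% chance to save 100 people
safe = expected_value(1.00, 5)     # certainty of saving 5 people

# Risk-neutral expected value favors the risky option:
# 10 lives vs. 5 lives in expectation.
print(risky, safe)
```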
But of course we
could ask, well,
but shouldn't we be
risk-averse at least a bit?
Doesn't risk
aversion make sense?
So if we have a 100%
probability to save
5 versus just a 10%
chance to save 100,
we might feel uneasy about
going for the 10% option.
I mean, one reply is, if
that's not a one-shot game,
but if many people do
the same, or if you
do the same, many
times over, well,
then, you know, due to
the law of large numbers,
of course you are going to save
more people in fact, if you go
for the highest expected value.
So that's one reply.
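The law-of-large-numbers reply can be illustrated with a small simulation; the 10%-for-100 versus sure-5 numbers echo the talk, while the simulation itself is my own sketch with an arbitrary seed:

```python
import random

random.seed(42)  # arbitrary seed, just for reproducibility

# Repeat the same gamble many times: a 10% chance to save 100 lives,
# versus a guaranteed 5 lives per round.
ROUNDS = 10_000
risky_total = sum(100 for _ in range(ROUNDS) if random.random() < 0.10)
safe_total = 5 * ROUNDS

# By the law of large numbers, risky_total / ROUNDS approaches the
# expected value of 10 lives per round, so following the
# higher-expected-value policy saves roughly twice as many lives
# in the long run.
```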
But yeah, even if
it's one shot, I
think a pretty good
case can be made here
for going for risk neutrality,
so expected value as opposed
to some risk-averse function.
I think our risk
aversion comes from our
being accustomed to thinking
about decision situations that
are about earning
money or making
money for personal utility.
And if we're about making
money for personal utility,
then risk aversion seems
to make perfect sense.
Why?
Because money tends to have
diminishing marginal utility,
at least in terms
of personal gain.
So the first $10,000
that you earn per year
are hugely important.
They enable you to survive.
The next $20,000
that you add are
a bit less important,
and so on and so forth.
So that seems to be the dynamic
when it comes to personal gain.
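This diminishing-marginal-utility point can be made concrete with a standard concave utility function; log utility is a common textbook stand-in, my choice here rather than anything from the talk:

```python
import math

def utility(income):
    # Log utility: a standard illustration of diminishing marginal
    # utility of money, not a claim about anyone's real preferences.
    return math.log(income)

# Utility gained from each successive $10,000 of annual income:
gains = [utility((k + 1) * 10_000) - utility(k * 10_000)
         for k in range(1, 5)]
# Each extra $10,000 adds strictly less utility than the one before,
# which is exactly what makes risk aversion sensible for personal gain.
```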
But that same
argument doesn't seem
to apply when it comes to
altruism because there--
yeah, if you save the
first life, great.
If you save the
second one, it's not
like the second one
is less valuable.
It's similarly valuable
to the person themselves.
And so if we're
being altruistic,
I think we should not assume
that there's diminishing
marginal utility there, and so
this kind of argument for risk
aversion no longer applies.
So I would probably argue
for going for expected value
maximization when
it comes to altruism
and taking the probability
times the stakes.
And this leads to a concept
that effective altruists
call hits-based philanthropy
or hits-based giving.
I mean, let's
consider an example.
In the 20th century-- it's
again from the health area--
smallpox killed more than
300 million people up
to its eradication in
the 70s, so a huge evil
in terms of the consequences.
That's more than all
wars, all genocides,
all political violence
and famines combined,
so an insane victim count.
But we were able to beat it
and have saved an estimated
60 to 120 million lives--
some uncertainty there--
since the eradication.
And so there's a pretty
interesting and complex story
behind how that
happened, and [INAUDIBLE]
it also seemed quite
unlikely to succeed.
You know, many
people were saying,
no, these are crazy plans.
It's not going to succeed in
terms of smallpox eradication.
But the point is: if
the stakes are so high,
then even if the success
chance is very low,
the expected value
can still be enormous,
and it can be extremely
worthwhile to pursue something
that's most likely to fail.
So you're pursuing a plan,
deliberately pursuing
an altruistic plan, that's
most likely to fail,
maybe only has a
5% success chance.
But because the
stakes are so high,
the expected value
can still be enormous,
and so that's the concept of
sort of hits-based giving.
Trying to do many things
at once, trying to have
various individuals, groups,
and organizations
work on such situations of
low probability of success
and high stakes.
Of course, the for-profit
analogy, again,
is hits-based investment.
I mean, you can do that with
your investment portfolio.
If you're pursuing many such
situations or opportunities
at once with low success
probability but high stakes,
that's also a form
of hedging, needless
to say, due to law of
large numbers again,
and this concept can be
transferred to philanthropy
in very productive ways.
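A back-of-the-envelope sketch of why a portfolio of long shots hedges the risk; the 5% success chance echoes the figure mentioned earlier in the talk, while the portfolio size and independence assumption are my own illustration:

```python
def p_at_least_one_hit(n_projects, p_success=0.05):
    # Probability that at least one of n independent long-shot
    # projects succeeds: 1 minus the chance that all of them fail.
    return 1 - (1 - p_success) ** n_projects

# A single 5%-chance project almost always fails, but a portfolio
# of 20 such projects already has roughly a 64% chance of at least
# one "hit" -- the law of large numbers working as a hedge.
single = p_at_least_one_hit(1)
portfolio = p_at_least_one_hit(20)
```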
Now, some applications
of hits-based giving
are to be found specifically
in the cause area
of global catastrophic
risks, some
of them being the worst-case
climate change scenarios, which
might bring really
extreme global chaos,
and maybe even some
extinction risk.
But that seems-- I mean,
climate change seems pretty bad,
but the absolute worst-case
scenarios seem pretty unlikely.
But still, the
stakes are enormous.
Maybe nuclear war seems
unlikely, at least
on an extinction-level scale,
but the stakes, again, extreme.
Biosecurity risks may be unlikely,
but it's not clear.
I mean, I'm also not claiming
that with these risks
the situation is
necessarily of the sort
that the risk is
low probability.
I mean, if the risk
is high probability,
then we should be
addressing it all the more.
But the argument is even if
the risk is low probability,
and even if our success
chance of doing something
to mitigate the
risk is also low,
the expected value of trying
hard can still be pretty high.
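To make that argument concrete, here is a hypothetical calculation; neither the probabilities nor the stakes come from the talk, they are made-up numbers purely to show the structure:

```python
def mitigation_expected_value(p_risk, p_success, stakes):
    # Expected lives saved by a mitigation effort: the probability the
    # catastrophe would otherwise occur, times the chance our effort
    # averts it, times the lives at stake.
    return p_risk * p_success * stakes

# Hypothetical numbers: a 1% catastrophe risk, a 5% chance our effort
# averts it, and a billion lives at stake.
ev = mitigation_expected_value(0.01, 0.05, 1_000_000_000)
# Around 500,000 expected lives saved -- enormous despite both
# probabilities being small.
```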
And a further cause area
that some effective altruists
have been thinking
about and researching
is artificial intelligence,
opportunities and risks
from a possible transition
to superhuman intelligence
and how to maybe steer
such a transition
to maximize the benefits.
And in order to conclude, let's
zoom in on this cause area
briefly.
I should maybe premise
it by saying that I'm not
myself an AI expert.
I'm a trained
philosopher and have
done a lot of entrepreneurship.
And if you're interested
in what the AI
experts in effective altruism
have to say more specifically,
then I'd encourage
you to check out
the organizations
and the websites
that I'll mention at the end.
So this will just
be a brief intro
and philosophical overview.
So many technical
experts do seem
to predict the emergence of
artificial superintelligence,
so machines that
outperform humans
in literally every domain
of cognitive interest,
later this century.
That seems to be a prediction
that many technical experts are
willing to make.
And of course, if
that materializes,
it's likely to be
a change that's
more disruptive on a global
scale than the evolutionary,
sort of, ape to human
transition has been.
And this has been an extremely
disruptive transition,
of course-- so the transition
from apelike brains to human
brains--
I mean, up to the point
where the goal achievement
and the very survival of apes, of
chimps depends a lot more on us
humans than it depends on them.
So if artificial
superintelligence,
or superintelligences, emerge--
and I mean, you know, there
are various scenarios.
Maybe there will also be a
corresponding enhancement
of our human brains,
various scenarios there
that are possible.
But I mean, then, yes, it
seems quite likely
that our goal achievement
and our very survival
will depend more
on these superintelligences
than they would depend on us.
And this raises
the question, well,
will these AI's goals be stably
aligned with our goals or not?
And it seems that if they are
stably aligned with our goals,
then that could be the biggest
opportunity for humanity ever.
I mean, of course, technological
progress has brought humanity
a great many benefits,
and this could sort of be
the ultimate benefit, maybe the
last invention we need to make.
And in principle, it could
solve all our problems,
because the tool
that we have been
using, of course, in
history to solve problems
is our intelligence.
And if we end up with
a superhuman
intelligence that's
beneficial, meaning that's
pursuing goals that are
identical with ours, goals
that we'd consider good and
valuable, then that could be,
yes, really the biggest
opportunity for humanity ever.
But there could
also be some risks.
Namely, if these
superintelligences' goals
are not aligned, or
not stably aligned--
I mean, it could also be the
case that they are aligned
at first but then sort
of, because there's
further developments,
become misaligned.
That could also be a
problem, so there could also
be the risk that we're
going to be faced
with a kind of
superintelligent power
that's not stably
goal aligned with us.
So it seems that this transition
could be extremely disruptive.
The stakes could be
enormous, literally
the biggest stakes ever.
And depending on whether goal
alignment will be the case,
the outcome could be extremely
good or quite dangerous.
And that's basically
the problem context
that many effective altruists
have been thinking about.
Now, from a sort of
more philosophical,
non-technical point of view,
also in the public debate,
some people have been
dismissing the whole argument
for various reasons and,
well, their common denominator
is often that they think that,
well, the probability
of these scenarios
materializing is super low.
And even if the
probability were high,
then the probability
would be low of our
being successfully able
to steer the development
in any direction
that we would prefer.
So yeah, some people say,
well, the probability
of any fears of goal
misalignment applying
is extremely low,
and the probability
of goal alignment
succeeding through,
sort of, a strategic effort on
our part is also extremely low.
And from this they
conclude that it's not
a cause area worth pursuing
in any serious way.
But sort of from a decision
theoretic and philosophical
point of view, I'd say,
well, even if that were
completely correct-- let's
say it were extremely unlikely
that superintelligence would
emerge at all, or would only
emerge very far into the future;
and if we also add
that it's, in any case,
extremely improbable that
we would sort of succeed
with a strategic effort
to try and maximize
the probability of an
awesome, a utopian, outcome;
even if both of
these points apply,
well, the expected value
could still be enormous.
So even knowing that our
deliberate effort at steering
the development is
unlikely to succeed,
even if that were the case, the
expected value of trying hard
could still be high.
So that's, I think,
an important point
to make from sort of decision
theoretic perspective.
And that brings me to the
second-to-last slide already.
What I've tried
to do in this talk
is give a brief intro,
like a justification,
for the spirit of EA, and an
overview of main strategies
and cause areas pursued.
And last but not least, I'd like
to mention some organizations
and their websites if you're
interested in checking out
the material in greater detail.
I did mention GiveWell,
the charity research
think tank mainly focusing
on the cause area of world
poverty and global health.
Then there is Giving What We Can,
which is an organization where
you can sort of take a pledge,
take this 10% pledge that I
mentioned.
And several thousand
people have already
taken this pledge to donate
10% to the most effective cause
areas they can find.
There's 80,000 Hours, the
organization providing
career advice in the
[INAUDIBLE] area,
there is the Effective
Altruism Foundation
doing various things, also
providing some career advice
and donation advice.
And for the cause
area of AI safety
specifically, there's the
Berlin-based Foundational
Research Institute whose
research you can check out,
or the US-based Machine
Intelligence Research
Institute, MIRI, which
you may have heard of.
And yeah, so as I said, I'm
a philosopher by training,
but you can find the computer
science and machine learning
literature by people who believe
that this is an important cause
area on these
websites, for instance.
And to conclude-- so this
is sort of the animating
spirit behind all of these
organizations, a keen awareness
that this is probably, in a
very real and urgent sense,
the situation we
are in globally.
It may not feel that
way evolutionarily
because we are still running
on sort of Stone Age brains,
and emotionally we have a hard
time grasping the situation
of a global village,
which is really
an interconnected
global village.
That's a historical first.
So emotionally we're not
really able to grasp that,
but intellectually we are.
And it's probably true
that this is our situation,
but it's also true that having
a beer from time to time
is supremely
strategically important.
We need to minimize
burnout risk.
And yeah, the effective
altruist communities
all around the world
are very welcoming.
And there's also a
community in Zurich
here, pretty big community
in Switzerland and Germany.
And yeah, if you're
interested, these
are also just the kind of people
that like to go grab a beer,
and have philosophical and
also technical discussions.
And with that being
said, thanks again
for your interest
and your attention,
and I'm looking forward to
your questions and comments.
SPEAKER: Great.
Thank you so much.
[APPLAUSE]
