So the last talk was really, really fantastic
and unfortunately I'm going to follow it up
with a much drier sort of academic talk, but
hopefully it will be interesting in a different
kind of way.
So right at the center of effective altruism
is this idea of cause prioritization.
There are lots of problems in the world that
we could be trying to fix.
There are lots of opportunities we could be
taking advantage of, and we want to know which
are the most important.
And there's a sort of standard list of candidates
which you may have gotten from Natalie's talk
if you haven't seen them before.
So there's global poverty, there's animal
welfare, there's risks from artificial intelligence,
biosecurity, nuclear security, climate change,
et cetera, et cetera.
And each of us individually or as groups,
as organizations, we have to think "Which
of these problems are the most important?
How do we want to sort of divide our time
and our energy and our money?"
And suppose that we reach the conclusion that
one of these cause areas, say animal welfare,
is the highest priority.
It's the place where, say, I as an individual
can do the most good.
Then a natural next question to ask is "Given
that I think animal welfare is the most important
cause area from my standpoint, given my capacities,
et cetera, should I sort of go all in on animal
welfare?
Should I focus exclusively on animal welfare?
Or should I also be trying to have an impact
on other areas, say on biosecurity or on
artificial intelligence?"
Now one kind of consideration that might seem
relevant to this question of whether or not to
diversify is uncertainty.
We could be wrong in all sorts of ways
about which cause is most intrinsically important,
about where we can do the most good, et cetera.
And a particular kind of uncertainty on which
a lot of my academic work has focused, and
which is going to be the focus of this talk,
is what philosophers call normative uncertainty:
uncertainty about basic moral principles or
other kinds of normative principles.
So, to get a show of hands: who here, if
I say Kantianism/utilitarianism, knows
what I mean?
And for whom is that gibberish?
Okay.
Cool.
So I'll try to balance this somehow.
But look, here's a simple example, right?
A lot of people's common-sense moral
intuitions say we have deep moral responsibilities
to other human beings, maybe particularly
to the human beings around us.
We don't have the same kind of moral responsibilities
to non-human animals.
But utilitarians famously say, and a lot of
effective altruists say, no, no, no, we have
basically qualitatively the same kinds of
responsibilities to non-human animals that
we do to other humans.
All right.
So here's one way you might get worried
about a question like this.
So I think animal welfare is the most important
cause area.
Why?
Well I think pain is bad and animals experience
pain.
But, you know, maybe what really makes pain
bad in humans is that it's conjoined with
some other more complex mental states that
we're capable of, call it suffering, and maybe,
say, chickens aren't capable of that.
And so I'm worried that these experiences
in chickens, which I think have moral significance,
maybe don't have any at all.
Now suppose that I've sort of accounted for
that uncertainty.
I've discounted the importance of animal welfare,
based on my uncertainty whether animal suffering
really matters.
And it still seems to me like animal welfare
is the most important cause area.
Now the question I'm going to focus on in
this talk is: even given that I still think
animal welfare is the most important cause area,
does my uncertainty nonetheless give me some
reason to diversify more?
To spread out my resources more across other
cause areas that I think are also important?
Now I should say, just when I was going through
these slides, I sort of realized that most
of what I'm going to say, not all of it but
most of it, is really about the impact of
uncertainty in general and not this thing
I'm calling normative uncertainty in particular.
So that's good, right?
It means what I'm going to say is sort of
more generally applicable, applicable to a
wider range of considerations.
We'll get to some stuff at the end, if we
get to it, that's more specific to normative
uncertainty.
So here's the plan.
I'm going to run through four ways in which
you might think that being uncertain about
basic normative questions and, at least for
the first three of them, just being uncertain
in general, gives me reason to diversify more.
My conclusion is going to be that none of
these is really super persuasive, so none
of them definitely implies that you or I,
or most philanthropic, effectively
minded, altruistic agents, have more reason
to diversify than we otherwise would.
But in typical signaling-my-epistemic-modesty
fashion, I'm going to say it's not obvious
that these arguments fail either.
So there might be something here that's worth
thinking about.
And if there is time at the end, which I suspect
that there won't be, I'll try to relate this
back a little bit to some other reasons to
diversify across different cause areas that
don't sort of intrinsically have to do with
uncertainty, but do seem to have some connection
with these uncertainty-based reasons.
Okay.
So.
Reason number one.
Diminishing marginal value.
Raise your hand if this expression is familiar
to you, and raise your hand if it isn't.
Okay.
Cool.
So very quickly, diminishing marginal value
is this phenomenon that for lots of things
in the world, the more I have of it, the less
valuable an additional unit is.
So in an effective altruist context, suppose
we're buying anti-malarial bed nets for people
in malaria-prone parts of the world. The first
thousand bed nets, where are they going to
go?
They're going to go where they are needed
the most, right?
Where there's the highest prevalence of malaria,
where other treatments are least available,
et cetera, et cetera.
The next thousand bed nets will go where they're
needed the next most, and so on and so on
and so on.
By the time we've distributed ten million
bed nets, we're going to be doing less good
with each additional thousand bed nets than
we were to begin with, right?
So this seems to be a very general phenomenon.
And it means that these sort of questions
of cause prioritization I think have kind
of this structure up here.
So all of the examples are going to be really,
really schematic so don't pay too much attention
to the details of the example, but just to
sort of illustrate the kind of effects the
diminishing marginal value has, suppose now
I think the most important cause area is AI
safety and I think the second most important
cause area is biosecurity.
I think about okay, the first dollar that
I put into either of these cause areas, how
much good will it do, and then how quickly
does that marginal value diminish?
And here are some very over-simplified
curves.
AI safety starts out a lot better; as I spend
more and more, the marginal value of additional
spending goes down, until eventually I hit
what I'm calling the diversification point,
where the marginal value of spending
an additional dollar on AI safety would be
less than the marginal value of starting to
spend some money on biosecurity.
Right?
So diminishing marginal value by itself gives
us some reason to diversify, right?
Once you put enough money into something,
maybe you're going to have better opportunities
putting money into something else.
Expected value, we already talked about that
in the last sessions, so I won't spend too much
time introducing it. But the point is, for these
marginal values, of course we don't know for
sure what the impact of an additional dollar
or an additional bed net is going to be.
We're interested in the expected value:
each of the various impacts it could have,
multiplied by its probability; add them all
together and you get a probability-weighted sum.
Right?
So when I'm uncertain about the value of something,
that reduces its expected value, it reduces
the expected marginal value of doing more
work in that area.
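In symbols, that's just the standard probability-weighted sum (a minimal statement; the notation is mine, not from the slides):

$$\mathbb{E}[\mathrm{MV}] = \sum_i p_i \, v_i$$

where the $v_i$ are the possible marginal impacts of the next dollar or bed net and the $p_i$ are the probabilities I assign to them. Putting some probability on a scenario where $v_i = 0$, say one where the work turns out not to matter morally at all, pulls the whole sum down.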
So here's again a sort of really schematic
example of how this might impact my decision
making.
So suppose, you know, again without caring
about the details of the example, that biosecurity
is really about people alive now who are going
to be threatened by pandemics in the next
20 to 30 years, say.
And for AI safety, suppose I have what we call
relatively long AI timelines, so I think work
that we do on AI safety right now is going to
benefit almost exclusively people who are not
alive yet.
And I might be uncertain for various philosophical
reasons about whether or to what extent I
have responsibilities to those people who
don't exist now.
So let's suppose that this makes me think
AI safety is less important in expectation
than I thought it was, right?
I apply some discount for my uncertainty about
whether AI safety work matters at all.
Now I still think that AI safety is
initially the most important cause area, but
it's closer to biosecurity, and you can
see here in this really simple model that,
coupled with the diminishing marginal value
of investment in a cause area, this means
I hit the diversification point much,
much sooner, right?
It becomes the case much sooner that I can get
more expected marginal value by diversifying
into other cause areas than if I had
been certain that AI safety was important.
So normative uncertainty could have this impact,
that it brings the initial marginal values
of different cause areas closer together,
leads us to diversify sooner than we otherwise
would.
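To make the schematic picture concrete, here's a minimal sketch of how a discount for normative uncertainty moves the diversification point earlier. The curve shapes and the 0.7 discount are illustrative numbers of mine, not from the talk:

```python
import numpy as np

def mv_ai(spend, discount=1.0):
    # Expected marginal value of the next dollar to AI safety, optionally
    # discounted for uncertainty about whether the work matters at all.
    return discount * 10.0 * np.exp(-spend / 50.0)

def mv_bio(spend):
    # Biosecurity starts out lower but its returns also diminish.
    return 4.0 * np.exp(-spend / 80.0)

def diversification_point(discount):
    # First spending level at which the next dollar does more good as a
    # first dollar to biosecurity than as another dollar to AI safety.
    spend = np.arange(0.0, 500.0, 0.1)
    return spend[np.argmax(mv_ai(spend, discount) < mv_bio(0.0))]

print(diversification_point(1.0))  # ~45.9: certain AI safety matters
print(diversification_point(0.7))  # ~28.0: discounted, so diversify sooner
```

The discount lowers the whole AI safety curve, so it crosses biosecurity's initial marginal value much earlier.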
Basic assessment of this line of argument:
I think that for large donors (the
obvious example in the EA world is the
Open Philanthropy Project), this kind of reasoning
is just absolutely right and straightforward
and important, because they're spending large
amounts of money and they should pay attention
to how the returns to that investment diminish
as they spend more.
And of course they do pay attention to these
things.
For somebody like me, this kind of reasoning
doesn't seem directly important.
Why?
Because, well, even if I gave my whole income
to AI safety, we'd still be hanging out right
about here, pretty close to the y-axis.
So the marginal value of my donations is not
going to diminish very much as I donate more
of my salary to a given cause area.
Okay.
So that's number one.
Number two is risk aversion.
Here's a simple thought experiment.
Suppose that you have the opportunity either
to save one life for sure, or to do something
that has a 1 in 1,000 chance of saving 1,000
lives and otherwise does nothing.
Now in expectation, these options are equally
good.
Why?
Because 1,000 times 1 over 1,000 equals 1.
1 equals 1.
These are equally good.
Show of hands, you know, not trying
to think about it too hard, just your
initial intuitive inclination: who would prefer
to do the 1-in-1,000 thing?
And who would prefer to take the sure thing?
Right, okay.
So we get a larger show of hands for the sure
thing.
Most people intuitively are drawn towards
doing good for sure even when the expectations
are equivalent.
Now interestingly there's some psychological
research that says maybe this depends on how
you frame the choice.
So if I present you with the same alternatives
but described in a different way, so option
one is that 999 people die for sure, and option
two is that there's a 99.9 percent chance that
1,000 people die and a 0.1 percent chance that
nobody dies, then in some classic experiments
by Kahneman and Tversky you get the opposite
result.
People's preferences sort of flip.
But I think when we're thinking about our
own charitable activity, we tend to frame
it in terms of gains rather than losses.
We more naturally think in terms of lives
saved than in terms of lives lost.
So I think we're drawn towards doing the sure
thing in cases like this.
Now why would we be drawn this way?
Well, you know, you can see a line of reasoning
that goes something like this.
If I save a life, then my life has really
meant something.
I've done something that has really made a
difference in the world, made a difference
for somebody else.
That makes my life sort of more meaningful.
Of course saving 1,000 lives would be better
than saving 1 life, but not 1,000 times better,
at least not from the standpoint of my own
sense of having accomplished something, having
done some good.
The sort of reductive way to talk about this
is, in psychology it's popular to say that
people do altruistic deeds for the warm glow
that they experience.
I'll experience probably more warm glow if
I save 1,000 lives than if I save 1, but I'm
not going to experience 1,000 times as much
warm glow, right?
So if what I really want is to be certain,
or as certain as I can be that I've made some
difference, that my life has done some good
for the world, then that gives me some reason
to be risk averse, and that potentially gives
me some reason to diversify more.
So just to give a really quick example, suppose
now I think the most important cause area
is climate change, but again I'm not sure
whether working on climate change is really
that valuable.
There are various reasons why that could be.
So maybe, again, I'm uncertain whether I really
have moral responsibilities to people in the
distant future.
Maybe I'm just unsure whether I can have any
impact on climate change.
I could spend my whole life doing climate
policy advocacy kind of work, and make no
change at all to actual policy outcomes.
And even if my impact is really high in expectation,
because the impact I might have is really
big, I might still feel sort of depressed
about this, that probably I'm not having any
impact at all.
So it's tempting to say well, what if I divide
my time up a little bit?
What if I do some work on climate change and
do some work on biosecurity, do some work
on animal welfare, or maybe I work on one
thing but then I donate to other things or
something like that.
My expected impact on the world will probably
go down if I do that, but my chance of having
some impact will go up, right?
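Here's a toy model of that trade-off. The numbers, and the assumption that the chance of having any impact scales linearly with the share of effort devoted to a cause, are mine, not the speaker's:

```python
# Each cause area: probability p that full-time effort has any impact,
# and value v if it does.
causes = {
    "climate":     {"p": 0.10, "v": 100.0},  # low chance, big payoff
    "biosecurity": {"p": 0.30, "v": 20.0},
    "animals":     {"p": 0.90, "v": 5.0},    # near-sure, modest payoff
}

def outcomes(shares):
    """Expected impact and P(at least some impact) for an effort split."""
    expected = sum(shares[c] * causes[c]["p"] * causes[c]["v"] for c in causes)
    p_none = 1.0
    for c in causes:
        p_none *= 1.0 - shares[c] * causes[c]["p"]
    return expected, 1.0 - p_none

all_in = {"climate": 1.0, "biosecurity": 0.0, "animals": 0.0}
split  = {c: 1 / 3 for c in causes}

print(outcomes(all_in))  # (10.0, 0.10): higher expectation, low certainty
print(outcomes(split))   # (~6.8, ~0.39): lower expectation, higher certainty
```

Exactly as described: splitting my effort lowers my expected impact but raises my chance of having some impact.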
Okay, how do we assess this?
Well, I think, at least it seems intuitively
to me, that this is a powerful psychological
reason why people actually do diversify.
It's part of, I think, what inclines me to
diversify when I go to GiveWell and think:
do I give everything to AMF this year, or
do I follow whatever their recommended
distribution is?
I like the idea that I can be fairly confident
that I'm having some positive impact.
But it's really hard to see this as an actual
justification for diversifying, I think.
There are decision-theoretic arguments
that being risk averse in this way, trying
to maximize the probability that I do some
good, is probably bad, because over the very
long run it means that I'm almost certain
to do less good than I otherwise could have.
That seems bad.
One way in which you could try to turn this into
a sort of proactive justification is if you
have running in the background something
like an egoistic normative theory, if
you think all we're trying to do is be what
philosophers call instrumentally rational.
I'm trying to satisfy my preferences or my
desires, and look, it just turns out that
I have a preference to diversify or I have
this preference to believe that I've done
some good in the world, and so diversifying
is a way of satisfying that preference.
All right?
Or a little bit less sort of reductively,
maybe you're an Aristotelian and you think
doing good for others is one component of
the good life, but once I've done some amount
of good, if I've saved a life or something
like that, saving 1,000 lives is not going
to make 1,000 times as much contribution to
the value of my own life.
So there's ways you could turn this into a
sort of normative justification.
I don't find them particularly compelling
but they're out there.
Let me just check where I am on time.
Okay.
Line of reasoning number three.
So the basic idea here is decision making
under uncertainty is hard, particularly decision
making under normative uncertainty.
And when we're faced with hard choices, often
it's going to turn out that there are multiple
things that it's okay to do.
So, really quickly skimming over how
this can happen: in the
context of decision making under normative
uncertainty, a kind of ideal way you might
think things can go is, well, we have
a bunch of moral theories and a bunch
of ways that the world could work, sort of
empirical possibilities, and for each of those
we can assign some precise number to what
would be the value of me working on this cause
area or that cause area, and then
we just do expected value reasoning.
We multiply these things by probabilities
and we add them up.
And then we get some numbers.
And when we have, for each option, a real
number, some point on the real number line,
we're very rarely going to end up with more
than one rational option.
Why?
Because, in some sense that's hard to
cash out precisely, if you grab two real numbers
from even the interval zero to one, the chance
that they're going to be the same real number
is very, very small.
But that assumes, among other things, that
you can assign precise numerical probabilities
to all the possibilities that you're considering.
It assumes that you can assign precise numerical
values to, say, the wrongness of killing an
innocent person according to Immanuel Kant's
moral theory, or something like that.
So just to look at one sort of simple example,
suppose my probabilities are imprecise.
So take utilitarianism and Kantianism, again two
rival moral theories. It could be that the
probability of utilitarianism is 0.543 and
Kantianism is 0.457, or something like that.
Much more likely, a normal person
is going to say: I think utilitarianism is
a little bit more probable, so more than 0.5
but less than 0.8, or something like that.
And when you assign imprecise probabilities
like this, it's often going to turn out that
there's more than one thing that looks sort
of rationally permissible.
So here's a simple example.
I have $3,000 to donate.
Again, don't worry too much about the details.
If I give it to AMF, let's suppose I'll save
exactly one human life for sure.
If I instead donate those $3,000 to the Humane
League, I'll do an amount of good that, theory
one, say utilitarianism, says is equivalent
to saving 1.5 human lives.
But then there's theory two, which says I
only have moral responsibilities to humans
and the good that I do for animals, the impact
I have on animals, doesn't matter morally
at all.
Now decision theory says the crucial question
that we now need to answer is what probability
do I assign to theory one, what probability
do I assign to theory two?
Let's suppose that the best I can say is well,
theory one, I think that's at least 50% likely
to be true, and I think it's no more than
90% likely to be true.
And then we sort of have a range of probabilities,
and from that, we calculate a range of expected
values.
So donating to AMF we're sure about, that
has expected value 1, where 1 just means the
value of saving a human life.
Donating to the Humane League, well, the expected
value, the natural way of doing this,
becomes a range.
It's tempting to say it's somewhere in this
range.
Actually the right way to say it is that it just
is the range.
So the expected value of donating to the Humane
League is the interval 0.75 to 1.35, which
you'll notice includes 1.
Right?
So it seems like we can't say that donating
to AMF has greater expected value, or that
donating to the Humane League has greater
expected value.
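Here's that interval calculation as a minimal sketch, using the numbers from the example (the function name is mine):

```python
def ev_humane_league(p_theory_one):
    # Theory one values the donation at 1.5 human-life-equivalents;
    # theory two values it at 0.
    return p_theory_one * 1.5 + (1 - p_theory_one) * 0.0

# Imprecise credence in theory one: at least 50%, at most 90%.
ev_interval = (ev_humane_league(0.5), ev_humane_league(0.9))

print(ev_interval)  # (0.75, 1.35): an interval that straddles
print(1.0)          # the sure expected value of donating to AMF
```

Because 1 sits inside the interval, neither option dominates the other in expected value.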
Now, very quickly, in this sort of situation,
you might think the following.
If it's permissible for me in any given choice
situation, say every time I'm trying to decide
whether to donate some money to AMF or the
Humane League, if it's permissible for me
to go either way, then it seems like it should
be permissible for me to sometimes go one
way, sometimes go the other way.
So rational options in particular cases create
the option of diversification across cases.
Now there's various worries you could have
about this.
One big worry is that if you do this in a
sort of arbitrary way, you could, again, end
up doing things that look definitely
bad: making a series of choices that each
moral theory you have credence in, or each
moral-plus-empirical theory, says is worse
than if you had made some other series of
choices.
So we want to avoid that.
The sort of standard way of avoiding that
in decision theory is to say even if you don't
have sharp probabilities, even if you don't
have sharp numerical values, act as if you
do.
Choose some precise probability from within
this imprecise range and just sort of go with
it.
But even if we do that, we could choose the
precise probabilities that put the choice
between say AMF and the Humane League right
on a knife's edge where we're sometimes going
to go one way, sometimes going to go another
way.
So the bottom line on this I think is, there
is something here.
I think we probably are stuck with some degree
of imprecision in decision making under normative
uncertainty, but it only gets us that we are
permitted to diversify.
It doesn't seem like it's going to get us
the conclusion that we're required to diversify.
Okay, just a couple minutes left, so I'm going
to say very, very little about this fourth
thing.
This is actually the line of reasoning that
is distinctive to normative uncertainty, and
it's something that gets pretty technical
pretty quickly, so even if I were going to
talk about it for longer I would be glossing
over a lot of the details.
But here's roughly the issue.
When I'm uncertain, say whether Kantianism
or utilitarianism is the correct moral theory,
the big central technical problem that philosophers
have been grappling with for the last 10 or
15 years is: how do I make quantitative comparisons
between utilitarian values and Kantian values?
Right?
So Kantianism says telling a lie or killing
an innocent person, these things are just
absolutely morally wrong.
Utilitarianism says more happiness is better,
less happiness is worse.
And now we want to know how do we compare
a utilitarian reason to a Kantian reason such
that we can for instance compute an expected
value.
I'll skip the example.
Bottom line: this is really hard.
There's some cases where it looks doable,
there's a lot of cases where there's no obvious
way of doing it.
One suggestion that's been floating around
in the literature for a while is that we should
somehow try to compare theories in a way that
treats them equally.
So this is what Will, who's done important
academic work in this area, calls the Principle
of Equal Say.
He and this earlier guy, Ted Lockhart, who's
one of the first people to really talk about
normative uncertainty seriously in the literature,
have both suggested that this might have some
role to play in how we make decisions under
normative uncertainty.
The idea basically is that you should give
each theory a degree of decision making weight
that's proportional to the probability that
you assign to that theory.
Now you might think one way this is going
to go is well, if I give each theory a weight
proportionate to its probability, then that
means each theory should get its way, so to
speak, a percentage of the time that's roughly
proportionate to that probability.
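Here's one naive way to render that reading as code, a toy sketch of mine rather than a formalization from the literature:

```python
import random

# Illustrative credences in the two theories.
credences = {"utilitarianism": 0.7, "kantianism": 0.3}

# Hypothetical: what each theory would recommend in a given choice.
recommendation = {
    "utilitarianism": "humane_league",
    "kantianism":     "amf",
}

def equal_say_choice():
    # Let a theory dictate this choice with probability equal to the
    # credence assigned to it.
    theory = random.choices(list(credences),
                            weights=list(credences.values()))[0]
    return recommendation[theory]

# Over many independent choices, utilitarianism "gets its way" about 70%
# of the time. But nothing guarantees those occasions are spread evenly
# across time or across kinds of choices, which is the domain question
# raised below.
```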
Now, skipping a lot, I think that this is a
hard conclusion to get.
There's a lot of questions you have to answer
in order to make this Principle of Equal Say
precise.
The two central questions are: how
do we equalize two theories?
Should we equalize the range, equalize the
variance, whatever? And over what domain should
we equalize?
Should it be like all conceivable choices
or all of my lifetime choices or just this
particular choice I'm making right now?
Yeah, I think I'll skip the example.
The bottom line is that to justify diversification,
you need a very specific, particular specification
of the domain over which you're equalizing
different normative theories.
So for instance, if we say each theory should
sort of get equal say with respect to the
domain of all possible choices, okay.
Each theory maybe will get its way some of
the time, but that some of the time might not
be any time in this century.
Right?
Maybe like the 21st century is really important
from a utilitarian standpoint so utilitarianism
always wins in the 21st century, and then
other moral theories win at other times, or
something like that.
So that version of the Principle of Equal Say
is not going to create much reason for individual,
personal diversification.
Actually, at least from the little thinking
I've done about this so far, I think you really
have to adopt a very specific version of the
Principle of Equal Say to get the conclusion
that I personally should be diversifying as
a result of my normative uncertainty.
I also think that the Principle of Equal Say
is not super plausible, that it shouldn't
play a huge role in meta-normative theories,
theories of decision making under normative
uncertainty, but this is really still sort
of virgin territory philosophically.
Not much has been written about it, so as
with some of the other arguments we've talked
about, I think it's too early to rule it out.
This might turn out to be an important argument
for diversification.
Okay.
Conclusions.
I think normative uncertainty does give large
donors more reason to diversify.
So the Open Philanthropy Project, right,
they really have to think about the diminishing
marginal value of their contributions.
They should diversify more because they're
uncertain about the value of their top cause
areas.
I think the sort of rational options line
of reasoning probably does get some traction.
It probably gives you and me some rational
permission to diversify but doesn't force
us to diversify.
The other two arguments, which would imply that
you and I maybe are required to diversify,
my assessment at least is that they're not
super compelling.
But again, I think there's a lot more to be
said here.
And again, much of this has to do with uncertainty
in general, not normative uncertainty in particular,
so I think this is something we should all
be thinking about, even if we feel pretty
committed to a particular normative perspective.
Very last things.
None of this is meant to disparage other really
great reasons for cause area diversification.
A lot has been written about this by the Open
Phil people for instance.
And one thing I'll just draw your attention
to is I sort of reached this conclusion that
large donors have good reason to diversify
because of diminishing marginal value considerations,
but we as a community are a large donor.
Right?
So if there's thousands of people who every
year go to GiveWell and donate to their recommended
charities, and GiveWell is moving millions
of dollars in small donor contributions, then
we collectively as a community ought to be
thinking about the diminishing marginal value
of our contributions, and this is one way
of making sense of, for instance, the fact
that GiveWell will often say "Here's our recommended
allocation."
Or "You donate to us and we'll turn around
and give 40% to one organization, 30% here,
20% here, whatever."
What they're doing is, the fact that we're
acting together makes us into a large donor,
and gives us the opportunity and a sort of
real justification for diversifying.
Which is nice because it means we can sort
of capture this psychological incentive and
have some confidence that we are individually
each having some positive impact, even if
one particular cause area or one particular
intervention turns out not to work. Okay. Thank you very much.
Wow.
You can stand up here for a few seconds.
Who of you is not a philosopher? Raise your
hand.
How was this for you?
It was like, yeah, you want to say something
about it?
It was quite fast-paced, I'd say, but actually,
now that I have the mic can I have a question?
Seize the opportunity.
I think I didn't quite understand the diminishing
marginal value and how you decided on the
threshold.
So maybe you can go into more detail there?
The threshold. Say what you mean?
Ah yeah, there we go.
Oh, oh, that dotted blue line?
Yeah, so that's just the initial marginal
value of donating to your second best cause
area.
So the point is, when the expected marginal
value of donating to AI safety drops below
this line, I'm asking: should I donate the
first dollar to biosecurity, or should I
keep donating to AI safety?
Because biosecurity starts off here, the
expected value of donating that first dollar
to biosecurity is going to be greater than
the expected value of donating an additional
dollar to AI safety.
Does that make sense?
Another question?
Hi, yeah, thanks for the talk. I enjoyed it.
Something that I think is bothering me with
moral uncertainty as a kind of new philosophical
approach, anyway, is that I'm not sure
if there's just epistemic disagreement underlying
our theories.
Like, the reason why I think utilitarianism
is better than Kantianism at the moment, for
example, might be that I just don't have enough
information, or my epistemology is
faulty. So what do you say about that?
Yeah, no.
I think that's absolutely right.
So I mean... particularly in moral philosophy,
I think there's a sort of deep disagreement
between, as Will was hinting at in his keynote,
people who think that we should start from
sort of a small set a priori first principles
and kind of reason our way downwards versus
people who think we should try to capture
as many of our reasonable intuitive
judgments about particular cases as possible.
That's a disagreement about moral epistemology
that then leads people to maybe classical
utilitarianism on the one hand versus common
sense morality or versions of deontology on
the other hand.
Not much has been written so far, although
some has been written, about how to respond
practically to uncertainty about basic epistemological
questions, but I think that's also absolutely
an important topic.
And then there's also the question: if I'm
getting this conclusion and you're getting
that conclusion because we have different
epistemologies, should we maybe be
reconciling more? Should I be taking more
account of your epistemic methods as
a possibly good way of getting at the truth?
Okay, the final question here.
Okay, I'm lucky sitting here.
But I really liked the last part of your talk,
which was very fast, but it gave me an idea
and I was wondering if that was also what
you're kind of pointing at, which is looking
at normative uncertainty also on the margin,
just like we're looking at impact on the margin
worldwide, we should do what, given what everyone
else does, is the best thing to do, not just
as if we're the only ones alive.
And similarly with moral uncertainty, you could
look at it from a global perspective and see
what the world is currently prioritizing morally
in different areas, and then say: okay,
that's too little Kantianism or too
little utilitarianism, so we should go all in
and balance that. Because if we think the right
mix is, say, 10% Kantianism, 70% utilitarianism
and some virtue ethics, and the current
distribution is different, you want to move
the distribution closer to that.
Is that something you've been thinking about
a lot and is that... does that make sense?
Yeah, yeah.
I think it absolutely makes sense.
I don't know that I've thought about it in
that way before, so that's really useful.
The way that I would intuitively think about
that is, you know go back, I'm not going to
go back, but go back to those marginal value
graphs.
If the world as a whole has been pumping a
lot into the cause areas that, say, common
sense morality thinks are really important,
then that means the marginal value of
those cause areas has already diminished and
diminished and diminished.
And if we then find, like, wild animal suffering,
right, a cause area that's relatively neglected
because the world hasn't been pumping much
of its resources into it, then
the expected marginal value of doing some
initial work in that area might turn out to
be very high.
So I think that's one way of explaining the
importance of neglectedness.
Thank you so much.
Give him another hand.
