Cool.
So, thanks for coming on.
Just to warm us up, give us an overview of
what Open Phil has been up to in the 12 months
since our last chat and what your plans are
for the coming year.
Sure.
So, Open Philanthropy is a grant maker, that's
our main activity.
And right now I would say we're in an intermediate
stage in our development.
We're giving away a bit over 100 million dollars
a year, that's been true for the last couple
of years.
We do want to grow that number at some point.
But we have this belief that what we should
be doing right now is strengthening the organization,
strengthening our intellectual frameworks,
our approach, our operations.
And just kind of getting used to operating
at this scale, which is big for a grant maker,
before going bigger.
And so, it's a several year transition period.
And so what I would say is the last year,
in addition to grant making, we did a lot
of work to strengthen our operations and we
did a lot of work to get more clarity on what
we call the cause prioritization problem.
That would be how much money is eventually
going to go into each of the different causes
that we work in.
How much should go into criminal justice reform,
how much should go to support GiveWell's top
charities, which we try to weigh against the
other things, how much should go to AI risk,
biosecurity, etcetera?
So that's been the focus of the last year
or so.
And going forward, it's kind of the same.
I mean, we're very focused on hiring right
now.
We just had a director of operations start
and we've been hiring pretty fast on the operations
side.
And that is because we're trying to build
a robust organization that is really ready
to make a ton of grants and do it efficiently
and do it with a good experience for the grantees.
And then, the other hiring we're doing is
this push for research analysts.
These are going to be people who help us choose
between causes, help us answer these esoteric
questions that most foundations don't really
try to analyze.
Like how much money should go to each cause,
what causes should we work on?
We expect them to eventually become core contributors
to the organization.
And so, a major endeavor this year has been
gearing up to take our best shot, make our best
guess at who we should be hiring for that.
So it's really a capacity-building, hiring
time and also a time when we're really intense
about figuring out that question about how
much money goes to each cause.
Fantastic.
And just to check, everyone knows you can
ask questions on the Bizzabo app, and in
half an hour or so we'll start bringing those
into the mix as well.
Okay, but one thing you mentioned, then,
something you've been working on, is how do
you divide the money across these very different
cause areas?
And this question of worldview diversification.
What progress do you feel like you've made
on that over the last year?
First, I just want to give a little bit of
background on the question, 'cause it's kind
of a weird one and it's one that often doesn't
come up in a philanthropic setting.
We work in a bunch of very different causes.
So, like I said, we work on criminal justice
reform, we work on farm animal welfare, work
on global health: how much money goes into
each cause?
So one way that you might try to answer this
is you might say, well, what are we trying
to do, and how much of it are we doing for
every dollar that we spend?
So you might say that we're trying to, let's
say, prevent premature deaths.
How many premature deaths are we preventing
for every dollar we spend?
You might try to come up with a more inclusive,
universal metric of good accomplished.
And I think there are different ways to do
that.
One way is to value different things according
to one scale.
And so you could use a framework similar to
the QALY framework, where you say, if you
avert a case of blindness, that's like half as good
as saving a life, or something like that.
And so, you could put 'em all into one scale
and then you can say, how many units of good,
so to speak, are we accomplishing for each
dollar we spend?
And then you would just divide up the money
so that you get the maximum overall.
And what one might think, when you do that,
is that you start putting money into your
best cause.
And at a certain point, it's no longer your
best cause because you're reaching diminishing
returns, like the more money you put in the
less you're getting out.
And so now you put money into another cause
and that determines your allocation.
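As a minimal sketch of that allocation logic, here's a toy version in Python, with made-up causes and diminishing-returns curves; none of the numbers are Open Phil's:

```python
# Toy allocation of a budget under one "units of good" metric with
# diminishing returns. All causes and curves are hypothetical.

# Marginal units of good per extra $1M, shrinking as a cause absorbs money.
causes = {
    "cause_a": lambda spent: 100 / (1 + spent / 10),  # best cause at first
    "cause_b": lambda spent: 60 / (1 + spent / 20),
    "cause_c": lambda spent: 30 / (1 + spent / 40),
}

budget_millions = 100
spent = {name: 0 for name in causes}

# Greedy allocation: each marginal $1M goes wherever the marginal return
# is currently highest, so money spills into the second-best cause once
# the best one hits diminishing returns.
for _ in range(budget_millions):
    best = max(causes, key=lambda name: causes[name](spent[name]))
    spent[best] += 1

print(spent)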
There's what I consider a problem with this
approach; not everyone would consider it a
problem.
What I consider a problem is that we've run
into two mind-bending fundamental questions
that seem very hard to get away from.
These questions are very hard to answer and
then they have a really huge impact on how
we give.
So one of them is how do you value animals
versus humans?
For example, how do you value chickens versus
humans?
To simplify, let's say we're deciding between
GiveWell top charities, which try to help humans
with global health interventions (bed nets,
cash transfers, things like that), and, on the
other hand, funding these animal advocacy
groups that try to push for better treatment
of animals on factory farms.
Without going too much into what the numbers
look like, it looks like if you decide that
you value chickens' experience, like 1% as
much as a human's, you get this result back
out that you should just only work on chickens.
Like just all the money should go into farm
animal welfare.
And on the other hand, let's say that you
go from 1% to .001% or zero or something.
You decide you don't value chickens as much
as humans and then you're going to get the
result of course, that you should just put
all the money into humans.
And that is kind of... it's like you've got
this one parameter and you don't know what
the number should be.
And when you move to one number, it says you
should put all the money over here.
And when you move to another number, it says
you should put all the money over there.
The even trickier version of this question
comes up when we talk about preventing global
catastrophic risks or existential risks:
our work on things like AI risk, biosecurity,
climate change, where the goal is not to help
some specific set of people or chickens, but
rather to hopefully do something that will
be positive for all future generations, to
prevent some catastrophic effect that could
ripple through future generations.
And then the question is how many future generations
are there?
And if you prevent some kind of existential
risk, did you just do the equivalent of preventing
7 billion premature deaths, which is about
the population of the world, or did you just
do the equivalent of preventing a trillion,
trillion, trillion, trillion, trillion untimely
deaths?
It depends on how many people there are in
the future; you can pick different numbers,
and it's very hard to pick the right number.
And whichever number you pick, it just takes
over the whole model.
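A tiny sketch of that knife-edge behavior, with invented cost figures (the talk gives no exact ones):

```python
# Toy illustration of how one hard-to-pin-down parameter takes over the
# whole model. Every cost figure here is invented for illustration.
COST_PER_HUMAN_LIFE = 3_000        # $ per human life saved (hypothetical)
COST_PER_CHICKEN_HELPED = 1        # $ per chicken helped (hypothetical)

for chicken_weight in [0.01, 0.00001, 0.0]:
    human_value = 1 / COST_PER_HUMAN_LIFE                 # lives per dollar
    chicken_value = chicken_weight / COST_PER_CHICKEN_HELPED
    winner = "all into chickens" if chicken_value > human_value else "all into humans"
    print(f"chicken weight {chicken_weight}: {winner}")
# weight 0.01  -> all into chickens
# weight 1e-05 -> all into humans; one parameter flips everything.
# The future-population parameter behaves the same way, only more extremely:
# valuing extinction prevention at 7e9 deaths averted versus 1e60 changes
# the answer by ~50 orders of magnitude, so whichever number you pick
# dominates the model.
```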
And so for us, we have determined that
there are a bunch of reasons we don't want
to go all in on one cause.
That's something we've written up on our blog.
I think, among other things, if you go all
in on one cause, you get all the idiosyncrasies
of that cause.
It could be very easy to miss a lot of
opportunities to do good if you change your
mind later.
It could become very hard to pull in a lot
of donors and be broadly appealing if your
whole work is premised on this one weird assumption
and this one weird number that could have
been something different.
And so we don't want to be all in on one cause.
And so where we came to at the end of the
last year is that we want to have these buckets,
and the different buckets of capital use different
assumptions.
And so we might have one bucket of capital
that says, preventing an existential risk
is worth a trillion, trillion, trillion premature
deaths averted.
We value every grant by how it affects the
long run trajectory of the world.
Another bucket might say, no, we're just
going to look for things that affect people
alive today or that have impact we can see
in our lifetimes.
And then you have another bucket that takes
chickens very seriously and another one that
doesn't.
Then you have to determine the size of those
buckets.
And so it's kind of this multiple stage process
where you first say there's x dollars, how
much is going to be in what we call the animal-inclusive
bucket versus the human-centric bucket?
How much is going to be in the long-termist
bucket versus the near-termist bucket?
And then within those buckets, now you can
use your metrics a little bit more normally
and decide how much is AI risk, how much is
biosecurity, how much is climate change?
And then we're attacking the problem at
two different levels in parallel.
One of them is abstract: you just say, alright,
you have x dollars, how much do you want to put
in the animal-inclusive versus the human-centric
bucket?
And you might start with an assumption of
50-50 as a prior and then say, well I actually
take this one bucket more seriously so I'll
put more in there and you could have some
other adjustments too.
And then we've also started moving toward
addressing this in a very concrete and tangible
way too, where we actually create a table
that says for each set of dollars we can spend,
we'll get this many chickens helped and this
many humans helped and this many points of
reduction in existential risk.
And so, under different assumptions, an option
looks excellent by this metric, okay by that
metric, really bad by another metric.
And then you can just look at the table and
understand the trade-offs you're making.
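As a rough sketch of what such a table might look like; the rows and every figure below are placeholders, not Open Phil estimates:

```python
# Hypothetical trade-off table: what each $10M option buys under three
# different metrics. Every row and figure is a placeholder.
options = [
    # (option, chickens helped, humans helped, x-risk reduction in basis points)
    ("corporate cage-free campaigns", 200_000_000, 0,     0.0),
    ("GiveWell top charities",        0,           3_000, 0.0),
    ("biosecurity platform tech",     0,           0,     0.1),
]

print(f"{'option':<32}{'chickens':>14}{'humans':>8}{'x-risk bp':>11}")
for name, chickens, humans, xrisk in options:
    print(f"{name:<32}{chickens:>14,}{humans:>8,}{xrisk:>11}")
# Each row looks excellent by one metric and bad by the others; laying it
# out this way makes the trade-off explicit rather than hidden in a single
# aggregate number.
```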
And so a lot of the work we're doing now and
a lot of the work that I think new hires will
do, is filling out that table.
And a lot of it is really guesswork, but it
gives us some rough sense of what...
If we can understand what we're buying with
different approaches, then hopefully we can
make a more reflective-equilibrium decision.
Okay, fantastic.
And then as part of that, you've got to have
this answer to the question of your last dollar
of funding.
How good is that last dollar?
And how do you think about that then, given
this framework?
The last dollar question is very central to
our work and it's one of the things that brought
up this dilemma, which is someone sends in
a proposal for a grant and really we have
to decide, do we want to make the grant or
not?
And in some ways, what the question really
is, is it better to make this grant or is
it better to save the money?
And what that really means is, would this
grant be better than the last dollar we're
going to spend?
And so for a long time, we had this last dollar
concept that was based on GiveDirectly.
GiveDirectly is a charity that gives out cash.
For every $100 you give, they try to get $90
to someone in an extremely low-income household,
by global standards.
What we said is, if we're looking at a grant
and it doesn't look as good as GiveDirectly
(because we think we can eventually give almost
unlimited money via GiveDirectly), then why
make the grant?
And if it's better, then maybe we should make
it.
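The decision rule itself is simple; here's a sketch, with the benchmark normalized to a made-up value:

```python
# The "last dollar" test as a comparison against a benchmark you could
# always spend money on instead. The values are hypothetical "good per
# dollar" estimates, with the GiveDirectly-style benchmark set to 1.
LAST_DOLLAR_VALUE = 1.0

def should_fund(grant_value_per_dollar: float) -> bool:
    # Make the grant only if it beats what the same money would do
    # later, spent on the benchmark.
    return grant_value_per_dollar > LAST_DOLLAR_VALUE

print(should_fund(2.5))  # True: better than the last dollar, make the grant
print(should_fund(0.7))  # False: saving the money does more good
```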
But we've definitely refined our thinking,
we've definitely gotten further on that.
And so, one of the things I mentioned, GiveDirectly
is a near-termist, human-centric kind of charity.
And the question becomes, if you decide that
instead of counting the good you're accomplishing,
instead of counting people you're helping,
you're counting points of reduction in global
catastrophic risks, then what does your last
dollar look like?
And it probably doesn't look like GiveDirectly
because there's probably better ways to accomplish
that long-termist goal.
And so we spent a bunch of time over the last
couple of months trying to answer that question.
What is the GiveDirectly of long-termism?
What is the thing you can just spend unlimited
money on that does as well as anything can at
increasing the odds of a bright future for the world?
And so we tried a whole bunch of different
things, we looked into different possibilities.
And where we are right now is we have this
idea of platform technology for rapid development
of medical countermeasures.
So the idea is you would invest heavily in
research and development and you would hope
that what you get out from a massive investment,
is the ability to more quickly develop a vaccine
or some kind of treatment in case there's
ever a disastrous pandemic.
Then you can estimate the risk of a pandemic
over different timeframes and how much this
would help and how much you're going to speed
it up.
And our estimate ended up being that we think
we could probably spend over 10 billion dollars,
in present value terms, on this kind of work.
And I think we estimated, and this is really
wild, it's a wild guess, but we're trying
to start with broad contours of things and
then get more refined, something
like the low, low price of 245 trillion dollars
per extinction prevented.
245 trillion dollars per reduced extinction
event actually comes out to a pretty good
deal.
That's like the total wealth of the world, so-
Yeah, and times a few.
Or total wealth.
It's like total GDP times a few.
Yeah, which is about 240 trillion, the total
wealth.
So that's weirdly balanced.
Yeah, exactly.
But there's all this future wealth too so
it's actually a good deal.
And the funny thing is that if you just count
people who are alive today and you look at
the cost per death averted, probabilistically
and in expected terms, it's actually pretty
good.
It's kind of in the same ballpark as GiveWell's
top charities.
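Roughly how that ballpark claim works out, using the figures from the talk ("245 trillion dollars", "about 7 billion" people alive today), and every bit as much a wild guess as the inputs:

```python
# Back-of-envelope: treat preventing one extinction as saving everyone
# alive today, in expectation.
cost_per_extinction_prevented = 245e12   # dollars, from the talk
world_population = 7e9                   # "about the population of the world"

cost_per_expected_death = cost_per_extinction_prevented / world_population
print(f"${cost_per_expected_death:,.0f} per expected death averted")
# ~$35,000: within an order of magnitude of GiveWell top-charity figures,
# which is what "kind of in the same ballpark" is pointing at.
```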
Now, that's very questionable, 'cause GiveWell's
cost-effectiveness estimates are quite rigorous
and well-researched and this is not.
So, you don't want to go too far with that
and you don't want to say these numbers are
the same, but the preliminary look is we think
we can probably do better than this with most
of our spending.
And so it's interesting to see that that
last dollar looks like not a bad deal, and
that we can compare any other grants we're
making that are aimed at this long-term future
of humanity.
We can say, are they better or worse than
this medical countermeasure platform tech?
Because we can spend as much money as we want
on that.
We don't have a last dollar estimate yet for
animal welfare.
We do have a view that the current work is
extremely cost-effective; it's under a dollar
per animal's life significantly improved, in
a way that's similar to a death averted,
though a lot of the time it's more of a
reducing-suffering thing.
And we do think we could expand the budget
a lot before we saw diminishing returns there,
so that's a number we're working with until
we get a better last dollar.
Cool, terrific.
And then of all these different cause areas,
are there some that you're just more personally
excited about than others?
Yeah.
How does that affect it like...
I mean, I get differently excited about different
causes on different days and I'm super excited
about everything we're doing because it was
all picked very carefully to be our best bet
for doing the most good.
And as soon as it looks bad, we tend to close
it down.
I generally am excited about everything.
There's different pros and cons to the different
work.
The farm animal work is really exciting because
we're seeing tangible wins and I think the
criminal justice work also looks that way.
It's just really great to be starting to see,
hey we made a grant and something happened
and it helped someone and it led to someone
having a better life.
And so a lot of times that does feel like
the most exciting.
And on the other hand, I have, over the last
couple of years, just gotten more and more
excited about the long-termist work for a
different reason, which is that I've started
to really believe we could be living at a
uniquely high leverage moment in history.
I mean, just to start off, just to set the
stage: I think people tend to walk around
thinking, well, the world economy grows at
around 2% a year (this is certainly how I
walk around), in real terms maybe 1 to 3%,
and that's how things are, that's how they've
been for a long time, and that's how they
will be for a long time.
I think that's basically a weird way of thinking
about things because I think that rate of
growth is super weird and super historically
anomalous.
It's been like 200 or 300 years that we've
had that level of growth, that's like 10 generations
or something.
It's a tiny, tiny fraction of human history.
Before that we had much slower growth.
And then when we look to the future, for many,
many reasons that I'm not going to get into
now, we believe there are advanced technologies,
such as highly capable AI, where you could
either see that growth rate skyrocket and
then maybe flatten out as we get radically
better at doing science and developing new
technologies, or you could see a global catastrophe.
And I think, again, it's probably possible
today, or could become possible in the future,
to wipe ourselves out completely with nuclear
weapons.
There may be other ways of doing it like climate
change and new kinds of pandemics with synthetic
biology.
I mean, as of 100 years ago, there was basically
almost no reasonably likely way for humanity
to go extinct.
And so there's a lot of things that look special
about this time that we live in.
It's kind of the highest upside, the highest
downside, maybe that's ever been seen before.
One way of thinking about it is: civilization,
or more broadly humanity, has been around
for hundreds of thousands or maybe millions
of years, depending how you count it.
It could be around for billions of years from
now, but we might be in the middle of the
most important hundred years.
And when I think about it that way, then I
think boy, someone should really be keeping
their eye on this.
And the other thing is that, to a large degree,
people aren't.
Even on climate change, which is better known
than a lot of the other risks, sufficient
action is not being taken.
Governments are not making it a priority to
the extent that I think they reasonably should.
And so as a funder with the freedom to spend
money how we want, and the ability to think
about these things and act on them without
having to worry about a profit motive or
accountability to near-term outcomes, we're
in a really special position to do something,
and I think that's exciting.
It's also scary.
Cool.
So then, from the last year, what are some
particular grants that people in the audience
might not know as much about, that you're
particularly excited about, or that you think
are going to be particularly good or important?
I mean, there's a lot of grants I'm excited
about.
I'm guessing people know about things like
our OpenAI grant which I'm excited about,
and our many grants to support animal welfare
or corporate campaigning in the U.S., abroad,
India, Europe, Latin America, etcetera.
I'll skip over those, I'll name some others
people might not have heard of.
Very excited about the AI Fellows Program.
We recently announced a set of early career
AI scientists who we are giving fellowships
to.
It was a very competitive process.
These scientists are really, really exceptional.
They're really some of the best AI researchers,
flat out, for their career stage.
And then they're also interested in AI safety,
they're interested in not just making AI more
powerful but in making it something that's
going to have better outcomes, behave better,
have fewer bugs, etcetera, solving the alignment
problem, things like that.
We found a great combination of just really
excellent technical abilities and then a seriousness
and an open-mindedness to some of the less
mainstream parts of AI that we think are the
most important, like the alignment problem.
Our goal here is to have these fellows, to
help them learn from the AI safety community
and get up to speed, and become some of the
best AI safety researchers out there.
And also to make it more common knowledge
in AI that this is a good career move, to
work on AI safety.
And I think it's exciting as a foundation
to be engaging in field building and trying
to make it true, more than it was before,
that working on AI safety is a good career move.
So I'm excited about that.
I'm excited about the...
Our science team is working on a universal
antiviral drug.
The belief is that a lot of viruses, maybe
all of them, rely on these particular proteins
in your body to replicate themselves and to
have their effects.
And we already have some drugs that inhibit
those proteins, drugs that are safe because
they're already being used for things like
cancer treatment.
So if you can inhibit those proteins, you might
have a drug that is not something you'd want
to take every time you got a virus, but might
work on every virus.
Which could make it a really excellent thing
to stockpile, in case some unexpectedly terrible
pandemic comes.
So, that's super cool.
I think in general, I'm often excited when
I just see that we're doing something that
is...
We have a couple of grants that are just all
about speed.
So there's a couple of science grants where
just there's a technology that looks great,
everyone's excited about it.
There's not that much to argue about.
Gene drives to potentially eradicate malaria.
There's an experimental treatment for sepsis
that could save a lot of lives, and we're
just speeding them up.
I kind of feel proud, as an organization and
of our operations team and all that, that
we're sometimes able to make a grant where
we're like, yeah, everyone knows this is good,
but we're the fastest.
We can get the money out now and we can make
this happen sooner and happening sooner saves
enough lives or whatever that it's a good
deal.
So those are some of the things I'm excited
about.
Cool.
I mean, that's a lot of exciting stuff.
Yeah.
Okay, switching gear a bit and talking about
the EA community a little bit.
What do you wish was happening in the EA community
that currently isn't, you think?
That might be projects, organizations, career
paths, lines of intellectual inquiry and so
on.
I think the EA community is an exciting thing
and a great source of a lot of...
A lot of people have interesting ideas and
make us think, and they've affected our thinking
and our cause prioritization a lot.
I think right now, it's a community that is
very heavy on people who are philosopher types
and economist types and computer programmer
types, and all those somewhat describe me.
And I think we have a lot of those.
And I think those folks do a lot of good.
But I would like to see a broader variety
of just different types of people in the community.
Because I think there are people who are more
intuitive thinkers who wouldn't want to sit
around at a party debating whether the parameter
for the moral weight of chickens is 1% or
0.01% and what anthropics might have to do
with that.
They might not be interested in that, but
they still have serious potential as effective
altruists, because they're able to say, "I would
like to help people and I would like to do
it as well as I can."
And so they might be able to say things like,
"Boy there's an issue in the world.
Maybe it's animal welfare and maybe it's AI,
maybe it's something else.
It's so important and no one's working on
it.
And I think there's something I can do about
it so I want to work on that."
You don't need to engage in hours of philosophy
to get there.
And I think a lot of people who are more intuitive
thinkers, who may be less interested in that
stuff, have a lot to offer us as a community,
and can accomplish a lot of things that the
philosopher/programmer/economist type is not
always as strong at.
That is something I would love to see.
I would love it if the effective altruism
community could find a way to just get a broader
variety of people and be a little bit more
like that.
And then in your own life, you're managing
a lot of money, there's a lot at stake, how
do you keep yourself sane?
Does this cause you to overwork?
How do you balance working with time off?
I mean, I think we're working on a lot of
exciting stuff and I certainly know people
who when they're working on exciting stuff,
they tend to just work all the time and they
tend to burn out.
I've been really pretty intense about not
doing that.
I co-founded GiveWell 11 years ago, and I've
just been working continuously since then:
GiveWell, then Open Philanthropy.
Maybe at the beginning it felt like I was
sprinting but right now it really, really
feels like a marathon.
I try to treat it that way.
I'm just very attentive to if I start feeling
a lack of motivation, I just take a break.
I do a lot of really stupid things with my
time that put me in a better mood.
I don't feel bad about it and-
Name some stupid things, go on.
Just like going to weird conferences where
I don't know anyone or have anything to contribute,
and just talking to people there; video games.
Like, my wife and I like to just have stuffed
animals with very bizarre personalities and
act them out.
So now you know some things.
And I don't feel bad about it.
And something that I've done, actually for
a long time now, starting about two years
into GiveWell, is that I've tracked
my focused hours, my meaningful hours.
And at a certain point, I just took an average
over like the last six months and I was like,
that's my target, that's the average.
When I hit the average, I'm done for the week,
unless it's a special week, unless I really
need to work more that week.
What that does is, one, it's sustainable, 'cause
I think putting in more than the average would
not be sustainable.
And two, it just puts me in a mindset where
I know how many hours I have and I have to
make the most of them.
And I think there are people who say, "If
I can just work hard enough, I'll get everything
done."
And I don't think that works for me, I think
it is often a bad idea.
One way that I think about it is the most
you can increase your output from working
harder is often around like 25%.
If you want to increase your output by 5x,
by 10x, you need to get better at skipping
things, deciding what not to do, deciding
what shortcuts to take.
And also you need to get good at hiring, managing,
deciding who should be doing what, deciding
it shouldn't be you doing a lot of things.
I feel like my productivity has gone up by
a very large amount and there is a lot of
variance.
Like when I make bad decisions, I might get
a tenth as much done in a month, but it's
not by working more hours.
So that is definitely something I do: even
though I think we're doing a lot of exciting
stuff, I take it easy in that sense.
Cool.
Over the last decade, what do you think are
some of the biggest mistakes you've made, or
things you wish you'd done differently?
Yeah.
I got lots of fun mistakes.
You said last decade, so that rules out a
lot of fun stuff.
Had some notes on this, 'cause it's hard to
keep track of all my mistakes.
Yeah, just give me a sec.
I think I'm looking at the wrong thing.
I will say a couple of things.
One thing is, I think early on, and still kind
of all the time, without meaning to,
a lot of times we've communicated in a careless
way.
And I think especially early on, our view
was more attention is better, we really need
to get people paying attention to us.
And the problem is that a lot of the things
we said, they never go away, the internet
never forgets.
And I think also, with people who may have been
turned off by our early communications, you
never get a second chance to make that first
impression.
And when I look back at it, I think was it
really that important to get that much attention?
And no, it wasn't.
I think over the long run, if we had just
kind of been quiet and said something when
we really had something to say and said it
carefully, I don't know that anything really
would have gone that differently, maybe it
just would have gone better.
To the extent that we've succeeded, we've
really succeeded just by having research that
we can explain and that people resonate with.
I don't know that we really had to do it that
way.
I think another mistake that I look back on:
I was actually too slow to get
excited about the Effective Altruism Community.
When we were starting off, I knew that we
were working on something that most people
didn't seem to care about.
I knew that we were asking the question, how
do we do the most good that we can with a
certain amount of resources?
And I knew that there are other people asking
a similar question but we were speaking very
different languages from each other.
And so it was hard for me to really see that
those people were kind of asking the same
question I was asking.
And so I think a lot of what we did is we
were a bit, I wouldn't say totally dismissive,
we talked to the proto-effective-altruists
and everything before it was called effective
altruism.
But I think it really didn't hit me like,
boy if there's important insights about my
work that I'm missing because I don't have
enough people with different perspectives,
the most likely way to find those insights
is to find people who have the same goal and
are different from me.
And the fact they speak a different language
from me and a lot of their stuff sounds loopy,
I mean, that's just good.
That means that there's going to be at least
some degree of different perspectives here.
I think we've profited a lot from engaging
with the EA community.
We've learned a lot and we could have done
it earlier, so I think that's a mistake.
And then the final thing, here's a mistake
that I will not go into details on and it's
more like a class of mistakes.
But in general, I feel like the decisions
these days that I'm most nervous about are
hiring and recruiting, because most of the things
we do at Open Phil, we've figured out how
to do in an incremental way.
You do something, you see how it goes, you
do something you see how it goes and nothing's
ever that disastrous or that epic.
And when you're recruiting, it's just like
someone saying, "Should I leave my job or
not?"
And you have to say yes or no.
And it's such an incredibly high leverage
decision that it's become clear to me when
we do it wrong, it's such a huge problem and
such a huge cost to us.
And when we do it right, it makes
everything we do.
And basically, everything we've been able
to do is because of the people that we have.
Those are the make or break decisions, and
we often have to make them in a week and we
have to make them on limited information.
Sometimes we get them right, sometimes we
get them wrong, I think there's a lot of ones
we've gotten wrong that I don't know about.
And so a lot of the biggest mistakes have
to be in that category.
Makes sense.
So then what headline career advice would
you give the EAs in the audience who are currently
figuring out what they ought to do?
How do you think about that question in general?
I mean, obviously I only in some sense have
one career to look at, although since we interact
with a lot of grantees, we also do notice
who's having a big impact according to us
and what's been their trajectory.
And one thing that I do think whenever I talk
to people about this topic, I get the sense
that effective altruists, especially early
in their career, are often impatient for impact
relative to what they should be.
I think a lot of the people I know who seem
to be in the best position to do something
big, they did something for five years, 10
years, 20 years.
Sometimes the thing was random but they picked
up skills, they picked up connections, they
picked up expertise.
And I think a lot of the big wins we've seen,
both stuff we've founded and stuff we haven't,
it looks less like someone came out of college
and had impact every year, and then it added
up to something good.
It looks more like someone might have just
been working on themselves and their career
for 20 years, and then they had one really
good year and that's just everything.
And that makes your career in terms of your
impact.
I do think, for a lot of early career effective
altruists, that if they just switched
and made the opposite mistake, which I think
would also be a mistake, and just forgot
about impact, and just said, "What can I be
good at?
How can I grow?
How can I become the person I want to be?"
I think that probably wouldn't be worse and
that might be better.
And I think the ideal is some kind of balance.
But I think that would be a high level thing
that I do end up giving that advice.
And I don't know if it's right or not, but
it's definitely something that I say a lot.
Okay, good.
A couple of people are interested in the small
question of just what your average number
of hours per week is, then?
I think you can say no comment if you don't...
For focused hours, for the time I was doing
it, it was like 40.
Hours on the clock would be more than that.
And then recently I've actually stopped counting
them up because now I'm in meetings all the
time.
And one of the things that I've found is like
my hours are way higher when I have a ton
of meetings.
And if I'm sitting there trying to write a
blog post, they are way lower.
It doesn't seem as worth tracking as it used
to, but there's your answer.
Yeah.
I mean, getting into the mindset that different
hours can be a hundred times more valuable...
Like, Frank Ramsey was one of the most important
thinkers of the early 20th century, but died at
26, which is why no one knows about him.
He just worked four hours a day.
He made amazing breakthroughs in philosophy,
decision theory, economics, maths, it was
like, insane.
Yeah, that's incredible.
Another thing a couple of people were interested
in was Open Phil's attitude to political funding.
Firstly, just whether you have a policy with
respect to funding organizations that do political
lobbying.
And then secondly, in particular, funding
particular candidates more than other candidates
who may increase or decrease existential risks.
I mean, there's just...
Open Phil, there's no real reason in principle
that we can't do that.
We treat political giving or policy oriented
giving the same as anything else.
Which is we say hey, if we work on this, what
are the odds that our funding contributes
to something good happening in the world and
what is the value of that?
And if you multiply out the probabilities
in a sense, how good does that make the work
look and how does that compare to our last
dollar, and how does that compare to our other
work?
If it looks good enough and there aren't other
concerns, we'll do it.
I don't want to say...
I mean, most of our grants, we're not actually
calculating these figures but we're trying
to do something that approximates them.
For example, we work on causes that are important,
neglected and tractable.
And we tend to rate things on importance,
neglectedness and tractability, because we
think those things are predictive of, and
correlated with, the total good accomplished
per dollar.
A lot of times you can't really get a good
estimate of how much good you're doing per
dollar, but you can use that idea to guide
yourself and to motivate yourself.
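A sketch of how that heuristic can work as a rough proxy score; the 1-to-5 scale and the example scores below are invented for illustration:

```python
# Importance/neglectedness/tractability as a rough multiplicative proxy
# for good accomplished per dollar. The scale and scores are hypothetical.
def itn_score(importance: int, neglectedness: int, tractability: int) -> int:
    # Multiplicative, because a cause that scores near zero on any one
    # axis is probably a poor use of marginal dollars no matter how it
    # does on the other two.
    return importance * neglectedness * tractability

print(itn_score(importance=5, neglectedness=4, tractability=2))  # 40
print(itn_score(importance=3, neglectedness=1, tractability=5))  # 15
```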
The answer in politics is that, in some ways,
in politics there's an elevated risk that
you're just wrong, and you should remember
that when you're doing things that you think
are good.
If it looks like giving to bed nets to prevent
malaria helps people a certain amount, and
giving to this very controversial issue that
you're sure you're right about helps people
the same amount, probably the bed nets are
better, because you've probably got some bias
toward what you want to believe in politics.
And that said, I don't think things are necessarily
balanced.
I think on political issues, people have a
lot of reasons for holding the political views
they do, other than this is what's best for
the world as a whole.
And so when our goal is best for the world
as a whole, it's not always that complicated
to figure out which side is the side to go
in on.
It's not always impossible to see that.
We can and do fund things that are aimed at
changing policy.
And in some cases, we've recommended contributions
that are trying to change those kinds of outcomes.
Okay, fantastic.
So then another thing a couple of people
are interested in is what Open Phil's plans
are, what your plans are, for trying to influence
other major foundations and other philanthropists.
What are your aims and plans there?
One of the cool things about Open Phil is
that we are trying to do our work in a way
where we find out how to help the most people
with the least money, or how to do the most
good per dollar.
Then we recommend that to our main funders,
currently Cari and Dustin.
But there's no reason that recommendation
would be different for another person.
A lot of times, if there were a Will MacAskill
foundation, maybe it would be about what Will
wants and its recommendations would not be
interesting to other people.
Actually, I know your foundation would not
have that issue, but that is the more normal
way to be.
We definitely have aspirations that we think
the work we're doing, the research, the lessons,
are applicable to other philanthropists.
Down the line, I would love it to be the case
that we see way more good things to do with
money than Cari and Dustin have money.
And so we're going out and we're pitching
other people on it and trying to raise far
more than Cari and Dustin could give.
So that's definitely where we're trying to
go.
That's not what we're focused on right now,
because we're still below the giving level
we would need just to accomplish the giving
goals of Cari and Dustin.
We're focused on that and we're focused on
just also having a better organization.
More of a track record, stronger intellectual
framework, just better...
Something more solid to point to and say,
here's our reason that this is a good way
to do philanthropy.
We make early moves now; we talk to people
who are philanthropists or who will be
philanthropists. It's not our big focus now,
but I think in a few years it could be.
You've emphasized a lot of causes.
You've got criminal justice reform, existential
risks with bio and AI, animal welfare, global
health.
Is there another cause that you think you'll
branch out into over the next couple of years?
And if so, what might that be?
What's some potential candidates?
I mean, what you'll see a lot of in the next few
years is us trying to stay focused when it comes
to our grant making.
But also, as we're doing that, trying to get
more clarity around, again, this question
of long-termism versus near-termism and how
much money is going to each.
And I think that will affect in the future
how we want to look for new causes.
So I think we will look for new causes in
the future, it's not our biggest focus in
the immediate term.
But one thing we do do is, a lot of times,
we will pick a cause based partly on who
we can hire for it.
And we're very big believers in a people-centric
approach to giving.
We believe that a lot of times, if you support
someone who's really good, it makes an enormous
difference, even if the cause is, say, 10%
worse but the person is way more promising.
We originally had a bunch of policy causes
we were interested in hiring for.
And criminal justice became a major cause
for us, and the other ones didn't, because we
found someone we were excited to have lead
our criminal justice reform work (that's Chloe
Cockburn), and we didn't for some of the other
ones.
But I would say, one cause we may get more
involved in in the future, and I hope we do,
is macroeconomic stabilization policy.
Not the world's best known cause but it's...
The idea is that some of the most important
decisions in the world are made at the Federal
Reserve, which is trying to balance the risks
of inflation and unemployment.
And we've come to the view that there's an
institutional bias in a particular direction:
more inflation aversion than is consistent
with a "most good for everyone" kind
of attitude.
And we think some of that reflects the politics
around it and the kind of pressures they come
under.
So we've been interested in that cause for
a while.
It's this not very well-known institution,
not very well understood, that makes these
decisions that are kind of esoteric.
It's not a big political issue, but it may
have a bigger impact on the world economy
and on the working class than basically anything
else the government is doing.
Maybe anything else that anyone is doing.
So I would love to get more involved in that
cause but I think to do it really well, we
would need someone who's all the way in.
I mean, we would need someone who's just obsessed
with that issue 'cause it's very complex.
Yeah.
Do you feel like that's justified on the near-termist
human-centric view or do you think it's got
potentially very long-run impacts too?
I think that one's kind of a twofer.
I haven't totally figured out...
We haven't tried to do the calculations on
both axes, but certainly, it seems like broad-based
growth and lower unemployment...
I think there are a lot of reasons to think
those might lead to better societal outcomes.
Just like better broad-based values, which
are then reflected in the kinds of policies
we enact and the kinds of people we elect.
I do think that if the economy is growing,
and especially if that growth is benefiting
everyone across the economy, if labor markets
are tighter, if workers have better bargaining
power, better lives, better prospects for the
future, I do think that is a global catastrophic
risk reducer in some way.
I haven't totally decided how the magnitude
of that compares to everything else.
But I think if we had the opportunity to go
bigger on that cause, we would be thinking
harder about it.
Okay, terrific.
Okay, so on the current distribution across
causes: is it the case that you think something
could happen, or you could learn something,
or maybe just ethically reflect or something,
such that you'd say, okay, actually we're just
going to go all in on one cause?
Like, is that conceivable, or are you going to
stay fixed?
I think it's conceivable.
We could go all in, but not soon.
I think as long as we think of ourselves as
an early stage organization, one of the big
reasons to not go all in is option value.
In this way we get to learn about a lot of
different kinds of giving, we're building
capacity for a lot of things and so I want
to have that option to change our minds later.
There's a bunch of advantages to being spread
out across different causes.
But one of the big ones is, like, yeah, maybe
in 30 years we'll just be like, "Well, we've
been at this forever and we're not going
to change our minds and now we're going all
in."
And that's something I can imagine, but I
can't really imagine it happening soon.
Yeah, okay.
Terrific.
And then another thing a couple of people
are interested in is the relationship now
for...
What attitude should a small individual donor
have?
A couple of thoughts: one is, well, can
they donate to Open Phil or Good Ventures?
Another is just, well, what's the point in
me donating when there's this huge foundation
that I think is very good?
So I'd be interested in your views on that.
Individual donors cannot donate to Open
Phil at the moment.
We just haven't set that up.
We haven't set up the customer service and
all that stuff and processing that we would
need so we're not taking donations.
There are the Effective Altruist Funds, I
think is what they're called, at CEA.
Some of our staff manage donor-advised funds
that are not Open Phil, that are outside Open
Phil.
But you can give to an animal welfare fund
that is run by Lewis, who's our farm animal
welfare program officer, and he will look
at what he wasn't able to fund with Open Phil
funds that he wished he could have, and he'll
use your funds on that.
So that is an option for individual donors.
I think there's definitely...
Donating can definitely do a lot of good despite
our existence.
I mean, we certainly... the capital we're
working with is a lot compared to any individual
but it's not a lot compared to the size of
the need in the world, and the amount of good
that we can accomplish.
Certainly, given our priorities, given the
weight that we're putting on long-termism
versus near-termism, etcetera, we are now pretty
confident that we just do not have enough
available to fund the GiveWell top charities
to their capacity.
And not just GiveDirectly, but some of the
ones that, according to our analysis, are
an even better deal than GiveDirectly for
bang for the buck.
So, bed nets to prevent malaria, and seasonal
chemoprevention treatment also to prevent
malaria, deworming, some other cool programs.
I mean, you can donate there and you will
get an amazing deal in terms of helping people
for a little money.
I know some people don't feel satisfied with
that, they think they can do better with long-termism
or with an animal-centric view.
I think if you're animal-centric, I mean,
we also currently have some limits on the
budget for animal welfare and you can give
to that Effective Altruist Fund or you can
look at Animal Charity Evaluators and their
recommendations or just give to farm animal
groups you believe in.
Long-termism right now is the one where it's
the least obvious what someone should do if
they're trying to reduce global catastrophic
risks.
But I think there's still things to do.
Among the things going on, we're currently
very hesitant to be more than 50% of any organization's
budget, unless we feel just incredibly well
positioned to be their only point of accountability.
There are organizations where we say we understand
exactly what's going on here and we're fine
to be the only decision-maker on how this
org is doing.
And other orgs, we just don't want them to
be that dependent on us.
So there are orgs working on existential risk
reduction, global catastrophic risk reduction
and Effective Altruist Community building.
Most of them we just won't go over 50%.
And so they need, in some sense, to match
our money with money from others.
I would say generally, no matter what you're
into, there's definitely something good you
can do with your money.
I still think donating is good.
Cool.
So you talked to us a bit in terms of your
grants about technical AI safety that you've
been funding.
One person is asking about AI strategy:
whether you're interested in funding that,
and whether you've done that in the past?
Maybe explain what that means as well.
Sure.
So, AI strategy is a huge interest of ours.
One way of putting it is when we think about
potential risk from very advanced AI, we think
of two problems that interact with each other
and make each other potentially worse.
I mean, it's very hard to see the future but
these are things that are worth thinking about
because of how big a deal they would be.
One of them is the value alignment problem
and that's this idea that it may be very hard
to build something that's sort of, in some
sense, much smarter than anyone who built
it and much better at thinking in every way
and much better at optimizing, and still have
that thing behave itself, so to speak.
And so it may be very hard to get it to do
what its creator intended: to be smarter than
them, but not do something too different from
what they intended.
That's the value alignment problem.
Most people consider it mostly a technical
problem.
A lot of the research that's going on, is
how can we build AI systems that can work
with vague goals?
The kind of goals that humans are probably
going to be capable of giving to AI systems.
So how can they work with vague, poorly defined
goals, and still figure them out in a way
that's in line with the original human intentions?
Also, how can we build AI systems that are
robust?
AI systems that, if they're trained in one
environment and then the environment changes,
don't totally break.
They realize they're dealing with something
new.
The world has changed and now they don't totally
do crazy things.
So that's the alignment problem.
That's technical research.
There's this other side we think about, the
deployment problem, which is what if you have
an extremely powerful general purpose technology
that could lead to a really large concentration
of power?
And the question is sort of who?
I mean, is it the government, is it a company,
is it a person?
Who has the moral right to launch such a thing
and to say hey, I have this very powerful
thing and I'm going to use it to change the
world in some way?
Who has the right to do that, and what kind
of outcomes can we expect based on if different
groups do it?
And one of the things that we're worried about
is that you might have a world where, let's
say two different countries are both trying
to be the first to develop a very powerful
AI.
Because if they can, they believe it'll help
their power, their standing in the world.
And because they're in this race and because
they're competitive with each other, they're
cutting corners and they're launching as fast
as they can.
And so they're not very careful about this
other issue I mentioned.
And so they release a carelessly designed
AI and then the AI ends up unaligned and it
ends up behaving in crazy ways and it has
bugs.
And something that's very intelligent and
very powerful and has bugs could be very,
very bad.
And so, AI strategy in my take, is working
on that deployment problem.
Reducing the odds that there's going to be
this arms race or whatever, increasing the
odds that there's this deliberate wise decision
about what is powerful AI going to be used
for and who's going to make that decision.
And I think there's a lot of really interesting
questions there.
We're super interested in it.
I mean, we've definitely made grants in that
area: we've supported The Future of Humanity
Institute to do work on that.
We're currently investigating a couple of
other grants there.
We've put out a job posting for someone who
wants to think about AI strategy all day,
who wants to do it but doesn't have another
place to go; if we thought they were really
good, we could fund them ourselves.
That's sitting on our website.
And then also, our support of OpenAI, that's
kind of a little bit of both.
And so when we support OpenAI, we're trying
to encourage OpenAI as an organization to
do a lot of technical safety research, but
also to be thinking, hey, we're a company that
might be in the lead on AI.
What are we going to do if we're there?
Who are we going to loop in?
We're going to be in conversations about how
this thing should be used.
What's our position going to be and who should
use it?
We've really encouraged them to build out
a team of people who can be thinking about
that question all the time, and they are working
on that.
And so, this is a major interest of ours.
Yeah, terrific.
And so you mentioned one of the key ways in
which it could be a worry is if there's an
arms race.
Perhaps explicitly, if it's wartime or something.
Would you be interested, and how do you think
about trying to make grants to reduce the
chance of some sort of great power war?
I think it would be really good to have lower
odds of a great power war, just flat out.
Maybe the possibility of advanced technologies
makes it even more valuable to reduce the
odds of a great power war, and that's an
area that we have not spent much time on.
I know there is a reasonable amount of foundation
interest in what looks like the peace area.
And so it's not immediately neglected at first
glance.
That said, a lot of things that look like
they're getting attention at first glance,
you refine your model a little bit, you decide
what the most promising angle on it is, and
you might find that there's something neglected.
So I think it's something I'd love to look
into at some point and it might be an example
of a really awesome cause that we haven't
had the capacity to look into.
And if someone else saw something great to
do, they should definitely not wait for us.
Okay, terrific.
And then another cause that some people were
asking about was suffering of animals in the
wild, whether you might be interested in making
grants to improve that issue.
Sure.
Wild animal welfare, I mean, I think there
are definitely, just off the bat, questions
about whether there's anything you could do
to improve wild animal welfare that wouldn't
cause a lot of other problems, and wouldn't
pose the problems that come from the hubris
of intervening in these very complex ecosystems
that we didn't create and don't understand.
And then another problem with working on wild
animal welfare is we just haven't seen a lot
of shovel-ready stuff.
Something I will say is there's probably a
lot going on where human activity, or some
other factor, is making it the case that there
are animals in the wild, a really large number
of them, that could be a lot better off than
they are.
They could be suffering, could be better off,
maybe could just have better lives if not
for certain things about the ecosystems they're
in, which may be human-caused, maybe not.
And so I see potential there.
But we've got the issue I mentioned, and we've
got just the...
It hasn't been obvious what grants you could
make that might improve the welfare of animals
in the wild, that might accomplish this goal
of making other beings better off without
having a bunch of problems come along with
that.
And it just hasn't been that clear to us,
but we are continuing to talk about it.
Looking into it in the background is something
we may do in the future.
Makes sense.
And then a final question: in the 2017 year
in review, you said, well, we've actually already
been able to see some successes, in particular
in criminal justice reform and animal welfare.
So I'd be interested if you'd just talk a
bit more about that, and then what lessons
you feel you've learned, and whether they
transfer to the things where it's harder to
measure success?
Yeah.
So, we're early.
I think we've only been doing large scale
grant making for a couple of years.
A lot of our grant making is on these long
timeframes, so it's a little early to be asking,
do we see impact?
But I would say we're seeing early hints of
impact.
The clearest case is the farm animal welfare
where we came in, and there were a couple
of big victories that had already been achieved,
like a McDonald's pledge that all their eggs
will be cage-free.
And so there's definitely already momentum.
We came in and we poured gasoline on it, in
a sense.
I mean, we went to all the groups that had
been getting these wins and we went to some
of the groups that hadn't and we said, "Would
you like to do more?
Would you like to grow your staff?
Would you like to just go after these groups?"
And within a year, basically every major grocer
and every major fast food company in America
had made a cage-free pledge, approximately.
And so hopefully, if those pledges are adhered
to, which is a question and something we work
on, but hopefully 10 years from now you won't
even be able to get eggs from caged chickens
in the U.S., it'll be very impractical to
do so.
That'd be nice.
I mean, I'm not happy with how the cage-free
chickens are treated either, but it's a lot
better.
It's a big step up and I think it's also a
big morale win for the movement and creates
some momentum, because from there, what we've
been doing now is starting to build the international
corporate campaigns.
And some of those already existed and some
of them didn't, but we have been funding work
all over the world to try and...
Next time we would love to be part of those
early wins that got the ball rolling, and
not just coming in late and trying to make
things go faster.
And so that's been pretty exciting, and we've
seen wins on broiler chickens, which is the
next step in the U.S. after caged chickens,
or after layer hens.
And then we've seen wins overseas, and so
that's been exciting.
And these corporate campaigns have been
one of the quicker things we've funded.
Because I think a lot of times, with these
corporations, it wouldn't actually cost them
that much to treat the chickens better.
Somebody just has to complain loudly about
it and then it happens; that seems to be how
it goes.
And so in criminal justice reform, we picked
that cause partly because we saw the opportunity
to potentially make a difference and get some
wins on a lot of other policy causes.
And one of the early things we saw was a couple
of bipartisan bills in Illinois that we think
are really going to have quite a large impact
on incarceration there and that we believe
that our grantees, with our marginal funding,
were crucial for.
We've also just seen the beginning of a mini
wave of prosecutors getting elected (head
prosecutors) who have different values from
the normal head prosecutors.
So instead of being all about tough on crime
and whom do we lock up, there is someone,
for example Larry Krasner in Philadelphia,
who has put out a memo to his whole office that
says, "When you propose someone's sentence,
you are going to have to estimate the cost
of that to the state.
It's like $45,000 a year to put someone in
prison, and you're going to have to explain
why it is worth that money to us to put this
person in prison.
You're also going to...
If you want to start a plea bargain and you
want to start it lower than the minimum sentence,
you're going to have to get my permission.
I'm the head prosecutor."
It's a different attitude, it's prosecutors
who are saying, "My goal is not to lock up
as many people as possible, my goal is to
balance costs and benefits and do right by
my community."
And I think there's been a bunch of orgs and
a bunch of funders involved in that.
I don't think any of these are things I would
call Open Philanthropy productions.
They're things that we think we helped with,
we sped along, we got some share of the work
being done.
So, I mean, we're excited about that.
Those are two of the causes that have nearer-term
ambitions.
I can't say I've seen wins on biosecurity
yet, I mean other than the fact that there's
been no pandemic that killed everyone.
But I don't give us much credit for that,
or any.
And then I think in terms of lessons, I mean,
I think we've just seen what's been working
and what's not in those causes where there's
more action and more things happening.
One of the things I think we are seeing is
that our basic setup of giving program officers
high autonomy has been working pretty well.
A lot of these grants are not the ones I
would have come up with, and in some cases
they weren't even ones I was very excited
about beforehand.
We have systems for trying to give our program
officers the ability to sometimes make grants
that we don't fully agree with and try to
reduce veto points, reduce the need for total
consensus and have people at the organization
try and make bets that may not be universally
agreed to or even agreed to by me, or Cari
and Dustin.
And looking at some of the grants that have
been effective, I think that's a good move,
so I will continue to do things that don't
seem right to me, and I'm very excited about
that.
Cool.
Well, we better wrap up now.
But thanks so much Holden for coming on stage
and thank you for your work.
Yeah, thank you.
