Today I'm going to talk to you about biosecurity
as a cause area and how the Open Philanthropy
Project is thinking about it.
I think this is a cause area that EAs
have known about for a while, but haven't
dug into as deeply as some of the other things
that have been talked about at this event
- like AI, farm animal welfare, and global
poverty - even though I think it's important.
I think it's an area where EAs have a chance
to make a huge difference, especially EAs
with a slightly different set of skills and
interests than those required by some of the
other cause areas.
I would love to see more EAs engaging with
biosecurity.
When I say biosecurity, I want to make sure
we're clear on the problem that I'm talking
about.
I'm focusing on what we're calling Global
Catastrophic Biological Risks at the Open
Philanthropy Project.
I'm going to talk to you about how we see
that risk and where we see that risk - where
we think it might be coming from.
I'm going to talk to you about how I think
EAs can make a difference in it.
Then I want to note that I'm not really focusing
too much on the specific work that we've done
and that others have done.
I thought it would be more interesting for
people to get a sense of what this area is
like and the strategic landscape as we see
it before getting into the details of specific
organizations and people, so hopefully that's
helpful for everyone.
I also want to note quickly that this
is an area where a lot less thinking has been
done, over a much shorter period of time, so
to a greater extent everything should be viewed
as somewhat preliminary and uncertain.
We might be changing our minds in the near
future.
When we think about global catastrophic biological
risks, the concern is something that could
threaten the long-term flourishing of human
civilization - something that could impair our
ability to have a really long, really big
future full of joy and flourishing for many
different sentient beings.
That's different from the biological risks
most people talk about, which are often things
like Ebola or Zika.
Ebola and Zika are unbelievably tragic for
the people afflicted by them, but the evidence
doesn't suggest that they have a realistic
chance of causing international civilizational
collapse and threatening our long-term future.
To take this further, we predict that it would
take an extremely large biological catastrophe
to threaten the long-term future.
We're really thinking about something that
kills or severely impairs a greater proportion
of human civilization than either of the world
wars or the 1918 flu pandemic did.
That kind of implies that we're thinking about
fatalities that could range into the hundreds
of millions or even the billions.
There's a lot of really amazing work that
could go into preventing smaller risks, but
that's not really what we've been focusing
on so far.
It's not what I anticipate us focusing on
in the future.
Overall, we're currently ranking the prevention
of global catastrophic biological risks as
a high priority, although I think it's somewhat
uncertain.
I think it's high priority to figure out more
and then we might readjust our beliefs about
how much we should prioritize it.
So what are these risks even like?
We think the biggest risks are from biological
agents that can be easily transmitted - that
can be released in one area and spread - as
opposed to something like anthrax, which is
very terrible in the space where it's released,
but is hard to imagine really coming to afflict
a large proportion of human civilization.
Then within the space of infectious diseases,
we're thinking about whether the riskiest
type of event would be something that happened
naturally - that just came out of an animal
reservoir - or something that was deliberately
done by people with the intention of causing
this kind of destruction.
Or it might be the middle ground of something
that might have been accidentally released
from a laboratory where people were doing
research.
Our best guess right now is that deliberate
biological attacks are the biggest risk,
accidental releases are somewhere in the middle,
and natural risk is low.
I want to explain why that is because I think
a lot of people would disagree with that.
Some of the reasons I'm skeptical of natural
risks are, first of all, that they've never
really happened before.
Humans have obviously never gone extinct
because of a natural pathogen, otherwise we
would not be here talking.
It doesn't seem like human civilization has
come close to the brink of collapse because
of a natural risk, especially in the recent
past.
You can argue about some things like the Black
Death, which certainly caused very severe
effects on civilization in certain areas in
the past.
But this implies a fairly low base rate.
We should think that in any given decade, there's
a relatively low chance of some disease just
emerging that could have such a devastating
impact.
Similarly, it seems to be very rare for a
pathogen to emerge that drives a nonhuman
animal species extinct.
I know of one confirmed case in mammals;
I don't know of any others.
This scarcity of cases also implies that this
isn't something that happens very frequently,
so in any given decade, we should probably
start with a prior that there's a low probability
of a catastrophically bad natural pathogen
occurring.
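As a purely illustrative aside (my own toy sketch, not something from the talk): one simple way to turn "this has essentially never been observed in recorded history" into a rough per-decade prior is Laplace's rule of succession. The figure of 500 decades below is an assumed, simplified count of recorded history, used only for illustration.

```python
# Toy illustration (my assumption, not from the talk): Laplace's rule of
# succession gives a rough prior for an event that has never been observed.

def laplace_estimate(observed_events: int, trials: int) -> float:
    """Posterior mean probability per trial under a uniform prior: (k + 1) / (n + 2)."""
    return (observed_events + 1) / (trials + 2)

# Assume ~5,000 years (~500 decades) of recorded history with zero observed
# civilization-collapsing natural pandemics (a simplified, illustrative count).
per_decade_prior = laplace_estimate(observed_events=0, trials=500)
print(f"Rough prior per decade: {per_decade_prior:.4f}")  # ~0.0020
```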
Also, we're in a much better situation than
we were in the past and than animals are in
some ways, because we have advanced biomedical
capabilities.
We can use these to create vaccines and therapeutics
and address a lot of risks from different
pathogens that we could face.
Then finally, in a somewhat different vein,
people have argued that there's some selection
pressure against a naturally emerging, highly
virulent pathogen, because when pathogens are
highly virulent, their hosts often die quickly,
or rest at home before they die, and they're
not out in society spreading it the way you
might spread a cold if you go to work while
you have one.
Now, before you become totally convinced by
that, I think there are some good countervailing
considerations about humanity that make it
more likely that a natural risk could occur
now than in the past.
For example, humanity is much more globalized,
so it might be the case that in the past there
were pathogens that were potentially deadly for
human civilization, but humans were so isolated
that they didn't really spread and weren't a
huge deal.
Now almost anything could spread pretty much
around the globe.
Also, civilization might be more fragile than
it used to be.
It's hard to know, but it might be the case
that we're very interdependent.
We really depend on different parts of the
world to produce different goods and perhaps
a local collapse could have implications for
the rest of the globe that we don't yet understand.
Then there's another argument one could bring
up, which is that if you're so worried about
accidental releases or engineered deliberate
attacks, there's also not very much evidence
of those being a big deal.
I would agree with this argument.
There haven't been very many deliberate biological
weapon attacks in recent times.
There's not a strong precedent.
Nonetheless, our best guess right now is that
natural risks are pretty unlikely to derail
human civilization.
When we think in more detail about where catastrophic
biological attack risks come from, we can
consider the different potential actors.
I don't think we've come to a really strong
view on this, but I do want to explain the
different potential sources.
One possible source is states: in bioweapons
programs, states could develop pathogens as
weapons with the potential to be destructive.
Small groups, such as terrorists or other
extremists, might be interested in developing
these sorts of capabilities.
There are also individuals with an interest -
people working in various sorts of labs in
academia, in government, and on their own -
as well as DIY biohacker communities that do
different sorts of biological experimentation.
Those are the different groups that might
contribute to catastrophic biological risk.
There are also different kinds of pathogens,
and here - where our thinking is even more
preliminary - we're especially worried about
viral pathogens, because there's proven potential
for high transmissibility and lethality among viruses.
They can move really fast.
They can spread really fast.
We have fewer effective countermeasures against
them.
We don't have very good, efficacious
broad-spectrum antivirals, which means that
if we faced a novel viral pathogen, we wouldn't
have a huge portfolio of tools we could expect
to be really helpful against it.
I've created this small chart that I think
can help illustrate how we divide up these
risks.
On the horizontal axis there's a dichotomy of
whether the pathogen is more natural or more
engineered, and on the vertical axis a dichotomy
of whether the release was natural (or accidental)
or deliberate.
The reason I'm flagging these quadrants is
that I think there are two different ways
to increase the destructiveness of an attack.
One is to heavily engineer the pathogen,
and the other is to optimize the actual attack
type.
For example, if you released a pathogen at
a major airport, you would expect it to spread
more quickly than if you released it in a
rural village.
Those are two different ways in which you
can become more destructive, if you're interested
in doing that.
Hopefully you're not.
My current guess is that there's a lot more
optimization space in engineering the actual
pathogen than in the release type.
There seems to be a bigger range, but we're
not super confident about that.
Here's where we see the risk coming from.
Advances in gene editing technology are a
really major source of risk.
I think they've created a lot more room
to tinker with biology in general: they lower
the resources and levels of knowledge required,
and to a greater degree than before, they make
it possible to create novel pathogens that are
different from what exists in nature and to
understand how they work.
This has amazing potential to do a lot of
good but it also has potential to be misused.
It's becoming a lot cheaper to synthesize
DNA and RNA, to get genetic material for different
pathogens.
This means that these capabilities are becoming
more widely available, just because they're
cheaper.
Screening orders and verifying buyers is becoming
a bigger proportion of the costs, which means
companies are more and more incentivized to
stop screening sales and verifying buyers.
Biotech capabilities are becoming more available
around the world.
They're spreading to different areas, to new
labs.
Again, this is mostly a sign of progress.
People are gaining access to technology; places
in Asia and around the world are developing large
groups of very talented scientists, and that's
really great for the most part, but it means
there are more potential sources of risk than
there were in the past.
Then finally, all of those things are happening
much faster than governments can possibly
hope to keep up with, and faster than norms can
evolve. That leads to a situation where the
technology has outpaced our society and our
means of dealing with risk, and that increases
the level of danger.
Now I'll compare and contrast biosecurity
with AI alignment, because I think AI alignment
is something people are much more familiar
with.
It might be helpful to draw attention to the
differences to get people up to speed.
I think that, overall, there's a smaller risk
of a negative far-future trajectory change
from biosecurity; it seems like a smaller
risk to me.
When it comes to addressing biosecurity risk,
there are fewer potential upsides.
With an AI, you can imagine that if it develops
really well, it has this amazing potential
to increase human capabilities and cause human
flourishing.
With biosecurity, we're basically hoping that
just nothing happens.
The best outcome is just nothing.
No attacks occur.
No one dies.
Society progresses.
In the case of AI alignment, maybe somebody
develops an aligned AI, which would be great.
But for biosecurity, it's really about preventing
downside risks.
More of the risk here comes from people with
actively bad intentions, as opposed to people
with good intentions or people who are just
interested in the research - especially if
you agree with me that deliberate attacks are
the most likely source of concern.
In biosecurity more than AI, I think there
are many more relevant actors on both sides,
as opposed to there being a few labs with
a lot of capabilities in AI.
It could be the case that we end up with a
situation in biosecurity where there are millions
of people capable of doing something that
would be pretty destructive.
Also, we can unilaterally develop countermeasures
against their attacks, so there's less connection
between the sources of the risk and the sources
of the risk reduction - they're more divorced
from one another - and there are more possible
actors on the sides of both attack and defense.
I think that the way the Open Philanthropy
Project is seeing this field right now is
somewhat different from how most people are
seeing it.
Most of the discussion in the field of biosecurity
is focused on much smaller risks than the
ones that we're worried about.
I think discussion of things with greater
than one million fatalities was kind of taboo
up until very recently.
It's been difficult for us to find people
that are interested in working on that kind
of thing.
I think part of the reason for that
is that it's been really hard to get funding
in the space, so people want to make sure
their work seems really relevant.
And since small attacks and small outbreaks
are more common, a good way to make your work
seem more relevant is to focus on those.
There's ongoing debate in the field about
whether natural, deliberate or accidental
releases are the biggest risks.
I don't think people are synced up on what
the answer to that question is.
I don't think everyone agrees with us that
deliberate is mostly the thing to worry about.
Then people are really trying to walk this
tightrope of regulating risky research while
not regulating productive research, maintaining
national competitiveness, and encouraging
productive biotech R&D.
Given all of that, we have some goals in this
space.
They're kind of early goals.
They won't be sufficient on their own.
They're mostly examples, but I think they
could get us pretty far.
The first thing is that we really need to
understand the particular relevant risks.
I'm keeping this very high level for now,
partly because there's not a lot of time,
partly because I think that talking about
some of these risks publicly is not a productive
thing to do, and also because we're pretty
uncertain about them.
I think it would be really helpful to have
some people dig into the individual risks.
Think about what one would need to do in order
to pull off a really catastrophic bio attack.
How far out is that from being a possibility?
What sorts of technological advancements would
need to occur?
What sorts of resources would one need to
be able to access in order to do it?
If we can answer these questions, we can have
a sense of how big catastrophic biosecurity
risks are and how many actors we need to be
worried about.
Understanding particular risks will help us
prioritize things we can do to develop
countermeasures.
We want to support people and organizations
that increase the field's ability to respond
to global catastrophic biological risks.
The reason for that is that the field of
biosecurity has lacked funding for a long time.
A lot of people have left the field.
Young people are having a very difficult time
going into the field.
Hopefully that's changing, but it's still
a pretty dire situation, in my view.
We want to make sure that the field ends up
high quality, with lots of researchers who
care about the same risks we care about, so
we're generally very enthusiastic about
supporting people who show signs of moving
in that direction.
Then finally, we want to develop medical
countermeasures for the things that we're
worried about.
We've started having our science advisors
look into this.
We have some ideas about what the worst risks
are, and if we can develop countermeasures
in advance and stockpile them, I think we
would be much better prepared to address risks
when they come up.
Finally, I want to talk to you a little bit
about what I think EAs can do to help.
I see a lot of potential value in bringing
parts of the EA perspective to the field.
Right now there aren't a lot of EAs in biosecurity,
and I think that the EA perspective is distinctive
and has something special to offer.
I think some of the really great things about
it are, first of all, familiarity with
the idea of astronomical waste and the value
of the far future.
That idea seems somewhat hard to internalize -
it's a bit weird and counterintuitive and
philosophical - but a lot of EAs find it compelling,
while a lot of other people find it wacky or
haven't really heard about it.
I think having more concern about that pool
of value and those people in the future who
can't really speak for themselves could do
the field of biosecurity a lot of good.
Another thing that I think is amazing about
the EA perspective, is comfort with explicit
prioritization, the ability to say, "We really
need to do X, Y, and Z.
A, B, and C are lower priority.
They'll help us less.
They're less tractable.
They're more crowded.
We should start with these other things."
I think right now, the field doesn't have
a clear view about that.
There's not a very well thought out and developed
road map to addressing these concerns.
I think EAs would be good at helping with
that.
Finally, I think a lot of EAs have a skepticism
of established methods and expertise.
That's great, because I think that's actually
necessary in almost every field, especially
in fields that involve a complicated interplay
of natural science and social science.
I think that there's a lot of room for things
to be skewed in certain directions.
I haven't seen too much harmful skew, but
guarding against it would be really helpful.
There's some work going on at the Future of
Humanity Institute that we're very excited
about.
It seems like there's a lot of low hanging
fruit right now.
There are a lot of projects that I think an
EA could take on and they'd be pretty likely
to make progress.
I think biosecurity progress is more a
matter of pulling information together and
analyzing it, and less a matter of pure
insight.
I think that you should consider going into
biosecurity if you are an EA concerned with
the far future, who wants to make sure that
we all get to enjoy our amazing cosmic endowment,
and if you think that you might be a good
fit for work in policy or in the biomedical
sciences.
This is an area where I think a lot of
safety might come from people not overhyping
certain sorts of possibilities as they emerge,
at least until we develop countermeasures.
It's important to have people who are okay
with the idea of doing a lot of work and then
not sharing it very widely and not making it
totally open, because openness could actually
be counterproductive and increase risk.
That's what I hope that people will be willing
to do.
I hope that we find some EAs who want to move
into this field.
If you feel like you're interested in moving
into this field, I would encourage you to
reach out to me or grab me sometime at this
conference and talk about both what you'd
like to do and what might be stopping you
from doing it.
In the future we might write more about how
we think people can get into this field and
be able to do helpful research, but we haven't
really done that yet, so in the meantime,
I really hope that people reach out.
Thank you so much and I'll take your questions.
Okay, so we've got a number of questions that
have come in and I'm just gonna try to rifle
through them and give you a chance to answer
as many as we can.
You emphasized the risk of viral pathogens.
What about the, I think, more well known if
not well understood problem of antibiotic
resistance?
Is that something that you're thinking about
and how big of a concern is that for you?
Yeah.
I think that's a good question.
The Open Philanthropy Project has a report
on antibiotic resistance that I encourage
you to read if you're curious about this topic.
I think it's a really big concern for dealing
with conventional bacterial pathogens.
Our best guess is that it's not such a special
concern for thinking about global catastrophic
biological risks. First of all, there's
already immense selection pressure on bacteria
to evolve resistance to antibiotics,
and while that mostly has really negative
implications, it has one positive implication:
if there's an easy way to do it, it's likely
to happen naturally first, not through a
surprise attack by a deliberate bad actor.
Then another reason that we're more worried
about viruses than bacteria is their higher
transmissibility and the greater difficulty
we have disinfecting things contaminated with
viral pathogens.
So, I don't think that antibiotic resistance
will be a big priority from the far-future
biosecurity perspective.
I think it's possible that we're completely
wrong about this.
I'm very open to that possibility, and what
I'm saying is pretty low confidence right
now.
Great.
Next question.
To what extent do small- and large-scale bio-risks
look the same, and to what extent do the
countermeasures for those small- and large-scale
risks look the same, such that you can collaborate
with people who have been more in the traditional
focus area of the smaller-scale risks?
That's an interesting question.
I think it's a complicated one, and a simple
answer won't capture it very well.
When I think about the large scale risks,
they look pretty different for the most part
from conventional risks, mostly because they're
highly engineered.
They're optimized for destructiveness.
They're not natural.
They're not something we're very familiar
with, so that makes them unlikely to be things
that we have prepared responses to.
They're likely to be singularly able to overwhelm
healthcare systems, even in developed countries,
which is not something that we have much experience
with.
As for the second part of the question - the
degree to which efforts to address small-scale
risks help with large-scale risks and vice
versa - I think that's somewhat of an open
question for us, and as we move toward
prioritizing in the space, we'll have a better
view.
There are some actions we can take - for example,
advocacy to get the government to take biosecurity
more seriously - that might help equally with both.
On the other hand, I think developing specific
countermeasures, if we move forward with
that, will be more likely to help only with
large-scale risks and be less useful for
small-scale risks, although there are
counterexamples I'm thinking of right now, so
that's definitely not an accurate blanket
statement.
When you think about these sort of engineered
attacks that could create the largest scale
risk, it seems like one thing that has sort
of been on the side of good, at least for
now, is that it does take quite a bit of capital
to spin up a lab and do this kind of bioengineering.
But, as you mentioned, stuff is becoming cheaper.
It's becoming more widely available.
How do you see that curve evolving over time?
Right now, how much capital do you think it
takes to put a lab in place and start to do
this kind of bad work if you wanted to and
how does that look five, ten, twenty years
out?
I don't think I want to say how much it takes
right now, or exactly what I think it will
take in the future.
I think the costs are falling pretty quickly.
It depends on what ends up being necessary,
so for example, the cost of DNA synthesis
is falling really rapidly.
It might be the case that that part is extremely
cheap, but actually experimenting with a certain
pathogen that you think might have destructive
capability - for example, testing it on animals
- might remain very expensive, and it doesn't
seem like the costs of that part of a potential
destructive attack are falling nearly as quickly.
Overall, I think costs will continue to fall,
but I would guess that the decline plateaus
sometime in the next few decades.
Interesting.
Does biological enhancement fall within your
project at all?
Have you spent time considering, for example,
enhancing humans or working on gene editing
on humans and how that might be either beneficial
or potentially destabilizing in its own way?
That's not something that we've really considered
a part of our biosecurity program.
Fair enough.
How interested is Open Philanthropy Project
in funding junior researchers in biosecurity
or biodefense?
And relatedly, which would you say is more
valuable right now?
Are you looking more for people who have kind
of a high level strategic capability or those
who are more in the weeds, as it were, of
wet synthetic biology?
Yeah.
I think that right now we'd potentially be
excited about EAs who are interested in either,
depending on their goals in this field, the
extent of their value alignment, and their
dedication and particular talents.
I think both are useful.
I expect that specialization - for example,
either in policy or in biomedical science -
will possibly be more helpful in the long term.
I'm hoping that we'll gain a lot of ground
on the strategic high level aspects of it
in the next few years, but right now I think
both are sorely needed.
Next question.
For someone whose education and skills have
been focused on machine learning, how readily
can such a person contribute to the type of
work that you're doing and what would that
look like if they wanted to get involved?
I don't know - I've never seen anyone try.
I think it would be possible, because I think
someone with no special background in this
area can, in general, become really productive
and helpful within a relatively short time
scale, and I don't see a machine learning
background as putting anyone at a particular
disadvantage.
It would probably put you at somewhat of an
advantage, although I'm not sure how.
I think that right now, the best way to go
would probably be just to get a master's or
PhD in a related field and then try to move
into one of the relevant organizations, or
try to work directly at one of them, like
our biggest grantee in biosecurity, the
Center for Health Security.
And for that, I think that probably having
a background in machine learning would be
neither a strong drawback nor a huge benefit.
That's about all the time that we have for
now, unfortunately.
But will you be at office hours after this?
I don't have office hours planned, actually, but feel free to grab me if you want to chat more.
