So, we are being controlled by the random
outcomes of complex systems, and in this talk
I would like to argue that these outcomes
are not that random if we manage to understand
them better.
And, that may lead us to wonder whether we
are actually controlled, but I won't start
any debate on free will right now and we will
just start delving into complex systems at
first.
Hello, welcome to my talk.
It's actually the longest talk at the conference
just because I asked to talk for a longer
time and they accepted.
My name is Max, I work for the three organizations
down there, and I'm giving this talk because
some people asked me to.
Some people asked me to talk about complexity
because complexity seems to be overlooked,
neglected, in the EA movement, and if you want
to categorize this talk, it is a
methods-driven talk.
It's rather meta and hopefully, it's the first
talk of this kind about this field.
Hopefully, leading to more specific applications
of complexity science and computational modeling
to EA purposes.
And if you want to categorize me, I'm first
and foremost an aspiring effective altruist,
and then I kind of started being interested
in complexity science.
I didn't start with complexity science and
computational modeling and then just apply
pattern matching to EA in order to give a talk
at a beautiful conference like this one.
I genuinely think this method is helpful
for actual EA purposes.
Before actually delving into details, I have
a set of prompts and the first one is that
if we actually look outside in the world,
we can look at different systems and we can
agree that these systems on earth are quite
dynamic.
We can look at social systems of people interacting,
exchanging goods, of wealth being distributed
in a certain way and this keeps changing and
this is dynamic.
We can look into nature, which is not that
different, where you have the number of species
that is changing.
You have the environment that is interacting
with the social environment etcetera.
Things are quite complex, things are dynamic,
we can observe this with our own eyes.
It's not that difficult to agree on this,
but what's interesting is that if we actually
look into more details in the system, we see
that they are dynamic but they also exhibit
some resemblance.
This is an example from a study by scholars
at Utrecht University (it was quite a challenge
to find) that looks at distributions
in a social system and in a natural system,
and finds a profound resemblance
between the two.
So, they looked into wealth distribution in
the world and found out that around 1% of
people own 50% of the resources and they looked
into the Amazon Forest and found out that
about 1% of the species hold 50% of
the biomass, and they were like, “Oh, okay,
we can find this 1%/50% pattern everywhere,
so let's actually look deeper into this.”
And they found that the math that explains
the way these distributions arise is
the same in both systems.
Even though the mechanisms that produce the
distribution are completely different in the
two systems, they managed to find common
properties in both: systems that are dynamic,
systems that are profoundly different.
That is a first prompt that justifies
the existence of a single field to understand
complex systems that are different but share
common properties that explain them.
My second prompt, and this is why I have a
Rubik's Cube in my hand, is a problem-solving
perspective on looking at systems and trying
to understand reality. If someone wants to
understand something, or wants to solve
something, the person can use different
methods, or the person can be quite stupid
about the question and just keep doing the
same thing over and over, and this is literally
an illustration of stupidity, because we will
never manage to solve the Rubik's Cube if we
keep doing the same thing over and over.
The person might use another method that is
so much better, which is just trying to
do random stuff, continuously, without any
learning, and this is so much better because
at some point, even if it takes ages, the
Rubik's Cube will be solved.
And the third method would be to be a little
more systematic about the Rubik's Cube, or about
the problem in general: to actually understand
the gears of the problem, the gears of the
system, and by doing so we can test
some things, learn from them, and be
systematic about the problem.
If you actually think about reality that way,
we can have a set of methods that we like
to apply, but since systems are quite different
from each other, since reality is quite complex,
we might think that the best
method is at least not a fixed one:
that we can change the methods we use.
And that hints back at how to understand
dynamic systems.
The third prompt is this common distinction
between the map and the territory.
The map being our understanding of the reality,
the reality being the territory.
And all maps are wrong and that's okay.
That's actually why they are maps.
Maps are meant to be useful, they are meant
to show what we know, what we don't know,
they're meant to be used.
This is why they are wrong: if our
maps were reality itself, we just couldn't
manage to compute everything.
Our main goal as EAs is to create good maps,
but the problem is that sometimes people tend
to conflate the map and the territory, and
that's a form of intellectual confusion
which just prevents everyone from understanding
anything.
What we want to do is always focus on the map,
leave the territory alone, and enhance the type
of map we have, and hence its quality.
Summary of prompts: first of all, systems are
adaptive and dynamic but may nevertheless
exhibit some resemblance.
How can we understand this?
Second, to solve problems we can use stupid,
random or systematic methods, and fixed methods
are likely to be the worst. Finally, we
all create maps of territories, and sometimes
we confuse them; the ultimate goal is to
avoid that confusion and improve our maps.
So, those are the prompts for my talk and
my talk will cover the following.
First of all, I will go through the field
of complexity science, so that's mainly theoretical.
Then we will switch to computational modeling
to explain why we model, what it is and why
it might be useful. Then I will cover
examples of applications of computational
modeling and complexity science that are
directly related to EA topics, and at the very
end I will give an example of a project that
we are studying in Geneva that relies on this
body of theory and methods.
Complexity science first of all.
Complexity science is a scientific field that
was created by a bunch of scientists.
Most of them were from the Los Alamos National
Laboratory; they gathered in Santa
Fe and wondered, “How can we have a scientific
approach that does a synthesis of
other disciplines while not being reductionist,
while not being a discipline itself, and while
actually fighting against the specialization
that is prevailing in science?”
Their idea was just to synthesize disciplines
at first and they actually went beyond this
and developed a set of methods, a set of properties
and tools to analyze complexity.
But what is complexity?
Everything is complex: our life is complex,
EA is complex, EA ideas are complex. What
can we do about it?
John Holland, who was pretty famous in the
complexity science sphere, says: “Complexity
still remains loosely defined, and that's okay.
However, it does not prevent one from having a
systematic approach to the subject matter.”
Alright, okay, that makes me comfortable.
The second definition is slightly more advanced:
“Complexity characterizes something
with many moving parts, where those parts
interact with each other in multiple ways,
displaying non-linear patterns in the aggregate,
which often is not additive.”
This something is usually called a complex
system.
This is already a better definition, though
still quite meta and a bit blurry,
but at least it gives us some indication.
To actually understand systems, I would like
to use the analogy of a chess game, a chess
board, to go through some of the basics that
we apply in systems theory.
The first one is the concept of state: the
state of a system is what the system looks
like at time T. It's what the chess board
looks like at a certain point in time. The
states then evolve according to a set of rules
that determine how the dynamics function
in the system, continuously creating new states.
For example, this is the current state of
a chess board: on this board we have eight
by eight squares and 32 pieces
of six different kinds that move,
and those moves are the rules that create states.
One particular property of complex systems
is that there is perpetual novelty in the
system.
We can know the starting state, we can know
the rules, but there will always be new patterns,
new ways, and it's almost impossible to predict
an entire chess game based on the state that
we are looking at, at a certain point in
time.
This is kind of this stochastic process that
happens in a complex system.
But the good thing is that it's not entirely
chaotic, we can still understand that and
we understand this by actually looking at
patterns, and that's the key thing in complexity
science.
We try to look at patterns and how they emerge
from the rules that will change the states
and the thing is that if there are no patterns
then it's completely chaotic.
That means that we can't understand anything.
If we can predict every pattern, that means
the system is simple; and if the patterns
keep changing but we find some resemblance
and some recurrences in the patterns, then the
system is complex. Actually, most
systems are complex; very few are simple,
very few are chaotic.
Complexity science, in order to understand
these patterns, mainly looks into the micro-level
dynamics of a system: rather than looking
directly at all the states, we try to deep dive
into the micro-level dynamics that make the
states change.
Here's an analogy. We have Lee, and Lee
is quite happy because it's sunny today. Lee
has an idea, because Sam is over
there, and Lee wants to give a gift to Sam
to seduce Sam, but Lee sees that Sam is not
very happy.
So Lee learns from that, actually adapts
and updates, and Sam then becomes
happy.
Okay, all great, but then Kim sees
this and is unhappy, which makes Sam
update because of this, which in turn makes Lee
update, acknowledge that it was a bad idea,
and adapt their behavior.
The point here is: you have different agents
that have different attributes, different
rules of behavior and preferences, but you
also have an environment that may change.
Would the situation be different if it were
rainy and Lee had an umbrella?
Maybe not at all; maybe Lee would have found
a better strategy; we don't
know.
But that's basically what complexity science
specifically looks at: how different agents
interact. The agents are all different,
heterogeneous; they have rules of behavior in
terms of what they do, how they adapt, how
they learn and how they update.
They have a topology of interaction, which
means that some agents may be connected to
others but not to the agents on the other
side, and there are feedback loops, the
channels through which information travels.
These agents behave in an environment
that is also changing and that includes
stressors and shocks.
For example, take a population
that lives in a democracy with certain
rules: that setting carries its stressors, and
then you may have some
shocks, like elections or petitions, that
change the behavior or the perceptions
of people.
Okay, so this is quite a simple example, but
now what if we look at thousands of agents
constrained in an environment, with the agents'
behavior producing the patterns and the
environment changing itself?
It becomes a bit complicated, and that's why
we will turn to computational modeling at
some point.
You need to formalize all of that to understand
it; with theory alone it wouldn't be possible.
Just as a summary: on the micro-level,
what we look at is adaptive behavior
that keeps changing over time; when
other agents behave in a certain
way, when the environment changes,
the agents adapt.
We do this in order to then look at macro-level
dynamics, which are basically the outcomes of
what happens on the micro-level, and they have
different properties. One of them is what we
call self-organization into patterns.
This is a map of New York, and we see that
there is some segregation, based on people's
preferences, or it might be top down; we
don't know.
In case it's based on people's preferences,
this is self-organization, and there is a quite
famous model to explain this, called the
Schelling model.
It basically shows how two different types of
agents that have only one parameter, their
willingness to live close to similar neighbours,
produce, in the aggregate,
patterns of segregation. In both cases shown,
agents actually prefer to live with people that
are different: an agent preference of 30%
means that agents want at least 30% of their
neighbours to be similar to them,
which means they are fine with 70% of their
neighbours being different, and
the same goes for 45%.
But here we might assume
that there would be no segregation with
only a 30% micro-level preference for
similarity; in fact, in the aggregate, we find
that there is, and therefore the aggregate is
not just the sum of all the parts.
It's something different, greater or
not greater.
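To make this concrete, here is a minimal sketch of a Schelling-style model in Python. The grid size, vacancy rate and move rule are my own illustrative choices, not the exact model from the slide: two agent types on a torus grid, each wanting at least 30% of neighbours to be similar, and unhappy agents moving to a random empty cell.

```python
import random

def run_schelling(size=20, vacancy=0.1, similar_wanted=0.3, steps=50_000, seed=0):
    """Minimal Schelling segregation sketch on a size x size torus grid."""
    rng = random.Random(seed)
    # Fill the grid with two agent types (1, 2) and some empty cells (0).
    cells = [1, 2] * int(size * size * (1 - vacancy) / 2)
    cells += [0] * (size * size - len(cells))
    rng.shuffle(cells)
    grid = [cells[i * size:(i + 1) * size] for i in range(size)]

    def neighbours(r, c):
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr or dc:
                    yield grid[(r + dr) % size][(c + dc) % size]

    def unhappy(r, c):
        me = grid[r][c]
        nbrs = [n for n in neighbours(r, c) if n]
        return bool(nbrs) and sum(n == me for n in nbrs) / len(nbrs) < similar_wanted

    empties = [(r, c) for r in range(size) for c in range(size) if not grid[r][c]]
    for _ in range(steps):
        r, c = rng.randrange(size), rng.randrange(size)
        if grid[r][c] and unhappy(r, c):
            # An unhappy agent moves to a random empty cell.
            er, ec = empties.pop(rng.randrange(len(empties)))
            grid[er][ec], grid[r][c] = grid[r][c], 0
            empties.append((r, c))

    # Average share of same-type neighbours across occupied cells.
    shares = []
    for r in range(size):
        for c in range(size):
            if grid[r][c]:
                nbrs = [n for n in neighbours(r, c) if n]
                if nbrs:
                    shares.append(sum(n == grid[r][c] for n in nbrs) / len(nbrs))
    return sum(shares) / len(shares)
```

Even though each agent tolerates a 70% different neighbourhood, the resulting average similarity of neighbours typically ends up well above the roughly 50% you would see in a random mix.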
Another example of self-organization is
these flocks of birds that we can observe.
Each bird is kind of independent
from the others.
They fly where they want, but there is
this coordination that happens without any
central control, giving birth to these beautiful
shapes.
This is something that complexity science
also tries to understand, and not only in birds.
Something else that happens in complex systems
at the macro-level are these non-linear
effects.
You have agents interacting on the micro-level,
and what you find is that at some point
everything changes, or you have a huge
event appearing from nowhere. But why is that
the case?
Why that is the case is basically the whole
question of complexity science; sometimes
it's just about power laws and non-linear
effects.
This is completely different from what we
sometimes intuitively assume in terms of
linear relationships between two variables.
In complex systems you just don't assume this
kind of thing; you look at what happens
and you try to understand what happened before
the non-linear effect occurred, and that's
where the butterfly effect analogy comes into
play.
There is an example of a model that shows
this, the forest fire model, where you
have only one parameter, the
density of the trees; you have a fire coming
from the left-hand side, and you measure
the percentage of the forest that is burned.
In the model on the left the density is
only 58%, and in the model on the right the
density is 59%.
We could assume the outcome is the same because
the grids look exactly the same at 58%
and 59%, but with this extremely simple model
we can illustrate non-linearity: with
just one more percent of density, we reach
a percentage of forest burned of around 70%,
which is completely non-linear.
What we observe if we vary the density is
that below 58% things stay roughly the same;
you have some outliers, but mostly around
10% of the forest burns. Once you switch to
59%, everything changes.
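As a sketch of this kind of model (with a grid size and seed I chose myself, not necessarily matching the slide), here is a minimal percolation-style forest fire in Python: trees are placed at a given density, fire enters from the left edge and spreads between neighbouring trees, and the burned fraction jumps sharply around a critical density.

```python
import random
from collections import deque

def burned_fraction(density, size=100, seed=0):
    """Fraction of trees burned when fire enters from the left edge."""
    rng = random.Random(seed)
    tree = [[rng.random() < density for _ in range(size)] for _ in range(size)]
    burned = [[False] * size for _ in range(size)]
    # Ignite every tree in the leftmost column.
    queue = deque((r, 0) for r in range(size) if tree[r][0])
    for r, _ in queue:
        burned[r][0] = True
    # Fire spreads to the four neighbouring cells that hold unburned trees.
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < size and 0 <= nc < size and tree[nr][nc] and not burned[nr][nc]:
                burned[nr][nc] = True
                queue.append((nr, nc))
    total = sum(map(sum, tree))
    return sum(map(sum, burned)) / total if total else 0.0

# Sweep the single parameter and watch the abrupt transition.
for d in (0.4, 0.55, 0.59, 0.65, 0.8):
    print(d, round(burned_fraction(d), 2))
```

On any single run the exact transition point is noisy, but averaging over seeds recovers the sharp jump near the site-percolation threshold that the talk describes.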
This is the interest of complexity science,
and one other underlying dynamic is the concept
of power laws. It may be about the probability
of events happening: huge events
have a lower probability of happening, but they
nevertheless do happen. It can also be about
degree distributions in networks, where some
nodes have way more edges than others.
And this is again very, very different from
what is applied in mainstream research: normal
distributions.
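To illustrate the contrast (an illustrative sketch with parameters I picked myself), one can sample from a Pareto-type power law via the inverse CDF and compare its largest event to its mean, against the same comparison for a normal distribution:

```python
import random
import statistics

def pareto_samples(alpha=2.5, xmin=1.0, n=50_000, seed=1):
    """Power-law samples with pdf ~ x**(-alpha) for x >= xmin (inverse CDF)."""
    rng = random.Random(seed)
    return [xmin * rng.random() ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def normal_samples(n=50_000, seed=1):
    rng = random.Random(seed)
    return [rng.gauss(10.0, 2.0) for _ in range(n)]

heavy = pareto_samples()
light = normal_samples()

# In a heavy-tailed sample the single largest event dwarfs the mean;
# in a Gaussian sample the maximum never strays far from the mean.
print(max(heavy) / statistics.mean(heavy))
print(max(light) / statistics.mean(light))
```

The power-law ratio comes out orders of magnitude larger than the Gaussian one, which is exactly the "rare but huge events" property the talk is pointing at.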
This leads us to extreme events. One part
of the study of complex systems
is the study of black swans, those
huge events that we cannot anticipate because
they have such a low probability and are
hard to study. But there is actually this
other concept of the Dragon King, theorized
by Didier Sornette at ETH Zurich, which
basically says: “Power-law distributions are
not the only thing that happens in complex
systems; weirder distributions may
actually arise.” That is mainly because
he theorizes huge events as small events
becoming big over iterations.
He basically says that we may have extreme
events that happen with a fairly
high probability, and he does this kind of
analysis with financial crises and crashes
in the economy.
Two other fundamental properties of complex
systems are non-equilibrium dynamics and
multi-equilibria dynamics.
Sometimes, when we think about a system
and we want to model something, we ask
ourselves, “So what is the equilibrium here,
what is the system trying to optimize for?”
In complexity science we
don't try to do this at all; we just try to
figure out how things evolve without
converging to an equilibrium, merely drifting
over time. Still, the notion
of equilibrium does exist in complexity
science, and sometimes we may observe
multi-equilibria dynamics.
For example, a system that alternates
between two states. Overall, this is
a set of properties and methods that
are completely different from how we tend
to think ourselves, but also from what we find
in mainstream science.
Linking micro and macro-level dynamics:
that's the thing we want to do in complexity
science, and one idea fundamental to this is
that macroscopic properties emerge
from micro-level dynamics, and that the
system's macro properties are different from
the sum of its parts.
This is the concept of emergence that some
people talk about.
“Emergent” is a bit blurry as a term.
It would be more accurate to say that there is
something that emerges from something else,
and this thing that emerges from something
else is not just the addition of everything
we observe on the micro-level; it's
different.
There is this common sentence that the whole
is greater than the sum of its parts.
It's almost that, just different,
because the whole can be smaller than the sum
of the parts, or bigger, etcetera.
And then there is this other thing: the
relationship between micro and macro often
is stochastic, meaning that the starting
state of a system gives you almost no predictive
power over the macro state of the system.
This is why complexity science tries to
explain things rather than predict them:
the argument is that almost any
measurement error on the micro-level
will lead to huge errors on the macro-level.
One example of this is this model of 10 ants
that have the same behavioral rule, that start
from the same state with the same speed;
the three runs last the same duration
and are exactly the same, visually exactly
the same, on the micro-level. But what we
observe in the aggregate are three completely
different scenarios, with 10 ants that are
exactly the same, following the same
behavioral rule.
We observe such complexity just by
running the same model three times.
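The same point can be shown with a toy version (my own construction, not the exact ant model from the slide): ten identical random walkers with the same rule, start and speed, run three times, give three different aggregate outcomes.

```python
import random

def ant_walk(n_ants=10, steps=1000, seed=None):
    """Ten identical ants on a line: same rule, same start, same speed."""
    rng = random.Random(seed)
    positions = [0] * n_ants
    for _ in range(steps):
        for i in range(n_ants):
            positions[i] += rng.choice((-1, 1))  # the one shared behavioural rule
    return positions

# Identical micro-level specification, three runs, three different macro outcomes.
for run in range(3):
    print(sorted(ant_walk(seed=run)))
```

Each run is reproducible given its seed, yet the three macro outcomes differ, which is the stochastic micro-to-macro relationship the talk describes.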
This is just an example of how complex
systems may produce stochastic outcomes,
and this is basically the object of study and
raison d'être of complexity science. One could
argue that economists, for example, do
look into complex systems, that they try
to understand how complex the
economy is, and that's true; systems have
been analyzed throughout history,
but not necessarily with these insights.
As examples of complex systems, just to grasp
the concept a bit more: there is the
ant colony, small agents that
are very, very simple in comparison to the
whole, that create this huge complexity.
There are stock markets, where agents and
group agents behave in certain ways
and interact, leading to non-equilibrium
dynamics and some outlier events.
Policy might be a good example, where you have
agents and group agents interacting, leading
to big decisions having small effects and
small decisions having huge effects. And then,
finally, the field of artificial life is
actually the field where complexity science
has been applied the most, and it is mainly
about how things might evolve and develop
intelligence through learning and evolution.
It might be quite easy to agree with all of
this complexity jazz, because it seems quite
obvious or at least it might be easy to be
interested in this complexity jazz.
But what's not obvious is how we can actually
integrate all of this into traditional research.
And if we look at traditional research, we
find practices and ways of considering
things that we don't find in complexity
science.
For example, in traditional research, what
we've mainly done so far is apply reductionism:
we try to understand the parts of a
system rather than the system as a whole.
The parameters of a system seen in a
reductionist way are specified precisely and
do not necessarily fluctuate.
In traditional research we also tend to assume
the presence of an equilibrium, that things
are maybe static,
that the system is optimizing for an
equilibrium.
We tend to assume that the agents in the system
are rational, and that the agents themselves
are homogeneous.
In complexity science research, we rather
look at the whole system, or almost the whole
system.
We try to look at how things are flexible
and versatile in terms of parameters and
agents; we try to look at the dynamics, at how
things move rather than how things stop.
We try to look at the processes that
may lead to patterns.
We try to look at networks, at how things are
interconnected, and there is this fundamental
question of adaptation that we specifically
look at.
We also suppose that agents are boundedly
rational, that they have cognitive limitations
and are not necessarily super intelligent, and
we also want to specify heterogeneous
agents in the analysis, because that is
what the macro properties will emerge from.
This was my first section, on complexity
science; it's pretty theoretical and pretty
meta.
Now we will switch to computational modeling
to see how the theory itself can actually be
operationalized to explain concrete phenomena.
One question is why model?
Quite often in Geneva, when I say that
I try to model policy making, that I try to
model risks or that I am interested in
resilience, people are really quite interested
and they relate to the topics, because the
topics seem obviously quite interesting. But
when I emphasize the fact that I want to model
things, people say,
“But why model?”,
and they become super cold. What I reply
is, “But you have models in your minds;
that's why I model.
I just try to have more explicit models. The
model that you have in your mind is
an implicit one that you cannot test
and that we can't verify.”
We need to be explicit, and more concretely,
we need to model for different purposes. In
some cases we model to actually predict
something.
This is not really the focus in complexity
science, but in some instances the model is
meant to do so.
We model to explain, which is completely
different from predicting: to explain some
processes, to explicitly say what the
micro-level of a system is.
We model to guide data collection.
Sometimes, on complex topics, we don't really
know what we need to collect data about;
therefore we model first and collect data
after.
We can model to illuminate core dynamics.
For example, if I had asked you to
form a mental model of a forest fire at 58%
density, and the same at 59%,
I'm not sure you would have been able to
imagine the core dynamics of such a system.
Then we can also suggest dynamic analogies,
for example between a natural system and
a social system, as we've seen, and we can
discover new questions by challenging the
model.
We can promote the scientific habit of mind,
because if we only model in our heads we
can do anything we want, but if we actually
put things on paper or in computers, it
becomes a bit more systematic.
We can also bound outcomes into plausible
ranges. Some questions about complex
systems are just about, “Okay,
what are the main patterns, and what
are the outliers?”
And then, basically, it's just about arguing,
“Okay, those are the main patterns,
those are the outliers”; we don't talk
about one specific pattern, but
about the set of them.
We model to eliminate some core uncertainties,
and sometimes to offer crisis options in
near-real-time: if we have a simulation, a
model at hand about a specific topic of high
importance, then we can actually run
simulations when the event happens in order to
see what we can do.
Then we can try to explain some trade-offs,
and suggest some efficiencies in the system
itself.
If we see that a system is dynamic
but not efficient at all, we can try
to change parameters and see how it might
change.
We may want to challenge some theory by
modeling it and then testing the model.
Sometimes, for very complex things that we
can't really collect data on, all we have is
theory, and we can build computational models
and then change the parameters, change some
parts of the theory, and see how things
change in the aggregate.
We can suggest, refine and interpret
experiments. I will talk about this a bit
later, but, for example, let's say we want to
run interventions in developing countries, and
we don't really know what to test or how to
design the RCT.
We can do some modeling beforehand and identify
some dynamics that will then inform which
RCT to conduct.
We can use models for other purposes, like
training practitioners, disciplining the
policy dialogue around a shared model rather
than around people's mental models, and
educating the general public, if we have good,
understandable models at hand.
What's the modeling process; how do we actually
create a model?
What we mainly want to do is translate
what we have in the system into a computational
model. If we look at segregation, we can
look at reality, collect
some data, and based on this state some
variables; this will basically be the input of
the model.
Then we can look at some of the rules that
change the states, for example the interactions
and the rules of behavior of agents, and
specify them in the engine. The engine will
then produce an output that we try to compare,
if possible, with what we observe in reality
at time T plus one, two or three.
The modeling steps are basically the following.
We first build a conceptual
model of the system, described qualitatively,
usually via a diagram. Then we try
to translate the relationships into math: we
can use differential equations to express
the relationship between two variables, two
parameters.
We can use phase transitions to express how
a specific agent may change its own state
or how the whole system may change,
or we may rely on the mathematics of critical
phenomena to explain the emergence of power
laws or non-linear dynamics.
Then, once we have done this, we need to
convert the model into code, choosing the
right algorithms to capture the conceptual
model and the math, and choosing the right
programming language.
Most of the models are in C++ or Python; some
exist in JavaScript and MATLAB too. Then,
at the end, what we need to do is extensive
testing.
We need to check whether the code, the math
and the model reflect what we have, or what
we could get, from reality. Then we need
to test some simple cases, running the model
on something specific,
then moving on to bigger cases,
adding some more complexity to see whether
the model can actually handle it, and at
the end we can validate the model and compare
it to external data sets.
The other question that I receive in Geneva,
after the “why model” question, is, “But why
use computers?”
And my usual reply is, “But you have
computers in your minds, and you do compute
every day, and you have really limited
computation power; that's why we rely on
computers.
Just because they have more computational
power.”
The problem that we have with our limited
computation power is that we can have a really
good, detailed micro-level model in our heads,
but in order to imagine what will happen on
the macro-level, we need to take a lot of
mental shortcuts.
And those mental shortcuts might be really
good, but might also be really bad; it's
unclear and not explicit.
So, what we want to do is rely on
more inputs that we put in a computer
with more computation power, to process
more parameters, more variables, to get
an output that comes directly from the input,
and to be able to run the same input many,
many times to see whether different dynamics
are possible.
If you do this in your mind, it's not
clear that, thinking about the same thing
twice, you will really come to the same
conclusion twice: you may change your beliefs
and your perceptions in between.
For most of the history of science, what
we've done is theory and experiments, going
back and forth between the two with deductive
and inductive reasoning and logic.
Computational modeling just adds computer
simulation to our set of tools.
We can do deduction and induction with
computer simulation, but what we can also do
is generate data, and the data that you
generate with a computational model can then
be used for deduction and induction.
That's how it complements the way science
has been done for many years.
What are these models that actually generate
data?
One very common type of model is the
agent-based model, which may reflect most
closely the theory that we find in complexity
science.
That's basically when you have an environment
that is changing, you have agents in this
environment that are heterogeneous and that
interact, and you run the environment
and the agents and observe what comes
out of it.
This is agent-based modeling.
You can use network-based modeling to see
how things are interlinked.
It can be very useful to model global
pandemics, for example.
So far this has mainly been used for
World Wide Web analysis.
You can use computational game-theoretic models,
which are basically advanced versions of the
game-theoretic models we know, like the Prisoner's
Dilemma, Stag Hunt, or Chicken, but with repeated
iterations and learning processes.
Typically, you start with a population of
N agents with different strategies.
They play by the game rules, so they basically
participate in tournaments, and then the population
replicates: there are selection effects based
on the agents' fitness, and agents can also
change their strategies based on the strategies
used in the tournaments.
That produces the next population, and we
can iterate this and see how agents adapt
their behavior in the game.
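That loop can be sketched in a few lines. The payoff matrix, population size, and mutation rule below are illustrative choices of mine, not taken from the talk: a one-shot Prisoner's Dilemma population with tournament play, fitness-proportional replication, and occasional strategy switching.

```python
import random

# Prisoner's Dilemma payoffs for (my_move, their_move); 'C' cooperate, 'D' defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tournament(population, rng):
    """Every agent plays one round against a random opponent; returns fitness."""
    return [PAYOFF[(move, rng.choice(population))] for move in population]

def next_generation(population, scores, rng, mutation_rate=0.01):
    """Fitness-proportional selection, plus occasional strategy switching."""
    new_pop = rng.choices(population, weights=[s + 1 for s in scores],
                          k=len(population))
    return [('C' if m == 'D' else 'D') if rng.random() < mutation_rate else m
            for m in new_pop]

def evolve(n_agents=100, n_generations=50, seed=0):
    rng = random.Random(seed)
    pop = [rng.choice('CD') for _ in range(n_agents)]
    for _ in range(n_generations):
        scores = tournament(pop, rng)
        pop = next_generation(pop, scores, rng)
    return pop

final = evolve()
# In a one-shot Prisoner's Dilemma, defection tends to take over the population.
```

Running this shows selection at work: defection is the dominant strategy in the one-shot game, so defectors gradually displace cooperators, with mutation keeping a small residue of cooperation alive.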
We can also rely on system dynamics.
This is a system dynamics model of the effect
of a new product entering the economy; here
we specify variables that are not agents moving
around but stocks that change and interact
with each other.
So this is one model, of a new product.
This is another, a model of circular migration
in Southern Europe: people moving through
different stages, the stages interacting with
each other, and in the loops you basically
have people moving.
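The stock-and-flow idea can be sketched numerically. The following is a hypothetical Bass-style diffusion model of a new product; the parameter values and market size are illustrative, not taken from the slide. Two stocks, potential adopters and adopters, are coupled through a single flow.

```python
def bass_diffusion(n_market=1000.0, p=0.03, q=0.38, dt=0.1, n_steps=500):
    """Stock-and-flow sketch of new-product adoption (Bass diffusion).

    Two stocks, potential adopters and adopters, linked by one flow:
    adoption driven by advertising (p) and word of mouth (q).
    """
    adopters = 0.0
    history = []
    for _ in range(n_steps):
        potential = n_market - adopters
        adoption_rate = (p + q * adopters / n_market) * potential  # the flow
        adopters += adoption_rate * dt                             # Euler step
        history.append(adopters)
    return history

history = bass_diffusion()
# Adoption follows the classic S-curve and saturates near the market size.
```

No individual agents appear anywhere: only aggregate variables change and interact, which is precisely what distinguishes system dynamics from agent-based modeling.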
Models can be used for two main purposes.
The first is exploratory or theoretical modeling:
we have a complex system, or a problem that
is extremely complex, we don't have a lot
of data about it, and all we have is theory.
We model to do more theory, to test theories,
or to improve theory.
We can do this by generating data that illustrates
what we mean by the theory.
But we can also do something else that is
more evaluative and data-driven.
For this, you rely on data to start the model,
and then you compare the model's results with
the data.
Here is some detail on how the data can be
used in a model.
Data can be used to seed the model: you can
have data at the micro level, on the agents,
their beliefs, their decisions, etcetera.
You can use data to calibrate the model: once
the model is running, you want it to go in
the right direction, so you might use data
to introduce some path dependency so that
the model represents reality better.
And at the end, you use the same data, and
other data, to validate the results.
If you do all three steps, that's what we
call evidence-driven modeling.
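As a toy illustration of that pipeline (the model, the data series, and the candidate parameter values here are all hypothetical): seed the starting state from data, fit the free parameter on the first part of the series, then validate the calibrated model against held-out points.

```python
def run_model(initial_stock, growth_rate, n_steps):
    """A deliberately tiny model: one stock growing at a fixed rate."""
    series = [initial_stock]
    for _ in range(n_steps):
        series.append(series[-1] * (1 + growth_rate))
    return series

def calibrate(initial_stock, observed, candidate_rates):
    """Pick the growth rate whose run best matches the calibration data."""
    def error(rate):
        run = run_model(initial_stock, rate, len(observed) - 1)
        return sum((a - b) ** 2 for a, b in zip(run, observed))
    return min(candidate_rates, key=error)

# 1. Seed: micro-level data fixes the model's starting state.
observed = [100.0, 105.0, 110.25, 115.76, 121.55, 127.63]  # hypothetical data
initial = observed[0]

# 2. Calibrate: fit the free parameter on the first part of the series.
rate = calibrate(initial, observed[:4], candidate_rates=[0.01, 0.03, 0.05, 0.07])

# 3. Validate: compare the calibrated model against the held-out data points.
predicted = run_model(initial, rate, len(observed) - 1)
validation_error = sum(abs(a - b) for a, b in zip(predicted[4:], observed[4:]))
```

The structure is what matters: seeding, calibration, and validation each consume data in a different way, and only a model that passes all three steps earns the label "evidence-driven".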
With computational modeling, it's sometimes
quite difficult to argue that it constitutes
a form of evidence, because it can be purely
theoretical: you just generate data from something
you specify at the micro level.
But if you actually rely on data to seed the
model, and to calibrate and validate it, then
it may produce outputs that can be qualified
as scientific evidence.
Some applications: this is a graph of the
results of a computational game-theoretic
model of agents adapting their strategies
across games.
This is an agent-based model that tries to
capture different interacting behaviors in
a stock market, which can lead to non-equilibrium
dynamics, with things constantly changing.
This is a system dynamics model of policy
stages: policy-making can be understood as
a set of stages, agenda setting, policy design,
policy formulation, policy implementation,
and evaluation, and this model looks at the
interaction between the stages.
Then this one is a network-based model that
looks at social media big data to try to predict
infectious disease spreading, based on the
network structure in social media and the
data collected from it.
That might all sound quite exciting, and indeed
there is an increasing number of people who
want to do modeling, who do modeling, and
who advocate for more modeling of everything.
But in fact there are quite a lot of pitfalls
and limitations, because a lot of people model
and produce crappy models, so there are some
things we need to be careful about.
So the first thing is that all models are
wrong, and this is why we model: we model
to build better models, because all models
are wrong.
We need to keep in mind that all of these
models are just representations of reality
that we try to improve; they are not fully
accurate.
Another thing is that models cannot prove
a mechanism; they can only disprove a mechanism,
or suggest that a given mechanism is sufficient
to produce a given output.
Once you put something in a computer, you
can generate almost any curve with almost
any kind of mechanism, but that doesn't mean
those mechanisms are right.
You can say that a mechanism is sufficient
to produce such outputs, or you can say that
these mechanisms are not sufficient to produce
the output of interest.
Then, models cannot replace experiments; they
can only complement them, as I already said.
Also, replication of model results is clearly
needed.
Again, you can generate a lot of data with
models; you can claim that you are trying
to understand complexity and show really beautiful
non-linear dynamics, etcetera, but you can
produce all of that quite rapidly with 20
minutes of Python code.
So we want to replicate these results, and
that's another dilemma we have in science:
we just don't have time to do it, but it's
clearly needed.
Models also have a clear problem with external
validity.
Even though you can adapt models with the
same variables to different contexts, it will
always be difficult to claim, based only on
the model results, that the results of this
model will apply to other situations, other
circumstances, etcetera.
Then one important thing is that models are
not computer games.
That's about the third question I get in Geneva:
people say, “Oh, you do simulations of policy-making,
but that's like a computer game,” probably
because I said 'computer', 'agent', and 'environment'
in the same sentence, and because I seem quite
happy about it.
But in fact it's not a computer game at all,
and the goal is not to build computer games,
because computer games are way too difficult
to analyze and understand; they are way too
detailed.
What we want are more accurate maps, but maps
that are not that complex.
What we want are maps, and that's why models
remain maps, that's why models remain wrong,
and that's completely fine.
That was my second section, on computational
modeling, and now I will turn to how all of
this might be useful for EA purposes.
One thing that might be of interest is risk
modeling.
When we talk about huge risks in EA, sometimes
all we have is philosophy.
We don't have much data, because these risks
are so huge that we can't have data on them,
and we have some considerations about people
working on developing the relevant technologies,
etcetera.
But it's very difficult to take a quantitative
approach to these huge risks, because they
just don't happen.
What you can do in risk modeling is simulate
how the risks might spread, and this can actually
enhance our understanding of the subject.
This is a network-based model of how a global
pandemic spreads around the globe.
If you model this and get policy makers to
look at such a model, they might actually
understand that to face a pandemic you may
need to be prepared well in advance, and that
too late is actually too late.
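A self-contained sketch of network-based contagion follows; the network here is a random toy graph rather than real travel or contact data, and the transmission probability is an arbitrary assumption.

```python
import random

def random_network(n_nodes, n_edges, rng):
    """Build a simple undirected contact network as adjacency sets."""
    neighbors = {i: set() for i in range(n_nodes)}
    while sum(len(v) for v in neighbors.values()) // 2 < n_edges:
        a, b = rng.randrange(n_nodes), rng.randrange(n_nodes)
        if a != b:
            neighbors[a].add(b)
            neighbors[b].add(a)
    return neighbors

def spread(neighbors, p_transmit=0.3, n_steps=30, seed=1):
    """Simple SIR contagion: infected nodes infect neighbors, then recover."""
    rng = random.Random(seed)
    infected = {0}  # patient zero
    recovered = set()
    for _ in range(n_steps):
        newly = {m for n in infected for m in neighbors[n]
                 if m not in infected and m not in recovered
                 and rng.random() < p_transmit}
        recovered |= infected
        infected = newly
    return recovered | infected

rng = random.Random(0)
net = random_network(200, 600, rng)
ever_infected = spread(net)
# On a well-connected network the outbreak can reach a large share of nodes
# within a few steps, which is what makes early preparedness matter.
```

Even this crude version generates the data that real-world risks refuse to provide: you can vary the network structure or the transmission probability and watch how quickly "too late" arrives.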
Then this is the result of an agent-based
model of bio-attacks, for example terrorist
attacks spreading an infectious disease around
the world; it's the output of an exploratory
model of this kind.
This is extremely powerful, because facing
risks is very difficult: you have an impact
precisely when the risk doesn't happen, so
it's a bit strange.
These are risks that are likely not to happen,
so you just can't collect a lot of data about
them.
What you can do instead is generate data and
then feed it into the theory, or feed it into
policy practice.
Another application is complementing experiments,
and it's funny, because I had to remove the
graph I had here before.
I'm just not allowed to show it; otherwise
the UK government may not be happy.
In Geneva, I'm involved in a project that
uses an agent-based model to simulate household
behavior in sub-Saharan Africa, specifically
behavior around food production and food consumption.
We try to see how this behavior changes when
there are shocks in the environment, when
you have a drought, floods, a conflict, etcetera,
in order to see malnutrition patterns and
the resulting malnutrition of a household.
Once this is running, the goal is to test
some aid interventions in the model.
That can be extremely helpful when it comes
not just to solving one disease, but to tackling
something far more complex and structural,
like food security, which relates directly
to behavior.
Then you may want a more granular analysis
than RCTs alone, and this is...
Yes, this is how models can be used to complement
RCTs.
When you conduct an RCT, you have a population
that you need to divide in two, into a control
group and a treatment group; at the end you
have some results, and from these you can
infer some cause and effect.
We know this in EA: it's one of the best methods
we have at the moment to determine the effectiveness
of a treatment.
But there is some debate, which you can read
on the EA Forum at the moment, over the claim
that the results of randomized controlled
trials have very, very low external validity.
That means the results of an RCT are quite
difficult to replicate in another system.
What you can do, first of all with agent-based
models, is rely on the same population that
you would analyze, but run the same simulation
twice: the whole population goes through the
treatment once and through the control once,
and you observe the results, which might give
you more granularity.
Plus, you can test other parameters in the
model.
You can test other counterfactual scenarios.
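The twin-run idea can be sketched as follows; the outcome model, effect size, and seeds are entirely hypothetical. The identical population runs through both the treatment world and the control world with the same random seed, so the noise cancels out and every agent's individual effect is recovered exactly.

```python
import random

def simulate(population, treated, seed):
    """Run one 'world' over the same population; `treated` toggles the intervention.

    Each agent's outcome = baseline + noise (+ effect if treated); reusing the
    seed keeps the noise stream identical across the two worlds.
    """
    rng = random.Random(seed)
    effect = 2.0  # hypothetical treatment effect
    return [base + rng.gauss(0, 1) + (effect if treated else 0.0)
            for base in population]

rng = random.Random(42)
population = [rng.gauss(10, 3) for _ in range(500)]  # same agents in both worlds

# Unlike an RCT, no one is held out: every agent experiences both conditions.
treated_outcomes = simulate(population, treated=True, seed=7)
control_outcomes = simulate(population, treated=False, seed=7)
individual_effects = [t - c for t, c in zip(treated_outcomes, control_outcomes)]
```

Because the noise stream is identical across the two runs, you get an individual-level effect for every agent rather than a single average difference, and you can rerun the same comparison with a simulated shock added to one world.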
Let's say we have this treatment that we test
in an RCT.
Let's test exactly the same situation in the
model, but let's also test the same situation
with a shock in between, to see whether something
strange would happen and-
Max, you have one minute left.
Maybe one final point you want to make and
then?
Yeah, so I won't talk about this slide, because
it's redundant with the rest.
My concluding remarks are just the same as
what I've already said, so let's just talk
about this instead.
In Geneva we are consolidating a plan for
using complexity science and computational
modeling for EA purposes, and this plan is
about trying to model existential risks, the
risks that are actually possible to model,
for example a global pandemic that may spread.
That is one side.
The other side is modeling policy-making:
modeling the actors and institutions that
may interact with the specific risks, where
we can specify the agents' beliefs, the agents'
possible decisions, etcetera.
And if you combine both of them, if you actually
merge them, you can test some... you can generate
some results and insights on how policy-making
might change when something about the risk
changes on either side, and based on this
you can make some recommendations for the
x-risk community, for organizations, on how
to use their research insights in a better
way when they communicate with policy makers.
We will publish papers and give talks, but
more importantly, we can conduct educational
workshops for policy communities, and for
this we can rely on serious games, or what
we call scenario exploration systems.
These are basically board games that policy
makers can play, interacting around the scenario
of a risk.
They can make decisions, they can interact,
and then you can reverse-engineer the outcome
and tell them, “Oh, maybe it went in the wrong
direction; let's actually look at some evidence,
at some research, let's do some applied rationality
training, and then iterate the game.”
This can last three hours, can be a highly
intense experience for policy makers, and
can potentially improve policy-making with
respect to the risk over time.
This is how we would use this field specifically
in Geneva, for the policy and x-risk sphere,
and it can also create a network of people
who interact more closely on these matters:
for example, a policy maker at the World Health
Organization working on global pandemics,
interacting with people from x-risk organizations
who can provide expertise on how a risk might
actually unfold.
So, I think I will just say thank you.
Thank you so much.
