MALE SPEAKER: Thanks for
coming, everybody.
Nassim Nicholas Taleb has
devoted his life to problems
of uncertainty, probability,
and knowledge.
He spent two decades as a
trader before becoming a
philosophical essayist and
academic researcher.
Although he now spends most of his time either working in
intense seclusion in his
study or as a flaneur--
I love that word--
meditating in cafes across the
planet, he is currently
distinguished professor of risk
engineering at New York
University's Polytechnic
Institute.
His main subject matter
is decision
making under opacity--
that is, a map and a protocol
on how we should live in a
world we don't understand.
We'll be selling books after
the talk and Mr. Taleb will
sign them for you.
He is going to pause for
questions occasionally during
the talk as well as after, but
please go to the mic if you
want to ask a question.
Please welcome Nassim Nicholas Taleb.
NASSIM NICHOLAS TALEB: Thank
you for inviting me.
So this is actually the first
time I speak at a place
knowing what it's about, because
I've watched your
Google talks on YouTube.
They're quite interesting
because they're long--
no, they're one hour, which
means we have an hour and 10
minutes, because there's
always a bonus that the speaker likes to give himself.
So thanks for inviting me.
An author should not give you
a substitute for his book.
A book is a different
product--
it's not a talk.
So in other words, it's not
something that I can
substitute.
So what I'm going to
do is mostly--
what I have here as slides is
things to scare you a little
bit, just graphs of things that cannot be captured fully in a verbal conversation. And I'm going to show you graphs of what this idea of anti-fragile is about.
So I will speak for about, what,
20, 25, minutes, which
means probably 30, and then
we're going to have a Q&A. But
if you're extremely angry with
what I have to say, do not
hesitate and interrupt.
Raise your hand.
If you have severe
disagreements, disagreement is
always very good.
This is why these things should
be different from the
book, because with a book, the
author rarely disagrees with
himself, you see.
Whereas here, you can have
disagreements and they're welcome.
OK, so we start.
If you asked your mother or
cousins, someone who hasn't
heard about this book, what's
the opposite of fragile, what
do you think the answer
would be?
AUDIENCE: Robust.
NASSIM NICHOLAS TALEB: Robust.
What else?
AUDIENCE: Stout.
NASSIM NICHOLAS TALEB: Stout,
durable, solid, adaptable,
resilient, what else?
OK, it's not, simply because
if you're sending--
let's look at the exact
mathematical opposite.
I don't know-- you guys
work for Google.
What's the opposite
of negative here?
Positive.
It's not neutral.
OK, very good.
So what's the opposite
of convex?
AUDIENCE: Concave.
NASSIM NICHOLAS TALEB:
Concave.
Actually, the negative of convex
is concave, you see.
Very good.
So the opposite of robust
cannot possibly be--
the opposite of fragile cannot
possibly be robust.
If I'm sending a package to
southwestern Siberia, and it's
a wedding, and you're sending
a wedding gift--
so we have champagne flutes.
What do you write
on the package?
AUDIENCE: Fragile.
NASSIM NICHOLAS TALEB: Fragile. And underneath?
Do we explain it to the
Russian inspector--
what do you say?
Handle with care.
So what is the opposite
of handle with care?
If you're going to send the
exact opposite package, what
would you write on it?
AUDIENCE: [INAUDIBLE].
NASSIM NICHOLAS TALEB:
Exactly.
Please mishandle.
So something that is fragile,
if you map it properly
mathematically, you'd realize
that the fragile is what does
not like disorder.
It doesn't want mishandling, it
doesn't want volatility, it
wants peace and predictability.
So the opposite of that would be something that loves volatility.
What I'm saying may sound
interesting and new, but it's
not to the option traders I
worked with, because I was an
option trader for 20 years
before becoming whatever,
calling myself all
these names.
I was just a simple options
trader before, but my publishers now want to hide it.
But the fact is, option traders understand the world in two different dimensions. There are things that don't like volatility and things that like volatility, and it's a very, very bipolar view of the world.
You almost have nothing
in between.
And effectively, all I did is
generalize this idea, to map
it to things that option
traders-- because option
traders, all they do is they
drink and they trade options,
so they don't have exposure
outside of that narrow field.
So all I'm doing is representing them, taking the idea outside.
And we can do a lot of things,
because effectively, fragility
is something I can measure and
anti-fragility is something I
can measure.
But risk, you can't
really measure--
unless, of course, you're at
Harvard or Stanford or one of
these places where they have the illusion that they can measure risk.
But in reality, we can't
measure risk.
It's something in the future.
I can measure fragility.
And let's see how we
can generalize.
The graph you have here shows
you very simply a fragile
payoff where nothing happens
most of the time.
I don't know, you don't have
porcelain cups here.
Otherwise, we would have
had an experiment.
But nothing happens
most of the time.
But when something happens,
it's negative.
You see?
So this is a fragile,
and everything
fragile has this property.
To give you a hint, we can
generalize it to medicine
where you take pills that give
you very small benefits.
The benefits are small or
nonexistent, and the harm is
large and rare and often not
seen in the past history of
the product.
That's a fragile.
So I take a pill, it gives me
small benefits, and then 10
years later, you realize that
it gave you cancer or some
hidden disease that
nobody saw.
It's the same payoff
for fragile.
Visibly, the robust will
have this payoff--
it doesn't care--
and the anti-fragile will have
this payoff where harm is
small and the big variations are
positive, are favorable.
So this is sort of like the
idea or the general idea.
Once you link fragility to
volatility, you can do a lot
of things, and let me show
you exactly the link.
I'm going to show you a graph
that sort of explains it in
graphical terms.
Everything fragile has to have
disproportionate harm--
in other words, concave,
nonlinear.
I'll show you what--
we'll talk about concave
in a few minutes--
nonlinear harm with respect
to an event size.
Let me explain.
I mean, you guys at Google and
particularly in this part of
California, are pretty
special.
But if you jump 10 meters--
that's 33 feet--
I guess people here in
Palo Alto, in this
area, they die, no?
Very good.
No?
They would die.
Now if you jump 100 times 10
centimeters, you survive.
No?
It means your exposure is
not linear to harm.
You're harmed a lot more jumping
10 feet than if you
jump 10 times one foot.
So we have acceleration
of harm.
If I smash one of the Maseratis
you see in Palo Alto
against a wall at 100 miles per
hour, I'm going to damage
it a lot more than if I
smash it 100 times at
one mile per hour.
You agree?
It means that there is disproportionate harm, an acceleration of harm. It's a second order effect. Fragility is a second order effect.
And it has to be so, because
if harm were linear, I'd be
harmed just walking
to the office.
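[A worked version of the arithmetic behind this, assuming for illustration that harm grows with the square of the event size:]

    If harm is $h(x) = c\,x^2$ for an event of size $x$, one 10-meter fall gives
    $h(10) = 100\,c$, while a hundred 10-centimeter falls give
    $100 \times h(0.1) = 100 \times 0.01\,c = c$.
    The single large event does a hundred times the damage of the same total
    distance taken in small doses; that acceleration is the second order effect.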
This is a central idea.
Anything that has survived and
is in existence today is
harmed a lot more by a 10%, say, move in the market, a 10 meter jump, or whatever it is, than by a tenth of that taken 10 times.
It means it has to be concave
to a source of stressor.
In my book, I give the-- because this talk is more sophisticated than the contents of the book.
In the book, I have
to give a story.
And it's in the Talmudic
literature that there is a
king who had to punish his son,
and he had to crush him
with a big stone.
And given that he was both a
king and the father, he had
the dilemma that was solved by a
local counselor who told him
it was very simple.
Crush the big stone into pebbles and then throw the pebbles at him. That's the definition
of fragility.
Anything that survived,
conditional on something
having survived, has to be
harmed disproportionately.
You see, the larger the stone,
the more the harm.
With this, you can see why large
becomes vulnerable to
shocks, because, for example, a
100 million pound project in
the United Kingdom where we have
data had 30% more cost
overruns than a five million
pound project.
Now with this, not only do
we have a definition of
fragility, but we have a robust
way to measure it.
How?
Simple.
It's the acceleration that
allows me to detect the
fragility, the acceleration
of harm.
If I have a bad ruler, I
can't measure a child,
the height of a child.
Do you agree?
It's very hard.
But I can tell how fast he's
growing in percentage.
Do you agree?
So I don't have to have a great
measuring tool for fragility.
All I need is to detect the second order derivative, the acceleration, because fragility is in the acceleration.
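[A minimal sketch, in Python, of the second-derivative test he describes; the profit function below is a hypothetical stand-in for any exposure you can probe at three points:]

    def fragility_sign(payoff, x, dx):
        # Finite-difference second derivative of a payoff around x.
        # Negative curvature (concave): accelerating harm, fragile.
        # Positive curvature (convex): accelerating gains, anti-fragile.
        curvature = payoff(x + dx) - 2 * payoff(x) + payoff(x - dx)
        if curvature < 0:
            return "fragile"
        if curvature > 0:
            return "anti-fragile"
        return "robust (locally linear)"

    def profit(sales):
        # Hypothetical firm whose losses accelerate as sales fall below 100.
        return sales - 0.05 * (100 - sales) ** 2

    print(fragility_sign(profit, 100, 10))  # prints "fragile"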
Now that I gave you the
difficult stuff, let me talk
about my book.
Everything is posited on this idea that fragility is in the concave-- and if I can figure out how to work this. Hold on. This is a concave. The concave is fragile, and you can see the benefits. And the convex is anti-fragile.
To give you an idea of why
the concave is fragile--
if you have a piece of
information that your
grandmother spent two days at
70 degrees Fahrenheit as the
sole information, you would
assume that your grandmother's
very happy, no?
That's a perfect temperature
for grandmothers.
But then the second order-- ah,
your grandmother spent the
first day at zero degrees and
the second one at 140 degrees
for an average of 70 degrees, I
think you would be thinking
about the inheritance and all
the things that come with a
funeral, no?
The fragile is what does not like negative second order effects, and the anti-fragile, therefore, is what likes variation and likes these second order effects.
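[A toy numeric version of the grandmother example; the comfort curve is made up, but any concave response gives the same ranking:]

    def comfort(temp_f):
        # Hypothetical concave response, peaking at 70F.
        return 100 - 0.05 * (temp_f - 70) ** 2

    steady = comfort(70)                      # two days at 70F
    swings = (comfort(0) + comfort(140)) / 2  # one day at 0F, one at 140F

    print(steady)  # 100.0
    print(swings)  # -145.0: same average temperature, no more grandmother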
Let me try to work this because
I'm a little confused
about this, how to work
the computer.
I figured out how to work it.
So let's stop with the
graphs and let me
talk about the book.
Now you're confused enough
but intrigued.
So let me talk about my book
after I showed you these
technical definitions.
This book--
I realized that this property
of anti-fragility, once you have the definition of fragility and then its opposite, was misunderstood in the discourse.
Like, when governments want
stability, they shoot for
perfect stability, but something
that is organic
requires some amount
of volatility.
It's the exact opposite
of the grandmother.
There's a mathematical property
called Jensen's
inequality that tells you that often things gain from their variability.
There are a huge amount of
phenomena like that.
In other words, you do a lot
better, yourself, if you spend
an hour at 50 degrees and an
hour at 80 degrees than if you
spent two hours at 65 degrees,
for example.
In Jensen's inequality,
anything convex--
actually, this is a graph
of Jensen's inequality.
OK, here it is.
It's complicated, I told
you, so let me
remove it very quickly.
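[The inequality he is pointing to, stated the standard way:]

    For a random variable $X$ and a convex function $f$, Jensen's inequality says
    $\mathbb{E}[f(X)] \ge f(\mathbb{E}[X])$,
    and the inequality reverses when $f$ is concave. A convex payoff gains from
    the variability of $X$ (anti-fragile); a concave one loses from it (fragile,
    like the grandmother).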
So there are things that
like variation.
So you can classify things into three categories--
the fragile is what does not
like volatility, randomness,
variability, uncertainty,
stressors.
The robust doesn't
really care, like
the Brooklyn Bridge.
And the anti-fragile requires
some amount of variability in
all of these.
The discourse completely missed the notion of the anti-fragile, so we try to get stability.
With government, for example,
they want to have no
fluctuation and you saw
what Greenspan did.
If you gave him the seasons, he
would have had the seasons
at 67.8 degrees, the temperature
year round, like
inside this office.
It's what you maintain, I think,
inside the office.
And of course, we would have
blown up the planet.
I'm glad we only gave
him the economy--
he only blew that up.
But we do a lot of harm by
depriving something organic of
a certain amount
of variability.
Anything organic communicates
with its
environment via stressors.
So this is composed
of seven books.
Book one, I talk about this
difference between a cat and a
washing machine-- in other
words, between the organic
that requires stressors.
Do you guys have a
gym at Google?
Well, there you go.
So you put your body
under stress.
But you don't realize there are
other things you need to
put under stress as well.
There are other stresses you
need to have just to enjoy
life if you want to be alive.
There's no liquid I know of
that tastes better than a
glass of water after
spending some time
in the Sahara Desert.
So there you have Jensen's inequality at work right there in your life.
We realize here and there that
you need to stress the bones,
but we don't really transfer
it to other areas of life.
Like architecture-- modernistic architecture, smooth architecture-- it's not as pleasant for us as something richer, fractal.
I'm looking out the window,
I have trees.
It's a lot richer, and the
ancients actually liked that.
I don't know if you've been
in the Gaudi Building in
Barcelona, where you walk in.
It's a cave.
It's rich in details and I
feel more comfortable--
visibly, my eye likes
variations, just like your
body likes some form of variation and some stressors.
So that's book one where I talk
about that, and I talk
about ethics.
What happens is that people
understand that what doesn't
kill me makes me stronger.
They don't understand the real
logic of it, which is that
what kills me makes
others stronger.
In effect, a system that works very well is a system that has layers.
Like the restaurant business
works very well because its
components are fragile,
the entrepreneurs.
Otherwise, we'd be
eating bad food.
I mean, not that we're eating
great food all the time, but
you understand the idea.
You'd be eating like in Russia during the Soviet era.
So there are some businesses that thrive--
like California, I'm here at the
epicenter of things that
thrive because the failure
rate is converted into
benefits for the system.
So this is Darwinistic, except
that we can inject some ethics
into it to avoid what
philosophers call the
naturalistic fallacy, that
what is natural isn't
necessarily great.
So we can have--
we should have entrepreneurs,
encourage more entrepreneurs
in the economy, encourage
them to fail,
and remove the stigma.
This is the only place in the
world where there's no big
stigma for failure, here
in California.
We should have it more
generalized,
because you need them.
But also at the biological
level, when you starve
yourself, you stress
some cells.
And the reason we are healthy
is because there are fragile
cells in us that break first
under stress, and therefore
you have an improvement
within you.
The top layer always requires the fragility of a lower layer.
So that's book one.
Book two--
again, these books are separate
books that discuss
different topics linked
to that original
idea that I gave you.
Book two is about modernity, how
suddenly you start having
policies that try to control,
touristify the world, where
you have a plan, everything is smooth, no randomness in life.
And I explained that, really,
we have a lot of people that
have discovered over time that
you need randomness to
stabilize a lot of systems.
So the book discusses
a disease called
interventionism, overstabilizing
systems, and
some research on different areas, about 50 of them, in which there is a need for randomness to stabilize the system.
Like Maxwell's governor--
it was discovered that if you
overstabilize a steam engine,
it blows up.
So we have that in the economy,
you have that in a
lot of places.
So that's my book two.
And in it I discuss a certain
brand of person.
I call them fragilista-- someone who denies the anti-fragility of things and fragilizes them by that denial.
Later on, we'll talk about a
relative of the fragilista
with the Soviet-Harvard approach
to things, from top
down, not bottom up.
So that's book two.
Book three introduces a friend
of mine, Fat Tony.
And Fat Tony doesn't like
predictions, and visibly, as his name indicates, he
enjoys life.
But he's a little coarse.
And there's his friend Nero.
He and Nero are always
fighting.
But he taught Nero how
to smell fragility.
Because you see these graphs?
I had to use my brain to
understand fragility.
Fat Tony can do it naturally.
He can figure out the sucker--
so, his idea of the world is
sucker versus non-sucker.
And his point is that any system that's based on prediction is going to blow up.
So he finds those who
really are sensitive
to prediction error--
because, remember, the fragile
is very sensitive to
prediction error.
So then I continue.
Book four--
I don't know if I have
the book numbers
right, but it's OK.
I can change it, because I'm
the author, remember.
Book four is about optionality
and the things I introduce
link to convexity.
I didn't want to scare
the readers--
I didn't talk to them about
convexity right away.
I tried to get through the back
door via this very simple
representation.
It's if you have an
asymmetric payoff.
If you make more when you're
right than you lose when
you're wrong, then you
are anti-fragile.
And if you have more to lose
than to gain, you are fragile.
The same applies to a coffee
cup, to anything--
the china, anything.
And of course, the volatility
will be the vector that would
cause you to lose.
So I introduced this, but so far
the book is not technical,
so I introduced it via Fat Tony and Seneca.
Seneca was also like Fat Tony
but much more intellectual.
Seneca, the Roman philosopher,
who is precisely not Greek in
the sense that he was practical
and he had a
practical approach to
stoical philosophy.
The guy was the wealthiest man
in the world, and he was
obsessed with the fact that when
you're very wealthy, you
have more to lose than to gain from wealth.
So he trained himself every day to wake up thinking he's poor and then rediscover wealth. Once in a while, he would mimic a shipwreck-- he writes that he would live as if he were shipwrecked, with only one or two slaves.
You get it.
But he was the wealthiest man in
the world writing about how
to love poverty.
You get the idea, but the
guy was good at it.
He figured out that you have
to always be in a situation
where you got more upside than
downside, and then you don't
have to worry about
randomness.
And in fact, the strangest thing
is not that he said it.
I picked it up and
I was shocked.
I said, this guy's talking
like normal people.
The view all academics have of stoicism is that stoics would be like academics-- boring, fashionably stoic, unmoved by the world.
No, they're only unmoved
by bad events.
That's central.
So via Seneca, I introduced
that notion of asymmetry--
always have more upside than
downside from random events,
and then you're anti-fragile.
So I go through Fat Tony and
Seneca to drill the point and
it sort of works.
Also, this book has titles and
subtitles, and there's no
connection between the title,
the subtitle, and the text.
Why?
Because since I wrote my
first book, I sort of
was afraid of reviewers.
But then I said the best way to
have a book is to tick off
reviewers from day one,
so that way I don't
have to fear them.
And reviewers, they want to skim the book, so they can't understand or figure out what it's about.
Plus, I put up a 600 page document of math-- actually, it's a Google doc, by the way. You guys were hosting it for free.
So far, it's 400 pages of math,
dense math, as backup
for this, plus a technical
appendix.
Just to tick off reviewers-- the idea is, I want people to go through the reading experience. Skimming it, they can't figure out that the whole thing, the book, is about Jensen's inequality, things that love randomness and how to benefit from it.
Now I'm going to jump ahead and talk to you guys about a phenomenon. I skipped the chapters where I have more Greek philosophers, more stories.
Something very simple--
I'm going to simulate
a process here--
this is not in the book,
by the way, this
is outside the book--
where you have two
people competing.
One person has knowledge and his brother has convexity, a convex payoff. And the difference between them-- the difference between knowledge and a convex payoff-- will be what I call the convexity bias. I simulated it, and look how big it is.
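[A minimal sketch of the kind of simulation he describes-- not his actual code. Two agents face the same random draws; one has a linear payoff, the other an option-like convex payoff that drops the downside. The gap between their averages is the convexity bias. All parameters here are assumptions for illustration:]

    import random

    random.seed(42)
    trials = [random.gauss(0.0, 1.0) for _ in range(100_000)]

    # Linear payoff: takes every outcome as it comes.
    linear = sum(trials) / len(trials)

    # Convex, option-like payoff: abandons losing trials cheaply, max(x, 0).
    convex = sum(max(x, 0.0) for x in trials) / len(trials)

    print(f"linear payoff:  {linear:+.3f}")  # about 0 on average
    print(f"convex payoff:  {convex:+.3f}")  # about +0.399 for a standard normal
    print(f"convexity bias: {convex - linear:+.3f}")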
Well, visibly this explains
something that people so far
couldn't understand.
Trial and error has
errors in it.
Do you agree?
So in history books and history
of technology, people
usually oppose trial and error
versus theoretical knowledge.
But wherever people were able to work with trial and error, they did not understand it had to be convex.
Trial and error relies on luck,
but luck can hurt you,
so it was never modeled
as an option,
technology as an option.
If this is modeled as an option-- and I'm sure there are other questions, so I'll go over this very quickly.
If I were to model trial and error as an option, then it would be something that loves volatility-- an option loves volatility.
And you can have policies
that come from it.
My idea of flaneur
is very simple.
I'd much rather have a series of options, like a long highway with a lot of exits, than be locked into a top down plan like a highway with no exits-- a destination and one exit, that's it.
So assuming you want to change
your mind, you're in trouble,
particularly if you don't know
Russian and you're in Russia.
That's how they build
their thing.
So we have two approaches
to knowledge.
One is top down and
one is bottom up.
So here there are about 75 pages that should upset a lot of academics, because I took some evidence, which includes my own field, which used to be derivatives, that a lot of things that we think, that we believe come from top down, theoretical knowledge effectively come from tinkering, dressed up later as having been developed by theoreticians-- which includes these corners up here.
Euclid-- people say you have to
learn Euclidean geometry,
and look at all these things
that were built after Euclid.
For about 15, 16 centuries,
people were building things
and had never heard of Euclid.
The Romans were extremely
heuristic--
very, very, very experience based, and they did everything
using this convex knowledge.
How was it convex knowledge?
It's exactly like cooking.
You have very little to
lose by adding an
ingredient and tasting.
If it works, now you have
a better recipe.
If it fails, you lost nothing.
So things in knowledge, no
academic would want--
I mean, I'm a professor in an
engineering department.
No academic-- except engineers,
because they're
nice people--
would accept the notion that
knowledge can come from the bottom up.
So we have evidence of
what I call lecturing
birds how to fly.
A lot of science comes
from technology.
But look at the definition-- google 'technology' and 'science', and it will explain that technology is the application of science to practical things-- exactly the opposite.
Anyway, so this is my options
theory thing.
I don't know if it upset many
of you, but typically it
upsets academics.
So then I go to the notion
of medicine.
To get to it, I go to something
called the via negativa--
how to make something robust.
To make something robust,
there are two things.
Because of Jensen's inequality,
it's better to run--
better to walk and sprint rather than just jog.
So you have strategies that
have variations in them.
Bipolar strategies are vastly
better than mono strategies.
And you see it, for example,
with portfolios.
It's much better to put 80% of
your money risk free, if you
can find something like that,
and 20% speculative, rather
than the whole thing
medium risk.
It's much more robust
that way.
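[A rough sketch of the 80/20 bipolar portfolio he describes, under made-up return distributions; the point is the capped worst case, not the specific numbers:]

    import random

    random.seed(7)

    def speculative():
        # Hypothetical high-volatility bet: can more than double, can go to zero.
        return random.uniform(-1.0, 1.2)

    def medium_risk():
        # Hypothetical "medium risk" asset with a hidden tail:
        # usually calm, but once in a while it drops 60%.
        if random.random() < 0.005:
            return -0.60
        return random.gauss(0.04, 0.05)

    runs = 100_000
    # 80% in a risk-free asset returning 1%, 20% in the speculative bet.
    barbell = [0.80 * 0.01 + 0.20 * speculative() for _ in range(runs)]
    medium = [medium_risk() for _ in range(runs)]

    # The barbell's worst case is capped: at most the 20% sleeve is wiped out.
    print(f"barbell worst case: {min(barbell):+.2f}")  # about -0.19
    print(f"medium  worst case: {min(medium):+.2f}")   # -0.60 when the tail hits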
But you can see it in the
policy of every single
monogamous species, which
includes humans, but we have
data for birds.
With monogamous birds, typically, instead of the female opting for a linear combination-- someone in the middle-- she picks the accountant 90% of the time and the rock star to cheat with 10% of the time.
So the idea is you take the loser, the stable accountant-- not that accountants are losers, but you see the kind-- and then you have the hotshot rock star on occasion, and that combination of the two extremes beats someone in the middle.
It's explained in the book why things that have variation do better-- and I use the very same equation, Jensen's inequality, to show why it's a lot more stable.
Then medicine, of course. This is medicine, where you have small visible gains from anything you ingest and big hidden losses-- except there's convexity in medicine.
I study the problems of harm
done by the healer, whether in
policy or something else,
in medical terms.
It's called iatrogenics--
harm given to you by someone
who's supposed to help you.
And you can measure iatrogenics
probabilistically.
I'm going to give you
an idea that I just
put on the Web today.
It's not exactly in the book, but it's something we discovered about blood pressure.
So you have these big hidden
risks, but if you look at
Mother Nature, Mother
Nature equipped us
for a lot of natural--
I mean, three billion years
is a lot of time, even for
Google, so it's a lot of time.
So Mother Nature was capable of
treating things that don't
deviate from the normal.
So we have never been able to
find anything you can put in
your system that has turned
out to be 20 years later
unconditionally good without
a hidden risk like this--
steroids, tamoxifen,
all these.
You see small little gains.
But on the other hand, we should
analyze medicine using
convexity terms, that if you are
very ill, you should have
a lot more medicine and
much less medicine if
you're not very ill.
There's convexity of payoff
from medical treatment.
But there is a problem, and let
me give you the problem.
If you're mildly hypertensive
and they give you drugs, you
have one chance in 53 of
benefiting from it, but now
you have all these risks.
If you're extremely hypertensive, you have an 80%, 90% chance of benefiting from the drug.
So you have this risk, but you
have also a huge benefit,
particularly when
you're very ill.
The problem is as follows.
People who are one sigma away from the mean, which nature has treated, by the way, and medicine doesn't help them much, are five times more numerous than people four sigmas away from the mean.
So if you're pharma,
what would you do?
Who would you treat?
You have five times more people
mildly ill than people
who are ill.
What would you do?
You'd focus on the mildly ill.
We'd focus on reclassifying
people as
mildly ill to be treatable.
And also, they don't die, so they're repeat clients you can cash in on for a long time.
So I use this argument against pharma-- via negativa is the removal of something unnatural to us.
You have no side effects, no
long term side effects.
In a complex system, you need
something I call less is more,
because adding something has
multiplicative side effects
whereas removing something
unnatural-- like if I stop you
from smoking or something like
that-- you have very, very
small long term side effects.
So these are the books so far.
And book number--
the last one, seven,
is on ethics.
It's very simple.
It's about a situation in which one person gets the upside and someone else gets the downside. You're looking at me like, what is he talking about?
Well, have you heard
of the banks?
Bankers make the upside.
The rest of society has the downside.
So they're long volatility
at the expense of others.
And of course, it's my most
emotional book and the one
that made me the most enemies,
because I named names.
I had this thing--
when you see a fraud,
say fraud.
Otherwise you're a fraud--
so, commitment to ethics.
And the whole book is about, of
course, never ask a doctor
what you should do.
You get a completely different answer if you ask him what he would do if he were you.
So here, I don't give advice.
I just tell people what
I've done, what I do.
Like when someone asks me for a forecast-- I don't believe in forecasts. I tell you, this is what I've done. This is what I have in my portfolio for the day. Go look at it, if you want. But no forecasts.
The same thing is that
you should never harm
others with a mistake.
Why is this essential?
At no time in history have we
had more people harm others
without paying the price,
whether bureaucrats in Washington-- they're not harmed or shamed by a spreadsheet-- or economists giving us bogus methods, and academics. They're not harmed, they're not the ones bearing the harm, so nothing improves in that field.
So like Steve Gill telling you,
oh, it's peer reviewed by
a great journal.
Nonsense-- all that's
nonsense.
They're not harmed by the mistakes, so they keep going on with all this--
can you curse here?--
with all this bullshit.
You can edit it out.
I don't know.
I did that at LSE--
I used the F word at LSE--
and then they told me,
well, you know what?
We're going to keep it, but
it's extremely unusual.
So I told them, OK.
But anyway, during the Q&A,
probably, I can relax more and
[INAUDIBLE].
So here I've introduced the book, and let me end with the following.
The only way you know you're
alive, you're not a machine,
is if you like variability.
That's it.
So if you're anti-fragile,
that means you're alive.
So thank you for listening to
me, and now let's start with
Q&A.
[APPLAUSE]
NASSIM NICHOLAS TALEB: I keep
the slides just in case
someone asks me an emotional
question.
Go ahead.
AUDIENCE: Hi.
Thanks for coming.
It was great to hear
you speak.
I was wondering if you could
elaborate on a topic related to fragility, which is this whole question of the long peace?
NASSIM NICHOLAS TALEB: OK.
Very good.
Excellent.
What has happened over the past 200 years in the military is that we have switched to tougher weapons, so we had longer periods of peace punctuated with war.
And if you stood in 1913 and 3/4
looking at recent history,
you'd say, oh, it's
all quiet now.
We don't have to worry.
I'm sure you would have been really surprised.
So when we move into what
I call black swan prone
variables, it takes much
longer to figure out
what's going on.
And we live in that world where
most of the big jumps
come from a small number
of variables.
You guys here prove it.
If you look at how much of the
internet traffic is explained
by Google, you had that
concentration.
If you look at the book business, where you have 0.2% of the authors generating half the income, you realize you have that concentration. So the same applies to wars, simply.
With fat-tail processes you
cannot make an inference from
just a small sample, and a lot
of people make the mistake of
taking the last 50 years and
saying nothing happened the
last 50 years, therefore,
let's not worry.
No-- we have a lot of
potential danger.
Plus, if you mean the Pinker book-- the Pinker book is confused. But other than that, it's nice.
Yeah, crime has dropped, but
you can't make statements
about whether the risks
have changed.
I've written about
it on the Web.
I don't want to talk about it.
I get emotional.
Thanks.
Next question.
AUDIENCE: When you're describing
chapter one, I
think it was, you said
that what doesn't
kill me makes others--
NASSIM NICHOLAS TALEB:
Book one, yeah.
What kills me makes
others stronger.
AUDIENCE: Right, what kills
me makes others stronger.
But one of the takeaways I got
from your "Black Swan" book
was that that's a fallacy,
that if you look at a
population and you
stress it, and--
NASSIM NICHOLAS TALEB:
That's excellent.
AUDIENCE: --the weak ones die
out and you're left with the
strong ones, but it's not really
true that the stress
caused the strength.
NASSIM NICHOLAS TALEB:
Exactly.
It's the same point
I'm making.
People think that what kills
me-- what didn't kill me makes
me stronger, and I'm
saying it's wrong.
It's typically because there is
a selection effect, not an
improvement.
Let me go through the history
of the mistakes made with
anti-fragility.
There's something in medicine
called hormesis.
You give someone a drug, their
body overcompensates by
getting stronger.
A gentleman wrote something
on anti-fragility
from a draft I had.
He's a geneticist, and he
actually proved that what
happens is that if a system gets
stronger, it's because
some of the components
were destroyed.
So when someone says what killed me-- what didn't kill me made me stronger-- it could often be that it killed the others who were weaker, and therefore I have the illusion of getting stronger, when in fact it just killed the weaker ones.
That's the idea.
It's a subtle little idea which tells you that everything works by layers. I have cells, cells have proteins in them, and all that, and typically the weak always need to be destroyed for the system to improve.
And this is how your body
improves, not because it
overall improves under shock.
It's because you are
killing things
that are bad, typically.
AUDIENCE: Thank you,
Professor Taleb.
I think that your message about
our epistemic limitation
is very important.
And I had a question about your view of the libertarian movement: how do you think your idea of anti-fragility fits into those ideas of smaller government and a more bottom up approach?
NASSIM NICHOLAS TALEB:
That's excellent.
So what I'm showing
here is actually--
I don't know if it's a
libertarian view, but it's definitely a localist view in favor of city-states, which are a lot more robust, because of the size effect.
A small government works better,
not because it's small
government, but because it's--
you get the idea.
Top down government
doesn't work.
Now you can have probably a
dictatorship in a small
village and it may work.
So I cannot prove that it's
not private versus public.
For me, it's large
versus small.
The small has the ability to survive, and the large gets disproportionately weakened by unexpected events.
And thanks for linking it to
epistemic opacity, because
this idea of fragility being
measurable solves the problem
of opacity.
I don't understand the
environment, but I can pretty
much figure out I'm
fragile to it.
Go ahead.
AUDIENCE: So you said you didn't
believe in forecasts,
so I won't ask you to
make a forecast.
So what's in your portfolio?
NASSIM NICHOLAS TALEB: I don't
want to answer the details of
it because I'm talking about my
book and in a year it will
change, so I can't
talk about that.
I would tell you what I'd do if I were compelled to produce a forecast, but I don't like to forecast.
But anyway, the book tells
you what I do, so
that's what I do.
And it irritates the critics.
Everything that irritates book critics is wonderful for books.
But again, consider this class
of phenomena that benefit from
harm-- rumors love repression.
Try to repress a rumor and
see what it will do.
Go stand up and deny a rumor-- see what happens when a politician says, we will not devalue, the rumor is wrong. You know what happens-- it's the best way for a rumor to spread.
And same thing with books--
try to ban them and
see what happens.
There's some class I call-- what do I call it?-- refractory love, like in Proust, where people have obsessive love.
And the more you try
to repress it,
the stronger it gets.
A lot of things get stronger
under repression and belong to
that class of anti-fragile
and it all can
be mapped as convex.
Yes?
Go ahead.
AUDIENCE: Some of the systems
that you mentioned, the
difference of--
I'm just trying to compare that
to the financial markets.
If you have a period of stability and all of a sudden you get cancer, you usually don't recover by yourself.
Or if you have a fragile item,
it breaks down, it usually
doesn't recover itself.
But how would you compare that
to financial markets?
It doesn't have an external
property that--
NASSIM NICHOLAS TALEB: OK.
Let me link this to the question
earlier, because now
I remember that I didn't
give a full answer to
the question earlier.
A system that has a lot of
parts, independent, that break
sequentially, is going
to improve.
How?
Take, for example,
transportation, or take
engineering.
Every bridge that collapses
makes every other bridge in
the country safer.
So a building that collapses makes the probability of the next building in the country collapsing smaller-- smaller or equal, but it doesn't get worse.
So when you have a system composed of small units that break sequentially, that fail without contagion effects, the system improves from failure. That's exactly what kills me makes others stronger-- a benign system, or a system that's actually anti-fragile.
Now, take banking, take
large corporations.
When one fails, the other--
it increases the probability
of the other failing.
Then the system doesn't
work well.
That's one thing, to answer him; now to get to your point. He's asking me about financial markets-- what benefits do they have?
Well, people think that they're
good at providing
information.
In fact, they're great at
masking information and that's
why it works.
It prevents panic.
Say if someone is predictable
and comes home
every day at 5:30.
Boom, you can set your
watch, he walked in.
And one day he's late?
What would happen?
Two minutes and everybody freaks out, he's not here, whereas someone more random in his arrival time would not cause a panic.
Well, it's the same
thing with prices.
That's one of the aspects.
Another thing with prices is
that volatility prevents big
collapses because it's
just like a forest.
You have flammable materials.
Steady, small forest fires
clean up that flammable
material and don't let
it accumulate.
But what happened with Greenspan
by stabilizing
everything--
no volatility or minimized
volatility, something they
called The Great
Moderation that
resembles the Great Peace--
you had a lot of hidden risk in
the system, very explosive,
ready to explode.
In effect, we saw what
happened-- they blew up.
So this is where financial
markets, by bringing
volatility, clean up the
system periodically.
That explains it.
Thanks.
AUDIENCE: Thanks a lot.
The question I had was how does
your work reflect on how
you think about conglomerates
and family businesses,
especially in the developing
world where there seems to be
a high concentration of
preservation and a model where
they actually look
for stability
versus choosing variation?
NASSIM NICHOLAS TALEB: This
is a good question.
I don't know much about it-- I looked at data for family businesses, effectively in what we call today the OECD countries.
They have stayed in power
because they have what I call
skin in the game, among
other things
and among other qualities.
Now, conglomerates--
I have no idea.
I just like the conglomerate I
work for-- namely, the owner
of Random House, Bertelsmann,
because they're not listed in
the market.
Although I like market
volatility, I don't like
people to fit the company to
the security analysts that
don't understand hidden risks.
And the stock market tends to
push companies to hide risks,
and it fails, because the security analysts don't have the tools to analyze second order effects.
Go ahead.
AUDIENCE: How much of the
anti-fragility phenomenon that
you're talking about across
systems is really about
evolutionary learning
in that the two
curves, knowledge versus--
I forget what the other one was
labeled, but it was the
anti-fragile curve--
is really about two different
forms of acquiring knowledge.
One is for acquiring articulate
knowledge through
articulate processes and the
other one is for acquiring
inarticulate knowledge, the kind
of knowledge that Hayek
talks about, where the system
learns without the human
beings necessarily being aware
of what it learns.
NASSIM NICHOLAS TALEB: That's
a good question.
He's asking me how much of--
there are two types of
knowledge, [INAUDIBLE],
knowledge top down, bottom up,
heuristic knowledge versus what
we call propositional
knowledge, or things that aren't
formalized, and so on.
There's been a dichotomy through
history between these
two [INAUDIBLE], a
lot of people.
But the first person who
discovered it-- let me give
you the background--
is Nietzsche.
Nietzsche had [INAUDIBLE]
between-- actually, even Seneca
discovered it, but we
attribute it to Nietzsche.
When he was 25, he wrote the
most beautiful book probably
of the century, "The Birth of
Tragedy," by showing tension
at humans between the rational
Apollonian and the deep, dark
unexplainable force,
the Dionysian--
depends if you're British or
American how you pronounce it,
from Dionysus, the god of wine
and [INAUDIBLE] thing.
And he actually--
I think Nietzsche is the one
who used the word first,
creative destruction.
Nietzsche, not [INAUDIBLE].
An economist cannot come up
with something that deep.
So Nietzsche spoke about that, and, to continue, he went after Socrates, saying that whatever you cannot explain isn't necessarily stupid.
And effectively in my
book, Fat Tony has
a debate with Socrates.
You can imagine a guy from
Brooklyn debating a Greek
philosopher, and I'll let you
guess who's going to win the
debate along these lines.
So effectively I go along these
lines, except that what
I've done is very simple.
I don't have a theory in
here of anti-fragility.
People can talk about complex systems, whatever they are.
I have a descriptive
detection.
Here, I proved very simply that I can detect fragility through a second order derivative.
So in a way what I have is more
like phenomenology, which
is not at the level of a theory
but something lower--
a way to map objects in order to
work with a world we don't
understand.
So in a way, I don't have a theory of where things come from, but I've integrated this dichotomy you have between the bottom up, the unexplainable, the we-don't-know-how-we-do-it.
There was one thing I would like
to mention, since there's
time for another question--
one important thing--
that effectively the longer
we've been doing something
that we don't understand, the
longer we will do it.
In the book, I say time is the
only detector of fragility.
Remember one thing--
time is volatility.
You agree?
Time and volatility, mathematically-- they're all the same.
Disorder, time, entropy,
volatility--
approximately, I call them
siblings, brothers.
They're not exactly the same,
but they're like fraternal
twin brothers.
OK.
So with time, what was fragile
eventually will break.
So it's a very simple law.
Whatever is not perishable,
namely an idea, will have a
life expectancy that increases
with time, which is shocking
for technologies, but let me
explain the rationale.
If you see a human and he's 40
years old, you can safely say,
not knowing his history, and
he's not ill, unconditionally,
that he has an extra
50 years to go.
You agree?
That's mortality tables-- conditional mortality tables.
He lives another year.
You know that his life
expectancy has decreased by a little less than a year.
So his life expectancy decreases
every day he lives.
If I look at a technology, how old it is-- that's all you need to know. How old? 40 days? Technology, a book, an idea [INAUDIBLE]-- things that are not perishable. Our genes, for example-- not our bodies. How old is this technology? 40 years. Very good-- it has 40 years to go. Next year, 41 years-- at least 41 years to go. So the life expectancy of a technology, of an idea, of anything non-perishable, increases every day, believe it or not.
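[One standard way to formalize this, assuming, for illustration, that the lifetime of a non-perishable item follows a power law; the exponent is an assumption:]

    If survival obeys $\Pr(T > t) = (t_0 / t)^{\alpha}$ with $\alpha > 1$, then
    the expected remaining life, given survival to age $t$, is
    $\mathbb{E}[T - t \mid T > t] = \frac{t}{\alpha - 1}$,
    which grows linearly with age: every year a technology or an idea survives
    adds to its expected remaining life. Human lifetimes are thin-tailed, so
    the same conditional expectation shrinks with age instead.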
So this is why we have bicycles
and they probably
will live longer than the cars,
cars more than planes.
And of course, now we know
that the regular plane's
better than the supersonic
planes.
And I'm talking about this
here in Silicon Valley.
Why?
Well, very simply because--
and we don't understand why.
We don't have to understand
anything.
Time-- there's an intelligence
of time that's at work there.
A book that had been in print
for 3,000 years--
in print, or at least read
for 3,000 years--
that you get in hotel rooms
still, will probably be read
for 3,000 years.
Regardless of what the latest intellectuals tell me about the rationality or non-rationality of it--
I don't believe in
rationality.
I believe in fragility.
You could apply things like that to technology.
You can say the red car, the
first red convertible car, was
born when--
38 and 1/2 years ago?
That's 38 years ago,
approximately.
But of course you'd be shocked
now if you see how much of
what we have today resembles
ancient days.
We still use watches, glasses,
3,000 years old, chairs,
desks, silverware.
We're trying to cook like
our ancients did.
I have in the book a picture
of a kitchen
from Pompeii in Italy.
It's 2,000 years old and it's
no different from a good
Italian restaurant's kitchen.
So this is to tell you that there are things we don't understand in the world, but we can understand them via fragility. There's no rational means to understand why people use something like a technology, but we can understand it via the concept of fragility through time.
AUDIENCE: So I'd like to ask--
I suppose not, really.
It's a separate question
more than a followup.
But returning to the great
debate between Hayek and
Keynes, especially with regard
to the Great Depression,
oversimplifying--
the way I view it is that the Austrians-- that Hayek-- were saying, OK, you've got this cascading failure.
Let it fail, because systems
that fail under cascading
failure need to be beaten
out of the system.
It's the only way it's
going to learn.
The basic Austrian point of view
was we're better off in
the long run, which is the
long run that Keynes was
responding to.
NASSIM NICHOLAS TALEB: With this I agree. Because we're short on time, I'm going to answer you very quickly. He's comparing systems-- I think that it's quite artificial to say Keynes versus Hayek. Keynes was not an idiot. He was a very smart human, and he would think differently-- vastly smarter than the people who write for the New York Times and claim that Keynes said something.
And then also, of course, on
the risks of the system, I
also have to say that the
problem is you can't suddenly
stop doing things.
If you look at medicine, the
rule is if you're slightly
ill, let things take care of themselves, because whatever
medicine you're going to put
in probably will harm you
probabilistically a lot
more than help you.
But if you have cancer or
you're very ill, see 10
doctors, not one, 10.
You get the idea?
So what happened was interventionism-- statist intervention, overintervention.
The state is never there, the
interventionist is never there
when really needed because
of depleted resources.
And this is what happened
to us, by the way.
The money's gone, all right?
So it's a different problem.
I don't know if I can claim with
this thinking to be fully
against state intervention,
but I would say the state
needs to intervene.
It's got to be extreme
circumstances and for a very
temporary situation to just
avoid pain, starvation, and
stuff like that.
If I use the same nonlinear argument: reducing extreme unhappiness is different from raising happiness--
two different animals.
We can take one more
question, I guess.
We have one.
We started at three, so
we have three minutes.
That's what I owe you.
AUDIENCE: When you talk about finding fragility by looking at the second derivative-- can you give some more details about the second derivative?
NASSIM NICHOLAS TALEB: Yes.
It's very simple.
You take a company, you
lower sales by 10%,
they lose $100 million.
You lower sales by an extra 10%,
they lose $500 million.
Accelerating losses--
it means they're going to be a
lot more harmed by an adverse
event than a regular company.
It's a very simple test.
It's so simple that people
were ashamed of
telling me I was right.
You see, very simple--
acceleration.
Take the stock market--
take a portfolio.
The market's down 5%,
I lose a million.
The market is down 10%,
I lose $5 million.
I'm fragile.
It's that simple.
It's the same argument the other way: if the market goes up 10%, do I make more than if the market went down 10%? Then I'm anti-fragile.
It's the same thing with
a lot of situations.
So that brings fragility--
you can measure it that way.
Size causes fragility.
You can measure it that way.
Thanks.
