Transcriber: Swenja Gawantka
Reviewer: Marcel Stirner
I'm a philosopher.
I work at the University of Technology,
so quite often I'm asked,
"What do you actually do as a philosopher?
Sit around and think?"
Basically, yes. That's what I do.
It's just that when you're a professor,
you also get paid for sitting around
and thinking about problems.
And on good days,
you may even solve a problem.
What philosophers think about
is formulated very nicely
by Bertrand Russell,
a famous philosopher and mathematician.
He writes, "I believe the only difference
between science and philosophy
is that science is what
you more or less know,
and philosophy is what you do not know."
So, uncertainty is where science
and philosophy meet.
No matter what we do,
we always face uncertainty.
Will the weather be stable enough
for a barbecue tonight?
How shall I invest my savings?
Which school shall we send
our children to?
And the decision becomes more complex
when we consider society as a whole.
Shall we accept global warming?
Or rather go all renewable,
having to face the risk
of large-scale blackouts?
Or shall we go nuclear,
having to accept possible accidents
in a nuclear power plant?
All these questions
are not only about what is worse:
a nuclear accident,
global warming, or a blackout.
They're also essentially about
how we can incorporate uncertainty
into our decision-making.
We've had, since the 1950s,
professionals in risk
and uncertainty analysis.
And we've made great theoretical
and practical advances
in how to deal with, how to understand
and in how to manage uncertainties.
But still, mankind is caught
like a deer in the headlights
when it comes to dealing
with uncertain information.
Take climate change as an example.
Since the 1970s,
the predicted ballpark figure
of global warming has not changed.
And still we argue about it.
And why?
Because some details
of climate models may be uncertain,
because we're not sure whether
the human-induced climate change
is 90 percent certain
or 95 percent certain.
And a lot of people use this
as an argument for inaction.
But this is just one example.
So we seem to have reached
a kind of dead end
when it comes to dealing with uncertainty.
We know a lot about the future,
but we're paralyzed to take action.
I want to show you
a way out of this dead end.
One way of addressing uncertainty
is risk minimization;
minimize the risk of your endeavor,
and no matter whether you want to start
a hedge fund or put on your seat belt,
it seems like pretty rational advice.
But societal risks are different
from individual ones.
When we think about the future
of our energy supply
or when we think about GMOs,
then the decision for or against
the risky technology affects us all.
Us living today, but also
the ones living in the future.
So, the question about uncertainty
becomes intrinsically intertwined
with the questions about what is ethical.
So, is risk minimization
still a good approach here?
Risk is mean expected harm;
harm expected on average.
But the risks may be distributed
very differently
among different members of society.
Think about a wind farm.
When you live in its close vicinity,
you have to bear all the burden
in the form of noise pollution
and other nuisances,
while those living further away
only share the advantages
of a safe and sustainable energy supply.
And focusing on risk as a mean concept,
as mean expected harm,
completely blurs out this information.
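The point can be made concrete with a toy calculation. The numbers below are purely hypothetical: two policies impose the same mean expected harm on a community of ten people, yet distribute that harm very differently, and the mean alone cannot tell them apart.

```python
# Hypothetical harm figures (arbitrary units) borne by each of ten
# members of a community under two illustrative policies.
policy_a = [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]   # burden shared evenly
policy_b = [50, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # one person bears it all

def mean_harm(harms):
    """Risk as mean expected harm: the average over all affected people."""
    return sum(harms) / len(harms)

print(mean_harm(policy_a))  # 5.0
print(mean_harm(policy_b))  # 5.0, identical although the burdens differ
```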
So, is this okay? Yes, well, maybe.
Maybe we have to accept that.
Maybe some have to take the risks
in order to do good for the many.
You may remember this
very touching scene from Star Trek,
in which Spock dies.
His final words to Captain Kirk are,
"The needs of the many
outweigh the needs of the one."
Spock is certainly a member
of a very rational species,
so maybe this
is the right approach to risk.
But the Star Trek case differs
because Spock is pretty certain
that with his death he'll reach
the goals he wants to reach.
Now, what actually is uncertainty?
When we talk about risk
as mean expected harm,
we make use of the concept of probability,
a mathematical concept.
And take, again,
nuclear power as an example.
We may be able to calculate
the probability that a certain valve
will begin to leak in the course
of the coming 10, 25 years.
And with this, we can maybe calculate
the probability of a nuclear accident
over the same time period.
But we do not always have
good probability estimates at hand.
Consider the nuclear waste and its storage
in a geological repository underground.
Maybe we can calculate
that the rock formation is stable
over the coming, let's say,
100 years, or even a bit longer.
But what about really long time scales?
Think about 24,000 years,
the half-life of a plutonium isotope.
Or think about 1 million years,
the time span the German government
requires nuclear repositories to be safe.
We certainly don't have
good probability estimates
for such long time scales.
Now, in philosophy,
we refer to such situations
as "decisions under uncertainty."
While for risk,
we know all the probabilities,
for decisions under uncertainty,
some of the possible outcomes
cannot be assigned probabilities.
You probably learned
all about probability in school,
probably with urns
containing balls
of two different colors.
Now, we have an urn here,
and the equivalent
of a decision under risk
would be that I ask you,
"What is the likelihood,
what is the probability
of the next ball being black?"
To answer it,
you perform an experiment:
you begin to draw balls from the urn.
And you determine the relative
frequency of black balls,
in our case one third.
Then you say, "The probability
of the next ball being black
is exactly this relative frequency."
This is exactly what we face
in a decision under risk.
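As a sketch, this frequency estimate can be simulated. The urn below is hypothetical, with one third black balls as in the talk; repeated draws with replacement recover roughly that relative frequency.

```python
import random

random.seed(1)  # fixed seed so the experiment is repeatable

# Hypothetical urn: one third of the balls are black.
urn = ["black"] * 10 + ["white"] * 20

# Decision under risk: we are allowed to experiment, so we draw
# (with replacement) and take the relative frequency of black balls
# as our probability estimate.
draws = [random.choice(urn) for _ in range(3000)]
estimate = draws.count("black") / len(draws)
print(round(estimate, 2))  # close to 1/3
```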
Imagine a similar situation;
again, I present you with an urn.
I want to know
the probability of a black ball.
But now I change the rules of the game.
You're not allowed
to perform an experiment,
you're not allowed
to draw any balls anymore.
So, what do you say?
What's the probability?
You can make a sophisticated guess,
but the problem is, you don't have ...
a frequency estimate
for your probability at hand.
And that's exactly what we face
in a decision under uncertainty.
Now consider a third case.
Again, like in the uncertainty case,
you're not allowed to draw
any trial balls beforehand.
You have to start with the real world,
with real events,
with the drawing of the balls,
in our case, right away.
So you draw a few balls, all is fine.
All of a sudden,
you end up with... not a ball.
With a white rubber duck
which looks pretty intellectual.
Oops, that was kind of a surprise!
We call these decisions
"decisions under ignorance".
While for risk and uncertainty,
you still knew what
the possible outcomes were,
for a decision under ignorance,
you don't even know
all possible decision outcomes.
And these are particularly bad,
because you only know them
with the benefit
of hindsight, as we just did.
So we have risk, we have uncertainty,
we have ignorance.
And only for the first,
risk minimization
would be a good approach.
Luckily, we do have
other decision rules available,
for example, a precautionary approach.
The precautionary principle does not look
at the harm that occurs on average,
it looks at the worst-case scenario only.
So the worst imaginable case,
and it tries to avoid it at any cost.
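A toy comparison, with purely illustrative numbers, shows how the two rules can disagree: risk minimization picks the option with the lower mean expected harm, while the precautionary principle picks the option with the milder worst case.

```python
# Hypothetical outcome tables for two options: each outcome is a
# (probability, harm) pair. The numbers are purely illustrative.
options = {
    "option_x": [(0.99, 1), (0.01, 100)],  # usually mild, rarely terrible
    "option_y": [(0.99, 5), (0.01, 10)],   # always moderate
}

def expected_harm(outcomes):
    """Risk minimization: compare the mean expected harm."""
    return sum(p * h for p, h in outcomes)

def worst_case(outcomes):
    """Precautionary principle: look only at the worst imaginable case."""
    return max(h for _, h in outcomes)

# Risk minimization prefers option_x (expected harm 1.99 vs. 5.05) ...
print(min(options, key=lambda o: expected_harm(options[o])))  # option_x
# ... while the precautionary principle prefers option_y (worst case 10 vs. 100).
print(min(options, key=lambda o: worst_case(options[o])))     # option_y
```

The two rules thus rank the very same options in opposite order, which is the spectrum of disagreement the talk describes.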
Now, quite often, we associate
the precautionary principle
with politics in the EU,
on the European continent,
while risk minimization
is associated with the US.
And indeed, when you look at TTIP
or at the different ways
people react to GMOs
on both sides of the pond,
that seems to be the case.
While many European countries
go for a precautionary
approach towards GMOs -
they are forbidden
until they're proven to be safe -
in the US, they are allowed,
as long as they withstand a risk analysis.
The precautionary principle and risk
analysis mark the two extremes
of a whole spectrum of decision rules,
but all decision rules
share the same problem.
There are good reasons
for one decision rule
for a given decision,
and there are also good reasons
for another decision rule
for the same decision.
So, what to do?
Our current theories
don't tell us
for which decisions we should
take the precautionary approach,
and for which we should take
the risk minimization approach.
Why is this?
The reason is, all our current theories do
is focus on the decision situation,
the action itself
and all its uncertainties.
But we also need to include the actor,
with all their characteristics.
So far, we focused on knowledge --
knowledge about the decision situation,
knowledge about decision rules.
But what we need is to also include skill;
skill of those who are
actually taking the action,
the person or the group of people
who do take the action.
Skill differs significantly
from knowledge.
Knowledge you can learn from a textbook,
but skill you have to learn by practicing.
You cannot learn how to ride a bicycle
only by reading a textbook.
The skill we need in order
to deal with uncertainty
is certainly not a bodily skill.
What we need is rather
a skill that is
more like a kind of virtue,
a character trait.
And now, for many of you,
red flags probably go up,
because training virtues,
training character traits?
That sounds, at best, impossible,
and at worst,
like manipulation.
But we do value character traits
and virtues even in our society.
Think about honesty,
think about nonviolent
conflict resolution,
think about maybe even bravery.
And we do train them;
we do discourage our kids
at school from cheating.
And this way, we teach them
the value, the virtue of honesty.
The skill we're looking for,
the virtue we're looking for,
is still much more complex.
On the one hand, it needs
to resemble honesty;
it needs to be an ethical skill
that tells us what is worse:
a nuclear accident or global warming.
But it also needs
to be an intellectual skill
that tells us
whether we actually face a decision
under risk, uncertainty, or ignorance.
Philosophers have also, for a long time,
only focused on the action itself.
This was completely
different in ancient times.
There, the person, the actor,
actually stood at the center
of our reasoning.
Where do we find a solution
for our seemingly modern problem
of how to deal with uncertainty?
Ancient philosophy.
The Greek philosopher Aristotle
suggested the virtue of phrónesis.
And this proves
very important for us today.
The phrónesis is an intellectual virtue,
but it's not the cunningness of a fox,
not a kind of shrewd cleverness;
it's prudence that is always
aimed at the ethically good.
How does this work in practice?
The work of the phrónesis is twofold.
First, it's a kind of problem indicator;
it shall single out a situation
as ethically relevant.
So consider a seemingly
ethically neutral situation,
like shopping for new clothes.
Then the phrónesis will indicate
all the things you're uncertain about.
You're uncertain, maybe, about
the production process of the clothes,
whether the chemicals
are used in a sustainable way,
whether there was child labor involved.
And the phrónesis points you
to the ethical implications this may have.
The second task of the phrónesis
is to mediate between general rules
and the specific situation.
In our case of buying new clothes,
it mediates between
the general decision rules
we have available under uncertainty -
risk minimization, for example,
precautionary approach -
and it helps you to decide whether,
given your uncertain
background information,
you should go for the risk minimization
or for the precautionary approach.
You may think that still sounds
pretty abstract, and it is.
But remember, just like you
cannot learn how to ride a bike
just by listening to a TEDx talk alone,
we are talking about virtues,
about character traits,
a kind of skill; we need to train them.
The fundamental shift we need
in order to improve
our decision-making under uncertainty
is to shift our focus away
from knowledge about the action itself,
to the character traits,
the virtues of the decision maker,
of those who are taking the action.
That doesn't mean that knowledge
becomes unimportant.
It just means that knowledge
does not suffice.
And this shift is a very fundamental one.
It's a kind of revival of ancient thinking
after it was buried for 2,000 years.
With this, I want to give you a taste
of what a possible way out of the dead end
we currently face in decision-making
could look like.
And remember: I'm a philosopher,
and paid for thinking
and not paid for doing.
So, what philosophy does is
provide the science, the direction,
of how to reach
good decision-making under uncertainty.
And when you practice phrónesis,
when you practice
your intellectual skill of cycling,
you'll finally get there.
And I want to wish you
a good journey there.
(Applause)
