The rainmaker killer risk.
So what do we mean by rainmaker?
These are individuals
or business units
that are generating
very high growth,
something that is very difficult to do.
That success makes them
very powerful individuals and entities,
so it's certainly
politically difficult
to bring up the risks around them.
The second thing is that
these people are easy to identify:
they have a
peacock-like attitude,
they're very
highly compensated,
and everybody knows who they are.
Rainmaker success is also, many times,
a leading indicator of disaster.
The typical progression of
this killer risk starts
with the fact that growth is difficult,
so there is a real search
to identify the growth opportunities,
and occasionally we find somebody
who can actually organically
grow the company, top line
and bottom line.
The next step is that,
because such talent is so difficult to find,
we're going to
throw money at it.
We're going to ramp it up
as fast and as hard as we can,
pouring tons of resources
into this unusual talent.
And implosion is
often the next step.
Why does that occur?
Well, it's because of a lack
of scrutiny, transparency,
and accountability.
That environment encourages bad behavior,
such as excessive perks,
or even theft and fraud.
Some people,
even after they're in jail
for the fraud,
remain unrepentant and feel
that somehow they deserved it.
That is how badly success can
go to people's heads, sometimes.
But it's also often the pressure
to sustain the performance,
and to keep
receiving the accolades,
that encourages fraud,
violation of company policies,
or risk taking beyond
the limits set in those
policies.
The first case study is a
very famous one:
Long-Term Capital Management,
a speculative
hedge fund founded in 1994.
Identifying the
growth opportunity:
they were growing massively,
earning returns of
21%, 43%, and 41%
in their first three years.
That is very attractive.
What they were
doing was they were
taking advantage of arbitrage
opportunities, which quickly
disappeared as
others in the market
started piecing together
parts of their strategy.
As with many arbitrage
opportunities,
the window started closing
after those first few years.
But at the same time,
people had identified
somebody who could grow,
so they threw money at them.
They came with
wheelbarrows of money
and said, please, take
our money and invest.
We want these kinds of returns.
Now, did they explain that
the window had disappeared?
Did they say, look, we were
geniuses for the first three
years, but this
opportunity is closing.
Please take your money back.
No, they didn't
do any such thing.
They took on increasing
levels of risk,
with increasingly
leveraged positions,
to be able to try to deliver
that same level of performance.
So investors were not given
transparent disclosure
of the new, higher level of risk.
And that led to implosion:
in 1998 they failed
and were bailed out,
a $4 billion loss.
So bailouts are not new.
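The mechanics behind that implosion can be sketched with simple leverage arithmetic. This is only an illustration: the function, the 25x multiplier, and the return figures below are hypothetical, not LTCM's actual positions.

```python
def levered_return(asset_return, leverage):
    """Return on equity for a position levered `leverage`-to-one.

    Financing costs are ignored for simplicity; the point is only
    that leverage scales gains and losses symmetrically.
    """
    return asset_return * leverage

# A thin 1% arbitrage edge at 25x leverage looks like a 25% return...
print(levered_return(0.01, 25))   # 0.25
# ...but a 4% adverse move at the same leverage wipes out all equity.
print(levered_return(-0.04, 25))  # -1.0
```

As the arbitrage window closed, delivering the same reported returns required raising the leverage multiplier, which is exactly the hidden risk investors were not shown.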
Another case study:
the Fukushima Daiichi Nuclear
Power Plant disaster.
In March 2011, the Tohoku
earthquake and tsunami
caused a full meltdown
of three of its reactors,
resulting in the largest
nuclear disaster
since Chernobyl.
Those facts are commonly known.
However, when we look
at this a little deeper,
within an ERM perspective
(and there have been many
articles written about this),
we get an even deeper insight:
we consider that the
source of this risk event
was more related to the
rainmaker killer risk
than to the actual
seismic events.
It's kind of interesting.
So why are we showing
a picture of Godzilla?
Well, I have to admit I watched
Godzilla movies as a child.
These are Japanese movies
in which a gigantic, radioactive
lizard suddenly comes
out of the ground
or out of the water
and destroys Tokyo,
and similar themes run
through all the Godzilla movies.
What I didn't know as a
child, and only learned
fairly recently,
is that it's
common knowledge in
the film industry
that these movies referred to the
Japanese fear of the Hiroshima
and Nagasaki nuclear bomb
blasts: the suddenness of them
and the nuclear aspect of them.
So eventually, Japan
got over this fear,
in part due to the need for
growth in energy power sources.
So again, the progression:
Japan needed to grow its power sources,
and nuclear was identified as
the most lucrative opportunity
for growth in the energy sector.
And this is where
the problem started.
The true source of the
risk event was here.
There was a push for massive
growth in nuclear power.
And it seems that anything that
got in the way of the growth
was squashed, which led
to improper oversight.
A New York Times
article dated April 26, 2011
identified three
main governance risk
issues in the way Japan was
regulating the nuclear power
sector.
They had a few
different bad practices.
One was amakudari, or
descent from heaven.
And what that is
referring to is that they
were allowing bureaucrats,
usually in their 50s,
to land in cushy
jobs at the companies
that they once oversaw.
They had a second
practice called amaagari,
which means ascent to heaven.
It refers to the fact
that regulatory panels relied on
testimony from retired,
or sometimes even active,
engineers working
in nuclear
industry-related companies.
So that was another bad
governance practice.
And third, although it was
charged with oversight,
the Nuclear and
Industrial Safety Agency
was part of the Ministry of
Economy, Trade and Industry,
the bureaucracy
charged with promoting
the use of nuclear power.
So the area that's
promoting the power
had authority over
the regulatory body.
And further, officials
were constantly transferring
back and forth between the two.
This blurred the lines,
a very poor governance
practice.
In 2000, Kei Sugaoka, a
Japanese-American nuclear
inspector who had done work
for General Electric at Daiichi,
reported a cracked steam
dryer that was being covered up
at Fukushima.
And Japan has
whistleblower laws in place
to protect people
who come forward.
But unfortunately, the
Japanese regulator,
the Nuclear and
Industrial Safety Agency,
divulged Mr. Sugaoka's
identity to Fukushima,
and he was blackballed in
the industry after that.
And further, the regulator
basically allowed Fukushima
to regulate itself
for two years,
and apparently looked
the other way from
what was later revealed to
be far more serious problems
that the executives were hiding,
including a cracked
reactor core cover.
So we talked about
these case studies now.
What can possibly be done?
Again, we can't
completely mitigate
these difficult problems.
But ERM can shed some light
on some possible mitigation
to partly dampen the effects
of these difficult problems.
The first is to
have an ERM policy
that automatically triggers
enhanced scrutiny
when there is
abnormally high growth.
So many of us are familiar
with the risk-return graph,
the efficient frontier.
And what we have here is the
risk-return relationship.
Generally, subject to
diminishing returns,
as risk goes
up, return goes up.
Again, not in
complete proportion,
but it's a known relationship.
So when you have massive
increases in growth,
you're going up
the return axis,
and you know that
you're also taking on
more risk, moving out
along the risk axis.
So don't just say, oh, we've
got great growth opportunities,
let's not even look at risk.
On the contrary: you're
also increasing risk.
Maybe the risk-return
payoff is OK,
and you're still on the
efficient frontier.
But maybe you're far off it,
and you've taken on way too
much risk for that return.
So you've got to
really examine it.
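As a sketch of what such an automatic-scrutiny policy could look like in code, the threshold rule, the peer figures, and the two-sigma cutoff below are all hypothetical choices, not part of the lecture:

```python
from statistics import mean, stdev

def needs_enhanced_scrutiny(unit_growth, peer_growths, k=2.0):
    """Flag a unit whose growth is more than k standard deviations
    above the peer mean: abnormally high growth automatically
    triggers an enhanced risk review under the ERM policy."""
    mu, sigma = mean(peer_growths), stdev(peer_growths)
    return unit_growth > mu + k * sigma

peers = [0.04, 0.05, 0.06, 0.05, 0.07]        # hypothetical peer growth rates
print(needs_enhanced_scrutiny(0.40, peers))   # True: rainmaker-level growth
print(needs_enhanced_scrutiny(0.06, peers))   # False: ordinary growth
```

The point of making the trigger automatic is that nobody has to volunteer for the politically difficult job of questioning the rainmaker; the policy does it.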
The second thing is to implement
a value-based enterprise risk
management approach.
This helps blunt the argument
that, well, we're
generating so much value, it
overwhelms any potential risk
concerns,
so risk folks, just
get out of the way,
let it happen,
it's all right.
Well, if you have a
value-based approach,
and let me just talk
you through the diagram
here, what you have on
the left, as part of the risk
quantification phase,
is the development of
deterministic scenarios, up
and down, for different risks.
You drop those risks,
individually or in combination,
into your baseline ERM model,
which dynamically reproduces
the strategic plan baseline,
and ask: how do these
shocks change the picture?
How much would we miss plan?
How much would we exceed plan?
And you get two main outputs
on the right-hand side.
On the lower right
you get a distribution
of the entire value
of the organization,
so you can ask: how
likely are we to lose 10%
or more of our value?
How likely are we to
meet or exceed plan?
And on the top chart,
you get bar charts
of how much impact each
individual risk scenario
would have.
So you have the individual
shocks and their combinations,
and then in the ERM model
you put it all together
to ask: how does this
affect the overall volatility
of the organization?
But the key here
is you're measuring
risks in terms of the
most important things
to the company.
For corporate entities, that's
typically company value,
driven by cash flows.
Other organizations
may have other metrics;
you can attach the
ERM engine to anything
that you care about.
As long as you know what your
goals are and the metrics
around them, you can
quantify the risks
in terms of how much they could
cause a deviation, up
or down, from those metrics.
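A minimal sketch of such a value-based engine, assuming a simple discounted-cash-flow baseline; the plan numbers, discount rate, and shock scenarios below are all made up for illustration:

```python
def firm_value(cash_flows, rate=0.10):
    """Baseline metric: present value of the plan's projected cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, 1))

def shocked_value(cash_flows, shock_pct, rate=0.10):
    """Re-run the baseline with every year's cash flow shocked up or down."""
    return firm_value([cf * (1 + shock_pct) for cf in cash_flows], rate)

baseline = [100, 110, 121]      # hypothetical three-year strategic plan
v0 = firm_value(baseline)
scenarios = {"key-person loss": -0.30, "new-market upside": 0.15}
for name, shock in scenarios.items():
    dv = shocked_value(baseline, shock) - v0
    print(f"{name}: change in value = {dv / v0:+.0%}")
```

Because risks and upside are expressed against the same value metric, the "we create too much value to question" argument can be tested rather than taken on faith.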
So you can use this as an
entree to deflect the "hey,
risk people, just get out
of the way, we have too much
value here, it's just fine"
argument.
You can say, well
hey, wait a second.
We can quantify value.
Maybe we can
actually show exactly
how much value we're adding and
see if the risk-reward profile
works.
This wasn't possible before.
In traditional risk
management, you
didn't have that
side of the equation.
The risk folks
handled the risk side,
and the strategic planning
folks handled the upside,
or new ventures
handled the upside.
And the two never
met in the middle.
This brings the two together.
So any decisions
can be evaluated
in terms of the impact on
the expected change in value
and the likelihood
of achieving it.
And the ERM folks can
inform on those decisions.
