[MUSIC PLAYING]
GUY HORGAN: Good afternoon.
My name is Guy
Horgan, and today I'm
happy to welcome
RP Eddy to Google.
RP got his start on the
National Security Council,
after which he served as
both a UN and a United States
diplomat.
Today he is the CEO and founder
of Ergo, a global intelligence
and strategy firm that advises
large governments and companies
on a variety of matters.
Today he's here to
talk about his new book
that he wrote alongside Richard
Clarke called "Warnings:
Finding Cassandras to
Stop Catastrophes."
He's going to give us
a few prepared remarks,
after which point we
will do a Q&A together.
RP, welcome to Google.
RP EDDY: Thanks, Guy.
Thanks very much.
Glad to be here.
[APPLAUSE]
Today I want to
talk about people
who can see the future, people
who can warn us of disasters,
people who
historically we ignore,
and people we must
stop ignoring.
The man in that video at
the end was Richard Clarke.
He's my co-author.
I had the real honor
of working with Dick
at the National Security
Council in the mid '90s.
One of Dick's primary missions,
then, inside the White House,
was to convince the White
House and other governments
that a group called
al-Qaeda posed
a massive threat to us-- not
just to us but to our allies.
At that time,
terrorism was looked
at as a largely national
issue or a regional issue.
We thought of groups like
Hezbollah or Carlos the Jackal.
We didn't think of big global
movements like al-Qaeda.
Dick realized this was
a massive threat to us
and something that had to
be taken more seriously.
Dick tried to make
his warnings heard.
He wrote a series
of important memos.
He talked to president
after president.
Dick ultimately was
ignored, and 9/11 happened.
As you can imagine, Dick and
I, and a small band of us
who had worked on
the al-Qaeda issue,
have looked back on that
question over the years
and asked how we failed and
asked how failures like this
occur.
One evening he and I
had a conversation,
likely over a bottle
of scotch, and we
started realizing that
disaster after disaster,
there'd be a commission.
There'd be articles.
There'd be journalists
who would pursue it.
And someone would
stand out who had predicted
that disaster in great detail
and been ignored.
We realized there seemed to
be a pattern of these people,
and that, perhaps, was
worth looking into further.
In Greek mythology, there
was a woman named Cassandra.
She was a beautiful
mortal, and she
refused to sleep with Apollo.
Apollo gave her the gift
of foretelling disasters,
but his curse was that
every time she did,
she would be ignored.
Cassandra then, being from
Troy, saw the sack of her city,
saw her brothers killed,
saw her family destroyed,
her home destroyed, her entire
city burned to the ground.
She saw this in
perfect detail, well
before it occurred, and
warned and warned and warned.
And Cassandra went
crazy with her warnings,
crazy that no one
listened to her.
So we decided to go look
for modern-day Cassandras.
And boy, did we find them.
What we found was a huge number
of people who were ignored--
not just any warners,
not sandwich-board
doomsayers on the
street corner saying
the sky is going to fall, but
people with shockingly clear,
data-driven warnings,
eminently qualified experts
who were telling us about
events of massive consequence.
You know, yes, we
are right to ignore
many warnings of
disaster, but there
are some we have to listen to.
And when we don't
find these Cassandras
and we don't listen
to them, people die.
I'll talk about a
couple examples of this.
Some of these are
awfully painful.
We went out and interviewed
as many of the Cassandras
as we could find.
And it was a fascinating
and sad experience.
We learned a lot.
These are extraordinary
people who gave these warnings
and were ignored.
Another example of a
Cassandra that was ignored,
another extremely poignant
example, is Charlie Allen.
Charlie was the National
Intelligence Officer
for Warning.
This is a job in an
organization called
the National
Intelligence Council that
sits above the CIA.
Charlie was a career
intelligence professional,
and he realized in '89
that Saddam Hussein
was going to invade Kuwait.
Charlie had a variety
of intelligence
that was giving him high
confidence of this concern,
and he brought it
to the interagency.
He brought it to the DoD,
brought it to State,
brought it to the deputies,
brought it to the cabinet.
Everybody ignored Charlie
all the way along the line.
He had the power to
write something
called a warning of war.
By virtue of his
position, he could
write a letter that would
land on the president's desk.
So Charlie wrote
the warning of war.
It ended up at the White House.
The president looked at it,
and it said, I am telling you,
Saddam Hussein
will invade Kuwait.
It might be a few months from
now, but it's coming soon.
Charlie had extraordinary
confidence in this.
And again, it was data-driven.
Of course, Charlie was
ignored, and the First Gulf War
occurred.
We call it the First
Gulf War because then we
had a Second Gulf War.
If you think about what
ended up occurring, the price
we paid from the First Gulf
War, from Saddam's invasion
of Kuwait, it's
fairly extraordinary.
One thing that might
not be as obvious
is that American troop
presence in Saudi Arabia--
we put troops there to protect
Saudi Arabia at this moment--
was the entire raison
d'être of al-Qaeda.
Al-Qaeda did not exist
until we put troops there.
Al-Qaeda had no
reason, no raison
d'être, until we did that.
Al-Qaeda was born because
we put troops there,
because Saddam invaded Kuwait.
Of course, Saddam's
invasion of Kuwait
and the First Gulf
War and al-Qaeda
led to the Second Gulf War.
We're talking about 25 years of
destabilization in the region,
trillions of dollars of treasure
spent, millions of lives lost.
And Charlie, the man
who got it right,
well, he was threatened
with being fired.
He was screamed at, yelled
at for giving the warning.
And the National Intelligence
Officer for Warning position
has since been abolished.
We're not very good at
listening to these people.
The final example I'll
give is of Fukushima.
Dr. Okamura is a noted
seismologist in Japan.
And when the Fukushima Daiichi
Nuclear Plant was being built,
he came to them in every
open hearing and said,
you cannot build this nuclear
reactor where you are.
You are too close
to the shoreline.
You are too low.
If you do, a tsunami will
come and will destroy it.
He knew the tsunami.
He knew the speed.
He knew the time.
He said you're 40 years
overdue for a tsunami.
He knew precisely the
one that would come.
He was able to go and
look in the mountainsides
above the Fukushima
Daiichi Nuclear Power Plant
and find sand and stones that
came from this Jogan tsunami.
Now, the problem with the
tsunami he foretold was it
was from AD 869.
It was a long time ago.
So he would go to hearing
after hearing, a noted
PhD seismologist from a
noted center of seismology,
and warn them, do not build this
plant where you're building it.
And every time he
would show up, they
would say, Dr. Okamura,
thanks for coming.
The hearing today is about
earthquakes, not tsunamis.
Well, when's the
meeting on tsunamis?
We don't know yet.
The meeting on tsunamis
was never scheduled.
He was never listened to.
He was ignored repeatedly.
So what happened because of
his warning not being heard?
Well, the tsunami wave
showed up at the precise height
he had warned of, at about the
time he said it would happen.
It rushed ashore at 435 miles
an hour, crested the wall,
and destroyed all the
backup power at Fukushima,
leading to the largest nuclear
disaster since Chernobyl.
19,000 people died in
Japan, and many of them
tragically died when they
went to tsunami evacuation
spots that were too low.
Even the tsunami
evacuation spots
hadn't been built to
the proper height.
And Fukushima, what happens
when a nuclear power
plant has no power?
It melts down.
It explodes.
It spews toxic and
radioactive gases,
and that's what happened here.
We estimate that if TEPCO, the
operator of the Fukushima plant,
had spent the money making
the improvements that Dr.
Okamura had suggested, it would
have cost them $50 million.
It's a real number.
But it's cost $100 billion to
clean up the nuclear disaster.
It will probably
lead to 10,000 deaths
from the ongoing radiation
poisoning and cancers.
That's a 2,000 times
return on the investment.
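The 2,000x figure is simple arithmetic on the two numbers just quoted. As a quick sanity check (these are the speaker's round figures, not audited accounting):

```python
# Back-of-the-envelope check of the figures quoted in the talk.
# These are the speaker's round numbers, not audited accounting.
prevention_cost = 50e6   # Dr. Okamura's suggested improvements: ~$50 million
cleanup_cost = 100e9     # estimated nuclear-disaster cleanup: ~$100 billion

ratio = cleanup_cost / prevention_cost
print(f"Cleanup cost is {ratio:,.0f}x the cost of prevention")  # → 2,000x
```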
Those examples and
many others were
predicted in nearly
perfect detail
by highly credible
data-driven experts,
yet we ignored them,
one after the other.
And we continue to do this.
So the only question
that Dick and I could ask
is, what the hell
is wrong with us?
Why are we doing this wrong?
Why don't we get this?
Every one of these experts
shared a large number
of similarities.
So we realized there
was a clear pattern
through these interviews.
One thing that was
pretty fascinating is that
every Cassandra we interviewed
said the same two things to us.
One was, when I discovered
this data, when I figured out
what was happening,
I wanted to be wrong.
So I went to my colleagues and
said, please look at my data.
Please study what
I think I found.
Tell me this is inaccurate.
And of course, in all instances
their colleagues couldn't.
And the second
thing they all said
was, well, now that
I knew it was right,
I took it to the decision
makers and presumed
they would act on it.
And of course, they
didn't, or else they
wouldn't be Cassandras.
And when they ignored
it, I said to them,
why are you ignoring
your own data?
This wasn't proprietary insight.
This was publicly available
data that they provided.
So that's one thing
they had in common.
Another thing we realized
they had in common
was that there were four
sets of characteristics
across every Cassandra event.
We called this the
Cassandra Coefficient.
The first one is
the warning itself.
The second is the
decision makers,
the third the Cassandra,
and the fourth the critics.
Each one of those
things can help
us identify if a warner is a
Chicken Little or a Cassandra.
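As an illustration only, the four-part structure could be sketched in code. The component names come from the talk; the 0-10 scale, the averaging, and the example scores below are invented assumptions, not the book's actual scoring method:

```python
from dataclasses import dataclass

# Hypothetical sketch of the four-part Cassandra Coefficient from the
# talk. The component names (warning, decision makers, Cassandra,
# critics) come from the book; the 0-10 scale and the simple average
# are invented here purely for illustration.
@dataclass
class CassandraCoefficient:
    warning: float          # quality of the warning itself (data-driven? specific?)
    decision_makers: float  # receptiveness of those being warned
    cassandra: float        # credibility of the warner (proven expert in the field?)
    critics: float          # weakness of the critics' counterarguments

    def score(self) -> float:
        """Average the four 0-10 component scores into one number."""
        parts = [self.warning, self.decision_makers, self.cassandra, self.critics]
        return sum(parts) / len(parts)

# A data-driven expert with a specific warning, facing weak criticism
# but unreceptive decision makers, still scores high overall.
okamura = CassandraCoefficient(warning=9, decision_makers=2, cassandra=9, critics=8)
print(f"score: {okamura.score():.1f} / 10")  # → score: 7.0 / 10
```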
Now, this Cassandra Coefficient
is only the beginning.
This book is very clear in
describing itself not as
a perfectly scientific tome.
There's a team at MIT
working on that now,
doing a series of
analyses to figure out
how to make this Cassandra
Coefficient more accurate.
And I hope someone else
will look at that as well.
But there are some very powerful
patterns that we found in here.
I will mention only two of the
24 components of the Cassandra
Coefficient, two that we think
are relatively interesting.
The first one is called
initial occurrence syndrome.
Many of these disasters occurred
when nothing like them had
ever occurred before.
Initial occurrence
syndrome has to do
with two human biases: the
availability heuristic
and normalcy bias.
It's very hard for us to
imagine something happening that
hasn't happened
before, very difficult
for us to see totally
novel events in our head.
This is a quote by
Chairman Ben Bernanke, one
of the most intelligent men ever
to serve in the US government,
highly credentialed,
highly smart.
And in this instance,
he falls fully victim
to initial occurrence syndrome.
What he's saying
here is, we've never
had a nationwide decline
in real estate prices.
And since we haven't, we won't.
And since we won't, we won't
have an unemployment problem
or an economic downturn.
He said that in July 2005.
When you see words
like "never before,"
you should look twice and wonder
why people are thinking this.
Was he right?
Of course, he was
dramatically wrong.
The 2008 collapse
happened entirely
because of a nationwide
collapse in real estate.
The 2008 collapse is also
notable for another Cassandra.
Meredith Whitney, a young
Wall Street analyst,
began to do the homework
on Citibank very early
on in '06, '07, and realized
that they were going to fail
as a company.
The largest firm in the
world was going to collapse.
Meredith had the numbers.
She had the data.
She produced her estimate.
She was obviously right.
But until she was right,
what did she get in return?
Death threats, threats of being
fired, and extraordinarily
sexist responses.
She was then correct about
Bear and then Lehman.
In every instance, she
was ahead of the market.
She was ahead of the naysayers.
This Cassandra was correct.
The other major category
of why we get things wrong
has to do not so much with
our biases-- if you look at the
Cassandra Coefficient, a large
number of its components are
highly related to human biases--
but with the massive
complexity of the world.
I call this the
acceleration of history.
We were all taught how
the Second Industrial
Revolution of 1870
led to World War
I which led to World War II.
When you have
technological revolutions,
you end up with
massive displacements.
Yes, you have winners,
but for all the winners,
you have losers.
The losers often tend to be the
employed class, the blue collar
class, as it has been in
every technological revolution
that we've had
throughout history.
And when you have a large
number of unemployed,
you obviously have a massive
disruption to society.
The 1870 technological
revolution
led to World War I,
led to World War II.
And other technological
revolutions
have been just as disruptive.
Now, as you look
at this chart, you
see the periodicity between the
revolutions is getting shorter.
We go from 200 years
till 1780, to 100 years
till 1870, and on we go.
The periodicity matters.
Society needs time to absorb
and retrain the workers.
We need time to get back
on our feet as nations.
We need time to
get over the wars
that we generally have had
after these revolutions.
But the periodicity between
the technological revolutions
is getting much, much closer
together, much shorter.
How fast are we
moving as a society?
How fast is history
accelerating?
Well, it's worth noting
the period of time
from the bronze
sword to the steel
sword is four times longer than
the time from the steel sword
to the nuclear weapon.
We are moving
extraordinarily quickly.
Now, if you look at where
we find ourselves today,
you could argue we have three
technological revolutions
happening at the same time--
three revolutions that are
leading to massive disruption,
three revolutions that
are changing regions,
three revolutions
that are changing
employment and employability.
They're all happening within
a decade of each other.
Superintelligence and
artificial intelligence,
genetics, and energy.
It is this complexity
mismatch, it's
this acceleration
of history that
also makes it hard
for decision makers
to make decisions
when given warnings.
I'll give you an example
of complexity mismatch.
Harry Markopolos is a quant.
Harry Markopolos is a very,
very smart hedge fund manager,
and Harry Markopolos got a
warning of a hedge fund run
by a guy named Bernie Madoff.
When Harry began to look
at the Madoff hedge fund,
he said he realized within five
minutes it was a Ponzi scheme.
How did he know?
The math told him so.
The data told him so.
His education told him so.
So what did he do?
He went to his friends and said,
this can't be right, can it?
They said it is.
He went to the SEC and said,
please listen to me, won't you?
They said, no.
He said, how are you
ignoring your own data?
The exact same story.
He had 11 exchanges with
the SEC over nine years.
He provided three highly
complicated explanations
of why Bernie
Madoff was a fraud.
He was ignored every time.
Do you remember when
the SEC arrested
or caught Bernie Madoff?
They didn't.
Bernie Madoff's
kids turned him in.
He went scot-free for decades.
He stole $65 billion, the
largest financial fraud
in history.
And the warnings were
there, in perfect data.
Complexity mismatch--
it's very hard for people
to understand these things.
Now remember, the lawyers
at the SEC at the time,
well, they were lawyers.
They were liberal arts grads.
When Harry came in with a bunch
of charts and math and numbers,
they didn't understand it.
They couldn't get it.
That's a relatively easy thing.
Imagine how hard this is going
to be with superintelligence.
Imagine how hard this is
going to be with nanotechnology.
Imagine how hard this is going to
be with new sources of energy,
genetics, CRISPR.
We're in real trouble
if we can't begin
to understand what's going on.
So while that's an
historical look back,
the only value to
this book would
be if we can see
something coming down
the pike in the future.
Can we apply the
Cassandra Coefficient
to current warnings?
At the end of the book,
the final seven chapters,
we looked at seven
other threats.
These are people right
now pounding the table,
warning us, explaining
to us that these
are species-level risks.
Things that could kill.
There's a term I
learned writing this book:
gigadeath--
billions of people killed.
People who traffic in
this world understand
these horrible ideas.
It's very difficult for us
to get our heads around it.
That's what they think about.
When we wrote this book
about a year and a half ago,
the actual drafting, all
seven of these warnings
seemed a little preposterous,
a little out there.
In the year since then,
five of them have gone
from fantastic to probable.
I'm going to talk about two.
The first one is very timely.
This is a warning being
given to us right now
by a noted professor
named Alan Robock.
Alan has picked up the work
previously done by Carl Sagan.
Carl Sagan, in the '70s
and '80s, very ingeniously
teamed with a group
of Russian scientists
and did a lot of
study about what
would happen if 1,000
nuclear weapons were
thrown back and forth between
the United States and Russia.
Other than the death,
other than the destruction,
what would occur to the planet?
What he discovered is
you'd have massive fires.
Those fires would create
superheated plumes.
The plumes would rise
into the stratosphere.
And they would dump a
huge amount of dust
and small particulate
matter around the earth.
That leads to global
nuclear winter.
Global nuclear winter
was considered so proven,
so mathematically
correct, so well-modeled
by the Russians
and the Americans
that Carl Sagan's theory led
to a huge degree of disarmament
talks, disarmament success.
It's considered a
really magical tool
to help ease some of the
tensions in the Cold War.
He was considered correct.
Alan Robock, our
current Cassandra,
went to go look at
Sagan's work using,
obviously, much more advanced
computers, much larger models.
And he said, wait a second.
You don't just need thousands
of weapons going back and forth.
You could probably trigger
a global nuclear winter
with 40 weapons.
Now, 40 seems like a lot of
nuclear weapons in a world
where we've only used two.
It is not.
I've worked on nuclear weapons
strategy, nuclear planning.
When the United States
decides to hit a target,
like perhaps North Korea,
it's 30 weapons minimum.
That's just our weapons.
Nuclear weapons don't go in
onesies and twosies anymore.
So Alan Robock took
a look at this.
And his initial study was
on Pakistan and India.
And he came back and said
that if they went to war,
we'd have trouble.
And he's since modeled a Korean
peninsula nuclear exchange.
What he's found is that if even
40 warheads were exchanged there,
this would trigger a
global nuclear winter,
where the entire earth
temperature would go down two
to four degrees Fahrenheit.
Crop production would
go down 15% to 25%.
And that would lead to 1
to 2 billion people dead.
So while 40 weapons
in the peninsula
would be a horrific
thing in and of itself,
it could lead to a
global catastrophe
like we can't imagine.
This is hard science, real
data, by an eminently qualified
expert who's trying to
get the word out there.
Now, the good news
is Alan Robock
and his team won the Nobel
Prize last year for their work
on nuclear weapons disarmament.
His work on weapons disarmament
is being well heard.
His work on nuclear
winter theory is not.
I've had conversations with
the parts of the US government
that make the nuclear weapons
plans-- the people who would
actually press the button--
and this theory is
largely unknown to them.
That's a real problem.
The final risk I
want to talk about
is about artificial intelligence
and then superintelligence.
Artificial intelligence,
meaning machines
that do the work of
humans, is all around us.
It's your ATM.
If you don't think you're
already a cybernetic organism,
try to survive without your
phone or computer for a week.
You are.
Superintelligence is
dramatically different.
It's the idea that a computer
can be, quote, unquote,
"smarter than a human"
at all tasks humans do.
It's also called AGI,
artificial general intelligence.
I'll call it superintelligence
to make it easy to understand.
It is definitionally very
hard for our small brains
to understand a thing that would
be twice as smart, 10 times as
smart, or 1,000
times as smart as us.
But please understand
there is no real argument
that this is coming.
Every person involved with
superintelligence or artificial
intelligence, down to
only a couple dissenters,
says yes, we will get
to superintelligence.
Now, we don't know the how
and we don't know the when.
That makes it very difficult
to make the argument
that we should be concerned
about it, but it is coming.
In fact, when it does come,
it'll come very quickly.
We're talking about machines
that program themselves.
It's called recursive
self-improvement.
This is a self-replicating
technology.
A little footnote--
any technology
that's self-replicating or
self-recursive and asymmetric
is wildly dangerous.
This is a self-replicating
technology.
It has to be considered
extraordinarily powerful.
The only argument about this
superintelligence, as I said,
isn't if it will happen or not.
It's if it will bring
good tidings or bad.
We spent some time with
Eliezer Yudkowsky.
He's our Cassandra in
this part of the book.
He's a man of
extraordinary intellect
and now of great stature who
is dedicating his entire life
to ensuring that
superintelligence is safe,
that it brings good tidings.
It's a very difficult puzzle.
He doesn't know the solution
yet, but he's dedicated to it.
And in fact, billions
of dollars are now
being dedicated
towards ensuring that
superintelligence
is done safely.
Elon Musk, Stephen Hawking,
Bill Gates, Eliezer Yudkowsky,
philosopher Nick Bostrom, who
I think gave a Google Talk,
they all say, you
better watch out.
This is a species-level
event of extraordinary risk.
Others aren't so concerned.
There's two possible
outcomes, to make it simple,
with superintelligence.
One is personified here
by this picture of Lobot
from "Star Wars."
Lobot, with his 1970s
technology on his head,
looks like an Atari
console or something.
But Lobot is a
cybernetic machine,
part human, part computer.
Google's own Ray Kurzweil
looks at this as the future
of superintelligence.
He tells a very
rosy story that we
don't have to be concerned
about superintelligence
because we will get the
cybernetic interface,
and we'll be able to learn
with, grow with, and control
superintelligence because of
our capacity to be part of it.
He and others argue,
the Lobot argument,
that superintelligence
will create
an immense amount of good.
And it will.
This is the most powerful
tool of all time.
Any problem that can be solved
by computation or intelligence
will be solved by
superintelligence.
Problems we don't
know exist will
be solved by superintelligence.
That means hunger.
That means disease.
That means death.
That means climate change.
This is the most powerful
tool ever to be invented.
And remember, as I said,
it will be invented.
There's no question of that.
But it's been called
our final invention.
It will then invent
everything else.
And the question is,
yes, it will give us
all those great societal
benefits as described,
but what does it do then?
So then you switch to
the other picture here.
And this is of the
Terminator T-1000.
And every conversation
on superintelligence
apparently has to
show this picture.
We've all watched this movie.
We all know that robots
will come for us.
Footnote-- the fact that
it seems so outlandish, the
fact that Hollywood
has peppered us with movies
about robots coming
to get us, makes
us believe it's not
going to happen.
It makes us believe it's in
the realm of fantasy in film,
not in our actual lives.
It's inured us to the threat.
People like Gates and
Hawking and others
believe this is a real risk.
And this is the way we'll
personify it in this instance.
The only way I could
basically describe this
is think about a genie
coming out of the bottle.
In every narrative, it
gives you three wishes.
I described the
goodness that we'll
get from superintelligence.
But then the genie is
off to do its own thing.
Now, I don't know if we'll
control superintelligence.
I don't know if Eliezer
Yudkowsky will be right
and if we can put ethics
into the computer.
I have a really hard time
understanding how we will.
But I know it's a question
we have to address.
Elon Musk calls this
summoning the demon.
And the one question
people often ask is, well,
can't you just turn it off?
Well, why didn't chimpanzees hit
the off switch on human beings?
Again, every noted
expert working
on artificial
intelligence agrees
we will have superintelligence
at some point, probably
within the next 100 years.
I would guess 25 years.
This is a photo of
the world Go champion
losing to your computer here
at Google called AlphaGo.
And if you want to see
the face of dejection,
I think you can
see it right there.
This Go tournament was
watched by a huge number
of people across Asia.
It was a massive event.
It was a massive event
for popular media,
but it also was a massive
event in the superintelligence
community.
I called up and talked
to one of the experts
we had worked with
on this chapter
as soon as this win had
happened, and I said,
you told me that this wasn't
going to happen in five years.
And he said, I thought it wasn't
going to happen in five years.
And he was almost in tears.
It's not supposed to have
happened this quickly.
Technology develops much faster
than humans usually give it
credit for.
We are very, very bad as
a species at understanding
technological development.
Complexity mismatch,
which I mentioned before,
is all over the issue
of superintelligence.
We do not understand
what's happening here.
Another issue called
diffusion of responsibility
is all over superintelligence.
Can you tell me who at the White
House is responsible for this?
Can you tell me who in
the Department of Defense
is in charge of
making sure we don't
get killed by
superintelligence and robots?
I don't know who it is.
But standing here, I believe
Eliezer Yudkowsky, Elon Musk, Bill
Gates, and Stephen Hawking
are giving us the chance
to listen to a Cassandra.
This is a Cassandra moment.
And if I, at the Google
campus, didn't mention this,
I'd be remiss.
You are one of two or
three places in the world
legitimately in the race to
create superintelligence.
Google may well have the fate
of humanity in its hands.
It's an extraordinary statement.
What other company in history
could you say that about?
Google's leaders,
in some instances,
have become dismissive about
the threat of superintelligence.
That worries me.
Google's early manifesto
was, don't be evil.
Google's new manifesto needs
to be, don't build evil.
Cassandras are out there,
and we need to find them.
We now have a tool
to help us do so.
And I think we can do a
much better job at this.
So Dick and I have
dedicated a huge amount
of our future effort to getting
the word out about Cassandras.
This wasn't about writing
a book or making money.
There's no money
in books anymore.
This was about
getting a message out
so others don't have to feel
the dejection and the failure
and the catastrophe of giving
a warning and being ignored.
We've created something called
the Cassandra Award Foundation.
You can see the board
and jury up here.
Some very distinguished
individuals
have joined us, including
some former Cassandras.
We have representatives
from the military,
from intelligence, from
Wall Street, from medicine.
And we, every year,
will give out one award
to one individual giving
a warning we think needs
more attention.
We gave the award this
year to Alan Robock.
I mentioned earlier Alan's work
on the climate effects of nuclear
war, on a new nuclear winter.
We gave him the Cassandra Award.
And part of the value
here isn't that he
gets to hold a piece of
glass, as pretty as it is.
Part of the value is
that Alan's work ideally
gets a little more of a voice.
More people pay attention
to what he was doing.
The members of the board--
you saw their names earlier--
we've all committed to
trying to get the story out.
When we go on media now,
we talk about Alan Robock.
When we meet with
government officials,
we talk about Alan
Robock and the fear
that we all have of a
global nuclear winter.
We're going to give this
award out every year.
It is a nonprofit organization.
The website's on there.
Please nominate others whose
warnings we need to hear.
And please donate if
you're interested.
The funds are used entirely
to reward the award winner
and to pay for publicity
around the message.
So thank you for
taking the time,
and please remember we've
found a tool we think
can help us avert catastrophes.
GUY HORGAN: RP, thank you so,
so much for joining us here
and for being here.
I'm properly freaked out, so I
have a few questions for you.
RP EDDY: [LAUGHS] You're not
supposed to be freaked out.
This is a good news story.
Right?
We now have a tool.
GUY HORGAN: I so
appreciate your being here.
And I was hoping we
could kick things off
by talking a little bit about
the perception of Cassandras.
So one of the key
things that you describe
is that most
frequently Cassandras
are not paid attention to.
Either they are not likable.
They're abrasive.
They don't look the part.
And I was wondering if you
could speak specifically
about Ms. Whitney and her
experience, especially
in light of the
topic of the day.
RP EDDY: So Meredith
Whitney, the Cassandra
who predicted the
2008 collapse, is
a little different than some
of our Cassandras for one
primary reason.
A lot of the
Cassandras we looked at
had something we call an
off-putting personality.
Now, I don't know if they had
the off-putting personality
the day they found the warning.
But by the time we got to
them, and other people had
interacted with them, a lot of
them were relatively abrasive.
And I guess you would generally
call it kind of a low EQ.
Meredith is not like that.
Meredith Whitney is
extraordinarily
charming and smart, and at
that time, in 2008,
she was a young female financial
analyst at an investment bank.
And that was a bit of
a rarity to start with.
Meredith was sitting
at a cocktail party
with the CFO of Citi.
Citi was the largest corporation
in the world at the time.
And the lead analyst
from another bank,
a man of great repute who was
the Citi analyst from, I think,
Goldman Sachs--
don't quote me-- said
to the CFO of Citi,
your company has become so
complicated that I don't even
bother modeling it anymore.
And Meredith heard this and
said, what in the world?
Your job is to model
these companies.
That's your duty and
your responsibility
to know what's going on.
So she went back and redoubled
her efforts to crunch
the model on Citi.
And of course, her
model was data-driven.
She came out with
her model and said,
Citi can't pay its dividend.
For a financial stock of that
size not to pay its dividend,
it's the equivalent of the
company almost going out
of business.
She also said it will
go out of business.
I mean, her report hit
Wall Street like a blast
and led to stocks tumbling.
And she was right.
All they had to do was go
back and look at her numbers
and see that this young
analyst was correct.
Because she was a woman, because
she was an attractive woman,
all the press began to focus
on the attractive young woman,
didn't focus on the numbers,
didn't focus on the facts,
didn't focus on
her genius, focused
on her sex and her outfits.
And so if you go
read the articles
about Meredith Whitney,
they talk about her heels.
I mean, it's just
extraordinarily sexist.
She got death threats.
She was hated.
And it's very clear to me
that if Meredith Whitney
had been a stodgy old man and
had made those predictions,
he would have been
extraordinarily
lauded for getting it right.
Now, she was lauded.
I think she was the cover
of "Fortune Magazine."
She was considered
one of the great--
and she is one of the great
financial minds of the era.
But she paid a very
heavy price for it.
And if it's not sexism,
I don't know what it is.
And this being
the #MeToo moment,
it's worth understanding
the biases.
This is all about biases,
the biases that go
into getting that story wrong.
GUY HORGAN: So in
that same vein,
for those Cassandras
who are listening today,
what would you recommend they
do to improve their messaging?
RP EDDY: There are
people who are listening
to this now who believe they
know a catastrophe that's
coming.
Now, to be fair, some
of them are just crazy.
We have a website
where people have
been nominating Cassandras.
And we got something like--
we got a huge number of
nominations last year.
And a good quarter of them were
people nominating themselves
because they saw 9/11 happening.
They weren't data-driven
experts in the field.
Now, if you are a data-driven
expert in the field--
if you look at the Cassandra
Coefficient, the indicators
of the Cassandra-- if you are
a proven technical expert,
you are data driven,
and you are effectively
the person that we train,
hire, and trust to warn us
on that topic, and
you have a warning
and you're not
being listened to,
you very well could
be a Cassandra.
So what do you do?
It's not easy.
As we've seen example
after example,
trying to get people to
listen to your message,
there's a number of
different things.
And one, get yourself nominated
for the Cassandra award.
Two, understand that
your messaging is now
the second most important part
of your-- part two of your job.
Part one was to get the data.
Part two is to get
the story out there.
You have to effectively
frame your warning
to the decision makers.
Now, as I said, all the previous
Cassandras believed, look,
I got the data.
I got it right.
It's been peer reviewed.
Of course they'll believe me.
To them, the data
told the story.
The data doesn't
always tell the story.
Harry Markopolos learned
that the hard way.
His data on Madoff did
not tell the story.
His data was inarguable.
I mean, it was impossible
for Madoff not to be a fraud,
and he proved that numerically.
But the data didn't
tell the story.
Your data will not
tell the story.
You need to tell the story.
Or if you don't have
a good EQ, if you
don't have a good
capacity to communicate,
find someone who does.
A Cassandra we talk about at
the end of the book, who's
now giving a warning on climate
change, is Dr. James Hansen.
Dr. James Hansen is the
person who taught the world
about climate change.
Dr. James Hansen's science
is absolutely peerless,
peer reviewed, and
extraordinarily accepted.
James Hansen, in 2015
or '16, wrote a new paper
that said things are much worse
than we thought they were.
We talk about him in the book.
He said, climate change
is already horrible.
We already know that.
But it's going to get way worse,
way faster than you believe.
He put a paper out that
wasn't peer reviewed.
He got slammed.
This is the guy who
proved it before.
Again, a totally noted guy,
put the message out and got
slammed.
He put another paper out, and
he got slammed again,
until the science caught
up with him.
It took two years; everyone
now says he's right.
Now, James Hansen
has been arrested-- I
don't want to say the number--
I think four times.
He chained himself to
the White House fence.
He was a government employee.
Like, he has done all he can
do to get the message out.
James Hansen had to
pair with Al Gore.
Al Gore is obviously a great
personality, very well known.
He paired with James Hansen,
and they created "An Inconvenient
Truth" and got the story out.
So sometimes the
data and the expert
needs to find effectively
a marketing team
to get the story out.
And we want to do that
with the Cassandra Award.
We want to build the marketing
team in a very light way.
So your data won't
tell the story.
The other thing
that has to happen
is in the intelligence
community, going all the way
back to the Pearl Harbor attack,
which was a surprise attack,
of course, the
intelligence community
talks about the difference
between smoke and fire.
So when you sit down with a
principal, say a president,
or a secretary of defense,
or whatever and say,
I think something horrible
is going to happen,
in lots of instances,
the intelligence analyst
only can point to general
smoke over the horizon.
Again, this is just an analogy.
What they need is fire.
They need to say, here is
specifically what's happening,
specifically when
it will happen,
and here's how I can
tell you it will happen.
Now, intelligence analysts
need to adopt a new methodology
where they continually warn
the decision maker about what's
going to occur and
show them the things
that will happen going forward
to prove they're correct.
A great expression is, here's
what will happen next week,
next month, next
year to show you
that my scenario is correct.
Another expression
to use is, here's
what could happen to
show you I'm wrong.
If we see this, I'm wrong.
And the final tool,
highly related,
is to create the boundary
conditions around the warning.
If we see anything else happen
in these boundary conditions,
I am also wrong.
So going forward in describing
where the smoke will turn
to fire and how is critical.
And then, penultimately, you
need to direct your
collection to get the fire.
In most instances,
we did not have
the intelligence we needed.
People believe 9/11 was a
failure to connect the dots.
That's not right.
9/11 was a failure to collect
the necessary intelligence
for that specific attack.
We knew generically what
was going to happen.
We didn't know specifically
what was going to happen.
So get more intelligence.
And then finally, you need
to explain these things
to people in parables.
People don't learn from slides.
People don't learn from
numbers, as we said earlier.
People learn from parables.
GUY HORGAN: From the narrative.
RP EDDY: From the narrative.
So one tool that we've
used before with clients
is we can-- there's something
very simple on the internet
you can find where you
can type some text,
and it looks like a "New
York Times" headline.
And when you rip it,
it looks literally
like I ripped the front of
the "New York Times" off.
And you put it in front of a client,
and it says, "CEO Johnson
fails to make necessary fix--
disaster strikes"-- you know,
whatever that narrative is.
"President of the
United States fails
to understand
superintelligence risk--
human species destroyed."
So find the narrative.
Put it in front of them.
Make it a parable about them.
And begin to get over
that curve of their biases
and their agenda inertia.
These men and women are
running major organizations--
governments or
companies, whatever--
because they think they're
good at making decisions.
Other people believe
they are too.
It's hard to get them
off their set agenda.
So you have to
make it about them.
GUY HORGAN: Wonderful.
So I'm also curious
about the project
that you mentioned
that's happening at MIT.
So you guys are
developing great expertise
in chatting with
these Cassandras.
They're being nominated.
You're talking with
them, identifying them,
and awarding the one that
you think is most relevant.
But I was wondering about
what they're doing at MIT
and specifically
regarding the scalability
of these principles.
Is it possible to take in
many, many possible Cassandras
and apply this in
a scalable fashion
so that you can identify
them without necessarily
going through them with
a fine-tooth comb and all
of these experts?
RP EDDY: What's
happening at MIT is
very promising but very small.
And the question there
gets more to what we just
discussed previously.
What's being
studied there is how
to be a predictive
intelligence analyst,
so how to use the Cassandra
Coefficient and other tools,
how to be able to
look around a corner,
and then how to
effectively frame that.
So the cycle that
we've just discussed
about moving from smoke to
fire, to effective framing,
to getting the warning
across to the decision maker
is what was being looked at
there and hopefully still is.
What really needs to also happen
with the Cassandra Coefficient
is the n has to increase.
So we have about eight or
nine examples in our book.
I'd like to see 50 examples.
I'd like to see them studied.
I'd like to see false
positives studied.
And I'd like to increase the
n so we can understand, take
these 24 Cassandra
Coefficients, characteristics,
and really build them
out and grade them,
understand when they
may work or not work.
And that could get
your question of scale.
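As a purely illustrative sketch of what scaling this up might look like-- the indicator names, weights, and numeric grading below are my own hypothetical stand-ins, not the book's actual 24 characteristics or any scoring scheme Eddy and Clarke publish:

```python
# Hypothetical sketch: grade each nominee 0-4 on a handful of
# Cassandra-like indicators, weight them, and normalize to a 0-1 score
# so large batches of nominations can be ranked automatically.

INDICATORS = {
    "proven_technical_expert": 2.0,
    "data_driven": 2.0,
    "official_responsibility_to_warn": 1.5,
    "warning_is_specific_and_falsifiable": 1.5,
    "being_ignored_by_decision_makers": 1.0,
}

def cassandra_score(ratings: dict) -> float:
    """Weighted average of 0-4 ratings, normalized to 0-1."""
    total = sum(INDICATORS[name] * ratings[name] for name in INDICATORS)
    max_total = sum(weight * 4 for weight in INDICATORS.values())
    return total / max_total

# Example nominee: strong expert being ignored.
nominee = {
    "proven_technical_expert": 4,
    "data_driven": 4,
    "official_responsibility_to_warn": 3,
    "warning_is_specific_and_falsifiable": 3,
    "being_ignored_by_decision_makers": 4,
}
print(f"score: {cassandra_score(nominee):.2f}")  # → score: 0.91
```

The point of a sketch like this is only triage: a high score flags a nomination for the fine-tooth-comb expert review, it doesn't replace it.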
But right now, this
is a powerful tool.
It's a discovery.
I truly believe Dick and I
discovered something important.
It was out there the whole
time staring us in the face.
There's an expression called
the invisible obvious.
When you find it,
you go, my god,
I can't believe we missed that.
I can't believe we've
missed this for so long.
I can't believe that
these data-driven, highly
credentialed experts time
and time again get ignored.
And by the way, it
continues to happen.
So on the slides we used today,
these are disasters that
happened up until maybe 2011,
which might be the last one.
The Grenfell Tower fire
in London, a recent train
crash in the Northwest of the
United States, all of these
had specific warnings
that they were
going to occur by credentialed
experts and were still ignored.
So we just have to
get better at this.
GUY HORGAN: Changing
gears slightly,
I do want to get into some
topical questions for you.
But before we do, I was
wondering, for those of us
listening, I was
wondering if there
is a way that you
can take these ideas
and apply them to
your personal life.
Is there sort of the
Cassandra warnings
for one's personal work life,
financial life, love life,
whatever it might be?
Can you apply them in the micro?
RP EDDY: I think there's
a couple of things that
are worth understanding here.
As a very macro thought, we all
have to first just acknowledge
two things about ourselves.
The first one is, most of
us know the right thing
to do, most of the time.
And it doesn't mean
we always do it.
That's a self-discipline
issue, and that's not
the focus of this conversation.
The second thing we have to
know about ourselves is we
are, unfortunately,
largely just bags of meat
driven by biases and heuristics.
So if you go back and
understand from whence we came,
you have to understand
that this brain was formed
300,000 years ago then
140,000 years ago,
and it wasn't designed for
the world in which we live.
It was designed for
groups of 140 people.
Go read "Sapiens" by Harari.
He describes this perfectly.
That's what you walk around
with all day between your ears.
And the decisions that
you constantly are making
are driven first by
your monkey brain.
And then it takes a
long time to get up
to your more evolved brains
and let impulse not lead
automatically to your response.
The Cassandra
Coefficient acknowledges
that decision makers
have monkey brains.
All of us have monkey brains and
listen to them far too often.
And we as individuals
should realize that too.
So I think inside--
I'll give you a great
example, a financial example.
Now, 99.9% of investors
in this country
should never buy anything
other than an index ETF fund.
This is proven by
every source of math
you could possibly imagine.
99% of us should
never buy anything
but an ETF, an index fund.
Yet, that's not the way
we behave with our money.
That's not the way I
behave with my money.
I believe-- and it's called
magical thinking-- that I
have a view others don't have.
I know something
they don't know.
I can go buy or sell this stock,
and it's the right thing to do.
That's foolish.
The quantified experts and the
data tell us something simple.
We need to follow it.
I'll give you another example,
highly related-- lottery
tickets.
GUY HORGAN: It's not
a good investment.
RP EDDY: It's not as
good as you'd think.
Statistically, no one
wins the lottery, ever.
Yet, the amount of money
put into the US lottery
is unfathomable.
And it's put in by the people
least able to afford it.
By the way, lotteries are
a taxation on stupidity,
and they are a
social injustice put
upon the poorest of our
class by our governments.
And they are an evil.
Another example
of where you just
have to listen to the experts.
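A quick sketch of the arithmetic behind that point-- the odds, jackpot, and growth rate below are hypothetical round numbers for illustration, not any real lottery's or fund's published figures:

```python
# Expected value of a lottery ticket vs. steadily buying an index fund.
# All figures are illustrative round numbers.

TICKET_PRICE = 2.00          # dollars per ticket
JACKPOT = 100_000_000        # hypothetical jackpot, ignoring smaller prizes
ODDS = 1 / 300_000_000       # hypothetical chance of hitting the jackpot

# Expected value of one ticket: probability-weighted payout minus its cost.
ev_ticket = JACKPOT * ODDS - TICKET_PRICE
print(f"Expected value per $2 ticket: ${ev_ticket:.2f}")  # about -$1.67

# The same $2 a week put into a broad index fund instead,
# assuming roughly 7% annual growth compounded weekly for 30 years.
balance = 0.0
for week in range(52 * 30):
    balance = balance * (1 + 0.07 / 52) + TICKET_PRICE
print(f"Index fund after 30 years: ${balance:,.0f}")
```

Under these assumptions every ticket loses most of its price in expectation, while the identical weekly outlay compounds to several times the total contributed.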
So are there experts
in our fields
who we have to listen to?
Yes.
Will that help on a micro level?
Sure.
GUY HORGAN: For some.
If we could dial things
back for a moment,
I'm curious about
your company, Ergo,
and how you got
your start there.
How did that company start?
How did you transition from the
public to the private sector?
RP EDDY: I've just been
really, really lucky
to have great colleagues
and bosses all the way
along the line.
Dick was really one
of my first bosses
and has been
extraordinarily influential
in my life, all
the way to today.
He's godfather of
our third child.
He was a groomsman
at my wedding.
And I wouldn't be
anything of who
I am if it weren't for all
of his time and influence
and love, and I can't
thank him enough.
And after I left
government, I got
to work at some great
organizations, the Monitor
Group with Mark Fuller, Gerson
Lehrman Group with Mark Gerson
and Alexander Saint-Amand.
And then from Gerson
Lehrman Group,
I created Ergo with my
partners Matthew Moneyhon
and Evan Pressman.
And Ergo is now 14 years old.
And we came out
of-- it really is
a reflection of where we came.
It's a reflection of diplomacy
and intelligence and research.
And that's what Ergo
does, and that's
where my partners
come from as well.
And it's been a huge joy.
And what we do at
Ergo is part of what
we're talking about today.
We try to give people frameworks
to understand the future.
Now, this is a framework
to understand catastrophes.
This framework works
on other things too.
But there's only a
few proven frameworks
about how to look
around corners--
scenario planning, virtual
markets, Cassandra theory.
So we try to bring those
frameworks to our clients.
And we're a little
different in that
we infuse our analysis almost
entirely with on the ground
intelligence.
So we've paid 36,000 people
for information in 14 years,
all around the world,
people from shop floors
to truck drivers to people
in and around cabinets
and parliaments and kings.
And we collect
that intelligence,
put it into frameworks--
this is a big
difference, frameworks--
and then try to help our
clients see what's happening.
GUY HORGAN: So in
a few interviews,
I've heard you
described as a spy.
RP EDDY: [LAUGHS] I'm not a spy.
GUY HORGAN: Well, that's
what you always say.
RP EDDY: Cut.
GUY HORGAN: But I'm
curious, you can only
speak in generalities,
I'm sure, but for some
of the clients, who
are some clients,
or what projects have you
done for clients that they're
particularly happy with?
What have you come and
discovered and shared?
RP EDDY: So what's worth noting
is, what does the CIA actually
do?
Now, we believe it's-- well,
we saw the James Bond films
and saw the Jason Bourne films.
That must be something
like they do.
That's not what they do.
CIA, half the house is a
group of really, really
smart analysts who
have, obviously,
access to an extraordinary
amount of intelligence.
They do analysis, and they try
to give warnings and insights
to the executive branch.
And then what's the
State Department do?
Well, that's a little
better understood.
The State Department is a series
of ambassadors and on down
who represent the United
States to foreign governments.
Ideally, Ergo has
aspects of both,
where we help companies
and governments represent
themselves and operate
in foreign areas
and in the United States and
give them the intelligence
to do so.
Now, is that a spy?
No, it's not.
Examples that I'm proud of,
we have some great examples
of helping the US
government defeat IEDs
that were killing our soldiers.
There was a great
organization called
JIEDDO, the Joint IED
Defeat Organization,
out of the US DoD.
It's a really cool example
of the US government
getting its act together,
realizing there's a threat,
and putting a huge
amount of resources
at it very quickly and kind
of getting around bureaucracy.
While working with
JIEDDO, we're very
proud for some of
the work we were
able to do to help stop some
of the most potent IEDs that
were killing our soldiers in
Iraq as well as in Afghanistan.
So that gives us huge pride.
On a corporate side, we've had
a lot of work helping people not
get involved with frauds.
I'll give you a quick example.
A very, very large
investment house
wanted to start a whole new
fund in a non-American country,
and they were going to build
it around a particular team
and a team that had
an extraordinarily
positive reputation.
They hired us to do due diligence.
Diligence is a core aspect
of our business.
And our diligence came back
and said, yes, by every report,
these people are
extraordinary and influential.
The problem is we've
discovered that they're
under investigation
for corruption.
They know it.
They're lying to you.
It's not public record.
We were able to unearth it.
So we saved that company,
by their estimates,
a billion dollars of
embarrassment and cost
by going into that enterprise.
Because the allegations
of corruption
would eventually have come out.
And there's numerous
stories like that.
But effectively
helping people create--
in a world of opacity, we
try to create transparency.
And it's no different than
what a Google search does,
in many instances, but just
much bigger and more specific.
GUY HORGAN: I was hoping
we could transition
to start talking about a few
things that are going on today.
You mentioned North
Korea, and it's something
that I think everybody
here is concerned about.
And I wonder if we're
almost past the point
of needing a Cassandra
here, because it's
a great fear by many.
So I'll ask you that.
And then number two, I'm
curious about the solution.
So Cassandras have
alerted us to this.
People are worried.
What is there to be done?
RP EDDY: All right.
So first, Alan Robock who
is just-- the human race
should be very thankful to
have Alan Robock amongst us.
And his organization, as I said,
won a Nobel Prize last year,
so he's not going unnoticed.
But Alan's work on
global nuclear winter
is a warning we
have to listen to.
So that warning
hasn't been heard.
We were talking earlier about
the number of nuclear weapons
that would likely be exchanged
if we had a nuclear war.
GUY HORGAN: You said 30
is the starting point.
RP EDDY: I think 30's
your starting point.
So if you think about
this series of targets
that we would have
to hit in North Korea
to have some degree
of confidence
that Seoul isn't turned into
an inferno or Tokyo or Guam,
or even California or even
DC, depending on where
North Korea is in its
nuclear weapons development,
it's a large number of targets.
GUY HORGAN: And you're saying
that those 30 strikes would
need to happen quickly, then?
So the artillery can't be--
RP EDDY: It'd have to happen
almost simultaneously.
It very much depends on
what your theory of war
is in this instance.
But the one that seems
to make most sense to me,
and I think most sense to our
nuclear weapons planners--
I can't say with huge
confidence-- is 30 at once.
If we're going to do it,
we're going to do it.
This isn't Hiroshima
or Nagasaki, where
we unveiled the power of the sun--
something no one had ever
seen before, which is almost
a mystical thing.
This is a tactical strike to
take out military capacities.
So you're looking at
probably 30 minimum.
Now, do we need a
warning about that?
We're separating two things.
Do we need a warning about
our risk with North Korea?
Maybe not.
I want to talk about it.
Do we need a warning
about nuclear winter?
Absolutely.
All right, so we've
made that warning.
Now, what's going
on with North Korea?
As we sit here today, I think
we have a general sense of ease.
Ah, we had the Olympics.
It all went well.
We're beginning to believe
that the North Koreans are just
like we are.
Don't the Russians love
their children too,
as Sting sang at one point
in the middle of the Cold War?
And we said, yes, they do.
And look, it all
worked out just fine.
Well, the great gobbledygook
machine of Washington DC--
and this view is actually
Colonel Mike Sheehan's view,
Ambassador Mike Sheehan's view--
is sort of how policy really
gets formed in DC.
The State Department and
Brookings and Heritage
and all these guys
kind of have a view,
and they sort of say, ah.
So the gobbledygook machine
view of North Korea for decades
has been, they are rational.
Don't worry.
They don't want to nuke anybody.
They don't want to get nuked.
They know that they're
massively overpowered.
We got this one.
We will negotiate
our way out of it.
That is the standard view.
I hope it's right.
I don't have much
evidence that it is.
There is some recent
evidence, and only recent,
by the third Kim that
they will be responsible
nuclear weapons actors, because
they said they would be.
If you look at the proliferation
of Israel, India, Pakistan,
even South Africa, and Brazil,
when they are pursuing weapons
programs, they all were very
clear to transmit to us,
don't worry.
We're not crazy.
We will handle these
weapons responsibly.
North Korea said the opposite
for three generations,
three decades, however
you want to look at it.
They've said, when
we get weapons,
we're going to turn
Seoul into fire.
We're going to unify the
peninsula under our rule.
They've said very
aggressive things.
Now, we want to
believe normalcy bias--
confirmation bias
or normalcy bias.
We want to believe that they
are going to be well-behaved.
It just makes me feel good.
I don't have the evidence yet.
As I said, we've seen a little.
The Trump Administration
is now grappling with this
and trying to understand
the question themselves.
North Korea has just offered
to have a negotiation with us.
Everyone thinks
this is great news.
It's not bad news.
What North Korea is very
likely doing right now
is trying to get the
technological advance
to have the ICBM fly the
distance and deliver a payload.
Right now, the ICBM flies
the proper radius.
But they have yet to prove
that they can successfully
deliver a payload.
When they prove that, which
is another technological feat,
they will have the entire
set of cards in front of them
that they need to negotiate,
if they want to negotiate.
GUY HORGAN: So you're saying
you believe that it's possible
that North Korea is delaying
such that they can miniaturize
the payload so it
can be delivered,
so that they're more
powerful negotiators?
RP EDDY: I would bet
you dollars to donuts
they are delaying as
long as they can to get
that final re-entry proved.
Why not?
Now, there is good news.
There are a series of things
inside the nuclear weapons
complex of North Korea that may
have been built just to trade.
Let's see.
Now, there have been
countries that have turned.
I mean, I mentioned South
Africa changed out of goodwill.
They were under
sanctions as well,
but they got out of apartheid.
Burma has had an extraordinary
change out of sanctions.
They changed into a more
transparent country.
Maybe North Korea will too.
It has happened
before, and let's just
hope it does in
this instance, or we
are in for a very,
very difficult puzzle.
North Korean
puzzle's a hard one.
GUY HORGAN: So the
Cassandras that you
listen to on this subject, what
do they say about the future?
We agree it's a problem.
We agree it's a
potential catastrophe.
Is there a solution?
RP EDDY: There is
absolutely a solution.
If North Korea decides-- look,
we have something they want.
They have something we want.
There's a negotiation to be had.
They want their children--
the leaders in North Korea,
and there's probably
about 150 people
who matter in that country right
now, as far as decision making.
The leaders in North Korea,
like any good parent anywhere,
probably want their kids
to work at Google someday.
They probably want their kids to
play on the Stanford basketball
team or the University
of Illinois soccer team.
Why not be part of this dream?
And right now, of course, North
Korea can have none of that.
They want to be
part of commerce.
They want debt.
They have massive oil reserves
offshore they want developed.
They want to be wealthy.
We can give that to them.
But we're not going
to give that to them
till they give us
what we want, which
is that they denuclearize.
Now, there's a
number of outcomes.
There's one I'm afraid of.
One outcome I'm
not excited about
would be if North Korea
and America strike
a deal that says, we're
going to keep our nukes.
We're going to keep our
short range missiles,
but we'll give up the ICBMs.
So you, America, can sleep
soundly in your beds,
while Seoul and Tokyo
rip their hair out.
That is one possible
negotiated outcome.
If this administration, or any
administration negotiates that,
it would be the absolute
end of Pax Americana,
of the American promise to be a
global leader in good standing.
It would be the end of that if
we left our allies high and dry
like that.
But it's going to be very
tempting, particularly
to people who don't understand
the cost of destroying
Pax Americana.
Another outcome would be
total denuclearization.
That would be a dream scenario.
We would have to give them
a series of guarantees--
not only let them back into
the world as I described.
We'd have to probably guarantee
their security to some extent.
We'd have to stop military
exercises, et cetera.
Look, that would be
great for everybody.
It'd be great for them.
It'd be great for us.
The only people who almost lose
a little bit in that scenario
is China, because
they no longer have
North Korea to keep us busy.
And they've got, all of
a sudden, a country that
could potentially be our
ally sitting on their border.
They may not want
things to go that well.
GUY HORGAN: Mr.
Bolton, John Bolton,
has started in the White House.
And of all the many
scary things that
have happened in
the past year, it
seems like the tenor of people
positing fears about him
have been a little bit
different, that this is truly
something scary.
And it's almost like people--
that his hawkish history
could be something different.
Combining that with other people
leaving the White House who
are generally supposed
to be moderating forces
and the possibility of the
Mueller investigation coming
to a head, and Trump now
truly needing a distraction,
I'm wondering if you have any
fears about their deciding
together with John Bolton
encouraging what he seems
to have always
encouraged and Trump
wanting to divert
attention from what
might be coming down on him.
Might that be a
combination of things
that would be a perfect
storm for us to have
a preemptive strike
and eliminate
the possibility of even having
a negotiated disarmament?
RP EDDY: So Guy, I wish I could
give you a lot of confidence
that that's not
likely or possible.
I just don't know.
You know, it's a reasonable
Cassandra warning.
But let me offer you a
slightly different view
of the Bolton presence as our
national security advisor.
First of all, we're all
very mimetic animals.
So when the press shows
us a series of quotes
from John Bolton about
destroying 10 floors of the UN,
or how every other
nation is just
a satellite that orbits
around our sun, et cetera,
or how he's talked about a
preemptive strike on North
Korea, et cetera, we could
walk out of the Iran deal,
it's very easy to
get very concerned,
particularly married with some
of the belligerent language
we've seen from the president.
I'm not sure that's wrong.
But I have a slightly
different view.
My view is where we were headed
before Bolton was scarier.
So Rex Tillerson is gone.
HR McMaster is gone.
Mattis and Kelly are still
around, but a huge number
of people who were
given to Trump,
foisted on Trump, selected
by Trump, have left.
And what we're
beginning to see--
and I think this was no surprise
to us four months ago when we
began talking about this with
our clients, five months ago--
we're seeing Trump begin
to get very comfortable
in his presidency
and believe that he's
able to make these
decisions on his own.
So I call it TOHO,
T-O-H-O, Trump On His Own.
And I think any
president on his own,
any president making complex
decisions on his own,
is bad news.
I think a president who doesn't
have the depth of experience
or lacks experience
as this president does
is doubly bad news.
So what I want to not
see more than anything
is I don't want to
see Trump on his own.
I don't want TOHO.
I want smart, wise counsel
around every president,
particularly this one.
John Bolton and I have probably
wildly different opinions
on how to exercise
American power,
but we want the same thing.
We believe in American
exceptionalism.
We believe America has either
a right or an obligation,
however you want to call
it, as the preeminent power.
I tend to believe more that
Pax Americana, living up
to the system of treaties we
wrote, being a positive ally,
being there when people believe
we'll be there, even if it's
a little late, is the way
America has power and benefits
us around the world.
I think his view is less
interested in alliances
and more interested
in military force.
That's bad, as far
as I'm concerned.
What's good is that he is part
of the mainstream conversation.
John Bolton is not Steve Bannon.
I look at Steve
Bannon as the guy
who wanted to tear
down the temple
and had an extraordinary,
really outside the mainstream
point of view of things.
John Bolton is not there.
He's way to the end, but
he's still in the mainstream.
He's been an
undersecretary of state.
He's been a real member of the
foreign policy establishment
for a long time.
He knows how it works.
He knows the value of
the civil servants.
He knows the value
of the flag officers.
He knows the value
of the diplomats.
He knows how you can extract
value from alliances.
So I'm hoping when he
sees a really crazy idea,
he'll know it.
The second thing
about John Bolton,
he's got a really strong
personality and a really loud
voice.
So Trump on his own maybe will
get mitigated by John Bolton
because he will be in the room.
He won't be ignored.
And Trump can't
toss him out like he
tossed other people out.
He can't toss him out
for being a globalist.
He certainly isn't.
He can't toss him out for
being more liberal than him.
He certainly isn't.
He can't toss him
out for being stupid.
He isn't.
So I don't think we're
going to see Trump ignoring
Bolton for very long.
We'll see how long it lasts.
So what that means is
Trump won't be on his own.
At least he'll have Bolton
joining Mattis and Kelly,
giving him what we hope
is wise counsel, which
every president needs.
GUY HORGAN: Final
question for you.
This is not just a book,
but it's a movement
that you're starting here.
You have the awards
that you're giving out
for Cassandras that you identify
as some of the most credible.
But you have another one
called the White House National
Warning Office that you and
Richard Clarke are proposing.
So I was wondering if you could
talk a little bit about that
and what your vision for it is.
RP EDDY: Good question, Guy.
Thanks for asking about that.
So look, I mean, I truly believe
that Dick and I found something
that was already there:
these Cassandras.
There are people out
there who are giving us
warnings of disasters.
We know what they look like.
We know how to separate the
Cassandra from the Chicken
Little, and we need to do that.
We have to institutionalize
this process.
So the Cassandra
Award is one step.
The other one is a
National Warning Office
at the White House, inside
the Executive Office
of the President.
It needn't be a bureaucracy.
I'm not talking about a
large number of people.
I'm probably talking about
a half a dozen men and women
who work within the EOP, so they
have an imprimatur of the White
House, and now get to work
across the interagency,
training people on Cassandra
theory and other theories,
looking for warnings, and
trying to give voice to warnings
that haven't been heard before.
It would be an extraordinarily
wise thing to do.
We no longer have the
National Intelligence Officer
for Warning, the job
that Charlie Allen had
where he foresaw Saddam's
invasion of Kuwait.
That job's gone.
We don't have that job anymore.
We need an institution
inside government
that's cross-cutting and is able
to see the forest for the trees
and can find warnings
before others do.
This organization could
create conferences,
could create allies, could work
throughout the interagency,
could task intelligence.
It could take warnings,
put them through something
like a Cassandra Coefficient
or other even better models--
I don't care what it is,
no pride of authorship--
and see if those warnings
deserve more attention or not.
Why not build
something like this?
Why not find a tool to
give voice to the warnings
that we've been ignoring
time and time again?
It would cost
effectively nothing,
and the outcome could
be extraordinary.
Think about the return I
mentioned on Fukushima.
$50 million would have
saved $100 billion--
$50 million versus $100 billion,
a 2,000-times return.
A little bit of effort
to listen to the warnings
could serve us dramatically.
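Taking the figures as quoted in the talk, the return multiple is simple arithmetic:

```python
# Fukushima figures as quoted in the talk: roughly $50 million of
# mitigation versus roughly $100 billion of disaster cost.
mitigation_cost = 50_000_000
disaster_cost = 100_000_000_000
multiple = disaster_cost / mitigation_cost
print(f"Return on heeding the warning: {multiple:,.0f}x")  # → 2,000x
```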
GUY HORGAN: I look
forward to it.
RP EDDY: You can
be the head of it.
GUY HORGAN: I appreciate it.
I'll look forward to joining.
"Warnings-- Finding Cassandras
to Stop Catastrophes."
Thank you so much
for coming to Google.
RP EDDY: Thanks very much, Guy.
Great to be here.
[APPLAUSE]
