CHRIS DEFAY: Welcome,
everyone.
And welcome to those watching
this talk on
Google's YouTube channel.
My name is Chris DeFay, and I'm
a member of the Talks at
Google team, here in Google's
Los Angeles office.
We are very pleased to welcome
Dr. Martin Libicki, senior
management scientist from
the RAND Corporation.
Dr. Libicki's research focuses
on the impacts of information
technology on domestic and
national security.
A topic he has published
extensively on over the years.
The title of today's talk is,
"Crisis and Escalation in
Cyberspace." Please join me
in welcoming Dr. Libicki.
Thank you.
DR. MARTIN LIBICKI: Ah.
And thank you for inviting me.
[APPLAUSE]
DR. MARTIN LIBICKI: Well,
I guess that's the
title of the talk.
Actually, what this is is the
title of a monograph that RAND
should be producing in the
next month or two.
If you want to really think of
the title of the talk, it's
going to be closer to something
like, "Why Cyber War
Really Isn't War."
And I kind of want to start
with basic history.
Pretty much as soon as people learned to walk on two legs rather than four, people learned to make war on each other.
And you had the origin
of armies.
We had sort of mastered
the ground medium.
And when people learned to get onto the little dugout canoes, and even better, put up sails, and oars, and galleys, people had mastered the naval medium, and thus we had naval warfare.
It wasn't until 1903 when the
Wright brothers could master
heavier than air flight, that
we had mastered the air
domain, and thus we
had air forces.
And, of course, until we put
things in orbit, we couldn't
even start thinking about
space warfare.
Well, there are a lot of people where I work, particularly across the street from where I work, which happens to be the Pentagon, who are thinking about cyberspace as a new domain of warfare.
Here, I suppose my voice
should get really deep.
But there's something very curious about cyberspace.
Yes, you couldn't have cyber war
until you actually had a
cyberspace.
But the reason we have cyber
war is because we haven't
mastered the medium.
What do I mean by that?
Generally speaking, if you've
got a computer, it's supposed
to do what you think
the computer is
supposed to do, right?
It's supposed to be under your
control, not under the control
of some random guy 10,000 miles
away in a cave somewhere.
But it doesn't always
work out that way.
It turns out that sometimes our systems are under the control of people who don't really mean to do us any good.
And for the most part-- we
can go into exceptions in
questions--
for the most part, people have
that kind of control over your
machine because of errors in the
construction of software
or errors in the construction
of the relationship between
hardware and software.
In other words, the reason we
have cyber war is not because
we've mastered the medium
of cyberspace.
It's because we haven't
mastered the medium of
cyberspace.
And within these mistakes lies the potential for all sorts of nasty things to take place.
OK?
This is particularly true if
you talk about cyber war in
terms of malware.
And most of what people are talking about, whether it's these so-called advanced persistent threats, or things such as Stuxnet, or the attack on Aramco and RasGas that Secretary of Defense Panetta mentioned last month, is in terms of malware.
In other words, code that is
inserted into your computer
that allows the bad guy to make
your computer do certain
things, presumably that
you don't want done.
Now, not all security issues are malware.
If you had a world in which you
had no malware, you would
still have security issues.
But as I say in a more, I guess,
military context, if
you could get rid of malware,
you could turn cyber war from
a four star issue into about
a one or two star issue.
In other words, it would
become the province of
security professionals who could
tell you all sorts of
interesting ways that people
could do things to your
computer, but you wouldn't be
reading about it so much in
the "LA Times," or the
"Washington Post," or whatever.
And that, in many respects, is
the definition of cyber war.
Let me just make a little
statement about the
relationship between
three things.
One is, when you have a system or a piece of software, what the design manual says it's going to do.
Right?
That's how the people
who built it think
it's going to behave.
And then the next part is what you think the computer's going to do.
But there's a third part, and
that's what the code says the
computer's going to do.
And I'm sure you guys know this better than I ever will: if there's ever an argument between the design spec and the code, the code is always correct.
It is always a better predictor
of what the machine
is going to do than
the design spec.
Now, as computer professionals,
I'm sure you
folks tried to make the code
resemble the design spec as
well as you can.
But as computer professionals,
you also know that systems are
extremely complex.
And they're getting more complex
with every given day.
And that means that the task of
making the code look like
the spec is a very, very
difficult one.
And it shows no systematic signs
of getting any easier.
And if the crack between
those two is only--
to be speaking metaphorically--
a bit stream wide, that leaves
enough room for the hacker,
the bad guys, so to speak,
to get into the system.
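The gap the speaker describes, between what a spec promises and what the code actually does, can be sketched with a toy example. This is hypothetical code of my own construction, not anything from the talk: the docstring (the "spec") promises an exact membership check, but the implementation does something subtly different, and the machine obeys the implementation.

```python
APPROVED = [101, 102, 103]

def is_authorized(uid):
    """Spec: return True only if uid is exactly one of the APPROVED ids."""
    # Buggy implementation: a substring check instead of an exact
    # match, so uid 10 "matches" 101. The machine follows the code,
    # not the docstring, and the gap is where an attacker lives.
    return any(str(uid) in str(a) for a in APPROVED)

print(is_authorized(102))  # True, as the spec intends
print(is_authorized(10))   # also True: the crack in the system
```

The fix, `uid in APPROVED`, is one token away, which is exactly the point: the distance between spec and code can be a bit stream wide and still be wide enough.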
But I want to mention something
else, and that is
architecture.
We have an architecture for PCs which is now about 35 years old, going back to the ancient CP/M machines and the ancient Apple machines.
I got my first personal
computer in 1983.
In those days, you approached
personal computers with a
screwdriver, literally.
I actually had to assemble my computer from parts because, thanks to IBM's strange pricing, it was a lot cheaper that way.
But the point I want to make is
that the PC, the computer
in general, was meant to be
almost infinitely manipulable
by the user.
In fact, the original owners of
PCs back in the late '70s
and early '80s were hobbyists.
They said, oh, this is
interesting, let's see
what it will do.
Just like they were sort of the
descendants of the folks
that had race cars in
the 1950s and '60s.
And that was all good.
And that was lots of fun.
But when you have a system that
is open to that kind of
manipulation and you connect it
to a network, then all of a
sudden you have effects that are
much more serious than I
can make a mistake
with my computer.
OK?
Well, the architectural feature
of the PC that allows
it to be subject to malware is
essentially the notion that
you can add programs to it.
And you can add-- if this is a Microsoft machine-- you can add programs to the registry. And the computer will actually run the programs that are in the registry, in that order or in some sort of order, when you boot up the machine.
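The autorun mechanism the speaker is describing (on Windows, registry Run keys) can be modeled with a toy sketch. This is an invented simulation, not real registry code: the point is that the hook which accepts legitimate startup programs accepts malicious ones with exactly the same ease.

```python
# Toy model of an autorun list. The real mechanism on Windows lives
# in registry keys such as HKLM\...\CurrentVersion\Run; the names
# and actions below are invented for illustration.
autorun = []

def register(name, action):
    # Anything may add itself to the list; the boot process does
    # not distinguish good programs from bad ones.
    autorun.append((name, action))

def boot():
    # On startup, every registered program runs, in order.
    return [action() for name, action in autorun]

register("antivirus", lambda: "scanning")
register("updater", lambda: "checking for updates")
register("malware", lambda: "phoning home")  # same hook, bad intent

print(boot())  # ['scanning', 'checking for updates', 'phoning home']
```

The sealed-in-hardware computer the speaker imagines next is simply one with no `register` hook at all: nothing new runs, good or bad.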
Now it's also possible to
envision, by the way, a
computer that cannot
have malware in it.
And that is to say: for 90% of all computer users, and I don't think any of you folks are in that 90%, it's sufficient if I give you a PC or an Apple equivalent that's got office automation tools and a web browser. And I can put it all in hardware and give it to you and say, here's three years' worth of computer.
Now, when you're done at the end
of three years, of course,
it'll be three years
out of date.
Now, you're actually talking
to somebody who still has
Windows Office 97 running on
his computer upstairs and
that's about a 15-year-old
program.
It probably has a
few bugs in it.
But all I'm suggesting is that if the computer does not have the architecture to accept new, good programs, it also will not have the architecture to accept new, bad programs. I am not advocating this as a way to build computers. I am saying the possibility of that trade-off is still there.
OK.
So that's the basic factors that
you folks probably know
better than I do.
Now, let's take a look at what
happens when something bad
happens to your computer.
Well, as it turns out, in
Washington, DC we're like the
seven blind men with
the elephant.
And everybody takes a look at
the problem of computer
insecurity from their
own lens.
If you work for DOD, it's
a cyber war problem.
If you work for the FBI, it's
a cyber crime problem.
If you work for the intelligence
community, it's
an OPSEC problem, Operational
Security problem.
But of course, if you work
for the other end of the
intelligence community, it's
an OPSEC opportunity.
That's kind of like what
intelligence does.
And now I'm going to pick on
my good friends at DHS,
because if you work
at DHS, it's a
cyber terrorism problem.
Now, it's also a perimeter
control problem.
And that leads to some very
interesting implications for
how DHS spends its money.
Now, take a look at DHS.
This is not a Washington crowd,
so I don't expect you
actually to know the answer.
What are the four largest
components of DHS?
One of them is Immigrations
and Customs Enforcement.
Closely related, is Border
and Customs--
Border--
BC--
Border something
Patrol, right?
What are their main functions?
To guard the perimeter of the
United States against things
that we don't want in
the United States.
Then the number three
is the Coast Guard.
Well, it has the word
"guard" in it.
What is the function of the
Coast Guard, apart from
pulling drunken fishermen
out of the water?
And that's to guard
the coastline
of the United States.
They're the last line, so to
speak, of maritime defense.
And then finally you have the Transportation Security Administration.
OK?
Now, 90% of what they do is
to guard a perimeter.
The perimeter between you and
the gate, so to speak, the
airline gates.
Given that it's in their history, given that it's in their construction, is it any wonder that DHS thinks of cyber security as a perimeter control problem?
Rather than, say, an engin-- by the way, if you guys are waiting for the slide to change, it doesn't change until about 30 minutes in.
OK?
I want to relieve some
tension you may have.
OK?
You guys are wondering, when's
the guy going to go to the
next slide?
DHS has a cyber security budget
of about a half a
billion dollars.
Roughly 3/4 of that money is
spent constructing a giant
firewall around the civilian
government, or what I can call
for short, the .gov domain.
Which, interestingly, leads to an irony. They spend 3/4 of their budget protecting the 2% of the country that's in the .gov domain, leaving one quarter of their budget to protect the 98% of the country that's outside the .gov domain.
And sometimes you want to remind
these guys at DHS that
they kind of actually work for
the taxpayers, but nah.
That may not be where
we want to go.
OK.
Let me talk a little bit about terrorism here also. Because if you follow any of the debates in Washington, you know that a lot of people are enthusiastic about the notion of information sharing.
We're going to do information
sharing and we're going to put
a big dent into our cyber
security problem.
OK?
How many of you have played
the game of "Clue"?
What's the game of
"Clue" about?
Mr. Green with a pipe wrench
in the ballroom, right?
In terrorism, if I've got information-- advance information-- I want to know who's coming at me.
I want to know where they're
planning to attack.
And I want to know how they're
planning to attack.
But frankly, I don't care that much about how because, except for extraordinary events such as 9/11, it's going to be a guy with a pipe bomb or a car bomb or something like that, right?
So let's take this same model
and talk about information
sharing and cyber security.
And so you have the
same priorities.
You want to know who's
attacking.
You want to know where they're
going to attack.
And you want to know how they're
going to attack.
And then you think about this
for a minute, and say, well I
don't really need to know
who's attacking.
Because that's not going
to change anything.
And it'd be nice to know where they're going to attack. But in fact, they could change their minds almost instantly.
What I do want to know is how
they're going to attack.
Specifically, what software
vulnerabilities are they going
to take advantage of.
Because that's how you get in.
That's the interesting
stuff, right?
Remember?
Good software, they're not
supposed to get in, but
somehow they do.
Problem here?
And then you think about this
for another minute.
Let's say I'm a chemical
producer, right?
And the whole notion of
information sharing is I get
together with my other chemical
producer friends and
I say, have the hackers
been at you lately?
Yes, no, maybe so.
They tried to do this, I've had
problems with that, then
you go off and have
coffee or tea or
whatever it is you drink.
That's not the information
sharing I'm interested in.
I want somebody to go to the guy who's got the software with the vulnerability that let the bad guy in, or whose other vulnerabilities let it spread through the network, and I want to say, you've got a vulnerability here.
I want you guys to
work on a fix.
And having worked on a fix, I
want you guys to proliferate
the fix, and I want
the problem fixed.
As a chemical manufacturer,
exactly how that problem is
fixed is kind of really
secondary for me.
But all of a sudden, if you're
taking a look at that model,
you're taking a look at
a different notion of
information sharing.
And then you're asking the question, not how do peers put information together, but what is the mechanism by which the identification of vulnerabilities leads to a fix for those vulnerabilities.
Much, much different problem.
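The mechanism the speaker prefers, routing a vulnerability report to whoever owns the code and then proliferating the fix, can be sketched as a toy pipeline. Everything here is invented for illustration (the vulnerability id, the vendor name, the hosts); it just contrasts with the peers-comparing-notes model.

```python
# Toy sketch of report -> fix -> deploy. All names are hypothetical.
vendors = {"VULN-TOY-1": "SoftCo"}  # vulnerability -> owner of the code
fixes = {}

def report(vuln):
    # The useful flow: the report reaches the software's owner,
    # who produces a fix...
    owner = vendors[vuln]
    fixes[vuln] = f"{owner} patch for {vuln}"
    return fixes[vuln]

def deploy(vuln, fleet):
    # ...and the fix is proliferated to everyone running the code.
    return {host: fixes[vuln] for host in fleet}

report("VULN-TOY-1")
print(deploy("VULN-TOY-1", ["plant-a", "plant-b"]))
```

The chemical producers trading war stories over coffee never touch `report` or `deploy`; that is the speaker's complaint about the usual notion of information sharing.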
But it doesn't really fit into
the terrorist mold, because
that's not how you think
about terrorism.
Which kind of comes to the same
theme that I want to kind
of reiterate.
It's different.
This is not a form of warfare,
or a form of crime, or a form
of intelligence collection--
although it actually comes closest to a form of terrorism--
that just happens to take
place in another domain.
It's got its own rules here.
OK?
Now, another thing that I think
about when I think about
cyber terrorism is the--
I don't know if you've heard of
this-- the cyber 9/11, the
bad guys coming into our system
and wreaking all sorts
of mischief.
And the more I thought about it, I said, well, I don't know how likely this cyber 9/11 is. It may be more likely, it may be less likely.
On September 10, 2001, not many
people predicted the 9/11
that was going to take place,
even though, as it turns out,
there was a precedent.
But now let's look at
9/11 for a minute.
9/11 killed about 3,000
folks, did about $100
million worth of damage.
And then the United States
government reacted to 9/11.
And we established DHS and
a lot of other security
controls, and we went
to two wars.
As a result of which, 6,000
Americans died, 10,000 to
20,000 were severely wounded.
We paid about a trillion dollars
in war, and about
another half a trillion
dollars in
other associated expenses.
So you think for a minute and
said, you know, 9/12 was a lot
more expensive and dangerous
than 9/11 was.
So it got me to thinking about
the question, what would a
9/12 look like in cyberspace?
And the answer wasn't a
particularly good one.
This is why I love to come
outside of Washington, because
they get a little crazy inside
the Beltway here.
OK?
One of the things we're going
to want to do is retaliate
against the guy who did it.
And in fact, the more the damage, the faster the retaliation is going to be.
Well, anybody who's worked in
cyber forensics can tell you
it's not one of the things
that's instant.
OK?
Sometimes it takes
a lot of work.
If the guy really doesn't want retaliation, there are interesting ways of making it look like somebody else did it.
But if you've got a political
pressure to retaliate, you may
say, I'd love to have
the complete picture
but I've got to go.
Why now?
Who cares?
I've got to go on what I got.
Right?
And so you end up retaliating
without necessarily knowing
why the attack took place and
without 100% confidence that
you know who carried
out the attack.
I mean, if you've ever played
strategy games, you know that
worse than having one enemy
is having two enemies.
Basically, by retaliating
against somebody who didn't do
it, you may end up
with two enemies.
Something else to worry
about is a trade war.
People talk about cyber attack
as if it's one level below
kinetic conflict.
But in fact, when you think
about all our trade
relationships with the world,
you realize that the odds of a
trade conflict are in fact much
greater as a result of a
cyber war, and as a result
of getting excited
about a cyber war.
Oh, we can't let in any of
these products because we
don't trust them.
Well, then we can't let any
of these products from you
because we don't trust
these, right?
You may have read about a month
or so ago there was a
congressional report on Huawei
and how we couldn't trust the
stuff that's coming in
from Huawei, right?
Only if you looked at the
report, all you could tell was
that Huawei wasn't enthusiastic
about answering
congressional inquiries.
Oh.
OK, but the good stuff was
supposed to be in the
classified appendix, which I
haven't seen by the way, but
Reuters did.
And it tells us there's no good
stuff over there either.
But somehow we don't
trust them.
Well, so there's a Chinese province that declared they're not going to buy Cisco either.
These sorts of things
can get out of hand.
OK.
Let's talk about some
other reactions.
How many of you are
familiar with the
term of "active defense"?
That's one of those euphemisms
for, we're going to get you
before you get us, right?
There are some very bright people who sit at Fort Meade who love this active defense stuff.
And you kind of have to worry whether, in fact, their active defense is going to do what they think it will do.
Remember the Iraqi weapons of mass destruction; I don't have to say very much more for you to know that even the best of intelligence can sometimes get things wrong.
OK?
Another thing there is going to be a lot of pressure for after a cyber 9/11 is a national firewall across the United States.
Now, a national firewall would
be an enormous undertaking,
which some people, by the way,
are glad to take taxpayer
money to do.
But that essentially means that
some computer is going to
read every email that you
write that happens to go
outside the country.
Now, it turns out that probably
more emails than one
would think go outside the
country because of all sorts
of routing issues.
But that's neither
here nor there.
That's a lot of emails
to be read.
OK?
Well, if we're going to have a
national firewall, then the
one thing we can't afford to
have is encryption, right?
Because encryption defeats a
firewall because you can't
read the stuff that's
going through it.
Which means you can't
have open--
and you need hard authentication
because we have
to know where everything is
coming from, which means we
all need good cyber IDs, which
means we can't have open
Wi-Fi, which means we
can't have Tor.
You're all familiar with
what Tor is, right?
It's a good crowd.
And it means you can't
have things like PGP.
That's a lot to pay for not
having a cyber attack if you
thought a national firewall
would work, which, by the way,
it may not for whole lots
of other reasons.
And then there's this whole
notion of, well, the attack
could have come from
a bad website.
So we want the ability to take a
look at everything that's on
a website even before
it's posted.
And I probably don't even have
a good enough imagination to
imagine all of the things that
people who don't understand
the internet are going to want
to put on the internet.
So that's why I worry
about a cyber 9/11.
Not for what it will do, but for what it will motivate the people who react to it to do as a result.
OK.
Let me switch a little bit from
the subject of cyber war
and get into sort of the nuclear
strategy for a little
bit because it illustrates a
question about whether the
existence of cyber war, the
existence of cyber security,
is going to be a destabilizing
factor.
I do this, by the way, because I work at RAND, which made its reputation on this whole nuclear business.
Have any of you ever been in
the Santa Monica City Hall?
There's this really nice sculpture there called "Chain Reaction." Now there's a town that does not like its leading employer quite so much.
At any rate, one of the things that people worried about is-- somebody was asked to do a study on where bomber bases should be in Europe. It turns out that it's probably, in many ways, the most famous study that RAND has ever done.
And it was based on the principle that if the enemy is within range, chances are so are you.
So if you put your planes too
close to the Soviet Union,
there's a possibility the Soviet
Union could wipe out
your entire nuclear force.
In which case, all that
deterrence you thought you had
would come to nothing.
So you have the whole notion of second strike and instability: the idea that the guy who goes first with nuclear weapons has this existential edge over the other guy.
And the nuclear community's
been spending decades and
decades working on
the problem.
And if you guys are interested
in nuclear strategy, I can
answer more questions
about that.
But let me say, a lot of people
are worried about that
in cyberspace as well.
They're worried that because
offense is dominant over
defense, that, in
fact, the first
strike could be disabling.
And if a first strike is
disabling, then you've got the
incentives to go first.
And a world in which the incentive is to go first, to go on the offense, is a world in which you have a great deal of instability.
Well, let's step back
for a minute.
How unstable is all this
cyber war going to be?
Well, the first thing I have to
mention is, we still have
nuclear weapons.
And nuclear weapons can still
trump cyber weapons.
In fact, I started to make a seriousness chart like this. Nuclear war here, a large conventional war here, hurricanes-- we had a little something called Sandy a few weeks ago.
Nice and breezy over
in Washington.
Fortunately, not much damage.
And then finally we have a cyber
attack, which may or may
not come close to Sandy.
So those nuclear weapons are
still useful to have in your
pocket if you're worried
about anything else.
Now, what about a cyber attack
on our nuclear infrastructure?
I can't rule it out.
How many of you have seen
the movie "War Games"?
Right?
That was based on the presumption that there's a modem bank on the side of the command and control computers for STRATCOM.
I kind of rather doubt it.
And in fact, the folks who run STRATCOM-- actually, the folks who run our ICBM force-- have gone public saying it can't happen.
Are they right?
I think they actually are.
At least, I'd like to think
that they're right.
And part of the reason they
might be right is because this
is one of those things where you
don't really want to get
all happy about networking.
Yes, it is true, I cannot dial
in a nuclear attack from home.
I actually have to get dressed
to come to the office and
start a nuclear war.
But I consider this a very
small inconvenience given
everything that's
at stake, right?
And as it turns out, if you actually look at the nuclear arsenal, it's fairly primitive. And it's primitive because they have managed to ignore the siren call of networking for a lot of this stuff, and that's a good thing.
And by the way, it's also a good
thing if the other side
also ignores the siren
call of networking.
Because we really don't want
anything funny to happen to
their nuclear weapons.
Because those are the
ones that are
actually pointed at us.
So there's a level of some
sort of assurance.
OK?
Now, the other thing you have to
understand about cyber war
is this is one of those forms
of warfare where you can't
really disarm the other side
by a cyber attack.
Not very well, at least.
Consider the four and a half key components of a cyber war: the hacker, the information that the hacker knows or has access to, a computer, and a network connection, right?
Of those four, yeah, you can
knock somebody off the
network, but you're not going to
knock them off every single
network in the world because
there's billions of
connections.
And yes, there are ways, in theory, that you can screw up his computer, but what do computers cost now? What, $299.99 at Best Buy?
Not exactly very hard
to replace.
So the whole notion that you can
cripple a country's cyber
attack capability with your own
cyber attack capability,
that doesn't really hold
a great deal of water.
Which means, by the way, that there are a lot of things you don't necessarily have to worry about in cyberspace.
You hear a lot of talk in
Washington about a cyber arms
race, right?
But there's a big difference from nuclear war. In nuclear war, nobody thought that defenses were going to work at the time.
In fact, we still don't have
reasonably good defenses.
They're kind of 50-50ish,
is the stuff I've read.
So that means it's like,
I got 1,000 nukes, the
Soviets have 1,000.
Then they have 2,000 nukes, so
I got to get my 2,000 nukes.
And it would go on and on
and on like that, right?
But in cyber war, it's not
offense versus offense.
It's offense versus defense.
And in fact, you really don't
know what kind of offensive
capabilities the other guy
has if he's got any good
tradecraft at all.
So you don't really have the
basis for an arms race.
People in the nuclear business talk about the cycle between alert and warning. That is, if you react in ways that anticipate a nuclear attack, the other guy's going to think that you're preparing for a nuclear attack, not just anticipating one.
And he's going to raise his
alert level on his offensive
forces, which means you raise
your alert level.
And before you know it,
everybody's on a
hair trigger alert.
And people are worried about
that sort of stuff.
But now here's my question.
Anybody here from Uruguay
or Paraguay?
I usually like to pick on
those because I can find
nobody in my audience
from there.
OK.
You're in Uruguay and you're
worried about a cyber attack
from Paraguay.
And you notice Paraguay's cyber
defenses are getting better.
What do you conclude
from this?
Well, yes, Paraguay could be
starting an unanswerable first
strike on Uruguay.
Or they finally got
money in the cyber
security budget for Paraguay.
Or they fired their chief
information systems officer
and hired a new one.
Or there's new opportunities
here, or a product came in, or
a really clever salesman
walked out the door.
When you look at all the reasons
that people acquire
cyber defenses--
whoops, got to stay past this
line-- when you look at all
the reasons that people acquire
cyber defenses, the
fact that they're going to war
isn't prominent among them.
OK.
So a cold look at what you can
and cannot do with cyber war
suggests it's not all that
unstable an environment.
However, crises are not caused
by what is true.
Crises are caused by what
is perceived to be true.
And in cyberspace and in cyber
war, the difference between
perception and reality
is very big.
You have a nuclear explosion,
you have what's called the
blinding flash of the obvious.
Not going to be any mistake
on what happened.
In cyberspace you can
have a great deal
of doubt and ambiguity.
People might object because the level of cyber espionage you're doing, which was acceptable one day, is no longer acceptable the next.
They may misinterpret defenses
even though they shouldn't.
OK?
They may have too much
confidence in attribution.
In other words, oh yeah, I think
somebody did it and they
go ahead and strike back when,
it turns out, they really
don't have much more than
an educated guess.
On the other hand, you could
have instability if people
have too little confidence in
your attribution and they
think they can strike
with impunity.
And so on and so forth.
OK?
The important point to remember
here is that the
distinction between what is and
what is perceived to be is
particularly wide in this field,
particularly when you
think about how much ignorance
there is in this topic.
It's almost kind of scary.
Because there are a lot of bright people who know a lot of secrets in Washington, who not only don't reveal the secrets very much, but who are talking to an audience that wouldn't understand them even if they did reveal them.
The late Ted Stevens referred to the internet as, what, a series of tubes?
I don't think he's all that
atypical in terms of
understanding of cyberspace.
And when it comes to cyber
conflict, there's even more to
be misinterpreting about.
Let me switch a little bit into
questions of crisis and
crisis escalation.
OK.
This is a thing that nuclear
theorists talked
about all the time.
Herman Kahn, for instance,
wrote a book--
actually, it was the first
book I ever bought, maybe
about a half a century ago--
on escalation in nuclear
conflict.
OK?
But when you think about
escalation in a cyber context,
and how do we keep a cyber
conflict contained, it turns
out it's very, very difficult
for everybody to agree on all
the same red lines.
Or in this case, orange
lines and blue lines.
Let me give you a scenario.
Let's say we have a cyber
conflict with
another large country.
Not to name any names, right?
And we're interested in
crippling their ability to
conduct naval operations.
So we hack into their afloat
naval supply facility.
We just want to mess up your
logistics a little bit.
I mean, we're already shooting
at each other, so I guess it's
no big deal.
And they say, well, we want
to do the same thing.
As far as we're concerned,
attacks on military support,
you've introduced
that into play.
So that's fair game.
So they attack Guam. They attack Guam's port. Which is in Asia, right? And we say, hold it, that's an escalation. Because you've gone from attacks in the field to attacks on the homeland.
And so we're going to attack a
port in your home country.
And the other guy says, whoa,
you've just switched from
attacks on military support
to attacks on civilian
infrastructure.
So we're going to attack your
coal-fired power plant.
And we say, hold it, you've just
escalated from attacks on
the homeland to attacks
on industrial safety.
Well, you folks are literate and
you can sort of basically
read the chart up here.
What's the problem here?
The problem here is that
not one person in this
confrontation thought they
crossed a red line.
Because, to them, they didn't.
But according to the other
person, they did.
So while each person was reacting in ways they thought were underneath the red lines, it turns out they escalated all the way up to the top.
It turns out that's a tricky
little thing in cyberspace
because that requires a lot of
nuance in terms of where you
think the red lines are and
where the other guy thinks the
red lines are.
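The scenario just walked through can be sketched as a toy model. The category maps below are invented, loosely following the talk's sequence: each side's reply matches the severity level it assigned to the incoming attack under its own map, yet crosses a line under the other side's map.

```python
# Each side privately classifies each possible target. The labels
# and targets are hypothetical, echoing the talk's scenario.
labels = {
    "supply ships":  {"us": "field support",     "them": "field support"},
    "Guam port":     {"us": "homeland",          "them": "field support"},
    "mainland port": {"us": "homeland",          "them": "civilian infrastructure"},
    "coal plant":    {"us": "industrial safety", "them": "civilian infrastructure"},
}

moves = ["supply ships", "Guam port", "mainland port", "coal plant"]
for i, target in enumerate(moves):
    attacker, defender = ("us", "them") if i % 2 == 0 else ("them", "us")
    print(f"{attacker} hits {target}: {attacker} calls it "
          f"'{labels[target][attacker]}', {defender} calls it "
          f"'{labels[target][defender]}'")
```

Every reply is "proportionate" by the mover's own map (the Guam port is just field support, to them), and every reply crosses a red line by the other side's map. Nobody thinks they escalated; everybody did.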
Next month, I'm going to
be sitting down with my
counterparts from the People's
Republic of China in what are
called track 1.5 negotiations.
I can assure you we're not even
close to being able to
negotiate about that.
We're not even close to being
able to basically discuss what
is information warfare.
Because we have one definition
which is closer to, you mucked
with my machines.
And they have another definition
which is, you put
subversive material into
my country via--
oh, this is Google--
YouTube, right?
Well, between the two
perspectives it's really hard
to get to first base.
And this is like fifth
base or sixth
base in terms of nuances.
So I don't have a great deal
of confidence here in what
they call escalation control
and management.
OK.
Tit for tat.
That's another interesting
notion in cyberspace, right?
You did this, I didn't
like this, I'm going
to do that to you.
And then you're going to learn
that the impact of your
carrying out an attack
is, in fact, that you
get an attack back.
Tit for tat, right?
Now, tit for tat works badly or
well enough when you know
what happened.
But I would argue that within
cyberspace there's a
disjunction among four things.
What I hope to do by attacking
your system, what I think I've
done by attacking your system,
what you think I've done by
attacking your system, and then
finally what I really did
by attacking your system.
Now some of this I can
understand, it's the military
problem of battle damage
assessment, right?
But it would seem obvious that
at least the perceived effects
of the attack are the same as
the actual effects of the
attack, right?
I mean, the guy that's got the
stuff that's been broken knows
it's been broken.
Well, not necessarily so.
Remember the Stuxnet attack?
That attack was carried
out, essentially--
from what I believe in reading
the papers and other things--
in late 2009.
The Iranians did not realize
they had a malware problem
until 2010.
June 2010.
And my hunch is they didn't know
that they had a problem
at Natanz until they read about
it in the "New York
Times," which would have been in
late September 2010, right?
So these guys had gotten an
attack, but because it was a
corruption attack and not a
disruption attack, they didn't
have the insight as to
what was going on
in their own system.
And it was only because the
attack spread to some place
well outside the facility that,
in fact, the chain of clues
was discovered.
So here's a tit for
tat, right?
You start off with the intended
effects of the
attack, that is, what's in the
head of the guy who's doing
the attack.
The guy you wanted to try
to influence with
the tit for tat policy.
And then you get to the actual
effects of the attack, which
may be different.
Right?
And then you get to the
perceived effects of the
attack, which may be
different yet.
And then I say, well, what's
appropriate in terms of what I
want to create as an
effect going back?
Which may be a miscalculation,
because you're going to say,
well, what are the nature
of equivalences?
Particularly if I want
to strike back in a
proportional manner.
And then I have the same error,
and misperception, and
miscalculation going around
the other way.
So that by the time the guy
who's decided to carry out an
attack has, in turn, been
affected by the attack, you've
got six forms of error
going around.
And you end up having,
potentially, a very, very
imprecise mechanism.
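That chain of intended, actual, and
perceived effects, running both
directions, can be caricatured as
compounding noise. A toy Python
sketch, where the noise ranges and
the 10-unit effect size are
invented for illustration:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def actual(intended):
    """What an attack really does can drift from what was intended."""
    return intended * random.uniform(0.5, 1.5)

def perceived(actual_effect):
    """The victim may see more or less damage than really occurred."""
    return actual_effect * random.uniform(0.5, 1.5)

def proportional_reply(perceived_effect):
    """The victim aims for parity, judged by its own notion of equivalence."""
    return perceived_effect * random.uniform(0.5, 1.5)

# One round trip: A strikes, B replies "proportionally." By the time the
# reply lands back on A, five noisy steps have compounded.
intended = 10.0
felt_by_a = perceived(actual(proportional_reply(perceived(actual(intended)))))
print(f"A intended a 10.0-unit effect; the reply A feels: {felt_by_a:.1f}")
```

Even with modest per-step error, the
signal a tit-for-tat response is
supposed to send can arrive
badly distorted.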
Well, a lot of other topics I
could cover, but I'm kind of
running out of bandwidth here,
or at least energy, or at
least oxygen.
But the bottom line I sort of
want to come to is the same.
I think Washington is in error
when they think of cyber
insecurity and conflict in
cyberspace as just one more
thing that they're
familiar with.
That they can just take out the
words "physical" and "kinetic"
and put in the words "virtual"
and "cyber," and they
understand what they're
talking about.
And what I basically want to
say is you have to understand
this from first principles.
And then when you understand it
from first principles, you can
walk through a lot
of the strategic
and the policy issues.
But you do so having a firm
grasp of what's actually going
on in the machine.
And not what it looks like is
going on in the machine.
So that concludes, and I'll
take any questions.
Unless I've reduced you all
to stunned silence.
AUDIENCE: Well, I guess I'll
start with one sort of on the
policy side.
Because you're talking about
this information
sharing with people.
And so recently there's been
this move to get some
legislation to make
that easier.
Do you have some take on that?
Do you think it's a good idea,
is that misguided?
DR. MARTIN LIBICKI: What is the
expression from Douglas
Adams' books, "mostly
harmless"?
Information is good stuff, OK?
There are some issues having to
do with how you write the
legislation in such a way that
you don't violate privacy.
The Center for Democracy and
Technology and a few other
groups in Washington I think
have been working the issue.
I think-- and I'm not that up
on the current debate-- that
they're satisfied that
the legislation has
been written carefully.
Part of the problem in writing
legislation is a tendency to
tack on, "notwithstanding any
other law, dot dot dot," which
means they just run roughshod
over everything that's ever
happened before.
People get nervous about this.
I think it's sort of
been worked out.
But the question is not whether
that's a good thing or
a bad thing.
The question is, is it going to
do enough good to be even
worth bringing up
in front of Congress?
And as you can probably tell, I
am not optimistic about how
good information sharing is.
Although that doesn't
mean I'm against it.
You can be for things that just
don't do very much good,
as long as they do
even less harm.
AUDIENCE: I just have
a follow-up to that.
DR. MARTIN LIBICKI: Sure.
AUDIENCE: Information sharing
about exploits.
So for instance, if someone is
using an exploit against us,
do we necessarily want to
share it because we can
catalogue that, and it's
also a valuable weapon?
DR. MARTIN LIBICKI: Ahh, now
you have touched a very
interesting nerve.
OK?
I was actually talking about
this to one of my colleagues
who used to work at the Puzzle
Palace out in Maryland.
If I had to make a plea for what
the money should be used
for, I would say it would be
well spent-- and I think I
have actually said that
in the talk--
looking for vulnerabilities.
Now, there are many ways you can
look for vulnerabilities.
You can hire people
to do that.
You can do all sorts
of things.
But the current way is we pay
someone to look for it.
We create a market in
vulnerabilities.
In fact, Google does that.
AUDIENCE: Bug bounties?
DR. MARTIN LIBICKI: What?
AUDIENCE: Bug bounties.
DR. MARTIN LIBICKI:
Bug bounties.
That's right.
But we spend $60 billion a year
on cyber security.
How much did we spend on
bug bounties?
How many orders of
magnitude less?
I think maybe four or five.
I've got to count the zeroes.
You can see what
I mean by that.
There's a lot more money that
could be spent profitably.
It turns out that there
is a malware market.
And it's, from what I
understand, it goes from the
bug hunter to somebody in
the defense world, to--
need I say more?
So if you've got one party
that says, I want
vulnerabilities so I can exploit
them and you've got
another party that says, I want
vulnerabilities so I can
fix them, what do you get when
they're both in the market?
A bidding war.
Now, if it turns out that the US
taxpayer is financing both
ends of the bidding war, then
we have what we call
implementation issues.
On the other hand, you want
these guys to afford to do
their job, and you do
want these guys--
DHS, if they were the ones
running it-- to do their job.
It's just kind of unfortunate
that they don't necessarily
see eye-to-eye, because they're
not supposed to.
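That bidding war is easy to
caricature: an ascending auction
where one buyer wants to exploit
the bug and the other wants to
patch it. A toy Python sketch;
the budgets, opening bid, and
increment are all invented
for illustration:

```python
# Two buyers value the same bug for opposite reasons: one to exploit it,
# one to fix it. An ascending auction runs until one side drops out.
# All dollar figures are invented.
def bidding_war(exploit_budget, defense_budget, opening_bid=10_000, step=5_000):
    price, high_bidder = opening_bid, "exploiter"
    while True:
        next_bid = price + step
        if high_bidder == "exploiter":
            if next_bid > defense_budget:     # defender drops out
                return high_bidder, price
            price, high_bidder = next_bid, "defender"
        else:
            if next_bid > exploit_budget:     # exploiter drops out
                return high_bidder, price
            price, high_bidder = next_bid, "exploiter"

winner, price = bidding_war(exploit_budget=100_000, defense_budget=80_000)
print(winner, price)  # prints: exploiter 80000
```

The deeper pocket wins, and the
loser's budget sets the price.
If the taxpayer is funding both
pockets, that is the
"implementation issue."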
AUDIENCE: So when you were
talking about the computers
that you can't install software
on, I'm wondering if
some of the things with these
netbooks and the cloud
computing where the software
is basically not on your
machine, but in some presumably,
one would hope,
trusted location.
Is that a step in the direction
of what you were
thinking of?
DR. MARTIN LIBICKI: Well, I
mean, General Alexander wants
to get the federal government
to the cloud.
But what he really wants is
a much thinner client.
You know, I do remember the days
of the terminals, with
the acoustic couplers.
That may not be exactly where
we want to go, but something
like that may be
the trade-off.
Like a lot of things, the
trade-off you make depends on
what you have at stake.
There are the trade-offs you
want to make for, you know, ma
and pa sitting in Vacaville,
California, who don't really
care if they have a virus on
their machine, because the worst
that happens is a DDoS
attack because they're part
of a botnet.
And maybe that's not
a major problem.
On the other hand, if I'm
running a nuclear power plant,
I want to make another
trade-off entirely.
Sometimes we tend to forget in
Washington that different
folks have different
requirements.
And that you protect some
information that's particu--
I'll give an example.
There's been a report recently
that the advanced persistent
threat has been all over
Coca-Cola, right?
I'm willing to bet you
they didn't get
Coke's secret formula.
Because I'm willing to
bet you they didn't
put it on their network.
I'd like to think so, guys, that
they keep it in a safe
somewhere in Atlanta.
Right?
Because they've segregated
information that they need to
protect from the information
they need to circulate.
Well, as it turns out, the
information they needed to
circulate was also about
their purchase
of an overseas company.
Was that a good idea?
Was that a useful trade-off?
Would they have been
better off?
I can't make that decision.
But that's a decision that
people have to make.
What do you protect, what
do you not protect?
Or, to give you a bumper
sticker, insecurity is a price
we pay for convenience.
Inconvenience is the price
we pay for security.
OK, I can say you've been
a kind audience.
[APPLAUSE]
[MUSIC PLAYING]
