All right thank you. Welcome everyone
thank you for showing up. I appreciate
you taking the time to come and listen
to us today.
For those of you who don't know me, my
name is Jeff Kubiak. I'm a
Professor of Practice in the School of
Politics and Global Studies here at ASU
where I co-direct an MA in Global
Security. I'm also a senior fellow at ASU
Center on the Future of War. Today we
have a great pleasure of talking with
and listening to one of my colleagues
Peter W Singer. Peter first and foremost,
in my book, is a Professor of Practice as
well here at ASU, teaches in our MA in
Global Security, and is one of the great
minds behind our Cybersecurity
Concentration. But Peter is also a
strategist and senior fellow at New
America, where he's been named by the
Smithsonian as one of the nation's
hundred leading innovators and by Foreign
Policy as one of the top 100 global
thinkers. Peter is the author of multiple
best-selling award-winning books.
I met Peter for the very first time back
in 2004, when he was working at Brookings
and talking to a group of Air Force
fellows about his latest work, Corporate
Warriors. He was talking about the
private military industrial complex even
as early as 2004, well before it became
an issue for wider audiences,
and the next time I came across his work
was his book Wired for War: The Robotics
Revolution and Conflict in the 21st
Century, which absolutely rocked my world.
It seems so futuristic and yet it was
very compelling in that it was really
right here right now and of course many
of you are already familiar with his
latest non-fiction book, LikeWar, which
has really reshaped the conversation
about the information age and the
weaponization of information.
Peter has, and I've told him this too,
the ability to see the near future, and
the future in general, much more clearly
than most, and he has an even more
intense story ready for us with his
latest novel, Burn-In, which he'll talk
about today. So Peter, thanks for joining
us. I appreciate you taking the time and
we're glad to have you.
First, thank you for that incredibly kind
introduction. It was really generous
and I appreciate it,
and also just want to thank everybody
for virtually coming out and joining.
The ASU community is rich and, what's
great, so inviting for all of us, whether
we're in Arizona or not. So I really do
appreciate that. What I'm going to talk
about today
is this new book but my guess from
having experienced a lot of these
webinars and the like is that at some point
you're gonna get tired of looking at me
so hopefully the technology is going to
allow us here for me to be able to share
with you a PowerPoint of some visuals
and give me a thumbs up Jeff if you're
seeing that. Awesome okay.
So the starting point for this talk
today is actually a book that Jeff
mentioned in his, again just incredibly
kind introduction there. A little over
a decade back I had a book come out
called Wired for War and Wired for War
explored how robots, these things from
science-fiction, were actually starting
to be used in our real wars in places
like Iraq and Afghanistan. For example,
the US military force that went into
Iraq in 2003 had a handful of (we called
them) unmanned aerial systems. If you're
in the Air Force you call them remotely
piloted aircraft. The rest of us call
them drones. We had only a handful. None
of them were armed. On the ground the
invasion force had zero unmanned ground
vehicles or systems. Today, the US
military by one measure has over 22,000
unmanned systems in it and of course
many of them armed but we're not the
only ones. Nations that range from
adversaries like Russia and China, to
allies like Great Britain and Australia
are all now not only using unmanned
systems but wrestling with the questions
of everything from: How do you buy them?
How do you train for them? What's the
best doctrine? To larger questions as
they relate to the
topic of this center: what do they mean
for the future of war and peace? In fact,
can you be at war but not at war in the
same old way when you think about the
notion of the so-called drone wars that
have taken place everywhere from
Pakistan to right now in East Africa? Now
that's where we were back then. What's
happened since? When it comes to the
technology we've seen it blossom out
into all sorts of different forms and in
many ways it's changing not just in the
forms but in how we think about it. Even
the way we named it, "unmanned," was a lot
like "horseless carriage": defined by what
it's not, by what it's replacing, rather
than by what it is, an automobile, or in
this case a robot. And as you move beyond
thinking about it that way, it also means
that in your designs you stop building
around the old manned platform, just
taking the seats out of the cockpit, and
instead you get lots of different forms, but
you also get therefore wildly different
sizes. They might be incredibly small or
they might be massively huge the size of
several houses put together. But maybe
more important than the hardware forms
that are changing is what's changing on
the software side. This is an image of
the Navy's MQ-25. It's the
new system that is better than the
previous generation of drones just like
you would measure planes in the past: it
flies faster, it flies further but it's
better in a different way. It's more
intelligent. It's more autonomous. It's
able to do more on its own. It's able to
do very complex tasks like for example
take off and land from an aircraft
carrier which any Navy pilot will
happily tell you is the toughest human
pilot task of all.
But it also will take on roles that
range from air-to-air refueling to
surveillance to, if we're being honest,
eventually a strike role. The point is,
we're not in a world where they're out
there doing everything on their own like
the Terminator, but we are seeing a change in
the human role relative to our machines,
including our robotic machines. We're
moving geographically where you might
have a pilot sitting in Nevada or
there's a base in Arizona but the plane
that they're flying is over Afghanistan.
But the human role is not just changing
geographically, it's changing temporally.
As you move from remote operation to
systems doing more on their own, there is
still a human role, but it's not at the
point in time of the action. The human
role that matters is actually further
back, maybe when it was decided to deploy
the system, or maybe even earlier, the
months or even years earlier when the
human role was writing the software for
it. Now that change
is huge but my sense is that there's a
larger change going on and it's an
important story not just in technology
today but arguably one of the most
important stories in all of human
history. Everything that we've seen play
out on the military side is playing out
even more on the civilian side. You can
see it in everything from industrial
robotics, which has seen 300 percent
growth, to software. AI was something
that was dreamed of in Arthur C. Clarke's
2001; well, from the year 2001 in the
real world to today, we've seen AI
disrupt everything from finance to
medicine. But we're only at the start of
this. There is no other area that has seen
as much promise, is being funded as
deeply, is being supported by as diverse
an array of players, as this space where
the cutting edge of hardware and
software come crashing together. For
example, one measure saw 153 billion
dollars worth of spending with an annual
creative disruption impact of 33
trillion dollars. The participants, they
range from you know the leading
governments of the world, it might be the
U.S. national defense strategy that
talks about the importance of AI, to
China's government strategy, which says,
no, we want to be the world leader in AI
by the year 2030. It's on the
business side, pretty much every single
fortune 500 company out there is plowing
not just millions but billions into it.
There are the expected tech companies,
you know, the Googles, the Microsofts,
the Amazons, the Baidus, but it's also
non-tech companies, or what used to be
non-tech companies but are now. For
example, both John Deere and McDonald's
recently bought up one of the most
promising AI startups, respectively.
That's because they believe that the
future of tractors and the future of
fast food runs through AI.
Now that point about startups being bought
up by the big guys also points to what's
playing out among the companies that
want to be those big guys. The founder of
Wired magazine put it this way: "I
think the formula for the next 10,000
startups is to take something that
already exists and add AI to it," and yet
all those trends are only going to go
faster because of what's played out over
the last several months. The trends
towards automation and AI and robotics
they were already there before
coronavirus but all the data points to
them being drastically accelerated by it.
You know, think about it this way: in
some areas we've pushed forward to a
level that we didn't think we would ever
be at, like the amount of distance
learning and distance work that played
out. In other areas the scale is where we
thought we would be, but not for a long
time. For example, telemedicine is a
field that I'm pretty familiar with
because of family, and the telemedicine
field in a couple of weeks moved forward
to where the industry thought it would be
ten years from now.
On the physical side we're seeing
robotics rolled out into roles that
range from policing curfews, to cleaning
subways and hospitals, to delivering
groceries, you name it. As for the AI
surveillance of society, we are scaling
that up frankly beyond what not just
science fiction but even the Chinese
government imagined they would be
putting into place. The point of this
is, after this outbreak is hopefully done
with, we're not going back a hundred
percent. So if we're headed into what's
been excitingly termed everything from a
new industrial age to a second Machine
Age to a fourth Industrial Revolution
(there's lots of different terminologies
for it), I think there are three
questions that we're going to have to
work out, and, going back to the notion
of the pandemic, we have to figure them
out a lot more rapidly. The first is: what
will be the impact of all this
automation on the economy and all that
ripples out from the economy, society,
politics, family, you name it? Now, while
a lot of people feel that new
technologies lead to new jobs and new
opportunities, the data shows that we
will see job displacement and even
replacement on a massive scale. This is
what happens when you rewire the
economy. The estimates of this vary
widely. We actually
built a database so for those students
in the group you know this is what you
do and actually some Arizona State
students helped with this. We tried to
pull out every job-prediction report
that's out there and read through them,
and a student put them into a massive
Excel spreadsheet. You get to see
the distribution of these estimates and
it's fantastically interesting. You know
for example Oxford University looked at
seven hundred and two different job
occupations and found that 47 percent of
total US employment is at risk for
replacement or reduction by robotics and
AI within the next two decades. Now
McKinsey, the consulting company, looks
at the same trend and says, no, no, no,
Oxford, that's a wild overestimate; we
think it's only 45 percent.
PricewaterhouseCoopers says, no, McKinsey,
you're way off; we looked at our data,
our approach is better, and we found 38
percent. This data set actually has
1,300-plus of these in there, and the
low end is the OECD
at 9 percent. Nine percent of jobs being
replaced or displaced is still a really
big deal. Again for the economy but for
society at large. And so even if you
take the most optimistic view, that
somehow everybody who's displaced finds
a new job, that's a pretty unlikely
scenario because, one, you have a delay
in that transition period. You also have
the fact that this is a different kind
of industrial revolution. The tool is
not just swapping out a shovel for a
farmer, to a hammer for someone working
on a factory assembly line, to, oh, now
they sit behind a computer. No, this tool
is intelligent, so it takes on some of
the roles of the workers themselves. But the
point is the magnitude of this shift
is historic however you cut it and not
just on the economy. As the director of
MIT Media Lab put it quote "Every area of
life will be affected. Every single one."
And think about what that means.
The last Industrial Revolution was a
story of new economic winners but also
economic losers at every single level. At
the individual level, at the industry
level, at the regional level which part
of the United States, to even the global
level it's part of the story of the rise
of the United States as a great power.
From that you saw all sorts of ripple
effects. You saw positives: no Industrial
Revolution, no mass consumer goods. You
also saw negative effects: global climate
change from that same outcome.
You also see mass political movements
and new concepts. No Industrial
Revolution, no good things like
children's rights, workers' rights,
women's rights. But with no Industrial
Revolution you also probably don't get
fascism and communism, which would drive
conflict and politics for roughly the
next century.
The point is, we've got all that playing
out, but we have a second dilemma. Every
time you get a new technology, you get
new questions of law, policy, and ethics
that spin out from it. It's always been
the case. Think, in war, of the bow and
arrow; they actually tried to ban it a
little over a thousand years back. But
you also have that story with the
airplane, and you had that story with
computers. We have the cyber security
specialty. Think of all the issues that
rolled out of cyber security and cyber
war. That will happen here but we also
have layers of questions that we've
never really dealt with before and
there are roughly three types. The first
is machine permissibility: what is that
tool that's increasingly intelligent and
independent allowed to do on its own,
under what circumstances, and who can
decide, and all that kind of stuff. The
second issue is machine accountability.
Who owns that intelligent machine and
all that it does? Who owns, for example,
the profits from what it does, or the
information it collects? But also, who
owns it in the sense of who takes
accountability when things don't work
out the way that we planned? Then
finally is the issue
of machine rights. Not just what can the
machines do but what can we do to the
machines and this is a weird wild space
that brings in everything from sex
bots to the US military. The US Air
Force for example has taken the position
that our drones have a quote inherent
right of self-defense, that is if someone
else shoots at our drones they have the
legal right as if they were manned to
shoot back. In fact they don't even have
to shoot at our drones for us to be
authorized to fire at them. They just
have to turn on the targeting radar,
that's enough. So the point is these
interesting legal ethical questions
we've seen right out of science fiction -
a lot of them are - they're playing out in
our real world and they're playing out
in lots of different places.
Take face recognition, where you can see
all of this play out. Face
recognition is a technology that's being
developed and deployed by the US
military. There's, for example, a program
that's creating a system for targeting
and intelligence that will be able to
identify a face in the dark from a
kilometer away. But it's also playing out
in law enforcement. You know everywhere
from urban police departments like New
York City to rural West Virginia, all
have been deploying this. On the business
side, face recognition has been deployed
by everything from social media platform
companies to Kentucky Fried Chicken,
which has done a test of it. Now, of
course, all this can be used to improve
security, but when you've got a world of
Big Brother, or in that case Big Colonel,
it opens up some really interesting and
scary questions of privacy. The final thing
though is that we have new security
questions and they come out of not just
this issue of privacy but how in a world
that's wired every single device out
there lashed up so the internet moves
from being about us communicating to
things operating. Whether it's on your
smart home, whether it's a smart car, a
smart city, a smart military base, you
name it.
You have each of them collecting
information and operating via the
networks, so you get a panopticon that
can map your entire life history, not
just your past but your present. Then
add an AI and it allows you to do
projection and prediction of activity,
and even influence and shaping. And just
like those sensors, some of it will be
visible; it might be ads popping up at
you. And some of it will be invisible:
subtle shifts that you don't even
notice, the way you're routed in a
certain direction, or information
warfare elements that swirl around you
that you don't even notice. But that means we also
have other types of security questions.
One, for the people particularly
interested in cybersecurity: we are on
the cusp of a shift from the problem
being about information theft, which has
been tough enough, whether it's theft of
intellectual property (how to build your
own version of our jet fighter) or theft
of financial information (stealing your
credit card numbers). That's been the
problem of cybersecurity mostly. Now we
will see breaches
that go after this internet of things
and cause physical change in the world.
Not to spoil the plot too much, but an
example of this would be what hit Israel
just a couple of weeks back, where
hackers went after its water authority
and sought to change the chlorine level
of the water that would come out of
people's pipes. To go to the book project, if
you think that small business and local
government water treatment facilities in
the US have better cybersecurity than
the Israeli government, I have really bad
news for you. So we play with that in the
book. But if you also go back to that
first issue, of all that political and
economic change, you get new ideologies
and maybe even new violent movements, or
maybe the return of old violent movements.
During the last Industrial Revolution,
for example, we had the rise of the
Luddites; now we would call them the
equivalent of a terrorist or insurgent
movement. They would assassinate the
owners of the early factories in the
1800s. They would ambush on the road.
They launched pitched street battles,
and ultimately the level of violence
rose such that more of the British
military deployed to crush the Luddites
than deployed to fight the US military
in the War of 1812. That kind of puts us
in our place when we think about what
they thought was a problem back then. Now, while we have
these three big issues to figure out, we
also have three challenges in how we
figure them out. One: when we do think
about it, we think about it as being way
off in the distant future. For example,
the Secretary of the Treasury said that
it's "not even on our radar" because the
issues of AI and automation are not
going to be something that we really
have to care about for "50 to 100 more
years." That, I believe, is self-evidently
wrong, given not just what's happening
in the near term but what's already
happened.
For example 85% of manufacturing jobs
that were lost over the last couple of
decades were lost to automation
not to outsourcing or the like.
That's what the data is. We may not like
it but that's what it is.
The second issue is that when we do talk
about it, strangely enough for a field
that's about networks, the conversations
are disconnected; they're stove-piped.
The people who care about, for example,
the future of work are different from
the people who care about the future of
war. The people who care about
cybersecurity and the Internet of Things
are different again from, for example,
the people who care about the legal and
ethical questions that we talked about.
Then finally, there is a
wonderful, strange irony. We are at the
100-year anniversary of the creation of
the word "robot." It was actually in this
play, R.U.R., that you see a scene of here
from 1920, and ever since, our concept of
the robot has been that idea of a
mechanical servant who then rises up.
That story of a rebellion of the robots
has cut through from R.U.R. in 1920 to
the Terminator, the Matrix, you name it.
Now, it would be fine if that
you know stayed in fiction except it
shapes the real world and we see it
shape everything from for example the
predominance of the debate over killer
robots, which has shaped everything from
Pentagon policy to a debate on the floor
of the United Nations; it has shaped
research and investment. Silicon Valley
billionaires in particular have actually
invested over 5 billion dollars into the
question of what we do about the
existential threat of a robot uprising.
My argument is, you know,
maybe one day we're gonna have to debate
whether we fight the robots or salute
our metal masters but in the here and now and
for the next couple of decades, when you
and I are alive, it's not a revolt of the
robots,
it's the robotic revolution that we have
to figure out and that is the challenge
and part of the challenge on top of
that is that we don't understand it that
well. It's not just anecdotes, like the
saying that it's far away: there was a
poll of leaders in which only 17 percent
of them said they have even a passing
familiarity with the issues of AI, let
alone the dilemmas that it will provoke.
And that's self-reporting. If only
seventeen percent of leaders say, "I
think I get it," it's probably even
lower. That is a massive
it's probably lower. That is a massive
disconnect when you think about the
importance of it again and everything
from national defense strategy to
business strategy. So that's what we're
after with the Burn-In book but we're
trying to go after it in a new kind of
way. The title Burn-In is taken from a
concept in the engineering field: when
you deliberately push a new technology
to the breaking point in order to learn
from it, like, for example, when you take
a new watch underwater to see how deep
it can go.
In this case Burn-In is a new kind of
book. It is a novel that is also a work
of nonfiction. That is, it's a mash-up of
the two. It's a techno-thriller, a story
in which you follow a veteran turned FBI
agent who's hunting a new kind of
terrorist through the streets of the
Washington, DC of the future. But at the
same time, it is a work of nonfiction in
that baked into the story are over 300
explanations and predictions drawn from
real-world research, with 27 pages of
endnotes to document where they're from.
It might be micro details about the
world to come: two characters are
talking, and in the distance a certain
type of delivery drone flies overhead,
and then there will be a note showing
you, hey, that's not something that Cole,
my co-author, and I dreamed up; the way
it was just described, that's Amazon's
patent for it. Or it might be a concept
that we all need to know. Something
complex like algorithmic bias, which has
struck everything from war, to parenting,
to banking and medicine.
This is an issue that's hitting all
these fields. It's really really complex.
Most of us are not going to read an
academic white paper on it so we depict
and explain it through a scene in which
our character is an FBI agent trying to
find a terrorist in a crowded train
station. Notice what I just said: you can
already visualize that, and hopefully
your pulse quickened a little bit.
Where's the terrorist in the train
station, among the crowd? That's the
power of combining
fiction and nonfiction. So the idea of
the project is to entertain; for some
people, hopefully, it's just a good
escapist summer read, and we definitely
need that right now. But for other people
it's to inform, and to equip them to
navigate this world to come. I think
that's why it's gotten such a strange,
wonderful mix of early endorsers:
everything from people
on the defense side, you know the former
heads of NATO, the US military, the US
Navy, the US Marine Corps, CIA, NSA, the
business world, the head of LinkedIn
- but also the fiction world. Creators of
projects like Lost and Watchmen and the
new Star Trek movies. These are not
people that normally come together on
the same project but I think it's
because of this double layer nature of
it. So it came out literally yesterday.
It's really exciting to be able to talk
to you about it and I'm happy to answer
questions about all of this. Thank you.
Kubiak: Thanks Peter - that was fantastic.
I feel compelled, not because nobody has
questions, because I see Joe's got his
hand up and one of our previous
students, a graduate, has a question in
the Q&A box, but I want to take the
prerogative of being the first to ask a
question in this situation, because it's
really relevant: I'm engaging my students now
in my current course on this topic with
regards to technology. American culture
seems to be, at least in my upbringing
and my youthful years, a culture of
technological optimists. If there was a
problem out there that needed to be
solved, we Americans certainly had the
wherewithal to do that, and that
technology would address the problem
before it became something critical to
human life or to U.S. power. Even with the
current pandemic, there's a certain
amount of technological optimism about
how quickly we can get to a vaccine
because of modern technology, CRISPR,
and a whole host of other computing
advances. And yet you hear more and more
every day about this fear of technology.
I kind of want to know where you fall on
that. Let's put it on a continuum: where
do you fall on the continuum of
technological optimism versus fear of
new technology?
Singer: So I hope that I'm a realist on that
spectrum. I recognize that's kind of a
dodge, but basically my
take is that every single technology
- technology just means tool - every single
technology from the very first one to
today has had good and bad effects and
has been used by good and bad people.
The first technology was the stone. Someone
picked it up and we don't know whether
they first used it to you know grind
some nut to get at the good healthy
stuff inside or maybe they just used it
to bash someone else over the head.
Take drones, which have definitely been
a center of study for the ASU Center.
Drones have been applied to war, and
whether you think they're good or bad
kind of depends on which side of the
battlefield you're on. But they've also
been applied to stop war crimes; there's
a human rights group right now using
drones for that. And literally today, to
this idea of things speeding up, a news
story crossed my screen about using
drones to deliver coronavirus tests to a
rural hospital, getting them out there
and back in one-fifth the time. So
you've always had good and bad. I think
what's interesting, and you talked about
that sort of unique American optimism,
which is particularly at play in Silicon
Valley, is what's troubling us right
now, and this is very much a theme in
Burn-In: the fine line between utopian
and dystopian views of the future. It's
hard to figure out not just which one
we're in but which one they're building.
You know, I wake up in the morning and I
have an algorithmically shaped news feed
that pushes me information, and it's
pushed me stories, sometimes stories
that are really advertisements
masquerading that way, about everything
from a smart toilet, to pandemic drones
that are going to monitor your
temperature from afar, to
movement-tracking apps on a phone that,
oh, by the way, will share your data
with Foursquare. And again, I didn't
make that up; that's actually happened.
And all that, you know: you go with the
smart toilet, and it's going to help you
figure out whether you're sick or not
and share it with doctors, but it
totally redefines "can I get some
privacy?" And so I
think part of the challenge is that
there's been what they call the
techlash. We put Silicon Valley up on a
pedestal; they were massively praised,
too much praised, and now the pendulum
has very naturally swung the other way,
and we're seeing them attacked for a lot
of different things. Some of that is
just the natural swing of the pendulum,
but I think it's also the concern that
there are very real systemic problems
that they've still not gone after. For
example, you know
they've not learned the lessons of what
played out in the 2016 election and
their role in helping to allow and
amplify extremism and Russian government
disinformation campaigns. What's
happening right now is the accompanying
infodemic of the pandemic, which has
spread mis- and disinformation about
coronavirus and remedies to it, and it's
literally gotten people killed.
Not just small numbers. The pandemic has
clearly gone wider than it should have
in America in part because of 
mis- and disinformation online. So why
don't they still get it? I think there
are some systemic issues. One is an
attribute of their techno-optimism: they
are problem solvers, and they are
optimistic they can solve problems.
That's where we get all the great stuff,
but it also means sometimes they look at
really complex socioeconomic issues and
say, all right, I've got the instant
answer. And those issues are complex
for a reason. The second thing is, they
don't red-team. They don't remember what
a Clausewitz would advise: there's fog,
there's friction, the enemy gets a vote.
They roll out the product without
asking, okay, how is the bad guy going
to use it? How is the bad guy going to
go after its vulnerabilities? And part
of red-teaming is about
alternative perspectives. The third thing
is they still have a diversity problem,
so it's a community that tends to have a
particular kind of background: white,
male, upper-middle-class upbringing, a
limited number of majors at a limited
number of American universities. And
when you roll out a technology like face
recognition, it plays very differently
if you're from a different country, a
different part of the economic spectrum,
etc. So all of those are issues, but
they're also remediable; you can solve
some of them and come out better on the
utopian side.
Kubiak: That's great. All right let's get to
some of the questions.
Joe, I'll get to you in just a second,
but I'm going to ask one of the
questions from John, a graduate of the
MAGS program a couple of years back. He
asked about how prevalent the use of AI
drones by the Chinese PLA is in the
South China Sea. That's actually more of
a question for your Ghost Fleet novel
than this one, but how has that changed
since you wrote Ghost Fleet? What have
you seen in terms of trends, especially
in regard to the South China Sea?
Singer: So the Ghost Fleet project - 
for those of you who are
not familiar - it was a novel
 looking at what a war between the
US and China might be like but it 
drew from research. Now, it wasn't like
Burn-In in that we didn't start it from
the beginning with that research concept
in mind; it was more about us showing
our work and documenting it, whereas
Burn-In was built that way from the very
start. Even the choice of character
identity was made with that idea of
blending fiction and nonfiction. But
still, the rule for Ghost Fleet was the
same as in Burn-In: it has to be drawn
from the real world. And as you're
asking, one of the things is that there
were technologies and settings in it
that, you know, we were researching back
in 2013 and 2014, the book came out in
2015, and things that seemed really
futuristic are now becoming real. So we
see all sorts of actors with not just
drones but increasingly autonomous
drones. China has multiple different
programs for this, just like the U.S.
military. Some of them are physically
large in size. Surveillance ones that are
you know the size of a passenger jet -
they're also doing work on small cheap
disposable swarming type technology. 
They're organized under military districts,
different sorts of geographic locations, and
one study found that yes, every single
district, including the ones that bump up
against contested areas like the South China
Sea, has unmanned aerial systems deployed
into it. The interesting question, you know,
going back to that legal issue: What happens
when unmanned systems start to deploy and get
into trouble back and forth? What happens
when a Chinese drone accidentally crashes
into a Japanese Air Self-Defense Force plane,
or vice versa?
And, you know, these are seemingly sci-fi but
now very real questions. One of the funny
things: literally today a news story popped
up about another part of it, something from
Ghost Fleet five years back that seemed
sci-fi: counter-drone technology. There's a
US Navy system that we have in the Ghost
Fleet book that just deployed today, a laser
to shoot down drones. And so, you know,
what's fun slash scary to me is that that was
five years back and then the system deploys
today. So what is it that is in Burn-In today
that five years from now we'll see the very
same thing?
Kubiak: Like I said, to my mind your primary
skill set is being able to see those sorts of
things maybe five years in advance.
Hopefully, you're a little bit pessimistic in
the way things turn out, in a lot of cases.
In any case, Joe - I'm gonna push the "allow
to talk" to see if he's there; there's no
camera. You want to unmute yourself, Joe, and
have a talk? Looks like you should have a mic
button - there you go.
Joe: Peter, can you talk about how you turn
all this stuff into a compelling story by
novelizing it? I'm especially interested in
how you make human character and motivation
central, as distinct from the machines.
Singer: Yeah. So, you know, we call what
we're doing useful fiction, or the technical
term, FICINT, for the people with an
intelligence community background. You know,
you have these different tools of analysis
and explanation and prediction - you know,
SIGINT, signals intelligence; HUMINT, human
intelligence, you know, human spies.
This is combining the power of narrative with
the research, so it's not pure sci-fi; it's
not, you know, dreamy worlds. You have to do
the research side, but you're building a
narrative, and so you're doing both at the
same time. And just like, you know, for the
researchers and students who are part of the
program, the research might be, you know,
bringing together studies that are out
there - it might be interviews of experts -
but as you're doing that, you're also
building the world of the fiction, and that
involves world-building, building a plot,
scenes, characters. And what we found is that
the research, the realism side, actually
helps very much with the fiction side and
makes it more compelling.
It might be the way that people in the real
world are not one-dimensional; they have
multiple identities. So the main character of
this story - and again, this is a big
difference for fiction, for techno-thrillers -
it's a she. It's a female veteran turned FBI
agent. Most techno-thrillers - actually, it's
hard to even think of one right now - when
they do have a female character, she's the
1B. She's the helper. You think of the Dragon
Tattoo or Red Sparrow or whatever. But it's
not just that; they tend to be
one-dimensional. In our case, our main
character, like people in the real world,
has these different identities. She's an FBI
agent. She's a veteran. She's a mom to a
five-year-old.
She's a spouse; her marriage is falling
apart. And each of those identities, like in
the real world, they juggle, they deal with
each other, they pop up - you know, something
from your personal life affects how you
operate at work, or vice versa. That's how it
is in the real world. The research, though,
may allow you to make something seem more
real, but also, again, make it more
compelling.
When we looked at the job automation data,
one of the things that is not well understood
is that it's not just gonna be, you know,
factory jobs or truck drivers. It's hitting
white-collar jobs, and that's hard data; it's
numbers. We turned that into a character. Her
husband is a contract lawyer, which is a
field that makes $120,000 a year right now.
It's on the trend line to being massively
reduced by AI. Now, you have that data in the
report; what makes it feel real, but I think
also compelling in the fiction sense, is to
go: okay, what does it mean for someone who,
you know, got good grades, went to the right
law school, and then had it taken away from
them, and they're working a gig job from
home? How does it affect their marriage? How
does it affect the way they parent? How does
it affect their political identity? And so,
again, you can see the world-building.
Other times it might just be little details
that you come across in the real world, and
you go, that's gonna be so cool in a story,
or that will flesh out the story. So the
first time I went to the National Security
Council, and then later on the White House,
one of the things that - it's strange to say,
but we're humans, this is the way we
operate - the floor in the Old Executive
Office Building, my shoes made this kind of
small squeak, because the floors are this
kind of, you know, marble thing, and in my
head, even though I'm going to the NSC, I'm
thinking, I hope the escort doesn't hear my
shoes squeak. And then when we crossed over
into the White House, the rugs are thick, and
I got this sense of relief to be in the White
House, where my shoes didn't squeak. That's a
personal experience. That is a little detail
that our character then has, that same
experience, and so, you know, again, I feel
like it makes it more real: she walked into
the White House, and she's thinking in her
head, you know, at least it's not squeaky
shoes.
So I hope that gives you a sort of a sense of
how the world-building goes back and forth.
One last thing that you bring up, Joe, that's
really interesting, that's a real-world
issue: one of the other, quote-unquote,
"characters" is a robot. It's TAMS. But we
never say it's a character. It's a
technology, but you the reader and the other
characters can't help thinking of it as a
character, because of the real-world issue we
have of always putting our emotions on top of
these intelligent machines. And that's a
whole other issue that, again, we have to
figure out in everything from policing to the
military.
You know, I came across this for a past book,
where in military units the robots were
supposed to keep them from taking risks, and
in one case in Iraq there was a ground robot,
a bot, that got stuck in the mud, and someone
ran out under heavy machine-gun fire to
rescue their robot - the exact opposite of
the idea of why we were using unmanned
systems. And this was in 2005, I think.
That's with a robot that's remote-controlled,
doesn't speak, you know, doesn't look like a
human, and yet it was affecting people that
way. What about when you move forward five to
ten years, to - again, all those issues -
your kids' toys? How does it affect that? How
do they connect to it, in ways that you'll
go, oh, that's exciting, or in ways that
you'll go, I don't like that? So that
character-ness is another thing of the
fiction and the nonfiction going back and
forth.
Kubiak: That's a really interesting story.
Thanks, Joe, for the question. We've got a
lot of good questions on our - Singer: You
know what, I'm giving super long answers, so
I'm gonna - you know, let's ask them and I'll
give short answers.
Kubiak: Great. I'll just say the screen name,
Mariah Maury - I don't know who that is, but
they've asked several questions, and the one
I find most intriguing - I'm gonna ask it on
that person's behalf - is: is there a Moore's
law specific to AI?
Singer: There are arguments as to whether
there is or not, and not just Moore's law but
the sort of compounding effect into what's
called the singularity. You know, Moore's law
was originally just looking at chips, and
then other people said no, no, no, it applies
roughly to technology breakthroughs. Then
there are those who say, well, because it's
AI, it actually speeds up, because the AI
joins in to the speeding up of it. What you
basically see, though, is that at a certain
point those debates, particularly within the
singularity movement, cross over from
research and study into faith-based debates.
So, you know, the end point for them is, and
then at some point we'll be able to upload
ourselves into our machines. And you're like,
well - the self, what it means to be a
human - that's a debate that has those
religious tones to it. Or consciousness: what
does that mean?
Also, they tend to, again, not be
multidisciplinary. So, you know, yes, Moore's
law, or this effect of advancing, that's one
way you can look at it, but we also have the
intervening effects of everything from
business investment, to war, to pandemics.
Which again means that the technology trends
don't come in a nice perfect curve. They tend
to, you know, slow, and then they have
massive jumps. You know, think of the effect
of World War II on, you know, all of the
advancement. So that's that.
And that sort of links to the question, which
was, how do you document what drives job loss
versus overall value? That's again what these
studies try to go after, and again, they tend
to be disparate. You know, one group looks at
job loss, another group looks at value
creation. And again, you know, this is where
I find William Gibson, the science fiction
writer, so useful to us on the flip side,
which is: he talks about how the future is
here, it's just unevenly distributed. The
uneven distribution of it is, I think, the
key. Yes, there will be massive economic
winners, particularly those who benefit from
the network and the data gathered, but there
are clearly going to be other losers. So
saying that it all evens out in the end -
you know, one person making a billion
dollars and, if you crunch the numbers real
quick, a bunch of other people either losing
their jobs, or working for less than $15 an
hour, or being on the hamster wheel of a
gig-type project - it doesn't mean it evens
out. Society has to deal with an uneven
distribution.
Kubiak: Yeah, the aggregate numbers tend to
hide quite a lot. One of the other
participants asks: to advance further in AI,
we need 5G infrastructure, and yet we are
nowhere close to having that infrastructure.
Can you comment on this? Singer: Yeah, that's
another theme that we play with in the book.
What does it mean to live in the United
States, where the infrastructure is not
world-leading, in fact incredibly brittle,
and we are starting to feel the effects of
that? So we move the timeline forward. You
again get this sort of unevenness, where
there are some parts of the US that are
lashing up and benefiting from this
advancement, including, you know, getting 5G,
and then there are other parts that are not.
And what does it mean to sort of go between
those two spaces?
Kubiak: It's gonna have a kind of
balkanization effect even within the United
States, not just across the globe, as other
technologies have done, so I think that's an
interesting point. Let's see, Brian Johnson,
you've had your hand up for quite some time,
but since it's the question you've posed in
the Q&A block, I'm going to ask it in the
interest of time. Brian asks: what impacts
have you seen from Ghost Fleet, and what
impact do you hope to see from Burn-In? In
other words, what are the ramifications for
those who have read and thought through it?
Singer: Part of what drove us on Burn-In was
this just amazing experience of the
real-world impact from Ghost Fleet. You know,
I've done a series of nonfiction books, and
they've done well; they've been on the
military reading lists and had a certain
policy impact. Ghost Fleet beat them all on
policy impact. It was the one that got me
invited to go to the White House Situation
Room and have squeaky shoes, but not the
squeak in the White House. It was the one
that got me invited to the Tank, the Joint
Chiefs' meeting room in the Pentagon. All
told, over 75 different government
organizations and military units: the 82nd
Airborne, JSOC. Multiple investigations were
sparked by that book, both Defense Department
and GAO investigations. And that's just an
aside - that's one of the great things of
this blend: you don't just get prediction,
hopefully you get prevention. You get people
maybe fixing the problems that you point out.
The Navy even named a $3.6 billion program
Ghost Fleet.
So with Burn-In, it just came out yesterday,
but we've already been sharing it with
certain groups and entities, and I've already
done briefings on its real lessons for NATO,
the US Air Force; I could go on with multiple
different ones. One part of it is actually
woven into the congressional Cyberspace
Solarium Commission report - a first for both
US government documents and science fiction,
to have this in an official report. So our
hope is, one, that Ghost Fleet-like effect of
maybe helping predict but, further, prevent,
you know, the nightmare scenarios we talked
about from coming true. But more broadly, I
hope that it equips both leaders and, you
know, people working, whether it's in the
military or telecommuting or banking or
medicine, whatever. I hope it gives them the
ability to visualize this world to come and
the vocabulary that they need in order to
thrive in it. That's my hope, but at the same
time, I just hope a lot of people have a
really fun, escapist read, particularly given
everything going on right now. Kubiak:
You've already contributed quite extensively.
And I apologize, folks, we're out of time
now, so there are several questions in the
Q&A; I'm not sure if Zoom will allow me to
capture those later. I'll see what I can do.
Email me if you've got a question, and I can
forward it to Peter. So, you have contributed
enormously to the MA in Global Security
already, through your contributions both as a
lecturer in some of the classes but also in
guiding two of the classes in the
cybersecurity concentration. We really do
need to build a course around AI in
particular, and maybe use this book as a
backdrop to that, but really get folks up to
speed in AI. So that would be, I think, going
forward, maybe one of the contributions I
seek to extract from you, if you will, or get
you to contribute going forward, for the MA
in Global Security. So, Peter, we do
sincerely appreciate your time. Best of luck
with the sales, and I can't wait to see your
next project, because the glimpses of the
future I'm getting are a little darker. I'm
kind of hoping for something a little more
upbeat next time.
Singer: So first, I want to end by thanking
you again, and everybody for joining us. But,
you know, the other thing - the great thing
about fiction, even if it's dystopian
fiction, which I don't know if this really
is - is that it's always a story of
perseverance. Kubiak: Right, resiliency.
Singer: And that's what we're living through
right now, and, you know, we're all hopeful
that when the story of this period is told,
we are the heroes in that story, right? You
know, whether it's what we do individually
that's right. You know, I think of my kids.
My kids are heroes in the story. They're
giving up, you know, school and sports teams
to keep their grandparents safe, and so
hopefully that's the takeaway.
Kubiak: That the source of human optimism is
actually the human spirit itself. So I
appreciate that contribution. Singer: I just
want to again thank everybody for joining us,
and if you want more about the book, the link
is BurnInBook.com, and you can check it out
at all the great online places where books
are available, to safely get and enjoy it.
But most of all, thanks, Jeff, for being such
a kind host, and take care, everybody.
Kubiak: Anytime you can spare the time, we'll
have you aboard. Thanks, Peter, and goodbye
to all. Thank you so much.
