Hello and welcome to this virtual public seminar featuring Antoine Bousquet and Jairus Grove on Martial Autonomies: Rise of the War Machines.
My name is Michael Richardson and I'm a
Senior Research Fellow in the School of
the Arts and Media at UNSW Sydney
and a co-director of the Media Futures
Hub
I'd like to acknowledge that I'm coming to you from unceded Bedegal country, in the lands of the Eora people, in what is now called Sydney, Australia. The struggle for justice and for Indigenous rights continues in Australia and around the world.
So this is the second of six seminars on the theme of drone futures. Drones are crucial to the future of war, but also of everything from policing to agriculture to conservation. Drones are reshaping how the world is perceived, how people are governed, and how power is enacted and resisted. Yet they remain elusive, thanks in part to their diversity, constant evolution, and, in the military context, their disappearance from view.
From the neocolonial violence of contemporary wars in the Middle East and Africa, to the strange histories of unmanned aerial vehicles, to resistant art practices, this seminar series looks to the past and present to think into the future. The aim is that by showcasing scholarship from multiple disciplines we can spark new connections and stimulate debate about how to build more just drone futures. The seminar series is hosted by the Media Futures Hub at UNSW, and it's funded by an Australian Research Council Discovery Early Career Researcher Award that I hold for a project about drones and witnessing in war and culture, so thank you, ARC.
If you'd like to know more about that project, or to see the upcoming Drone Futures seminars, you'll find a link below the video if you click 'show more'. It's a really exciting lineup, today's seminar included, and you can also read there about the symposium on drone cultures that will follow this seminar series in December.
So today we're going to hear from Antoine Bousquet and Jairus Grove, who I think are two of the most fascinating thinkers in international relations and critical security studies today. Antoine is Reader in International Relations at Birkbeck College, University of London, and he's the author of The Eye of War: Military Perception from the Telescope to the Drone, which is a deeply researched and carefully argued exploration of what Antoine calls the 'martial gaze'. Jairus is Associate Professor of Political Science and director of the Hawaii Research Center for Futures Studies at the University of Hawaii at Manoa. He's the author of Savage Ecology: War and Geopolitics at the End of the World, which is one of the most startling and unsettling works of political theory of recent times, and a book that has really helped me think through how to understand where we're at in geopolitics and the life of war today.
I have to say I'd be really pleased to have either Antoine or Jairus speaking here individually, but to have them together is really amazing, because they've both, in different ways, helped my understanding of war and technology tremendously.
So before I hand over to them, I'll just describe the format. You're all on YouTube, obviously, and not here in this Zoom call with us, and while it would be nice to have more voices speaking, the number of people and connections in the call can make that kind of messy. So on YouTube you've got access to the chat, and that chat is being moderated by Madelene Veber. She's a PhD candidate at UNSW and she's working with me on this drones project, so Madelene will be following along and diving into the chat, and she'll feed your comments and questions into the Zoom. While this is maybe a little less intimate than a discursive big Zoom call, we're hoping it'll help the discussion flow smoothly, and it also means you can chat during the talk and have side conversations without disturbing anyone.

Now, the format today is that Antoine and Jairus will each offer some opening remarks, then they will respond to one another, and then we'll pick up into a kind of free-flowing conversation. I might ask some questions, they might have questions for one another, and you might have questions for them, so it won't be that typical, stricter format of a talk followed by a Q&A, and we'll see how things unfold over the time we have available to us. So without further ado: Antoine Bousquet and Jairus Grove with Martial Autonomies: Rise of the War Machines.
Thank you, Michael, thanks for inviting us: yourself, the centre, and of course the University of New South Wales. It's always a pleasure to talk about things with Jairus, we do it quite often; doing it in public is perhaps a slightly different experience, and so hopefully it will be as enjoyable for you as it is for us.

I guess I'll start with a few preliminary thoughts on the question of drones and autonomy. The first thing, and I think it's important to get this out of the way even though at this point it may have become obvious to most, is that we have to be careful when we talk about drones not to become fetishistic about the object. There's already a long history of unmanned aerial vehicles that shows that although these objects have come to public attention, and even to scholarly attention, really only in the last 10-15 years, they reach all the way back to the very beginnings of air power.
Similarly, I think it's very important to underline that the drone is really a concatenation of technologies; it's not a fixed object in itself, and we need to be very attentive to these wider technologies, the histories of these technologies, and the networks in which the drone as an object is embedded. I think it's important to lay that out, because once we start framing things in this way we get to more essential considerations about where these technologies might be heading.

Of course, today we're talking about this question of autonomy, which is very much tied to debates about artificial intelligence, or lethal autonomous weapon systems as we find them called in the debates. The first thing I would do, I guess, would be to make something like a deflationist move, and to say that, on one level, autonomous weapon systems are not future devices: they already exist, and they're already deployed in certain restricted contexts.
So we find them in air defenses or as sentry guns, and this really goes back to the late 70s and 80s, when we can find some of these systems. And while there's a lot of heat around the idea of lethal autonomous weapon systems, here we're talking about systems that are no doubt fallible but operate in relatively simple environments in which the presence of non-belligerents is generally unlikely. These systems might fail to destroy enemy targets, but the risk of so-called collateral damage is limited, and effectively we're in spaces where the automation of these systems arises from a kind of operational necessity. What we really are talking about most of the time when we think about lethal autonomous systems are systems whose artificial intelligence would allow them to make complex decisions, discriminations about targets, and identifications of targets in crowded battle spaces where there might be many illegitimate targets as well. And here I think we have to be wary of the claims, because there's a lot of AI boosterism. If you look at the history of AI, military and otherwise, and you think about the kinds of problems these AIs would have to solve in these types of environments, if you're asking these systems to discriminate, to conform to the laws of war, you're really asking for quite considerable advances on current AI, whether in the realm of computer vision or synthetic reasoning, and we should have a degree of skepticism about the extent to which this AI is around the corner in any meaningful sense.
I'll make a point here as well: we should note that the real challenge here is discrimination. We already know how to indiscriminately target on a massive scale; if militaries did not care about carpet bombing places, or mass indiscriminate targeting, the problem of AI would be a fairly straightforward and simple one, and we maybe wouldn't even need AI in the first place. So I think it's important to note that this is a big part of the operational context for these technologies.
So for all these reasons, militaries are on the whole not exceedingly keen to forego human control if the technology is still error-prone. There are major problems, of course, for militaries themselves, for institutions, and for the people who send militaries out, in deploying autonomous weapon systems; the major problems really surround issues of accountability: if things go wrong, who is responsible?
My sense is that actually part of the problem, or the danger, in focusing on lethal autonomous weapon systems is that in many ways we might be missing the more immediate challenge and danger. Which is to say, I think humans are likely to stay in the loop for quite a long time; for practical but also for political and institutional reasons, militaries will want to have humans involved in the process at some level or another, authorizing the use of weapon systems. But what is happening, and what will continue to happen, is that these individuals are becoming ever more tightly embedded within the socio-technical assemblage of the weapon, and this I think is really the novel issue: when we think about the ways in which humans are embedded in wider meshworks of technologies, of cybernetic control systems and so forth, and in artificial intelligence, we find it ever more difficult to identify the locus of agency in these systems, certainly at a kind of functional level.
At a legal level it may be very practical for institutions to say: well, here's the human decision maker we can blame if something went wrong. But of course, if human decision makers within military systems increasingly find that the operational environment is being mediated, oriented, and informed by AI, then it becomes much more difficult to legitimately see a human decision maker as a kind of sovereign decision maker involved in that process.
So I'd be tempted to say that the question of autonomy is perhaps the wrong question for us to ask, or at least that we really need to think carefully about what we mean by autonomy. It strikes me that when we talk about autonomy in the context of military systems and machines, our conception of machine autonomy seems to be effectively modeled on our understanding of human autonomy. It is quite an anthropomorphic conceit to think that an autonomous machine must be somehow equivalent to what we think an autonomous human is, and I think we may get time to revisit that; Jairus may say some things about it as well. But there's no necessary reason to think that our machines, as they evolve, will develop human forms of deliberation and cognitive decision making, so I think there's a first problem here in mapping human conceptions of autonomy onto machines in a straightforward fashion. But this raises a secondary, perhaps even more fundamental issue, which is that this conception of autonomy may be flawed as regards humans as well.
There's been work in philosophy over the last 15-20 years, reaching further back if you look to its roots, that really makes the argument that human cognition is not a process that happens only within the confines of our bodies: that we really are extended selves. One of the chief characteristics of what humans have developed most intensively is that cognition is something that takes place in interaction with our environment. Think at the very basic level about the activity of writing, which is an externalization of thought onto the material world, or of making sums on paper or drawing in the sand: these are very basic cognitive tasks, but they do not simply happen in our heads, they happen in conversation or in engagement with our environment. Here the work of people like Andy Clark or Edwin Hutchins is, I think, very important for us, and it is more broadly a testament to the unique plasticity of the human species that we have developed this interaction, this development of our beings in constant dialogue, in constant engagement with our environment. The late Bernard Stiegler made the point that anthropogenesis is deeply intertwined with technogenesis: if we want to think about the trajectory of human evolution and of our technical evolution, we have to be mindful that the cognitive processes at the heart of our societies, and our own sense of being, are not things confined to a human body or mind.
Now, current technology is really an extension of this. I talked about very early types of technologies; the technologies appearing now are a further extension of this process of combined anthropogenesis and technogenesis. That's not to say that nothing has changed: the ties, the deepening of these interactions, are ever greater. Through the development of cybernetic systems, we have environments that are not simply reflecting mutely what we impress on them, but are themselves processing and sending back information that we didn't necessarily put into them.
But this means that when we think about human autonomy, autonomy has never been a pristine quality of human beings. It is not in itself a discrete quality that beings either possess or not, such that you are either autonomous or you are not; it has always been a contingent, situated, gradated, emergent property. Or, to put it in other words, agency is always something that is assembled: assembled from the world, from the fabric of society, as capaciously as we can understand the idea of society or collectives. And the same is very true of the AIs we are setting loose in the world. They will also evolve and gain forms of autonomy in relationship to the world in which they're set, the wider networks of connections that they make. So, and I'll conclude on this for the moment, we may want to think preferably in terms of autonomies: not a quality of autonomy but a field of autonomies that will coalesce from the entanglement of humans and machines. That raises a whole set of complicated analytical questions, but I think it gets us closer to the heart of the matter than thinking in binary, absolutist terms about this idea of autonomy.
Over to you, Jairus.
Awesome. Sorry it took me that long to figure out how to unmute, which the autonomous Zoom system told me I had done several times. So, I think there's a great deal of overlap where Antoine's and my thinking has converged on drones, so I'm going to try to boil it down and overlap as little as possible, but come back to some of the central points. There are four main areas, which are also in some sense a deflationary move, but, like Antoine's point, the deflationary move is not necessarily good news. That's one of the things about these debates: they often happen over autonomy versus automation, what kinds of limits, can we have ethical software. The skepticism tends to be on the side of: oh well, these systems can't work, so we're fine; or: these systems can really do these amazing transformative things, so we're fine. And I think Antoine's point, and my point coming soon, is something like: well, they may not work at all like we think they do, and that's not necessarily good or bad news; it's a radically different way of thinking about what machines can do to warfare and the political.
So I've got four core areas I'll introduce quickly and then go through, and then I want to do a little bit of painting a picture of where I think transformations could take place and what the drivers for those transformations would be.
The first one, which Antoine touched on at the beginning, is something like drone essentialism. We have a tendency to want to think about the drone as a thing: we want to think about it as fundamentally a Predator drone, or one of the quadcopters that we see constantly at the beach. But the reality is that the drone is only interesting not because it's a thing, but precisely because of what it can be plugged into, and Antoine gave lots of great examples of the ways in which it really doesn't even make sense as an object without the vast network of satellites that we have and the ability to have telecommunications in real time. What is for me most interesting is that what's really become more sophisticated with drones is their sensory capabilities. But if we're going to try to unpack drone essentialism, I want to go a little bit further, and I'll come back to that, because I want you to think about all of these together.
The second major area is this idea of autonomy versus automation, and getting past the idea that these are in some sense opposed terms, or even dialectical terms. I think the more we come to understand how human cognition works, the more we understand that autonomy and automation are actually mutually dependent on one another, and I'll get back to some of the points that Antoine raised about agency and where that comes in. Number three, which I'll spend some time on too, is intelligence: why intelligence is at the center of this. I think this gets to the question of the human, and the anthropocentric nature of our research, but also the anthropomorphic expectations we have of the future. And the fourth, which I'll definitely spend some time on when I talk about the future, is what I would like to call the functionalist fallacy, which is often at play amongst the futurists I spend a lot of time with: the idea that things will be used if they work. What we don't spend a lot of time thinking about is failure, and what things do when they don't work.
All right, so I want to start with drone essentialism, and maybe introduce some ideas that could change our expectations of the future and also let us think differently about drones. The first drones I want to talk about, which don't get a lot of airplay these days, are landmines. Landmines are one of the most highly distributed drones in the world, and what I mean by drone is that they're a weapon in waiting. Just recently, a Confederate-era US landmine was discovered in the South, and it was still explosive; it was waiting, in some sense, to have a kind of sensory experience. Now, in this case, thankfully it was so deep that no one was able to set it off, but when they go off in places like Cambodia, or Vietnam, or Afghanistan and Iraq, it's because something agitated them. They have the capacity to feel, so they're tactile drones. And think about the ways in which the improvised explosive device has changed over the last decade: going from being, in some sense, just radio-controlled or wire-controlled landmines to now being aerial drones, so that in many cases we have instances of suicide drones which functionally are just landmines that can fly.
So I want to keep that in mind, because what really matters is not the sensory change as such: we have an incredibly long history of weapons that have a certain sensory capability, that can decide to go off or not go off based on the presence of a particular kind of network or encounter. But there is a kind of obsession with the flying drone, because we have an obsession with mobility, and I think that is actually somewhat misleading and not terribly helpful.

The second drone, which somehow falls out of the picture entirely because we don't have the same sense of its intelligence, although it could become quite intelligent, and which we should attend to if we want to get out of drone essentialism, is nuclear weapons: incredible sophistication in targeting, the ability to re-target in many cases, to be in constant communication with those that fire them, capacities to overcome or even deploy countermeasures depending upon the kinds of encounters they make. And there is the renaissance we're having right now in nuclear weapons, even as we traditionally understand them, for instance intercontinental ballistic missiles, but also already debates about whether or not Russia has developed the capacity for what amount to nuclear drones: deep submersible warheads which can stay underwater indefinitely because they don't have crews, and an era in which we could quite imagine submarines with nuclear capability never having the need to come above water. Does that fit the kind of model we have of the drone? I don't think so, but those are the kinds of drones which I think could really alter or change the picture.

The last one is kind of wacky, but I want to think about it: there's a kind of cousin to the drone that we often don't think about, which is the way that animals were used consistently in warfare, in many cases even to deploy weapons. We know for sure that there's a very advanced marine mammal program both in Russia and the United States, as well as some smaller programs in Iran. In fact, the marine mammal program in the US is housed mostly in Hawaii, and what was so attractive about using seals and dolphins, and even some small whales, was precisely their capacity for autonomy: they had decision-making capabilities, their training could become improvised, and there are even rumors that in some of the craziest nuclear scenarios people thought about mounting small warheads on larger whales, who would deliver them without being able to be picked up by sensors.

So what does it mean to get out of the sleek notion of the Predator drone and think more about this wider array: not just the network it lives in, but the kinds of everyday drones which have become quite normalized as components of war, and which have precisely changed the temporality and, in some sense, the human character of war, if we think about the human and humanity as a time scale? Landmines that can live for decades, even a hundred years; animals that may function on different scales; and weapons that may lie in wait, even accidentally, which could alter the course of warfare.
Part two: autonomy versus automation. I'm not going to spend a lot of time here, but I do want to think about it. I was really provoked by Catherine Malabou's latest book on AI: given the work she's been doing on neuroplasticity, and having been quite critical of cybernetics, she makes a bit of a turn and really takes seriously that cybernetics was onto some very interesting things when it came to understanding not just the nature of human cognition, but the problem of cognition as such. And what's obviously clear is how dependent even our most Enlightenment notions of freedom are on a vast autonomic system which, if it were not completely automated, would mean we couldn't think at all. If we had to remind ourselves to breathe, if we had to think or cogitate when deciding whether or not to dodge out of the way of a car, if we had to in some sense coordinate what Immanuel Kant called the spontaneous accord of the faculties, the fact that they all get along, communicate, and make sense, there would be no thought or freedom in the first place. Which is another way of saying, and this is Malabou's point: if humans were not almost entirely automated, autonomy would be entirely impossible.
So we should try to get away from the sense that something either is autonomous or is automated, and think instead about nested levels of autonomy and automation, and what that makes possible. I think that's particularly important for thinking about the future of drones, because we tend to think that drones will make a step change when something like a fully formed AI gets implanted into a highly capable machine, instead of thinking about an incredibly sophisticated set of sensory and reaction capabilities (for instance, drones can fly themselves through storms, take off and land, process data, and potentially go silent when they could be targeted by different kinds of weapons); it's actually the interplay of those automated capabilities that may make something emergent like autonomy possible.
Part three: intelligence. I think this is another area of not just drone essentialism but AI essentialism, which is really unhelpful, and it's exactly what Antoine said: it's this inherited notion, not of actual human intelligence, but of our own sense of what our intelligence is. I think that actually has more to do with why we've had such a long AI winter than the technical challenges do: we're actually looking for the wrong thing. So I want to add two different possibilities which would alter the way we might think about drone development. The first one is: why aren't we talking about modeling desire? Desire is also incredibly complex and hard to understand. If we're Freudians, we want to think about desire and its relationship to drives; for Lacanians, we want to think about the way that a lack produces this almost exuberant attempt to fill in what isn't present; for Deleuzians or Bataillians, we want to think about desire as these explosive or productive forces in which expenditure is not only the thing which is desired, but also produces, in some sense, limited moments of rest from what is in some cases a kind of subordination by desire. Imagine those things, those different models of how we understand what makes us want things, being modeled into machines.
It's seemingly less sophisticated than intelligence, but one can imagine on the battlefield what it would mean to desire to kill, to desire to hunt, to desire to find, to desire to reproduce, to desire not to die, to desire to repair, and how these things might change things fundamentally. The last one is actually much simpler than desire; it's my little pitch for thinking about biomimetics, which maybe we'll talk about in the Q&A, but the attempts to model not single animals but collectives of animals: wolf packs, bee swarms, thinking about how different components work together and create, in some sense, an emergent intelligence, which again has to be highly automated; if bees thought things through themselves, it would go badly. But what if we simply were able to model hunger? I think that would be revolutionary.
Last thing: the functionalist fallacy, the idea that we'll only do things because they work, or its correlate, the fallacy of design, that we will only do those things which we decide to do. And that's where I'll leave a couple of drivers, and then I'll save the discussion of the relationship between machine and politics for Antoine's and my discussion. A couple of drivers which I think will fundamentally alter the course of drone history: COVID is something we're already seeing, and it's sort of a back door to the conversation that I wanted to have about the political and war and people, which is that drones aren't just automating war, they're also part of an automation of labor; it's all labor, is another way to put it. If it's all labor, and increasingly we have reasons why we don't want people interacting with each other, or we don't want people directly participating in the acts that they imagine (for instance, not just giving the Zoom call, but needing to operate things via remote mechanical devices), we could quite literally lose the kinds of capacities we have for direct encounter with decision making; we could lose the capacities we have for civil unrest; we could lose the capacities we have for direct action and other forms of protest that are somehow cut short. So that's one driver: it's not just COVID as this particular instance, but the sort of constant disruption and necessity to rely increasingly on non-human forms of labor, which could be highly disruptive.
The second one is hypersonic weapons. I think as weapons become faster, the necessity to rely on increasing levels of automation will produce autonomy, meaning that because the devices themselves will have to re-target, will have to move, will have to cope with very small changes in the atmosphere, those kinds of decisions and deployments will alter our necessity to rely on weapon systems that don't necessarily work but which we have no other choice but to try. The last is that as miniaturization of drones happens more and more, and as swarming of drones happens more and more, the only logical response will be more directed energy weapons: microwave weapons, electromagnetic pulses, lasers, all of which will drive the arms race in the opposite direction, to make drones increasingly deaf and blind, meaning that they will have to rely increasingly on their own cognitive capacities, because that will be the only way to create countermeasures to keep them from being destroyed or knocked out of the sky. If we think about that kind of environment, what we're looking towards is an increasing push towards automation broadly and a disconnection from the state, an increasing speed of war, which I think Antoine's done a great job detailing in his own book, and a sort of necessity for drones to become less dependent on humans in the loop. Then I think we could see some futures which would be not exactly what we're expecting, but certainly different from what we have now, and I'll leave it at that.
If I can, I'll respond. There's a lot there, of course, but one thing you raised, Jairus, that is very interesting is this question of what or who counts as a drone. And here I'll note there's a kind of interesting tension, a kind of oxymoron, or a contradiction in terms rather, in the idea of an autonomous drone, because what we usually think of as a drone in common parlance is precisely the opposite of autonomy: something unthinking and unfree, or merely going through rote motions. So what counts as autonomy, and who's a drone? I think it would be worth noting that autonomy within the military has, in historical terms, always been very unevenly distributed.
The great tactical and strategic determinations of generals have typically rested on the automatic execution of orders by the mass of soldiers; that is what military discipline has through the ages generally required: that soldiers just execute orders, whether these orders are predetermined in advance, as in the armies of Frederick the Great in the 18th century, or whether they are to respond to orders that flow through the battlefield as the battle goes on. Ultimately, the autonomy of the soldier has very often been extremely restricted, and in fact autonomy in general has been a compromise for many militaries; it's something that had to be given to soldiers because distance or restrictions of bandwidth did not allow the timely transmission of orders, so that if there were developments on the battlefield and you effectively couldn't get input from the commanders, then soldiers had to take some decisions themselves. So it's important to note that armies have always treated their human material as cogs within larger social machines.
We can refer here to Lewis Mumford's idea of the megamachine: these vast social machines where humans are basically substitutes for cogs in a more literal machine. And from this perspective, a major driver of technological development has been the desire to remedy the fallibility of these human cogs. So whether it's more efficient, faster, or cheaper to get a technical object, or any kind of more sophisticated machine, to do what a human can do is really one of the big questions, one of the big drivers, of technology. But fundamentally that doesn't necessarily change the function of autonomy within the system, or the relationship of subordinate parts to the more autonomous decision makers within it. So, talking about lethal autonomous weapon systems, we might fall prey to a kind of idealization of autonomy within the war machine, when it has always been a very partial and uneven property.
No, I totally agree. I mean, we were talking about this before: in the entire martial paradigm of training, the point is to eliminate, in some sense, as much autonomy as possible. I was also thinking of the increasing reliance, after World War II and all the studies on failure-to-fire rates, on the move to operant response techniques: training soldiers not to aim first but to fire first, so that more bullets would go onto the battlefield; or the uses of techniques of behavioralism to alter training, so that soldiers no longer thought about themselves as shooting at a different person; the different ways in which we've learned to process guilt and memory, whether pharmaceutically or through therapy. The entire apparatus of making war something sustainable, as opposed to something traumatic and catastrophic, is a process of building human drones. And part of the dream, in some sense, of the corporate soldier-warrior, who can have this time at war and then come home somehow and start their life over, is one where the training could be turned on and turned off.
And so I think that's also something that's important for getting out of the drone essentialism box, this shiny Predator drone: from 1947 or '48 basically to the present, the entire obsession with the brain by the US military, whether it was John Lilly's research on how to hack dolphins and monkeys, or how to build human-machine interface technology or machine-animal interface technology, all of that was about how to modulate the relationship between autonomy and automation. And I think that's actually something that goes all the way down to the sort of thinking and planning at the program level for the military. Think about all the fights we've had over the last 20 years: over more special forces, or more technology instead of special forces, or the kind of soft counterinsurgency revolution that we've had, and then going back to kind of more air power.
Those conversations are really about this torsion between wanting automated systems, like air power, which we can measure very clearly and tactically in terms of outcomes, versus this image of green berets and rangers who can think for themselves, or navy SEALs who can adapt to situations. The drone seems more like a synecdoche: it sort of stands in for that larger conversation about what war is supposed to be. Is it supposed to be this creative apparatus that adapts to the battlefield and constantly creates new tactical innovation, or is it a control mechanism, the capacity in some sense to constrain the level and use of force, to have gradual escalation and more authority at the top and less autonomy at the bottom? I think that gets to the core of the very nature of warfare, and a lot of the paranoia, in some sense, is about the drone even more so than the drone itself; not to say that the drone's a metaphor, because, you know, Antoine and I don't like metaphors.
I mean, I agree with the tension here that you underline. On the one hand, the military has felt this for some time: there's a report published by DARPA, the R&D arm of the US military, back in the early 2000s, that bluntly stated that the weakest link is the human. And you can see in lots of ways why the military might want to move the human out of the picture: problems of indecision, an inability to cope with perhaps a heightened speed in the battle space, and just sheer cost as well. Our minds boggle at some of the money that's spent on weapon systems, but soldiers are extremely expensive for contemporary militaries: you have to train them, you have to support them through the deployment, but especially now you have to take care of them after they retire, sometimes with lifelong injuries and conditions. It's been remarked that the only really socialized medical care you get in the US is as a member of the military.
So the displacement of humans by machines is obviously appealing in a number of ways for the military. Also, rebounding on what you were saying about desire and the shooter problem: you have to create the desire, or engineer the desire, to kill, which can be quite a complicated, or at least an involved, thing to do, and which is a problem that doesn't necessarily arise in the same fashion, at least, with machines.
But parallel to this, we do live at a time when there is a greater emphasis than ever, I think, within the military on the role of creativity. Movements such as the military design movement emphasize creativity, and certainly even kinds of heretical forms of creativity, which are very much the preserve, at this point, of the human. There is of course a debate about how much creativity, and even to some extent insubordination, should be allowed to percolate down within the institution. But these trends are definitely happening concurrently, and some aspects of contemporary military deployment, such as special forces, seem to be the ones where the human will remain central for a very long time, even though it will be continually integrated with other technologies, whereas air power, obviously, is really the tip of the spear for automation, and perhaps even autonomy, within the military.
Yeah. So here's the question, though. There are moments when I think that's where everything's happening, because that's where the most intellectually interesting debates and the sort of weirdest parts of the military are; I mean, DARPA is interesting because it's weird, seemingly creative and seemingly innovative. And then I come back to the point I was trying to make about nuclear weapons, which is: can we really imagine caring that much about drones, and the kind of limited capacities they could have, if we were really in nuclear battlefields? I can imagine the value of a drone in the sense that humans wouldn't be able to exist in those battlefields, but in terms of the kinds of scale involved: the other thing about the obsession with the drone that can be quite dangerous is that it presumes a scale and a management of warfare that there's no reason to suspect will necessarily take place.
And so I think the question is not just what drones can do, and could do if their future developed and they became highly intelligent. Consider what would be sort of my counterexample: let's say the United States really gets its third offset, and in fact skips ahead to the fourth offset; they have general AI, and they have the capacity for a central computer to think not just tactically but also strategically, to innovate the ways in which it's fighting on the battlefield, but also the war aims themselves. That's sort of the vision: something that could think faster than its opponent. And let's say you got that huge advantage over Russia or China, and there was some kind of first-mover advantage to attacking. Do we really think that Russia and China wouldn't just say: well, that AI looks really, really impressive, and that's a lot of drones; I think we're going to use multiple re-entry vehicles with 10-20 megatons and just wipe it out? So there's a strange agreement, when we think about these sorts of improvements to a certain kind of technology, that they'll happen within the boundaries of certain kinds of thinkable warfare.
So I also kind of wonder how much the drone is somewhat an aspirational weapon, in the sense that for all those states who invest heavily in it, it's also an attempt to invest heavily in small wars: sort of manageable, low-impact, very long-term wars. And potentially, and I know this has come up in some of the other drone talks and will certainly come up in others, the drone may have less to do with war and more to do with security: we may see it as a sort of revolutionary weapon, particularly in its AI capacity, not in the fighting of adversaries but in the maintenance of our own populations. I think we've already begun to see the bare-bones beginnings of that in the United States, where the continual aftermath of slavery, which never ends in this country, and which was once managed by trained dogs and people wearing white hoods, is now managed by the capacity to scan cell phones, to use drones for surveillance, to think about putting dispersal weapons on drones, like LRADs and microwave weapons, to manage borders. We like to think in terms of warfare, but I do think that the quote-unquote intelligent capacity of drones is really in their interface capacity with policing technologies even more so.
Briefly, on that: I think that links back to what I was saying earlier about this idea of discrimination. AI really comes to the fore, and autonomy comes to the fore, when it's not just about launching as much indiscriminate force as possible. The temptation might be to think of that as evidence of our higher normative standards of war today, but I think the other answer is precisely the one you're getting at, which is that this is also part of the blurring between war and policing that we see in today's world, where the nature of policing is that it has to be discriminate in some shape or form, because it's not about destruction, it's about governance ultimately. But I'll stop there, since Michael wants to come in; I guess we have a few questions in the chat that we should address.
Yeah, thanks guys, really fascinating so far. I have a load of questions, but I'll save most of them and drop them in as we go along. One thing that strikes me, though, as you were both talking, is the unpredictable feedback loops and relationships that arise between developments in military technologies, strategies, events, and so on. Like what Jairus was just talking about, for example: the response to a massive, perhaps almost insurmountable, high-tech killer robot force might be to just go: well, stuff that. Which also seems to me in some ways to be one of the fundamental counterarguments to the idea that every major nation having lethal autonomous weapons would somehow make war safer and friendlier, because the robots would all go off and fight each other. Of course, countries have never settled disputes by playing chess, not on that scale anyway, so why would a game between robots essentially be the means of resolving conflict? Instead, you would look for the alternative response, the move that steps outside the logic of what is happening right now.

And that in some ways seems to connect to some of the threads you covered around the kinds of problems that autonomous weapons, drones, and military technology in general try to manage, and also how that relates to developments in military thinking. So I'm thinking in particular about one of the figures that hasn't circulated so much through the discussion yet, which is the emergent terrorist threat: the idea that there are these threats out there that need to be dealt with before they arise, or as they arise, or in advance of their arising. In some ways this ties in, for me at least, to these questions of autonomy and automation quite significantly, because the structure of militaries as top-down hierarchical systems doesn't handle that type of unpredictable, or potentially unpredictable, eruption of violence. And so the drone, or the fighting unit, perhaps a special forces unit or whatever, made autonomous, is perhaps in some ways in concert with the imagining of those types of threats as the major threats. No need to respond to my babble, though.

There are a couple of really great questions coming up in the chat, and I'll try to organize them as we go along. One thing that I think fits neatly with where we've just ended up, in relation to drones and sensing and wider environments, is from Adam Fish, who does a lot of research on drones from an ethnographic perspective and works on conservation and oceanography in relation to drones. Adam's asked: there's surprisingly little being said so far about the ecologies, the savage ecologies perhaps, the elements and other forces that drones sense. So are the sensed others merely victims, and how do drones co-create with the elements they are suspended within and the agents they sense? I think this is maybe a really interesting question in the context of warfare, so if either of you have thoughts on that, that would be great.
I could say a quick thing, which I think is a bit of a historical analog. In the mid-90s, really through almost 2000, there was a huge effort to declassify in particular all the CIA satellite data on Southeast Asia, because it provided one of the most important baselines for not just deforestation but also the loss of coral reefs where development was taking place, and the irony was they still wouldn't let it go. At the same time, they were arguing about whether or not to let go of maps of where they dropped bomblets and things like that in Laos. I mean, it was a fight, but there has always been this dual-use character, and this sort of environmental character, that could, if not be positive, at the very least show us something about the kinds of change that not just war but capital created in regions. So no, I don't think that the thing that's sensed is always a victim, but I do think, more than I used to, that the opportunities for play, reversal, and resistance become increasingly difficult
as humans lose contact with one another. I know that sounds like a really nostalgic thing to say, and maybe it's just the fact that I've been living on Zoom for seven months, but there is something about the encounter between people and other non-human animals, and I think even machines, the physical encounter, and this relates back to the Andy Clark point that Antoine made about how extended minds work: cognition is a collective enterprise, and it's not the same when it's mediated. That's not to say that it isn't also a collective enterprise then, but I do believe that there are components which are not engaged in the same way, in terms of the tactile, smell, all sorts of different ways that memories are formed. And so, while I don't want to reduce the image or the sensed to merely an object in an uninteresting way, I think we would be remiss not to really think about that, just like the massive neurocognitive transformation that happens to soldiers, policymakers, citizens, activists, and insurgents when the narrowing of encounter takes place. Again, it is another kind of encounter, but I think it's one that is neurocognitively radically different. And so I'm not so sure that, for instance, even so-called good drones really make sense to me in that light, because I want to know what kinds of publics they create, what kinds of multi-species economies they create, and I think they tend to narrow on two fronts: they narrow the terms of encounter and engagement, but they also narrow the numbers of users. I think about the number of times that human rights organizations and environmental organizations have made almost identical arguments to the one a vice admiral of the Australian Navy actually made to me about why they wanted to get into the drone business: that it freed up so much time to do other things. Really, what they meant is that it freed up so much time to downsize, to have fewer and fewer people involved. And if you think that organizations are actually about building the process of peoples and communities and relations, then needing fewer people isn't really an advantage. I think that actually cuts as close to the quick of war as it does to politics or other kinds of engagement. I don't know if that exactly answers the question, but hopefully it rhymes with the question.
I wanted to respond along a different line of thinking to some of the things that Jairus was saying in his previous comments, and you as well, Michael, which is something close to our hearts, Jairus's and mine, and which was the impetus behind the special issue we edited with Nisha Shah in Security Dialogue on 'becoming war': to ask the question of what it means to think about war as an autonomous realm in itself. If we're going to think about autonomy as the giving to oneself of one's own laws, can we think of war in this fashion? I think this goes against the grain of a lot of social and political thinking, which sees war as derivative of other orders: derivative of the economic or the political, of the psychological, or whatever it might be. And of course, framing it in these ways raises different kinds of issues about how you might solve the problem of war: if we have a more equitable socio-economic distribution, then war will go away, and our problems definitely do not have to be war-centric. Well, I think this important question asks whether that's right, and whether the problem of war is not a deeper one.
So what would it mean to develop a kind of more war-centric form of thinking? I think one of the questions for that would then be: if we want to treat war as an autonomous realm, what would be its internal dynamic? And I think there's a good case to make that it is escalation. You find that in the classics: Clausewitz, our kind of philosopher-king of war, makes the point early on in On War that war has a natural tendency to escalate towards the extremes, and that what acts as a brake on it, the countervailing forces in his mind, are the kind of geographic, temporal, and ultimately technical restrictions on war. Of course, many of these restrictions have since evaporated, since he was writing in the early 19th century, which might well leave us with just the bare logic ever closer to being realized.
We find this idea of escalation in the more contemporary work of Paul Virilio, of course, through the form of speed and this kind of continual acceleration of society, which is driven, in his mind, primarily through the military; and what is more of an accelerant than the mutual escalation of belligerents? We find this in the work of Friedrich Kittler too, who sees the emergence of media as basically a product of the acceleration of war. So we might want to think about that when we think about the evolution of drones or autonomous weapon systems or AI, and how this fits into this wider historical tendency towards escalation within the war sphere, and what that might mean if we think more broadly of escalation as a key principle of technological and political development. And of course, that maybe raises the question of whether what we are dealing with in the end are not discrete militaries but a war system, a kind of globalized war machine, to which, to use Deleuze and Guattari's phrase, states might become mere appendages, and in many cases non-state actors too. So I think that's a question that is important for us to think about: how we center this discussion on war, and what are the dynamics inherent to war that might account for some of the things we're seeing and where they might lead us.
I was just going to commend the special issue of Security Dialogue that the two of you and Nisha Shah edited. I'm guessing that a number of people at this seminar are familiar with it, but if you're not, it's really fantastic, and the introduction the three of them wrote on this idea of becoming war is quite powerful. It turns on its head some of the ways that we would understand war, perhaps as an irregularity, and instead repositions it; not to say, well, we just have to accept that there's war all the time and that it sucks, but rather it tries to shift the way we think about the relationship between war and other human structures and activities.
Okay, so maybe picking up on some of those themes that both of you articulated in your responses, there's a question from Chris Agius, who asks: do you see a difference between agency and intention? Chris has said in the chat that she's thinking about this question in the context of posthuman and new materialist readings of agency. So do agency and intention function differently within these drone assemblages or apparatuses?
That's fine. You want me to go, or do you want to go?
Yeah, so, obviously in part because she was my advisor, but also just because her work was great: I was in grad school when Jane Bennett was writing Vibrant Matter, and Bill Connolly was working on what would become the most new materialist of his works, the sort of trilogy on becoming in the world. And there was a real discussion, I think for more than a year, about whether or not to hold on to the word agency at all. Jane really made the decision to hold on to it; she didn't want to let it go, and her reason, I thought, was a very good one, which is that to give in, in some sense, to a certain posthumanist critique of agency was to undermine how much agency is a collective enterprise. So yes, we could reject agency, or even intentionality, which I'll get to in a second, if what we wanted to do was undermine the enlightened subject; or what we could do is show just how unbelievably complex agency is. I always thought that was a really important move, and better in some sense than the Latour move towards actants. I think actants sort of bypass trying to think about why it's not anthropomorphism to see agency all over the place; it's rather anthropocentrism not to see it all over the place. So that's part one.
Part two is, you know, Bill went through a similar thing with intentionality and decided, for Nietzschean and Deleuzian reasons, to hold on to the concept of will. And I think that will is actually a very important counterpart to intention. Intention, I think, requires too much self-reflection, which is almost always very late to the party, right? We have all kinds of it, but it's generally a day later when we're like, oh, I really wish I had said that, that was my comeback. Intentionality, I think, is a post-hoc fallacy, but I think that will is a real force, and the Spinozist, Whiteheadian streak in, you know, Isabelle Stengers and me says that will is a kind of drive towards creativity and expenditure, which is another component of war we haven't really talked about: that a lot of people may be drawn to war in this kind of will to expenditure. I often wonder, for instance, in a dark way, how much the sort of neo-fascist resurgence in the United States is precisely because liberalism in some sense banned the capacity to think that acting out was something worthy of doing,
and that's something that certainly the African-American community in the United States never forgot. But for those sort of racist enclaves that bought into a certain vision of post-political America, it's not surprising that the generation just after them came resurging back in the worst possible ways, wanting a fight in the streets, right? Think about the Boogaloos, for instance, who in this case seem to want chaos more than they want a particular outcome, but a chaos framed within a really toxic white masculinity. So, you know, even them, I'm not sure I would call them intentional, but they're willful. So I don't know, that's how I want to think about it: I like new materialism when it thinks in terms of will, and when it thinks in terms of agency as process as opposed to a capability. I generally cop out and just use the term efficacy, but that's just because I'm not as old as Jane, so
I don't have anything to add to those comments other than to say that these questions of agency and intentionality are very complicated questions that we can't easily adjudicate, but that leads me to a broader point, which is that I think this applies to questions of war, agency and AI: in many ways we're still at the stage of trying to ask the right questions. And with regard to the theme today, there was a reference to the campaign to ban killer robots, which I think is a very well-intentioned and noble enterprise, but I think it's going to miss the mark; it's just not going to achieve anything it wants to achieve, because the conceptual frameworks being used to campaign in this space are, I think, a very poor fit for the reality we face. It's going to be very difficult to draw the lines in the ways that this campaign wants to do. That might be frustrating to hear, to say that really we're still at the point of trying to ask the right questions; it might sound terribly academic to say so, and given the perilous situations we find ourselves in today we might want to be able to act now, but I do think that we are still grappling to understand the emergent reality, and if we don't ask the right questions, if we don't have the right conceptual tools, I don't think we're going to get much of a grasp on it.
Switching gears just slightly, Mitch Goodwin asks whether we can consider code, so bots, executables, viruses, and so on, that are stealthy and lie in wait, that swarm across networks; can we think of these kinds of things as drones or drone-like, even though they might not, I suppose, have the material encasing?
Yes, but, no, I mean, I think this is one of those moments where I wish the drone weren't quite so much the center of the conversation, because the reality is drones are only doing more interesting things than they were doing in World War II precisely because of that litany of different kinds of interactive informatic capabilities, which can be communicated at speeds such that they could be transferred from one materially encased thing to another materially encased thing. And so I would rather not think of them as drones so much as think about a rainforest-like ecology of new informatic entities, which maybe spend some time in drones, help design drones, help do experiments so the drones don't crash, change the machine vision of the drone.
You know, this actually gets to the second question Mitch asked about art, like, is there a role for art in war? This is precisely the role of art in war. I mean, the reality is that a lot of these algorithms and a lot of these bots are promiscuous, to say the least, right? They're as useful in generating art as they are in generating war, and vice versa. I don't have a strong sense of that; I'll
only say that when John Lilly was developing the machine-brain interfaces and the isolation tanks and the dolphin attack programs, he was doing it because he was convinced that learning to communicate with dolphins was the only way to get outside the anthropocentric circle and really get at the tough questions Heidegger was trying to ask, because until we could interview another intelligent species about ourselves, philosophy was at a dead end. The fact that he could have that idea at the same time that he was torturing animals and humans and developing one of the most sophisticated arms of the war machine, hoping without hope that that machine wouldn't just allow us to interview dolphins but would make us capable of interviewing aliens if we ever encountered them, because it was about trying to bridge radically different cognitive capacities; and I will say another one, an Andean culture that mostly used whistles, because there was often not enough oxygen for speech, so he was also interested in radically different human forms of life. But all that sat so comfortably alongside a lab filled with monkeys with things screwed into their heads. I, you know, I don't know; art and war is not a fine line, I think.
It's funny, when I started researching and thinking about drones several years ago, my initial thought was: how many things can we call a drone, and how many things are drone-like? But actually, the longer I've read and thought and researched into these things, the more that seems to me, for me anyway, a question where you end up just chasing your tail, because what that ends up doing is trapping you analytically: you're caught within a desire to frame things according to a keyword, which is a problematic keyword anyway, because, as you both in different ways started this talk by saying, let's not obsess over the drone. And yet, looking for drone-like objects and drone logics and so on can be a risky path to go down. Antoine, did you want to hop in?
Yes, I mean, to continue in this direction, I think our categories are too broad: the drone is too broad a category, as is the question of AI. And yes, we've been talking in very broad-brush terms today, and we can say some general things, but ultimately the kind of empiricism that we advocate in the Becoming War special issue is precisely to get our hands dirty and to really get into the specifics of sociotechnical systems, or military ones in particular. So with regard to software or viruses or other forms of malware or code, I don't think we should come to them with the preconception that we know what a drone is and then layer that on top of the analysis. We need to understand the specificity of the forms of code that we're looking at: what forms of behavior they have, what kinds of ecologies they participate in. And the associated question of whether they are intelligent or not seems to me to be a bit of a blind alley; we should just think about what they actually do and leave the metaphysics of intelligence and sentience to the side
on that basis. So yes, we're very much in favor of broad speculative theory, but it has to be conjugated with attention to detail. In fact, to my mind, one of the biggest challenges we face as a civilization, and I mean this in the broadest possible sense, is that we need to bridge the gap between technical cultures and the wider sphere of the humanities. Within the humanities and social sciences we've got a long critical tradition on technology, but a lot of it remains at a distance and at a level of generality, and if we don't reconnect more broadly as a culture with the technical side, if we don't have technicians thinking about these big philosophical and normative issues, and if we don't have the philosophers, the thinkers and the wider populace grappling with technical specificity, then I think we are very unlikely to be able to overcome or face the challenges that we encounter today.
Yeah, thank you, which is of course not to discard the value in finding likeness and similarity and so on, I suppose. I've got a question from Ari Eisenstadt, who asks: what are the implications of quantum computing for drones, or perhaps for war more generally, and is there a good way that it could be regulated, do you think?
And I'll just flag that we've got a few questions and about 10 minutes to go, so we'll try and keep our responses tighter.
I'll be very fast on quantum computing. I think it'll have very little impact on drones. I think what quantum computing is good for has more to do with the technologies that spin off from the computing, so the ability to control more subtle scales of matter. The one area of that which I do think will significantly impact drones is not necessarily good, which is that quantum computing research has allowed us to expand the detail of the electromagnetic spectrum, which allows for forms of communication and radio signaling and jamming that could, I think, be pretty catastrophic, or incredibly disruptively productive, for drones. But I don't think it's going to have a real impact on the artificial intelligence; that's just my gut. It's big centralized computers as opposed to big network systems: a quantum computer is a big centralized computer, therefore I don't think it's going to produce as much as big network systems.
Doug Kahn asks: diesel-electric subs promised lesser state players the ability to submerge undetected, nuke-sub-style, long enough to threaten naval invasions and shipping lanes and so on. Is there an equivalent for lesser-state or non-state actors in potential pack behaviors or uses for drones, and in drone futures?
Well, to address that question in general terms, there's no doubt that drone technology is already trickling down to non-state actors, and it's not a technology that states will hold a monopoly on. There's a general trend that we're faced with, which is that technical advances are in many ways lowering, if you want, the barriers to entry into organized violence. And this is not a new story; we could already see it in the prevalence of civil war or civil strife in parts of the world, where effectively very limited numbers of malcontents with automatic weapons can create significant problems, which is why the virtuous process, if you want to look at it that way, between state formation and the development of military force in the early modern period is one that's not necessarily going to be replicated going forward. So there's no doubt that the technology is not one that will be held for very long, or is being held for very long, by state actors, unlike, say, nuclear weapons, which would be another example. But even then, I think these categories are too crude, because if we extend this to the question of artificial intelligence, again I think we get a variegated picture. If we think about quantum computing, which we just raised, that's going to remain largely a state monopoly for a long time, because it's very expensive and requires large installations and so on, and we still don't know exactly what quantum computing will deliver. But if it delivers major artificial intelligence breakthroughs, these are not going to be easily accessible to non-state actors. On the other hand, many processes are already available to a whole range of actors, with data mining or the generation of fake imagery and so on. So again, I think we can't speak in overly broad terms; we have to be quite specific here.
Yeah, I'll just throw in quickly that North Korea bootstrapped its own drone program entirely from scratch; it doesn't seem to have depended on other drones, and it did it relatively quickly. So if we were to compare, for instance, getting something to escape velocity to leave the atmosphere versus drone capability, it's apples and oranges in terms of the amount of capital investment and technical capability. So it's like diesel subs to the power of 100 in terms of expansion and accessibility.
Yeah, and Bill Arkin has a really great book, a popular trade book I guess, about the development of drone systems and data systems in contemporary war, which really does show the level of ad hoc bootstrapping that occurred even in the US system, and the strain that put on bandwidth and network infrastructures and so on, which I think tells you a lot about how we ended up where we are technologically with drones right now.
Kathryn Brimblecombe-Fox asks: what would thinking about drone or robot "sensoring" rather than "sensing" do? How would that change perceptions of drone capabilities and human reactions to and interactions with them? So I guess this is a question about not collapsing sensing as a kind of human phenomenon into what the technological can do, as sensor or as sensed.
You want to go first?
You know, what would I want to say about that? I think in part this is another one of these places where I don't want metaphors to be the driving force. We like to call machine vision "vision" because it does something like seeing, but it doesn't actually do something like seeing at all, and it also enables all kinds of different capacities that we would never possibly have. And so, and this is the empiricist streak in Antoine and myself, I want us to start thinking about each machine event on its own terms. I think we would do well to spend a lot of time, and this is where I think art actually plays a really significant role, creating phenomenological experiences for humans, to experience both the limitations and capacities of machines sensorily, and how disorienting it is, or what gets seen. And I think opportunities to do that, and to build those kinds of human-machine interfaces, don't necessarily put ethics in the right direction, but they give us something to work with in even having the conversation in the first place, which we're just not having. Right, so conversations about discrimination really only make sense if you can see like a drone. And I would rather see research developed there, to bring that very question to the fore, than try to figure out how we could conceptually deconstruct what we meant by sensor or sensoring, which I think is probably what she's up to, but she makes really cool art, so.
All right, well, I think we've hit the end of the questions that folks have submitted, and it's 6.25 here in Sydney, which I think makes it 10.25 in Hawaii and 9.25 in London. We've managed to thread the needle time-wise, but we're also coming to the end of our time, so I'll wrap it up here and say thank you all very much for joining us today and for the rich discussion, really interesting, and really interesting to do it in this dialogue format too. So please do come along to others in our drone futures series.
The next seminar is at 11am on Wednesday the 23rd of September, that's Sydney time, and we'll be joined by Kate Chandler from Georgetown University. She's a feminist STS and media studies scholar, and she's the author of a really terrific book called Unmanning: How Humans, Machines and Media Perform Drone Warfare. I think her talk will pick up on some of the themes, particularly from earlier in the conversation between Antoine and Jairus, about the long history and also the stories of technological failure that populate the story of drones. And then after Kate we have Jen Schnepf, Thomas Stubblefield and Mahwish Chishti, so there's lots of really great seminars to come.
You can register now for almost all of them, and there's a link below the video to a page with info about all the seminars, so you can just follow that link. If you're on Twitter, you can also follow us: @richardson_m_a in my case, Antoine at @ajbousquet, and Jairus at @savageecology, Jairus clearly having the much cooler handle than either of us. And if you liked this talk, please do spread the word; the link you're using now will become a regular YouTube video in the very near future, and we'll be releasing this talk along with a separate interview on the Media Futures Hub podcast. You can find a link to the podcast in the video description as well, if you'd like to subscribe.
We'll also be sending out this link to everybody who registered for the talk today. So thank you Madelene for moderating the chat, thank you Antoine and Jairus for a stimulating talk, and above all thank you all for tuning in to this virtual seminar in the Drone Futures series and offering such thoughtful and insightful questions. So stay safe, stay strong, and take care of one another. Be well, everybody.
 
