- [Nick] All right my name is Nick Merrill
I'm here from UC Berkeley Center
for Long-Term Cybersecurity
here to talk about this paper,
Models of Minds: Reading the Mind Beyond the Brain.
So before I get started, I've put the slides up here at yellkey.com/space.
Everything I say is in the presenter notes, so if you want to read the presenter notes while I go along: yellkey.com/space.
Okay, okay this is an ant.
(laughing)
It may look like a
normal ant, but its body
has been overtaken by a
fungus called Ophiocordyceps.
Has anyone ever heard of Ophiocordyceps?
Oh good, okay, nice, well then this is old news to some, but this fungus has overtaken the ant's body.
It's clinging to the ant's muscles and producing a network of sensing and actuation on top of the ant's own.
So, in the lower left here
you can see the fungus
wrapping itself around
the tendons of the ant.
So, the fungus's goal here is to stick its teeth into the underside of the twig, and kind of the point of this is that once the ant is stuck there, the fungus can use the ant as a medium for reproduction.
I'll spare you the photo of that; I had a photo of that in there, but I thought if you're at CHI you've been through enough.
So, from what I can tell, basically
Ophiocordyceps is able
to move the ant around
by creating a model of
the ant's experience
of the world, okay.
So if an ant can be said to have a mind,
I would say an ant does have a mind,
then this fungus is
modeling the ant's mind.
Okay, it's using the infrastructure
that it built inside of the ant to
kinda provide inputs to that model.
Now this model may not
be similar to the ant's
experience of the world, right.
It may not be similar to the ant's lived
experience, but it doesn't need to be,
it just needs to be good
enough to get that ant to
the underside of the twig,
the fungus just needs to reproduce.
Okay, now the one key fact
about this fungus that
I think is really really interesting,
is that Ophiocordyceps has no
presence in the ant's brain.
All of its ability to kind of navigate the world, find this twig, latch on to the twig, it does that without the ant's brain at all, okay; it just has this network of sensing and actuation in the ant's body.
Okay, so with this in mind we can look
uneasily toward the
emerging world of wearables
and internet of things devices.
But could devices like the ones this enthusiastic self-tracker is wearing here form a similar system of sensing on top of human bodies, such that, without reading the brain per se, they are able to create models of minds robust enough for control?
Okay, well, my thesis here is yes, machines can create models of minds without sensing the brain, contingent on what we mean by "mind," which we'll talk about in a second, and HCI has already started building these types of models.
Okay, so as a preview of where we are going here: I end the paper, the PDF paper, with this kind of feedback loop between our beliefs about what the mind is and the technologies that seek to sense it.
We have some beliefs about what the mind and body are, and as we build and use devices, that informs the technologies that come out.
And these technologies that aim to sense the mind, as we build and use them, feed back and inform our beliefs about what the mind is.
And it's really important here: this is not necessarily an error-correcting process, this is a mutual co-construction, right, between our beliefs about the mind and technologies perceived to sense the mind.
Now, the good news here, which I will discuss later, is that this means we have some kind of agency over this, and I'll talk about that.
But just in case you think that this is some kind of wanky, philosophical navel-gazing, I want to show you that this is the PowerDot, this is a real product: the red stickers on this person's legs electrically stimulate their muscles from an app.
So the fact that this app is literally controlling this Black man's body should give you a hint as to why I'm talking to you about this, okay.
This all may sound philosophical,
I promise you it is not.
And this is just an unreasonably kind of
direct analogy to the fungus thing, okay.
Obviously there are lots of
indirect forms of control,
okay just figure it out.
All right so yes, machines
can create models of minds
without sensing the brain.
Here is the argument, okay
this is a really short
kind of talk, so not a lot of citations,
details in the PDF.
Okay, so you might assume
the mind is the same thing
as the brain, who here
thinks the mind is kind
of the same thing as the brain?
Okay, we've got one person, two people willing to admit it, you're very brave.
Okay, well, theories of embodied cognition came along around the '60s and '70s and argued that the mind is potentially more expansive than the brain.
Two main arguments here. For one thing, neurons run body-wide: it's difficult to evaluate the role of the brain's neural activity in mind without considering the role of neural activity originating in the body, because there are neurons in both places.
Another argument here, articulated by Noë and Thompson in 2004, is that "the exact way organisms are embodied simultaneously constrains and prescribes certain interactions with the environment."
Right, that is to say, the mind manifests as it does because of the physical conditions of the body.
Okay, so again, it's impossible to tear apart the contributions of body and brain to mind.
Even more radical, even more recent work by folks like Andy Clark and Edwin Hutchins argued that the mind extends beyond the confines of the body, to the built environment, to the tools and the people around us.
This is kind of distributed or extended cognition, you may have heard these terms before; no time for a deep review here, but check the paper, it cites Hutchins' study of naval pilots to make this case.
Okay, so if the mind extends to the body and environment, can sensors we put there yield computational models of mind, computational accounts of mind?
My argument in this paper, and this is going to shock you, is yes, and also, HCI has already done this.
Here's just one example,
lots more examples here,
but this is a photo of Rosalind Picard
with some of her affective
computing devices.
So Picard is perhaps the "godmother" of modern wearables, and she has done a tremendous amount of work to make the argument that aspects of mind, in her case particularly emotion, can be modeled from the body and from sensors in the environment.
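To make that claim concrete, here is a minimal, purely illustrative sketch of the kind of pipeline this line of work implies: windowed features from body-worn sensors feeding a classifier that outputs an affect label. The sensor streams, features, and labels are my own placeholders, not Picard's actual methods or any particular product.

```python
# Illustrative sketch only: a toy "model of mind" built from body sensors,
# not the brain. Sensor streams, features, and labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def window_features(heart_rate, eda, window=30):
    """Summarize fixed-size windows of heart rate and electrodermal activity."""
    feats = []
    for start in range(0, len(heart_rate) - window + 1, window):
        hr = heart_rate[start:start + window]
        sc = eda[start:start + window]
        feats.append([hr.mean(), hr.std(), sc.mean(), np.diff(sc).max()])
    return np.array(feats)

# Synthetic "calm" vs "stressed" recordings stand in for real wearable data.
calm_hr = rng.normal(65, 3, 3000);   calm_eda = rng.normal(2.0, 0.1, 3000)
stress_hr = rng.normal(85, 6, 3000); stress_eda = rng.normal(6.0, 0.8, 3000)

X = np.vstack([window_features(calm_hr, calm_eda),
               window_features(stress_hr, stress_eda)])
y = np.array([0] * 100 + [1] * 100)  # 0 = calm, 1 = stressed

model = LogisticRegression().fit(X, y)  # the "model of mind": no brain signals involved
print(model.predict(window_features(stress_hr[:30], stress_eda[:30])))
```

The point of the sketch is just that even a crude pipeline like this emits a label about an inner state from body-worn sensors alone, which is exactly the shape of system the talk is concerned with.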
So there are a lot more kind
of CHI relevant examples
in the paper, and you can
probably think of some too.
But the point here is just that the CHI community is no stranger to models of minds; we just haven't had a cohesive term for describing them yet.
Okay, so what?
You know, why bother with models of minds, with this term, why bother with this argument?
Well in general as a
security researcher I'm
interested in kind of
mitigating the harms of
technologies, for all earthlings,
to borrow from Haraway.
So here I'm interested in how this theory of models of minds can help us understand, and hopefully stave off, new instruments of control and surveillance.
So let me put a fine point on this, one that will hopefully add some urgency to this debate.
So when I arrived at CHI a few days ago I was incapacitated with a 39-degree fever, and Noura Howell, who's here, lent me Jenny Odell's book How to Do Nothing, which is a very fitting title when you're incapacitated with a fever, so thank you, Noura.
Also, I'm feeling better now and I'm not contagious.
So in this book Odell critiques the attention economy, these kind of technical and social systems of control in which almost every minute of our waking lives is captured by neoliberal logics of productivity and performance. And just to provide some context here, like Noura and me, Odell comes from Silicon Valley, where all but the richest are getting squeezed, squeezed out even, economically and socially.
So a lot of this, for example, has particular resonance for me, because it kind of always feels like you need to be doing something to make ends meet, or even if you have a margin, to increase that margin, because you know that prices are going to keep going up.
So saying no to working long hours, saying no to the gig economy and the productivity culture, is really not an option.
So in this context Odell argues that withdrawing our attention is one of the last acts of resistance available to us. In her words: "... in a time of shrinking margins, when not only students but everyone else has to 'put the pedal to the metal' and cannot afford other kinds of refusal, attention may be the last resource we have available to withdraw. Only in the space of our own minds can some of us begin to pull apart the links."
So as they relate to control, models of minds fundamentally threaten this last resource we have to withdraw.
Okay, so let's take stock for a moment. I reckon that, metaphorically speaking, we are already this person, with the sensors all the way up the arms.
Between the devices that sense us and all the services that track us, we, at least in the global North, are already sufficiently instrumented to yield models of minds, and perhaps ones more intimate than we currently appreciate.
But I reckon we are not yet this ant, whose body has been overcome by the all-controlling fungus, so ruthlessly invasive that it can build a model of the ant's inner experience so complete that it can work the ant literally to its death, and I mean literally literally.
So I reckon we don't really want to become this ant, but we may be a lot closer than we imagine.
Developments in the same market logics that brought us Odell's attention economy may turn us into this ant before we even have time to notice.
So what can we do for ourselves?
Well, the one good piece of news I have for you here is this loop: if our technologies and our discourses about these technologies are mutually co-constructed, then we, and in particular we as the CHI community, have some degree of agency over how reasonable, accurate, or legible these models seem, to capitalists or to anyone else.
(laughing)
So remember, wherever models of minds develop, we as CHI will be in part responsible for their consequences. As academics we have the power to make models of minds, as a concept, more or less plausible, more or less actionable, more or less appealing to those who seek to use them against the interests of people society has already marginalized.
So how can we, as the CHI community, what can we do to make ourselves accountable in a positive way here?
What can we do to limit or contain models of minds' potential harms?
Okay, so I'll leave that question for the Q&A.
I'd like to thank all
my amazing contributors
whose conversations over many many years
made this work possible.
I'd also like to thank the BioSENSE group, where I did my PhD, the Center for Long-Term Cybersecurity, where I'm currently a postdoc, and the UC Berkeley School of Information. And finally, I'm thrilled to announce the lab I'm starting at UC Berkeley at the Center for Long-Term Cybersecurity, called the Daylight Security Research Lab.
So our mission is basically to make the threats and harms of technology understandable, which is an issue right now. So if you're interested in the intersection of design, STS, and kind of expanding notions of what security is, stay tuned, and obviously shoot me an email as well.
So with that thank you very much.
(clapping)
- [Woman] Hello, so about
your question about how do we
in a (mumbles) mitigate this ant effect.
I read this paper during the review
(mumbles) and I agreed.
One of the things I think is important to recognize is that while we, sort of as an aggregate, are not necessarily the ant, we have actually produced wearable fungus invasions of people, and it's predominantly in wearable interventions for disability. So one of the ways that we can mitigate these is to stop making that SHIT, and you can actually read resistance to these technologies in the papers themselves; the participants do resist these interventions. So pay attention to that, stop making fungus, thank you.
- Thank you
(laughing and clapping)
I was just writing that
down, that's fantastic.
I remember, I come from a world of brain-computer interfaces, and I was talking to a researcher I know, and her wife does cochlear implants, and she is a huge advocate for not giving children cochlear implants at birth, because it's basically non-consensual mutilation of their body. So yeah, okay, thank you for that, thank you.
- [Ricky] Hi, I'm Ricky
from Indiana University
I thought it was really a nice talk,
I wanted to ask you: do you think merely building this model somehow (mumbles) surveillance or whatever; is merely building this model problematic, or is it the way they are being used, or the way the companies, the people who are in control, intend to use those things? Which one is more or less problematic, or (mumbles)
- That's a great question,
so you know just kinda flying my kite here
to use a term that I
learned five seconds ago.
I would say it's impossible to tear apart building the model from the way that it's used; you are always building a model because you expect it to be used in some way, we are never just kind of tinkering in the void here, right?
So we are not building these models in a vacuum; there's always some imagined kind of context of use, and that's also entangled with our beliefs about what the mind is, who these minds are, where these minds are, what they're supposed to be doing, right, and there are all of these normative judgments built into that.
What are they doing, what
should they be doing?
So, you know, I think what's important here is certainly to experiment and build things that intentionally push against boundaries we might consider creepy, and I'm borrowing from Google here. The problem is that when Google does it, it's already too late; when we do it, at least we can raise the red flag and say, this is really, really bad, right?
What I worry about, kind of in the darkest moments, is our ability to collect data and process that data at a massive scale; we know that battle's been lost already, basically.
Those with the data have already won, and this is kind of the Shoshana Zuboff kind of stuff.
And you know, you can go off on that for a while, but there is, so to speak, an asymmetry here around data collection, and I'm concerned about what we can do to even kind of understand what people with lots of data are doing when we ourselves have so little data. I mean, what can we do to kind of collect data, can we force this open?
Is there some kind of regulatory mechanism like FOIA, I don't know if FOIA is a thing in the UK, a Freedom of Information Act request?
You know can we force openness on the part
of these large corporations and say,
you need to make your data public.
Huge problems there obviously with privacy, but also the opportunity there, maybe, is that you can at least know what's possible with the data.
Whereas today we are giving up data: we don't know what we are giving up, we don't know what the data means, we don't know what the data could mean; they don't know what the data could mean until they've already collected it and aggregated it.
I'm reminded of this study, I think a hundred thousand Swedish military conscripts had their resting heart rate taken for, you know, kind of military purposes, and they followed these men through their lives and found this robust correlation, even controlling for fitness and all of that, between low resting heart rate and your likelihood of being a victim or perpetrator of violent crime.
Okay, so this is not something the researchers were looking for, right; they just had huge data, went fishing, and here we are.
This is what Google is doing with every minor detail at every second, millions of times per second, billions of times per second, right.
And we have no idea why. There is no transparency, there is no forced transparency; this, like, transcends kind of political and epistemological problems, and I don't know what the solution is, so sorry.
(laughing)
