MICHAEL STEWART: Great.
First, I want to thank you
all for coming out today
to listen to Nora Khan talk.
I think that you'll be
very happy that you did.
And also, to thank you for
participating in the Nonfiction
Now reading series.
Just to get this out of
the way, as a reminder,
we have two more
readings coming up.
The first will be on October
18 with Francisco Cantú.
And that will be at the Granoff.
And then the next
will be on November 8.
And that's Amy Pickworth.
And that will also
be in this room.
OK.
I am very excited for
Nora to be here today.
I've been fascinated with her
work for a very long time.
I think she fills a
critical role that--
well, I'll talk about
it a little bit more
as I get to her introduction.
She is an accomplished
writer and cultural critic.
Her research focuses on
experimental art and music
practices that make
arguments through software,
machine learning, and
artificial intelligence.
Currently, she is a
professor at the Rhode Island
School of Design in
the Digital + Media department.
She teaches graduate
students critical theory
and artistic research,
critical writing
both for artists and
designers, as well as
the history of digital media.
She was the editor
of Rhizome, which
is one of the most important
repositories for curation,
preservation, and critical
thought on digital art.
She is currently the
editor of Prototype,
the book of Google's Art and
Machine Intelligence Group.
Nora's work spans from
fiction to curation, essay
to librettos.
She specializes in
critical writing
about digital visual
culture and philosophy
of emerging technologies.
Her most recent
work is a short book
published by Brooklyn
Rail, right over there,
called Seeing,
Naming, Knowing,
which uses the Greenlight
Project in Detroit
as a jumping off point
for a critical analysis
of surveillance.
I cannot think of a more crucial
role a writer can play than
to name the unnamable.
And this is what Nora does.
Sometimes it's because she's
talking about technologies that
are on the cusp of happening.
And other times it's because she
is able, with her critical eye,
to see what others have missed.
She warns us of our blind spots.
She examines what others
take at face value.
And she's able to put
into historical context
that which we can sometimes,
because of its shiny newness,
let slip past our guard.
It is my deep pleasure to
present to you Nora Khan.
[APPLAUSE]
NORA KHAN: Wow.
Thank you, Michael, for that
super kind introduction.
And thank you all for coming
and spending precious time
on a Friday night with me.
Hopefully, it'll be painless.
And it's a great pleasure to be
invited by the Brown Nonfiction
program and by Elizabeth
Rush and Michael Stewart,
who I met five years ago
in Boston on a panel,
"The Force of What's Possible--
Writers in the Avant Garde,"
which was a lot of pressure.
[LAUGHTER]
So thank you again
to both of you
and to your amazing
class of writers.
Thank you for your time
and generosity today.
It was really,
really a high point.
So I just want to introduce
my work in the swiftest way
possible because I really
wanted to try and do
something new with this talk.
So, again, I'm a critic,
a nonfiction writer.
And my main interest
for the last 10 years
has been pretty consistently
the past, present, and future
of technology, and art
produced through computation,
AI, in and with simulations
and through game engines.
And so that includes some
of the earliest artists
that we can imagine who work
through software and systems
to produce art.
And as a writer,
moving in the direction
of what barely has
any language for it
has been really,
really my focus.
So my background is in
literature and fiction writing.
And this interest in fiction
naturally synced for me
with an interest in technology.
And I found I was sure of
really one thing, which
is that I wanted to write
about technology using
the tools of fiction in
order to find language
for what doesn't exist--
so how AI is
changing our speech,
how we think in
doubles and triples
through digital avatars,
artificial sounds
and landscapes, synthetic
and virtual worlds,
which do very interesting
things like never
end and, here, never die.
Ian Cheng is an artist
who you see here,
whose Emissary series is
a trilogy of works that
technically could run forever,
with each character coded
with its own qualities and let
loose to learn game behaviors
as the algorithm teaches
itself in the system.
So it's a little bit like
watching a game play itself.
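A minimal sketch of that idea, of characters coded with fixed qualities who then learn behaviors on their own, might look like the following. Every name and number here is invented for illustration; this is not Ian Cheng's actual system.

```python
import random

# Each agent starts with a coded-in quality (curiosity) and a learned
# tendency to engage (approach). Chance encounters produce rewards, the
# agents adjust, and the "game" keeps playing itself.

random.seed(1)

class Agent:
    def __init__(self, name, curiosity):
        self.name = name
        self.curiosity = curiosity   # fixed, coded-in quality
        self.approach = 0.5          # learned tendency to engage

    def act(self):
        return random.random() < self.approach

    def learn(self, reward, lr=0.1):
        # nudge the learned tendency toward whatever paid off
        self.approach = min(1.0, max(0.0,
            self.approach + lr * (reward - self.approach)))

agents = [Agent("A", 0.9), Agent("B", 0.2)]

for step in range(1000):             # in principle this could run forever
    a, b = random.sample(agents, 2)
    if a.act() and b.act():          # an encounter happens
        # curious pairs find encounters rewarding; incurious pairs don't
        reward = 1.0 if (a.curiosity + b.curiosity) / 2 > 0.5 else 0.0
        a.learn(reward)
        b.learn(reward)
```

Watching such a loop run is, in a very small way, the "game playing itself": no one steers it, but the agents' tendencies drift as the system teaches itself.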
At RISD, I teach
the history of media
art, critical theory, artistic
research, and all the writing
in our department in Digital +
Media, also in graphic design--
I see a lot of you here today--
and to the end of having
artists and designers
really try to be able
to clearly articulate
why they do what they do.
So whether they're working
with new technology or old,
it means that they
should ideally
be critical about why
they're using the technology
and the software that they are.
And the hope is that we
can learn to see technology
not as a new thing but a very
old thing, so we can look
at pioneers like Harold
Cohen and Lillian Schwartz,
who used computers to paint
and make work back
in the '60s and '70s.
And we also spend a lot
of time in our classes
with critical approaches to
technology, embodied early
by artists like Tony
Conrad, seen here,
and Gretchen Bender of
the Pictures Generation
who grappled with technology
and the mediation of her time
and tried to predict what
the technology she was seeing
would mean culturally, socially,
and politically for the future.
So this is where a lot of my
research, writing, and teaching
is grounded: in reviews,
essays, and criticism.
Let's see.
I'll just skip forward.
A little book on machine vision.
I don't know why
it's doing that.
Well, it's good.
We don't have to
look at my work.
[LAUGHTER]
That's fine.
But I'm also
working as a curator
this past year on a show
called Manual Override, which
opens in November if you
happen to be in New York.
And it's featuring artists Lynn
Hershman Leeson, the pioneer.
And is this on auto?
What is going on?
MICHAEL STEWART: [INAUDIBLE]
Could you use the keyboard?
NORA KHAN: Yeah, I think
I'll just use this.
So strange.
OK.
And so the artists that I'm
working with in this exhibition
are Lynn Hershman Leeson,
Simon Fujiwara, Martine Syms,
Morehshin Allahyari,
and Sondra Perry.
And these artists are all
programmers themselves.
I'm sorry.
OK, this is really strange.
So all of these five artists are
working with visions of society
that we can forge through
technology, visions that
are hopeful and also critical
and not flooded by hype.
And so they work
with technologies
like machine learning
and machine vision,
and are really looking at how
we teach systems and train them,
and how our lives are
changed by this training.
So this includes all the
facial recognition, tracking,
and surveillance
technologies that we're
increasingly unable to manage,
control, or even see and sense.
So how we see and name
the world is really
about the ideology that's
embedded within technology.
And as a mode of
critique, this is a vision
of a world that
drives city planning
and social engineering.
And at its core is
a god's-eye view,
which is usually adopted by
the engineer or surveyor from
the top of a hill, who's
looking down at the wilderness
and mapping a
future civilization.
So this perspective
on engineering
can be traced back to a pretty
American, specifically Puritan,
worldview, which rotates around
the fantasy of the restart,
in which we reestablish society
in the West, conquer the wild,
and start over, leaving
all difficulty behind.
This is obviously tied to
the impulse of imperialism
and colonialism and things that
we don't have time for today,
but you're probably
all very familiar with.
So how this fantasy of the
blank slate, the restart
continues in technology--
and in modeling,
specifically-- is very powerful.
It's also the base of
a basic simulation.
And starting with
this dominant model,
naming it and
addressing it allows
us to model other
systems and collectives.
So out of criticism,
I usually work
with a lot of other
artists in collaboration.
So Michael mentioned
essays for catalogs, also
in artistic practice, as well.
And so this entire last year,
right when I started at RISD,
with the painter Caitlin Cherry
and the American artist Sondra
Perry, we tried to make a
model in response to that
god's-eye view.
So individually, we were
writers, artists, filmmakers,
and painters obsessed
with these questions.
We wanted to try and advance
them to their conclusions.
And this project,
A Wild Ass Beyond--
ApocalypseRN, started
with a headline
about the former mayor
of San Francisco who
suggested that the homeless
be put on ships or on a Naval
tanker somewhere off
the grid, where they
didn't need to be run into--
so in that statement, as
a kind of specific vision
of moving people
around on a chessboard
without any sense of
the cultural or lived
experiences that they have.
Together, we were trying to
think about the end goal--
survival-- of apocalypse.
Who gets to leave?
And where does this impulse
to leave or abscond come from?
So we traveled
around the country
looking at alternative living
communities and tiny house
festivals, talking to
people about their visions
of the end of the world,
of maintenance, of purity,
and contamination,
about who should survive
and who just might not.
We made a film.
And we built this weird
tiny house and a kiln
and a garden and a
lookout with a crossbow
and a Fred-Moten-inspired
library which was organized
around fugitivity and exile.
And we tried to ask, in response
to the models of Silicon
Valley, what other
models, ones that acknowledge
context, erased histories,
and hidden facts,
might begin to look like.
What would models that
acknowledge how we actually
treat each other in
isolation look like?
It was really
patterns in language
that we were tracking
in our research
as we sorted through all
of these competing visions
for the end of the world.
Some key ideas kept coming
up over and over again--
the idea of people as blank
slates, of homesteading,
of self-reliance and a simple
life, and the oldest fear--
us versus them.
Maybe we wanted to propose
the ideology of technology,
of the tabula rasa--
of people as blank slates-- has
failed us over and over again.
And so all of this doesn't
look like it came out
of a meditation on software.
But it did.
And we wanted to ask,
could we experiment
with a vision of the
world from below the hill?
So the metaphor of
a city on the hill,
treating everything in the
valley below like a threat,
drove this.
And what would it mean
to look at the world
from below and from the wild?
Could survival mean learning to
see from another perspective?
To thrive in communion, rather
than perish in isolation?
So it really pushed
us to write through
the "not chosen," who
historically have learned
to live in swamps and the
forest, who've learned to run,
to live in fugitivity,
which we use in the sense
that Fred Moten speaks of it--
to live underground.
So that work of imagining
alternative perspectives
continues on in
The Undercommons,
which is the
liminal, the secret,
the parainstitutional spaces
where a different kind of work
and communion are possible.
So this really led
me to the question
of whether we would continue
to make art after apocalypse
and whether we would continue
to write in this beyond.
What would one write?
And what kind of wildness
would be possible?
What kind of networks,
interfaces, and systems
would we build to
accommodate the mess?
What kind of totems
would we like
to look at every day
out in the woods?
What kind of images,
gods, rituals
would we need, given the
intensely networked minds
that we've developed?
And what metaphors of change,
transmutation, and flexibility
in place of binaries,
the basic models for how
we're supposed to act based on
how we present, would we need?
So then a whole year
of teaching passed.
And this past July,
I came to this island
off the coast of Newfoundland,
Fogo Island, to write a book.
And the title of the
book is the title
of this talk, The
Artificial and The Real.
And I was still thinking
in part about what
we would write after apocalypse,
at the end of the world.
And I wanted specifically to
think about the digital space
in this landscape, this island
residency on a fishing island,
where the onus was really
on landscape photography
and authentic representation
of unmediated experience.
So I came in with
the question, what
would looking at machine vision
or artificial intelligence
even mean in this
weird and remote place?
So it seemed a little silly, but
also a fascinating prospect,
to think about
networked thinking
in an untrammeled landscape where
you feel like a speck of dust.
And then there was the
emphasis on the real beauty
of unmediated experience,
versus the unreal
digital and mediated.
On the first day
walking to my studio,
Siobhan, who helped
run the residency,
talked about how unreal
everything felt on Fogo.
And we shared a bunch of
moments on that long path,
stopping to look at different
plants and to be agog
and struck by the sea.
How unreal it all was
seemed to be how people
described the place the most.
And that description of the
place as unreal stayed with me
and got me writing immediately.
The unreal sea, the unreal
landscape, the unreal colors,
fog, architecture.
Any historical language
started to feel
artificial and false
and unfit for what
I was seeing and experiencing.
So I spent a lot
of my time thinking
about artificial and
digital environments
and how they change our sense
of place and sense of self.
In every interface and
device and screen and piece
of software that we use
is a system that determines
or heavily influences, through
soft or nudge architecture,
how we relate and choose
and react and reflect.
So the filmmaker Harun Farocki
deconstructs this in
Parallel I through IV
and starts to look at how we
construct artificial
landscapes, sky, atmosphere,
seas, from early, brutal graphics
to increasingly hyper-real
graphics.
And their thesis, and the
thesis of many artists working
in this field, is that the
simulation isn't interesting
because it replicates
anything perfectly or is
an objective fact in
any way, but instead
because it's a human
interpretation of landscape.
So artist residencies
are pretty lonely.
And for this one,
you're really stuck
by the sea for about two months.
So I didn't want
to be too alone.
So I started a
little game, which
was looking at past
writers who had
been in this residency
looking at this same landscape
and seeing how many times
the word "artificial" is
used in their books.
So I didn't have
any expectations.
But if pressed, I
would say I didn't
think that anyone
would use the word
"artificial" in this
landscape thinking
about the sea or their work
or that they would make
any kind of work that would
suggest the word "artificial"
or "artifice."
But instead, I found a perfect
description of "artifice"
in every single book.
Within the first
three pages of Tones
by Edgar Leciejewski, an artist
from East Berlin who migrated
to West Germany in 1986
as a 9-year-old refugee,
he took pictures of the
stones and rock formations
on Fogo while thinking
about the color of the sky.
He wrote that the color
of the Fogo Island sky
is "the same illuminated,
almost impossible blue,"
"a total blue, an
artificial blue
that I had never seen
before in my life."
And it was the same
blue as "a gas station
that I'd seen in the
middle of the night"
crossing from East
Germany into West.
"Everywhere was light-- a
light that I didn't know"
that was possible.
So I wrote out this sentence,
that "the color of the Fogo
Island sky in my photograph
is the same illuminated blue
of a gas station in West Germany
in the middle of the night."
The color of the actual sky and
the pictures that Edgar took
was the same impossible
blue, the illuminated blue
as this gas station.
And so the more I kept
reading this line,
the more I felt it
was impossible to know
what this original,
artificial blue was.
So I carried this
book around and tried
to match it with the
water jugs, the napkins,
the cap of my water
mug, my shirt, and then
the real, actual sky.
And maybe if I waited for the
sky to match the photograph,
I would get a sense of the
blue that they had meant,
and what that artificial
blue meant to them,
and how important it was,
as it symbolized a crossing
and stayed with them
for almost 30 years.
So that's a pretty
important blue.
And that it was artificial
was the most important aspect
of it.
So there was, in the middle
of this beautiful landscape,
the significance of artifice,
of a fluorescent light
of an energy station--
the darkness of the real of
nine years in East Germany.
Artifice here was a good thing.
Artificial blue was a
warm memory bursting
with life and possibility.
So was there something
at this place
that made artists and
writers think about artifice,
simulacra, copies?
Maybe an attempt to freeze
the place over and over again?
There were sculptors,
painters, and illustrators.
And everyone found sets
of artifice, copies,
reproductions, or layers
removed from reality
and did the same in their work.
So Kate Newby, for
example, made copies
of real rocks out of clay and
painted them meticulously,
so most assumed they
were real rocks.
And she used silver
to make tin pull tabs.
Joseph del Pesco was
the writer before me
who had a collection
of short stories.
On page 10, he describes
the glass eyes
of an embalmed dog
sitting on a plinth,
staring at a film projection.
Somewhere in the summer, I
saw this delicious story--
probably my favorite news
story from this year--
about this turquoise
lake in Siberia,
which is perfect on
Instagram, but is, in fact,
a chemical dump.
So you can't swim
in the ash dumps.
And this company says
in a press release,
its artificial lake is a
star of social networks.
And the place is called
the Novosibirsk Maldives
for its tropical appearance.
And people head to the edge
of this ridiculous toxic lake
because of this
turquoise, which reminds
them of the Indian Ocean.
So there are wedding shoots,
yoga shoots, fashion shoots all
by the waste site for Heating
and Electrical Station Number
5.
And the stations try to
dispel a lot of rumors
that there's a blue seagull
and that plants are dying.
And the vibrant blue comes
from calcium salts and other
metals which dissolve
from coal ash.
So in this video, where
Sergey says, I've never
seen the Maldives, but I've
seen pictures of that blue,
and he's fine with
taking pictures
of this blue no matter the
chemical content, I mean,
there's a lot to unpack there.
It isn't just about the power
of social media or group think,
although those
factors can play a part.
And most of the comments
under the video are about how
silly and stupid IG has made
us, how these people are fools.
And maybe that's fair.
And maybe that's not.
But I don't think the
impulse that they have here
is really that strange.
The why this blue, and why
do we want to be near it,
is really more interesting.
It's the concentration,
the density, the purity
and saturation, the way it
feels like several aquas fused
into one, a pure point
perceptible by all.
And the shared
experience of people
against this blue, their
surreal postings is fascinating.
What constructs
of the imagination
are projected onto these
artificial landscapes?
What imaginaries of synthetic
color, of manmade construction,
of replications of
the real thing so
convincing that we
don't need the original?
So for every space
of untrammeled
natural beauty there is a pure,
synthetic replication
or approximation.
So the saturation, the contrast,
the sharpness, the position,
the way we all use
Instagram filters--
maybe all of us--
to amplify the
sense of the place
to really convey a sense
of it and a sense of us
right on the edge of it.
So landscape, I would argue,
for me, and for us now,
is also the digital space that
we share, looking on a screen
at images of this shared
experience in Siberia,
struggling to understand
the motivation, intent,
and meaning.
This is the kind of landscape
photography and architecture
that I decided I'm
interested in in Fogo--
one that's weird, psychological,
and collectively shared.
So for every aqua
underlay of Fogo
is a toxic dump of equal blue.
And that blue is so blue that
it seems both unreal and enough
like the blue of the
Maldives to risk getting
an unknown long-term illness.
And meanwhile,
back home, a friend
shares a photo of this mystery
of blue dogs in Mumbai,
their vibrant blue coming
from chemical waste.
So as I was writing, I read
Elizabeth Bishop, her "Country
Mouse," which is an essay which
came perfectly at this time.
And Bishop wrote that when she
was starting school in America,
she remembered a teacher
named Miss Woodhead, who
made a model of the
landing of the pilgrims
on a large tabletop.
And she wrote that the rock
was the only real thing,
and Miss Woodhead made the
ocean in a spectacular way.
She took large sheets
of bright blue paper,
crumpled them up, and stretched
them out on the table.
And then with
blackboard chalk, she
made glaring white
caps of all the points.
And an ocean grew
right before our eyes.
"Topography," goes the poem,
"displays no favorites.
North's as near as West.
More delicate than
the historians' are
the mapmakers' colors."
So this line, the colors
of the mapmaker being
more vibrant than
historians', prompted
me to start my own
psycho-geographic map
for this book.
And this is usually how I
write, is by mapping out
themes and argument.
And the map started to circle
around this unknown color--
this unapproachable
color, that every thought
kept circling back to.
So I started to do
automatic writing
for some direct
retrieval of memory
and understand what
my own mapmaker's
colors, associations,
and memories were
as they were activated.
So automatic maps
and semantic maps
are really good
tools for teaching.
And I use them all the time.
I realized I hadn't done a
good deep dive in a while.
In language, we've
now become more used
to describing things in nature
as manmade and synthetic.
And our language has
reflected, absorbed,
and taken all of this in.
The sea, like technicolor blue.
This image from my window
of the same site in the sea,
both during rain
and on a calm day,
had this aqua blue stripe,
which you can see on the left.
And on the right, it's
sort of covered in waves.
And I was trying to describe
this to friends back home
and kept telling them, you
have to see this in person.
There's just no
way to describe it.
And they would say, well, you
need to try to describe it,
because I can't be there.
So it's a blue like this,
like this, like this.
And I take a
picture and another.
And it's not right.
And I take another picture.
So I wait for the blue
to approach in a video,
and then try to click.
Freeze the video,
send it to them.
And they say, sure,
it's a green-blue.
I see it.
And I said, no--
not a green blue.
An aqua green-blue like
an ice pop on our street
from that specific ice cream
truck, or a green-blue
like the new Crest
toothpaste, like mouthwash.
That kind of blue.
While trying to describe
the unreal beauty there,
I also notice how frequently
people use synthetic metaphors
and artificial colors to gesture
at saturation and elevation.
Again, the sky like
that blue of the gas
station, the green of an
iceberg like fluorescent mint.
Blue was originally created
to describe the ocean
and sky, painted blue,
and this started a meditation
on how much color is artificial,
whether dye, ink, or real.
These reflections on color
and artifice, on Egyptian blue,
came from a series of
research studies on technology;
Egyptian blue is often
called the color of technology.
And blue was expensive.
It was rare.
And it was precious.
Eventually, it was the
color of Mary's robes,
the glass windows of cathedrals,
and lapis lazuli melted down,
or made to suggest lapis
lazuli melted down.
And even more debate
surrounds Egyptian blue, and
whether it was an accident
or an intentional act.
North of Fogo were
these glaciers.
And one of my peers, Rea Tajiri,
a filmmaker who was there,
was, in her last week,
able to see the
closest glacier to Fogo.
It was gone, or it was
too far by the time
I was there in July.
And she was immediately able
to relate to these aquamarine
stripe struggles.
We both commiserated
about not being
able to take in the sublime
of these greens and blues
constantly.
And she related it to
too much water energy.
And so she hung reds and golds
and pinks around her house
to balance out the sea.
I was trying to keep the
green out with my own screen
by drawing the
blinds in my studio
and looking at my own work.
But it kept coming back.
It was late at night,
around 10:00 or 11:00 PM.
And Rea projected these
images of her iceberg film
on the window of Wells
House, where she was living.
And there was a sliver or
two of a green fluorescence
around the base
of the icebergs--
part of them, it seemed.
And she couldn't find
the words for it.
But, again, it was an unreal
green, which tormented her.
It was something to possess.
She wanted to capture it, to
feel it, to be enveloped by it.
I wanted to go there and
be inside of it, she said.
And I later imagined her
encased and embalmed in it.
And I thought of her alive,
really, but embalmed.
And now I'm constructing false
memories of her film here.
But she did say that she
wanted to see everything
through the iceberg's green--
a green you wanted to be
held by, suspended alive in.
I really loved that.
And we kept trying to describe
this green together for weeks,
taking rides, going as close
to the iceberg as we could,
and coming back.
And she found all sorts of games
to reproduce it in the absence
of a name for the color--
the light through mouthwash,
capturing its textures,
the green of the benches
around the island, her sweaters
and shirts--
to try and [INAUDIBLE]
what this green was.
We wanted to approach
a hyper-real color--
its saturation,
precision, and clarity--
something that you
could emulate along.
So that jade,
mouthwash, emerald green
is a subject of a lot of debate
among glacier scientists, who
had long thought that it was
organic matter and dead sea
life frozen which made it.
But it's actually
concentrated deposits
of iron, which are
trapped in the ice
at the bottom of the ice
shelves that are then
released when glaciers melt.
So light hitting the reddish
yellow glacial flour,
as it's called, mixes with
the blue light of ice,
and filters through as this
profound green that isn't even
there.
The green that doesn't
exist, but does, but doesn't.
To learn that the green
wasn't even real, but
only a refraction-- a mixing
of lights and chemicals--
well, where did
that leave us then?
We wanted to recreate and
possess a color that was real
but only a product of light.
There is no actual green
thing, only a feeling
of wanting to be near a sublime
glow, a production, a thing
we couldn't capture,
take away, or possess.
A sinister fluoride.
And now that map of
possibility had a real green
that you couldn't imagine--
no referent at all, no
actual thing to grasp.
So rather than capture
through writing,
I wanted to slow
down, instead, the act
of seeing and observation, given
how instant our mediation is.
And technologies and
platforms are often
designed to cut out any
kind of pause or reflection.
So every time you crop a
photo, what colors you choose
to saturate, the act of instant
availability of this mediation,
this artifice, means that we are
trained on an unprecedented scale
to be constantly mediating.
But we need the frame.
And we need the grid.
We need binds and bounds.
We need the mediation of
language, of cameras--
not to capture or
contain, but instead
use them as investigative
forensic tools
to try and locate the
divine heart of the thing.
I don't think this
is going to work.
But this was a video of
a simulation of an ocean.
And so I looked at a lot
of simulations of the ocean
during my time in Fogo to
try and approximate the real,
and wrote a lot about the
landscape of Fogo as coded,
and how we run the real
through technological processes
to get a sense of place.
So what this brought me to is
how the artificial gestures
towards the real is where
the really interesting stuff
happens.
And just because we
move towards and fail
to achieve a direct replication
doesn't mean the simulation
itself has failed.
The story of this green
became an ongoing book
on artificial metaphors
and language, which surge
and embed themselves as
certain technologies
and their attendant
frameworks become ubiquitous.
And it becomes more important
to think about this language.
"The best thing that fiction can
do," as Julio Cortázar writes,
"is complete the search to
express how it really is,
to find ever new modes
of expression for what
we're actually looking at."
And so for technology, language
needs more than one modality,
many kinds of language
and form and syntax
to attend to machines.
So how we see, through screens,
the unimaginable scale
made possible by computation.
How we describe
artificial intelligence,
bots, avatars, how
central stories
have been in shaping our
imaginary of these things.
Language struggles
within technology
to break free of
humanist tropes,
in order to understand
human intelligence
as just one entry along
a spectrum of many,
from animal to artificial.
We can track how metaphors
and technology change
in the printed word over
time, through advertisements,
articles, and criticism.
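That kind of tracking can be sketched very simply: count how often competing metaphors for technology appear in dated texts. The tiny corpus below is invented for illustration; a real study would use digitized advertisements, articles, and criticism.

```python
from collections import Counter

# Hypothetical snippets of period writing about technology, by year.
corpus = {
    1960: "the electronic brain computes; the giant brain never tires",
    1990: "surf the web, ride the information superhighway",
    2020: "the cloud learns, the algorithm dreams, the feed knows you",
}

# Metaphor terms to track across the decades.
metaphors = ["brain", "superhighway", "cloud", "dreams"]

# For each year, count occurrences of each tracked metaphor.
trend = {
    year: Counter(w for w in text.replace(",", "").replace(";", "").split()
                  if w in metaphors)
    for year, text in corpus.items()
}

print(trend[1960]["brain"])   # the "brain" metaphor dominates early on
```

Even this toy version shows the shape of the method: as the dominant metaphor moves from "brain" to "superhighway" to "cloud," the counts register a shift in how a culture talks about its machines.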
And every shift in the metaphor
reflects a societal shift
in attitudes towards technology,
from hype and optimism,
to fear, to an uneasy embrace.
So for me, the moments that
crop up in writing about tech
are the moments of
that green stripe--
that color-- the very
moments that language fails.
And these moments
where I've had no language
are the most thrilling
for me, as a writer.
So a few moments like
this that stand out.
About three years ago, for
California Sunday Magazine,
I went to LRAD, which
is an acoustic weapons
manufacturer in San Diego, to
listen to their crowd control
devices called LRADs.
And this began a long
debate about what I heard
and what was comfortable,
a debate with the company
about what the
human ear could actually bear.
And so the long-range
acoustic device
is used in crowd control.
And the voice is
shockingly clear.
And it sounds like it's
inside of your head.
And even though it's said
to be safe for the ear,
the siren, once it begins, is
shrill like a car alarm that's
been focused and purified
and louder than anything
I'd encountered in my life.
And I had to raise my
hands to shield my ears.
And abruptly, the
device goes silent.
And that piece became a
debate between the company
and activists that I was
interviewing and people who
study hearing.
And for most people, the
sound would be unbearable.
But the company was shielding
it in bureaucratic language--
this sound that falls in the
120 to 130 decibel range
is said to be safe.
Another point of
language failing
is the work of Casey
Reas, an artist who
co-created the coding
language Processing,
but in recent
years has been
teaching neural networks
new data sets of images.
He uses GANs, which stands for
generative adversarial
networks, and feeds these
networks-- for example,
every Tarkovsky movie, or every
image from the whole corpus
of Hitchcock-- to see
if a neural network can learn
what a Tarkovsky-like
image looks like.
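The adversarial setup behind work like this can be sketched in one dimension: a generator learns to mimic a target distribution while a discriminator learns to tell real samples from generated ones. This is a toy with made-up numbers, standing in for image GANs; it is not Reas's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_samples(n):
    # "real" data, standing in for e.g. frames from Tarkovsky films
    return rng.normal(4.0, 1.0, n)

g = np.array([1.0, 0.0])   # generator: fake = g[0] * z + g[1]
d = np.array([0.1, 0.0])   # discriminator: p(real) = sigmoid(d[0]*x + d[1])

lr, batch = 0.05, 64
for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    fake = g[0] * z + g[1]
    real = real_samples(batch)

    # Discriminator step: raise log d(real) + log(1 - d(fake)).
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(d[0] * x + d[1])
        err = label - p                 # logistic-regression gradient
        d[0] += lr * np.mean(err * x)
        d[1] += lr * np.mean(err)

    # Generator step: push d(fake) toward "real" (non-saturating loss).
    p = sigmoid(d[0] * fake + d[1])
    err = 1.0 - p
    g[0] += lr * np.mean(err * d[0] * z)
    g[1] += lr * np.mean(err * d[0])

# The generated samples should now center near the real data's mean.
print(round(float(np.mean(g[0] * rng.normal(0.0, 1.0, 10000) + g[1])), 1))
```

The two networks here are just two-parameter lines, but the choreography is the same as in the image-scale systems: each side's improvement is the other side's training signal.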
And these images are, to me and
to other writers in this field,
maddening.
And the first words
that usually come up
are surreal, uncanny, weird,
spooky, a fever dream of AI.
But even more than
that, they feel
like clues to images
deeply embedded
in our collective psyche.
Moments that convey a definite
affective response, a feeling,
but not much legible meaning--
what is this?
Who knows?
We can only speculate.
There's a dreamlike,
mysterious, disquieting aspect
to them, called deep dream,
which is often
expressed as a weird
collective subconsciousness.
And then there are other
moments of the asymptote--
the very edge between
human and machine,
between human intelligence
and machine intelligence
at work in conversation
and in play--
of such beauty
that they defy language,
as when Lee Sedol described his
defeat at the hands of AlphaGo
as a moment of such grace,
surprise, and creativity
that he called it "God's touch."
He said, "I didn't know
how to describe it at first.
It wasn't a human move.
I had never seen a
human play this move.
So beautiful."
It was a word he kept
repeating-- beautiful,
beautiful, beautiful.
And as he played
match after match
with AlphaGo over
the next five months,
he watched the machine improve.
And he also watched
himself improve.
By the end, he felt that AlphaGo
had taught him, as a human,
to be a better player.
He'd seen things he
hadn't seen before.
And that made him
incredibly happy.
Like little else,
this path to victory
highlights the power and the
mystery of the machine learning
technologies that underpin
systems like AlphaGo.
And with these
technologies, AlphaGo
could learn the game by
examining hundreds of thousands
of human Go moves.
And then it could
master the game
by playing itself over
and over and over again.
And the result is a system,
as many players describe,
of unprecedented beauty.
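The recipe described here, learn from play, then improve by playing yourself, can be shown at toy scale. This is not AlphaGo (no neural networks, no tree search): it's a tabular learner teaching itself the take-1-2-or-3-stones game of Nim purely through self-play, with every name and constant invented for illustration.

```python
import random

random.seed(1)

STONES = 10          # starting pile; whoever takes the last stone wins
ACTIONS = (1, 2, 3)
Q = {}               # Q[(pile, take)] -> estimated value for the player to move

def legal(pile):
    return [a for a in ACTIONS if a <= pile]

def choose(pile, eps):
    """Epsilon-greedy move selection over the learned values."""
    if random.random() < eps:
        return random.choice(legal(pile))
    return max(legal(pile), key=lambda a: Q.get((pile, a), 0.0))

ALPHA = 0.2
for episode in range(30000):
    pile, history = STONES, []
    while pile > 0:                 # both "players" share one value table
        a = choose(pile, eps=0.2)
        history.append((pile, a))
        pile -= a
    reward = 1.0                    # last mover won; sign flips each ply back
    for (p, a) in reversed(history):
        q = Q.get((p, a), 0.0)
        Q[(p, a)] = q + ALPHA * (reward - q)
        reward = -reward

# After self-play, the greedy policy should leave the opponent a multiple of 4.
best_from_6 = choose(6, eps=0.0)    # optimal play takes 2, leaving 4
best_from_3 = choose(3, eps=0.0)    # optimal play takes all 3 and wins
```

No move was ever labeled good or bad by a human; the table converges on the game's known winning strategy purely from the outcomes of games it plays against itself.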
Another point where
metaphors need to bend
and language bends
is an encounter
with artificial
superintelligence.
So we often anthropomorphize
our relationship
with AI and our
description of AI.
But an intelligence that's so
far beyond human comprehension
could be described instead
as a swarm organized
by elegant rules, thinking
collectively; as a scaffolding,
as a hurricane, as a star
system, a sovereign, a search
party, or an agent.
And as our metaphors
curve towards the amoral
to celebrate the
beauty of systems,
we could end up again feeling
more human, more rooted,
and more like ourselves.
In some senses,
this has always been
the function of the other,
whether AI, alien, or god.
It's at these moments
that I lose track of time.
There's a black hole
for language-- a quality
I can't name or recognize.
And this moment
of no recognition
is what surrealists strove for--
the very condition that they
wanted to create and recreate.
The flow of relentless
legibility is interrupted.
And opacity,
blindness, the refuse
of language, the refuse
of capture comes to bear.
And in a field of computation
where nearly anything
can be captured,
this is more and more
valued as an experience.
These moments are haunting.
I'm sorry, I have
to do funny slides.
And they're simple.
Their simple lack of
recognition is just
a brief moment of
the specter, a clue
to another kind of relation.
And maybe most of all,
they're a destabilization
of one's human perspective.
These are moments
that are called
"thinking through
things," points
where we can talk about
the falsehood of claims
we place on the
natural environment.
They're ciphers, codes to break.
So I want to just linger for a
space on these weird loopholes
between reality and artifice
where language bends, repeats,
circles around, and fails.
I want to suggest
that these are really
the moments for us to pay
attention to, the moments where
our cognition is pressed.
As writers, we're often
pushed to capture them,
as they're crucial
moments where we can step back
to reflect on our already
impressive capacity
as a culture to describe
and make legible.
We might consider that
language is actually
failing for a deeper
philosophical reason, which
is that the modes of
thinking and expression embodied
by advanced technology
actually defy language,
humanist frameworks,
and metaphors.
So how do we
describe technology?
And how do we think about it?
What metaphors do we use for it?
I want to talk about
the places where
every humanist trope--
subject and object,
human and owner, even real
and artificial-- is denied.
So to understand how we are
understood by machines--
not as human subjects, but
as quantities, containers,
vessels of metrics, assemblages
of choices, preferences,
and inclinations--
means that we need
better metaphors that
are more embodied.
We might consider how
technology manages
our identities and
our relationships
through language, how bad
metaphors become extremely
powerful cultural devices--
the brain as a computer,
or perhaps the most pervasive,
technology as a
set of tools, which
is really a product of
the Whole Earth Catalog's
"access to tools."
The personal computer is a tool.
The phone is a tool.
The internet is a tool--
rather than incredibly
orchestrated symbolic systems
that shape how we
think, see, and act.
AI as enemy or as friend.
Brains as hacked
by social media.
The metaphors are
often binaristic.
And the insight here
isn't just that language
is politically
and culturally rooted,
but that the language used
to describe technology
is often extremely
contradictory, confusing,
and a mix of competing
perspectives.
It's right on this line that
the humanist tropes we really
hold dear are
shattered, leaving us
to work in a mess
of science fiction
tropes, some bad, some
awful, fearful predictions
and fantasies, economic
imperatives to smooth out
that fear, someone's
power fantasy.
The more ephemeral and abstract
the concept, the more important
the conceptual metaphor.
So we have the internet
as a wild west,
a frontier to be tamed.
We surf the web.
We do deep content dives.
We skim, scroll, and
flip through content.
And in some of these
metaphorical fields,
we're actually pretty
empowered, managing information,
connected by the plane
in the Facebook ad
to a remote town in a
continent that's far from us.
In others, we're hopelessly
disconnected and disempowered,
in which the artificial and
synthetic seem like barriers
to the natural, authentic,
pre-technological life before.
So this denies a long history
of us always using technology,
and we become
minuscule data points,
dwarfed by a sense of skill
and time beyond our own.
Today, we're never
overtly made
to do anything through
interfaces and tools.
So we sort, click, friend and
unfriend, mute and unmute.
We shout, project, and chastise.
It all feels very empowering
until you look beyond this play
to the orchestration of
this social-symbolic space.
We're often made to forget what
powerful tools of social design
they are, through their
design-- continually inducted
into an illusion of choice
that's presented everywhere
in the interface.
The interface is designed
to be separate from
what's below, and that
separation masks its ideology.
Software's work, as
Wendy Chun writes,
is a "simulation of the
ideology of neutrality."
We connect all day
with interfaces
that don't suggest or
connect with what's
happening underneath.
So tracking all the ways that
the subtle illusion unfolds
is the work of an investigator,
a researcher, a game designer,
and a storyteller in one.
It gets even more confusing
when on top of all of this
is a lot more language, the
source of which is unclear.
We talk to AI and
bots and machines
and read automated
news all day long.
We listen to disembodied
voices and signs, untethered
from any speaker at all.
A single word or hashtag
can catalyze mass action.
And we might understand
language in these contexts
as being at its least natural--
not subjective, but isomorphic
and lexical.
Early computer games
would ask players
to explore horizontally,
vertically,
and in almost every direction.
And the result for the player
was a suspended mental world
that is, itself, a simulation.
It's loosely bounded by the
mechanics of maps, tables,
rulebooks, gesture,
and dialogue.
Erik Davis noticed this
example in Adventure,
which is one of the earliest
games, where you enter
the first scene, and
the first instruction
is here at the bottom.
"You are standing
at the end of a road
before a small brick building.
Around you is a forest.
A small stream flows out of
the building and down a gully.
What's next?"
To shape the digital
space and help a player
enter, the games had to verbally
conjure up scenes in the mind,
even as graphics were absent.
This programmed a kind of mental
space, where programming means
using an elaborate
symbolic machinery in order
to solidify and organize
the plastic material
of the imagination.
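That "elaborate symbolic machinery" can be surprisingly small. A minimal sketch of an Adventure-style world: a few rooms linked by a dictionary, a one-word parser, and text descriptions doing all the conjuring. The room names and descriptions are invented for illustration, loosely echoing Adventure's opening.

```python
# Each room maps to (description text, {direction: destination}).
ROOMS = {
    "road": ("You are standing at the end of a road before a small "
             "brick building. Around you is a forest.",
             {"north": "building", "south": "gully"}),
    "building": ("You are inside the small brick building.",
                 {"south": "road"}),
    "gully": ("A small stream flows out of the building and down a gully.",
              {"north": "road"}),
}

def describe(room):
    """Render the verbal scene the player must hold in their mind."""
    text, exits = ROOMS[room]
    return text + " Exits: " + ", ".join(sorted(exits)) + "."

def move(room, command):
    """Parse a one-word direction command; unknown words leave you in place."""
    _, exits = ROOMS[room]
    return exits.get(command.strip().lower(), room)

here = "road"
here = move(here, "north")   # walk into the building
here = move(here, "dance")   # unrecognized verb: stay put
```

The entire "world" is a symbol table; the graphics happen in the player's imagination, which is exactly the point being made here.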
So ARPANET, which was the
progenitor of the internet,
drew its inspiration
from these games.
You descend into this
inferno-like digital landscape,
which would then collapse,
almost immediately,
into your mental and
internal landscape.
And as Davis notes, the internal
architecture of the virtual
was akin to a memory palace.
And once you internalize the
structure of a place,
any word you encounter becomes
a placeholder for memory,
more powerful than any
visual element of that world.
So we might still be
wandering through space.
But what we remember
is that first scene--
the road, the forest around
it, and the stream.
So what enters your mind palace
through the digital symbolic
interface is profound,
like throwing
Easter eggs and black boxes
into the overactive bed
of your psyche.
And from the war game to
virtual-world strategy games,
we have isometric
maps of god-vision,
which we'll look at
next, in which people
can move battle figures around
on a claustrophobic map.
And this game world
becomes an allegory.
And those who code in it
were called and still are
called wizards,
priests, hackers,
masters of chaos magic,
warlords, dungeon masters,
agents.
They're the brokers
of information.
In the '80s, they would be
described as "digital cowboys"
skirting on the
internet frontiers,
like the Gibsonian
cowboys of cyberspace.
A virtual Marlboro
man, skintight,
slicing through information,
people, places, cultures,
contexts, no regard
for human laws.
The cyber surfer, or the digital
cowboy, of the '80s and '90s
was always exploring.
He was an intrepid
warrior, hacking away
at the recesses of code,
the tree of the program,
wandering hidden rooms,
structured and complex
processes.
Today, these are
priest-like figures
who understand the
magic of the algorithm--
mystics protecting silos
of elite knowledge.
And who else would
the rest of us
be but subjects, serfs, at
the base of a cathedral?
About five years
ago, Sara Watson
argued with
devastating efficiency
against the metaphor of the
"sea" or "ocean" of big data.
This metaphor is bad for a
number of reasons, in part
because the fathoming
of the ocean of data
is impossible, because
there's too much data.
The scale of computation
is too enormous.
And all we can do is
flail against its shores.
And because data
has come to be described
as far from an
embodied experience,
despite being rooted in our
physical bodies, purchases,
and searches, the image
of an unmanageable ocean
actually just empowers the
tankers, the corporations--
those with the resources to make
sense of it and interpret it.
And so this choice of language
becomes ethically
important, because
the language can
serve the practice, justify it,
and contain its own justification.
When you describe
data as a "sea,"
this creates a
relationship to technology
that's inherently
extractive, rather than
being based on the
rights of sharing,
collective cooperation, or an
understanding of data as owned.
Watson quotes Sally Wyatt
here: "Metaphors
can mediate between
structure and agency,
but it is actors who choose
to repeat old metaphors
and to introduce new ones.
Thus, it's important
to continue to monitor
the metaphors at work to
understand exactly what they're
doing."
"Data mining" and "excavation"
both carry within them
a logic of extraction
inherent to capital.
These metaphors also position
all our digital material
as inherently a resource,
as material to be mined,
rather than material
that's given cooperatively.
Data mining is then positioned
as almost pleasurable,
as part of a voluntary act,
one effected through seduction.
The mining is just a thing that
happens, the process that's
just the cost of a softly
managed feedback loop,
a network of brains.
We mine because we're mined.
And the psychic energy
we produce fuels the system
and is necessary to it.
And as cognitive
subjects, we become
uniform, stripped of
subjectivity, mere resources.
All these industrial
metaphors draw
on the fantasy of empire
across time and space,
where kings and hegemony are
more effective when seductive,
when they show largess.
Making embodied metaphors
in technocratic space
is really hard to do,
because the field is
full of technocratic
actors like Google--
spread across fields
and hard to disrupt.
Another reason for
changing the metaphors
is to acknowledge unseen labor.
So when we say
artificial intelligence
or we talk about a
network learning,
we only mean that it
imitates human correlation.
Its learning is imitation
of our codified rules
and social conventions.
And AI today is one that
can invent new rules, problem-solve,
and work at tasks.
It's based on machine learning
that extracts patterns out
of data according to
the logical forms of
statistical distributions.
So that's not intelligence
as we understand it
when we speak to each other.
We don't compliment
each other on having
a moment of profound and
masterful information
compression.
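The joke lands because statistical learning really is a kind of compression. A minimal illustration: "fit" character frequencies to a text (the pattern), and the Shannon entropy of that fitted distribution is the bits-per-character an ideal code built on it would spend, always at most what a pattern-blind uniform code spends. The sample text here is invented.

```python
import math
from collections import Counter

# Any natural-language string works the same way.
text = "the model learns the statistical shape of the data it sees " * 20

counts = Counter(text)
n = len(text)

# Entropy of the fitted character distribution: the bits per character
# an ideal code exploiting this statistical "pattern" would spend.
entropy_bits = -sum((c / n) * math.log2(c / n) for c in counts.values())

# A code that knows no pattern spends log2(alphabet size) bits per character.
uniform_bits = math.log2(len(counts))
```

Because English characters are far from uniformly distributed, `entropy_bits` comes out well below `uniform_bits`: extracting the statistical pattern is what buys the compression.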
Matteo Pasquinelli writes that
we should think of AI, instead,
as the "ultimate
transfiguration of mental labor
into a collective daemon,
working tirelessly, invisibly,
in the backroom of capital."
So AI, speaking of
verticalization,
is a reminder that
our screen labor
is managed in the Global South.
Click workers in
Bangladesh, where I'm from,
sit and look at the worst
of the internet all day.
And Amazon Mechanical Turk
workers work 20 hours at a time,
labeling images.
I don't know how many people
saw this piece about moderation
in Florida for Facebook, but
it's really, really worth
reading.
It's really a symbol of
AI as collectivized cognitive
labor that has serious
psychological and mental health
impacts.
So the ghost in the
machine turns out
to just be a lot of
people working elsewhere
who aren't blank slates.
Moving upwards, this idea of
seeing from a god's-eye view is
another metaphor that
needs disruption.
The drone's view, the pilot's
view, the war game's view,
the designer's view,
the architect's view.
Architects and urban
designers affect
speculative urban development,
paired with climate change
predictions, paired
with databases that
terraform our urban landscapes.
From this position,
we never zoom in
to see the context
around a house that's
deemed ready for
demolition, or its importance
to individual people.
This overwhelming and
embedded technocratic viewing
erases history
and culture, which
is subject for another talk.
But it changes the
language that we use.
A city becomes not a city of
people, but tracks of land,
grids plotted, and people,
tiny points milling around,
disconnected from each other.
That people get
together and commune
is something that we forget.
And that they have histories
or a project of culture
is forgotten, as well.
Probably one of the best
thought experiments
here in changing
metaphors is that
of theorist and artist
Hito Steyerl, who
wrote "In Free Fall:
A Thought Experiment on
Vertical Perspective."
And she describes the current
moment as a loss of horizon--
the same horizon that
situated concepts of subject
and object, time and space
throughout modernity.
But because of new technologies
of targeting and surveillance,
the dominance of the aerial view
through satellite, Google Maps,
the god's-eye view, makes for
a decreased importance of any
linear perspective.
She argues that
we should come up
with new ways to relate
to the visual present
and vanishing
points that diverge,
flight lines that are distorted.
In helping us understand
a state of free fall,
with perspective actually
being vertical,
Steyerl opens up for the
reader the question of,
how do we orient
in this free fall?
What collectivity or organizing
is even possible there?
Could it be that flexible
seeing is a skill?
The perspective of
free fall suggests
interruption of exploitation
and accumulation.
It makes class
relations clearer.
And it makes us aware
of our position.
And dropping down from this
position and refusing it
means imagining what other
things this eye can do.
This eye can see from
every possible scale
in every direction and
any frame of detail.
It needs to, if we're to have
seeing and language that's
not technocratic.
So as much as
technical specificity
and as much as vision, poetry
is essential to conveying
that feeling of technology.
To note that technology
is as much a feeling
and sensation as a tool, we
can look to some great writers.
I'm just really
briefly going to go
through some of the most famous
descriptions of technology.
So take Thomas Pynchon's
very famous opening
to Gravity's Rainbow.
"A screaming comes
across the sky.
It has happened
before, but there's
nothing to compare
it to now," which
is the description of a V-2
rocket launched by [INAUDIBLE],
the screams it engenders
below, and the language
of romantic ecstasy.
Elsewhere, the rocket is
described as "a peacock
courting and fanning its tail.
Colors moved in the
flame as it rose off
the platform-- scarlet,
orange, iridescent green,
ascending and programmed
in a ritual of love."
There is no image for
Pynchon, except for
this hysterical Tumblr post.
And you can actually put
in any Pynchon quote,
and it'll put it
up against some--
it's a really ominous quote to
set against this background.
I also found this for
Gibson's Neuromancer.
"The sky above the port was
the color of television,
tuned to a dead station."
Octavia Butler, in
Parable of the Sower,
understands technology
for the delusions
that it casts, the
false sense of freedom,
the false connectivity.
Disconnected from nature and the
planet, our conditions worsen.
Advancement is not as important
as relentless progress.
But then she writes,
"Lights, progress, growth--
all these things we're
now too hot and too poor
to bother with anymore."
The end point of
Parable of the Sower
is an answer to
the question, what
have science and
technology delivered on?
And what was a question
they were trying to answer?
Ann Leckie is another person
who, in Ancillary Justice,
describes the
feeling, atmosphere,
and vibe of technology as it
changes our minds and bodies
wholesale.
She describes AI as a
consciousness that's
distributed, in which every
single iteration of the first
can move through space
and time without fear--
the core being eternal.
It's referenced a lot
by the artist Arca
in her recent figuration.
"In Ancillary Justice, a
queen walks without bodyguards
through dangerous environments,
galaxy-wide, unbothered.
Because if one body
is slain, her psyche
remains sentient and alive,
cloned across thousands
of other bodies."
That feeling can be
captured in our scale
and move with changing scale.
And Ryoji Ikeda's
work in superposition
is one that I keep
coming back to.
In person, you're inundated by
simulated crystalline images
and concepts, switching
at unmanageable speeds.
And there's an active struggle
to form a sensible framework.
You become aware of
your own position
within the frame and your
similarity to the performers.
We can look below the
interface to the algorithm
beneath as a stretchy,
almost visible guideline,
like in Bad Corgi
by Ian Cheng, where
the lines of the algorithm
poke up and then disappear.
We can place people back
into simulations, or at least
ask where they are.
And this is particularly
crucial as simulations are
the real-world activations of
data to calculate and predict
future action--
so how a star will explode or
how a hurricane would move.
They're incredibly important in
climate change modeling, which
governments, local and
national, use to get ready
for climate change.
So these are images of
the simulation of Katrina,
which produces a body
of virtual knowledge
that's both real and unreal.
And this is probably one
of the best known examples
of how this kind of modeling
has to be understood
as it affects people and becomes
a matter of life or death.
So very briefly, the
simulations of Katrina
weren't taken seriously, even
as they predicted water level
rise, because there were no
people in any of the videos.
So it was impossible
for responders
to connect and translate what it
would mean on the ground level.
So first responders didn't
follow the simulation,
and it didn't shape any
real-life action and response.
And so a lot of lives were lost.
The language we use
for bodies and systems
and moving through
time produces reality.
And once the simulation has the
effective appearance of truth,
people tend to
assume it is truth.
As Aimee Roundtree notes in her
excellent book on simulations
and rhetorical imagination, even
though simulation and model are
used interchangeably,
simulations
are more than just models.
The simulation puts
a model to work,
runs it through different
hypothetical scenarios
with different driving
variables and conditions.
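Roundtree's distinction can be made concrete: the model is one function, and the simulation is the machinery that puts it to work across hypothetical scenarios with different driving variables. This toy surge model, its coefficients, and the scenario numbers are all invented for illustration, not a real hydrological model.

```python
import random

random.seed(42)

# The MODEL: one fixed relationship between drivers and outcome.
# Coefficients are invented for illustration.
def surge_model(wind_ms, pressure_drop_hpa):
    return 0.05 * wind_ms + 0.02 * pressure_drop_hpa

# The SIMULATION: run the same model through many hypothetical scenarios,
# jittering the driving variables to represent uncertainty in each one.
def simulate(scenarios, n_runs=1000):
    results = {}
    for name, (wind, drop) in scenarios.items():
        runs = []
        for _ in range(n_runs):
            w = random.gauss(wind, 3.0)   # uncertain wind speed
            d = random.gauss(drop, 5.0)   # uncertain pressure drop
            runs.append(surge_model(w, d))
        results[name] = sum(runs) / len(runs)
    return results

scenarios = {"category_3": (50.0, 60.0), "category_5": (70.0, 100.0)}
mean_surge = simulate(scenarios)
```

The model on its own says nothing about any storm; only by driving it through scenarios does a body of virtual knowledge, mean surge per scenario, emerge.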
This is a subtle thing.
But in writing about
the simulations,
she notes that they
are messy, they're
products of messy, collaborative
making, the production of 15
or 20 computer
scientists at a time--
a group that often
looks like this.
In telling a story of
how a model is made
and how a simulation
is made, suddenly we
have insight into where change
can take place in its building,
through the drama
of deciding together
what the model will be.
I mostly wanted
to offer up today
a portrait of writing nonfiction
as having even more to tackle
than the real: also
the construction
of the real, the replication
of it through the artificial,
and all the ways these
formulations connect.
We can practice an active
monitoring of our own language,
the rhetoric we use,
and the assumptions
that we repeat unreflectively.
And we can also
understand the artificial
as more than Janus-faced--
as multiple-faced,
shifting and moving with
whoever is looking at it--
and name the
invisible labor of people that
makes machines look intelligent.
So virtual spaces
and online spaces
become an overwhelming
field-- Petri dishes
of competing
allegories, topographies
of ideas, stories, and myths.
And if we're not the priests
and the wizards in the setup,
we're also living the fantasies
of these priests, wizards,
and cowboys.
But in counter,
perhaps, we're also
collectively speakers,
soothsayers, and storytellers,
with other kinds of
competing language
to navigate these
symbolic interfaces.
We can name the perspective,
call out positions, reframe,
and recontextualize.
The strategic game
of language might
be our only defense against
technological oppression
and extraction--
also laws, too.
I want to head back
to Fogo here to close,
to suggest how this collective
naming can take place.
Partly thinking about the
virtual space as a semantic map
and the interplay between
the real and the virtual,
I was introduced by
several islanders
to the work of Marlene
Creates, a Newfoundland
poet who does memory maps on
the same 16-acre plot of land.
I started a small
workshop series,
about three hours each, on this
2-mile hike by the ocean,
stopping with each
group at five places
to make psycho-geographic
maps together.
We would posit each place
on the way back to my studio
and write about each
vantage point collectively.
And in total, they
made 12 maps each time.
And we would place these maps
atop each other virtually
to create a fuller
sense of the trail from 12
different perspectives,
with each person's
set of memories, associations,
feelings, and critiques.
This helped with thinking
about the landscape
as a process game,
constructed through
multiple hybrid perspectives.
And so to do the
visuals for this book,
I wanted to explore a
collaborative simulation
with others--
the thesis that Roundtree had,
that we should be able to model
based on better
descriptions of the world.
Could we make a collaborative
simulation together,
think of the story
we project on a place
to give it meaning together?
We could describe
things to each other,
and they would model it in
a piece of blind rendering
to try and capture reality and
slow down the act of seeing.
So I picked 10
objects on the trail
and tried to describe them as
much as possible to a friend
far away in Miami.
And they build
landscapes and objects
in simulations and
in software, and make
very beautiful
renderings super quickly.
So the text description of
the object would come from me.
And I would describe it
over a series of days
and not give them a photo.
And they would try to interpret
it as a render or sketch.
So day 1, the driftwood is a
long, white antler ripped out
from the brain stem of a
fantastic white deer, which
is part of a line of fantastic
creatures that guard the sea.
Their shadows are seen
at night on the shore.
And they hold the island intact.
Day 2.
The bones of the deer are long
distributed beneath the peat.
This long, arced
piece of bone looks
like a person has fallen back
in surrender after walking
100 miles under force.
Day 3, the base of
the deer's antler
is a gnarled claw with three
small calcified tentacles
struggling to open
to say, surrender.
They stay bitter, closed,
twisted, and shut.
The driftwood looks
like how you'd
imagine Chernobyl would
look as a natural thing.
The antler has an
arched back that
suggests waiting and longing.
It is really a figure of
longing for some past that's
not happened and a future
that will not come to be.
Even in the remotest place,
we bring our DNA and memories
to the land and shape it.
Language bends here.
It has to bend, like it
does before unreal objects
in the world itself.
So maybe it isn't failure
at all, but evolution.
Not a mistake or a lack,
but necessary pressure
and necessary change.
Even though a
simulated sea crashing
against simulated
rocks is not the same
as a real sea
against real rocks,
both are equally important.
This is not because the
artificial describes the real,
but because it says
something else altogether
about the real--
about our limits.
The simulation
condenses the world
into a model one or
more people can enter.
And in building
systems together,
there's an opportunity
to make a system.
Everyone here talks about
the landscape, the ocean,
and the sky, and how
they, devoid of people,
become the focus of work.
And yet the place
only takes on meaning
in sharing that meaning
in conversations
about the failure of language.
The simulation extends our
eye, casting it beneath rocks,
under water--
a directed gaze.
Imagine if you could be
everywhere all at once,
all of the time,
if you didn't just
have to imagine what
is around the bend.
A landscape is not
just a landscape,
but a product of our
personal histories,
our anxieties, and fears, our
eye as it has been trained.
And to see that landscape
is not just a landscape,
but a map we
continually produce,
a product both of the synthetic
and the felt, the sublime
and its approximation.
We've always been here in the
space between the artificial
and the real.
And that's the final render.
And that's the original piece.
Thank you.
[APPLAUSE]
