[MUSIC PLAYING]
[APPLAUSE]
CHRISTOPHER NOESSEL:
So again, thank you
for taking the risk
to come to a talk
where there was a mysterious
word in the title.
I will explain it over
the course of this talk.
So the first place I'm going
to start is with this person.
I've already given a clue,
but does anyone know who this is?
AUDIENCE: Ada Lovelace?
CHRISTOPHER NOESSEL: It is.
It is Ada Lovelace.
And the reason I
bring her up on screen
is not because of
her stature and not
because she was the world's
first computer programmer,
working with Charles
Babbage, but because it
was in her notes on working
with Babbage that she included
this quote.
Babbage, of course, designed
both the Difference Engine
and the Analytical Engine.
And in her notes,
she put the sentence,
"The analytical engine
has no pretensions
to originate anything.
It can do whatever we know
how to order it to do."
Alan Turing, in his
understated seminal paper,
"Computing Machinery
and Intelligence,"
called this Lady
Lovelace's objection.
This caught my eye when
I read it, partially
because she was objecting
to something that nobody
had objected to yet.
They didn't have time.
The computer didn't exist.
So why was she forestalling
an argument here?
Why was she saying,
no, it's cool,
computers can't do anything?
I looked further
back, obviously,
than Lovelace and Babbage,
and found that the answer
is probably in mythology.
Right?
All the way back--
and I know this
is a screengrab from
the Disney version,
but the original Goethe version
of "The Sorcerer's Apprentice"
was a piece of mythology
that said, hey, look,
tech can get way out of control.
And in fact, in that
poem, it was only
the presence of
the sorcerer that
saved the world from
destruction by tiny brooms.
Sorry, Mike.
Another myth comes from
the Golem of Prague.
If you're not
familiar with this,
this was a creature
made of clay.
It had a pretty
cool little thing
where you put a glyph on its
forehead to bring it to life.
But then after that-- this
one doesn't have a mouth--
but anything that you
wrote as an instruction
and put in its mouth, it
would just follow mindlessly
as best as it could.
And it was a story of
hubris and terror--
watch the machine go crazy
with this instruction.
I mean, of course, we
still tell terrifying tales
of machines that go awry
with their instructions
and wreak damage
on humanity in their path.
That's HAL from "2001:
A Space Odyssey."
So there are lots of
places across myth,
before Lovelace's
time, and certainly after,
where we're telling
ourselves these tales
that technology is terrifying
and should not take initiative.
And I'm going to take that
question as we move forward,
partially as a
narrative construct.
But it's also the question
that drove some patterns that I
had identified, starting
about 10 years ago,
though I really started
thinking about them hard
about five years ago.
And there was a pattern
that I saw in the world.
So does anyone know what
the object on the left is?
It's pretty easy.
It's a gimme, right?
This is to encourage
you to talk.
It's a DSLR camera.
In fact, that's the
model that I have.
Does anyone recognize
the object on the right?
AUDIENCE: A tile?
CHRISTOPHER NOESSEL:
It looks like a tile.
In fact, it's creepily like
a tile, but it's not a tile.
AUDIENCE: Narrative?
CHRISTOPHER NOESSEL: Yes, that
is the GetNarrative camera.
And if you're not
familiar with it,
the way it works is you
leave it plugged
in overnight so
it has a battery charge.
And then you unplug it.
As long as that
lens sees light, it
takes a picture every 30 seconds
of whatever's in front of it.
Then you come home at
the end of the day.
The camera's taken about
3,000 to 5,000 photos.
You plug it in, and it
does four cool things.
The first is it automatically
uploads all those photos
to a server.
The second thing is it
uses some smart algorithms
to divide your day into scenes.
Here is when you
were at breakfast.
Here is when you were at lunch.
Here is when you were
listening to Chris talk.
The third thing it does is it
uses some algorithms to detect
what the best
photo from that set
is, based on things
like level of clarity,
contrast, that sort of thing.
And the last thing it does
is it uploads those photos
to your phone, and it
asks, what do you
want me to do with these?
And the app is pretty clever.
You can say, oh yes,
I love that one.
Post that one to social media.
Or you can say, I disagree
with you on this one.
Show me all of the
images, and let
me rifle through,
and by the way,
learn, for next time,
what I like in my images.
You can tag folks,
and you can delete it.
And you can say, wow, don't
share any of these things.
And so that's what the
GetNarrative camera does.
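To make steps two and three concrete, here is a minimal sketch of how a lifelogging camera could split a day into scenes and score a best photo. The 10-minute scene gap and the score weights are invented for illustration; the product's actual algorithms aren't public.

```python
# A toy sketch of the camera's scene-splitting and best-photo steps.
# The gap rule and score weights are invented; the real product's
# algorithms aren't public.
from dataclasses import dataclass

@dataclass
class Photo:
    timestamp: float   # seconds since midnight
    sharpness: float   # 0..1, e.g. from an edge-detection pass
    contrast: float    # 0..1, e.g. spread of the luminance histogram

def split_into_scenes(photos, gap_s=600):
    """Start a new scene whenever shooting pauses for 10+ minutes
    (the camera only fires while the lens sees light)."""
    scenes, current = [], [photos[0]]
    for prev, cur in zip(photos, photos[1:]):
        if cur.timestamp - prev.timestamp > gap_s:
            scenes.append(current)
            current = []
        current.append(cur)
    scenes.append(current)
    return scenes

def best_photo(scene, w_sharp=0.6, w_contrast=0.4):
    """Pick the clearest, best-contrast shot to represent a scene."""
    return max(scene, key=lambda p: w_sharp * p.sharpness
                                    + w_contrast * p.contrast)

day = [Photo(t, (t % 7) / 7, (t % 11) / 11)
       for t in range(28800, 30000, 30)]       # breakfast...
day += [Photo(t, (t % 5) / 5, (t % 13) / 13)
        for t in range(43200, 44400, 30)]      # ...then lunch
scenes = split_into_scenes(day)
print(len(scenes), [best_photo(s).timestamp for s in scenes])
```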
It's part of a category
of cameras called
lifelogging cameras.
And for my money, what makes
this really interesting
is that they're both cameras,
but the one on the right
is a camera without
a photographer.
So anyone know what the
thing on the left is?
AUDIENCE: [INAUDIBLE]
CHRISTOPHER NOESSEL: Dyson
brand vacuum cleaner.
And the thing on the right, I
probably don't have to explain.
It's a Roomba, right?
And similarly to the
GetNarrative camera,
we have two objects
that ostensibly do
the same thing for their users.
But the one on the left requires
you to grab it with your hand.
In fact, that's the only
purpose that handle serves.
And to use it, you step
on a switch, to unlock it,
and you step on a second
switch in order to turn it on.
And then you push the
vacuum around the house.
Meanwhile, this thing
does the same thing,
but there's no handle.
There's no switch for
you to step on to unlock it.
There is something where
you can turn it on manually.
That's what that big button is.
But the important
thing here is that it's
a vacuum without a vacuum-er.
Pardon-- that would
be clever if I
was working in another
language than English.
But there's no human whose
job it is to vacuum there.
It's not quite automatic,
because if I spill cocoa
on my floor, I can
grab that thing,
put it nearby, and
say, clean here.
And it'll focus on that
spot for a little bit.
Roomba has recently
been blasted this week
in the news for some of
their privacy violations.
But let's bypass that for now.
I just want to talk
about the functions.
And the thing on
the left is what?
AUDIENCE: A car.
CHRISTOPHER NOESSEL:
It's a car, right?
And the thing on the right
is your car, at least
the current model
that I'm aware of.
And it's similar in
that same pattern.
It's a car, but
without a driver.
And when you look at all
three of those things,
they are emblematic
of a larger pattern
that I was seeing in my life.
I had an automatic cat
feeder, so when I traveled,
my cat wouldn't starve,
or I didn't have
to ask a friend to come over.
And at the time that I began
thinking of these ideas
really deeply, I was
working on a robo investor.
And so this was
something where you
would say how much
money you had,
how much money you
would give each month,
and what your
financial goals were.
From that point
forward, the investor
would manage your portfolio.
And when thinking
about these patterns, the thing
they all shared was
that they all did things
on their user's behalf.
But they weren't
quite automated.
With the GetNarrative
camera, you
can actually tap it in order to
take a photo or take a video.
With the Roomba,
you can pick it up
and force it to
vacuum somewhere that
wasn't in its current plan.
I'm presuming-- I've
never ridden in one--
but that you can stop the Google
car, and say, I've got to pee,
or, let's take a
food break, or I
want to go see what that
is, the thing over there.
So they're not quite automated.
And when I tried to dig deep
into the epistemological
definition, it seemed to me
that the major difference
was that you grant these things
agency to act on your behalf.
So for my money, this
pattern was rightly
called agentive technology.
I didn't make up that word,
but I rescued that word
from obscurity.
I think before I published
it, the only place it was used
was in linguistics.
But that's what I'm calling
it, this weird space
in between automation
and assistance.
And we'll talk a little
bit more about that later.
But that's the pattern
I had identified.
Now, these are all first-world
problems, admittedly.
But they're super
easy to understand.
And that's why I go
to these as examples
first and quite
commonly, in the book.
But I really wanted
to push the idea
and say, OK, are there
some third-world problems,
or other world problems,
that this technology can
be applied to?
I'm going to review
them quickly.
In the upper left-hand
corner is ShotSpotter.
If you're not familiar with this
tech, what this service does
is it will work with a
precinct in the United States
in order to sprinkle microphones
all over a neighborhood.
Those microphones aren't
particularly great,
but they do have
fantastically accurate clocks,
and they are connected
to the network,
and they're trained
to listen for gunfire.
The minute any of
them hears gunfire,
it reports the exact moment
it heard it back to a server.
And of course, you
can triangulate it.
And the cool thing
is the accuracy
of that is down to a meter.
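For the shape of that math, here is a minimal sketch of the textbook time-difference-of-arrival idea, with made-up microphone positions. This is not ShotSpotter's actual algorithm; it just shows why the accurate clocks matter: synchronized timestamps let you cancel the unknown firing time and search for the point that best explains the differences in arrival times.

```python
# Toy TDOA localization. Mic layout, grid search, and all numbers
# are assumptions for illustration, not ShotSpotter's system.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

# Assumed microphone positions (meters) on a flat neighborhood map.
MICS = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 400.0], [400.0, 400.0]])

def arrival_times(source, emit_time=0.0):
    """When each mic's very accurate clock would hear the shot."""
    dists = np.linalg.norm(MICS - source, axis=1)
    return emit_time + dists / SPEED_OF_SOUND

def locate(reported, extent=500.0, step=2.0):
    """Grid-search for the point whose pairwise arrival-time
    differences best match the reported ones. Differencing against
    mic 0 cancels the unknown moment the gun was fired."""
    target = reported[1:] - reported[0]
    best, best_err = None, np.inf
    for x in np.arange(-50.0, extent, step):
        for y in np.arange(-50.0, extent, step):
            t = arrival_times(np.array([x, y]))
            err = np.sum((t[1:] - t[0] - target) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best

shot = np.array([123.0, 267.0])
print(locate(arrival_times(shot, emit_time=5.0)))
# -> about (123, 267), accurate to the grid step
```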
When I interviewed
ShotSpotter for this book,
they loved to tell a tale
of one pair of officers
who, in winter, were able
to respond so quickly
to the gunshots that,
even though there
was snow on the ground,
and they were worried they
wouldn't be able to find
the shell, they did find
it by the smoking hole
that it left in the snow.
Pretty cool, right?
In the upper right-hand
corner is Volvo--
hey, Kimberly-- with
their self-braking trucks.
In the grille,
they have a series
of sensors that not only track
motion, but actually extrapolate
that motion.
And if the vector of
any object in motion
intersects with the truck, and
the driver is not braking,
the truck will brake.
It's going to save
a ton of lives,
even before we have
self-driving trucks.
That's a human augmentation.
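A toy version of that check, under assumed units and thresholds rather than anything Volvo has published: extrapolate each tracked object's vector and brake if it crosses the truck's path while the driver isn't already braking.

```python
# Toy forward-collision check. Units, thresholds, and field names
# are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Track:
    x: float   # lateral offset from the truck's centerline, meters
    y: float   # distance ahead of the front bumper, meters
    vx: float  # lateral velocity relative to the truck, m/s
    vy: float  # closing velocity, m/s (negative = approaching)

def time_to_collision(obj, truck_halfwidth=1.3):
    """Seconds until the object's extrapolated vector crosses the
    truck's frontal corridor, or None if it never does."""
    if obj.vy >= 0:                # holding distance or pulling away
        return None
    t = -obj.y / obj.vy            # when it reaches the bumper line
    x_at_t = obj.x + obj.vx * t
    return t if abs(x_at_t) <= truck_halfwidth else None

def should_autobrake(tracks, driver_braking, horizon_s=2.0):
    if driver_braking:             # the human is already acting
        return False
    return any((ttc := time_to_collision(tr)) is not None
               and ttc < horizon_s for tr in tracks)

# A pedestrian 10 m ahead, drifting into the lane while closing:
print(should_autobrake([Track(x=3.0, y=10.0, vx=-1.5, vy=-8.0)],
                       driver_braking=False))   # -> True
```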
In the lower left
is a swarm robot
called Prospero, designed
and prototyped in Australia.
And with these
little robots, you
can actually load some seed
into them, show them on a map
where to plant, tell them what
kind of seeds they are, and the
swarm can go out and plant
crops accurately and quickly.
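To gesture at the coordination involved, here is a toy division of a planting map among swarm members; Prospero's real swarm behavior is emergent and far more interesting, so treat this as flavor only.

```python
# A toy split of a planting map across swarm robots. Names and the
# round-robin scheme are invented; this only shows the flavor of
# "show them a map, tell them the seeds, let them split the work."
def assign_plots(rows, cols, robots, seed="maize"):
    """Deal out (row, col) planting spots round-robin to each robot."""
    spots = [(r, c) for r in range(rows) for c in range(cols)]
    return {robot: {"seed": seed, "spots": spots[i::len(robots)]}
            for i, robot in enumerate(robots)}

plan = assign_plots(rows=4, cols=6, robots=["p1", "p2", "p3"])
print({r: len(p["spots"]) for r, p in plan.items()})  # {'p1': 8, ...}
```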
There are major issues around
what we do with the workers who
used to do that sort of thing.
But all of this technology has
that question underneath it.
The nice thing about
Prospero is that it is not
only faster, but also safer.
In dangerous
terrain, the robots
can handle it
fine, and we don't
mind if a robot
breaks its little leg.
In the lower right-hand corner
is a service called Scarecrow.
This is a drone that is
trained to hover high
above endangered herds
and watch for humans
that are approaching.
And if it doesn't
know those humans,
it will drop down in
between the herd and humans,
and the wasp-like buzzing
will scare the herd
in the opposite direction.
Now, I made that last one up.
It's a lie.
But the reason I made it up
is I thought of a problem
in the world that bugged
me-- animal rights.
And I tried to apply this kind
of thinking, this pattern,
this template, to the problem.
And I was able to come up
with something fairly quickly
and fairly compelling.
And in fact, I was giving
this talk in Delft,
in the Netherlands,
fairly recently,
and one of the audience
members raised his hand
and said, oh yeah,
the Dutch people
are already building that.
So it's a viable idea, at least
the Dutch seem to think so.
But all of these are to
illustrate, even though they're
a little more complicated,
a little more nuanced,
that the pattern of
agency doesn't just
apply to first-world
problems, even though those
are easier to understand.
So I want to assert
that, yes, I think
it's as big as we can think.
Put that part down--
pattern established.
I'm going to talk to you
about five reasons why
I think this pattern is nifty.
The first is that it's new.
But you can hear in
my voice, and you
should be able to
see on the slide,
that there are air quotes
around the word new.
And the reason why is because
that image in the background--
does anyone recognize
what that is?
AUDIENCE: A cockpit?
CHRISTOPHER NOESSEL:
Yes, a particular machine
within the cockpit.
AUDIENCE: Auto pilot.
CHRISTOPHER NOESSEL: Autopilot.
Anyone want to
take a stab at when
the first autopilot
was demonstrated?
In Paris, 1914--
103 years ago.
Now, of course, it was
electromechanical at the time.
It took a lot of engineering
effort, a lot of money,
in order to make it.
But that's a piece of
agentive technology.
You grant it agency
to fly the plane.
And it doesn't just
do it automatically.
We still have pilots
on those planes,
even with our modern autopilots.
But let's put that down.
Between that and the
thermostat, which you'll
see in Chapter One of the
book, it's only kind of new.
There are lots of precedents
in the world before this.
But the reason I
say that it's new is
that public APIs for narrow
artificial intelligence have
become available within
the past five years--
IBM's, yours, OpenAI's, right?
The public understanding and
appreciation of these services
are on the rise, so
we have a marketplace
that will accept them.
We don't have to train
them on this sort of thing.
We have a lot of
sensors and actuators
that are available to us,
as designers and makers,
in order to make
these things happen.
I have 32 examples in the book.
There's a question mark there.
I don't remember
the exact number.
But all of them except
for the autopilot
are within the past five years.
So it is new.
I believe that this is a
way of thinking that's ahead
of the curve.
So new, pretty cool.
The second reason is
that it is different.
If you ever studied
interaction design, like I did,
we talk about things like
affordances and mapping and all
those things.
And the canonical example
for that is a hammer.
How does a human know
how to use that hammer?
Well, they look at it, and
they say, oh, well, this
looks like it fits a hand.
And it's longish, so I'm
going to grab it at the end.
That looks hard, so it should be
able to hit something and drive
a nail.
Those affordances, those
mappings, those system
states all belong to
something like a hammer.
But that doesn't apply
when you're talking
about an agentive model.
You don't need an
affordance for the Roomba,
because I don't have
to grab the handle.
I don't need mapping
when it comes to the--
I'm not going to say Google
Car, because, of course,
you need mapping for a car.
I don't need mapping when it
comes to my cat feeder, right?
I just need good controls
to set the thing up
and some kind of feedback
mechanisms to let me
know that, yes, my
cat is being fed.
So for my money, if
the hammer is not
the right model for an
agentive technology, what is?
And I think you can
turn to a butler or a valet
as a better model.
You would tell this person
what your goals were
and what your preferences were,
and from that point forward,
they would manage as
best they could toward
those goals and preferences.
It would come to you when
there was a problem--
hey, the larder is empty.
Or in case of a valet--
I don't have any clean clothes;
something has gone wrong here.
And you would help it.
And you could even
tweak it, saying, wow,
I appreciate your
bringing me that tie,
but I never want
to see it again.
It's hideous and ugly.
To illustrate this notion that
the model must be different,
I built a model of
interaction design.
If you studied this, it
should be pretty familiar.
Human sees the
state of the system.
They think that's not
quite the right state.
What can I do next, judging
the affordances of the system?
And then they do
something to the system.
They press a button, wave
a hand, speak a command.
And then, if that's
the red human part,
the blue is the
familiar computer part.
Computer takes that input, goes
through some sort of algorithm,
and results in an output.
This we know is an oversimplified
model of human cognition,
but it's a very useful
oversimplified model
of human cognition and
interaction design.
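If it helps to see that loop as code, here is my own reduction of it to a runnable toy; the point is that the human initiates every single cycle.

```python
# The see-think-do loop above, reduced to a runnable toy: a human
# nudging a value toward a goal. The human initiates every cycle;
# agentive tech moves the human outside this loop entirely.
def interactive_tool(state, goal):
    while state != goal:                    # human sees system state
        action = 1 if state < goal else -1  # thinks: what can I do next?
        state += action                     # does: input -> algorithm
        print("output:", state)             # computer shows new state
    return state

interactive_tool(state=18, goal=21)  # three turns of the loop
```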
But when we compare that to a
model for agentive technology,
it gets different.
So right, if the computer-- the
[INAUDIBLE] still down here--
but if there's a computer
running the system
with a human, not gone, but just
on the outside of the system,
peeking in occasionally,
making requests,
tweaking, then we
run into a whole lot
of different use cases.
And that's what this map
is. It's in the
book that you have.
It appears piecemeal
throughout section two,
but it's in its whole version in
the very back, in the appendix.
But you can see here, the
unique use cases for this
involve a lot of setup.
How do you give that
agentive technology
your goals and your preferences
towards getting to those goals?
Oh, I want my son to go to
college when he is of age.
Is it going to be
a state school,
or is it going to be a
really expensive school?--
those preferences.
Do I want investments
to go to vice funds
or specifically not
to go to vice funds?--
all the sorts of
things that one might
think about in the setup
of, like, a robo investor.
Use cases for seeing are,
of course, monitoring.
How do I build trust
that this thing
is going to do what
I need it to do
when it's in charge of something
as important as my money?
How do I know that it's working?
How do I know when it's running
out of things, like resources?
So there are some unique
use cases for seeing.
And then there's doing,
right?-- pausing and restarting.
One of the great things
from the research
that we did for the
robo investors--
we wound up talking to a
guy, and we said, OK, if you
had a million dollars,
and you wanted
to give it to the robo
investor, how would you
break that million up?
And he said, well, I'd give the
robo investor probably 900,000,
but I'd keep 100,000 for myself.
And we said, why?
He said, I'm going to
see if I can beat it.
And we were like, this is
an algorithm that looks
at the entirety of the market.
He was like, yeah, but I'm sure
I have some instinct that it
doesn't.
So that need to play
alongside was illustrated.
Because we're talking
about narrow artificial
intelligence-- and we'll get
to that in a little bit--
it's going to get
some things wrong.
So the user needs
tools in order to tune
its blacklists, its
whitelists, its play.
And then, of course,
for problems, there's
a handoff and take-back
problem, where
the ANI runs into something
that it can't cope with.
And we need elegant ways
to pass control
back and forth.
And lastly, since almost
all agents are persistent,
trained to watch some
data stream for a trigger
and then execute
their behaviors,
we get into a problem
with disengagement.
How do we know when the
agent is no longer necessary?
In the case of a robo
investor, it's significant.
If I die, that
money doesn't just
stay with the robo investor.
It needs to go to
somebody that I name.
So that disengagement
becomes really important--
not so much with a
Roomba, but certainly,
in other circumstances.
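Read as code, the map might look like the surface below. The method names are my shorthand for the book's categories, not an API from any real product.

```python
# The use-case map sketched as one possible agent surface. Method
# names are my own reading of the categories: setup, seeing, doing,
# and disengagement.
class AgentiveProduct:
    # setup: conveying goals and preferences
    def set_goal(self, goal): ...
    def set_preference(self, key, value): ...
    # seeing: monitoring and building trust
    def status(self): ...             # is it working? low on resources?
    def explain(self, decision): ...  # a "hood to look under"
    # doing: playing alongside and tuning
    def pause(self): ...
    def restart(self): ...
    def whitelist(self, item): ...
    def blacklist(self, item): ...
    def hand_off(self, problem): ...  # ANI hits something it can't cope with
    def take_back(self): ...          # ...and later returns control
    # disengagement: when the agent is no longer necessary
    def transfer(self, successor): ...  # e.g. money to a named beneficiary
    def shut_down(self): ...
```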
So this model, these
unique use cases,
are underscoring what
I say when I say, hey,
this is new and unique.
OK.
Number three, we have been
studying user-centeredness
for a long time.
And part of what
we are always doing
as we apply design towards
the problems at hand
is we try and maximize
the user value
and minimize the amount
of work that they
have to do to get there.
And I can't imagine a better
equation than nearly zero input
for maximum user value.
Right?
If you're familiar with
Pine and Gilmore, in 1999,
they published a book called
"The Experience Economy."
And they categorize
the types of products
that can be pushed to
market in one of four ways.
The first is a commodity--
barely differentiated, cheap
as dirt, traded on the market.
And if you wanted
a cup of coffee--
and that was their example,
so I'm using it, too--
you would go
to a wholesaler,
bring a bag with you and a
scoop, and scoop that stuff in.
And you'd pay less than a
cent for that bag of beans.
Commodities are really
cheap, because they require
the user to do a ton of work.
The next level up is called
a product, where the company
says, you know what?
Don't worry about going
to the wholesaler.
We're going to grind
these beans for you,
put them in a really cool bag--
we'll even design that
bag to look really cool--
get it in grocery
stores near you.
And for that, you
will pay a premium,
compared to the commodity.
But hey, it's closer to you--
grocery stores or bodegas--
and it's pretty.
It'll look good on your shelf,
and it's already ground.
You don't have to worry
about grinding it.
The third category they
had was a service, where
a company would say,
you know, don't even
worry about going to
the grocery store.
Come on to this space
that we've set aside.
Let's call it a restaurant.
And if you want a cup of
coffee, you just tell us.
We will go into the back,
grab our product made up
of a commodity, make you a cup
of coffee, bring it out to you,
and we will even clean the dish.
And for that, you'll
pay a premium,
compared to the product.
Now, the point of their book was
that there was a fourth layer
that they had
identified and that
had been brought
to their attention
by Starbucks; so logo time.
Starbucks was a service, but
how were they getting away
with charging as much as they
were for a cup of coffee?
And they said it was
because of the experience.
You're not just
going into a diner.
You're going into a
coffee experience, where
there's a ton of stuff:
gorgeous wood paneling,
and lights,
and cool music playing.
And they abuse the
Italian language in order
to sell you this stuff.
And for that, you
consumers-- it's pretty much
been shown in the
marketplace-- are
willing to pay a premium
for that deep experience.
The reason I spend a little
bit of time with this model
is to show that I think
there's another layer of value
that agency unlocks.
All of these require
attention to extract value.
It's either me with a scoop,
me in a grocery store, me
in a diner, or me at Starbucks.
But in the case of
the Roomba, I get
value out of that
object when I'm at work.
My cat is fed when
I am traveling.
There is value
there that doesn't
depend on the 16 hours of my
attention, which is limited
and a very competitive space.
Right?
If you think about that,
the opportunity for value
that you can provide
to your users,
your customers, however you
want to think about them,
is comparatively infinite.
So I believe there is
a post-attention value
that we can begin to
capitalize on when we equip
our products for this mode.
Number four, P.W. Singer
wrote a scary book
called "Wired for War."
In it, he makes this argument
that there are
certain technologies
that once a civilization
adopts, they can't go back.
He's thinking specifically
of drones in warfare.
Once we send machines
to do the fighting
against other
machines, why on earth
would you send your
flesh and blood?
Dark, a good conversation
worth having,
but in terms of agentive
technology, I believe
there's a similar threshold.
Once you have the
Roomba in your life,
how much are you going to
want to go back to the Dyson?
Once you have a car
that will drive you
and your kid to their school,
and you get all this quality
time with them, how
much are you going
to want to get behind the
wheel, and say, get in the back,
and I really can't talk
to you until we get there?
Right?
The amount of value that you
get from an agentive technology
is so great, I believe, that
going back becomes drudgery.
And for that, it's a
threshold technology.
And for that reason, it's a
massive competitive advantage
to those companies
that adopt it first.
Last bit, it's AI--
though I don't think
it's the AI to fear.
But it's certainly
interesting to me.
So let me take just a
moment and situate it.
I include this as
part of the talk,
because I don't know how well
you guys are versed in AI.
But I may be overstepping.
You may be much more
knowledgeable about this
than I am.
Bear with me if you are.
But in the literature
of AI, AI as a field
is broken up into three
primary categories,
depending on the capabilities
of the AI itself.
The first one, the one
that most people think
of when they hear AI,
is general AI, so-called
because the AI can
generalize knowledge,
just like you or I can.
I run a blog about science
fiction interfaces.
It's super nerdy.
And so I'm going to make a
couple of sci-fi references.
And this is my first--
who does not know
the movie "WarGames"?
OK, spoiler alert-- I'm going
to tell you a little bit
about "WarGames."
But in this, there is an AI that
has been trained to play games.
And a lot of them are harmless
games, but one of them
is global thermonuclear war.
What the AI doesn't know
is that it's actually
tied into the nuclear arsenal
of the United States--
oh my god-- and it's
got this countdown.
It's going to start playing this
game and ruin all our lives.
Well, the protagonist
ends up getting an idea
near the end of the film.
Hey, let's play
Tic Tac Toe, and he
begins to play Tic Tac Toe
with this AI called WOPR--
terrible name-- or
Joshua, its nickname.
And playing Tic
Tac Toe over and over again,
the AI suddenly
realizes, oh hey,
this is a game
that can't be won.
And then it generalizes that
knowledge and thinks, huh.
And it begins to run
through scenarios of global
thermonuclear war and says, oh,
that's a game that cannot be
won.
Why on earth would we play it?
The countdown timer
stops, they get saved,
the guy gets the
girl, movie over.
But it's a pretty good
example of general AI, right?
That AI is able to generalize,
doing what you and I have done
since toddlerhood, when we
took the physical things
and the abstractions
in the world
and then began to build
them into the adult knowledge
that we move through
the world with today.
Once we have a general AI, one
of the first things that we're
going to do, whoever gets that,
is to ask it:
can you make a copy of yourself
that is a better predictor
and has better outcomes?
And make sure that that copy
is deeply interested in making
a copy of itself.
And that'll make a copy,
and that will make a copy,
and that will make a copy.
And eventually, what'll
come out the other end--
and it depends on who you
ask how long that takes.
It could take a couple of hours.
It could take a
couple of months,
but it will be something so smart
that the running metaphor is--
its intelligence will be to
us as ours is to a bird's.
We will not have the language
to think in the questions
it is interested in.
If you want to be scared of
an AI, be scared of that AI.
Because if that is not
pre-loaded with a care
for human wellness,
we're in trouble.
And in fact, the mathematician
and science fiction author
Vernor Vinge-- I'm probably
murdering the pronunciation--
coined a term for this moment
when we pass from general AI
to super AI: the singularity,
because we don't know what
life is like with a
functioning god able to answer
our questions.
I raise these
because agentive tech
is a mode of interaction that
does not involve these two.
It might set them up, but
let's talk about that
at the end of the talk.
The one that we have
in the world now,
which is not as
terrifying as these guys,
is narrow artificial
intelligence, so-called
because it's very good
at one or two things
and can't generalize
that knowledge.
I can't ask the Roomba to
help me plan a Thanksgiving
dinner, not yet.
I'm sure it's in the roadmap.
But narrow AI is the one.
I'm a very practical person.
And this is the one
I'm most interested in,
lots of exciting stuff
to talk about here.
What do we do with all the
jobs that are going to be lost?
What do we do?
Who's got the last jobs?
I think it's
designers and judges.
But let's talk about
this, because this is
what we have in the world now.
For my money, we
talk a lot about AI
in terms of what it can do.
But I'm most interested in
what the relationship is
of the user to the work
that the AI is doing.
And I found three categories.
I may be wrong.
If so, let's chat about it.
But the first one is automatic,
stuff that should just happen.
A pacemaker is a good example.
I never want a pacemaker
to ask me in the morning,
do you want me to
pace your heart today?
Because the answer
is always yes.
The same thing with a
grocery store automatic door.
I never want that to ask.
Just open.
Just do it.
This sort of thing does fit
narrow artificial intelligence,
but only for well-constrained,
unpersonalized things.
Assistants are things that
you would use to help you do
a task, like an angel on
your shoulder whispering
in your ear, sci-fi
reference number two--
like Jarvis for Iron
Man or now Friday.
Right?
It's an assistant to him, though
I make the case in the blog
that Jarvis is the Iron Man.
And when I thought about
this, I was like, well,
how do we parse
what we give
to the assistants of the
world and never want
to hand off to a non-human?
And I found 4 and
1/2 categories,
and they're listed up there.
The first one is our jobs.
If you and I have an
economic agreement
that I will do work for you, and
instead, I hand that to an AI
and spend my days on
the beach, we probably
have an ethical problem.
There's an argument
to be made of, oh,
hey, why don't you
sell me that AI,
so that I can use
it at my company?
But for the most part,
you're not doing your job.
You're sneaking
away from your job.
So jobs are something where we
want humans to be outfitted
with assistants, not agents.
The second is human connection.
I suspect that in
about 10 years,
I'm going to greatly
prefer an AI for my doctor,
to be able to recognize
and diagnose the problems
that I have with
my health, right?
Already we know that Watson can
read every medical publication
that is published around the
world pretty much in real time.
Its diagnosis, despite the
MD Anderson troubles of late,
I'm going to be much more
confident in than a human's.
That is not true for a nurse.
Part of the purpose of the
nurse is human connection.
I don't want an AI
waking me up and saying,
do you need more pain killer?
That's not going to
help me feel cared for,
help me feel loved--
well, not loved.
It's not going to get me
on the road to wellness.
So if human connection is part
of the purpose of the thing,
then I don't think we ever want
to hand that off to an agent.
And we do want it
to be an assistant.
The last two are related.
If the point is
physiology, then I
can't send a robot
to do the work.
I can't send a robot to
go to the gym for me.
I'd be jacked if I could.
Skills are similar.
If I'm trying to learn
French, I can't send an agent
to French class in order to
study the language for me.
I have to be there so that my
brain acquires those skills.
Lastly, there's a
half category of art.
I say it's a half category,
because there are plenty
of examples where we find
computer-generated stuff
delightful
and hilarious.
This is InspiroBot,
popular of late,
but it's a neural
net that's trying
to create inspirational images,
and they're just hilarious.
But the reason I say it's
still a half category
is because, when people find
out that a poem they love
has been written by an
AI, they feel betrayed.
Not true with visual imagery.
So it's a half category.
And I'm sure there
are going to be
artists who are going to rock
out using AI in their work.
But that leaves agentive
tech, that thing
where we partner with a
piece of technology in order
to do the things for us
that we want it to do.
And that means everything
else, if it's not one
of those 4 and 1/2 categories.
And that's a lot.
That includes things
that we're not good at.
The whole reason
we had autopilot
is that humans have only about
20 minutes worth of vigilance--
and I know you're already
past that in the audience,
so thank you.
We can
only pay attention
to a signal for 20 minutes
before our attention
begins to drift.
And so we need partners
to help us with the tasks
that we must do that
are longer, that
are beyond common
human capabilities.
It includes tasks
that we're unwilling to do.
We've known about the
Pacific Gyres for the better
part of two decades now.
And we haven't
been cleaning them.
And yes, we could probably
put some humans in some boats
to go out and
clean that mess up.
But once we have
robots doing it,
I think we're going
to be hard pressed
to find humans that want to
take those robots' place.
By the way, the
Pacific Gyre does not
look like that. It's just a
soupy, plastic mess,
not actual trash
floating in the ocean.
And it also includes stuff
that we just cannot do alone.
We know we only have about 4
billion years on this planet,
which doesn't seem like a lot.
But boy, getting
an entire civilization
onto some other rock
is a major undertaking.
And we can send humans
out into the void.
But we're fragile, error prone.
Sending robots is
a much saner idea.
But the farther away
from us they get,
the longer that
communication time is.
And we have to have some
kind of smarts on those robots
as they explore the
galaxy, to handle
things that happen in between
the moments of communication.
Fortunately, NASA
is already on this.
They have something called
the NASA Agent Architecture.
But this is a task we have
to do that we can't do alone,
and we're going to have to
partner in very smart ways
with our technology in
order to make it happen.
Which brings us back to Ada--
you remember that Ada's
objection, or the Lovelace
Objection, as Turing
called it, was,
can computers take initiative?
And I think I've certainly
shown, with the patterns
that I showed at the
beginning, that they can.
The examples I showed
show that they already do.
A few people have
already started
to move their technology
in this direction.
And I'm arguing, for
both business reasons
and for ethical reasons,
that they
should in certain cases.
So part of the mission I've
got, in writing this book
and giving
talks like this,
is to convince people that, hey,
let's move in this direction.
Let's build our
products such that they
are equipped this way.
And so I'm going to leave
you with three questions
to ask for the products
that you guys build.
And the first is
basic, in that it
works from the micro
interaction level,
all the way up to the strategic
directions of products,
and it is this: are we asking
users to do something
that we could do for
them, if they so wanted?
Once you answer that
question, and you
begin to create the
agency of your products
or agentive modes
of your product,
then you have to say, OK, well,
it's not just purely agentive.
The Roombas in the
world I think are
going to be the exception,
something that just fits nicely
into one of those categories.
How can our products
support automation
when the confidence
is super high,
agentive when it's a task
the user doesn't want to do,
and assistive modes?
Because smart,
sophisticated products
will have to work in all three.
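One way to read that second question as code, with an invented confidence threshold standing in for real measures:

```python
# Question two as a routing sketch. The threshold and signals are
# invented; real products would ground these in actual confidence
# measures and user research.
def choose_mode(confidence, user_wants_the_task):
    if confidence > 0.95 and not user_wants_the_task:
        return "automatic"   # just do it, like the grocery-store door
    if user_wants_the_task:
        return "assistive"   # help the human who is doing the task
    return "agentive"        # take it on, report back, allow tuning

print(choose_mode(0.99, user_wants_the_task=False))  # automatic
print(choose_mode(0.70, user_wants_the_task=False))  # agentive
print(choose_mode(0.99, user_wants_the_task=True))   # assistive
```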
And lastly, how does our
product help the user?
Whether the product, the AI, is
taking initiative or the human
is taking initiative,
how do we flow smoothly
between those modes?
I think once we answer
those questions,
we will be many
steps further on the way
to getting our
technology to take
advantage of this
conceptual framework
of agentive technologies.
So you can follow me
@agentivetech or @chrisnoessel.
But that's it.
Let's have a chat about it.
[APPLAUSE]
AUDIENCE: So I had a
question about where
the initiative of
decision-making
falls into this pyramid.
So you mentioned a lot of tasks
that agentive technology will
do that we're unwilling to do.
But it sounded like that's
where the humans have
to kind of give the initiative
to these other devices to do.
How do we seamlessly move
into a place where humans--
where we can suggest things
that we think should be done,
and humans will trust
that suggestion?
I think that's a big
problem that we have now
as we move into a world
where things are getting more
and more intelligent,
and people can
get scared of those suggestions,
which might sound simple,
but they don't know all
of the algorithms that
are going on behind it.
And so it's moving
in that direction.
Do you have any
thoughts on that?
CHRISTOPHER NOESSEL: And
in fact, earlier this year,
the EU passed a
resolution about AI
that included a provision called
the right to explanation, which
says that if you are
subject to an AI's decision,
you as a citizen have the
right to understand how
that decision was
made, which is going
to be problematic in the
world of neural nets.
But in the book, I
describe a pattern
called a hood to look under.
And it's a
trust-building pattern.
And the notion is that, for
any decision, you should be--
and it's easier with narrow AI--
you should give that user an
opportunity to open up the hood
and see how that
decision was made,
disagree with that
decision, and either provide
categorical imperatives for
future behavior, whitelist
or blacklist inclusions, or
even particular tweaks.
So I think you should be
able to go down all the way.
In common usage, I
suspect that's going
to be similar to the
GetNarrative camera--
wait a minute, why did
you show me that image?
Oh, yeah, you're right.
That's the only image from
the party that was clear
and that showed the people well,
and that the balance was right.
OK, I agree with you.
And over time, having
that hood to look under
will help me build trust,
and when I don't trust it,
and it's got it wrong,
to influence it,
such that it behaves
the way I want it to.
So check out that
pattern in the book.
And I think it may
answer your question.
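As a sketch of that pattern's shape, with field and function names that are mine rather than the book's:

```python
# A sketch of the "hood to look under" pattern: every agent decision
# carries a human-readable account of itself, plus a handle for the
# user to push back. Field and function names are invented.
from dataclasses import dataclass, field

@dataclass
class Decision:
    action: str            # e.g. "shared photo_0142"
    reasons: list          # e.g. ["sharpest in scene", "faces visible"]
    feedback: list = field(default_factory=list)

def open_hood(decision):
    """Show how the decision was made, in the user's language."""
    return f"I did '{decision.action}' because: " + "; ".join(decision.reasons)

def disagree(decision, rule):
    """Feed a correction back: a whitelist/blacklist entry or a
    categorical rule for future behavior."""
    decision.feedback.append(rule)

d = Decision("shared photo_0142",
             ["sharpest in scene", "faces visible", "good balance"])
print(open_hood(d))
disagree(d, "never auto-share photos of the kids")
```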
There is a second
question implied there,
which is about recommendations.
And I think that
any system should
be able to recommend to users
new behaviors occasionally.
There's an ugly
tension between agency
that goes off and does its
work out of your attention
and the brand importance
of not becoming
a commodity, brands that want
to be first and foremost.
So there's a weird
tension there,
and I believe that the
corporate pressure we will have
will be to overcommunicate.
And our job as designers is
to match what the humans want.
But I don't have any problem
with an agent coming and making
an occasional recommendation
to me, as a user,
and I suspect users at large.
So I don't think the
decision itself is a problem.
Understanding how
it was made and then
being able to tweak that is the
important part of the pattern.
AUDIENCE: Why
designers and judges?
Why are they the last
ones to have jobs?
CHRISTOPHER NOESSEL:
Yes, I suspect
it's because we
do the same thing.
We judge what's good,
and we know how to--
we are good at understanding
humans and then designing
systems, depending on
your definition of design,
that optimize for a
human set of effects.
And AIs will not
know that inherently,
from the very beginning,
and they'll come to us
to ask those questions.
Similarly, I think that
AIs are an alien intell--
or the general AI will
be an alien intelligence.
And they'll need to
turn to judges in order
to understand what to do or
what humans would ordinarily
do in the edge cases.
And laws, of course, are
all about the edge cases.
So I think those are the
reasons for the last two.
I didn't say this over
the course of the talk,
but I'm actually pretty
hopeful about the role
that agentive technologies
would have in setting up
a general AI to be benign.
I know it's risky to say.
But if the output of a ton
of agentive technologies
in the world is a
set of instructions
for how you want things to
behave in order to serve you
well, what we'll have
at the end of that,
at the advent of general AI,
is 4 million laws of robotics,
where an AI that can read all
of these instructions at once
can begin to infer, oh,
this is how humans generally
want to be treated.
And that's a pretty important
big data piece of information
to handle general AI.
So I'm hopeful in that regard.
Yes?
AUDIENCE: I'm still
thinking through this.
But what are some examples
of agentive products
that you've found have been
able to close the gap maybe
between the manual, traditional
way of doing something and then
handing off that service
and building off that, maybe
some agentive
products or completely
AI-generated products?
Like, the poems that
you found people just
did not like the machine
having generated--
how have you seen
ways that that gap
can be closed to where they
do appreciate those poems
in the future, for instance?
CHRISTOPHER NOESSEL: So
let's tackle the two.
The first question was, what are
some examples of products that
have genuinely managed to work
fine and go from human distrust
to human trust?
Spam filters-- spam filters
are a really great example,
partially because, when
they were first there,
people didn't
necessarily trust them,
and partially because
well-done spam filters--
and I include Gmail on this--
still give you the
opportunity to go in
and review, open the hood--
do I agree with you
on these things?
They give controls
for blacklists--
I never want to hear
from this person again--
and white lists-- I always
want to hear from this person--
and they incorporate deep
narrow artificial intelligence
and group feedback.
If 20 people all mark
this as undesirable,
you can pretty much guarantee
that the other users that
are similar to them will
find the same thing.
So email filters are a great,
great example, partially
because, I don't
know about you, but I
haven't gone into my rejected
folder in a really long time,
because I trust them.
Yeah?
Does anyone still go in?
Right?
So that's a great
example as a model.
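For the shape of those controls, here is a cartoon of the three signals; real filters, Gmail's included, layer learned models on top, and the threshold here is made up.

```python
# A cartoon of the three user-facing signals just described:
# whitelist, blacklist, and group feedback. Real filters layer
# learned content models on top; the threshold is invented.
def route_message(msg, whitelist, blacklist, community_flags,
                  flag_threshold=20):
    if msg["sender"] in whitelist:    # "always hear from this person"
        return "inbox"
    if msg["sender"] in blacklist:    # "never hear from them again"
        return "spam"
    # group feedback: many similar users marked this campaign undesirable
    if community_flags.get(msg["campaign"], 0) >= flag_threshold:
        return "spam"
    return "inbox"  # a learned model would score the content here

msg = {"sender": "promo@example.com", "campaign": "c42"}
print(route_message(msg, set(), set(), {"c42": 57}))  # -> spam
```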
The second part of
your question was, oh,
things that have failed.
I don't have a great
answer for that right
off the top of my head.
The GetNarrative
is a risky example,
partially because that
company has been struggling.
But I don't think it's because
of the agency of the product.
I think it's actually because
of our concerns about privacy.
The domains in which
I can wear a camera
and not worry about the privacy
of the people in front of me
is actually pretty
limited across my day.
It's not my work, certainly
not inside my family,
unless I'm just sharing
those to a small group,
like maybe a party.
But still, when I wore one-- and
I did for about four months--
people would be
like, what's that?
Maybe hiking, maybe
public things,
like walking around
a city, but that's
a small percentage of
certainly my time in the world.
So I think that the
problems that they had
weren't about the technology,
but about bringing
that technology to
that particular domain.
I'll think about
it a little more.
I think Roomba is making a big
mistake with the announcement
that they gave this week, that
they have slowly been building
up a model of people's homes.
And even though
they said, we'll let
users opt in to sharing that
home map, the details
that Roomba has found, with
co-marketers, it's still
super creepy, knowing
that it's creating
a model of your home that
can be hacked by anyone.
So I don't know how
that's going to fare.
I suspect that the announcement
this week was a bit of a flag
to see how people responded
and it's being overshadowed
by modern politics.
But I hope that they get
some negative feedback,
because people are
going to be creeped out.
I can think of
some other examples
and tweet them out
there, if you like.
I will volunteer
four other things.
The first is-- one
of the questions
that I get asked a lot
about is, what do we
do with the humans
in the world of ANI?
I don't have a pat
answer for that,
but I am a big believer
in universal basic income.
I think it's the--
we have always had technologies
that slowly obviate jobs,
but the speed at which we are
going to replace entire fields
is going to be massive.
And we have to come at that
with a cultural answer, not an
oh-it'll-work-itself-out free
market kind of answer.
The other three
are the next steps.
Over the course of this book--
and I'll give this to you guys,
because you're a pretty
advanced audience--
I talk about the
world as if there
were one user and one agent.
That's certainly not true.
We have to get good
at that foundation,
but over time, it's going to be
one user to many agents,
and even multiple users
to multiple agents.
I don't talk about that
here, but it's something
that we're going
to have to solve
as a community of practice
working with this.
The second thing is that I don't
talk about the third category,
the assistive tech.
I believe that the long history
we have of interaction design
will fit us well to
assistive technology.
But I'm also quite
concerned that that becomes
a human crutch for
cognition, as opposed
to equipment that
makes us better.
I also don't talk about
the flow between agentive
and assistive and
automatic in this book.
But if you begin
to incorporate agentive
aspects into your product,
and it's not purely agentive
like the Roomba,
that's something you'll
have to work through.
And I don't give
any advice on that.
I'm working with
Wittgenstein's ladder, right?
Let's get this done
first, and then we
can build up those other skills.
Thank you, guys.
[APPLAUSE]
