MALE SPEAKER: Welcome.
Let's start with a pop quiz.
What do Benjamin Franklin,
Karl Marx, and the philosopher
Hannah Arendt have in common?
Anybody?
So they all proposed this
notion of calling humans
Homo faber-- man, the toolmaker.
So we make tools.
We make tools.
We alter the environment,
and then the tools alter us.
And sometimes we lament that.
And sometimes these tools
have big effects-- clothing,
cooking, fire, automobiles,
computers, and so on.
Sometimes they have
smaller effects.
But it looks like
right now we're
in a period where
we're going to start
using more and more tools.
They're going to be ubiquitous
throughout our life.
And our speaker
today, Nicholas Carr,
has taken it upon himself
to investigate this.
How do these tools change us?
What's for the better,
what's for the worse,
and can we figure out
a way to design them
so that we'll live
better with them?
Welcome to Google,
Nicholas Carr.
[APPLAUSE]
NICHOLAS CARR: Thank you.
Thank you.
Thanks very much, Peter.
And thanks to [? Anne ?]
Farmer for shepherding me
through the process
and bringing me here.
And thanks to Google for
hosting these events.
I've been to a couple
of other Google offices,
but this is the first time
I've been to the headquarters.
So it's exciting.
The Googleplex has kind of
played a role in my fantasy
life for a long
time, I realized.
Not a weird role; it's kind of a
dull fantasy life, but what can
I say?
So it's good to
be here in person.
I started writing
about technology
about 15 years ago
or so, more or less
the same time that Google
appeared on the scene.
And I think it was
good timing for Google.
And it was also
good timing for me,
because there's been plenty,
obviously, to write about.
And like, I think, most
technology writers, I
started off writing about the
technology itself-- features,
design, stuff like
that-- and also about
kind of the economic and
financial side of the business,
so competition between
technology companies
and so forth.
But over the years, I
became kind of frustrated
by what I saw as the narrowness
of that view, that just looks
at technology as technology
or as an economic factor.
Because what was
becoming clearer
was that computers, as they
became smaller and smaller
and more powerful
and more connected,
and as programmers became
more adept at their work,
computing, computation, digital
connectivity, and everything
was infusing more and more
aspects of everybody's life--
at work, during
their leisure time.
And so it struck me
that, as is always true,
and as Peter said,
technology frames, in many ways,
the context in which we live.
And it seemed to me
important to look
at this phenomenon, the
rise of the computer as kind
of a central component
of our lives,
from many different angles.
So, see what
sociology could tell
us, what philosophy
could tell us,
and all these different
ways we can approach
an important phenomenon
that's influencing our life.
So four or five
years ago I wrote
a book called "The
Shallows" that examined
how the use of the internet
as an informational medium
is influencing the way we think,
and how we're adapting to this,
not only availability of
vast amounts of information,
but more and more an actual
active barrage of it,
and what that meant
for our ability
to tune out the flow
when we needed to,
and really engage
attentively in one task,
or one train of thought.
And as I was writing
"The Shallows,"
I also started becoming aware
of this other realm of research
into computers that struck me
as dealing with an even broader
question, which is what happens
to people and their talents
and their engagement
with the world
when they become
reliant on computers
in their various forms to
do more and more things?
So what happens when we automate
not just factory work, but lots
of white-collar,
professional thinking,
and what happens when we begin
to automate a lot of just
the day to day
activities that we do?
We've become more and
more reliant on computers,
not necessarily to take
over all of the work,
but to become our aid to help
shepherd us through our days.
And that was the spark
that led to "The Glass
Cage," my new book, which
tries to look broadly
at the repercussions
of our dependence
on computers and
automation in general,
but also looks at
the question of,
are we designing this stuff
in an optimal fashion?
If we want a world in which we
get the benefits of computers
but we also want people to live
full, meaningful lives; develop
rich talents; interact with
the world in diverse ways,
are we designing all of
these tools-- everything
from robots to simple
smartphone apps-- in a way that
accomplishes both those things?
And what I'd like to do is
just read a short section
from the book that,
to me, provides
both an example of a lot of
the things I'm talking about,
a lot of the tensions
I'm talking about,
but also provides sort of
a metaphor for, I think,
the circumstances we're in
and the challenges we face.
And this section, which comes
in the middle of the book,
is about the use of computers
and automation, not in a city
or even in a kind
of Western country,
where there's tons of
it, but in a place that
looks like this-- up
in the Arctic Circle,
far, far away, a place
you might think
is shielded from computers and
automation but in fact is not.
So let me just read this to you.
"The small island of
Igloolik, lying off
the coast of the
Melville peninsula
in the Nunavut territory
of the Canadian north,
is a bewildering
place in the winter.
The average temperature hovers
around 20 degrees below zero.
Thick sheets of sea ice
cover the surrounding waters.
The sun is absent.
Despite the brutal
conditions, Inuit hunters
have for some 4,000 years
ventured out from their homes
on the island and traversed
miles of ice and tundra
in search of caribou
and other game.
The hunters' ability to
navigate vast stretches
of barren, Arctic terrain,
where landmarks are few,
snow formations are
in constant flux,
and trails disappear
overnight, has
amazed voyagers and
scientists for centuries.
The Inuits' extraordinary
wayfinding skills
are born not of technological
prowess-- they've
eschewed maps, compasses,
and other instruments--
but of a profound understanding
of winds, snow drift
patterns, animal behavior,
stars, tides, and currents.
The Inuit are masters
of perception.
Or at least they used to be.
Something changed
in Inuit culture
at the turn of the millennium.
In the year 2000,
the US government
lifted many of the restrictions
on the civilian use
of the global
positioning system.
The Igloolik hunters, who had
already swapped their dog sleds
for snowmobiles, began to rely
on computer-generated maps
and directions to get around.
Younger Inuit were
particularly eager to use
the new technology.
In the past, a young hunter had
to endure a long apprenticeship
with his elders,
developing his wayfinding
talents over many years.
By purchasing a
cheap GPS receiver,
he could skip the training
and offload responsibility
for navigation to the device.
The ease, convenience,
and precision
of automated navigation made the
Inuits' traditional techniques
seem antiquated and
cumbersome by comparison.
But as GPS devices
proliferated on the island,
reports began to spread
of serious accidents
during hunts, some resulting
in injuries and even deaths.
The cause was often traced to
an overreliance on satellites.
When a receiver breaks
or its batteries freeze,
a hunter who hasn't developed
strong wayfinding skills
can easily become lost
in the featureless waste
and fall victim to exposure.
Even when the devices operate
properly, they present hazards.
The route, so meticulously
plotted on satellite maps,
can give hunters a
form of tunnel vision.
Trusting the GPS
instructions, they'll
speed onto dangerously
thin ice, or
into other environmental perils
that a skilled navigator would
have had the sense and
foresight to avoid.
Some of these problems
may eventually
be mitigated by improvements
in navigational devices,
or by better instruction
in their use.
What won't be
mitigated is the loss
of what one tribal
elder describes
as "the wisdom and
knowledge of the Inuit."
The anthropologist
Claudio Aporta,
of Carleton
University in Ottawa,
has been studying Inuit
hunters for years.
He reports that while
satellite navigation offers
attractive advantages,
its adoption has already
brought a deterioration
in wayfinding abilities,
and more generally, a
weakened feel for the land.
As a hunter on a
GPS-equipped snowmobile
devotes his attention to
the instructions coming
from the computer, he loses
sight of his surroundings.
He travels blindfolded,
as Aporta puts it.
A singular talent that has
defined and distinguished
a people for thousands of
years may well evaporate over
the course of a
generation or two."
When I relate that
story to people,
they tend to have
one of two reactions.
And my guess is both
of those reactions
are probably represented
in this room.
One of the reactions
is a feeling
that this is a poignant story.
It's a troubling story,
story about loss,
about something essential
to the human condition.
And that tends to be the
reaction I have to it.
But then there's a very
different reaction,
which is, well, welcome
to the modern world.
Progress goes on, we adapt, and
in the end, things get better.
And so if you think
about it, most of us,
probably all human beings, once
had a much more sophisticated
navigational sense,
inner navigational sense,
much more sophisticated
perception
of the world, the landscape.
And for most of us, we've
lost almost all of that.
And yet, we didn't go extinct.
We're still here.
By most measures,
we're thriving.
And I think that is also a
completely valid point of view.
It's true that we lose
lots of skills over time,
and we gain new ones
and things go on.
So in some ways,
your reaction to this
is a value judgment about
what's meaningful in human life.
But beyond those
value judgments,
I think one thing,
or a couple things
that this story, this
experience tells us,
is how powerful
a new tool can be
when introduced into a culture.
It can change the
way people work,
the way people
operate, the way they
think about what's
important, the way they
go about their lives
in many different ways.
And it can do this
very, very quickly,
overturning some
skill or some talent
or some way of life that's been
around for thousands of years,
just in the course
of a year or two.
So introducing computer
tools, introducing automation,
any kind of technology that
redefines what human beings do,
and redefines what we do
versus what we hand off
to machines or computers can
have very, very deep and very,
very powerful effects.
And a lot of these effects are
very difficult to anticipate.
So the Inuit hunters,
the young hunters,
didn't go out and
buy GPS systems
because they wanted to
increase the odds that they'd
get lost and die.
And they probably
weren't thinking
about eroding some
fundamental aspect of culture.
They wanted to get
the convenience,
the ease of the system,
which is what many of us
are motivated by when
we decide to adopt
some kind of new form of
automation in our lives.
And when you look at all
these unanticipated effects,
you can see a very common
theme that comes out
in research about
automation, and particularly
about computer automation.
And it's something that's been
documented over and over again
by human factors scientists
and researchers, the people who
study how people interact with
computers and other machines.
And the concept is referred
to as "the substitution myth."
And it's very simple.
It says that
whenever you automate
any part of an activity,
you fundamentally
change the activity.
And that's very different
from what we anticipate.
Most people, either
users of software
or other automated systems
or the designers, the makers,
they assume that
actually you can
take bits and pieces
of what people do.
You can automate them.
You can turn them over to
software or something else.
And you'll make those
parts of the process
more efficient or more
convenient or faster
or cheaper.
But you won't fundamentally
change the way people
go about doing their work.
You won't change their behavior.
In fact, over and
over again we see
that even small changes,
small shifts of responsibility
from people to
technology, can have
very big effects on
the way people behave,
the way they learn, the way
they approach their jobs.
We've seen this recently with
the increasing automation
of medical record keeping.
As you probably know,
we've moved fairly quickly
over the last 10
years from doctors
taking patient notes on paper,
either writing them by hand
or dictating them,
to digital records.
So doctors, usually as
they're going through an exam,
will take notes, usually
going through a template
on a computer or on a tablet.
And for most of us,
our initial reaction
is, thank goodness for that.
Because having records on
paper was a pain in the neck.
You'd have to enter
the same information
again whenever you
went to different doctors.
And God forbid you got sick
somewhere else in the country,
or something, and doctors
couldn't exchange,
had no way to share
your old records.
So it makes all sorts of
sense to automate this
and to have digital records.
And indeed, 10 years ago when
we started down this path,
the US started down
this path, there
were all sorts of studies
that said, oh, we're
going to save enormous
amounts of money.
We're going to increase patient
care, quality of health care,
as well as make it easier
to share information.
And there was a big study
by the Rand Corporation
that documented all this.
They had modeled the
entire health care
system in a computer and
output various things.
This was going to
be all to the good.
Well, the government
went on to subsidize
the adoption of
electronic medical records
to the tune of something
like $30 billion since then.
And now we have a lot
of information about
what's really happened.
And nothing that was expected
has actually played out.
And all sorts of things
that weren't expected, have.
For instance, the cost
savings have not materialized.
Cost has continued to go up.
And there are even
some indications
that beyond the expense required
for the systems themselves,
this shift may increase
health care costs
rather than decrease them.
The evidence on quality of
care is very, very mixed.
There seems to be no doubt
that for some patients, those
with chronic
diseases that require
a lot of different
doctors, quality goes up.
But for a lot of patients,
there hasn't been a change.
And there may even have
been an erosion of quality,
in some instances.
And finally, we're not
even getting the benefits
of broad sharing of the records,
because a lot of the systems
are proprietary.
And so you can't transfer
the records quickly or easily
from one hospital to the next
or one practice to the next.
And now some of these problems
just come from the fact
that a lot of
software is crappy.
And we've rushed to spend
huge amounts of money on it.
Lots of big software
companies that supply this
have gotten very wealthy.
And doctors are
struggling with it.
Patients are struggling with it.
And so some of
those things will be
fixed at more expense over time.
But if you look down lower,
you see changes in behavior
that are much more subtle
and much more interesting
and go beyond the quality
of the software itself.
So for instance, one of the
reasons that everybody expected
that health care costs would
go down-- the assumption
was that as soon as doctors can
call up images and other test
results on their computers
when they're in with a patient,
they wouldn't order more tests.
So we'd see fewer diagnostic
tests and fewer costs
from those diagnostic
tests-- big part
of the health care
system's costs.
Actually, exactly the opposite
seems to be happening.
You give the doctor an
ability to quickly order
tests and quickly
pull up the results,
doctors actually
order more of them
because they know it's
going to be easier for them.
And so the quality of the
outcomes doesn't go up.
We're just seeing more
diagnostic tests and more
costs, exactly the opposite
of what we expected.
You see changes in the
doctor/patient relationship.
If you've had the experience,
if you've been around for a while
and had the experience
of going from a world
in which you went into a
doctor's office for a physical
or whatever and the doctor
paid his or her whole attention
to you, to the world of
electronic medical records,
when the doctor
has a computer, you
know that it intrudes on the
doctor/patient relationship.
Studies show that
doctors now spend,
if they have a computer
with them, about 25% to 50%
of the time during an exam
looking at the computer, rather
than the patient.
And doctors aren't
happy about that.
Patients don't tend
to be happy about it.
But it's kind of a
necessary consequence,
at least how we've
designed these systems,
of this transfer.
The most interesting--
I'm just going
to give you three examples
of unexpected results--
but the most interesting
to me is the fact
that the quality of the records
themselves has gone down.
And the reason is that
first of all, now doctors
use templates and
checkboxes a lot of the time.
And then when they
have to put in text
describing the
patient's condition,
rather than dictating it from
what they've just experienced
or hand writing it,
they cut and paste.
They cut and paste
paragraphs and other stuff
from other visits that
the patient has had,
or from visits by other patients
that have similar conditions.
And this is referred to
as "the cloning of text."
And more and more of
personal medical records
consist of cloned
text these days,
which makes the records
less useful for doctors,
because they contain less rich
and subtle information.
And it also undermines
an important role
that records used to play in
the exchange of information
and knowledge.
A primary care physician used
to get a lot of information,
a lot of knowledge by
reading rich descriptions
from specialists.
And now, more and
more, as doctors say,
this is just boilerplate,
just cloned text.
So we've created this system
that eventually will probably
have the very important benefit
of allowing us to exchange
information more and more
quickly, more and more easily.
But at the same
time, we're reducing
the quality of the
information itself
and making what's
exchanged less valuable.
Now those are three examples of
how the substitution myth has
played out in this particular
area of automation.
And they're very
specialized, and you
see all sorts of these
things anywhere you look.
But there are a couple
of bigger themes that
tend to cross all
aspects of automation
when you introduce software
to make jobs easier
or to take over jobs.
What you tend to get, in
addition to the benefits,
are a couple of big
negative developments.
Human factors experts,
researchers on this,
refer to these as "automation
complacency" and "automation
bias."
Automation complacency means
exactly what you would expect.
When people turn over
big aspects of their job
to computers, to software,
to robots, they tune out.
We're very good at trusting
a machine, and certainly
a computerized machine, to
handle our job, to handle
any challenge that might arise.
And so we become complacent.
We tune out.
We space out.
And that might be fine
until something bad happens,
and we suddenly
have to re-engage
with what we're doing, and then
you see people make mistakes.
Everybody experiences automation
complacency in using computers.
A very simple example is
autocorrect for spelling.
When people have
autocorrect going,
when they're texting or using
a word processor or whatever,
they become much more
complacent about their spelling.
They don't check things.
They let it go.
And then most people have had
the experience of sending out
a text or an email
or a report that
has some really
stupid typo in it,
because the computer
misunderstood your intent.
And that causes maybe a
moment of embarrassment.
But you take that same
phenomenon of complacency
and put it into an industrial
control room, into a cockpit,
into a battlefield,
and you sometimes
get very, very
dangerous situations.
One of the classic examples
of automation complacency
comes in the cruise
line business.
A few years ago, a
cruise ship called
the Royal Majesty was on
the last leg of a cruise
off New England.
It was going from Bermuda,
I think, to Boston.
It had a GPS antenna that
was connected to an automated
navigation system.
The crew turned on the
automated navigation system,
and kind of became
totally complacent--
just assumed, OK,
everything's going fine.
Hey, the computer's
plotting our course,
don't have to worry about it.
And at some point the antenna,
the line to the GPS antenna
broke.
And this was way up
somewhere and nobody saw it.
And nobody noticed.
There were increasing
environmental clues
that the ship was
drifting off course.
Nobody saw it.
At one point, a
mate whose job it
was to watch for
a locational buoy
and report back to the bridge
that, yeah, we passed this
as we should have, he was
out there watching for it
and he didn't see it.
And he said, well,
it must be there
because the computer's
in charge here.
I just must have missed it.
So he didn't bother
to tell the bridge.
He was embarrassed
that he had missed
what must have been there.
Well, hours go by, and
ultimately the ship
crashes into a sandbar off
Nantucket Island many miles
off course.
Fortunately, no one was
killed or badly injured,
but there was millions
of dollars of damage.
It kind of shows
how easily, if you
give too much responsibility
to the computer,
people will tune out.
And they won't notice
things are going wrong,
or if they do notice, they might
make mistakes in responding.
Automation bias is
closely related
to automation complacency.
And it just means that
you place too much trust
in the information coming
from your computer,
to the point where
you begin to assume
that the computer is infallible.
And so you don't
have to pay attention
to other sources of information,
including your own eyes
and ears.
And this too is something we
see over and over again when
you automate any
kind of activity.
A good example is the use
of GPS by truck drivers.
A truck driver starts to
listen to the automated voice
of the GPS woman telling them
where to go and whatever.
And he or she begins to ignore
other sources of information
like road signs.
So we've seen an increase in
the incidence of trucks crashing
into low overpasses as we've
increased the use of GPS.
And in Seattle a
few years ago, there
was a bus driver carrying
a load of high school
athletes to a game somewhere.
A 12-foot-high bus approached
a nine-foot-high overpass.
And there were all these signs
along the way-- "Danger-- Low
Overpass" or even signs that
had blinking lights around them.
He smashes right into it.
Luckily, no one died.
A bunch of students
had to go to the hospital.
The police said, what
were you thinking?
And he said, well,
I had my GPS on
and I just didn't see the signs.
So we ignore, or don't even see,
other sources of information.
In another very different
area, back to health care,
if you look at how radiologists
read diagnostic images today,
most of them read them as
digital images, of course.
But also there's
now software that
is designed as a decision
support aid and analytical aid.
And what it does is it gives
the radiologist prompts.
It highlights particular regions
of the image that the data
analysis, past data,
suggests are suspicious.
In many cases, this
has very good results.
The doctor focuses attention
on those particular highlighted
areas, finds a cancer
or other abnormality
that the doctor may have missed.
And that's fine.
But research shows that it also
has the exact opposite effect.
Doctors become so focused
on the highlighted areas
that they only pay cursory
attention to other areas,
and often miss abnormalities
or cancers that
aren't highlighted.
And the latest research suggests
that these prompt systems,
which, as you know, are
very, very common in software
in general, these
prompt systems seem
to improve the performance
of less expert image readers
on simpler challenges,
but decrease
the performance
of expert readers
on very, very hard challenges.
The phenomena of automation
complacency and automation bias
point to, I think, an even
deeper and more insidious
problem that poorly designed
software or poorly designed
automated systems
often trigger.
And that is that in
both of those cases,
with complacency and
bias, you see a person
disengaging from the
world, disengaging
from his or her
circumstances, disengaging
from the task at
hand, simply assuming
that the computer
will handle it.
And indeed, the computer
has been designed-- whatever
system we're talking about-- has
been designed to handle as much
of the chore as possible.
And what happens then is
we see an erosion of talent
on the part of the person.
Either the person isn't
developing strong, rich
talents, or their
existing talents
are beginning to get rusty.
And the reason
is pretty obvious.
We all know, either intuitively
or if you've read anything
about this, how we develop rich
talents, sophisticated talents.
It's by practice.
It's by doing things over
and over again, facing
lots of different
challenges in lots
of different
circumstances, figuring out
how to overcome them.
That's how we build the
most sophisticated skills
and how we continue
to refine them.
And this element,
this crucial element
in learning, in
all sorts of forms,
is often referred to as
"the generation effect."
And what that means
is, if you're actively
engaged in some task,
in some form of work,
you're going to not only
perform better, but learn more
and become more expert
than if you're simply
an observer, simply passively
watching as things progress.
And the generation
effect was first
observed in this very
simple experiment involving
people's ability to
expand vocabulary, learn
vocabulary, remember vocabulary.
And what the researchers
did back in the '70s
is they got two groups
of people to try
to memorize lots of
pairs of antonyms,
lots of pairs of opposites.
And the only difference
between the two groups
was that one group
used flash cards that
had both words spelled
out entirely-- hot,
cold-- the other had flash cards
that just had the first word,
"hot," but then provided
only the first letter
of the second word, so "c."
And what they found
was that indeed,
the people who
used the full words
remembered far
fewer of the antonyms
than the people who had to
fill in the second word.
The reason?
There's a little bit more
brain activity involved here.
You actually have to call
to mind what this word is.
You have to generate it.
And just that small
difference gives you
better learning,
better retention.
A few years later,
some other researchers,
some other professors
in this area,
realized that actually, this is
kind of a form of automation.
What this does-- giving
the full word-- in essence
automates the work of
filling in the word.
And they explained this as,
in fact, a phenomenon related
to automation complacency.
You might be completely
unconscious of it,
but your brain is a
little more complacent.
It doesn't have to work
as hard in this mode.
And that makes a big difference.
And it turns out that
the generation effect
kind of explains
a whole lot about
how we learn and develop
skill in all sorts of places.
It's definitely not just
restricted to studies
of vocabulary.
You see it everywhere.
If you're actively
involved, you learn more.
You develop more expertise.
If you're not, you don't.
And unfortunately, with
software, more and more,
the programmer, the
designer, actually
gets in the way of
the generation effect.
And not by accident,
but on purpose.
Because of course, the
things we tend to automate,
the things we tend to
simplify for people,
are the things that
are challenging.
You look at a process.
You look where people
are struggling.
And that is often both the most
interesting thing to automate
and the thing that whoever's
paying you to write the software
is encouraging you to automate,
because it seems to
create efficiency.
It seems to create productivity.
But what we're doing is
designing lots of systems,
lots of software that
actually deliberately--
if you look at it
in that sense--
gets in the way of
people's ability
to learn and create expertise.
There was a series
of experiments
done beginning about 10 years
ago by this young cognitive
psychologist in Holland
named Christof van Nimwegen.
And he did something
very interesting.
He got a series of
different tasks.
One of them was solving a
difficult logic problem.
One of them was organizing
a conference where
you had a large number
of conference rooms,
large number of speakers,
large number of time slots,
and you had to optimize how you
put all those things together.
So a number of tasks that
had lots of components,
required a certain
amount of smarts,
and required you to work through
a hard problem over time.
And in each case, he got
groups of people, divided them
into two, created two
different applications
for doing each of these.
One application was
very bare bones.
It just provided you
with the scenario
and then you had
to work through it.
The other was very helpful.
It had prompts.
It had highlights.
It had advice, on-screen advice.
When you got to a point where
you could do some moves but you
couldn't do others, it would
highlight the ones you could do
and gray out the
ones you couldn't.
And then he let them go
and watched what happened.
Well, as you might
expect, the people
with the more helpful software
got off to a great start.
The software was
guiding them, helping
them make their initial
decisions and moves.
They jumped out
to a lead in terms
of solving the challenges.
But over time, the people
using the bare bones software,
the unhelpful software, not
only caught up but actually,
in all the cases, ended up
completing the assignment much
more efficiently,
made far fewer
incorrect moves,
far fewer mistakes.
They also seemed to have
a much clearer strategy,
whereas the people using the
helpful software kind of just
clicked around.
And finally, van Nimwegen
gave them tests afterwards
to measure their conceptual
understanding of what
they had done.
People with the
unhelpful software
had a much clearer
conceptual understanding.
Then eight months later, he
invited just the logic puzzle
group.
He invited all
the people who did
that back, had them
solve the problem again.
The people who had, eight months
earlier, used the unhelpful
software solved the
puzzle twice as fast
as the people who had used
the helpful software.
The more helpful the software,
the less learning, the weaker
performance, the less strategic
thinking of the people
who used it.
Again, this underscores
a fundamental paradox
that people face, people
who develop these programs
and people who use them, where
our instinct to make things
easier, to find the
places of friction
and remove the
friction, can actually
lead to counterproductive
results, where you're
eroding performance
and eroding learning.
So if you look at all
the psychological studies
and the human factors
studies of how
people interact with machines
and technology and computers,
and you also combine it with
psychological understanding
of how we learn, what
you see is that there's
a very complex cycle involved.
If you have a high
degree of engagement
with people, if
they're really pushed
to engage with
challenges, work hard,
maintain their awareness
of their circumstances,
you provoke a state of flow.
If you've read Mihaly
Csikszentmihalyi's book "Flow"
or are familiar with
it, we perform optimally
when we're really immersed in
a hard challenge, when we're
stretching our talents,
learning new talents.
That's the optimal
state to be in.
It gives us more skills,
pushes us to new talents,
and it also happens to be
the state in which we're
most fulfilled and
most satisfied.
Often, people have this feeling
that if they were relieved
of work, relieved
of effort, they'd
be happier-- turns
out they're not.
They're more miserable.
They're actually happier
when they are working hard,
facing a challenge.
And so this sense of
fulfillment prolongs
your sense of engagement,
intensifies it.
And you get this
very nice cycle.
People are performing
at a high level.
They're learning talents.
And they're fulfilled.
They're happy,
they're satisfied,
they like their experience.
All too often, you stick
automation into here,
particularly if
you haven't thought
through all of the implications.
And you break this cycle.
Suddenly you
decrease engagement,
and all the other
things go down as well.
You see this today in
all sorts of places.
You see it with
pilots, whose jobs
have been highly,
highly automated.
Automation has been a very
good, very positive development
for 100 years in aviation.
But recently, as
pilots' role in control
of the aircraft, manual
control, has gone down
to the point where they may be
in control for three minutes
during a flight,
you see problems
with the erosion of
engagement, the erosion
of situational awareness,
and the erosion of talent.
And unfortunately, on
those rare occasions
when the autopilot fails
for whatever reason,
or there's very
weird circumstances,
you increase the odds that
the pilots will make mistakes
sometimes, with
dangerous implications.
So why do we go down
this path so often?
Why do we create computer
programs, robotic systems,
other automated systems, that
instead of raising people up
to their highest
level of talent,
highest level of awareness
and satisfaction,
have the opposite effect?
I think much of the
blame can be placed
on what I would argue is the
dominant design philosophy
or ethic that governs the people
who are making these programs
and making these machines.
And it's what's often referred
to as "technology-centered
design."
And basically what
that means is,
the engineer or the
programmer or whatever
starts by asking, what
can the computer do?
What can the technology do?
And then anything that the
computer or the technology
can do, they give that
responsibility to the computer.
And you can see why this is what
engineers and programmers would
want to do, because that's
their job-- to simulate or automate
interesting work with software
or with robots.
So it's a very
natural thing to do.
But what happens then is
what the human being gets
is just what the
computer can't do,
or what we haven't yet figured
out how to make the computer do.
And that tends to be things
like monitoring screens
for anomalies, entering
data, and, oh by the way,
you're also the last
line of defense.
So if everything
goes to hell, you've
got to take over and
get us out of the fix.
Those are things that people
are actually pretty bad at.
We're terrible at monitoring
things, waiting for an anomaly.
You can't focus on it for more
than about half an hour.
Entering data, becoming the
kind of sensor for the computer,
is a pretty dull
job in most cases.
And if you set up a system that
ensures that the operator is
going to have a low level
of situational awareness,
then that is not
the person you want
to be having as the
last line of defense.
The alternative is something
called-- surprise--
"human-centered design,"
where you start by saying,
what are human beings good at?
And you look at the
fact that there's
lots of important
things we're actually
still much better
than computers at.
We're creative.
We have imagination.
We can think conceptually.
We have an understanding
of the world.
We can think critically.
We can think skeptically.
And then you bring
in the software,
you bring in the automation,
first to aid the person
in exploiting
those capabilities,
but also to fill in
the gaps and the flaws
that we all have
as human beings.
So we're not great at processing
huge amounts of information
quickly.
We're subject to
biases in our thinking.
You can use software
to counteract these,
or to provide an additional
set of capabilities.
And if you go that path, you
get both the best of the human
and the best of the machine,
or the best of the technology.
And some of the ideas here
are very, very simple.
For instance, with
pilots, instead
of allowing them to turn
on total flight automation
once they're off the ground and
then not bother to turn it off
until they're about
ready to land,
you can design the software to
give control back to the pilot
every once in awhile
at random instances.
And just knowing that you're
going to be called upon
at some random time
to take back control
improves people's awareness
and concentration immeasurably.
It makes it less
likely that they're
going to completely space out.
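Just as a minimal sketch-- purely hypothetical numbers and function names, not drawn from any real avionics system-- that kind of random handback scheduling might look something like this:

```python
# Illustrative sketch only, not a real avionics design: schedule brief,
# randomly timed returns of manual control so the pilot stays engaged.
# All names and numbers here are hypothetical.
import random

def schedule_handbacks(flight_minutes, min_gap=20, max_gap=60):
    """Return minutes (after takeoff) at which control is briefly
    handed back to the pilot, at unpredictable intervals."""
    times = []
    t = 0
    while True:
        t += random.randint(min_gap, max_gap)
        if t >= flight_minutes - 15:  # leave the approach phase alone
            break
        times.append(t)
    return times

if __name__ == "__main__":
    # A five-hour flight gets a handful of unpredictable manual segments.
    print(schedule_handbacks(flight_minutes=300))
```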
Or in the example of the
radiologist-- and this
goes for examples of
decision support or expert
system or analytical programs in
general-- one thing you can do
is instead of bringing
in the software prompts
and the software advice
right at the outset,
you can first encourage
the human being
to deal with the
problem, to look
at the image on his
or her own, or to do
whatever analytical
chore is there.
And then bring in the
software afterwards,
as kind of a further aid,
bringing new information
to bear.
And that too means you get
the best of both.
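A minimal sketch of that deferred-prompt workflow, with hypothetical function names standing in for the radiologist's read and the analysis software:

```python
# Illustrative sketch only: a reading workflow that withholds the
# software's highlighted regions until the radiologist has committed
# to an unaided read. Function names and data shapes are assumptions.

def read_study(image, radiologist_read, cad_model):
    # Step 1: the human reads the image with no prompts and records
    # an initial set of findings.
    initial_findings = radiologist_read(image, prompts=None)

    # Step 2: only then are the software's suspicious regions shown,
    # as a second opinion rather than a starting point.
    prompts = cad_model(image)
    final_findings = radiologist_read(image, prompts=prompts)

    return initial_findings, final_findings

if __name__ == "__main__":
    # Trivial stand-ins just to show the flow.
    demo_read = lambda image, prompts: {"prompts_seen": prompts}
    demo_cad = lambda image: ["region A", "region B"]
    print(read_study("scan-001", demo_read, demo_cad))
```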
Unfortunately, we don't do that,
or at least not very often.
We don't pursue
human-centered design.
And I think it's for
a couple of reasons.
One is that we human
beings, as I said before,
are very eager to hand
off any kind of work
to machines, to software,
to other people,
because we are afflicted
by what psychologists
term "miswanting."
We think we want to be freed
of labor, freed of hard work,
freed of challenge.
And when we are freed of
it, we feel miserable,
we feel anxious, we
get self-absorbed.
And actually, our
optimal experience
comes when we are
working hard at things.
So there's something
inside of us
that is very eager
to get rid of stuff,
even if it's getting
rid of effort, even
if it's not to our own benefit.
And then the other
reason, which I
think is one that's even
harder to deal with,
is the pursuit of
efficiency and productivity
above all other goals.
And you can certainly see why.
Hospitals that want the
highest productivity
possible from radiologists
would be averse to saying,
well, we'll let the
radiologist look at the image
and then we'll bring
in the software.
Because that extends the
time that a radiologist
is going to look at this.
And that's true of any of these
kind of analytical chores.
And so there's this
tension between the pursuit
of efficiency above all other
things and productivity,
and the development of skill,
the development of talent,
the development of high
levels of human performance,
and ultimately the sense of
satisfaction that people get.
And I think in the
long run, you see signs
that that begins to backfire.
Toyota earlier
this year announced
that it was replacing
some of its robots
in its Japanese factories
with human beings,
because even though the
robots are more efficient,
the company has struggled
with quality problems.
It's had to recall 20
million cars in recent years.
And not only is that
bad for business,
but Toyota's entire culture
is built around quality
manufacturing, so it
erodes its culture.
So by bringing
back human beings,
it wants to bring back both
the spirit and the reality
of human craftsmanship,
of people who can actually
think critically about
what they're doing.
And one of the benefits
it believes it will get
is that it will be smarter about
how it programs its robots.
It will be able to
continually take
new human thinking, new
human talent and insight,
and then incorporate
that into the processes
that even the robots are doing.
That's a good news example.
But I'm not going to
oversimplify this or lie
to you.
I think this tension--
placing efficiency
above all other things,
immediate efficiency--
is a very hard instinct,
a very hard economic imperative,
to overcome.
But nevertheless, I think
it's absolutely imperative
that everyone who designs
software and robotics and all
of us who use them are
conscious of the fact
that there is this trade-off,
in that technology isn't just
a means of production, as we
often tend to think of it.
It really is a
means of experience.
And it always has been, since
the first technologies were
developed by our
distant ancestors.
Technology at its best,
tools at their best
bring us out into the
world, expand our skills
and our talents, make the
world a more interesting place.
And we shouldn't forget
that about ourselves
as we continue at high
speed into a future where
more and more aspects
of human experience
are going to be offloaded to
computers and to machines.
So thank you very much
for your attention.
[APPLAUSE]
Thank you.
AUDIENCE: One thing
that immediately
sprung to mind with
most of your examples
is that they seem like
examples of poor automation.
And I'm wondering
if you could say
whether you feel
that there could be
or are already any sufficiently
flawless technologies
that we don't have to worry
about the sort of problems
you're describing.
NICHOLAS CARR: I think
in all instances,
you have to worry about them.
I agree with you that
a lot of these problems
are not problems about
automation per se.
Nobody's going to stop
the course of automation.
You can argue that the
invention of the wheel
was an example of
automating something,
and I don't think any
of us regrets that.
But I do think it's often unwise
design decisions, or unwise
assumptions that come in.
But as to the
question of whether we
will create infallible
automation, I don't think so.
I mean often, you get
this point of view,
and it seems to be quite
common in Silicon Valley,
if I can say that--
oh, people are only
going to be a temporary nuisance
in a lot of these processes.
We're going to have
fully self-driving cars.
We're going to have
fully self-flying planes.
We're going to have fully
analytical systems, big data
systems that can pump
out the right answer
and we won't have
to worry about it.
I don't think that that's
actually going to happen.
I mean, it might
happen eventually.
It's very, very difficult
to remove the human being
altogether.
And so to me, what that
means is, OK, fine.
You can pursue that as
some ideal, total flawless
automation.
But in the meantime, we live
and work in the present,
not in the future.
And for the foreseeable future,
in all of these processes
there are going to
be people involved.
And there are going to
be computers involved.
And instead of
just saying, let's
put the computer's interests
before the person's,
I think the wise way is, as
I said, to go with a more
human-centered design
that realizes and starts
with the assumption that
the human being is going
to play an essential
role in these things,
for as long as we can imagine,
or as long as we could foresee.
And so we better design
them to get the most out
of the person as well
as the technology.
Peter?
MALE SPEAKER: Thanks.
That was great, and
I certainly agree
with the idea of focusing on
the human-centered design.
I want to make one quick
comment, and then a question.
I noticed on your
hot/cold thing,
there was another researcher
who took passages from books
and presented them, and
then gave a multiple-choice quiz
or whatever.
And then they took
the same passage
and deleted a key sentence,
and people did better
at understanding the point then.
But somehow no authors
are willing to have
the guts to do that.
So will you be that author to
delete the important sentences
from your book and make
the reader engaged more,
and therefore learn better?
NICHOLAS CARR: If any
of you buy my book,
I would be happy
to take a Sharpie
and erase certain sentences and
you can get the full benefit.
But I will take that under
advisement for future books.
MALE SPEAKER: And
then a question.
I'm interested in the difference
between automation complacency
and authority complacency.
So you see a lot of
these incident reports,
and there'll be some
underling who said, you know,
I kind of noticed
something was going wrong.
But the surgeon or
the pilot or the CEO
seemed so sure that I
didn't want to say anything.
And that has nothing
to do with automation.
It's just authority.
NICHOLAS CARR: I think
that's probably pretty much
exactly the same phenomenon.
And I actually do
think it probably
has something to do
automation, because you could
say that automation
complacency comes when
the computer or the machine
takes the role of authority.
So the person
defers to it, and I
think that's certainly
one way to interpret
a lot of the findings-- that
you don't question the machine.
You don't question
the automation
in a way that would be wise.
So I think they're probably--
I think complacency has been
a problem since long before
computers came around
for people, for those
reasons and others.
But we've created a new way to
generate the same phenomenon.
Yeah?
AUDIENCE: I'm
curious to ask if you
know of much research about
what percentage of time
or experiences we need to keep
manual in order to make sure
that the skills don't
fade away, and if you have
any thoughts on how
much this transfers
from domain to domain.
So in the wayfinding example
of the Inuit, you might say,
you could use GPS
80% of the time.
But you've got to do
20% of it manually
to keep your skills up.
And maybe airline pilots
have the other examples
that you cited.
Do you know how much
this has been studied
and how much it might vary
from domain to domain?
NICHOLAS CARR: As far as the
second question, I don't know.
I mean, I don't know
any rules of thumb,
either in specific domains
or that cross domains.
I can say, though, that there's
enormous amounts of research
that's been done in
aviation because of the fact
that the risk is so high,
and lots of people can die
and lots of money can be lost.
Ever since computerization of
flight began back in the '70s,
there's been, whether it's NASA
or the FAA or universities,
there has been tons of research.
So my guess is that there
has been an examination.
There probably have
been tests where
you have different levels of
automation and manual control
in comparing different
levels of performance.
I didn't come across those
specific studies in my work,
but my guess is that that
would be an obvious thing that
would have been done.
So I'm saying that in
aviation, there's probably
at least some sense of
how much-- at what point
does performance start to
drop off or start to drop off
dramatically because
you've turned over
too much responsibility
to the machine.
Whether that would
also translate
into the same kind
of percentages
in different domains,
I don't know.
Yeah?
AUDIENCE: Thanks for coming.
The talk was really interesting.
In your talk, you pointed
out that technology always
comes with trade-offs.
And it's hard to
disagree with that.
But I'm wondering about
the title of your book.
The title is "The Glass Cage."
And to call technology
a glass cage
seems like a much more negative
assessment than merely saying
that it comes with trade-offs.
So I'm wondering if you could
say what motivates this title.
NICHOLAS CARR: Well,
the title is a reference
to-- back to pilots' experience.
Since the '70s, pilots and
others in the aviation business
have referred to cockpits
as glass cockpits.
And it's because
increasingly, they're
wrapped with computer screens.
If you look at a
modern passenger jet,
it's insane amounts
of computer screens--
all sorts of input
devices and stuff.
One aviation expert
refers to the cockpit
now as a flying computer.
So in one sense, it's just
kind of a play on that,
because what I argue is that
we can learn a lot from pilots'
experience as we enter into
a world that essentially,
more and more of
us are going to be
living inside a glass cockpit.
We're going to be
looking at monitors
to do more and more things.
We're already there,
some would argue.
And I do think that what a
lot of examples of computer
automation tell us is that
the glass cockpit can become
a glass cage, that
if we design it
to be the primary
or the essential way
we interact with the
world, then it cuts us off
from other sources of
learning and information
that might be
absolutely essential.
But we're so focused on what
the computer's telling us
we lose that.
So I mean, it is intended
to be a little bit ominous,
that we can either get
trapped in this glass cage,
or we can use technology
in a more, what
I think is a more humane
and more balanced way.
FEMALE SPEAKER: Thanks
for that question.
And on that note, please
join me in thanking
Nicholas Carr for
coming to Google.
[APPLAUSE]
NICHOLAS CARR: Thank you.
