RICHARD: Hi, welcome
to Talks at Google
in Cambridge, Massachusetts.
We have a very important
topic for us today.
Those of us in the high
tech community, of course,
are oriented towards thinking
of technology as a good thing--
something that brings
benefits to people.
But it's not always that way.
And it's our responsibility
to also think
about what it means to
deploy that technology.
Is it really for good or not? Is it good for all people, or does it have different effects on different segments of society? And who gets to decide these issues? Is it just us as technologists? To what extent is there a role for a broader public?
So Professor Sheila
Jasanoff has written
a very intelligent
and thoughtful book
on these issues.
And so we're very pleased
to have her here to speak.
Professor Jasanoff.
SHEILA JASANOFF:
Thank you, Richard.
It's a pleasure to be here.
And thank you all for
coming during the aftermath
of the lunch hour.
So let me begin by telling you where this book came from and where it sits in relation to the sorts of things I normally do, and then take you through the pieces of it that are maybe the most removed from what the people in this room ordinarily do.
Because although
there is a chapter
in the book on information and
communication technologies,
I'm not specifically going to
tell you about that chapter.
If you're interested, you
can always read the book.
So I write for a very diverse
interdisciplinary audience
because my work sits at
the intersection of law,
social sciences, and
science and technology.
But I've never written
for a trade press
and never written for a
totally general audience.
But Norton was
beginning a series
on ethics connected with
various public issues.
They were originally doing
this in collaboration
with a non-governmental
organization.
And the people at
Norton were not
thinking about science
and technology.
But this NGO partner that
they were working with
said that they wanted something
about the ethics of technology.
And then this was translated
to me by the Norton editor
as meaning risk.
Now, I didn't want to write
a book exactly about risk.
I think I agonized more about the title of this book than I have about any of my other dozen-plus books.
So it became something of
an exploration of the way
decisions are made and
inclusion issues are worked out
at the frontiers of technology.
But to do anything like this,
one has to begin somewhere.
That is, there's a lot of vogue around talking about new and emerging technologies.
In fact, there's a whole society given the name S.NET, the Society for the Study of Nanoscience and Emerging Technologies.
But you can't really
say anything sensible
unless you know something
about the history of where
this came from.
So in the half hour or
so that I'll talk to you,
I'll take you a little
bit through the history
of American efforts to
grapple with the value
dimensions of science
and technology
and then go into
what I see as some
of the challenges of today.
Because I think
that in this century
the kinds of ethical
and value questions,
political questions
that we're confronting
are different in
all kinds of ways.
And I'm hoping that there will be time
for a discussion in
which I get to hear
from you what strikes
you as interesting,
wrong, misguided, whatever.
So the little roadmap.
People tend to
think that the tech
world just sort of arrived.
I mean, I still remember when
my daughter first taught me
how to quote, "Google", unquote.
That's within living memory.
And so people forget
that invention has
been with us for a long time.
And so many of the things that
we think about in connection
with invention are
not in themselves new,
but invention happens inside of
a society, embedded in society.
And as history
changes, things change.
And as invention speeds up to some extent, as it's felt to be doing at the moment, the question of what we want-- what we want to retain of the past world, what we want to give up-- these things also change to some degree.
People talk about unintended
consequences a lot.
In the Q&A, if people are interested in why I really dislike that term, we can go into that.
I prefer to talk about
undesired consequences.
And then there's the question that Richard has already brought up-- if there are all these questions, then are there answers? And who is in a position of authority to give answers that should be binding on other people?
So that's a rough
structure of what
I want to talk to you about.
So when I say we've always
been inventors, of course,
we can go back into
the depths of mythology
to find that people have been
thinking about invention,
power, the connection
between inventors
and states for a very long time.
Daedalus, seen in
Western mythology
as the primal
figure illustrating
what the very processes of
engineering might be about,
was in the employ of King Minos. So there was a very close, hand-in-glove relationship between the ruler of the state and the person whose ingenuity he was enrolling.
And we all know the myth of Icarus and the fact that things didn't always work out in the right way-- although it's questionable whether it was the technology that failed or whether it was human error. It's usually represented as having been Icarus' problem, and not the problem that wax was the material that was being used.
So we have all of
these kinds of stories
that we've grown up with.
And when I say we--
it's interesting
that when my father was
teaching me English,
one of the first pieces
of English language
prose that I remember
was him making
me read a story about Icarus.
And I still remember
the sentence,
"Icarus flew too near
the sun and the heat
of the sun melted the wax."
I mean, this is embedded
in my five-year-old brain,
as what I was taught.
This is what being
post-colonial means, in a sense.
So technology has been, as the Icarus story illustrates, a dream about liberation-- not being wedded to various things.
And all of you are deeply familiar with ideas like the singularity. It's interesting how we're situated here-- no two people sitting exactly next to each other, or very few, sort of dotted about. But maybe that's because the union that we imagine among people is of a different kind, one not constrained by physical presence.
So what we want to
be liberated from
has also changed over
the years and that
raises new kinds of questions
for ethics and governance.
And what I was saying is that to make sense of the new, you kind of have to delve back into the old.
So a moment of old in American
science and technology policy
begins shortly after World War
II with, among other things,
the formation of our National
Science Foundation in 1950.
But the National
Science Foundation
grew out of the experience
of the wartime years
in the kind of partnership
that the federal government
had built up with
scientists and engineers,
very prominently including those
in this particular geographical
area.
So a quotation that's often used as a kind of emblem of the spirit of technological entrepreneurship in that period is this one, from one of the first commissioners of the Atomic Energy Commission, Lewis Strauss.
And he imagines nuclear power
as producing this phrase
that many people actually
know-- "electrical energy too
cheap to meter."
But it's the rest
of the sentence that
catches my attention
because it's
a liberationist dream--
"People will know
of great periodic
regional famines
only as matters of history
will travel effortlessly
of overseas and under them,
through the air with a minimum
of danger and at great speeds."
So this is being said in 1954.
I came to the US in 1956
and it took 17 hours
to come from London to New York.
So you know, civil aviation
was in a different state.
The question of speed
meant something different
at that time.
Letters didn't go back and forth at what we would now consider to be normal speeds, let alone at electronic speeds.
So one can look at
this kind of quotation
and see in it a set
of visions about what
the state imagined itself
doing in this modernist period.
There would be progress.
Science and technology would
be used for human betterment.
And Vannevar Bush, who was an MIT professor but was also FDR's de facto science advisor, wrote "Science, the Endless Frontier". He imagined that this was indeed an endless frontier.
But at that frontier,
you would get efficiency
in the energy too
cheap to meter.
And people were writing
about how things could
be made extremely efficient.
There was this very popular book called "Cheaper by the Dozen" about how a guy-- well, two efficiency experts, a man and his wife-- had a dozen children and could manage things much more readily for 12 children than for only one.
Accessibility, that's all
over Lewis Strauss' statement
that I quoted for you.
Eradication.
The way you get
rid of bad things
is you just-- it's
like your iPhone
where you just flick the
screen and it goes away.
And this was the
sort of imagination
of how technology would liberate
us from the scourges of hunger
and disease but also
from physical barriers--
the over the seas and
under them kind of point.
And also, a point that's not
made much of-- that all of this
would somehow be transparent.
Not with governments
sequestering information.
So this is an ideal that
has persisted through
to the present in some ways.
I think of this as somewhat
the vision of one worldism--
that everybody was
united in wanting
exactly these sorts of things.
And there's a little piece of folk culture that no one in this room will know, because it was the poem that was the motto of the UN Women's Guild, which I like to put as a bracket on the other side of Lewis Strauss, because it expresses, in a completely different register, some of the same high modernist dreams of what progress is about.
So this is the UN Women's
Guild poem, the motto.
"There shall be peace on Earth,
but not until all children
daily eat their fill.
Does the hunger
eradication go warmly clad.
It should be against
the winter wind.
And thus released from
hunger, fear, and need."
There's the eradication myth and
then a unity and universalism
at the social level, as well.
So this was by an anonymous American poet, written in the 1950s.
And it started appearing
on every piece of stuff
that the UN Women's Guild used
to sell-- the linen tea towels
and various things of that sort.
But at the sort of high
governmental level,
you have a person like
Lewis Strauss embodying
a set of visions and at the
sort of popular culture level
almost, you have the UN
Women's Guild talking
about these sorts of things.
Thematically, it's the same.
It's the imagination
of a world united
through a particular
kind of progress that
has gotten rid of ancient
evils of one sort or another.
So where are we now?
So people sometimes
call this the era
of the convergent technologies.
And again, we can point
to historical moments
and documents that
sort of embody where
these ideas come into being.
In this case, it's a
governmental author,
Bainbridge, who wrote a report
from the National Science
Foundation that's often taken
as a sort of originary moment
for talking about NBIC,
which are nanotechnology,
biotechnology,
information technology,
and cognitive science.
And you see that there's a whole
list of areas of application
where it's no longer a
science in the abstract,
no longer a particular
technology-- but everything
coming together in this
moment of convergence.
And it's a very ambitious idea.
So the National
Science Foundation
is going to put its money behind
these convergent technologies
and you're going to get improvements of scale, of a sort, that people had not imagined before.
So just going back to that passage, one can draw out the different scales of intervention at which technology is supposed to be working-- say, human health. All of human health. Human cognition-- not one mind, but human cognition. Human communication.
So it's the entire
species that's
going to be made better
through these improvements
and the outcomes
are going to benefit
particular groups, particular
societies, national security,
as mentioned.
So certainly, at
the national level.
And through unifying
science and education,
we're going to
get a sort of bump
up in the capabilities
of the society
to do all of these things
through the convergent
technology lens.
So one can then stop and say, OK: technologically, we are now in a formidably ambitious era.
We're talking about
benefiting everybody
at the sort of elevated scales.
And we're talking
about doing it moreover
through a convergence among
technological frontiers
that previously might not
have been in contact with one
another.
So one can step back and say, well, fine, there will be consequences that we may not be happy about-- and what tools do we have to get at these kinds of frontiers and make sure that they're evolving to the benefit of the societies they're intended to serve?
So in public policy,
you can step back
and say, well,
there are these sort
of standard models
for how you go
about regulating
undesired consequences
or regulating to prevent
them or manage them
at one level or another.
And two sort of paradigmatic approaches can be separated a little bit from one another.
One is about risks--
familiar, I'm sure,
to everybody in this room.
But you can parse out what
the key elements of the risk
paradigm might be.
So first of all, there is, in the risk-based way of going about regulation, a kind of built-in commitment to the idea of technological progress: things will evolve, and the way you need to manage them is through regulation.
So risk, by definition, focuses
on the harmful consequences
and says, OK, things will
have to evolve as they will.
What we need to stay focused
on is the bad consequences
and we need to prevent them.
And all we need for that is
the right kinds of expertise.
So we have risk assessment,
we have other expert based
evaluations, and then we can
set in place policies of control
that are going to make sure
that these risks are somehow
contained-- that
they do not escape.
And this word, containment, is all over the policy discourse, whether you're talking about nuclear or bio or informational risks of different sorts, which we can come back to.
The other major paradigm
begins somewhere else
and is focused on rights.
And this one says that what we should begin with is what human beings need and want, and make sure that whatever we're doing in public policy and regulation is centered on developing these capabilities of people, the ones that we want to protect.
And this is focused on freedoms and undue constraints-- how can we eliminate them?
This is where you would put the concern for privacy, for instance, although you could put privacy on both sides-- on the risk side and on the rights side-- and we can talk about that.
And we need new
institutions, therefore,
to give effect to these rights.
And we need policies
that liberate people, not
constrain them.
Now, the problem with
these NBIC technologies
is that they don't fit these
regulatory ideas all that well
partly because what
makes them valuable, what
makes them powerful is precisely
that they can't be contained.
Now there's a
historical moment when
one sees this very clearly--
when the biotechnologies come
into being.
And we have a conference
in 1975 at Asilomar
and the molecular
biologists are debating
what they're supposed to
do with biotechnology.
They come up with the
idea of containment
because they borrow it
from the nuclear industry.
And they say we need
two kinds of control--
biological containment
and physical containment.
Physical containment
to make sure things
don't escape from the lab.
Biological containment to
make sure that the organisms
themselves don't multiply
in ways we don't want.
And they just have
no imagination
that what the biotech industry
wants to do is non-containment.
They want their
crops to be all over.
They want their anti-Zika
mosquitoes to be all over.
And so the whole
idea that they enter
the biotech revolutionary era
with this idea of containment
is a little ironic
in retrospect.
And it took them two years to abandon that as a principle, but they didn't come up with anything significantly better.
So the technologies
of this moment
are characterized by
things that violate
the notion of containment--
right, left, and center.
They're notable
because of their powers
of diffusion and penetration
because of their pervasiveness
and because of their
evolutionary capability.
These are not static technologies; you can't box them in and say that undesired or undesirable consequences will be mitigated because we've erected a box around them.
So this poses a bunch of
governance challenges.
It's interesting that people see even the history of regulating biotechnology in the period from 1975 to 2000 as a story of massive regulatory failure, and therefore people are trying to think of new paradigmatic approaches to how we should go about regulating things.
And in that context,
it's interesting
that people are no longer so
much talking about science
as they're talking about
technology, which, obviously,
is of central significance to
the tech companies of today.
So I want to do the last third of this talk by saying that, in a way, we ought to be more focused on the rights paradigm.
But the rights paradigm also
has destabilizing forces
attached to it.
The rights paradigm
was created for people
who look kind of like
us and it's imagined
that we begin and end with the
physical persona that we have.
So for instance, if you
look at all of the privacy
decisions of the
US Supreme Court,
they're all premised on a person
in physical space, a contained
human being.
So why should we not have the government regulating contraceptives? Well, it's like the cops kicking down the bedroom door.
This is the sort of imagination.
Why should we not have tracking of potentially criminal engagements inside of telephone booths? Because telephone booths are kind of like rooms, and therefore people should be entitled to the same kind of privacy in the room.
So I would suggest
that we live in an era
when this idea of a physically
contained human being sitting
in physically contained
space has broken down
for all sorts of reasons.
You find it in fiction.
I don't know if anybody here is familiar with Philip Pullman's famous trilogy, His Dark Materials, in which the central characters all have a daemon, as Pullman calls it, attached to them. This is an animal avatar-like persona that is part of that human. And if you kill that daemon, for instance, you're killing the person.
And it is said that
Pullman was profoundly
inspired by this famous
painting of Leonardo-- the Lady
with the Ermine, where
there's this organic unity
between the lady and her ermine.
And he got this idea
of the animal demon
by looking at that painting.
But if you think about who we are today, you could think of us as being in an era of overlapping subjects. So there's the classically depicted human subject, which is the human body. But there is also the genomic subject, the product of your genetics-- you could think of it as the 23andMe subject, if you want.
And then there's the one that
you all are more involved with,
which is the subject
that is constituted
by the informational traces
that are left behind.
And the point is
that these are not
separate-- they're
not separable.
But it does throw up a set of questions for law and public policy: which of these personas are we trying to protect, and what rights do they have vis-a-vis us?
So a case like Google Spain-- I don't know if all of you know about it-- which is about memory preservation through Google, could be seen as being lodged in that area between the physical human subject and the informational human subject.
And so, is some kind of set of understandings of what is due to that subject in the middle going to carry over to the informational traces? And if so, how?
So it is in that trifurcation of the subject-- I mean, I sometimes like to say we have become the Trinity, in a sense-- that the question arises: how, on that sort of parsing out of the human, do you overlay public policy that has evolved over millennia dealing with only that figure in the middle?
So very quickly, I think
that our classical frameworks
for regulating all
run into problems.
We live in what
is popularly known
as the neoliberal era, in which
states have decided that they
need to let markets
do more, which means
let the private sector do more.
But we know that when
markets come into action
it's often at the time
when products have already
been designed and they're
already out there.
And it's often too
late at that point
to get the kind of
public involvement
that might actually
lead to changes
in the design of a
commodity, whatever it is.
And the ambivalences
are not apparent
when you're living in a
society of early adopters.
What are the people thinking who don't want that?
And so by the time these concerns mature, often the markets are already too set, have too many sunk costs, and are not able to do things in a reasonable way.
And then markets, as business
historians will know,
don't remember
their own mistakes.
I mean, in this respect, markets are a bit like science. The false starts are forgotten; they were just those things that those people did.
They were mistakes, we do better, and therefore only the progressive history gets written, and not the regressive one, the one that made for complications.
On regulation, we don't need
to say very much because this
is so much the mantra
of the moment--
that governments
can't do anything.
And therefore, all we should do is remind ourselves that there are arguments here. It's not just a claim that governments can't regulate; it's a question of why they can't.
And partly, governmental regulation presumes a degree of availability of knowledge, and a level playing field of knowledge, that turns out not to be available to most governments.
The rollout of the Affordable Care Act is one recent case study that says something about governmental capability in these areas.
But on top of that, there are very specific US policy choices we've made, which is to say that regulation should happen very far downstream: it should address the product and not the process. And we've said this over and over.
But that also means that
early corrective steps
are less likely to be taken.
And furthermore, of course, we
live in a globalizing world.
So when we turn to regulation, we create patchwork systems, and multinational enterprises are well aware of that phenomenon.
So one step that we've
increasingly been taking
is saying, well OK, all
these problems-- they're
a little bit about values.
So why don't we do ethics.
And there's been, in fact, a
sort of statistical bursting
out of ethics fields and
ethics deliberative bodies
around most of these
frontier areas of science
and technology.
There are almost as many
hyphenated ethics fields
as there are sciences
and technologies.
In my research group, for the last two years, I had somebody who was a philosopher of neuroethics and has a job as a neuroethicist. But you know, what's a neuroethicist, as opposed to some other kind of ethicist?
So ethics deliberations end up being committed, to a large extent, to private bodies.
And so I sit on an
ethics committee
at Harvard dealing with
bio-related issues.
How people get appointed
to these bodies, what
their deliberations are--
this is all quite secretive.
It's not meant to
be out in the open
where publics can
have some kind of say.
So that's the sort
of thing I mean
by saying that questions
of value are privatized.
And moreover, ethics
itself becomes
a new kind of expertise.
But we might argue
that everybody
has values about what they
consider to be good or bad.
And that by training people to
become ethical philosophers,
we may be weeding out some
of that ambivalence that we
should be paying attention to.
And some public values
may never get picked up
in a thing called ethics.
So even these ethical ways
of talking and thinking
may not be enough.
In my own work, I tend to
think that context-- what
other people call context--
is extremely important.
That we need to pay attention
to the different historical
pathways by which
cultures have evolved.
And the kinds of central commitments-- norms, values-- that they adhere to, which are not the same across countries.
So here is a little example
from a tech company, in a sense.
This is an article
that caught my eye just
in January of this year.
Of course, I knew about
the phenomenon in general
that Uber was having a hard
time penetrating into Germany.
But the New York Times writer's explanation for why Uber miscalculated and had to pull out of Frankfurt was quite interesting to me.
"With a thriving
financial center-- so this
is the pullout of Frankfurt--
and cosmopolitan population,
the city, Frankfurt,
seemed like an ideal place
to operate and grow.
Yet the company was
forced out by a mix
of cultural and legal missteps."
And this sentence, specifically: the company "miscalculated how best to gain the support of skeptical locals unaccustomed to its win-at-all-costs tactics and it underestimated the regulatory hurdles of doing business in Europe's largest economy."
So these are three items, three variables: skeptical locals, win-at-all-costs tactics, and the regulatory hurdles of doing business in Europe.
So you know, these are absolute truisms to anybody who thinks about public policy, and they are not novelties.
I mean, I myself would love to go to the right people and ask: who did you have scoping out the regulatory environment in which you were going to launch this technology?
And then a bit later
in the article,
it also talks about how the
Germans don't like credit.
Well, anybody who's
lived in Germany
knows Germans don't like credit.
You go to a restaurant,
they'd rather
have you come in with
400 euros in your pocket
than accept your credit card.
And you won't get mugged, so you don't need the 20 euros in your urban wallet in case you do get mugged.
They evaluate the risk of credit default in a different way from the way they evaluate the risk of being mugged on the street as you're carrying your dinner money around with you.
So these are kind
of obvious points
and one can then do
the analysis to think
about why Germany, with its
thriving financial centers
and so on and so
forth, has ended up
with such a different idea of
the risks and benefits of doing
business from where
we happen to be.
So this all prompts me to think harder about a sort of patented phrase in my work, which has turned out to be a kind of sleeper: an article I wrote called "Technologies of Humility." I didn't expect that it would become, practically, my most cited piece.
Most humorously, somebody sent me a clipping from a paper in Oxford, UK, saying, "Harvard professor advocates humility."
So it has gotten a little
bit of circulation.
But what I say in that piece
is that we should pay attention
to where questions come
from for public policy.
But I think investment is a
lot like public policy in terms
of where it's trying to go.
So framing: how is the question being posed, is it being posed the right way, and what are other ways of posing the question?
I need not tell engineers that this is a fundamentally important set of things-- things that people understand when they're in the guts of a design process, but do not understand when they get out into the public policy world.
And then there are other questions of ethics and values: you're going to innovate, but who's going to get it?
Everybody's heard
about the Luddites
and how bad they
were because they
went around breaking machines.
Very few people ask: who was out there to give the Luddites a different job when their hand-crafted looms were going to be driven out of business, and what were they going to have instead?
Well, those skeptical publics don't want to be driven out.
So that's part of the
vulnerability point.
Distribution-- Richard mentioned this at the outset: who loses and who wins? And learning-- how do you actually historicize experiences and learn to do things better?
So I think one can carry
this idea of humility
forward into thinking about
technology in and for society.
I think one can pull out some
policy prescriptions that
may sound a little
bit banal, put
at this level of abstraction.
But if you get into the nuts
and bolts, the nitty gritty
of historical cases of business
success and business failure,
you find, repeatedly, the same patterns showing up.
So people do not attend
to the communities
and the social
norms of the places
where things will be tried out.
They do not engage with
history and culture
because it's assumed
that the new will
be valued exactly the same
way as where it was invented
in the first place.
Concerns about distribution tend to take a back seat, and we've seen that just over and over again.
And then there's this idea that communication needs to be two-way-- that it's not just the people who have the new thing going out and selling it to the people who surely will be demanding this innovation, but that technologies, like legal systems and like political systems, also have publics. And how can those publics actually be heard back in that design process?
So I'd be happy to talk in
more detail about any of this
that interests you
and above all, I'm
interested in the kinds of
questions and observations
you may have.
So let me stop there.
AUDIENCE: Hi.
I'm trying to think about how to phrase this correctly, but in the modern world, where technology is not subject to a single locality-- so there isn't a single entity that can regulate an entire industry-- how would you propose that the technologies that are going global be regulated, or not?
And in particular, I'm thinking of technologies like Twitter, which we praised a lot when the Arab Spring revolutions were happening and everybody was like, oh, Twitter is amazing. But now that it's being used by a group like ISIS, we have the opposite view, right?
SHEILA JASANOFF: I mean,
that's an excellent question
and I think it
can be taken apart
into several different things.
So one point is just
the evolutionary point.
That regulation often operates in a kind of linear mindset: that there is a moment-- usually the marketing moment-- where something leaves the lab and goes out into society, and that is the right moment to regulate. And that's that.
But we know that for pretty much every kind of technological system, that is not how it works, because the amount of information we have at that very moment may be extremely limited.
Think about pharmaceutical
drugs for instance--
you do clinical trials,
but the clinical trials
give you only an approximation
of the actual effect
of long term use in
bodies of different sorts.
And Twitter is no different. You can think of it as a medication being let loose on society, where the clinical trials reveal that there are different population groups that are going to react and respond in different ways to the availability of the technology.
So I think that one
needs to sort of have
the idea that the regulatory
process has to be an ongoing
and recursive one.
This is what I mean about
the learning experience.
And how do you build that in? Many businesses think that the competitive moment of getting something out there is incredibly important and don't want to look back and rethink things. And who has the responsibility, anyway, for taking things on board?
So that's one kind of point, I think: to wrestle with the need for more recursion in policymaking.
The second point
is that it's not
going to be the case that
regulation will be universal
in the same way that the
technology itself, in a way,
isn't universal.
So all sorts of studies of technology out of the field of STS, science and technology studies, show that users are continually adapting technology.
So one can use a phrase like "technological capability" that may have some universal dimensions.
But what people will take and do
with that in their own context
is going to vary.
And then again, it's about being attuned to that and accepting the fact that maybe there isn't going to be a one-size-fits-all. And so what degree of variation, then, is tolerable?
Why do Trump's ideas about not having the Pacific treaty or the European treaty, about the disastrous consequences of NAFTA-- why do those have so much traction?
It's partly because
people are actually
rebelling from a different
side against the idea
that once you get a
technology on the books,
it should be free
flowing completely.
But this is, of course, a topic that, especially at this company, I don't need to belabor, because when Google talks to China, Google is saying something very different from when Google talks to Spain or when Google talks to Congress on Capitol Hill.
But the democracy implications
are not necessarily
being heard.
These are very top to top
kinds of deliberations.
And I think that there needs to be much more opening up about the kinds of values at stake-- how much contradiction we actually tolerate across the world.
AUDIENCE: So I actually
have a question.
The field of GMO, genetically
modified organisms, I think,
is an interesting prism
for a lot of this.
So from a first world perspective, I've got the money to feed my family organic foods, and I might say it's my right to exist in a GMO-free world, because I have these fears, and who knows what bad things may happen, and so on.
And then there's the other perspective: that GMOs may turn out to be the thing to give the other billions of the world's population these benefits of enough food to eat and freedom from disease and starvation and whatnot.
How does one make progress
with these opposed principles?
SHEILA JASANOFF: Well, I think that question-- another excellent question-- goes back to the point about framing that I was making before.
So the debate on GMOs is often couched in exactly that way-- that the Europeans can afford to be resistant to GMOs because they also have the capacity to pay extra money and buy their organic products.
But this is not the kind
of technological solution
that's going to benefit
the rest of the world.
You may have seen that one of the big business stories of the recent past is the deal in which Bayer is buying Monsanto, and I'm very interested to see what two different corporate cultures, which have grown up and evolved under very different regulatory presumptions, are going to do when they're now married into a single entity.
But the kinds of points that people would make, if they looked deeply into the GMO controversy, would be that that initial idea-- that it's a binary, that some people are rich and can afford to do without GMOs and other people are poor and need GMOs-- was not a very helpful framing to start with.
It is not the case that the people with the genetic technologies have deeply investigated what the nutritional and crop needs are in the parts of the world where GMOs are going to be used heavily.
Many people will say-- and this is quite a credible argument-- that the precursor to the GMO story is the Green Revolution, and that the Green Revolution tells a shaded story.
There's no question that the Green Revolution produced high-yielding crops and massively raised the capacity to generate food crops in parts of the world that had previously been in deficit.
But they will also say that the long-term environmental consequences, which are now coming back to damage agriculture, were not foreseen and calculated, because you could only grow these grains with very high inputs of electricity, fertilizer, and water.
And therefore, these were subsidized elevations of grain production, and they had very unequal effects across the areas of the world where the Green Revolution was introduced.
So I think that the
technology story always
needs to be told in a
more differentiated way
than the
proponent-opponent version.
I mean, I really think that this
is not a black and white issue.
This is a who loses,
who wins issue
and even that is too binary.
Because you need to throw in temporality: over what period of time might you gain?
I've heard biotech people from the third world talking about the fact that they planted this kind of cotton, and the first year, the yields were improved; the second year, the yields were vastly improved; the third year, they stabilized.
The fourth year, a bug came and got rid of the monocrop, because it was resistant to the particular change that had been introduced.
So one has to take those stories on board as well and think about how technology should function in a differentiated way.
I guess that's one
of my main issues.
AUDIENCE: So I wanted
to ask about the sort
of perverse incentives that
some of the public policies
have in driving ethics out
of the role of invention,
such as regulations that prevent questioning of things that are being done by companies, or that limit their liability in such a way that they don't have to consider the negative consequences of what they do, because they're protected by the regulatory industry, as opposed to incentives that would drive them to be concerned.
SHEILA JASANOFF: Yes-- so, in essence, you're asking what one can do about capture: the cozy relations built up between governments eager to protect their industries, such that the industries are protected against the worst consequences of what they're up to.
So the piece of this that my book is most focused on is not, in a sense, the phenomenon in the first place, but how the phenomenon is actually dressed up as if it's rational.
There are cases where people sort of blatantly get together in a room and say, we're going to come out giving you everything you want-- say, how Trump and Christie managed certain dealings with casinos and debts in the state of New Jersey.
I mean, the public can
understand those stories
when they hear them.
But if you tell people, I conducted a risk assessment and discovered that this company's product is safe, then you're resorting to science to produce a kind of argument, because everybody buys into the rationality of science.
But if you look harder, you often find that the technical arguments that were made shoved fairly important uncertainties and unresolved questions under the carpet-- questions which might have come to light if the debate had taken place in a more open venue.
One such thing that's going on right now is around this extremely promising gene-editing technology, a suite of things that goes by the name CRISPR.
I'm sure many of you
know all about it.
So who is leading, who is spearheading, the debate on how to regulate CRISPR?
So far, it's been a few
national science academies
that, actually,
have held meetings
that they call summit
meetings to work out
a global regulatory package
on the implications of CRISPR.
But I think people already know that, for instance, even with CRISPR, the question of whether you want to do human genome editing that affects the next generation of humans, or whether you want to do gene drives that will alter a species so that it behaves in a different way in the ecosystem-- these are going to carry around them different forms of debate.
And we have forums in which
these things could get raised,
but it's not yet happening
because the people who
control the frontiers
of invention
are usually setting the terms
about how the debates ought
to be conducted.
So my work is more around that sort of phenomenon-- the sort of premature taking of things out of politics under the guise of technical reasons.
AUDIENCE: Several
inventions-- and I won't even
say technology-- over
the past few years
have been in the form of
provocations, probably
intentionally, like Uber.
I think the people at Uber knew that what they were doing was against the existing regulations on taxis and so on-- or certainly suspected it.
Google, when it scanned
millions of books,
knew that they were in a
legal gray area on what they
were doing.
And I think they thought
they were doing a good thing,
but legally, it
was unsettled law.
Wall Street, when
it bundled mortgages
into these packages that could
then be traded on exchanges,
obviously, they were doing it to
make more money for themselves.
But they also made the market
for mortgages more liquid
in some ways.
And then of course,
there's catastrophe later.
I don't see how we can
advance in any of these areas
without making mistakes.
I don't know if
long term, Uber is
going to make taxi drivers
poorer, or make traffic worse,
or make things better.
I don't know.
I have my opinion
on Google Books,
but let's leave that
aside because you'll think
I am partisan.
The bundling of securities seems
like a good thing in many ways.
Are we condemned to oscillate, to have almost business cycles of inventions, where something is introduced, we have the reaction, and it's over-regulated, then under-regulated?
And I don't see it happening
in any evolutionary way.
I don't see it as possible for Uber or Google Books or securitization to go before some wise committee at the Senate and say, we want to do this experiment with taxis, let us do it and see what happens.
So how do we go forward
without making huge mistakes
and oscillating in this way?
SHEILA JASANOFF: Thanks
for the question.
I mean, oscillation
implies that we've already
parsed the world out
into a right-left binary.
First of all, I don't think
that's how technology evolves.
I think, arguably, all of
these destructive cycles
that you're describing
began in experimental ways.
And it's not that somebody
blanketed the entire US
with Uber type solutions.
They were introduced.
And market research was done
and people went forward.
So you know, I think there's something to be done in conceptual terms in recognizing how the introduction of a technology is like an experimental system. And then one can ask-- well, with experiments, we have a long history of thinking about how not to violate people's rights and expectations, in biomedicine, for instance.
We've evolved a set of
ethical understandings
about how to do experiments.
It's just that we don't regard
throwing somebody out of a job
as being similar
in characteristics
to causing somebody to have
fatal side effects from a drug
treatment.
Although if you listen
to the economists,
they would say that the
chances of your lifespan being
drastically altered by having
no job are actually quite high.
So I'm not sure that the
conceptual distinction
between the economic disruption
and the putting at risk
of health and safety
should be seen as so
different from one another.
So I think that these same
people with the talent
to imagine the
disruptive technology
should themselves be
taught not to think
in a black-and-white world-- as if it is a binary, as if it is all or nothing.
In fact, I think all
of industry development
is full of experiments
and failed experiments.
It's full of choices
of one sort or another.
AUDIENCE: But I'm not sure what you're suggesting there. Are you suggesting that Uber should petition the state of Massachusetts that Brookline will be an Uber zone, and we'll try it out and see what happens to the taxi industry?
I'm not sure what
you mean there.
SHEILA JASANOFF: Well,
the first thing I'm saying
is that Uber doesn't have
to petition Brookline.
At the introduction
of a new technology,
you see the companies are
actually making experiments.
That's what they're doing.
So I'm not making
up the experiment.
I'm not saying go tell people
that they are experimenting.
They are experiments.
AUDIENCE: But at
the margin, they're
not throwing anyone out of work.
At the margin, they're
not changing traffic.
It's only when they
become successful enough
that those effects
might kick in.
SHEILA JASANOFF:
Well, at that point,
we should think about
what the scale up involves
in terms of displacement.
So for instance, suppose you take a technology from a place where agriculture is organized around huge agribusinesses, the way commodity crops are in the US, and you say, I'm going to take it to this other place-- an agricultural region organized around many, many small holdings. You know that the economies of scale are going to privilege the people with the larger holdings at the expense of the people with the smaller holdings.
That's where an ethical
burden may kick in--
that you should be
thinking about what you're
going to do with the people who
are displaced because they are
the more vulnerable
and they are the ones
whose conditions of life will be
so adversely affected that you
ought to be thinking about it.
When countries embrace
technology policy
in a transitional
mode, they often
take pains to think about
what to do with people who are
going to be thrown out of work.
So when Germany, for instance, decided to phase out its deep-pit coal mines, it did start thinking about how to do reskilling work with some of those people who had been in the mines.
And whose responsibility is this? It's not just a question of Brookline versus Uber. It's really an open question for us, as members of civilized, technologically advanced, striving, innovating societies, to keep on asking those questions.
Acting as if it doesn't
matter, as if all disruption is
the same anyway--
which I think it isn't.
I mean, whether you
disrupt something
that was unjustly created
or whether you disrupt
something that has
democratic backing behind it
are two different kinds
of considerations.
In any case, I think nobody should be off the hook, OK? That is, everyone ought to think forward about the consequences. Just giving somebody a free pass-- saying, well, you're doing technology, you're doing innovation, therefore you don't have a responsibility-- that is clearly not a way to go.
And we do have a lot of different ways in which these debates can be organized.
That is, there are
zillions of experiments
that have been done with how to
engage a diversity of publics.
I don't think that we're
operating in a vacuum.
I don't think it's
a blank slate.
I think that if you take the challenge of how to do democracy seriously, recognizing that the introduction of technology disturbs the dynamics of politics in the same way that the introduction of a new piece of legislation or a constitutional amendment does, then I think it liberates people-- to go back to the theme of liberation-- to rethink how to do democracy in more creative ways.
