[MUSIC]
Welcome everybody.
I'm Jennifer Widom.
I'm the Dean of the School of Engineering.
We have in the audience today graduate and
undergraduate alumni and
we have members of the community.
I also wanted to mention that we are live
streaming to thousands of alumni and
friends, all over the world.
The School of Engineering has been
co-hosting a series that we call
Intersections.
It brings together faculty from
the School of Engineering and
from humanities and sciences to share
insights on a common theme or idea.
And this, tonight,
is the third in that series.
We have the series because we recognize
the world's most challenging and
complex problems need to be addressed not
by individuals working by themselves, but
by people working together and
talking together across disciplines.
We also value an understanding
of the relationship,
the very important relationship,
between humanities and technology.
And I would say especially today,
and urgently into the future.
And the themes in the book being discussed
this evening are an incredibly dramatic
example of why having that
understanding is so critical.
I'm really pleased that Persis Drell
will be the guest tonight.
Persis is a colleague, she's a friend,
she's the Provost at Stanford,
and the former dean of
Stanford engineering.
She's a scientist, and she's someone
who's thought very deeply about what we
like to call the humanist engineer.
>> Don't you see the danger, John,
inherent in what you're doing here?
Genetic power is the most awesome
force the planet has ever seen, but
you wield it like a kid
that's found his dad's gun.
>> Dr. Waldman, I learned a great
deal from you at the university.
[MUSIC]
>> [INAUDIBLE] He's alive!
He's alive!
He's alive!
He's alive!
He's alive!
In the name of God,
now I know what it feels like to be God!
[MUSIC]
>> Do you read me, Hal?
>> Affirmative, Dave.
I read you.
>> Open the pod bay doors, Hal.
>> I'm sorry, Dave.
I'm afraid I can't do that.
>> I'll tell you the problem with the
scientific power that you're using here.
You know, you read what others had done,
and you took the next step.
You didn't earn the knowledge for
yourselves, so
you don't take any responsibility.
>> Our scientists have done things
which nobody has ever done before.
>> Yeah, yeah, but your scientists were so
preoccupied with whether or
not they could that they didn't
stop to think if they should.
[MUSIC]
>> Please welcome your
Philosophy Talk hosts to the stage,
professors Ken Taylor and Joshua Landy.
>> [APPLAUSE]
>> [MUSIC]
>> Will new technologies like
artificial intelligence and
bioengineering be
the salvation of human kind?
>> Or will they destroy our bodies,
our democratic institutions,
and even our planet?
>> And who is going to control
the technologies of the future?
>> This is Philosophy Talk.
The program that questions everything.
>> Except your intelligence.
I am Josh Landy.
>> And I'm Ken Taylor and
we are coming to you from CEMEX
Auditorium on the Stanford campus.
>> Continuing conversations that
begin at Philosophers' Corner,
where Ken teaches philosophy and I direct
the philosophy and literature initiative.
>> Welcome, everyone, to Philosophy Talk.
>> [APPLAUSE]
>> And let's hear it for our musical
guests, the Tiffany Austin Trio.
>> [APPLAUSE]
>> Today, we're thinking about monstrous
technologies as part of Stanford
University's Frankenstein@200 project.
>> Monstrous technologies,
that's a strong word Ken.
>> [LAUGH] Josh, come on,
look, I love my iPhone.
But, gotta admit,
smartphones are causing an epidemic of
distraction, insomnia, and depression.
That seems pretty monstrous, to me.
>> That's just techno-panic, Ken.
>> Techno who?
>> Look, look, people are always freaking
out about the latest technological invention.
You know, like the printing press, the
mechanized loom, newspapers, electricity.
It always turns out there was never
anything to worry about.
>> Josh.
Come on, tell that to the inhabitants
of Chernobyl or Fukushima, or
the victims of asbestos poisoning,
or all those thalidomide babies.
You're a big literary guy.
It's just like Mary Shelley says in her
novel Frankenstein that we're thinking
about, technology can be deadly.
>> Ken, look,
you watch too many Frankenstein movies.
The novel, the novel's a lot more
subtle and sophisticated than you or
Hollywood are making it out to be.
I mean look, that novel,
it's not just some Luddite screed
against the horrors of technology.
I mean look, it's a philosophical
investigation into personal identity.
It's a brilliant experiment
with literary form,
it's an exploration of deeply
buried antisocial impulses.
I mean, that-
>> [CROSSTALK] You're forgetting
the main thing, Josh.
It's also about a technological marvel
that runs around killing people.
[LAUGH] You left that out.
>> [LAUGH] Right, fair point.
Touche.
[LAUGH] But,
but remember that great scene in the novel
where the creature learns about language.
He calls language a godlike science.
And writing, writing he says opens
up a field for wonder and delight.
>> Josh, you're waxing so
poetic, big deal.
>> Well, no, it is a big deal.
Writing is a technology.
It's among the greatest
technologies ever invented.
The novel celebrates that kind of
technology and we should, too.
>> Look, Josh.
Look, look, look.
I love writing, too.
I mean, that's why I spent hours and
hours writing, myself.
It's not just because
I have writer's block.
Because I love writing.
I even love other people's writing!
I love reading your writing, Josh.
But even a technology as glorious and
as powerful as writing has its downside.
Look, no writing, no Mein Kampf.
No Mein Kampf, no World War II, QED.
[APPLAUSE] Technology can be-
>> Godwin's Law, Ken?
Look, it's not the technology of writing
that was responsible for Mein Kampf.
It's the guy who wrote it and it's
the people who read it and believed it.
I mean you can't blame technology for
what people do with the technology.
You have to blame the people.
>> But you're missing my point Josh.
Technology is often designed, explicitly
designed, to exploit human weaknesses.
Why do you use your iPhone so much?
Why?
Cuz it's a drug, Josh.
And so is social media.
And social media, this addictive drug,
is driving people to suicide, and
it's ruining our democracy.
>> Sounds like you don't trust people very
much, to handle their own technology.
>> Why should I?
>> Well okay, what do you want?
You want the government to intervene?
I mean, that's paternalism.
Look, if I wanna waste my time
watching cat videos on Facebook,
that's my business.
>> No Josh, it's not just your business.
My life is impacted by your choices.
Their lives are impacted by your choices.
Facebook and its addiction-addled users,
they're destroying our democracy.
My democracy.
>> Okay, I'm not gonna defend Facebook.
>> That's good.
>> But the question still remains:
how can we prevent all the negative
outcomes, the monstrous outcomes,
without losing all the benefits?
And how can we have the good outcomes
without resorting to paternalism and
stifling individual freedom?
>> That's a good question Josh,
good question but
I think the answer is kinda obvious.
Technology producers and designers have
to take on some of the responsibility;
they have to do a better job of
predicting and compensating for
the effects of their inventions.
And they have to care.
They have to care about more than
just creating cool fun gadgets and
making a bunch of money, Josh.
>> Yeah, and pigs have to fly.
>> [LAUGH] Well, look, if producers
refuse to regulate themselves,
well then the rest of us will just have
to make them an offer they can't refuse.
>> What?
>> Don't take me wrong.
I'm talking about changing
the incentive structure.
>> Okay, so taxes, regulation.
>> Or social shaming, maybe.
>> Well look,
I agree we gotta do something.
If we don't do anything, nightmare
scenarios may well be around the corner.
I agree with you, we're gonna start
seeing things like on Black Mirror.
>> Yeah, right, that's a great TV show.
But how much of that show is pure fiction,
and how much of it is future reality?
>> Well as it happens Ken, we asked our
roving philosophical reporter, Liza Veale,
to find out, she files this report.
>> So
if you've seen the show Black Mirror,
you've wondered: how crazy is this stuff?
Total fantasy? Fifteen years away?
Ten years away?
The interpersonal rating system
in this episode seems familiar.
>> And that's one rushed swing.
[MUSIC]
>> You want a cookie with that?
It's on the house.
>> Sounds awesome.
[LAUGH]
[MUSIC]
See you tomorrow Deejay.
>> See you Lacy.
>> [LAUGH] Saw your boy in
the fire hat just now, so cute.
>> Yeah, he's really something.
>> [LAUGH]
>> Let me
introduce someone who takes the technology
in Black Mirror very seriously.
Dylan Hendricks,
he works at the Institute for the Future.
It's the oldest foresight
institution in the country.
They don't predict the future,
but they anticipate it.
I didn't know this kind of thing existed,
but I'm glad it does.
>> We can only achieve the future
outcomes that we can envision and so
if we can imagine more possibilities
then we have more choice in where we go.
>> Hendricks says Black Mirror is, and
bear with this tongue twister,
the first futurist fiction for futurists.
>> It is something that,
internally within our culture,
everybody watches Black Mirror and
has opinions about it.
And things we like and dislike.
And we'll pick it apart, but
it's worth picking apart.
>> He says the show has gotten people
thinking about the implications of
the technology we're beginning to use.
Though it's set in the future it's
grounded in very real existing
possibilities and liabilities.
>> So how far are we from the futures
depicted in Black Mirror?
That's what Dylan Hendricks is going to
talk about. In some cases, not far at all.
This episode is pretty straightforward.
It's basically 40 minutes of
this robot chasing this woman.
Hendricks says this robot
is based on an existing one
developed by a company
called Boston Dynamics.
They are quadrupeds with
machine learning capabilities,
similar to artificial intelligence.
>> They actually learn by being in
the world, by sort of, actual feedback and
experience of the world.
So, they're learning how to open doors,
how to recognize objects,
how to navigate terrain.
>> Hendricks says it won't be long before
this technology is used by the military,
but it also won't be
expensive to manufacture for
personal, private consumers.
>> That is something that we are gonna
have to deal with, in our lifetimes,
very likely.
This idea of sort of guard robots, that
are so capable that they're terrifying.
>> So that's an example of something
that's not at all far-fetched.
Then there is Arc Angel.
In this episode, a brain implant
allows a mother to monitor her child:
her location, her vital signs. The mother
can see the world through her daughter's
eyes; she can even censor what she sees.
Hendricks says this chip has
a lot of realistic elements and
some fantastical ones.
>> Obviously already, today,
as soon as a child has a phone
now the parent has a choice of like,
do I just track my kid all the time?
You've given them sort of a GPS tracker,
basically.
And so that is something where
the temptation of parents to be able
to know everything a child's going
through at every moment is more or
less kind of a realistic, even sort of
present-day, kind of choice for parents.
>> As a society,
we have the technology to surveil and
control our children
more than we ever did.
So we face a question.
>> Are children allowed to make mistakes?
Isn't that a part of growing up?
And if you stop them from making mistakes
will they make bigger mistakes later?
>> The way the mother sees through her
daughter's eyes Hendricks says that's
the most far-fetched aspect,
but it's not crazy.
>> I mean, it's not impossible, actually. I
will say that there have been studies done
of reconstituting sort of memory images
from people's brains under CT scans.
So the idea that we could eventually
sort of capture images directly
off of the optical nerve and
translate them, and broadcast them.
That's not insane, to think that
we could do that, at some point.
>> In the episode, Be Right Back,
a woman uses this service that recreates
the consciousness of her deceased partner,
based on the digital communications
he made while living.
Then that consciousness is imbued
onto a very lifelike physical form.
Her partner in some way is
brought back from the dead.
>> You could have left me some clothes.
I mean, talk about
an undignified entrance.
[SOUND] That's a bit creepy,
what you're doing.
>> [SOUND]
>> Is there at least a towel?
I'm dripping everywhere.
Hello.
>> So
when it comes to the idea of creating and
imbuing machines with consciousness.
>> For me this is always kind of
a non-starter because there is actually
nothing close to an operational
theory of consciousness,
that is possessed by anybody in
the scientific community, right?
Everything around consciousness
is philosophical.
>> Hendricks says without a theoretical
understanding of what consciousness is,
there's no way to theorize a path for
creating it.
We're stuck.
If we're to get any closer,
it will be the kind of discovery,
on the level of discovering fire,
something that changes everything.
What we do have is machine learning:
computers can cull enormous quantities
of data and learn from them, so
they can perform in ways that
we don't program them to,
but that they've taught themselves to.
The effect can appear
to us as consciousness.
But there's kind of a big difference there.
It's a recurring paranoia in
Black Mirror that machines or
entities will be able
to hold consciousness.
Ours or their own.
It comes up in the episode USS Callister.
It's about a virtual reality world
that players can control and design.
But, and
this is a minor spoiler, the people in
the simulations are actually sentient.
And because this guy's a jerk and
he's abusing them, the abuse is real.
>> Well all right.
Check security protocols.
If they've expired, you're in trouble.
>> Yes Captain.
>> Johnny recheck those probe results.
No room for errors.
>> Of course Captain.
>> Helmsman Packer.
>> Yes Captain.
>> Vanilla latte.
Skim milk.
>> At once.
>> Walton.
[MUSIC]
Exit game.
>> But, like I said.
The question of creating consciousness
in virtual spaces or for
machines is not so
important for Dylan Hendricks.
>> The viable part is this idea
that as virtual reality and
mixed reality technologies become
more mainstream and accessible.
Which is sort of inevitable, because
we've already reached a turning point,
where they're very compelling.
That people will wanna spend more
time in simulated environments.
>> Hendricks is an optimist.
As a futurist, the main bone that he
has to pick with Black Mirror is how
terrifyingly dystopian it is:
>> There's a strong desire for
us to have sort of more identification
of what are the positive futures, right?
What are the things where we use
technology to actually solve problems?
In a real way.
>> So when it comes to virtual reality
Hendricks doesn't just see doomsday
scenarios.
He sees opportunities.
Here's one.
He says if we can't stop humans from doing
bad things to each other in real life
then maybe we can channel that anti-social
behavior into virtual reality.
Here's an example that might
be a little hard to swallow.
>> That sexual assault in
the real world went down
because of these kinds of
simulations existing.
Is that simulation not then
kind of a public good, right?
If it turns out we can't fully deter
behavior but we could channel it
into something where it's less
destructive to real people's lives.
>> Hendricks says this question
will only become more pressing.
And it doesn't just apply to VR.
Does this technology have more potential
to encourage impulses towards
anti-social behavior, or
to mitigate the consequences of it?
Hendricks is asking these questions.
So are parents, so are some technologists.
But who has the final say?
For Philosophy Talk, I'm Liza Veale.
>> [APPLAUSE]
>> Thanks Liza for
that tour of the dystopian
possibilities for the future.
I'm Ken Taylor, along with my
Stanford colleague Josh Landy, and
we're coming to you from CEMEX
Auditorium on the Stanford campus
as part of the university's
Frankenstein@200 project.
>> Our guest today is a physicist and
former dean of the School of
Engineering here at Stanford,
who recently became the 13th
university provost.
Please welcome to the Philosophy Talk
stage, Persis Drell.
[APPLAUSE]
>> Hi.
>> Thank you.
>> Welcome, Persis
>> [APPLAUSE]
>> So Persis, Josh and I
were talking earlier about potentially
dangerous, even monstrous technologies.
I know that's been a topic that
you've been interested in for
a while. When did you first get
interested in these kinds of questions?
>> Well, I think the right
answer is I grew up with it.
My father was a theoretical physicist,
who spent part of his life pursuing the
dream of understanding the natural world,
and then spent a lot
of his life attempting
to preserve the world from the horrors
of nuclear war as an arms controller.
And so,
it was in the house from when I grew up.
>> So where did that leave you?
Ken and I were arguing earlier about
whether we should be optimists or
pessimists.
I mean the threat of nuclear war or
something is on the horizon.
So do you think that there is now or there
is coming up some Victor Frankenstein
type who is about to unleash
something really deadly on the world?
Or do you think basically
we're gonna be okay?
>> I think there are all these engineering
students out there who are working
on things that could unleash
something terrible on the world, but
it could also be something
that's wonderful for the world.
That's the way technology and
discovery works.
>> Yeah, go ahead.
>> Well, I'm just optimistic
that the good will win.
>> Well, wait.
[LAUGH] I mean,
I want to take the world at large.
Cuz we're focused,
we can be focused on the American context.
You know if Hitler had won that war,
if Stalin had not,
you know if Stalin had,
well he did prevail for a long time.
>> He sure did.
>> Visited horror upon horror.
Technology in the wrong hands,
some terrorist gets a dirty bomb.
I mean, looking
at the world in total,
what controls technology in
the world taken in total?
>> People.
In the end it has to be people
taking responsibility for
the technology that they create.
The technology is going to
be invented no matter what.
You can't stop it.
>> Can we keep germ warfare, germ weapons,
nuclear arms, chemical weapons?
Chemical weapons are all over the world.
They're the cheap dictator's
weapons of mass destruction.
Can we keep problematic technology out
of the hands of all problematic people?
>> We can never do it perfectly.
But we've had nuclear weapons since 1945
and they haven't been used since 1945.
It's a great example.
We have biological weapons that we,
so far, have controlled.
Does that mean we can stop working at it?
Absolutely not.
But, you can be,
you have reason to be optimistic.
>> So you look out at the world.
You look out at global warming and
all this type of stuff.
You look out at the world and
you are confident and optimistic.
And you don't look at it like
those folks on Black Mirror.
>> I never watch Black Mirror, so
I don't want to speak about Black Mirror.
>> Your world is more a Star Trek world,
not a Black Mirror world.
>> [LAUGH]
>> We're going out there and
solving all the problems.
>> But if I wasn't optimistic,
where does that leave me?
>> Well maybe vigilant, I mean right.
[CROSSTALK]
>> I believe in being vigilant.
>> So okay.
So what about, you know,
things that are on the rise, like,
for example, video fabrication technology?
I mean it seems like if I understand
correctly, we may be on the verge of
people being able to make you appear to say,
on video, anything they want you to.
>> Yeah.
>> I mean, so
who's gonna control that kind of thing?
>> Really good question.
>> [LAUGH] [CROSSTALK]
>> We'll get to that.
So, we'll try to answer that
really good question and
more really good questions from our
audience after this short break.
This is Philosophy Talk,
coming to you from CEMEX Auditorium
on the Stanford campus.
Our guest is the Provost of
Stanford University, Persis Drell.
>> In our next segment, we're gonna
talk about how we can balance exciting
innovation against social responsibility.
How do we get the upside
without the downside?
>> Invention, attention, and prevention,
along with questions from our
technologically savvy audience,
when Philosophy Talk continues.
>> [APPLAUSE]
[MUSIC]
>> [APPLAUSE]
>> Thanks again to our musical guest,
the Tiffany Austin trio.
This is Philosophy Talk, I'm Josh Landy.
>> And I'm Ken Taylor, and our guest
is Stanford physicist and Provost,
Persis Drell, and
we're thinking about monstrous technology.
>> So we're now taking
questions from you folks.
So if you have a question, please take a
spot in front of one of these microphones
at the front of the stage.
>> So Persis, okay, I mean,
we live in a capitalist society.
I think capitalism makes it really hard
to balance innovation against
social responsibility.
Can we do it?
>> Well, I would say we have existence
proofs where we've done it in the past.
I would say we've done it with
nuclear technologies, and
we've done it with some of the biological
technologies, like recombinant DNA.
Where the community has really stepped
up and taken some responsibility, and
not just pushed technology as far as
it could into commercial applications,
cuz they recognize there were potential
dangers and threats out there.
>> Yeah, yeah, I mean,
you really are an optimistic person.
>> [LAUGH]
>> I'm glad you're our Provost.
I'm glad, we wouldn't want a downer for
our Provost.
I really wouldn't.
Right, I just wouldn't.
But I've gotta say,
I'm much more of a downer.
I just, I'm not sure I believe in
the capacity of capitalist production of
technology to always regulate technology
for the human good, and here's why.
Because the decisions
are made kinda locally,
that is, I'm gonna automate it and
make my company more efficient.
I'm gonna lay off workers.
I don't think about
the aggregate effect of that.
I think about my competitive advantage.
This person competing with me, and
it's not just within this economy,
it's around the world.
If I don't do this, so the pressures
are all generated bottom-up, locally.
But then they aggregate into something,
a mess, right?
>> I think we saw this in spades,
with Facebook, right?
And Twitter, I mean,
the way in which these social media,
their profit is driven by clicks,
and likes, and shares.
And those are driven mostly by
controversial news stories, and so
conspiracy theories and fake news,
well, they're good for the bottom line.
>> So, I think what you're bringing
up is a really good point,
which is that with the two examples that
I gave, the threat was evident early.
You knew there was a really
serious threat out there.
>> I think the threats, and
they are very real, and
we've seen them playing out,
from social media,
machine learning, those we didn't
realize were dangerous.
And so now we're in a somewhat
different situation.
I would also say another stark difference,
in my view, is that in the cases,
again, let me use nuclear weapons or
recombinant DNA.
there were leaders of the field who
stepped up and led. Where are the leaders?
>> Well, right, but that's great, I mean,
I think the threat from nuclear weapons,
you're right, it's stark, because
people used to think about living through
nuclear war and all that sort of stuff.
And then all these studies came out about
nuclear winter, and it's like, my God,
we can't just trust this to the Soviet
Union and the Americans fighting it out.
It's like,
the whole world has a stake in it, right?
And I think global warming's the same way,
but
there's a who goes first
problem kind of thing.
And the Paris Accords were cool, but look
what our current President did, right?
I mean, but still,
there's a who goes first problem, right?
I could free ride off
the world during this.
So, I just think we don't really
have mechanisms that kind of force
the balancing of innovation and
responsibility.
I just think we have precious
few mechanisms that do.
>> Well, in the case of global warming,
I think capitalism will come to our aid,
when the economy starts to
tank because of the effects.
>> But I'd still go back to the Internet and
say the challenge there
is, I think, that the recognition
of the problem is now real.
But I'm not yet quite seeing where
the leadership is going to come from.
If the field itself does
not take some leadership,
I think that's when regulation comes
in in a very heavy-handed way.
>> [CROSSTALK] Make them
an offer they can't refuse,
just like I said in the opening.
>> Yeah, I could agree with you now,
Ken, on this one.
Because when you see what's actually
coming out of Facebook now,
it seems as though either
they're genuinely in denial or
they're in some kind of-
>> Well, wait a minute, wait a minute.
See, again, but
there is the logic of capitalism and
we are a capitalist production.
And markets are cool things.
I don't wanna deny that
markets are cool things.
And I think Facebook's business model
depends on something important,
that they're a platform,
not a publisher, right?
And I think being a platform allows them
to say, let all comers in, and we like that.
We like, there's freedom,
there's accessibility,
if we force them to be a publisher,
well, I don't know.
And then take on all the liabilities and
obligations that a publisher has,
that I actually don't know if
they'd survive economically.
Right, so I think this is non-simple.
How was that decision made?
>> How was that decision made?
That they were-
>> To be a platform and not a publisher.
How did?
>> Because they're good capitalists.
Right?
>> [LAUGH]
>> What's happening to the publishers?
They're getting hammered
by this new technology.
And if they take on certifying,
and verifying, and
distributing, and
being liable and all that stuff,
I don't think they get all those
investors to invest in them.
I don't know, there might be
some of them in the audience.
Would they have gotten all those
investors to invest in them if they said
our business model is we're
going to be a publisher?
>> Maybe, but meanwhile,
if we're relying on people within those
industries to do self-regulation,
it seems to me we're just putting
the foxes in charge of the hen-house.
I'm not sure we're really
going to get very far.
>> You're giving them
competing incentives.
>> Right.
Exactly.
So the incentive structure
seems to be wrong.
And I don't know whether the process.
I love it that you're an optimist,
even about global warming.
I think you're a little more
of an optimist than I am.
Maybe capitalism will kick in
after everything, when it's too late.
But anyway, so what do you think?
Is there a way that we could,
maybe without regulation?
Maybe we could sort of change the
incentive structure at least at the level
of the social and make it uncool
to destroy the planet for cash.
Stuff like that.
>> [LAUGH]
>> So are we talking about global warming
or are we talking about the Internet?
>> Sorry.
>> Just so I know
which problem I'm talking about here.
>> Either one.
>> Pick the one you want to talk about.
Pick the one you can be optimistic about.
>> [LAUGH]
>> Optimistic about.
Well, they're really very, very different.
>> Yeah.
>> [LAUGH] I think on
the issue of the Internet,
I think the threat has become
visible in recent years.
And I actually see
evidence that the thought
leaders are starting to think
about how to address this.
And I'm seeing, for
example, a lot more discussion.
Not my field of expertise, but
let me just put this out there.
More discussion around, say, the subject
of machine learning and
the dangers of machine learning than
we heard say even five years ago or
ten years ago around new
social media platforms.
Now that just might be that the threats
of machine learning are more obvious.
But I actually think it might be that
the thought leaders are starting to have
much more of a sense of responsibility.
The culture of Silicon Valley has been,
as I think was articulated in
a New York Times article quite recently,
build it and ask for forgiveness later.
I think we're starting
to move away from that.
And so I see the,
again optimist that I am,
the beginnings of that development
of social responsibility.
We see that among our
students here as well.
>> Right, I wanna ask you about the
students next time, but I wanna back
off to the, I'm gonna ask about
the thought leaders and the innovators.
I grew up to believe let science and
innovation go.
You said, innovate, build it,
then worry about the consequences.
Because if you start out,
do you believe we should ever restrain
technology and science in advance and
say don't go there.
Don't go there, don't go there.
>> I don't think that works.
I think certainly at the basic
discovery scientific discovery stage
you don't know enough.
And then even when you're going
into the technology stage,
you may discover a biological weapon.
You may discover limited
nuclear weapons and
decide we're not gonna build
limited nuclear weapons.
But, you know I have to know what could
be done there, because somebody else,
that bad guy on the other
side of the field
might not have had the same
moral sense of responsibility.
So I have to be able to defend myself.
>> But
science is about dissemination.
If we are talking science,
in universities for
example, and
we are not talking private industry, we do
not keep discoveries secret; we
disseminate them, we publish them, right?
I suspect,
tell me if I'm right about this,
probably every college physics student
knows how to build a nuclear bomb.
>> I hope not.
>> No it's actually not that easy but-
>> [LAUGH]
>> I mean I'm not saying they have
the capacity but they know it has to be,
or they can figure out what has to be-
>> Well,
they know they all know the basic physics.
>> Right.
>> The technology to
make it to really have-
>> Right, to make it miniature and
all that stuff-.
>> And to ensure criticality and
so forth is hard,
but I am told if you search hard
enough on the Internet you can
find how to make smaller nuclear weapons,
so you control the fissile material.
>> And take the Iranians who Trump is so
worried about and
the world is worried about.
I mean, they're smart enough.
>> Sure.
>> They're technologically advanced enough
that if they set their mind to it, now
the question is will we put a stop to it?
But they're technologically
advanced enough to do this.
>> Right.
>> You can't take this knowledge and
put it in some bottle.
So-
>> Right, so that was why so
many people worked so
hard on the non-proliferation treaty.
And we did get it, and that's where
the focus now is, on proliferation.
It's not so much on mutual destruction and
worry about the Soviet Union,
which doesn't exist anymore.
>> So, sounds like there's actually,
in a way, three stages here.
There's the stage of invention.
And I agree with you, we can't stop that.
>> Right.
>> But
then there's the stage of recognizing
that it's potentially dangerous.
>> Yes.
>> And maybe we should be
encouraging people to do that.
And then potentially down the line,
there are interventions that we can use,
regulations or social models.
>> Right, and I think that's
a wonderful separation, Josh.
And it's at that middle stage
where it's absolutely critical for
the technologists themselves
to have a sense of moral and
social responsibility towards what
they see themselves developing.
That is the critical moment.
Because if it comes later and
it's regulation, it's not so good.
>> You're listening to Philosophy Talk.
We're talking about monstrous
technologies in front of a live
audience at the Stanford Campus
with our guest Persis Drell and
we've got questions from
that live audience.
I'll go from one side to the other
of the room, I'll start with you.
Welcome, step forward,
tell us your name, where you're from.
Don't tell us last name, just first name,
cuz there are crazy people out there.
>> [LAUGH]
>> Welcome to Philosophy Talk, sir.
>> Good evening, my name is Jaesh.
I did my master's here at Stanford.
Thank you for having me here.
Very quickly, my question is, you made
a very good point about not being able to
stifle or to stop or slow down
the invention phase of technology.
But what about the aspect
of influencing it?
Because there's a lot of discussion around
how for example, all these things like
Twitter, Facebook, Google were
built by a select few of the world.
And how that's affected the way those
technologies have shaped our world.
For example Twitter is a great
place where people are harassed and
bullied all the time.
And people have discussed how
it has implications for people's
lives, both positive and negative.
What are your thoughts on how
we can sort of change that?
How can we add more diversity in both?
In terms of the people who
are inventing those technologies,
as well as the ideas that
we use when we do that.
>> Got an answer?
>> You're looking at me,
I was looking at Chuck.
>> I'm looking at you.
>> So Twitter,
it's a tool that has marvelous benefits.
And it's used in really negative ways.
I ponder a lot whether there's a way of
ensuring at least some accountability
in Twitter.
And my understanding is that
many of these companies
actually have rules that they
don't even enforce themselves,
about fake accounts and so forth.
So that is a place where, I believe,
Twitter, let's pick on them,
should actually enforce a little more
accountability in the use of its platform.
And along with that, it would be great for
society to realize that just because
speech is protected doesn't actually
mean it's appropriate all the time.
>> Right.
>> So what do you think of the following?
I think in America, and
I think this is really
connected to technology too,
although it may not
sound like it at first,
we have the wrong model of a corporation.
We have a shareholder
model of a corporation.
A corporation is supposed
to serve its shareholders.
I think we need to advance
to a stakeholder model,
where a corporation should
serve its stakeholders.
And who are the stakeholders?
Lots and lots of people.
So that the interests of lots and lots
of people are somehow brought to bear.
I know there's a debate about
this in economic theory, and
all this sort of stuff.
But it seems to me until we make the
corporations accountable in a broader way.
Either we're gonna have the heavy
hand of government or something.
I mean what do you think of that?
>> I'm a physicist.
You're a philosopher.
>> [LAUGH]
>> We're redesigning the US economy as we
speak.
I don't know.
[LAUGH]
>> [LAUGH]
>> [LAUGH]
>> [APPLAUSE]
>> I got another question.
Welcome to Philosophy Talk.
>> So I'm Carl.
For a long time I was at MIT but
now I'm here at Stanford.
And it has its advantages here.
But anyway we're building
the Internet of Traitorous Things
where a device is traitorous if it works
against the interest of its users.
The biggest companies in the valley think
that within ten years the majority of
the people in this room will
be wearing holo glasses.
What are we going to do when
everything that you do, see, and
say can be recorded and
owned by somebody else?
>> Stay home and never go out-
>> You don't have to wear the darn
glasses.
You don't have to give all
your information away.
>> Luddites aren't gonna make it.
Do you have a cell phone?
>> Yes.
>> You're in.
[LAUGH]
>> [LAUGH]
>> You responded to the last set
of questions, you said we're philosophers,
you're a physicist,
here we are redesigning the economy but
this is part of my point and
I wonder what you think about this.
We all have to be into this making and
remaking of the world together.
And I don't want to train our students
to say, hey look, I'm an engineer,
I'm a physicist.
>> I totally agree with you.
>> Right, so how do we do that?
How do we get this conversation
to be a broad conversation
involving all these people?
It seems to me that's what we need,
we need the physicist sitting with the
philosopher sitting with the politician
sitting with the journalist right?
>> And the economists.
Let's not forget the economists because
they actually know what they're
doing in this case.
>> [LAUGH] Maybe.
>> [LAUGH] Maybe [LAUGH].
>> But there are some models.
I mean, hospitals have ethics boards and
they're responsible for certain kinds of-
>> Well, aren't we responsible for
there being review boards for
human subjects,
because of some experiments that
took place here at Stanford?
>> The prison experiment?
>> Yes.
>> Yes, the IRB.
>> Right, so it's not as though
everything's the wild west,
and we just have to
throw up our hands. It seems to me
we could shift, we could decide
as a society we want to shift some of
these areas of technological innovation
more in the direction of things that
have a little bit of inbuilt policing and
responsibility.
>> Yes we could.
And it wouldn't be that hard to do.
The first thing though is
to acknowledge the threat.
And I think that's really
what is just now happening.
So I agree with where you're going, but
I would point out that
it's probably unrealistic to think
it will all be in place now.
Because I don't think that,
until a few years ago,
we recognized the magnitude of the threat.
>> Welcome to Philosophy Talk.
What's your comment or question?
Who are you and
what's your comment or question?
>> My name is Victoria, and I'm currently
an undergraduate student on campus.
So I guess my question is
perhaps philosophical in nature.
Structurally, the rise
of artificial intelligence, or
by extension the aspiration to
develop and research it,
implies the displacement of conventional
workforces, as computers start
taking on complicated
tasks beyond the merely monotonous.
We've seen cases where they're
starting to take on diagnosing diseases,
producing legal documents, or
even creating poems, etc.
And as these
abilities get magnified,
the workers who are most vulnerable
in society are harmed most.
So in that case, do you think there's
a moral responsibility of companies or
people who participate in AI development
to compensate for that, in the form of
taxation or otherwise?
And how do we go about-
>> [LAUGH]
>> kind of thinking about that
conceptually?
>> Do you have a view or
do you want to punt this one?
>> Well I do, but I actually would love
to hear from the philosopher first.
[LAUGH]
>> Well I think this is
a really hard question.
And I think it's a huge question.
Because, speaking of the economists,
there's a disagreement
about this.
People used to believe that in the old
days technology destroyed jobs but
it produced compensating jobs.
That's actually not as
true as you might think.
That's a complicated thing.
But some people believe the day is coming
when technology is just a net destroyer
of the demand for human labor, and that we
could see, in the next ten or 15 years,
the demand for human labor
decrease 40%; in the next century, we
could see the demand for
human labor almost go away, right?
How do we live in such a world?
Of course there's a moral
responsibility but who does it fall on?
It's really hard. I mean, that is
among the hardest questions we face.
>> I'm not saying it's an easy question.
But I think we can apply
similar principles here
to the ones we apply elsewhere.
And just say, look, if there is something
that a reasonable person or set of people
could predict, then you should be
setting about trying to predict it.
And if you're not even trying then I
think you can be held liable for that.
>> So, there are a lot of questions, and
we've got to take a break.
We'll start the next segment with a bunch
of questions after some more music,
but I remind you,
you're listening to philosophy talk.
We're coming to you from Cemex Auditorium
on the Stanford campus as part of this
university's Frankenstein at 200 project.
>> We're thinking about monstrous
technologies with Persis Drell,
Stanford's new provost.
>> In our final segment,
we'll ask Persis how she thinks we should
train the engineers of the future.
>> Educating for responsibility plus
more questions from our own audience
when Philosophy Talks continues.
>> [APPLAUSE]
[MUSIC]
[MUSIC]
>> [APPLAUSE]
>> Thanks once again to our live
musical guest, the Tiffany Austin Trio.
I'm Josh Landy and this is Philosophy Talk.
The program that questions everything.
>> Except your intelligence.
I'm Ken Taylor.
We're thinking about
monstrous technologies.
With Persis Drell from
Stanford University.
So, we've got a whole bunch of questions.
So why don't we start out with them now.
I don't know.
I think I was on this side of the room.
Welcome to Philosophy Talk.
>> Hi, I'm Sarah.
I'm an undergraduate here at Stanford.
And we live in a world in which there are
huge socio-economic disparities, and
as newer technologies become
available they're often only
available to those of higher
socio-economic status.
So I was wondering what y'all's opinions
were on how inventors and companies can
ensure that those disparities don't become
so big that they're unable to be overcome?
>> The presupposition of her question
is that it is the responsibility of
the technologists themselves.
Do you share that presupposition?
Or is it a broader
societal responsibility?
>> I would like to say I think broader
society should take responsibility.
I like it when technologists do
take responsibility too, but
I would also like to
point out in some ways
certain technology has been
incredibly democratizing.
So it has cut both ways but ultimately for
me, and this probably reveals a certain
amount about my political persuasions.
I do believe society should
be taking responsibility,
to ensure that it is available broadly.
>> So, how do we do that?
You're not a political person.
>> [LAUGH]
>> I want to ask.
>> We could go there, but.
>> We make you philosopher king.
>> Yeah yeah, but I do want to ask you
a question about something you said, now.
>> Mm-hm.
>> I mean, you said technologies
that are democratizing.
Or sometimes, technologies
that look democratizing.
So the internet is supposed to be
a great democratizing technology, right?
Sometimes technologies that
look democratizing just break down
the public square, because they
substitute noise for knowledge.
One of the things that these top
authorities did is they certified stuff
as legitimate, as knowable,
as worth paying attention to.
When everybody has access-
>> But
let me give a very specific example.
Theoretical physics used to be that if you
wanted to do theoretical physics, you had
to be at one of those pillars like
Princeton or Stanford or Harvard or
Oxford, and if you wanted to learn about
the hottest, latest thing in theoretical
physics, you had to write away with
a little postcard for a little preprint.
That's gone.
And now theoretical physics innovation
comes from across the world;
it's been phenomenal.
>> That's true.
That's the up side, right?
>> I'm the optimist.
>> [LAUGH]
>> Welcome to Philosophy Talk, sir.
What's your comment or question?
>> I'm Wade from Portland, Oregon, and
as an aspiring engineer in a very large
organization, how do you feel one should
navigate these considerations when
you're just a tiny cog in
maybe a much greater machine?
>> Yeah, there you go, Persis.
[LAUGH]
>> Wow.
>> [LAUGH] That's for you.
>> Never lose your moral compass
no matter what level you are in
the organization and
hold yourself accountable to it.
And it will guide you.
>> Okay, so those are inspiring words.
So this brings me to ask you: are these
words enough to inspire the next
generation of budding engineers?
We've got a room including
some current students here.
What do we do to try to make sure
that the next generation are gonna be
helping the world, rather than being
some new Victor Frankensteins,
creating without the proper vigilance?
>> So, I'm a huge believer
that engineers need to be
educated not just in engineering, but
they need to be educated broadly.
Because they need to care about
the impacts of the technologies that
they're gonna be involved in inventing.
They don't get that by taking more
physics classes or more math classes or
more engineering classes.
They get that by taking
a philosophy course or
they get that by taking
a literature course and
being forced to think through
the impact of what they're doing.
Or, if they're really more directly
interested, take a social science course.
But, I think that educating engineers
to be engineers only is criminal.
>> I totally agree with you.
I say to students, I put it starkly and
they sometimes gasp.
>> [LAUGH]
>> I say, Hitler had his technologists,
Stalin had his technologists. It's
not enough to be a technologist.
If you're just the technologist,
you're fit to be a tool of some broader
social thing, but is that what you want?
You wanna just be a tool? And
students sometimes are taken aback when-
>> [LAUGH]
>> I say this. But
that brings us to how we make them not
just be tools. How do we educate them
to be technology leaders and thinkers?
I know, I think you would like them to
take a philosophy course, but you're not
gonna force them to take the philosophy
course.
You're not gonna require them to do that.
>> Well I do believe that
if you require things,
people do them because it's required but
if they don't come to subjects willingly,
they're not gonna absorb and
learn them, and internalize them,
and then it's just a waste of time.
So, they have to come willingly.
At Stanford and most other institutions,
we have gentle ways of encouraging
people to get breadth.
They could be a little
less gentle in some ways.
They could be a little more prescriptive.
>> But here's what I think about
a university education, and
I think it is in a great crisis.
I believe we're in a crisis state.
And I think there are two
sources of the crisis,
but we're focused on one source.
I think we have become too focused on
imparting to our students a narrow,
technocratic education.
Right?
And partly, our students demand,
want that of us,
because of their parents,
and because we have this silly,
undeserved reputation as Get Rich U and
all that sort of stuff.
And I think we need to address this.
And I don't think this is a small thing,
I think it's a huge thing.
>> Okay, but there's another piece of it.
Which is that I do think we have
students majoring in technical subjects,
computer science, whatever, for
the wrong reasons.
And helping them choose the subject they
want to major in for the right reasons is
obviously part of our responsibility, and
we could be doing it better.
But I also think that, and
here I'm going to speak as somebody who
was only in the school of engineering for
two and a half years.
I'm not an engineer.
I've never taken an engineering course.
But what engineering did, and
is doing, which is very impressive, is they
actually think a lot about not just what
they need to impart to the students,
but what the students want to learn and
how they want to learn it.
>> Right.
>> And
that focus has helped some of
the majors, and CS is one of them,
be incredibly attractive,
with really good on-ramps.
>> I understand that.
>> And I think other subjects
certainly my own subject could learn
a few lessons from that.
>> Yeah.
Welcome to Philosophy Talk.
What's your comment or question?
>> I want to defend some economists now so
[LAUGH].
>> [LAUGH]
>> I teach economics.
My name is Makia.
I teach economics, business, and
computer information systems.
And one of the things I wanna presuppose
is that economic growth is good,
first of all.
Can we agree upon that? Okay.
So, economic growth is good. One of
the leading sources, just at the
principles level of economics,
I'm quoting Robert Hall and John Taylor,
because that's the textbook that we
use for Principles of Economics.
And most of the growth that's taken place
in the last 50 years, in the modern world,
in the First World,
has been due to technological increases.
So, increases in productivity, not
increases due to more people working or
working harder.
And that economic growth
also brings about good things.
We have longer life expectancy, and
you can't just say people live longer.
When people live longer,
the useful life is a lot longer.
You have people, I think,
probably 50 is the new 30 now.
[LAUGH].
>> I'd like to think so.
>> Things like that.
[LAUGH]
>> Let me ask you a question though.
Because as an economist,
what side are you on?
Productivity?
One person producing more production
per capita or something like that.
But what do you think about
this debate of whether
technology is going to diminish,
decrease the demand for human labor?
Is that gonna happen or not?
>> That's exactly my next point.
One of the things I tell my macroeconomics
students, when we're talking about
economic growth,
cuz that's a large focus
of a macroeconomics class,
is: where are people
migrating to in the world?
Are they migrating to the robots or
away from the robots?
Are they migrating to
where the factories are,
where the computers are, or away from them?
So we see that, you know,
at this point in human history,
we probably have more people working
as a percentage of the population,
and especially adult-age
people with opportunities.
And you also have to
take a look at one thing:
people will be displaced.
To a certain degree,
technology displaces people.
But those are the people who,
a lot of the times, don't
have the education to adjust.
So you have to be malleable.
And to a certain degree,
machines or robots,
I'll just [INAUDIBLE],
are substitutes for humans.
But to a greater degree,
they're complements.
So machines and people work together.
And that is why you see increasing growth.
>> Okay so [CROSSTALK] [LAUGH] [INAUDIBLE]
>> That's
my question: if we have technology and
it's increasing growth and
good things are happening, why is that bad?
>> Well, I'm no economist, I'm no futurist.
The fear is, take driverless technology.
There are three million people,
I think, in this country
who make their living off driving things,
right?
And they're gonna be
displaced pretty quickly.
And that's a good, stable, middle-class
job, and those people are not malleable.
I mean, you could say be malleable, but
people aren't that malleable.
I mean, a 50-year-old truck driver
in Pennsylvania who gets displaced,
he's just displaced.
He's not gonna do anything else.
And how do we deal with that?
>> And that's not to mention
climate change, right?
We're making all of these incremental
advances in life expectancy and
things like that but this is coming
at the cost of future generations and
we're not thinking about them.
Ultimately all of these advances
are just gonna be completely dwarfed
by the challenges we're
gonna face in the future.
>> So you want to respond to that?
>> No I just wanted to
end on an optimist note.
>> [LAUGH]
>> That's it.
[LAUGH]
>> It's all gonna be okay.
>> No, it's not gonna be
okay if we don't work at it.
>> Right.
>> But we have to work at it.
>> Right.
>> And we cannot give up and
cede that responsibility to anyone else.
>> Okay so this is what we're gonna do.
This is basically the end of the show
except there are people standing in line
with questions and
if you're standing in line with questions
we're going to take your questions.
I'm going to make a clean break,
you probably won't get on the air but
you'll get to talk to us anyway.
So we'll take these people
who are standing in line.
Come up to the mikes.
And then I'm gonna stop.
And then you're gonna say something wise
and then we'll say goodbye to you, okay?
>> [LAUGH]
>> But right now we're gonna take
these three people.
>> Even wiser than that?
>> Yes [LAUGH]
>> [INAUDIBLE]
>> Okay, he needs a clean break.
Welcome to philosophy talk sir,
what's your comment or question?
>> One that I know
about is self-driving cars.
There is a big push for that, and
the technologies that are required are so
complex that a lot of scientists
and engineers are working on this. But
all the scientists and
engineers who are working on this,
they don't really ask about the future
implications of self-driving cars.
But a lot of the burden is not on them,
it's on the capitalistic system,
it's the heads of Google and
Ford who actually
want to make a lot of money out of
this self-driving car business.
So how do you think we should
push back on self-driving cars, for
example as consumers?
>> Mm-hm.
Well, if we don't like self-driving cars,
no one is gonna force us to get them.
The fact is that if we have self-driving
cars, they will be extremely attractive.
They will make commutes more attractive.
They will probably make highways safer.
>> No doubt about it.
>> So I think you could argue
that the technology is good.
You worry about the loss of jobs, and
that is then a societal responsibility,
for either retraining or slow evolution.
I think actually one thing that we can do,
I don't know if we will do it,
is recognize that the transition between
every vehicle having a person in
it actively driving and
a fleet of vehicles with no one in it is
actually a slow transition, and so you
could do it in an evolutionary way if you
really thought about it and planned it.
>> That's definitely right.
I mean, I think the deeper point is that
we've gotta get past thinking of
technology and technological innovation
as just a thing unto itself.
And in educating our students,
we have to do this too.
That just, well, we produce it,
it changes the world?
We, my god, we didn't have any agency
in changing the world that way?
How does technology change the world?
By being deployed, by human beings,
in that context-
>> It's probably a much bigger system.
>> Right.
>> Right.
>> Say again.
>> It's part of a much bigger system.
>> Yes, and so we have to think
about the whole systemwide thing,
and even a young designer-
>> Can do that.
>> Can be alive to the fact, that I'm
entering a large complicated system, and
I'm a thought leader, and
I went to a place like Stanford, and
I should be reflective, and
I should be a citizen, and all that stuff.
>> I mean it gets back to
Persis' earlier point, so,
don't just study your particular field,
but learn about human psychology,
learn about macroeconomics, learn about-
>> Right.
>> Dance.
>> [LAUGH]
>> Exactly.
>> [LAUGH]
>> First thing.
>> Welcome to Philosophy Talk.
So I'll just take this one and
then we'll have a clean break.
>> Okay, good evening.
My name is Shelby and
I have engineering and
business degrees from here at Stanford,
and there's two points.
I wanna be an optimist.
I am an optimist somehow or
other, but logic gets in the way.
>> [LAUGH]
>> And when you talk about people using
technology, people being responsible,
the problem that I see there
is it has to be every
person responsible forever.
I mean,
we've got things going on in North Korea.
We've got this and that,
they're doing CRISPR all over the world,
and creating stuff,
I mean it's just out there, and
that's the one thing, and
then the other thing is that we have AI.
Now AI is a different
monster from all the others
because it can have a will
of its own philosophically,
potentially, and we don't know what
kind of will it might develop.
So those are, now, so what's the question?
How can we not be pessimistic even though-
>> [LAUGH]
>> Yeah, that's one
for you, for-
>> [LAUGH]
>> But I gotta say,
AI is coming like gangbusters.
>> Yeah.
>> AI is coming like gangbusters,
and the promise, John McCarthy,
our former colleague, our late colleague,
they had this meeting back in,
when was it, 1950 or something like that.
They got together.
They go, by 1970 AI.
Okay, that was a little premature, right?
But AI is coming like gangbusters,
and the data's coming.
Anything a human can do, some AI
software will be able to do better.
That's just coming.
>> I'll take that bet.
>> Yeah, I will too.
>> No, it's coming.
Anything [INAUDIBLE].
>> I wanna see someone design
the software to detect-
>> Look, it's already the case.
>> [LAUGH]
>> And this isn't even a super intelligent
AI, it's just machine
learning techniques.
They can already out-diagnose
your average doctor.
Right?
>> They diagnose, fine.
>> Anything that involves
pattern recognition,
that involves-
>> We do a lot of things that are beyond
pattern recognition.
>> [LAUGH]
>> I'm talking about today and
what's coming next.
It's coming.
Okay, we'll take a vote.
>> If you wanna be-
>> [LAUGH]
>> Just try Google Translate,
then-
>> Yeah.
[LAUGH]
>> You'll sleep better.
>> [LAUGH]
>> But you know how Google, nevermind.
>> [LAUGH]
>> Welcome to Philosophy Talk, sir.
>> Yeah hi.
I'd like to just compliment
the last speaker because he
usurped most of what I wanted to say.
>> [LAUGH]
>> You talk about technology, but
there isn't a technology, and I'd like
to at least just have you distinguish,
as he was trying to say, like the world
of medicine has done magnificent things.
You cannot deny that.
Yet on the other hand, in areas where
they have begun to encroach on truly
dangerous things, the world of medicine
seems to have taken it seriously and
they are doing something about it,
and I'm presuming it's because
the government has been bothering them,
because it's really a serious issue.
Somehow the world of technology
is focused on kiddie things and so
forth, that are fun, and
there is this issue of
the fact that I think somebody
like Fowler writes a book, and
he's obviously a little extremist, and I
haven't finished the book, to be truthful.
But the reality is that he's saying good
things about the fact that the owners
of these magnificent six companies, or
whatever they are, really think they own it,
they're very God-like, and
they are determined to guide our will.
I mean, they are the gatekeepers.
We learn what we wanna learn from them,
they can model it, they can modify.
It's a dangerous issue, and
if you think that they're going to
ultimately come forth and say, well, we're
gonna drop all of this, forget it.
I mean, to be pessimistic is one thing,
but to do something about it is going to
require, I think, ultimately that
there's a stand up sort of thing,
sort of like Florida youngsters
getting up and saying enough already.
There are gonna have to be some groups
that stand up and get some action out of
a congress or someplace that says, look,
you can't let this go on.
You've created a division of labor,
I mean a division of the money, that's
unacceptable; you've created the ability
to take everybody's privacy and destroy it.
I mean, you have,
that's what they were saying.
You have no privacy.
You own a phone, you have no privacy.
I mean, it's just-
>> Okay, Persis,
this just goes back to where
are the grown-ups, right?
>> Yeah,
it goes back to where are the grown-ups.
>> Right.
>> [APPLAUSE]
>> What I think is, it will be fascinating
when the history of this is written,
because I'm not sure even
the owners of those companies had
a clue what was going to be coming.
They have a choice to start
taking some responsibility, or
government regulation will come in and
break them up the way AT&T was broken up.
I mean, there is an ultimate
authority there, we think.
But, I'm not sure that's the right answer.
So, I actually do hope the grown-ups
stand up and start working at it.
>> Okay, the last question.
>> Thanks.
I wanted to ask about the discussion
you had about encouraging humanities
education.
My husband and I are both alumni here.
He was a philosophy major and
I was an engineering major, and
we discuss this a lot about our own
children now where he hopes they get
a strong humanities education.
>> And you?
>> And I say absolutely.
It's good to get some, but
they better major in-
>> [LAUGH]
>> Engineering.
It goes back to, I find you're employable
if you get the engineering degree, and
I do believe you're a better person for
having a humanities education,
but then what happens after
they get out of college?
They need to pay for their rent.
They need to do all these things.
So, I mean, how do we close the gap there,
and encourage more of that?
>> I take it that Persis wasn't
necessarily recommending that people major
in a humanities subject.
>> [LAUGH]
>> No.
>> Right?
>> But so-
>> So the thought could all just be, look,
if you're gonna be an engineer,
also learn some other things.
I think that's a good compromise.
>> Well, but I think, if I were to
lay my bet on the table of the degree
of the future,
it is not the pure engineering degree,
it's going to be a social science
degree with computational literacy,
because you need to know the questions
to ask, and there are these huge
societal challenges, and I do believe the
social sciences help you understand what
questions to ask, but you can't understand
them without computational literacy.
So I really think that's the magic
combination that I would put my money on.
>> So
if you really want your kids to get a job-
>> So, look, the degree of the future-
>> I have that parental inference.
>> [LAUGH]
>> This isn't going on
the radio, so I can say this.
>> [LAUGH]
>> This is the Stanford audience.
The degree of the future is
the Symbolic Systems degree at Stanford-
>> [LAUGH] And with that, [LAUGH]
>> It's dance, dance,
is the degree of the future.
>> [LAUGH]
>> I want to address this,
I want to address this briefly and
then I'll give you a clean break, David.
We are educating the makers of the world,
the remakers of the world.
Human society is constantly making and
remaking itself.
And these Stanford students that we have,
they're going to play a role in making,
and remaking the world.
And we need to educate them,
in multiple things.
We need to turn out excellent students.
We need to turn out the great engineers,
great scientists, great artists,
great literary thinkers, great
philosophers, great social scientists.
But, we also need them each to understand
that making and remaking the world
is a deeply collaborative thing that
requires multiple disciplinary talent.
There is no competition.
We have this stupid thing that's happened.
The students say, well,
are you a techy or a fuzzy?
You better be both.
>> Right.
>> [LAUGH]
>> You better be a te-fuzzy or
a fu-techy, or whatever that is.
>> [LAUGH]
>> Okay. [LAUGH] >> Because if we don't
produce people in whom all of these things
live simultaneously, then the making and
remaking of the world will be a disaster.
Well, and the very-
>> It requires
a moral compass.
>> [LAUGH]
>> Of course it does.
>> Are you training for that or not?
It wasn't on the list I
heard you just recite.
>> Yeah, I'm deeply committed to that.
I'm deeply, deeply committed to that.
Yeah, okay, so now we're gonna
pretend none of that happened, Devon.
>> [LAUGH]
>> So what I'm gonna say is, Persis,
you got one last bit of wisdom for
us, right?
So, okay.
>> Uh-oh.
>> So he needs a clean break.
>> The pressure is on.
>> So Persis,
you got one last bit of wisdom for us?
>> I think we have to hold on to our
optimism, despite the challenges ahead.
>> Well, on that optimistic note,
I'm going to thank you for joining us.
>> [LAUGH]
>> It's been a great conversation.
>> Thank you.
>> [APPLAUSE]
>> Our guest has been Persis Drell,
former dean of the Stanford School
of Engineering and
recently appointed as our
university's 13th provost.
Now this conversation continues
at Philosopher's Corner
at our online community of thinkers.
Where our motto is, Cogito ergo blogo,
I think, therefore I blog.
And you can become
a partner in that community
just by visiting our
website PhilosophyTalk.org.
>> And if you have a question that wasn't
addressed in today's show, either here or
on the radio, we'd love to hear from you.
Email your question to us at
comments@philosophytalk.org,
and we may feature it on our blog.
Now let's hear from a man of
monstrously rapid speech,
it's Ian Shoales,
the 60 Second Philosopher.
>> I'm Ian Shoales.
At the time Mary Shelley
created Frankenstein,
Galvani had just made a dead frog's leg
twitch with electricity, allegedly.
And grave robbers were
digging up bodies for
medical students to use
in their anatomy classes.
All of which, kind of, provided
the juice, as it were, for her novel.
One of the reasons body snatching was so
reviled was the widespread Christian
belief that come judgement day
we would be resurrected whole.
Cutting up bodies might make such
a resurrection a little more problematic.
I don't know what the deal was on
amputees or guillotine victims.
Though headless ghosts, again, you might
recall, were a bit of a literary trope for
a long while there.
Also, about ten years after
the writing of Frankenstein,
two enterprising grave robbers
named Burke and Hare
took anatomy research a step further by
murdering people to provide cadavers for
anatomy lectures.
Sixteen in all, it is believed.
Their unique method of strangulation
became known as burking,
just as Galvani's name wound up
describing frog-leg twitching.
And the name Frankenstein
came to refer to the monster,
not the creator of the monster.
What's up with that?
The whims of legend often trump
the facts of history as we all know.
The recent passing of the Reverend Billy
Graham is certainly proof of that.
I recall his being regarded as
a relatively liberal figure in
the Evangelical world.
Certainly ecumenical, dining with Popes
and talk show hosts and Presidents, and
a preacher who eschewed fire and
brimstone.
Had a nice haircut, wore tailored suits,
and did not sweat or storm about.
But when he died, my, my, my,
there was a torrent of ill-wishers glad
he was dead because of his anti-Semitism.
And he had lunch with Kissinger.
Same guy, but now he's a monster.
We have seen the reputations of Darwin,
Freud, and Marx go up and down,
with Darwin and Marx sometimes achieving
monstrousness, along with misapprehension.
And Freud achieving more and
more irrelevance as drugs replace therapy.
And psychotherapists wander the streets
carrying their sad little signs:
"Will reveal your unconscious
processes for food."
>> [LAUGH]
>> The monsters
of tomorrow are being stitched
together today with gene splicing,
artificial intelligence applications,
autonomous cars, global warming fears and
the tyranny of the algorithm.
[COUGH] These could be real
monsters or just topics for
the blockbusters of tomorrow.
Like cloning dinosaurs, for instance:
it has proven to be not so much a threat to
humanity as a never-ending money machine,
without once achieving reality.
As for the Frankenstein monster himself,
the dissection does not hold the appeal it
once did, although artificial limbs abound,
along with hearing aids, eyeglasses,
mood-enhancing drugs, endurance-enhancing
drugs, and strength-enhancing drugs that can
serve to make any of us an ugly, psychotic
monster with preternatural abilities.
No Frankenstein creator needed.
Right off the shelf.
Eliminate the middle-man.
Also, with advanced post-Freudian
counseling techniques, we can embrace our
new selves, the stray limbs we acquired
legally assimilated into a new,
post-human whole.
And should we choose to run amok, under
medical supervision, of course, and
then escape to the Arctic, global warming
will assure that we are not fleeing from
pursuers, leaping from ice floe
to ice floe, but relaxing
on the newly warm and
placid sea on a flotation device,
catching up with the latest
fake news on our smartphone.
And rubbing sunscreen
on our brand new legs.
>> [LAUGH]
>> A little R and R and
then back to DC to lobby for the
trans-human movement, Hacked Lives Matter.
Won't you please give?
I gotta go.
>> [LAUGH]
[APPLAUSE]
[MUSIC]
>> Philosophy Talk is a presentation of
KALW, local public radio, San Francisco, and
the trustees of
Leland Stanford Junior University.
Copyright 2018.
>> Our executive producers are David and
Matt Martin.
Special thanks to the Stanford School of
Engineering, the Symbolic Systems Program,
the Department of Philosophy,
and the Program.
>> Thanks also to Sun Lee, Emily King,
Dan [INAUDIBLE] and
our musical guest Adam Schulman on piano,
David Uhl on the bass, and the one and
only Tiffany Austin on vocals.
>> [APPLAUSE]
>> The senior
producer of philosophy
talk is Devon Strolovitch.
Laura Maguire is our Director of Research.
And our marketing director
is Cindy Prince Baum.
>> Support for Philosophy Talk comes from
various groups here at Stanford University
and the partners in our
online community of thinkers.
>> The views expressed or misexpressed on
this program do not necessarily represent
the opinions of Stanford University or
our other funders.
>> Not even when they're true and
reasonable.
>> [LAUGH] The conversation continues
on our website philosophytalk.org,
where you too can become a partner
in our community of thinkers.
I'm Josh Landy.
>> And I'm Ken Taylor.
Thank you for listening.
>> And thank you for thinking.
>> That's a wrap.
>> [APPLAUSE]
[MUSIC]
>> [APPLAUSE]
