Good afternoon.
My name is Melissa Nobles.
And I'm the Kenan Sahin Dean of
the School of Humanities, Arts,
and Social Sciences
here at MIT and also
professor of political science.
I had some remarks
prepared today
in advance of the
next panel on ethics.
And then after hearing
the conversation
with Mr. Friedman
and Dr. Kissinger,
I thought it provided
actually quite
an appropriate
introduction to this panel.
And so I kind of quickly
rewrote my remarks
in keeping with what I
heard this afternoon.
So in this talk, we at MIT
were admonished to think
deeply and carefully
about the social implications
of AI in particular
and computation in general.
He also suggested that the
discipline that I study,
political science, and
particularly politicians
are not really
keeping up with all
of the technological advances.
And in a certain way,
that is certainly true.
The humanities, the social
sciences, and the arts
are all grappling very
deeply with the ways
in which computation
is changing the world.
And it is in effect changing
the way we study the world.
And our expectation is
that the bridge hires
being contemplated
in the college
will go a long way toward
helping us achieve that goal.
But at the same time,
the conversation
also reminded us
that technologists
themselves must much more deeply
understand what they are doing
and how what they are doing
is actually deeply changing
human life and taking that on
in a really deep and intentional
way.
So as I understood
it then at the end,
he gave not only President
Reif, but he gave all of us
a homework assignment.
And that
assignment is this.
It is to be truly
collaborative in our endeavors,
because the welfare literally
of humankind rests on it.
And it is with
that gravity that I
hope we will think
about the next panel.
So I call the panelists
up to the stage.
Are you all coming?
They're coming.
So our panel will again be
moderated by Tom Friedman.
And they will explore more
broadly the social implications
of computation.
And I'll introduce them
now once they're seated.
If you all, when
I say your name,
you can just raise your hand.
The first is Ursula Burns.
She's the executive
chairwoman and CEO
of VEON, a leading global
provider of connectivity
and internet services
headquartered in Amsterdam.
We have Ash Carter, who is
the director of the Belfer
Center for Science and
International Affairs
at the Harvard Kennedy
School and a former US
Secretary of Defense.
Jennifer Chayes is a technical
fellow and managing director
of Microsoft
Research New England
for New York City and Montreal.
Joi Ito is director
of the MIT Media Lab.
Megan Smith is the
founder and CEO of shift7,
a company driving tech-forward
innovation for faster,
scaled impact.
She's also a former US
Chief Technology Officer.
And finally Darren
Walker, president
of the Ford Foundation, an
international social justice
philanthropy.
As I mentioned,
their discussion will
be moderated by Tom Friedman,
the three-time Pulitzer Prize
winner and a weekly columnist
for the New York Times.
Please join me in
welcoming them.
[APPLAUSE]
Wow, what an all-star cast.
This is great.
What a great way to
conclude this seminar.
Joi, I'm going to
start with you.
You're on the spot, pal.
And I'm going to start--
I happen to do a column--
I was in India last week.
And I did a column on
how AI is disrupting
the outsourcing industry.
And so we have comments
on our columns.
And I was reading the comments,
because they were particularly
interesting this week.
And there was one
letter there that
came in that made me think
of you and our prep call.
And the letter was from
Robert W. in San Diego.
And he said, "A
letter to the editor.
What we call AI is really
just machine learning.
My cats display
some intelligence.
If I come home from the
store with a bag of things
and toss them on
the couch, my cats
will run to the bag
to see what's in it.
They will smell it and pat at
it because they wonder about it,
just like our ancient ancestors
looked up at the stars
and wondered what those
points of light were.
It's a sign of intelligence.
A computer will never wonder
about the world or anything
in it, because it's
not intelligent."
And I was thinking about
it, because when we talked,
you said one of the
things you thought
was really important that we
not think of AI as some totally
new thing, that it's really just
extended intelligence, machines
increasing the
power of humans,
and that your big
concern is putting
this power, this
amplified power,
in the hands of technologists
not really prepared for it.
Please elaborate.
So let me just describe
extended intelligence
and talk a little
bit about that.
So actually Norbert Wiener,
MIT professor 50 years ago,
wrote this great book called
The Human Use of Human Beings.
And in it he describes
organizations
as machines of flesh and blood.
And I think corporations
are super intelligences.
They're this aggregate
of all these--
not necessarily more
wise than humans,
but they are more complicated.
And we can barely manage them.
And the way I think
about intelligence,
we already have
machines in the system.
And I think of AI as sort
of jet packs and blindfolds
that are going to come
on and just send us
careening in whatever direction
we're already headed in.
So it's going to make us more
powerful, but not necessarily
more wise.
And I think that a key thing
is to get our house in order
before these jet packs come on.
And I think a lot
of the presentations
before were about that.
I think the problem has been
that a lot of machine learning
and AI has been in the
domain of engineers.
And it has looked like
a technical problem.
And it's been very difficult.
Even though we talk
about explainability,
a lot of the explainability
has been explainability
between technical people.
And it's not
explainability to courts.
It's not explainability
to political systems.
And when you look at the code,
a lot of the technology people
say, oh, we're just technical.
We don't deal with
racial problems.
We don't deal with the
political problems.
Weirdly, when you
look at the law,
they often say the
same thing, too.
Torts law says if you run over
a rich person and a poor person
at the same time, we pay
the rich person more,
because our job in torts is not
to deal with redistribution.
We're just trying to
technically keep the status quo.
So one of the really
interesting things
is that the technology
people and the law people,
which are both kind of
necessary to get this right,
have kind of punted
on the politics piece.
And I think we're
getting to the point
where as these things
are getting deployed,
this interface between
society and engineering
is not yet tuned to the point
where we can integrate society.
And I think we have to do that.
And to me, I think that's why
this college is so important,
that that interface
has to happen
before we put these jet
packs and blindfolds on.
It's a good segue to you, Megan.
You said the
College of Computing
should also have the best
community organizers, the best
school of social scientists,
the best justice technologists.
So you produce
bilingual scientists
with a heavy emphasis on
the non-CS part of the job.
Yeah.
What did you mean by that?
Well, we have an incredible
EECS department at MIT.
Extraordinary.
We don't need to replace
that or replicate that.
Computing is really
for anything.
And it was so interesting
to me as US CTO.
By way of example, we
think computing today
is for certain things--
self-driving car, precision
medicine, these topics.
And yet why is it also
applied to any topic?
And I'm wearing CS for
All-- CS for all people
to be hands-on-keyboard
designers, but also
CS for all topics.
And the challenge--
if I as CTO went to HHS,
Health and Human
Services, a trillion-dollar
agency, and I went to the H side,
to a meeting
on precision medicine,
they'd say, oh great.
Let's get started.
If I went down the hall to
the foster care meeting in HS,
they'd be like,
why are you here?
My computer's working, right?
And these are
extraordinary people.
Our civil servants who work
in HS are just amazing.
They know more about these
systems and [INAUDIBLE]
those things.
So I guess one of
the key things that I
hope for this is
that I hope we're
going to do a lot of computing
on foster care solutions.
I think we should do some
computing on equality,
on child poverty.
We feed 22 million children
in the free and reduced lunch
program during the school year.
And we can only get to 6
million of them in the summer.
That's a big data problem.
I think it's more important
than self-driving cars.
I love my friends who
work on self-driving cars.
But there's lots of us who
could work on lots of things.
And so one of the
things I'm really
excited about for this
computing college, the College
of Computing, and I think
everyone who's working on it
is that we really could
not just diversify tech,
but we could techify
everything else.
And we could really work
on the hardest [INAUDIBLE]
the hardest problems together
in this collaborative way.
It's such an opportunity.
It involves not only
actually solving
some of the ethics problems.
We'd be bringing some of these
topics into the mix of the code
and some of these other
humans in the mix of the code.
We're super lopsided
on who gets to speak,
who gets money, who
gets to set the agenda.
This is really silly.
And I call it TQ,
like tech quotient.
Let's add the TQ to everything.
And we can really embrace
solving a lot of things.
And I think we'll start
going in the right direction
even when we have blindfolds on.
So I haven't checked this.
But I'm going to make a bet that
there was not a single question
on the implications of AI in
the 2016 debates for president.
Can you imagine that
happening in 2020?
Yeah, definitely.
I think that's going to happen.
And it was interesting,
because we're
moving so fast that I
remember when Secretary
Foxx, who was Secretary
of Transportation
with Secretary Carter--
when he came to be confirmed
the Secretary of Transportation,
there were no questions on tech.
And by the time he'd finished,
self-driving cars and UAVs
and all that.
So we're moving fast.
And yes, it will be there.
So Darren, I want to test out--
I've got a book idea.
And I want to test it out
on you since you're here.
And it sort of goes like this.
I actually wrote a book
a couple years ago,
Thank You for Being Late.
And it argued that the
world is being reshaped
by three accelerations, what
I called the market, mother
nature, and Moore's law.
So technology, globalization,
climate change, biodiversity
loss, population-- three
giant accelerations.
And people are polite, so
they often come up to you
and ask, hey, are you
writing a new book?
And I'd say, well
I just wrote a book
about the three largest forces
on the planet reshaping--
I don't have three
new ones this year.
And that actually
got me thinking
about what is going on.
And what is going on seems to me
is all of them are going deep.
And the book I think
I'd like to write
would just be called
Deep, because I
think all these things are
now going at a deep level.
It was very interesting
being in India last week,
because Jio, this new cell
phone company by Reliance,
has so driven down the
cost of cell phones
that suddenly a
couple hundred million
more Indians are getting
access to the network
now, because it got cheap.
And now they're able to go
deep in wholly new ways.
And of course, if you
watched the Oscars,
what was the Song of the Year?
It was Shallow.
[LAUGHTER]
But actually the verse is--
Was it?
It was.
--"I'm off the deep end.
Watch as I dive in.
I'll never meet the ground.
Crash through the surface
where they can't hurt us.
We're far from the shallow now."
And I think we're far
from the shallow now.
And so you've talked about
public interest technologists,
Megan, Joi.
You're all really talking
about if we go deep
without the philosophical,
legal, ethical norms
around just behavior
and privacy,
we're going to be really
far from the shallow.
What are your plans
at the Ford Foundation
to address that problem?
Well, first I think if we are
going to go deep without a view
as to whether AI
can advance justice
and whether it can
strengthen our democracy,
if we're going to engage
in this enterprise
without those questions driving
our discourse, we are doomed.
If we do not
understand that there
is a difference between
private interest
and the public interest and
that space is contested--
we saw that space in
the Zuckerberg hearings
where we had powerful
senators asking
this CEO basic questions about
how to turn their computer on.
The question was actually,
how do you make money
if you give it away for free?
Because quite frankly, in any
other sphere of importance
in our society, at a
congressional hearing there
would be some smart
person sitting
behind that congressperson
passing them notes,
saying ask him this.
He's wrong.
Challenge him.
The data says this.
And there's someone
sitting behind
in health, the
environment, human rights.
There was no one sitting
behind those senators,
because there are very
few people on the Hill
working in the public
interest on this larger
issue of this fourth
Industrial Revolution.
Why is that?
Because MIT and Stanford didn't
train them to go to the Hill.
And that's the potential
that this new Schwarzman
College offers and
why I think what
Rafael and Steve and the
others here are doing
is so potentially
transformational
for higher education and
more broadly society,
because if we don't have a
view about the public interest,
if we can't even define
what is the public interest
in this conversation,
then we won't
work for the public interest.
The professor said a moment
ago, I'm very excited about AI.
If I'm a black man on parole
or about to be paroled,
I'm not excited about AI.
Why not?
[INTERPOSING VOICES]
Because the way in which
predictive analytics is driving
decisions as to who gets
paroled and who doesn't is
having a pernicious effect
that is, in fact, reifying
and amplifying the very
human biases that we see
reflected in our society.
So the potential of AI
should be to help correct
the inherent biases,
the historic biases that
play out every day in America.
But it's not doing that.
And the answer can't
be, we don't know.
That can't be the
answer in a society
where inequality is
growing and where
those who have historically been
marginalized and disadvantaged
are having their
disadvantage compounded.
So will AI be a leveler?
Or will AI simply compound
the disadvantage and bias
that is already built into
our systems and structures?
I lived through this.
I've been at the New York
Times for 40 years almost.
And I lived through
this transition,
because I work for a news
organization that was basically
for most of its life printed
on a dead tree, on paper.
And over here, we
had a regulator
who said if somebody wants to
run an ad on your dead tree,
they have to identify
where the money comes from.
And over here, we
had an editor who
said if you make a
mistake on the dead tree,
you have to correct it.
And on top of the dead tree,
we had readers and advertisers.
Then along came Facebook.
They said, we're
not a dead tree.
We're a platform.
We don't need any
of your regulators.
We don't need any
of your editors.
But we want all of your readers
and all of your advertisers.
And we didn't know what to do.
And they were kind of cool.
And we were sort of old,
dead tree journalists.
And so we did the only
thing we knew how to do.
And that was trust them.
And they completely
violated our trust.
I mean not ours.
I mean the community's trust by
scaling their platform helter
skelter without
building in the editors
and the implicit regulation.
Do you think there's
any rolling that back?
No.
We have to engage.
And we have to talk
about the things
that we don't like
talking about, or at least
elites don't like talking
about, like regulation
and redistribution, because
unless we are prepared to have
a system that is fairer,
then our democracy
is going to be undermined.
So the question
for me is, who is
going to write the regulation?
Because actually, there is a
dearth of talent in Washington
who even understands the
fundamentals of that platform.
Ursula, you were saying no.
Elaborate.
Well, for full transparency,
I serve on the board
of the Ford Foundation.
And I agree with Darren that
this is a problem we created
and a problem we have to solve.
My reaction of no is
about the administration
that we're in today.
I don't think that there's any
interest whatsoever in pulling
back anything that would point
that towards more justice, more
equality, more freedom, less
regulation, or more regulation.
Bad time for government to be on
vacation at a giant inflection
point.
It's a really bad time.
And that's what
I'm nervous about.
I'm nervous about the
fact that we're moving--
I love Joi.
I love your analogy.
We're moving really fast.
And in a week, it's like a year.
So a year has passed in a week.
And we have probably a
couple of more years.
By the time we get
to the point where
we realize that
there's something
that we must do to
actually right the ship,
the ship will be in the
middle of the ocean.
That's one of the reasons why
I'm so excited about Schwarzman
College and I'm so excited
about being here at MIT,
because at the heart
of MIT is this idea
that hearts and minds, hands
can actually all come together
for the better good.
And this is not
only about getting
a whole bunch of good
computer scientists writing
these great good programs.
It is about making the
world a better place.
And we have to actually figure
out a way to mix this together.
There is not a lot of other
checks and balances out there.
You said something that
was really interesting.
You said we trusted them.
We didn't even trust them.
I mean, we didn't even ask them.
I'm not even on Facebook--
They asked us.
--so I--
Right, neither am I. But I think
that what we have to do now
is we have to
actually stand up--
And get in their face.
--and get in their face.
And the people who have to do
that are people who are smart.
A lot of it's going to fall on
the back of education, higher
education, because
I don't believe
it will be led by the government
until we actually force them.
And there's not enough
people out there to do that.
So just one thing.
So just a quick follow-up,
a little career counseling.
A lot of students here.
You've been a big employer,
hired a lot of people.
Going into this age of AI,
what would be your advice
to a student of the
kind of background
that you would be looking
for as an employer?
I want someone who believes
that nothing is inevitable.
I want someone who believes
that nothing is inevitable,
that their involvement, their
engagement, their contributions
to the solution will make
the solution better.
People who walk into my
company, any of the companies
who actually believe that
there's this thing that's
set up and all I
have to do is fit in,
I want them to leave tomorrow.
I want people who
walk in and say,
there's a way to make it better.
And it's not technical.
It's not scientific necessarily.
It's not even social.
It's active engagement.
It's a little bit of broad
knowledge and responsibility
to other people.
And it is amazing.
We have these amazingly rich
people-- that's why I love you,
Steve--
who literally own the world--
I mean, 1%, 99.9%.
And the idea that
there is nothing
they can do to make it
better is a false idea.
We don't only need them.
We need literally
the guy who comes
to work tomorrow for VEON
or for Xerox or for MIT
to actually believe that
nothing is inevitable,
that better is possible.
And I want them to
work towards that.
That's a great job offer.
Jennifer, [INAUDIBLE]
I just wanted to--
[INTERPOSING VOICES]
--comment, because there is
a nascent field that I hope
will be very well represented
in the new College of Computing.
In my labs, we call it FATE--
fairness, accountability,
which is really
being able to audit the
outcome, transparency,
so interpretability when
someone is not granted bail--
hopefully you haven't
used deep learning,
but you've used something
which is interpretable--
and ethics.
They put ethics on the end,
because otherwise there's
a conference called FAT,
which doesn't sound so good.
OK, but anyway.
So there are already
two academic conferences
in this area.
There's the
[INAUDIBLE] conference.
And there's the AI,
Ethics, and Society conference.
And these nascent fields
are bringing together
legal scholars, ethicists,
social scientists, and people
in AI and asking,
how do we make some
of these decisions in a
more equitable fashion?
So I'm very excited.
I mean, I personally have
done something which we
call algorithmic green lining.
So we take a population-- you're
going to let people into school
or you're going to
grant them loans
or you're going to do something.
And how do I take that
objective function,
which as you said, if
we don't watch out,
we just optimize to some
objective function, which
amplifies this.
I mean, it's really simple math.
It just amplifies the
inequities in the data.
And instead we have
some diversity component
or we have some
fairness component,
which has to come
from interactions
with social
scientists, ethicists,
and give you an outcome which
is fair according to this.
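A minimal sketch of what a selection objective with an added diversity component might look like. The data, field names, scoring rule, and weight here are all illustrative assumptions, not the actual method used in any lab:

```python
# Toy sketch: greedy selection under an objective that adds a
# diversity bonus to each candidate's raw merit score.
# All names and weights are made up for illustration.

def select(candidates, k, diversity_weight=0.5):
    """Pick k candidates, rewarding groups underrepresented so far."""
    chosen = []
    for _ in range(k):
        # Count how often each group already appears among the chosen.
        counts = {}
        for c in chosen:
            counts[c["group"]] = counts.get(c["group"], 0) + 1

        def value(c):
            # Raw merit score plus a bonus that shrinks as the
            # candidate's group becomes more represented.
            bonus = diversity_weight / (1 + counts.get(c["group"], 0))
            return c["score"] + bonus

        best = max((c for c in candidates if c not in chosen), key=value)
        chosen.append(best)
    return chosen

pool = [
    {"name": "a", "score": 0.9,  "group": "x"},
    {"name": "b", "score": 0.85, "group": "x"},
    {"name": "c", "score": 0.8,  "group": "y"},
]
print([c["name"] for c in select(pool, 2)])  # → ['a', 'c']
```

With the diversity term set to zero the same code picks the two top raw scores ("a" and "b", both from group "x"); with it on, the second pick goes to "c", broadening the pool without discarding the merit score.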
And so there is a nascent field.
And I think that the
College of Computing
is an ideal place to really
grow this field, the interaction
of the computer scientists and
[INAUDIBLE] and the other part.
So I just wanted to say it's not
like no one is looking at this.
But it's nascent.
And it is something that I
think all undergraduates should
learn about to help them.
When they hear about
the predictions of AI,
have them question
those predictions.
But it can't happen without
universities taking the lead.
Absolutely.
And the challenge
for universities
that Rafael has
unlocked, because I've
talked to dozens of
university presidents,
is that the nature of
the problem to be solved
requires synthetic thinking
across all disciplines.
And when you talk to a provost
or university president,
they say with the
door closed, we're
not set up to work that way.
And so I literally have
had presidents say,
this is a powerful idea.
But we don't know how to do it.
But the fight that I will
have to have with my deans
and faculty over this
cross-campus learning
and structures that
will need to be created
is too big a fight for me.
And so the brilliance
and the potential
here is that this new college
is starting from scratch.
And it can build all
of the disciplines.
And that's the challenge, right?
And so that's why
what MIT is doing
is going to set the pace for
every other university that
wants to be relevant
in the future.
My motto in doing journalism
is from my teacher,
Lin Wells, who said,
never think in the box.
Never think out of the box.
Today you have to
think without a box.
So if you are not, in
my case, arbitraging
what's going on in climate,
what's going on in technology,
what's going on in
globalization, what's
going on in business, you're
going to miss the story.
I'll give you an example,
the revolution in Syria.
The revolution in
Syria actually began
with the worst four-year drought
in Syria's modern history
between 2006 and 2010 where
a million Syrian farmers
and herders left their
homes, flocked to the cities,
lived on the margins
of these cities.
Outside did nothing for them.
Then they got connected
on cell phones.
Then the Arab Spring happened.
The whole thing was a
complete melange of market,
mother nature, and Moore's law
blowing the lid off the place.
And if you're just--
I got my BA in Arabic
and Middle East History.
If I'd stayed
there, I would never
have understood what's going on.
Ash, I'm really glad
that I live in a country
where engineers at one of our
biggest tech firms would say,
don't want my work
going into weapons
that are going to kill people.
I'm also really glad I
live in a country protected
by the Pentagon, the US Army,
Navy, Air Force, and Marine
Corps, because
there are people who
want to destroy our freedoms.
How do you resolve that?
Well, you're referring to the--
[INTERPOSING VOICES]
Google, yeah.
--even at Google.
And here's what I'd say
to them-- and it was not all
Google employees,
but some Google employees.
The first thing
I'd say is, listen,
I respect that at least
you're thinking ethically.
I'm going to come
to a different place
with that chain of reasoning.
But good on you, because you're
thinking about the morality
of what you're doing.
And by the way, think about that
with respect to everything else
you're doing.
It's not just if you're working
for the Defense Department,
suddenly moral weight
falls upon you.
Moral weight falls upon
you whenever you're
doing anything consequential.
Now for defense, I'll just
say one thing about defense.
And then I want to offer
a little hope here--
Please.
--and sort of how-tos
that I've learned
over the course of a long
career in technology,
not just defense.
But seven years ago, 2012,
before all this discussion
went on, I was the number two
in the Department of Defense.
And I issued a
directive addressing
the issue with respect to
autonomous weapons that
is still in force.
And what it says is
that there will not
be autonomous lethal
systems in our future,
that any decision to
use lethal force, which
is a grave matter on
behalf of our people,
there must be a human being
involved in the decision
making.
I didn't say in the loop,
because those of you who
understand technically,
that's not actually
literally the right
formulation, but a human
being involved in
decision making.
Nobody paid any attention
to it at the time.
But that is the extant guidance
in the Department of Defense
and the right guidance.
So the first thing I'd remind
them if they didn't know
was that.
But more fundamentally,
I'd say this.
Look, early in my career, I
was brought up by the Manhattan
Project Scientists.
And they said, get in the game.
This is too big a deal for
you to stand on the sidelines.
So I'd say first of
all, good on you.
You're thinking morally.
But secondly, you're headed
in the wrong direction.
If you want us to do the right
thing, it's our government.
It's the only
government you got.
You can't walk down the street
and shop at another government.
Get in the game.
Make us do the right thing.
Bring your knowledge to that.
And also I might say,
and that takes us
in a whole different
direction, how do you
like working for the PLA?
Because you do that, and
over there you can't tell.
The People's Liberation Army?
The People's Liberation Army.
And that is--
What do you mean by that?
Well, I'm somebody who worked
with the Chinese a long time.
And I don't mean I want to see
a Cold War between the United
States and China.
But it's a communist
dictatorship.
And they're intent upon using
AI as an instrument of control.
Those are not the
American values
that I think are important.
Now, to what to do about it,
yes, educating our students.
That's why I'm
here, because I want
to respond to that exact
desire of those young people
to say, hey, wait a minute.
What's going on here?
Are we doing the right thing?
But give them
something to work with.
Here are some
things to work with.
Accountability has
been discussed here.
And as an algorithmic matter, if
you know how these things work,
that is not automatic.
It needs to be a design
criterion for people
who are designing AI.
You need to say--
you need to build that in.
I realize that's
difficult. But I've
been running technology
programs for my whole career.
And I've heard
plenty of engineers
say they couldn't do
something when they
didn't want to do something.
And so I would say
if you don't do it,
I ain't buying it, if you
don't put it in there.
And then Darren was speaking
very eloquently about AI
as an amplifier of crummy data.
And there need to be data
standards and transparency
there as well.
Otherwise, you're just
massaging yesterday
into a perfected version of then
rather than creating tomorrow.
That is something
that is doable.
And the last thing is I'll
say to Congress, yeah.
I mean, that was a big--
that day will go down in
my personal mental history
as one of the greatest
missed opportunities
in our collective
history, because imagine
a different hearing.
Imagine a hearing in
which the members asked--
I've been joking that I wish
they had been that poorly
briefed when they
asked me about issues
of [INAUDIBLE] in my many,
many testimonies
as they were asking Mark
Zuckerberg about technology.
That is something we can
do something about.
And he, for his part, didn't
pass the test of history
in my judgment.
Nothing against him personally.
But that's not going
to fly, that accounting
for the ethical behavior
of what you've done.
For the members, I've
seen it work differently.
And you're right.
Darren's exactly right.
There are people behind him.
Where do they come from?
In my experience in
technology, in addition
to having some people who
know how to do the mix,
which is what
we're all about now
and what the Schwarzman College
of Computing is all about,
there's a mechanism that works
well for Congress. I was part
of it once upon a time,
and it has a few ingredients.
The first is the members have
to really want the information.
And now that's not
true about everything.
So I'm sorry to say,
you're not going
to be able to do a scientific
advisory piece of work
on some subjects where
they don't want to hear.
But here, this is something that
is not yet partisan, whatever
that would even mean
in this area, that
is not yet polarized.
They genuinely wonder.
So that's the first thing.
The second thing
is that you need
to have something that is
demonstrably not only expert,
but monitored, usually that you
get some wise group of people
to oversee it and
assure the members it
hasn't gone off track.
And the last thing-- and
this is really crucial, Tom--
is these people
find it much easier
to digest choices and
options than they do--
[INTERPOSING VOICES]
--an answer.
So here's what you do.
You say here are some
different ways you can
go about accomplishing this.
But they're all
technically well grounded.
But they differ in respectable
ways from one another.
And then they can
do what they do
best, which is apply
their broad values
and experience to represent
the people back home.
Now, when you frame things that
way, it goes down a lot easier.
And secondly, you find
that 80% of the content
is actually in the
common findings that
are behind all of the choices.
And that's a great healing
and coming together factor.
So for all these
things, I'm just saying.
There are ways-- not only
do we need new kids who
get it and are spirited, and
I'm so glad for this generation
which feels the way they do.
But you can give them
something to go on.
And the reason I
was comfortable,
just to get back to your
original question, with that
directive way back
in 2012 was I knew
I wasn't whistling in the wind.
I didn't only have confidence
in the morality of it.
I had confidence in
the doability of it.
So this is doable, darn it.
And we can't--
Good.
--have otherwise.
How doable is it to really
substantially increase
the representation of women
and minorities in CS, AI?
And what's the Schwarzman
School's plan for that?
Megan?
There are plenty of people
who could be in this faculty.
So we could gender balance,
race balance, geo balance,
topic balance this faculty.
It's 2019.
There's no question.
It's just a question
of how good are we
at finding the talent
that definitely exists.
I'd be happy to help.
It's there.
And we could do that.
And it's one of the most
important things we can do,
because we can set
this team up to be
the right kind of broad
faculty, thinking broadly,
coming from all over the world.
There's no question.
I think it is
absolutely required
that we approach this from
a completely different point
of view.
And that point of view is one--
we have to approach it that
the outcome has to be x.
This is not we're going
to try to figure out a way
to not hurt anybody's feelings.
And we're going to do the right
laws and whatever the heck it
is to make people tenured.
I think if we do that,
we know the outcome.
The outcome will
be majority male.
It will be an age swing that's
pretty skewed towards older.
So I think we can-- but we
have to be willing to do it.
It's like Darren said.
Darren said, you go
to the universities
and they say, well,
I can't do that.
I know it's the
right thing to do.
But I can't do that,
because I would
have to fight with my ABCDE.
And I say, well, we
should fight with them.
But Ursula, I think
we have to be--
we have to be very
careful, because we
have allowed a narrative that
is about political correctness.
Absolutely.
This is about excellence.
That's right.
[INTERPOSING VOICES]
And so we need to talk
about excellence--
And there are excellent people.
--and stop talking about we need
this many blacks, this many--
yes, we need to--
but this is about excellence.
Excellence.
And you can find them
everywhere, by the way.
Yes.
So this is where my algorithmic
green lining came from.
I have a few heuristics.
I have much more
diverse labs than almost
any other technical labs that
I know of at any companies.
I have more women.
I have more people of color.
We have more LGBT.
And so what do I do?
If I'm looking for somebody
in machine learning,
I've a little
heuristic in my mind.
I've seen people go from
information retrieval
into machine learning.
I've seen them go from
information systems
into machine learning.
So I don't lower the
standards, for God's sake.
I broaden the scope of
what I'm looking for.
And that was where the
algorithmic green lining
came from, because I have a
few heuristics I've developed
over a couple of decades.
I want AI to look at the
very high dimensional space
and search it and see
what are the areas,
so I get a Pareto curve if
you want to be technical.
What are the ways that I
can change the criteria that
will get me the most gender
diverse, racially diverse,
et cetera?
So I want AI to be so
much smarter than me
in this high dimensional space.
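The idea being described, searching a space of hiring criteria and keeping only the trade-offs that no other option beats on every axis, is essentially computing a Pareto front. Here is a minimal illustrative sketch, not the speaker's actual system: the criteria tuples and the two axes ("fit" and "pool diversity" scores) are hypothetical stand-ins.

```python
# Sketch of the Pareto-curve idea: given candidate search criteria
# scored on two hypothetical axes (fit score, pool-diversity score),
# keep only the non-dominated ones -- criteria no other criterion
# beats or matches on both axes at once.

def pareto_front(points):
    """Return the points not dominated by any other point (maximize both axes)."""
    front = []
    for p in points:
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Each tuple scores one candidate search criterion: (fit, diversity).
criteria = [(0.9, 0.2), (0.7, 0.7), (0.4, 0.9), (0.6, 0.5), (0.8, 0.3)]
print(sorted(pareto_front(criteria)))
# (0.6, 0.5) drops out because (0.7, 0.7) is better on both axes.
```

A real version would search a much higher-dimensional space with many more objectives, but the selection principle is the same.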
So I think it is doable.
It's absolutely doable.
And there are, in fact,
technical solutions
if we make sure
that we are talking
to the people who understand
the social implications.
[INTERPOSING VOICES]
Thank you.
I think you've hit a good
nerve with all of us.
[INTERPOSING VOICES]
--for me, too.
When I opened up all military
specialties to women,
there were people who'd say--
you'd get members of Congress.
And so they'd say, well, you
know, we need a military.
We can't be running--
you can't be running
social policy there.
And I would say,
you don't get it.
I said exactly what Darren said.
This is half the population.
I am an all-volunteer force.
I need excellence.
For me to take half
the population off
the table would be contrary
to mission effectiveness.
And I can't do that.
So you're not only
wrong, you're upside down
in framing it that way.
And the second
thing I'd say, Tom,
to what everybody else has
said, this is, again, an area
where it's not like we
don't know how to do this.
There's tradecraft out there.
And if you despair
of some of these things--
I had.
And these are not
things you want
to have to learn to be good at.
But when it came to sexual
assault, post-traumatic stress,
there are various
things that arise
in the environment
I used to run,
just as there are in the
environments everybody else runs.
And sometimes we
wrote the playbook,
which I'm proud
to say, because we
tried hard to learn and do
better and improve ourselves.
In other places, you could go
out and there's a playbook.
So the idea that
there isn't a way
to increase diversity in senior
leadership ranks and companies
I think is rubbish.
I think there are proven ways.
Go out there and find out
what people have done.
And you don't have to
invent it all yourself.
Tom, can I just give us
one reason to feel hopeful?
Yes.
Because this--
Then you're going to close it.
I would hate for us
to leave this not--
Exactly.
--feeling hopeful, especially--
Bless your heart.
[INAUDIBLE] of this moment.
There is a solution here.
In the 1960s, what we
think of today in the law--
I was trained as a lawyer--
as public interest
law did not exist.
If you were a young,
aspiring lawyer
and you were leaving
law school, you
went to work for a law firm
or a corporation or maybe
the government.
There was no such thing
as public interest law.
And law schools didn't
have a curriculum.
Well, that changed in part
because of the intervention
of a group of
philanthropists, foundations,
including the Ford Foundation,
to create intentionally
a new system.
That can be done.
Absolutely.
And actually MIT is going
to be the anchor of what
we will know in this society
as public interest technology.
On the Hill, there is
something called Tech Congress.
[APPLAUSE]
A group of foundations are
funding young, bright PhDs,
MAs in CS to work on the
Hill for Congress people.
There are a number
of philanthropists.
We need more to join
us in this venture.
And there are a
number of efforts
like that feeding
into this ecosystem
to do exactly what you're
saying [INAUDIBLE],
to actually
transform that system
to serve in the public interest.
Darren, that's a great--
[INTERPOSING VOICES]
--I think, setup for the
close, because I unfortunately
have to catch a plane.
But I just want to say that
when Rafael called me and said,
I want you to do this
seminar, I really
didn't know-- when Rafael
calls, I say aye-aye, sir,
and I'll be there.
We all do.
That's why we're all here.
But I'm so impressed by
what I've heard today.
I'm so excited
about this school.
[INTERPOSING VOICES]
I think it is so cool.
Steve, you and Christine
are doing the Lord's work.
Rafael, I salute you.
It was a privilege
to be here today.
[APPLAUSE]
