- Thank you Stephanie.
Thank you that was just so amazing.
But as impressive as
Amy and Stephanie are,
what I find most inspiring
is that there are
many students like them
in our AI4ALL programs.
But so far I've only told
you half of the AI4ALL story,
the part about how it
started here at Stanford.
By 2016 the demand for our Stanford camp
far exceeded all expectations
with students flying in
from Ohio, New Jersey,
Connecticut, and even China.
We realized Stanford
alone couldn't keep up.
As luck would have it,
I crossed paths with a true
ally in the fall of 2016.
Melinda Gates and I began
a series of conversations
about our concerns and sense of urgency
to make AI and tech more
inclusive and diverse,
starting from students in classrooms
and reaching to workplaces
in the industry.
Not only did she support
and encourage our vision for AI4ALL,
but she coordinated an early
funding round to help us grow.
And that quote I used about
guys in hoodies earlier?
It was hers. (audience soft laugh)
What began as a Stanford program
became a national nonprofit in early 2017.
Co-founded by Olga, Dr. Rick Sommer,
myself, and a great team of
staff led by Tess Posner,
board members, and advisors.
That year, we extended to a
second campus at UC Berkeley.
Now in 2019 I'm very proud to say
we'll be on 10 campuses this summer
from Boston to Pittsburgh,
from New York City to Phoenix, Arizona,
and many more are in the plan.
Dr. Olga Russakovsky,
now graduated and an assistant
professor at Princeton,
founded a chapter there as well.
Our students expanded to become
an even more diverse group
with an emphasis on
women, people of color,
and students from low-income
families and rural communities.
This simply wouldn't have been possible
without the generous
support of Melinda Gates,
among many others.
Now, I have heard that
behind every great woman
there is a great man. (audience laughs)
And I think it's actually
true in this case.
Melinda Gates and her husband Bill Gates
co-founded and run the Bill
and Melinda Gates Foundation,
which has dedicated itself
to expanding opportunity
to the world's most disadvantaged
people and communities.
They've done truly amazing
work on global issues
like healthcare, education,
and income inequality
and are perhaps most famous for
their aggressive mission
to eradicate malaria.
Bill of course has his own legacy in tech,
which began by writing a BASIC
language interpreter in 1975.
He co-founded Microsoft
and led the company to
become the worldwide leader
in business and personal
computing software and services.
After decades spent in
the software industry,
which he arguably helped to create,
he shifted his focus to working at the
Bill and Melinda Gates Foundation in 2008.
As a tech leader turned philanthropist,
Bill is a valuable ally in
our quest to use technology
to make the entire world a better place.
So to join Amy and Stephanie
in a conversation about the future of AI,
please welcome Mr. Bill Gates.
(audience applauds)
- Thank you so much for
joining us today, Mr. Gates.
I guess just to jump right in,
to start off on a positive note,
what excites you about AI
and its potential to benefit humanity?
- Well there's so many things
that are deeply mysterious.
The ones that I get to focus on
have to do with health
in developing countries.
95% of the children who
die under the age of five
are in these countries where
we have almost no doctors
and we don't have the skills
to bring the kind of interventions
that we take for granted here.
The idea that we can take AI
and understand, for example,
why prematurity rates are so high
and understand the nutritional
deficits that take place.
Of the kids in these very poor countries,
up to 20% die
before the age of five
and 40% of the remainder
will never develop
physically or mentally
to their full capacity.
So they are deeply malnourished
during their early years
and so their ability
to learn and contribute
is permanently damaged.
We've always known that there are
various dietary influences,
that the microbiome affects
both the prematurity
and these nutritional outcomes,
but it's only with AI,
including partnerships
with the Mark Davis lab,
immunology lab here at Stanford,
that we're taking all that
data and using AI to understand,
okay, what is it about
proteins or pathogens.
And some really low-cost
interventions are now emerging
to help us intervene
and dramatically reduce
prematurity and this malnutrition.
So it's when I see it applied to something
that without AI, it's just too complex,
we never would have seen
how that system works,
that I feel that, "Wow,
that is a very good thing."
- Moreover, what are some
actionable items one can take
to ensure the responsible
and ethical development
of human-centered AI?
- Well the world hasn't
had that many technologies
that are both promising and dangerous.
You know we have nuclear
energy and nuclear weapons
and so far so good.
Although memories seem
to be fading on that,
and recent behavior
certainly is deeply
concerning on that front.
With AI the power of it is so incredible.
It will change society
in some very deep ways.
So it's great that Stanford's stepping up.
One of the early pictures in there
was actually of Shakey
the Robot over at SRI
and I was 13 years old when
I saw that video of Shakey
and it's funny to think
how over-optimistic,
we were like, "Oh Shakey
is stacking up the blocks.
Now let's get it out in
the factory tomorrow.
This is going to be really
easy to solve these problems."
And so for a long time AI,
when I started Microsoft,
I literally wrote a note
to my parents and I said,
"Okay I may miss a bunch
of breakthroughs in AI
and that'll be what I give
up to create this company
but oh well."
Well for about 20 years
I didn't miss much.
(audience laughs)
More recently, there's
amazing things going on
and fortunately Microsoft
has gotten to a size that it,
along with Google and many others,
gets to participate.
But the technology's moving so quickly,
while the policies and
understanding around it lag,
even on something just as
simple as face recognition:
what sort of awareness and use cases
should there be for that?
Even that is unresolved,
and these are not issues
that confine themselves
to nation-state boundaries in a simple way
like a lot of previous technologies.
So it is concerning that
someday Stanford will not
want to brag about how
it was a pioneer in AI
unless we do a good job managing it.
- Yeah, I guess along the same lines
as we see that problems are
becoming more and more complex
and require collaboration
across disciplines,
how would you encourage
this cross-disciplinary collaboration
that is central to the
development of human-centered AI?
- Well, there are potential collaborations.
Another area our foundation
works in a lot is
the US education system.
There, the very basic questions
about why are some teachers so good,
why are some students
not very well motivated,
and other students are
very well motivated?
Unfortunately with deep correlations
with socioeconomic factors,
we are really at the
very beginning of that.
The state of the art is such that
everything we've learned
about education in the last 100 years,
you could not say that the best teacher,
the most inspirational, excellent teacher
lived 100 years ago.
That's how much we've
learned about education.
Now with doctors it's a little better.
You wouldn't say that
the best cancer doctor
or eye doctor was one that
somebody went to 100 years ago.
In the case of the US,
the dropout rates have not improved,
the overall academic
achievement has not improved
even as we've doubled the percentage
of GDP that goes into the field.
So the opportunity here,
to take and get out of endless debates
but to really look into,
okay, what are those good teachers doing,
what is the nature of that motivation,
which interventions
can really change that,
that would be a very profound thing.
Education is sort of primal
and yet if you look at the R&D percentage
that society assigns to education,
where do the smartest people go?
Do they go into educational research?
How does the educational research budget
compare to, say, the NIH research budget?
What is the equivalent of BERT
in the world of educational research,
where somebody has something profound
and everybody goes, "Oh,
ah, that's so fantastic?"
There is no equivalent.
It's kind of a desert.
So anyway, I think it is a chance,
given the incredibly
general purpose nature
of these technologies to
find patterns and insights,
it's a chance to do something
in terms of social science policy,
particularly education policy,
also healthcare quality, healthcare cost,
it's a chance to take systems
that are inherently complex in nature,
where individuals just trying
to trawl through the data
can only find weird correlations,
like, okay, Minneapolis
spends half as much as Texas.
But okay, how do you intervene?
What is the next step?
Do kids growing up in a certain location
seem to do better, income-
and race-independent,
than in other locations?
That's the kind of
thing a human might spot
but these systems should help us
look not just at correlation
but try interventions and
see causation as well.
So it's a chance to
super-charge the social sciences
with the most important by
far being education itself.
- On a different note,
what do you think are some
of the biggest problems
that artificial intelligence
can uniquely solve?
- Well,
if something is complex enough,
like take the microbiome.
That's billions of data points;
we've recently proven that
even the subspecies matter a great deal,
not just macro statistics like diversity
or lack of bacillus or something.
But you really have to get down
and look at those gene profiles.
We have this incredible result that
if you give kids in some countries
once a year an antibiotic
that costs two cents
called azithromycin,
you save 100 thousand lives.
And in a sense it makes no sense,
because that antibiotic is
disappearing from their system
within a few days.
So there's something
about their microbiome,
the intestinal/gut junction,
how that has this profound effect
and I don't believe that
without machine learning techniques
we will ever be able to take
the dimensionality of this problem
and be able to find the solution
about what is going on there.
And once we understand it of course,
we'd like to magnify that effect
and avoid using a
broad-spectrum antibiotic,
which has resistance-type effects.
So many complex problems
and many very complex data sets
can only be tackled with these techniques
that are, in a sense,
pattern recognition techniques.
The upper bound before the
breakthroughs in machine learning
was such that many deep societal problems
were not tractable.
Now we need to get the data sets
and make sure they're used appropriately,
because I think we can
deal with privacy concerns
and yet still have the type of
deep longitudinal information
that would reveal these patterns.
So it's a chance,
whether it's governance,
education, health,
to accelerate the advances
in all the sciences.
- Yeah you mentioned the
potential that AI has to
benefit society in many ways.
Could you talk about an AI application
that has already been
positively transformative to society?
- Well, I wouldn't say there are that many.
You know, certainly the
search engine technology that
Google or Bing are using
has been greatly beneficial.
The amount of AI that's
being applied there
is super impressive, and that
led to the foundational work,
in terms of the cloud platform
and how that was created
in a very generalized way.
In terms of actual medicines
that would not have been discovered,
the next 10 years are where
you're going to see that
in dramatic form.
In particular, the work on prematurity.
To give an example,
we took the 23andMe data,
working with them, and saw
by using AI learning that there was this
deep association with
malfunctioning selenium processing genes
and risk of prematurity.
So we literally have now 20 thousand women
who live in areas of Africa
where their natural diet
has no selenium in it,
and we are intervening by
giving them small amounts.
We'll know 18 months from now;
based on preliminary data
we expect to see about a 15% reduction
in prematurity, which
for Africa as a whole
would project out to be about
80 thousand lives saved per year,
though, as always, showing
one picture and one life
is more dramatic than tens of thousands.
So I think it's the current set of things.
The deep machine learning
didn't really get into
the broad discovery process
or what had been called systems biology
until quite, quite recently.
And in the case of
education, it's not there yet.
We've not even begun to do that work
in terms of understanding
motivation and engagement
and teaching styles
and teaching assistance
that would really improve
the output of the system,
i.e. better learning, fewer dropouts,
key things where the current
status is deeply unsatisfactory.
- I think for the remainder of the time
we would want to open the floor
for some questions from the audience.
When it's your turn, please
state your name and affiliation,
and in the interest of time,
please keep your questions brief
so we can get to as many as possible.
I think we have one over here.
- Hi I'm Kumardev Chatterjee.
I'm the founder of Unmanned Life,
where we try to use human-centered AI
to do autonomous systems.
And so my question back to you Dr. Gates
and of course to you was
where do you see the boundary between
ethics and machine learning,
particularly when it's
applied to autonomous use?
We will live in an autonomous
society where most things
will be done by machines
in one way or another.
So where is the boundary there between
the ethics and machine learning
and the data sets that we're getting,
particularly because autonomy can be very,
in some sense, ahead of time.
So how do we ensure that
ethically it does the right thing?
- Well it's a very broad question.
There's different domains and
there's a different degree of autonomy.
You know the book Army of None talks
about the current weapons
systems that we have,
like the Aegis missile firing system,
that by most definitions
is an autonomous system.
It is authorized to fire
based on incoming targets.
There were a few cases
where it accidentally
shot down a commercial airliner.
So even in that case, where people thought
it was very well bounded,
it turned out to be very complex.
Then again, you don't want to be
too risk-averse on these things
because the idea of solving
very tough problems,
you always have to compare to
what the current solution is.
So if you're not going to
have as many car wrecks,
you might not want to
set the criteria to zero.
Then again, enforcing good behavior,
understanding what the liability
of the machine will be,
that's probably why, for autonomous cars,
the US in certain respects
will be one of the last
places in the world
where you'll see very widespread use
because our sense of liability
and our desire to preserve the status quo,
if there's any chance
that something might be,
even in a framework,
considered a step backwards,
that's very tricky.
The place that I think
this is most concerning
is in weapons systems.
In the medical field, we
just don't have doctors.
Most people are born and die in Africa
without coming near to a doctor.
So there are definitely things,
like we're doing a lot of work
with analyzing ultrasound.
We can do things like sex-blind the output
because we're not having
anybody actually see the image.
We can tell you what's going on
without revealing the gender,
which, of course, when
revealed, drives gendercide.
And yet, we're doing the analysis,
the medical understanding
in a much deeper way
and that's an example.
It's all done with a
lot of machine learning.
I was meeting with the guys at Google,
who are helping us with this, just this morning.
There's some incredible
promise in that field
where in the primary healthcare system
the amount of sophistication
to do diagnosis
and understand, for example,
is this a high-risk pregnancy?
Yes, let's escalate that person
to go to the hospital level
even though you couldn't afford to do that
on a widespread basis.
So this stuff is going to
be very domain-specific.
In some domains like education,
I'm more worried that
the privacy concerns,
which are appropriate,
they're good privacy concerns,
but if you don't put a lot
of creativity in how you have
longitudinal data access
while not violating privacy,
you're going to default to
the data sets not being there.
In US education today that is the default,
that there isn't much information
that would allow you to
find positive exemplars
either at the teacher or
school or district level
and therefore, really examine what inputs
are allowing for that
unusually positive performance.
- So if you're thinking deeply
about technology and ethics,
are there any things
you think, in retrospect,
you might have wanted to do differently
in your time leading Microsoft,
or any lessons you learned
looking back on it?
- Ah (audience softly laughs).
Well certainly the
really profound societal changes
from personal computing
are really just beginning.
And so we didn't foresee how it would
disrupt the way that
people get news or communicate.
The PC led to the internet,
led to the cellphone,
led to social media today.
And so, once you had made
that access to information,
including information that stimulates you
or that you agree with, and
you cluster in that way,
there wasn't a recognition
way in advance that
that kind of freedom would have these
pretty dramatic effects
that we're just beginning to debate today.
During a lot of the early
personal computing period,
we were worried about the
so-called digital divide,
that is that computers would be
available to the kids who were better off
and accentuate, rather than reduce, inequality.
Now at a classroom level,
the actual data about
the value of computers in the
classroom is essentially nil.
So that's good,
we didn't create this
gigantic digital divide.
That is, the schools with the
computers are just as bad as
the schools without the computers,
which in an absolute
sense, they're quite bad.
Sometimes you get false
positives when you worry,
because you think your solution
is so incredibly magical.
There are things in
terms of internet access,
getting that out to rural areas,
getting that into parts of Africa;
that's still an unfinished agenda,
but through a variety of
cheaper satellite antennas
and so-called white space type access,
I do think that that
general connectivity issue
that we've been working
on for over 20 years
largely will become a solved problem.
And I hope that computers prove to be
very valuable in classrooms
so that then we do have
the need to get them out
on a very widespread basis.
But only at the individual
level, in terms of
the highly motivated learner,
do you see that it really has
changed the learning outcomes.
And that's only in, say, the top 15%
of the highly motivated learners.
- Hi, my name is Ron Lee.
I'm a physician focusing
on integration of AI
in clinical processes for the
healthcare system at Stanford.
We often think, in
medicine and other fields,
about relying on AI to reduce error.
And even in medicine
we have seen algorithms
with error rates that are
lower than that of the human.
But at the same time, when
an AI system makes an error,
the effects on society,
but also just how society
perceives that error
is very different than when
a human makes an error.
So doctors make mistakes all the time
but then when you have some AI system
making that same mistake, the
reaction is very different.
So I wonder how you
think about this dichotomy
and its effects on how AI will progress
and be accepted by society.
- Yeah, good example of this is
another group that the Foundation funds
has done work where you just use
a cellphone camera to take a
picture of a woman's cervix
to predict whether she has cervical cancer
and that you should intervene.
And the results, the
National Cancer Institute
is very engaged in this
because the results are dramatic
compared to the very best humans,
and of course the typical humans,
particularly as you get out
into developing world settings,
are either not available at all
or their performance is well,
well below the gold standard,
which we were able to exceed here.
And so certainly on those
image recognition things,
that's getting to a point of maturity
that it will become accepted.
One thing that we're
going to have to build in
is a feedback mechanism.
That is when the
algorithm makes a mistake,
the ability to take that training set
and constantly improve it
because something new may come along
that the original training
set wasn't good enough for.
Completing that circle,
even in the US that's a
very difficult thing to do.
When you're out in rural Africa
and you don't have these
electronic health records to say,
okay you tested this person,
you've told her she didn't
have cervical cancer.
Isn't it interesting
that three years later
she died of cervical cancer?
Let's go back and look at those images.
So you want to complete that loop.
As usual if you have negative
consequences from mistakes
it kind of discourages that
completing the loop type system.
There are a few cases
like in civil aviation
where the willingness to look at mistakes
and apply massive resources
like we're seeing today to say,
"Okay what went wrong here?"
It really is pretty mind-blowing.
And of course you have
software-based elements,
including the 737 Max case.
So through
software-driven surgical tools,
software-driven flight tools,
software-driven weapons systems,
we are accumulating a
sense of understanding.
It is fairly troubling that
today's deep learning systems
are mostly opaque
and so one hopes that
sometime in the next decade
somebody comes up with AI systems
that are both as good or
better than what we have today
and yet have a degree of explainability,
including sort of the
strange false positives
that make absolutely no
sense to human cognition
but that still, particularly on
the visual side of these things,
trigger in a way that
would not have been predicted.
It is impressive today that the FDA
is taking in diagnostic tests.
There are three that were
given early approval,
where there is this notion
of dynamic improvement
that will go on.
And that will actually have
more impact in the developing world,
for things like tuberculosis and malaria;
we just have way more lives
to save there than the US does.
If everybody in the US lived
to 100 it would not match
what we can do in the developing world
in terms of the net
change to human benefit.
So it's nice that it gets piloted here,
but a lot of the impact is where
you don't have the human comparator
that we take for granted.
- My name is Elin Thai and
I'm a Stanford PhD student,
and I'm curious,
so in the past few weeks we just had
a lot of rain in northern California,
and it's hard to imagine: here,
we had been suffering from
droughts in the past few years,
and now we are suffering from
the impacts of floods.
So I'm curious in terms of AI,
how do you think we can make use of AI
to help those who are affected by floods
or other kinds of natural disasters
and allow the whole
society to work together,
not just within Stanford,
but also make a huge impact
and work with the citizens together?
Thank you.
- Yeah the last time I was in this room,
we were talking about climate change
and of course
climate models are extremely imprecise.
Unfortunately, AI alone will
not make those models precise.
The amount of data that you
would need to really understand,
over a period of months or years,
weather conditions requires
a pretty unbelievable amount of data.
You can sort of prove it
because of the huge nonlinearities
in the system there.
These systems can improve to some degree,
and so if you can predict floods
further out, that's good.
We do know that we need to
make our systems more resilient
because climate change we do
know brings higher variance.
And so if you're a subsistence
farmer in Africa today,
which is 70% of the people
who live in abject poverty,
you get about
one out of 10 years where
your crop completely fails
and you need a buffer stock
or government programs.
It appears it'll get, by
the end of the century,
to about one year out of four.
Now in the Western world you have savings,
you have governments with tons of money.
As long as your gross productivity
isn't going down substantially,
then you just are able to cover that year.
If you're a subsistence farmer,
what it means is that your kid is getting
so little nutrition that if they survive,
they are permanently damaged.
And so it'd be great to
have the very best AI work
and the very best weather modeling work.
The data collection
is actually a huge limiting factor:
how you program up the
resolution of the initial conditions,
including in the ocean,
the hardest piece being the biosphere
because you have very non-linear reactions
to weather and heating
within the biological systems
including biological systems
that are in the ocean.
So I wouldn't sit here and make
some fantastic prediction
that we will be able to model
out those negative things.
You want to have a lot of extra resources,
you want to be agile about
bringing those extra resources to bear,
primarily in equatorial regions,
where you have subsistence farmers.
The world is not very good at that today.
We have the World Food Programme
that does some of those things
but if your figure of merit
is avoiding malnutrition
when you have negative weather variance,
we do a very, very bad job of it.
- Hi my name is Laila,
I'm a student at Stanford.
Does it concern you that
AI talent and innovation
is concentrated in a few big
tech firms and universities?
And if so, how can we
encourage more competition?
- Yeah, in a sense,
when you have something that's competitive,
where somebody's ahead of other people
and is at the state of the art,
it's not normal that
you'd have lots of people
who are at an identical position.
So you take designing nuclear weapons.
We didn't have lots and
lots of places in the world
that were the equivalent of Los Alamos.
We did create the competition
with Lawrence Livermore labs
just to have a tiny
bit of diversity there.
And so I think,
yes, we should draw more universities in,
and universities in general
are motivated to think more about
societal benefit than the private sector.
So it would be unfortunate if
the universities fall behind.
So it's great that Stanford is
putting together these initiatives,
and there are even questions about
access to cloud computing power
that matches what the private sector has
and how we're going to make sure that
Stanford and hundreds
of other universities
actually can run data sets.
I mean for example if you want to
look at bias in word embeddings,
you better be able to
create the state of the art
word embedding system and have access
to play around with that system.
Unless we're careful,
the private sector will
kind of run away, not
just with smart people,
but also with the ability to
do super, super complex models.
So yes, it'd be good.
Most advanced technologies
in the US, post-World War II,
were created as part of the
military-industrial complex,
and therefore the US, in terms of
its application to weapons
and the government itself being
involved at an early stage
to think through what these things mean,
it was natural that the
government was seeing it
partly through that
defense-related thinking.
Now that these AI technologies
are completely done
by universities and private companies,
with the private companies
being somewhat ahead,
the government just doesn't
see it in the same way
that they did with previous technologies
and hopefully things like
HAI, your institute,
will bring in legislators
and executive branch people,
maybe even a few judges to get
up to speed on these things
because the pace and
the global nature of it
and the fact that it's really
outside of government hands
does make it particularly challenging.
The US was in this totally unique position
for most of these
breakthrough technologies
and now yes, the US is
still very much the leader,
but not in the same dominant, dominant way
that you can be sure,
hey, 10 years from now
will the best AI that does reading
and scans the scientific literature
to look for biological advances,
will that be best here
in the United States
or might it be in other locations?
Very hard to say.
And even the definition,
when somebody says to me,
"Is China ahead on AI?"
That's an ill-defined question
because there isn't a boundary where
that's Chinese AI and that's US AI.
For Microsoft, we have a lab in Beijing.
Google has a lab in Beijing.
Some of the best AI work
in the world is being done
across the street from
Tsinghua University.
Now what kind of AI is that?
It's global AI
and if people start thinking of this
in terms of nation-state terms
and try to draw boundaries,
that's going to be potentially difficult
and potentially quite problematic.
- I'm afraid that's all
the time we have for today.
Please join me in thanking
Mr. Gates and Stephanie
for being here with us.
(audience applauds)
