Welcome, I'm John Frank, and this is our TechTalk today,
and our guest is Eric Horvitz, Technical Fellow
and Director of the Microsoft Research Labs.
Eric, welcome to Brussels, and thanks
for coming to the Microsoft Center today.
It's great to be here.
I've been enjoying my day.
Can you tell us a little bit about
Microsoft Research and its work in AI?
Microsoft Research has labs across the world.
We started at our Redmond site,
which I still view as home,
and Microsoft and Microsoft Research
have long looked at
Artificial Intelligence
as a core direction
for the world, for the company.
When I say for the world,
I mean for computer science.
It's one of the pillars upon which
Microsoft Research was built.
When I first came to Microsoft
and chatted with folks:
Bill, with Nathan Myhrvold
at the time, Rick Rashid,
the goal was to think through deeply
about the possibilities & pathways
to computers that could hear, see,
understand, speak, and so on.
And those are now the core aspects
of Artificial Intelligence?
Well, the founders of the field of AI
thought deeply about four pillars:
perception, the capability to perceive;
learning from data, to learn over time
and become more flexible;
reasoning, to do logic, for example;
and what they called out as special,
natural language processing,
the capability to understand
and converse in language.
At this point, where are we
in the journey? How good is our AI?
Well, it's been a very exciting time,
but I've said that since my graduate days
at Stanford in the 1980s.
Work has continued.
It's been a tough slog in many areas,
especially when it comes to addressing
what I would call the mysteries
of human intellect,
what we see in terms of
the intelligence of people.
That said, there have been some really great
leaps in our ability to learn from data,
to recognize patterns, and to press
these recent leaps in Machine Learning
into really kind of magical
services & tools,
and these include
speech-to-speech translation,
speech recognition that actually works,
systems that can recognize objects
in large hierarchies of objects,
even language understanding systems.
I say 'understanding' in quotes
because these systems don't have the same
semantic understanding as we have,
but they do have the ability to read text
and answer questions intelligently.
How do we ensure that we have
human values as we develop AI?
That's an interesting question.
The rubber meets the road in AI systems
where decisions are made.
Most people think about
AI as making predictions
or classifying under uncertainty,
but the idea is to transform
these inferences into action in the world.
When you have action in the world,
you have costs and benefits,
and you want to make sure that
the actions taken under uncertainty,
and the expected costs and benefits,
are balanced in a way that resonates
or lines up with human values.
It's quite an interesting process
that involves reflection
about the actual usage case:
what are we actually applying AI
to do in the world?
Will it benefit people?
What are the potential harms
associated with the application?
Most systems have failure modes.
We often say that any AI system,
whether it's classifying
objects in the world,
trying to recognize speech,
or doing a medical diagnosis,
has a false positive rate
and a false negative rate,
and you can imagine that both
these kinds of failure modes
have a rich cost structure to them.
And when we say we want to align
the value of these systems,
or even their usage, to human values,
we often want to do a deep dive
to understand what that might mean,
and it turns out we can do technical dives
as part of that equation.
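To make this expected-cost framing concrete, here is a minimal Python sketch of how a system might weigh a false positive against a false negative when deciding whether to act on a prediction. The cost figures and the break-even rule are illustrative assumptions, not anything described in the interview.

```python
# Illustrative decision rule: act on a prediction only when the expected
# cost of acting beats the expected cost of waiting. All costs are made up.
COST_FALSE_POSITIVE = 100.0   # hypothetical cost of an unnecessary intervention
COST_FALSE_NEGATIVE = 1000.0  # hypothetical cost of a missed case

def expected_cost_of_acting(p_event: float) -> float:
    # We "pay" the false-positive cost when the event would not have occurred.
    return (1.0 - p_event) * COST_FALSE_POSITIVE

def expected_cost_of_waiting(p_event: float) -> float:
    # We "pay" the false-negative cost when the event actually occurs.
    return p_event * COST_FALSE_NEGATIVE

def decide(p_event: float) -> str:
    # Pick whichever action has the lower expected cost.
    if expected_cost_of_acting(p_event) < expected_cost_of_waiting(p_event):
        return "intervene"
    return "wait"

# Break-even probability, where the two expected costs are equal:
# p* = C_fp / (C_fp + C_fn)
p_star = COST_FALSE_POSITIVE / (COST_FALSE_POSITIVE + COST_FALSE_NEGATIVE)
print(f"break-even probability: {p_star:.3f}")   # 0.091
for p in (0.05, 0.20, 0.60):
    print(f"p={p:.2f} -> {decide(p)}")
```

Changing either cost shifts the threshold, which is exactly where the alignment question enters: the cost structure encodes the human values.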
What are some examples of AI
that you're most excited about?
And can you talk about, maybe,
what are some of those technical aspects
to ensure that we operate ethically?
Let's start with some high-stakes areas.
I think that's where we can get
very quickly into reflections about value.
Healthcare diagnosis systems
that can actually
inform decisions about therapy
and treatment in healthcare
are a place where we see great benefit,
as well as a need for caution:
even for systems
that perform well,
we have to grapple with the errors
that might show up.
So one concrete example of something
that we've worked on in the past,
and continue to work on
in the general realm,
is predicting healthcare outcomes.
One example is building a system
that could take
thousands of variables
off of patient records,
internally, where privacy is protected,
inside a hospital,
and predict, for every patient
that's going to be discharged,
which is always a fun day at the hospital
for a patient and his or her family,
whether that patient will come back
and be re-hospitalized within 30 days.
We built the system several years back,
and shipped it worldwide,
and it works quite well.
It posts a probability of
re-admittance within 30 days,
right in the chart
next to the patient's name.
Now, we knew that system had
a false positive and a false negative rate,
so we knew we would miss some patients,
and that some patients who didn't
necessarily need it would get
special handling, treatment,
and interventions,
but it's an example of where
great value can be gained.
Medicare services track
readmissions data,
and in 2009, a group of physicians
and data analysts
took a look at what they considered
preventable readmissions in the US alone,
and they estimated the cost at
about $18 billion a year,
and of course, it's not just the money.
When someone gets readmitted,
they're pretty sick, typically,
so you can see great benefit there.
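As an illustration of the kind of system described here, below is a minimal sketch of a 30-day readmission risk model. It is not the shipped Microsoft system: the variables, data, and logistic-regression model are synthetic stand-ins assumed for demonstration.

```python
# Minimal readmission-risk sketch on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "patient records": a few made-up variables per patient
# (the real system drew on thousands of variables from the chart).
n = 5000
age = rng.normal(65, 12, n)
prior_admissions = rng.poisson(1.5, n)
length_of_stay = rng.gamma(2.0, 2.5, n)
X = np.column_stack([age, prior_admissions, length_of_stay])

# Synthetic outcome: readmission risk rises with each variable.
logit = -6.0 + 0.04 * age + 0.5 * prior_admissions + 0.1 * length_of_stay
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Post a probability of readmission within 30 days for each discharged
# patient, the kind of number that could sit next to the patient's name.
p_readmit = model.predict_proba(X_test)[:, 1]
print("first five risk scores:", np.round(p_readmit[:5], 3))
```

A model like this has exactly the false-positive/false-negative structure discussed above, so the score is only useful alongside a considered choice of intervention threshold.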
What's the responsibility of companies
when it comes to dealing with
automated decision-making
so that those decisions are
accountable and transparent?
I think we have a lot of work
to do. As a leader
in tools and platforms, Microsoft
is working to develop, and make available
to our partners and to our developers,
as well as to our first-party
folks internally,
new kinds of methods
that give us visibility into
the operation of these programs
and their analyses.
We want to have tools that
allow for a lens on inference,
this idea of transparency
or explanation.
It's not an easy topic, but we have
directions and research going on.
We want to reflect about appropriate uses.
We want to have engineering practices,
or best practices, internally,
and then share them with third parties,
practices that basically take us
from the data set,
in terms of how we document our data
so we can understand it and track it
over time for reuse, for example,
to understanding what algorithm we picked,
what studies were done,
what the performance of these systems is,
and even what
the maintenance requirements are.
Sometimes, we think:
"Oh, we're going to build a system
and throw it out in the world,
and it's done",
but in reality, the world is a moving target.
There's lots of dynamism in the world,
and we have to think deeply about even
maintenance of these systems over time.
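One way to picture the documentation practice described above is as a structured record that travels with a model from data set to deployment to maintenance. The schema below is a hypothetical sketch; the field names and values are assumptions for illustration, not an actual Microsoft practice document.

```python
# Hypothetical "datasheet" record for tracking a model over its lifetime.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelDatasheet:
    dataset_source: str            # where the data came from, how it was collected
    dataset_version: str           # so the data can be tracked over time for reuse
    algorithm: str                 # what algorithm was picked, and why
    evaluation_studies: List[str]  # what studies were done
    performance_summary: str       # measured performance of the system
    maintenance_plan: str          # the world is a moving target; plan for retraining
    known_limitations: List[str] = field(default_factory=list)

sheet = ModelDatasheet(
    dataset_source="de-identified hospital records, 2015-2018 (hypothetical)",
    dataset_version="v2.1",
    algorithm="logistic regression baseline",
    evaluation_studies=["retrospective validation", "site-by-site error analysis"],
    performance_summary="AUC 0.81 on held-out data (illustrative figure)",
    maintenance_plan="re-evaluate quarterly as coding practices drift",
    known_limitations=["trained on a single region's population"],
)
print(sheet.algorithm)
```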
We also have to think deeply
about this whole notion of
what's Microsoft's positioning
and point of view
when it comes to hard questions
about new forms of automation,
and how they might be used in the world.
We have to ask questions about which
technologies and techniques and uses
might pose risks to our commitment
as a company,
to core human rights, to our own
human rights statements over time.
And I think it's really interesting and
important as a best practice for a company
to have an internal
set of processes in place:
a committee, boards,
deep-dive working groups
on key issues around these
new kinds of capabilities
that are coming to the fore
with Artificial Intelligence.
You've been working precisely on
that set of issues within the company.
Can you tell us more about it?
Yes, two years ago, we set up
a board called AETHER.
It kind of loosely stands for
AI, Ethics, and Effects in Engineering and Research,
looking at challenges & questions,
potential technical fixes
as well as policy directions
in what I'd call broadly
the influences of AI on people & society
or "AI in the open world".
There are several working groups
working on sets of topics
spanning from sensitive uses
to fairness & bias,
issues about intelligibility & explanation,
safety & reliability,
human-AI collaboration
and best practices: how do we build systems
that follow best practices
going from data to algorithms
to runtimes in the open world?
Now, these workgroups are doing
quite a bit of intensive work
along the lines of both new technologies,
new kinds of tools we can use
internally at Microsoft
as well as provide to third parties
as part of our toolset,
and thinking through issues about policy
and how we engage
with the broader public,
at times even catalyzing discussions
we think are important to have.
You've been working with the
Partnership on AI,
which brings together
industry and civil society,
precisely for some of those discussions?
Right, so we have our internal committee
work going on, with its working groups,
and Microsoft has also been quite an active
supporter and co-founder of other efforts.
One in particular that you mentioned
is called the Partnership on AI;
the longer phrase is "Partnership on AI
to Benefit People and Society",
and it's a 501(c)(3), a nonprofit group,
that's led by a balanced board
of folks from industry,
including representatives from Amazon,
Facebook, Google, Microsoft, IBM, and Apple,
with six representatives of civil society,
nonprofit AI research,
and AI scientists,
economists, philanthropists, and so on.
It's a really fabulous group,
all arrayed around bringing together,
in a multi-stakeholder way,
sets of people having discussions
about best practices for AI.
Here in Brussels, there's a great deal of
focus on privacy and privacy protection,
and a perceived tension with AI.
Can you talk about what solutions,
perhaps from a research point of view,
are on the horizon?
Well, privacy research is a really rich
and evolving area for
core computer science,
as well as policy work,
and their combination.
You might say that data is the fuel
of AI technologies and applications.
As such, there are reasonable questions
about how we can do AI
while really retaining the privacy of
individuals and ensuring that privacy,
so I'm happy to say that some of
the exciting directions in this space
really enable us to do some
interesting learning
without ever putting people's
personal data at risk.
One approach is called, I'll use a strange
phrase right now, homomorphic encryption,
which means we can actually do
machine learning from the data
all the way to the algorithm, and out
to the inferences, completely encrypted,
so no data is ever shared in a way
that's unencrypted, end to end.
This is part of our private AI work.
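To show what computing on encrypted data can mean at the smallest scale, here is a toy, from-scratch sketch of a Paillier-style additively homomorphic scheme. It is purely illustrative: the key sizes are far too small for real use, and production work relies on vetted libraries (Microsoft's own SEAL library, for instance, implements different, lattice-based schemes).

```python
# Toy Paillier-style additive homomorphic encryption (demonstration only).
import math
import random

# Tiny demo primes; real deployments use primes of 1024+ bits.
p, q = 61, 53
n = p * q
n_sq = n * n
lam = math.lcm(p - 1, q - 1)   # private exponent
g = n + 1                      # standard simplified generator
mu = pow(lam % n, -1, n)       # modular inverse used in decryption

def encrypt(m: int) -> int:
    # Encrypt message m (0 <= m < n) with fresh randomness r coprime to n.
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    # Recover m using the private key (lam, mu).
    x = pow(c, lam, n_sq)
    L = (x - 1) // n
    return (L * mu) % n

# The homomorphic property: multiplying ciphertexts adds the plaintexts,
# so a server can compute a sum without ever seeing the raw values.
a, b = 17, 25
c_sum = (encrypt(a) * encrypt(b)) % n_sq
assert decrypt(c_sum) == a + b
print("decrypted sum:", decrypt(c_sum))  # -> 42
```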
Another approach is called
multi-party computation,
where we have assurances in place
that let us learn from, let's say,
3 different hospitals,
where all the hospitals care about building
a better predictive model
to diagnose an illness,
and the hospitals might not want to share
data with each other, and might even be
under obligations not to,
but we want to be able to build
a system that can learn from all 3,
and then share out a classifier
that can be used
to make practices and competencies
better at all 3 hospitals.
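The sketch below illustrates the underlying goal, three hospitals jointly fitting a classifier without pooling raw records, using simple federated averaging on synthetic data. Real secure multi-party computation adds cryptographic guarantees on top of this; the data, the sites, and the model here are all assumptions for illustration.

```python
# Three hospitals jointly train a logistic-regression classifier;
# only gradients leave each site, never patient records (illustrative).
import numpy as np

rng = np.random.default_rng(1)

def make_hospital_data(n: int):
    # Synthetic per-site data: two features, one binary diagnosis label.
    X = rng.normal(size=(n, 2))
    y = (X @ np.array([1.5, -2.0]) + rng.normal(scale=0.5, size=n)) > 0
    return X, y.astype(float)

sites = [make_hospital_data(n) for n in (400, 250, 600)]

def local_gradient(w, X, y):
    # Logistic-regression gradient computed entirely inside one hospital.
    p = 1 / (1 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

w = np.zeros(2)
lr = 0.5
for _ in range(200):
    # Each site shares only its gradient; a coordinator averages them,
    # weighted by site size, and updates the shared model.
    grads = [local_gradient(w, X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites])
    w -= lr * np.average(grads, axis=0, weights=sizes)

print("shared classifier weights:", np.round(w, 2))
```

The final weights form the classifier that can be shared back out, improving practice at all three sites without any site exposing its records.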
Final question: any advice
for policymakers about
preparing for the new world
of Artificial Intelligence?
For the first time in history,
we're working on technologies
that are, in some ways, pushing into
the realm of human intellect,
coming into places where, in the past,
we relied on and expected people to be
making decisions, to be performing tasks.
As such, we have many questions
to ask about accountability,
ethics and values in actions,
notions of safety and reliability,
and trustworthiness.
These are the same kinds of questions
we've asked of people in organizations
over the years, and tried to get
assurances about, when it comes to labor
and work and enterprise.
We have a new lens of computation
to look at those questions through now,
and it's a really, really
interesting, rich space
that spans failsafe technologies,
labor law, philosophy,
ethics, psychology,
human-AI collaboration,
which gets into social experiences
for human beings
in working with each other
and studying that
to understand better what it means
for machines to work with people,
and people to work with AI systems,
so it's just a lovely broad space.
My sense is we need to get
more interdisciplinary about it.
We need to bring in people from
different disciplines to work together,
and we need to ensure that people
who are not technical geeks, non-engineers,
have the same, or nearly the same,
deep understanding where it counts,
to address this, what I would call,
civil society bandwidth gap,
through education, engagement,
and special funding.
Well, thank you very much
for being with us today, Eric.
It's been great hanging out with you!
Thank you.
