The part about trying to create
regulations, trying to inform regulations
and policy for this new AI age, that is
hard.
I'm happy to be able to welcome Dr. Osonde Osoba from the RAND Corporation.
Glad to be here.
Also a professor at the Pardee RAND Graduate School. Osonde, it's great to see you.
Thank you for
joining us in this Staying Human
interview series here at the AI summit
in San Francisco.
What's your angle on policy for AI at RAND?
So I do a couple of things at RAND. I do what you might call pure AI policy thinking. Like, what does the presence of AI and machine learning models mean for policy and regulation?
And the other half of my work is developing AI and machine learning models to answer policy questions, say from the Defense Department, from health agencies, and things like that.
The part about trying to create regulations, trying to inform regulations and policy for this new AI age, that is hard. That is very difficult, because there are all these concerns that didn't used to be concerns in the past.
This idea of: how do you define discrimination? How do we assure equity and fairness when computers are involved?
Along with being a researcher, you're also a professor at the Pardee RAND Graduate School.
Tell me
about that school.
Like, what's happening there?
So at the Pardee RAND Graduate School, we're building the future leaders of tomorrow, the future regulators of tomorrow. People who understand how we can do policy analysis.
How we can
make good policies in response to our
changing world.
So we have this redesign effort at PRGS where we're trying to
make our students more agile, more able
to plug into the context in which they're
going to be thinking about policy.
It's
not easy.
It's a lot of work and it's
something I'm deeply invested in.
That's good. And a lot of these students are coming from backgrounds that aren't technical, yet you're teaching them some really technical topics. How's it going? Like, how does that work?
Ah, yes. I think about it as an exercise in learning to talk to decision-makers who would actually be making these decisions. Because if I cannot teach students who are a captive audience, students who maybe don't have the right background, what it means to have machine learning models out in the world, and why it's important to think carefully about policymaking with AI models, then I'm not going to be able to convince somebody at a higher level.
As you're explaining this to these students, what kind of case studies or use cases do you use to really drill home the importance of AI, and maybe some of those characteristics you were referencing before?
I think of machine learning as
tools for answering questions - predictive
tools for answering questions.
So I don't try to sell them AI just as a new thing. My job is to teach them how to formulate the predictive question in a way that they can throw machine learning models at it, and that's how I try to approach it.
I watched your TEDx talk, and you make a comment in that talk about AI being an intelligence in our own image. Talk to me a little bit about that?
So again, I'll pull back from saying AI and say machine learning. AI, in its most general sense, is a broader space that's less data-driven than the machine learning approaches, what we call statistical machine learning.
Why I make that distinction is because most of the performant models we have right now, the ones that run Google, Microsoft, and Apple, are statistical machine learning models that train, that learn trends from the wealth of data we've collected in the Big Data Revolution. All that data is human-generated. It reflects our human behaviors, our human biases.
So whenever we train a model based on those types of data, we're really training those models to better reflect human behaviors. So it's in our own image.
Right. And that, as I say, is not necessarily a good thing. Sometimes it reflects our wisdom, our ability to answer questions and spot trends and do interesting things, but other times it reflects all the biases that have been built into our societies over the past millennia.
And so, I mean, one bias that we have to worry about is obviously automation bias, but what are some of the other biases that we should be considering when we're thinking about, you know, instantiations of artificial intelligence broadly in an important space like healthcare?
The question is how biases creep in. A lot of the time we see this in criminal justice systems and in public systems like child welfare systems, and I imagine it's got to happen in healthcare systems. You have to be careful when you have decision-makers circumventing automated machine learning models or their decisions based on past, prior understanding.
We also have situations where models can't explain themselves well. In the healthcare setting that's a huge problem, because you have decision-makers who have liabilities, insurance liabilities, and you're asking them to adopt a decision made by a system that is unable to explain itself. How do you justify that when you're on the hook in terms of accountability, in terms of insurance?
