Here with Steve Chien, head of AI at NASA JPL. Steve, how's it going today?
Very good. I'm very excited to be here at The AI Summit and to hear about a lot of the great things happening in AI.
Yeah, fantastic. So you came off stage from your keynote a couple of hours ago. Can you tell me a little bit about it and what you were really trying to communicate to the audience?
Well, that AI is really being used all across the space exploration enterprise: everything from making the spacecraft smarter, to analyzing the huge data sets on the ground, to operating things like the communications antennas we need to talk to the spacecraft.
A lot of AI systems are used in space, and I imagine they exhibit a lot of autonomy; to some degree you can't control everything all the time. Can you speak to that? What could businesses and enterprises learn about implementing something that's actually autonomous?
Well, there's one aspect which is a direct relationship: there's a lot of business to be done in autonomous systems. If people want to, say, drill for oil underneath the ocean or monitor their pipeline operations, it requires the same kind of autonomous operations and autonomous robotics that we're talking about, that we routinely develop at NASA. But there's also transfer at a more abstract level. NASA has immense amounts of data that we have to sift through and understand for scientific purposes, and that's exactly what many companies are doing: they're dealing with this explosion of data and trying to figure out how to make it a routine process to understand that data and exploit it to their commercial advantage. It's just that we're doing it for scientific purposes. So there's a lot of crossover just in areas like that.
Sure. Your keynote was entitled "AI and the Search for Life Beyond Earth." Can you talk to me a little bit about all that data you're sifting through, and why it's so important to space exploration?
Well, the entire purpose of space exploration, from our perspective, is to advance science, and so the data is really the prize for sending a spacecraft. We send a spacecraft to Mars not for the sole purpose of sending it there, but to learn more about what Mars is like. And there are direct applications of this to all kinds of business enterprises. Companies want to understand more about their customers. Healthcare companies want to understand more about their patients. Retail vendors want to know more about their customers and anticipate their wishes, so that they can support their business more accurately and more quickly. These are all the same kinds of things that we work on at JPL.
Speaking to that, you have to build a lot of reliability into the systems you're using. Could you talk to me a little bit about that? What kind of advice would you give for building reliability into an AI system?
Well, that is one of the true challenges we're facing in AI right now. As these software systems become increasingly complex, how do we scale up our methods of verifying them and making sure they'll behave properly? There's been a lot of news in the press about autonomous cars; we don't have the ability to just test out a rover on Mars for, say, eight or ten years or millions of hours. So we're firm believers in traditional testing methods, but also in formal methods: trying to prove certain properties about our systems. At some level that's the only long-term alternative, and I think that's the future of all kinds of systems, not just autonomous systems. For any kind of system, you have to be able to understand what it does at a general level and show that it will behave a certain way.
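As a toy illustration of the "prove certain properties" idea, and not any real flight software or JPL tooling, a safety property of a small, invented rover mode machine can be checked exhaustively over a bounded state space, rather than sampled by test runs:

```python
# Illustrative sketch only: exhaustively checking a safety property of a
# hypothetical rover mode machine (all modes and rules invented here).
from collections import deque

# Hypothetical modes and allowed transitions.
TRANSITIONS = {
    "IDLE":  ["DRIVE", "SAFE"],
    "DRIVE": ["IDLE", "SAFE"],
    "SAFE":  ["IDLE"],  # recovery always passes back through IDLE
}

def violates_safety(path):
    # Safety property: the machine never stays in DRIVE across a step
    # without an intervening stop.
    return any(a == "DRIVE" and b == "DRIVE" for a, b in zip(path, path[1:]))

def check_all_paths(start="IDLE", depth=6):
    """Breadth-first enumeration of every mode sequence up to `depth`.
    Unlike testing a sample of runs, this covers the whole bounded space,
    returning a counterexample path if the property can be violated."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if violates_safety(path):
            return path  # counterexample found
        if len(path) < depth:
            for nxt in TRANSITIONS[path[-1]]:
                queue.append(path + [nxt])
    return None  # property holds for all paths up to this depth

print(check_all_paths())  # None: no counterexample within the bound
```

Real formal-methods tools (model checkers, theorem provers) do this symbolically over far larger or unbounded state spaces; the sketch only shows why proving a property differs from running a finite set of tests.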
Do you think technologies like AI remove the need for humans to become interplanetary?
I'm not quite sure I understood your question.
Do we really need to... does this remove the need to send manned spacecraft up, if you can analyze data from a distance using machine vision?
That's a very good question. I believe that AI in space exploration is complementary to human exploration. First of all, at least for the foreseeable future, we don't have an AI that would be as good as having the scientists involved or having astronauts go and explore themselves, and for that reason we want to send humans to these remote locations. But we want to send robots first, because it's very expensive to send humans; we want to learn everything we can with robots and then send humans later. Even when the humans are there, they'll need robots, because we don't want the humans spending their time on the mundane, keeping the plumbing running and so on. We want the humans focusing on the truly intellectual tasks, and we want to relieve them from getting stuck in the mud of the day-to-day operations of the space station, or the Mars habitat, or whatever it is. So I think human exploration and robotic exploration go together: robotic exploration first, and later on robots to help the humans. That synergy is very important. There are also some missions that we don't know when we'll be able to send a human on. If we send a spacecraft to another star system, sending a human on a one-way, hundred-year mission is well beyond our current technology, but a robot we could think about. We're not ready to send one now, but perhaps someday.
Sure, sure. You've talked about bringing the lessons of NASA's use of AI to the other enterprises here, but what is it about the companies here today that you're looking to take home?
Well, to me there are a lot of technical obstacles keeping us from moving forward, but there are also a lot of what I would call business, people, or process challenges. I heard a lot of interesting talks on AI in healthcare, and they obviously have tremendous challenges because people's lives are at stake: how do you introduce AI, how do you decide how things are done? This gets back to a question you raised earlier, about how we make sure the rover does the right thing. Well, it's pretty important when it's a two-billion-dollar rover, and it's pretty important when someone's life is at stake. So how you infuse the AI into your overall process, those are very similar questions. There are lessons about understanding the whole process, understanding AI in the context of the overall system, and also understanding the culture, the culture of the doctors versus the culture of spacecraft operations. These are lessons that carry across these different disciplines.
What are the challenges of building in explainability and accountability, especially in mission-critical, life-threatening situations? How do you see a way of overcoming those?
This topic of explainability is one of the true challenges in AI right now. Most people talk about it in the context of machine learning; my work is more on the side of model-driven AI, but explainability is equally important there. We actually have some very good technologies for explaining tactically what the AI systems do. When the car drives right when we expect it to go straight, it's very easy to say, well, this particular rule triggered, and so it thought there was an obstacle there. But that's not the deeper answer that we want. The challenge for AI is to explain the way a human expert would. The human expert doesn't say, well, this line of code triggered and then this line of code triggered. He says, there's this general class of white objects that we didn't really consider might be traveling on this trajectory, and so we need to reconsider our design in light of that. That is the current challenge with AI: to explain at what we would call the strategic level. This comes up a lot in space exploration in terms of science planning. When we deploy our planning system and it says you can't take that observation, the scientists always come to us and ask, why is my observation not there? We can say, oh, it's because the Sun angle is wrong, the illumination is wrong for your observation. But that's not the real answer we want. The real answer is, well, it's because of these other observations, and if we change the trajectory this way then we can get your observation, or we can take it at this other time. That's the next level of explanation that we need to get to.
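The two levels of explanation described here can be sketched with a toy scheduler. Everything below is invented for illustration (the observation names, the illumination rule, the one-slot conflict model); it is not the planning system discussed in the interview. The tactical answer names the constraint that fired, while the strategic answer searches for an alternative that would satisfy the scientist:

```python
# Toy sketch: tactical vs. strategic explanation in observation scheduling.
# All observations and constraints are hypothetical.

def sun_angle_ok(obs, time):
    # Invented illumination constraint: target is lit only in this window.
    return obs["lit_window"][0] <= time <= obs["lit_window"][1]

def conflicts(obs, time, schedule):
    # One observation per time slot in this toy model.
    return [o for o, t in schedule if t == time]

def explain(obs, time, schedule):
    """Tactical explanation: report which rule rejected the request."""
    if not sun_angle_ok(obs, time):
        return f"rejected: illumination wrong at t={time}"
    clash = conflicts(obs, time, schedule)
    if clash:
        return f"rejected: conflicts with {clash[0]['name']} at t={time}"
    return "accepted"

def strategic_explain(obs, time, schedule, horizon=10):
    """Strategic explanation: propose an alternative slot that works."""
    for t in range(horizon):
        if sun_angle_ok(obs, t) and not conflicts(obs, t, schedule):
            return f"can't take it at t={time}, but t={t} would work"
    return "no feasible time in this plan"

schedule = [({"name": "dust-devil survey"}, 3)]
target = {"name": "crater imaging", "lit_window": (2, 5)}

print(explain(target, 3, schedule))            # tactical: names the conflict
print(strategic_explain(target, 3, schedule))  # strategic: offers t=2
```

The design point is only the shape of the two answers: the tactical one stops at the triggered constraint, while the strategic one reasons over the rest of the plan to offer a fix, which is the harder explanation to produce.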
Definitely. Well, Steve, thanks for talking to me today. Good to have you here.
