Shumian and Professor Ioannis, thank you for
joining us. Shumian, congratulations on winning
this year's Best Paper Award at CVPR.
Thank you. Thank you for inviting me for this interview.
Thank you for inviting me.
We’re delighted that both of you can join
us today. Can you briefly introduce yourselves?
Yeah, sure. I'm Shumian. I'm a second-year
PhD student at the Carnegie Mellon University
Robotics Institute. I work with Srinivasa
and Ioannis on NLOS (non-line-of-sight) imaging.
That's what this paper is about.
Hi, everyone. I'm Ioannis Gkioulekas. I am
an assistant professor at Carnegie Mellon
University Robotics Institute. I've been there
for a couple of years. And I work on computational
imaging and computer vision.
So the paper today that won the award is called
“A Theory of Fermat Paths for Non-Line-of-Sight
Shape Reconstruction”. Can you give us a
quick summary of your paper?
Okay. So the problem we want to solve in this
work is to reconstruct objects that are blocked
by occluders and outside the field of
view of the camera or the sensor. The way
we do it is to look at some other
surface, like a wall: through reflections,
that wall is going to give us some information
about the non-line-of-sight object. We
use a time-of-flight sensor to collect data
from there, and we use that time-of-flight
information to reconstruct the NLOS shape.
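As a rough numerical sketch (our own toy illustration, not the paper's actual algorithm), time-of-flight measurements translate directly into path lengths: a photon that leaves the sensor, hits the wall, bounces to the hidden object, and returns traces a three-bounce path whose length is the travel time multiplied by the speed of light. The geometry and function names below are hypothetical:

```python
import math

# Toy three-bounce geometry (hypothetical, in meters): light travels
# sensor -> wall point -> hidden point -> wall point -> sensor.
C = 3e8  # speed of light, m/s

def three_bounce_time(sensor, wall_pt, hidden_pt):
    """Return the arrival time (s) of a photon on this indirect path."""
    path_length = (2 * math.dist(sensor, wall_pt)
                   + 2 * math.dist(wall_pt, hidden_pt))
    return path_length / C

# Wall point 2 m from the sensor, hidden point 1 m from the wall point:
t = three_bounce_time((0, 0, 0), (0, 0, 2), (1, 0, 2))  # 6 m / c = 20 ns
```

Inverting this measurement, i.e. recovering the hidden point from the arrival time, is the reconstruction problem: each arrival time only constrains the hidden point to lie on a sphere around the wall point, so many measurements must be combined.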
How long did it take you to work on 
this paper with your team?
I've been working on this problem for two
years. During those two years, we tried
different kinds of things, and this is what
we have come up with so far. And I will continue
working on this problem for a while.
So what are the most important 
contributions from this work?
The quality of the reconstructions of the non-line-of-sight
object that we get is, I would say, pretty close
to the reconstructions you would get in
line-of-sight settings, where your camera
can directly see the object. So it's pretty
exciting to see those non-line-of-sight reconstructions
starting to look like line-of-sight reconstructions.
It's as if we are making the entire world
specular, like a mirror, so that we can reconstruct
every object from everywhere.
And can you tell us about some real-life applications
where this work can be applied?
Sure. There are a lot of critical applications
that these NLOS techniques can enable.
For example, in medicine, we can
use this kind of technique for minimally
invasive surgery. For doctors
who want to look inside your body, it might
be possible to just shine light into your throat;
then the photons will travel through your
body and come back, and measuring
that might give some information about
what's inside your body.
There's also autonomous driving. Your car is
driving one way, but it's critical to know
what's happening on that side of the road
and around the corner. It would be great
if you could know in advance
what is happening there.
And in disaster environments, say there's
a fire happening, this kind of technique can
be used for search and rescue: what's
happening on the other side of the corridor
if there is a fire blocking your view?
So they're pretty significant use cases. That’s
very exciting work. And what inspired you
to work on such an interesting topic?
Yeah. I mean, obviously those exciting potential
applications that I've mentioned inspired
this work. I also think it's interesting
because it seems like magic at first:
all of us are curious about what's
happening around the corner. In ICCV 2009, Ramesh
Raskar's group from MIT did the first
NLOS reconstruction and showed us the
possibility of doing this kind of
work, of seeing around the corner. And currently
the entire computational imaging field is
pushing this technique to another level.
I want to be a part of that, so I joined
this team to do that.
It sounds like it's a great step towards the
next level as well. And there have been many
research studies using LiDAR to solve similar
problems. So why did you pick a different
method to address this issue?
Actually, the method that we're using is not
dramatically different from LiDAR, in the sense
that LiDAR uses the first returning photon
to estimate depth. The way we do it is that
we use some subsequent photons from the
time-of-flight information that we collect
to do the NLOS reconstruction.
For example, if we are directly looking
at a wall and only use the first returning
photon, as LiDAR does, we're only going
to reconstruct that wall. But what's interesting
is around that corner, so you have to use the
subsequent photons that come indirectly
from those objects back to your sensor;
you have to use that kind of information to do the
reconstruction. And similar to LiDAR, which
uses only the time information to do depth
estimation, because time multiplied by the speed
of light is the path length, we also
use only the time information, so we
can directly reconstruct the shape of those objects.
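The distinction just described can be sketched in a toy Python snippet (our own illustration with hypothetical names and numbers, not the paper's method): from a transient histogram of photon arrival times, the first peak gives the LiDAR-style wall depth, while later peaks carry the path lengths of indirect bounces off the hidden object.

```python
C = 3e8  # speed of light, m/s

def depths_from_transient(times_s, counts, threshold=5):
    """Split a transient into a direct wall depth and indirect path lengths.

    The first above-threshold bin is the direct (LiDAR-style) return; every
    later bin corresponds to photons that detoured via the hidden object.
    """
    peak_times = [t for t, n in zip(times_s, counts) if n >= threshold]
    # time multiplied by the speed of light gives the total path length
    path_lengths = [t * C for t in peak_times]
    wall_depth = path_lengths[0] / 2 if path_lengths else None  # round trip
    indirect_paths = path_lengths[1:]
    return wall_depth, indirect_paths

# Wall 2 m away (direct return after ~13.33 ns), one indirect return later;
# the middle bin is noise below the detection threshold.
wall, extra = depths_from_transient(
    [13.33e-9, 20.0e-9, 26.7e-9], [50, 2, 12])
```

A simple thresholded peak picker like this is of course far cruder than what real NLOS pipelines do, but it shows why discarding everything after the first return, as conventional LiDAR processing does, throws away exactly the photons that carry the hidden geometry.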
Professor, from your experience, what was it
like for you to observe the progress
of this research?
So it's pretty interesting to be doing research
in non-line-of-sight imaging, because, as Shumian
mentioned, there is a pretty large part of
the computational imaging community working
on this problem. And there have already been
a few amazing achievements in this area by
other groups, like Matthew O'Toole and Gordon Wetzstein
at Stanford, Andreas Velten at Wisconsin,
and so on. All of these provided us with
a lot of inspiration about how to continue
pushing forward on this problem. So it's been
pretty exciting to watch over the years and
see how much we can add to
these with our own paper.
And given what you know today, what is your
perspective, for both of you, on what
the next stage of
development needs to be?
I mean, the main problem in NLOS
imaging is signal-to-noise ratio.
We're trying to measure photons
that bounce several times on the walls,
go to other parts of the scene, and come back
to us. There are very few of those photons;
we're measuring 10 to 15 photons, that's about
the level of signal we have. So the thing that
we will really need to push forward is
increasing that signal, in order to make
all of these applications Shumian mentioned
earlier very practical. I think that we
are now at a pretty good place on the second
part: if we can find the signal, we can reconstruct something
out of it. So now we need to work on the first
part, how do we enhance the signal, so we
can try to use it in much more uncontrolled
settings than what we're doing right now.
Shumian, from your perspective, is there anything
additional you'd like to add as well?
I wish I could add to what Ioannis said.
You're all good to go with that? That's
excellent. So, again, this question is
for both of you. The paper has six authors
from three institutes. Can you tell us more,
certainly from your perspective, Shumian,
about the teamwork and the collaboration?
How was that experience for you?
Yeah, so Ioannis and Srinivasa are both my
PhD advisors, and the three of us have
weekly meetings to discuss this. But actually,
the initial idea for this work, the very
first thoughts about this kind of algorithm,
came from discussions between Ioannis and
Aswin. Aswin contributed a lot intellectually
to this work. We also had very useful
discussions with Kyros, who
put a lot of effort into this. They provided
us with the initial hardware setups at the University
of Toronto, and Sotiris did all of the tedious
experimental work at first. Without Sotiris
and Kyros and their hardware,
none of this could actually have been demonstrated on real results.
When you work in such a collaborative environment,
how do you communicate with each other on
a daily basis? Like how does that teamwork
come together and form as well?
Mainly I communicate with Ioannis
daily; Ioannis and I have
meetings twice a week, and Srinivasa
always drops by my office and says, “What's
happening?” “What's going on?” And I
will explain to him what is happening today
and what I plan to do next. With Kyros,
Aswin, and Sotiris, we might discuss ideas
at a high level, to see if
this is the correct direction to go.
Do you ever have differing opinions 
on the way forward?
That’s right. That's how research is done.
Different people have different opinions,
and I will try out some of those ideas to see which
things work and to verify those ideas.
So it's a constant iteration.
Yes, exactly. I think that's how 
each paper is done.
And Professor, from your perspective, then
from the university’s point of view,
how was the collaboration as well?
Yeah, it's been a pretty interesting collaboration.
Kyros and Srinivasa are both very senior
people in our field, and therefore they always
bring a lot of wisdom to these problems.
As Shumian mentioned,
Aswin and I were originally
discussing the first steps towards this
idea, and we were working towards solving
some mathematical problems. And then,
as Shumian said, thanks to Sotiris
from the University of Toronto, we recently got the first
measurements that showed that what we
came up with would actually work. So it
was a pretty significant team effort.
That sounds great. So where do you go next
from here, Shumian? What are your next steps?
Yeah, I really appreciate this award. It's
a great encouragement. And currently, I'm
a second-year PhD student, so this motivates
me to work harder and push my own boundaries
even further. And I encourage everyone
who is interested in computational imaging:
this is an interesting and exciting field.
If you're interested in physics, optics,
or computer vision, it's really like an intersection
of everything, so working in this field is
really exciting.
That's awesome. And Professor, from your perspective
as well, how does it feel to see Shumian get
this achievement today, 
the award of the team as well?
Yeah, it's great. As Shumian said, it's nice
to see computational imaging, a smaller
part of the computer vision community right
now, be recognized. And hopefully this will
encourage others to also work in this area.
It's also great to see female students receive
this award. We have a lot of issues with
diversity in STEM, so I hope that this can
also help in that direction.
Very encouraging, very inspiring for everyone
out there. That's awesome. And I really love
the applications that you talked about; they're
very real, significant challenges in our
world today. So I want to thank you both for
joining us. It's been our pleasure to have
you with us today. And congratulations again.
It's such a great achievement.
Thank you.
Thank you very much.
