CHRIS DIBONA: Hi, everyone,
welcome to the auction of the
Kennedy Space Center assets.
[LAUGHTER]
We'll be starting with the Vehicle Assembly Building, and then we'll be moving on to the launch pads.
So I'll keep my introduction
short.
My name is Chris DiBona.
I look after open source
for Google, as
much as anyone does.
And I also try to help out with
the NASA relationship.
So sometimes-- about once a month, it's looking like-- we have folks over from NASA, and we try to introduce them and say, hey, a month from now, maybe we'll have another person.
Maybe three weeks,
maybe five weeks.
But if you're interested in
space and this country's
activities therein, as well
as the world's activities
therein, just keep an eye on the
space alias, and I try to
get the word out.
So look into Talks at Google.
You do this as well, right?
So we have a lot
of folks who--
if you're a space enthusiast,
this is a good place to work,
is the long and short of it.
Today, we have Ross Beyer.
Beyer?
Bay-er?
Beyer?
ROSS A. BEYER: Beyer.
CHRIS DIBONA: Beyer.
That's what I thought.
And he works at the Carl Sagan
Center of the SETI Institute.
But he also does really exciting things for the Intelligent Robotics Group.
And he's going to go over
some stuff today.
He's also, it's worth pointing
out-- he ruined-- not ruined.
Made his vacation awesomer
by planning out some--
while we were all relaxing and
skiing and drinking egg nog,
he was planning photography for
the Mars Reconnaissance
Orbiter, among other things.
And the HiRISE project, right?
ROSS A. BEYER: Mm-hm.
CHRIS DIBONA: So he's more
productive than any of you.
And I hope you'll welcome
him to Google.
So Ross.
[APPLAUSE]
ROSS A. BEYER: Thank
you, Chris.
I will turn this microphone
off, so I don't get
interference.
So I'm going to talk to
you today about maps.
And although that may seem like
a facetious thing to talk
about to a Google audience,
maybe some people
here aren't from Geo.
But what we're going to talk
about is not just maps.
When we talk about maps, a lot
of people just think of them
as utilitarian things.
Things like this, that just
show you information.
People have an atlas.
They get that information.
It's utilitarian for them.
Some of the maps, of course,
that you create, certainly
Google Maps allow people to
do some analysis with the
information presented
to them in the map.
So they can say, I'm here, and
I want to go there, and the
tool will provide them a couple
different routes.
And so they can decide, right?
It gives them the ability to
choose by picking amongst
those options.
So utilitarian maps totally
have their place.
But there are other kinds of
maps, ways that we can put the
data together in a way that
will help people, either
scientists or engineers, really
step forward and really
learn things or do analysis,
either to land spacecraft on
other planets, or to explore
those other planets, or to
learn more about them.
Other kinds of maps are
more interesting.
Here's a map you may have seen,
put together by Mark
Newman, of counties in
the 2012 election.
This is how they voted.
There's not just red and blue,
but also purple, in between.
And this is really cool.
If you want to know where you are, you can see how that county voted.
And it's interesting for that,
but it doesn't really tell you
other things, right?
Because your eye does a very
good job of doing kind of an
area-based matching, saying,
well, shouldn't that all be
blue, or at least mostly purple,
if the president that
we have actually won?
But that's, of course, because
there aren't the same number
of people in every county.
So if you change that to this
map, which is where all of the
counties are, modified by the
number of voters within them,
then you can see the--
you can almost see
Treasure Island.
You can certainly
see LA County.
Manhattan becomes gigantic,
and Chicago as well.
And so this is another way of
presenting that information.
You wouldn't want to plan your
route to your vacation with
this map, but this map,
information displayed in this
way, really helps you learn
something about this data in a
way that the other map
maybe doesn't.
There are other kinds of maps that we make, too, that aren't necessarily geographical in nature.
You've seen, perhaps, maps
like this, maps of the
internet or maps of this
sci-fi TV show.
This is a map of physics,
which is fun.
My favorite is astronomy here.
In parentheses, you may not be
able to see, it says "early
physics." Which is fun.
And this isn't something that we only do kind of in the modern day.
You've seen, maybe,
a lot of these.
This was done in the 1930s.
So geeks and maps that are not
necessarily about places have
been around for a long time.
But as we step off into the
solar system and we try and
decide, what kind of maps do
we need to make to help
scientists learn things about
planets-- either how they were
formed, or things about their
environment, so that we can
land safely and operate
there--
we have to think about what kind
of data we can acquire
and, once we get that data, how
we put it all together.
So before we step off to the
planets, I'm going to talk a
little bit about this
scene on the Earth.
So this is a view of
the Grand Canyon.
This is not typical of how we have planetary data, because this photo was probably taken from the rim, based on its point of view.
But it shows you some things
that I want to talk about here
for a few minutes.
When we go to a new planet, we
don't have the luxury of being
able to scurry around on it.
The first thing we send
are orbital probes.
And sometimes we get lucky and
send rovers or landers.
But perhaps any good geologist
could take a photograph and
start telling you the story of
probably approximately what
happened here, in a
geologic sense.
They can probably tell you
something about how these
layers were built up in a
constructional sense, over the
millennia, and also how the
erosion processes then cut
this river through
these layers.
But there's a point at which a
geologist, just from looking
at that picture, will stop,
because they will have to say,
well, we have to start
measuring this.
I can spin you a good story, but
we can't really go forward
and do hypothesis-testing and
really kind of understand what
happened here until we can
measure it and map
it, they will say.
And so certainly 100 years ago,
the solution was to send
people down and measure
things.
The modern solution may
even still be that.
You might say, well, we could
have aerial surveys, or maybe
a satellite that we
can send over.
But it may still even be cheaper
to hire a couple of
grad students with GPS units to
scurry around these slopes
and take measurements and
actually take a photograph of
this contact right here, and
say, what is different between
the rock here and there?
And how does that inform what
happened, in a geologic sense,
in this place?
Now the other thing you
might want to do--
again, if you can't do that,
because of course, we really
can't do that on other
planets right now.
We only landed six crews on the nearest globe, 40 years ago, and we haven't really done that since, which is too bad.
We have sent a lot of
robots, of course.
But what you want to do, when
you get those first
photographs of another planet,
and you see something like
this, you want to say,
well, how big is it?
How deep is it?
What is its shape?
What can we use that information
for, to advance
our knowledge of what
we're looking at and
why it got that way?
So when we go forward into the
solar system, overhead
photographs are great.
They're important.
They're the essential
first step.
But we also need to know
something about the depth.
That really, really helps
us out, to give us a 3D
sense of the place.
And so one of the tools that we
have at Ames is this tool
that we've developed.
This is my kind of "ad" slide
with the URL on it.
It's the Ames Stereo Pipeline.
This is a piece of software
that we've
developed over the years.
It's Apache 2.0-licensed. It used to be under NOSA, the NASA Open Source Agreement, and it was terrible, and we got it changed.
So this tool takes data,
stereo photographs, and
creates depth maps or terrain
models out of it.
10 years ago, it started as
software on our robots.
We had a little robot camera,
and we wanted to
build terrain models.
That's what this first model on the left is: a couple of fake rocks put in front of our rover.
And you can see that we're
accurately detecting that this
rock is closer to us than
the one behind it.
Over the years it's evolved so
that we could take data from
orbital satellites.
This one is of the moon.
And also, I'll talk later, at
the end of this talk, about
how we can apply these
algorithms and this software
to terrestrial data as well.
Because the Earth's a planet
too, as people always tell me.
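To make that stereo step concrete, here's a minimal sketch of the core correlation-and-triangulation idea, using OpenCV's block matcher as a stand-in for the Stereo Pipeline's own correlator. The image pair is synthetic and the camera constants are assumed values, not anything from an actual mission.

```python
import cv2
import numpy as np

# Synthetic stand-ins for a rectified stereo pair: the right image is the
# left image shifted, which mimics the parallax from a horizontal baseline.
rng = np.random.default_rng(0)
left = (rng.random((480, 640)) * 255).astype(np.uint8)
right = np.roll(left, -8, axis=1)  # 8 px of disparity everywhere

# Block matching: find each left-image patch in the right image. The
# horizontal offset (disparity) is large for near terrain, small for far.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> px

# For a calibrated pair, depth = focal_length * baseline / disparity.
focal_px, baseline_m = 800.0, 0.3  # assumed rover mast-camera values
depth = np.zeros_like(disparity)
valid = disparity > 0
depth[valid] = focal_px * baseline_m / disparity[valid]
```

The real pipeline adds camera models, bundle adjustment, subpixel refinement, and map projection on top of this core step.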
But first, let's talk
about the moon.
So what if you want to
land on the moon?
You need to know something
about its shape.
Because if you want to land
something very precisely, near
something very dangerous, you
want to be careful about
what's there and where it is.
And this, perhaps, busy slide
talks about how you would want
to precisely land something
in rough terrain.
We landed these rovers on Mars
in big fat parking lots, for
the most part.
Now that's not to diminish
their danger.
They were certainly challenging
in their own ways.
But for the most part, we
targeted big, flat areas where
there wasn't very
much topography.
If we want to go to someplace
really interesting, like here,
we can't just close our eyes
and hope for the best.
We need to know a lot more about
the terrain, and we have
to have smart systems on those
spacecraft so that they can
compare a map, perhaps that we
made ahead of time, to what
they're seeing out of their
LIDAR eyes and out of their
visual cameras, so they can
match that up and say, oh,
maybe I'm up here.
I'm on the top.
But there's a canyon
ahead of me.
I have to prepare for that.
So that's what this is about.
Different kinds of algorithms run at each stage.
But as you get closer and closer
to your target, if you
want to land very precisely,
close to something, you have
to be very smart about not only
the map that you preload
into the spacecraft, so that it
has a sense of where it is,
but also as those algorithms
work, what
they're looking for.
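As a toy illustration of that map-matching idea (nothing like the actual flight software), you can slide the frame the lander sees across the preloaded reference map and take the best correlation peak; normalized cross-correlation tolerates overall brightness differences between the pre-rendered map and the live view. All the data here is synthetic.

```python
import cv2
import numpy as np

# Hypothetical data: a rendered reference map, plus a smaller "descent
# camera" frame cropped out of it with a little noise added.
rng = np.random.default_rng(1)
reference_map = (rng.random((512, 512)) * 255).astype(np.uint8)
descent_view = reference_map[200:264, 300:364].copy()
descent_view = cv2.add(descent_view, rng.integers(0, 10, descent_view.shape, dtype=np.uint8))

# Slide the camera frame over the map; the peak of the normalized
# cross-correlation surface is the position estimate.
scores = cv2.matchTemplate(reference_map, descent_view, cv2.TM_CCOEFF_NORMED)
_, best_score, _, (col, row) = cv2.minMaxLoc(scores)
print(f"estimated position: map pixel ({col}, {row}), score {best_score:.2f}")
```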
So to do those things, we have
to talk about what data we
have for the moon,
for example.
And one of the kinds of data
we have are laser altimeter
data from the LRO spacecraft
that's in orbit right now.
There's an instrument called the Lunar Orbiter Laser Altimeter, or LOLA.
And as it flies on its
north-to-south orbit, it puts
down this stitching
of laser shots.
It's very highly accurate, to within a centimeter.
Great data, but sparse.
If you zoom all the way out, as
we've gone around the moon
over the last four or five
years, we've gotten this very
good coverage.
But even at the equator, there
are kilometers between these
kind of individual stripes.
But this is an essential piece
for building kind of a
framework of the moon
in its shape.
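A hedged sketch of what bridging that sparsity involves: interpolate the scattered shot points onto a regular grid. SciPy's general-purpose griddata stands in here for the much more careful gridding the LOLA team actually does, and the "shots" are synthetic.

```python
import numpy as np
from scipy.interpolate import griddata

# Stand-ins for LOLA shot tables: sparse (lon, lat, elevation) samples.
rng = np.random.default_rng(0)
lon = rng.uniform(0, 10, 5000)
lat = rng.uniform(-5, 5, 5000)
elev = np.sin(lon) + 0.1 * lat  # placeholder "topography"

# Interpolate the scattered shots onto a regular grid; with this method,
# cells outside the data's convex hull come out as NaN.
grid_lon, grid_lat = np.meshgrid(np.linspace(0, 10, 512), np.linspace(-5, 5, 512))
dem = griddata((lon, lat), elev, (grid_lon, grid_lat), method="linear")
```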
The next piece that we have
are very high-resolution
imagery that we get from
the LRO camera.
Here are some examples of its scenes at 50 centimeters and a meter per pixel.
Great images of the moon.
You can see individual house-
and car-size boulders.
You can see artifacts on the
surface of the moon.
That's the "Apollo
15" landing site.
And we can take that data with
the stereo pipeline and build
terrain models like this
scene in the middle.
And you can see that we're
resolving some of those big
boulders in the terrain
model that we built.
And those are great,
high-resolution, dense terrain
maps that we have, that we can
have for that final bit.
But somewhere in between the
global but sparse data and the
very local and high-res but
dense data, we need something
else to kind of bridge the gap
for landing site simulation
and for spacecraft.
And so I'm going to talk about
a project that we worked on a
couple of years ago that
actually used data from the
"Apollo" command modules.
So as you know, every time we
send people to the moon, we
send three people.
Two guys went down, one guy
stayed up in that thing.
And on the command module was
a suite of cameras and
instruments.
One of them is something
called the
"Apollo" metric camera.
And what it did is, as this
command module made orbits
around the equator of the moon,
it just kind of took
these rapid-fire photos.
And this is an example
of them.
And they had 75% overlap, so
great stereo overlap, taken
within a short period of
time of one another.
And they allowed us to build a
great terrain data set at 10
to 40 meters per pixel.
One of the challenges was that
the data taken by the "Apollo"
metric camera were on film
that was then returned to
Earth and lived in a vault at
JSC for about 40 years.
And a couple of years ago, it
was pulled out and scanned
with a very high-resolution
scanner.
But there are certainly
blemishes.
The scanner got down
to the photo grain.
I don't know if you can see
that here or not, but it
certainly did.
But we could also see lots of interesting dust, and we weren't sure if it was on the bed of the scanner or part of the original negative development.
So we had issues that we had
to work around with film.
Different kinds of problems than
we do with the CCD data
that we work with now.
So that was fun.
But ultimately, what it allowed
us to do is to build
this map of the moon, which
covers this kind of belt
around the equator
of the moon.
It's this very dense,
40-meter-per-pixel posting,
and over a wide region to fill
the gap between the very
high-resolution terrain
and the very
sparse laser altimetry.
And what it allows us to do is
to do things like this, which
is to create that terrain, put
the image down on top of it,
and provide you a view of the
"Apollo 15" landing site at
Hadley Rill that no spacecraft
ever took.
But one of the problems with
images like this is that it's
not actually a dynamic thing.
So this image, as you see it,
the photograph is actually
placed down on top of
the topography.
We can't arbitrarily change
the sun or change the day.
And that's maybe not so
important at the equator.
But a lot of the things that
NASA's talking about doing on
the moon are at the poles.
And at the poles, the lighting
really changes from day to
day, depending on when you're
going to run your mission.
And so if your goal is to build
a map that is lit the
same way that your robot is going to encounter it when it comes down to land, you want
to be able to control the
scene and move the sun around
and modify those lighting
conditions.
And you can't do that with just this. Having a terrain model is an important first step. But it's not the only step.
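Here's a minimal sketch of that kind of relighting under a simple Lambertian assumption: build surface normals from the terrain model and dot them with a sun vector for whatever azimuth and elevation your mission date implies. Real lunar rendering uses more sophisticated photometric models; the terrain here is random stand-in data.

```python
import numpy as np

def hillshade(dem, sun_az_deg, sun_el_deg, pixel_size=1.0):
    # Surface normals from the DEM gradients.
    dz_dy, dz_dx = np.gradient(dem, pixel_size)
    nx, ny, nz = -dz_dx, -dz_dy, np.ones_like(dem)
    norm = np.sqrt(nx**2 + ny**2 + nz**2)
    # Sun direction from azimuth (clockwise from north) and elevation.
    az, el = np.radians(sun_az_deg), np.radians(sun_el_deg)
    sx, sy, sz = np.sin(az) * np.cos(el), np.cos(az) * np.cos(el), np.sin(el)
    # Lambert's cosine law: brightness is the surface-sun dot product.
    return np.clip((nx * sx + ny * sy + nz * sz) / norm, 0.0, None)

# Re-render the same terrain under two different dates' lighting.
dem = np.random.rand(256, 256)  # stand-in terrain
morning = hillshade(dem, sun_az_deg=110, sun_el_deg=10)
midday = hillshade(dem, sun_az_deg=180, sun_el_deg=55)
```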
So now I'm going to
talk about albedo.
But first, I'm going to digress and talk about Jayne. So in my attempt to talk about albedo, one of the things that I wanted to ask you is: what color is Jayne's shirt?
Is it black?
Is it purple?
Maybe it's a mauve.
Hard to tell, really.
But if your goal is, from this
photograph of Jayne, to
determine what the RGB triplet,
if you will, of the
color of his shirt is, you've
got a lot of work to do from
this one photograph, right?
So what are the things that
you might need to do?
Well, you need a very complete
understanding of all the
wrinkles in his shirt.
You would need to know where
the camera was in space,
relative to your very
high-definition
model of his shirt.
You need to know where the
light is coming from.
If you had all of those things,
you could do the math
and find out what the color
of his shirt was, or
possibly his hat.
And maybe you would
discover that his
hat is actually two-tone.
It's orange on the bottom
and yellow on the top.
But it's hard to see, because
it's kind of washed
out from the light.
This is kind of the inverse
problem that
game developers have.
When they have a video game,
they already know where the
stuff is, and they light it
and they show it to you.
This is the inverse problem.
You see the scene.
Maybe you know where everything
is, and you want to
try and find out what the shape
of that thing is and
what the inherent properties
of the shirt,
in this case, are.
The other thing that you need,
of course, is not just the
color of his shirt--
because if you did that, and
then maybe you simulated that
in your little video game model
that we were talking
about, you'd get a very
plasticky-looking shirt and a
very plasticky-looking hat.
So in addition to just kind of
the raw color of the thing,
the other thing that you need
is a good mathematical model
for how light interacts
with the thing.
Which is why even if his shirt
and his hat were the
same-color thing, they would
look different, because his
hat is kind of fuzzy wool, and
the shirt is something else.
Video game designers skip
this and use textures to
kind of fake that.
But people like Pixar and ILM,
that do high-quality digital
movies, they have that
down really well.
They have the equations that
describe how light interacts
with various surfaces.
And so that's what we're trying
to do on the moon.
And the way that we do that, of
course, is by knowing where
our spacecraft was, knowing
where the sun was,
understanding that terrain--
that was the step that
I just showed you.
And we can back out what the
actual albedo, or inherent
brightness, of the
surface was.
So what we would do is we would
take images that look
like this, on the left, which
is an actual scene, and you
can see the seam go down the
middle, and convert it into
this image on the right, which
is kind of, if you will, a
flattened view.
Not perfect.
Nothing is perfect.
But an idea of, if you could
take a shovelful from any one
of these pixels and take it back
to your lab and measure
it, this is what its brightness
would be.
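Under the simplest (Lambertian) assumption, the observed brightness is just the albedo times the cosine of the solar incidence angle, so the flattening amounts to dividing the image by the modeled shading; that shading term could come from something like the hillshade sketch earlier. This is only a cartoon of the approach, and real lunar photometry uses richer models (Hapke and friends).

```python
import numpy as np

def flatten_to_albedo(image, modeled_shading, min_shading=0.05):
    # image: observed brightness; modeled_shading: cos(incidence) computed
    # from the terrain model and the known sun position at exposure time.
    safe = np.maximum(modeled_shading, min_shading)
    albedo = image.astype(float) / safe
    # In deep shadow the division is unconstrained, so mark those pixels.
    albedo[modeled_shading < min_shading] = np.nan
    return albedo
```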
This, along with kind of that
model of how light interacts
with the surface, allows you
to go back and light your
scene any way you want, and
have a pretty good feeling
that when your spacecraft comes
and compares what it's
actually seeing out of its
camera with the map that
you've preloaded and lit the right way, that they'll be
able to tell the difference
between a giant pit that will
kill it and just a shadow that's
in the right place.
And that's an engineering
example.
But also, you could look at this from a scientific perspective, and say, are those places really brighter?
Is there a different
material there?
And you can go back and use
that to follow up with
spectrometers and other
kinds of data.
So that's the kind of work we
do with albedo and the moon.
So now I'm going to
jump to Mars.
And I love Mars.
As Chris just indicated, I was
working on Mars over the
winter break.
And these are examples of
terrain models made with the
HiRISE camera.
The HiRISE camera
takes beautiful
50-centimeter-per-pixel images
or 25-centimeter-per-pixel
images of Mars.
We can take stereo images.
These are all terrain models
that we've built in their kind
of false color map glory.
You can do fantastic
science with these.
You can learn things about
polar deposits
and craters and layers.
I could spend a whole hour
talking about every one of
these little subframes, and I
put them all up here so that I
would only spend one
slide on it.
Understanding 3D is great and very useful for scientists, but I can't spend more time on it, because there are other things to talk about.
So one of the other things that we do, actually in conjunction with some of you guys, is mappy things.
And we help the science
community use them in
hopefully smart ways.
So this is HiRISE, that imaging camera that I just talked about that's orbiting Mars right now. There's a website you can visit. You can sign up, and you can say, I want a picture right there, where the blue dot is. Take me a picture right there. And you have to say why-- you have to give your science rationale. And you can give us notes.
But the way that we enable people to do that is by showing them a Google Map. This is the Maps API with the Mars map loaded in.
The red dots are where we've
already taken photos.
The white areas are where other
people have suggested
places to us.
And this is a mechanism for
people to talk to us via this
map, if you will, and show us
where they want to take data,
and why it's important.
And then after we gather this
information into our database
and we do a science-planning
cycle, we go into a more
complicated tool like this one,
that some people in the
audience used to work on, before
they came to Google.
And it loads up--
again, on a map--
information of where we're
going to take an image.
And when we're doing a
spacecraft-planning cycle, it
shows us-- our observation is
that Christmas-colored little
spot in there.
Other instruments have these
other yellow outlined boxes.
That's where they're
going to take data.
This is the zoomed-in view.
There's a kind of larger view.
This is tilted on its side.
North is that way.
Because that allows this to not
just be a spatial map, but
also a map in time.
Our spacecraft flies from south
pole to north pole on
the day side.
And so these are other things
about what the spacecraft is
doing, what its roll angle
is, other things.
And so this is a time-based plot, but it syncs up with this as well.
So that's kind of a neat thing,
to marry space and time
in this graph.
Super useful.
We use it day in and day out to
plan observations on Mars.
The other thing that we've done is work with Google to provide more of NASA's data to the public.
We take a ton of data of great
places in the solar system.
But kind of aside from the
Astronomy Picture of the Day
and other kind of enthusiast
sites, not many people can
really see all of that
data or explore it in
a really good way.
And so we're very fortunate to
be working with elements of
the Geo team to build Mars mode
in Google Earth and Moon
mode in Google Earth, and get
not just maps, but also the
things that go along
with the maps.
Videos from the "Apollo"
landers, waypoints, all kinds
of information for the rovers,
for the "Apollo" missions,
everything.
It's great.
We're really happy about it.
And it's super fun.
And the great thing about this
is that the idea was to
provide this Maps interface
for the public.
Our target audience
was the public.
The overriding thing in
our discussion-- our
end user was Grandma.
Can Grandma use this
interface?
But the other thing that I
wanted to make sure that we
did was also to provide
something that was useful for
scientists, as well.
Even though that wasn't
the primary target.
Scientists-- as you well know, Google Earth is not a GIS.
But it comes really close
in a lot of cool ways.
And so we made sure to put things in so that when we show data, you can also drill down and find the raw information, so that a scientist could say, oh, I'm interested in where the rover was on its track right here.
You can bring up the waypoints for each of the rovers.
And you can go off to the other
NASA sites that have the
raw data for the rover at
that place in time.
And so it was very
handy for us.
It was reassuring for me, as a
scientist, to be able to build
that into a tool that I might
also be able to use.
In addition to just kind
of putting content
on, the other thing--
I showed you these kind of
wire-frame boxes before, of
where things were.
Location is important.
It's an essential first step.
But the other thing you might
want to do is actually take
all of those images and put
them on the globe so that
someone can actually paw through
them, much like they
do in Google Earth or in Maps,
and really look at
what the data is.
And so some of the projects that
we've been involved in
have been making these big,
planetary-wide mosaics.
But again, sparse mosaics.
So these, on the left-- we worked with Microsoft on their WorldWide Telescope project to put in all the HiRISE images--
25 centimeters per pixel--
this doesn't do it justice,
because you could zoom all the
way into these things in
WorldWide Telescope.
We also just, this year,
did the same thing.
There's a medium-resolution imager on Mars that's only 3 to 6 meters per pixel, called the Context Camera, CTX.
And that also has kind of a
sparse mosaic that we put into
Mars mode for Google Earth.
And these provide another step
for people to explore Mars at
a very high resolution.
Not just average people,
but also scientists.
Also with Google Earth, one of the things that I talked about is that I wanted to make it useful for scientists.
And one of the greatest things
about building a tool or
working with something is when
someone uses it in a way that
you didn't expect, and
it's a cool result.
And so one of the things that we learned was happening was that people who drove the MER spacecraft-- the Mars Exploration Rovers-- you may have forgotten them. They landed in 2004. One of them is dead. This is "Spirit." That's where it is right now, where we lost contact with it. But "Opportunity" is still kicking, almost 10 years after it got there.
And so after the initial flash
of the spacecraft landing and
all the guys in coordinated
shirts at JPL jumping up and
being happy, operations for
that spacecraft don't just
happen at JPL.
Over time, the scientists who
plan and operate those
spacecraft, they go
back home to their
families and their friends.
And so the planning process of
where to drive one of these
rovers is this geographically
distributed process, of
different members of the science
team taking different
shifts at different times, and
saying all right, where are we
going to send this rover?
The scientists assemble
a plan and then
essentially send it to JPL.
And the guys at JPL actually
drive the rover and plan out,
you know, drive a
meter this way.
Turn right, drive a
meter that way.
But it comes from
the scientists.
And one of the things that they learned they had a problem with is kind of workshopping where they wanted this rover to go in the next day, or the next two days, when they were doing this planning. And after we had made Mars mode in Google Earth, they ended up using it for that.
Because it was a great way for
somebody on their laptop or in
their office machine to pull up
Google Earth, flip it on to
Mars mode, and use the authoring
tools to say, I want
the rover to go here.
And they could use the polygon
tool and draw a line, and then
they could send that small KML
file to their colleague two
states away, and they
could talk about it.
And they could use the shared
map abilities of Google Earth
to plan out what was good
and what was bad.
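For flavor, here's roughly how small such a file is: a sketch of a hand-rolled traverse suggestion in KML, with made-up coordinates. A file authored with Google Earth's own tools would carry styling and metadata on top of this.

```python
# Hypothetical waypoints (lat, lon) for a proposed rover traverse.
waypoints = [(-2.05, 354.47), (-2.06, 354.48), (-2.07, 354.50)]

coords = " ".join(f"{lon},{lat},0" for lat, lon in waypoints)
kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Proposed traverse</name>
    <LineString><coordinates>{coords}</coordinates></LineString>
  </Placemark>
</kml>"""

with open("proposed_traverse.kml", "w") as f:
    f.write(kml)
```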
I don't really need to tell you, and be evangelical about the value of a shared digital map.
But for these guys, it was a
new experience for them.
They didn't have this for
their planet before.
That's not to say that Google
Earth was the ultimate
planning tool.
But it allowed the scientists to
kind of say, well, let's go
this way, and that way and
annotate things and discuss
things before they said, all
right, we have agreed that
this is what we want you to
do, and then they can send
that information to JPL.
And JPL kind of executes it.
So that was a cool way that
people used Mars mode that I
did not expect.
We have other challenges in the
solar system, beyond the
moon and Mars.
A lot of them, I'm not
going to talk about.
But small bodies, small
asteroids, things that are not
triaxial ellipsoids.
How do you define latitude and
longitude on something that's
shaped like a peanut?
How do you even conceive
of coordinates in map
projections, and then how would
you deal with those
things in software?
These are challenges that
are ahead of us,
actually still, for NASA.
Even though we have explored
some of these places, we
haven't done it enough times to
standardize on something.
And so there are still lots of
challenges in mapping in the
solar system.
But I'm going to come back to
the Earth and talk about how
we've adapted some of this stuff
to terrestrial work.
So again, kind of the underlying
theme is this
three-dimensionality.
I'm going to show you some
examples of maybe terrain
products that you're familiar
with and how it compares to
what we've been able to do.
So this first one on the
left is an SRTM DEM at
31 meters per pixel.
Let me give you the
background.
So this is a quarry south of the
San Luis Reservoir, where
we've done some rover tests.
And one of the things we wanted
to do is to actually
have a DEM of our area, much
like if we went to Mars, we'd
have a DEM of the area first,
and then you would land your
rover, and you would drive your
rover around in the area.
So we wanted to have
the same thing.
We wanted to have a DEM
of our test site.
And then we would drive our
rover around inside of it and
compare, and that
would be fun.
And so the SRTM data here,
at 31 meters per pixel--
I don't know if you can tell or
not-- you can't even really
tell that there's
a quarry here.
This just looks like
a nice mountain.
The next one over is the
USGS NED DEM, at
9 meters per pixel.
Here, you can begin to see that
there are some terraces
here where the mountainside
has been quarried.
And this terrain model that we
built from our tools, from the
WorldView-1 data, is at half
a meter per pixel.
And that's the big scene
that you see up above.
And you can see the terraces and
locate yourself very well.
And we can compare that well against the rover data that we had.
It also had a LIDAR, a Velodyne
system for pinging
its terrain.
And also stereo cameras, so it
allowed us to really compare
between the two models, just
like you would on an actual
space mission when you'd
actually do that.
But we're not talking about
actual space missions.
We're talking about the Earth.
And the reason why we had the
ability to change this to work
with terrestrial data is because
of a need that came
from Arctic researchers,
of all folks.
So it turns out, the people who
study ice in the Arctic
used to have a satellite, of all things, called ICESat.
And it had a LIDAR on it, and
it took data of the ice pack
in the Arctic regions.
Great.
Important for monitoring
ice, because of
course, that stuff changes.
You want lots of repeat
coverage.
But at one point, ICESat, of course, died. And it will still be a few more years before the replacement for ICESat, which will probably be called ICESat-2, will be launched.
And in this time, there's no
good terrain data of the ice
at the poles.
One of the things that Arctic
researchers kind of found out
is that the private
companies--
WorldView, DigitalGlobe--
I don't know.
Maybe all those guys are
the same company now.
They have paying customers that
will pay them to take
data around the equator, where
people live, but no one was
really interested in buying
data at the poles.
Good for cryosphere
researchers.
So what they found is that
there's a lot of overlapping
stereo coverage at the poles,
over this ice, that would
allow them to kind of
fill in gaps in
their ice pack coverage.
And they asked us, well, can
you use your software to
create terrain models
of this for us?
And we said, I don't know.
Maybe we can.
So we figured out how to pull in that terrestrial data-- which is kind of a funny thing not to know how to do, since we were so busy on other planets.
But it allows us to build terrain models of ice sheets and ice packs.
And why that's cool-- so this
is an example from the
Jakobshavn Glacier in Greenland.
And here's the image-- ignore
the black parts.
This is the bay, and you
can see icebergs
floating in the bay.
And this is the glacier, coming
in here, and you can
see the terrain model
that we made.
What researchers did--
so this is useful for monitoring
ice thicknesses,
but also how things change.
So I was talking about how it's
cool when people use your
tools to do things you
didn't expect.
One of the things that one of
the researchers at University
of Washington, this guy named David Shean, did is--
part of the stereo process is, you know, you have your left image and your right image, and you correlate them-- you try and make sure this thing is that thing-- and that measures the parallax between your images to build the terrain model.
So there's a correlator step,
that compares the images.
And these are two terrain
models that we made from
different times.
One is from 2009.
One is from 2011.
There's some difference.
This pool was full of meltwater when it wasn't a couple of years earlier.
But what David did is he took
those terrain models that we
had made and ran them through
our correlator again.
Which seemed strange to us.
We wouldn't have thought
to do that.
But what David wanted to do is
to find out how fast the ice
was moving between
the two terrain
models that we had made.
And that resulted in this map
down here, that shows how the
ice was moving.
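A rough sketch of what re-running a correlator on two terrain models buys you: for each patch of the older DEM, find where it moved to in the newer one, and convert the pixel shift into a speed. OpenCV's template matcher stands in for the Stereo Pipeline's correlator here, and the pixel scale and time baseline are placeholder values.

```python
import cv2
import numpy as np

def patch_speed(dem_t0, dem_t1, row, col, patch=64, search=32,
                meters_per_px=2.0, years=2.0):
    # Take a patch of the older DEM and search for it in a window of the
    # newer DEM centered on the same location.
    template = dem_t0[row:row + patch, col:col + patch].astype(np.float32)
    window = dem_t1[row - search:row + patch + search,
                    col - search:col + patch + search].astype(np.float32)
    scores = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (dx, dy) = cv2.minMaxLoc(scores)
    # A peak at (search, search) would mean no motion; convert to m/yr.
    shift_px = np.hypot(dx - search, dy - search)
    return shift_px * meters_per_px / years
```

Doing that over a grid of patches gives a velocity field like the map David produced.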
And it turns out that it's
really hard to instrument
glaciers, because of the Arctic
conditions and the fact
that ice moves, but it moves
slowly and it really chews up
instruments.
And so it's actually hard to get this kind of data in any other way.
So that was pretty cool.
We were pretty happy
about that.
So it allowed these researchers
to take terrain
models of glacier ice and see
how they were moving.
More importantly, they're
interested in how ice pack
changes are happening.
And again, without ISAT up, it
was hard for them to monitor
and measure those things.
And so here are two, again, kind
of different examples.
So this first image on the left is a difference of two terrain models.
So where you see dark blue, the
ice has gotten thicker.
Where it's white, nothing
has changed.
And of course, out here, where
there's no glacier, you
wouldn't expect anything
to change.
That shouldn't change.
And this is over a winter, between September and March. It shows that when it gets cold in Greenland over winter, the ice pack thickens. The glacier snout gets thick before it calves off into the sea ice.
That's what you would
kind of expect.
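The differencing itself is conceptually simple once the two terrain models are co-registered on the same grid (the co-registration is the hard part in practice). A minimal sketch:

```python
import numpy as np

def dem_difference(dem_september, dem_march):
    # Positive values where the surface (ice) got thicker over the winter;
    # NaN wherever either terrain model has no data.
    diff = dem_march - dem_september
    diff[np.isnan(dem_september) | np.isnan(dem_march)] = np.nan
    return diff
```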
The second image was taken over a different time frame, from the end of winter to one year later at the end of the summer.
And it shows the loss
of ice in the
glacier and the retreat.
And if you can do this enough
times and take measurements of
the terrain model along that
glacial trace, our terrain
models show that, you know, not only is the glacier retreating, it's also losing depth.
And so we are helping these guys
learn things about the
ice pack, which is cool, and
not anything that we would
have thought we would
have been involved
in two years ago.
But it's a way of, again,
providing the data to people
in a way that's useful
for them.
We didn't do this analysis.
We simply provided them with
the tools to build the maps
that they needed to
do this analysis.
So ultimately, I know
that I've talked
about a lot of things.
I've gone all over the solar
system on you today, and I
apologize a little
bit for that.
But not too much.
There are a lot of challenges in how we take data and how we present data to scientists and engineers in ways that they can meaningfully consume it and use it in their jobs, whether that's landing spacecraft, engineering work, or science exploration.
And we have lots of interesting
ways to go about
doing that.
And one of the things that we're really interested in doing is-- we've talked with some of you guys here in Geo about Earth Engine.
And we're real excited about
getting other planetary data
into Earth Engine and unleashing
the powers of Earth
Engine on planetary data.
Because other planets
are like Earth too.
Now, that doesn't work quite as
well as the other way, when
you say Earth is a planet too.
But Earth Engine should be able to allow us to answer questions that we can't really answer right now.
The planetary scientists aren't
even thinking about it.
When I talked with Noel last year about Earth Engine and how it would work, he said, well, what could you do if processing time wasn't a problem?
What could you do if
you could just have
all the data available?
And I was like, I don't know.
I've never thought about
it that way.
Planetary scientists, much like
terrestrial geologists,
kind of usually pick a
spot, and they say,
here's my study area.
It's this few kilometers wide,
and I'm going to study the
crap out of this little patch,
and really, really learn about
this one spot.
Partially because it's important
to know something
very specific about that place,
but also because it's
hard to pull in all the data--
and I mean all the data.
Hyperspectral data and terrain
data and image data.
There's a ton of data
about this planet.
Ton of it.
And no one person can kind of
have it all swapped into
memory and do something
sensible with it.
And so I'm excited about kind
of unleashing Earth
Engine to do that.
Also, a lot of times what we do
is we see something in an
image and we say, wow,
where have I seen
that texture before?
Or that kind of crater?
Or that shape?
Was it at a talk?
Or was it somewhere else?
And so just having kind of sane
ways to interrogate the
data, kind of Image Search for
planetary data would be
awesome, and that
kind of thing.
So we're really looking forward
to applying mapping
concepts and planetary stuff to
the Earth work that you do,
to the Earth Engine stuff.
And also using that to provide
more public-facing stuff.
I don't know if any of you have
visited Moon in Google
Earth lately, but it's stale.
I hate to tell you.
All the same data's there from when we launched it in 2009.
And we've taken data of
the moon since then.
Mars is a little better.
We've added some more stuff.
Some of the rovers are moving.
Unfortunately, we don't have
"Curiosity" in there yet, for
reasons that are complicated
and strange.
But we're working
on those things.
And the more data we can get to the public, the better.
And if we can provide this kind
of science back-channel,
then scientists can find
it useful, too.
So I'll stop there with my kind
of crazy, random walk
through planetary mapping,
and take any
questions you might have.
Thank you.
[APPLAUSE]
ROSS A. BEYER: And I'm told
there are microphones at the
side of the room if you have
questions, so that it can get
captured for posterity.
AUDIENCE: So I was wondering,
what's the most surprising
thing that you've learned
from the Mars imagery?
And a related question would
be, what's the biggest
unanswered question that you
hope to be able to answer from
the Mars imagery
in the future?
ROSS A. BEYER: Ooh.
Those are really
hard questions.
[LAUGHTER]
ROSS A. BEYER: So there isn't
really any kind of one thing
that I've seen, that I've been,
like, wow, that is the
most important, crazy thing
I've ever seen on Mars.
Because since it's kind of my
job to look at pictures, which
is awesome, I see a lot of crazy
things that you wouldn't
expect to see on Mars.
So I've seen--
we've caught avalanches in
process of happening at the
Martian poles in springtime.
We can actually see
dust coming down.
We usually don't get to see
dynamic things, so those are
pretty cool.
Usually when we look at Mars,
we have to play CSI.
What happened here?
I don't know.
We weren't here when
it happened.
How did it happen?
So it's fun to see those.
It's fun to see dynamic
processes on Mars.
It's fun to see dunes and
ripples move and change.
It reminds us that Mars,
although mostly dead, is not
quite dead yet, and that there's
still things happening
on Mars, even today.
Another fun thing that I've
seen is boulder tracks.
So when we have these really
high-resolution images, there
are slopes where apparently,
some time in the past, a very
large boulder detached and
rolled down a slope.
And I actually saw one, and you
can kind of see the path,
and it kind of plows through the
dust, and you can see the
rock at the very end.
We saw a couple of those.
And one time, we saw one where
it hopped this small crater.
It was really cool.
You could see the
path going down.
You could see it go up one side,
and then you could see
it land on the other, and there
was no trail in the
middle of the little
tiny crater.
It was only a meter
or two across.
So it must have had
a lot of energy.
So those were cool.
As far as what would
we hope to learn?
Man, kind of everything.
So the biggest question that
we have, the million-dollar
question that we're hunting
for on Mars, the most
important thing, that who knows
if we'll ever get to it,
is did life ever evolve
independently on
the surface of Mars?
That would totally change
everything we know about how
things work.
Turns out that's a really
hard thing to get at.
Because it's possible that we
could look everywhere and
still not find it, even
if it did happen.
So our process on Mars is to kind of follow the trails-- not necessarily just of water, although that is what we do: we try and think about, what places might water have been in for a long enough period of time to make enough layers?
Or where do we think life
might have started?
It needs energy and it needs
water-- at least
as far as we know.
It may be different on Mars.
But we have to go with
what we know.
And so places where we monitor
things on Mars where we might
find traces of that are
valuable to us.
So from above, we can't
go in and say, is
there a microbe there?
We can't see that from
where we are.
But what we can do is we can
say, oh, is there liquid
moving down the slope
there today?
Or is that just ice?
Or is that just dry rocks?
And taking that data and
understanding that data set as
a whole, so we can find places
where, oh, was there water
activity here?
Is there water activity
somewhere today where we could
send a spacecraft
to do more good?
To do more science in situ,
at the pinpoint of
where we need it.
What we're looking for from
orbit is to kind of get the
gestalt and figure out where
it's worthwhile to send more
spacecraft or more assets.
That's a great question.
I probably answered
it very badly.
AUDIENCE: How long does it
take to process a single
stereo [INAUDIBLE]?
ROSS A. BEYER: How long
does it take?
It depends.
How big are your input images?
So MOC images-- I know this, because he knows how big a MOC image is. From the Mars Orbiter Camera on the Mars Global Surveyor spacecraft from a couple years ago, they're a couple thousand pixels by a couple thousand pixels.
Those take like five
minutes to run.
HiRISE images that are much
bigger, that are 20,000 pixels
wide by 40,000 or 50,000 pixels
long, those can take
10, 15 CPU hours.
Kind of depends on what's in
the scene and how smart our
algorithms are.
So if things are well-behaved
and it finds its matches
quickly, it can go shorter.
If it gets confused and finds
a lot of false positives, it
may take longer.
I don't know how long it took
to do the terrestrial ones.
I have to talk to
Zach about that.
But in that order of time.
Great.
Well, thank you very much.
Enjoy your afternoons.
[APPLAUSE]
