[MUSIC PLAYING]
EITAN MARDER-EPPSTEIN:
Hello, everybody.
AUDIENCE: Hi!
EITAN MARDER-EPPSTEIN: Hi.
All right.
That's a lively crowd today.
So let's make sure
my clicker works.
Perfect.
So hello, again.
My name is Eitan
Marder-Eppstein,
and I lead the developer engineering and relations team for Project Tango.
And today I'd like to share
a bit about our projects
as well as where we're
going for the future.
But before I do that, I normally
start off the presentation--
just by show of
hands, how many of you
have heard of
Project Tango before?
Whoa.
This is probably
the best response
that I've gotten to that.
I would say 3/4 of
you raised your hand.
So for those of you who don't
know what Project Tango is,
hopefully this will
serve as a nice overview.
And for those of you who do know
about the project, hopefully
it shows the progress
we've made as well as where
we're going for the future.
All right.
So at a fundamental
level, Project Tango
stems from the observation that today our interactions with our phones really end at the square box of the screen.
And whether you're
at home browsing
the internet for a new appliance
for your kitchen, or at work
planning for a project
that you're going to build,
or out at dinner socializing
with your friends at a table,
your phone is with
you, but it really
lacks an understanding of your environment.
You're limited by the
boundaries of the screen.
And what Project Tango hopes
to do is to break this boundary
and to bring your phone
and the world more
in tune with each other.
So we want to teach phones
to see and understand
their environments.
And we would like to
augment and improve
our own ability to
answer questions
about the world around us.
Those questions might be things
like, how much paint do I
need for a wall in
my room if I'm going
to be repainting my house?
Or will the couch I'm going
to buy online actually fit
in my room?
I recently bought a sleeper
sofa and made a big mistake.
Or how do I get from
where I am to the laundry
detergent in a store?
So Project Tango allows us
to answer these questions
and more.
And we use special cameras
and inertial sensors
that we've built into phones
to make sense of the world
much like we do.
And we allow the phone to build
a human-scale understanding
of space and motion.
And then we expose this
understanding to application
developers by adding three core
technologies to mobile devices,
and I'll talk about each
of them a little bit here.
So the first one
is motion tracking,
the second is area learning, and
the third is depth perception.
Motion tracking allows Tango
devices to track their position
and orientation in space.
As the phone is moved
through the environment,
it keeps a log of its trajectory
to within centimeters.
This works almost like a
mouse but in full six degrees
of freedom.
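To make that concrete for developers: below is a minimal sketch of consuming the pose stream through the Tango Java API. Tango, TangoConfig, TangoCoordinateFramePair, and TangoPoseData are the real API classes of that era; the PoseLogger wrapper and the exact wiring are illustrative only, not production code.

```java
import java.util.ArrayList;

import android.content.Context;
import android.util.Log;

import com.google.atap.tangoservice.OnTangoUpdateListener;
import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoConfig;
import com.google.atap.tangoservice.TangoCoordinateFramePair;
import com.google.atap.tangoservice.TangoEvent;
import com.google.atap.tangoservice.TangoPoseData;
import com.google.atap.tangoservice.TangoXyzIjData;

// Illustrative wrapper, not part of the SDK.
public class PoseLogger {
  public static void start(Context context) {
    Tango tango = new Tango(context);
    TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
    config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
    tango.connect(config);

    // Ask for the device's pose relative to where the service started.
    ArrayList<TangoCoordinateFramePair> framePairs = new ArrayList<>();
    framePairs.add(new TangoCoordinateFramePair(
        TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
        TangoPoseData.COORDINATE_FRAME_DEVICE));

    tango.connectListener(framePairs, new OnTangoUpdateListener() {
      @Override
      public void onPoseAvailable(TangoPoseData pose) {
        // translation is meters along x, y, z; rotation is a quaternion.
        Log.d("Tango", String.format("x=%.2f y=%.2f z=%.2f",
            pose.translation[0], pose.translation[1], pose.translation[2]));
      }
      @Override public void onXyzIjAvailable(TangoXyzIjData xyzIj) {}
      @Override public void onFrameAvailable(int cameraId) {}
      @Override public void onTangoEvent(TangoEvent event) {}
    });
  }
}
```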
Area learning allows Project
Tango devices to recognize
where they've been before.
When a distinctive visual
landmark like a sign,
or a poster, or a
picture is seen,
the device stores
this in its memory.
And the next time it or any
other Project Tango device
sees this visual fingerprint
of sorts, the device
recognizes its precise
location in the world.
So I could walk into
a store and recognize
that I'm at the
checkout register
just by seeing a sign
above the cashier.
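In API terms, this learned memory is an "area description." Here is a sketch of the save-and-reload cycle with the Tango Java API; the key and method names are real, but error handling and the surrounding activity code are omitted.

```java
// Enable area learning alongside motion tracking.
TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
config.putBoolean(TangoConfig.KEY_BOOLEAN_LEARNINGMODE, true);
tango.connect(config);

// ... walk the space so the device collects visual landmarks ...

// Persist what was learned; the service hands back a UUID.
String adfUuid = tango.saveAreaDescription();

// On a later run, load that description so poses can be reported
// relative to COORDINATE_FRAME_AREA_DESCRIPTION -- that is, the
// device re-localizes against the stored visual fingerprints.
config.putString(TangoConfig.KEY_STRING_AREADESCRIPTION, adfUuid);
tango.connect(config);
```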
And the third technology that
we expose is depth perception,
and this allows
Project Tango devices
to see the world
in three dimensions. And we've done this by
adding additional sensors,
special depth cameras,
to mobile phones.
This allows your
phone to understand
the full geometric
structure and also
the scale-- the metric
scale-- of your environment.
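On the API side, depth arrives as a point cloud callback. A rough sketch of that callback in the Tango Java API of the time follows; TangoXyzIjData is the real class, and this would live inside the OnTangoUpdateListener shown earlier. What you do with the points is up to the app.

```java
@Override
public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
  // xyz is a flat buffer of (x, y, z) triples in meters, expressed
  // in the depth camera's frame at this frame's timestamp.
  float nearest = Float.MAX_VALUE;
  for (int i = 0; i < xyzIj.xyzCount; i++) {
    float z = xyzIj.xyz.get(3 * i + 2);  // depth along the camera axis
    if (z > 0 && z < nearest) {
      nearest = z;
    }
  }
  Log.d("Tango", "Nearest surface is about " + nearest + " m away");
}
```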
So to give a more
concrete idea about what
this technology enables--
it's nice to talk about it--
but what I'm going to do
is show a video giving
an overview of the
technology that we've built.
And then I'm also going to
show a couple of live demos
on stage just to give you a
feel for what this enables.
So I'll play the video now, and
then do demos right after that.
[MUSIC PLAYING]
OK.
So as you saw, I think that's
some pretty powerful and cool
stuff, but there's still nothing
like seeing it in action.
We don't have a
ton of time today,
but I'd like to run through
at least a couple of demos
to give you a better sense
of the types of applications
that we're building on
top of this technology.
I will also be available after
this session at office hours.
And just a little
plug-- if you come,
I'll let you play with
the device yourself
so you can handle it.
So to start off with, I
always like to give insight.
Can we switch to the
device, by the way?
Whoever-- oh, awesome.
All right.
So to start off
with, I always like
to give a little bit of
insight into what is actually
going on under the hood.
So here you see an image
from a wide-angle camera
that is mounted on the
back of the device.
And you'll see a bunch of
colored dots in the world
as well as some
plots on the screen,
and the colored dots
are visual features
that we're tracking
across frames.
So we're looking for points
that are on corners or anywhere
where there is a
change in contrast.
And we're actually tracking
that in our video feed
and fusing the data
together with information
from the accelerometers and
gyroscopes on the device-- so
the inertial sensors.
And here, as I rotate,
you're probably
used to seeing that even
just from a gyroscope,
but I can actually start walking
around the space assuming
my cable doesn't let me down.
And I can return to
roughly where I was before,
so the device
understands how I've
moved through the environment.
Now, we've also
got a depth sensor
on the device which allows
us to see the world in 3D.
So when we combine the ability
to track the device's position
as you move through space
with the ability of it
to see in three dimensions--
this stage is mostly green,
but we'll try and build
a little bit of a model.
Maybe I can get some of
you all in the front.
So as I move around
the world, you
can see that very
quickly I'm able to gain
a pretty nice geometric understanding of my environment.
And this is powerful.
This ties your device
to the physical world.
And when you have
this relationship,
you can start
building applications
that before were not possible.
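Under the hood, that combination is a frame transform: each depth point, captured in the camera's frame, gets rotated and translated by the current pose into one fixed world frame, and the model accumulates there. A sketch of that single step in plain Java (for brevity it ignores the fixed depth-camera-to-device extrinsic calibration, which a real app would fold in as well):

```java
// Transform a point p (meters, camera frame) into the world frame,
// given the pose quaternion q = (x, y, z, w) and translation t.
static double[] toWorldFrame(double[] q, double[] t, double[] p) {
  double x = q[0], y = q[1], z = q[2], w = q[3];
  // Rotate p by q using v' = v + 2 * q_xyz x (q_xyz x v + w * v),
  // which avoids building a full rotation matrix.
  double ux = y * p[2] - z * p[1] + w * p[0];
  double uy = z * p[0] - x * p[2] + w * p[1];
  double uz = x * p[1] - y * p[0] + w * p[2];
  return new double[] {
    p[0] + 2 * (y * uz - z * uy) + t[0],
    p[1] + 2 * (z * ux - x * uz) + t[1],
    p[2] + 2 * (x * uy - y * ux) + t[2],
  };
}
```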
So one example of
an application is--
say I want to take a basic
measurement on this wall.
So I can measure that
that's 0.8 meters,
but you'll notice that it's not just a single frame.
I can walk around
the environment.
I can walk up to the
front of the stage,
and I can produce measurements
that are tied to the space
that I'm in through the frame
of the device moving around.
Other things that
I can do-- say I
was interested in
hanging a picture here,
and I wanted to know roughly
the size of the wall.
I can also measure area.
So you can do things
very, very quickly
that would take time otherwise.
I can also do things like
measure through walls.
If I wanted to drill through a wall, I can place a point, walk through to the other room, and place another point.
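All of these measurements, including the through-the-wall one, reduce to the same trick: motion tracking gives every placed point coordinates in one shared world frame, so the answer is plain Euclidean geometry, no line of sight required. A sketch, assuming both points were already transformed into the world frame as above:

```java
// Distance between two user-placed points in the world frame.
static double distanceMeters(double[] a, double[] b) {
  double dx = a[0] - b[0];
  double dy = a[1] - b[1];
  double dz = a[2] - b[2];
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}
```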
I almost want to go
backstage to do it,
but I feel like that
might be a bad idea.
So this is really powerful.
You're augmenting your world,
and you're answering questions that we as humans have trouble with, all through the device.
Other applications leveraging this same technology let you shop for furniture or for other household items online in a much more practical way.
So I referenced before that
I really screwed up my sofa,
but now I can drag
items into the world.
I can look at them from
different perspectives.
I can get right up-close.
If I want to, I can
change the color.
And the beauty of this is
that this chair is at scale.
So if it fits in my
room, I know that I'm OK.
I can also drag another
chair here and align it.
I think it should snap.
Well, all right--
having trouble snapping.
But I can place furniture in
my space very, very easily
and see how it actually looks.
And this application was
a partnership, actually,
with Elemental Studio
who did the application
design and Lowe's
who now advertises
that if you need a new
fridge, you can check it out
in this application.
All right.
So that's a practical
use of the technology,
but we also believe that
this technology has the power
to unlock our
imaginations and to allow
us to have a little bit of fun.
So this stage was made
for this demo, I swear.
So this is Mittens.
He's a cat but virtual.
And you can see that
I can go up to him,
and he knows where I am,
and he'll paw the device.
But there are other things that Mittens can do-- he'll follow me around the world-- because Mittens understands the geometry of the environment.
So he can actually
jump onto this surface
because we have an
underlying model
of what the surface looks like.
And when you take augmented
reality and mix it
with the ability to understand
the geometry of the world,
it becomes much,
much more powerful.
So you can change Mittens
to Rufus or to Gray the cat
who has really big
eyes-- if you like.
And they can follow
you around the space,
and they'll walk
with you as you move.
And then the last
fun application
that I'll show-- it's another
game that I will challenge
folks to beat me at
during office hours--
is from Schell Games, who at one of our workshops decided that Jenga was a really, really annoying game to play normally, I guess.
You have to stack the
blocks, they fall down,
you have to restack them.
It's not so much fun.
So they made virtual Jenga.
You can see that
it's on this surface.
And I can take a block, and
I can move it to the top
and stack it.
And it's supposed to be a two-player game,
but I guess I'm going
to win no matter what.
And you can see that it's a
very natural interaction where
you're using the device as a six-degree-of-freedom controller-- interacting
with a virtual object tied
to the real space.
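In code, the controller idea is just a second use of the same pose stream: instead of (or in addition to) driving the camera, the pose drives a virtual object. A sketch, where heldBlock is a hypothetical object in the app's own scene graph, not a Tango API type:

```java
@Override
public void onPoseAvailable(TangoPoseData pose) {
  if (pose.statusCode != TangoPoseData.POSE_VALID) {
    return;  // ignore frames where tracking was lost
  }
  // heldBlock is illustrative -- any scene-graph node would do.
  heldBlock.setPosition(pose.translation[0],
                        pose.translation[1],
                        pose.translation[2]);
  heldBlock.setOrientation(pose.rotation);  // quaternion (x, y, z, w)
}
```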
And then this is not
going to go well.
OK.
So I lost Jenga to myself, but
now I don't have to clean up.
So if I wanted to, I could
just restart the application,
hold tight for a second,
and have my tower
rebuilt in front of me.
All right.
So can we switch back
to the main screen?
So those are all of the demos
that I have for you today.
But I hope-- clicker.
Oh, this went backwards-- one second.
OK.
Those are all the demos that
I'm going to show for you today,
but I hope it gave an idea
of the kinds of applications
that can be built on top of this
technology and the power of it.
All of the demos that
I showed are currently
running on our
Development Kit hardware.
But this has actually
been a progression for us,
so it's taken many iterations
of hardware and software
to get to a platform stable
enough to build these apps
and to have them work reliably.
And we are starting
now to partner
with OEMs and
chipset manufacturers
to bring this technology
to the masses.
This is no longer
us playing around.
And, in fact, at
CES last week we
announced the world's first
Project Tango smartphone.
So Google, in
collaboration with Lenovo,
is going to build a
smartphone that will
launch in the summer of 2016.
It'll have a mainstream price point under $500,
and it will launch globally
including in the US.
And we know that with a launch
of this size and scale, just
putting the device out
there isn't enough.
We really need to have
applications and compelling use
cases to make this
product a success.
And to build them
we're looking for help,
so we're looking to partner
with developers in this space.
And to this end,
we've launched what
we're calling the App Incubator
program for Project Tango.
This is, again, a collaboration
between Google and Lenovo.
And through this
program we're looking
for developers and
studios who are
passionate about
the technology, who
are passionate about
the space, and who
have ideas about what they could
build on top of this platform.
And then Google and Lenovo
will work with these companies
to provide funding,
to provide support,
and to help make your vision
and your dream of what
you could do with an
application and platform
like this a reality.
There will be opportunities
to be featured on the device
as it launches,
and Lenovo has also
stated a desire to
have some of these apps
be preinstalled on the device--
opening up the market that will
be exposed to your technology.
And we are accepting
proposals for applications
through February
15, 2016, so we're
looking to make decisions as
to what applications we fund
in the relatively near future.
You can go to
google.com/projecttango,
and you'll find more information
on applying for the program.
So if you have an
idea that you're
passionate about
for this platform,
we really, really
want to hear from you.
We know that this can't
happen without creative input,
and we know that
we don't have all
of the great ideas that could be
built on top of this platform.
I'll conclude by just saying
that we have APIs for C,
for Java, and we have a
tight integration with Unity.
And we are working on an integration with Unreal as well.
So if you use any
of those programming
platforms for
Android, Tango should
be relatively easy to pick up.
I want to leave some
time for questions,
so I'll take those now.
I think I've been told there
are mics up front and one
in the top, so I'll
take questions.
Yes.
AUDIENCE: How accurate is
that IR sensor in Tango?
[INAUDIBLE].
EITAN MARDER-EPPSTEIN:
So the question
was how accurate
is the IR sensor?
So you don't get direct
access to the infrared camera
with Tango.
I can talk about the
accuracy of depth,
but as far as seeing through
walls-- that's something
that's a little bit hard to do.
But the depth is accurate to about 3% of the distance that you measure-- so, for example, roughly 6 centimeters of error at 2 meters.
Does that make sense?
OK.
Yes.
AUDIENCE: How does it
deal with wacky lighting
and crazy shadows like
the shadows [INAUDIBLE]?
EITAN MARDER-EPPSTEIN:
Yeah, so the question was
how does Tango deal with wacky
lighting, or crazy shadows,
or changes in the environment?
I think you saw in the
video Tango working
in the presence
of a lot of people
that were all moving around.
And the device
has the capability
to recognize what
are dynamic obstacles
in the scene versus static.
And as long as it
can see some points
that it can track over time,
it's able to keep its position.
Yeah, so if you move in
front of the Tango device,
you'll actually see
the points on you.
In that display I
showed, they'll turn red.
And what that means is,
hey, that is not a thing
that I should be basing--
AUDIENCE: [INAUDIBLE].
EITAN MARDER-EPPSTEIN: Yes.
Yeah.
And then for lighting the
camera changes its exposure
based on the lighting to try and
maximize the ability to track.
And that's the other reason we
have that wide-angle camera.
We want to see as much
of the world as possible
because if you take
a narrow-angle camera
and you walk really
close to that wall,
you'll just see white.
But the wide-angle gives
us a very wide field
of view of the world and helps
the robustness of our system.
You in the-- yes-- jacket.
Yeah.
You'd put your hand down.
AUDIENCE: So I'm assuming
all this processing is
happening on the device, right?
And all you need [INAUDIBLE]?
EITAN MARDER-EPPSTEIN: Yeah.
So the question was whether all the processing is happening on the device, which is correct.
All of the processing is
happening on the device.
And how easy is it to
develop a simple app?
So there-- getting up
and running with Tango
is very, very easy.
In Unity, it takes less than 10 minutes to have a basic motion-controlled camera in a virtual scene. In C and Java-- maybe up that to 30 minutes to an hour just to get everything set up.
And developers.google.com/projecttango has all
of the entry points
for our tutorials,
and I'm also very happy to
talk about that at office hours
as well.
You standing in the back.
Yeah.
AUDIENCE: So does Google
or any of its partners
have plans for, say,
planetary scale SLAM
geospatial database-- sort of
like an Instagram for Tango?
EITAN MARDER-EPPSTEIN:
So the question
was does Google have
plans to, I guess,
map the world with Tango?
So at Google we're always
interested in mapping
the world.
We do think that this technology
has a lot of potential
for indoor navigation.
So we've started with some
trials where you go to a mall,
and you turn on the device,
and you can do things
like get directed to a store
with turn-by-turn directions.
I know I could have used this
at CES last week, actually.
Vegas is not good for
indoor navigation.
And you can also
do things like try
and find your
friends because you
have a shared coordinate frame.
So it's an exploration
that we're beginning.
We do think location-based
experiences and navigation
experiences are
one of the pillars
that we'll explore
for this technology.
AUDIENCE: So no Google Maps
integration out of the box?
EITAN MARDER-EPPSTEIN: I cannot
comment about Google Maps
integration.
Yeah.
AUDIENCE: Is there any
compatibility from Project
Tango with Intel's RealSense?
EITAN MARDER-EPPSTEIN: Yes.
So Intel has announced the
availability of developer kits
that run Project Tango
integrated with their RealSense
cameras.
And I believe they're $399, so
you can today go and preorder
an Intel RealSense device that
has Project Tango capabilities.
And we're working with them
as well as our other chipset
partners.
Other questions?
You in the front
row in the back.
AUDIENCE: Yes.
Do you see a future with
Project Tango and gaming
with Google Cardboard, for
example-- virtual headset,
you know, [INAUDIBLE] HMDs?
EITAN MARDER-EPPSTEIN: Yeah.
So I think Project--
the question
was do you see a future for
Project Tango with gaming?
And I think that
there is a huge future
for Project Tango with gaming.
I think there's
also a huge future
for using Project Tango for any
kind of positional tracking.
And you can think about
extensions to that.
We've played around with taking
it and putting it in an HMD.
There are other problems
with that like screen refresh
latency and all of the problems
that come along with that.
But I do think that people will
use the devices in that way,
and it's really
interesting to me.
Yes.
AUDIENCE: Can you use
that device [INAUDIBLE]
will up your-- outdoor
with a lot of trees
and wind in the [INAUDIBLE]?
EITAN MARDER-EPPSTEIN:
The question
was will Project
Tango work outdoors?
And the answer there is
that pieces of the stack
will work outdoors,
so the motion tracking
and area learning capabilities
will work outdoors.
But depth perception--
AUDIENCE: Trees, leaves, and--
EITAN MARDER-EPPSTEIN: Yeah.
Trees and leaves are
actually great features
to track as you move
through the world,
but the problem is the sun.
The sun turns out to be a
really, really strong source
of infrared light.
And so for the
depth sensors that
are currently on
the Tango tablet,
they get overpowered by the sun.
So you would be able to do
outdoor navigation with a Tango
device but having Mittens
follow you through the trees,
unless you were in a shaded
area, would be difficult.
AUDIENCE: What about lenses--
special filtered lenses
to filter out
overwhelming [INAUDIBLE]?
EITAN MARDER-EPPSTEIN:
Well, I think
the problem with using
specialized lenses to filter
out IR, which is
something that you can do,
is that the device needs to see
the IR pattern to be able to do
its depth computation.
And so if you
filter out that IR,
the device doesn't
see the pattern.
There are solutions to this.
AUDIENCE: --sensor.
EITAN MARDER-EPPSTEIN: Yes,
it depends on the sensors.
The current sensors-- no.
But over time we hope to improve
the ability of these devices
to see outdoors.
Other questions?
Yes, you next to
the blue-- yeah.
AUDIENCE: [INAUDIBLE]?
EITAN MARDER-EPPSTEIN:
So the question
is can you share the
3D models that you
create of an environment?
Right now the Constructor application that I showed you will export to OBJ and PLY files, which are just standard mesh assets.
You have to give
permission to export.
The user has to click a button,
but once you have the file
you're free to email
it or do whatever.
And you could import it into
another Project Tango device's
memory space and use it
if you were so inclined.
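Since OBJ is a plain-text format, pulling the exported geometry into your own tooling takes only a few lines of standard Java. A sketch (room.obj is a hypothetical file name):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Read the vertex positions out of an exported OBJ mesh,
// e.g. readObjVertices("room.obj").
static List<double[]> readObjVertices(String path) throws IOException {
  List<double[]> vertices = new ArrayList<>();
  try (BufferedReader in = new BufferedReader(new FileReader(path))) {
    String line;
    while ((line = in.readLine()) != null) {
      if (line.startsWith("v ")) {  // vertex lines look like "v x y z"
        String[] parts = line.trim().split("\\s+");
        vertices.add(new double[] {
            Double.parseDouble(parts[1]),
            Double.parseDouble(parts[2]),
            Double.parseDouble(parts[3])});
      }
    }
  }
  return vertices;
}
```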
Yes.
AUDIENCE: [INAUDIBLE]?
EITAN MARDER-EPPSTEIN:
So the question
was will Tango technology
be extended into other form
factors like robotics?
I think absolutely yes.
Well, I hope so.
My background's
actually in robotics,
so I have a little bit
of an interest in this.
But we've already had
interest from people
who are doing
quadrotors-- anyone who
needs a lightweight localization
solution on their device.
I think there's huge
impact potential there.
We're really focusing
on the smartphone space
first to drive down the
cost of these sensors
and to make the technology
more ubiquitous.
We believe that that's
the best way to get it out
into the world, and then
things like robotics
can piggyback on top of it.
All right.
I am out of time, but I
will have office hours
right after this.
So thank you very, very
much for your time.
Enjoy the rest of this.
[MUSIC PLAYING]
