AR has become quite prevalent
these days. I mean, we see mobile apps
telling you to point your phone at a
QR code, or at a magazine, or something
like that.
All these solutions are
basically using computer vision and your
phone's camera. All they're really doing is
recognising a picture, so ideally a 2D
image, but they can extrapolate from that.
Once they recognise the image they can
get its position in space and its
orientation, and then just tie virtual
content to it. Right? Now the problem with that is,
it looks pretty impressive, but
these systems don't actually
understand the environment around them.
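The marker-based pipeline just described, recognise an image, recover its pose, attach content, can be sketched roughly as below. This is a minimal sketch: the pose values are made up, and `pose_to_matrix` is a hypothetical helper, not any particular AR library's API.

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix R
    and a translation vector t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical marker pose reported by a tracker: the marker sits
# 0.5 m in front of the camera, rotated 90 degrees about the z axis.
R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])
t = np.array([0., 0., 0.5])
cam_T_marker = pose_to_matrix(R, t)

# Virtual content is authored in marker coordinates, e.g. a point
# floating 10 cm above the marker's centre.
p_marker = np.array([0., 0., 0.1, 1.])   # homogeneous coordinates

# To render it, transform it into camera coordinates every frame.
p_camera = cam_T_marker @ p_marker       # ends up 0.6 m in front of the camera
```

Everything here lives relative to the marker: if the marker is occluded or lost, the content has no frame of reference, which is exactly the fragility described above.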
It doesn't know that this is a card; it
doesn't know that I'm holding it in my hand,
or anything like that. So it might put my
hand in front of it, and if it can still see
part of the card, it will keep drawing the
content on top of my hand. The illusion is
very easy to break with these things.
That's, you know, the same as the depth
problem we've seen in computer graphics before.
Exactly, yes, and also
it limits the kinds of things you can do.
If all it really understands is,
OK, I've seen this thing, I know my
position in space, it quickly limits the
things you can do with physics,
or with moving about, getting tired of looking at
that marker and maybe moving a little bit
off it with optical flow, as we discussed
another time. The thing is that there are
already solutions that are trying to deal with this.
For example there is the Google Tango
project. What they did was, this
group looked at the Kinect and said,
OK, what does the Kinect do? It has a
depth camera; it reads the environment
around it.
Well, it was designed to look at
humans and register what they're doing,
but it turned out it was pretty good at 3D-scanning
stuff. So they said, all right, let's
strap one of those to the back of a
tablet, and they came up with this.
This is the original Google Tango
development device. It has, on the back of
it, a laser sensor, depth cameras and
what have you. I believe it's billed as the
spatially aware tablet. So it has a very
good understanding of where it is, its
orientation, what it's looking at, and
its motion. What this does is it creates
a point cloud, pretty much in real time, of
the environment around it.
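Turning a depth camera's output into such a point cloud is, at its core, unprojecting every depth pixel through the pinhole camera model. A minimal sketch, with made-up intrinsics (`fx`, `fy`, `cx`, `cy`) and a tiny synthetic depth image:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Unproject a depth image (in metres) into an N x 3 point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no depth reading

# Hypothetical 4x4 depth image: everything 2 m away except one dead pixel.
depth = np.full((4, 4), 2.0)
depth[0, 0] = 0.0
cloud = depth_to_point_cloud(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
# 16 pixels minus the dead one -> a 15-point cloud, all at z = 2 m
```

A real device does this per frame and fuses the clouds using its tracked motion; this sketch only covers the single-frame unprojection step.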
Unlike the previous AR, this is
where we start getting somewhere: with an
understanding of the
environment and an
understanding of the motion of the tablet, we can
start tying virtual content to things we
couldn't before. So I can put a mesh on
this floor and it will stay there, or I
can put a piece of sound on the corner
of this cabinet, and whenever I come
close I'll get to hear the sound. But this is
now an old device, about three or four
years old, and it was only a developer
device. There is actually the first
commercial device, the Lenovo Phab 2 Pro,
I believe. It is now looking like a proper
commercial device, horrible case
notwithstanding, and just like the
developer device it does have an array
of sensors over here.
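That sound anchored to the cabinet corner earlier is just a world-space position plus a proximity check against the tracked device pose. A minimal sketch, with a made-up anchor position and trigger radius:

```python
import numpy as np

# Hypothetical world-space anchor: a sound pinned to a cabinet corner.
sound_anchor = np.array([1.2, 0.0, 3.4])   # metres, world coordinates
TRIGGER_RADIUS = 1.0                        # start playing within 1 m

def should_play(device_position):
    """Play the anchored sound only when the tracked device is close enough."""
    return np.linalg.norm(device_position - sound_anchor) < TRIGGER_RADIUS

far  = should_play(np.array([5.0, 0.0, 3.4]))  # across the room -> False
near = should_play(np.array([1.0, 0.0, 3.0]))  # right by the cabinet -> True
```

The key difference from marker AR is that `device_position` comes from the device's own spatial tracking, so the anchor survives even when nothing recognisable is in view.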
So, what can this do? Let's have a look.
This is probably my favourite example, although
it's not very exciting.
We have this plane that is reading the
environment pretty well and telling me
what plane I'm looking at.
The green little dots?
Yeah, so the green little dots are
basically the edge detection, and this
plane over here is aligning itself to
the real world, so in this case the table.
There's a lot of white; that's a plane, a
surface basically, that is aligning
itself to the real world.
You can see that the edges line up
with the table and the area near the
wall, and as we go over the floor
it sort of turns around and lines itself up
to any surface.
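Aligning a virtual plane to a real surface like this usually comes down to fitting a plane to the nearby points of the cloud. One common approach is total least squares via SVD; here is a minimal sketch on synthetic "table top" points with a little fake sensor noise:

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to an N x 3 point cloud by total least squares:
    the plane normal is the right singular vector with the smallest
    singular value of the mean-centred points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal

# Hypothetical points sampled from a table top at height y = 0.7 m
# (y is "up" here), with a little noise on the height.
rng = np.random.default_rng(0)
pts = np.column_stack([
    rng.uniform(-1, 1, 200),          # x
    0.7 + rng.normal(0, 0.002, 200),  # y, almost perfectly flat
    rng.uniform(-1, 1, 200),          # z
])
centroid, normal = fit_plane(pts)
# the recovered normal is (up to sign) very close to the up axis [0, 1, 0]
```

Real systems add an outlier-robust step (e.g. RANSAC) so points from the wall or objects on the table don't drag the fit off, but the core idea is the same.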
OK, yeah, so that's all very well, but what
does it actually mean? Well, it means that
I can go and understand the real world.
So, there we go.
OK, let's measure the size of the door.
You're drawing a line.
Yeah, I'm basically using a measuring
tape, and there we have it. There are
essentially two markers on this; you can
see how, with an understanding of the
space around it, it can actually start
measuring as well. So we could go around
this room taking measurements and end up
with an architectural floor plan.
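Once the two ends of the "measuring tape" are anchored in world coordinates, the measurement itself is just the distance between two 3D points. A minimal sketch, with made-up anchor positions for the two sides of a door frame:

```python
import numpy as np

def measure(p1, p2):
    """Straight-line distance between two world-space points, in metres."""
    return float(np.linalg.norm(np.asarray(p2) - np.asarray(p1)))

# Hypothetical anchors dropped on either side of a door frame.
left  = [0.00, 1.00, 2.50]
right = [0.90, 1.00, 2.50]
width = measure(left, right)   # a 0.9 m wide door
```

The hard part is everything before this line of maths: the device has to keep both anchors fixed in the world as the camera moves, which is exactly what the spatial tracking provides.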
I mean, that is one of the use cases, you
know, for estate agents, real estate.
Yeah, they have that already; they've been
using it to create basic floor plans and
3D models, you can just check the
website. With a better understanding of
what the environment around us is
populated with, and how the camera is
moving about, you can start doing way
more interesting things, such as
populating the environment around you
with your own virtual content, or
whatever you might want to see. So this
is the augmented camera application. It
is a normal camera application but,
again, with that whole understanding of
the environment, we can start
putting virtual content inside it. So in
this case we can put a cat. Where would
you normally have a cat?
Let's see here... oh, there it is,
reflected on the floor there, and we have
a cat that is standing on the floor,
playing on the floor, thanks to the
device's understanding of what the floor
is. And I guess we can also use a laser
pointer, which it should chase when it
sees it. It's definitely not acting like a
cat, I know, to be honest, but it does go
for the fish, so that's fine. It's recognised
the floor and we have a cat playing on the
floor, but maybe we can get it up onto another
surface that is recognised, like, say, the
table. And it gets the difference between
the floor down there and the top of the
table. And it's jumped off the table.
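Placing the cat on a detected surface is typically done by casting a ray from the camera (say, through a screen tap) and intersecting it with the fitted plane. A minimal sketch with a made-up camera pose and a floor plane at y = 0:

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Where a ray from the camera hits a detected plane;
    returns None if the ray is parallel to, or points away from, the plane."""
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_point - origin, plane_normal) / denom
    return None if t < 0 else origin + t * direction

# Hypothetical camera held 1.5 m up, looking straight down at the floor.
camera = np.array([0.0, 1.5, 0.0])
look_down = np.array([0.0, -1.0, 0.0])
floor_point = np.array([0.0, 0.0, 0.0])
floor_normal = np.array([0.0, 1.0, 0.0])

hit = ray_plane_intersection(camera, look_down, floor_point, floor_normal)
# the cat gets placed at the hit point on the floor, here the origin
```

Because every detected plane carries its own point and normal, the same routine lets the content move between the floor and the table top, which is what the demo shows.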
With a traditional AR application that
was just using a marker to do that, it
wouldn't have an understanding of that; it
would just be doing everything in
relation to that marker, and whether we had
it on the floor or on the table,
it wouldn't matter. This now is where we're
actually having intelligent
content, in this case
populating real space, the
objects that we use every day. You can
see how this could end up being
something a bit more exciting. In fact, we
do have something to show for that: one
of these. This is the Microsoft HoloLens.
Same principle, but strapped to your
face.
So as you can see, it has, again,
pretty much a Kinect all across the top
over here. And this is a proper
heads-up display, unlike, say, earlier
endeavours like Google Glass,
which was simply just a screen off to
the side; it didn't cover your normal
vision with augmented content. But this is,
yeah, I'd say, the real deal.
This is the one. I don't want to make
that claim and upset people, but from a lot of
the stuff I've tried, this is probably
the one that gets it. In fact, I can
show you a bit more.
