RYAN HICKMAN: Hello.
Everyone excited they
got notebooks?
I wish we were giving
out free robots.
Well, people are still
coming in.
But let me see by a show
of hands, who here
really loves robots?
Good, good.
We've got everybody.
And my friend PR2.
So you came to the right talk.
This is Cloud Robotics.
This is a tech talk.
And I am Ryan Hickman
from Google.
I also have with me
Damon Kohler.
We're both on the Cloud Robotics
team at Google, which
I'm sure you've never heard
of before today.
And we have Brian Gerkey and Ken
Conley from Willow Garage.
So today, I'm going to give you
an introduction to what
Cloud Robotics is.
It's probably something you've
never heard of or thought
about in the past. So I'm going
to tell you what it is
by concept.
And then Ken and Brian are going
to give you an overview
of ROS, an open source platform
for robots, and a
demo of the PR2 doing
some tricks.
Then, Damon is going to talk
about work we've done with
Willow Garage to port
ROS to Android.
So Android apps can run ROS and
talk to the amazing PR2.
And then I'm going to close out
with a demo of a prototype
object recognition service that
we created, and then give
all of you some action items
so that you can all become
roboticists.
Then we'll stick around for
Q&A and we've also got the
feedback link up there.
People have been using that
and the hash tags.
So what is a cloud-connected
robot?
If you look at this PR2 over
here, it's an amazing machine.
But it still has a
limited amount of
memory and storage space.
It cannot know everything.
It can't process all the sensors
that it has in real
time in all the ways
that we want it to.
It's just limited, even
as great as it is.
But if you tap into the cloud,
we can offload some of that--
the perception of the world
around the robot, the
understanding of the task it's
been given to do.
The robot can share information
with other humans
and with other robots.
And it can react in a
smarter way by using
brand new cloud services.
So how are we going to connect
hardware with the cloud?
Well, yesterday the Open
Accessory API was
announced at the keynote, and
some of you might have
gone to that talk.
Hopefully, some of you got
the ADK development kit.
I see some thumbs up.
So you can now take the sensors
of an Android device--
the touchscreen, the microphone,
the speaker, the
gyroscopes, the memory,
the processors--
and you can use that
for a robot.
You just need to jack in motors,
actuators, lights, and
give it mobility.
So you can have an Android app
that actually physically
interacts with the world.
And robots are
my favorite use
case of that new API.
So Google also has some cloud
services which are really
useful for robotics.
If you've ever used Google
Goggles on your phone, you
know that you take a picture
of something, and then the
phone takes that picture,
sends it up to a cloud
service, and it's compared
against the massive database
of potential matches.
It's compared to more things
than you could ever store on
your robot.
And then it comes back, and in
seconds, it tells your phone,
it tells the app, what was that
thing you just took a
picture of.
That's really powerful
for robots.
If robots are going to roam
around in the world and
encounter things that they did
not expect to run into, they
need to look at them, send that
data up into the cloud,
and say, robot, this is what
you just ran into.
This is how you should
interact with it.
That would be a great
cloud service.
We also have mapping and
navigation services.
If you've ever used turn-by-turn
directions on
your mobile phone, you know
how powerful it is to open
your phone and have it know
exactly where you are, where
you're going, and how
to get there.
Well, that would also be
terrific for robots.
Imagine if they knew where they
were and how to get where
they need to go.
They could also know where other
robots were and where
all the humans were.
That would be a great service.
And then, Google has
some competencies
with voice and text.
We have voice recognition,
optical character recognition,
then language translation,
and then text-to-speech.
So if you put all of that
together, that's going to
allow me to say, robot, fetch
me a beer, instead of doing
what Brian is doing today,
with lots of
clicking and typing.
So in essence, the
cloud is going to
enable smarter robots.
It starts with off-the-shelf
hardware--
the mobile phones and the
tablets of today, and the
commercial sensors that you
see on the top of the PR2--
tapped into a common
set of APIs, a common
open framework.
That's going to enable you to
solve hard robotics problems
that haven't been
solved before.
Because the basics will be taken
care of for you now.
And using the cloud
gives us scalable
CPU, memory, and storage.
You essentially have unlimited
amounts of knowledge that you
can tap into.
Unlimited processing power.
You don't have to be worried
about your power
budget on the robot.
You can just send all of that
data off to the cloud, and
have one, 10, or 10,000 servers
all crunching
the data for you.
So to give you an overview of
ROS and an introduction to
what that is and some
demos, I'm going to
hand it over to Ken.
KEN CONLEY: Thank you, Ryan.
Hi everyone, I'm Ken Conley from
Willow Garage, and this
is my colleague Brian Gerkey.
Today we're going to speak to
you about ROS in the cloud.
Just like Android provides you
tools and libraries to develop
applications for smartphones
and tablets,
ROS does for robots.
And just like Android, ROS
is completely free and
open-source for you to
customize and extend.
We have developers around the
world at the top research
labs, like MIT, Stanford,
Berkeley, University of
Pennsylvania, Georgia Tech, and
many more, all providing
libraries for you to use
with your robot.
In fact, there are thousands
of them.
All the way from low-level
sensor drivers to computer
vision algorithms to some of
the latest and greatest
research results being
published
in conferences today.
Now you may wonder, what can
these libraries help me build?
A lot of it depends on what
your robot looks like.
ROS runs on robots like the one
here with two arms and a
mobile base and a lot
of sensors on top.
But ROS runs on many other
types of robot, including
robots that fly through the
air, like quadrotors, as well
as robots at sea and in the
ocean, both on and underneath
the water.
Some of these robots are built
out of plywood and motors by
graduate students, and others
are ones that you can buy off
the shelf and start programming
immediately.
One of the questions I get
asked most frequently about
ROS is, what do I
need to run ROS?
Well, that depends on
what you want to do.
If you're going to build your
own autonomous car, you're
probably going to need some
servers in the trunk.
But if, instead, you're just
trying to visualize data from
a surfboard, you might be
interested to know that ROS
can run on platforms as small
as Arduinos, BeagleBoards,
PandaBoards, and other
low-cost, small platforms.
So what is ROS?
Well, at its heart, ROS is a
message-passing system based
on an anonymous
publish/subscribe
architecture.
In ROS, you have nodes, which
are processes, and they
communicate with each other over
topics, which are usually
network sockets.
So these nodes can be
on one computer or
they can be on many.
And so to find each other,
there's a ROS core, which acts
as a name service.
ROS has language bindings in
a variety of languages,
including C++, Python,
LISP, and Java.
And it also has command line
tools that let you interact
with it directly.
So to give you a quick Hello
world example in ROS, we're
going to use a command line
tool called rostopic.
On the first line, we do a
rostopic publish to the
chatter topic, a string
containing Hello world, and
we're going to publish it
10 times per second.
Now, on another terminal, or on
another computer, you can
type rostopic echo of the
chatter topic, and you'll see
that data displaying
to your screen.
And it's pretty simple to just
start exchanging messages.
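For reference, those two commands look roughly like this in
ROS 1 syntax (the exact YAML argument form may vary a bit
between versions):

    rostopic pub /chatter std_msgs/String "data: Hello world" -r 10
    rostopic echo /chatter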
Of course, you need a lot more
than messages to build a robot
application, so we provide a lot
of functionality as well.
We focused on three main areas
in ROS for our own
development:
perception, mobility,
and manipulation.
That's because we believe a
combination of these three
capabilities is what you need
to build robots that are meant
to interact in environments
designed for humans.
So if you're navigating around
a crowded living room and
having to avoid people or your
cat, or if you're trying to
get a robot to do your laundry,
you're going to need
these capabilities.
In order to design these,
we also needed a robot.
And so we built our own.
It's sitting over
here to my left.
It's called the PR2, and it's
built by Willow Garage.
We built the PR2 to be
a world-class mobile
manipulation research
platform.
So all the best researchers out
there in robotics would be
able to do anything they could
dream of with this platform.
So of course, since it is a
research platform, it has a
pretty big price tag.
It's $400,000.
But for all of you in the
audience that contribute to
open source, we have a discount
price of $280,000.
Just want to put that out
there, in case you
want to order one.
For its brains, it has--
it's pretty beefy.
It's got two servers in it,
each with eight core Intel
Xeon processors and
24 GB of RAM.
It's covered head to
toe in sensors.
It's got seven cameras, multiple
laser rangefinders.
And it's got these really
awesome arms. And we put these
capabilities and computation in
there so researchers would
really be able to do innovative
applications that
would get us toward
that Jetsons,
Rosie the Robot future.
So here's the video of what some
researchers at Berkeley
did a year ago with the PR2.
They got it to fold towels.
When they first published this
video, it said 50x on it,
because it took 25
minutes per towel,
which is pretty slow.
But just last month, we
shot this new video.
They have it running
five times faster.
It only takes between two and
six minutes per towel.
And they were also able to get
rid of a lot of custom
hardware they used the
first time around.
So they were able to improve
the performance both in
software and the hardware
requirements.
So, fairly soon these
researchers--
This is real, folks.
You can do it.
So, soon these researchers think
that they'll have all
the basic problems of laundry
solved, from loading the
washing machine to emptying
the dryer to
folding their clothes.
But this is the perfect
opportunity for the cloud.
Because fashion, as we know,
is constantly changing.
So even as these researchers
add new features like pants
and baby clothes and socks, you
want your robot to be able
to fold your laundry whether
it's a brand new t-shirt that
you got from a conference or a
Snuggie you got for Christmas.
So now, we're going
to give you a live
demo of ROS in action.
I'm going to show you a tool
called rviz, which is probably
the most widely used
tool in ROS.
It's a 3-D visualizer.
Now what we see here on the
screen is a model of the PR2.
This is actually connected
to this PR2.
So as I move it around, you can
see that the display on
the screen updates.
We can add all sorts of 3-D data
into this view, so we can
understand what the robot
is seeing and thinking.
So right here, we see data
from this tilting laser.
So as you can see, it's able
to see and build a nice 3-D
model of the room that
we're sitting in.
We can also see other sensors.
Like on top of the head,
we have this Kinect.
Can we bring the stage
lights up real quick?
It's a little dark, but you can
see there's some nice 3-D
color images of people sitting
in the front audience.
And I'll note, this is
just a normal Kinect
sitting on a head.
You too can just buy one from
Best Buy or Fry's Electronics
and get started using this
software on your own.
Now, you may wonder, what sort
of tools do we provide to you
to make all this possible?
Because this is actually
pretty complicated.
We have two different sensors
and a complicated robot, and
even as I move the head of the
robot up and down, you can see
that the 3-D cloud is
correctly moving and
re-registering its position
in the world.
Well, we have libraries in
ROS that do that for you
automatically.
And one of them is called tf,
which stands for transforms.
So just like in physics, where
you need a reference frame to
understand your data, in
robotics we call these
coordinate frames.
These coordinate
frames may be as simple
as saying, here's a position
on a map, or I can say
something more complicated like,
this is a position one
meter in front of my hand.
Or I can attach it to data and
say, this is data collected
from this sensor mounted
to this point on
the head of my robot.
And it will do all
the rest for you.
So we can see what they
look like here.
As you can see, there's lots
of these on the robot, and
they allow us to make the math
simple so that we can just
focus on building
applications.
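For reference, a minimal sketch of using tf from Python; the
PR2 frame names here ('base_link', 'r_gripper_tool_frame')
are assumptions about a standard PR2 setup:

    import rospy
    import tf

    rospy.init_node('tf_example')
    listener = tf.TransformListener()

    # Wait until the transform between the base and the right
    # gripper is available, then look up where the gripper is
    # relative to the base.
    listener.waitForTransform('base_link', 'r_gripper_tool_frame',
                              rospy.Time(0), rospy.Duration(4.0))
    (position, orientation) = listener.lookupTransform(
        'base_link', 'r_gripper_tool_frame', rospy.Time(0))
    print position, orientation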
So in ROS, the message
is the medium.
By that, I mean instead of
having code APIs in C++ or
Java that you call to make a
robot do something, in ROS,
you just publish a message.
And that will get the robot
to do an action.
So we're going to run through
some quick examples that show
you that moving a robot's pretty
easy, and you can do it,
in this case, in just
three lines of Python.
So in this example, on the first
line I'm going to create
a point, and I'm going to put
it one meter in front of the
base link, which is one of these
coordinate frames I just
showed you in the visualizer.
On the second line, I'm going to
create a message, and this
message is a point head goal,
which will contain this point
that I want to have
the robot look at.
And on the last line, we're
simply going to publish it to
a topic that makes the
robot move its head.
As you can see, the robot's
now looking at the base.
Now, with just a couple more
lines of code, we can make it
do something more interesting.
Instead of having it look at the
base, I'm going to have it
look at the hand.
And instead of publishing one
message, I'll publish messages
ten times per second, so that
the robot will be able to
track the movement
of the hand.
As you can see, it's now looking
at the gripper as I
move it around.
In five lines of code,
I now have the
robot tracking an object.
And as you remember before,
those coordinate frames could
be anywhere, so I can attach
those to any sort of object,
and get the head moving around
and tracking it.
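A rough sketch of what that looks like in Python; the
controller topic and message plumbing follow the PR2's
pr2_controllers stack, but treat the exact names as
assumptions:

    import rospy
    from pr2_controllers_msgs.msg import PointHeadActionGoal

    rospy.init_node('looker')
    # Topic name is an assumption based on the PR2 head controller.
    pub = rospy.Publisher('/head_traj_controller/point_head_action/goal',
                          PointHeadActionGoal)
    rospy.sleep(1.0)  # give the publisher time to connect

    # Look at a point one meter in front of the base.
    goal = PointHeadActionGoal()
    goal.goal.target.header.frame_id = 'base_link'
    goal.goal.target.point.x = 1.0
    pub.publish(goal)

    # To track the gripper instead, re-point at the origin of its
    # coordinate frame ten times per second.
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        goal.goal.target.header.frame_id = 'r_gripper_tool_frame'
        goal.goal.target.point.x = 0.0
        pub.publish(goal)
        rate.sleep()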
Now I started off showing you a
Hello world example where it
just printed Hello world
to the screen,
but this is a robot.
It doesn't have a screen
to print hello to.
It'll have to wave to us
instead, which is actually a
lot more fun.
So this looks a little bit
more complicated, because
instead of creating a point,
we're going to create a pose.
Because we have to control the
orientation of the arm.
This is all actually
pretty simple.
So for the orientation, we're
going to use a quaternion,
which is also used in some
APIs in Android.
And for the rest of it, all
we're going to do is take a
point half a meter in front
of the side-plate of this
robot and move it back and forth
as a sine wave. So it
looks like fancy math, but
remember sine waves, they just
go up and down?
We're just going to move one
back and forth like this.
Now if we run this, we'll
see we have the robot
waving hello at us.
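A sketch of the wave in Python, under the assumption of a
Cartesian arm controller listening for PoseStamped commands;
the topic and frame names here are hypothetical:

    import math
    import rospy
    from geometry_msgs.msg import PoseStamped
    from tf.transformations import quaternion_from_euler

    rospy.init_node('waver')
    # Hypothetical Cartesian controller command topic.
    pub = rospy.Publisher('/r_cart/command_pose', PoseStamped)

    pose = PoseStamped()
    pose.header.frame_id = 'torso_lift_link'
    # Fix the gripper orientation with a quaternion.
    qx, qy, qz, qw = quaternion_from_euler(0.0, 0.0, 0.0)
    pose.pose.orientation.x = qx
    pose.pose.orientation.y = qy
    pose.pose.orientation.z = qz
    pose.pose.orientation.w = qw

    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        pose.header.stamp = rospy.Time.now()
        pose.pose.position.x = 0.5  # half a meter out in front
        # Sine wave: sweep the hand from side to side.
        pose.pose.position.y = 0.3 * math.sin(2.0 * rospy.get_time())
        pub.publish(pose)
        rate.sleep()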
And if we run our previous
example in another process,
we'll see that the head
of the robot is now
tracking the hand.
And this, in a nutshell,
is how you
program robots with ROS.
We have a bunch of nodes, and
they each try and do one thing
well and no more.
And then they can use ROS to
communicate with each other to
do more complex behaviors.
So whether you're just trying
to get the head of the robot
to follow a hand, or if you're
trying to fold towels, you can
all do it just using ROS.
So this is obviously a talk
about cloud robotics.
Well, ROS was designed from
the ground up to be
distributed.
This PR2 has two computers in
it, but our original PR2
prototypes had four computers.
And we wanted to fully harness
that computational power for
our applications.
But what if you were able to
take the nodes that were
running in one of these
computers, and just move them
into the cloud instead?
And take advantage of object
recognition, voice
services, mapping and
navigation, and the other great
things that the cloud
has to offer?
Of course, we can do this,
but we want to know why.
Well, the main reason is that
personal robots need to be
inexpensive.
Even with my employee discount,
I'm probably not
going to have a PR2 in my
house folding laundry.
It's a research platform,
and it's very expensive.
And so for us to get there as
robot app developers, we need
an inexpensive platform.
So what makes robots like
the PR2 expensive?
One of the main costs
is the servers.
And not just in terms
of dollar costs.
The majority of the power in the
PR2 goes to powering the
computers, not the motors.
And if you remove just one of
the two computers in the PR2,
you double the battery life,
from two to four hours.
So that means computation for
robots has costs not just in
terms of money, but battery,
cooling, and space.
All of which are cheap
in the cloud.
Another thing that makes robots
expensive is sensors.
Just three years ago, when we
started creating ROS and
the PR2, if you wanted to build
your own little robot
like this that just saw in two
dimensions, you'd have to buy
a laser that cost well
over $1,000.
But just last November,
Microsoft released the Kinect,
which is based on technology
developed by PrimeSense.
This is fantastic, because
suddenly even high schoolers
who are shopping at their
favorite electronics store
could buy a robotics-grade 3-D
sensor for them to develop on
at home and on their
own desks.
We really want to leverage the
potential of this new device,
so we sponsored a contest a
month later to see what the
developers in our community
could do using ROS, the
Kinect, and some computer vision
libraries that we have.
And this is what they
came up with.
So in this first example, you
simply draw some buttons, you
press them, and you have
your own soundboard.
And of course, people also use
the Kinect with actual robots.
So, flying a helicopter around,
and using the Kinect
to find obstacles, as well as fly
down the middle of a corridor.
People also used it much
like you would use
it in a video game.
So you wave your arms around,
the Kinect tracks you, and
you're able to get a robot
to mimic your actions
identically.
You can perform complex tasks
like playing chess or having
your robot fetch you a tissue.
People also used it
as a 3-D sensor.
You can use it-- you can wave it
around and use it to build
a 3-D map of your environment,
or you can move it around a
single object and use it to
create a detailed model of
just that object.
And we think developers will
be able to do a lot of
exciting new applications
using these sorts of
technologies.
And I should note that
everything that you see in
this video is open source
for you to use
and play with yourself.
As developers, we know that
we need a common hardware
platform if we want to become
app developers.
And so we've done that with a
mobile 3-D sensing platform
that we call TurtleBot.
TurtleBot provides you a Kinect,
a dual-core Atom
netbook, and an iRobot Create
base, integrated complete with
open source libraries and tools,
so that you can start
writing robot apps
for the home.
You may be wondering, what sort
of apps can I develop
with these sorts of
capabilities?
So let's look at a Google
Street View car.
Or as I like to call it, a
Google Street View robot.
Because if you look at it, up
top it has cameras so it can
take panoramic images, it has
lasers so it can see in 3-D,
it has GPS so it knows where it
is, and of course, it can
drive around.
Well, as it turns out, the
TurtleBot has all these same
capabilities, just at
a different scale.
And that scale is the home.
And so you could use it to
build your own home-view
alternative to Street View.
And as you have your robot
going around, creating
panoramic maps of your home,
you could feed it to object
recognizers in the cloud, which
could help you start
building an index of the
objects in your house.
And as we know, if you have a
crawler and if you have an
indexer, you can build
a search engine.
At long last, you can finally
find your keys.
There's probably a good
reason why we call web
crawlers robots.
Now, a home search engine would
be very different from a
web search engine, because it
would give us a new class of
data, as developers,
to play with.
It'll tell us, what are the
objects in my house?
Where are they located?
Where have they been?
And it could even give us
information about their
dimensions.
All sorts of new data that
we can build on top of.
And because it is a 3-D sensor,
we could also use it
to create new pipelines from
physical to digital back to
physical again.
So when personal computers first
came out, they had dot
matrix printers so we could
print documents out.
And soon after, we had scanners,
so we could bring
those documents back
into digital form.
Well, on the right of this
slide, you'll see a
Thing-O-Matic from MakerBot,
which is a $1,200 3-D printer
that you and your friends could
build for yourselves.
So fairly soon, we could use
technology like 3-D sensors
and 3-D printers to create new
pipelines in 3-D for creating
objects and printing them
back out again.
And also, we want to
challenge you as
developers to think about--
if you had a robot that was a
mobile 3-D sensing platform,
and you combined it with a
smartphone or a tablet, what
sort of new applications
could you build?
What could you do if you were
able to combine the libraries,
tools, and hardware that you
get with Android, and you
combine it with a robot
running ROS?
Well, to talk to you about ROS
and Android, I'm going to hand
it over to Damon Kohler
from Google.
DAMON KOHLER: Thanks
again, I'm Damon.
I'm a Googler, and I'm
going to talk to you
about ROS and Android.
So, to make ROS work on Android,
I spent the last few
months working pretty closely
with Ken and the rest of
Willow Garage to
create rosjava.
And rosjava is the first pure
Java implementation of ROS.
And that allows us to achieve
Android compatibility.
So, the entire project is
open source, just like
all the rest of ROS.
It's currently in early
release under heavy
development, but you guys
can check it out.
And all of the code examples
I'm about to show you are
available in their complete
form later on the site.
So what does a node look
like in rosjava?
Well, right now, we're going to
implement the simple Hello
world, the first Hello world
that Ken demonstrated.
And so we start with
a talker node.
The talker node implements
NodeMain.
And NodeMain simply gives
a main loop entry
point to all the nodes.
And the main loop entry point
takes a node configuration.
That configuration contains
things like the
URI for the ROS core.
It's like the DNS for the nodes.
The publisher node, in its
main loop, takes the node
configuration and passes
that into the
constructor for a new node.
And we'll call that node
the talker node.
In that talker node, we'll
create a new publisher.
That publisher takes ROS
string messages, and we create
it for the chatter topic.
Then, in a loop, we simply
put the Hello world
string into the ROS string
message, and we publish it
once per second.
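As a sketch, the talker described above looks something like
this in early rosjava; the package layout and constructors
changed between releases, so treat those details as
assumptions:

    import org.ros.Node;
    import org.ros.NodeConfiguration;
    import org.ros.NodeMain;
    import org.ros.Publisher;

    public class Talker implements NodeMain {
      @Override
      public void main(NodeConfiguration configuration) throws Exception {
        // The configuration carries, among other things, the ROS core URI.
        Node node = new Node("talker", configuration);
        Publisher<org.ros.message.std_msgs.String> publisher =
            node.createPublisher("chatter", org.ros.message.std_msgs.String.class);
        while (true) {
          org.ros.message.std_msgs.String message =
              new org.ros.message.std_msgs.String();
          message.data = "Hello world!";
          publisher.publish(message);
          Thread.sleep(1000);  // once per second
        }
      }
    }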
So now we need to look at the
other side, the subscriber.
So we create a new
listener node.
It takes the config again, and
we create a subscriber for that
same chatter topic.
For that subscriber, we have a
new message listener, and it
expects ROS string messages.
And then on every new message,
we simply print the Hello
world string that we received
to standard out.
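And the listener side, again as a sketch with the same
caveats about the early rosjava API:

    import org.ros.MessageListener;
    import org.ros.Node;
    import org.ros.NodeConfiguration;
    import org.ros.NodeMain;

    public class Listener implements NodeMain {
      @Override
      public void main(NodeConfiguration configuration) throws Exception {
        Node node = new Node("listener", configuration);
        node.createSubscriber("chatter", org.ros.message.std_msgs.String.class,
            new MessageListener<org.ros.message.std_msgs.String>() {
              @Override
              public void onNewMessage(org.ros.message.std_msgs.String message) {
                System.out.println(message.data);  // print to standard out
              }
            });
      }
    }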
So to make that work, we use
another command line tool from
ROS called rosrun.
And we run the two nodes.
And then once they both come
up, then you'll see Hello
world printed to standard
out once per second.
So what does that look
like on Android?
So here we have the same Hello
world code with an additional
counter that lets you
see every time a new
message comes in.
And this is running entirely
on that Android device.
So to start that, we have our
main activity for Android, and
then in that main activity,
we create a node runner.
And the node runner, in this
case, is taking the place of
the rosrun command line tool.
And instead of running all
of the nodes in separate
processes here, we'll run them
in separate threads.
So, in onCreate, we're going
to find the ROS TextView
that we put into the layout,
and then we're going to set
the topic name for that ROS
TextView to /chatter.
And that's the topic that
we'll subscribe to.
And then we're going to execute
it, just like we do
with the talker node.
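A sketch of that activity; NodeRunner and the helper names
follow the talk's description, but the early API details are
assumptions (imports omitted, since the package layout varied
across early rosjava releases):

    public class MainActivity extends Activity {
      // Takes the place of the rosrun tool: runs each node in its own thread.
      private final NodeRunner nodeRunner = NodeRunner.createDefault();

      @Override
      public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        // Find the RosTextView in the layout and subscribe it to /chatter.
        RosTextView<org.ros.message.std_msgs.String> rosTextView =
            (RosTextView<org.ros.message.std_msgs.String>) findViewById(R.id.text);
        rosTextView.setTopicName("/chatter");
        nodeRunner.run(rosTextView, NodeConfiguration.newPrivate());
      }
    }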
So that ROS TextView
is both an Android
TextView and a ROS node.
So if you actually take a look
at the inside of that ROS
TextView, it extends TextView,
and it implements NodeMain.
So in that TextView, we have
the topic name and the node
for that view.
So in the main loop, we take
the node, we create a
subscriber, we subscribe to the
topic name that was set,
and then on every new message,
we post a new runnable to the
UI thread, so that we can update
the TextView text.
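Conceptually, a simplified, non-generic version of that view
might look like this (a sketch, not the actual RosTextView
source; imports omitted):

    public class ChatterTextView extends TextView implements NodeMain {
      private String topicName;

      public ChatterTextView(Context context) {
        super(context);
      }

      public void setTopicName(String topicName) {
        this.topicName = topicName;
      }

      @Override
      public void main(NodeConfiguration configuration) throws Exception {
        Node node = new Node("chatter_text_view", configuration);
        node.createSubscriber(topicName, org.ros.message.std_msgs.String.class,
            new MessageListener<org.ros.message.std_msgs.String>() {
              @Override
              public void onNewMessage(final org.ros.message.std_msgs.String message) {
                // Hop back to the UI thread before touching the view.
                post(new Runnable() {
                  @Override
                  public void run() {
                    setText(message.data);
                  }
                });
              }
            });
      }
    }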
And that's how Hello
world works.
But Hello World is kind of
boring, and we have all these
cool sensors on Android
devices.
So, in this example, we're
actually publishing the
orientation of the
device to rviz.
And rviz is visualizing that as
a set of coordinates that
rotate as the orientation
of the phone changes.
So in this case, our node will
grab the Android sensor
service, or the sensor manager,
rather, and it will
create a new sensor
listener for the
rotation vector sensor.
The rotation vector sensor
kindly returns quaternions,
which is the preferred
representation of orientation
for ROS, so that gets rid
of a lot of the work we
would have had to do.
So every time we get a new
sensor event from Android,
then we're going to take that
quaternion and put it into a
ROS quaternion message.
And then we're going to use
that to create another ROS
Pose message, which is what Ken
was using earlier for the
Hello World example
with the waving.
Since we're not tracking the
position of the phone, we're
going to lock it to the origin,
and then we're going
to just instead publish
its orientation.
So on every new sensor event
that we get from Android, we
publish a new ROS PoseStamped
message with the orientation
of the device.
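Putting those pieces together, a sketch of the orientation
publisher; SensorManager.getQuaternionFromVector is a real
Android call, while the ROS message plumbing again assumes
the early rosjava API (imports omitted):

    public class OrientationPublisher implements NodeMain, SensorEventListener {
      private final SensorManager sensorManager;
      private Publisher<PoseStamped> publisher;

      public OrientationPublisher(SensorManager sensorManager) {
        this.sensorManager = sensorManager;
      }

      @Override
      public void main(NodeConfiguration configuration) throws Exception {
        Node node = new Node("orientation_publisher", configuration);
        publisher = node.createPublisher("orientation", PoseStamped.class);
        Sensor sensor =
            sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
        sensorManager.registerListener(this, sensor,
            SensorManager.SENSOR_DELAY_NORMAL);
      }

      @Override
      public void onSensorChanged(SensorEvent event) {
        // The rotation vector converts directly to a quaternion
        // (w, x, y, z), which is how ROS represents orientation.
        float[] quaternion = new float[4];
        SensorManager.getQuaternionFromVector(quaternion, event.values);
        PoseStamped pose = new PoseStamped();
        pose.header.frame_id = "/map";
        pose.pose.orientation.w = quaternion[0];
        pose.pose.orientation.x = quaternion[1];
        pose.pose.orientation.y = quaternion[2];
        pose.pose.orientation.z = quaternion[3];
        // Position is left at the origin; we only publish orientation.
        publisher.publish(pose);
      }

      @Override
      public void onAccuracyChanged(Sensor sensor, int accuracy) {}
    }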
So there's lots of other
sensors, besides orientation,
that are useful.
Cameras are super useful, and
the PR2 has seven of them,
like I said.
So in this particular example, we
are subscribing to the camera
from the PR2 and displaying
that on the tablet.
To do that, we use the
ROS image_view.
And we set up a ROS image_view
that accepts a compressed
image ROS message, and then we
set the topic name to /camera
that we want to subscribe to,
and then we execute the node.
Easy as that.
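The subscribing side is only a few lines inside onCreate; the
helper class names come from the android_core libraries and
are assumptions about this era of the API:

    RosImageView<CompressedImage> imageView =
        (RosImageView<CompressedImage>) findViewById(R.id.image);
    imageView.setTopicName("/camera");
    imageView.setMessageType("sensor_msgs/CompressedImage");
    imageView.setMessageToBitmapCallable(new BitmapFromCompressedImage());
    nodeRunner.run(imageView, NodeConfiguration.newPrivate());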
But PR2s aren't the only
things with cameras.
Your Android devices have
cameras as well.
So in this example, we actually
have the camera being
published from one
device and being
subscribed to on the other.
To do the camera publishing,
we use a ROS
camera preview view.
It gets rid of all the
camera code that you
would usually write.
And we set the topic name that
we want to publish those
images to, /camera, and then
we execute the node again.
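And the publishing side, sketched with the same caveats;
handing the view an open Camera is an assumption about its
API:

    RosCameraPreviewView preview =
        (RosCameraPreviewView) findViewById(R.id.camera_preview);
    preview.setCamera(Camera.open());  // publish preview frames as images
    nodeRunner.run(preview, NodeConfiguration.newPrivate());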
So at this point, what I'd like
to do is take all those
little demos that I showed you
and sort of wrap them up into
one package and show
you how it actually
interacts with the PR2.
Ready?
Great.
You can see Ken's tablet that
has the camera picture coming
from the PR2's head.
And then if he puts his finger
on the screen and changes the
orientation of the tablet, the
PR2 tracks the orientation of
the tablet.
So now you're actually inside
the head of the PR2.
So I've just shown you a couple
of the-- well, can we
switch back?
Excellent.
I've just shown you a few of the
possibilities of actually
integrating Android devices with
advanced robots like the
PR2 or still advanced
but more accessible
robots like the TurtleBot.
But there's lots and lots
more options out there.
So, with the Open Accessory
API that was announced
yesterday, you can start
connecting Android devices
directly to actuators and
external sensors.
But you don't even
have to do that.
Your Android device has tons
of sensors on board already
that are exceptionally
useful to robots.
So now, with rosjava, you can
actually connect those robots
to your Android devices to take
advantage of those things.
And in addition, Android
devices typically have
wireless access.
That means your Android device,
when it becomes an integral
part of your robot, becomes
its link to the cloud.
It gives the robot the ability
to access that unlimited CPU,
memory, and storage that Ryan
was talking about.
So with that, I'll give
it back to Ryan.
RYAN HICKMAN: Thanks, guys.
So who in here has written
Android apps before?
Oh, that's pretty good.
There are three competing
Android talks going on right now.
Who in here has written
web apps before?
Oh, that's awesome.
Who in here has written apps
for robots before?
Wow.
That's incredible.
So we would love to double or
triple that, though, and hit
all of you.
And I hope if you came into this
talk not knowing anything
about robots-- which looked like
it was about 2/3 of you--
I hope you can now see why tying
the robot to Android and
then to the cloud means that
you're reducing the processing
needs on the robot.
That reduces the battery load
on the robot, which in turn
reduces the weight of the robot
and the cost of the robot.
And you get all of that
efficiency while at the
same time tapping into new
cloud services that let it
do even more than it
ever did before.
So the price-performance ratio
shift here is pretty dramatic.
So I'm going to give you one
demo we put together here of
object recognition.
Since we're not launching any
new cloud services today--
we're releasing rosjava today--
we wanted to show you what
we could do if we took a
technology we already had.
So we worked with the Google
Goggles team, and we wrapped
it in a web service and
created an API for robots.
And what it allowed us
to do was to train a
custom corpus of images.
So you might want your robot to
recognize something that's
not already in the Google
Goggles system, for example.
And then, once we stored that
knowledge in the cloud, any
robot could then access it.
So what you'll see here
is, [? Chaitanya ?]
is typing in the name of one
of the Android figurines.
And we went and labelled
them all.
So this was the Honeycomb
figurine.
He types in the name on the
phone, and then he starts
taking pictures of it from
different angles.
And doing that from the phone
and sending those pictures up
to the cloud then trains the
cloud for what the Honeycomb
figurine looks like.
We also had a web-based
interface, so that if your
robot was looking at an object
remotely, you
could still train it.
So we did the cupcake bug droid,
and then hit learn
object and trained
it that way.
And then, because I like
cupcakes, I asked the
TurtleBot to go find me one.
So what you had there was:
train once, on one system,
store in the cloud, and
then all of the
systems can access it.
So there's actually a demo of
this running live upstairs.
Hasbro has Project Phondox,
which are these small robots
with Android phones walking
around the table.
And when they can get wireless
access, they are recognizing
different cards--
letters on the cards,
Transformers, Autobots, and
Decepticons--
and they run from the
Decepticons and they smile and
greet the Autobots.
And this is running
in the cloud.
And we actually had an
interesting moment on Monday
where we had 15 of the robots
out there, and we reset the
system when we got here, and
none of them knew anything.
They were all quite dumb.
And then we just took one of
them, and we held up the
cards, and we trained
that one.
And then all 15 of
them had that
knowledge at the same time.
So what robotics problems
can you tackle?
Well maybe you are already
really good at processing
large amounts of data.
Or someone on your team is
good at machine learning.
Maybe you have expertise sharing
knowledge amongst
different users or between
different devices.
Or maybe you have a very
accessible application, or
you're good with new forms
of user interaction.
Because it's much better, as I
said earlier, to talk to the
robot and ask it to fetch you
the beer than to walk up to it
and grab a mouse and try to
click your way there.
So what we would like all of
you to think about when you
leave here is how can you
ROS-enable your web service or
application?
It's great if it's open
source, but it
doesn't have to be.
This could be a new form of
business, which is launching
new APIs for robotics that
process information in a
special way that solves
a real-world problem.
And we want you to think about
ROS-enabling Android apps.
So you can use rosjava, and
you can write an app with
views and user interfaces that
are doing complex robotic
processing behind the scenes
and also connecting with
hardware and actuators.
And then you can put those
apps in the market and
distribute them out to any of
the 50 plus platforms that
support ROS today.
And I'm sure many more will come
online when people start
connecting hardware to
Android in new ways.
And you don't have to
go through Android.
You could take the PR2 and
give it a direct cloud
connection.
You could take any sensor, give
it a network ID, tap it
into a cloud service, and
it adds to the system.
So, to get started, go
to cloudrobotics.com.
That's going to take you to the
rosjava site, where you
can download that.
You can see the tutorials
that Ken and
Damon showed you today.
And if you haven't already been
there yet, we want you to
come upstairs to the third
floor in the Android
interactive zone.
We have the TurtleBots
running around.
You can touch them, you
can talk to some of
the folks from Willow.
We also have Hasbro there with
Project Phondox, and you can
play with those and see
how those work.
And then for those of you still
in the Bay Area 10 days from now,
we'll be at the Maker Faire,
which is a chance for you to
see more do-it-yourself and
hobbyist devices connected to
Android, running ROS, connected
to the cloud.
So we're going to stick around
for some Q&A, but I want to
thank all of my presenters
today and the PR2
for letting us demo.
Great.
So if anyone has questions,
please just come up to the mic
and just speak up, and we're
happy to answer them.
AUDIENCE: A quick
one on rosjava.
Is it a pure Java
implementation, or are there
some native dependencies
on that?
DAMON KOHLER: That's a pure Java
implementation of ROS.
AUDIENCE: Awesome.
AUDIENCE: Any plans
on supporting
other language bindings?
I know Damon started the
Scripting Layer for Android so
you could run Python and
do stuff like that.
DAMON KOHLER: ROS actually has
quite a few language bindings.
I think Ken could probably
rattle them off for you.
KEN CONLEY: So the first-class
ones are C++, Python, and
LISP, and we also have
experimental support for Lua.
AUDIENCE: I mean on
Android itself.
DAMON KOHLER: I'll be working
on rosjava for
the foreseeable future.
AUDIENCE: Within the realm of
unmanned ground vehicles, how
would you compare or contrast
your direction in terms
of ROS with your Northrop
Grummans, your Lockheeds, that
are working on that problem?
Are there some kind of cloud
services that could be
specifically useful for that?
RYAN HICKMAN: Yeah, I definitely
think cloud
services are useful to
robots as a whole.
And what we've seen is military
developments in the
past turn commercial, whether
it was satellites or
going to the moon.
I think it's GPS--
inertial measurement
units on missiles
are now in your Wiimote.
And it's a $10 chip, and
it used to be $60,000.
So all of those things are
turning into commercial,
low-cost devices that all
of us can then afford.
And what's happening is more
and more of the basics of
robotics are just being
taken care of for you.
And then new people are tackling
the hard problems as
they come about.
AUDIENCE: So, it seems a natural
question, have you
talked to Commander Pike about
using Go in this environment?
I'm sure he'd be disappointed
if the answer is no.
RYAN HICKMAN: I have not had
conversations
with Commander Pike.
AUDIENCE: Yeah, because it seems
like Go would be a very
natural fit, where you could
delegate to an army of robots, and
they work it out amongst
themselves, who does what.
RYAN HICKMAN: A great
contribution for somebody who
knows Go and is interested in
robots would be to write the
first Go client for ROS.
AUDIENCE: If only Go had
a way to talk to Java.
RYAN HICKMAN: All right.
Any more questions?
No?
Well, thank you all
for coming.
