MATEI CIOCARLIE:
Before we begin,
one message, I was
asked to pass along,
is to, please, hold off
questions until after the talk.
And with that, it's
my great pleasure
to introduce Ken Goldberg
who is a distinguished
professor of new media
up at UC Berkeley.
Ken is a roboticist,
a computer scientist,
really a true Renaissance
man with broad multidisciplinary
interests.
Undergraduate degrees
in EE and economics.
So, I guess, triple E at UPenn.
Then a PhD in robotics
at Carnegie Mellon.
And, now, holds a primary
appointment in IEOR at Berkeley
but, also, secondary
appointments
in EECS, Art Practice,
School of Information,
and Department of
Radiation Oncology.
So really broad
multidisciplinary interests.
And, maybe because of that,
he was one of the first people
to notice, early, the
possibility of combining
two fields: robotics
and cloud computing.
And he's actually been working
with Google, in this direction,
for a couple of years now.
And I'm sure he
will tell us more
about the results in that area.
And before I pass it on to
Ken, just one last thing
I want to mention is
that Ken once told me
that he considers his job
to be to make people think.
And I think he's
really good at that
because he sees connections.
And he sees new angles
in new directions.
So we're really happy
to have him here today
to inspire us to
think in new ways.
So take it away, Ken.
KEN GOLDBERG: Thank you.
Matei, I love your introduction.
Triple E, I think I'm going
to use that from now on.
MATEI CIOCARLIE:
[CHUCKLES] Yeah.
KEN GOLDBERG: I want to
thank you all for being here.
I want to thank
Michael, and James,
and all those I've
been lucky enough
to work with here at Google.
It's really a pleasure to be
here because right now, Google
is the world mecca for robotics.
So I want to give you
my perspective on what
I think is happening here.
I want to take you back,
first, about 20 years
to the early days of the web.
Who remembers their first
encounter with the Mosaic browser?
I had a group of
students in a lab at USC.
Actually, one of
them is here today.
And we were
interested in what we
could do with the worldwide
web in the early days.
And since we were
in a robotics lab,
we decided to think
about how we could
attach a robot to the web.
And we wanted it to do
something interesting.
So we set it up as
an art installation.
The idea, kind
of the last thing
we thought people would really
want to do over the web,
was to actually garden.
So we made something we
called the Telegarden.
And it was a very
early interface.
It used the first
version of HTML.
So you would be able to come in.
You could move the robot around
by clicking on this workspace.
Then the camera at
the end effector
would show you what
you were looking at.
And then, that way, you
could visit the garden.
But if you wanted to,
you could register,
we would send you a password,
and you could then participate.
You could help us
water the garden.
So there was a button
here for watering.
So you could actually do things.
You could interact with
the garden over the web.
And then, if you watered for
a certain amount of time,
we considered you a
member in good standing.
And we'd grant you
your first seed.
So you could plant
seeds in the garden.
And, of course, the
interesting thing
was that, contrary to
almost everything that
happens on the internet,
where you click on something
and you get instant gratification,
that's not how the
real world works.
The natural world really
hasn't evolved much
in the last 100,000 years.
So when you plant a
seed, nothing happens.
You have to come back,
and water, and et cetera.
And so it was really
a social experiment
to see what would
happen, how people
would interact with
something like this.
And one thing we
didn't anticipate,
because we're engineers,
was that if you build an 11
foot by 11 foot garden
space, and you invite people
from around the world to
come in and plant seeds,
it very quickly
becomes horrendously overgrown.
So you get something
that was, really,
more of an exercise
in the tragedy
of the commons than robotics.
But one other thing
that came up was
the question of: was it real?
Actually, a student
wrote and said,
how do I know there
really is a garden?
Because, as you can see,
it could be simulated.
And, in fact, this question
is becoming even
more interesting today, where
the advances in graphics
have evolved to the point
that, in many cases,
it becomes increasingly
difficult to distinguish
between virtual reality,
where there's a
synthetic environment,
and distal reality, where
there's a real environment,
but it may be mediated by
some technology like video
over the internet.
And we became really
interested in that question.
What is the basis
for understanding?
How do we know when
we're in this environment
versus that one?
And that led us to
a book that I wrote
with a number of
colleagues, including
Hubert Dreyfus--
who's at Berkeley--
the eminent philosopher.
And we ended up calling
it telepistemology:
the question of what is
knowable at a distance?
And how do
technologies influence
what we can know
over a distance?
Now, that was 20 years ago.
And a lot has
happened since then.
So now, we're in 2014.
The field of robotics
has evolved dramatically.
And there is, really,
an inflection point
in the evolution of robotics.
We now have over
a million service
robots out in the home.
There are defense robots,
with enormous investments
in defense.
And thousands of
surgical robots are
being used around the world.
And there are also many
advances in technology,
like sensors.
This is the Kinect sensor.
This has revolutionized
the field
because it provides
very low cost access
to three dimensional
models of the environment.
There's also this development.
[VIDEO PLAYBACK]
-And one of my responsibilities
as Commander-in-Chief,
is to keep my eye on robots.
[LAUGHTER]
-I'm pleased to report that
the robots you manufacture here
seem peaceful.
[LAUGHTER]
-At least for now.
To help everyone, from
factory workers to astronauts,
carry out more
complicated tasks,
NASA and other agencies
will support research
into next-generation robotics.
[END VIDEO PLAYBACK]
KEN GOLDBERG: So this
is a major milestone,
as well, because when the
president came out and said
that he was going to start
this initiative, the National
Robotics Initiative, it
galvanized research
around the country.
And, of course, all of you
are familiar with what's
going on here at Google.
The self-driving car is one
major project that's actually
very interesting and, I
think, a perfect example
of the kind of
thing I want to talk
about today which is a
broader approach that we
call cloud robotics.
And for this term, I
want to give credit
to James Kuffner,
who's here at Google.
He's away today.
He's in Japan.
But he coined this term in 2010.
And I find this to be
exactly the right term
to describe a host
of new technologies
and new ideas that are coming
together and really changing
the way we think about robots.
So let me give you an example.
I'll start with
this one: there
are five ways I think
about how robots will be
affected by the
web, by the cloud.
The first one is big data.
So when robots are working,
moving around in environments,
they will often encounter
things that they may not
have encountered before.
So they can access
this vast library
of data sets that
are available online
for information about all
kinds of objects and scenes,
and, for example, maps,
weather conditions, basically
skills that they could
acquire from the internet
and download on demand.
The second one is
that robots often
have to do extensive
computation, for example,
to do motion planning,
or statistical analysis.
And these can now be
done in the cloud.
In other words, a
robot doesn't have
to carry around the processing
elements onboard.
So it can access the
cloud with a problem description,
and it can be run and
processed in the cloud.
The third one is the idea
of people sharing resources.
And this has been
another change, really,
in the field of robotics.
As Matei mentioned, he
played a very active role
in the development of ROS,
the Robot Operating System,
at Willow Garage.
This has dramatically
changed the way
we think about robotics.
There's a lot more
sharing and open source
tools that are now available.
And one other aspect of
this, that's related,
is the idea of letting
people share ideas,
so letting them work on
designs using the internet.
So-- oh, I'm sorry.
This is another image
of ROS where researchers
are sharing data, in real-time,
doing experiments, testing
algorithms on the
web using the cloud.
And what I was getting to
with the idea of individuals
sharing ideas was
that we can also
use the cloud as a way of
gaining creative ideas.
And two years ago,
I was a co-founder of the
African Robotics Network.
And our challenge
with that was how
to develop an ultra low-cost
robot for education,
one that students
in Africa could afford.
So the idea was to aim for
a cost point of about $10.
And the idea was to use
the cloud as a resource,
to put out a
competition for ideas.
So we put this out with prizes.
And we got a number
of great entries.
And these are the
10 winning designs.
And they were beautiful designs,
very, very creative
use of materials.
This one is just done with
cardboard and zip ties.
And they all came
in somewhat under $100.
But the grand prize winner was
something called the Lollybot.
And it's beautiful.
It used a game controller.
And it turns out that the
vibratory motors that
are built into the
game controller
can be turned around.
And by attaching a couple
of wheels to them,
they can drive these wheels.
And then, the thumb switches
can actually act as sensors,
to detect when the
robot bumps into something.
But you need a moment
arm for the sensors.
So he came up with the idea
of attaching two lollipops.
They're actually functional;
they serve as levers.
And, of course, what
kid could resist a robot
with two lollipops
attached on top of it?
What's really
remarkable about this
is that you can get
these components surplus.
And Tom Tilley, who is basically
a hobbyist based in Thailand,
put this idea together.
And he put all the
information about how
to build your own on the web,
along with the parts list.
And because you can
get these for $3 or $4,
the entire robot costs $8.96.
That's including
the two lollipops.
So this is an example
of the kind of ingenuity
that can be tapped in the
cloud for designing new ideas.
Now, there's also
the idea of the cloud
being used in automation,
in factory environments,
for logistics.
Here's the very well-known
Kiva Systems robots.
These orange things
at the bottom,
they're basically moving
around to help warehouses,
large warehouses like at Amazon.
And the idea is that
all these robots
communicate on an
internal network.
So they're cloud-based.
Not using the global cloud
but using an internal cloud.
And the last way
that I can think of
that robots can
benefit from the cloud
is the availability of humans,
when all else fails.
Because we're never
going to get robots
that will absolutely work
in every single possible
circumstance.
So when a robot gets
stuck, when it's
trying to clean up your house
and it gets into a corner where
it's just really
not sure what to do,
the idea is it can
call a call center.
And, hopefully, humans
will be standing by,
be able to help diagnose and
figure out what went wrong.
It's a little
different than today
where you call a call
center and you get a robot.
And this will be the
other way around.
The robot will call
and get a human.
But I do think
this is interesting,
that humans can also be
available as a resource
to help robots.
Now, I also want to make a
distinction: we're not
talking about the cloud
being used in real time to do
all the computations on the fly.
So, for example, there are a
lot of robotic activities
that require very low latency.
You need to be able to respond
to things very quickly.
And, as you know, we can't
depend on the cloud
for that kind of
real-time response,
at least not yet.
So the idea is that you can
do a lot of pre-computation.
And I'll be talking
about what some
of those architectures
might look like.
But the idea is that
you're pre-computing
a lot of things in
advance so that you
can index them in real time
and then make use of them.
So there will, also, still
be local computation.
So, in summary, these
are the five benefits
that the cloud offers.
The first is big data
so there's access
to all these resources of
images, maps, and models.
The idea of cloud computing
for a variety of computations,
including statistical learning.
The idea of open
source so humans
are able to share code,
data, and designs.
The idea of robots
sharing information,
so learning from each other,
and all those experiences being
accumulated so they can
be, basically, combined
and accessed globally
on demand.
And then, the last is
this idea of call centers.
[VIDEO PLAYBACK]
-You fly that thing?
-Not yet.
-Operator.
-Tank, I need a pilot program
for a B-212 helicopter.
Hurry.
Let's go.
[END VIDEO PLAYBACK]
KEN GOLDBERG: So you remember
that scene from "Matrix,"
right?
This is sort of the idea
we're talking about which
is that the robot, in this
case, the person doesn't need
to have all that information
stored in her head.
She can access it on-demand.
And so this is very similar
to the idea of cloud robotics.
OK.
I have to do this manually.
That's for part two.
And play.
Good.
All right.
So that introduces this
idea in a nutshell.
And now, I want to go into some
of the examples of research
that we're doing in our
lab and in collaboration.
Some of this is collaboration
with researchers
here at Google.
So the first one has
to do with grasping.
And to give you a sense
of what it looks like
from a robot's
point of view, let's
consider something as simple as
just sitting at a dinner table
and wanting to pick up a cup.
Now, it's something that
we humans all do effortlessly.
It's very simple.
But to put yourself into the
position of being a robot,
this is what things
look like to the robot.
So everything is very noisy.
Your perception is imprecise.
There's dropout.
There's a lot of noise.
And one of the other
things to keep in mind
is you, also, don't even have
very good control over your end
effectors.
You don't even know where your
own hands and fingers are.
So it's a very big
challenge on how to do this.
And what we've
been thinking about
is how the cloud
can be used as a benefit.
And this is one idea
that we're exploring,
which is using probability
distributions to
model the environment.
So because we don't
know it exactly,
we can put distributions over
the objects in the environment.
And then we can try to compute
the best strategy given
all the distributions
that are there.
And the way that we do
this is by sampling.
So we're going to sample
from these distributions
and then run an analysis
for each one of them.
And again, this can be done
in parallel, in the cloud.
And then we're
going to, basically,
have all of these report back,
to be able to decide
which strategy,
which motion, has the highest
probability of success.
So let me go into a
more detailed example
of the particular
problem of grasping.
And we'll look at a
two-dimensional problem.
Here we have a part that a robot
might want to pick up.
Now, if you look at
this, you'd say, OK,
it's obvious where I
want to grasp this.
But the fact is that because the
robot's sensing is imprecise,
the true object may
be any one of these.
There's a very large
number of options.
And we don't know.
The robot doesn't know what
is the real object that's
in front of it.
So now the question is
what is the best strategy?
Where should you grasp this
part given that uncertainty?
So the idea is that we
can do an analysis given
any one particular
shape of the object.
We can perform a
mechanical analysis
to figure out the success
of a particular grasp.
And there's some
nice theory, using
the coefficient of friction
and the pushing directions, that
can tell us whether a grasp
has a chance of success.
And that's only
for one particular grasp.
What we now want
to do is consider,
for each of the
possible objects--
So we consider all the objects
as a probability distribution.
Then, we sample from
that distribution.
And we send each
of those samples,
or groups of those samples,
out to nodes in the cloud.
Then, each of those nodes
tries a number
of different grasp strategies,
to determine the probability
of success of each
of those strategies.
And then we accumulate
all this back.
And the idea is
what we want to do
is compute a probability or,
in this case a lower bound,
on the probability of
a successful grasp.
And what you're seeing here,
with this whisker diagram,
is these are all approach
directions from the gripper.
And the length of the whisker
is related to the probability
that that approach direction
will be successful.
So you can compute this.
And, again, this is
very parallelizable.
And then, we want to be able
to do this in the cloud.
And so the algorithm
is outlined here.
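To make the sampling idea concrete, here is a minimal sketch in Python.
The shape-perturbation model, the toy grasp test, and the local process
pool standing in for cloud workers are all illustrative assumptions, not
the analysis from the paper; a real implementation would put the
friction-cone mechanics inside grasp_succeeds and ship the sample
batches out to cloud nodes.

```python
import numpy as np
from functools import partial
from concurrent.futures import ProcessPoolExecutor

def perturb_shape(nominal_vertices, sigma, rng):
    # Sample one possible "true" shape by jittering the nominal vertices,
    # a stand-in for the robot's shape and pose uncertainty.
    return nominal_vertices + rng.normal(0.0, sigma, nominal_vertices.shape)

def grasp_succeeds(vertices, approach_angle, jaw_span=0.06):
    # Toy stand-in for the friction-cone / pushing analysis mentioned above:
    # succeed if the sampled shape fits between the jaws along the grasp axis
    # and the grasp line passes near the centroid. A real mechanics analysis
    # would replace this function.
    axis = np.array([np.cos(approach_angle), np.sin(approach_angle)])
    width = (vertices @ axis).max() - (vertices @ axis).min()
    offset = (vertices @ np.array([-axis[1], axis[0]])).mean()
    return width <= jaw_span and abs(offset) < 0.01

def success_probability(nominal_vertices, approach_angle,
                        n_samples=1000, sigma=0.002, seed=0):
    # Monte Carlo estimate of P(grasp succeeds) for one approach direction.
    # Each sample is independent, so batches can be farmed out to cloud nodes.
    rng = np.random.default_rng(seed)
    hits = sum(grasp_succeeds(perturb_shape(nominal_vertices, sigma, rng),
                              approach_angle)
               for _ in range(n_samples))
    return hits / n_samples

def whisker_diagram(nominal_vertices, n_directions=36):
    # One "whisker" per approach direction: its length is the estimated
    # probability that grasping from that direction succeeds.
    angles = np.linspace(0.0, 2.0 * np.pi, n_directions, endpoint=False)
    job = partial(success_probability, nominal_vertices)
    with ProcessPoolExecutor() as pool:   # local stand-in for cloud workers
        probs = list(pool.map(job, angles))
    return list(zip(angles, probs))

if __name__ == "__main__":
    square = np.array([[-0.02, -0.02], [0.02, -0.02],
                       [0.02, 0.02], [-0.02, 0.02]])
    for angle, p in whisker_diagram(square, n_directions=8):
        print(f"approach {np.degrees(angle):6.1f} deg -> P(success) ~ {p:.2f}")
```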
This is the result of
this whisker diagram
for this particular
part after we
sampled these different
approach directions.
And this gives you some idea of
how this could be parallelized.
We've actually run this on
some multi-core machines.
And one of the things
that's interesting
is that the results are
not always intuitive.
So, for example, for
this one, part D,
the intuition would be to have
the robot gripper come here.
And it turns out
the probability of
success is not that high,
because of the uncertainty
in the part's shape.
So there's a lot of opportunity
for these two corners, here,
to intersect with the
gripper if you do that.
So it turns out this is actually
the optimal grasp in this case.
So, again, we can't trust
our intuition completely.
But the computation can
be done by sampling.
And this can be done very
rapidly in the cloud.
And we've done some
experiments using PiCloud,
an architecture that's
available on the web.
And we're getting
results like this.
So we're seeing that we
are getting approximately
linear speedup for
many of the cases.
But not for all of them.
And one of the things that
we've recently realized
is that there's a problem with
dropout where some of the nodes
don't return.
So we have to be really
smart about the sampling.
We have to oversample the parts
so that we can allow for dropout.
And the next idea is
to do adaptive sampling,
so that we don't have to depend
on all these processors
coming back to us
within a given time.
So a lot of interesting
new work to be done there.
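One way to deal with stragglers, sketched below under simple assumptions:
dispatch more sample batches than you strictly need, then estimate from
whatever has returned by a deadline. The simulated latencies, the toy 70
percent success rate, and the thread pool standing in for cloud nodes are
placeholders; an adaptive scheme would adjust the oversampling factor and
the deadline on the fly.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, wait

def sample_batch(batch_id, n=100):
    # Hypothetical cloud job: evaluate n grasp samples and report how many
    # succeeded. Here it just simulates variable latency and the occasional
    # straggler node that takes far too long to answer.
    time.sleep(random.uniform(0.0, 0.3))
    if random.random() < 0.2:
        time.sleep(2.0)                     # simulated straggler / dropout
    return n, sum(random.random() < 0.7 for _ in range(n))

def robust_estimate(batches_needed=10, oversample=1.5, deadline=0.6):
    # Dispatch more batches than strictly needed, then estimate from whatever
    # has returned by the deadline instead of waiting for stragglers.
    n_dispatch = int(batches_needed * oversample)
    with ThreadPoolExecutor(max_workers=n_dispatch) as pool:
        futures = [pool.submit(sample_batch, i) for i in range(n_dispatch)]
        done, not_done = wait(futures, timeout=deadline)
        trials = sum(f.result()[0] for f in done)
        hits = sum(f.result()[1] for f in done)
        print(f"{len(done)} of {n_dispatch} batches returned in time")
    return hits / trials if trials else None

if __name__ == "__main__":
    print("estimated success probability:", robust_estimate())
```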
The second area,
under grasping, has
to do with object
identification.
So, as I mentioned earlier,
the robot's moving around
in a new environment,
and it comes across an object
that it doesn't recognize.
So this is where I started
working with James, here
at Google, about two
years ago, on using
Google's recognition engine.
And as you all know,
Goggles is very effective.
It's been running
for many years now.
And it has built up a fairly
big library of images,
and tagged images.
It's using machine learning to
associate those tagged images
with sets of web pages.
But our idea is: what if
we could use the same system
and adapt it so that, instead
of giving us web pages,
we could look at an
object, take an image,
and send it up to the web,
and then index, in the cloud,
a variety of descriptors,
of semantics, for that object.
So it could give us things
like the exact geometry,
or a 3D model
of that object.
We could learn about its
physical properties:
its mass, its
friction, its mechanics.
And, importantly, also we
could pre-compute a variety
of different grasping
strategies that
would be appropriate
for that object.
So those could all be stored in
the cloud with those image tags.
So that would
allow the robot to,
then, successfully
pick that object up.
So we've been working on
the architecture of this.
We implemented a
version of it.
And the idea is
that we're making
use of the vast resources that
Google is employing every day,
where people are taking images.
They're also tagging images.
And this is what's allowing
the Google Object Recognition
Engine to work so well.
And then, the idea,
and this is something
that we would hope
to see in the future,
is that companies
and other resources
would become available
that would allow
us to take all kinds of this
semantic information and store
it along with grasp analysis
that's done in the background,
online.
And one of these would be
tools like Matei's terrific
grasp analysis tool,
which could draw on a variety
of different approaches.
For example, simulated
annealing is one technique
that could be used
to find good grasps,
but, again, it's not practical
to do that in real time.
But it could be pre-computed offline.
And then, online, what
happens is the camera
would take an
image of an object
and send that up to the
recognition engine, which
would label and identify that object
and then send
back the CAD model.
We would be able to use
our 3D sensing to adjust,
to basically
transform, that model
to the environment
in front of us.
But then, we'd also have
a set of candidate grasps,
that we would select from,
based on obstructions
and whatever limitations were
present in the environment.
But then, what's also
kind of interesting
is that we would choose a grasp
and then execute that grasp,
and then close the loop
by sending the results back
into the cloud.
So if the grasp was
successful, we'd
be able to report that,
so it accumulates and
the probability can
be adjusted accordingly.
Or if it's unsuccessful, which
is particularly important,
you'd be able to instantly
learn from that mistake.
And so that grasp would be
removed from the library.
And other robots
could be notified
for the next time they try
to pick up that object.
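A minimal sketch of that online loop might look like the following. The
camera, robot, and cloud interfaces here (recognize, fetch_model_and_grasps,
request_human_help, report_outcome) are hypothetical placeholders for the
recognition engine and a cloud grasp database, not real APIs.

```python
from dataclasses import dataclass

@dataclass
class Grasp:
    pose: tuple          # gripper pose relative to the object model
    success_prob: float  # prior estimated offline in the cloud

def pick_object(camera, robot, cloud):
    # 'camera', 'robot', and 'cloud' are assumed interfaces standing in for
    # the local sensors, the manipulator, and the cloud services.
    image = camera.capture()                              # local sensing
    label, confidence = cloud.recognize(image)            # identify the object
    model, grasps = cloud.fetch_model_and_grasps(label)   # CAD model + precomputed grasps

    # Register the stored model against local 3D sensing so the precomputed
    # grasps can be transformed into the robot's frame.
    object_pose = robot.register_model(model, camera.capture_depth())

    # Choose the stored grasp with the best prior that is still reachable
    # given local obstructions.
    candidates = sorted(grasps, key=lambda g: g.success_prob, reverse=True)
    feasible = [g for g in candidates if robot.reachable(g.pose, object_pose)]
    if not feasible:
        return cloud.request_human_help(image)            # fall back to a call center

    grasp = feasible[0]
    success = robot.execute_grasp(grasp.pose, object_pose)

    # Close the loop: report the outcome so the cloud-side priors get updated
    # and other robots benefit -- failures are especially valuable.
    cloud.report_outcome(label, grasp, success)
    return success
```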
So this is the paper that we
published last year on this.
And we have some
more results on that.
It's available online.
Now, I'll shift, in
the last few minutes
I have, to talk
about health care.
And then I'll be able
to take your questions.
On health care,
there are two areas
that we've been looking at.
One has to do with
radiation therapy.
And I hope that none of you, in
the audience, ever need this.
But if you have a cancer
that is in a body cavity,
there are a variety
of treatments.
But one of them is
something called
brachytherapy, intracavitary
brachytherapy.
And what they do
is they, basically,
insert radioactive seeds
into the body using needles.
And then those seeds expose
the cancer to radiation,
hopefully killing the tumors
and sparing the healthy tissue.
Now, these are some of the
devices that are used today
to guide the radioactive
sources into place.
And these look like medieval
torture instruments.
They really haven't
changed much in many years.
And what's important
to notice is
that they're all standardized.
So they're not customized
to the shape of the body.
So our idea is to
use 3D printing.
And to develop a new
version of these implants,
these applicators that
are custom-designed
for the anatomy
of the individual.
So this is an example for a
gynecological case where we're
able to scan the body, build
a three-dimensional model.
And then generate
an implant that's
tailored to the
shape of the cavity.
But what's really
interesting is that we also
plan channels, within the
cavity, using 3D printing,
so that we can guide
the seeds to dwell
right next to the tumor zone
and then be quickly moved
away, to minimize radiation to
the healthy tissue.
So it's, essentially, a
motion planning problem:
how do you design
these paths, these channels
through the solid material,
that will achieve the desired
doses to the tumors and minimize
doses to the healthy tissue?
This is, essentially,
a classic motion planning
problem where we have
multiple paths that
have to coexist
and be disjoint.
So we've been looking at this
in a variety of contexts.
We're working with
faculty at UCSF.
And we have some
initial results.
For example, in simulation,
we did a comparison
with what's used today,
which is a standardized ring.
As you can see, because of
the distance between the dwell
points and the tumors,
the performance here
is not very good.
But this is what you would
achieve with, let's say,
a classic technique like
drilling holes in an implant.
So you have only
linear channels.
And so, here, you'd get
a number of dwell points
right at the tip.
But because of crowding,
there's a limitation
of how many dwell
points you can get.
And then this is
the idea that we're
exploring, where you
have curved channels,
and the only way to fabricate
these is by 3D printing.
And I should say that
computing the optimal set
of channels here is something
that can also be parallelized
and done in the cloud.
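As a toy sketch of that kind of computation, the snippet below assumes the
curved channels are modeled as quadratic Bezier curves and uses a simple
pairwise clearance test with a greedy selection; the entry points, control
points, and clearance value are illustrative placeholders. The real planner
also has to respect curvature limits, keep channels inside the implant, and
optimize dose, and each candidate evaluation can run on a separate cloud node.

```python
import numpy as np

def bezier(p0, p1, p2, n=50):
    # Quadratic Bezier curve: one candidate curved channel from entry point
    # p0, via control point p1, to a dwell point p2 (3-D coordinates).
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def min_distance(curve_a, curve_b):
    # Smallest distance between two sampled curves (brute force).
    d = np.linalg.norm(curve_a[:, None, :] - curve_b[None, :, :], axis=-1)
    return d.min()

def plan_channels(entries, controls, dwells, clearance=0.003):
    # Greedy selection of curved channels that reach their dwell points while
    # staying at least `clearance` apart. Each candidate (and each pairwise
    # check) is independent, which is what makes this easy to parallelize.
    candidates = [bezier(e, c, d) for e, c, d in zip(entries, controls, dwells)]
    chosen = []
    for curve in candidates:
        if all(min_distance(curve, other) >= clearance for other in chosen):
            chosen.append(curve)
    return chosen
```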
The last thing
I'll tell you about
is something we're calling
superhuman surgery.
And this is done
with Jur van den
Berg who's now here at Google,
sitting right over there.
And the idea is that,
you've hopefully heard
about the da Vinci
robot system, which has
been used for many
surgeries around the world.
It's in 2,000
operating rooms today.
And it's a
very effective tool.
But one of the things that's
important to know about it
is that it's operated
purely in master-slave mode.
So it's always under complete
control of the human surgeon.
The robot is just reflecting
what the surgeon's motions are.
Now, what we were
interested in is, can we
start to relax that constraint?
Can we, not replace
the human entirely,
but can we have certain
subtasks performed autonomously
under the supervision
of the doctor?
So, for example,
there are two benefits.
One is to reduce fatigue.
So something like
suturing can be
very challenging for a doctor.
It's just tedious.
And they actually spend
a fair amount of time
performing these sutures.
So if the surgeon
could specify the
position of the sutures,
the robot could
perform them autonomously.
And the other advantage
is for telesurgery,
so that we could
allow, let's say,
a master surgeon, who
may be located very far
away from a patient,
to perform surgery
over a distance.
The challenge, today, is that
directly operating the robot
is not practical
because of time delays.
But if we could automate
each of the subtasks,
then the surgery might be
able to be accomplished
by the surgeon in
a supervisory mode.
So one of the challenges
we were facing
is, how can we automate
such subtasks?
And this is an example
of a human operating
a robot to do suturing.
And suturing is extremely
subtle and complex to program.
So we decided that because
this is so complicated, what
we would use is a technique that
was pioneered by my colleague
Pieter Abbeel, at
Berkeley, which
is to use robot learning
from demonstration.
So we want to have a human,
an expert human surgeon
perform demonstrations
of a task like suturing.
And then, what we're going to
do is learn from those examples.
So to illustrate that, I'll
just use this to give you
the idea of how
this would work.
Let's say we consider a task
like performing this
figure-eight motion.
Now, these are examples
of what the surgeon may
actually perform if we
asked him or her to
execute this trajectory.
Now, these are not very good.
But this is just
the nature of robotic
teleoperation today:
there's actually a fair
amount of noise and imprecision
in how it gets translated.
So you might collect, let's say,
a dozen human demonstrations
that look like this.
But the idea is that they're
all attempts at some
underlying trajectory
that they all have in common.
So we can treat that
as a latent signal
that we're trying to infer from
a number of noisy observations.
So we can use, first of
all, dynamic time warping,
which is well-known and
well-developed from speech
recognition, to time
align all of the examples
that were given.
And then, we basically treat
them as noisy observations.
And we use a Kalman filter
model, a linear dynamic model,
where we basically
take the data
and run it through a
Kalman smoother that
allows us to extract parameters
for an underlying signal,
the latent trajectory.
So we take these
human demonstrations
and we get something
that looks like this.
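Here is a simplified sketch of that pipeline, assuming each demonstration is
given as an array of waypoints. The brute-force DTW, the random-walk state
model, and the noise parameters are stand-ins for the actual linear dynamical
system and smoother used in the work; they just illustrate the
align-then-smooth idea.

```python
import numpy as np

def dtw_align(ref, demo):
    # Align `demo` (T2 x D) to the time base of `ref` (T1 x D) with classic
    # dynamic time warping, then average the demo samples mapped to each
    # reference index so the result is T1 x D.
    T1, T2 = len(ref), len(demo)
    cost = np.full((T1 + 1, T2 + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, T1 + 1):
        for j in range(1, T2 + 1):
            d = np.linalg.norm(ref[i - 1] - demo[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    i, j, pairs = T1, T2, []
    while i > 0 and j > 0:                       # backtrack the warping path
        pairs.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    aligned = np.zeros_like(ref, dtype=float)
    counts = np.zeros(T1)
    for ri, dj in pairs:
        aligned[ri] += demo[dj]
        counts[ri] += 1
    return aligned / np.maximum(counts, 1)[:, None]

def rts_smooth(z, q=1e-4, r=1e-2):
    # Kalman filter plus Rauch-Tung-Striebel smoother with a random-walk
    # state model, applied per dimension: a simplified stand-in for the
    # linear dynamical system fit to the aligned demonstrations.
    z = np.asarray(z, dtype=float)               # T x D noisy observations
    T, _ = z.shape
    x, xp = np.zeros_like(z), np.zeros_like(z)
    P, Pp = np.zeros(T), np.zeros(T)
    x_prev, P_prev = z[0], 1.0
    for t in range(T):
        xp[t], Pp[t] = x_prev, P_prev + q        # predict
        K = Pp[t] / (Pp[t] + r)                  # Kalman gain
        x[t] = xp[t] + K * (z[t] - xp[t])        # measurement update
        P[t] = (1.0 - K) * Pp[t]
        x_prev, P_prev = x[t], P[t]
    xs = x.copy()
    for t in range(T - 2, -1, -1):               # backward smoothing pass
        C = P[t] / Pp[t + 1]
        xs[t] = x[t] + C * (xs[t + 1] - xp[t + 1])
    return xs

def latent_trajectory(demos):
    # Time-align every demo to the first one, average, and smooth.
    ref = demos[0]
    aligned = np.stack([ref] + [dtw_align(ref, d) for d in demos[1:]])
    return rts_smooth(aligned.mean(axis=0))
```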
Now, what's also nice
about this approach
is that it also results
in a smooth trajectory.
So it takes a lot of the
jaggedness, the high-frequency
elements, out of the trajectory.
And then, what we want to do is
take this trajectory and
execute it on the robot.
We use it to perform the motion.
And then, we observe the motion.
And it's not always perfect.
So then, we adjust
it using ideas
called iterative learning.
What we're going to do
is observe the results,
change the parameters,
and observe the results again,
until the trajectory looks
very close to what
we were seeking.
But then, the new
idea, and this
is where it becomes
superhuman, is that we also
want to increase the speed,
increase the velocity.
So now, within this
framework, in the iterative
learning phase, we
turn up the speed.
And we run it again,
and it deviates from
the desired trajectory.
We adjust the parameters, run
it again until it converges,
and then increase
the speed again.
So the idea is, can we keep
increasing the speed over time,
so that what we'll end up with
is a control signal that will
give us the desired trajectory
at the higher speed?
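A toy version of that loop is sketched below, with a sluggish first-order
plant standing in for the robot and a simple error-feedback update standing
in for the actual iterative learning controller; the gains, tolerances, and
speed schedule are made up for illustration.

```python
import numpy as np

def run_plant(command, dt):
    # Toy stand-in for the robot: a first-order lag that responds sluggishly,
    # so fast trajectories are tracked poorly without a learned correction.
    tau, y, out = 0.05, command[0], []
    for u in command:
        y += dt / tau * (u - y)
        out.append(y)
    return np.array(out)

def speedup_with_ilc(reference, base_dt=0.01, gain=0.8,
                     tol=0.01, max_iters=50, speeds=(1, 4, 7, 10)):
    # Iterative learning control sketch: at each speed, repeatedly run the
    # trajectory, measure the deviation from the reference, and fold the
    # error back into the feedforward command until it converges, then
    # increase the speed and repeat.
    commands = {}
    for s in speeds:
        dt = base_dt / s                 # same samples, shorter time step
        u = reference.copy()             # start from the desired trajectory
        for _ in range(max_iters):
            err = reference - run_plant(u, dt)
            if np.abs(err).max() < tol:
                break
            u = u + gain * err           # ILC update: learn from the error
        commands[s] = u
        residual = np.abs(reference - run_plant(u, dt)).max()
        print(f"{s:>2}x speed: residual error {residual:.4f}")
    return commands

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 500)
    trajectory = np.sin(t)               # one coordinate of the latent trajectory
    speedup_with_ilc(trajectory)
```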
This is what it would look
like at one times and four times
speed-up.
This is at seven times speed-up.
And here's at 10 times speed-up.
So we're actually getting
something that's
very close to what we want,
but faster than the samples
that we actually collected.
So the goal here, and this
is still a work in progress,
is: can we get the
robot to actually do
something that is better than
what even the best surgeons can
do?
In other words, can it do
it faster and more precisely?
So we're working
now with a team.
We have a National
Science Foundation grant.
We're working with a team
of colleagues on this.
This is a master surgeon, from
UC Davis, who's helping us.
And we're working
with the Raven,
which is an open-source
surgical robot system.
So I'll just close with some
future directions and some
of the things I'm excited about.
One is this area that
we call belief space.
And you're fortunate to have
some of the world pioneers
and world experts
right here at Google.
Jur being one of them.
And here's the idea:
imagine this is a robot,
here, that wants to get into
this green zone over here,
past two obstacles.
And let's say there are two
light sources, here and here.
The classic technique is
to sort of move like this.
And these circles indicate
the position uncertainty
for the robot.
And as you can see, there's
a fairly decent probability
the robot will have a collision
with one of the two obstacles
en route.
But the new idea, with
belief space, is that
you actually take
this into account.
You build in a
model of uncertainty,
so that the optimal path
is actually to move like this,
down into the light zone,
where the uncertainty
can be reduced.
And then, the path has a very
low probability of collision.
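Here is a small sketch of that comparison, with an isotropic Gaussian belief,
circular obstacles, and a single light zone. The noise magnitudes, the
geometry, and the independence assumption across waypoints are all
illustrative simplifications, not the actual belief-space planner.

```python
import numpy as np

def path_collision_prob(waypoints, obstacles, light_zones,
                        q=1e-4, r=2.5e-3, sigma2_0=1e-4,
                        n_samples=2000, seed=0):
    # Belief-space sketch: propagate an isotropic position variance along a
    # candidate path -- it grows with motion noise and shrinks with a
    # Kalman-style update whenever the robot passes through a "light" zone
    # where it can localize well -- and estimate the chance of hitting any
    # obstacle by sampling from the Gaussian belief at each waypoint.
    # Obstacles and light zones are (center, radius) pairs.
    rng = np.random.default_rng(seed)
    sigma2, p_safe = sigma2_0, 1.0
    for w in waypoints:
        sigma2 += q                                    # motion noise grows the belief
        for c, rad in light_zones:
            if np.linalg.norm(w - c) <= rad:           # good sensing available here
                sigma2 = sigma2 * r / (sigma2 + r)     # variance shrinks
        pts = rng.normal(w, np.sqrt(sigma2), size=(n_samples, 2))
        hit = np.zeros(n_samples, dtype=bool)
        for c, rad in obstacles:
            hit |= np.linalg.norm(pts - c, axis=1) <= rad
        p_safe *= 1.0 - hit.mean()                     # crude independence assumption
    return 1.0 - p_safe

def line_path(a, b, n=40):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return [a + t * (b - a) for t in np.linspace(0.0, 1.0, n)]

if __name__ == "__main__":
    obstacles   = [(np.array([0.5, 0.62]), 0.08), (np.array([0.5, 0.38]), 0.08)]
    light_zones = [(np.array([0.5, 0.05]), 0.15)]
    start, goal = [0.1, 0.5], [0.9, 0.5]
    direct = line_path(start, goal)                    # straight through the narrow gap
    detour = line_path(start, [0.5, 0.08]) + line_path([0.5, 0.08], goal)
    for name, path in [("direct", direct), ("via light zone", detour)]:
        p = path_collision_prob(path, obstacles, light_zones)
        print(f"{name:>15}: collision probability ~ {p:.2f}")
```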
So this analysis is
subtle and complex.
And when it's Gaussian
it's somewhat tractable.
When it's multimodal,
a mixture of Gaussians,
it's actually very,
very challenging.
But how can we do these
kinds of calculations?
And there's a lot
of excitement now,
that there are techniques
that can reduce the complexity
and that we can perform
this in the cloud.
We can do some of
this planning offline
using a cluster-- sorry,
a cluster that's available
in the cloud.
And the other area
is Google Glass.
I see at least one person
in the audience wearing it.
There's the idea that this can
provide augmented reality
and lots of
information on demand.
But it's also
interesting in that it
can be a way of collecting
vast amounts of video
of human experience
through the eye,
through the point of
view of the human.
And we think that, actually,
may be very, very valuable.
Because if you've
collected all this data,
you could use deep learning
and other techniques that
may be able to
extract structure and features
from vast data sets, from vast
libraries of data of humans
manipulating objects
in the environment.
Could we start to learn
structure and manipulation
strategies from
all these samples?
There are many other
things going on.
This is a robot that
has become available.
It's very low cost,
about $150, from a
company here in the Bay Area;
it's called Romo.
And what's nice about it is
it makes use of the cell phone
and all the advances that are
happening with cell phones.
So that, onboard, it has
cameras, computation,
networking, speech
recognition, et cetera.
And so the robot system could
be very low cost as a result.
And what's nice is
that this is designed
to work very closely
with the cloud.
So this robot can be
used for telepresence
but, also, can then be
automatically updated
with new software and data
as it emerges on the cloud.
So that's an example where
the robot, essentially,
is very low cost,
but it's taking
advantage of, leveraging,
the vast resources
that are on the cloud.
And then, some people
are even thinking about
things like this:
the robot app store,
where you'll have something
like an app store for robots.
So when a robot is
moving around and decides
it needs to learn
something new, it
can download an app on demand
for some problem it
doesn't know about.
This all ties into
Internet of Things.
So there's been a lot of
discussion about the idea
that all kinds of
objects, not just robots,
will be networked together.
This will be a great
benefit for robots,
obviously, because they'll be
able to access and communicate
with devices and sensors
in their environment.
And General Electric is
talking about this in the
context of their industrial
systems, where, for example,
large airplane turbine engines
will be communicating
with each other in the cloud.
So they'll be able to
share operating set
points across many
different engines.
And there are lots of
interesting questions
about proprietary data: how
do you share information
without violating the
confidences between, let's say,
competing companies?
Now, there are a lot of
things going on here.
And there's a resource that
we've set up at Berkeley.
It's a website, if you want
to learn more about this.
We keep it up-to-date
with new information.
And one thing I want to
mention is a special issue
of this journal, the IEEE
Transactions on Automation
Science and Engineering.
And if you want to publish any
papers in this general area,
I encourage you to submit them.
This special issue will
come out next year.
And I want to thank some of
the organizations that
have made this work possible
for my students and me.
And we're at
Berkeley if you want
to come up, and visit us,
and learn more about this.
So I'll close with
this quick summary
of what cloud
robotics has to offer.
And I can't overstate the
amount of potential I see here.
This is really
changing our field.
It is rare, in the
course of your career,
that you see a new development
occur that really changes
the basic assumptions,
the fundamental ideas
around robotics.
And this is exactly
what's happened
in the last couple of years.
So you're exactly at the right
place for this here at Google.
And thank you for
your time here today.
Thanks.
[APPLAUSE]
