Robots will cohabit our environments, and in buildings like airports, car parks, or warehouses, mobile robots can rely on wheels to locomote efficiently on these flat terrains. But there are a lot of people who work in environments unlike these: they have to inspect mines and sewage systems, they work in industrial facilities with stairs, or, after an earthquake or natural catastrophe, rescuers go into these buildings and put themselves into dangerous situations. Wouldn't it be great if we could help these people with a mobile robot? It's in these environments where a robot with legs can relieve itself from the constraints posed by wheeled and tracked vehicles: much like us when walking, it can go up and down steps, climb over obstacles, crawl underneath obstacles, and even go up and down stairs.
In my work, I focus on the mobility of legged robots in rough terrain, and there are several key constraints that we have to take into account: the robot makes contact with its feet on an intermittent basis, so it has to choose stable footholds and move its legs such that it doesn't collide with the environment; it has to maintain stability at all times, make sure it doesn't collide with itself, and respect joint position and torque limitations. I will talk through my work in five parts.
I start with the evaluation and modeling of range sensors, which allow the robot to perceive its environment and let us understand the errors it makes when taking measurements. Then I look at terrain mapping, where the robot takes this sensor data and maps it into a representation that is understandable to the robot. Then I look at how to control the robot. Now that we have a map and know how to control the robot, I look at how the robot can look into the future, see where it is going to step, and find a safe path over obstacles. Finally, I broaden the context and look at collaborative navigation between a flying and a walking vehicle to improve their navigation skills.
We start with the evaluation and modeling of range sensors. Here we look at different technologies, and my goal is to model and understand the errors and the noise that we get from these sensors, because by understanding the errors we can create high-quality maps from this data.
Looking at the performance criteria: we have a robot with a sensor in front of it, and the important criteria are the minimum and maximum range, the horizontal and vertical field of view, but also, importantly, the density, that is, the number of points we get per measurement, whether the sensor works in sunlight, and, as I mentioned, the resulting error when taking these measurements.
I have evaluated four different sensor technologies: structured light, laser rangefinders (lidar), time-of-flight cameras, and active stereo cameras. In the following, I want to focus on the noise modeling of the Kinect v2 time-of-flight camera. This camera works by strobing infrared light onto the scene, from which it is reflected back to the sensor; by measuring the phase difference, we can measure the time of flight of each ray and create a depth image with one depth value per pixel. What I'm interested in, shown here for one ray, is the axial noise along the measurement ray and the lateral noise, which is perpendicular to the measurement ray.
To do this, we set up an experiment where we use a target at different distances, which we can rotate. On the left you see a sample of the resulting depth image. From the top view, we use the angle theta for the target rotation and the angle alpha to model the horizontal incidence angle of the sunlight. In this view you see how we measure the axial noise along the measurement ray, and from the front view we measure the lateral noise, by looking at how sharp an edge of the target we can reconstruct.
Taking many measurements at different distances and angles, we get this plot, where we see how the noise increases with distance and with the target angle: the target angle is plotted on the horizontal axis, the error on the vertical axis, and the colors are the different ranges at which we measured. From this, we came up with an empirical noise model which predicts the noise of these measurements very accurately. For the lateral noise there is no such clear tendency, and we chose to model it as the 90th percentile of the measured errors.
If we go outside into sunlight, we see a different behavior: at more direct sun angles, we get higher noise. We can understand this through Lambert's cosine law, which describes the reflected light intensity as a function of the angle at which the sunlight hits the target, and so we introduce an additional term in the model that depends on the sunlight angle alpha. Now we can compare going from indoors to overcast and direct sunlight outdoors: on the left-hand side we see the axial noise and on the right the lateral noise, and when we go outside, the noise in direct sunlight is an order of magnitude bigger than indoors.
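To make this concrete, here is a small sketch of such an empirical noise model in Python. The functional form and all coefficients below are illustrative placeholders, not the exact values identified for the Kinect v2:

```python
import numpy as np

def axial_noise(z, theta, alpha=None):
    """Illustrative axial noise model for a time-of-flight camera.

    z: distance to target [m], theta: target rotation angle [rad],
    alpha: sunlight incidence angle [rad] (None = indoors).
    Coefficients are placeholders, not the identified Kinect v2 values.
    """
    # Polynomial growth with distance, blow-up at grazing target angles.
    sigma = (1.5e-3 - 0.5e-3 * z + 0.3e-3 * z**2
             + 0.1e-3 * z**1.5 * theta**2 / (np.pi / 2 - theta)**2)
    if alpha is not None:
        # Additional sunlight term following Lambert's cosine law:
        # more direct sun (small alpha) means stronger interference.
        sigma += 1.0e-3 * z**2 * np.cos(alpha)
    return sigma

def lateral_noise(z):
    """Lateral noise modeled as a constant 90th-percentile bound (placeholder)."""
    return 3.0e-3 * np.ones_like(np.asarray(z, dtype=float))
```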
Comparing this with all the other sensors, we see similar behavior, and we can say that for the range we are interested in for a walking robot, one to three meters, the noise is in a range that is acceptable for creating useful maps. We can also see that the noise varies from very low to high over these distances, so we need to take this noise into account in order to create the best-quality maps we can get. Looking at the other measurement characteristics, this table can inform us how to select a sensor for our mobile robot and use case: the PrimeSense sensor, for instance, is really not sunlight resistant and could not be used outdoors.
Another important characteristic is the density at which we can sense: most sensors do well here, but one is really lagging behind, which I will explain with the following example. Here is a top view of the robot, and you see the measurements taken during one rotation of the lidar sensor. You see in the close-up that it takes two seconds to cover a 1 by 1 centimeter area, and for points further away it takes up to 22 seconds to cover every cell.
This is in comparison to the Intel RealSense camera, where we get many points close up, up to two and a half thousand points per square centimeter, which is probably more than we need. We can approximate the sensor's measurement density with a model that depends on the surface normal and on the distance from the sensor to the point. We can then use the inverse of this model to predict the ideal resolution at which the sensor should be run: for the RealSense in this case, we want to lower the resolution to roughly 320 by 240 to cover every cell at least once with one measurement.
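As a rough sketch of this idea, the following Python snippet models the expected number of points per map cell and inverts the question to pick a sensor resolution. The geometric model and all numbers are simplified assumptions for illustration, not the exact model from the evaluation:

```python
import numpy as np

def points_per_cell(cell_area, distance, incidence_angle,
                    n_pixels, fov_solid_angle):
    """Approximate measurement density on a terrain cell.

    A cell of area `cell_area` [m^2] at `distance` [m], seen under
    `incidence_angle` [rad] between view ray and surface normal, subtends
    a solid angle cell_area * cos(angle) / distance**2; multiplying by the
    sensor's pixel density (pixels per steradian) gives the expected
    number of returns per frame. Simplified pinhole model.
    """
    cell_solid_angle = cell_area * np.cos(incidence_angle) / distance**2
    pixel_density = n_pixels / fov_solid_angle  # pixels per steradian
    return cell_solid_angle * pixel_density

# Invert the model: smallest resolution that still yields >= 1 point per
# 1x1 cm cell at 3 m distance and 60 deg incidence (assumed values).
fov = np.deg2rad(70) * np.deg2rad(43)          # assumed field of view
for width, height in [(1280, 720), (640, 480), (320, 240)]:
    n = points_per_cell(1e-4, 3.0, np.deg2rad(60), width * height, fov)
    print(f"{width}x{height}: {n:.1f} points per cell")
```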
Now that we understand these sensors, let's look at terrain mapping. Here the goal is to locally create a map online, a representation of the terrain built from these sensors. We model the terrain as a 2.5D surface map, and importantly, we rely only on proprioceptive localization, which makes the approach much more robust because we don't need an absolute or external localization system. So let's imagine the robot on the terrain at time t1; as it walks forward, it would like to create the map. In the classical view, sitting in the inertial frame outside the robot, we know from experience and from the literature that proprioceptive sensing, which relies only on inertial and kinematic data, drifts over time in position and in yaw. So the position of the robot as it walks becomes more and more uncertain, depicted here as the orange robot. If we do nothing and map straightforwardly, we create inconsistencies in the map due to this drift, which are a problem for planning.
In my work, I propose to take a different approach and model the terrain from a robot-centric perspective. Now we sit in the situation of the robot at the current time: the robot knows its current position exactly, but its past positions become more uncertain. If we do the mapping this way, we see that in front of the robot the map is very certain, but data that the robot has not seen for a while becomes more uncertain, and we can introduce confidence bounds as upper and lower estimates of where we expect the real terrain to be.
Now, formalizing this, I have separated the work into two parts, the data collection and the data processing, and I will go through these steps. First we have the range measurements, and for each cell where we have a measurement, we have to transform it into a height. This is straightforward: we take the range measurement vector, transform it into the map frame, and use a simple projection to obtain the vertical height. Importantly, we also want the error of the cell, which results from the sensor measurement covariance, the noise model we evaluated in the first part, and from the sensor orientation covariance in roll and pitch, which we get from the legged state estimation. Now, if a cell is empty, we can simply fill in this data, but if there is already data in it, we fuse the new data with the existing estimate in the sense of a Kalman filter, where we evaluate the new height and the new variance of the cell as follows. With this per-cell Kalman filter we can create a consistent map; this is how the range data gets into the map.
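As a minimal sketch of this per-cell fusion, here is the standard one-dimensional Kalman (inverse-variance weighted) update in Python; the function name is mine, but the update equations are the usual ones:

```python
def fuse_cell(h_map, var_map, h_meas, var_meas):
    """Fuse a new height measurement into a map cell (1D Kalman update).

    h_map, var_map:   current height estimate and its variance
    h_meas, var_meas: new measurement height and its variance
    Returns the fused height and the (always smaller) fused variance.
    """
    h_fused = (var_meas * h_map + var_map * h_meas) / (var_map + var_meas)
    var_fused = (var_map * var_meas) / (var_map + var_meas)
    return h_fused, var_fused

# Example: a confident cell (1 cm std) fused with a noisy reading (5 cm std).
print(fuse_cell(0.30, 0.01**2, 0.35, 0.05**2))  # stays close to 0.30
```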
In the second part, I introduce the errors that we get from the robot motion. For this, we have to extend the height variance to a full 3x3 covariance matrix per cell. We can then take the robot pose uncertainty from time k to k+1 and obtain the new cell covariance from the previous cell covariance and the robot pose covariance update from time k to k+1, together with the corresponding Jacobians, to do the proper error propagation in the map. Now we have a 3x3 covariance matrix attached to the height of each cell.
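The propagation step is the familiar first-order, Jacobian-based update of a Gaussian under an uncertain transform. A hedged sketch, with my own variable names and a simplified structure:

```python
import numpy as np

def propagate_cell_covariance(P_cell, J_pose, P_pose_update):
    """First-order propagation of a cell's 3x3 position covariance.

    P_cell:        previous 3x3 covariance of the cell position
    J_pose:        3x6 Jacobian of the cell position w.r.t. the robot pose
    P_pose_update: 6x6 covariance of the pose change from time k to k+1
    The cell keeps its own uncertainty and accumulates the pose drift.
    """
    return P_cell + J_pose @ P_pose_update @ J_pose.T

# Example: pure translational drift (1 mm std per axis per update).
P_cell = np.diag([1e-6, 1e-6, 4e-6])
J = np.hstack([np.eye(3), np.zeros((3, 3))])   # simplified Jacobian
P_pose = np.diag([1e-6] * 3 + [1e-8] * 3)      # assumed drift covariance
print(propagate_cell_covariance(P_cell, J, P_pose))
```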
What we really want, though, is the height together with a lower and an upper bound, so we have to look at each cell individually, and I will explain this with the following illustration. Imagine this is a profile cut through an obstacle in the terrain, and we now look at one cell. We look at the error ellipsoids and create a probability density function, weighted empirically based on the distance from this cell to its neighboring cells. From this probability density function I can integrate to obtain the cumulative density function, from which I can sample the lower and upper quantiles, and from these predict the minimum and maximum height expected at this position. This was for one cell; if I do it for the entire map, I can create these confidence bounds.
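Here is a compact sketch of that idea: mix the Gaussians of a cell and its neighbors, weighted by distance, and read off quantiles from the mixture's CDF. The weighting kernel and quantile levels are illustrative choices, not the thesis values:

```python
import numpy as np
from scipy.stats import norm

def height_bounds(heights, stds, dists, q=(0.05, 0.95)):
    """Lower/upper height bounds for a cell from a distance-weighted
    Gaussian mixture over the cell and its neighbors.

    heights, stds: height estimates and std devs of cell + neighbors
    dists:         distances from the query cell (0 for the cell itself)
    q:             quantile levels for the bounds (illustrative: 5%/95%)
    """
    w = np.exp(-np.asarray(dists) ** 2 / 0.01)  # empirical distance weighting
    w /= w.sum()
    zs = np.linspace(min(heights) - 3 * max(stds),
                     max(heights) + 3 * max(stds), 1000)
    cdf = sum(wi * norm.cdf(zs, h, s) for wi, h, s in zip(w, heights, stds))
    lower = zs[np.searchsorted(cdf, q[0])]
    upper = zs[np.searchsorted(cdf, q[1])]
    return lower, upper

print(height_bounds([0.30, 0.32, 0.10], [0.01, 0.02, 0.03],
                    [0.0, 0.05, 0.05]))
```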
Now, the same view in 3D: this is the original terrain on the left, and on the right I have plotted these ellipses from the top view. If we go through all of these points, what we get is the correctly smoothed-out terrain, on the left, as the estimated terrain; here blue means that we are more certain about this cell's height and red means we are less certain about it. In the middle and on the right you see the upper and lower confidence bounds, which tell us where the maximum and minimum expected terrain lies. Importantly, the robot will later want to choose certain areas to step on, because then the robot knows that even though the terrain is uncertain, there are still positions where it can safely step. Now, here are two examples of real-time terrain mapping: on the left-hand side the robot StarlETH indoors, with a structured-light sensor and a static gait; on the right-hand side, outdoors with a completely different setup, with a rotating laser sensor and a dynamic trotting gait on the robot ANYmal. In both cases the same principle applies, and we see the difference in the quality of the maps.
Now, to analyze this more rigorously, we have done the following experiment: we created a scene and scanned it as ground truth with a stationary laser scanner, obtaining an absolute point cloud of the environment. You see in the video how the robot walks through this environment, and in the front you see the current scan it is taking, in comparison to the ground truth. Now we evaluate this. This is a top view of the terrain; on the left-hand side I show how the map evolves, and we look at one profile cut from the side in the right-hand plot. In the beginning, we see that the estimated terrain, plotted as blue dots, and the real terrain, the black line, are very well aligned; that is the way it should be. As the robot walks, we notice in the back that the terrain estimate starts to drift away, and in the final picture we see that there is quite an error between the estimated and the real terrain. However, the method is shown to work, because the true terrain lies well within the confidence bounds of the estimated terrain. Now that we have mapping, I want to show you how we control the robot.
The overall goal was to create a controller for robust motion tracking on legged robots, with a focus on creating an interface that separates motion generation from control. So how do we control a robot? Typically we use an interface such as a joystick or a computer screen, or we use motion scripts that we can replay on the robot. Going a step further, we can create footstep planners or full motion planners for complex motions. These then somehow interface with the controller, which does the real-time tracking on the robot: here we see the classical loop of state estimation and control, which tracks the desired motion in real time. But every time we create a new interface, it is error-prone work.
So I propose a universal interface for legged robot control, which I call the Free Gait API. This is a unifying interface where I define motions by a sequence of knot values. In a second step, I transition from the current state to the desired motion and spline through these knot points for trajectory generation. Then, in the real-time control, we sample this trajectory at the resolution we need, and finally we track the desired states for the swing legs and for the base. I want to focus a little on this Free Gait API, which is a very important part of this work.
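To illustrate the knot-value idea, here is a minimal sketch: a motion is defined by a few knot points, a spline is fitted through them, and the real-time loop samples the spline at its own rate. Using scipy's CubicSpline and a 400 Hz control rate are my assumptions for illustration, not the actual Free Gait implementation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# A swing-foot height profile defined by a few knot values (time, height).
knot_times = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
knot_heights = np.array([0.0, 0.05, 0.12, 0.05, 0.0])  # lift, peak, touch down

trajectory = CubicSpline(knot_times, knot_heights)

# The real-time controller samples the spline at whatever rate it needs,
# e.g. a 400 Hz whole-body control loop (assumed rate).
t = np.arange(0.0, 1.0, 1.0 / 400.0)
reference_heights = trajectory(t)        # position reference
reference_velocities = trajectory(t, 1)  # first derivative for tracking
```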
The Free Gait API consists of two main motion types. One is leg motions, which can be defined either in joint space or in Cartesian space for the end effector, shown here for the red leg. The other is base motions, where we define the position, orientation, or velocities of the robot's torso, which then automatically determines the motion of the legs that are on the ground. There are different types of commands I can send. One is a target, which simply says: go to this position with your leg or with the base. For more complex motions, I can send full trajectories. I have also created this library with a set of automated tools, which means I can send a target foot location, in the case of a footstep, and the robot will automatically figure out how best to step there. Similarly, the BaseAuto command generates the base pose automatically, given the current foothold situation, such that all footholds can be reached and the robot stands stably. From these elements of the API, we can mix and match commands together, and here is a simple example of the robot walking: it uses the BaseAuto command to make sure the base is aligned correctly and in turn uses simple footstep commands to walk.
In another example, we use a joint trajectory, for instance to change the leg configuration, and then use the end effector to touch something. So with this tool we can really take these elements and combine or parallelize them as we want. Importantly, all of these commands can be expressed in an arbitrary frame, whichever is appropriate for the task at hand.
To recap: we have created an API for the versatile, robust, and task-oriented control of legged robots, and I will illustrate this with the following example. Here we asked the robot to do three-legged push-ups, where one leg should stay in the air, and on the right-hand side you see the motion script we used to program this. First, we use the BaseAuto command to move the base to a stable position; then we tell it that the right front leg should move to a certain height in the footprint frame, which is the frame defined between the legs. Then we simply ask, with a base target command, to move the base to a height of 38 centimeters while keeping the leg at the same position; the adaptation of the leg positions happens automatically. We then move the base up and down, here to 45 centimeters, and finally lower the leg again with a straight profile to the ground.
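As a hedged sketch of what such a motion script could look like, here is the push-up sequence written as Python-style pseudo-commands. The command names and parameters mirror the Free Gait concepts described above but are my own illustrative notation, not the actual script format:

```python
# Hypothetical script for the three-legged push-up, following the
# Free Gait command types described above (notation is illustrative).
script = [
    # 1. Find a stable base pose for the current stance automatically.
    {"base_auto": {}},
    # 2. Lift the right-front foot to a height in the footprint frame
    #    (the frame defined between the feet).
    {"end_effector_target": {"leg": "RF", "frame": "footprint",
                             "position": [0.3, -0.2, 0.2]}},
    # 3. Lower the base to 38 cm while the lifted leg stays in place.
    {"base_target": {"frame": "footprint", "height": 0.38}},
    # 4. Push up again to 45 cm; repeat steps 3-4 for more push-ups.
    {"base_target": {"frame": "footprint", "height": 0.45}},
    # 5. Put the foot back down with a straight profile.
    {"footstep": {"leg": "RF", "profile": "straight",
                  "target": [0.3, -0.2, 0.0]}},
]
```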
So with these 35 lines of code, I have programmed the robot to perform this complex motion through the API. Now, when working in real environments, it is important to accurately track the robot's motion with respect to the environment.
accurately now to show this with
pre-program the sequence of footsteps
here shown as blue dots on the ground in
the world frame and the robot is
localized with respect to the world
frame with a scale matching with the
laser and we're going to sub multiple
ones from different positions and show
the result we can see that after one
step already the robot steps on to their
desired locations and when repeating is
from different positions you can see
that the motion converges very quickly
and even it's hard to see but the person
who pushes the robot or later we use a
pipe on the ground too diverges from the
desired footsteps
So the motion can be tracked robustly, even under disturbances. We have used this in several projects, for example in structured rough terrain that we know: here we see the robot choosing from a set of motion templates to climb over obstacles and over gaps, where the appropriate motion is either known from the environment or chosen by the user. Here we rotate the legs to step over big obstacles, and finally we can also climb very steep industrial stairs, inclined at 45 degrees. Since this is so flexible, we can go ahead and, for example, change the height to make the robot crawl, or change its leg configuration to a spider-like one, so that we can go into pipes and use the flexibility of the robot to achieve these maneuvers. Going one step further, we can do simple manipulation: here you see the robot pressing the button of an elevator. In this task, we use an AprilTag, which you see in the video, to determine the position, and we simply tell the robot through the Free Gait API: this is where we want you to push. These are templated motions that we can choose from a library. Of course, this interface is also meant as an interface for motion planning: on the left-hand side is work I did with my student, where we did kinematic whole-body motion planning in order to climb these stairs, and we can also do highly dynamic maneuvers, as on the right-hand side, where we see the robot performing a jumping maneuver.
Now that we know how to control the robot, our goal is to put things together and create a locomotion planner that uses the map and these control capabilities to walk over rough terrain. The goal is for the system to work in previously unseen, dynamic environments; everything should be fully self-contained, with no external equipment, and all planning happens in real time.
This is the overview of the entire scheme, and I will go through it step by step. Here again we see the classical control loop of state estimation and whole-body control, and we have seen in parts one and two how we use the distance sensors to create a consistent elevation map of the terrain. The locomotion planner then takes the terrain data at the current position of the robot and runs it through a set of processes to create a Free Gait motion plan, which is then executed, as we have seen in part three, through the whole-body controller. I am now going to focus on the locomotion planner and go through it step by step.
When we get the elevation map, we can process it: for every cell we can compute, for example, the surface normal, which will be important later. Then we process it with different quality measures, such as slope, curvature, roughness, and of course the terrain uncertainty, to create a foothold quality measure that tells us for each cell whether it is good to step on or dangerous. Finally, we create a three-dimensional signed distance field in order to do fast collision checking.
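As a minimal sketch of such a per-cell foothold quality measure, the following Python snippet combines slope, roughness, and map uncertainty on an elevation grid. The measures follow the ones named above, but the weights and thresholds are made-up illustrative values:

```python
import numpy as np

def foothold_quality(elevation, variance, cell_size=0.01):
    """Per-cell foothold quality in [0, 1] from an elevation grid.

    Combines slope (from finite differences), roughness (local height
    deviation from the 4-neighbor mean), and the map's own uncertainty.
    Weights and normalization constants are illustrative.
    """
    gy, gx = np.gradient(elevation, cell_size)
    slope = np.arctan(np.hypot(gx, gy))           # [rad]
    mean = (np.roll(elevation, 1, 0) + np.roll(elevation, -1, 0) +
            np.roll(elevation, 1, 1) + np.roll(elevation, -1, 1)) / 4.0
    roughness = np.abs(elevation - mean)          # [m]
    quality = (1.0
               - 0.5 * np.clip(slope / np.deg2rad(45), 0, 1)
               - 0.3 * np.clip(roughness / 0.05, 0, 1)
               - 0.2 * np.clip(np.sqrt(variance) / 0.03, 0, 1))
    return np.clip(quality, 0.0, 1.0)
```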
First, we want to generate the sequence of steps. Here is a top view: on the left we see the robot standing in an arbitrary configuration, and on the right we see the goal pose. The process works as follows. First, we interpolate a set of stances between the start and the goal; then we move to the next stance and choose the appropriate leg, which gives us the first step. In the next step we do the same thing over again: interpolate, choose the next stance, and choose the second step. If we do this, we can start from any configuration, which is nice, but also, since we recompute the interpolation every time, the motion converges to a regular spacing between the left and the right legs, which is important for stability and speed during locomotion. And because we do this recomputation every time, the method is robust to deviations from the original plan, which is nice: the motion generation always ends up in a square stance of the robot.
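Here is a toy version of this interpolate-and-step loop in Python, for a quadruped reduced to 2D foot positions. The leg selection rule, stepping the foot with the largest error to its interpolated target, is a simplification I chose for illustration:

```python
import numpy as np

def next_step(current_feet, goal_feet, fraction=0.25):
    """Pick the next footstep toward a goal stance.

    current_feet, goal_feet: dicts mapping leg name -> 2D foot position.
    Interpolates a stance a `fraction` of the way to the goal and steps
    the leg with the largest error to its interpolated target.
    """
    targets = {leg: current_feet[leg] +
               fraction * (goal_feet[leg] - current_feet[leg])
               for leg in current_feet}
    leg = max(current_feet,
              key=lambda l: np.linalg.norm(targets[l] - current_feet[l]))
    return leg, targets[leg]

feet = {"LF": np.array([0.3, 0.2]), "RF": np.array([0.3, -0.2]),
        "LH": np.array([-0.3, 0.2]), "RH": np.array([-0.3, -0.2])}
goal = {leg: pos + np.array([1.0, 0.0]) for leg, pos in feet.items()}
for _ in range(4):                  # re-interpolate after every step
    leg, target = next_step(feet, goal)
    feet[leg] = target
    print(leg, target)
```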
Now imagine the robot stands in front of a gap, and the nominal foothold tells it to step right into the gap. Our goal is to adjust these footsteps in order to still find a safe and kinematically feasible motion. So we sample all candidates in a search radius, and we can categorize them. First, there are foothold candidates that are invalid from the terrain's point of view but would actually be reachable by the robot, like the yellow ones here. Then there are valid blue areas, which are fine to step on but not reachable by the robot. And finally, there are positions that are valid from both the terrain and the kinematics point of view, and we choose the closest of these to the nominal foothold as the adjusted foothold.
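In code, this candidate search can be sketched as follows; `terrain_valid` and `reachable` stand in for the foothold quality check and the kinematic check from the pose optimizer described next, and the sampling scheme is my simplification:

```python
import numpy as np

def adjust_foothold(nominal, terrain_valid, reachable,
                    radius=0.15, n_samples=200):
    """Return the valid, reachable candidate closest to the nominal foothold.

    nominal:       2D nominal foot position
    terrain_valid: callable pos -> bool (foothold quality check)
    reachable:     callable pos -> bool (kinematic check, pose optimizer)
    """
    rng = np.random.default_rng(0)
    # Sample candidate offsets in the search radius, nearest first.
    offsets = rng.uniform(-radius, radius, size=(n_samples, 2))
    offsets = offsets[np.linalg.norm(offsets, axis=1) <= radius]
    offsets = offsets[np.argsort(np.linalg.norm(offsets, axis=1))]
    for off in offsets:
        candidate = nominal + off
        if terrain_valid(candidate) and reachable(candidate):
            return candidate         # closest valid & reachable candidate
    return None                      # no safe foothold in the radius
```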
How do we check this kinematic reachability? This is done in the so-called pose optimizer, whose task is, given the foot locations, to find the robot base position and orientation that maximize reachability and stability. So in the image, the goal is, given those red dots at the feet, to find the base position and orientation such that the legs can reach the feet while the robot is still stable.
We can formalize this as a nonlinear optimization problem, where the cost function penalizes the deviation of the current foot setup from a default kinematic configuration, as shown here by the difference between the foot position and the foot position in the default configuration. We can then increase stability by penalizing the deviation of the center of mass from the centroid of the support polygon, shown here with the support polygon on the ground. To constrain the solution, we add stability constraints, which ensure that the center of mass stays within the support polygon, and joint limit constraints, which make sure that the robot's legs don't stretch too far. We can solve this problem very efficiently as a sequential quadratic program, in roughly 0.5 to 3 milliseconds on the onboard PC of ANYmal. On the left-hand side you see a couple of examples where, given only those footholds, the optimizer finds solutions that fulfill the kinematic and stability constraints; on the right-hand side you see an interactive demo where I drag the feet around and the pose of the robot is adapted automatically.
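Here is a small sketch of such a pose optimization with scipy. This planar, orientation-free version only captures the structure of the problem (default-configuration cost, centroid cost, support-region constraint) and is not the actual SQP formulation:

```python
import numpy as np
from scipy.optimize import minimize

feet = np.array([[0.35, 0.25], [0.35, -0.25],
                 [-0.30, 0.22], [-0.33, -0.20]])         # stance feet (x, y)
default_offsets = np.array([[0.33, 0.22], [0.33, -0.22],
                            [-0.33, 0.22], [-0.33, -0.22]])  # default config

def cost(base_xy):
    # Deviation of each foot from its default position under this base pose,
    # plus deviation of the base (standing in for the CoM here) from the
    # centroid of the support feet.
    kinematic = np.sum((feet - (base_xy + default_offsets)) ** 2)
    centroid = np.sum((base_xy - feet.mean(axis=0)) ** 2)
    return kinematic + 0.5 * centroid

def support_margin(base_xy, margin=0.05):
    # Simplified stability constraint: keep the CoM inside the bounding box
    # of the feet by at least `margin` (a real support polygon test would
    # use the convex hull edges).
    lo, hi = feet.min(axis=0) + margin, feet.max(axis=0) - margin
    return np.concatenate([base_xy - lo, hi - base_xy])  # all must be >= 0

result = minimize(cost, x0=feet.mean(axis=0), method="SLSQP",
                  constraints={"type": "ineq", "fun": support_margin})
print(result.x)   # optimized base position
```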
Now that we have the adjusted footholds, in the last step the goal is to connect the start and the target location with the shortest collision-free swing trajectory. Since we have parameterized the swing trajectory as a spline, we can optimize over its knot points. We do this in an optimization problem where the goal is to minimize the path length while making sure we don't run into collisions, with the help of a collision function that is based on the signed distance field we generated from the elevation map. Here is an example: we see the terrain from the side with the robot standing on it, first with a low confidence bound, as is typical. Next to it is the signed distance field, which tells us how close we are to the obstacle, and a typical solution in this case is that the robot swings smoothly over the terrain while staying collision-free. Now imagine we don't know the terrain that well and the confidence bound is much higher, for example for a hind leg: then the collision field is bigger, and the solution is a much steeper path for the swing-leg trajectory. This is nice: in an uncertain area, the robot steps much more carefully, from the top down, in order to make sure it doesn't collide with the environment.
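A sketch of this knot-point optimization in Python; the terrain and penalty formulation are stand-ins (a height function instead of a real 3D signed distance field, and a soft collision penalty instead of constraints), so this only illustrates the structure of the problem:

```python
import numpy as np
from scipy.optimize import minimize

start, goal = np.array([0.0, 0.0]), np.array([0.4, 0.0])  # swing in x-z plane

def terrain_height(x):
    # Stand-in terrain: flat ground with a 10 cm step for x in [0.15, 0.25].
    return 0.10 if 0.15 <= x <= 0.25 else 0.0

def objective(knots_flat, clearance=0.03):
    pts = np.vstack([start, knots_flat.reshape(-1, 2), goal])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    # Penalize knots with less than `clearance` above the terrain; in the
    # real system this comes from the signed distance field, inflated by
    # the terrain confidence bounds (more uncertainty -> steeper path).
    violation = sum(max(terrain_height(x) + clearance - z, 0.0) ** 2
                    for x, z in pts[1:-1])
    return length + 100.0 * violation

x0 = np.linspace(start, goal, 6)[1:-1].ravel()   # 4 free knot points
result = minimize(objective, x0, method="Nelder-Mead")
print(result.x.reshape(-1, 2))                   # optimized swing knots
```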
Putting things together, we did a comparison with a blind reactive walking controller that we implemented. On the left-hand side, the robot walks blind: it takes big steps and can only feel the ground through the contact forces. As we see in the success rate, this works well up to obstacles of 10 centimeters, but for higher obstacles it simply fails, and we even needed a ramp for the robot to actually get up onto this obstacle. In comparison, on the right-hand side, you see that with active mapping the robot steps onto the obstacles much more reliably; it actually takes the same step length but is faster in the execution of the motion. We have shown that we can achieve walking over obstacles of up to 50% of the leg length.
Now, in a slightly more complex scenario, we see the robot walking over stairs, but we don't tell it anything about the stairs; for the robot, this is just arbitrary terrain. Here we use the stereo camera in front of the robot to create the elevation map: the areas shown in blue are valid to step on, the white ones are not. You can see that the map is not perfect, yet our framework can robustly track this motion. We will see in a second how the hind legs slip, but due to the replanning process this is not a problem: the replanning simply continues from where the foot ended up. And since we have knowledge of the surface normals, we can feed them back to the controller and use the force control abilities of the robot to constrain the reaction forces on the ground, such that the robot does not slip on inclined surfaces like these.
The reactiveness of our approach is shown here, where we throw stones in front of the robot; you see in the map at the top how quickly the entire process reacts, and since the replanning happens continuously, the robot can safely walk over all the obstacles that were thrown in front of it. We can further show the robustness by pushing and pulling on the robot while it walks. Here, ANYmal uses localization to navigate to a global goal in the room, and although we strongly disturb the robot, it finds and regenerates the step sequence to the goal location. Finally, to showcase the robustness of the approach, we walk over moving obstacles, over a person as a soft body, and even along a very narrow path. So this framework has really been shown to be flexible in all of these environments and tasks.
Now that we have a robot walking over rough terrain, I want to broaden the scope a little and show the work we did on the collaborative navigation of a flying and a walking robot, where the idea is to use their different abilities to create a bigger framework. The motivation is that a flying robot can very quickly see the terrain from above and fly around fast; however, it has limited sensing and payload capabilities and a limited operation time. On the other hand, the walking robot has a rather low viewpoint and is slow compared to the flying vehicle; however, it can carry a high payload and rich sensing, and it has a much longer operation time than the flying vehicle. This is the overview of the approach; I will not go into the details, but rather demonstrate the complexity that has gone into this work with many of my co-authors, bringing all of these technologies together, and show you the demonstration that resulted from this work.
The goal here was to go from a start location to a target location where there is only one possible path, with obstacles in between. First, we let the flying robot explore the environment, and you see in the bottom-left corner how it creates a set of visual features, which are embedded in a simultaneous localization and mapping framework into a consistent map. We can then use the camera images, together with our elevation mapping framework, to create a dense representation of the entire terrain. These two maps are transferred to the walking robot, which interprets them: it looks at the traversability, finds a global path from the start to the goal location, and starts tracking it. While it does so, it uses another camera, mounted on the robot, to localize itself within the map that was created by the flying vehicle; we can see here how it matches visual features from its current viewpoint against the global map. It also updates the map continuously, and we throw an obstacle in front of it while it is walking; since it updates and replans its motion, it can adapt to the changing environment and make it safely from the start position, with the help of the flying vehicle, to the goal location.
In conclusion, I have shown five contributions to the field of rough-terrain locomotion with legged robots. First, I have evaluated a variety of different sensor technologies and shown how they are applicable to mobile terrain mapping. I have modeled the noise of the Kinect v2 time-of-flight camera, which is very important for mapping; this framework can be extended to new sensors as they are released, and the knowledge about these sensors is applicable to other mobile robots. In the second part, I have shown a robot-centric formulation of an elevation mapping framework which explicitly incorporates the drift of the state estimation. We have open-sourced this software, and it has been used by many other projects, for example for mapping, navigation planning, and autonomous excavation. Third, for the control, I have shown a framework for the versatile, robust, and task-oriented control of legged robots. Our software has similarly been used in many applications, such as the ARGOS Challenge and an emergency robotics challenge; we have created automated docking and have even made the robot dance, where it listens to music and generates dance motions based on the music. Fourth, for locomotion planning, we have created a framework that enables the robot to cover rough terrain in realistic environments. Some of you might know these stairs, just outside this building, where we took the robot for a walk in Zurich and up the stairs on a rainy day; it really shows the real-world applicability of this robot in real-world settings, and we walked up roughly 30 meters over a course of 25 steps. Lastly, I have put my work into a broader context and shown a framework for the collaboration between a flying and a walking robot, where they utilize their complementary features as a heterogeneous team. With that, I would like to thank you for your kind attention.
