>> 
Hi, everyone. Thank you for coming out. My
guest today is David Sachs. He began developing
motion sensing applications at the MIT Media
Lab. And now he does advanced application
development at InvenSense where he does a
lot of interesting things with compasses,
accelerometers, and gyroscopes including building
them into Android devices. Please join me
in welcoming David Sachs.
>> SACHS: So the purpose of this talk is to
learn about sensors that are in consumer electronics.
This includes accelerometers, gyroscopes,
and compasses. It's kind of a zoo having all
these different sensors out there. There's
a big problem: people don't really know how
to use them. They don't necessarily know what
the sensors do or what their strengths and
weaknesses are. They don't know how to put
the data together. There's also a lot of misinformation
out there about what's actually in different
devices and this causes a lot of confusion.
So, I'd like to pay my respects to some nonexistent
sensors which you'll see in press releases.
Even though they don't exist, people write
all sorts of interesting stuff about them
on forums. These include the six axis accelerometer
which apparently the Samsung Epic has according
to Sprint. The six axis gyroscope. The very
mysterious gyroscopic accelerometer and the
gyrocompass. The gyrocompass, I have to say,
actually does exist. There is such a thing
as a gyrocompass. It's not shipping in any
Android devices today. It looks kind of like
that. Maybe someday. So, this talk is going
to start by covering some examples of applications.
We'll show you some demos then we'll get into
the more science aspect of it. We'll talk
about the sensors themselves; accelerometers,
compasses, and gyroscopes. Then we'll talk
about sensor fusion. How do you put this stuff
together? Then system integration, and by that
I mean, how did we make this? So this is a modified
Nexus One. It has accelerometers, compasses,
and gyroscopes in it. It does sensor fusion
and we wrote a bunch of applications on top
of it. In order to put the extra hardware
in it, we had to rip something out first so
I'll leave you guys to try to guess what it
is we ripped out and you'll find out about
halfway through the talk. And then, of course,
you know, really no one cares about all of
this stuff. They just want to use sensor fusion.
Most people just want to write applications
that use motion so, you know, hopefully, this
technology can make life easier for application
developers. So, they don't have to worry about
what's an accelerometer, what's a gyroscope,
what's a compass, how do you put all this
data together? They just want to know how
to use the information that comes out of it.
So let's get started with some application
examples. Gaming. Gaming is really fun. This
party started with Nintendo. They did motion
sensing gaming. Now everybody is doing it.
So, I'm just going to jump right in and show
you some motion sensing gaming demos. All
right. So, this is not a game. It's just a
sword but you can see it does what I do. That's
pretty cool. So this uses gyroscopes, accelerometers,
and compasses to measure orientation.
You can see it's pretty stable and it moves
quickly when I move and it stops when I stop.
The latency is very low on this system but I'm
not sure if that will come through on the
video system. But that's not a game. I promised
you a game. So here's the game. I'm flying
through a tunnel. There's my tunnel. I can
go left, right, up, down. I can turn sideways.
I can fly upside down if I want. I have to
fly through these doors. That's tricky to
do while talking on the microphone. Yeah.
That's a motion sensing game. Pretty fun.
So we'll come to this game at the end of the
talk and I'll tell you how we did this. Moving
on. So here's some more applications. We have
a virtual reality and augmented reality. I've
got virtual reality on the left, augmented
reality on the right and whatever the heck
street view is in the middle. I'm not sure.
It's sort of like a real virtual reality that's
augmented or something. I don't know but all
of these use motion sensing. So I'll show
you this on our phone. It'll be kind of hard
to see but I'll have a video I can show you
too. So here's our virtual reality system.
I can look down. I can look up. I can look
forward. Turn sideways, pretty cool? So the
motion sensing part of this is basically one
line of code. Once you have your output of
sensor fusion, you just draw some 3D stuff
and put in your one line of code. So for those
of you who can't see, here's the video of
basically the same exact thing. This is an
older system running on a G1 but it's really
the same application. So, you can see.
So here it turns off the sensor fusion so
now it's just accelerometers and compasses
and it still works. Actually, you can do all
of this stuff already, right? It's just a
little bit slow. That's the main thing that
all the sensor fusion adds. It just sort of
upgrades the motion sensing capabilities.
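That "one line of code" mentioned earlier amounts to applying the fused orientation to the virtual camera each frame. Here's a rough Python sketch of the idea; the rotation-matrix form of the fused output and all the numbers are illustrative, not from the talk:

```python
import math

def rotate(matrix, vec):
    """Multiply a 3x3 rotation matrix by a 3-vector."""
    return [sum(matrix[i][j] * vec[j] for j in range(3)) for i in range(3)]

def yaw_matrix(degrees):
    """Rotation about the vertical (z) axis, e.g. turning the phone left."""
    c, s = math.cos(math.radians(degrees)), math.sin(math.radians(degrees))
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

# Pretend the fused orientation says the device has turned 90 degrees:
# the camera's forward vector swings with it before the 3D scene is drawn.
forward = [1.0, 0.0, 0.0]
view = rotate(yaw_matrix(90), forward)
```

In a real renderer the "one line" would load the fused rotation as the view matrix; everything else is ordinary 3D drawing.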
All right, so mouse. This is an example of
a mouse that has gyroscopes and accelerometers
in it. It does some sensor fusion and controls
the cursor.
Here's the demo of that. It's a gyromouse,
pretty simple, right? It's very easy to do
once you have all your sensor fusion done,
basically. Again, it's just a couple of lines
of code to do something like this. So it stops
when I stop; moves when I move; left, right,
up, down.
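Those "couple of lines of code" for a gyro mouse boil down to scaling angular velocity into cursor deltas, once sensor fusion has removed the drift. A sketch, with the gain, rates, and sample interval all made up for illustration:

```python
GAIN = 800.0   # pixels per radian, an assumed sensitivity constant
DT = 0.01      # 100 Hz sample interval

def move_cursor(x, y, yaw_rate, pitch_rate):
    """Turn angular velocity (rad/s) into a cursor delta.

    When rotation stops, the rates go to zero and the cursor stops too.
    """
    return x + GAIN * yaw_rate * DT, y + GAIN * pitch_rate * DT

x, y = 500.0, 500.0
x, y = move_cursor(x, y, 0.5, -0.25)   # turning right, tilting up
```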
User interface, okay. Well, user interface
is kind of a zoo. There's all sorts of stuff
you could do and--and call it user interface.
I'm going to play a little video. We also
have phones that run all of this stuff. You
guys can play with it later on if you want.
Here's what we did. We zoom in using motion
then you can pan around using motion also.
You can do a lot of stuff that way using only
one hand which is kind of nice. You don't
have to smudge your screen and you know the
pinch to zoom thing is kind of tricky to do
with one hand. This is a little bit easier.
Here's some more complicated stuff, so there are
some gestures. You can draw a letter in the
air and that launches an application. You
can also train a signature. So here, he's
training the letter M because his name is
Mike. Once he's trained his signature then
the, you know, he can go to an authorization
application and then instead of typing something
in which would smudge the screen, he can draw
his letter M in the air, and it unlocks it.
If you draw some other pattern, it doesn't
work. So these are examples of things that
are pretty easy to do once all the sensor
data has been put together and you no longer
have to worry about what's an accelerometer?
What's a compass? What's a gyroscope? Image
stabilization, we don't have a good demo of
that here today but it's something that people
have been doing with these sensors for decades.
And navigation, of course, this is a talk
on its own. This stuff is pretty hard. We'll
touch on it a little bit, but you know, I
won't get too deep into how the navigation
algorithms work. And, of course, all of this
stuff runs on a handset that's why this is
really interesting for Android. Once the sensors
are abstracted out of the system and you just
have the result of your sensor fusion data
then you can write apps that take advantage of
all of this stuff. So let's jump right into
the science part and talk about the sensors. Accelerometers,
easiest to visualize as a mass on a spring.
They're very simple. They sort of jiggle when
you shake them, they pick up just about every
kind of movement, that's the good news. The
bad news is they pick up just about every
kind of movement. Usually, you have no idea
what it is you're actually looking at. So
here are some examples. The mass will droop
under gravity, right? So, that's pretty intuitive.
In a real accelerometer, of course, you don't
have these curly springs. You have beams usually
that flex. If you drop it, it actually measures
zero. That's kind of strange the first time
you play with accelerometers. That's free
fall, and the reason why is because gravity
pulls both on the mass in the middle and on
the frame of the accelerometer. So, there's
no relative movement between the mass and
the frame. So how is it that you measure gravity?
Well, there's this additional force holding
the structure up. So when you have that additional
force holding the structure up then the mass
will droop under gravity. So, this gives the
kind of surprising result if you've played
with accelerometers a little. If you point
an accelerometer axis up, it will actually
measure plus 1G. So, you might have thought
it would measure a minus number because gravity
is pulling it down but that's not the way
it works. It's actually measuring the force
of you holding the accelerometer up. So an accelerometer
just exposed to gravity actually outputs zero.
Of course, it also measures sideways movement;
you shake it side to side, you accelerate
it, the mass experiences a reaction force
and typically you get something like this.
So this is usually what you're looking at.
It's just a big mess and you don't know what
it is exactly you're looking at. So, that's
an accelerometer. Here's one close-up. Like
I said, it's got beams that flex, not curly
springs. This is using MEMS technology. Pretty
interesting, you can etch mechanical structures
into silicon so that makes it really cheap
and you can put logic next to it pretty easily.
Compasses, so compasses no longer look like
that. That's sort of what you imagine. A compass
is a magnetic field sensor; it picks up every
possible magnetic field. That includes the
vibrating motor right next to it. If this
is in a phone, it includes some stuff from
the Bluetooth chip. It includes speakers,
microphones, anything on the circuit board
that's been magnetized, and it's kind of a
train wreck actually. I'm always surprised
that this works at all in phones, but it actually
works pretty well considering the noisy environment.
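For the easy case, with the device held flat, the heading comes straight from the two horizontal magnetometer axes. A sketch, with made-up field readings (sign and axis conventions vary between platforms):

```python
import math

def heading_flat(mag_x, mag_y):
    """Heading in degrees from magnetic north, device held flat.

    One common convention: angle of the horizontal field components,
    wrapped into [0, 360).
    """
    return math.degrees(math.atan2(mag_y, mag_x)) % 360.0

north = heading_flat(20.0, 0.0)       # field entirely along +X
northeast = heading_flat(20.0, 20.0)  # equal X and Y components
```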
Why do you need the 3-Axis Compass? I get
that question a lot. Well, if you have your
compass held like this and my X and Y axis
are flat then I really only need the X and
Y axis to measure my heading. But if I hold
my compass like this, now my X and Z axis
are flat and I need an X and a Z axis to measure
my heading. So if you don't want to tell the
users how to hold the compass and you want
them to hold it anyway they want then you
need a 3-Axis Compass and you also need an
accelerometer. So you can't use a compass at any
orientation without an accelerometer to tell
how you're holding the device. Magnetic fields
are kind of weird. The Earth's magnetic field
doesn't point exactly at true north--that's
the magnetic declination--so you need to
know your GPS location to get from magnetic
north to true north. And there's
also something called magnetic inclination.
The magnetic field is not actually horizontal,
it's somewhere between horizontal and vertical.
Here, it's actually more vertical than horizontal.
It's like, 60 degrees from horizontal. But
there's enough of a horizontal component to
resolve your heading. Of course, when you
get up to the north or south poles then you're
screwed. So how do compasses work? Well, this
is how the compasses with the biggest market
share work in consumer electronics. There's
not a spinning needle; there's actually just
current in a wire and that current in a wire
gets deflected by the Hall Effect if there's
a magnetic field present so you can actually
create a compass with just pure silicon electronics.
You don't really need ferromagnetic material
for it. And that's a close-up of a 3-Axis
Compass. It has some Hall sensors and some
logic. That's a compass. So now, we get to
gyroscopes. So gyroscopes are the newcomer.
Gyroscopes sense angular velocity. So, that's
very different. They don't sense an external
reference like magnetic north or gravity.
Gyroscopes measure their own rotation. So
how does that work? They use something called
the Coriolis Effect. So the Coriolis effect
happens when you have a mass that's moving
and your frame of reference is rotating. So
when that happens, you get a fictitious force
on the mass and you can pick up how your frame
of reference is rotating. So the Earth is
rotating. So the Earth's rotation has some
impact on things for example, weather systems,
the way weather systems spin in the northern
or southern hemisphere depends on the Coriolis
Effect. If anyone ever told you that toilets
spin one way in the northern hemisphere and
the other way in the southern hemisphere because
of the Coriolis Effect, that's actually not
true unless your toilet is like, miles long
or something like that or perfectly polished,
anyway. Okay, so how do you actually pick
up the Coriolis Effect? Well, I said you have
to move a mass so you might as well move that
mass back and forth really, really quickly.
So, that's what a gyroscope does. It actually
oscillates. There's nothing spinning in a MEMS
gyroscope. And as it oscillates, you can
pick up the Coriolis Effect from that mass
oscillating by looking at something else that
happens. So the mass oscillates, there's a
torque that's produced, and then you can look
at these capacitive sense combs picking up
that signal. So this is in slow motion, of
course. Typically these things oscillate at
very high frequencies, maybe 25 to 30 kilohertz.
So the actual frequency of oscillation where
you put that depends on what else in the system
you're trying to avoid. So if you're trying
to avoid audio frequencies or if there's another
motor in the system that's going at some frequency,
you might want to be careful about what frequencies
you put your gyro at. Here's a close-up of
a 3-Axis gyroscope. So you can see these mechanical
structures in that cut-away. These are the
ones that vibrate and wiggle and sense and
stuff. And then underneath is the silicon
that has all the logic. So the bigger chip
underneath does all the signal conditioning
and actually does the sensor fusion. So this
chip has a 3-Axis gyro that actually does
sensor fusion. It takes input from other sensors.
So, that brings us to sensor fusion. So how
do you put all this data together? So let's
start by putting accelerometer and gyro data
together. Here's an example of accelerometers
being used as a tilt sensor. So, it basically
works but it's kind of noisy. So what does
everyone do? They filter it. So here's a low
pass filter. It works but it creates a delay,
and that's why accelerometer-based tilt games
are always kind of slow: to get past
all of this noise, you have to add a low pass
filter. Yeah. So here's the gyro data. The
gyro data looks much nicer. It's really smooth.
It doesn't have that weird spike. But, of
course, gyro data isn't perfect either, right?
Because gyro data first of all, it doesn't
actually measure gravity, right? The whole
point is to figure out your tilt with respect
to gravity. Gyroscopes don't measure that,
accelerometers do. So, you know, we can put
this data together, we get the nice dynamic
response from the gyroscope and we get the
gravity measurement from the accelerometer,
then we have something that works well. So
I'll give you a little demo of how that works.
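A standard way to write that combination down is a complementary filter: integrate the gyro for the fast response, and let the accelerometer's tilt estimate slowly pull the drift back out. This isn't what the talk's hardware literally runs, just the textbook version with made-up numbers:

```python
ALPHA = 0.98   # trust the gyro short-term, the accelerometer long-term
DT = 0.01      # 100 Hz sample interval

def fuse(angle, gyro_rate, accel_angle):
    """One filter step: gyro integration plus a small accelerometer pull."""
    return ALPHA * (angle + gyro_rate * DT) + (1.0 - ALPHA) * accel_angle

# Device actually sitting still at 10 degrees of tilt. The gyro has a
# small constant bias (0.1 deg/s); the accelerometer is noisy but centered
# on the truth. The estimate converges near 10 degrees instead of
# drifting off with the gyro bias.
angle = 0.0
for _ in range(2000):
    angle = fuse(angle, gyro_rate=0.1, accel_angle=10.0)
```

The small residual offset from the bias is exactly the trade the filter makes: smooth, fast dynamics from the gyro, long-term correctness from the accelerometer.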
Okay. So here's just a little oscilloscope
shot of a whole bunch of data. So if you look
at the Y accelerometer graph, that's in the
middle, on the right--your right. Again,
if I point the Y axis up, it measures 1G,
as I point it down, it goes to minus 1G, so
it's measuring tilt, that's the Y accelerometer.
It looks kind of noisy, that's my hand shake
actually being amplified a little bit because
it's adding in the linear acceleration of
my hand shaking. And if I move quickly, you
get these weird spikes, see that? I should
get a square wave but instead I get these
strange spikes. So, now let's look at the
Z gyroscope data. So the Z gyroscope is this
axis I'm rotating about now and you get this
nice big angular velocity signal on the Z
gyroscope. But whenever I stop, it goes
to zero. Gyroscopes don't actually measure
gravity, they measure rotation. So wherever
I stop, it's zero. So I need to put this data
together somehow.
So, on the left we have a signal we call gravity.
So this is one of the outputs of sensor fusion
that you might want in your application. If
you compare Y gravity to Y accelerometer,
you can see they look really similar, right?
The Y gravity is much smoother because that
weird jittering thing is gone. If I move quickly,
the spike is present in the accelerometer
data but not in the gravity signal. So that
gravity signal is kind of strange. It's actually
mostly gyroscope data but with the accelerometer
used to correct drift. So this gravity data
is probably what you wished you had when you
were writing your accelerometer based game.
It's--it's--what you really want is the output
of sensor fusion, not just one sensor.
So to summarize, what you do is you take the
gyroscope, you get your orientation from it
and then you use the accelerometer to inject
the correction term that keeps the orientation
correct with respect to gravity and removes
drift. So how did we get that orientation
signal from the gyroscope anyway? So, you
have to do something called integration, right?
So, gyroscopes output angular velocity; what
you really want is angle. So you do a single
integration. Integration is a little weird.
The blue stuff is just noise that I generated
and the red stuff is the integral and you
can see it has very different properties.
So it looks less noisy but more drifty. So,
intuitively, integration turns noise into
drift and that's actually one reason why the
gyroscope signal is so clean, it's because
it's integrated and here's the math behind
it. If you integrate a cosine, you get a sine,
but you also get this 1/f that comes out in
front. That means, if you have 100 hertz jittering
on your gyro data from noise, when you integrate,
that drops by a factor of 100, so you lose
your noise. But on the other hand, if you
have a very low frequency noise on the gyroscope
when you integrate, it gets amplified. So
you have--you get rid of your noise and you
add drift. So integration is kind of a mixed
blessing. So, in order to do this well, you
have to do it quickly. So here's a very simplified
equation, you're integrating angular velocity,
so you multiply it by time and accumulate
it; that gives you your angle. Now, in order
to do this well, the time has to be really
accurate. So you're multiplying your gyro
data by time that means, if your time is off
by 5%, it's just as bad as if your gyro data
is off by 5%. It's the same impact and you
also want your time interval to be really
small. So what we do is we do it in hardware
so you can do it at a very fast rate. Basically,
you have your sensors, then you have a separate
motion processor, then you have an application
processor and the sensors and the motion processor
in our case are on one chip, one piece of
silicon. So that sort of abstracts out a lot
of this high rate integration stuff. So, let's
move on. So, now we're going to combine compass
and gyroscope data. It's basically the same
story. Compass data gives you the answer you
want, which is your heading, but it's noisy--it's
noisy for two reasons. One reason is that
it's picking up real noise, real signal. We
live in an environment that's magnetically
very noisy, and these compasses pick
up everything that's magnetic. The other reason
is that it's not integrated so it doesn't
have that benefit of dropping the frequency
component. So, compasses as we said require
tilt compensation, you can't figure out your
heading unless you know where the horizontal
plane is relative to how you're holding a
device. Tilt compensation is done with accelerometers
but by themselves, they don't measure gravity
well. So you get this strange sort of bootstrapping
that you have to do where you have
to tilt compensate the compass with accelerometers
and gyroscopes before you can complete your
sensor fusion. So in summary it looks like
this, again, gyroscopes provide orientation,
accelerometers provide a correction due to
gravity and compasses provide a correction
due to magnetic north. So, I'll show you a
little demo of that.
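The tilt-compensation step in that summary can be sketched directly: use the fused gravity estimate to define "down," project the magnetometer reading onto the horizontal plane, and take the heading from what's left. All the vectors here are made up for illustration:

```python
import math

def tilt_compensated_heading(mag, gravity):
    """Heading in degrees, valid at any device orientation.

    mag and gravity are [x, y, z] readings in the device's body frame;
    gravity is the fused (gyro-corrected) estimate, not raw accelerometer.
    """
    norm = math.sqrt(sum(g * g for g in gravity))
    down = [g / norm for g in gravity]
    # Remove the vertical component of the magnetic field, leaving only
    # the horizontal part that actually carries heading information.
    dot = sum(m * d for m, d in zip(mag, down))
    horiz = [m - dot * d for m, d in zip(mag, down)]
    return math.degrees(math.atan2(horiz[1], horiz[0])) % 360.0

# Device flat, pointed at magnetic north, field dipping 60 degrees below
# horizontal (roughly the inclination mentioned earlier in the talk).
dip = math.radians(60.0)
mag = [20.0 * math.cos(dip), 0.0, -20.0 * math.sin(dip)]
heading = tilt_compensated_heading(mag, gravity=[0.0, 0.0, 1.0])
```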
So here's my sword again, this is using all
the sensors together.
So let's say I've just used accelerometers
and compasses, so it still works. It gives
me the right answer. It's just slow, right?
There is this delay. Now, let's say, instead
I use only gyroscopes. So gyroscopes have
really good dynamic response. A perfect gyroscope,
I could do this forever and it would never
drift, of course, real gyroscopes drift. So
this thing won't stay in the right place forever.
But it looks pretty good in the short term.
What you really want is all these sensors
together, so now I have the gyroscopes providing
the good dynamic response and I have the accelerometers
and compasses providing the correct answer
in the long-term. So that's my fused data.
So now we get to position. So accelerometers
measure side to side movement, so can't you
get position out of them? Well, it turns out
it's really, really hard but let's talk about
how that would work. So linear acceleration,
which is what you want, is just my side-to-side,
up-down, and forward-backward movement. You
have to take the accelerometer data and you
have to remove gravity, this is called gravity
compensation. So once I've removed gravity,
whatever's left is my linear movement, then
you have to integrate it once to get velocity,
integrate it again to get position; that's your
double integral. Now, if a single integral
creates drift, double integrals
are really, really nasty; they create horrible
drift. Here's a graph of it. The red stuff
is a single integral. You can see that random
walk we talked about before, it sort of meanders
off in some direction. And the green signal
is a double integral. It's just taking off.
Now, this is integration of noise. So you
can see this is simulating an accelerometer.
I'm holding it in my hand, not doing anything
and it drifted off by 20 centimeters in one
second. So I don't know. Is that good or bad?
Actually, I think that's pretty good as you'll
see in a second. But--so one second of drift,
20 centimeters of error. That's from integrating
noise. But that's not the problem. Here's
the real problem. The problem is remember
back at the beginning of the slide when I
removed gravity, well, I probably didn't do
that exactly right. That's pretty hard to
do that perfectly. So let's say I got it wrong.
Let's say I thought I was holding this thing
at 37 degrees but I was really holding it
at 38 degrees. So I screwed up my estimate
of gravity by one degree. Well, now, I double
integrate that. I get a parabola. So I'm double
integrating a constant, right? Here's what
that looks like. So for comparison, the green
line at the bottom is the same as the green
line in the graph on the left. Now, it just
looks flat, right? So the new error is this
blue curve, that's eight and a half meters
of error in one second. So I was holding this
for one second. I screwed up my orientation
by one degree and I got drift of eight and
a half meters. So you can see why it's really,
really hard to do any kind of linear movement.
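You can reproduce that parabola in a few lines: double-integrate the constant acceleration left over when the gravity direction is misjudged by one degree. The exact numbers won't match the slide's graph, but the quadratic growth is the point:

```python
import math

G = 9.81                                  # m/s^2
# Misjudging gravity's direction by 1 degree leaves a constant
# acceleration error of g * sin(1 deg), about 0.17 m/s^2.
bias = G * math.sin(math.radians(1.0))
DT = 0.001                                # 1 kHz integration steps

velocity, position = 0.0, 0.0
for _ in range(1000):                     # one second of double integration
    velocity += bias * DT                 # first integral: linear ramp
    position += velocity * DT             # second integral: parabola

# position is close to the analytic (1/2) * bias * t^2, and it keeps
# growing quadratically for as long as the orientation error persists.
```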
So step one is try to figure out how to avoid
it in your application. Just use the orientation.
You'll be happy you did. There are some ways
to improve the linear movement estimate. But
if you're going to do linear movement, any
kind of orientation error is really, really
important. All sorts of errors couple in,
including things like cross-axis error between
the accelerometer and the gyroscope; everything
matters. So, let's finish this off. I have
my orientation estimate and I add linear movement.
So, the way you do that is you first gravity
compensate the accelerometer using the gyroscope.
So in order to make sure I have the best possible
estimate of which way down is, I need all
of these sensors. Then once I have the best
possible estimate of which way down is, I
subtract that. Whatever is left is linear
acceleration and then I double integrate and
pray. It still usually doesn't work that well.
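Here's the "compensate, subtract, double-integrate, and pray" pipeline just described, collapsed to two dimensions to keep it short. Everything is illustrative: rotate the body-frame reading into world coordinates using the orientation estimate, then subtract gravity.

```python
import math

G = 9.81

def world_accel(accel_body, tilt_rad):
    """Rotate a 2D body-frame (x, z) accelerometer reading into world
    coordinates, given the estimated tilt."""
    c, s = math.cos(tilt_rad), math.sin(tilt_rad)
    ax, az = accel_body
    return (c * ax - s * az, s * ax + c * az)

tilt = math.radians(30.0)
# Device tilted 30 degrees but otherwise still: the accelerometer sees
# gravity split across its two axes.
body = (G * math.sin(tilt), G * math.cos(tilt))

# Perfect orientation estimate: linear acceleration comes out as zero.
wx, wz = world_accel(body, tilt)
linear = (wx, wz - G)

# Orientation estimate off by one degree: a constant residual is left
# over, and that residual is what the double integral turns into the
# parabola from the previous section.
bad_wx, _ = world_accel(body, tilt + math.radians(1.0))
residual_x = bad_wx
```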
So here are some tricks. Well, what we have
is at least better than a high pass filter.
That's what everybody is doing now. They want
to make some game where you shake it, so they
high pass filter the accelerometer data. Whatever
is left, they pray is the shake signal, and
it really doesn't work. The reason why it
doesn't work is because you're assuming that
gravity changes slowly but it doesn't. Gravity
changes as quickly as I can do this. So gravity
changes really fast because it's the gravity
with respect to your device, so high pass
filters don't work that well. This works a
little better. If you know something about
how the device is moving, you can do even
better by modeling your dynamics with something
called the Kalman filter which probably could
be a completely different talk, so I'm not
going to go into it in too much detail. But
let's say these sensors are in a car. Well,
the way cars move is very constrained. Cars
can't accelerate in any possible direction.
They're not going to go straight up for example.
So you can constrain your error using a model
of how cars move, and Kalman filters are a
pretty good way to do that. That's for automotive
navigation. If you're doing pedestrian navigation
and you want to walk around, usually you use
a pedometer algorithm. So you don't just double
integrate the person, you watch for steps
and you do an algorithm based on steps. So
those are two really common tricks to avoid
your double integral. Okay, now, before we
move on to the next section, let's just cover
a little bit of terminology. There are two
coordinate systems that are important. One
is the coordinate system of the Earth which
includes gravity, includes magnetic north.
If you're controlling a TV or a computer,
it would include the TV or the computer. And
the other is the coordinate system of the
device that you're holding. So those are usually
called world coordinates for the Earth and
body coordinates for the device that you're
holding. And then, you know, accelerometers
and compasses obviously measure relative to
the Earth. They measure gravity in magnetic
north. Gyroscopes are different. They actually
measure in a different coordinate system.
Gyroscopes measure in its own coordinate system.
So it measures rotation around itself. And
then now--just very quickly, the difference
between degrees of freedom and axes. You see
this thing a lot--six axis something-or-other.
What does that really mean? Well, what people
really want is six degrees of freedom. That
means I can move left and right, that's one;
forward and backward, that's two; up and down,
that's three. And then you also have rotation
around the three axes. That's--we're going
to call that roll, pitch and yaw, although
we'll define that a little better in a second.
Those are six degrees of freedom. Six axis
usually just refers to the axes of the sensors.
So, six axis can mean almost anything which
is why it gets abused a lot in marketing.
I have a six axis sensing device. I have no
idea what that means. You could have a one
axis accelerometer. You could have six of
them in a row and call that a six axis sensor.
So some people when they say six axis, they
mean accelerometer and gyros. Some people
mean accelerometers and compasses. Anyway,
so six degrees of freedom versus six axis
at least know what they mean. Okay. So now
we get to system integration. So, that's the
part where we built this. So what we ripped
out was the headphone jack. That was probably
the most fun part. Then we took a little board
and soldered it to the I2C bus and then we
started hacking Android. So here's what we
did. Actually, the first thing we did is we
just built our own JNI wrapper around our own
library using a standard Linux I2C driver.
So, that's not really using the sensors within
Android. That's sort of using the sensors
near Android or something like that. This
is what we did in our second draft. It's a
little bit tricky because right now there
isn't really a great way to do a whole bunch
of math on these sensors in post processing.
So for example, we could put it in the kernel
but we don't really want to put our stuff
in the kernel. We could put it in the user
space but then it gets copied. So if you put
too much stuff in the sensor manager, you
get all these copies of the sensor manager.
So what we ended up doing was actually hacking
the sensor service which is not really what
you're supposed to do but it actually worked
pretty well. So we put our driver in the sensor
service. It talks to our sensors through a
standard I2C driver and then it sends data
out to the sensor manager. We're not really
thrilled with it but it actually works pretty
well. We're working on a new version of this,
that will be a little more efficient but this
is sort of where we are right now. So, maybe
that doesn't matter to you, you just want
to know how to use this data. So here's what
comes out. So you've got your sensor manager,
right? You want to write an application that
uses sensor fusion. Here's what you have right
now. You have some raw data and you also have
some compass and accelerometer functions.
So, we added a whole bunch of other sensor
fusion stuff. So we have for example a Quaternion
output, which we use to do a lot of stuff.
We also have rotation matrices and Euler angles.
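Going from the quaternion output to a rotation matrix is the standard unit-quaternion-to-matrix formula. A sketch, with a made-up quaternion stored as (w, x, y, z):

```python
import math

def quat_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

# A 90-degree rotation about z: quaternions encode half the angle.
half = math.radians(90.0) / 2.0
q90 = (math.cos(half), 0.0, 0.0, math.sin(half))
m = quat_to_matrix(q90)
```

Quaternions are the form most sensor fusion libraries hand out internally, since they interpolate cleanly and avoid the singularities Euler angles run into.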
And then we also added a bunch of higher level
algorithms on top of that. So our gesture,
pedometer stuff, the signature recognition,
we call Glyph. That's where we train characters.
So this is our entire library that we exposed
through the sensor manager. We're not really
sure where these higher level functions should
go but, we stuck them in the sensor manager
and it worked for us. And then, of course,
all that stuff just goes up to the application.
Okay. So here we get to the final section.
So here's how you use sensor fusion. Well,
first of all, what comes out of sensor fusion?
Well, we have this gravity vector we mentioned
and linear acceleration. So, one way to think
about this is you have your accelerometer
data. It's really the sum of two things that
you wanted separately. So sometimes you want
gravity. Sometimes you want your linear acceleration
but you--what you ended up with was the sum.
So sensor fusion basically helps you separate
them so then you have gravity separate from
linear acceleration. If you added them back
together, you'd get this raw accelerometer
data. Then we have orientation. Well, orientation
is kind of a crazy zoo in itself; there are
all sorts of different ways of expressing orientation.
Euler angles, rotation matrices, axis and
angle, quaternions, maybe you just want the
change in your rotation. So we're going to
cover all of these just a little bit and talk
about how you would use them in an application
and have a couple lines of code for each.
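The accelerometer-as-a-sum idea from a moment ago is literal: if sensor fusion gives you gravity and linear acceleration separately, adding them back reproduces the raw reading. A toy illustration, with made-up values in g:

```python
# Outputs a fusion layer might hand you, as described in the talk:
gravity = [0.0, 0.71, 0.71]     # device tilted roughly 45 degrees
linear = [0.10, 0.0, -0.05]     # a little sideways shake

# Summing the two pieces recovers what the raw accelerometer would read.
raw = [g + l for g, l in zip(gravity, linear)]
```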
And, of course, the other thing that comes
out of sensor fusion is better raw data. So
all of these sensors, they have problems but
luckily they have different problems. Since
they all have different problems,
they can calibrate each other. One
sensor can make another sensor better. Okay.
So, very quickly, gravity--well, you know
what that is. It's that thing everyone is
using to write their tilting games with, right?
Except that it doesn't work that well. So
it should be easy to port your existing accelerometer
code. Basically, you rip out the accelerometer
stuff and you put in gravity. It should just
work better. And you can take out that low
pass filter that you've been using because
you don't need it anymore. So, that's gravity.
That's probably the easiest one to know how
to use. If you already know how to write a
tilt game with accelerometer data, you can
just immediately do a better one that uses
all of the sensors. Linear acceleration, so,
that's, you know, for shaking games or something
side to side, up and down. Again, you've already
tried to do it with accelerometer data. It
didn't work that well. It should work a little
better just by putting in linear acceleration
data. All you need to do now is take out the
high pass filter that you've been using which
didn't really work anyway. I'm going to demo
that one quickly because that's not completely
intuitive. Okay. So on the left, we
have linear acceleration. On the right, we
have raw accelerometer data. So you notice
again the Y accelerometer data measures gravity
as I tilt back and forth. But the Y linear
acceleration data does not. So it stays at
zero because I'm not moving sideways. If I
go up and down, both of them measure that,
right? So the Y accelerometer and the Y linear
acceleration both measure this up and down
movement. But the Y accelerometer has this
bias error; that's because it's measuring the
up and down movement plus gravity. So it's
measuring both of them whereas Y linear acceleration
has had the gravity removed by the sensor
fusion process. Yeah. So if I turn it upside
down, I get the bias in the opposite direction.
All right. Okay, so now we get to the last
part of the talk which is how do you express
rotation? Expressing rotation is actually
really, really hard. So, we have to spend
at least a couple of minutes on it. So, really
the problem we're trying to solve is how do
I say what happened to the green teapot to
make it the blue teapot? Something happened
to it; it rotated. But how do you write that
down? It turns out there's a few ways to do
this and all of them have some problems. It's
good to understand how they work if you want
to use this stuff in an application. So let's
compare it to linear movement. So for example,
let's say I have some arbitrary linear movement
I want to express. That's the diagonal green
line. Well, I can express that as the sum
of a vertical movement and a horizontal movement.
So, that's great, that works really well using
vectors. I can go the other way, too. I can
do the horizontal movement then the vertical
movement. It works great. So what if I try
this with rotation? Well, it turns out it
doesn't work that well. So let's say I take
my teapot, rotate it around the handle then
rotate it around the spout. Well, here's where
I ended up. Let's try it again. Take my teapot,
rotate it around the spout then around the
handle. Well, I ended up somewhere completely
different. So rotation inherently is messy.
It doesn't work as well as linear movement.
So what are we going to do? Are we screwed?
There are a couple of ways of expressing rotation
that work pretty well. So we'll talk about
those. This one is really cheating, but I
like it. Let's forget about rotation and just
talk about the change in rotation. So, from
one point to the next, how much did I rotate
around the Y-axis? How much did I rotate around
the X-axis? If you do this for a while, you're
not going to get anything that accurate, but
it turns out it works really well in the user
interface. For example, let's go back to the
gyromouse. So again I move left, right, up,
down. It works pretty well, right? Well, this
is easy actually. All I'm doing is looking
at the change in rotation around one axis
and mapping that to movement of pixels, left
and right. I can look at the change in rotation
around this axis and map it to pixels moving
up and down. So it makes it seem linear; that's
why it's nice for a user interface. So to summarize,
you look at your change in angle, which really
should be computed between two rotation matrices. It's
not that accurate. It's easy to map to stuff
and you can do it in a couple of lines of
code once you have it. So if your sensor fusion
outputs what you want, which is angle change,
then here's all the code you need. So, for
example, we use this in a panning when we
pan in an image. It's really simple. Just
accumulate your angle change in your pan variables
and then use a glTranslatef: three lines of
code to make a motion-based panning system.
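The accumulate-then-translate scheme just described can be sketched like this. The Panner class and the PIXELS_PER_RADIAN mapping constant are made up for illustration; in the talk, the third line is the glTranslatef call issued each frame.

```python
# Sketch of the panning scheme described above: accumulate each frame's
# angle change into pan variables, then translate the image by them.
# PIXELS_PER_RADIAN is an arbitrary mapping constant, not from the talk.

PIXELS_PER_RADIAN = 500.0

class Panner:
    def __init__(self):
        self.pan_x = 0.0
        self.pan_y = 0.0

    def on_angle_change(self, d_yaw, d_pitch):
        # The two accumulation lines; the talk's third line would be
        # something like glTranslatef(pan_x, pan_y, 0) each frame.
        self.pan_x += d_yaw * PIXELS_PER_RADIAN
        self.pan_y += d_pitch * PIXELS_PER_RADIAN
```

Because each step uses only the change in rotation, small wrist motions map linearly onto pixel motion, which is the property that makes this feel good in a user interface.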
Okay, so now we get to Euler angles named
after this guy. Yes, it's pronounced oiler,
I'm not sure why. But, everybody loves Euler
angles even though they actually don't work
that well. We'll explain why they don't work
that well. So again, we want to get from the
green teapot to the blue teapot. So we can
express that as a series of rotations. So
we're going to rotate around the vertical.
We'll call that "yaw." We're going to rotate
around an axis through the side of the
teapot. We're going to call that "pitch" or
"elevation" and then we'll rotate through
the spout and we'll call that "roll." So,
you know, if you look at my arm, this is yaw.
That's also called heading, just which direction
you're looking. This is pitch, also called
elevation. And then this is roll, rotation
around the axis through my forearm. Now, we
have a problem. What if I point straight up
and do this? Is that yaw or is that roll?
I'm sort of changing my heading, and I'm
also rotating around an axis through my forearm.
It turns out that when you point straight
up with Euler angles, everything breaks. And
I'm going to demo everything breaking. It's
really important to understand that this happens.
Euler angles work okay for these guys, so,
in a lot of navigation systems, they used
Euler angles. So is it a problem that you
can't point straight up or straight down?
Well, it depends on your application. Let's
say your application is a passenger jet or
a car or a submarine. The fact that you can't
point straight up or straight down is not
really a problem. If your pitch goes to 90
degrees or minus 90 degrees, really it'll
be the least of your worries that the Euler
angles aren't working. So let's use it in
an application. Just be careful. Don't let
your pitch go to 90 degrees. There are tricks
you can use to get around that. You can sort
of change your definitions on the fly so if
this is my forward direction, as I start to
point up, I might run into trouble because
my pitch is going to 90 degrees. Quick, let's
redefine: this is forward. So now my pitch
is zero. People play all these games; they're
really messy. Personally, I try to avoid using
Euler angles unless I have to. Here is some
code. So once you've got some Euler angles,
you can do a pitch and a yaw rotation, for
example. Here's another example, sometimes
you get lucky and someone provides an API
that takes pitch and yaw. So let's look at
a demo of that. So, of course, I always go
to the Grand Canyon with Google Earth. Okay.
So, I've got my yaw, and let's call that pitch.
I can turn sideways. This is pretty fun. With
a very small amount of movement, I can control
which way I'm looking. If I go like
this, I can look down. Okay. So where do I
run into trouble? Well, let's say I point
straight up. Hey, it works fine. So that's
because I cheated. This is actually roll.
So I switched pitch and roll to make it work.
That's another messy game people play. So
I'll show you pitch. Now as I turn sideways
like this, eventually, my pitch goes to minus
90 degrees, and my Euler angles should explode;
there they go. Okay. So it's important to
know that this happens, right, if you're going
to use Euler angles. But it does work, right?
It worked great till I did the thing I wasn't
supposed to do. So if you constrain your application,
Euler angles actually work just fine. Okay.
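The breakdown at pitch = 90 degrees can be shown numerically. This sketch assumes a yaw-pitch-roll (intrinsic Z-Y-X) rotation order, which is one common convention, not necessarily the one any particular API uses: once pitch hits 90 degrees, yaw and roll rotate about the same physical axis, so only their difference matters and the individual angles become meaningless.

```python
# Demonstration of gimbal lock: at pitch = 90 degrees, yaw and roll
# collapse onto the same axis. Assumes an intrinsic Z-Y-X convention.
import math

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_to_matrix(yaw, pitch, roll):
    # Intrinsic Z-Y-X: yaw first, then pitch, then roll.
    return matmul(matmul(rot_z(yaw), rot_y(pitch)), rot_x(roll))
```

At pitch = 90 degrees, euler_to_matrix(0.5, pi/2, 0) and euler_to_matrix(0, pi/2, -0.5) produce the same matrix, even though the yaw and roll numbers are completely different; away from 90 degrees they are distinct, which is why the constrained applications above get away with it.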
So now we get to the rotation matrix. Again, we
want to go from the green teapot to the blue
teapot. Here's a rotation matrix. Basically,
multiply it by any point on the green teapot.
It gives you the point on the blue teapot.
So from that definition, it's pretty simple.
It's nine numbers. What do those nine numbers
mean? Should I just ignore them and just use
it as a matrix and multiply things by it?
Actually, I like those nine numbers. I like
to use them for things. Here's a good way
to visualize them. If you picture X, Y, and
Z axes sticking out of the teapot, those axes
twist around with the teapot. You've got three
axes together; that's nine numbers. By coincidence,
those are the same nine numbers in your rotation
matrix. So, for example, if you have a rotation
matrix and you want to know what's the direction
that stuff will come out of the spout of the
teapot? You actually already have that information.
It's just a vector that you can pull directly
out of the rotation matrix. So sometimes you
can just take numbers out of the rotation
matrix and map them to things. Here's how
you use a rotation matrix. You can use it
to rotate stuff in OpenGL. So there you go.
There's one line of code to use your sensor
fusion in an application. You just multiply
it by your rotation matrix and you're basically
done. You've got to be careful because OpenGL
actually defines its matrix like that, so
you might have to pad with some zeroes and a one.
And, of course, you can also, just like I
said, pull numbers directly out of the rotation
matrix and use them for things. Just look
at the numbers and sometimes they're useful.
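Both tricks can be sketched together: reading a body axis straight out of the rotation matrix, and padding a 3x3 matrix with "zeroes and a one" into the flat column-major 4x4 layout that glMultMatrixf expects. Taking the spout to be the body X-axis is an arbitrary choice for illustration.

```python
# Sketch: the columns of a rotation matrix are the rotated body axes,
# so direction vectors can be read straight out of it. Also shows the
# 3x3 -> 4x4 padding for OpenGL. Spout = body X is an arbitrary choice.
import math

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def column(R, j):
    """Column j of R = where the j-th body axis points in world space."""
    return [R[0][j], R[1][j], R[2][j]]

def to_opengl_4x4(R):
    """Pad a 3x3 rotation to the flat, column-major 4x4 glMultMatrixf wants."""
    m = [0.0] * 16
    for col in range(3):
        for row in range(3):
            m[col * 4 + row] = R[row][col]
    m[15] = 1.0  # the "one" in the corner
    return m
```

For example, after a 90-degree turn about Z, the first column of the matrix is the world-space direction of the spout; no extra math needed, just read it out.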
Okay. So, a couple more. Axis and angle. Well,
you can express the rotation of the teapot
from green to blue as one rotation around
this axis by this angle. So that's sort of
interesting. Any rotation no matter how weird
it seems, you can write it down as one rotation
by one angle around one axis. So you need
four numbers here: the angle and three
numbers for your axis. We'll come back to
that in a second. But first, quaternions,
so these are my favorite. Everyone hates them.
People are scared of them. They actually work
really well. I think people are scared of
them because if you look them up, you get
stuff like four dimensional vector that lies
on a hypersphere. That's a hypersphere. Actually,
you can't really draw a hypersphere, so I
think the picture said a two-dimensional projection
of a three-dimensional projection of a hypersphere,
or something like that. Anyway, I don't really
care, because quaternions are basically axis and
angle. You can use them like that. If you don't
believe me, that's a quaternion, look at the
numbers. There's X, Y, and Z, which is your axis,
and there's theta, which is your angle. So most
of the time, that's how you use it. So here's
how you use a quaternion in one line of code
glRotatef. There it is. This is how that sword
worked. This is also how our little virtual
reality demo works. There's really one line
of code to do all of the motion sensing in
that application and that's the line of code.
So you pull out the angle from the first component
of your quaternion and then the other three
become your axis. It's pretty simple, actually.
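That conversion can be sketched in a few lines. This assumes a unit quaternion stored in (w, x, y, z) order, matching the description above: the angle comes from the first component and the axis from the other three.

```python
# Sketch of the quaternion-to-axis-angle conversion described above.
# Assumes a unit quaternion in (w, x, y, z) order.
import math

def quat_to_axis_angle(w, x, y, z):
    """Return (angle_radians, (ax, ay, az)) for a unit quaternion."""
    angle = 2.0 * math.acos(max(-1.0, min(1.0, w)))
    s = math.sqrt(max(0.0, 1.0 - w * w))   # sin(angle / 2)
    if s < 1e-9:                           # no rotation: axis is arbitrary
        return 0.0, (1.0, 0.0, 0.0)
    return angle, (x / s, y / s, z / s)
```

With the result in hand, the one line of OpenGL is roughly glRotatef(degrees(angle), ax, ay, az).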
The reason people like quaternions is that
you can do stuff like interpolating
between them. Don't try interpolating between
two rotation matrices because it usually hurts.
Euler angles also hurt. You can do it with
quaternions so let's say, "I don't like my
green and blue teapot. I want to find the
one right in the middle or I want to extrapolate
and find the next one." That's pretty hard
to do well with anything except quaternions.
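Finding the teapot "right in the middle" is the standard slerp (spherical linear interpolation) operation. A minimal sketch, assuming unit quaternions in (w, x, y, z) order:

```python
# Sketch of quaternion slerp: interpolate a fraction t of the way from
# q0 to q1 along the shortest rotation. Assumes unit quaternions in
# (w, x, y, z) order.
import math

def slerp(q0, q1, t):
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                 # q and -q are the same rotation:
        q1 = [-c for c in q1]     # flip to take the short way around
        dot = -dot
    theta = math.acos(min(1.0, dot))
    if theta < 1e-9:              # nearly identical: just return q0
        return list(q0)
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(q0, q1)]
```

Halfway (t = 0.5) between no rotation and a 90-degree turn about Z comes out as a 45-degree turn about Z, which is exactly the "teapot in the middle"; t outside [0, 1] extrapolates to "the next one."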
And that's basically it, I'm going to show
one more demo. So here's my flying game again,
here's my tunnel. So let's just summarize
everything that we did. So this flying game
uses vectors from the rotation matrix to figure
out which way I'm pointing. It uses roll from
the Euler angles, so you can use Euler angles
in this game, right? Because I can't point
straight up anyway; I'll bang into the tunnel.
So sometimes you can constrain your applications
so that Euler angles work. We also use linear acceleration
in this game. If I punch forward, it speeds
up. If I pull back, it slows down and so there
you go. There's your sensor fusion. Okay.
That's all. Thank you.
