We're very pleased to have Tam Vu give his talk today, and it's going to be on Earable Computers. In particular, this is some of the most exciting research I've seen.
And so Tam is an assistant professor here in the Department of Computer Science, but he also has a joint appointment with the Department of Pediatrics and the School of Medicine.
He's also a fellow of the
Institute of Cognitive Science.
He joined us from [mumbles] some of you may have known him. He was actually [mumbles] one of our students here at CU. So Tam Vu is coming home to join us.
Tam has done some outstanding work. He has received an NSF CAREER Award and two Google Faculty Awards, which are really difficult to get. He has, I counted, 10 best paper awards, nominations, or equivalents, which is just off the charts. And the work that he's talking about here has actually founded two startups already, one of which has half a million dollars of funding already.
Two.
[laughs]
And so without further ado, let's hand it over.
[audience applauding]
Thank you.
Thank you. Thanks, everyone, for taking your time to come to the talk today. I'm gonna be sharing our recent work from our Mobile Network System Lab, mainly focusing on Earable Computers.
So this work was motivated by the fact that our head is actually a very good source of many important signals. It's the house of our most important body part, which is the brain, right? By just knowing your brain signal, one can infer cognitive load, emotions, or stress. And if we can capture other signals from the brain and the head, such as EMG, the muscle signal, we can now infer eating activity and speaking activity, and we can even refine our stress monitoring and pain monitoring.
Also, if we know the eye motions, or EOG, another type of signal from the head, we can infer: are you focusing? Are you awake? What was your sleep quality throughout the night, right? And we can keep adding other signals, such as the heart signal, for example EKG, which propagates from your heart all the way up to the head, so you can actually capture the heart signal from the head as well.
Now, if we were able to capture those head-based signals, we could enable a number of interesting applications. For example, we could reduce driver distraction and reduce drowsiness while people are driving, and of course those are the cause of many traffic accidents, right? According to the statistics here, driver error accounts for 87% of all accidents, and fatigue is the seventh leading cause of those crashes. So if we can monitor fatigue, if we can monitor the level of awakeness, we could potentially reduce those accidents.
Another potential application could be helping our students and our employees, in school and at work. If we can monitor their focus level, or its reverse, distraction, then we can help them reduce the distraction and improve focus, right?
Other examples are more related to health care, where epileptic seizure detection and prediction can be of huge use, right? We've talked to many seizure patients who say that if you could predict that a seizure would happen in the next 15 seconds, it would change their quality of life significantly. It could prevent them from breaking a hip while they're walking, or avoid a traffic accident. Because if you can predict that a seizure is about to happen, you can warn them, and they might pull over or just sit down. That would significantly improve their quality of life.
The problem, though, is that all of the existing devices that allow us to capture the brain signal, the eye signal, the muscle signal, and many of the other head-based signals are very clunky. If you want to capture a high-resolution signal, you have to wear something like this, right? And I bet you probably don't want to wear this walking down the street; it's not so socially acceptable yet. Now, if you are willing to accept a lower resolution, it's still not quite socially acceptable, and the bad part is that it captures only a small number of signals, so it's not useful for daily usage.
So we set ourselves a goal: design a system that can capture those signals, right? The brain signal, the eye signal, the muscle signal, among many other signals from the head. At the same time, it should be usable long term, it should be comfortable, and social acceptability is one of the very important factors. And it should be useful: we should be able to use this device to monitor those signals continuously, day after day, right?
That's where we introduce our system called Earable, or Earable Computing. Earable stands for ear wearable, right? It's a device for brain monitoring, head-based monitoring, and stimulation. It can come in three different form factors. One, it could be an in-ear sensor: you can think of it as an earpiece that you put inside the ear. Or it could be a behind-the-ear form, where a number of electrodes come in from behind your ear. Or the combination of the two.
So the device would have, excuse my text here, a series of electrodes that serve as sensing electrodes and also as stimulating electrodes. The sensing electrodes help us capture various signals on the head, while the stimulating electrodes send out signals to stimulate the brain and the head of the person. There are two different form factors for the device: one is a smart earphone worn inside the ear, and the other is the behind-the-ear form factor, right?
So within the scope of this talk, I'm gonna be covering Earable, where I'll mainly talk about a few different applications, ranging from sleep tracking, to using Earable for human-computer interaction, to monitoring different types of vital signals. Then I'll briefly talk about other applications we have used Earable for, and discuss ongoing and future work. I will also cover other topics that we have in our lab very, very briefly.
So first: Earable was started four years ago, when we began looking at the sleep problem. This is an actual picture that I took when I was visiting a children's hospital four years ago. The kid was wrapped up with 32 sensors for sleep quality monitoring, and this is the gold standard being done in hospitals. It costs anywhere from $3,000 to $5,000, and if you go somewhere in California, Stanford as an example, it costs $17,000 per study.
And not only that: you are wrapped up with 30-something sensors, people ask you to sleep normally in the hospital, and you have to travel to the hospital. It's very inaccessible, it's obtrusive, it's expensive as I mentioned, and you can't do it frequently. Insurance only pays for this once every six months at most, and that's with the best insurance possible, right? And of course, they ask you to sleep normally, which is the hardest part, right? Because with all of that on, you will unconsciously take those sensors out while you're sleeping.
So we studied the system, and what we found is that there are three key signals that need to be captured in order for those gold-standard devices to work: namely, the brain signal, the eye signal, and the muscle signal. Then we asked: can we make the user experience better? Can we reduce the cost? Can we make it more comfortable, yet still have a very high level of accuracy, right? That's when we introduced Earable: we started looking at making a device that can monitor these signals from inside the ear, with sleep classification algorithms that stage sleep at a level of accuracy as high as the gold standard you would normally find in a hospital.
What the system does is capture the bioelectrical signals from the ear and send that data out to a host device, such as a phone or a laptop, which then performs sleep stage classification, right? The device that we're making is inexpensive and comfortable; we designed it to be soft, and it also has a high level of accuracy.
So that's when we built our very first version of Earable, right? This is what it looked like: lightweight, inexpensive, in-ear only. We started this out just to see if in-ear is a good form factor. The motivation is that putting a sensor inside the ear has quite a few advantages. One: when you go to sleep, the sensor is inside your ear, so it doesn't have a lot of friction with the pillow and blankets and doesn't cause a lot of discomfort. Plus, we are already very familiar with having something inside or around the ear, because we listen to music, so you don't take it out as you're sleeping.
We actually ran a study at the very beginning where we had people wear some of the devices on their forehead and go to sleep, and what we found is that they would unconsciously take those off. And second, by monitoring from inside the ear, you are close to the signal sources: closer to the eyes, closer to the muscles, and closer to the brain. That was the reason we thought in-ear was a good option to explore.
Now, the problem we found is that inside the ear we have very little space, and the signal we can capture is only one channel, right? You can only get one channel of signal out, because the space is so small that even if you put many electrodes inside, the signals you get would still be essentially the same. That's why this becomes very, very challenging. And of course, the signals of interest are the brain signal, the eye signal, the muscle signal, and the noise.
That is a challenge that cannot be overcome by simple signal processing, like a bandpass filter or amplitude-based filtering, because these signals overlap: they overlap in frequency and they overlap in amplitude. Not only that: from one person to another, the brain signal looks very different. And even for the same person, if you wear the device now, take it out, and put it in again later, the signal might look very, very different. The reason is that every time you put the device inside the ear, the contact quality might be different, and that scales the amplitude up and down, right? So those are the challenges we have to overcome.
So the question is: how do we split those signals, given those challenges, right? What we introduce is a new technique that combines machine learning with non-negative matrix factorization. Consider x, the signal that we capture from inside the ear, as a linear combination of three signals plus noise: the brain, the eye, the muscle, and the noise signal, right? Now, if we look at the spectrum of that signal, there are spectral components that do not change over time, while those main components are weighted by an activation matrix, meaning that at different times a given component can be more dominant or less dominant, right? So basically what we're trying to do is factorize the signal we get by identifying two matrices, W and H, where W is a core matrix representing the spectral characteristics of each signal (the brain, the eyes, and the muscle), while H is a time-dependent activation matrix.
So now this becomes an optimization problem, right? We basically try to factorize the signal so that the product of W and H is as close to the value of x as possible. Now, what's the definition of close? We have to define a distance metric, and the one we use is the Itakura-Saito divergence measure, which has a very nice property: it is insensitive to the signal being scaled up and down. That scaling happens mainly because of the variation every time you put the device in. For example, during training you put the device in and you get one core component matrix, and then later you put it in again and the signal is significantly amplified. This distance measure lets us eliminate that effect; it's amplitude-independent, right?
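To make that concrete, this is the standard Itakura-Saito divergence from the NMF literature (the objective in our actual pipeline may differ in details):

```latex
d_{\mathrm{IS}}(x \mid y) = \frac{x}{y} - \log\frac{x}{y} - 1,
\qquad
D_{\mathrm{IS}}(X \mid WH) = \sum_{f,t} d_{\mathrm{IS}}\big(X_{ft} \mid (WH)_{ft}\big).
```

Its scale invariance, $d_{\mathrm{IS}}(\lambda x \mid \lambda y) = d_{\mathrm{IS}}(x \mid y)$, is exactly why re-inserting the earpiece, which rescales the whole spectrum, does not change the divergence.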
Once we have that, we iterate through a multiplicative-update procedure that keeps refining the factorization so that the product of W and H gets as close to x as possible, right? At a high level, we seek a hyperplane P that separates our training data from the origin with the maximum margin, so that the distance from the product of W and H to x is minimized, right? That is the core of the algorithm, and of course we have to guarantee that it converges, we have to address the local-minimum problem, and we have to be able to choose the number K, which is the number of basis patterns. With that algorithm, we were able to split the three main signals out of the single stream of signal, right?
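As a minimal sketch of that factorization step, here are the standard multiplicative updates for NMF under the Itakura-Saito divergence, following the common formulation in the literature (Fevotte et al.); the array shapes and iteration count are illustrative, not the production code:

```python
import numpy as np

def is_nmf(V, K, n_iter=200, eps=1e-12, seed=0):
    """Factorize a non-negative power spectrogram V (freq x time) as
    V ~= W @ H under the Itakura-Saito divergence, which is invariant
    to rescaling of the signal (e.g., from earpiece re-insertion)."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, K)) + eps   # spectral basis patterns
    H = rng.random((K, T)) + eps   # time-dependent activations
    for _ in range(n_iter):
        Vh = W @ H + eps
        H *= (W.T @ (V * Vh**-2)) / (W.T @ Vh**-1 + eps)
        Vh = W @ H + eps
        W *= ((V * Vh**-2) @ H.T) / (Vh**-1 @ H.T + eps)
    return W, H
```

In a separation setting like the one described, W would typically be learned per source (brain, eye, muscle, noise) from reference recordings, then held fixed at test time while only H is updated; each source is reconstructed from its own columns of W and rows of H.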
The second challenge we had to deal with is actually a design challenge. Our ear is very small, and when you're sleeping, when you lie on one side and press into the pillow, your ear canal actually deforms. That means we need to design the device so that it deforms together with your ear canal; otherwise it will be very uncomfortable, right? It will press against the canal, and if you've ever fallen asleep while wearing an earphone, you've probably experienced that already, right? We try to avoid that by making the device soft and deformable. But at the same time, the device needs very good contact with the ear canal: it has to follow the contour of the ear canal so that it always maintains good contact and can capture the bioelectrical signals.
What we came up with in the very first version: we use viscoelastic foam, and we cover it with pure silver leaf because it has very high conductivity. The problem with pure silver leaf, though, is that it breaks very, very easily. So we reinforce it with a medically compatible conductive gel on top of a conductive cloth, which keeps the silver on top of the viscoelastic foam as it deforms, right? And remember, in order to put this foam inside the ear (if you have ever used foam earplugs, you know this), you have to squeeze it in. If you do that with just silver leaf, it will break apart at the very first crease, right? By doing this, we addressed the problem of making the device deformable and still highly sensitive.
And the nice thing is: when you lie down on one side, your ear canal compresses, and when you later move to a different position, say onto your back, your ear canal can expand and the earpiece will deform with it, going back to its original shape, so that it maintains high-quality contact. That's very important for practical purposes.
Another design we've been exploring, together with our mechanical engineering collaborators, makes use of a smart shape-memory material that has very nice properties. When we talk to our users, one piece of feedback we hear is: "I don't wanna wear an ear plug when I'm sleeping, because I will keep hearing my heartbeat and my breathing sounds." That's because you block up your ear canal, and that's when you start hearing those sounds. Also, if you wear an ear plug, you can't hear sounds from outside, which would be a huge problem if you used the device during the day.
So what we're exploring is a device with a very nice property: it starts out flat, and you roll it up and put it inside the ear, which removes the need to squeeze the earpiece. The nice thing is that this is a temperature-dependent material: once you put it inside the ear, your body temperature heats the device up and expands it, making it follow the contour inside the ear. That makes sure the contact is always of high quality, but at the same time it gives a higher comfort level to people. So that is the second challenge we had to address.
And now we have a system that can capture the brain, eye, and muscle signals and split them apart, right? So we built a sleep stage classification algorithm and tested it on 22 patients. We wanted to see: can we really track people's sleep stages, right? We compared our device with a gold-standard device that is about 200 times more expensive; our device is much, much cheaper. And our accuracy is 95%, right? The reason we can be so accurate is that we capture exactly the types of signals that you normally see in the hospital: the brain, the eyes, and the muscles.
There are devices on the market, for example Fitbit, the Apple Watch, or even phone apps, that use a proxy to infer sleep quality, right? And that's why the accuracy of those devices, and there are many studies showing this, is normally only about 40% or 50%. Ours is way better.
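The talk doesn't spell out the staging features or classifier, so the band-power features and random forest below are placeholders, just to illustrate the shape of such a pipeline on the separated EEG channel:

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch, fs=100):
    """Spectral power per EEG band for one 30-second epoch."""
    freqs, psd = welch(epoch, fs=fs, nperseg=4 * fs)
    return [psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in BANDS.values()]

# epochs: (n_epochs, 30*fs) separated EEG; stages: technician-scored labels
# clf = RandomForestClassifier(n_estimators=200).fit(
#     np.array([band_powers(e) for e in epochs]), stages)
```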
Now I just want to show a quick demo of the device; two demos, actually. In the first, one of my students is wearing a device that he made, and we want to show that when the person relaxes, the alpha wave goes up, right? The device captures the raw signal, and the alpha wave goes up. When we turn on the light and the person opens their eyes, the alpha wave goes down. That shows we can capture your brain signal very, very clearly. Also, when you blink, the device captures those blinking signatures very, very clearly in the signal.
And when you look to the left, you see the signal go down and then up, and when you look to the right, it goes up and then down, because the direction of your eye motion is different, right? When you grind your teeth, like chewing or grinding, we can capture not only the grind itself but also how long you grind, how hard you grind, and the frequency. That would enable eating-habit monitoring, for example, right? Or detecting if you grind your teeth during the night.
The second demo we normally show live, and we may show it live some later time, but it takes a little bit of time to set up, which is why I'm showing a video here. Basically, what we want to show is that we can use this device as a human-computer interaction platform. If a person is wearing the device and goes to sleep, the device detects the alpha wave; when the alpha wave goes up, it signals your smart home to turn off the light, right? And play some soothing music, so that you can fall asleep. When the person wakes up and the alpha wave goes down, the device signals some exciting sounds and makes some coffee, so that you'll be ready for your new day, right?
Another use: in this example, my student Taylor is wearing the device, and when he opens his mouth, we detect that and the drone takes off. When he looks to the left or to the right, the drone flies to the left or to the right accordingly. When he grinds his teeth, the drone moves forward or backward accordingly, right? And when he opens his mouth again, the drone lands. This shows two things: one, we can capture those signals very well, and two, we can have the system work in real time, right? This is an indication of a working system, right?
Now, once we were able to build the system and see that it actually worked well for sleep tracking, we asked: are there any other applications we can use this for? That's when we started building another system called TYTH, or Typing on Your Teeth, right? This one uses the around-the-ear form factor. It mainly asks: is it possible to replace our fingers with our tongue, and replace the keyboard with our teeth, so that you use only your tongue to type on your teeth in order to provide input to your computer system?
All right, so this was motivated by assistive technologies. Back when we started seeing a lot of news about Stephen Hawking, we thought about ALS: even when the body might be completely paralyzed, most of the time the tongue still works, right? So how can we make use of the tongue? That's when we came up with the idea of TYTH, typing on your teeth. The idea is that if I can localize where your tongue is inside your mouth, and if I can detect where you're tapping inside your mouth, then we will know what you want to type, right? And we can use that as a form of human-computer interaction.
And of course, there are other potential uses for this. Interestingly, about half a year ago, when I was visiting another school, after my talk a general in the Air Force came up and said: hey, this can actually be used in what I'm doing, right? He described a very interesting case in the Air Force: when a pilot accelerates really, really quickly, the G-force is so strong that the hands and limbs cannot move anymore. What they want is a way to control something in that scenario, but of course the hands are effectively paralyzed, right? So they want to be able to use our system to control something, at least turn something on or off, in those emergency cases, right?
The second application could be tactical scenarios: if you are a soldier on the battlefield, your hands are already completely occupied. If we give soldiers the ability to use the tongue to interface with other devices around them, that gives them another level of freedom and increases their capacity, right? And last but not least, this can be used by factory workers whose hands are already occupied; now they can use it to turn on a screen or interface with their colleagues.
Now, the intuition for this started from a very simple experiment, and please join me in it. Put a finger behind each ear and pay attention to the motion of the skin behind the ear. If you move your tongue to the left, you'll see that the skin on the left-hand side moves more, while the skin on the right-hand side doesn't move as much. And if you move your tongue the other way, it's reversed. What that means is that the skin deformation behind your ear actually reflects where your tongue is inside your mouth, right? So that was the intuition that got us started building this system.
Briefly, we looked into the anatomy and neurology of tongue motion, right? Your tongue is controlled by the primary motor cortex. When you want to move your tongue, the cortex generates an EEG signal, which drives the EMG signal, the muscle signal that actually moves the tongue. When you tap on your teeth, the sensory cortex feels that and sends a signal back up to the brain.
So if we model that, right: first, when you want to move the tongue, the brain sends a signal down to the tongue, and as the tongue moves, we want to localize in 3D space where the tongue is inside the mouth. Then, when you tap on the teeth, as I mentioned, the sensory cortex sends a signal back up to the brain, and that is the key for us to decide whether the person is tapping on the teeth or not. Remember, there are two main tasks here: one, localize where the tongue is inside the mouth; two, detect whether the tongue is tapping on the teeth, right? So the question now is: where do we measure this, and how do we measure it?
We studied the anatomical structure of the tongue, and we realized there are two groups of muscles that control it. The extrinsic muscles are the group connecting your tongue to the bone, while the intrinsic muscles are the ones within the tongue itself. Of course, to capture the signal from the intrinsic muscles we would have to put something on your tongue, and that's not desirable. So we thought: let's try to capture the signal from the extrinsic muscles, meaning the parts that connect the bone to the tongue. Now, how do we do that? We looked deeper into the structure of those muscles to identify the right location. It turned out, as you can see here, that these extrinsic muscles connect to the tongue close to the ear, right? Because of that, we decided this is the right location for the sensor, and that's why, by just placing a sensor here, we can capture the EMG signals representing the motions of the tongue.
So we performed a first study to confirm that, right? This is an example of the signals we could capture from the forehead, behind the ear, and around the ear. You can see that when the subject (one of my PhD students) moved the tongue, it was reflected very clearly: different tongue positions have different signatures. That was the indication that, yes, the system might work.
So then we went about designing the hardware. As I mentioned, there are three types of signals we need to capture here: the brain signal, or EEG; the muscle signal, EMG; and another type of signal that had not been measured before, for which we coined the term SKD, or skin deformation, right? The key is that EEG and EMG are well understood and we can already capture them, but the SKD is the hard part, right? Because we don't have a sensor for that. So we introduced a different way of sensing it.
Imagine this is your skin, and what we want to capture is the deformation of the skin, which is really the skin's motion, right? Now, if we put a capacitor right behind it (a simple capacitor with a soft material in between), then when the skin moves, the two plates of the capacitor move, meaning the distance between them changes. And when that distance changes, the capacitance changes, because the capacitance value reflects the distance. From that, we were able to capture the skin deformation behind the ear, right?
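The physics behind that sentence is just the parallel-plate capacitor model; as a minimal, idealized sketch (real sensor geometry and dielectrics will deviate, and the values are illustrative):

```python
EPS_0 = 8.854e-12  # vacuum permittivity, F/m

def plate_gap(capacitance, area, eps_r=3.0):
    """Ideal parallel-plate model C = eps_r * EPS_0 * area / d, solved for d."""
    return eps_r * EPS_0 * area / capacitance

def skin_displacement(c_rest, c_now, area, eps_r=3.0):
    """Skin motion = change in plate separation as the skin deforms."""
    return plate_gap(c_rest, area, eps_r) - plate_gap(c_now, area, eps_r)
```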
So that's the hardware design, but there are still a lot of challenges in using those signals, because as you can already infer, the brain signal is really weak while the muscle signal is extremely strong. So how do we split the signals apart? How do we use them effectively so that we can really classify them? We have a series of algorithms that extract the key signatures from the signals, then a set of algorithms that detect the pressure, the tapping of the tongue on the teeth, and then a series of algorithms to localize where the tongue is in the mouth, right?
I'll just go through these briefly. The first set of algorithms extracts the main features of the signal, because the raw signal is noisy. We use a low-rank analysis that builds a dictionary to extract the main structure of the signal. The high-level idea is that if you view the signal x through a dictionary, then using this low-rank algorithm we can learn the dictionary for the signals, representing their key structures, right? As an example: if we have a signal like this, the output of our system, then using the low-rank analysis we were able to capture the EEG and the EMG separately, and of course the noise as well.
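The talk doesn't name the exact dictionary-learning method, so as a stand-in, here is the simplest form of the low-rank idea: keep the dominant singular components of a multichannel recording as structure and treat the residual as noise:

```python
import numpy as np

def low_rank_split(X, rank):
    """Split X (channels x samples) into a rank-`rank` structure term
    and a residual, via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    structure = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # dominant components
    return structure, X - structure                    # structure, noise
```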
The second algorithm deals with detecting when you are tapping on the teeth. It is challenging because there are two things you need to do: first, be able to say the tongue is moving, and right after that, be able to say the tongue was pressing. The way we do this: when you tap, the signal has a characteristic waveform, where you first see the brain signal sent down and then the press, right? Based on that pattern, we use wavelet transformations to detect that the user is tapping.
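A hedged sketch of that detection step: the wavelet family, decomposition level, and threshold below are illustrative choices (using PyWavelets), not the values from the paper:

```python
import numpy as np
import pywt

def detect_taps(signal, wavelet="db4", level=4, k=4.0):
    """Flag candidate tongue taps as bursts in a coarse wavelet detail
    band, thresholded against a robust noise estimate (MAD)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    detail = coeffs[1]                                # coarsest detail band
    sigma = np.median(np.abs(detail)) / 0.6745        # robust noise scale
    return np.where(np.abs(detail) > k * sigma)[0]    # burst coefficient indices
```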
The third algorithm classifies which area the person is tapping on. The way this works is that the user first trains the system. Say you have 10 different areas that you want users to use as representations of 10 different inputs; you train for those. So those are the algorithms for typing-area classification.
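A minimal sketch of that per-user calibration step; the toy features and the SVM are assumptions for illustration, not the system's actual design:

```python
import numpy as np
from sklearn.svm import SVC

def tap_features(window):
    """Toy features for one tap window (channels x samples):
    per-channel energy and peak amplitude."""
    return np.concatenate([(window**2).mean(axis=1), np.abs(window).max(axis=1)])

# windows: calibration tap windows; areas: which of the 10 trained spots each hit
# clf = SVC(kernel="rbf").fit(np.array([tap_features(w) for w in windows]), areas)
# predicted_area = clf.predict([tap_features(new_tap)])[0]
```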
Last but not least, there's another challenge: even though during training you can train for those 10 locations, during testing, or during normal usage, you might tap on an area that wasn't in the training. That's why we had to build a typing-area localization algorithm, where we model the mouth, the teeth, and their relationship with the tongue as a 3D space, and we have an algorithm to localize taps within it, right?
With that, we tested our system, and its accuracy is 88%, meaning that out of 10 taps it classifies about 8.8 taps accurately. Of course, this is a very, very simple prototype, right? We just wanted to show that this is a feasible concept. If you improve the classification algorithms and the sensors, the accuracy can go higher.
Now, the third use of ear-based systems is a little bit different. What I'm going to talk about here is a third system: ear-based blood pressure monitoring. Imagine that we can monitor your blood pressure from inside your ear, right? Why do we care? It turns out there are many scenarios in health care that require frequent blood pressure monitoring. Hemodialysis is one example; with hypertension, you need to measure blood pressure very, very frequently; and patients right after an organ transplant also need their blood pressure monitored very frequently.
Now, the current gold standard is this, right? It's cheap, but it's very uncomfortable. The device has to pump the pressure all the way up, and because of that it causes discomfort. And imagine that you have to measure frequently: every time the device pumps up, you would be woken up if you were sleeping, or it would cause you discomfort. That was the main motivation for us to study whether there's a better way, right? So: it's uncomfortable, and it limits movement, because as soon as you move, the measurement becomes inaccurate. We want to be able to measure 24 hours a day.
There are other systems out there that try to measure from your phone, or even try to use glasses to measure, but they're not very accurate, right? What we're aiming for is a system that can capture your blood pressure unobtrusively and comfortably from inside the ear, right?
So we first analyzed, again, the anatomical structure of the ear, and we found something very interesting: the superficial temporal artery actually runs through the ear region. Because it runs through the ear, there's a possibility of extracting something from that artery by just shining light into it, and that's exactly how blood oxygen, SpO2, is monitored, right? So we thought: if we could build a device around the ear that shines light into that artery, and then build algorithms that extract the blood pressure without pumping a lot of air and causing discomfort in your ear, that would be very useful, right? Our goal is to build a system that is comfortable, unobtrusive, accurate, and low cost, right?
So what we do is build a system that sends light into that artery, and we also pump air into the ear through a balloon to change the pressure, so that we can see the pressure response from those arteries.
The challenge here is that, first of all, a lot of the medical quantities are unknown: we don't know what the systolic fraction ratio is inside the ear. With blood pressure, you have diastolic, which is the low value, and systolic, which is the high value, right? To measure the systolic, normally you need to pump in a lot of air. With the arm cuff, for example, you pump a lot of air into the cuff so that it creates an equilibrium between the pressure you pump in and the pressure inside your arm, and that's when you get the systolic. Then you keep reducing the pressure until the point where you don't see any beating anymore; that's another point of equilibrium, which gives you the diastolic, right? Those relationships are known in the literature for the arm and the other positions that have been explored before, but they have never been explored for the ear.
So what we need to do is infer that ratio, the relationship between the applied pressure and the actual systolic level. Second, we need to design the in-ear balloon: we have a balloon going inside the ear, so it needs to be comfortable, and it should be safe for the device to go inside the ear. And we also don't know how the light reflected from inside the ear corresponds to the actual oxygen and pressure changes inside the ear. All of those challenges make the system very difficult to build.
What we came up with is a new algorithm that can detect the systolic point. The key is that normally you pump the pressure very high so that it reaches the equilibrium point; in our case we do not, right? We only pump to the midway, and from that we can infer what the high point is, right? That's a key novelty of our system. We built a blood pressure measurement algorithm to infer the actual blood pressure from that reflection point. I'm not gonna go into detail here because of time.
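Since the details are skipped here, the following is only a generic oscillometric stand-in for the "pump to the midway, infer the peak" idea: fit a textbook Gaussian envelope to the partial inflation sweep and extrapolate its peak. This is an assumed model for illustration, not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import curve_fit

def envelope(p, a, mu, sigma):
    """Gaussian oscillometric envelope: oscillation amplitude vs. balloon
    pressure (a common textbook model, assumed here for illustration)."""
    return a * np.exp(-0.5 * ((p - mu) / sigma) ** 2)

def infer_map_from_partial_sweep(pressures, amplitudes):
    """Fit the envelope to a sweep that stops midway and extrapolate its
    peak; the peak location estimates mean arterial pressure (mmHg),
    from which systolic/diastolic follow via fixed-ratio rules."""
    p0 = [amplitudes.max(), pressures.max(), 20.0]   # rough initial guess
    (a, mu, sigma), _ = curve_fit(envelope, pressures, amplitudes, p0=p0)
    return mu
```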
We built the hardware system; this is the actual physical hardware, built on a flexible PCB circuit that can go inside your ear. This is the balloon: the device has a pump outside that pushes air into the balloon, creating pressure inside the ear. As we keep increasing the pressure, we monitor how the pressure response changes, reflected in the light that we measure coming back from the artery.
So this is the hardware, and this is what it actually looks like, right? It's still a pretty big form factor, but we studied the system on a group of users, and our accuracy is within the FDA approval level, right? The accuracy is plus or minus three [mumbles] millimeters of mercury, which is within the FDA approval range. So that was the third system we built to show the feasibility of the Earable concept, right?
With this hardware we also built other systems. For example, Painometry is a system that objectively quantifies your pain level. The motivation comes from the fact that every day in the US, 130 people die of opioid overdose, and 60% of opioid prescriptions are for chronic pain.
What this means: normally you go to the hospital and they give you an opioid, a pain relief drug. People tend to take more than they should, and then later they go back to the hospital and say, hey, I'm running out of my pain relief, my opioid. And then you need more in order to feel less pain, and it goes through a loop, right? That's what causes the misuse of those opioids. And all of this happens because we normally measure our pain subjectively, right? If there were a way for the doctor to objectively quantify the pain level, [mumbles] the pain drug more appropriately.
So using Earable, we enabled another system called Painometry that can quantify pain. In the interest of time, I'm just gonna briefly talk about the results. We tested this on 31 healthy subjects, and our accuracy in classifying four different levels of pain, ranging from no pain to very painful, is 94%; sorry, the lowest is 95%, right?
What we did is, well, we tortured our subjects. And this was approved, okay. [laughs] We partnered with a pain expert in the psychology department, where we have a pressure machine, a device with a piston that applies pressure to the finger, so that it induces actual pain, and then we try to classify the response. That's the Painometry system enabled by Earable.
The second system is the Awake system, which continuously monitors excessive daytime sleepiness. The idea is that sleepy drivers and pilots, and many narcolepsy patients, have a hard time staying awake. Think about yourself driving 16 or 18 hours straight, right? It's actually very hard to stay awake. So we want a system that can quantify your level of awakeness while you're driving.
Think of this as a system integrated into your earphones, right? You'd be wearing your earphones listening to music while you're driving or working. The system monitors your level of awakeness, or your level of focus if you will, so that it can infer what state you are in and intervene, using audio or some other brain stimulation techniques.
We conducted the studies, and the accuracy of our system is on average 86% in classifying the level of awakeness. We have three levels: awake, microsleep (or drowsy), and sleep. If you sleep for less than 15 seconds while you're driving or doing something else, it's called drowsy; if you sleep longer than 15 seconds, you're considered to be in a sleep state.
So that's another system, and we actually have a few other systems that we're developing using Earable, to create an ecosystem around it. We now have an extensive amount of work extending Earable, under funding from the National Science Foundation.
What we are doing: we're introducing deep brain stimulation techniques that stimulate the deep brain to improve different cognitive functions. We're also studying different sleep improvement techniques; we want not only to monitor your sleep, but also to make sure that you fall asleep faster, sleep deeper, and wake up more refreshed, right? That's another line of work. We're also trying to detect epileptic seizures, and to suppress seizures using brain stimulation.
We built a new operating system for ear-based systems, because we found that such systems pose some very interesting, unique properties. For example, energy and resources are extremely constrained on those earpieces, right? The battery is limited, the space is limited, but at the same time you want to stream the signal out all the time. And what we found is that if you stream it out all the time, most of the energy is used for communication: in one of our studies, 86% of the energy went to communication. So how do we trade off against compression? Should we use a normal event-based OS, or should we use a different model?
What we introduce is a pattern-based OS: we first check whether there is any pattern in the signal worth storing or transferring, right? Only then does it trigger different activities in the device at the OS level, to send the signal out, or to compress it before sending it out.
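As a rough sketch of that pattern-triggered policy (the `detector`, `radio`, and `store` interfaces here are hypothetical, not the actual OS API):

```python
import zlib

def compress(block: bytes) -> bytes:
    """Cheap lossless compression before the radio (illustrative)."""
    return zlib.compress(block)

def on_sample_block(block: bytes, detector, radio, store):
    """Transmit only when a pattern of interest is present; otherwise
    keep a cheap local record, since the radio dominates the energy
    budget when everything is streamed."""
    if detector.matches(block):        # e.g., spindle, blink, seizure onset
        radio.send(compress(block))    # pay the radio cost only when needed
    else:
        store.append(block)            # store (or summarize) locally
```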
The third piece is developing an open hardware-software platform, with some support from Google and a few other partners helping us open up the platform. We want to enrich the Earable ecosystem with different applications. In the interest of time, I'm gonna talk about only two of these: deep brain stimulation, briefly, and enriching the ecosystem.
With deep brain stimulation, of course, there are many benefits if we can do it: from improving cognitive functions, to treating different neural diseases, to enhancing human senses, right, different senses like vision or smell, et cetera. But why do it from inside the ear anyway, right? It turns out that if you can stimulate different parts of your brain, you can improve different functions. For example, if we can stimulate this part, right, it will increase pain relief, so you don't have to take opioids, right? It can help you have better respiratory control: in sleep apnea, one of the problems is that when you sleep, your respiratory system stops working, which is called central apnea. By stimulating the brain in this location, you can actually treat that issue, or prevent choking, right? So there are a lot of benefits in stimulating from inside the ear.
And the reasons for doing it from the ear: one, it's close to the deep brain area. If you put an earpiece inside your ear, it's already past much of the skull, which means you don't have to send as strong a signal as you would from outside. Second, it does not induce effects on the cortical areas. Why is that important?
Think about the brain: if you stimulate from outside the head and you want the signal to make it to a location in the deep brain, you have to send it at much higher power in order to reach the deep brain area. The problem is that, for the signal to still be useful or effective by the time it reaches that area, the signal out here needs to be really strong, and it would burn, or at least affect, all the neurons along the way, right? We need ways to address that problem, and delivering it from the ears can probably avoid that. Also, sensing from inside the ear minimizes the motion noise caused by people's movements, et cetera.
There are devices and techniques out there to stimulate the brain, but they are not portable: big, expensive, and not something you can use every day. So we introduce two different techniques.
One is a deep brain stimulation technique using a multiple-beating effect, which we call TIMNS. The key idea: we beam streams of signal from both sides, and if we can control the phases constructively, then the signal is weak along the way, where the streams are not constructive; but when this array of antennas meets at the deep brain area, if we control the phases of the different antennas well enough, we can amplify the signal there through the constructive effect, while out here it does not affect the brain, right? And the key is not only affecting the area of interest, but also minimizing the unwanted effects, right?
The second approach we're exploring is deep brain stimulation with pulse superposition. The idea is similar: you send two pulses from outside, and where they meet, they interfere constructively or destructively, in the ways that you want, right? So those are the two angles we're exploring for brain stimulation.
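The beating effect itself is easy to see numerically; this sketch uses illustrative carrier frequencies (the actual TIMNS parameters are not given in the talk):

```python
import numpy as np

fs = 100_000                        # sample rate, Hz
t = np.arange(0, 0.5, 1 / fs)       # half a second
f1, f2 = 2000.0, 2010.0             # two carriers, one from each side

s1 = np.sin(2 * np.pi * f1 * t)     # field from one electrode array
s2 = np.sin(2 * np.pi * f2 * t)     # field from the other array

# Where the fields overlap (the deep target) their sum beats at
# |f1 - f2| = 10 Hz: each carrier alone is too fast to drive neurons,
# but the slow envelope at the focus can.
focus = s1 + s2
beat_hz = abs(f1 - f2)              # 10.0
```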
We are also building an ecosystem through Earable, one of the startups, which licensed the Earable technology out of the university. We are now closing a licensing agreement with Zoomy, where they'll use it for personalized education, for improving people's learning, right? We are also partnering with Google, Bose, and ONR, among many other industrial partners, on the Earable side to enable different potential applications, right?
I mentioned epilepsy and narcolepsy; we will also explore focus, and integrate our device into other medical devices such as the CPAP machine, a device that helps you breathe, as well as building out the whole hardware-software platform for other application developers to build on, right?
I'd like to just briefly mention my other research, right? I just spent quite a lot of time talking about mobile healthcare systems, and the in-ear sleep system from a year ago is one of them. We also have PhO2, another system we built that can sense your oxygen level using just your phone, and eBP, which I just mentioned, plus the Awake and Painometry systems, all in the mobile healthcare area, right?
Another set of topics we've been working on is wireless sensing, where we build different techniques to sense using wireless systems, right? For example, with WiSpiro, we tried to capture your breathing activity from afar using radio frequency signals. Think of your access point at home being able to monitor your breathing activity and your body motions, right? We use almost the same techniques, with some modifications in signal processing, for different applications.
Together with Professor Rick Han, we built Matthan, a system that can passively listen for and detect whether a drone is flying around your house, right? This is useful for prisons, where drones actually fly in to drop drugs or smuggled goods, right? They fly across the border, too. It can also be used at airports to detect the presence of a drone.
Another group of topics we've been working on is improving mobile device and mobile system privacy and security. We also have a series of mobile and wireless communication systems, where we build communication techniques for a wearable device to communicate with a touch-enabled device through the touch medium itself, without any type of wireless communication, right?
As for recent interests, we just received an NSF grant on a project we call IoTree: we want to be able to improve the health of a tree by building a wearable for the tree, right? And last but not least, we have a series of topics in mobile human-computer interaction, of which TYTH, which I just talked about, is one example. We also have a completely passive, battery-free system that can detect your hand gestures with just a wrist-worn device, and smart sensors as well.
So, all of this takes a village, right? I'd like to take a moment to thank all of my students, who made this happen; I haven't made anything, they do all the work, right? And also to acknowledge my many collaborators, who've been brainstorming and developing a lot of these projects together with MNS. And with that, I'd like to take questions. Thank you.
[audience applauding]
MAN: Questions?
STUDENT: I'm curious how you chose the locations for the tapping in TYTH.
So there were three different signals that we need to capture: brain, muscle, and the skin deformation. For the brain and the muscle, we already knew from our previous work a year ago that we can capture them from behind the ear. The hard part was where to pick up the skin deformation. That's when we started studying the anatomical structure of the muscles and the connection between the bone and, sorry, the tongue, and that's when we decided that behind the ear is the best place to do that.
STUDENT: I guess I meant the spots on the teeth that you were going to tap with your tongue. How did you choose those specific spots on the teeth?
Oh, I see. So that really depends on the number of characters you need to represent, or the number of unique inputs you want. For example, if you just want to drive a wheelchair, meaning forward, backward, left, right, then you probably need only four different positions: you can roll your tongue back, you can tap on the front, you can tap to the left, you can tap to the right. Those are the four most distinguishable areas. Now if you want 10, it's like what we showed. But if you want a full keyboard, it's gonna be much harder, right? That's how we pick the locations.
STUDENT: So does this work equally well for everybody, based on what you know, or are there differences? I know eye-tracking, for example, depending on the shape of your eyes, whether you wear glasses, and even eye color, things like that can affect tracking. Does this work equally well for men and women? Are there differences between kids and adults?
That's a great question. I guess we have not evaluated extensively enough to be able to answer that. What we've seen is that different people have different head sizes, for example, right? So the same device might work for one person but have really bad contact quality for somebody else. So I think the form factor would benefit from personalization, since different people need different fits. But I guess we can group them into different sizes, similar to when you buy earphones: you get three earpieces that come with them, right? In the same way, we'd probably have to provide medium, small, and big head sizes, and then we can personalize.
STUDENT: Building on John's question, it seems like for some of these systems, because there's machine learning involved, there's a training step. Even if it's just going to be four buttons, you have to train the algorithm with your tongue doing some exercise or whatever, okay? So there must be variability between people. Do you have a sense of how much training, how much data you need in order to specialize to the signals that one person can produce? Or is it generic, so that if you train it on two people, that model will transfer effectively to a thousand users?
So, we have not done that yet, but that's definitely what we really wanna do. In fact, we looked into it probably three years ago, partly because of what you mentioned, but also because, for the signal separation, if we have more information from a larger number of people, then our signal separation algorithm would work better. But the way we design all of our systems, we try to keep the machine learning piece as minimal as possible, and we consider that a space for improvement: we try to design the hardware really well, and then we use very simple machine learning algorithms and techniques. Now, if we have to turn this into a commercial product, in the company we're trying to build, then we have to do that, right? We have to train with a massive number of people with very, very different sets of properties, so that it works without training, because there are different levels of training required. It could be per-purpose training, or per-use training, which we'd like to avoid. But the real goal is to eliminate the need for per-person training, exactly as you mentioned. We need to do that, and I would love to find somebody to help me with that.
[muffled speaking]
Oh, yeah, we can. In the last video that we just showed, right, and also in the live demo: when he looked to the left, the drone flew that way. So yes, we can capture that.
[muffled speaking]
STUDENT: When you tap, like you said, you had like 10 regions in your mouth so you could [mumbles] Also, are you saying that you could combine the eyes and the tongue?
I think we can, yeah, definitely. That's a great suggestion. Jim had a question, right? Was it answered?
STUDENT: Have you thought about doing anything with feedback back to the user, to enable real-time control and adjustment?
Yes. I guess almost all of our systems actually have to work that way. Take Painometry as an example, right? We want to quantify the pain level, but then we want to coach the user: hey, you'll have a pain episode in the next five minutes; don't take too much opioid, it will pass in 10 minutes based on the history. That would help them take less opioid.
That's one example. Another: in the Awake system, or a driver drowsiness detection system, if we see that you're starting to have a microsleep, we will send out an audio signal saying wake up, right, or we can send a light signal, or a very loud noise. Does that answer your question?
STUDENT: I was thinking more about the fact that maybe something changes over time. Like, I turn the machine on, I can't hear as well, there's a lot more noise; is there a way to attenuate some of the signals? [mumbles] Could you turn it into something where I'm actively participating in the...
In the sensing process? I think that'd be a good direction to explore. We have not, yet.
MAN: Okay, one last question.
STUDENT: This time I wonder more about the notion of objective pain measurement.
Yes, yes.
So, when we look into the pain literature, almost all existing pain quantification is subjective, meaning you have a Likert scale from zero to 10, with the faces, right? And that is subjective: how do you feel? You reflect on yourself and you say, okay, I feel like a six, and you pick that. Objective pain measurement here is based on your physiological response, as opposed to asking you questions.
And by that we mean the muscle signals, the muscle changes that we can capture. For example, when you're in pain, there are non-voluntary groups of muscles, three groups: one here, one here, and one in your back, that you don't control. When you're in pain they just clench, and when they clench, an EMG signal is sent out. If we can capture that, we can quantify your pain level objectively.
What we're really trying to do here is objective, but it's actually even better than objective, in the sense that it is personalized. What we're capturing is really your response to pain. Given the same stimulation, one person might respond differently from another, because they might be used to that pain level already; their face might not change as much as somebody feeling that pain for the first time, right? So it is really objective, personalized pain quantification, or sometimes we call it objectively-subjective pain quantification. Does that answer your part of the question?
MAN: One could argue that pain is subjective.
That's why we have that personalization, right? Your response is different from one person to another. But we don't want you to consciously tell us what your pain level is, and that is what we mean by objective.
MAN: I still think there's some flexibility there, because you might be telling me [mumbles] you only think you're feeling pain.
So, your non-voluntary physiological response is not something you control. If we can consistently quantify your pain based on that, that's what we mean by objective.
MAN: I remain skeptical.
And I will keep explaining [mumbles]. Thank you for your questions, thank you.
[audience clapping]
