- Our last talk is "Combining Shape-Changing Interfaces and Spatial Augmented Reality Enables Extended Object Appearance," and David Lindlbauer from TU Berlin is going to present.
Once again, I totally forgot to tell you,
but you're supposed to
vote for the best talk.
So if you have any talk
you like, you can vote
for as many as you want,
and it's on your app.
- Okay, hi, thanks for the introduction.
In our work, we're combining
shape-changing interfaces
with perspectively corrected 3D graphics,
in our case, spatial augmented reality.
So we argue that this allows
for an extended and
enriched object appearance.
Shape-changing interfaces in general
offer a rich tactile experience.
However, the granularity
is fairly limited.
This means they have a low spatial
as well as temporal resolution.
Even high resolution
shape-changing interfaces
like inFORM do not come
close to the millions
of pixels a regular display provides.
Besides granularity, they are also limited in their speed: first, they're constrained by the physical limits of the actuators, and second, by the requirement that you do not want to startle a user who is touching the device.
Spatial augmented reality, on the other hand, offers high resolution, for projection as well as for displays. Furthermore, it offers the benefit that it's able to display an appearance that is independent of the object's shape.
This means I can
basically project anything
onto my display surface
without having to modify
the physical shape of the object.
However, obviously it does not offer
any kind of dynamic tangible qualities.
If we now look at the
combination of the two,
or more specifically at
the benefits of the two,
it quickly becomes apparent that
they are actually complementary.
So if we combine those two, we think we actually get the best of both worlds.
We provide a conceptual
framework for this combination
to empower designers and
researchers of shape-changing
interfaces who want to
have their devices enriched
with any kind of 3D co-located graphics.
Our framework is inspired by known
techniques from computer graphics.
Researchers there, for example, added perceived detail to geometry without making the geometry more complex. As you can see here with normal mapping, we take a 2D image and a low-resolution shape, combine the two, and get a perceived high-resolution shape.
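To make the normal-mapping analogy concrete, here is a minimal Python sketch; the array shapes and the simple Lambertian shading are illustrative assumptions, not code from the talk. The geometry stays completely flat, and all perceived detail comes from per-pixel normals stored in a texture:

    import numpy as np

    def lambert_shade(normal_map, light_dir):
        # normal_map: (H, W, 3) array of unit normals read from a texture.
        # light_dir:  3-vector pointing toward the light.
        light_dir = np.asarray(light_dir, dtype=float)
        light_dir /= np.linalg.norm(light_dir)
        # Lambertian term N . L per pixel, clamped to [0, 1]: brightness
        # varies as if the surface had fine geometry, while the actual
        # mesh remains a single flat quad.
        return np.clip(normal_map @ light_dir, 0.0, 1.0)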
So we have two main
ingredients for our concept.
One is obviously physical deformation,
so we can deform the
device on a physical level.
Secondly, we have our optical deformation,
which means that I can
project arbitrary objects
onto my device, or I can
project moving textures.
So these are the main
components of our concept.
In our framework, we distilled three distinct concepts.
We have bump maps, animated
texture maps, and shadow maps.
So for bump maps, we combine the physical
with the optical deformation.
We use shape-changing
interfaces for rendering
the core shape of an object,
which is easy to do for them.
On the other hand, we use
spatial augmented reality
for rendering fine details.
This allows for an
increased shape resolution
without actually increasing mechanical
or physical complexity of the device.
Here you see one of our
examples where we actually
project a texture onto the device,
and also different kinds of 3D objects, without changing its actual physical shape.
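The same division of labor can be sketched in a few lines of Python; the Gaussian low-pass filter and the 2x3 actuator grid are assumptions for illustration, not our actual pipeline. The low-frequency part of a target heightfield goes to the actuators, and the fine residual is rendered as projected detail:

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def split_heightfield(target, actuator_grid=(2, 3), sigma=8.0):
        # target: (H, W) heightfield of the desired shape.
        coarse = gaussian_filter(target, sigma)   # core shape: low frequencies
        h, w = target.shape
        # Downsample the core shape to the (coarse) actuator resolution.
        physical = zoom(coarse, (actuator_grid[0] / h, actuator_grid[1] / w))
        residual = target - coarse                # fine detail left for projection
        return physical, residual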
Besides bump maps, we also
use animated texture maps.
So here we use shape-changing interfaces
for rendering low velocity motion.
Spatial augmented reality,
on the other hand,
is used for rendering
high-velocity motion,
which gives us an
increased perceived speed,
again, without any additional
physical complexity.
In the example you see here,
we render the low-velocity
waves with our shape-changing interface,
and the high-velocity
waves and nice textures
with spatial augmented reality.
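As a hedged Python sketch of this temporal split, with made-up frequencies and amplitudes: the slow swell stays within what the actuators can track, while the fast ripples exceed their speed and are projected instead:

    import numpy as np

    def wave_layers(x, t):
        # x: positions along the surface, t: time in seconds.
        physical = 0.5 * np.sin(2 * np.pi * (0.2 * x - 0.3 * t))    # slow swell: actuators
        projected = 0.05 * np.sin(2 * np.pi * (4.0 * x - 5.0 * t))  # fast ripples: projection
        return physical, projected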
The last of our concepts in
this framework are shadow maps
for adding virtual depth to an object.
So the nice thing here is,
the shadow no longer depends
on the actual physical shape of an object,
or on the illumination within a room.
So we can also render virtual occlusion
and illumination without physical change.
Also we can accentuate physical features.
For example, we can make edges
appear sharper, or blur them.
In this example, you see that we actually change the shadow that is cast onto the device without changing anything in the room besides the projection.
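One way to sketch such virtual shading in Python, with an invented edge_gain parameter rather than our actual renderer: the tracked device shape is shaded with an imaginary light, and scaling the contrast makes edges appear sharper or softer:

    import numpy as np

    def virtual_shading(normals, virtual_light, edge_gain=1.0):
        # normals: (H, W, 3) normals of the tracked physical shape.
        # virtual_light: direction of an imaginary light source,
        # independent of the real illumination in the room.
        l = np.asarray(virtual_light, dtype=float)
        l /= np.linalg.norm(l)
        shade = np.clip(normals @ l, 0.0, 1.0)
        # edge_gain > 1 exaggerates shading contrast (sharper edges),
        # edge_gain < 1 softens it (blurred edges).
        return np.clip(0.5 + edge_gain * (shade - 0.5), 0.0, 1.0)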
So these are our three main
concepts in our framework.
Besides that, we also provide an extension
which we called environment maps.
It is used for rendering, for example, perceived transparency. However, in contrast to our other concepts, this requires either knowledge of or control over the environment. That would not be available if we equipped our device with, for example, displays, but we can do it since we are using projection mapping.
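A minimal sketch of perceived transparency in Python, assuming we already have an image of what is behind the device (exactly the environment knowledge this concept requires); the parallax shift is a crude stand-in for proper view-dependent sampling:

    import numpy as np

    def fake_transparency(background, device_mask, view_offset=(0, 0)):
        # background:  (H, W, 3) captured image of the scene behind the device.
        # device_mask: (H, W) boolean mask of projector pixels on the device.
        dy, dx = view_offset                     # crude per-viewpoint parallax
        shifted = np.roll(np.roll(background, dy, axis=0), dx, axis=1)
        out = background.copy()
        out[device_mask] = shifted[device_mask]  # device shows what is behind it
        return out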
So the combination of dynamic physical and optical shapes allows for an extended object appearance through 3D graphics. Our framework is agnostic of display technology: we used projection mapping here, but if we had equipped the devices with displays, the concepts would still be valid.
It also enables features that are challenging to realize physically, for example, high-frequency textures or high-velocity motion, and it increases the expressivity of a particular device. And lastly, we can get closer to an accurate representation of a desired shape, which is not possible with shape change alone.
Up to this point, I've only talked about how spatial augmented reality can benefit shape-changing interfaces. However, we also explored what happens if we turn this around and ask ourselves the question: how do shape-changing interfaces benefit spatial augmented reality?
One of the things we explored
is view-dependent shape change.
So when you're working
with projection mapping
or spatial augmented reality,
one of the typical problems is cropping.
So if I'm tilting a device, the contents that would fall outside of my display surface are cropped; here, the walls of the game area are cropped.
By altering the physical
shape of the device
based on the viewing angle of the user,
we can overcome this problem.
So here you can see the view-dependent shape change: when the user tilts the device, you can see in the upper right that the device adapts to the actual position of the user.
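A simplified Python sketch of the idea, not the paper's actual algorithm: given the tracked user and device positions, the actuators tilt the top surface toward the user so that content stays on the display surface instead of being cropped:

    import numpy as np

    def compensate_tilt(user_pos, device_center, actuator_xy, max_height=0.03):
        # user_pos, device_center: 3D positions from tracking (e.g. OptiTrack).
        # actuator_xy: (N, 2) actuator positions on the device, in meters.
        to_user = np.asarray(user_pos, float) - np.asarray(device_center, float)
        to_user /= np.linalg.norm(to_user)
        # Height of a plane with normal 'to_user' above each actuator:
        heights = -(actuator_xy @ to_user[:2]) / max(to_user[2], 1e-6)
        heights -= heights.min()          # actuators can only push upward
        return np.clip(heights, 0.0, max_height)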
So let me briefly walk you through
how we actually implemented this.
As I mentioned, we use projection mapping for co-located graphics and OptiTrack for tracking; you might have noticed the three little markers on the device. Our software controls the actuation of the device.
Our shape-changing tablet is composed of six servo motors for actuation and a flexible top surface 3D printed from NinjaFlex.
This is the interior of the device.
So we use an Arduino and Bluetooth for wireless communication, and a battery, obviously, so that we don't have to wire it.
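As a sketch of the control path in Python, using pyserial; the one-byte-per-servo wire format and the sync byte are assumptions for illustration, since the real protocol is defined by our open-sourced firmware:

    import serial  # pyserial

    def send_actuation(port, levels):
        # levels: six servo targets in [0, 1], one per actuator.
        frame = bytes([255]) + bytes(int(l * 254) for l in levels)  # 255 marks frame start
        with serial.Serial(port, 115200, timeout=1) as conn:
            conn.write(frame)

    # Example: raise the middle of the surface over a Bluetooth serial port.
    send_actuation("/dev/rfcomm0", [0.0, 0.2, 0.5, 0.5, 0.2, 0.0])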
This design is inspired by the work of Rosmos and colleagues, who presented a paper at TEI that used a shape-changing phone.
Besides our projection mapping and our
shape-changing tablet, we
also needed a way to match
the physical to any virtual
target shape, and in the paper,
we detail our mechanical
distance field algorithm.
I'll briefly walk you through it.
Please refer to the paper for details
and a generalization of this algorithm.
So for the input, we take a model of our physical device: a virtual model which features all of its actuation. Our goal is to match the shape of the device to a virtual input, here the blue box.
We had a couple of requirements. One is that we did not want to require a close correspondence between the physical shape and the input target; you can see that the blue box is fairly different from our shape-changing tablet. It should still produce a close-to-optimal fit, and it needs to run in real time.
Furthermore, we wanted it to
work for 3D transformation.
A typical implementation of matching physical and virtual shapes is to use height field data in order to know how a device needs to be actuated. However, this basically only works for two dimensions.
So how we achieve this is we first voxelize our base shape and all its actuation levels into a three-dimensional voxel grid. This voxel grid encodes the dimensions and states of the device: by dimension, I mean the actual actuators, and the state is the level of actuation of one particular dimension. So each voxel essentially stores which actuator needs to be actuated how much in order for the voxel to be covered.
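In Python, the offline encoding step could look roughly like this; the covered() occupancy function and the discrete level set are assumptions, and the paper gives the real, generalized algorithm. For each actuator, each voxel stores the lowest actuation level at which it becomes covered:

    import numpy as np

    def encode_device(covered, n_actuators, levels, grid_dims):
        # covered(j, level) -> boolean grid of shape grid_dims: which voxels
        # actuator j's surface region occupies at that actuation level.
        min_level = np.full((n_actuators,) + tuple(grid_dims), np.inf)
        for j in range(n_actuators):
            for lv in levels:                  # ascending, e.g. np.linspace(0, 1, 16)
                occ = covered(j, lv)
                newly = occ & np.isinf(min_level[j])
                min_level[j][newly] = lv       # lowest level that covers this voxel
        return min_level                       # runs once, offline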
Once we have encoded all the information, all we have to do is the matching. We basically intersect our virtual model with our voxel grid, which holds the information about which actuator needs to be actuated, and this allows the device to adapt correctly.
The nice thing about this algorithm is that a large part of it can be pre-computed: the voxelization and the encoding only need to be done once, and this can be performed offline. The matching, since it's essentially a simple lookup, can be performed in real time.
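The real-time matching step then reduces to a lookup, sketched here under the same assumptions as the encoding above: intersect the voxelized target with the grid and, per actuator, take the largest required level among the target's voxels:

    import numpy as np

    def match_target(min_level, target_occupancy):
        # min_level: output of encode_device(); target_occupancy: boolean
        # grid of the voxelized virtual target (e.g. the blue box).
        out = np.zeros(min_level.shape[0])
        for j in range(min_level.shape[0]):
            required = min_level[j][target_occupancy]
            required = required[np.isfinite(required)]  # skip unreachable voxels
            if required.size:
                out[j] = required.max()  # level that makes the surface reach the target
        return out                       # per-actuator actuation levels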
As I said, we wanted it to work for three dimensions, so we also created this hypothetical shape-changing cube, which can transform across all three dimensions. By voxelizing it and encoding the information, it also adapts correctly to our virtual input, the blue box.
We created three applications which I want
to show you to showcase our concept.
The first is a labyrinth
game where the physical state
of the device adapts to
one of its game elements,
so in this case the ball.
The other textures are rendered
through spatial augmented reality.
This allows for a better gaming experience and more immersion, because the user can actually feel the game while still having high-resolution graphics.
Secondly, we created a spatial
navigation application.
The physical state of
the device is controlled
by a view-dependent
shape change algorithm.
So users can explore a map; here, for example, the mountains.
And lastly, we created an
ambient display application
which displays the wind and
weather of our nice island here,
and as soon as the weather gets rougher, the waves are rendered physically through the shape-changing device so the user can feel them, and the waves and the wind are also rendered through spatial augmented reality.
So to conclude, we combined
dynamic physical interfaces
with 3D graphics, and this gives
us the best of both worlds,
which are rich tangible qualities
and high resolution and high speed.
We think it's very important
if you're designing
any kind of device to
focus on both the physical
as well as the optical appearance.
We think that future devices will feature,
for example, wrap-around OLED displays
for an even higher resolution.
Our framework is inspired
by computer graphics,
and is display agnostic, so you can use it with projection mapping, as we do, or with displays.
Our implementation is
based on projection mapping
and our mechanical distance
field algorithm allows us
to match physical to virtual targets.
So the last thing I wanted to say is that we open-sourced our hardware as well as the mechanical distance field algorithm at this address, so please check it out and play with it.
Feedback is very welcome.
With that, I want to thank
you for your attention.
I would be happy to answer your questions.
(audience applauds)
- [Man] Roel Vertegaal, Queen's University. Very nice work.
I'm assuming you have considered
sticking a real display on there?
- We actually had a prototype with two displays on it; however, they were not flexible enough.
So I think we would have
loved to use displays,
however, we were very much constrained
by the amount it can bend and can deform,
and that would have brought
us into real problems.
We use projection mapping
because it allows us
to easily prototype, however, as I said,
I think in the future you
really want this to be displays.
- [Man] Okay, we need to
talk, I got you some displays.
- Yeah, absolutely.
- Any other questions?
Are you all ready for coffee, is that it?
Yes, looks like it.
Well first, thanks to our speakers.
That was a great session,
thank you very much.
(audience applauds)
And we now have a coffee
break until 4:30, thank you.
