Yeah.
Thanks for the introduction.
I'll just say my name again.
So I'm Patrick Reipschläger
and my colleague
is Raimund Dachselt. And we've
done this work, DesignAR,
immersive 3D modeling
combining augmented reality
with interactive displays.
And this was done at the
Interactive Media Lab Dresden
at the Technische Universität
Dresden in Germany.
So just to give you
a brief overview,
we developed a
modeling application
to create simple 3D objects by
combining an interactive design
workstation with head-coupled
augmented reality, with which
the user interacts using
pen and touch.
And furthermore, we
also investigated
how the AR space at the
borders of the display
can be used to offload menus
and for additional views.
And at the end, I will also
talk about the larger design
space, which we call
augmented displays.
But first, we are not the first
to combine augmented reality
with displays.
So let's have a quick
look at some related work.
There is different work in
the modeling and sketching
domain, where different
forms of displays
are often used to manipulate
AR or stereoscopic 3D objects,
for example DualCAD,
SymbiosisSketch, and Mockup
Builder, or to enable freeform
sketching, like SymbiosisSketch.
There is also work on distributed
user interfaces, where displays
are used to support
interaction with AR
or to provide additional
views and information.
So this is just
a short overview.
For a more detailed overview,
please look into our paper.
And now I will try to explain
what differentiates us
from prior work by giving an
overview of our core concepts.
So our general idea is to
take a tilted multi-touch
and pen-enabled
design workstation
and combine it with head-coupled
stereoscopic augmented reality.
And this creates an AR
modeling environment
which addresses the
lack of immersion
of traditional displays.
And at the same
time, this allows
us to use the precision
of natural pen and touch
interaction, in contrast to,
for example, mid-air interaction.
So our goal was not to challenge
well-established modeling
applications but to explore how
we can use augmented reality
to improve them.
In contrast to related work,
we emphasize the alignment
of the display
and the AR content
to generate the impression
of a single, seamless system
instead of a distributed one.
And furthermore,
like I said, we also
explore how the additional
space gained
by using AR can be used,
for example, to display
additional views or menus.
Of course, an
interesting question
is where do you place
augmented reality objects?
In this regard, we defined
three different levels
of proximity of the AR content
in relation to the display.
The first one is superimposed
objects, placed directly in front
of or behind the display,
which also have
the strongest connection
to the display itself.
The next one are adjacent
objects, arranged at the edges
or close to the
edges of the display.
And the third one
are objects placed
anywhere in the
environment that share
no spatial relation
to the display.
They can have other relations.
And we proposed to use natural
pen and touch interaction
for the first two levels,
but mid-air interaction
for the third level, to interact
independently from the display.
Yeah, we implemented
our concepts
in a prototype, of
course, which you will
see in the following slides.
We use a Microsoft Surface Studio
as the interactive surface
and a Microsoft HoloLens as
the head-mounted AR display,
and both devices run
Unity, in which we also
implemented our prototype.
For communication and
synchronization of the two
applications, we implemented
a dedicated client-server
structure which uses a custom
protocol based on
Open Sound Control and TCP.
And to synchronize
the coordinate systems,
we placed the root
anchor for the AR content
at the bottom left
corner of the display.
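To make this architecture a bit more concrete, here is a minimal Unity C# sketch of a display-anchored coordinate system plus a simple message channel. This is only an illustration, not our actual implementation: the class and field names, the server address, and the newline-delimited text messages (real Open Sound Control uses a binary encoding) are all assumptions.

```csharp
// Illustrative sketch only, not the authors' code. All names are assumptions.
using System.Net.Sockets;
using System.Text;
using UnityEngine;

public class DisplayAnchorSync : MonoBehaviour
{
    // Hypothetical transform tracked at the display's bottom-left corner,
    // e.g. obtained from a HoloLens spatial anchor.
    public Transform displayBottomLeftAnchor;
    public Transform arContentRoot;

    TcpClient client;
    NetworkStream stream;

    void Start()
    {
        // Parent all AR content to the display anchor so both devices
        // share one coordinate system rooted at the display corner.
        arContentRoot.SetParent(displayBottomLeftAnchor, false);

        // Connect to the companion application (address is illustrative).
        client = new TcpClient("192.168.0.10", 9000);
        stream = client.GetStream();
    }

    // Send an OSC-style address plus float arguments; a real implementation
    // would use proper OSC binary framing instead of newline-delimited text.
    public void Send(string address, params float[] args)
    {
        var message = new StringBuilder(address);
        foreach (float a in args) message.Append(' ').Append(a);
        message.Append('\n');

        byte[] payload = Encoding.UTF8.GetBytes(message.ToString());
        stream.Write(payload, 0, payload.Length);
    }
}
```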
So if you have any further
questions regarding
our prototype, I will
gladly answer them later on.
But now I would like to
get back to our concepts
by giving you a quick overview
of our navigation and modeling
techniques.
We decided to use touch input
for all navigation-related tasks
and to interact with menus.
And we also wanted
to provide users
with a simple gesture
set that works
well and is easy to remember.
And this is why we also
used toggle buttons
for mode switches, for example
for translation, rotation,
and scale.
Basically, all
interaction techniques
use one-finger direct
gestures for manipulating
the x- and y-axes and
two-finger direct gestures
for manipulating the z-axis.
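As a rough illustration of this mapping, a Unity C# component could look like the sketch below; the component name, the sensitivity constant, and the two-finger heuristic for the z-axis are assumptions rather than the actual DesignAR code.

```csharp
// Illustrative sketch of the described gesture mapping; names are assumptions.
using UnityEngine;

public enum EditMode { Translate, Rotate, Scale }

public class TouchGestures : MonoBehaviour
{
    public Transform model;
    public EditMode mode = EditMode.Translate; // set via on-screen toggle buttons
    public float sensitivity = 0.001f;

    void Update()
    {
        if (Input.touchCount == 1)
        {
            // One-finger direct gesture manipulates the x- and y-axes.
            Vector2 d = Input.GetTouch(0).deltaPosition * sensitivity;
            Apply(new Vector3(d.x, d.y, 0f));
        }
        else if (Input.touchCount == 2)
        {
            // Two-finger direct gesture manipulates the z-axis; here the
            // averaged vertical movement of both fingers drives it.
            float dz = (Input.GetTouch(0).deltaPosition.y
                      + Input.GetTouch(1).deltaPosition.y) * 0.5f * sensitivity;
            Apply(new Vector3(0f, 0f, dz));
        }
    }

    void Apply(Vector3 v)
    {
        switch (mode)
        {
            case EditMode.Translate: model.Translate(v, Space.World); break;
            case EditMode.Rotate:    model.Rotate(v.y * 100f, v.x * 100f, v.z * 100f); break;
            case EditMode.Scale:     model.localScale += Vector3.one * (v.x + v.y + v.z); break;
        }
    }
}
```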
We use a box modeling
approach for DesignAR,
which means that you have
a rough model that is iteratively
refined by creating
new edges and faces.
And we decided--
Oh, that was a little fast.
We decided to use pen
interaction exclusively
for adding new geometry, that is,
for the modeling functionality itself.
For example, you can
create a new edge simply
by crossing two existing
edges with the pen,
and you can extrude a face by
first selecting it and then
dragging it outward with the pen.
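To give a flavor of what the extrusion does geometrically, here is a small self-contained sketch. It is not the DesignAR code; it assumes a planar face given as an ordered vertex loop, with the pen-drag distance as input.

```csharp
// Illustrative face extrusion sketch; assumes a planar, CCW-ordered face.
using System.Collections.Generic;
using UnityEngine;

public static class ExtrudeFace
{
    // face: ordered vertices of the selected face; distance: pen-drag length.
    // Returns the moved face; sideQuads receives the connecting side faces.
    public static List<Vector3> Extrude(IList<Vector3> face, float distance,
                                        List<Vector3[]> sideQuads)
    {
        // Face normal from the first three vertices.
        Vector3 normal = Vector3.Cross(face[1] - face[0], face[2] - face[0]).normalized;

        // Duplicate the face vertices, offset along the normal.
        var moved = new List<Vector3>(face.Count);
        foreach (Vector3 v in face) moved.Add(v + normal * distance);

        // Connect the old and new vertex rings with side quads.
        for (int i = 0; i < face.Count; i++)
        {
            int j = (i + 1) % face.Count;
            sideQuads.Add(new[] { face[i], face[j], moved[j], moved[i] });
        }
        return moved;
    }
}
```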
An interesting
challenge in this regard
is how you interact
with AR content that
is in front of the display.
For example, users have
to reach through the model
to interact with
the display, which
leads to perception issues.
And we solved this for DesignAR
by switching to a 2D projection
when users start the
interaction and then switching
back to the stereoscopic
AR representation
after the interaction
is finished.
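A minimal sketch of this switch, with assumed object names: while a touch interaction is active, the stereoscopic AR representation is hidden and a flat projection on the display is shown instead.

```csharp
// Illustrative sketch; object names are assumptions.
using UnityEngine;

public class ProjectionSwitcher : MonoBehaviour
{
    public GameObject stereoscopicModel; // AR representation shown by the HMD
    public GameObject flatProjection;    // 2D rendering of the model on the display

    public void OnInteractionStarted()
    {
        // Avoid reaching "through" stereoscopic geometry during touch input.
        stereoscopicModel.SetActive(false);
        flatProjection.SetActive(true);
    }

    public void OnInteractionFinished()
    {
        flatProjection.SetActive(false);
        stereoscopicModel.SetActive(true);
    }
}
```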
So this explains
our core concepts.
I would now like to describe
in more detail the specific
techniques that illustrate
how the AR space can
be used to extend and improve
the view on the display itself.
One important
functionality is of course
to create new models, for which
we propose three approaches.
The first one is a
3D object browser
which uses AR to also
show you a preview
of the previous and
next items, which
also enables you to
see the objects already
in stereoscopic AR.
The next one enables
you to simply sketch
the contour of an object.
And then a rotational
solid is created, again
as an AR object, which
is a very easy way
to create such an object
(a small sketch of this
follows below).
And the third one is to use a
real-world reference by simply
sketching the contours of a
physical model, which is then
converted to an extrusion
object; you can also
manipulate the amount of the
extrusion, like you see now.
Yeah.
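Here is the small sketch promised above for the second approach: generating a rotational solid by sweeping a pen-drawn profile around the y-axis. Class and parameter names are assumptions, and the actual DesignAR geometry code may well differ.

```csharp
// Illustrative surface-of-revolution sketch; names are assumptions.
using System.Collections.Generic;
using UnityEngine;

public static class RevolveMesh
{
    // profile: contour points from the pen stroke as (radius, height) pairs.
    public static Mesh Create(IList<Vector2> profile, int segments = 48)
    {
        var verts = new List<Vector3>();
        var tris = new List<int>();

        // Sweep the profile around the y-axis, one vertex ring per step.
        for (int s = 0; s <= segments; s++)
        {
            float a = 2f * Mathf.PI * s / segments;
            foreach (Vector2 p in profile)
                verts.Add(new Vector3(p.x * Mathf.Cos(a), p.y, p.x * Mathf.Sin(a)));
        }

        // Triangulate each quad between adjacent rings.
        int ring = profile.Count;
        for (int s = 0; s < segments; s++)
            for (int i = 0; i < ring - 1; i++)
            {
                int a0 = s * ring + i, a1 = a0 + 1;
                int b0 = a0 + ring,    b1 = b0 + 1;
                tris.AddRange(new[] { a0, b0, a1, a1, b0, b1 });
            }

        var mesh = new Mesh();
        mesh.SetVertices(verts);
        mesh.SetTriangles(tris, 0);
        mesh.RecalculateNormals();
        return mesh;
    }
}
```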
Another important concept is that
of 2D orthographic wireframe views,
which are useful to reduce
the complexity of a 3D model.
And this is a standard feature
of nearly every 3D application
there is.
But usually they require
a lot of screen space,
so our approach is
to place them in AR space
at the borders of the screen
to maximize the screen space
that you have for modeling.
The position resembles the
corresponding 2D projection
which makes it immediately
obvious which view they show.
The interaction is linked.
So when you change the model,
when you move it for example,
the views update immediately.
And also, you can interact
with the display border
to manipulate the
orthographic views,
for example, to hide them or
to change their rendering mode.
And you can also tilt them
down by doing a pinch
gesture to have a better view.
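One plausible way to realize such live border views in Unity, which is an assumption on our side rather than the confirmed DesignAR setup, is an orthographic camera that renders the model into a texture shown on a quad floating next to the display edge.

```csharp
// Illustrative sketch; component and field names are assumptions.
using UnityEngine;

public class OrthoBorderView : MonoBehaviour
{
    public Camera viewCamera;  // e.g. looking along -z for the front view
    public Renderer viewQuad;  // quad floating at the display border in AR

    void Start()
    {
        viewCamera.orthographic = true;

        // The camera re-renders the live model every frame, so the border
        // view updates immediately whenever the model changes.
        var rt = new RenderTexture(512, 512, 16);
        viewCamera.targetTexture = rt;
        viewQuad.material.mainTexture = rt;
    }

    // Hypothetical pinch-gesture hook: tilt the view down for a better look.
    public void TiltDown(float degrees)
    {
        viewQuad.transform.Rotate(Vector3.right, degrees, Space.Self);
    }
}
```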
We also propose to offload
menus into AR space,
again to maximize the screen
space used for modeling.
You can, for example,
do a one-finger swipe
to the border of the screen
to offload them to AR,
and swipe to the center
of the screen to move them back
to the display.
And I should show you this
as well: of course, you
can interact with them
when they are offloaded.
This is why we
added little handles
at the border of the
screen that you can touch,
for example to toggle them.
But you can also use
the border of the screen
to interact with
more complex widgets.
For example, imagine a 2D
selection task where you first
touch the border of the screen.
Then you can move
your finger up and down
to change the row, and you
can move your finger further
to the left to change the item.
And when you lift your finger,
you trigger the selected item.
So it is a very simple,
easy way to interact
with offloaded menus.
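This interaction boils down to a tiny state machine. The sketch below (assumed names and pixel thresholds, written for a menu along the right display border) maps vertical finger movement to rows, leftward movement to items, and lifting the finger to triggering the selection.

```csharp
// Illustrative border-widget selection sketch; names and thresholds are assumptions.
using UnityEngine;

public class BorderMenuWidget : MonoBehaviour
{
    public string[][] items;        // items[row][column]
    public float rowHeight = 80f;   // pixels per row (illustrative)
    public float columnWidth = 80f; // pixels per column (illustrative)

    Vector2 start;
    int row, column;

    public void OnTouchDown(Vector2 pos)
    {
        start = pos;       // first touch on the display border
        row = column = 0;
    }

    public void OnTouchMoved(Vector2 pos)
    {
        // Moving up/down changes the row.
        row = Mathf.Clamp((int)((pos.y - start.y) / rowHeight), 0, items.Length - 1);
        // Moving further left, away from the border, changes the item.
        column = Mathf.Clamp((int)((start.x - pos.x) / columnWidth),
                             0, items[row].Length - 1);
    }

    public void OnTouchUp()
    {
        Debug.Log("Selected: " + items[row][column]); // trigger the selected item
    }
}
```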
And the last concept
I want to present
makes use of the
available AR space
to embed instances
of the modeled
object directly into
the environment.
They are spatially
independent from the display
so they can be placed anywhere.
And because they
are independent,
they are not
transformed by touch
but by using mid-air interaction
and a dedicated transformation
widget.
But they are still coupled
to the modeled object
on the display.
So that means if you change
the model, the offloaded instance
updates dynamically.
And this is useful to gain
an understanding of how
the model relates to
the real environment.
For example, for 3D printing.
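The coupling itself can be sketched as a simple change notification: each instance placed in the environment subscribes to the model document and swaps in the updated mesh, while keeping its own mid-air pose. Again, all names here are illustrative assumptions.

```csharp
// Illustrative coupling sketch; class names are assumptions.
using System;
using UnityEngine;

public class ModelDocument : MonoBehaviour
{
    public event Action<Mesh> Changed;

    // Called by the modeling code whenever the user edits the model.
    public void NotifyEdited(Mesh updated) => Changed?.Invoke(updated);
}

public class EnvironmentInstance : MonoBehaviour
{
    public ModelDocument document;

    void Start()
    {
        var filter = GetComponent<MeshFilter>();
        // The instance keeps its own position (set via mid-air interaction)
        // but its geometry follows every edit made on the display.
        document.Changed += mesh => filter.sharedMesh = mesh;
    }
}
```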
So this concludes the
DesignAR concept itself.
But we also opened up a
much larger design space,
which we call augmented
displays, which
is not limited to 3D modeling,
and which I would like
to talk a little bit about.
We define augmented
displays as the extension
of non-stereoscopic
interactive surfaces,
like tablets, tabletops,
or display walls,
with two- or
three-dimensional objects
using personal
augmented reality.
What is important
is that the display
serves as a frame of
reference for all associated
augmentations.
So besides our own work,
there are other publications
which, given this
definition, can be
considered augmented displays.
And we are very interested
in exploring this rich design
space of augmented displays,
especially regarding questions
like: what is the
spatial relation
between the augmented reality
content and the display?
You already saw an example
in the proximity levels
I presented earlier.
But this can be
analyzed further.
What role do AR objects play
in relation to the display?
Are they, for example, the
primary focus of the user,
like the modeled
objects in DesignAR?
Or do they play
an auxiliary role
to content that is on
the display itself?
How does the interaction
with the display
manipulate AR objects?
So what is the spatial
coupling, for example?
And how can we use
the screen to define
boundaries for AR objects?
For example, to clip them
or change their behavior.
So yeah.
To summarize, I presented
to you our work,
DesignAR, an immersive
3D modeling application
that combines
head-mounted AR displays
with an interactive surface.
And in the future,
we plan to evaluate
our concepts of using
AR to extend the display
screen in a
formative user study,
and also to pursue the
exploration of the design
space of this exciting
new class of displays
that we call augmented displays.
Yeah.
Thank you for your attention,
and I'm now open for questions.
Thank you very much.
Thank you for the great talk.
This is Andrea
Bianchi from KAIST.
I would like you to
comment a little bit
on offloading some of
the interfaces offscreen.
Like, for example, the orthogonal
projections, which I think
are a great idea.
But given the limitations
of the current technology,
for example the field of
view of the HoloLens,
can people actually see them,
or do you actually have to--
Thank you.
Well, not without moving
your head, obviously.
But having used it, I would say
it works reasonably well right
now.
Of course, having a
larger field of view
would help tremendously.
But we are looking
more into the future.
So the HoloLens 2 is nearly
there, should be there already,
and it will probably
have a larger field of view.
So we don't see that as a
limitation on the concept side.
But of course, it does place
a little limitation
on the practical
implementation here.
But I think this will change
with future technology.
Thank you.
Fabrice Matulic,
Preferred Networks.
And I really like this work.
So it seems that you're
using the AR space,
as far as I understand,
mostly for visualization
and for very basic manipulations
of the objects, right?
So I'm wondering whether
you're not missing the whole 3D
interaction space to provide
some creation and design
tools also in that
3D space, because you
have a fixed surface,
where you actually
use the pen to create your
shapes and do your modeling,
right?
So how would you extend that--
how would you use the
pen in a 3D space?
Maybe using a
tablet: you could just
hold the tablet in the air.
And then depending
on the orientation,
you could use the whole
3D space, or something
like the AR pen, which was
presented at CHI this year.
So I don't know.
Do you think you really need
a fixed surface?
Do designers prefer to
have a fixed surface
to do their creation,
or do you think
you can exploit the 3D space
also for creation and modeling?
It is an interesting question.
I can't really answer whether
designers prefer a fixed
station; at least I do when
I do my modeling.
So this is the baseline.
We of course have to
evaluate that in a real study
with designers, which
we didn't do yet.
But the focus,
in this work at least,
was to use a stationary
display and to explore
how we can expand that.
Using a tablet to actually
do modeling or sketching in AR
is an interesting approach.
But I think it's a very
different approach, which
would require very different
techniques than what
we have done here.
There's a tablet-and-VR paper,
actually, [INAUDIBLE]
presented at CHI also this year.
They do some kind of
very basic modeling.
I know.
I saw that paper.
Just a short additional answer
to your question, Fabrice,
since I'm one of the co-authors.
The notion of using a tablet,
moving it in space, and using
its spatial location and
orientation is of course also
represented in the augmented
displays concept.
So the idea is not fixed
to a stationary display:
any display which
can be augmented,
where you have the precise
interaction on the surface
plus an aligned
or coupled augmented view,
fits this concept
of augmented displays.
So in a sense, yes, you
can also use a tablet.
We didn't envision
VR, but a normal environment
plus augmented reality,
so it could be
possible to use that
as well, as an addition.
Thank you.
So actually, more a couple
of comments than a question,
but I'll put it that way.
One is that the concept
actually will work well.
My first comment is that you
don't use two hands,
and we have two hands.
So one thing you could be
doing with the other hand
is-- we've done some work where
you instrument the display
and put it in a
gimbal, so you can actually
rotate and manipulate
the tablet as a means
to control the rotation
and viewing angle
without having to move
your head all over.
So that's one thing
you could explore.
I think it would augment
that technique very well,
and that's a good
use of the hand.
The use of the
tablet is useful
because it does anchor
the thing in space,
so you've got better memory.
But the other part would be--
again, I think
there are some examples
that might be interesting
if you look at Tovi Grossman's
3D tape drawing examples,
where he does drawings
on a flat surface
but on layers,
so you can stack things up and
then change your orientation.
And he's got some very complex
curves that worked out really
well with that.
And so everything's
better than something
and worse than something else.
This is a really good technique,
and there are some very good
applications where
this would extend really well.
And those two techniques
would be among many
that would help with that.
Yeah.
Keep going.
Thanks.
I think those are good ideas.
Thank you.
Thank you.
And thank you for
presenting, Patrick.
Let's thank
Patrick one more--
