- So hello, everyone.
My name is Susanne Schmidt
from the University of Hamburg.
And I'm very happy to be here today
and to show you the results
of one of our latest research projects.
And actually this project
is part of a bigger idea
to build what we call a blended space.
So first, I would like to show you what we mean by this concept of a blended space.
A blended space is an environment
with physical, real objects
that can be transformed into any state
within the reality-virtuality continuum.
So for example, a natural history museum could take one of their exhibition rooms, place a real exhibit inside this space, and then augment it. That could mean augmenting only small parts of the exhibit, like highlighting some body parts, or showing the entire skin virtually, so what the dinosaur looked like, or even showing in this space the context the dinosaur lived in. All of this could be done in the same space, within the same spatial boundaries.
And the second special feature of a blended space is that all of these states, or the transitions between all of these states, have to be seamless. So a user does not have to take off his (mumbles) and then use a HoloLens if he wants to transition from a VR condition to an AR condition, for example.
And the first challenge we were faced with was which kind of technology we could use to build such a blended space
because most of the existing AR hardware
is very limited in terms
of the field of view,
so we could not use a
HoloLens, for example,
to build an immersive virtual environment.
So we decided to use a technology
which is called spatial augmented reality
or projection-based AR
or projection mapping,
which means that you
can mix virtual content
and real objects without
additional displays.
So all of the content
is projected directly
onto the surfaces of the real objects,
and therefore the user does not have to wear big glasses or hold a tablet in his hands, so it's very comfortable for the user.
And since we want to
augment not only the object
but also the environment or the surroundings, we need more than one projector, and so we built some kind of extended cave. We had three walls and the floor as projection surfaces, and in addition we had one projector for the physical object inside this cave, in this case the exhibit, the dinosaur I showed you a few slides before.
And within this cave,
we wanted to show different
kinds of virtual content,
monoscopic content, stereoscopic content, and also view-dependent content,
which was floating inside of the cave.
So we also had to install
a tracking system,
and one of the users was tracked,
or his head position was tracked,
and the view-dependent content
was adapted to his current position,
so the perspective was
correct for this user.
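To make this more concrete, here is a minimal sketch, not the actual implementation from the talk, of how view-dependent content can be kept perspectively correct for a tracked head position, using an off-axis projection for a single projection wall; the function name, wall corners, and head coordinates are all hypothetical.

```python
# Minimal sketch (not the actual system): off-axis perspective projection for
# one projection wall, so that view-dependent content appears perspectively
# correct from the tracked head position. Head and wall corners are assumed
# to be in the same tracker coordinate system, in metres.
import numpy as np

def off_axis_frustum(head, lower_left, lower_right, upper_left,
                     near=0.1, far=100.0):
    """Return (left, right, bottom, top, near, far) for a glFrustum-style setup."""
    # Orthonormal basis of the projection wall.
    right = lower_right - lower_left
    up = upper_left - lower_left
    right = right / np.linalg.norm(right)
    up = up / np.linalg.norm(up)
    normal = np.cross(right, up)        # points from the wall towards the viewer

    # Vectors from the head (eye) to the wall corners.
    to_ll = lower_left - head
    to_lr = lower_right - head
    to_ul = upper_left - head

    dist = -np.dot(to_ll, normal)       # perpendicular distance head -> wall
    left = np.dot(right, to_ll) * near / dist
    right_edge = np.dot(right, to_lr) * near / dist
    bottom = np.dot(up, to_ll) * near / dist
    top = np.dot(up, to_ul) * near / dist
    return left, right_edge, bottom, top, near, far

# Every frame: query the tracked user's head position and rebuild the frustum.
head_pos = np.array([0.3, 1.6, 1.2])    # hypothetical tracked head position
frustum = off_axis_frustum(head_pos,
                           lower_left=np.array([-2.0, 0.0, 0.0]),
                           lower_right=np.array([2.0, 0.0, 0.0]),
                           upper_left=np.array([-2.0, 2.5, 0.0]))
```

In a cave-like setup with several projection surfaces, the same computation would be repeated per wall with that wall's corner positions.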
And inside this extended cave,
the example I showed you
before looked like this,
so this is one of the
states within this continuum
with the highlighted body parts
or with an entire virtual skin,
so the user could move around
this physical skeleton,
which was augmented with the skin,
or he could
see the environment the
dinosaur was living in.
And this technology, spatial augmented reality, had some benefits for our idea of building such a blended space, because we can present reality, virtual reality, and all states in between,
we also have a large field of view
when the user is standing inside
this cave-like environment,
and we have a minimum level
of user instrumentation.
So if we want to show
monoscopic content only,
then the user does not have
to wear glasses at all,
and if we want to show stereoscopic content, then lightweight glasses like in a 3D cinema are sufficient.
So this is a benefit in
comparison to other AR solutions.
Also, in this blended space with spatial augmented reality, we have some kind of shared space, both virtually and also in reality, so multiple users can see the same content, the physical content and the virtual content. And we found in previous experiments or demos that people really appreciate this. And I think also in the keynote session we saw that this social aspect is very important for public installations.
On the other hand, we also have some disadvantages, or rather challenges, in using spatial augmented reality for this kind of application. First, we need a way to do the transitions between all these states. Usually, if you have a head-mounted display, you also have specific controllers, or with the HoloLens you have predefined gestures. But for spatial augmented reality, there are no integrated input devices that people already know, so they would have to learn them, and we didn't want them to learn new gestures or new input devices. So we thought about how we could solve this without new input devices that have to be learned.
Another challenge, which is very specific to our projector systems, is self-shadowing. Objects cast shadows in such systems, sometimes even onto parts of the same object. We cannot completely solve this, but we can try to guide the users to favorable places. If they are very close to a projector, the self-shadows are reduced, and this is what we wanted to try.
And last but not least, although it's a shared space, we have the problem that if we want to show the view-dependent content, for example this virtual skin which is floating inside the cave, then it's only a single-user system, because only one user can see it with the correct perspective, and for other users it's distorted, unless they also move to the same spot as the tracked user. And this is what we wanted to address with our user interface: we wanted to improve user interaction, user guidance, and also collaboration within such a blended space.
Because of this, we
developed a user interface
and we decided to
project it onto the floor
because we thought it's very intuitive, easy to learn, and not very distracting, so users can still focus on the exhibit, which is in front of them.
And, of course, we also evaluated it.
I will show you this in a second.
And for this floor user interface,
we designed different
user interface elements.
The basic elements are buttons with footstep icons, so the user knows that he has to step onto such a button if he wants to interact with the system. And with these buttons, we were also able to encode some kind of meaning with regard to their position and orientation in the cave.
So for example, if we want to show a scene
where a user has to observe the object
from a very near distance,
then we can also put the button
very close to the exhibit,
but if he wants to get
an overview, for example,
for this context scene,
then it makes more sense
to place the button further away.
When a user entered a button, a progress bar appeared, and afterwards a 2D floor map expanded. This floor map was like a safe walking space or walking area: when a user was walking inside this area, we could make sure that he gets good viewpoints, so the self-shadowing was reduced to a minimum within this walking area, and also, from a narrative point of view, we could make sure that he always sees the exhibit from the points which are most interesting for this specific scene.
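As a rough illustration of how such a dwell-activated floor button might behave, here is a minimal sketch; the dwell time, button geometry, and scene names are assumptions, since the talk does not give these details.

```python
# Minimal sketch (assumed behaviour, not the authors' code): dwell-based
# activation of a floor-projected button. While a user's tracked floor
# position stays inside the button circle, a progress value fills up; once
# full, the associated scene starts and its walking area can be projected.
import time

DWELL_SECONDS = 2.0  # assumed dwell time before a button triggers

class FloorButton:
    def __init__(self, center_xy, radius, scene_id):
        self.center = center_xy
        self.radius = radius
        self.scene_id = scene_id
        self.entered_at = None

    def contains(self, pos_xy):
        dx = pos_xy[0] - self.center[0]
        dy = pos_xy[1] - self.center[1]
        return dx * dx + dy * dy <= self.radius ** 2

    def update(self, pos_xy, now=None):
        """Return activation progress in [0, 1]; 1.0 means the button fires."""
        now = time.monotonic() if now is None else now
        if not self.contains(pos_xy):
            self.entered_at = None   # user stepped off: reset the progress bar
            return 0.0
        if self.entered_at is None:
            self.entered_at = now
        return min((now - self.entered_at) / DWELL_SECONDS, 1.0)

# Per frame: feed in the tracked user position projected onto the floor plane.
button = FloorButton(center_xy=(1.5, 0.8), radius=0.4, scene_id="context_scene")
progress = button.update(pos_xy=(1.4, 0.9))
if progress >= 1.0:
    pass  # switch to button.scene_id and project its safe walking area
```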
And we also integrated region-of-interest segments, which you can see on the buttons, and which direct users to the parts of the scene where the action takes place.
And for collaboration, we introduced the master-follower concept. The first user who entered a button became the master, and all of the view-dependent content was adapted to his current position. All other users are followers, and they get a circle around them: when the circle is red, this means that they have a very bad viewpoint for the current scene, so they are directed towards the master user, and when they are close to the master, the circle turns greenish.
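A minimal sketch of this master-follower logic as described; the distance threshold for the red-to-green circle is a hypothetical value, not one reported in the talk.

```python
# Sketch of the master-follower concept from the talk: the first user to
# activate a button becomes the master; every other user is a follower whose
# floor circle turns from red to green as they approach the master.
GOOD_VIEW_DISTANCE = 0.8  # metres; hypothetical threshold

class RoleManager:
    def __init__(self):
        self.master_id = None

    def on_button_activated(self, user_id):
        if self.master_id is None:        # first come, first served
            self.master_id = user_id

    def on_user_left_walking_area(self, user_id):
        if user_id == self.master_id:
            self.master_id = None          # roles are reassigned on the next activation

    def follower_circle_color(self, follower_pos, master_pos):
        dist = ((follower_pos[0] - master_pos[0]) ** 2 +
                (follower_pos[1] - master_pos[1]) ** 2) ** 0.5
        return "green" if dist <= GOOD_VIEW_DISTANCE else "red"
```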
So these are the
different elements we used
and we tested them in a user
study with 40 participants
with ages ranging from 19 to 65.
And we had most participants
from our own department
but also participants with
a non-technical background.
And since we wanted to
test the collaboration,
we decided to use pairs of participants,
so in every experiment session,
two users were using the system together.
And we also decided to balance the groups with respect to whether users knew each other before the experiment or not, because we thought this is very common for museums: sometimes you are visiting a museum with your friends
or with your family,
but it might also be that you
are in front of an exhibit
and you're meeting a stranger,
and we wanted to make sure
that our interface works
for both groups and not only
for people who know each other.
So we simulated the exhibition center,
as I showed you before
with the dinosaur skeleton,
and we compared our
extended user interface
with a basic one, which
only featured buttons
which were pulsating to make sure that the users knew they could enter these buttons. And we compared these two conditions in a between-subjects design.
We collected different subjective and objective measures, but I will only show a subset of them. You can see the basic UI represented by the green bars, and the blue bars represent the extended UI. We had some questions regarding usability and also feedback quality, and as you can see here,
the extended UI performs
significantly better
for feedback usefulness
and also feedback accuracy
in comparison to the basic UI.
And we also investigated collaboration
with some objective measures.
So for example for the
scene with the virtual skin,
it's very important that
both users are close together
so both users, the master
and the follower user,
get a good viewpoint.
So we wanted to reduce the head distance
between the two users.
As you can see here, for the extended UI,
the head distance was lower in general,
but what is even more interesting is that for unfamiliar users, it stays nearly the same as for familiar users. In comparison, for the basic UI, users kept a much larger distance when they were not familiar with each other.
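As a sketch of how this head-distance measure can be computed, assuming each user's head position is taken as the centre of the tracking markers on their glasses (as clarified later in the Q&A) and averaged over a scene:

```python
# Sketch of the head-distance measure: the Euclidean distance between the two
# users' head positions, where each head position is assumed to be the centre
# of the tracking markers on that user's glasses, averaged over one scene.
import numpy as np

def head_distance(markers_a, markers_b):
    """markers_a, markers_b: (N, 3) arrays of 3D marker positions per user."""
    return float(np.linalg.norm(np.mean(markers_a, axis=0) -
                                np.mean(markers_b, axis=0)))

def mean_head_distance(samples):
    """samples: list of (markers_a, markers_b) pairs recorded during a scene."""
    return float(np.mean([head_distance(a, b) for a, b in samples]))
```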
And in the right diagram, you
can see a balancing score.
So in our implementation, we didn't enforce any balancing; we thought it would be best to use a first-come, first-served principle, so the first user who entered a button was the master. If he left the walking area, then the assignment of the roles started again. So the users had to do the assignment of the roles by themselves. And as you can see here, for the extended UI this worked very well: a balancing score of one would be best, and in this case it's very close to one. For the basic UI, it didn't work so well, so in some groups one partner was the master all the time and the other one was always the follower.
So the interface elements helped the users
to understand that they
have to balance this somehow
to get the best experience
for all of the users.
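The exact formula for this balancing score is not given in the talk; one plausible version, sketched here as an assumption, is the ratio of the shorter to the longer master time of the two partners, so that a value of one means perfectly balanced roles.

```python
# Sketch of one plausible balancing score (an assumption, not the reported
# formula): the ratio of the two partners' times in the master role, so 1.0
# means both users were master equally long and values near 0 mean one user
# dominated the master role.
def balancing_score(master_time_user_a, master_time_user_b):
    longer = max(master_time_user_a, master_time_user_b)
    shorter = min(master_time_user_a, master_time_user_b)
    if longer == 0:
        return 0.0   # nobody ever became master
    return shorter / longer

print(balancing_score(120.0, 110.0))  # close to 1 -> well balanced
print(balancing_score(230.0, 10.0))   # close to 0 -> one user was master almost always
```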
We also investigated the
communication behavior,
so if you're interested in that, please have a look at the paper. So again, in summary, we implemented and evaluated a user interface for this kind of blended space with floor-projected elements.
And the evaluation was
done with a user study
with 20 pairs of participants,
and results indicated
that the interface is self-explanatory,
that it's easy to use,
that it encourages users
to move closer together.
But we couldn't find
any positive effects on the storytelling,
so we also asked users,
did you understand the
story, was it easy to follow,
and we got high scores
both for the basic and the extended UIs,
so for future work, we could test this
in a more complex scene because
our scene is very simple.
And also, what we didn't test was learning success; it was not a focus of this study, but it would be interesting to see whether users were distracted by the user interface, and therefore maybe learned less than without it. But we hope at least that they are more excited and more interested, and therefore learn more than without the interface.
So that's all.
Thank you a lot for your attention.
And if you have questions,
don't hesitate to ask.
(audience applauding)
- [Host] Any question?
Yes, please.
- [Muhammad] Muhammad,
Saarland University.
I have multiple questions.
So how did you, maybe you mentioned it,
but how did you measure the head distance,
and secondly,
what is the effect of the head distance
on, for example, the storytelling
or the actual experience
overall of the two users?
Because if you get too close,
I guess, it's kind of inconvenient, right?
Thank you.
- Yeah, that's a very good question. So we had some markers on the stereo glasses, or shutter glasses, the users had to wear. And the head distance was just the distance between the centers of these markers. And this was also an observation we made, that sometimes, if they were too close together, it was of course a bit strange for some of the unfamiliar users. So for some scenes it was not important that they are very close, it was only important that they were not too far away from each other.
But for this virtual skin,
if they are only one meter
away from each other,
then one of the two users
would see the skin floating
beside the skeleton
and not around the skeleton.
So for this particular scene,
it was very important for the experience
that they are close together.
But I think for future projects, we also have to think about how we can design the scenes so that they work best for such experiences. So I think for the skeleton in particular, it was a little bit difficult to get a good experience for both users. But what we also saw is that with the extended UI, the users realized that the master user was the one who had the best view, and so they left the scene and said, now you can try it too. So it was still a very social experience, so I think it still helped.
(audience member clears throat)
- [George] Hi, George,
University of British Columbia.
I was wondering, when your users are wearing shutter glasses, couldn't you just render different viewpoints in alternate frames, so each user can get their own viewpoint, with maybe a lower frame rate?
- Yeah, I think there's a very popular paper from Lima; they have these projector systems which can be used for up to six users by using the shutter system. So this is also mentioned in the paper as a possibility to show a different perspective for each user. But first, it's kind of expensive and difficult to use, and we also work with museums. If we can tell them, you can use one off-the-shelf projector, then they say, okay, we can do this, but if we told them, now you need a very complex, expensive hardware setup, then they probably would say, this is not possible.
And also, what I always think is that then you again have these private views for each user. So it's very similar to the HoloLens, for example.
And what I like about this system
is that even if you
have a distorted image,
you know what the other person is seeing.
So actually, for me it's more social
than these private views,
but I understand that it's
maybe a better experience
if each user has the
perspectively correct image.
So I think yeah.
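For context, here is a minimal sketch of the frame-interleaving idea raised in this question: with active shutter glasses, each projected frame is rendered for one user's viewpoint and only that user's glasses are opened for it. The names and the 120 Hz figure are illustrative assumptions, not details of the system from the talk or of the cited paper.

```python
# Sketch of frame-interleaved multi-user viewing with shutter glasses: frames
# are assigned round-robin to users, so each user receives a perspectively
# correct image at projector_hz / num_users frames per second.
def frame_schedule(num_users, num_frames):
    """Round-robin assignment of frame indices to user indices."""
    return [frame % num_users for frame in range(num_frames)]

# With 2 users and a 120 Hz projector, each user effectively sees 60 Hz.
print(frame_schedule(num_users=2, num_frames=8))  # [0, 1, 0, 1, 0, 1, 0, 1]
```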
- [Audience Member] So
correct, they all see the same.
(audience member speaking faintly)
- Yeah, that's.
(audience member speaking faintly)
Mm-hmm.
(audience member speaking faintly)
- [Audience Member] We could do that.
- Yeah, this is the problem, yeah.
(audience member speaking faintly)
Yeah, so actually I didn't test it,
but for me, I thought,
maybe you are not sure
that the other person is
seeing the same as you do.
But, yeah, of course, thank you.
- [Host] Okay, thank you.
- Thank you.
(audience applauding)
