So good afternoon, everyone. I'm Arthur Nishimoto, a PhD candidate at the University of Illinois at Chicago. I'll be presenting this work, conducted at the Electronic Visualization Laboratory under the direction of my co-author and advisor, Dr. Andrew Johnson, on using augmented reality to extend virtual reality display wall environments.
So the motivation behind this is that large high-resolution display walls are good for collaborative work, where we can have multiple participants in front of these high-resolution display walls: you can make out the details when you're standing close to the wall, but then step back and get the broader overview.
In these sorts of environments, because they are fixed display walls, the field of view and the field of regard are basically restricted by the physical display wall, without the use of some virtual navigation such as panning or zooming.
But a key feature of having these sorts of large display environments is that you get face-to-face interactions between people, so they can more naturally interact with each other, which helps participants have better discussions and make discoveries. In some cases, though, the data extends above or below the walls.
So virtual reality head-mounted displays improve immersion in these environments, particularly when you're exploring three-dimensional spatial datasets. By filling a single participant's field of view and using head tracking, they let the user see more of the virtual environment around them through head rotation, increasing the field of regard so they can look around and see everything.
And while the resolution in these headsets is improving with each generation of HMDs, they're still limited compared to high-resolution display walls. They also reduce the sense of presence, that sense of yourself in the space and, in this case, of your collaborators around you, unless you're using another medium such as avatars to get around that.
Likewise, augmented reality, particularly see-through head-mounted displays, does get around that sense-of-presence issue when speaking with your collaborators, and you still get the ability to look around with the higher field of regard around the space. But the field of view you can see at a given time, as well as the resolution, is typically even lower than, say, virtual reality head-mounted displays.
And so the main goal of this work is to see if there is a way to combine these technologies to get the best of both environments: the high-resolution display walls and the ability to talk to the people around you, while still being able to use an augmented reality device to regain that sense of immersion and field of regard for the data around you that might not be readily available on just the display itself, even if it is at a lower resolution than what's on the display wall.
So just a quick look at some of the related work. The work I'll be focusing on uses these large immersive display walls that tend to do virtual reality work, for example, the CAVE and its successor, CAVE2. Many of these works provide direct comparisons of the same task on these two different devices.
In the first work, two participants are exploring a network-based graph and identifying features in these graphs, and the authors compare the experience in CAVE2 versus a head-mounted display. But in a lot of these, the interaction model is also different: in the CAVE they're using wand pointers to interact, whereas in the head-mounted display they're using a Leap Motion to do more hand-gesture work.
The second paper is a similar idea, this time in a projector-based CAVE. They're looking at interacting in these spaces and, again, they're comparing wand navigation to a mobile phone-based HMD that was used there.
The other area of related work tends to combine augmented reality displays with big displays, with the main purpose of simulating outdoor environments to understand how to better improve augmented reality techniques in a controlled environment. They experiment with really interesting factors, like how perception changes and particularly how latency changes, with more fine control, because they can use the big display in a controlled setup.
And in both of these experiments, the large displays are actually capable of doing 3D stereoscopic graphics. But in this case, since they're simulating outdoor environments that are far away, they don't necessarily need those depth cues.
And so the work I'm more interested in is when you have a 3D virtual environment with virtual objects that are much closer to the user, where those 3D depth cues are much more important.
And so the system used for this experiment is the CAVE2 hybrid reality environment we have at the lab. It was designed as a hybrid approach between high-resolution tiled display walls and the classic immersive projector-based CAVEs.
One of the key features of this environment is that we're surrounding participants with high-resolution stereoscopic visuals greater than any VR HMD at the moment. You have this 24-foot space that they can physically walk around to explore the world, which is also large enough for a group of participants to have that face-to-face interaction.
This is highlighted in the picture above, where we actually had a group of scientists work in CAVE2 for a couple of days. They brought in tables; it was basically a working environment where part of the CAVE held the 2D work the geologists were more familiar with, the maps and other documents. Then on the right half, or even on the full display when they needed to, they had a virtual recreation of a lakebed, based on sonar data, that they were exploring.
In the search for an augmented reality device to add onto this system, the main criterion was that it had to be a see-through display so, again, we could preserve the high resolution and keep that sense of presence with the real world, with yourself and your collaborators. The main device we worked with is the original HoloLens, which was the one most readily available at the time.
But since then, there are certainly other interesting devices we could potentially use, such as the Magic Leap, which has a greater field of view than the HoloLens. There are also other tools that might be useful in these kinds of collaborative settings, whether it's heavy interaction on tablets or maybe lower-cost AR devices, that we may look at for future work.
And so, to combine these two devices and get the best of both, we retrofitted the HoloLens with the retroreflective tracking markers that we normally use for tracking within the CAVE. This was done to improve on the HoloLens' native tracking system.
It also helps minimize the differences as you're trying to compare how this experience would change, since the standard head-mounted glasses also carry these tracking markers. That's pictured in the upper left, where we added the markers. And to preserve the stereoscopic effect, we actually added the same passive stereo lenses on top.
And this is all to give you a general sense of the system. All the inputs come into the CAVE2 tracking system, which has a tracking server and a central master node. The master node generates the virtual world and builds the perspective for all the devices, then feeds that information back to the HoloLens so the perspectives line up, as well as to the computing cluster, which renders the displays.
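To give a rough sense of what that master node does each frame, here's a minimal sketch in Python: one tracked head pose is turned into a view transform and handed to every display device so the perspectives line up. The `HeadPose`, `view_matrix`, and `broadcast_perspectives` names are illustrative assumptions, not the actual CAVE2 or HoloLens APIs; the real system would also handle per-screen projection and network transport.

```python
import math
from dataclasses import dataclass

# Illustrative sketch only: names and structure are assumptions,
# not the real CAVE2/HoloLens interfaces.

@dataclass
class HeadPose:
    x: float          # tracked head position in meters
    y: float
    z: float
    yaw: float        # head rotation about the vertical axis, radians

def view_matrix(pose):
    """Row-major 4x4 view matrix: rotate by -yaw, then translate by -position."""
    c, s = math.cos(-pose.yaw), math.sin(-pose.yaw)
    return [
        [c,   0.0, s,   c * -pose.x + s * -pose.z],
        [0.0, 1.0, 0.0, -pose.y],
        [-s,  0.0, c,   -s * -pose.x + c * -pose.z],
        [0.0, 0.0, 0.0, 1.0],
    ]

def broadcast_perspectives(pose, devices):
    """Compute one consistent view transform per display device."""
    return {name: view_matrix(pose) for name in devices}
```

In a loop, the master node would call `broadcast_perspectives` with the latest tracked pose and send each matrix to the HoloLens and to the cluster nodes driving the wall.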
And so the big question here was whether we could have this augmented experience without a significant impact compared to just using the standard CAVE setup of lightweight glasses and a wand controller.
The experiment we designed was a within-subjects user study to evaluate the impact on a spatial search task with these network-based graphs, to understand how task performance and user behavior changed in the two conditions.
In this case, we had 10 participants for the study, mostly graduate students. Most of them had prior experience with virtual reality, whether in the CAVE or with head-mounted displays. Half of them had some development experience, mostly on the HMD side of things. Half had at least tried a HoloLens before, and three of them had never used a HoloLens headset before.
The task was a visual search of 3D spatial network graphs. The example at the top is actually the training graph they were given. The task was basically to count the number of triangles formed by exactly three nodes in one of the 32 graph structures that they were presented.
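For a concrete sense of the task, here's a small sketch of counting those triangles programmatically in an undirected graph; the edge list is illustrative, not one of the 32 study graphs.

```python
from itertools import combinations

def count_triangles(edges):
    """Count node triples where all three pairwise edges exist."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return sum(
        1
        for a, b, c in combinations(sorted(adj), 3)
        if b in adj[a] and c in adj[a] and c in adj[b]
    )

# Illustrative example graph: two triangles sharing node 3.
edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (3, 5)]
print(count_triangles(edges))  # 2: {1, 2, 3} and {3, 4, 5}
```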
They were given 16 of these graphs without wearing the HoloLens and 16 with. These graphs vary in size and node complexity, but most importantly, they varied in how they were initially presented to the user.
For some of them, the graph would be completely viewable on the CAVE screens to begin with. Some would extend above. Some would extend below. Some would actually do both.
And in most of those cases, since we were very interested in seeing how well the stereoscopic effect worked with the head-mounted display in the CAVE environment, most of these graphs also extended from the projection plane into the CAVE environment, so participants had more of a space where they could walk around the CAVE and look around the graphs.
In both conditions, they still had the option to use the wand controller to do more virtual navigation, using the control stick to rotate and move the graph around.
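As a rough sketch of that control-stick navigation, here's how a stick deflection could map to a yaw rotation of the graph's node positions; the function name, deflection range, and rotation rate are assumptions for illustration, not the actual CAVE2 wand handling.

```python
import math

def rotate_graph(positions, stick_x, dt, degrees_per_sec=45.0):
    """Yaw-rotate node positions by the stick deflection (-1..1) over dt seconds.

    Hypothetical sketch: rate and axis convention are assumed, not taken
    from the actual system.
    """
    angle = math.radians(degrees_per_sec) * stick_x * dt
    c, s = math.cos(angle), math.sin(angle)
    # Rotate each (x, y, z) about the vertical (y) axis.
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in positions]
```

Called once per frame with the current stick value, this keeps the graph spinning smoothly while the stick is held.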
It took about 50 minutes to complete both conditions. We collected log data, basically all the tracked movement going on within the CAVE, the wand button inputs, and how they were moving the graphs around. And then there was a number of surveys collected, including some information before the study.
After each condition, we asked a number of questions to assess physical and mental demand, presence, realism, comfort, and sickness, as well as holding a short semi-structured interview session to get more interesting feedback from the participants.
For the independent variable, the control condition had a participant explore these graphs using the standard CAVE2 glasses and wand, which is pictured at top: just lightweight 3D passive glasses, the same as you'd see in a movie theater, with a PlayStation navigation controller as the wand, so they could point and rotate the environment around.
In the experimental condition, instead of wearing the lightweight glasses, they'd have the HoloLens, but they were still able to use the wand controller exactly as they could in the other condition.
The dependent variables were task accuracy and completion time, as well as several metrics on how they were moving and where they were looking, both physically and virtually.
So just a quick overview of the results: in terms of completion time and accuracy, we didn't find any significant differences between the two conditions, which we found rather encouraging, since one of the biggest criticisms of wearing the HoloLens was the discomfort, even with its kind of limited field of view.
This was the first iteration of this system, and there was a decent latency between when the participant would move their head and the graphics lining up, around 365 milliseconds, with a bit of a range depending on various conditions, which we're hoping to improve in future work.
In terms of movement, we noticed that participants who had the HoloLens on would use the wand to virtually navigate significantly less in the dataset compared to the CAVE2 condition. So they were looking around more versus flying around.
And this connects with some of the interview responses: when they had the HoloLens on, it was easier for them to maintain spatial awareness and not have to move back and forth to get a better sense of, oh, did I count this section of the graph already, or do I have to recount all the nodes?
In terms of the survey responses, the only significance we found was that the HoloLens was more uncomfortable than wearing the glasses. For physical and mental demand, we didn't find any significance.
And the HoloLens did not significantly impact the sense of presence in CAVE2, which was also kind of nice to see, given that the HoloLens does change the contrast when wearing it, and there are some limitations to its field of view compared to just wearing the glasses.
Quickly, some of the interesting interview responses. I've already mentioned it was easier to count in the HoloLens. One of the more interesting notions is that CAVE2 is built of these large tiled displays and has display borders, and most people find that grid distracting at first, though usually they adjust as they get used to the environment. But in terms of the search task, the grid actually became useful for partitioning the space, and things became more distracting with the HoloLens, which is just a flat display that didn't have that grid. It's like, oh, the grid is gone, and they found that distracting, which I thought was an interesting perceptual response.
So, just to recap: this is preliminary work in combining these large VR display walls with augmented reality, trying to get the best of both of these devices, and we developed a framework that fairly reliably got these pieces working together without a significant user impact.
Future work would be to explore this even further as part of my PhD dissertation, to use this with more complex datasets, and to examine multi-user interaction within this framework.
Thank you.
[APPLAUSE]
Question?
Hi. [? Wallace, ?]
Virginia Tech.
Do you think you saw any
effect due to the fact
that the HoloLens is kind
of heavier than the glasses
that you used in the
control condition?
Did any participant do
anything like, oh, I
avoided looking up and down
too much because the thing is
heavy on my head?
Yeah, so in addition to the comfort ratings, a number of participants said they were less inclined to look up and down, particularly in the graphs, because of the weight of the headset. Although I believe a significant portion still at least tried; even though some of the participants said they didn't like the HoloLens at all in that condition, overall they still at least made attempts to look up and, particularly, down.
And something I think I'll explore later is whether there was a difference between looking up versus looking down, again because of the weight differential; that could also be interesting.
Any other questions?
[INAUDIBLE] So you compared the CAVE-only and the CAVE-plus-HMD conditions; how about an HMD-only condition?
Right, absolutely. That is something I want to explore in future work, to get a better sense of how this would compare, and it would be a better comparison to some of the prior work that was specifically looking at the interactions. But, again, this is the first step for this combined AR display. I think in future work, particularly for my dissertation, I do intend to explore the HMD-only case as well.
Any other questions?
[INAUDIBLE]
[APPLAUSE]
