OK.
So I'm Lauren Thevin, and I will present the paper Creating Accessible Interactive Audio-Tactile Maps and Graphics Using Spatial Augmented Reality, and the [INAUDIBLE] as well. It is mostly about using spatial augmented reality as an authoring tool to create content.
So, first, to come here and reach the conference, you performed some daily life tasks. For instance, you looked on a map for the location of the conference, you looked for a metro station on an underground map, and during the journey, you may have read a paper about the latest virtual reality devices. But how could you do that with a visual impairment? You don't have access to spatial information when it is represented visually, and you can access the text with text-to-speech, but not the figures of your paper.
So how do special education schools make content accessible to people with visual impairment? They use tactile graphics to represent spatial information. This is a raised-line map, where the ink is raised one millimeter above the map when it is printed. You can also use [INAUDIBLE] models or 2D models, for instance, here, of a train station. You can use magnets on an iron board to create streets or buildings in relief and build a map with the students with visual impairment. Or you can draw on German film, which raises the line you draw with a pen one millimeter above the paper.
But this has some limitations. When it is not the [INAUDIBLE] who describes to the student with visual impairment what he or she is touching, the captions are in braille, and that has some limitations that can be overcome. First, braille takes quite a lot of room and cannot be resized, because it is optimized for the fingertip. It is also limited to braille readers, who represent only 20% of people with visual impairment. And it is physical content, so it is static, while dynamic content can foster learning.
So now I will show some related work, because in the HCI community, some work has proposed adding audio to make tactile graphics interactive. You can use a tablet that detects touch beneath a tactile map placed on top of it. You can also use an under-table camera that tracks tactile tokens and fingers, so you can make tactile graphics and maps interactive this way. You can also have objects with embedded electronics, which are audio-tactile objects. Or you can model an object in 3D, add audio annotations, and trigger an audio annotation using a webcam.
But there are some remaining issues. First, most of these systems are not designed to be used with all the existing classroom content in schools for people with visual impairment. And they do not actually provide an authoring tool for people without an IT background. That really hinders the use of such systems by teachers, relatives, and families.
So what we did is follow a participative approach. From the user needs, we propose a system: we directly augment the existing classroom content with audio. For that, we proposed to use Spatial Augmented Reality, or SAR. It is projected augmented reality, so we keep the existing content in use, and users can still use this content without the system in the usual way. We detect touch, and when the user touches any part of the map, it provides audio captions.
And here is our main contribution: we use SAR to create an authoring tool. We use touch interaction, so the user can draw directly on the tactile [INAUDIBLE] with a finger to create an interactive zone, and then record and add any audio caption. As we use the same system for playing the audio-tactile content and for the authoring tool, this also enables direct testing of the created content.
So now, some videos of a teacher using the system to create content. Here, a teacher draws the Peloponnese interactive area with her finger, and the video projector gives visual feedback about the interactive zone. And for the complete process: here, a teacher draws an area around a tactile city, [INAUDIBLE] the feedback.
"Lavrio."
Lavrio, that is the name of the city. And then the teacher can directly test it as a [INAUDIBLE].
"Lavrio."
"Rafina."
So this was really a teacher, and we actually used this content in a real classroom.
So what are the main features? We can draw with the finger when we create content: shapes, where touching anywhere inside the shape plays the audio feedback, and lines, which represent streets, for instance, where feedback plays every time we are close to the line. Then we can add the audio content, either as text that will be rendered by text-to-speech, or directly as a recording with a microphone. And then, when we touch the zone, the audio [INAUDIBLE] content and the audio content are played.
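As an illustration, the two hit tests just described (anywhere inside a shape, or within some tolerance of a line) could be sketched as follows. This is a minimal Python sketch with assumed coordinates and tolerance values, not the actual implementation used in the system:

```python
import math

def point_in_polygon(pt, poly):
    """Ray-casting test: is pt inside the polygon (list of (x, y) vertices)?"""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at y
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def near_polyline(pt, line, tolerance):
    """Is pt within `tolerance` of any segment of the polyline (a street)?"""
    px, py = pt
    for (x1, y1), (x2, y2) in zip(line, line[1:]):
        dx, dy = x2 - x1, y2 - y1
        # parameter of the closest point on the segment, clamped to [0, 1]
        t = 0.0 if dx == dy == 0 else max(0.0, min(1.0,
            ((px - x1) * dx + (py - y1) * dy) / (dx * dx + dy * dy)))
        if math.hypot(px - (x1 + t * dx), py - (y1 + t * dy)) <= tolerance:
            return True
    return False
```

A shape zone would play its caption when `point_in_polygon` is true for a touch, and a line zone when `near_polyline` is true.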
So what does the system look like? We use PapARt, a spatial augmented reality toolkit that provides [INAUDIBLE] and an API. For the feedback, we have the video projector, which gives visual feedback, for instance about an interactive zone. We have speakers that play the audio annotations, and we use tactile maps to make the tactile graphics. As for user input, touch is detected through a depth camera: when we touch, we detect the collision with the tactile graphics. This camera also detects color markers, so we have interactive tangible cards that [INAUDIBLE] let us select whether we want to create a shape or a line, start recording, stop recording, or play the audio feedback. And we can record with a microphone.
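Putting these pieces together, the authoring loop (a tangible card selects the mode; finger touches either draw a zone or trigger its caption) might look roughly like this minimal Python sketch. The class and method names here are hypothetical, and the real system is built on the PapARt toolkit rather than on this code:

```python
from dataclasses import dataclass, field

@dataclass
class Zone:
    points: list = field(default_factory=list)  # finger path drawn on the map
    caption: str = ""                           # audio caption (TTS text or recording id)

@dataclass
class AuthoringTool:
    mode: str = "play"   # set by tangible cards: "shape", "line", or "play"
    zones: list = field(default_factory=list)
    current: Zone = None

    def on_card(self, card):
        """A color-marked tangible card was shown to the camera: switch mode."""
        if card in ("shape", "line"):
            self.current = Zone()            # start a new interactive zone
            self.zones.append(self.current)
        self.mode = card

    def on_touch(self, x, y):
        """The depth camera reported a finger touching the map."""
        if self.mode in ("shape", "line"):
            self.current.points.append((x, y))   # grow the drawn zone
            return None
        # play mode: return the caption of the first zone hit, if any
        for zone in self.zones:
            if self.hit(zone, (x, y)):
                return zone.caption              # would be sent to TTS/speakers
        return None

    @staticmethod
    def hit(zone, pt, tol=10):
        """Simplistic hit test: within tol pixels of any recorded point."""
        return any(abs(pt[0] - x) <= tol and abs(pt[1] - y) <= tol
                   for x, y in zone.points)
```

For instance, showing the "shape" card, tracing a few points, attaching the caption "Lavrio", then showing the "play" card and touching near the traced points would return "Lavrio".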
So here's a picture of the system. This follows some previous work. In a first work, we verified that such audio-tactile content, with this system, was actually accessible to children with visual impairment. Then, in a second study, we provided a graphical-user-interface authoring tool, and we verified that it was usable by the teachers and that the created content was accessible to the students with visual impairment. For that, we used Inkscape, which is a free vector drawing application, so the teacher can directly draw the interactive zones on top of the picture. As it was usable by the teachers and accessible to the students, it will be our baseline in the user study we present.
And then, for this study, we followed a participatory method, which is described in more detail in the paper. It was a six-month design process with revisions of the prototype. We tested it with six scenarios from the teachers, and we used it in classrooms in three countries: France, Greece, and Romania. The system was used entirely by the teacher in inclusive classrooms, with children with and without visual impairment. And the results of an online questionnaire, answered by teachers and by children with and without visual impairment, are described in the paper.
So after all this design, our user study: our objective was to verify the potential of spatial augmented reality to create audio-tactile content. We propose to compare our prototype with a GUI alternative that we had already demonstrated as efficient in our previous work, based on Inkscape, to have a reference baseline. Our first hypothesis was that participants can rapidly create interactive audio-tactile content with SAR compared to the GUI baseline. Here, time is an important component, because if we want teachers to use it to prepare pedagogical content, it must not take them too much time every day to create the content for the following weeks. So here we measure the time to create an interactive zone. The second hypothesis was that content creation is possible without requiring any specific technology-related skills.
For that, we tested with stakeholders who actually use audio-tactile graphics in their professional life, with various technological backgrounds. I will not describe it in this presentation, but we can talk about it after the presentation, and the details are in the paper. And last, we wanted to verify that content creation is easier with SAR than with the GUI baseline. For that, we used the AttrakDiff portfolio to compare.
So who were our participants? We did our experiment with 28 participants: 10 male, 17 female, and one nonbinary participant. The average age was 37 years old. Several participants were cognitive science interns, and five were working specifically in accessibility for people with visual impairment. 16 participants were from a school for people with visual impairment, and five participants were from a braille transcription center.
And so we compared
the two systems.
So here's an example, in video, of a participant creating interactive content with SAR by drawing a zone with a finger, then making a recording. She used--
[INTERPOSING VOICES]
[NON-ENGLISH SPEECH]
She adds all the captions. And then she verifies it's working.
[NON-ENGLISH SPEECH]
So here, with the first system, she annotated the first map in 10 minutes. And then she does the same thing with the graphical user interface baseline, where she draws the interactive zone on the picture and then adds text that will be rendered by text-to-speech later. So actually we have two counterbalanced conditions. One is the map, the first map or the second map, which are controlled in terms of number and type of elements, [INAUDIBLE] lines and shapes. And we have two technological conditions: the GUI and SAR conditions.
For the SAR condition, the setup is the one I previously presented. We also give the participant the captions to be added to the map, and the locations of the captions. And for the GUI condition, we have a computer with keyboard and mouse, and Inkscape open with the background image of the map to annotate.
So the complete protocol was: first, introduction and consent form, and then the first system. All participants start with 10 minutes to learn how to use each system; we have a familiarization tactile graphic that they make interactive with the experimenter. Then they have 10 minutes, with the instruction to create as many interactive elements as possible out of the 20 on the first map they have to augment. Then they evaluate the content with a satisfaction questionnaire and the AttrakDiff questionnaire. And then they do the same with the second system and the second map, before questions regarding personality traits.
So what are the results? Here are the results regarding content creation. On average, spatial augmented reality was always faster than the GUI for creating elements. For instance, on average, whether for shapes or lines, a participant required 32 seconds to create a zone with SAR and 57 seconds with the GUI, and SAR is significantly faster for all element types. One participant didn't create lines at all, because it was too complicated with the GUI. And if we look at completion: with SAR, people can test the content, so a lot of the time they are not creating elements, they are just verifying the content. Still, 12 participants created more than 90% of the 20 elements on the map within 10 minutes, against only four participants in the graphical user interface condition. So significantly more content is also created with SAR: on average, 17% more per participant.
Now regarding user experience, this is the AttrakDiff portfolio. In blue, with a square, is the SAR condition, which is rated as desired. In red, with a circle, is the GUI condition, rated between neutral and task-oriented. So regarding this questionnaire, SAR is better and easier than the GUI.
And as well, here is what the participants said. P9 said SAR shows a simplicity of use. P14 said, "I prefer PapARt because, as always, mastering computers is time-demanding for me." P19 said, "SAR is simple to master," even though they chose the graphical user interface for its input precision, because you can control all the areas point by point. As for the conclusion: SAR is perceived as lacking precision by the [INAUDIBLE] participants. So the GUI is more precise, but participants develop strategies to avoid the shapes that are more complicated, for instance curved lines. With SAR, it's simpler, user-friendly, and it provides pleasure in use, but it is perceived as less precise. We can discuss it [INAUDIBLE] question.
The content can be created with the GUI without the toolkit, but on a computer you cannot directly test the content, while with SAR the content can be directly tested. You can also create the content in the classroom with the students, but it requires a [INAUDIBLE] room. And, finally, the graphical user interface is a computer, so it is a known environment. So users would prefer to use SAR, but they are more confident using the GUI alone. And SAR is playful and [INAUDIBLE], but for now it's quite new.
So thank you very much. Do you have any questions?
[APPLAUSE]
Thank you.
Questions?
Please raise your hand.
I'll start with a question. So this is for the teachers to create content. Once the content is created, it's going to be used by their students, who are non-sighted users, right? Has this been tested, or are you hoping to test this system with them, with the actual users of the system, as opposed to the-- well, the content creators are users too, but the actual end users?
OK. Yes. So we actually already tried this content, in two other studies, to verify it was usable by the students with the same system. Here, it was only about the authoring tool. And then we actually used it with people with visual impairment in the two classrooms. And some people with visual impairment used the same system to create interactive content themselves, because we use tangible cards to select the component and draw with a finger. However, in the classroom we had to add some additional features, because there are a lot of students putting their hands on the table. So we have to detect which fingers are [INAUDIBLE] or not, and remove them.
And with spatial augmented reality, the point is that when you draw with a finger, the drawn zone actually contains the calibration offset. So when you touch the [INAUDIBLE] tactile content, you will get the feedback even if there is an offset, while a sighted user will touch the light feedback-- the color feedback. So it was actually more usable with SAR for people with visual impairments than for sighted users, who rely more on vision.
Thank you.
Do we have any other questions?
Don't be shy.
Thanks for the great work. I was wondering, do you have any plans to add other sensory data? For example, tactile feedback not from touching the surface, but maybe from the object? Or maybe other haptic feedback, or something like that?
Yeah. So actually, this system allows tracking of color markers and position markers. We used them, but we didn't formally evaluate them in the study. We added some objects with a position marker on them, so we can annotate an object, and the annotation follows it. We used that, for instance, for interactive puzzles. We used that for building interactive maps: you set a caption for a building, and then you can move it and build a map with a student with visual impairment prior to visiting a place. So yes, it works with position markers.
Just one more comment, maybe: you could explore different surface qualities, because they are very sensitive with their fingertips. So that might give you a lot of variety in the materials you can use.
Yeah. So actually, that is also one of the advantages of SAR, because we used it with, for instance, real leaves. If you only rely on digital content that is then printed, in 3D or not, then you basically have only one texture, or you have to recreate it. But here we can augment real objects with their own texture.
Thank you.
Thank you.
