(Archit) Thank you so much for joining us. Welcome to the XR for Change
Talk and Play on Ethics in XR. Thank
you again for joining us; we have an
excellent panel lined up for you
tonight. My name is Archit and I'm the
XR for Change fellow at Games4Change
and before we get started I have a few
housekeeping notes. At the bottom of your
screen you should be able to see a chat
button and you can use this chat feature
to message out to the community. Right
next to it you should also be able to
see a Q&A tab, and you can use this
Q&A tab to post any questions that you
want; we will be answering the
questions in the last half hour. I
want to also remind you that this
session is going to be recorded. And
with that I'm going to pass it over to
the President of Games4Change, Susannah. Take it away. (Susannah) Thanks Archit, and
thanks everyone for joining us- this
afternoon? This evening? Tomorrow morning?
I guess depending on where
you're calling in from. We're really
happy to be having our latest Talk and
Play event. This is an event we would
typically do in person but, as
circumstances have dictated, we have been
holding these online, and they've been
amazing conversations and we're reaching
people from all over the world so I'm
really, really happy that you all chose
to join us today as well. I just want to talk for a
few minutes before I hand the session
over to our moderator, Kent Bye. If any of
you are not yet aware and haven't yet
registered, we have our upcoming
Games4Change Festival, which is happening
online and for free for the first time
ever, July 14th to the 16th.
I think Archit is going to type in a
link for where it is, and I think
there's a slide that he's gonna put up,
but it's okay if we don't do the slide.
But it's the 14th to the 16th, and we're really excited
again for the opportunity to reach
people from all over the world. We have
an incredible lineup for three days. 
In addition to our XR4Change
content, you know, we obviously focus on games
and impact as well and we are covering
topics from health and wellness to XR
and games that are used in learning and
how this medium can be used to grow
awareness on civic and social issues. We
have some fantastic opportunities to
meet and network. We have a fantastic
meet the funder series, if any of you are
looking for funding opportunities, we
have over a dozen different kinds of
funders who are gonna be participating,
from US government agencies to
foundations to VCs, so there are a
lot of really interesting types of
experiences and people to meet at the
event. So please, you know, as I said
earlier to our panelists,
the bar of entry is really low. This
is the first time ever you can join
our festival from your home, and it's
free, and you don't even have to
commit to all three days, right? Find some
sessions that are interesting to you and
I hope you enjoy and join us.
The last thing I will say is we have
another Talk and Play event on Tuesday.
That is with Games4Change founder
Dr. Benjamin Stokes, who's at American
University, and he's doing a launch
with MIT Press of his book, Locally
Played, which is about how games can
help foster playable cities. That's a really interesting
conversation. So, with that, I'm gonna pass the mic over to Kent, who is a good
friend to Games4Change. A little
intro on Kent if you're not aware: he's the
producer of the Voices of VR
podcast and also the writer of the XR
Ethics Manifesto. He's been doing
podcasts for, I guess, six years now and
has conducted over 1,500 podcast
interviews, some of them at Games4Change
a couple of years ago, which we
were psyched about. He's a philosopher,
oral historian and experimental
journalist and I'm sure he's going to
entertain us all and lead us through a
really exciting conversation with an
amazing group of people. So Kent, thank
you for joining us and thank everybody
for participating today. I'll see you
all at the end. (Kent) Awesome, thank you so
much, Susannah. So yeah, like Susannah
said, my name's Kent Bye. I do the Voices of VR
podcast and as I've traveled around to
nearly a hundred different gatherings
over the last six years, I've been in
conversation with the XR community and
it's from those conversations actually
that issues around privacy and ethics
started to organically emerge and as
I've done a bunch of different
interviews about this topic and went
on to help co-organize the VR Privacy
Summit back in 2018. There's a couple
ways that I think about it. One is that
with all new technology it starts to
blur the line of our existing contexts
and it creates new sort of ethical
situations where we don't necessarily
have the normative standards to be able
to understand it or to navigate it yet
and so it's a process of trying to put
language around it and to see how
there's various trade-offs as you try to
merge these different contexts together.
Also, the issues around privacy and, you
know, blending together
certain aspects of what we radiate
from our bodies, you know, that's a whole
other area as well. Ethics, after talking to a lot of people about
this, the whole field is vast and
infinite. I mean, it's really impossible
to do a comprehensive take in
this conversation that we're gonna have
here, and so the best I can say is that
we're gonna try to have each
person share their own slice of what
they're looking at. We're going to have
each of our panelists introduce
themselves, and then everybody who's in
the audience will also likely have their
own perspective on what they're
looking at, and so I look forward to
opening it up to discussion and
questions here after about an hour of us
talking about it and trying to cover as
much ground as we can. And so with that
I'm just gonna hand it over to the
different panelists to go ahead and
introduce themselves and give a little
bit more context as to how you're
connected to this topic of XR ethics.
So, Thomas, why don't you go ahead and start, and
then somebody just jump in afterwards.
(Thomas) Sure. So, my name is Tom Fisk. I am the
editor of Virtual Perceptions, which is a
VR/AR analyst website that
covers what's happening in the
industry. Earlier this year I published a
book called The Immersive Reality
Revolution, where I cover everything when
it comes to immersive tech. I also
dedicated a chapter to the ethics of VR/AR,
which in hindsight is way too short
considering how big this topic is. I've
been in this industry for about four
years, since 2016, and I've just been
following all these amazing people doing
amazing, innovative things. What
particularly interests me when it comes
to ethics is the use of our data. So, how
people collect data, and how the data is
sold and exploited in lots of different
ways, which touches on upcoming VR/AR
devices as well. But that's my
specialism. Thank you. (Kent) Awesome, Gali? (Gali) Hi, so my name is
Galit Ariel. My background is actually a
very nonlinear one that led me to
technology. I started off as a designer of
physical spaces and objects, moved into
experiential design and strategy, and
kind of landed again in human-computer
interaction, focusing mainly on augmented
reality. That, for me, is the best
immersive technology. What
fascinates me with augmented reality
particular but immersive tech at large
is the fact that we are really about
to step into a new realm where the pixel,
the neuron, and the atom will
become a new space. And you know, with
great power comes great responsibility
and as I was interviewing people for my
master's degree research about AR I was
kind of alarmed at how many developers
were admitting that this is one of the
greatest technologies while refusing to
create processes, ethics, and
regulations that would also prevent
misuse of it. So I published a book,
Augmenting Alice: The Future of Identity,
Experience and Reality. I've been giving
a lot of workshops and talks about it
since, I consult for AR gaming
entities, and basically I just want for
us all to have a shared reality and
the technology we all deserve. (Kent) Great, Kavya? (Kavya) Thank you.
I'm Kavya Pearlman. I am the founder of XR Safety Initiative. XR Safety
Initiative started in 2019 and the goal
is very, very simple:
it's to help build safe XR environments.
Now, why did I start XR
Safety Initiative? That is, you know, a
whole long story, but essentially it's because
I feel that I'm uniquely positioned to
bring about this sort of very
much needed change in the industry and
potentially produce solutions by, you
know, collaborating or coordinating with
different entities. And what put me in
this unique position is a
couple of things that happened in my
life. One being, you know, that I found
myself as the chief security
officer, the cybersecurity officer,
for Second Life, which we
all know is the oldest existing virtual
world. Prior to dealing with, you know,
security issues for Second Life, I also
found myself doing third-party security
for Facebook during the 2016 election,
and that gave me another
unique set of skills. The
biggest thing it taught me is
how technology can go wrong if we are not proactive
about, you know, allowing third parties
access to or use of data. We saw in 2016 how data
can be weaponized. So now, when I combine that experience with
what I see right now, what we're doing
with XR,
there is a huge gap here that we need
to fill in, and when we talk about ethics
this gap comes up. There is a gap
between, you know, how do
we perceive ethics? How do we address
these issues? So I think those few
experiences prepared me. And then I, you know,
got connected to so many incredible
people in the industry because, my
background being cybersecurity, I got
connected to the co-founder of XR
Safety Initiative, Ibrahim Baggili. He
was, like, the first person who actually
discovered novel cyber attacks in
virtual reality and is actually the only
person at the moment, along with his team
at the University of New Haven, who can actually
prove what happens, forensically, inside a
virtual environment once things have
been manipulated. So this sort of
establishing of the truth is a
capability that we need to
understand and have as things progress in XR. But yeah, there's so much more
that I could go on and
talk about, but I encourage
people to learn about XR Safety
Initiative, because I feel like, after one
year of this work, we've sort of become
an essential piece of the puzzle for
navigating these uncharted territories.
(Kent) Great, and then M, go ahead and introduce
yourself. (M) Hey, everybody, my name is Em
Lazer-Walker. I use she/her pronouns and I work
as a cloud advocate at Microsoft. My
background is mostly in experimental
games largely using non-traditional
physical interfaces and emerging
technologies, everything that's not
using a controller or keyboard and mouse.
That has often meant sort of making my
own hardware, but it also means using
XR and VR and AR tech. I spent a
bunch of time doing research at the MIT
Media Lab focused on using fiction to
connect people with real-world
spaces and on how we can safely and
ethically use spatial audio in public
spaces to give people immersive
experiences. And since COVID
broke out, a lot of what I have been
focusing on is online virtual
worlds and social spaces, so one lens of
ethics that I'm particularly interested
in is
looking at how we can create spaces that
prevent abuse and harassment and things
like that. I'm also here to sort of
represent the Microsoft side of things. I
think it's interesting to have a large
tech company with some representation in
this panel, although, as I'm sure we'll
get into, Microsoft is a bit weird
compared to Facebook or
Google because we don't really have a
monolithic VR/AR strategy. We have
HoloLens, we have Windows Mixed Reality,
we have a bunch of developer tools and
cloud services. There's less, like, one
way that Microsoft does ethics in VR.
(Kent) Great. So, I know we had a chance to do a
bit of a pre-discussion where we kind of
mapped out a little bit of the topics
that we're interested in. I think what
might be a good approach is to
go through each person's sort of topic
areas, and I'll give a brief summary
before we dive in, just so that everybody
kind of knows; as we go from each
person I'll sort of hand it over to them
to kind of make an opening statement.
But before we do that, one other thing I
just wanted to mention, to help
set a broader context: right
now, there are open questions around what
laws need to be set at a global scale.
There are aspects of what each company has
to do to set their privacy
policies and where they set those
boundaries, but for most of the people that
are probably here, it's more of, like, an
education: learning about what the
risks are. So, as designers, "How do you
create the most ethically aligned
embodiment of XR design?" But also, as consumers, how do we
need to push back, whether it's by our
voting dollars and supporting certain
things or, you know, new economic models
or whatnot. So I think it's also
important to say that there are going
to be a lot of different vectors in
which action could be taken, as we
think about this as a panel for XR for
Change. So, I wanted to just give
a brief overview of what we talked about
and then we'll dive in. So, Thomas was
talking a lot about the use of data and
research and lots of issues there about
what that is. So we'll start there.
I know Gali was talking about, like, the
business aspects: "Who has power?
Who's in control?" And so, maybe, we'll dive into
more of those larger structural
issues there and, you know, let her sort
of make that
statement. And then, I know that Kavya,
you've been doing a lot of stuff with
cyber security as well as with
harassment and then trust and safety and
security. And then, M, like you said, we'll
probably cover a lot of that cyber
harassment stuff there, but, you know, I'd
love to just hear a little bit more
about this whole Microsoft angle of how
this company kind of fits in, especially
its relationship to government
contracts and the government's
relationship to people as well. So that's
sort of an overview. So, with that,
I'm gonna hand it over to Thomas to
kick off with the data aspects.
(Thomas) Thank you.
So, when it comes to the use of data,
we're currently in a society where our data
is farmed and then used in multiple
different ways. The clear example of this is
Facebook, with retargeting of ads and
with pixels that track all our
online activity. My fear is the extension
of that as we use virtual reality or
augmented reality, because the data
becomes more complex and intimate. Last
year, they had two preliminary findings
on the use of brain-reading technology,
which reads very simplistic
instructions from the mind. The technology
is very young.
It's not good enough quite yet to do
much, but give it a few years and
Facebook will start to be able to read what
you are doing. The reason why I'm
cautious is, and I'm talking from a UK
perspective, I also know that, globally, we
haven't really touched on the issues
when it comes to collecting
data with immersive technology,
because there's so much going on with
that. So, with that in mind, I came up with
six principles which I feel encapsulate
what we should do ethically in order to
make sure we are safe.
I'm not sure they'll ever be
implemented, but I feel that in a
utopian world this would happen. First of
all, there should be limited access:
regulators should control which
organizations can access and use user
data. So, for example, political campaigns
would have
restricted access. I know ad companies are
already disclosing when a political ad
is running, but if I'm honest I don't
trust that at all.
I'd rather just completely limit how
much politics uses our data. I also think
there should be transparent design, so
the ethical design of brain
interfaces must be open and
understandable, letting regulators and
agencies fully understand what kind
of data is collected. Because one of the key
issues is that a lot of people don't actually
quite understand how the data is then
used and interpreted. It's a very
opaque system, and in order for it to
work better it needs to be more
transparent for regulators to take a
look into. The third principle is
understandable algorithms: touching
back on my previous point, it just
needs to be open and understandable for
regulators and agencies to see
what's happening. Then the next three
principles are based on the users. I
believe the user should own
their data completely. It is not the
companies who actually own it; it's the user's
right to own the data and
sell it. I know Kent has also mentioned
in his XR Ethics Manifesto that he has
the same views on this topic. Data
should also be open to users: any user
should be able to access their own data to the
broadest extent possible. And then
finally, I believe there should be active
opt-in. Every so often a company may send
you an email saying, "We've updated your
terms and conditions. You may opt out
whenever you want." I believe there should
be active opt-in saying, "Your account
will be blocked unless you read and
understand these principles. You must
tick this." And the reason I believe that
should be the case is because a lot of
users don't actually understand what
they're accepting and how their data is
being used. Those are my six utopian
principles, and I'm happy to hear what
we all think about them. (Kent) I think that-
I'll share a brief thought and sort of
open it up for other folks to jump in-
You know, there's this issue of what data is
recorded and what is captured: our
tracking data, galvanic skin response.
There's going to be
all sorts of information that is going
to be revealing in all sorts of ways.
Now in certain contexts, like a medical
context, that's great because you want to
be able to rehabilitate yourself and so
sometimes it's contextual where
it's okay, but other times, if it's
Facebook having access to that, then it's
obviously not as okay. And so Helen Nissenbaum has this privacy framework called
Contextual Integrity that tries to start
to map out how contextual privacy is.
There's not like a universal definition
where sometimes it's okay and other
times, it's not. So, that's my initial take.
My other just thought, as you say all
those different principles, is that
sometimes, in ethics, it's impossible to
implement a perfect design because
you're always trading off one thing over
the other. So what I'm really interested
in is the different trade-offs or the
dialectics of these things where you can
have a little bit of this but you're
never gonna have like the perfect ideal
situation. Because it's always going to
be some compromise that you have to do. So that's my initial
thoughts but I'd love to hear what other
folks have to say as well. (M) I think
there's a really interesting
question about user freedom and
choosing technologies. If you look
at the web, not all web technology
was made this way; we needed stuff like GDPR
to make this happen, but there are a lot
of things where you get freedom by there
being choice in web browsers. If your web
browser doesn't support the Do Not Track
header, you can switch to one that does.
If your ad blocker doesn't work in a web
browser, you can switch to one that does.
If you don't trust Google or Firefox, you
can compile the browser from scratch. And that is
less viable in XR, where you're not
going to make your own VR headset from
scratch, and I don't quite know how to
solve all these problems in the context of
"We are stuck living in Facebook's or
Valve's world." (Gali) And, for me, I think I
really like what you said about it
being transactional. Because, of course,
this is a system
that exists because policymakers and
companies and developers and users are
all contributing to it. And this is
where, you know, I love the fact that we
talk about ethics.
Because ethics talks more about norms
than regulation, and this is about "How do
we create a mindset where we
prioritize and decide what these
trade-offs are, where they happen, and
with whom they happen?" Because I think
we're only now waking up. Or at least not
us, but the wider audience that
might have been a bit oblivious to
what's happening is waking up. From
talking and doing workshops with
users (I do Utopia/Dystopia workshops), what scares me the
most is that most of them, especially
when I talk about social media, are like,
"Well, you know, what are you going to do?
It's like Black Mirror." So, we've accepted
the fact that this is the way it is, that it
can't be changed,
that this trade-off is the only way to
achieve useful
applications and prosperous societies. And
this narrative is the first thing we
have to peel away: you know, "You have to
trade your privacy for this."
It is one mechanism that works on
many levels, but it creates a lot of
problems. So we have to start the
conversation from the root: you know,
"What if we didn't have all that and we
could build it all again? What
would we do then?" Because we can. This is
the truth: we can. (Kavya) Right. And I really
admire that question: "How do we
solve it?" And I want to add my piece
here, because I've been
looking at this very challenge for
almost two years
now, and XRSI as a collective has
been looking at this overall set of
issues, the ethical issues, data privacy,
cybersecurity, all of these
collective issues that touch, I mean, we
can say the XR domain, but it actually
touches a lot of domains. It touches
healthcare, it touches education, it
touches the travel industry, it touches
almost every domain. Because we know
that this is going to
be our new web, and now we have an
opportunity to possibly get this right.
Now, you know, this question was asked:
"How do we do this?" And I
have so much respect for all
of the people on the panel, and just
outside the panel, trying their level
best to lend some sort of a narrative.
Kent was probably the
first one, and through Kent I heard all
these, you know, risks that have
surfaced through the XR Ethics
Manifesto. Some people call it the
ethical dilemma, and others are saying,
"Hey, we need to
have some more accountability," yada yada yada.
So after about a year, I think it was
February, we brought
together about 12 different
organizations, including some
really key organizations that
do cybersecurity work, people that are
focused on diversity and inclusion, and
many people that are deeply involved
in artificial intelligence
research. I could name
all these organizations, but I
encourage you to go to the
website. We formed a coalition
called the Cyber XR Coalition. And what
we did is we brought these
experts together and looked at this very
problem, this ethical dilemma, and all these other
risks. You know, we've surfaced
them; we understand some of these
challenges. In fact, Gali, I think
earlier today you had retweeted
something about being a
problem solver versus a problem seeker. And
honestly, thus far, we have been problem
seekers, and I think that is also what is
needed for the industry. So, after seeking the
problem, finding these ethical issues, we
established that we need a framework. We
need a framework between, you know, public
and private entities: how should they
collaborate? We need a framework to teach
our educators what kind of research
they should be doing
academically. We need a framework for
users: how should they be aware of what
they are stepping into? Checking the
box is not enough. So then, fast forward,
what we arrived at is that there are some
things we can take as ethical
principles. But trust me, Google started
with "don't be evil",
nice ethics. That's the premise that
everybody starts at, but then let's talk
about how we solve this. We need to now
mandate these things. How do we do this?
We zoomed out a bit. We took a
look at, hey, what about journalism? You
know, journalism has ethics. It has been
around forever. But guess what? When it
comes to taking an ethical stand
about something political, whether
to flag some tweet or not, you
see a shrug. When it comes to whether
your platform is being used to undermine
democracy, the very first thing we see is a
shrug. So, how do we avoid this shrug? We
make it a trust and safety issue and
that's what we did. If you look at the
recent standards that we rolled out, the
Cyber XR standards, we tried to zoom
out from just the ethics. Yes, ethics and
ethical principles remain a key component,
but we need to talk about "How do we
build trust proactively in these
platforms?" And when we start to talk
about that, it encompasses privacy.
We incorporate, you know, ethical
principles into it. We think about
inclusion of all minorities and
races and genders and whatnot. So, that's
what we did, and then we
took the critical
risks that have to be addressed
and put them under a trust and safety
umbrella. So, if you are a CEO, let's say
you're developing a BCI platform, or if
you are an indie developer, you can use
this comprehensive
set of risks, it's a list of risks,
and look at it like, "Okay, these are the
10 things that I must care about." And now
the next piece is to turn them into
regulations or mandates and whatnot. So
what we are trying to do now is build a
bridge between, you know, us and Facebook,
let's say. Between us and the ICO, where
we're working
on some child
safety issues, for example. So, really just
narrowing down different, you know,
paths that we can take now and
advise other people to take to solve
these problems. Because we've all
sort of been in this problem-seeking
mode, and I think the exercise is
releasing standards and saying, you know, this is one way, and this is our way
to solve the problem. And now we're
bringing all these entities to the table
to share more knowledge, to exchange
these things, and to tell us: are we saying
the right thing? And if we are, then you
must adopt it. So now we create
accountability, and I think that's
what I hope our major contribution would
be: to have
people become more accountable,
implement these things, and then
talk about it. (Kent) Well, I wanted to do
one more quick round on the data issue
before we move into like the sort of
larger systemic issues of surveillance
capitalism and the power dynamics that Gali, I'm
sure, will introduce to us. But in terms
of the data, I think there are two quick
points I want to make. One, Tom, on what you
said about active opt-in: there's
a challenge, the trade-off there is that
every single time you go to a new
experience, let's say in WebXR, you have
to give consent for everything all over
again, and you sort of get this opt-in
fatigue like we have from GDPR. And
it's not just
to see the website; it's like, "Can we
track your eyes and your
head position?" And so how to deal
with that consent, and
informed consent, and make it a good user
experience, I think, is still a bit of an
open question.
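One hedged way to picture a middle ground here: a runtime could remember scoped consent grants per origin, with an expiry, so users aren't re-prompted for eye tracking on every single visit. The sketch below is purely illustrative; `ConsentStore` and the scope names are hypothetical and not part of any real WebXR API.

```python
import time

# Hypothetical sketch: a per-origin consent store that remembers scoped,
# expiring grants so a user isn't re-prompted on every visit.
# Nothing here is a real WebXR API; all names are illustrative only.

class ConsentStore:
    def __init__(self, ttl_seconds=30 * 24 * 3600):
        self._grants = {}          # (origin, scope) -> expiry timestamp
        self._ttl = ttl_seconds

    def grant(self, origin, scope, now=None):
        # Record a grant that automatically lapses after the TTL.
        now = time.time() if now is None else now
        self._grants[(origin, scope)] = now + self._ttl

    def revoke(self, origin, scope):
        # Users can withdraw consent at any time.
        self._grants.pop((origin, scope), None)

    def is_granted(self, origin, scope, now=None):
        # Only prompt the user when this returns False.
        now = time.time() if now is None else now
        expiry = self._grants.get((origin, scope))
        return expiry is not None and now < expiry

store = ConsentStore()
store.grant("https://example-xr.app", "eye-tracking")
print(store.is_granted("https://example-xr.app", "eye-tracking"))  # True
print(store.is_granted("https://example-xr.app", "head-pose"))     # False
```

The expiry forces periodic re-consent, which is one way to balance informed consent against opt-in fatigue.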
But I also just want to invite anybody
to tell us other information in terms of the
threats: specifically, things like, "Hey, if you have
access to eye tracking data, we can know
your sexual preference." Or things
like gait detection: once
you have how somebody's walking, you can
determine someone's bone length and be
able to identify them. So there's this whole
question of personally identifiable
information versus non-personally identifiable information. And, in the future, stuff that is currently seen as
non-personally identifiable is
eventually, through the assistance of AI,
going to become personally identifiable. So I just
wanted to open it up to see if
anybody had any other quick things to
say about data and the risks
of data. (Gali) So- Oh, Tom, do you want to start
or... (Thomas) No, by all means, go for it. (Gali) Okay, so
I think, you know, that there's a big
problem in what data is being captured,
conscious and subconscious, who's
capturing it, who might have access to it
maliciously, and whether we are aware of the
big scope of what it means for us
individually and as a society. So, I think
I'm not gonna go into the data per se,
but I'm gonna talk about spatial
computing and how that's linked to
data. Because, you know, Vladimir Putin
said, "He who controls AI
will control the world," and I say, "No, he who controls AR will
control the world." Because they will be
able to control the narrative of what we
see in the world and our perception of
reality. Also, we're moving to a
point where data is not just what you
tap in; as you mentioned, it's me
walking in a public space, me
being in my home, just being.
All of a sudden, data will be captured
all the time. There will be no opting out.
And this is where we really have to, you
know, make a real cut now and
put some regulations and some actions in
place. Because, before we know it,
1984 has nothing on what we're
talking about. We are already
at 1984 with smart devices, when we
talk about smart, data-capturing
spaces. This is where we potentially lose
all agency over how we perceive reality
and how our actions are being tracked
and used against us.
(Thomas) So, I actually want to respond to that,
because I agree with you that, "He who
controls AR controls the world," as you say,
because it controls what people see. My
counterpoint to you is, in the UK, there's
been the launch of a new company called
Darabase which is targeting that issue. What they do is look at geo-located AR, and
they're forming a permission-based
layer on the virtual world. Basically,
they want to map it out so that, if someone
wants to use the virtual layer of a
particular location, they
have to seek permission from the owner.
They have to go through this
particular system, which is amazing. This
is exactly what we need.
A lot of companies
suspect Snapchat already has this kind of
permission-based layer, but they've not
actually publicly announced it yet.
But Darabase has been formed to help
assist other companies in doing the same thing.
Because of all the issues you've heard
targeted, such as, "What if
someone basically bought the virtual
layer over a McDonald's to run Burger King ads?" That kind of
stuff, for example. But no, it's being
worked on, and I'm so happy it is. (M) Do you
know, are they a for-profit
company or are they some sort of
government entity? Who is controlling this layer? (Thomas) So, it's a for-profit company;
it's owned by Dominic, and he's a lovely
guy. I'll have to do an introduction after
this meeting. But they are for-profit,
and they're definitely a company to look
into. It's called Darabase. (Kent) Cool, any other-
last call for any concerns around data? (M) I think one interesting thing to point out is that, in a lot of these cases, the solutions we're talking about are largely social solutions- which is correct, that is the way you need to approach these- but occasionally there are technical solutions as well. I'm thinking a lot about some HoloLens research Microsoft had a year or two ago. The HoloLens is capturing all of these AR point clouds and often sending them up to the cloud to do all sorts of analysis, and it is not very difficult to go from a very detailed 3D mesh of a space to detailed information about the person who lives there.
So, they did a bunch of research into, "Well, how can we essentially anonymize this data?" And they came up with some really clever ways of restructuring that data so you can still do the same sort of analysis you need to, but in a way that they can't reconstruct someone's living room. And again, I think there are still questions- this is research coming out of a for-profit company, so what is compelling other companies to do the same sort of thing? But at least it shows that, in some cases, these rich machine learning technologies that we need to make XR wonderful can be built with privacy at the core.
(Kavya) And, I just want to follow that up, again looking at it from a solutions perspective. So, first of all, this technology- XR collectively- I'm thinking of it as this perception manipulation technology, and I have to sort of agree with Gali: yes, people who control your perception, or who are able to manipulate perception, will control the world. This is a very obvious thing to happen. And now, one thing that we have to admit before we look at the solutions- and I recall this from the last time I was on a panel with Kent- is that we talked a lot about the era of constant reality capture. This is a reality. We are not going back, and we are not always going to be able to segment where privacy begins and ends. What we can do, however, is define and shift this responsibility onto the industries that are the stewards of our data, that hold our data- even though we can say we are the data owners, it is essentially their responsibility to secure it. Now, I
heard a lot of talk- and not just me, but overall at XRSI we heard a lot of talk- about gaze data, pose data, all of this. So, our very first order of business was to form a data classification framework working group. What we are going after is: what does this look like at the data structure level? In that working group we're essentially trying to create an immersive API so that you can take a subsection of your platform data, apply this immersive API, and then visualize your entire data lifecycle. If you can see how data is being created, transferred, stored, and then archived- and hopefully someday destroyed, or covered by some kind of retention policy- and you show that immersive visualization to a CEO, you immediately see, "Oh, here is this transfer of data, but it's going to Facebook, and we don't even know what they do with biometrics data.
We just don't know it." Right? They take it, but we don't know what happens. So here is this black hole. If you can demonstrate that to, let's say, a hundred CEOs who are actually creating these things, then we start to ask questions, then we start to shift that accountability. And you know how, in cybersecurity, we have this principle that reasonable security controls must be implemented? We have to, at the very least, ask for those reasonable security controls in this black hole where the data is going. But until we can see the data, all this debate about gaze data, pose data- and now we're talking about entire-body tracking- we would not be able to have this conversation. We'll b***h and moan, we'll talk about gaze and pose, "My data, my data," but once we have this visualization, then, using the same bridges that we are building with big tech firms, we talk to them and say, "Hey, tell us now. Otherwise there can be consequences, too- we'll talk to the regulators about it, about the regulations that are needed." So we're looking at it from a very solution-oriented perspective, and I encourage people who are concerned to join this working group.
Because in Phase One, we ran into a problem of, like, "What is XR? What is VR?" So we just kind of standardized those terms, and now we're kicking off Phase Two under the leadership of Diane Hosfelt from Mozilla. When we kick this off, this is the objective, and hopefully before the end of the year we should have some solution where we can visualize all this data, visualize what is happening to our data, color-code it, and tell people, "Look, this is what happens when you use this platform." (M) Yeah, sorry. (Kent) Yeah, I was just going to- well, go ahead, Gali. What were you going to say? Make it quick, though. (Gali) I love what you're doing,
Kavya, but I just want to plant a seed, like a question. So, you know, last year I was giving a talk in front of the data economy community. GDPR had just launched, and they were all saluting the fact that it was affecting the whole world and companies were changing their standards. And then COVID happened and, for those who are not aware, it was found out that GDPR was a regulation, but in times of crisis it became a recommendation: basically most countries applied contact tracing, and all of a sudden all this work, all this thing, disappeared in a second. So I think, layering on the work you're doing, there are also the "what if" situations that we have to think about versus the day-to-day risks. (Kent) Yeah.
(Kavya) Very valid point. (Kent) Yeah, it's a good point. The last thing I would say on this topic is that, for researchers in academia, a big line of research is looking at what data are available, seeing what you can tell from that type of data, and extrapolating information about people. That's an active area of research that will hopefully eventually feed into policy being made- to either limit the amount of data that are being recorded or what you can do with the data. A quick logistical note for the webinar: there is a Q&A tab. If you want to ask a question, feel free, at any moment, to leave your question or vote on existing questions, and in about 35 to 40 minutes we'll dive into those. Gali, so, you wanted to bring up some of the larger power issues- who's in control? Maybe you could make your argument, and we can discuss it for
a bit. (Gali) I think the problem of XR- which is not a problem- is that it matured in an era where we have, like, the big five or six or seven. And what is happening, in my mind, to regulation and ethics- or the lack of them, or the lack of applied ones- is also partly due to the fact that XR is being actively developed and applied by for-profit companies that also rely on social media business models and surveillance capitalism. So,
I'm personally quite concerned about it. And even when you talk to people who are trying to create alternatives- more ethical, more inclusive, even art pieces, speculative or experimental art- it is impossible to be sustainable without being sold to these companies, either eventually or as a complete strategy. I think this is a big problem because, when we talk about tech today, tech is not just an industry; it's the butter that is smeared on every piece of bread in every industry. It's part of politics and policy- the heads of tech companies are directly consulting to policy makers. It's part of our communication media, of every system in our lives. We are a technological civilization now. So, it's not just an industry. It's not even a commons.
It's something beyond that, and its power and its drives then drive not just the tech but every other layer. Like Kavya said, this is not just a tech problem: it goes into tourism, into services, into every layer, and this is the big problem. Now, this is not to say that we can all move to, let's say- I'm a digital hippie, you know; I know where I want to move to. But, of course, we still live in a capitalist, for-profit society. I think the biggest problem is that we don't have enough reward systems for those who are doing it better, and that's a big hurdle.
You know, even here in Canada, which has fantastic incentives for digital innovation, you first have to be a company that has proven itself, that has made money. So you have to be big enough- AKA commercial enough- to be able to use these funds. And there also aren't enough penalties for those who are not adhering to standards, not just ethical ones but legal ones- those actually violating them. I think these two edges need to be highly elevated, and also at the level of personal accountability. So, this is between companies and regulators,
but I also expect a lot more from the users. I think, again, we became so compliant: "Yeah, you know, I'll just go along." You know, there are companies that I, as a professional, refuse to work with. I refuse to work with them because, sorry, they're evil. They are evil. And if they came to me with projects and said, "Help us change it," that's one thing, but "Help us amplify this, and let's hope for the best"- for me, that is a big problem. I really believe that change can come from the community of developers as well. We've seen it happening in companies like Facebook and Google: people walked out, and people are thinking twice about whether they advertise on them, participate in them, or work for them. And I think we have to take individual accountability, as users and as developers, to not be part of the problem- to be, honestly, on the right side of history. Zero tolerance: this is where we're at at the moment, or should be.
(Kent) Yeah, at the VR Privacy Summit back in 2018, a big takeaway for me was that the issues of privacy and the business models of surveillance capitalism- for companies like Google or Facebook- are in direct competition with each other. So until there's a completely new business model for how advertising and this whole surveillance aspect is done, you're always going to have this tension: on one side, wanting to keep grabbing a bunch of data about somebody and extrapolating information about them so you can sell more ads; on the other, data sovereignty and privacy- being in a place where you don't feel like everything you do, say, or move, everything you're looking at or paying attention to, is being put into this big surveillance machine. So, there's obviously Facebook, one of the biggest players. With Microsoft, I'm really happy to see that they're starting with the enterprise, and they don't necessarily have a whole surveillance capitalism business model. Apple also has privacy, but there's a trade-off: the HoloLens costs thousands of dollars, and with Apple you may end up having to pay for that privacy. So, yeah. M, since you're inside of Microsoft, I don't know what your take is on this sort of tension that we see within the larger XR industry. (M) Yeah, I mean, I
think your point about us selling to the enterprise is a good one. I think sort of an elephant in the room is that there isn't really a profitable consumer VR business. Everyone is shoveling money onto the fire in the hopes that at some point it will be profitable, and that set of incentives makes it even more likely that, even if you were already leaning towards surveillance capitalism, you're going to lean into it as a way of trying to get some return on your investment. Even people who aren't platform holders- if you're producing consumer VR content, maybe you have some arts funding, maybe you've raised some VC funding, but you're probably getting money from platform holders, not even traditional games publishers. And one way to view what Microsoft has done in shifting to enterprise is: if you can sell something to companies that are paying a lot of money for that solution- yes, it's out of the hands of normal people, and they can't really afford it- but it effectively sidesteps this issue entirely. Microsoft's stance is: if we're selling you something to help your business, we don't want that data. If we are building developer tools for people making XR experiences, we do not want the liability of holding private data. (Kent) Any other thoughts? Tom or Kavya?
(Kavya) Sure. I just want to say that we mostly focus on these big tech companies doing their best effort, which may not be quite ethical. But I think we need to take into account what Gali said: it's not just the bigger companies. There are smaller enterprises that are struggling. What are the incentives? What are the ways to ensure they don't just sell out? There is a particular organization that just received about 7 million dollars in funding, and their whole business model is all about incentivizing data in XR. So how does a company like that decide, "Now that I have this responsibility and all this money, I'm going to operate ethically"? We basically have to rely on the CEO. Likewise, there was another company that was sold fairly recently for millions of dollars to Niantic. I asked the CEO, "How did you make the decision of handing over data to these guys?" And he said, "Well, I looked at everybody and they seemed very principled." Nice answer, but what does that mean, principled? What does their third-party security program look like when they do data sharing? Did we look at that? How do they actually intend to use the data that is coming in- do they have it in writing, legally? Then we saw another interesting merger, or acquisition, happen- Beat Saber, in November- and all of a sudden, magically, Facebook's privacy policy gets updated in December. So, I wonder what happened? I mean,
there's just enough of a connection to make the speculation that maybe this data arrived, somebody wanted to use it, and now they've sort of updated the policy. However, during this update, what we still didn't see is what is happening to biometrics data. So, again, looking at it from a solution perspective- this is the standards work that I keep talking about. It was our first attempt to give people that baseline. We said that if you're building, if you're doing stuff in XR, make it based on human-centric design, which rests on accessibility, inclusion, and trust- so you include ethics there. You give people some kind of baseline to follow: "Hey, you got your funding, you're a small business, you're a bigger business." If we all adhere to this- including Facebook, including everybody- if we just draw this line in the sand and say, "Hey, we need to think about human-centric design, and we need to think about trust in general, or rebuilding trust, through whether we exchange this data or not," then I think we can solve these issues better. We need some sort of a baseline, and then we improve upon it. (Gali) I have a quick solution
that made me come here- you'll agree. How about we take all the tax money that all the big tech companies are not paying, make them pay it, and then turn that into grants for ethical small businesses? Ha! (Kavya) Ok. (Gali) No, it's not possible, but a girl can dream. (Thomas) Oh, no, a girl can dream. (Kavya) I think you're right, though. I mean, this is what I
thought when we started XRSI: it's not my responsibility to raise money for XRSI. We have now become this essential component. So that's my next step- I'm just going to start putting out there these research components that must be done, and say, "Hey, Facebook, Microsoft, Google, people who have billions of dollars." People actually pay 59 million dollars in a lawsuit just so somebody could shut up. They need to pay. They need to build these things. It's in their better interest; it's in the better interest of humanity. So, in fact, I'm gonna call you and ask you for advice on this as I build it. We will- we'll make those dreams come true.
(Thomas) No, I'm so happy you mentioned tax as well. There was a historian who visited the World Economic Forum, where all these billionaires come in private jets, and he mentioned that the real solution for helping the world is to make sure people pay tax properly. He said leaving that out is a bit like going to a firefighters' conference and not mentioning water. But, yeah.
Going back to Gali's other points- when it comes to thinking about other companies, I totally agree. You do get quite a few companies shifting policies to make sure they get to do these other things; it's very scary and it happens very regularly. And I just want to cap off by mentioning one company which I don't think anyone's mentioned yet, but I suspect it's going to become a hot topic, and that's TikTok. Because at the moment the companies getting the most interest and scrutiny are the Facebook-owned companies, as well as Apple and the other big tech companies. But TikTok is very interesting because fewer people are interested in it or looking into it that much. Yet they have such a grip on a particular age demographic all around the world, and the way they collect data is actually quite scary. Because it's on those platforms that a lot of fake information spreads very quickly with almost no regulation, which is why it's impacting a lot of people, at least particularly in the UK. And the reason I mention it in the context of ethics and XR is that they're also looking into augmented reality tech and- yeah, I have to wonder what TikTok will be doing next when it comes to augmented reality. Which is why I'm keeping a very close eye on that company right now.
(Kent) Yeah, and it's basically like a massive spy machine spanning the globe. And I'd say, in terms of the United States-based companies, a lot of the big ones- I'm skeptical that they're just going to do the right thing without pressure from consumers, pressure from the larger culture, pressure from regulators- frankly, without the whole power of the government actually forcing certain action. So, as we think about this, the question is how you come up with the policies and legal frameworks to actually push back and start to mandate more action on this. Something similar to the GDPR, but conceived as privacy as a human right- how do you conceive of that, and how do you enforce it at a governmental level? There are a number of different philosophers who have looked at different approaches: Adam Moore looks at privacy as something that you own and could license out, like copyright; contextual integrity says it's more about the context; and Dr. Anita Allen talks about how you need a paternalistic approach- people are not responsible enough to take care of their own privacy, so we need to have the government take care of it for us. All of these are imperfect, and so is the question of how to force action at the collective level. I think that's probably a good place to- we could, again, go on for hours on any one of these topics, but I wanted to wrap up the rest of this round and then open it up for questions. So, Kavya, why don't you go ahead and talk a bit about trust, security, and harassment, or whichever of those you want to focus on at this point. (Kavya) Yeah, sure. And I
think I've shared multiple stories around it but, before I go there, I want to follow up on some of the points that were made earlier, like, "How are we gonna get people to be more ethical?" I think there is this threefold approach that we at XRSI are planning to take. We're gonna start with awareness for all stakeholders, users included- it's super important to do that. We're gonna do an awareness campaign on the risks of XR, going out to 29-plus countries and 4,000-plus organisations, and for that we partner with somebody who already knows how to do that- they already have the connections, so, great. So, awareness: awareness using Games4Change-type platforms, awareness by partnering up with girls who dream that, you know, this is a better model to adopt, those kinds of things. And then, incentivize. Now we draw a line in the sand: okay, whoever adopts and does this, we will personally hail them, or feature them on our awareness platform, which we are gonna be rolling out next month- it's Ready Hacker One. Then, once we incentivize this, once we draw this line, we can potentially hope that regulators- who, again, need awareness as well- can understand and build something more mandatory. Then we go into enforcement: first, like, a slap on the wrist, and hopefully we can then hope for something like the 4%-of-revenue penalties under GDPR. My claws come out. The only sad part is that we are not able to get much traction in the United States, whereas other governments actually respond better.
So, that's the one piece that I'm still thinking about connecting. Besides that, there is the aspect of harassment and bullying- all of this is coming to XR, inevitably, and it's already there. We see that in our gaming industry: if you are a girl avatar, in many cases you feel sort of not comfortable, and still there is not enough awareness around, "How should you be treating a girl avatar?" or, "Why should you be speaking to a girl avatar this way or that way?" How do we solve that? It's yet another collaboration, where we're partnering up with something called GamerSafer, another organization that's thinking about these things very uniquely- using computer vision and artificial intelligence to create some kind of a digital ID for kids, so that players, or the XR folks, are more incentivized to behave better. And then we define those principles- what "better" means- and go about solving it. So we're just taking this piece by piece, rolling out multiple programs without even spinning up too much apparatus. There is already so much work going on; it's about connecting these dots and thinking about it strategically, so these things happen. Drawing on my unique experience- when I was in Second Life, the kind of virtual platform that preceded XR- I personally experienced harassment. It impacted me tremendously. So I know that these things have a greater impact, because it's a very compelling reality that we experience in XR. So, yeah. Those are the things. I'm very solution-driven this year; I'm just gonna mobilize and solve. Last year was problem-seeking, and now it's problem-solving. (Kent) Yeah, well, I think
the- I welcome comments from everybody on all the stuff that Kavya said, but one thing I'd say in terms of harassment is that it's a challenging thing. For one, there's technology and then there's human behavior, and if people are determined to be horrible human beings, I think there's only so much that the technology can do to prevent that once you get people communicating with each other. We've seen that across Twitter; we've seen that in VR. So some of it is a human issue- training, and the cultivating of a culture, to encourage the behaviors that you want. But there are also things you can do on the technological side: personal space bubbles, allowing yourself to block people, ban people, or mute people. All of these are sort of the basics in terms of what you need. But, I'd
say there's also a very interesting trade-off, because if it were up to me, I wouldn't do any sort of recording or tracking. But the reality is that in AltspaceVR venues, for example, if you report harassment, it's been recording whatever you're doing, so you have to consent to being recorded. And a lot of these social VR platforms, in order to scale up to the level that they do, actually have to assign individuals a social score- an invisible social score that's never revealed. You don't know what your score is, but there is an invisible trust factor, essentially a trust score. (Gali) Oh. (Kent) So there are certain things actually happening behind the scenes in order to create a safe place, and I think that's what makes it such a challenging ethical issue: you have these weird trade-offs, like social scores and being recorded. That's the challenge with this- in order to create safe environments, there's no perfect solution that satisfies everybody's desires. Anyway, I just wanted to start there and open it up and see what other folks have to say.
(M) I'm not necessarily- I don't know, but I'm not sure I agree that we need these sorts of recording technologies, in that, at least, they are not currently working. I have not spent time in a completely public VR social space without being harassed. The times that I have felt safe have been at smaller events, where the people organizing the event can treat it like an in-person event and use the same sorts of social techniques you would use at a physical meetup to make sure that people are safe. And the model of using recording, video, and things like that comes from places like Twitter or Facebook, where it's much easier to scale up human moderation because everything is text-based: you can much more easily apply things like machine learning, and even plain text is faster for human moderators- and even they are completely struggling under the load of the abuse and harassment on their platforms. So I don't know- even if we were okay with saying, "Everyone is always being recorded in every space, we're gonna capture every piece of data we have and forget all of the other ethical concerns we talked about 20 minutes ago," I don't know if that solves the problem.
(Gali) And to add to that- I think, yeah, we need some safeguards and some spaces where we can block the trolls. But especially if we think about AR- we keep talking about AR adding data and editing features, but AR is also a tool that can take things out of your public space. So, you know, when it just started, my first thought was, like, "Okay, great, so fundamentalists who don't want to see women- now the filter won't be on the women, it will be on them." And then I thought, "Yeah, but what happens if, for example, a Trump supporter doesn't want to see people of color, and they disappear from his reality?" So we're in this duality, and it's another ethical thing: blocking someone solves the symptom but doesn't solve the problem. And I think technology is great at creating these roadblocks and this protective gear to block the symptoms, but we have to deal with the hard problem of this behavior and this mindset to begin with. Some of this behavior and mindset is allowed, or sometimes encouraged, by the platform, and some of it is something we have to deal with outside the technology world- a really deep root canal to take it out. (Kavya) You're right, you're right.
And since we're talking about ethics, I want to add one more ethical concern that is very little talked about, but it must be talked about. So, we're talking about in-game harassment and all, but Google or YouTube a bunch of videos where people are in VR and the people around them are groping them, touching them, or doing all sorts of things while they are experiencing extended reality. So the point I also want to make today is that we need to build a spectator culture- including what Gali just said, this is a cultural upbringing that we need to do for ourselves. And, of course, today is a great way to do so: 50 other people listen to us, and then they talk to 50 other people. That's great, but we need to do more and better cultural awareness. We need to tell the researchers, or the universities- sometimes they're doing research and they have no idea whether a professor could be a harasser, or what kind of guidelines they should have. So first we draw the line in the sand: "Hey, when somebody is in VR or XR, please do not harass them and do not record them." All these things have to be instilled. In a way, we just culturally need to grow up. And this is an opportunity- this new, profound technology is very scary, but if we use it properly, it's amazing, and we need to understand it before we start recording people, or touching people, or grabbing people. (M) Yeah, that's a really good point. And
I think that is deeply intertwined with the relationship between games and VR, because gaming culture is so fundamentally toxic- you look at the harassment and abuse that happens, not just in games but outside games- and I don't know if there is a way to save quote-unquote gamer culture. So there's a real question: VR is a technology with the potential to reach a much wider audience, but a lot of the time right now it's focused on gamers who are willing to spend money on expensive PCs as an initial market. To what extent are we letting that toxic culture define the overall culture of VR? That's a real problem. (Kent) And I just
wanted to jump in and sort of expand on
your point in terms of the
public/private aspect. Because you look
at something like VR chat
they have areas where you just go into
public spaces and it's kind of
free-for-all. Or, you could create an
instance that's just your friends and
you have more of an invite-only or you
could start to have like your friends of
the friends start to come in and so
there's I see that there's this
dialectic between like those private
spaces where you have control over who's
in those spaces versus like the public
square. I guess part of my concern as
well is: what does public space look
like in the future if everything, in
order to be safe, is totally private?
Then how do you actually get away
from the filter bubble aspect of
radicalizing everybody by
never having anybody
you encounter online who has any
different perspectives from you? And so,
yeah, there are these larger
things - we've already had these filter
bubbles that have been generated by
social media, and then as we have
physical gatherings within Virtual
Reality, how do we, you know,
cultivate and create them? Maybe
there are sacrifices we do have to make
when we're in these public spaces in
order to make them safe.
But we have the option to be in the
private spaces as well. So, that's the
stuff that comes up when I hear
some of those things - it's already
happening in VR as well. Cool, well,
Tom, did you have anything else you
wanted to add before we move on to
the final topic? (Thomas) I have one thing I wanted to
say. As I saw in the chat, Julianna
asks, "What would you like allies to do
to support better behavior in these
public spaces?" - which Javiar's been
talking about. It's very tricky. I was
talking to lots of people from the
Educators in VR summit, because there's a
very tight group of people working in
AltspaceVR, and they explore how to treat
others in these immersive spaces. A
lot of it comes down to two things, I've
found. One is to pull people out of
public spaces, which is very similar to
what happens in real life, or to
call out bad behavior in a
constructive way. The second thing I've
seen which is- it's gonna
touch on the XR manifesto again. The
avatars you use. The avatars you use
really define how people react to you,
for better or worse, and it's going
to be a big, big, big topic to explore how
you portray yourself in these spaces.
Because some women have found that more
people listen to them when they are male-
bodied, which is awful. That should
never happen, but it's been recorded to
happen. And, I guess, one thing we need to
explore is creating a framework within
these immersive spaces, because a lot of
these biases from real life do kind of
spill over into these very intimate
spaces. How do we, as a
community build together to solve these
issues? And I feel the solution does come
down to how active a community is to
make sure they improve. (Kent) Yeah, the code
of conduct. I think every VR
application has a code of conduct, and
it's like a design challenge:
how do you ramp up all of your members
that are using the app on what the rules
are, and how are those rules moderated
and enforced? I
know there are different approaches
across VRChat, Rec Room, and
Altspace as well.
So, yeah, let's move on to the final topic,
then we'll open it up to Q&A. So, M, you're
at Microsoft: a big company that has lots
of government contracts, and so there's,
you know, working with the military and
the ethics surrounding using XR
technology for military training. The
military has been involved since the
very beginning of XR, with flight
simulators and the Sword of Damocles being
funded by DARPA, and Tom Furness and the
Air Force. So, the whole
history of XR and VR and AR is tightly
bound to the military. But I'd like to
just hear you say whatever you want
about your perspective of being inside
of Microsoft and some of the ethical
issues that you see with a company
that's as big as Microsoft. (M) Yeah, I think
the military thing is tricky. You're like-
what I can say about that is, there
is, like- there are limited teams at
HoloLens working with the military that
I am totally divorced from and I know
that we have a
larger ethics group that works with them
and brings in outside subject-matter
experts to try to figure out who
we should be engaging with. On a personal level, I would
note, it is not just the history of
XR that is intertwined with the military;
in many ways, the history of computing
is too. Like, we would not be
communicating here, over the Internet, if
not for DARPA. And so I grapple a
lot with, you know, even as I personally
would not want the things that I build
to be used for military purposes,
military funding has directly led to a
lot of the things that I use in my day-
to-day life, and that is something that
is really tricky to grapple with, in a
way that is more abstract than, say, you
know, "should X tech
company have an ICE contract or not?" So,
yeah, that is one thing. I
think to the larger ethical point, though
I think I touched on this a little bit
that, for the most part, what
Microsoft is doing in XR is selling to
companies rather than selling to
consumers, which completely shapes the
model. Altspace is one
exception - Altspace shares a lot of the same
concerns as VRChat and all these
harassment and privacy issues we were
just talking about. But, for the most part,
when my XR thing is a tool for
developers that
customers
are never gonna see? That really changes
the calculus. Like, the data we're storing - we don't want to own it. It is a
liability. We don't really have to think
about a lot of these same privacy issues
or ethics issues that other people do,
which is a very privileged position to
be in. (Kent)
Well, I have one more question, and I
also invite other people, if they want, to
either ask you a question about what they
want to know about Microsoft or make
other comments about what we've talked
about. But, since Microsoft is such a big
company, and with XR being so new and, you
know, having to navigate these ethical
dilemmas - how has Microsoft
internally started to have discussions
around ethics, and how are ethics
embedded into the designs themselves?
Like, what is the relationship between the
ethical frameworks and the actual
implementation of the design? How does
that conversation take place? (M) Yeah.
I wish there were a more unified answer. I
think one answer is that Microsoft is a
very, very large company, and so it is
often difficult for different parts of
the company to talk to each other. So,
like, there is the sort of
formal ethics team that I mentioned, but
for the most part, I would love it
if the people on the HoloLens team and
the Windows Mixed Reality team were
directly having conversations about the
same things they are facing on a day-to-
day basis. It is possible that I am just
in a completely different arm of the company
and those conversations are happening,
but the perception I have is that any of
these discussions of ethics are
happening within individual product
teams. And the sense I have gotten is they
definitely do exist. Like, I don't
know if I'm allowed to talk specifics,
but I can think of a lot of specific
products around machine learning
where I've been in the room having
discussions about, you know, "this is the
thing that we built a prototype of -
this probably shouldn't exist. Maybe we
shouldn't actually sell this as a
customer-facing feature." And, as someone who is
relatively new to working at large mega-
corps, it has given me great hope to see
that these conversations are happening,
even if it is at a micro level rather
than some unified framework across the
entire company.
(Kent) Hmm. Anyone else have any comments or
questions? (Gali) I have a question for M
that you probably can't answer. Can you
bring- like, do you have an example? Do
you know about, like, a concrete case
where, you know, a product that would have
made money, that was a good product but
ethically ambiguous, was not released by
Microsoft? (M) I do not have a good answer -
again, part of that is being so
relatively new to the company. I can think of-
I'm thinking of a very
specific example where it is not a
product, but a product feature, that did
ship. And I don't think we
have pulled that feature yet, but
the fight is ongoing, and I
think we are very close to having that
no longer be a thing that you can
actually use or pay for - even though it shouldn't
have shipped to begin with. (Gali) Because,
because, I'm a Mac person and I really
want to love Microsoft, because
I really think that Microsoft, at
least publicly or as a consumer, has
really voiced out and, kind of,
applied a lot of things that I really
respect and didn't expect to. So, make me love you. (Kent) Well, I'm gonna say something. (Gali) Tell me
whenever it happens. (Kent) I'm gonna, I'm gonna,
say something in favor of Microsoft and
against Apple, which is that, in the past,
Microsoft used to take on open standards
and try to own them and kill them -
you know, Internet Explorer is
probably one of the greatest examples of
that. But, eventually, that
shifted, and because Microsoft missed the
boat on the mobile revolution - you had
Apple with the iPhone and Google with
Android - Microsoft's been forced to
take this really pluralistic,
open-source approach. Like, they own GitHub; they're,
probably, the most open-source
company out there now. And the one trying
to own and kill open standards now is Apple,
who does not implement the web
standards; they try to own everything. I
mean, actually, they're quite bad when it
comes to promoting open standards,
forcing everybody to go through
their app. So there's been a huge
shift
that I've watched in my tech
career, where Microsoft was the bad guy
and now they're arguably the good guy. And Apple -
I mean, Apple is a
good guy on privacy, but when it comes to
closed walled gardens and promoting
open ecosystems, Apple's one of
the worst. (M) Yeah, so Microsoft is now the
single largest contributor to open
source on GitHub more than any other
large tech company. And there have
been sort of public conversations with, say,
the TypeScript team - like, the old mantra
of "Embrace, Extend, Extinguish" - it seems
like "extinguish" is no more. And even just
on my spatial computing team
specifically, we have multiple members on
the W3C WebVR group, and sort of
everything right now with the new Microsoft
seems to be towards: how can we embrace
the community? How can we support standards?
Like, our on-the-wall company
motto is about empowering everyone to do
more, and, like, that is silly, that
is corporate speak, but it also comes
back to: our goal is not to own what you
are doing. Our goal is to help you in the
ways that we can, and hopefully that will
benefit us. (Kent) Cool. Any other last thoughts,
comments or questions about Microsoft
and big tech companies? (Thomas) I just want
to touch on the ethics behind helping
military organizations, which in turn
speeds up technological development.
I guess I share the opinions of
everyone else on the panel - I
feel very uncomfortable with the fact
that our biggest and greatest
innovations do tend to come from
military investment in technology, and
the same is happening with VR and AR as
well.
I've been following Microsoft closely
with what they're doing with the US Army
and their new goggles, because
they actually renamed the goggles to
something beyond HoloLens, because it's
so different now from what the HoloLens
actually is, because of this tight
connection they have to the army.
I lean towards no. I'd much rather- I
think we've reached a point of development
where
US tech companies can develop without
the need of helping out with military
contracts, but that's a very personal
opinion, and I'm sure others might share
my view on this panel as well. (Kavya) I wonder if it
comes down to forty million dollars. I don't know if somebody would
just give away forty million dollars
just because they wanted to be ethical,
rather than not wanting to be called out. And it all comes down to money. People
are taking risks of very grave magnitude,
and they know that even with the worst
of the regulations, GDPR, even with
its four-percent-of-revenue fines, even with all
of these things - all they have to pay
is, like, X billion dollars. Okay,
lunch money. Here. This is something that
we've seen in the cybersecurity and privacy
industry all the time, and
it's gonna continue to happen.
It's money that's driving all these
decisions. (Thomas) Oh, no, I know. But that's the
nature of ethics, isn't it? We know that
there's like capitalist gains from it. We
just wish it wasn't the case.
That's ethics in a nutshell. (Kent) Yeah. And
there's also- when I was at the
International Joint Conference for
Artificial Intelligence, you know, Max
Tegmark, you know, tried to get a bunch
of academics to sign off saying, "We're
not gonna support any sort of AI that's
used in autonomous weapons that are
gonna be killing people." And then, so, then,
when you go down the stack, it's
like, "Well, this algorithm could be used
for this use case. So, therefore, we should
eliminate this" - what were they called -
"dual-use" algorithms. And so, some people
say, "That's a good thing to eliminate
those dual use things." And some are like,
"Well, this has other uses other than that
use." And so, how do you draw the line
between, like, when you get lower down the
stack of the research. So, I think that's
sort of the tricky thing, knowing where
that line is, and knowing like, when
you're gonna like put your foot down and
say, "Okay, this has crossed an ethical
threshold that I no longer feel
comfortable with." Having technology go
out there that takes someone's life
automatically - that's
sort of, like- I think that happens more
in AI, but there are similar issues that I
think come up in XR as well. Well,
tell you what, let's open it up for
questions, and we'll have like 25-30
minutes for questions, and for however
long people want to hang around.
But, as I open up here, I see Lawrence
Slownick's question has four thumbs up. I'm
just gonna go down the list here. His
question is, "Do you know of anyone
working on or researching how to articulate
the qualitative aspects of the type of
data you are sloughing off as you use XR,
and the
quantitative data sets that companies
are collecting? I worry about
sentiment tracking." I'll open it up. I've
got an answer, but I'll open it up to see if
anyone has anything to say. Well, I'll sort of
add what I know and then have other
folks. So, I know there's actually a lot
of researchers. I know Jeremy Bailenson.
Well, first, maybe let me recast
what "qualitative aspects" means. What I
take that to mean is that
you take a lot of numbers and
abstract data, and you're extrapolating
meaning out of it. So, you look at
facial expressions and you're saying
they're feeling happy or feeling sad. Or
you're able to do what Facebook and
Cambridge Analytica did with the psychographic
profiles, where you take a bunch of data
and you basically come up with
personality profiles. So- I think most of the research
that's out there is looking at things
like, given your eye tracking data,
you're able to determine what people are
interested in, what their sexual
preferences are. I know, at the VR
Privacy Summit, Jeremy Bailenson sort of
did a recap of a lot of that data, and I
know that Jeremy's also been working on
that as a research topic as well. But I
think, generally, there are a lot of
different researchers that are trying
to look at what you can take from
immersive data and what kind of
conclusions you can extrapolate
from that data set. And I'll invite
the panelists to share any more pointers,
or anything in the comments that
people want to point to. (Gali) Yeah. All the big tech
companies have their own research labs
and squads that are doing just that, so
there's an answer, you know. (Kent) Yeah, a lot of
that research-lab work is probably,
you know- their research labs
choose to publish sometimes at
SIGGRAPH and other venues when it's sort
of palatable. But there's certainly a lot
of stuff that's not as
privacy-friendly, let's just say, that's
probably been happening
a lot behind closed doors,
that's for sure. Anybody else want to
have anything to add on that? (Kavya) I know it's
not really research per se, but, you know,
I would mention, again, the XR data
classification framework working group.
And, so, we're trying to bring in
researchers who have these sorts of
answers. We're trying to, you know, have
conversations with companies like Tobii,
or Cognitive3D, BadVR, and
use their sort of, you know, resources to
put together some sort of a framework to
understand, like, how this data could
potentially be profiled. (Kent) Cool. Next
question is- is that somebody jumping in?
Quickly. (M) Yeah, I was gonna say, I
think the point that a lot of this
research is happening completely behind
closed doors at large companies means
that a lot of the things we've been
talking about are difficult. Like, they
are not necessarily solutions to those
problems. Having a very
public ethical framework - in order for
that to actually stop, you know,
private Facebook or Google research
teams from still doing unethical
research, you need either much more
stringent regulations than I think we've
been talking about, or to make a
meaningful impact on the opinions of
the employees actually working there, for
them to be able to say, "No, what we are
doing is not right." (Kent) All right, so,
Siddhant Patil asked a question, "How can
privacy and security be made profitable?
What would make that an important
consideration for those businesses which
make profit driven decisions or don't
care about ethics? Is it possible?" (Thomas) You
mind if I hop in for this one? (Kent) Yeah.
(Thomas) Excellent.
Well, the natural answer is
Apple; they've absolutely made a business
model out of privacy. That has been
their marketing drive for the last few
years. They've seen what's been happening
with Facebook, and they're like, "Let's
capitalize on that," and the whole deal
has been around privacy. And it's worked
wonders for them, because a lot of the
way they make money is not actually by
using user data. But, touching on Kent's
point, it's caused them other issues, which
is their walled garden. They're very
difficult to work with, which is why
there are a lot of issues
when it comes to building products
for Apple. But absolutely - that's
the company I think of. I saw, M, you
nodded your head vigorously when I said
that; I'm sure you have an opinion. (M) I mean,
I agree with what you said. The thing
that specifically worries me about Apple
though is, they have currently found
privacy and security to be a strong
business point because they are the only
company doing it, and, right now, it is a
thing that people care about. If
either of those two variables changed -
when everything is still guided by
the hands of the market, who knows?
Apple might not be the privacy company a
year or two from now. (Kent) I was gonna insert- oh, go ahead, Gali. (Gali) I have to agree and disagree, but I think
Apple started with privacy before it
became so popular - they integrated it
before. But, definitely, you know, they
have been consistent and persistent and
have gone to great lengths to
protect their users, even in court, even
against the US government. So, on that, you
know, barring
a black swan, I can't
foresee it not being part of
their core values in developing products,
personally. (Kent) Yeah, there's actually a lot
of conflict between privacy and the open
web and WebXR, and so they make
arguments for privacy in order to avoid
implementing the open WebXR
technologies, which is an
interesting thing that is happening
there. For me, I actually don't
think it can or should be made
profitable. I think it's actually a bad
model that you have to pay for privacy. I
think that privacy should be a human
right that needs to be at a more
foundational level of our institutions
that are demanding privacy because it
shouldn't be something that we, by
default, have to give away in order to
get access. Because privacy- mortgaging
our privacy is artificially bankrolling
a lot of technology which is great for
technological evolution. But it's
horrible for the future of privacy so I
don't actually think it should be
profitable, I think it should be just a
human right and we should figure out how
to have everybody do it. Right now, that's
certainly not the case. (Gali) It
should be unprofitable to do it any
other way, in my mind. (Kent) Right. And I think part of
it is that it's the culture and the
people that value it that create the
market dynamics. But because the market
dynamics aren't doing that, we're in
the situation where, by default, you don't
have it. All right, next question.
Jonathan Ogaleavy: "Tom started by saying
'limit access,' and of course he was
talking about protecting data, but my
mind went straight to the Ready Player
One concept of limiting everyone's
access to cyberspace by closing down
the whole metaverse on Tuesdays and
Thursdays, like a museum that's closed on
Mondays, forcing everyone to spend time
in the natural world. This extends an
idea in ethical game design: feedback
loops optimized for addiction and
engagement can be good for business but
bad for society. Jaron Lanier's Ten
Arguments calls for a population-wide
hard reboot. Maybe we can get that -
not now, but
when COVID has a vaccine and the
unmediated world is a whole new fun
experience for people - do you think the
fictional concept of a weekly or twice-
weekly hard reboot of cyberspace could
be realized in actual reality in the
next decade?" The answer to that is no. (panelists laugh) Not hard, not enforced, but
anyway. (Gali) Well, we already have regulations
about limiting screen time for
minors and kids because, you know, we know
there is a visceral effect
of consuming technology at large, and I
imagine that for certain populations it
will be enforced, especially for underage
kids. But I really trust humans. Yeah, we're
probably gonna binge it in the beginning,
but, you know, we're weird - we're
never gonna stay in that. We're not very
good at stabilizing behaviors,
right? And even if you look at what's
happening now. I was at a VR/AR panel
the other week- last month- I don't
know what year I'm at. And, you know,
everybody- we kind of talked about,
like, yeah, you know, we've covered all the
harsh things, but
look at the beauty of what
AR and VR can do. And I had prepared
a loaf of sourdough
bread, and I lifted it up and I'm like,
"Well, this is the killer app of 2020. You
know, this is it." So, we want to believe,
and a lot of people in the tech industry
want to believe, that, you know, once
we build it, they'll all come and it
will stay. But I do trust that people,
inherently and biologically, will find a
balance eventually. There'll be a blip,
but they will find a balance, and they
will want to reconnect physically, and
I'm already seeing it. You don't agree
with me, Kavya? (Kavya) No. (Gali) I think- I think- I don't know, I look at the- (Kavya) That's all. (Thomas) I want to just hop in and say the reason why I agree with you is
because I look at World of Warcraft
as a great example of an immersive
world which people hop into
hardcore, but then peter out of
over time, and I like looking at World of Warcraft as a case study for these worlds
people explore. And it happened
exactly as you described it, Gali:
people who jumped in got intense
initially, but they petered out as they
balanced it with real life, appropriately.
But I see Kavya looking to jump in. (Kavya) Yeah, I do want to explain
myself, though. The reason why I feel that
may not happen, at least in the
immediate term, is the current situation. One is,
we have this elevated need and want to
connect with people, and we are all
cooped up in our rooms. Let's say we
build this nice, amazing, compelling,
realistic avatar - all of this AR/XR/VR.
What happens when these companies use
the same exact thing that they use now -
the, you know, dopamine-
inducing sort of, you know, models, where,
whenever you consume this content, you
feel better? So, I have friends who spend
about- you know, today, XR is not that
great, but there are people who never
knew VR existed who now spend about
six to eight hours in Echo Arena, who
have been struggling with those, you know,
feelings of addiction.
So think about how, you know, we are using-
literally, we have technologies now, AR technologies, that track your, you
know, thoughts and make you feel
positive. Well, in the wrong hands, that
could have an adverse
effect. So, that's why I'm not counting on
it, because Freud said, you know, humans,
inherently, if you leave them to
their own devices, will destroy
themselves. And that's why I'm like, "You
know what? I don't trust these companies.
We have to prepare for the worst,
because they are going to weaponize our
information." It's gonna happen, and we are
gonna deal with addiction. We're gonna
deal with all these issues in XR, which
will be worse than in the current digital
ecosystem. (Gali) I agree with that completely.
So, I never said I trusted the companies
but I do trust the people eventually.
And really, when I look at the younger
generation, you know, the really young -
not us young, but the really young - I am
seeing that they're using technology
more and more as a tool, and they're more
skeptical and smarter about how
they use it.
And they're not so quick to
adopt and swallow it the same way that,
you know, our generation is. But I
completely agree: when you have, you know,
addictive triggers and mechanisms in it,
then, even if you don't want to be part
of it, you're triggered to be part of
it, and that is something we have to
solve, for sure. (M) Yeah, like, I think the
World of Warcraft example is a really
interesting one because to my knowledge
there have been, you know, a small handful
of people who literally died because
they were so addicted to World of
Warcraft, they didn't bother eating or
sleeping or anything like that.
But then again, I think the
solution is not limiting screen time per se. Like, in China, Honor of Kings -
I think it's called Arena of Valor in the US - is
the biggest game in China, and Tencent, the
creator, limited the amount of time
that anyone under 18 in China could
play it, and it didn't really do much, and
they limited the time again, and that
didn't really do much of anything.
Even before you get into AR
technologies, so many of these games and
experiences are fundamentally dopamine
slot machines. When you have people who are just
peddling Skinner boxes, that is a
much larger societal problem to solve. (Gali) And,
yeah, it goes back to the business models,
like, I make money from keeping you
inside my platform because the truth is
that, with social media, most of these companies,
you know, are not tech companies.
They're advertising companies. You know, let's give Facebook the name it has:
it's an advertising company that uses
a social construct to sell stuff and to
get your data and sell it as a
commodity. Same with Google - you know,
there's a reason why they're giving it
away for free; trust me, they're
still the most profitable company. So, I
think this is where, like, also the
business model and the regulation around
it and the taxation around it could help
a lot in kind of, like, diverting their
incentives on how they want you to
interact with the technology and perhaps
protect us a little bit better. Yeah.
From destructive applications and
subversive applications of tech
mechanisms. (M) And I, at least, have
some small amount of hope from the way that
many different countries across the
world have been implementing rules
against loot boxes specifically - like,
one specific abusive form of these sort of
psychologically compulsive mechanics.
Who knows what'll happen, but it
gives me some optimism that this is a
space where, at least in some instances,
governments are willing to come in and
regulate. (Thomas) No, absolutely. I think loot
boxes are the perfect example of, "When a
country gets serious, they will introduce
rules which really help." It's also a
good example of how some companies
try to fight back. EA, for example, when
fighting back against loot boxes, called
them "surprise mechanics." So, I have to
wonder what other companies will do when
exploring ethical issues in immersive spaces.
(Kent) And the point that I would make here is
that there's, I guess, an assumption that,
anytime you're doing anything in VR,
you're escaping and you're not in
relationship to other people or to the
wider world around you. And I
don't think that's a good assumption. I
think that you could actually be in a
deeper relationship with other people.
But I think the ethical challenge
there, like Gali was saying, is,
from a design perspective, are you
really just trying to hijack someone's
attention and get
this sort of Skinner-box hamster
wheel going to be able to profit off of that,
where they're not really benefiting from
it aside from just being completely
addicted? So that's more of the
design component, but I think, as
individuals, we all have to kind of
figure out how we're in the right
relationship with the world around us -
being in relationship to the earth and
to other humans around us. I think that's
a big question that is up to individuals,
that we can't necessarily enforce by
shutting off the internet for two days a
week. All right, let's go through some
more questions here. "Regarding harassment and cyberbullying, how could or would you
more questions here. "Regarding harassment and cyberbullying, how could or would you
enforce a penalisation system or a
governing body to make judgments on
harassment? How secure and private could
you make the system so the users would
understand such a system or body is in
place without divulging the processes?
Like Gali says, we've only been trying to
moderate the symptoms, not the deeply
related problems." A few quick thoughts
that I have on this. One is that this
is kind of like a Truth and
Reconciliation Commission type of thing,
and/or it's a justice question: how do
you prosecute and have a defense? And so
I'm skeptical of just having a singular
governing body. It's like what's
happening around the world with the
movement towards defunding the police -
to try to, you know, potentially move it
into more grassroots organizations. So,
it's not necessarily a top-down
authoritative body that's going to make
a judgment on whether or not you were acting
in a proper way, but
having it come more from the grassroots,
bottom-up. So, as much as you can, have it be a
relationship of people who are directly
involved who maybe deal with it directly.
Or, if there's gaslighting or
behavior that's
abusive, there's an open question of how do
you have some sort of process to be
able to mediate these different types of
conflicts - whether or not people get
banned, or how do you have people
apologize and make it right and have
more of a Truth and Reconciliation model?
We're still so early; all of this is
very theoretical, but those are some
initial thoughts. I don't know if others have any
ideas or thoughts? (Kavya) I agree with you, Kent, and I think this
is not going to, you know, we cannot
create another sort of, like a, XR police
model. But we do have to
bring in entities that I would say, you
know, because XRSI's three principles-
ethical, unbiased, and trying to build
safe virtual Augmented environment. These
are the type of ethnicities that we have
to bring together and have their
expertise, their integrity, their ethics
leverage to build a independent review
board or something. And this is something
we thought about when we were rolling
out standards we wanted to do something
like monitoring and reporting but we
held off because we first need to just
draw that line in the sand but that's
the next step is we are going to
maintain a particular sort of a
mechanism where people could potentially
report to us what bad happen, when did it
happen,
and then we'll investigate and they
would try to like help out. Like, just
yesterday, a friend of mine reached out
to me on LinkedIn that she's being
harassed on many platforms. Her research
has been deleted. I mean, this person has
nowhere to go so she reached out to
me. Apparently, like, you know, she's like,
"Hey, you're a Cyber Guardian." Or, you know,
like, people call me that so we need to,
like, create some sense of this
trustworthy entity by bringing together a lot of
the collaborative people, like, you know,
people that are on this panel. People
like yourself can- to create this
independent body that will hold people
accountable for that individual
experience that took place in XR. (Kent) It's a hard issue. I mean,
I don't, I mean, it's like one of the
biggest issues. But- Gali, I don't know if you have
more thoughts? (Gali) I think it's a much easier
issue than we want to admit. Like, we
just- for me, it's again, like, zero
tolerance. Like, zero tolerance for it, as
a user of a platform, as a
developer, as a spectator,
you know, I'm looking at what's happening
you know, now in the world and I
think everyone in this panel that I've
seen on social media have been
voicing out, you know, like- this is the
line in the sand. I'm very disappointed
in others that have a lot of
voice and are refusing to do it because
they're worried about, you know, their
future career or their position. And,
again, like, I don't think we can afford
it- with bullying, with
discrimination, with racism of any kind. Like,
if there was ever a time to draw a line,
it's now and I really urge everyone here
to have zero tolerance- you know, zero
tolerance for it and call it
out and, you know, it's gonna be much
worse if we don't. Like, whatever
consequence you think you will have in
your career, trust me, it's gonna be much
worse for you personally and
professionally if you don't. (M) I totally
agree. And I think in a lot of cases
that's going, like- That might mean
needing to move away from platforms, like,
I'm thinking a lot about- Riot released
their new, non-VR game Valorant about a
month ago and the executive director of
the game, who was a woman, has publicly
said that she does not play the game by
herself with strangers because she
always gets abused. And, if the executive
director of this game with millions of
players can't get the political will to
solve this problem, like, if- I don't know
how you, as part of a larger community
that encompasses that game, I don't know
how you salvage that, other than everyone
saying- there are other experiences out
there. We don't need this one. (Gali) Yeah. And
you're seeing how, in social media, now
brands are boycotting Facebook and it's
working, you know. There are many ways to
put pressure, you know. Especially if it's
a for-profit, you know, platform. There are
so many ways to put pressure as
individuals, as well, today on the big
players. These big companies, whatever,
big game companies, they're not- They'd
like us to think that they are this,
like, cloud that can't be touched. It's
Google, it's Facebook, it's Riot. Now,
they're made of individuals that
are worried about their reputation and their
profit, and people that work there that
are worried about their ethics and their
future as well. You know, they are
touchable. Oh, wait. I shouldn't say that.
Don't touch them, though, without- (Kavya) I think you're right. And I think shaming
works, you know, we do that in
cybersecurity a lot where people, like,
you know, don't close the front door, or
just leave all these back
doors open, intentionally or unintentionally. Or
talk about stupid password practices,
or, you know, lose a bunch of data. So, we do
that a lot in the cybersecurity industry. We
do put some kind of shaming and, you know,
have the community kind of yell at them. (Gali) I
like to call them out.
Because you can't shame someone that has
none, you know? (Kavya) That's right. Yeah, calling them out. Yeah,
and just exposing them and we are gonna
use- we are gonna continue to use that sort of power
of community, bringing other communities
along with it, because it's not just the XR community, just like I said in the
beginning. It touches all the domains. So
once we create that sort of sense of
awareness it's like, "Yeah, the XR
technology is being used. But, look,
your community is being put at risk,
because they're not following ethical
principles." Then, you can bring them, also,
and add to that voice of calling people
out. Yeah, you're right. This is one way we
would be able to get some headway. (Kent) Two
quick sort of complications of that, just
sort of as a devil's advocate. One is
that, when you have a closed walled
garden, you do have the ability to ban
people. But, if it's an open decentralized
system like the open web, or a
decentralized system, then harassment is
still going to be an issue, and the
antidote of sort of banning people- It
may be a short-term solution that makes
things safe in that short term. But,
I think if you look at it in the long
term, you can kind of look at it as the
equivalent of, you know, sending someone
to jail and exiling them. And what does
it mean to permanently exile someone
from immersive technologies for their
entire life? What is the sort of model
for, you know, rehabilitation? Or, you know,
owning the harm done, or to be able to do
other models? So, I think, like, having a
balance between punitive justice but
also restorative justice? How do you
integrate that into the fabric of the
technology today? When you can track IPs
and you can ban people, that is maybe
addressing the experience in the short
term, but I'm skeptical that that's
actually going to change the root of the
problem, which could actually be more
of a human issue than a technology issue
that has a technological
solution, I guess, is what I'm saying.
And, like, thinking about it holistically
like, what else needs to happen? Are we
going to put people into permanent exile
because they did something when they
were a teenager? And that's sort of, like,
a question that's sort of, like, the long-term
that makes it more complicated, but, yeah.
So, in the short term, certainly, yeah. I
agree with people needing to
have the ability to, just like any
private business, say, "We're not gonna, you
know, have you here." But then you think
about, like, over the long term, and also,
like, public spaces as well.
I don't know, Tom, if you had anything
else to add. (Thomas) Uh, no. I just
share the same views of just calling out
is the best and healthiest way of doing
it. My only addition is, I just
wanna quote Hank Green. Hank Green would
say, "We should be judged not by how we
acted when we were ignorant, but how we
responded when we are informed." I agree
with calling out people but- if they have
a history, but they are good now, don't
dog them based on their history. That's
what I'll say. (M) That's something
that games has been grappling with: a
lot more accusations of abuse this week
specifically, with games- and most
industries- having any sort of quote/unquote Me Too moment. Like, usually what
happens is the people maybe issue an
apology, they silently go into hiding and
then six months or a year later they sort
of come back and pretend that things
never happened. And issuing people
permanent bans sort of, maybe, solves that
problem in some ways, but it's a
clunky, bad solution for the reasons we sort
of talked about. But, I think, like,
figuring out actual restorative
transformative justice and online
communities is a totally unsolved
problem regardless of whether you're
talking about XR or not. (Kent) Yeah.
Wow. Well, I think we're at the time- It
feels like a good place to stop. (Gali) No! Can we
just have the question about Mars and space
exploration, please? Cause I've been waiting for it for, like-
(Kent) We'll do one quick round
and then we'll wrap up. We've got some
more participants who've been hanging
around. Alright, so, last question and
we'll wrap it up. "What do you guys think
of space exploration and the hype or
hope around moving to Mars in relation
to our collective future living with
surveillance capitalism as a global
community?" Gali, would you like to have
an answer for this? (Gali) I love this question
because I think that the answer
is within this question. If there's
anything I wish for all the Silicon
Valley moguls that want to go to Mars,
it's this: I will do anything in my power to
help them go to Mars. I think they should
all go to Mars and they can stay there
and then we can stay here and solve all
the other problems. That's my answer.
(Thomas) Great answer. I think the other answer I'll take
on this question is an answer I think
everyone on this panel will agree with. The
impression I'm just getting is we all
just don't like surveillance capitalism
and we should reform it where we can. (Kent) I'm
skeptical of colonialism and sort of a
colonial mindset and I fear that going
to Mars is just going to replicate a lot
of this same colonial mindset that
we've had on the earth, and that we
should learn how to live on the earth
first before we think about colonizing
Mars so that's my answer. (Gali) Kent,
the tests that they sent to space
crashed into an asteroid. It's okay. Send
them. It's fine. We're good. (M) I
don't think we can separate the
colonialist aspects of space travel or
anything from all of the complaints
about capitalism we have all been
railing on. Like, you cannot- the two are
intrinsically linked. (Kent) Cool. Well, I feel
like that's probably a good place to
stop for me. We could talk forever about
these topics and you know, I just wanted
to thank each of you for joining in this
discussion: Tom, Gali, Kavya, and M.
Again, this is a never-ending topic
that, I think we're still going to
continue to talk about and hopefully
come up with more ways of making sense
of and making these different trade-offs,
and try to, as best we can,
implement the most ethically aligned
design that we can with immersive
technologies. So, yeah.
Thank you all for joining us today on
this XR for Change panel. (Archit) If you'd like
to have more such conversations, please
register for the Games4Change festival
by logging on to Festival.games4change.org
and once again, thank
you all for joining, and special thanks
to all the panelists for participating.
Thank you. Good night.
(Kavya) Bye, y'all. (Kent) Bye.
