There we go. Hopefully you're seeing both of us now. Hi there, it's great to have you on, Rumman.

Thank you for having me on. I am so happy to be talking to you in person-ish.

In person-ish, that's probably the best term to describe pandemic conditions, right? It's been a strange time, I'm sure, for you as well.

Yeah, absolutely. Although I must say it is nice to not continually be on the road. Usually I'm away from home like 60 to 70 percent of the time, so it's nice to not be jet-lagged, and to be in one location.

I do miss the travel myself. I also have a pretty aggressive travel schedule for my work, and it's a little bit of a bummer not to have a new country to check out every couple of weeks.

That is very true, that is very true.
And by the way, I will add that we'll probably get some sort of accompaniment from my cat and/or my dog. My dog is sitting right down here, and the cat's kind of behind the computer right now.

I love that. My cat is in hiding somewhere, and she's done really well so far at not making an appearance on this show, but I feel like one of these days she's going to demand her cameo.

My pets are attention hogs, so the cat makes it a point to be vocal. I've just joined this Oxford commission (we're actually going to be announcing it pretty soon), and we've decided that she is our unofficial mascot, because she's very vocal during all of our commission calls.
That's perfect.

So what's really cool about getting a chance for us to finally sit down and talk is that, as you mentioned, you're traveling a lot and I'm traveling a lot, and I know we've been following each other on Twitter for a while, and it seems like our paths kept almost crossing. We'd be in the same city, either on the same day or like ships passing in the night, and we just haven't had a chance to overlap enough to sit down. So this is it, this is our first chance to do that, and that's really exciting. To me it's exciting because, from the moment I first interacted with your profile and with you online, I got the sense, first of all, that I love how you come across online, but also that your area of focus is so relatable to me. This intersection of AI and humanity is very parallel to mine in tech and humanity. But I also noticed that you have degrees in political science. Your PhD's in political science, even, isn't it?

Yeah, yeah.

To me that is incredibly intriguing, because I can relate to the idea, not that I have a background in political science (mine is in languages), but just this idea that that education and that framework probably shapes your thinking and your mindset about things, right? The idea of systems, and the public good, and that sort of thing. How does that shape your work and your thinking?
Yeah, so the thing that really drew me to political science, and this was actually even as an undergrad at MIT, was the idea of, distilled to its basics, quantitative social science: math with context. And I really like math with context. Or maybe another way to put it would be, I think it's really fascinating to understand, at a high level, patterns of human behavior using data. But the way I framed both of those sentences, especially the second one, centralizes the human; it centralizes society. And what I find intriguing, or frustrating, depending on my mood at the moment, is that often when we talk about technology, like artificial intelligence, with technologies in general but especially with AI, we've started to talk about the technology as if it supersedes the human. There's this whole article I wrote called "The Retrofit Human," where I raised that concern: why is it that we build technology and assume the human being fits in afterwards? We really should be doing it the other way. We need to be designing our tools (because these things are tools) to help us. We shouldn't be reshaping who we naturally are, or who we want to be, to fit someone's notion of how society ought to be.
So in your mind, what is that core human concept? Because, as I mentioned, a lot of the ideas felt parallel in our work, and one of the things I keep coming back to in my work, at the core of human experience, feels like meaning: making meaning and the quest for meaning. That's one theme that, over and over again, I keep finding myself returning to. Is there a similar concept for you that you find yourself returning to in your work?

Yeah, and I think it's very parallel, unsurprisingly. I would say it's either something like human self-determination or human agency, but ultimately it's just the right to make an informed decision: the ability to have all the information for yourself and make that choice. And I very carefully say that because I recognize, and want, a world in which people make decisions that I disagree with, but they are making those decisions fully informed, fully capable. So, to your point on meaning: whether it's being able to derive meaning from the systems we've created to make decisions, or understanding what our meaning or purpose is as a human being, and not having that be shaped or guided by other forces unknowingly. That's my dog.
Yeah, is the dog going to make a cameo?

I mean, I'm sure he wants to. Come here, do you want to say hi to everybody? Yeah, he's not into the Tech Humanist Show. So I apologize for the pawing at the door.

Oh no, that's fantastic. The pawing at the door just makes me feel sad; he should get a little cameo.

Actually, the thing is, if I open the door he'll go out, and then he won't bother to come back.
Well, I love how you put it, and I think agency and self-determination is a really solid piece of what always comes back to me too. I've lately started thinking about how we talk, in literature and culture, about the human condition.

Exactly, yeah.

And I feel like, when you break down the elements, what we're typically talking about when we talk about the human condition, it does seem like agency, and sort of control over your own destiny at some level, is part of that, right?

Absolutely, absolutely. And I think what's really great about it is that it's not normative or judgmental. Like I said, I'm not trying to enforce my values on someone else. My point is that we should all make informed decisions, and have transparency into the systems that are maybe shaping us or guiding us, opening some doors or closing others.
So when it comes to AI, then, it seems like where that carries over is into the idea of transparency or explainability. When you talk about responsible AI as the scope of the work that you do, is it generally focused on those attributes, or are there other attributes that are maybe even more pertinent to that consideration?

Yeah, I mean, certainly responsible AI covers those fields; I think those schools of thought are incredibly important. And of course, any conversation about responsibility would be remiss not to talk about fairness and accountability, particularly when we think about biases in the technology that's being built. And I know this has been a contentious topic lately, especially on Twitter.

What isn't a contentious topic on Twitter, right? Everything has become a topic of contention. If you've seen...

Don't bring up cake. We just don't need to talk about cake right now.

I mean, look, reality has already been turned on its head. I want to be able to trust that the shoe is a shoe and not really a cake.
But what I'm sad to see, often, is that so much of the work on responsible AI gets divided into camps: this politically correct culture versus whatever the opposite of politically correct is. But that's not what it is. It's not normative judgment-passing, at least not for me. For me, it is just making sure that we are aware, and have some control or agency, and some right to understand and have impact on the systems that are shaping the actions we're able to take in our lives.
Yeah, and I think you bring up a really good point, because it does seem like that critique of responsible AI, or of the mechanisms of responsible AI, talks about political correctness. And we're having such a moment where people are hitting at this bogeyman of cancel culture and political correctness. So there's this tweet from Paul Graham in the last couple of days where he says: people get mad when AIs do or say politically incorrect things; what if it's hard to prevent them from drawing such conclusions, and the easiest way to fix this is to teach them to hide what they think? That seems a scary skill to start teaching AIs. I imagine you have a response to that.
for that i mean just like
before i even get into what i think like
sort of let's unpack all of the
assumptions behind that statement
there's just a lot of anthropomorphizing
happening
like what is this like teaching the ai
to hide like
these are not like these are technical
systems right they are making
yes it is a predictive model it is quote
making decisions but not from the sense
that human beings
make decisions like teaching you don't
really teach
an algorithm to quote lie you can do
particular things to it
to make it come up with some answers and
not come up with certain answers but if
you are
hiding output or hiding outcomes that's
a human decision
from the design perspective so like
let's talk about the people who are
creating
ultimately that's the weird thing about
that statement this is weird and
morphizing happening i just i simply
cannot understand like
you know and this is not like sort of
the responsibility of community saying
this
we have plenty of people you know who
are some of the
the trailblazers in the field of
artificial intelligence saying like we
are nowhere near the singularity we are
not near
any sort of ai system and you know we
will define it as like narrow ai if
you're in the world of narrow ai so
let's
let's let's let's box this into what it
is today like we are nowhere near
creating this system that's called lying
or quote making decisions we're in a
world of narrow ai we apply things to
very narrow use cases so that's
that's one that's hiding things it's
very odd to me
And it's not about having politically correct answers. Like, I work for Accenture. Accenture was obviously forward-thinking in hiring somebody to lead responsible AI. I don't sit in corporate social responsibility; I don't sit in corporate citizenship. Those are amazing parts of Accenture, parts of every company. But I sit in core business functions. If Accenture, a half-million-person company, thought that responsible AI was about creating politically correct answers, I don't know, I mean, I'm not a CEO, but that'd be a strange place to put somebody.

Yeah, that's a really interesting point.

Right? I'm part of core business functions. My job is to create solutions with value.
If you're creating a product that doesn't serve a portion of your population, you have not created a good product. So, for example, if you are making a credit-lending model that is discriminatory towards women because of the history of credit discrimination against women, this is not about politically correct culture. Do you just want to not give people money who would pay you back? Like, I don't understand. Do you not want to make revenue off your product? You have, literally, an underserved market, so you are telling me that you don't want to address an underserved market? In a business sense, that is what some of this work is about. It is about making good products that serve your clients, that serve your customers.

Yeah, yeah, 100 percent.
And I just want to interject there: I feel like I've seen interviews with you where you talk about the need for there to be discussion that's bigger than profitability and efficiency when it comes to business uses of technology; we need to understand what's good for humanity and human flourishing. So it's really important that those attributes be part of that discussion too. But you're right: at this core level, of course business is going to be investing primarily into technology that's going to bring capacity and scale to their opportunities. I think that's a savvy observation, and you're right, why would there be a function for responsible AI in the core business if it weren't likely to produce desirable outcomes for the business?

Right, exactly. And I'd also say that human flourishing, creating something with positive impact, is not at odds with good business. Frankly, this is what the CEOs of some of the biggest companies in the world recognize: some of what you build, especially if you're a B2C company, is about brand. It's about how people feel when they interact with your technology or your product. If you're making soda, or a fast food chain, or clothing, you are trying to spark an emotion, frankly. Especially in the US, we have no lack of choices; a lot of our goods are actually perfectly substitutable. Why do you buy Coke versus Pepsi? Why do you go to McDonald's versus Burger King? I'm just naming things, right? Some of this is an emotional decision. So it's, again, not necessarily some sort of weird lefty PC culture to say we want to make things that make people feel good, that are aligned with society's values. And we're getting some pretty clear indicators of what a lot of people feel today. Branding is definitely important for companies.
Yeah, that's a really good point too, and it comes up a lot in my own work. I talk about meaningful experiences, and people are always like, well, how do you measure meaningful experiences? And actually, if you're creating meaningful experiences, then you should have a whole host of holistic measures that tell you you're on the right path. Everything you just talked about is part of a model that tells you you're moving in the right direction: people remember your brand, people have delightful experiences, they'll recommend you, and your cost of acquisition and retention is going to be lower because people have good experiences with your brand. All of those things, right?

It's also this notion of value, right?
I think sometimes people can get overly narrowly focused on value as revenue generation. Value comes from many, many different things, and to be perfectly frank, people often choose less, quote, "efficient" outcomes, or less economically sound outcomes, because of how it makes them feel. A maybe frivolous but extreme example would be why people buy luxury brands. Why would I buy a canvas bag from Louis Vuitton versus Target? Canvas is basically canvas, right? Louis Vuitton doesn't make better canvas. But they recognize how it makes you feel, and the experience. Or, to give a techie example, Apple spends so much money on design. There are entire articles on how opening every Apple product is designed to feel like you're opening a present, like you're getting something special. That was purely intentional. And if we're going to try to make the case that tech is only about efficiency and value, then go talk to Apple, because they don't seem to believe that. They fully understand that the experience of an individual interacting with technology, like a phone or a computer, is also an emotional experience.
Yeah. So in terms of AI, and the experiences we are increasingly creating with algorithmically optimized systems, how can people think about more meaningful, more human-flourishing kinds of systems when it comes to those types of interactions? What do you recommend there for people?
Yeah, and here's where I think it's really interesting as a political scientist, and from the social sciences, because I draw a lot from my background when I think about these things. You mentioned the concept of systems earlier, and this is absolutely true: these technologies don't live in a bubble. They exist as part of an existing infrastructure of systems that impact us. So if we're talking about, for example, a recommendation system to help judges decide whether certain prisoners should get bail or not, what's really interesting is not just how this impacts the prisoner, but also the role of the judge in the structure of the judicial system, and whether they feel they need to be subject to the output of this model, or whether they have the agency to say, "I disagree with this." And that impacts how this outcome plays out for the individual who's on trial. A judge is somebody in a position of high social standing; they're considered to be highly educated. If there's an algorithm telling them something that they think is wrong, they may be in a better position to say, "I disagree, I'm not going to do this," versus somebody who is, let's say, a warehouse employee at Amazon, or somebody who works in retail at a store, where your job is not necessarily considered high prestige, and you may feel like your job is replaceable, or worse, that you may get in trouble for not agreeing with the output of the model. So, thinking about the system that surrounds these models: it could actually be an identically structured model, but because of the individual's place in society, they can or cannot take action on it. I think these things are really important to think of.

Yeah, that's a really important point. I find, in talking with companies about employee experience, and about how culture is going to develop around digital transformation as they incorporate more and more automation into their businesses, that so much of that discussion needs to be about the increasing importance of good judgment from humans: people being able to make good judgment calls, and being able to say, "this is asking me to do the wrong thing." And the machine doesn't necessarily know that. As you already said, there aren't hidden motives within the machine; there are hidden motives within code, because coders put them there. But it's not as if humans shouldn't be able to question the output of these things. So that's a brilliant point.
Yeah, and two points on that. One is, when I talk to companies about governance: AI governance has actually become one of the bigger things to think about, rather than just purely focusing on model logic or model explainability. So, a few thoughts on governance, again drawing from my background as a political scientist. I find it very interesting that all of us, even those in the responsible AI community, are approaching this notion of governance from a non-democratic perspective. What every organization is doing when it creates systems of governance is putting the smartest people together to figure out what governance means for everybody. And it's quite interesting, because we all claim to adhere to very democratic principles, but very few organizations have actually created a truly democratic process for governance. So that's one.

Right. And very few organizations have created really flat organizations, either, even though they claim to have done so.

Yeah, that's a very good point.
And then the second is, we created this handbook for companies called the Governance Guidebook. It's a publicly available document; I can share it with you if you have show notes.

Yeah, we'll put it in there.

One thing that we call for is the notion of constructive dissent. How do you actually enable safe channels of dissent within your organization? How can people feel comfortable saying, "this is not working," or "this is being done unethically," or "I disagree with what's happening here," and not just in a way that protects them, but also in a way that they feel their voices are being heard? I think one of the issues with people being at odds with the organizations they're in is not just that they disagree with what the organization is doing; everybody has the same story: "When I tried to go to management, I was shut down. Nobody listened to me. It wasn't meaningfully addressed." And I think that's a really important component of this, which ties into the third point: we haven't really solved this human-in-the-loop problem. Everyone loves to use that phrase, but it's really hard to think of a good situation in which we really resolved meaningful interaction between an advanced predictive technology and a human being.
Say more about that, because I'm not sure that many of our listeners will be as familiar with that concept.

Yeah. So folks always talk about the human in the loop within an AI system. The narrative would be: okay, we're worried about runaway AI, or AI that makes biased decisions, and the answer seems to be, we'll put a human at the end of it, and the human will judge the output, and the human has agency, they can say yes or no, and that's that. But there are so many problems when you unpack that story. It seems to work on its face, but we've already talked about a few issues. Number one: who is this person, in the structure of the hierarchy within their organization, within society? Can they actually agree or disagree with the output of the model? Are they in a position where they would be punished if they did? Are they incentivized to do so or not do so, et cetera? And then the second question: can this person on the end even understand whether that decision was a good one or a bad one? Because that person may not be, and often is not, a technical person. They're not a data scientist. So how are they to understand whether or not this output makes sense?
A really good example: a few months ago there was the whole Apple Card debacle, when Apple launched the credit card, and we had the husband and wife where the husband got approved and the wife did not, even though I think she had a higher credit score and made more money. But here's the part that, to me, was the most meaningful for what we're talking about. They called, you know, Apple or whoever, and, back to this notion of constructive dissent and the human in the loop, they asked: "Hey, my wife didn't get approved for the card, and we're kind of wondering why, because that's weird." And the answer was, "Well, the algorithm said so, and that's that." Genuinely, that is not a good answer to give, but the person on the end was a customer service rep. The question here then becomes: how do we enable a customer service representative to understand whether or not this model output was problematic? These are the people who should understand it, not me as a data scientist, or you as a technologist. It's actually the people on the receiving end, who end up being the front line with the human beings who are being impacted. That's the human in the loop that I think needs to be resolved.
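As a rough sketch of what equipping that front line could look like, here's a toy "reason codes" idea: a simple linear scoring model that, alongside a denial, reports which factors pulled the score down the most, in terms a rep could actually relay. Everything here (the feature names, weights, and threshold) is invented for illustration; it is not how Apple Card or any real lender scores applicants.

```python
# Toy "reason codes" sketch; all names, weights, and thresholds are invented.

WEIGHTS = {                 # positive weight raises the score
    "credit_score": 0.004,
    "income_k": 0.002,      # annual income in thousands
    "debt_ratio": -1.5,
    "recent_defaults": -0.8,
}
BIAS = -2.0
THRESHOLD = 0.0             # approve when score >= THRESHOLD

def score(applicant):
    """Linear score over the applicant's features."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def reason_codes(applicant, top_n=2):
    """For a denial, name the features that pulled the score down the most,
    so the person fielding the call can do better than 'the algorithm said so'."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    worst_first = sorted(contributions.items(), key=lambda kv: kv[1])
    return [feature for feature, c in worst_first[:top_n] if c < 0]

applicant = {"credit_score": 710, "income_k": 95,
             "debt_ratio": 0.45, "recent_defaults": 1}
decision = "approved" if score(applicant) >= THRESHOLD else "denied"
print(decision, reason_codes(applicant))  # denied ['recent_defaults', 'debt_ratio']
```

The point isn't the arithmetic; it's that the explanation surfaces at the place where a human actually has to answer for the model.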
Yeah, and I think in a number of business models the proposed answer tends to be: we'll use a rating system to evaluate how reliable this person's judgment or outcome is. And of course then you end up with sort of algorithms all the way down.

Yeah. And also, in this example, this customer service rep didn't even get any visibility, so they actually didn't know how to answer this person's question. And then, even thinking through at a higher level whether or not that model was biased: I will say I haven't followed the story all the way through, but at first glance (and I think there was also a good article about this), the question is not whether there are these one-off cases in which things go wrong, because fundamentally all of these systems are probabilistic, not deterministic, meaning there is an error rate and things will go wrong. That is just a truism; it's not even debatable. The problem would be if this is systemic. It's not just that this one woman, who makes a good salary and has a high credit rating, got denied; obviously that should be fixed. But the system is a problem if we are seeing this across the board, across a number of women as compared to men. A data scientist would have to do an analysis of the system to see if it's a problem.
And certainly it's easy to come up with examples from across different parts of society and technology where this algorithmic bias reflects systemic bias, and we have those problems. I think the discourse on that is rising, but it seems like, beyond discourse, we need other solutions. Where are you on regulation for much of this? Where do you feel we stand on the maturity of that discussion, and where do we need to be?
Yeah, it's been really interesting to see what different regulatory bodies are coming up with all around the world. Most likely Europe will be ahead of the pack on this. The European Commission's HLEG has come up with a white paper that came out in April, I think, and there's a follow-up to it that's scheduled for December, but who knows, in pandemic times, whether they're going to get everything done by then, which would be understandable if they didn't. The UK Information Commissioner's Office also has a really great paper on risk-based approaches to understanding AI systems. Singapore has launched this project called Project Veritas, which is getting financial services agencies together with their financial regulatory bodies to think about it. In the US, we've had the FTC, the Federal Reserve; there's been a lot of noise, and there are also bills on the table. And what we've seen, interestingly, is this bottom-up movement in the US. Banning facial recognition is such a great example: we saw it starting in cities before we saw anything happening at the federal level. There are algorithmic accountability bills in multiple different cities and states, again before we see it hitting at the federal level. So I think the US is going to be really interesting. As a political scientist focused on American politics, this is why American politics is fascinating: the way we've divided federal and state powers, and how that push and pull sometimes ends up being a contentious debate. But ultimately, back to my first point, it's good to have people with different opinions talking, right? That's kind of what ends up happening.
And it also seems like, in theory at least, it gives an interesting model for being able to test different approaches in different markets, and see what the consequences are of doing it this way versus that way, and then what's going to happen with that at scale. But of course, that supposes that we can actually anticipate that scale from just what happens at the city level, and often that's going to be very different when it's applied federally, right?

Exactly, yeah.

That's such an interesting area for you, given your political science background. So do you find that you're drawn more and more into those governance discussions, not only within corporations, but governance at an actual political, government level? Are you participating more and more in those kinds of discussions?

Yes, absolutely, and I kind of have been from, I wouldn't say day one, well, no, almost from day one. I don't know whether it's because it's just my inclination to do so, or whether it's a natural part of this job, because it does kind of combine both. I can't just think about the technology; I would be remiss not to think about what sort of policy and regulation would be coming down the road, in part because I want all these public servants to make informed decisions. It is difficult to wrap your head around the technology when your experience has been something totally different, and it's very difficult to get good information from all these different bodies; these groups may have different incentives and different reasons for sharing certain kinds of information and not sharing others. But also, when it comes down to businesses, everyone just wants to know what the regulatory landscape will be, and it's useful to have that information. Not that I have any sort of insider information, but just to be aware of what's happening, so that businesses can make good decisions, so they're building products with the future in mind.

Yeah. So, speaking of building products with the future in mind, I'm curious about your own disposition and views. Are there particular applications of AI, or just emerging technologies in general, that you get really, really excited about, that maybe even fill you with hope for the good they could potentially do?

Gosh.
um i feel like lately like everything is
very doom and gloomy
it is 20 20 so it's like the world is on
fire
um what what a great question honestly
this is a very good question to ask
um i i what i think is
amazing about this technology at the
meta level and what interested me in it
uh in in technology is just how much
amazing potential it has
for us to question our institutional
paradigms and and
us to question why things are structured
the way they are and i think
the the thing if i were to pick one
thing that got me the most interested in
this technology
is actually the potential for it for
edtech which is
funny because edtech has now become one
of the biggest topics of conversation
and talking about
all the negative of the surveillance
state right but you think about it like
what's the what it should be what
something like edtech should be is a
complete reimagining of education
because number one like educational
systems do not
actually help uh do not actually help
people get jobs
they don't help people do well at their
jobs like everyone always jokes about
the number one skill you need to learn
in college is excel
because and that's the one thing they
don't teach you right so it's there is
this disconnect between
the quote real world the jobs we get
and then education
educational systems how they should we
know there's inequality
we know that people in the us end up
with massive student loans
you know there's just so so much that
can be resolved with this technology
whether it's remote learning
or customized learning or you know like
whatever it is and early on in the days
when i started my job at accenture
before then people were talking about
lifelong learning
and how you know the sort of new worlds
of technology and ai
really mean that we have to embrace you
know learning and really think about how
we're going to spend the rest of our
lives educating ourselves all of this
what amazing aspirations yeah
um right and i sincerely hope that what
we don't do
is just try to like stick technology
into the existing broken infrastructure
that is our traditional education system
because that that would be
a disservice not just to us as
humanity but also to the technology and
the potential of technology so
but is it is it also true or not
that once you use technology to sort of
accelerate or amplify
a given system that where it breaks
might be what's instructive
about where those institutions are
already failing us like
we won't know those failings until we
try to amplify them
at some level right i mean i understand
that there are real harms that are being
caused
by doing that and the impacts are real
but i'm i'm also wondering if
uh if it's not um
if we won't get to the the level of
discussion about the failings of those
systems until they're actually being
amplified
do you think there's a way that we can
do that uh
effectively um i mean i think
specifically using the education example
there are so many people that have
already looked at the inefficiency of
these systems and what does work and
what doesn't work and
you know what and if we really think
about this again by going back to this
notion of human self-determination
or you know whether it's meaning or
whatever we're talking about like what
is the
purpose of this system and you know
frankly can we just objectively take a
step back
and in a sense almost emotionlessly ask
is it serving the purpose it is intended
to serve right like
you know like what is the meaning of our
educational system why is it doing this
i think there are plenty of people who
have been pointing out the systemic
flaws
and i think usually the pushback is that
oh it's easy to criticize the system
like but but who's going to be the one
to solve the problem and really the
smart thing to then say is
well now we have technologies and
systems that theoretically could be
designed
to solve these problems instead of being
designed to simply
reinforce the power imbalance and the
structural inequalities
and we're gonna ignore what these people
say because it's too messy
to deal with that and much easier to
just perpetuate amplify and
now like cement uh all of these
inequalities rather than do like the
extra amount of work it would take to
like fix things yes no and
that's a brilliant way to address that
mindset or that problem where do you
think the the solutions best originate
are you finding
in your experience do you find the
solutions
originate with academics or
with private corporations or is it kind
of a mix in
in what you've seen in terms of being
able to identify the sort of
structural flaws of institutions and
what's going to happen
when they're brought to scale with
technology um
i think it's a bit of both um you know i
i love
all of my academic friends because they
you know they do such an
insightful job of understanding
systems and you know and again like
they're sometimes able to look at it
more objectively
because they're not inside it um but
then there is the aspect of
there's the application component of it
and that's you know
what industry does so i'll give you a
great example
um so about what two years ago at this
point a little over two years ago
um accenture came up with a fairness
tool so we were the first to create
an enterprise-level bias mitigation tool
um
and the way we did it was we started off
with academic research papers on you
know
these are things like counterfactual
fairness bias mitigation blah blah blah
you can find all of these papers
but what was important to us is whether
this works outside of a laboratory
setting
i think we started off with like 30 some
odd papers and we only ended up with
actually three of them that worked if we thought
about does this scale
you know is this generalizable across
multiple different settings
and is this possible within the way a
data scientist works that was
basically our criteria
um so i think everybody has their role
to play there's definitely value in
pursuing research
and even research that seems crazy and
weird but then
there is certainly value to trying to
ground that research in something
pragmatic and applicable like it is
wonderful to live in a world of like
all of these possibilities but then at
some point if you want to make this
reality you have to ask yourself
will people use it how can i make it so
that somebody will use it
and is this actually as beneficial as
people are claiming it can be
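The criteria described here — does it scale, does it generalize across settings, does it fit the way a data scientist works — are about turning fairness research into something usable. As a hedged illustration only (the actual tool isn't shown here, and every name below is hypothetical), a minimal group-fairness check of the kind such a tool might automate could look like:

```python
# Hypothetical sketch of a group-fairness check; all function names
# are illustrative, not any vendor's actual API.

def selection_rates(outcomes, groups):
    """Per-group rate of positive outcomes (outcome == 1)."""
    rates = {}
    for g in set(groups):
        picked = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(picked) / len(picked)
    return rates

def demographic_parity_gap(outcomes, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(outcomes, groups)
    return max(rates.values()) - min(rates.values())

# Toy data: group "a" is selected 3/4 of the time, group "b" 1/4.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(outcomes, groups))  # 0.5
```

A production tool would add much more (confidence intervals, intersectional groups, mitigation steps), but the point about pragmatism stands: a metric like this is cheap to compute at scale and slots into an existing model-evaluation workflow.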
and does it matter do you think in
the context in which the technology
starts you know we were talking a little
bit before we got on about
this story that broke i
think today
about facebook using a simulation uh
with ai to to simulate bots and other
kind of bad user behavior so that they
knew better how to moderate against it
um which i think you know i think you
had said
at one level of abstraction seems like a
really good idea from like a data
science model right
but from another level of looking at it
you can easily see how
this may not be the ideal way
to begin to develop that sort of
training so is there
does it matter where the kind of
origins of a technology are
or do we always need to be
working toward
you know these good outcomes and the
best of humanity sort of outcomes
yeah um so two parts to it one i think
all of my
STS and HCI friends and i agree with
them would say
uh the origin of technology absolutely
does matter like
this is why so many people study the
history of technology
you know things that are built for uh
military use
even if it is moved into the commercial
space which is by the way a lot of
technology
it will still hold with it the vestiges
of let's say surveillance or monitoring
because it is ultimately built assuming
the world is a particular way in other
words there are good people and bad
people there's me then there are the
others right
there's me then there's the people
i'm protecting the people i'm fighting
because that's just how the military is
structured right so so then it's just
fundamentally how your view of the world
will impact the technology
that you build and i think that's really
really important and and maybe even to
abstract it even more and going back to
like all this conversation about
political correctness culture and you
know designing an ai that quote hides
itself
i think what paul may be missing and
some of you may be missing is that
um often when you create technology
when you create your ai around an
optimization function like there's a
goal
to this and this has kind of been some
of the critiques of the way
um like you know some of these research
firms have been trying to arrive at uh
sentient ai is by having them play these
games and they have them play combative
games
right rather than have them play
collaborative games right and again your
objective function matters if my
objective function
is to win a game where i have to kill
everybody
to win or it's a zero-sum world in which
i have to have the most amount of points
to win
right um then that sets up a very
different
system than one in which i'm training it
to play a game
where we have to be collaborative and
collectively succeed
like two totally different worlds but
it's all a function of your
of your objective function so going back
to this facebook example
i think it is actually really cool to
kind of basically
simulate red teaming which is kind of
awesome because rather than kind of
wait for bad things to happen they're
saying we're going to have to
proactively model the world
but the problem with it could be that
it's not necessarily future-adaptable
and if a new thing
starts to happen that obviously cannot
be modeled within
the existing system that you built
because your existing system is only
based on the past and i think a really
good
pragmatic example might literally be
something like gamergate
right and a lot of folks
especially the women who were impacted by
gamergate will say
you know we were yelling and screaming
about how gamergate was really like the
canary in the coal mine
about like this whole incel culture
this whole like
underground culture of like just you
know like a lot of the
issues that we talk about today
um people getting harassed and doxxed
and you know
all of this the canary in the coal
mine was gamergate
and people ignored it but then you think
about if you're trying to build a
predictive system
gamergate prior to gamergate would not
fit into your paradigm of the world
because that had never really happened
like that before right so
it's a good idea if the world is going
to stay static but if the world's going to
change
you actually need to have some balance
to it that
understands how the world's changing
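The point about objective functions can be made concrete with a toy sketch (illustrative only, not any research firm's actual training setup): scoring the same episode under a zero-sum reward versus a collaborative one produces opposite incentives.

```python
# Toy illustration: how the choice of objective function encodes
# a worldview. Function names are hypothetical.

def zero_sum_reward(my_points, other_points):
    # Competitive framing: my gain is exactly the other player's loss.
    return my_points - other_points

def collaborative_reward(my_points, other_points):
    # Cooperative framing: both agents are rewarded for the joint total.
    return my_points + other_points

# Same episode, opposite incentives: letting the other player score
# lowers my zero-sum reward but raises the shared one.
episode = (3, 5)  # (my points, other player's points)
print(zero_sum_reward(*episode))       # -2
print(collaborative_reward(*episode))  # 8
```

An agent maximizing the first function learns to suppress the other player; an agent maximizing the second learns to help it — two totally different worlds from one line of code, which is the speaker's point.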
yeah and i think by the same token
it kind of goes back to what we were
saying earlier about you know there's
a body of work
already that has identified problems
like there's
the um the scholars that have already
identified
problems with edtech and sort of the
systems of institutional education you
know
that knowledge already exists the
scholarship already exists
so there's a parallel here it feels like
there's been plenty of
um light being shone
on some of the areas that need the most
work in terms of content moderation in
terms of
uh making sure that you know uh bad
actors are banned and can't get
through
on on all the social platforms but it
seems that
twitter facebook you know and so on
don't necessarily
adopt those recommendations and
instead it's like facebook wants to play
a game with itself
in order to come up with this
war game as you
so aptly described it to be able to
identify
what it probably could identify just by
taking the recommendations of experts
who have been
saying this kind of thing right
um yeah i mean like i said i think there
is certainly value
from a data science perspective
in trying to do what they're doing at
scale like one of the issues
of like any sort of moderation or
tracking is just the sheer volume
right there's just like i can't even
create a number to imagine
how many harassing situations or flagged
posts there must be on all of the social
media so how do they like
parse through it all and it's
actually again from a data
science perspective
kind of a similar problem to thinking
about things like credit fraud
which is at a massive massive scale so
the cool slash interesting
part of the problem of addressing things
like credit fraud is like yes there are
people trying to defraud
your system but also there are people
who just like happened to go on a
vacation in germany and like didn't call
the credit card company
and how do you do it in a way that
you're not going to lose a customer
because you're annoying them with phone
calls or you're freezing their credit
right so it's like
it's not just like shut down everything
that looks bad and it's
