Welcome, everyone, to the Royal Geographical Society and this Imperial College debate on the ethics of AI. My name is Ian Sample; I'm the science editor of The Guardian (don't hold that against me). It's great to see so many of you here for what is surely one of the most important questions of our time: how do we ensure that artificial intelligence benefits us all? What makes it such a pressing issue? Well, I'll leave our panel of experts to explain all of that, but there's no doubt that AI is a powerful technology, and one that's already shaping our lives, and that alone, I think, makes it worthy of serious scrutiny.

We've got a cracking panel of people to walk us through the issues. Starting from the far left as you're looking at the stage: Maja Pantic, who actually organised this event, and who works on affective and behavioural computing at Imperial; Reverend Dr Malcolm Brown, who is on the Archbishop's Council at the Church of England; Joanna Bryson, who works on AI and AI ethics at the University of Bath; Andrew Blake, the research director at the Alan Turing Institute; and Reema Patel, who is at the Royal Society of Arts, working on the new RSA/DeepMind project on the role of citizens in developing ethical AI.
Can you still hear me from my lapel mic? Good. The way the evening's going to work, hopefully, is that we'll be discussing these issues for about an hour, and then it'll be over to you for questions, so please do get those ready. I want to ask you all to start off with just a minute or so on who you are and why you're interested in the ethics of AI.

My particular interest really stems from when I was a young philosophy student, and it's essentially about engaging people, citizens, in grappling with complex and controversial policy problems. I can't think of anything less well understood: we don't actually know what the future looks like, so asking the public what they think isn't just about commissioning an opinion poll; it really requires engaging with the uncertainty of the question. The other reason is that it really affects the choices, the trade-offs, that we make as a society, and I'm sure we're going to have that discussion as the panel moves on; but there's a real reason why it's so important to find new ways of engaging citizens in an informed way.
I think I'm probably the historic member of the panel, for two reasons. First of all, I was born in the same year as AI; I won't tell you which year that was. And secondly, 35 years ago now, I think, I got a PhD in AI. But that was right in the middle of the AI winter, and you didn't go around telling people that you had a PhD in AI; I used to say it was computer science or something like that. In the last few years I've been able to come out, which is a great pleasure. I'm unashamedly a techie: I love thinking about the principles of AI systems and building them. But of course I do recognise that when we're building things as powerful as AI systems, we've got to think very carefully and deeply, and from the beginning, about how they're going to be safe and how they're going to behave in the way that we want. So I absolutely welcome, and this is also the view that the Turing Institute takes, the kind of very broad participation of different disciplines that helps us think through these very important things.
Intelligence: why are some species more intelligent than others? Why do some people use their intelligence more than other people seem to? Those were the questions I wanted to understand. But I also just happened to be a good programmer, so I went into AI as kind of a safety school: I thought, well, I want to go to a good university, and since I'm a great programmer, that'll help me get in. So that's why I do AI. How I got into AI ethics, originally, was because I noticed people being weird around robots. I was actually a psychologist, so I noticed that people thought that if you piled up a bunch of motors together like a person and put it in the MIT AI Lab, you had an ethical obligation to it, even though it didn't work. They'd say, "You can't unplug that," and I'm like, "It's not plugged in," and they're like, "Well, if you did plug it in…" and I'm like, "Well, it doesn't work." They clearly had no idea what they were talking about. So for a long time I thought they just didn't understand AI, and I tried to explain AI to people. I now realise the problem is that we don't understand ethics, or what it means to be human, so that's another set of problems. Because I had those papers going back a long way, I got invited to a lot of tables like this, and especially when it was policymakers, I found that because of the other work I was doing on human cooperation I actually had a lot to offer. So the reason I spend so much time doing this now is because, as people have said, it's important, and the questions that policymakers are asking right now are really big questions. So I'm putting what time I can find for research into working on those problems, very much in learning mode.
I'm an ethicist; my job is to support the church's leadership, and in fact the whole Church of England, in trying to make sense of things. Religions are about trying to make sense of the human condition and the nature of society, and here we have something happening very rapidly which people don't know very much about, I dare say. Most of our church leaders come from a humanities background, which is why we're running a big programme with Durham University on scientific literacy for church leaders. It's quite clear, I think, that most ethicists like myself argue from analogy: we try to look at things we do understand and apply them to things we don't understand. And on this one I find myself really scratching my head. Which is the right analogy? What are the right analogies for understanding something that is really opening up the world to us in new ways? I want to try to help contribute the theological angle to what should, I think, be an interdisciplinary study. To understand the ethics of AI you need a bit of history, a bit of politics, some psychology, quite a mix of things; and I'm very much in learning mode.
So, my expertise is the machine analysis of human emotions, and by itself, of course, this research raises a lot of ethical issues and open questions, so that is one of the things I'm interested in. But I also live in this world, and I use all these things we all have here, the mobile phones and the iPads, and they all actually come with AI. But what do they do? They use data: our data, our private data. And some of the companies use this data, because we allow them to do so, and they're selling it. I think this is not ethically fine; that's the second thing. The first thing is that this audience is wonderful, because you have approximately fifty per cent females and quite a lot of non-white faces. Yet if you think about computer science, and who builds AI technology, it's white males: ten per cent females, and an even lower percentage of non-white people. I don't think we should use technology that is built by such a small minority of the population, so that's another ethical concern.

Let's start off properly by looking purely at the technology: not the companies making it, not the researchers making it, not the applications yet, just the technology, because there may be people here who will benefit from this and wonder what we are talking about with AI. But crucially: is there anything about the technology, in ethical terms, that sets it apart from other kinds of technology?

Intelligence is only one small part of what it is to be human, and I do think this is what people get confused about: if you just mean "human" when you say "intelligent", then you're on the wrong track.
We aren't building artificial humans, but we are increasing what we can act on and increasing what we can sense; that's what AI is doing. So I guess there are two ways I would say AI is really different from the rest of computer science (arguably computer science is a subset of AI, just part of the way you do AI). It is especially about perception: we are able to perceive, using AI, things we couldn't perceive before. Companies can perceive things about us, governments can perceive things about us, and we can perceive things about ourselves as well as each other. It's not just about uncovering secrets; it's about discovering regularities nobody knew before. So perception is one side of it. The other thing, which my PhD was about, is the basic difference in software engineering: you have a system that has its own priorities. How do you set those priorities? How do you describe them? It's not a system that's just passive; it's what we call autonomous, meaning it actually acts without necessarily having to be told.

Is there something in how these systems, many of the modern systems at least, are created, in how they work, that introduces particular ethical issues as well? I'm thinking about how machines learn from training data: what does that introduce that you don't get from normal programs, where you just code exactly what's going to happen?
Sure. Well, there's a lot to talk about there. I think it's actually worth a quick deviation to talk about the difference between machine learning and AI, because it isn't always drawn. At one time The Economist, for example, one of my favourite sources of science insight, was saying that machine learning and AI were the same; now they've changed their view and decided they're a little different. For me, machine learning is a set of very specific algorithms that are designed to take in data and learn patterns and rules from that data. Artificial intelligence is a rather broader activity that encompasses machine learning as one of its components, but there are lots of other things going on there too.
AI system that has yet been built is
Watson not so much the commercial
offering that IBM talking about now but
the original Watson that won the game
show Jeopardy which was an incredible
achievement I mean this is a very a
rather broad set of skills that you need
you've got to be able to hear questions
you've got to be able to access a lot of
general knowledge gotta be able to
produce coherent speech and even get the
timing right knowing when to to jump in
with the answer so to me that is an
incredible achievement and when you
look at how it's built there's a lovely
paper that the IBM people wrote about
the sort of overall design of the system
you see it has many many moving parts
each individual moving part this one
might be machine learning this one might
be about speech this one about accessing
knowledge and also lots of duplication
to make it robust so components that are
supposed to be doing the same thing and
maybe voting too what's the answer so I
think AI is a sort of sophisticated
engineering discipline that is pulling
together many of these things now that
wasn't not your original question what
was the back to machine learning yes yes
Yes. It does seem to be the case, and I'm aware there may be a media bias on this, that machine learning is driving a lot of the current interest. But the question I was interested in was whether you can nail down anything specific about AI, as opposed to other kinds of coding, that has its own unique or particular ethical issues.

Well, I go back to what Joanna was saying about systems that make decisions (you talked about actions; actions and decisions) and do so autonomously. These are not machines that are operating under the control of humans; these are machines, if you like, that are taking a lot of initiative. So we really ought to be concerned about what sorts of decisions they take, and how those sit in human terms.
One thing that hasn't come out: of course, if the data is biased in any way, those biases will be picked up and propagated through all the decisions. For example, say you are giving jobs to people. If the job was always given to people coming from a certain area, because it's a wealthy area, or the area where most people hold a certain degree, then the system will continue predicting that people from that area will get the job more regularly. So immediately you introduce the bias, because somebody from another area could also have a good degree, could be a wonderful candidate, but they're not from that area, so the prediction goes against them. There are these biases in the data that get picked up, and you need to deal with them.
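The hiring example can be made concrete with a toy sketch; all the data, names and numbers here are hypothetical, and the "model" is deliberately the simplest possible one. Trained on historically skewed hiring records, it learns the area, not the merit:

```python
# Toy sketch of bias propagation in training data (hypothetical numbers):
# a frequency-based "model" fitted to past hiring decisions that favoured
# candidates from area A, largely regardless of their degree.
from collections import defaultdict

def train(records):
    """records: list of (area, has_good_degree, hired). Learns the
    historical hiring rate per area and uses it as the prediction."""
    counts = defaultdict(lambda: [0, 0])  # area -> [hired, total]
    for area, _degree, hired in records:
        counts[area][0] += hired
        counts[area][1] += 1
    return {area: h / n for area, (h, n) in counts.items()}

history = [("A", True, 1), ("A", False, 1), ("A", True, 1), ("A", True, 0),
           ("B", True, 0), ("B", True, 0), ("B", False, 0), ("B", True, 1)]
model = train(history)
print(model["A"], model["B"])  # 0.75 0.25: the area, not the degree, decides
```

A strong candidate from area B inherits area B's low historical rate; the model faithfully reproduces the bias it was trained on.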
There is another thing, which is that when you do machine learning and do everything automatically, you do not know whether something has been automatically set up to collect certain kinds of data: for a cyberattack, say, or to collect certain things from you, or, for example, to make your mobile phone battery die after two years so that you need to buy a new phone. All of these kinds of things could actually be biases in the data, or in the way AI is employed.
What these systems could potentially exhibit in the future, to the extent that they might be able to resemble human competencies, is really interesting. Think about an example a colleague gave me: she said that her three-year-old daughter was speaking to Siri, and she said, "Siri, do you love me?" and it replied, "You know I love you." There was a really interesting moment there, where she was reflecting on the potential of this technology, in future, to pose some really challenging questions about what it is to be human in a world where this technology is developing. I think that's a fundamental issue.
I agree that you were trying to get at the bias, but I'd like to say that I don't think the bias itself is different. What we're saying is that human culture creates biased artifacts, and that's been true all along; AI is no different. I think the difference comes from, again, this weird over-identification. There are pretty major people in AI who do the hype and say, "Oh, you know, programming is over; now we use machine learning, and machine learning is all there is." No: machine learning is one way we program AI. And people are using that magic dust as an excuse to go back to things we had previously outlawed, like the persistence of these stereotypes in HR departments.

Surely the difference is authority. If we treat the AI program that has biases built into it as somehow overruling our human judgement, that's rather different from an HR director, who can be challenged. It's maybe old politics, but I'm very conscious of something Tony Benn used to say. He asked these questions of anyone who had power: What power do you have? Who gave you that power? In whose interest do you use that power? To whom are you accountable? And how can we get rid of you? I think those are actually very interesting questions, and with AI (maybe I'm reaching for the wrong metaphor here) it seems to me that the advance that makes this problematic is that it involves manifestations of power that we're not completely used to handling. Is it in the people who create the AI? Is it in the user? Where do responsibility and accountability lie, and how can we change it if it goes wrong? I think those are areas where we're floundering, because the lines of accountability, responsibility and authority, both how the innocent user attributes authority and how authority is built into the program, are all still very unclear.
There's something else that hasn't come out so far, which is that, and I know this doesn't apply to all AIs, there will be some where you will not be able to get a good understanding of how a decision has been made. I know that can sometimes get the backs up of certain AI researchers, but it's true, isn't it, Andrew?

Yes, and it's something that is a very live issue at the moment, both for people interested in ethics and for the people who are building the systems. I guess what's really brought it to a head is that in 2012 we had this real breakthrough, which is why we're all talking about AI now, which is deep networks. The networks were celebrated because they were so effective: three times as effective in vision recognition, three times as effective in speech recognition. But they are black boxes, even more so than previous technologies. Of course, if you have a black box that's deciding what the meaning of a word is, you perhaps don't worry too much about understanding the rule; but if that same black box is deciding whether the bank gives you credit or not, you then want to challenge it, because it becomes much more important whether you can. This has actually inspired researchers, and there's a lot of rethinking going on now about how you break open these black boxes and design them from the beginning to be less black. Can you make the box less black? Or can you pair the black-box system with another sort of shadow system that is more transparent? It's a tremendously interesting field of research, and the Turing Institute is absolutely all over it.

But if you have a product and it's dangerous,
have a product and it's dangerous
do you sell it now you know what it when
do you let it go
and i feel like because people are
fooled because they think intelligent
means person and people are so ready to
say you know siri do you love me then a
lot of companies are trying to get out
of what would have been
their responsibilities of due diligence
before they really software
I don't think deep learning is the end
of responsibility you audit accounting
departments without knowing how their
synapses are connected of the humans
right if even if deep learning was a
complete black box we could still do
tests and characterize and have other
processes that are making a ring around
what it could what it allowed to do
there's all kinds of ways we could
handle that and we've been doing with
more complicated things which are people
for a long time but in fact there are
ways to kind of get at it and pro that
what it's doing but the point is that a
lot of this comes down to power and a
lot of it comes down to deception by
those in power trying to create
basically she'll get shell companies
without even people in them we've
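That kind of external testing can be sketched in a few lines. Everything here is hypothetical: the "opaque model" is a stand-in, not any real system. Even without opening the box, probing it with matched inputs reveals whether outcomes differ by group:

```python
# Sketch of a black-box behavioural audit (all names and data hypothetical).
# We never inspect the model's insides; we only query it and compare
# approval rates across a protected attribute.

def opaque_model(applicant):
    """Stand-in black box that secretly penalises group 'B'."""
    score = applicant["qualification"]
    if applicant["group"] == "B":
        score -= 0.3
    return score >= 0.5

def audit(model, probes, attribute):
    """Approval rate per value of `attribute`, measured by querying."""
    rates = {}
    for value in sorted({p[attribute] for p in probes}):
        group = [p for p in probes if p[attribute] == value]
        rates[value] = sum(model(p) for p in group) / len(group)
    return rates

# Matched probes: identical qualifications, only the group label differs.
probes = [{"group": g, "qualification": q}
          for g in ("A", "B") for q in (0.4, 0.6, 0.8)]
print(audit(opaque_model, probes, "group"))  # group B is approved less often
```

The audit needs only query access, which is the point: a "ring" of tests and processes around a system does not require understanding its internals.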
We've identified these issues, the issue around black boxes and the issue around bias, but I don't feel I've got a good sense of how big a deal those problems are. Are they side issues, or are they…?

Okay, so I think deep learning is definitely not the only way forward; however, what's going on now is that everybody's doing deep learning. If you go into any of the universities or companies, everybody's talking about deep learning, everybody wants to do deep learning; all my students want to do deep learning, never mind what I tell them. Deep learning does work really well, but we don't know much about why: we don't have the theoretical underpinning. And despite all of that, it's still deep learning everywhere. So I do think it's a big problem how legislation and control can actually be done. This is something we need to discuss, and we need to discuss it with the government, and we need to find a way to do what we discussed some time ago: I believe we need something like auditing of software, because a lot of things can go wrong, especially if we cannot have machine learning programs that are open boxes rather than black boxes. This auditing of machine learning is something that is absolutely not there; nobody talks about it. And it's one of the things we need to talk to the government about, because that way you can stop Apple taking all of our data all the time under quotes like "you have to have it that way, otherwise you will get a virus".
Where is this accountability going to come from? It's amazing how often you can talk to academics who know all about these issues, who are working on great ways of making black boxes transparent, or of getting rid of biases, and yet there are things out there at the moment that are biased and inscrutable. Humans are biased and inscrutable as well, but this is extra stuff we're adding that affects lives; you can't just say, well, they're no worse than us. There's just a load more decision-making going on through these systems. The work may be academically fine, but when does it actually get better for people in the real world?
Something very interesting about that: people in the real world never have an agreed answer to a question. There is no moral consensus on a whole host of questions. So in a way, what is complicating this situation is that AI is increasingly starting to make decisions that people would otherwise have made, and people would make those decisions differently. That adds yet another layer of complexity to what is already a challenging ethical landscape. One of the things we're doing at the RSA is convening citizens' juries, essentially randomly selected groups of people, to deliberate on particular ethical issues. In this instance we're looking at the use of AI in the criminal justice system, to better understand the parameters for its ethical application, and we're also looking at the way AI is influencing democratic debate. The reason we're doing it this way, and not just asking what any citizen thinks off the top of their head, is that finding a moral consensus is going to be incredibly difficult, and actually understanding what could create a moral consensus in a particular social, cultural and ethical context has to happen, and we have to find new ways of doing that. We're prototyping and experimenting; it's an extremely experimental space, and it's really, really crucial. But there's that whole other space where there is no agreed moral consensus, and then the question comes back: is this a decision a machine should make, and who should take responsibility?
Most engineering achievements were done with the purpose of improving the human lot: this fantastic bridge exists in order to enable something to cross it. It strikes me that much of this conversation, insofar as I've heard it, has been about the achievement for its own sake, with the application thought of after the achievement. The other thing that worries me slightly, well, more than slightly, is that these fantastic achievements, and they are amazing stretchings of human capability, are hitting the ground at a time when our political and economic cultures are essentially dominated by an intense individualism that's utterly relaxed about wealth inequality, despite being singularly occupied with things like gender equality (we haven't quite squared those yet), and at a time when it feels as if the failings of our political culture are beginning to morph into something that returns to and retrieves the concept of the common good, but hasn't got there yet. I see the AI industry operating in that old paradigm of late capitalism, you might say, where the common good and moral consensus are actually written out of the equation. Ever since Hayek, the possibility of moral consensus has been written out of the discipline of economics; it's beginning to come back, but not yet in the context of these developments. So it feels to me as if we're seeing something fantastic happening, but into an old political culture that can't handle it, and the emerging political culture isn't yet talking about it either.
What I wanted to do was come back and defend myself a little from the characterisation I got. It's not that we've figured out how to handle accountants, more or less; there is still a lot of white-collar crime, and it doesn't mean we've figured out how to handle AI. I'm just saying it's no harder a problem. And I thought of a really great example of an ethical excess that was immediately recognised as such. What happened was there was a company that was absolutely evil and bad and everything, in America, called Enron, and when they went bankrupt, their email database was seen as an asset that could be given away; it just became property of the US government, and so they exposed the email. Probably all of us who do machine learning have used the Enron database at some time for research. When that happened, okay, it was supposedly your business email, but you all know there's other stuff in there, right? People's personal lives were destroyed; there were affairs revealed; all these things happened. And now nobody does that anymore. We no longer consider email something that, no matter how evil the company was, no matter how bankrupt, is an asset you just give away or sell off. And there are very strict rules about how you can access even government email, where everybody is supposed to know that it's accessible, and things like that. So we can do that. And also with inequality: we had to solve inequality early in the 20th century, and it took us way too long, and I hope we're faster this time.
Myself, I just want to say: we are attacking AI quite a lot, but you asked, very beautifully, when will it be useful for humanity? It is actually useful now. Think about it: how many research papers do we have just in cancer research? Something like 300 to 500 per day. Not a single doctor can go through that number of papers per day, so having AI give you a summarisation of them is wonderful; it actually increases your cognitive ability. That's great. Another thing: vision, which is my field. I can currently measure, at 30 or 60 frames per second, any movement of the face. There are certain tremors of the muscles that are indicative of various diseases, such as Parkinson's, or depression (a different kind of movement), or dementia (again, a different kind of movement), and I can measure those from a single webcam. We, with the human eye, cannot do that: the eye works at around 15 frames per second, so we simply don't see it, but the camera can see it. It's the same as an X-ray: the X-ray, if you like, tells you "I see something". So it's not really replacing; it is enhancing, a symbiosis between the AI and the humans. We are currently in a place where the technology really can help us, and is helping us. It is the case, however, that it should not be monopolised. It should not belong to four companies; it should belong to society, and that goes to the greater good. It should not be individual.

Well, you've brought us on to the next bit, which is about the people making these systems, the companies making them and so on. We need to trust the big tech firms to behave responsibly, don't we? Because there is nothing else; there is no other mechanism apart from existing product law. Or is that wrong?
Big tech is less like another company and more like another country: we need to negotiate with them as we would at the UN, because they have more power than quite a lot of countries. And it is in their interest, because of the nature of their business, that we flourish; they want humanity to flourish. If you look at what's happened with Facebook in the last year, I think they really have realised that they made a mistake, and they're trying to figure out how to do a better business model, and they've realised they can afford to lose some money while they figure it out. I believe other tech giants are in similar places, although not all the same ones. But it is complicated; it's about treaties. I've had someone from Google say to me, "We know something has to happen; we just don't want it to happen to only one of us." They don't want to be shot down alone while others come up. They want to figure out a way to change the playing field, but they're willing to sign treaties, so that everybody plays by the same rules.

One of the aspirations, certainly of the British government, who've written a lot about this recently in their industrial strategy, is that this is going to be something that enriches productivity and our ability to do things in a much broader way. The industrial strategy that came out at the end of November is rather fulsome on it: it names four main areas that it thinks will really transform productivity, and AI is one of them. I think it is important that the technology is broadly accessible. And I think, while we're thinking about all the pitfalls and the minefield, if you like, that AI presents, we should also keep in mind the huge benefits. Maja was already talking about that in the context of health; I just wanted to mention an interview on the BBC with John Bell, a senior physician in Oxford, saying that he thinks AI will actually be essential to save the NHS: that we won't be able to afford the NHS unless we can mobilise the efficiencies, in many domains, that AI will bring.
You're maybe closer to the industry, the companies, than anyone else on the panel, or certainly have been, through Microsoft. Do you think those companies have earned public trust?

You know, that's a very complicated question. We're in a position where we do trust them for some essential services. You can switch off your mobile phone and give it up if you want; I don't think many people are making that choice, although of course a few people do, and some switch off their Facebook accounts because they find it's not what they want. But I think we're in a position where we are weighing the trade-offs between, let's say, giving away our data (we might worry about what we're releasing) and the huge benefits we see. I'm not very good at remembering routes to places, and I love having the phone take me around the city and guide me in my car. So we see huge benefits from the technology, and we're going to engage in this struggle; I don't think we're just going to come down on one side and say the technology is too dangerous. It's a bit like going back to your bridges: we could say bridges may fall down, so we shouldn't build bridges, but bridges are so important to us that we're willing to engage with the hazards. And of course we expect the engineers, and people who build AI systems are a species of engineer, to take safety very, very seriously; and in this kind of ethical context, safety has a rather broad reach.
On these companies behaving socially responsibly: the big tech firms have set up the Partnership on AI, yes, to benefit society. But pretty much all of those big companies have also been criticised for aggressive tax avoidance, and it's quite hard for me to square in my head how a company, let's take Microsoft, which routed billions through Ireland, and Ireland is now being forced by the EU to take back unpaid taxes from other companies, how I am supposed to think of that same company, that same group of companies, as setting the rules, through the Partnership, for how AI will be beneficial to society. Because I think a lot of us would say that taxes are beneficial to society, because they help build everything. How do I square that?
know we're ranging far and wide now into
into politics and way beyond technology
I suppose you know there are many
powerful organizations not just
companies but let's say governments, that we trust to look after our
interests and these organizations are
seldom unalloyed you know they have a
complex job to do sometimes they make
good decisions sometimes they make
decisions that are not so good and you
know, I think these powerful companies are like that. Because I know a lot of the people in Microsoft that I work with, there are a lot of very thoughtful and well-motivated people, but sometimes things don't get done right. And I think when you're
entering a kind of very complex arena
where the stakes are high you know that
things are happening that we really care
about then you should expect that some
things will be done well some things
will be done not so well but personally
I do have a lot of confidence that these
big companies we're talking about are
very serious about making good systems
Just as you gave the example of Facebook: they hadn't appreciated, perhaps, the consequences of the services they were offering, and once that became kind of clear, it seems like they really want to do something about it. They have said recently that they're willing to pay more tax, so like I said, I think they have realized, they are starting to make the right noises, and they are starting. I was at the UN's Internet Governance Forum and they were sitting there right next to the countries and the NGOs, so I think this is sort of happening.
And I want to go back to something you said about bridges falling down. One of the metaphors I use, because I had the pleasure to talk to some architects, is this: it used to be that any rich person could build a building, and it would fall on people with some predictable probability, and some people would die, and whatever. Now you have to get planning permission, you go out and you figure out where the building belongs, everybody has been licensed and knows what they're doing, and the building gets inspected.
Computer science used to be a toy, and we could build lots of stuff, and humans built pretty cool tools, it wasn't just a toy, those tools. But now that it's become infrastructure, and now that it's falling down on a few people and has maybe killed some people, we need to think about licensing and inspection. We've talked about these
big companies. You see a lot of really good people go to the same small number of really big companies for pretty big salaries. Is there an issue, and we've talked about ethics, so that's probably why it feels like we're repeating a little bit, is there an issue of concentrating intellectual wealth, intellectual capital, in a small area? Is there actually a financial inequality issue, with all these people not only going to this small number of big companies but getting a lot of money compared to many others? So yes, and let me answer first on
that. Yes, we have a problem, because we currently have an inequality based purely on the knowledge of AI and machine learning: people who are experts in the field are able to get salaries which are currently five to ten times the average salary in London, which is really high. So the inequality is huge, five to ten times, and five is the minimum. That's one problem. The second
problem is the taxes and how the companies are made. There is no geopolitical border for these companies, they are global, they can go anywhere, and they don't have to pay the taxes. They can make deals with a government, saying that they will employ people, hence the government gives them tax breaks; this gives them more money, and this is how they buy more people. The result is that you will have so-called intellectual capital concentration in these few companies, meaning further that they will have monopolies in the future, because everything is about AI. So we will not have a free market, we will have monopoly.
Do we want that? I think regulation is really of importance; we need to regulate these companies. The fact that they are global and don't have to pay taxes is actually also applicable to their people. This doesn't happen now, but it could easily happen: anybody can live anywhere. We are talking about programming and machine learning and AI; they can work from anywhere, for a company which is in the States or wherever, right? So it's really important to understand that the governments will lose hugely in their taxes. This is the disturbance for the governments; they need to do something if they want to survive, simple as that. So this is one part;
the second part is what Andrew mentioned, and I really would like to go to that point. Many people say, I will give my data because there is this greater good that will help me; this is how many people see the situation. Sure, give the data, but get the money for it. It's your data, you know, so that's my issue: we are the owners of this data, so if somebody else wants to profit from it, why don't we get a piece of that? Because it's more than
just governments losing control: when governments lose control, people lose control. At the moment, at least in democracies, governments are our mode of being able to keep these things under some sort of control. And I think you touched on something really important there: whose is this data? And
again I'm struggling to find metaphors
that work but are we looking at
something here that is so ubiquitous or
potentially so ubiquitous to the way we
we shall live in the future, that it is more like language than product? Language is owned collectively: it does evolve, it changes, you can control it to some extent. I mean, if I'm called McDonald and open a restaurant, I can't call it McDonald's, because that's, you know, a brand. But even so, the Académie française has found real difficulty trying to control the evolution of the French language, because it's owned by everybody who speaks French. Now, is this really something in AI that is going to be so ubiquitous that the idea of monetizing it, turning it always into a product, rather than seeing it as a collective possession for the benefit of everybody, for the good of all, is mistaken? If
we started working with that metaphor of
language would we think of it
differently would we find more creative
ways of handling the fact that I own my data, but if I can't access my data without someone else's product, in what sense have I got any control? On wealth concentration in a small number of companies: are we worrying too much about that? You know, as Theresa May said, and I think it came out in the Wendy Hall and Jérôme Pesenti report, there's a new startup in London, or in the country, every week or every month, some good rate. Are we worrying too much, or is there an ethical issue in
this concentration of smart people into
a small number of companies how do you
see it? I think it's a call to action, that's how I see it. I mean, in many respects the success of these relatively few companies is inspiring, and they've invented things that simply didn't exist before: Apple invented the smartphone, and we hadn't conceived of anything like that. So I think the positive way to react to this is to be inspired to do likewise, and this is of course what the small companies are doing. Then there's the challenge of how you get the small companies bigger; I think that's one of the big challenges, that actually the UK is very good at startups, but not at getting startups to grow big. ARM is the biggest computer company that we've grown in recent times; of course now that's Japanese-owned, but it had a very good long run as a British company. So I think we should be
inspired, and I think we should spread the goodness. I think training is very important: if we want to have a vigorous ecosystem in the UK innovating these technologies, we need to train more people, and we need to think about how those people may also benefit small companies rather than simply going to the highest bidder, to think about those mechanisms. And I just want to say one thing going back to the point about data. Jaron Lanier is a very interesting writer, and he has a whole book which is pretty much about data concentrating in fewer and fewer hands, and one of the things he suggests is the idea of micropayments for data. It's not so much about the cash, I suppose; it's acknowledging that the data has value and feeds into the kind of infrastructure that we've built.
Of course we do sometimes get that back when we get valuable services for free. If we go online and do a search, what's happening there is not trivial; this is access to all the world's knowledge, and let's not forget that we didn't have that 20 years ago. That is a pretty big benefit, and so that, in a rather indirect and inexact way, is some of what we're getting back. We are not giving away our data, we are bartering our data for services, and one of the things bartering entails is ducking out of taxes. So I
was supporting the big companies a few times, but one of the things that made me really unhappy recently was when one of them, I don't remember which, said: we're willing to pay more taxes, and there's evidence of this, we've just opened up a research branch in Paris and we'll be paying a lot of tax there. No, that's not tax, right. And when
we have all these free services, that means we haven't denominated the value, both of the data going out and of the service coming back, and that means we can't tax it, and it's really hard to denominate; it's not a traditional product. I think this is one of the reasons productivity is supposedly stagnant: we can't even measure productivity right now. But what we can do is see how much money a company is getting. We could see its valuation change, and then we could say what proportion of its users are, say, in the EU, and the EU is big enough to say, even to a Chinese or an American company: hey, if you want to do business here, we need to see a proportional amount of tax coming into the EU. That's the way I would solve that problem; I wouldn't even try to actually denominate the transactions, we can't do that, except to see how rich the companies become. Let's talk about the products, the actual things that we're using day to day that have AI in them. Maya, do you think the products
that are AI driven at the moment the
kind of day-to-day stuff is affecting
our psychology in any way that has
ethical implications? Okay, so in principle Facebook is using AI; it's a simple version of AI currently, but it does use AI, and it uses things like tagging, for example, which is not simple any more: it's recognising your pictures and who you are, so that's quite advanced. However, the problem with all of this is that you can see it everywhere. You go to a restaurant, and what do you see? People on a date, and why do they look at the phone when they have this other person on the other side? It's unbelievable. I have seen whole families, two kids, two parents, everybody with the phone. We forget to communicate with each other; we are hiding behind these phones and behind this kind of technology. So this is a big issue. And this is not funny, what I'm going to say: I was called from Colorado State, they had an epidemic of suicides among their teenagers, and the reason is exactly this. They found that on
Facebook, whenever somebody killed himself, somebody put up the picture, and these kids became celebrities, and the other kids got this idea: well, I will become a celebrity if I kill myself. They had something like 35 suicides in less than a year; it's a horrible thing. Then there is a lot of bullying through the social side, a lot, and whoever has teenage kids knows about that. Instagram is currently the way to bully other people in the high schools. That's horrible; this is not the way we envisaged the usage of this technology. And think about relationships: currently in the UK, 35% of all relationships are made through online dating. I don't say that's bad, I just say it has changed us. It's my turn to be positive about AI now; when we're talking about ethics, I think it's mostly bad stuff, right? AI is also helping good things happen with relationships.
With questions about AI, I think we've got to keep in mind that there are these hazards, and they're real hazards, but there are also the benefits. So now if you move to a new town and you know nobody, you've got far more tools for meeting people and making friendships, and that is all going to happen much faster. If you've got family in Australia, say you came to Europe to study, now you've got Skype and you can communicate with them, and that is a huge thing. So I think it's high stakes all around. I think in our society, yeah, there's the polarization of society, but also the polarization of narrative, and the
challenge with something like AI is that it has the effect of creating, as you've mentioned already, networks of like-minded people who share similar values, but it also has the effect of creating echo chambers, virtual bubbles, and in many ways creating contexts in which misinformation and disinformation are often used and applied. That's a distinct issue, but it's very closely connected to the way AI is developing. So what we are seeing is a really interesting kind of polarization of narrative, both online and offline, but I think very much perpetuated by the polarization of narrative online. A really interesting example is in relation to the range of elections that we've just witnessed: many people didn't really expect the outcome of the general election to be the way it was, many people didn't really expect the Trump result, and lots of people, I think, were surprised by the use and application of tech within those contexts in order to polarize narrative. So our perception of what other people think is increasingly shaped by the echo chambers and the contexts that we are part of. Now, I think that's
really interesting in terms of an
ethical issue because we were faced with
a challenge, and that speaks to the conversation that we were having just earlier, which is: how do we create a conception of a common good when you've essentially used technology to create very disparate groups of society, disconnected from each other? On what you said: okay, polarization is incredibly highly correlated with inequality, in fact, and this has happened before. In United States politics you get very clear examples of it coming and going, waxing and waning, so technology is not
necessary for it and also every time
people actually do look at the role of
social media it doesn't seem to actually
be a major factor: the proportion of time you spend on social media is not one of the determinants. Actually, it seems that most people don't get that much of a filter bubble, because most people are Facebook friends with the people they went to high school with and the people who live around them, and that's it. So the elite tend to have filter bubbles, and it's a problem, because really it's the elite who have a lot of impact, but it's not as much of a problem for most people, or at least it's not coming from the social media. There was one other piece with
that, well, anyway, interactions are complicated. Oh, I was going to say: another thing which is correlated with high inequality and high political polarization is these 50/50 elections that things come down to. And then you come to the list of problems, but the other one sitting there is: when you've got this 50/50 election, what if you can tweak just a few voters, can you throw it in a way people didn't expect? And it really did look like, in both the Brexit and the Trump case, that the winning side expected to lose: they had their concession speeches, Nigel Farage gave his, Trump's was written. So it wasn't just that the losers were surprised to lose; it was that the winners were surprised not to lose, yeah.
So that did make it look suspicious, though obviously I don't think it's straightforward, and I'm not saying there's a causal relationship. But I think what's really interesting about the technology and the applications and use of AI is that it has the potential to perpetuate polarization. We have quite polarized societies, along very different lines; our societies are interestingly and increasingly polarized along other lines, economic, social, demographic and so on. The really interesting intervention of technology, and in particular AI, is the potential it has to perpetuate that inequality. It connects to the point we were making, which is the extent to which AI could potentially help address inequality, but it also has the potential to entrench it. And I don't know
enough about it, but I can see no reason in principle why AI might not be designed in such a way as to reinforce community. But in practice what it seems to be doing is privileging choice, in a way that is actually quite undermining of community, in the sense that we now see ourselves as members of communities of choice: we unfriend people. If I can introduce just a moment of Christian theology, saying love thy neighbour: we tend not to actually choose our neighbours, in the geographical sense, beyond a certain point. Being in community means living with people you don't necessarily get on with and haven't chosen. It's the balance between the chosen and the given which I think is part of, dare I say, the human condition.
And we are moving the boundary, for better or worse, and it can be for better, to make more things chosen than given. But I think in the end, if I can again introduce a theological concept: at some point we die, and we tend not to choose that. Choice has its limits, and so the given, and I'm serious about this, the given is part of being human. If we move that boundary too far, into imagining that we can choose the people around us in all these ways, we're actually denying something quite important about ourselves. Before we go
to the audience questions, which we'll do after this, I wanted to ask the techy ones, the techy three on the panel, though it probably makes sense for the rest of you to put in too: I get a sense we're in the foothills of this. What do you see in your labs, in your interactions with the people in the area? Do you see products and applications coming down the line? What do you think? I
think one thing I would like to say is that we're getting a very interesting debate here from the arts and humanities crowd, and I think technology needs the arts and humanities crowd in at a much earlier stage, not coming in after the products have been built, when I'm not sure it is really going to do what we want, but being in the technology companies. The closest we get so far, and maybe this is the right way to go, is the discipline of design. I was very keen when I was running the Microsoft lab to get designers in, and you do get a completely different thinking. Just because you're good at coding doesn't mean that you should be the person who decides what these systems look like in practice, and pragmatically that happens a lot, but actually we need a much broader reach: in design, in kind of philosophical thinking, and in legal thinking. You'll notice I haven't mentioned Terminator robots. A very rich field for all of this is the company FiveAI, which is building autonomous cars; I'm a scientific advisor and getting quite involved in that. Of course that is a
whole new set of issues than the ones
we've been talking about. I guess the best way to characterize it is that this, for the first time, is safety-critical AI. We talked about bridges, and they're safety-critical; the other systems we've talked about are mostly social systems, although sometimes safety-critical. Autonomous cars are very obviously safety-critical, and so that's provoking a whole new kind of thinking. There are unique vulnerabilities in machine learning vision systems; generally, a perception system
that is artificial, and I think Maya will probably agree with me, is not at the same level of reliability as a human. If you read the scientific papers, you find some perception system that is right 99 times out of 100, something like that, so let's be generous and say 99.5. But that's not nearly where we need to be for cars. The industry calls that two nines of reliability, 99 percent; we need something like seven nines to get to this roughly human level of performance and safety in driving. That is a fascinating challenge.
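The "nines" arithmetic behind that comparison can be sketched in a few lines. The two-nines and seven-nines figures are the ones quoted above; the helper function below is just an illustration, not anything from the panel:

```python
# "n nines" of reliability means a success rate of 1 - 10**(-n):
# two nines = 99% correct, seven nines = 99.99999% correct.
def nines_to_failure_rate(nines: int) -> float:
    """Probability of a wrong decision for a given number of nines."""
    return 10.0 ** (-nines)

two_nines = nines_to_failure_rate(2)    # 0.01 -> one error per 100 decisions
seven_nines = nines_to_failure_rate(7)  # 1e-07 -> one error per 10 million
print(round(two_nines / seven_nines))   # prints 100000
```

In other words, getting from a good published vision system to roughly human-level driving safety is not a small polish but a hundred-thousand-fold reduction in error rate.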
There's a very direct safety aspect. First of all, I disagree: some of the things that you're attributing to artificial intelligence I would say are at least as much just ICT, information and communication technology, because humans are sticking their brains together, right? So in a way it doesn't matter if it's a machine or a human; you're getting more ideas, faster, and from more sources. Why
I mentioned that is that if we don't
really know how much just our ability to
communicate is making these political
and economic changes then how will it
change when we get real-time translation
so real-time translation will just
change the way things feel and it is
already changing I would love to see
things like a real-time translation of
institutions, so that if you're a migrant you suddenly plug into the local tax system and don't just wind up being an illegal alien somehow. But anyway, the other thing that I really
worry about and that I wrote a paper
about in September: there was this proposal from the European Parliament about artificial intelligence as being persons, legal personhood for artificial intelligence like you have for corporations. And people say, oh, don't worry, it's just what's called a legal fiction, you know, they mansplain it to you: it's called a legal fiction, it's going to be convenient, it'll help with the contracts. Look, the legal fiction which is legal personhood
for corporations only works to the extent that the people who are the decision-makers for those corporations don't want those corporations to disappear, or would go to jail, or there is something they would actually not want. That's why you can have shell companies: they actually allow you to do unethical things. A completely synthetic legal person, there would be no dissuading it, and human law would not work on it; it just doesn't make sense. And again,
there's this attractor, which is companies trying to get out of their obligations, car companies, say, trying to get out of their obligations, and the futurists who want to believe that AI is going to become these benevolent aliens that are going to make the world better. And what I really worry about is despotic leaders who don't want to accept that death is inevitable; we keep reading about tech billionaires trying to upload their minds and live forever. What if some bad Putin chatbot sits there on some computer saying crap, and people continue to follow it, and it creates anarchy for hundreds of years? That would be awful. That's one of the things I worry about: not piercing this veil and getting people to see the human responsibilities and human justice. I really do want
to go to the audience questions, but I do want to ask you before that about some of the work you're doing, which is going to have interesting ethical implications if it comes to fruition, because you're working on systems that can pick up our emotions, and there's other work going on, maybe yours as well, on systems that can display, mimic, emotions. As we were saying, you only have to draw two spots on a box and people give it empathy. What are the consequences of having very convincing systems that people believe feel? Okay, so
this is really important. Yes, we can build robots; a lot of people have built robots. Hanson Robotics is one of those that build robots that look like humans, and they're very realistic, and they can make expressions that are very realistic, and they can maybe recognize your expressions and react appropriately. But please remember, everything is programmed. So I don't think it'd take you long to build a system that would pick up on people's emotions and sell them things when they were most likely to buy stuff? Oh yeah, but that's a separate
issue; that's Facebook. They filed patents where they will have a camera with a face recognition engine connected to Facebook. They have 2.1 billion profiles; they can recognize people in the shops, and they will know from our searches how much we will pay, hence each and every shop can give us custom prices for each and everything. Unless anyone wants to jump in, let's go to the audience for questions.
We should have a bunch of roving mics, or at least they'll be arriving when you start putting your hands up. It's quite hard to see an awful lot of you, especially on the balcony, so you might need to shout; I don't know if we can do anything with the lights to make that easier. If you've got questions, get your hands up and let's start. And if we can keep our questions short, we can have a lot, and we don't have a whole load of time. I'm a clinical neuroradiologist at Charing Cross Hospital, so my field is probably one of the medical specialties which is going to be most affected by AI, with systems that
can automatically read scans. Now, given the issues surrounding black-box algorithms, do you think it would be immoral for us to implement these systems until we actually unravel some of the mysteries that surround a black-box system, given the fact that they can directly affect patient care? One problem is that if the black
boxes are performing well enough it
might be immoral not to use them rather
in the way that you know clinical
studies sometimes get cut short because
it becomes absolutely obvious that such
and such a drug is such a lifesaver you
can no longer, it's no longer reasonable, to have the control group. I think the best thing would be to advise the doctor: give him what the AI system finds, and the doctor can take this into account or not. Just cutting the human out, I don't think that is good; so think always about this symbiosis of AI and human, that's the best. I've heard that people who do HR are just really happy about
how AI is helping them see things that
their normal processes weren't helping
them see before but if you tried to
replace the humans with AI then you'd
have a whole bunch of other failures so
it's trying to use mutual strengths
absolutely it's about keeping humans in
and trying to use mutual strengths but
for a practicing doctor, isn't there a potential legal issue where going against a decision by your AI assistant could have legal implications if things go wrong, because they say, well, you didn't do what the AI said? Ideally, as I said before, justice has to stay so that the human is the one who's responsible, and so ideally it would only be advisory things that were being brought up. But I see what you're saying, that somebody could be very upset. This is the thing they used to say about driverless cars, I haven't heard it that much recently: okay, one-tenth as many people have died, but it's going to be a different tenth, and everyone's going to be up in arms about that. But in practice most people seem to realize that driving is Russian roulette, and they don't want to be out on that, and they're like, okay, I'm happy. So I haven't actually heard that, but it's good that we've identified an issue now, and I hope that we set a really nice precedent and make good law about it, because otherwise it could go horribly wrong. Okay, more questions. We might sort
of work around a bit so that we get everyone and don't just have the people with the mics running around like crazy. So can we go, oh, I thought we had loads of mics. The woman here in the front row, blue top with stripes, front row.
Great, thank you so much, this has been fascinating. A couple of you mentioned in the beginning the issue about what it means to be human, and there are disagreements around that. I'd love to hear more about that as we face this potential system, or even, some people have argued, a life form, that has totally different motivations than ours, that has intelligence without consciousness. So what does it mean to be human, faced with this? The basic idea is that the A in AI stands for artificial: it's an artifact, so when we build it we are responsible for it, and that's the biggest difference between it and life.
Now, some people say, oh, but you choose to have a child; okay, in half the cases that's true, but it's different. You don't get to say: am I going to use lidar, how many end effectors am I going to have, which kind of CPU am I going to put in. It's not like having a baby, it's like writing a novel: you have complete control, within the parameters of the laws of physics. So if we create things like that, and I'm not saying I think we can create things like that, but even if it is possible to create something that is just like a person, what would we be doing? Why would we give up our responsibilities? Again, I think our justice system would just sort of fall apart. People don't like that, because they don't like to be limited in what they're allowed to do,
but basically, first of all, I think our justice system would fall apart; secondly, because of the things we've all been talking about, I'm sure that we would believe we had created something that was conscious and needed something from us before we actually had, because we're so easily fooled. So I just think we shouldn't go there. And I want to focus on the I in AI. Of course, the rules among humans tend to be made by those who value
intelligence very highly but being
intelligent or having a particular level
of intelligence, isn't a measure of being human. And so again it comes down to the point that has cropped up again and again: who makes the rules, and on which principles? And are we actually using artificial intelligence as a tool, or are we likely to get into a situation, and this almost goes back to the previous question, where we trust it more than we trust ourselves or people like us? We
tend to defer to human authorities when
people know more than we do but not
necessarily because they're more
intelligent than us. I think there could be a question here about the way we're talking about this, about the narrative around AI that leads us to place it on a pedestal as something that
in many respects will exceed human
performance that doesn't mean we should
trust it more than we trust our
judgments because human judgement is a
mix of consciousness of all kinds of
things that are not just about
intelligence. So the church has no problem with researchers pursuing this idea of what's always referred to as human-level intelligence — as if human-level intelligence were the pinnacle of intelligence. The church has no issue with that: pursuing human capacities and capabilities is entirely what we are called to be and to do. It's all about application, and about application in ways that make us more deeply human.
We don't know — that's the main problem. What is human? We have no idea. Each and every brain of ours is completely different, and what we know about the brain is maybe 5% of what there is to know. We have no clue what's in the brain, and each and every one of these brains is
different question about social license
and social risk as well so in the past
we have designed systems car insurance
is one of them that allowed us to
distribute the risk and essentially
compensate people for for-4 risks that
are I suppose inherently shared in the
system so it's a question and I'm just
waving it but if AI and the development
of AI is now going to put some kind of
special risk because there will be
questions there people don't agree on
and gonna have to come up with an
outcome answer in relation to autonomous
cars for example how to redesign systems
around that that ensure that social risk
is distributed or that people at least
compensated in some way shape or form in
a way that maintain the second point
which is social license of the
approaches to operate and I think that's
really really key and it really underpin
a lot of the discussions that we've had
today check you around why companies
might want to way I like to call this a
sort of Desai
announced defend model I often people
make a decision they announce something
and they may have to face a massive
backlash against their their initiative
or whatever it was very developed and I
think the companies but also wider
society as a whole has to really think
about moving away from decide Mouse
defend towards engaged deliberate decide
and it requires a massive massive
cultural shift we're not ready for it
yet
but we have to be.
Mostly, I am not confident that we will always be the best. When you get a recommender system like Amazon's that tells you what book you should buy, that's just done by doing a lookup — it's not very complicated machine learning. You just find another person who bought a bunch of the same books and you make a projection. I think in the next 10 or 20 years we're all going to be able to, you know, find our best mate, look up — Google — our next best move, and that's why I'm dubious about thinking of human judgment as special. In a way it sort of is human judgment — it's just data based on other people — but the point is, I think the humanities face a big challenge when we are suddenly confronted with the fact that we could use AI to figure things out better than we can ourselves. I think that's the coming challenge.
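[Editorial aside: the "lookup" recommender described above — find another person who bought many of the same books, then project their other purchases — can be sketched in a few lines. This is a toy nearest-neighbour collaborative filter; the data and names are purely illustrative, not any real system.]

```python
def recommend(purchases, user):
    """Suggest items bought by the most-overlapping other user.
    purchases maps each user to a set of items they bought."""
    mine = purchases[user]
    # Find the other user sharing the most purchases with us.
    neighbour = max(
        (u for u in purchases if u != user),
        key=lambda u: len(purchases[u] & mine),
    )
    # Project: recommend what they bought that we haven't.
    return purchases[neighbour] - mine

purchases = {
    "alice": {"emma", "dracula", "ulysses"},
    "bob": {"emma", "dracula", "ivanhoe"},
    "carol": {"beowulf"},
}
print(recommend(purchases, "alice"))  # {'ivanhoe'}
```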
Let's get some more questions — could we just get some from around here, and then I'll move over to these guys, and then we'll hit the top crowd. Okay —
whoever she's pointing at — I can't see who you're pointing at; I can see everyone's pointing.
[A girl asks about a robot that said it wants to make itself better.]
Yes, I will repeat the question. The girl is asking about the Hanson Robotics robot Sophia, who said that she will better herself, that she wishes to have a family, that she will love the family and care about the family, that she'll better everything, and so on. The issue is that this is a robot: you can program a robot to say whatever you like, so this has nothing to do with reality. Hanson Robotics really likes to make splashy stories that fascinate us and amaze us and probably anger us, because then we will talk a lot about them, and any publicity is better than no publicity. So do not trust these things — it is the same as trusting the Terminator movie. This is something we cannot do. When you look at robots, robots currently are really clumsy: they usually fall, they fall in all possible ways, they cannot move very well, let alone do real things, let alone understand real things. Making kids, making herself better — yes, the robot can acquire knowledge, through the internet for example, and come to know a lot of things and recognize a lot of things. But it's not about really making her emotionally better, making somebody a better person, let alone care. Care is an emotion which you show and feel; robots cannot feel, we can just program them.
So, you know, NASA — they're a bunch of scientists, and they have a Twitter account that is supposedly the rover or whatever, and it's just somebody typing, pretending to be it, so that kids can imagine being a satellite or whatever. But it's getting to the point where it's right in front of us and it's confusing everybody. The UK was the first country
that had a sort of national-level document, the EPSRC Principles of Robotics, and the fourth principle was that you shouldn't make a robot seem more human than necessary — that its machine nature should be transparent. Now, the reason for that goes back to Kant, a German philosopher — he was really smart — who figured out that kicking dogs was wrong whether or not dogs were something you needed to worry about for their own sake, because we identify with them: the people who kick dogs often hurt people too. So he said you need to take care of a dog because you think of it as being like a person, regardless of whether the dog itself is something that's important. Some people have used that to argue that we have to treat robots really well too, because they remind us of ourselves, and therefore it would degrade us not to treat them correctly. But the people who came together and wrote the Principles of Robotics — and I'm one of them, I'm biased — said no: that means we should make it more obvious that a robot isn't a person. And so all these people who think it's really cute to make their robots look like people, and to tell children that they're alive, and whatever —
Let's get some more questions. Tweed? Pashmina? Sorry, I don't have a very wide vocabulary for clothes.
I'm at a software company, and I happen to sit next to the marketing department, so whenever I hear companies making grandiose claims for social good, like saving lives, all sorts of alarm bells go off in my head. Like Sir James Lighthill said in 1975: you've got to remember these guys are trying to sell you something. Should we be allowing these software companies to get away with the claim that anything we do has unanticipated consequences, and to use that as a get-out-of-jail-free card?
Yes — I'm not sure I've got a good answer to that. You and I may be the only people old enough here to know who Sir James Lighthill is, so I'm going to pass that on.
Saving lives — actually, I agree with that.
Think about drones: they could go into a fire, tell you which kind of fire you actually have, and only in the rooms where it's safe enough to pass through do you let the firemen go in. You don't think that's wrong, really. Or think about, for example, these things I'm talking about: we would be able to diagnose certain things much further in advance, just based on the AI. It's great — it's the same as a CT scan or an MRI scan or an X-ray, right? It helps you do things that otherwise you would not be able to do, and I don't see anything bad in that. So saying "saving lives" is not wrong in that sense: certain technologies are able to do certain things to save our lives, and that's great. But of course we should not make promises which are not possible; we should not do what Hanson Robotics does, because it gives a completely wrong perception of the technology. We should not call the technology miraculous, because it's not — it's awesome, but not miraculous.
Okay, so there are going to be trade-offs, you know, like the unintended consequences, and there will be some mistakes. But we should figure out what the standards are about what you do before you release something, and we should absolutely hold people to account: did they properly follow software engineering practice? Can they demonstrate, from the accounting of the software, that the flaw was really something they couldn't have handled in the lab and caught beforehand? I just think it's like any other product, and we will establish this. It's because we keep thinking it's people that we think it's something different — but it's not people, it's a product.
What about the Silicon Valley sort of issue — the whole mantra of "move fast and break things" — when it comes to autonomous cars?
Yeah, I think it depends to some extent on how high the stakes are.
I mean, safety-critical AI, autonomous cars — that's an unfamiliar space, largely, right now. If you're recommending movies and you get the recommendation wrong, well, you probably won't even notice; the consequences are not very bad, so why not get out there? And it's certainly true, in terms of entrepreneurship, that releasing new software every two years, fully tested and all that, is very conservative, and it may not be very creative. So yes, there's certainly some virtue in that mantra, but I think we've just got to be aware of how high the stakes are.
I mean, we do know that actors and the characters they play are different, and yet we still enjoy movies. So actually my group is doing research right now on whether people can both find a humanoid robot really cute and also understand that it's a machine and turn it off — whether we can introduce that duality.
Let's have some more questions.
It's great — now I'm so paranoid about guessing what clothes are called and getting it wrong.
My name is Justina, and I wanted to ask a question around governance and prejudice. I read a report by ProPublica about how, in the US, they are using a type of machine, or software, to see how likely you are to reoffend once you've been arrested for a crime. What they found, over a long period of time, was that black and Latino members of society were more likely to be rated as high-risk offenders when in reality they had never offended, while a white counterpart who actually had, say, ten years of history was rated as a three. I know we've said these are core issues to be thinking about, but my question is: this software is being used right now, it's affecting judges' judgment, and it's an immediate issue. So what are some areas around governance where we can start doing something today, rather than waiting a few years down the line?
So what you're saying is that there
is a system — it's called COMPAS, and it's made by a company called Northpointe — and some interrogation of this by ProPublica has shown how, over time, this software has been used to assess reoffending rates for people in prison. The Ministry of Justice here has a system called OASys, which isn't the same kind of thing, but we do have a system that aims at a similar end. The ProPublica investigation found, over the long term — and I may not be summarizing this as well as I should — is it right that it was saying black offenders and other non-white offenders were more likely to be rated as likely to reoffend?
Yeah, right. So there was a bias in the judgments it was handing down on people's reoffending risk, and this has been challenged in court as well. One of the problems is that it's a black box: the company says it's proprietary, so you cannot inspect it. But people have gone out and tested it anyway. They've shown, with Mechanical Turk, that most people can do better than it, and they've also built better decision-tree software — something you can actually open and see what it's doing — that does a better job. So absolutely, there is something we could do right now: say
that no, you cannot have unauditable software making these decisions — which, in fact, most criminal justice systems would demand.
I think there's a really interesting question there about what such a decision is, because arguably one of the reasons why we have historically taken a long period of time over these decisions is legitimacy, and simply saying we're going to replace that process changes its very nature. The other issue is the trade-off between efficiency and what the system needs to do in order to get there, and this is the problem with something that might end up happening without being quite the outcome we wanted. Firstly, there is a decision we need to make as a society: are we going to hand this over to a criminal justice system that makes a decision on the spot, or are we actually going to say, well, we want to take time, think about this properly and deliberate? There's a cost there, but there's also a cost on the other side, in respect of time: you may reduce the time it takes to make a decision, but there are issues with the implications. This is something that's being used to help judges, and it's already being done, so it's been plugged into a sort of future scenario of how far the application and use of AI can go — that's one extreme. But obviously there's a really interesting point about the fact that the use and application of AI in the criminal justice system can help assist judges, and can help assist juries, to make their decisions. And then there's a really interesting question, which is: what does it do? Is it there to provide a recommendation — in which case there's a power dynamic there which definitely needs to be on the page — or is it there to help interpret data? And again, the power question.
Okay —
so in the UK right now, some police departments are already taking this kind of advice. If you suspected that a system was as bad as this Californian one, what would you do?
I don't know — I don't do British law. So if you found out that your community's police force was using an AI system — and maybe you don't even know whether it's biased, you just want to find out whether it is biased —
what are the steps you take?
There's a lot of great innovation going on around that. I mean, I guess with COMPAS, I'm very curious: there are two ways you could approach it. You could train the system on previous decisions — in which case, if previous decisions have been biased, you'd bake the bias in. Alternatively, you could train the system on previous outcomes: that is, did the person reoffend, or did they not? Even then, you may not find that the result is socially acceptable — maybe because the data set was too small, or for whatever reason, it may still appear to be using a principle that is socially unacceptable. But what I find fascinating — I'm, you know, an unreconstructed techie — is that people are finding ways both to monitor much more effectively what these systems are doing and to bake in principles as well as data, so that you simply cannot use colour, shall we say, as a criterion for making the decision, and even to disallow the indirect use of colour. It's easy enough just to strike the colour field out of your data, if it happens to be there; but if you use, let's say, postcode data, it might turn out to be an indirect proxy. Even those kinds of biases can be sniffed out, and something can be done about them, which I think is absolutely great.
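[Editorial aside: the indirect-proxy problem described here — you strike the colour field, but postcode still encodes it — can be checked mechanically: if the remaining features still predict the protected attribute well, a proxy exists. A minimal sketch in plain Python; the data, field names, and scoring rule are illustrative, not any real audit tool.]

```python
from collections import Counter, defaultdict

def proxy_leakage(records, protected_key, feature_key):
    """Estimate how well one remaining feature predicts a protected
    attribute: for each feature value, guess the majority protected
    class, and return the fraction of records guessed correctly.
    A score near 1.0 means the feature is a strong proxy."""
    by_value = defaultdict(Counter)
    for r in records:
        by_value[r[feature_key]][r[protected_key]] += 1
    correct = sum(c.most_common(1)[0][1] for c in by_value.values())
    return correct / len(records)

# Toy data: 'postcode' almost determines the protected attribute that
# was struck from training, so it acts as an indirect proxy.
records = [
    {"postcode": "N1", "group": "a"}, {"postcode": "N1", "group": "a"},
    {"postcode": "N1", "group": "b"},
    {"postcode": "S9", "group": "b"}, {"postcode": "S9", "group": "b"},
    {"postcode": "S9", "group": "b"},
]
print(proxy_leakage(records, "group", "postcode"))  # 5/6 ≈ 0.83: strong proxy
```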
My exasperation about this issue, though, is that the systems are out there, affecting people's lives and their freedoms. It's great to have these academic discussions, great that people are working on it, but what we haven't talked about is the pressure outside of Europe — we know there are some things coming through in Europe in a few months' time that may change things. What is the pressure in the US, where that system is used on people every day, if that system is deemed to be biased? Because you were talking about cool stuff we could do to make it better, but in fact there's a bad
system. And it's really important to realize that the system looks like it was a badly handcrafted decision tree or something — it's like somebody just built a bad system and sold it to California, and now California won't admit they made a mistake (or some part of California; not the whole state). But one system like that has actually been taken to court successfully — that was in Idaho. In that case it was about allocating disability money: there were people who suddenly got less money for their disability, and Idaho said, we've outsourced that, it's this guy — you know, "Fred", I don't remember — Fred's system. And Fred says, that's my IP, you can't look at it. The ACLU took him to court, won, got to look at it — and it's a mess. It's a giant Excel spreadsheet, you know, with macros and stuff. It just made no sense. The point is, probably what happened there was that Idaho was running out of money, they didn't want to raise their taxes, and they said to this guy: save money, and make sure that nobody can tell how it was done. That is still AI — you don't have to use sophisticated machine learning systems, unfortunately. So I think it's really essential that we have better answers than we've been able to generate here to your questions. Citizens need to know how to go and complain if they think that there's something going wrong. The problem, of course — as with government in general — is that there will be people who believe even the most perfect system in the world is doing something wrong. But there need to be ways to audit, and there need to be clear ways to go and deal with these problems, because yes, this is a present problem. And as I said, the ACLU was able to win in Idaho, and I believe Texas has actually found something similar: they said that's not due process — due process involves being able to audit the code. So some states are making that decision, but it hasn't been regularized yet, and, as people were saying before, it's an international problem, so I hope this is something we can come to terms with, for example through the UN's Internet Governance Forum.
I'd really love to get a
question from the balcony, if there's anyone — can we take this one here? And I wonder if we might actually just take your questions together: each one really quick, and then we'll take the three and respond to the most interesting. The guy knows where to put the microphone — okay, here, one on either side of you. Yeah, go ahead.
I'm a PhD student in the data privacy group at Imperial, and I was wondering: AI is all very data-dependent, right? Any machine learning is going to be super data-dependent, and we're seeing research come out showing, for example, that you can figure out what my gender is from my location, et cetera. But on the other hand there are people, like a senior editor of The Economist, who argue that we should stop worrying about privacy and give AI all the data it needs, because then we can, you know, cure cancer. So I was wondering if you had input on how we should make these decisions, in any of the fields where this could be deployed.
Can you guys address that? And can we just grab the other question that was up there — super quick, just say your question pretty briefly.
Pretty briefly: what kind of social, political and economic system do you see existing in 50 years' time, when AI can do most things?
And I'm Harry Berg, and I wanted to ask a question directly to Malcolm, who I think raised some of the most thought-provoking points: how do you think the religious community will adopt, or be affected by, the use of artificial intelligence?
The first question was a bit quick for me — I didn't catch it.
It was about data privacy, and whether we should keep data private — because a lot of people say, if you give your data, we will be able to do some amazing things, like cure your cancer, but on the other side, you know, we do have these kinds of biases.
Right. I believe, on privacy, that it is something you should actually be responsible for yourself — it should be your own decision. Currently it's not your decision; unfortunately this choice is taken away from you, and I believe that this choice must come back to us. I believe it is on us — on our plate, scientifically — to find a way to protect each and every datum that is ours. Once you can do that, once you can tag your data as your own, you will be empowered to give the data to whomever you find useful, or want, or for whatever other reason — including money. Currently we are not able to do that, and I don't think that's okay, no matter what the cause. So that's a
really interesting intersection between the issue of data and consent — in particular, consenting to hand over your data — and this question about AI and its competency: its ability, presumably in future and indeed now, to gather information, to gather data, that we might not necessarily consent to giving. I think there's a kind of cumulative problem there that we haven't cracked, which is that we seem to think, as a society, that data ethics is governed by this principle of consent — meaningful consent. There's a lot of discussion framing ethics as "well, I consented to give my data away", and a lot of debate about consent no longer being meaningful. Maybe we need to really rethink this principle of consent as the basis of our data economy, particularly if we can never really consent to giving our data in a meaningful way in future. You can imagine AI systems in the future that gather huge amounts of information — imagine I'm walking down a park every single day; there could be a system that gathers a lot of data just by being around me, and is able to act and respond in that way, but actually I've never consented to give it anything.
You've given it to your phone — this is already happening. The other day I went to a hotel, using maps
software, and seven days later I was asked by the phone to give a review — and I never actually said to my phone, "ooh, you know, I'm really happy that you know I'm staying at such-and-such a place." But it asked me for information.
Yes — we're seeing that now, and it could potentially be taken to quite extreme limits in future.
We must end on time tonight, but before that —
I think the last two questions come together for me rather interestingly.
How will religious communities respond to AI? Well, faith communities are not separate bubbles; they're everywhere. There will be people of faith — Christians, Muslims, all sorts — doing AI work at the cutting edge. I expect they may not make a big deal of it, but they'll be there. So I expect we'll deal with this pretty much like everyone else does, although, given that most religious communities in the West have a slightly higher age profile than the population as a whole, we'll be slightly late adopters. Just as the Church of England is using digital media in getting its message out, we'll use AI insofar as it works. But what is it about being religious — what is the distinctive? I think this is where it gets really quite interesting, and it connects to the second question, about 50 years from now. One of the reasons why religious communities in general stand out as odd and non-mainstream is that most religious traditions have serious problems with the atomized individualization of society. Being religious means seeing yourself as part of a historical community that's been around for quite a long time, and living it: it's not just propositional knowledge, it's how you live as a person with faith. And that, I think, means relationships are privileged; community is a privileged concept. We see a lot of social trends undermining that, and that's part of the marginalization of religion in the West. Now, the key question about AI is: is it going to exacerbate that trend, or can it be used to challenge it? If we see AI driving more and more wedges between people, their interaction, and their sense of belonging to one another, there could be — I don't say I predict it — there could be not only a religious pushback, but pushback from all sorts of people who find the erosion of community deeply threatening. If, on the other hand, AI can be fair — can do things that lead to healing the sick, strengthening community, making us actually work better together — I can see religious communities embracing it gladly. So I think, again, it takes us back to where we've been for a lot of this evening: the political, economic and historical context is absolutely crucial to how this pans out.
You said 50 years in the future — I don't think this question is possible to answer. Think about it: 25 years ago we did not have the internet; 25 years ago we did not have mobile phones, definitely not smartphones. And the internet really mattered for how much we are connected with each other, for how much disruption it introduced. Just five years ago we had the rise, again, of deep learning — I mean, neural networks were introduced back in the 1970s, but only after all this time do we have the processing machines that can actually run them — and hence things are absolutely unpredictable. I proposed a project three years ago and I have had to redefine it, because the methodology has completely changed. So it's really important to understand that the speed and acceleration of this technology is phenomenal, and we have no clue where we will really be. However, it's really important, in my opinion, to talk about these kinds of issues and try to regulate certain things — like, for example, who will be responsible if a driverless car kills somebody. These are the kinds of things we have to start thinking about. What are we going to do if we have no countries, but instead companies that own us? Is that a good thing? These are the kinds of things we have to think about, rather than really predicting 50 years out.
You know, I don't think it's possible, because, as cool as all the stuff we've been talking about is, I don't think it's that much bigger a deal than writing was, or than telephones were — telephones, telegraphs, rail. And that is not to say those weren't really big deals. But I think there are two fundamental problems: one is sustainability — how do you live on a planet, how do you live in a space — and the other is this inequality issue you've been talking about: how do you distribute things between people? Those are the issues we have to deal with, well or badly. I think right now the big issue is that there are people who want things to be more unequal, so that they have even more power, and they don't realize that that makes their position even more fragile as well. That's what happened in America after World War One and the crash of '29: even the most elite said, okay, we've got to fix this. Unfortunately, Europe had been left in such a bad state after World War One that it was only after World War Two that they did things like outlaw the extraction of wealth from countries. In 1945 they sat down and made illegal things that are happening now, when people transfer wealth out of countries — which is one of the things that keeps hurting Greece, Afghanistan and Russia. So those are the kinds of challenges: how we handle this in the next few years, and whether we damp down the inequality without wars — that's the big question. But I think in 50 years we'll have gotten used to AI, like we're used to writing. Writing used to be seen as magic and witchcraft — you know, action at a distance that really freaked people out — and we got used to it. In fifty years we'll be back to the basic human problems again.
We're going to be kicked out very, very soon — but Andrew, we'd be remiss if you didn't add anything more on the fifty-year question, so please do.
I think that's a splendid question to come to rest on. I certainly have great optimism — along with Matt Ridley, a writer I greatly admire — optimism for the ingenuity of humankind. We have many problems as a society to face, but AI — and actually the use of data, which I think we haven't emphasized quite enough — as a means of making good decisions: these are fantastic tools that we will use to craft very innovative solutions to the problems we face, for example sustainability.
I think we just need to understand better this sense of the common good: what does that look like? Part of our challenge with AI is that we have never really managed to forge that, and we have found it particularly difficult in recent years. AI is developing rapidly, and not only do we now need to forge that sense of the common good — or what Michael Sandel called the sites for the cultivation of a common citizenship — we also need to understand and interpret how those apply to the changes that are happening in the interaction between technology and citizens. So I think sites for the cultivation of common citizenship have to be created, but more generally we've got a huge challenge, which is understanding how they apply to the effects of AI itself. I actually think spaces like this are really crucial — we need many more events like this if we are going to realize that vision. We don't have enough, but I think it's fantastic that we've made a good start.
We absolutely are, on the dot, going to be kicked out — so thank you, all of you, for coming along on a Friday night. Thanks very much.