RICHARD DAWKINS: When we come to artificial
intelligence and the possibility of machines
becoming conscious, we reach a profound philosophical
difficulty.
I am a philosophical naturalist; I'm committed
to the view that there is nothing in our brains
that violates the laws of physics; there's
nothing that could not, in principle, be reproduced
in technology.
It hasn't been done yet; we're probably quite
a long way away from it, but I see no reason
why in the future we shouldn't reach the point
where a human-made robot is capable of consciousness
and of feeling pain.
BABY X: Da.
Da.
MARK SAGAR: Yes, that's right.
Very good.
BABY X: Da.
Da.
MARK SAGAR: Yeah.
BABY X: Da.
Da.
MARK SAGAR: That's right.
JOANNA BRYSON: So, one of the things we did
last year, which got some pretty cool headlines,
was replicating some psychology work on implicit
bias. Actually, the best headline was something
like 'Scientists show that AI is sexist and
racist and it's our fault,' which is pretty
accurate, because it really is about picking
things up from our society.
Anyway, the point was: here is an AI system
that is so humanlike that it has picked up
our prejudices, and yet it's just vectors.
It's not an ape; it's not going to take over
the world; it's not going to do anything.
It's just a representation, like a photograph.
We can't trust our intuitions about these
things.
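
[Bryson is describing work on bias in word embeddings; she co-authored a 2017 Science paper with Caliskan and Narayanan showing that vectors learned from ordinary text carry human-like implicit associations. As a minimal sketch of what "it's just vectors" means, the toy four-dimensional embeddings below are invented for illustration; real studies use vectors trained on billions of words of text.]

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: 1.0 = same direction, 0.0 = unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical toy "embeddings" (values invented for illustration only).
vec = {
    "programmer": np.array([0.9, 0.1, 0.3, 0.2]),
    "nurse":      np.array([0.1, 0.9, 0.2, 0.3]),
    "he":         np.array([0.8, 0.2, 0.1, 0.1]),
    "she":        np.array([0.2, 0.8, 0.1, 0.1]),
}

def gender_association(word):
    # WEAT-style association score: positive = closer to "he" than to "she".
    return cosine(vec[word], vec["he"]) - cosine(vec[word], vec["she"])

for word in ("programmer", "nurse"):
    print(word, round(gender_association(word), 3))
```

[The bias lives entirely in the geometry of the vectors, which reflect the statistics of the text they came from. There is no agent in there to take over anything, which is Bryson's point.]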
SUSAN SCHNEIDER: So why should we care about
whether artificial intelligence is conscious?
Well, given the rapid-fire developments in
artificial intelligence, it wouldn't be surprising
if within the next 30 to 80 years we start
developing very sophisticated general intelligences.
They may not be precisely like humans, they
may not be as smart as us, but they may be
sentient beings.
And if they might be conscious beings, we
need ways of determining whether that's the case.
It would be awful if, for example, we sent
them to fight our wars, forced them to clean
our houses, made them essentially a slave
class.
We don't want to make that mistake, we want
to be sensitive to those issues, so we have
to develop ways to determine whether artificial
intelligence is conscious or not.
ALEX GARLAND: The Turing Test was a test set
by Alan Turing, the father of modern computing.
He understood that at some point the machines
they were working on could become thinking
machines, as opposed to just calculating machines,
and he devised a very simple test.
DOMHNALL GLEESON (IN CHARACTER): It's when
a human interacts with a computer, and if the
human doesn't know they're interacting with
a computer, the test is passed.
DOMHNALL GLEESON: And this Turing Test is
a real thing and it's never, ever been passed.
ALEX GARLAND: What the film does is engage
with the idea that it will, at some point,
happen.
The question is what that leads to.
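
[As a sketch of the protocol Garland and Gleeson describe: in Turing's imitation game, a judge converses through text with a hidden party and must decide whether it is a machine. The respondents below are hypothetical canned stand-ins, not real chat systems; only the shape of the test is illustrated.]

```python
import random

def machine_reply(question: str) -> str:
    return "That's an interesting question."        # placeholder "AI"

def human_reply(question: str) -> str:
    return "Let me think about that for a moment."  # placeholder human

def one_round(questions, judge) -> bool:
    """Run one round; return True if the judge identified the hidden party."""
    hidden_is_machine = random.random() < 0.5
    reply = machine_reply if hidden_is_machine else human_reply
    transcript = [(q, reply(q)) for q in questions]
    return judge(transcript) == hidden_is_machine   # judge returns True for "machine"

# The machine "passes" the test when judges can do no better than chance.
random_judge = lambda transcript: random.random() < 0.5
correct = sum(one_round(["What is a sonnet?"], random_judge) for _ in range(1000))
print(f"judge correct in {correct} of 1000 rounds (chance is ~500)")
```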
MARK SAGAR: So, she can see me and hear me.
Hey, sweetheart, smile at Dad.
Now, she's not copying my smile, she's responding
to my smile.
We've got different sorts of neuromodulators,
which you can see up here.
So, for example, I'm going to abandon the
baby, I'm just going to go away, and she's
going to start wondering where I've gone.
And if you watch up where the mouse is, you
should start seeing cortisol levels and other
sorts of neuromodulators rising.
She's going to get increasingly distressed;
this is a mammalian maternal separation distress response.
It's okay, sweetheart.
It's okay.
Aw.
It's okay.
Hey.
It's okay.
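
[The internals of Baby X are not shown in the transcript, so the following is only a toy guess at the mechanism Sagar describes: a simulated stress signal modeled as a leaky accumulator, driven up while the caregiver is absent and decaying back toward baseline when she returns. All names and constants are invented for illustration.]

```python
# Toy dynamics for a simulated "neuromodulator" like the cortisol level
# in the Baby X demo. This is an illustrative sketch of the general idea,
# not Sagar's actual model.

def step_cortisol(level, caregiver_present, dt=0.1,
                  rise_rate=0.8, decay_rate=0.4, baseline=0.05):
    drive = 0.0 if caregiver_present else 1.0   # absence drives stress up
    # Rise toward saturation while driven; decay back toward baseline when calm.
    level += dt * (rise_rate * drive * (1.0 - level)
                   - decay_rate * (level - baseline))
    return max(0.0, min(1.0, level))

level = 0.05
for t in range(100):
    present = t < 30 or t >= 70   # caregiver leaves at t=30, returns at t=70
    level = step_cortisol(level, present)
    if t % 10 == 0:
        print(f"t={t:3d} caregiver={'yes' if present else 'no '} cortisol={level:.2f}")
```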
RICHARD DAWKINS: This is profoundly disturbing
because it goes against the grain to think
that a machine made of metal and silicon chips
could feel pain, but I don't see why it would
not.
And so this moral consideration of how to
treat artificially intelligent robots will
arise in the future, and it's a problem that
moral philosophers are already talking about.
SUSAN SCHNEIDER: So, suppose we figure out
ways to devise consciousness in machines.
It may be the case that we want to deliberately
make sure that certain machines are not conscious.
Consider, for example, a machine we would
send to dismantle a nuclear reactor, quite
possibly sending it to its death, or a machine
we'd send to a war zone.
Would we really want to send conscious machines
in those circumstances?
Would it be ethical?
You might say, well, maybe we can tweak their
minds so they enjoy what they're doing or
don't mind sacrifice, but that gets into some
deep engineering issues that are actually
ethical in nature, issues that go back to
Brave New World, where humans were genetically
engineered and took a drug called soma so
that they would want to live the lives they
were given.
So, we have to really think about the right
approach.
It may be that for certain tasks we deliberately
devise machines that are not conscious.
MAX TEGMARK: Some people might prefer that
their future home helper robot be an unconscious
zombie, so they don't have to feel guilty about
giving it boring chores or powering it down.
Others might prefer that it be conscious,
so that there can be a positive experience
in there, and so they don't feel creeped out
by a machine that is just faking it, pretending
to be conscious even though it's a zombie.
JOANNA BRYSON: When will we know for sure
that we need to worry about robots?
Well, there are a lot of questions there, but
consciousness is another one of those words.
The term I like to use is moral patient; it's
a technical term that philosophers came
up with, and it means something that
we are obliged to take care of.
So, now we can have this conversation: if
you just take 'conscious' to mean 'moral patient,'
then it's no great assumption to say that
if it's conscious, then we need to take
care of it.
But it's way cooler if you can ask: does
consciousness necessitate moral patiency?
And then we can sit down and say, well, it
depends on what you mean by consciousness.
People use 'consciousness' to mean a lot of
different things.
For a lot of people this rubs them the wrong
way because they've watched Blade Runner
or the film A.I. or something like that.
But in a lot of these movies we're not really
talking about AI, about something designed
from the ground up; we're talking basically
about clones.
And clones are a different situation.
If you have something that's exactly like
a person, however it was made, then okay it's
exactly like a person and it needs that kind
of protection.
But people think it's unethical to create
human clones partly because they don't want
to burden someone with the knowledge that
they're supposed to be someone else, right,
that there was some other person who chose
them to be that person.
I don't know if we'll be able to stick to
that, but I would say that AI clones fall
into the same category.
If you were really going to make something
and then say, hey, congratulations, you're
me and you have to do what I say: well, I
wouldn't want myself telling me what to do,
if that makes sense, if there were two of me.
Right?
I think we'd both like to be equals, and so
you don't want that being to be an artifact,
something that you've deliberately built and
that you're going to own.
If you have something that's sort of a humanoid
servant that you own, then the word for that
is slave.
And so, I was trying to establish that, look,
we are going to own anything we build, and
so it would be wrong to make it a person,
because we've already established that slavery
of people is wrong and bad and illegal.
And so, it never occurred to me that people
would take that to mean, oh, the robots
will be people that we just treat really badly.
It's like, no, that's exactly the opposite.
We give things rights because that's the best
way we can find to handle very complicated
situations.
And the things that we give rights to are
basically people.
I mean, some people argue about animals, but
technically, and this depends on whose technical
definition you use, rights are usually things
that come with responsibilities and that you
can defend in a court of law.
So, normally we talk about animal welfare
and we talk about human rights.
But with artificial intelligence, you can even
imagine it knowing its rights and defending
itself in a court of law. The question, though,
is why we would need to protect an artificial
intelligence with rights.
Why would that be the best way to protect it?
With humans, it's because we're fragile,
because there's only one of each of us. This
is horribly reductionist, but I actually think
rights are just the best way that we've found
to be able to cooperate.
It's sort of an acknowledgment of the fact
that we're all basically the same thing, the
same stuff, and that we had to come up with
some way (the technical term, again, is equilibrium)
to share the planet.
We haven't managed to do it completely fairly,
in the sense that everybody gets the same
amount of space; and actually we all want to
be recognized for our achievements, so even
completely fair isn't completely fair, if
that makes sense.
And I don't mean to be facetious there; it
really is true that you can't make all the
things you would like out of fairness be true
at once.
That's just a fact about the world, a fact
about the way we define fairness.
So, given how hard it is to be fair, why should
we build AI that needs us to be fair to it?
What I'm trying to do is make the problem
simpler and focus us on the thing we can't
help, which is the human condition.
And I'm recommending that we specify the conditions,
that we say, okay, this is when something in
this context would really need rights, and
then, once we've established that, not build
that thing.
PETER SINGER: Exactly where we would place
robots would depend on what capacities we
believe they have.
I can imagine that we might create robots
that are limited to the intelligence level
of nonhuman animals, perhaps not even the
smartest nonhuman animals.
They could still perform routine tasks for
us; they could fetch things for us on voice
command.
That's not very hard to imagine.
But I don't think that would necessarily be
a sentient being.
And so, if it were just a robot whose workings
we understood exactly, something not very far
from what we have now, I don't think it would
be entitled to any rights or moral status.
But if it were at a higher level than that,
if we were convinced that it was a conscious
being, then the kind of moral status it would
have would depend on exactly what level of
consciousness and awareness it had.
Is it more like a pig, for example?
Well, then it should have the same rights
as a pig, rights which, by the way, I think
we are violating every day on a massive scale
by the way we treat pigs in factory farms.
So I'm not saying such a robot should be
treated like pigs are being treated in our
society today. On the contrary, it should
be treated with respect for its desires and
awareness, its capacity to feel pain, and
its social nature. All of those things that
we ought to take into account when we are
responsible for the lives of pigs, we would
also have to take into account when we are
responsible for the lives of robots at a
similar level.
But if we created robots who are at our level
then I think we would have to give them really
the same rights that we have.
There would be no justification for saying,
oh, yes but we're a biological creature and
you're a robot; I don't think that has anything
to do with the moral status of a being.
GLENN COHEN: One possibility is you say: A
necessary condition for being a person is
being a human being.
So, many people are attracted to that argument
and say: Only humans can be persons.
All persons are humans.
Now, it may be that not all humans are persons,
but all persons are humans.
Well, there's a problem with that, and it's
put most forcefully by the philosopher and
bioethicist Peter Singer, who says that to
deny that something can have rights, or ought
to be a moral patient, one of the kinds of
things owed moral consideration, on the basis
of the mere fact that it is not a member of
your species, is morally equivalent to denying
rights or moral consideration to someone on
the basis of their race.
So, he says, speciesism equals racism.
And the argument is: Imagine that you encountered
someone who is just like you in every possible
respect but it turned out they actually were
not a member of the human species, they were
a Martian, let's say, or they were a robot
and truly exactly like you.
Why would you be justified in giving them
less moral regard?
So, people who hold capacity-X views, on which
personhood turns on having some capacity X
rather than on species membership, have to
at least be open to the possibility that an
artificial intelligence could have the relevant
capacities, even though it is not human, and
would therefore qualify for personhood.
On the other side of the continuum, one of
the implications is that you might have members
of the human species who aren't persons.
Anencephalic children, children born with
very little brain structure above the brain
stem, are often given as an example.
They're clearly members of the human species,
but they largely lack the kinds of capacities
most people think matter.
So, you get into this uncomfortable position
where you might be forced to recognize that
some humans are non-persons and some non-humans
are persons.
Now again, if you bite the bullet and say,
'I'm willing to be a speciesist; being a member
of the human species is either necessary or
sufficient for being a person,' you avoid
this problem entirely.
But if not, you at least have to be open to
the possibility that artificial intelligence,
in particular, may at one point become person-like
and have the rights of persons.
And I think that scares a lot of people, but
when you look at the course of human history
and at how willy-nilly we were in declaring
some people non-persons under the law, slaves
in this country, for example, it seems to me
that a little humility and a little openness
to this idea may not be the worst thing in
the world.
