[MUSIC PLAYING]
DAVID CHALMERS:
Thanks for coming out.
It's good to be here.
As Eric said, I am a philosopher
thinking about consciousness.
Coming from a background
in the sciences and math,
it always struck me that the
most interesting and hardest
unsolved problem
in the sciences was
the problem of consciousness.
And way back 25 years ago
when I was in grad school,
it seemed to me that the
best way to come at this
from a big picture perspective
was to go into philosophy
and think about the foundational
issues that arise in thinking
about consciousness from any
number of different angles,
including the angles
of neuroscience
and psychology and AI.
In this talk, I'm
going to present
a slightly different
perspective on the problem
after laying out some
background, the perspective
of what I call the
meta-problem of consciousness.
I always liked the idea
that you approach a problem
by stepping one level up,
taking the metaperspective.
I love this quote, "Anything
you can do, I can do meta."
I have no idea what its origins were.
I like the fact
this is attributed
to Rudolf Carnap, one of
my favorite philosophers.
But anyone who
knows Carnap's work,
it's completely
implausible he would ever
say anything so frivolous.
It's also been attributed
to my thesis advisor, Doug
Hofstadter, author of
"Godel, Escher, Bach"
and a big fan of
the metaperspective.
But he assures me he
never said it either.
But the metaperspective
on anything
is stepping up a level.
The meta-problem, as I think about it,
is called the meta-problem because it's
a problem about a problem.
A metatheory is a
theory about a theory.
Meta-problem is a
problem about a problem.
In particular, it's the
problem of explaining
why we think there is a
problem about consciousness.
So there's a
first-order problem,
the problem of consciousness.
Today, I'm going to focus
on a problem about it.
But I'll start by introducing
the first-order problem itself.
The first-order problem is
what we call the hard problem
of consciousness.
It's the problem of explaining
why and how physical processes
should give rise to
conscious experience.
You've got all of these
neurons firing in your brain,
bringing about all kinds
of sophisticated behavior.
We can get it to be
[INAUDIBLE] explaining
our various
responses, but there's
this big question about how
it feels from the first person
point of view.
That's the subjective
experience.
I like this illustration of the
hard problem of consciousness.
It seems to show someone's
hair catching fire,
but I guess it's a
metaphorical illustration
of the subjective perspective.
So the hard problem is
concerned with what philosophers
call phenomenal consciousness.
The word consciousness
is ambiguous 1,000 ways.
But phenomenal
consciousness is what
it's like to be a subject from
the first person point of view.
So a system is
phenomenally conscious
if there's something
it's like to be it.
A mental state is
phenomenally conscious
if there's something
it's like to be in it.
So the thought is there
are some systems--
so there's something it's
like to be that system.
There's something
it's like to be me.
I presume there's something
it's like to be you.
But presumably, there's nothing
it's like to be this lectern.
As far as we know, the lectern
does not have a first person
perspective.
This phrase was made famous by
my colleague, Tom Nagel at NYU,
who back in 1974 wrote an
article called "What Is It Like
To Be A Bat?".
And the general idea
was, well, it's very hard
to know what it's like to be
a bat from the third person
point of view, just looking
at it as a human who has
different kinds of experience.
But presumably, very
plausibly, there
is something it's
like to be a bat.
The bat is conscious.
It's having subjective
experiences, just of a kind
very different from ours.
In human subjective
experience, consciousness
divides into any number of
different kinds or aspects,
like different tracks of the
inner movie of consciousness.
We have visual experiences
like the experience of, say,
these colors, blue and red and
green from the first person
point of view and of depth.
There are sensory experiences
like the experience
of my voice, experiences
of taste and smell.
There are experiences of your body.
Feeling pain or orgasms
or hunger or a tickle
or something, they all have
some distinctive first person
quality.
Mental images like
recalled visual images,
emotional experiences
like an experience
of happiness or anger.
And indeed, we all seem
to have this stream
of occurrent thought,
or at the very least,
we're kind of chattering
away to ourselves
and reflecting and deciding.
All of these are aspects of
subjective experience, things
we experience from the
first person point of view.
And I think these
subjective experiences
are, at least on
the face of it, data
for the science of
consciousness to explain.
These are just facts
about us that we're having
these subjective experiences.
If we ignore them,
we're ignoring the data.
So if you catalog
the data that, say,
the science of consciousness
needs to explain,
there are certainly
facts about our behavior
and how we respond
in situations.
There are facts about
how our brain is working.
There are also facts about our subjective experiences,
and on the face of it, they're data.
And it's these
data that pose what
I call the hard problem
of consciousness.
But this gets contrasted
with the easy problems,
the so-called easy
problems of consciousness,
which are the
problems of explaining
behavioral and
cognitive functions.
Objective things you can
measure from the third person
point of view typically
tied to behavior.
Perceptual discrimination
of a stimulus,
I can discriminate two different
things in my environment.
I can say, that's
red, and that's green.
I can integrate the information
about the color and the shape.
I can use it to
control my behavior.
Walk towards the red one
rather than the green one.
I can report it, say
that's red, and so on.
Those are all data too
for science to explain.
But we've got a bead on how to explain those,
so they don't seem to pose as big a problem.
Why?
We explain those easy
problems by finding
a mechanism, typically a neural
or computational mechanism that
performs the relevant
function to explain
how it is that I get to say
there's a red thing over there
or walk towards it.
Well, you find the mechanisms involving perceptual processes
and action processes in my brain that lead to that behavior.
Find the right mechanism that performs the function, and you've
explained what needs to be explained
with the easy problems of consciousness.
But for the hard problem,
for subjective experience,
it's just not clear that
this standard method works.
It looks like explaining
all that behavior still
leaves open a further question.
Why does all that give
you subjective experience?
Explain the reacting, the
responding, the controlling,
the reporting, and so on.
It still leaves
open the question,
why is all that accompanied
by subjective experience.
Why doesn't it go on in the
dark without consciousness,
so to speak?
There seems to be what the
philosopher Joe Levine has
called a gap here,
an explanatory gap,
between physical processes
and subjective experience.
At least our standard
kinds of explanation,
which work really well for
the easy problems of behavior
and so on, don't obviously
give you a connection
to the subjective
aspects of experience.
And there's been a vast amount
of discussion of these things
over--
I mean, well, for
centuries, really.
But it's been
particularly active
in recent decades,
philosophers, scientists,
all kinds of different views.
Philosophically, you can divide
approaches to the hard problem
into at least two classes.
One is an approach on
which consciousness
is taken to be somehow
irreducible and primitive.
We can't explain it in
more basic physical terms,
so we take it as a
kind of primitive.
And that might lead to dualist
theories of consciousness
where consciousness is
somehow separate from
and interacts with the brain.
Recently very popular
has been the class
of panpsychist theories
of consciousness.
I know Galen Strawson was
here a while back talking.
He very much favors
panpsychist theories
where consciousness is
something basic in the universe
underlying matter.
And indeed, there are idealist
theories where consciousness
underlies the whole universe.
So these are all extremely
speculative but interesting
views that I've explored myself.
There are also reductionist theories of consciousness:
functionalist approaches, where
consciousness is just basically
taken to be a giant
algorithm or computation,
biological approaches
to consciousness--
my colleague Ned Block
was here, I know,
talking about
neurobiology-based approaches,
where it's not the
algorithm that matters,
but the biology it's
implemented in--
and indeed, the kind
of quantum approaches
that people like Roger
Penrose and Stuart Hameroff
have made famous.
I think there's interesting
things to say about all
of these approaches.
I think that right
now, at least,
most of the reductionist
approaches leave a gap.
But the non-reductionist
approaches
have other problems in
seeing how it all works.
Today, I'm going to take a
different kind of approach,
this approach through
the meta-problem.
One way to motivate this is to--
I often get asked, well,
you're a philosopher.
It's fine.
You get to think about these
things like the hard problem
of consciousness.
How can I, as a scientist or an
engineer or an AI researcher--
how can I do something
to contribute,
to help get this at this hard
problem of consciousness?
Is this just a problem
for philosophy?
For me to work on it
as an AI researcher,
I need something I
can operationalize,
something I can work
with and try to program.
And as it stands,
it's just not clear
how to do that with
the hard problem.
If you're a
neuroscientist, there
are some things you can do.
You can work with humans
and look at their brains
and look for the neural
correlates of consciousness,
the bits of the brain that go
along with being conscious.
Because at least
with humans, we can
take as a plausible
background assumption
that the system is conscious.
For AI, we can't even do that.
We don't know which of the AI systems
we're working with are conscious.
We need some
operational criteria.
In AI, we mostly work on
modeling things like behavior
and objective functioning.
For consciousness, those
are the easy problems.
So how does someone coming
from this perspective
make a connection to the hard
problem of consciousness?
Well, one approach is to
work on certain problems
among the easy problems
of behavior that
shed particular light
on the hard problem.
And that's going to be the
approach that I look at today.
So the key idea
here is there are
certain behavioral
functions that
seem to have a
particularly close relation
to the hard problem
of consciousness.
In particular, we say
things about consciousness.
We make what philosophers
call phenomenal reports,
verbal reports of
conscious experiences.
So I'll say things
like, I'm conscious,
I'm feeling pain
right now, and so on.
Maybe the consciousness
and the pain
are subjective experiences.
But the reports, the
utterances, I am conscious,
well that's a bit of behavior.
In principle, explaining those
is among the easy problems.
It's objectively
measurable response.
We can find a mechanism in
the brain that produces it.
And among our
phenomenal reports,
there's the special class
we can call the problem
reports, reports expressing
our sense that consciousness
poses a problem.
Now admittedly, not everyone
makes these reports.
But they seem to be fairly
widespread, especially
among philosophers and
scientists thinking
about these things.
But furthermore, it's a sense that's fairly easy
to find in a very wide class of people
who think about consciousness.
People say things like, there
is a problem of consciousness,
a hard problem.
On the face of it,
explaining behavior
doesn't explain consciousness.
Consciousness
seems non-physical.
How would you ever explain the
subjective experience of red
and so on?
It's an objective
fact about us--
at least about some of us--
that we make those reports.
And that's a fact
about human behavior.
So the meta-problem
of consciousness then,
at a second approximation,
is roughly the problem
of explaining these
problem reports,
explaining, you might say, the
conviction that we're conscious
and that consciousness
is puzzling.
And what's nice about this is
that although the hard problem
is this airy fairy problem
about subjective experience
that's hard to pin
down, this is a puzzle
ultimately about behavior.
So this is an easy
problem, one that
ought to be open to those
standard methods of explanation
in the cognitive
and brain sciences.
So there's a research program.
There's a research program here.
So I like to think of the meta-problem
as something that could play that role I talked about earlier.
If you're an AI researcher thinking about this,
the meta-problem
is an easy problem, a
problem about behavior,
that's closely tied
to the hard problem.
So it's something we might
be able to make some progress
on using standard methods
of thinking about algorithms
and computations or
thinking about brain
processes and
behavior while still
shedding some light,
at least indirectly,
on the hard problem.
It's more tractable
than the hard problem.
But solving it ought to shed
light on the hard problem.
And today, I'm just going to
kind of lay out the research
program and talk about some ways
in which it might potentially
shed some light.
This is interesting
to a philosopher
because it looks like
an instance of what
people sometimes call
genealogical analysis.
It goes back to
Friedrich Nietzsche
on the genealogy of morals.
Instead of thinking
about what's good or bad,
let's look at where our
sense of good or bad
came from, the genealogy of it
all in evolution or in culture
or in religion.
And people take a genealogical approach
to God: instead of thinking about whether God exists or not,
let's look at where our belief in God came from.
Maybe there's some
evolutionary reason
for why people believe in God.
This often leads, not
always, but often leads
to a kind of debunking of our
beliefs about those domains.
Explain why we believe in
God in evolutionary terms,
no need for the God
hypothesis anymore.
Explain our moral beliefs in evolutionary terms, and maybe
there's no need to take morality quite so seriously.
So some people, at least, are
inclined to take an approach
like this with
consciousness too.
If you think about the
meta-problem explaining
our beliefs about
consciousness, that
might ultimately debunk our
beliefs about consciousness.
This leads to a philosophical
view, which has recently
attracted a lot of interest,
a philosophical view
called illusionism, which is
the view that consciousness
itself is an illusion.
Or maybe that the problem of
consciousness is an illusion.
Explain the illusion, and
we dissolve the problem.
Put in terms of the meta-problem,
that view roughly comes to this: solve the meta-problem,
and you will dissolve the hard problem.
Explain why it is that
we say all these things
about consciousness, why we
say, I am conscious, why we say,
consciousness is puzzling.
If you can explain all
that in algorithmic terms,
then you'll remove
the underlying problem
because you'll have
explained why we're
puzzled in the first place.
Actually, walking
over here today,
I noticed that just a
couple of blocks away,
we have the Museum
of Illusions, so I'm
going to check
that out later on.
But if illusionism
is right, added
to all those
perceptual illusions
is going to be the problem
of consciousness itself.
It's roughly an
illusion thrown up
by having a weird
kind of self model
with a certain kind
of algorithm that
attributes to ourselves special
properties that we don't have.
So one line on the meta-problem
is the illusionist line.
Solve the meta-problem, you'll
get to treat consciousness
as an illusion.
That's actually a view
that has many antecedents
in the history of philosophy,
one way or another.
Even Immanuel Kant and his
great critique of pure reason
had a section where he talked
about the self or the soul
as a transcendental illusion.
We seem to have this
indivisible soul.
But that's the kind of illusion thrown up
by our cognitive processes.
The Australian
philosophers, Ullin Place
and David Armstrong,
had versions
of this that I might
touch on a bit later.
Daniel Dennett, a leading reductionist thinker
about consciousness, has been pushing
for the last couple
of decades the idea
that consciousness involves a
certain kind of user illusion.
And most recently, the British
philosopher, Keith Frankish,
has been really
pushing illusionism
as a theory of consciousness.
There's a book centering around a paper of his
on illusionism as a theory of consciousness
that I recommend to you.
So one way to go
with the meta-problem
is the direction of illusionism.
But one nice thing
about-- many people
find illusionism
completely unbelievable.
They think, how could it be that consciousness is an illusion?
Look, we just have these subjective experiences.
It's data about our nature.
And I confess, I've got some
sympathy with that reaction.
So I'm not an
illusionist myself.
I'm a realist
about consciousness
in the philosopher's sense,
where a realist about something
is someone who believes
that thing is real.
I think consciousness is real.
I think it's not an illusion.
I think that solving
the meta-problem
does not dissolve
the hard problem.
But the nice thing about the meta-problem is you can proceed
on it-- to some extent, at least with initial neutrality--
on that question: is consciousness real, or is it an illusion?
It's a basic problem about
our objective functioning
in these reports.
What explains those?
There's a neutral research program here
that realists, illusionists,
people of all kinds of different views about consciousness
can pursue.
And then we can
come back and look
at the philosophical
consequences.
So I'm not an illusionist.
I think consciousness is real.
I've got to say, I do feel
the temptation of illusionism.
I find it a really intriguing and
in some ways attractive view.
It's just fundamentally
unbelievable.
Nevertheless, I think
that the meta-problem
should be a tractable problem.
Solving it, at the
very least, will
shed much light on the hard
problem of consciousness
even if it doesn't solve it.
If you can explain
our conviction
that we're conscious,
somehow the source,
the roots of our conviction
that we are conscious,
must have something to do
with consciousness especially
if consciousness is real.
So I think it's very much a good research
program for people to pursue.
So then I'll move on
now to just outlining
the research program a little
bit more and then talk a bit
about potential
solutions and their impact
on theories of consciousness
before wrapping up
with a little bit more
about illusionism.
So this meta-problem, which
I've been pushing recently,
opens up a tractable
empirical research program
for everyone, reductionists,
non-reductionists,
illusionists, non-illusionists.
We can try to solve
it and then think
about the philosophical
consequences.
Now what is the meta-problem?
Well, the way I'm
going to put it is it's
the problem of topic-neutrally
explaining problem intuitions
or else explaining why
that can't be done.
And I'll unpack all the
pieces of that right now.
First, starting with
problem intuitions.
What are problem intuitions?
Well, there are the things we say.
There are the things we think.
I say, consciousness seems irreducible.
I might think consciousness is irreducible.
People might be disposed, have a tendency, to say
or think those things.
Problem intuitions I'll take to be, roughly, that tendency.
We have dispositions to say
and think certain things
about consciousness.
What are the core
problem intuitions?
Well, I think they
break down into a number
of different kinds.
There is the intuition that
consciousness is non-physical.
We might think of that as
a metaphysical intuition
about the nature
of consciousness.
There are intuitions
about explanation.
Consciousness is
hard to explain,
explaining behavior doesn't
explain consciousness.
There are intuitions about
knowledge of consciousness.
Some of you may know the famous
thought experiment of Mary
in the black and
white room who knows
all about the objective
nature of color vision
and so on, but still doesn't
know what it's like to see red.
She sees red for the first time.
She learns something new.
That's an intuition about
knowledge of consciousness.
There are what philosophers call
modal intuitions about what's
possible or imaginable.
One famous case is
the case of a zombie,
a creature who is physically
identical to you and me
but not conscious.
Or maybe an AI system, which is
functionally identical to you
and me, but not conscious.
That at least seems
conceivable to many people.
So this is the
philosophical zombie.
Unlike the zombies in movies, which have weird behaviors
and go after brains and so on, the philosophical zombie
is a creature that is, at least behaviorally, maybe even physically,
like a normal human, but doesn't have any conscious experiences.
All the physical states,
none of the mental states.
And it seems to many people
that's at least conceivable.
We're not zombies.
I don't think anyone
here is a zombie--
I hope.
But nonetheless, it seems that
we can make sense of the idea.
And one way to pose
the hard problem
is, why are we not zombies.
So this imaginability of zombies
is one of the intuitions
that gets the problem going.
And then you can go on
and catalog more and more
intuitions about the distribution of consciousness,
maybe the intuition that
robots won't be conscious.
That's an optional one, I think.
Or consciousness matters
morally in certain ways,
and the list goes on.
So I think there is an
interdisciplinary research
program here of working on those
intuitions about consciousness
and trying to explain them.
Experimental psychology and
experimental philosophy--
a newly active area--
can study people's intuitions
about consciousness.
We can work on models of these
things, computational models
or neurobiological models, of
these intuitions and reports.
And indeed, I think
there's a lot of room
for philosophical analysis.
And there's just starting
to be a program of people
doing these things
in all these fields.
I mean, it is an
empirical question,
how widely these
intuitions are shared.
You might be sitting
there thinking, come on,
I don't have any of these intuitions.
Maybe this is just you.
My sense is-- from the psychological studies to date--
it seems that some of these
intuitions about consciousness
are at least very widely
shared, at least as dispositions
or intuitions, although they are
often overridden on reflection.
But the current data on
this is somewhat limited.
Although there is a lot of
empirical work on intuitions
about the mind concerning
things like belief,
like when do kids get the
idea that your beliefs
about the world can
be false, concerning
the way your self
persists through time--
could you exist after
the death of your body--
where consciousness
is concerned,
there's work on the
distribution of consciousness.
Could a robot be conscious?
Could a group be conscious?
Here's a book by Paul
Bloom, "Decartes' Baby"
that catalogs a lot of
this interesting work,
making the case that many children are intuitive dualists,
that they're naturally inclined to think there's something
non-physical about the mind.
So far, most of
this work has not
been so much on these
core problem intuitions
about consciousness,
but there's work
developing in this direction.
Sara Gottlieb and Tania Lombrozo
have a very recent article
called "Can Science
Explain The Human
Mind" on people's
judgments about when
various mental phenomena
are hard to explain.
And they seem to find that
yes, subjective experience
and things to which people have
privileged first person access
seem to pose the
problem big time.
So there's the beginning
of a research program here.
I think there's
room for a lot more.
The topic neutrality part--
when I say we're looking for
a topic neutral explanation
of problem intuitions,
that's roughly
to say an explanation that
doesn't mention consciousness
itself.
It's put in neutral terms.
It's neutral on the
existence of consciousness.
The most obvious one
would be something
like an algorithmic explanation.
Now here is the algorithm
the brain is executing
that generates our conviction
that we're conscious
and our reports
about consciousness.
There may be some tie between an algorithm and consciousness,
but to specify
the algorithm, you
don't need to make claims
about consciousness.
So the algorithmic version
of the meta-problem
is roughly find the algorithm
that generates our problem
intuitions.
So that's, I think, in
principle a research program
that maybe AI researchers in combination
with psychologists could pursue--
the psychologist could help
isolate data about the way
that the human beings are
doing it, how these things are
generated in humans.
And the AI researcher
can try and see
about implementing that
algorithm in machines
and see what results.
And I'll talk about a
little bit of research
in this direction
in just a moment.
OK now I want to say something
about potential solutions
to the problem.
Like I said, this is a
big research program.
I don't claim to have the
solution to the meta-problem.
I've got some ideas, but I'm
not going to try and lay out
a major solution.
So here are a few
things, which I
think might be part of a
solution to the problem,
many of which have
got antecedents
here and there in scientific
and philosophical discussion.
Some promising ideas include introspective models,
phenomenal concepts,
introspective opacity,
the sense of acquaintance.
Let me just say something
about a few of these.
One starting idea that almost anyone
is going to have is that somehow models of ourselves
are playing a central role here.
Human beings have models of
the world, naive physics, naive
psychology, models of
other people, and so on.
We also have models
of ourselves.
It makes sense for us to
have models of ourselves
and our own mental processes.
This is something that the
psychologist Michael Graziano
has written a lot on.
We have internal models of
our own cognitive processes,
including those tied
to consciousness.
And somehow something about
our introspective models
explains our sense, A, that we
are conscious and B, that this
is distinctively problematic.
And I think for anyone thinking about the meta-problem,
this has got to be at
least the first step.
We have these
introspective models.
If you're an illusionist,
they'll be false models.
If you're a realist, they
needn't be false models.
But at the very least,
these introspective models
are involved, which is fine.
But the devil's in the details.
How do they work to
generate this problem?
A number of
philosophers have argued
we have special concepts
of consciousness,
introspective concepts of these
special subjective states.
People call these phenomenal
concepts, concepts
of phenomenal consciousness.
And one thing that's
special is these concepts
are somehow independent
of our physical concepts.
The idea is, we've got one set of physical concepts
for modeling the external world.
We've got another set of introspective concepts
for modeling our own mind.
And these concepts,
just by virtue
of the way they're
designed, are somewhat
independent of each other.
And that partly explains
why consciousness
seems to be independent of the
physical world intuitively.
So maybe that independence
of phenomenal concepts
could go some distance to
explaining our problem reports.
So I think there's got to be
something to this as well.
At the same time, I don't think this goes nearly far
enough because we have concepts
of many aspects of the mind,
not just of the subjective, experiential parts but of things we
believe and things we desire.
And so when I believe that
Paris is the capital of France,
that's part of my
internal self model.
But that doesn't seem to
generate the hard problem
in nearly the same way in which
the experience of red does.
So a lot more needs to
be said about what's
going on in cases like
having the experience of red
and having the sense that
that generates a gap.
So it doesn't generalize to
everything about the mind.
Some people have
thought that what
we might call introspective
opacity plays a role,
that when we introspect
what's going on in our minds,
we don't have access to the
underlying physical states.
We don't see the
neurons in our brains.
We don't see that
consciousness is physical.
So we see it as non-physical.
Most recently, the
physicist Max Tegmark
has argued in this direction,
saying somehow consciousness
is substrate-independent.
We don't see the substrate.
So then we think maybe it can
float free of the substrate.
Armstrong made an analogy
with the case of someone
in a circus where--
the headless person illusion
where someone's there
with a veil across their head,
and you don't see their head.
So you see them
as having no head.
Here is a 19th century booth
at a circus, so-called headless
woman.
There's a veil over her head.
You don't see the
head so somehow,
it looks-- at least for a
moment-- like the person
doesn't have a head.
So Armstrong says maybe that's
how it is with consciousness.
You don't see it as physical,
so you see it as non-physical.
But still the question comes up,
how do we make this inference.
There's something special that goes on in cases
like color and taste and so on.
The color experience seems to
attribute primitive properties
to objects like
redness, greenness,
and so on, when, in fact, in
the external world at the very
least, they have complex
reducible properties.
Somehow, our internal
models of color treat colors
like red and green as if
they are primitive things.
It turns out to be useful to
have these models of things.
We treat certain
things as primitive,
even though they're reducible.
And it sure seems that
when we experience colors,
we experience greenness
as a primitive quality
even though it may be a
very, very complex reducible
property.
That's something about
our model of colors.
The philosopher
Wolfgang Schwarz
tried to make an analogy with
sensor variables in image
processing.
You've got some visual sensors in a camera or something,
and you need to process the image.
Well, you've got
some sensor variables
to represent the sensory inputs
that the various sensors are
getting.
And you might treat them
as a primitive dimension
because that's the most
useful way to treat them.
You don't treat them as certain
amounts of light or photons
firing.
You don't need to
know about that.
You use these sensor
variables and treat them
as a primitive dimension.
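To give a flavor of that idea, here is a minimal sketch of my own in Python-- not Schwarz's actual model, and the class and field names are invented for illustration. The point is just that downstream code reasons over each sensor variable as a primitive magnitude and never sees the photon-level detail behind it:

```python
from dataclasses import dataclass, field

@dataclass
class SensorVariable:
    # Downstream code treats `value` as a primitive magnitude on its own
    # dimension; the photon-level detail is hidden from it.
    channel: str
    value: float
    _photon_count: int = field(default=0, repr=False)

    def difference(self, other: "SensorVariable") -> float:
        # Comparisons happen only along the primitive dimension.
        return self.value - other.value

red = SensorVariable("red", 0.82, _photon_count=124_503)
green = SensorVariable("green", 0.35, _photon_count=61_998)
print(red)                    # the repr hides the photon count
print(red.difference(green))  # reasoning happens over the primitive values
```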
And all that plays into a model of these things as primitive.
Maybe we can take that idea and extend it to introspection.
These conscious
states are somehow
like sensor variables in
our model of the mind.
And somehow, these
internal models
give us the sense
of being acquainted
with primitive
concrete qualities
and of our awareness of them.
This is still just laying things out.
I don't think this is yet actually explaining a whole lot.
But it's laying out--
it's narrowing down
what it is that
we need to explain
to solve the meta-problem.
But just to put the
pieces together,
here's a little summary.
One thing I like about this
summary is you can read it
in either an illusionist
tone of voice,
as an account of the
illusion of consciousness--
so this is how false
introspective models work--
or in a realist tone
of voice, as an account
of our true correct
models of consciousness.
But we can set it out in a way
which is neutral on the two
and then try and
figure out later
whether these
models are correct,
as the realist
says, or incorrect,
as the illusionist says.
We have introspective
models deploying
introspective concepts
of our internal states
that are largely independent
of our physical concepts.
These concepts are
introspectively opaque,
not revealing any of the
underlying mechanisms.
Our perceptual
models perceptually
attribute primitive perceptual
qualities to the world.
And our introspective
models attribute
primitive mental relations
to those qualities.
These models produce the
sense of acquaintance,
both with those qualities
and with our awareness
of those qualities.
Like I said, this is not a
solution to the meta-problem,
but it's trying, at
least, to pin down
some parts of the roots
of those intuitions
and to narrow down what
needs to be explained.
To go further,
you want, I think,
to test these explanations,
both with psychological studies
to see if this is
plausibly what's
going on in humans-- this
is the kind of thing which
is the basis of our intuitions--
and computational models
to see if, for example, we
could program this kind of thing
into an AI system and see
if it can generate somehow
qualitatively similar
reports and intuitions.
You might think that last thing
is a bit far fetched right now,
but I know of at least one
instance of this research
program, which has been put
into play by Luke Muehlhauser
and [INAUDIBLE], two researchers at Open Philanthropy
who are very interested in AI and consciousness.
They actually built--
they took some ideas
about the meta-problem
from something
I'd written about it
and from something
that the philosopher Francois
Kammerer had written about it.
A couple of basic ideas about
where problem intuitions might
come from.
And they tried to build them
into a computational model.
They built a little
software agent,
which had certain axioms about
colors and how they work.
There's red and there's green, and certain axioms
about its own subjective experiences of colors.
And then they combined it
with a little theorem prover.
And they saw what this little software agent came up with.
And it came up with
claims like, hey, well,
my experiences of
color are distinct from
any physical state, and so on.
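Just to give a flavor of the kind of thing described here, below is a minimal sketch in Python. It is not their actual code: the facts, the rule contents, and the toy forward-chaining "prover" are all invented for illustration. The agent starts from a few hand-written axioms about perception and introspection and derives a problem-intuition-like report from them:

```python
from itertools import product

# Facts the agent starts with: what it perceives and what it can introspect.
facts = {
    ("perceives", "self", "red"),
    ("introspects", "self", "experience_of_red"),
}

# Axioms as simple if-then rules over those facts (hypothetical content).
rules = [
    # If the agent introspects an experience, it is directly acquainted with it.
    (lambda f: f[0] == "introspects",
     lambda f: ("acquainted_with", f[1], f[2])),
    # If the agent is acquainted with a state but its self model reveals no
    # physical description of that state, it judges the state non-physical.
    (lambda f: f[0] == "acquainted_with",
     lambda f: ("judges_nonphysical", f[1], f[2])),
]

def forward_chain(facts, rules, max_steps=5):
    """Apply every rule to every matching fact until nothing new is derived."""
    derived = set(facts)
    for _ in range(max_steps):
        new = {make(f) for (match, make), f in product(rules, derived) if match(f)}
        if new <= derived:
            break
        derived |= new
    return derived

for fact in sorted(forward_chain(facts, rules)):
    if fact[0] == "judges_nonphysical":
        print(f"Agent report: my {fact[2]} seems distinct from any physical state.")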
OK, they cut a few corners.
This is not yet a truly convincing, sophisticated model
of everything going
on in the human mind.
But it shows that there's
a research program here
of trying to find
the algorithmic basis
of these states.
And I think as more
sophisticated models develop,
we might be able to use
these to kind of provide
a way in for AI researchers
in thinking about this topic.
Of course, there
is the question,
you model all this stuff
better and better in a machine,
then is the machine actually going to be conscious,
or is it just going to have self models that replicate
what's going on in humans?
So some people have proposed an
artificial consciousness test.
Aaron Sloman, Susan
Schneider, Ed Turner
have suggested somehow
that if a machine seems
to be puzzled about
consciousness in roughly
the ways that we are,
maybe that's actually
a sign that it's conscious.
So if a machine
actually looks to us
as if it's puzzled by
consciousness, is that a sign
of consciousness?
These people-- this is
suggested as a kind of Turing
test for machine consciousness.
Find machines which are
conscious like we are.
Of course, the
opposing point of view
is going to be no, the machine
is not actually conscious.
It's just like a machine that studied up for the Turing test
by reading the "talk like a human" book.
It's like, damn,
do I really need
to convince those
humans that I'm
conscious by replicating all
those ill-conceived confusions
about consciousness.
Well I guess I can
do it if I need to.
Anyway, I'm not going to
settle this question here.
But I do think that we may somehow find machines being puzzled.
It won't surprise me if, once we actually have
serious AI systems that engage in natural language
and model themselves and the world, they might well
find themselves saying things
like, yeah, I know
in principle I'm
just a set of silicon circuits,
but I feel like so much more.
I think that might tell us
something about consciousness.
Let me just say a little
bit about theories
of consciousness.
I do think a solution
to the meta-problem
and a solution to
the hard problem
ought to be closely connected.
The illusionist says: solve the meta-problem,
and you'll dissolve the hard problem.
But even if you're
not an illusionist
about consciousness, there
ought to be some link.
So here's a thesis.
Whatever explains consciousness should also partly explain
our judgments and reports about consciousness.
The rationale here
is it would just
be very strange if these
things were independent,
if the basis of consciousness
played no role in our judgments
about consciousness.
So we can use this as a
way of evaluating or testing
theories of consciousness.
If a theory of consciousness says mechanism M
is the basis of consciousness, then M should also
partly explain our judgments
about consciousness.
Whatever the basis is ought
to explain the reports.
And you can use this.
You can bring this to bear
on various extant theories
of consciousness.
Here's one famous current
theory of consciousness,
integrated information
theory developed
by Giulio Tononi and colleagues
at the University of Wisconsin.
Tononi says the basis of consciousness
is integrated information, a certain kind of integration
of information, for which he has a measure that he calls phi.
Basically, when your phi is high
enough, you get consciousness.
Consciousness is high phi, and there's
a mathematical definition.
But I won't go into it here.
But it's a really
interesting theory.
So here's a-- basically
it analyzes a network
property of systems of units.
And it's got an informational measure
called phi that's supposed
to go with consciousness.
Question, if
integrated information
is the basis of consciousness, it
ought to explain problem
reports, at least in principle.
Challenge, how does that work?
And it's at least far
from obvious to me
how integrated information will
explain the problem reports.
It seems pretty
dissociated from them.
On Tononi's view, you
can have simulations
of systems with high
phi that have zero phi.
They'll go about making
exactly the same reports
but without
consciousness at all.
So phi is at least
somewhat dissociable.
You get systems with very high
phi, but no tendency to report.
Maybe that's less worrying.
Anyway, here's a
challenge for this theory,
for other theories.
Explain, not just how high
phi gives you consciousness,
but how it plays a central
role in the algorithms that
generate problem reports.
Something similar goes
for many other theories,
biological theories, quantum
theories, global workspace,
and so on.
But let me just wrap
up by saying something
about the issue of
illusionism that I was
talking about near the start.
Again, you might be
inclined to think
that this approach through
the meta-problem tends,
at least very naturally,
to lead to illusionism.
And I think it can be-- it
certainly provides, I think,
some motivation for illusionism,
the view that consciousness
doesn't exist, we
just think it does.
On this view, again, a
solution to the meta-problem
dissolves the hard problem.
So here's one way of putting
the case for illusionism.
If there is a solution
to the meta-problem,
then there is an explanation of
our beliefs about consciousness
that's independent
of consciousness.
There's an algorithm
that explains our beliefs
about consciousness.
It doesn't mention
consciousness.
Arguably, it could be in
place without consciousness.
Arguably, that
kind of explanation
could debunk our beliefs about
consciousness the same way
that perhaps explaining beliefs
about God in evolutionary terms
might debunk belief in God.
It certainly doesn't prove
that God doesn't exist.
You might think that if you can
explain our beliefs in terms
of evolution, it somehow
removes the justification
or the rational basis
for those beliefs.
So something like
that, I think, can be
applied to consciousness too.
And there's a lot to
be said about analyzing
the extent to which this
might debunk the beliefs.
On the other hand, the
case against illusionism
is very, very strong
for many people.
And the underlying worry is simply that illusionism
is completely unbelievable.
It's just a manifest
fact about ourselves
that we have conscious
experience, we experience red,
we feel pain, and so on.
To deny those things
is to deny the data.
Now, the dialectic here is complicated.
The illusionist will come
back and say, yes, but I
can explain why illusionism
is unbelievable.
These models we have, these
self models of consciousness,
are so strong because they were just
wired into us by evolution.
They're not models
we can get rid of.
So my view predicts that
my view is unbelievable.
And the question is, what--
the dialectical situation
is complex and interesting.
But maybe I could just wrap
up with two expressions
of absurdity on either side of
this question, the illusionist
and the anti-illusionist,
both finding absurdity
in the other person's views.
Here's Galen Strawson,
who was here.
Galen's view is very much that
illusionism is totally absurd.
In fact, he thinks it's
the most absurd view
that anyone has ever held.
There occurred in
the 20th century,
the most remarkable episode
in the whole history of ideas,
the whole history of human
thought, a number of thinkers
denied the existence
of something
we know with certainty
to exist, consciousness.
He thinks this is just a sign
of incredible philosophical
pathology.
Here's the rationalist
philosopher, Eliezer Yudkowsky,
and something he wrote a
few years ago on zombies
and consciousness and
the epiphenomenalist view
that consciousness plays
no causal role, where
he was engaging some stuff I
wrote a couple of decades ago.
He said, "this
zombie argument"--
the idea we can imagine zombies
physically like us but without
consciousness-- "may be a
candidate for the most deranged
idea in all of philosophy.
The causally closed cognitive system
of Chalmers's internal narrative
is malfunctioning in a way that, not by necessity,
but just in our own universe, miraculously
happens to be correct."
And here he is expressing
this debunking idea
that on this view,
there's an algorithm that
generates these intuitions
about consciousness.
And that's all physical.
And there's also this further
layer of non-physical stuff.
And just by massive coincidence,
the physical algorithm
is a correct model of
the non-physical stuff.
That's a form of debunking here.
It would take a miracle for
this view to be correct.
So I think both of
these views are onto--
these objections
are onto something.
And to make progress
on this on either side,
we need to find a way of
getting past these absurdities.
You might say, well,
there's middle ground
between very strong
illusionism and very strong
epiphenomenalism.
It tends to slide back
to the same problems.
Other forms of
illusionism, weaker forms
don't help much with
the hard problem.
Other forms of realism are still subject to this
"it takes a miracle for this view to be correct" critique.
So I think to get
beyond absurdity here,
both sides need to
do something more.
The illusionist needs to do more
to explain how having a mind
could be like this, even
though it's not at all the way
that it seems.
They need to find some
way to recapture the data.
Realists need to
explain how it is
that these meta-problem
processes are not completely
independent of consciousness.
Realists need to explain
how meta-problem processes,
the ones that generate
these intuitions
and reports and convictions
about consciousness,
are essentially grounded in
consciousness even if it's
possible somehow for them or
conceivable for them to occur
without consciousness.
Anyway so that's just to
lay out a research program.
I think a solution to
the meta-problem that
meets these ambitions
might just possibly solve
the hard problem
of consciousness
or at the very least shed
significant light on it.
In the meantime,
the meta-problem
is a potentially tractable
research project for everyone,
and might I recommend it to all of you.
Thanks.
[APPLAUSE]
AUDIENCE: Yes, I
just want to say
I think it's very interesting,
this concept of, we have
these collection
of mental models
and that this collection of
mental models is consciousness,
basically.
Consciousness defines
the collection
of these mental
models that we have.
And the problem
with consciousness
is that we don't understand
the physical phenomenon that
causes these mental
models or that
stimulates these mental models.
So we just have this belief
that it's ephemeral or not real
or something like that.
And if you take that view,
then what's interesting
is that you could simulate
these mental models-- a robot could simulate
these mental models.
And you could simulate
consciousness as well.
And even if the underlying
physical phenomena that fuels
these mental models
is different--
robots have different
sensors, et cetera--
you could still get the
same consciousness effect
in both cases.
DAVID CHALMERS: Yeah,
I think that's right.
Or at the very least,
it looks like you
ought to be able to get the same
models, at least, in a robot.
If the models themselves are something algorithmic, as they ought
to be, you ought to be able to design
a robot that has, at the very least, let's say,
isomorphic models and some sense that it is conscious.
Of course, it's a further
question-- at least
by my lights--
whether then the robot
will be conscious.
And that was the
question I alluded to
in talking about the
artificial consciousness test.
But you might think that would
at least be very good evidence
that the robot is conscious.
If it's got a model of
consciousness just like ours,
it seems very
plausible there ought
to be a very strong link
between having a model like that
and being conscious.
I think probably someone like Ned Block-- who was here
arguing against machine
consciousness-- would say,
no, no, the model is not enough.
The model has to be
built of the right stuff.
So it's gotta be
built of biology.
And so on.
But by my lights, I think
if I had found an AI system
that had a very serious version
of our model of
consciousness, I'd
take that as a very good reason
to believe it's conscious.
AUDIENCE: In the
IIT theory, is there an estimate or plausible estimate
for what the value of phi
is for people and
for other systems?
DAVID CHALMERS: Basically, no.
It's extremely hard to measure
in systems of any size at all.
Because the way it's
defined, it involves
taking a sum over every
possible partition of a system.
It turns out, A, it's hard to measure in the brain
because you've got to involve the causal dependencies
between the different units, the neurons.
But even for a pure algorithmic system,
where you've got a neural network laid out in front of you,
it's computationally intractable to measure
the phi of one of those once you get bigger than 15 units or so.
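To give a rough sense of the combinatorics, the sketch below only counts the two-way cuts of a system of n units, which is already 2^(n-1) - 1 candidate partitions; the actual phi calculation evaluates an information measure over candidate partitions, so the real cost is worse still. The function name is just illustrative:

```python
def num_bipartitions(n: int) -> int:
    """Number of ways to split n units into two non-empty groups."""
    return 2 ** (n - 1) - 1

# Candidate two-way cuts grow exponentially with system size.
for n in (5, 10, 15, 20, 30):
    print(f"{n:>2} units -> {num_bipartitions(n):,} candidate two-way cuts")
```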
So Tononi would like to say
this is an empirical theory
and in principle
empirically testable.
But there's the "in principle."
It's extremely difficult
to measure phi.
Some people-- Scott Aaronson, the computer scientist--
have tried to put forward
counterexamples to the theory,
which were basically
very, very simple
systems like matrix
multipliers that
multiply two large matrices.
They turn out to have enormous phi-- phi
as big as you like if the matrices are big enough--
and therefore, by Tononi's theory, would not just
be conscious, but as conscious as a human being.
And Aaronson put this
forward as a reductio ad
absurdum of the IIT theory.
I think Tononi basically bit
the bullet and said, oh yeah,
those matrix
multipliers are actually
having some high degree
of consciousness.
So I think IIT is probably
at least missing a few pieces
if it's going to be developed.
But it's a research program too.
AUDIENCE: You mentioned belief
as an example of something
where there's another
mental quality,
but people don't seem to
have the same sense that it
is very hard to explain.
In fact, it almost
seems too easy
where people-- like a
belief about something
sort of feels like
just how things are.
But you have to
reflect on a belief
to notice it as a belief.
Do you think there
is also or has there
been research related
to this question
into why is that different?
It seems like another angle
of attack on this problem.
It's just like, why doesn't this
generate the same hard problem.
DAVID CHALMERS: Yeah.
In terms-- I'm not
sure if there's
been research from the
perspective of the meta-problem
or of theory of mind.
Certainly, people have thought,
in their own right, what
is the difference between
belief and experience
that makes them so different.
This goes way back to David Hume, a philosopher
a few centuries ago, who said basically, perception is vivid.
He had a distinction between impressions and ideas.
And impressions, like experiencing colors,
are vivid in force and vivacity, and ideas are merely
a faint copy or something.
But that's just the first order.
And then there are
contemporary versions
of this kind of thing, far more
sophisticated ways of saying
a similar thing.
But yeah, you could, in
principle, explore that
through the meta-problem.
Why does it seem to us that
perception is so much more
vivid?
What about our
models of the mind
makes perception seem so
much more vivid than belief
and makes beliefs seem
structural and empty
whereas perception
is so full of light?
But no, I don't
know of work on that
from the meta-problem
perspective.
Like I said, there's
not that much work
on these introspective
models directly.
There is work on theory
of mind about beliefs.
Tends to be about
models of other people.
It may be there's something I could dig up
in the literature on belief
that says something about that.
It's a good place to push.
AUDIENCE: Thanks.
AUDIENCE: I wanted to
bring up Kurt Godel.
You mentioned your advisor
wrote "Godel, Escher, Bach".
There's something that
seems very like Godel--
Godelian or whatever about
this whole discussion in that--
so Godel showed that
given a set of axioms
in mathematics,
that it would either
be consistent or
complete but not both.
And it seems like
when Daniel Dennett--
Daniel Dennett seems to have a
set of axioms where he cannot
construct consciousness
from them.
He seems to be very much
in this consistent camp,
like he wants to have
a consistent framework
but is OK with the
incompleteness.
And I wonder if a
similar approach
could be taken with
consciousness where
we could, in fact,
prove that consciousness
is independent of Daniel
Dennett's set of axioms,
the same way they proved--
after Godel, they proved
the Continuum Hypothesis was
independent of ZF set
theory, and then they
added the axiom of choice,
made it ZFC set theory.
So I wonder if we could show
that in Daniel Dennett's world,
we are essentially zombies or
we are either zombies or not.
It doesn't matter.
Either statement could be true.
And then find what is
the minimum axiom that
has to be added to
Dennett's axioms
in order to make
consciousness true.
DAVID CHALMERS: Interesting.
I thought for a
moment this was going
to go in a different
direction and you
were going to say Dennett is
consistent but incomplete.
AUDIENCE: Yes.
DAVID CHALMERS: He doesn't have
consciousness in his picture.
I'm complete.
I've got consciousness--
AUDIENCE: Yes.
DAVID CHALMERS:
--but inconsistent.
That's why I say all these crazy theories.
AUDIENCE: Right, yeah.
DAVID CHALMERS: And you're
faced with the choice of not
having consciousness and
being incomplete or having
consciousness and somehow
getting this hard problem
and being forced into, at
least, puzzles and paradoxes.
But the way you put it
was friendlier to me.
Yeah, certainly,
Doug Hofstadter himself
has written a lot
on analogies between
the Godelian paradoxes
and the mind-body problem.
And he thinks our self models
are always doomed to be
incomplete in the Godelian way.
He thinks that that might be
somehow part of the explanation
of our puzzlement, at
least about consciousness.
Someone like Roger
Penrose, of course,
takes this much more seriously, even literally.
He thinks that the
computational aspects
of computational
systems are always going
to be limited in the Godel way.
He thinks human beings
are not so limited.
He thinks we've got mathematical
capacities to prove theorems,
to see the truth of
certain mathematical claims
that no formal system
could ever have.
So he thinks that
we somehow go beyond
the incomplete Godelian--
I don't know if he actually
thinks we're complete,
but at least we're not
incomplete in the way
that finite computational
systems are incomplete.
And furthermore, he
thinks that extra thing
that humans have is
tied to consciousness.
I never quite saw how that last step goes:
even if we did have these special non-algorithmic
capacities to see the truth of mathematical theorems,
how would that be tied to consciousness?
But at the very least, there
are structural analogies
to be drawn between
those two cases,
about incompleteness
of certain theories.
How literally we should
take the analogies,
I'd have to think about it.
AUDIENCE: Has there
been some consideration
that the problem of
understanding consciousness
inherently must be difficult
because we address the problem
using consciousness?
I'm reminded of the halting
problem in computer science
where we say that
in the general case,
a program cannot be written to
tell whether another program
will halt because what
if you ran it on itself.
It can't be broad enough to
include its own execution.
So I wonder if there
is a similar corollary
in consciousness where
we use consciousness
to think about consciousness
and so therefore, we
may not have enough equipment
there to be able to unpack it.
DAVID CHALMERS:
Yeah, it's tricky.
People say it's like using a ruler to measure a ruler--
well, I can use this ruler to measure many other things.
But it can't measure itself.
It's not [INAUDIBLE].
Well, on the other
hand, you can measure
one ruler using another ruler.
Maybe you can measure one
consciousness using another.
The brain-- [INAUDIBLE] the
brain can't study the brain.
But the brain actually
does a pretty good job
of studying the brain.
There are some self-referential
paradoxes there.
And I think that,
again, is at the heart
of Hofstadter's approach.
But I think we'd have
to look for very, very
specific conditions under which
systems can't study themselves.
I did always like the idea that
if the mind was simple enough
that we could understand
it, we would be too
simple to understand the mind.
So maybe something like that
could be true of consciousness.
On the other hand,
I actually think
that if you start thinking
that consciousness can go along
with very simple systems,
I think at the very least,
we ought to be able to study
consciousness in other systems
simpler than ourselves.
And boy, if I could solve the
hard problem even in dogs,
I'd be satisfied.
Yeah?
AUDIENCE: Hey, so I have
a question about how
the meta-problem research
program might proceed,
sort of related to
the last question.
So certainly things we believe
about our own consciousness,
even if we all say them,
probably some of them
are false.
Our brain has a tendency to
hide what reality is like.
If you look at
visual perception,
there's what's called
lightness constancy.
Our brain subtracts out the lighting in the environment
so we actually see more reliably what the colors of objects are.
Those viral examples of the black and gold dress
are an example of this.
And when you're presented
with an explanation of it,
it's like, huh?
My brain does that?
It's not something
we have access to.
Or Yanny Laurel--
DAVID CHALMERS:
Laurel Yanny, yeah.
AUDIENCE: --illusion
is another one
where when you hear the
explanation, the scientists
that understand it, our
own introspection doesn't
include that.
So how do you
proceed with trying
to get at what
consciousness really
is versus what our whatever
simplified or distorted view
might be?
DAVID CHALMERS: Yeah, I
think well one view here
would be that we never have
access to the mechanisms that
generate consciousness,
but we still
have access to the
conscious states themselves.
Actually, Karl Lashley
said this decades ago.
He said no process of the
brain is ever conscious.
The processes that get you to
the states are never conscious.
The states they get
you to are conscious.
So take your experience
of the dress.
For me, it was white and gold.
So I knew that.
Each of us was
certain that I am--
I was certain that I was
experiencing white and gold.
Maybe you were certain that you were
experiencing blue and black.
AUDIENCE: I forget which it was.
All I remember is I was right.
[LAUGHTER]
DAVID CHALMERS: You were
sure that, yeah, those idiots
can't be looking at this right.
I think the natural way to
describe this, at least,
is that each of us
was certain what kind
of conscious experience we were
having, but what we had no idea
about was the mechanisms
by which we got there.
So the mechanisms are
completely opaque.
But the states themselves
were at least prima
facie transparent.
I think that would
be the standard view.
Even a realist
about consciousness
could go with that.
They'd say we know what
those conscious states are.
We don't know the processes
by which they are generated.
The illusionist, I think,
wants to go much further
and say, well, it seems
to you that you know what
conscious state you're having.
It seems to you that you're
experiencing yellow and gold.
Sorry, yellow and
white, whatever it was.
Gold and white.
AUDIENCE: Black and
gold is what I remember.
DAVID CHALMERS: No, black
and blue, I think, and--
AUDIENCE: Blue?
DAVID CHALMERS:
--gold and white.
It seems to you you're
experiencing gold and white.
But, in fact, that too is
just something thrown up
by another model.
The gold and white was
a perceptual model.
Then there was an
introspective model
that said you are experiencing
gold and white when maybe,
in fact, you're just a zombie.
Or who knows what's
actually going on
in your conscious state.
So the illusionist
view, I think,
has to somehow take this further
and say, not just the processes
that generate the
conscious states, but maybe
the conscious states themselves
are somehow opaque to us.
AUDIENCE: It feels
like some discussion
of generality of a problem is
missing from this discussion.
The matrix multiplier example
of having high phi is still--
it's not a general thing.
Is there someone exploring
the space, the intersection
of generality and complexity
that leads to consciousness
as an emergent behavior?
DAVID CHALMERS: When
you say generality,
there's the idea that a theory
should be general, that it
should apply to every system.
You mean mechanisms?
AUDIENCE: Generality
of the agent.
If I can write an
arbitrarily complex program
to play tic-tac-toe and all
it will ever be able to do
is play tic-tac-toe,
it has no outputs
to express anything else.
DAVID CHALMERS: Yeah.
So general in the sense
of AGI, artificial general
intelligence.
Some aspects of
consciousness seem
to be domain general--
for example, maybe
insofar as belief
and reasoning are conscious.
Those are domain general.
But much of perception doesn't
seem especially domain general.
Right?
Color is very domain specific.
Taste is very domain specific.
But it's still conscious.
AUDIENCE: But if my agent can't
express problem statements,
like if I don't give it an
output by which it can express
problem statements,
you can never
come to a conclusion
about its consciousness.
DAVID CHALMERS: I'd like
to distinguish intelligence
and consciousness.
Even natural language and
being able to address a problem
statement and analyze
a problem, that's
already a very advanced
form of intelligence.
I think it is very
plausible that a mouse has
got some kind of consciousness,
even though it's got no ability
to address problem statements,
and many of its capacities
may be very specialized.
It's still much more general
than a simple neural network
that can only do one thing.
A mouse can do many things.
But I'm not sure that
I see an essential--
I certainly see a connection
between intelligence
and generality.
We want to say somehow a
high degree of generality is
required for high intelligence.
I'm not sure there's the same
connection for consciousness.
I think consciousness can
be extremely domain-specific,
as taste and maybe vision are.
Or it can be domain-general.
So maybe those two cross-cut
each other a bit.
AUDIENCE: So it seems to me
like the meta-problem as it's
formulated implies some
amount of separation
or epiphenomenalism between
consciousness and brain states.
And one thing that I
think underlies a lot
of people's motivation
to do science
is that it has causal import.
Like predicting
behaviors is clearly
a functionally
useful thing to do,
and if you can predict
all of behavior
without having to
explain consciousness,
the motivation for
explaining consciousness sort
of evaporates and it
feels like, yeah, well,
what's the point of
even thinking about
that because it's just not
going to do anything for me.
What do you say to someone
when they say that to you?
DAVID CHALMERS:
What is the thing
that they said to me again?
AUDIENCE: That maybe
consciousness exists,
maybe it doesn't.
But if I can explain
all of human behavior
and all of the behavior
of the world in general
without recourse
to such concepts,
then I've done
everything that there
is that's useful, like
explaining consciousness
isn't a useful thing to do.
And thus, I'm not interested
in this, and it may--
DAVID CHALMERS: I see.
AUDIENCE: --as well not be real.
DAVID CHALMERS: I think
epiphenomenalism could be true.
I certainly don't have any
commitment to it, though.
It's quite possible that
consciousness has a role
to play in generating behavior
that we don't yet understand.
And maybe thinking hard
about the meta-problem
can help us get
clearer on those roles.
I think if you've got any
sympathy to panpsychism,
maybe consciousness
is intimately
involved with how
physical processes get
going in the first place.
And there are people
who want to pursue
interactionist ideas
where consciousness
interacts with the brain.
Or if you're a
reductionist, consciousness
may be just a matter
of the right algorithm.
In all those views,
consciousness
may have some role to play.
But just say it
turns out that you
can explain all of behavior,
including these problems,
without bringing
in consciousness.
Does that mean that
consciousness is not
something we should care about
and not something that matters?
I don't think that would follow.
Maybe it wouldn't matter for
certain engineering purposes,
say you want to build
a useful system.
But at least in my
view, consciousness
is really the only
thing that matters.
It's the thing that
makes life worth living.
It's what gives our lives
meaning and value and so on.
So it might turn out
that consciousness is not
that useful for
explaining other stuff.
But if it's the source
of intrinsic significance
in the world, then
understanding consciousness
would still be absolutely
essential to understanding
ourselves.
Furthermore, if it comes
to developing other systems
like AI systems or dealing with
non-human animals and so on,
we absolutely want to know.
We need to know whether they're
conscious because if they're
conscious, they presumably
have moral status.
If they can suffer, then it's
very bad to mistreat them.
If they're not conscious,
then-- I think it's very
plausible-- non-conscious
systems we can treat
how we like.
And it doesn't really
matter morally.
So the question of whether an
AI system is conscious or not
is going to be absolutely vital
for how we interact with it
and how we build our society.
That's not a question of
engineering usefulness.
It's a question of
connecting with our most
fundamental values.
AUDIENCE: Yeah, I
completely agree.
I just-- I haven't
found that formulation
to be very convincing
to others necessarily.
AUDIENCE: Hi, thanks
so much for coming
and chatting with us today.
I'm really interested in
some of your earlier work,
the extended mind and
distributed cognition.
And you're at a company
speaking with a bunch
of people who do an incredibly
cognitively demanding task.
DAVID CHALMERS: Yeah.
AUDIENCE: Most of the literature
that I've read on this topic
uses relatively
simple examples saying
like it's difficult
to think just
inside your head on these
relatively simple things.
And if you take a look at
the programs that we build,
on a mundane day-to-day basis,
they're millions of lines long.
I've read people in
the past say something
like the Boeing 777 was
the most complicated thing
that human beings have ever
made, and I think most of us
would look at that and
say, we got that beat.
The things that large
internet companies do,
the size, the complexity
of that is staggering.
And yet if we close our
eyes, everyone in here
is going to say, I'm going
to have difficulty writing
a 10 line program in my head.
So, just as an open question,
I'd be very interested in
hearing your thoughts about how
the activity of programming
connects to the extended mind
ideas.
DAVID CHALMERS: Yeah,
so this, I guess,
is a reference to
something that I
got started in
about 20 years ago
with my colleague, Andy Clark.
We wrote an article called
"The Extended Mind" about how
processes in the mind can
extend outside the brain
when we become
coupled to our tools.
And actually, our
central example
back then in the mid-90s was a
notebook, someone writing stuff
in a notebook.
And even then, we knew
about the internet,
and we had some
internet examples.
I guess this company
didn't exist yet in '95.
But now, of course,
our minds have just
become more and more extended.
And smartphones came
along a few years later,
and everyone is coupled
very, very closely
to their phones and their other
devices, which couple them
very, very closely
to the internet.
Now it's suddenly the case
that a whole lot of my memory
is offloaded onto the
servers of your company
somewhere or other, whether
it's in the mail systems
or navigation mapping
systems or other systems.
Yeah, most of my navigation
has been offloaded to maps.
And much of my memory
[INAUDIBLE] has been offloaded.
Well, maybe that's in my phone.
But other bits of my memory are
offloaded into my file system
on some cloud service.
So certainly, yeah,
vast amounts of my mind
are now existing in the cloud.
And if I were somehow to lose
access to those completely,
then I'd lose an awful
lot of my capacities.
So I think we are now
extending into the cloud
thanks to you guys and others.
The question's specifically
about programming.
Programming is a kind of active
interaction with our devices.
I mean, I think
programming is something
that takes a little bit longer.
It's a longer timescale, whereas
the core cases of the extended mind
involve automatic use
of our devices, which
are always ready to hand.
We can use them to
get information,
to act in the moment,
which is the kind of thing
that the brain does.
So insofar as programming
is a slower process--
and I remember from
my programming days,
all the endless hours
of debugging and so on--
then it's at least going
to be a slower timescale
for the extended mind.
But still, Feynman talked
about writing this way.
Someone looked at Feynman's
work and a bunch of notes
he had about a physics
problem he was thinking about.
And someone said to
him, oh it's nice you
have this record of your work.
And Feynman said, that's
not a record of my work.
That's the work.
That is the thinking.
I did the thinking by
writing it down and so on.
I think, at least
my recollection
from my programming days,
is that when you're actually
writing a program, it's not like
you just do a bunch of thinking
and then code your thoughts.
The programming is to some
very considerable extent
your thinking.
So is that the sort
of thing you're--
AUDIENCE: Yes, absolutely.
[INTERPOSING VOICES] If we,
as people that program,
start to reflect on what we do,
very few of us actually have
the whole system in our head--
if you're the tech
lead of a system,
maybe you've got
it in your head.
But you would agree that most
of the people on the team who
have come more
recently only have
a chunk of it in their head.
And yet, they're somehow
still able to contribute.
DAVID CHALMERS: Oh, yeah.
This is now
distributed cognition.
The extended mind,
the extended cognition
starts with an individual and
then extends their capacities
out using their tools or their
devices or even other people.
So maybe my partner
serves as my memory,
but it's still centered
on an individual.
But then there's the
closely related case
of distributed
cognition, where you
have a team of people who are
doing something and making
joint decisions and carrying out
joint actions in an absolutely
seamless way.
And I take it at a
company like this,
there are going to be
any number of instances
of distributed cognition.
I don't know whether
the company as a whole
has one giant
Google mind or maybe
there's just a near infinite
number of separate Google
minds for all the individual
teams and divisions.
But I think probably some
anthropologist has already
done a definitive
analysis of distributed
cognition in this company.
But if they haven't,
they certainly need to.
AUDIENCE: Thank you.
[APPLAUSE]
