[MUSIC PLAYING]
[APPLAUSE]
NED BLOCK: Look, I don't want
to have one of these talks
where I just drone on for
some very long period of time,
and then there's a
question period afterwards.
So please feel free to ask
questions during the talk.
However, if the
question seems to me
to be one that's
going to involve
a long, detailed
kind of thing, I
will suggest putting
it off till the end.
So I normally talk for an hour.
But I guess an hour is
really more or less the time
of the whole thing here.
So I'll have to cut stuff out.
Maybe it will be easier
to do as I move along.
Let me start with the so-called
hard problem of consciousness.
And that is the problem of
why the brain basis of a given
experience is the brain basis
of that experience as opposed
to another experience or none.
Here's an example.
This is a brain.
All the arrows are pointing to
the back of the brain, where
the early visual cortex is.
So there's this area, MT and
maybe some surrounding areas
like MST, that are
known to be the basis
of our experience of a certain
kind of motion-- optic flow.
If you knock that out,
people have what's
called akinetopsia, where
they don't see motion.
And that area gets
activated whenever people see motion.
And people do see motion, even
if it's afterimages of motion
like the waterfall illusion.
So we know that is at least a
large part of the neural basis
of our experience of motion.
But why is it the neural basis
of that experience of motion,
rather than, say, the experience
of a face or something else
entirely?
It's not just that nobody has
a clue how to answer that.
It's nobody can even come up
with a hypothetical answer
to it that is at
all satisfactory.
That's why it's a hard problem.
And that's distinguished
from the easy problems.
These are not my terms-- they were
produced by my colleague, David Chalmers.
And the easy
problems are problems
about the function of
conscious experience,
how it interacts
with sensory inputs,
with other mental states
like beliefs or thoughts,
and how it affects
behavioral outputs.
So those are the
so-called easy problems.
Now, a common
distinction made is
between phenomenal
consciousness--
that's what the hard
problem is about,
what it's like to smell a rose,
see the sky, feel a pain--
and access consciousness--
which perhaps
can be defined as global
availability of the information
in an experience.
So access consciousness is the
domain of the easy problems.
Phenomenal consciousness is
the domain of the hard problem.
Now, it is commonly
felt, and I think
that the feeling
I'm talking about
is probably present in
this room among people
who have a kind of engineering
way of looking at things,
that there really isn't
any serious difference
between phenomenal consciousness
and access conscious--
maybe no difference at all.
Maybe all phenomenal
consciousness
could ever come to is
access consciousness.
Or another thought
that is related
to that is, maybe
there's a difference
in some conceptual way.
But it's nothing that we
could ever find out about,
because all you know about
anything is how it functions,
how it affects other things.
And we've already said
that's the domain of access
consciousness.
So how could we ever find
out about this stuff?
I used to teach.
I taught at MIT for 25 years.
And I found that that point of
view was very common among my students.
I'm going to argue--
well, I'm not really
going to argue this,
but I will be saying that
phenomenal consciousness seems
to be better approached through
biological approaches rather
than computer approaches.
Computational
approaches are better
fitted to access consciousness.
I'm not going to argue
about that explicitly,
but what I am
going to talk about
is how we know these
things are different.
And I'll give you
some actual evidence
that they are different,
and say a little bit
about how they're different.
So the issue I'm going
to be talking about
is: are our conscious perception
and our cognitive access
to that perception-- that is, access
consciousness-- fundamentally
different?
The talk's going to
have three parts.
I'll talk about a
controversy about sparse
versus rich perception,
then about concepts.
And then I'll end with these
methodological breakthroughs
that I think have something
to say about how to approach
the issue experimentally.
And I should say, by the way,
that the experimental approach
to this subject really is
only about 20 years old.
Before that,
consciousness research
was not a serious thing.
It was regarded
as a tenure killer
among scientists-- that is, if
you study it, forget tenure.
So sparse versus
rich first-- so there
is a debate between the view
that conscious perception is
sparse and that conscious
perception is rich.
I myself think that the
resolution of the debate
is that it's cognition
that's actually sparse--
that's thought, reasoning,
problem solving,
and that actual conscious
perception is rich.
And that shows that
conscious perception
must be, at least in part,
distinct from cognition.
What suggests that conscious
perception is rich?
Here is an experimental
paradigm due to George Sperling.
So first, you have--
I guess I can use this thing.
Can you see that pointer?
OK, so first you have an
array of alphanumer-- yeah.
AUDIENCE: Could you
just quickly talk
about what's the difference
between sparse and rich
in your terminology is?
Because there's the
machine learning version
of it, which a
lot of people here
are probably familiar with,
but it's probably different.
NED BLOCK: I'm sure
it's different.
You will see that
it's going to be
defined, in fact, by some
experiments I'm going to show.
Yeah, but the idea
is that cognition
for certain kinds of materials
only holds about four items.
That is really what's
called working memory.
If you want to--
if you're interested
in how people think,
there is a mental scratchpad
known as working memory.
If you want to, for example,
go from p and if p then q to q,
you've got to store p and "if p
then q" in your working memory.
And that holds about four items.
Now, there's a controversy
about what exactly an item is.
And there are many
controversies about it.
But that's the basic idea.
And it's common to not only
mammals, but also birds,
lizards, and there's
even some evidence
that something like this is
the limit even in insects.
So it is a ubiquitous feature
of information processing
in animals.
So that's sparse.
Rich is what's in
phenomenal consciousness.
So that holds many more things.
And that's what
I'm going to argue.
So it's a way of getting at the
fact that they are different.
So here is the idea
of this experiment.
You show people an array
for a very brief time.
And then after the
array is gone off,
you ask them what they saw.
People can report
that they saw an array
of alphanumeric characters.
But if you ask them for
specific characters,
they can usually
report three or four.
So that's their limit
of three or four.
So George Sperling,
in his PhD thesis at NYU,
tried to explore the idea that
people felt that they saw all
or almost all of those items.
And it's easy to be
a subject in this.
You can do it yourself.
And you'll have that sense
that you saw all or almost all.
So what he did was--
how many know about this?
I'm just curious.
OK, no one here has ever
taken an intro psych
course is my guess.
So here's what he did.
In this blank period
before you have to answer,
he used a high tone
for the top row,
a medium tone for
the middle row,
and a low tone for
the bottom row.
And what he found was, they
could report three or four
from any given row, even
though they could also
report only three or
four if not given a cue.
And this is called "Partial
Report Superiority."
And the idea is that
your phenomenal awareness
has a capacity of 3 rows
times 3 and 1/2 items--
ten or eleven items--
whereas you can only
report three or four.
So that is an index of richness.
And here's Sperling
saying that people
insist they've seen more than
they can report afterwards.
OK, so that's the kind of
thing that suggests rich.
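The partial-report arithmetic can be put as a small sketch. This is a minimal illustration of Sperling's inference, using the rough figures from the talk (3 rows, about 3 and 1/2 items reportable from whichever row is cued), not exact experimental data:

```python
# Sperling's partial-report logic, with illustrative numbers only.
# The array is 3 rows of alphanumeric characters; whole report yields
# ~4 items, but ~3.5 items can be reported from ANY cued row.

ROWS = 3

def estimated_available_items(items_per_cued_row, rows=ROWS):
    """The cue arrives only after the array is gone, so the subject
    cannot prepare a particular row in advance. If ~k items can be
    reported from any cued row, roughly k items must have been
    available from every row: estimated capacity = rows * k."""
    return rows * items_per_cued_row

whole_report_limit = 4                                     # working-memory bottleneck
partial_report_estimate = estimated_available_items(3.5)   # 3 * 3.5 = 10.5
```

That gap between roughly ten available items and the three or four reportable ones is what "Partial Report Superiority" names.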
What about what suggests sparse?
So here's a--
undoubtedly everybody
has seen one of these things.
You have a picture, a blank,
another picture that's
different from the first one, a
blank, and then you start over.
This is a paradigm developed
by these guys, O'Regan, Rensink
and Simons.
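The presentation schedule of that flicker paradigm can be sketched as follows. The durations here are assumptions for illustration (the talk gives none); the structure is just the picture/blank/changed-picture/blank cycle described above:

```python
# A minimal sketch of the flicker paradigm's schedule (O'Regan,
# Rensink and Simons): original image, blank, changed image, blank,
# repeating until the observer spots the change. Durations are
# illustrative assumptions, not values from the talk.

from dataclasses import dataclass

@dataclass
class Frame:
    stimulus: str      # "original", "changed", or "blank"
    duration_ms: int

def flicker_cycle(image_ms=240, blank_ms=80):
    """One full cycle: picture, blank, altered picture, blank."""
    return [
        Frame("original", image_ms),
        Frame("blank", blank_ms),
        Frame("changed", image_ms),
        Frame("blank", blank_ms),
    ]

def schedule(n_cycles):
    """The cycle simply repeats; change blindness is measured by how
    many cycles pass before the difference is noticed."""
    return [frame for _ in range(n_cycles) for frame in flicker_cycle()]
```

The blank is the crucial design choice: it masks the transient that would otherwise grab attention and give the change away.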
And here's an example.
How many see what's changing?
OK, I am going to show
you what's changing.
And I hope you'll be surprised.
There it is.
See?
OK, how many are surprised?
Why are you surprised?
You're surprised because
it's a big change.
It's not exactly in the
middle, but it's a big change.
You feel like you
saw the whole thing.
But if you did see the
whole thing, why didn't you
see the difference?
I'll show you a
couple more here.
Here's another one.
How many see what's changing?
It's a little easier this time.
How many don't see
what's changing?
OK, good.
I'm going to move
this pointer to it.
There it is there.
I'll do one more, just
because they're fun.
OK, what's changing
is this, the--
ooh, not doing very well here.
Something is the matter with it.
I'm sorry, something
is wrong with that.
OK, so here is a
very pathetic one
that I'm going to use
this for-- you'll see why.
So this one is very-- so this
was originally made for PC.
I'm using it on a Mac.
And I'm not skilled enough
to get it to go faster.
But something is
actually changing.
How many see what's changing?
OK, I'll tell you what it is.
It's the bar in the back.
See?
OK, the reason I'm
using that is because I
happen to have an eye
tracker trace of it.
And this gives you a little
recipe for making these things.
So the trace only involves
one hit on the bar.
And that it is an index of
the fact that people are,
by and large, not
attending to the bar.
And that tells you what you
can change in these pictures.
And this is a
culture-relative thing.
And different
cultures will attend,
and different genders will
attend to different things.
But if you change something that
people tend not to attend to,
then people will not
notice the difference.
And that is one of
many items of evidence
that this is an
attentional phenomenon.
But there are two
different accounts
of what kind of an
attentional phenomenon it is.
One account is this
inattentional blindness
account, which is,
you don't consciously
see the features that change.
That's the sparse view.
The other view is the
inattentional inaccessibility
or sorry, lack of access,
not inaccessibility-- lack
of actual access.
You do see the
features that change,
but you don't conceptualize
them at a level that would
allow you to notice the change.
And that difference is going
to figure in this whole talk--
the difference between,
on the one hand, not actually
seeing it, the sparse view,
and on the other hand,
the rich view, on which you see it
but don't conceptualize it
at the right level.
And here's a little diagram of
the difference between the two
views that I'm going
to be contrasting.
So the top one is
my view, that you
have rich conscious perception.
There's an attentional
bottleneck.
And then you have
sparse cognition.
My opponents' view-- because they
have to explain Partial Report
Superiority, just as I do--
is that the bottleneck is
between unconscious perception
and conscious perception.
Conscious perception is sparse,
and cognition is sparse.
So I'm now going to
show you a few pieces
of anecdotal
evidence for my view.
AUDIENCE: Quickly,
how would you describe
the difference between conscious
and unconscious perception?
Is it an awareness
to the information,
or is it more than that?
NED BLOCK: Remember I
started with the idea
of phenomenal consciousness--
what it's like to smell a
rose, to hear a musical note?
In conscious perception,
you're having that.
And in unconscious perception,
you're not having it.
So it's a commonly noted
thing in the literature
on consciousness, that
no one has a definition
of phenomenal consciousness.
It's something you can only
point to by using phrases
like my colleague
Thomas Nagel's phrase,
what it's like to experience
the redness of red.
And of course this,
not surprisingly,
has given rise to the idea
that when people talk this way,
they don't know what
they're talking about.
And that there isn't
really any difference
between accessing information
and having an experience
of the redness of red.
Furthermore, a
lot of people feel
maybe there is such a
thing, but I don't like it.
So I've encountered that.
I've gone around giving
talks on this for some time.
I've encountered that a lot, and I'm
sure there are probably people
in this room-- maybe half
the people in this room--
who don't like
phenomenal consciousness.
Why would anybody pay
any attention to that?
But I'll tell you,
I think that it
is unwise for people,
certainly in my business
but also in yours, to ignore
such a salient feature
of our mental lives.
Because it may turn out to
be really crucial to things
you actually do care about.
So I think people really
should keep an open mind
about there being phenomenal
consciousness, what
its role is.
And people should not adopt
the view that a lot of people
do, which is that we
can approach issues,
for example in
artificial intelligence,
just ignoring this.
Because if it turns
out that a lot
of our important
mental processes
are done via phenomenal
consciousness,
then it may be that if
we try to make machines
do significant cognitive
things, that we're
going to have to in
some way give them that,
or some substitute
for it that does
the same job, or something.
But we better know
about it before we
figure that we don't have
to pay any attention.
Yeah?
AUDIENCE: When you're
describing consciousness,
you are kind of omitting
the role of memory.
I was hearing, for
example, [INAUDIBLE]
a lot about
[INAUDIBLE] information
processing and memory
is like a [INAUDIBLE]??
I was wondering
what about memory?
That's something that is--
NED BLOCK: OK, I think
you can have consciousness
without memory.
The reason I'm
talking about memory
is because it's involved
in a lot of experiments.
So I think it is possible
to think about consciousness
in the absence of-- without
thinking about memory at all.
But you have to have some
experimental approach
to consciousness to be thinking
about it in an objective way.
So memory turns out to
be really important.
And I'm really
contrasting two kinds
of memory, what is sometimes
called iconic memory that
has some kind of
phenomenology to it,
and working memory, which
is our cognitive workspace.
But that's not
because I think memory
is crucial to consciousness.
It's because it's
a way of getting--
a way of approaching the
subject where we know something
about how to do experiments on.
So just to remind you
what's about to happen,
I'm going to show you
two phenomena that I
think give a kind of
anecdotal support for my view.
So here is the first one.
How many have seen this?
Anybody see this?
OK, this is a slow change.
So what you are looking
at on the screen
is changing right now.
And your job is to try to
figure out what is changing.
Don't say, but--
OK, let's just look at it.
Try to figure out what's
changing, and don't say
what's changing if you know.
Is this a question about this?
AUDIENCE: I was going to
say what I saw changing.
NED BLOCK: Oh no, don't say that.
How many saw what was changing?
OK, I'm now going to show you.
And I think most of
you will be surprised.
I'm just going to click
on this thing here.
See it?
OK, so here's the thing.
OK, so here is the
intuitive force of this.
I think this argues
strongly for my side.
Why do I think that?
Because look at that base.
It's a huge part of the picture.
You were staring at that
thing, looking around,
moving your eyes around.
It was on the screen for
almost a minute, I think.
You must have looked at
that thing a few times.
But here's the thing--
why didn't you
notice the change?
It's because you didn't
conceptualize the color.
You didn't say to
yourself at the beginning,
"red" and at the end, "purple."
You didn't say the
words, "red" or "purple."
If you had conceptualized them--
applied your cognition to it--
then you would have
been able to notice it.
But it's hard to
notice something
that you don't conceptualize.
So I think that
this suggests that--
so there's a thing I
haven't introduced,
the global workspace.
So I'll mention
that in a minute.
But the global workspace
is an opposed theory
of consciousness--
opposed to mine.
And I think I'll explain
this in a minute.
But I think the
reason that something
isn't broadcast in
a global workspace
is, it's not conceptualized.
Now I'm going to
show you another one.
How many of you have seen this?
Oh, good.
This is-- well, I'm not even
going to tell you what it is.
But just watch and
listen to the sound here.
[VIDEO PLAYBACK]
[MUSIC PLAYING]
- Clearly, somebody in this
room murdered Lord Smythe,
who, at precisely
3:34 this afternoon,
was brutally bludgeoned to
death with a blunt instrument.
I want each of you to
tell me your whereabouts
at precisely the time that
this dastardly deed took place.
- I was polishing the brass
in the master bedroom.
- I was buttering his
Lordship's scones below stairs.
- I was planting my petunias
in the potting shed.
- Constable, arrest Lady Smythe.
- But, but, but
how did you know?
- Madam, as any
horticulturist will tell you,
one does not plant
petunias until May is out.
Take her away.
NED BLOCK: OK, so now
what you're going to see--
you're going to see
the whole thing over
again from a different
camera that shows you more.
- Clearly, somebody in this
room murdered Lord Smythe,
who, at precisely
3:34 this afternoon,
was brutally bludgeoned to
death with a blunt instrument.
I want each of you to
tell me your whereabouts
at precisely the time that
this dastardly deed took place.
- I was polishing the brass
in the master bedroom.
- I was buttering his
Lordship's scones below stairs.
- I was potting my petunias
in the potting shed.
- Constable, arrest Lady Smythe.
[END PLAYBACK]
NED BLOCK: OK, so
by the way, this
is a BBC bicycle safety ad.
And it's this kind of work
that, for example, made
people realize that it's really
a bad idea to talk on a cell
phone while driving.
And there's a lot
of experimental work
that backs that idea up.
And the reason is, this
is an attentional problem.
Now, my opponents like to
think of this as inattentional
blindness.
But I think this case
argues for my view, which
is inattentional lack of
access to the difference.
And my reason for
thinking that is,
look, for example, at
the guy who's speaking.
You looked right at him.
It really-- you really must
have registered his coat,
for example.
It was one color at the
beginning, another color
at the end, and likewise
for many other things.
So why didn't you
notice the difference?
Well, I think you didn't
notice the difference
because you didn't
conceptualize that color.
So noticing is a lot easier
if you conceptualize.
Now, I keep using
this word, "concept."
Now, I feel like I've used
up about half of my time
and I'm really
nowhere near done.
So this is over at like
roughly 2:00, is that the idea?
Well, OK, so look--
I'll have to just kind
of skip around.
By the way, if you raise your
hand and I don't see you,
it's because lights
are kind of in my eyes.
So you might just
wave your hand.
Maybe I should start
skipping around.
Let me just mention
one other experiment.
So this is a sort of
Sperling-like experiment
done by a group in Amsterdam.
And the idea is
that at the start,
you have a circle of
rectangles-- eight rectangles.
And then there's a blank period.
And then there's another
circle of rectangles.
And at some point,
there's a pointer
that points at a rectangle.
And the job of the
subject in the experiment
is to say whether
the rectangle pointed
to is a different orientation
from the first one.
Everybody get that?
OK, so the answer
in all these is yes.
The pointer can come
at the end, or it
can come at the beginning,
or it can come in the middle.
If it comes at the
beginning, people
can get almost all eight.
If it comes at the
end, they can get four.
And the idea here is that's
the capacity of working memory,
which we talked about before.
The new array at the end wipes
out your phenomenal memory
of the array.
The interesting one
is in the middle.
And here, you can get somewhere
between seven and eight.
So the four at the
end is an index
of your cognitive access or
access consciousness capacity.
The one in the
middle is an index
of your phenomenal memory.
So it's these two kinds of
memory that are at issue here.
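The cue-timing pattern in that Amsterdam experiment can be summarized in a small sketch, using the approximate capacities given above (these are the talk's rough figures, for structure only):

```python
# Illustrative summary of the cue-timing results in the Sperling-like
# rectangles experiment described above: cue before the first array
# ~8 of 8 correct; cue in the blank between arrays ~7-8; cue after
# the second array ~4. Values are approximate, from the talk.

CAPACITY_BY_CUE = {
    "before_first_array": 8.0,   # attention can be set in advance
    "during_blank": 7.5,         # reads out phenomenal (iconic) memory
    "after_second_array": 4.0,   # new array wiped iconic memory
}

def store_indexed(cue_time):
    """On the talk's reading: a late cue indexes sparse working memory;
    an earlier cue indexes the richer phenomenal (iconic) memory."""
    if cue_time == "after_second_array":
        return "working memory"
    return "phenomenal (iconic) memory"
```

The contrast between the ~7.5 middle-cue figure and the ~4 late-cue figure is the whole point: the same stimulus leaves a high-capacity trace that the second array destroys before working memory can take more than about four items from it.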
Now of course, my
opponents think,
how do we know that that
memory is a conscious memory?
Maybe these things are
actually unconscious.
And then when you get
the cue, something
is summoned up from
unconsciousness.
So what is clear, though,
is that more items can
be held in the non-conceptual.
So what I think is
that this memory--
and this is what's really crucial
to these experiments--
is a non-conceptual memory.
Now, I think that these
non-conceptual representations
are conscious, but I'll
have to wait to get
to the evidence for that.
So just to remind you,
here are the two views,
put in terms of whether
attention promotes
conceptualization, as
I think, or whether it
is needed for consciousness.
And I should say, by the
way-- something probably
people here aren't aware of--
that in recent years,
philosophers have
become very tied in
with a lot of discussions in
psychology, neuroscience, AI.
And so these people--
the disagreements
here are equally
split between philosophers
and scientists.
Now, I've been using
this word "concept."
The word "concept"
is actually ambiguous
between a sense in
which it means something
that can be shared among
many people, something
kind of abstract.
That's at the top,
something meaning-like,
or this mental
representation sense
that was used by, for example,
the British empiricists.
So I'm going with the
mental representation sense.
I'm just telling you
about terminology.
Now, this notion of a
concept often puzzles people.
There's a famous case that
helps to illustrate what
this notion of concept is.
A French philosopher named
Bruno Latour said
that Ramses II could not have
died of tuberculosis, because
the bacillus hadn't been
discovered then.
And this is an actual quote.
"Before Koch, the bacillus
had no real existence.
To say that Ramses II
died of tuberculosis
is as absurd as saying that
he died of machine gun fire."
And so I quote this partly
because it's kind of funny.
But the idea here is
that he's confusing
the concept in my sense
with what it's a concept of.
So that helps
illustrate the idea.
Things aren't displaying
properly, but what can I do.
So what I think is that
conscious experience
is non-conceptual.
Here is one of many
experiments that suggests that.
This is an experiment done
on 12-month-old babies.
And so there's a screen.
They're looking at the screen.
An object comes out one side.
A different colored object
comes out the other side.
Then the screen goes
down, and the babies
do not expect two things.
So they don't register color.
It's an interesting thing
about color in babies,
but their color vision is fine.
At four to six months, they
have all the basic color
discriminations.
If anybody is interested, I
can explain how they tell that.
But they do not use color in
reasoning between about four
to six months, and starting
maybe 12 to 18 months,
they start using
color in reasoning.
So their appreciation of
color is non-conceptual.
This is what I just said.
OK, I am going to have to skip.
OK, so here is a little
kind of brain illustration
of some of the ideas
that are at issue here.
So this is the back of the
head, where vision starts.
The light comes in your eyes.
The signals are
sent from the retina
to something in the
middle of your brain,
the lateral geniculate nucleus.
And they go to the first
cortical visual areas
in the back of your head.
And my opponent's-- these
arrows indicate attention.
This means that the person
is attending to the stimulus.
The arrow pointed to
the back of the head
where vision starts is
attention to the stimulus.
So this is a diagram
of a brain when
somebody is uncontroversially
conscious of a stimulus.
The person is attending to it.
There are all these
reciprocal connections,
reciprocal activations
going into frontal cortex.
The key here is frontal cortex.
That's where thought,
reasoning, decision
making live in the brain.
And my opponents think--
the people who think
all consciousness is
access consciousness--
it's the triggering of
that activation--
it's called ignition-- that
creates these neural coalitions
with frontal cortex that are
the key to consciousness.
Here is an undeniably
unconscious perception.
There's something
called priming.
If you get a stimulus, it
affects your later recognition.
So for example, if I
have an unconscious
subliminal presentation
of the word "doctor,"
I'll be quicker to recognize
the word "nurse" as a word
if I've seen it.
Here is a controversial case,
which my opponent Stanislas
Dehaene calls "preconscious."
And that is where you
have attention away
from the stimulus,
you get strong loops
in the back of the head,
but no reportability
unless attention shifts.
So he thinks that's
not conscious.
I say that it's
probably conscious.
So I'm now going to move to--
maybe I should just quickly
explain the global workspace.
This is a diagram from
Stanislas Dehaene,
who is a French
cognitive neuroscientist.
And the idea is that
the outer circle
is the periphery of your body.
These nodes are neural systems.
The links are links
between neural systems.
The filled-in ones are active.
So here's his idea.
The idea is that
the sensory surface
produces a lot of activations.
There is a competition
among them.
Some of them form
active coalitions,
and then they trigger ignition
into the frontal lobes.
The ignition in
the frontal lobes
is what gets the stimuli
conceptualized.
So I've been talking a
lot about the difference
between unconceptualized
and conceptualized.
This is a theory of what
it is to be conceptualized.
OK, so I'm going to quickly
get to the methodological
breakthroughs.
And I think I'll only probably
be able to do one of them.
So my two hypotheses here
are that conceptualizing
the stimulus requires
global broadcasting.
This is called
global broadcasting,
when these active coalitions
trigger representations
in the front.
Hypothesis 2 is that
non-conceptual conscious
percepts do not require
global broadcasting.
So the method here
that I could use--
maybe I will do this--
so this requires red
and green glasses,
which I happened to
have brought with me.
Actually, there's no
time to pass it out.
I'm going to pass
this out anyway,
and then I can come back to it.
So please don't touch
the red and the green.
And then I'd like to have
those back at the end.
Until those are all passed out,
which is going to be a while,
I'm just going to go on.
So if you look at this
through red and green glasses,
here's what you get.
You get first a face, then
a house, then a face--
fills your whole visual field.
It's called binocular rivalry.
You have incompatible
representations,
and the processing streams from
the two eyes duke it out.
And interestingly, you can--
this is a terrific thing
for studying consciousness,
because you have an
unchanging input with changing
conscious percept.
And a lot of people do
identify some areas that
are more active when you're
conscious of a face, like
the fusiform face area,
and other areas
that are more active when you're
conscious of, say, a house.
If you ask people to
report whether they
see a face or a
house, what you get
is frontal and parietal links
being the key thing here.
And that has been taken to
support the global workspace
idea.
However, eye movements
can be validated
as a measure of consciousness.
So the key here, this is what's
called a no report method.
So here's the problem--
reports are an index of
phenomenal consciousness.
If you say you saw it,
probably you did see it.
They're also an index
of access consciousness.
How can you use
reports to distinguish
between phenomenal consciousness
and access consciousness?
It seems impossible.
People have said
it's impossible.
However, whenever people
say something is impossible,
it's a dangerous thing, because
clever experimenters can
find a way to make it possible.
And that's happened here.
So in an article from Wolfgang
Einhaeuser's group in 2014--
by the way, I should say
that the experiments,
to the extent that I'm
going to get a chance
to go through these
experiments, are
all very recent experiments.
This is an area
that is exploding
in what people are finding out.
And my approach to
the hard problem
is, nobody can think of an
answer to the hard problem,
but that may be because of
a failure of imagination.
And the way to sort of
juice up your imagination
is to figure out how the easy
problem works, how these states
affect other states.
And maybe by doing
that, we'll be
able to solve the hard problem.
Anyway, what Einhaeuser's
lab found was that--
does everybody have the
red and green glasses?
Whatever happened to them?
AUDIENCE: There's just
not enough [INAUDIBLE]
NED BLOCK: Oh I'm sorry.
What?
AUDIENCE: They're all
distributed, but--
NED BLOCK: Oh, I'm sorry.
Well, so look, I'm just
going to show you this,
those of you who have them.
I just want to make
sure people see.
It takes a little
while, so you may--
what you should be seeing,
if this is working properly,
is alternating house, face,
house, face, unless you
have a very, very dominant eye.
How many are seeing that?
Raise your hand if you're
seeing the alternation.
OK, oh, good, OK.
I'm sorry I didn't bring enough.
So I'm going to go
back to this thing.
If you do the binocular
rivalry with a grid moving
one way in one eye
and a grid moving
the other way the
other eye, turns out
there's a nice index of
what you are conscious of.
And that index is called
optokinetic nystagmus.
And this is what the eye does.
The eye movements indicate which
thing you are conscious of.
And it can be shown using
people's reports that they
correlate pretty well.
But now, there's
this cool thing.
You've got an index of what
people are experiencing.
And now, you can show
them the original stimuli,
and don't ask them to report--
no report paradigm.
So here's what is found out.
So here's a quotation
from the article.
"Importantly, when observers
passively experienced rivalry
without reporting
perceptual alternations,
a different picture-- that
is, a different picture from
the global workspace idea--
emerged.
Differential neural
activity in frontal areas
was absent, whereas activation
in the back of the head
and the middle of
the head persisted.
We conclude that
frontal areas are
associated with active
report and introspection."
OK, so contradicting some
of these earlier things.
Now, I have a lot more
on this experiment,
which I'm not going to go into.
But I was very
pleased to see this,
because of course,
it backs up my view.
As Eric and I talked
at lunch about--
of course, I'm very--
what I'm most interested
in is finding--
is getting a leg
up on the truth.
But I have an
independent interest
in seeing my own
views confirmed.
And this did it.
And quite a lot of
these experiments
have actually
confirmed my picture
of this, which I've been
pushing for a really long time.
So let me quickly switch
to a different one.
This longer talk involves
three different techniques
for avoiding the problem.
But let me just explain the
problem I'm trying to avoid.
The problem is, both views--
this is mine, this is my
opponents-- both of them
end up with sparse cognition.
So it seems that the basis
of theorizing and reports
is going to be the
same in either case.
So that's the puzzle.
And then the one I just
showed you is a way around it.
It's called "no report,"
but of course, reports
have got to be in
there somewhere.
In the case I just
showed you, the reports
come before the experiment.
But what I'm going
to show you now,
the reports come
after the experiment.
And I'll just do
this very briefly.
This uses something called
event related potential, which
is a form of EEG.
And then the idea
here is-- this is
my opponent, Stanislas Dehaene,
one of my many opponents.
I won't explain
this whole diagram,
but the idea here is, this looks
at the difference between seen
things and unseen things.
He means consciously seen
and not consciously seen.
And you only start
to get a separation
at about 270 milliseconds
to 300 milliseconds.
So you can use
temporal differences
to get at whether it's
a conscious perception
or not on the global
workspace point of view.
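The timing logic can be sketched in a few lines. This is only a toy illustration, not Dehaene's actual analysis: the traces, sampling grid, and threshold below are all invented, and real ERP work uses statistical tests across trials and subjects rather than a fixed cutoff.

```python
# Toy sketch of using ERP timing to separate "seen" vs "unseen" trials.
# All numbers here are made up for illustration.

def divergence_onset(seen, unseen, times, threshold):
    """Return the first time (ms) at which the two averaged traces
    differ by more than `threshold` microvolts, or None if they never do."""
    for t, s, u in zip(times, seen, unseen):
        if abs(s - u) > threshold:
            return t
    return None

# Synthetic averaged ERPs sampled every 50 ms from 0-500 ms.
times  = [0, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500]
seen   = [0.0, 0.1, 0.3, 0.4, 0.5, 0.6, 2.0, 3.5, 4.0, 3.0, 1.5]
unseen = [0.0, 0.1, 0.3, 0.4, 0.5, 0.5, 0.6, 0.6, 0.5, 0.4, 0.3]

print(divergence_onset(seen, unseen, times, threshold=1.0))  # 300
```

On this synthetic data the traces first separate at 300 ms, the kind of relatively late divergence that the global workspace view takes as the signature of conscious perception.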
So here's the technique.
This is work done by Michael
Pitts at Reed College.
So the paradigm
is this-- you have
a ring with these disks in it.
And the task is to
detect a disk dimming.
And that focuses your attention
on the periphery of the screen,
because it's a very
difficult task.
He calibrates it to be
extremely difficult.
In the meanwhile,
while that's going on,
there are constantly changing
lines in the middle.
So here's an example of a square
that occurs in the middle.
And he's set this up so that
only about half the subjects
notice it.
So the first stage here, which
is probably the most
significant one, is that he
does 240 trials of this
dimming-detection task, and then
asks people afterwards
whether they saw a figure.
He does a long,
detailed questionnaire.
He shows them various
possible examples of what
the figure might have been.
And so he's trying
really hard to rule out
guessing and other things.
So what he finds is that
conscious experience correlates
with activations earlier
than the global workspace,
at about maybe 200
to 250 milliseconds.
So it's this
temporal difference.
So again, this supports my view.
And I'll just quote some
of what he says here.
I won't show you the
whole experiment,
but here's what he says.
He says, "The pattern
of results suggests
that the neural events reflected
in"-- and this is the ND1 and
ND2, this stuff right here--
"may be adequate in themselves
to produce visual awareness,
whereas more widespread activity
indexed in these things, which
occurs at 400 to
600 milliseconds,
might be required only
when the stimuli need
to be processed further to
fulfill the goals of the task
at hand.
This distinction
parallels that made
by previous theorists between
phenomenal consciousness
and access consciousness."
So this is the first
of his many experiments
that supports this
point of view.
And I won't go into
all the details.
But what he shows in this
work is that it's reporting.
It's the cognitive
processes underlying
reporting and conceptualizing
that lead to global workspace
activation.
So according to-- this is a 2014
book by one of my opponents,
Stanislas Dehaene, who says,
when "the prefrontal cortex
does not gain
access to a message,
it cannot be broadly shared, and
therefore remains unconscious."
What I think is that
that's unconceptualized,
but in many cases, conscious.
So just to sum up, the evidence
for rich conscious perception
stems from these delayed
indicators, from eye movements.
And I didn't get a chance to do
the ones about gist judgments.
And so the idea is that the--
well, I haven't
really justified this,
but anyway, the thought is
that phenomenal consciousness
probably is not as
informational a kind of thing
as access consciousness.
And so I think it's likely to
resist computer approaches.
And I'm just going to stop.
[APPLAUSE]
So questions?
And they would like you
to use the microphone.
Maybe somebody can hand
the microphone around.
Sorry, that was a little
quick run-through of some stuff.
AUDIENCE: I have a question.
With the trials
that you mentioned,
I wonder if you see differences.
How do I ask this?
So the trials seem to assume
that people are always
the same.
But, like I'm kind
of tired right now.
It's Monday.
So maybe the way that I would
perform with a given trial
now, if I'm hungry, if I'm
not hungry, how does that--
the reality of the
body and outside forces
influence or not influence
the consciousness trials?
NED BLOCK: So in all
experiments, that's a problem.
There's going to be
a lot of variance.
It's just treated
as noise, really.
It's ways in which you
will not get uniformity
of response because of
variation, all kinds of things
to do with people.
And you just have to deal
with the noise in the data.
But it is a persistent problem.
AUDIENCE: Hi, back
to the title--
I wonder why-- can you summarize
why the artificial intelligence
approach won't work for the
explanation of consciousness
[INAUDIBLE]?
NED BLOCK: Well, the thought--
well, I didn't really
get into this, but
the thought behind it
is that the artificial
intelligence approaches are
probably better for
access consciousness
than phenomenal consciousness.
This idea of global broadcasting
seems more amenable.
It's information
flow, basically.
Whereas what's going on in
phenomenal consciousness
seems somehow different
from information flow.
It seems maybe to involve
biochemical mechanisms
in the brain.
We don't know what the nature
of phenomenal consciousness is.
So I guess maybe
what I should really
say is, there is a
possibility that methods based
on the flow of
information will work
for phenomenal consciousness.
It is certainly
true that advances
to do with phenomenal
consciousness
seem to be coming from
neuroscience rather
than from computer science
or artificial intelligence.
One of the hopes--
one of the points
that people have
made about the global
workspace viewpoint,
the one I've been
arguing against,
is that it's, in principle,
implementable on a machine.
So people who are interested
in machine consciousness
have been very happy with
the global workspace idea.
So to the extent
that I'm arguing
against the global
workspace idea,
I'm arguing against at least
a standard kind of computer
approach to consciousness.
AUDIENCE: In your
mind, can you imagine,
or do you believe that
machines can get consciousness?
NED BLOCK: Yeah,
I think-- look, I
think we're conscious machines.
We're meat machines.
I'm not any kind of a dualist
or anything like that.
But I think that the
most obvious application
of theoretical approaches
to consciousness
from a machine point of
view haven't panned out.
I haven't mentioned it, but
there are a lot of other, more
machine-friendly approaches.
So some of them may--
it depends what kind of
machine we're talking about.
My feeling is that we may need
some kind of analog processes
to deal with consciousness.
But these are the initial
stages of approaching it.
AUDIENCE: I'm not entirely sure
how to ask this question, so
bear with me.
But it seemed like
a lot of things
you were examining in
terms of what constituted
conscious awareness
of information,
or conceptualization of
information, or unconscious,
or pre-conscious
conceptualization information,
recognition information, all
kind of had to do with a sense
that there's sort of like one
place where consciousness is
happening, like consciousness
is all kind of one singular
Cartesian process.
And if that's not
true, what implication
would that have for
these questions?
Is it possible that
these might all
be irrelevant questions
to consciousness overall,
and are merely a
question of where
information happens to be in
the brain at any given point?
NED BLOCK: I
definitely do not think
there is a place in the brain.
One of my opponents,
Daniel Dennett,
has used that to
caricature my position.
He calls it "the
Cartesian theater."
I think our best guess
about where in the brain
consciousness happens is that
every conscious content is
processed by the
area that processes
that kind of information.
So for example, conscious
contents of motion
have to do with activations
in that area, MT.
Probably, they involve
reciprocal connections
to lower visual areas.
Conscious appreciation
of faces probably
has to do with
activation in this thing
called the fusiform
face area, the bottom
of your right temporal lobe.
So I don't think there's any
place where they come together.
The closest thing to a place--
I like to distinguish
the question of what makes
the difference between different
conscious contents,
like face and motion,
from the question of what
makes those contents conscious,
a matter that has been explored
by studying, for example,
anesthesia.
And it looks like there's some
kind of general connectivity,
especially going
back to this thing
in the middle of the
brain called the thalamus.
People used to speak
of a thalamic switch.
So the closest thing to a
place where it all happens
might be that.
But I don't think
that's what explains
the difference between
consciousness of a face
and consciousness of motion.
It's more like a kind of
something in the direction
of an on-off switch.
AUDIENCE: Maybe
a silly question,
but I noticed all of
your examples in this
were done with people.
What about animals?
NED BLOCK: There
was a big revolution
in the study of consciousness
in the mid-1990s,
when Francis Crick, the Nobel
Prize-winning biologist,
and Christof Koch realized you
could approach consciousness
through studying animals.
And a lot of the
work is with animals.
Actually, I skipped something
that involved subjects
who were monkeys.
But a lot of the
work is in animals.
And I think there's
every reason to believe
that our primate cousins are
just as conscious as we are.
It used to be, before
about 20 years ago,
that people thought of
consciousness in terms
of language.
And I think that
access consciousness
was what they had in mind.
And I think one of the things
that's happened in the last 20
years is people have realized
that language really doesn't
have much to do with it.
So yeah, there's a lot of work
on consciousness with animals.
Of course, it's easier to
get reports from people.
But there's been really some
surprising work with animals
where you can really get at
what they're experiencing
through non-verbal methods.
AUDIENCE: Thank you.
AUDIENCE: If you suppose we
had a synapse-level simulation
of the full human
brain, which I'd argue
we might just be 15
years away from or so,
would this simulation exhibit
phenomenal consciousness?
NED BLOCK: Well,
that's something
that people have argued about.
One common point is that a
simulation of a rainstorm
isn't wet.
And maybe a simulation
of a conscious being
isn't conscious, either.
So this all goes
back to the issue,
which we don't know
about, but which
could be true, that
consciousness essentially
involves something to do with
the neural processes that
are going on in the brain,
some kind of analog thing.
And that without an analog
device of that sort,
you're not going to
get consciousness.
For example, signals in
neurons are electrical,
but neurons communicate
via chemicals
that go across the synapse.
Maybe that's part of what's
needed for consciousness.
Maybe if you don't have--
maybe you can make an
artificial synapse.
People have made
artificial synapses.
But maybe you'd have to use
neurotransmitters for it
to really have
genuine consciousness.
This is the thing--
we don't really know what,
at its most basic level,
consciousness is.
So we don't know
the answer to that.
AUDIENCE: To that-- to not
knowing what consciousness is,
what attempts have
been made to attack
this question from the point
of, instead of asking what
is consciousness, asking
why is consciousness,
and speculating why it evolved?
NED BLOCK: A lot of people
have tried to answer that.
Of course, it's a
little hard to know.
Evolutionary
reasoning-- it's famous
for the so-called
"just so" stories,
to use Stephen Jay
Gould's phrase.
But there are all sorts
of hypotheses about why
we have consciousness.
It's obviously doing
something for us.
Here are some
possibilities-- maybe
it's motivational, so positive
consciousness, pleasure, pain.
Maybe it is a way of
organizing attention,
or has something to do with
interactions with attention.
So there's no shortage
of speculations,
but I don't think anybody knows.
AUDIENCE: Can you
describe again--
I was a bit confused-- the
difference between conscious
perception and conscious
perception that is not
conceptualized, which is
basically the thing where--
and also, once you
do that, discuss
a bit kind of the consequences.
What would that mean
if one theory is true
versus the other?
NED BLOCK: A good
index of this is
the baby's perception of color.
You've got this
six-month-old baby.
Its color discriminations
are almost
at the level of an adult.
And the way you can tell
that, for example, is
if you have a colored background
and a different-colored disk,
a baby looking at that, and
an adult looking at that,
will tend to move
their eyes to the disk
if they can see the difference.
So that gives you--
that's one method.
There's a number of methods
of finding this out.
That's one method that tells
you that babies distinguish
between colors
pretty much the way
we do after about
four to six months.
However, they cannot
use colors in reasoning.
So here's an
example experiment--
babies are very interested
in movement and noise.
So you display-- this is done
by Jean-Remy Hochmann-- you have
a display where a
wonderful, noisy puppet
thing that twirls
around will either
occur on the left or the right.
If you see two identical
shapes, then you
can set it up so that
it's on the left.
And the baby will
notice that regularity.
Two identical shapes,
look at the left,
because there's going to
be something great there.
What about two identical
colors on the left?
They can't do it.
They can't.
They don't register colors
in a way that allows
them to use them in reasoning.
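The logic of that anticipatory-looking measure can be sketched with toy numbers. Everything below is invented for illustration; the actual Hochmann studies use eye-tracking and proper statistics, not a ten-trial tally.

```python
# Toy sketch of the anticipatory-looking logic in a Hochmann-style
# design: two identical cues predict that the puppet will appear on
# the left. If babies extract the regularity, their anticipatory
# looks to the left should exceed chance (0.5).
# The trial data below are invented for illustration.

def looks_left_rate(trials):
    """Fraction of trials with an anticipatory look to the left."""
    return sum(trials) / len(trials)

# 1 = looked left before the puppet appeared, 0 = looked right.
shape_trials = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # identical shapes as cue
color_trials = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]  # identical colors as cue

print(looks_left_rate(shape_trials))  # 0.8 -- above chance
print(looks_left_rate(color_trials))  # 0.5 -- at chance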
So that's what
concepts are about.
They're about
reasoning, and thinking,
and decision making,
and evaluating.
They just can't do it.
And in fact, babies don't even
learn the four basic color
words until they're three
years, three months old.
An experiment done by
Mabel Rice in the 1980s
took kids who did not
know the difference
between red and green--
sorry, did not know the
words for red and green,
and tried to teach them the
words "red" and "green."
This is three-year-olds.
She had to go
through 1,000 trials
to get kids to learn
the red-green contrast.
The typical thing is, once
they learn one color word,
they learn them all.
There's been a lot of
experimental work on this.
But basically, what we're
dealing with is a creature--
namely the human baby--
that does not register
color in a way that allows
them to conceptualize it.
It's non-conceptual.
I think all perception
is like that.
At its most fundamental
level, it's non-conceptual.
Sometimes it's
automatically conceptualized
when it gets into our
conceptual system.
But what you have to
realize is the perceptual
and the cognitive are just
different in the human brain.
Now, maybe that doesn't have to
be true in the machine brain.
But we would be wise to
give a thought to how
people work when we're
thinking about how
to get machines to work.
So that's the reason
it's relevant.
AUDIENCE: One way, or one
thing that comes to mind here
is, maybe another terminology
difference between these two
things that might be
useful is conceptual
as sort of like symbolic
reasoning, symbolic processing,
like you would do in logical
formulae or something,
whereas the other type
of processing that you're
considering is more like, say,
a digital signal processor that
translates audio
data to digital data.
It's not doing symbolic
reasoning there.
It's applying just some kind
of fixed transformations.
That seems to be the site
of phenomenal character
in your view, that
then gets brought up
into this sort of
symbolic central unit.
NED BLOCK: Yeah, I accept that.
Yeah, good.
AUDIENCE: Are any
of these points--
does all of this make sense
still if you are a dualist?
NED BLOCK: Ah, OK.
Yeah, I think everything I said
can be accepted by a dualist.
Well, maybe not.
I did say some
things that sounded like
I think there's a neural basis.
But even dualists can
accept a neural basis
for conscious experience.
So yeah, I think everything
I said could be thought of--
could be accepted by a dualist.
So the question is whether
that neural basis is really
all there is to it or not.
AUDIENCE: OK, thank you.
That's it for questions.
And let's thank Ned again.
[APPLAUSE]
