Okay, we finally got our AV working, even in advance of conscious computing and the singularity being reached, which is outstanding.
So, it's an honor to have Manuel with us today. This is probably one of the more exciting titles we've had in a few years: toward a conscious AI.
Many people in the field of machine intelligence came here through a pathway of being intrigued, curious, and surprised by the existence proof that brains somehow generate the states that we feel: this awareness, these subjective states, the pains and pleasures of being here. It always seems like such a mystery to even begin to broach that topic, at least to do it in a credible way, without getting in trouble and being called something that you don't want to be called, like far-fetched or flaky.
I think the hope has been for
a long time that there is
a science to be had in
this space because surely
something is going
on and it's something we
do not understand yet.
So, Manuel is the Bruce
Nelson University
Professor of Computer Science
at Carnegie Mellon University.
He's a 1995 Turing Award winner; the award recognized his work in
theoretical computer science,
in particular his work
on complexity theory,
cryptography, and links
to program checking.
He got involved in thinking about brains, I think, before he got interested in complexity theory; that's my guess, looking at his bio.
So, Manuel actually came to MIT in
the mid 1950s and he
pursued his passion about
understanding thinking
brains by working in
the famous neuro-physiological
laboratory of Warren McCulloch
and Walter Pitts of the famous
McCulloch and Pitts early paper,
which happened probably in
the '40s that paper was written.
>> '43.
>> 1943. That paper basically wrote down some basic constructs of what a symbolic, abstracted neuron was like, and how you'd program and look at the behaviors of assemblies of these units. His interest in brains and machines led him to think through applications, and motivated and framed his interest in mathematical logic and recursion theory for the insight it gave him on brains and thinking. He went on to do his doctoral work with Marvin Minsky.
He did his PhD in 1964, and then went off to UC Berkeley as an assistant professor of mathematics.
Came to Carnegie Mellon
in 2001 and he's had
many incredible students over
the years and he's done and
continues to do groundbreaking work.
I personally hope that your work
on consciousness and links to
computational
architectures will break
some ground because it's still
a big mystery to most of us.
>> I hope so.
>> Thanks.
>> Thank you. Thanks very much.
So, let me start by saying something
about how I got into this.
When I was a kid,
I was a very dumb kid and my teacher
told my mother that I might be
able to get through high school,
but that she shouldn't expect me to get through college.
So, I really wanted to be smart, and I asked my father (this must have been first or second grade) a question. What I wanted to ask was how do you think, but I asked how do you memorize.
Because I had trouble
memorizing and he said well
you just do it and that
didn't help very much and so I
kept pestering him until
at some point he said,
"You know, if you
understand the brain,
then you'll be able to think
better," and I thought,
"Wow, what a great idea."
I like that very much.
So, fast forward: I remember, when I was in sixth grade, trying to think about this conscious brain, what it was like, and I just didn't get anywhere.
You can try to think
about how that brain is
working and you get a
sense or I got a sense
anyway that there was
like a little person in
my brain that was looking
out of my eyes and I
knew that then I'd have to
understand the brain of
that little person and this
didn't make a lot of sense,
so that was too bad.
Then, when I got to MIT, I took a course in which we went through the 20 volumes of Freud; that was the closest I could get to brains then. Then McCulloch and Pitts came to MIT, and I was told by my teacher in the Freud course, Richard Shawnwalk, that I should go introduce myself to them.
It's McCulloch and Pitts,
they are my real mentors.
They're just terrific.
McCulloch and Pitts, as was pointed out in the introduction, defined the formal neuron very early. The formal neuron is a very simple neuron, which could add and subtract inputs and compare the sum to a threshold, and they basically told the neuroscientists of the time, "There must be inhibition, not just excitation, because if you have just excitation, you have monotone circuits; you cannot compute everything."
They said, "No, we've never
found inhibition in the brain."
But, they insisted it's got
to be there and sure enough
they did find it as
you might imagine.
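The formal neuron described here (inputs added and subtracted, the sum compared to a threshold) is simple enough to sketch in a few lines. The particular weights, thresholds, and gates below are illustrative choices of mine, not examples from the talk.

```python
def formal_neuron(inputs, weights, threshold):
    """McCulloch-Pitts formal neuron: fire (output 1) exactly when the
    weighted sum of the inputs reaches the threshold.
    Positive weights are excitatory, negative weights inhibitory."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With only excitation (all weights positive), circuits are monotone:
# turning an input on can never turn the output off.
def and_gate(a, b):
    return formal_neuron([a, b], [1, 1], threshold=2)

# Inhibition (a negative weight) is what allows non-monotone functions
# such as NOT -- the point McCulloch and Pitts pressed on the
# neuroscientists of the time.
def not_gate(a):
    return formal_neuron([a], [-1], threshold=0)
```

Turning the single input of `not_gate` on turns its output off, which no purely excitatory circuit of such neurons can do.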
So, that was great.
So, that's where I'm coming from.
So, I spent a few years in their neurophysiology lab. Besides McCulloch and Pitts, Jerry Lettvin was there, and another person who was there was Pat Wall, who was especially interested in pain.
So, you'll see pain coming
up here in this work.
So, let me get here to where
I can see what I'm doing.
So, let me start by saying
what is consciousness.
So, we have an idea,
roughly speaking,
consciousness is what
you're aware of,
what you see, what you
hear, what you smell.
Another thing that is
part of your consciousness
is your inner speech.
You all speak to
yourself all the time.
You are speaking to
yourselves and I'm
absolutely sure that my dog
is speaking to itself.
The difference is that I speak English and my dog speaks doggish; it's a different language, but it is speaking to itself, because you need that speech in order to plan.
To be able to make plans,
I can see my dog planning where
he's going to go each day,
and stuff like that.
You are also conscious of
your dreams and not conscious
when you're not dreaming,
but when you're dreaming,
you are conscious of your dreams,
and that's another example,
and your feelings: joys,
fears, et cetera.
So, what's consciousness? I'm still talking about what that is.
So, this is going to come up.
Have you had the experience of
going to a party and seeing
somebody whose name you know,
but you can't quite come up with it?
It's embarrassing, isn't it?
I'm always happy when
Leonor is with me.
Yeah, I should have mentioned.
This is joint work
with Leonor and Avrim,
and Leonor is a very important part
of this project, and in fact,
she's going to be in Beijing
in November giving this talk.
So, you'll be able to see it there. Today you have to get it from me; you could get it much better from her, but that's the way it is.
Anyway, so you're at a party you see
someone whose name you know
and you don't remember it.
So, it's embarrassing you go and
get yourself a glass of wine,
you speak to somebody else and then
half an hour later it
pops into your mind.
Where's it coming from? Some computation has been going on in your brain, which you are completely unaware of, that suddenly causes that name to pop up into your mind.
So, we'll talk about that.
Dreams, selective attention. The reason for that picture is that you have the sense, when you see it, that you see the whole thing.
But, I don't know if you
saw the dog that's there or
the dog house in
that picture or anyway,
there are all sorts of
things you don't see.
You form an impression, and that impression is what the vision people call the scene gist. It's nothing more than a couple of sentences, and it's enough. It's in Brainish, of course, not in English, but it's enough for you to then be able to dream about that, and in your dream you will see something that is very much like it. It's incredible: it's there, but just the gist of it.
Problem clarification
and problem incubation.
The mathematicians talk about
the fact that you really need to
get very clear the definitions and
the statement of the problem
that you're working on
and clarifying the problem is
something that you do consciously.
Problem incubation is then sort of like that name coming up: the unconscious part of your brain is working on it, and that's the incubation part.
If you play ping pong, you know that to learn it you must pay attention consciously, but in a tournament you take your consciousness offline and let your unconscious do the work. So, that's an example.
Here are some quotes
from Hadamard, Poincare,
Gauss, again, showing how the
unconscious is coming in.
Hadamard: "On being very abruptly awakened by an external noise, a solution long searched for appeared to me at once without the slightest instant of reflection on my part."
Poincare, you know
the story of his stepping
up into a bus and
at that moment a solution
to a problem came to him,
he doesn't know from where
And look also at Gauss: after years of failing to prove a particular mathematical theorem, he finally succeeded, "by the grace of God, like a sudden flash of lightning. I myself cannot say what was the conducting thread which connected what I previously knew with what made my success possible."
The point of all this is that the unconscious part of your brain is a very important part of your brain; it is doing a lot of the heavy lifting here.
So, what is consciousness?
So, what I'm going to present in this talk, what I want to present, is a formal model in the style of the Turing machine, something that we call the Conscious Turing Machine.
So, let me mention that when I first learned about finite automata, I was so happy that finally a finite automaton could be a model for the brain. You see, I just had no model, since this homunculus didn't make any sense. Then a finite automaton was a possible model, and it was nice that, actually, that was Plato's model of the brain.
He thought that we are born with the knowledge of every language that there is in the world, or even that there could be, and that our job as young kids is to find our way to the state which corresponds to the language that everyone else is speaking.
Again, we know it can't quite be that. Then I learned about Turing machines, and that was much better, but it still didn't get at the problem of consciousness.
What I want, as a mathematician, is to get to a good model, which I call the Conscious Turing Machine, or conscious AI. After formalizing the Conscious Turing Machine, we will define consciousness in that model. You know what consciousness is? Who knows, exactly? Well, this will give you a definition, and then we'll point out properties of that consciousness in the model.
So, let me make the point that
when I talk about consciousness,
I mean consciousness in this model,
then there's the real consciousness; that's another thing. And note that I am not trying to get a model of the brain; that's very complicated, too much for me.
I want a very simple
model that will be
enough for me to
understand consciousness.
Okay. So, after formalizing this,
we'll define consciousness
and then point
out properties of
that consciousness in the model.
Let me just say that the quality of
a formalization and definition
depends largely on how
closely the notion squares
with what you think it
should be and whether or not it
helps you understand the concept.
So, you will have to be the judge,
whether this is
a useful notion or not.
Okay. So, let me mention here too
that our model is inspired by
the works of cognitive
neuroscientists,
psychologists, and philosophers.
I'm lucky to be at CMU, where John Anderson is; that's interesting. John Anderson and Allen Newell were both very interested in this question, and John Anderson is still at CMU.
The work I'll be talking about,
the model that I'll be basing
it on is work of Bernard Baars,
Stan Dehaene and Kevin O'Regan.
Then there's David Chalmers, who is a philosopher, and
Björn Merker is an important person
because he's the one
that basically thinks that
you don't even have to have
a cortex in order to be conscious.
For him, consciousness
comes from the midbrain,
which actually is what
McCulloch thought too.
Anyway, the easy and hard problems.
So, this was defined
by Dave Chalmers.
You know what the hard problem is?
The easy problem is to
build a machine that
simulates something like
feeling pain or feeling joy.
The hard problem is to
make a machine that
really feels the pain and the joy.
You might wonder why
do you need to do
that if you have a good simulation,
but I'll explain why.
Let me get to that, first saying that
Chalmers' definition is a lot
more general than this.
He's interested in all qualia.
Definition down below.
He's interested in
a lot broader stuff.
I am going to be restricted to
pain and joy and this talk
will just be pain.
Leonor's talk will just be joy.
She doesn't understand why
I'm interested in pain.
I don't understand why she's
interested in joy, but it's okay.
Our research and this talk is
restricted to understanding
pain and joy.
>> [inaudible] .
>> Yeah, you'll see how
he comes in, thank you.
Including the extremes
of agony and ecstasy,
and we have a reasonably good answer
for pain and we have
only a partial answer for joy,
for how these feelings are generated.
So, before we get into that,
I'm a theoretical computer scientist.
I don't go in and cut up brains.
I saw Pat Wall do this
with cats many times,
but I personally don't do this.
I'm a theoretical computer scientist. What can I do?
So, I'm looking for
a simple model of consciousness,
not a complex model of the brain,
but a simple model of consciousness.
When I talked to John Anderson, he said, "Evolution abhors parsimony," which apparently means that it's complicated; the brain is complicated. Then Leonor points out that mathematics thrives on parsimony. We need simplicity.
So, our aim is to
propose a simple model
that we can understand
and prove theorems about.
We want properties of consciousness to be emergent, not programmed in. One of the things we believe consciousness does for us is that it enables you to deal with complicated worlds that were not expected in the first place. You'll see why, in the model, that should be expected.
So, the easy and hard
problems, as I told you.
What's the difference between the simulation, which we know how to do, and the experience, which is what we're trying to capture with consciousness?
I'm going to explain this
to you by describing
a disorder called pain asymbolia.
This is a very interesting disorder.
People who have pain asymbolia.
Don't look up Asymbolia.
It has to be pain asymbolia.
It's a very particular thing.
People who have pain asymbolia,
you can pinch them,
prod them, they know
where you're pinching,
how hard you're pinching.
If there are temperature variations,
they know if it's hot or cold,
they know everything you
know about the pain,
including very severe pain,
but they don't suffer.
The young girl that
has pain asymbolia,
you prick her and push her and
the doctors come around looking and
she giggles because she
knows how strange this is.
You can do these things, pinching and pricking, and she still says at the end: try here, try there.
So, pain asymbolia,
they know about it.
They don't suffer from it.
See, we know how to build pain-asymbolic robots, but what we don't know how to do is build robots that really feel the pain, and that's what I would like to be able to do.
There are two kinds
of pain asymbolia.
In one, the girl who
giggles and the other,
you can see her grimacing
and it looks like she's feeling
the pain but she's not.
The great thing about these kids,
I don't know if you've
ever tried this,
you take a frozen bottle of Coke,
as long as there's some liquid in it,
you can hold it without
getting frostbite.
You try to hold it for
as long as you can.
I can never manage very long.
It hurts, really hurts.
These pain asymbolic people,
they know it's frozen,
but they can hold it for
as long as you want.
No problem, even though they feel it.
So, consciousness has to do with
the architecture of the brain.
The architecture
presented here deals with
the brain at a very high level
of abstraction,
a level way above that of neurons.
So, even though McCulloch and Pitts
were the first to define
the formal neuron,
no neurons in this talk.
We're trying to get at
a level even above that.
I loved the neurons and I
love what's being done now,
but I'm trying to get up to
a level slightly higher.
As I said, the architecture
is not obvious.
I just could not,
I was trying and
trying to get a model
that would give me some insight
into how that brain works.
It bothered me that
we're born without
a manual to tell us
how the brain works.
Well, anyway, it bothered me.
I wanted to have a manual
and basically when I
went to see McCulloch
and Pitts and said I would like to work on consciousness, I was told: you may not work on consciousness. No. That's it. When I pushed, Walter Pitts came over and said, "Well,
you know, you got
this much skull to get through.
EEGs will give you
just a very broad idea of
what's going on in the brain.
You can't get
any good data, forget it."
So, I wasn't allowed to
think about consciousness.
It became something that we were allowed to think about in 1990, because that's when fMRI came on the scene.
So, fMRI came on
the scene in 1990 and at that point,
Bernie Baars came up with
his model of consciousness.
So, let me tell you,
this will be sort of
fundamental to this talk.
This is Baars's Theater
of Consciousness.
It's a beautiful model.
He views consciousness as the activity of an actor on a stage.
So, it could be several actors,
but usually just one or two,
not very many actors on the stage,
and everything that that actor's
saying is being viewed by
these audience members who are
the processors of the brain.
You have 10 to the 11th neurons. I think of the brain as having like 10 to the 11th processors.
Each of these processors has
a different purpose in life.
They are listening to what's
going on on the stage.
So, up on the stage,
what's the name of this person
and then down below,
somebody's coming up with the name.
So, the conscious self
is not privy to
the workings of the unconscious self.
I've already told you about,
what's her name, anyway, right?
So, let me note: this business of "red" going up and "green" coming down will appear in the model.
You recall when you first met
that person, it gets broadcast.
So, the entire audience
knows about that.
So, the question is, what's her name?
Then that's broadcast, then
maybe from the unconscious,
something about what she
does, that gets broadcast.
See, it comes up and
then it's broadcast,
so all the processors know about it,
and then another one speaks up,
her name begins with
a T, that's broadcast.
This business of broadcasting this information eventually leads, half an hour later, to her name coming up from the audience, which had been thinking and searching to answer that.
So next up, let me give you
Baars' Model as he published it.
This is it. His Model
of Consciousness.
You can see input on the left;
vision, hearing, touch.
Output on the right.
There's a central executive up
on top that's controlling stuff.
There's a working stage.
I call it short-term memory but
the cognitive psychologist
calls it working memory.
Then information goes
down and there are many,
many processors down below
in your unconscious.
Here's perceptual memory, or autobiographical memory, et cetera.
So, what can theoretical... I'm a theoretical computer scientist. What can we contribute to the discussion?
So, first of all,
a well-defined formal model,
that's what I want.
A good formal definition of
consciousness, I want that.
Explanations how agony and
ecstasy might arise in a machine,
I'd like to have that.
Understanding for distinguishing
simulation from experience.
You can already see this: you can get beautiful YouTube videos of a robot that acts afraid of the red ball.
It's doing everything to
make you think it's afraid,
it's clearly afraid, but
you know it's a simulation.
What's the difference and how
do you tell the difference?
One of the points I
want to make is that,
without an understanding,
there's no way to tell if an entity,
an animal or a robot is conscious.
You need to have
some understanding to be able to
say whether it's
a simulation or it's not.
One of the reasons it's important,
here's something that comes up a lot.
Medical doctors, physicians, would like to be able to know, when a person comes in and says he's got this tremendous pain in his shoulder and needs those pills: does he really need those pills, or is he just trying to get pills to sell on the street? How do you even tell?
It's amazing that there are
no good ways to tell yet.
I mean, even fMRI does not yet enable us to tell.
It's very strange.
So, one of the things I might hope as
the outcome of this would be,
a way of being able to distinguish
the person who's simulating the pain,
from the person who
actually feels the pain.
So, next up, the formal
definition of the Conscious
Turing Machine.
So, let me make a point again
before I bring that up,
that the purpose of
the Conscious Turing Machine is
not to compute
uncomputable functions,
that's not possible,
nor is its purpose to compute
functions more efficiently.
So, I teach this undergraduate complexity course, and that's mostly what complexity theory is about: how do you compute efficiently.
That's not the problem
we're dealing with.
The purpose of this model is to
suggest possible solutions
to the hard problem.
So, here's the model.
So, this is the beginnings
of the model,
and I am trying to keep
it as simple as possible.
You'll see a few things.
There's an input on the left,
there's a short-term
memory in the middle,
and the output on the right.
Notice they're not connected
here, and they will not be.
Then there's this tree which connects
the root node at short-term memory,
down to all the processors
in long-term memory,
and no central executive,
in fact in this.
To start off, we did try to follow Baars's model, but the central executive did not make sense; I'd have to talk to you about why and what that is. But it's kind of nice that
if you must have one,
you could imagine one
of these processors
down at the bottom being
the central executive.
But anyway, this tree represents how anything that's up on the stage gets broadcast to all the processors below.
That's the green arrow,
and then information from
the unconscious processors
goes up to the stage.
I've got, it really should
be two different trees.
There's one for the stuff
going down and there's one
for the stuff going up
but too complicated,
too messy, so I will
just show you one tree.
Let me point out
the difference between these,
that the information from
the stage going down
to the processors, that goes fast.
I'm sure Koch thinks that this is much too simple; it's a binary tree,
but even a binary tree will work,
it just gets you to a lot
of nodes very quickly.
So, in the model,
it's just a binary tree and
it goes very fast because
the "green" arrow is taking
information down to a node,
and all that node does is to
push it in both directions and
then it gets pushed until it gets
down to the bottom, very fast.
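How fast is this downward broadcast? A quick back-of-the-envelope sketch (mine, not from the talk): a binary tree of depth d reaches 2^d leaves, so covering the talk's 10 to the 11th processors takes only about 37 copying steps.

```python
import math

def broadcast_depth(num_processors):
    """Depth of a binary tree whose leaves are the long-term memory
    processors. A broadcast from the root (the stage) reaches every
    leaf in this many steps, each node copying the chunk to both
    of its children."""
    return math.ceil(math.log2(num_processors))

print(broadcast_depth(10**11))  # 37 steps to reach 10^11 processors
```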
There it is. Going up
it's another story.
So, this took a while, and this is where Avrim's work actually will be coming in, and I'm very happy about one thing.
In trying to create
the model to go up,
you basically have these processors
needing to talk to each other.
One processor says it has important
information to get up there,
the other one has
important information.
Which one gets up there?
The short-term memory or
the working memory is very small.
George Miller talks about this; have you seen his paper "The Magical Number Seven, Plus or Minus Two"? He pointed out that you can store very, very little in that short-term memory, and one of the questions I'll try to answer for you is why it is in fact necessarily so small.
Anyway, going up,
there was a question of how will
these processors talk to each other.
It can't be anything
very complicated,
and it took a while for
us to realize it can't be
anything more than like
a comparison of two numbers,
and that's what's going on here.
It's just a comparison of two numbers, and these numbers are chosen by the processors themselves. There are actually many pain processors; say my elbow has a pain. The pain processor for the elbow itself assigns a weight to the information it has that it wants to get up.
It's all determined by how many nerve fibers, how many nociceptor fibers, are firing, and at what frequency. That determines the weight, and for joy there are weights as well.
These things go up and at some point,
maybe pain at minus five
and joy at plus three,
what happens here is pain
has a greater magnitude.
So, pain is what will go up,
but the weight it takes
up with that is the sum,
minus five plus three
is minus two. Yes.
>> We can fire experience
that's also [inaudible]
does it change the way? [inaudible].
>> Yeah, yes you'll see,
very good question.
I hope you'll see that
that's in fact what
Avrim's secret experts
problem is all about.
For now, let's just get this clear: minus five plus three is minus two; pain goes up.
Pain at minus two and
fear at minus five,
again the one with
the greater magnitude,
in this case fear,
goes up and it goes up
with the sum of the two.
Now, it turns out that the weight of whatever is up there at the top node, at the root, is actually the sum of all the weights of the stuff below, and its sign is the sign of that sum.
If it's positive it
basically means that you've
got something positive going
up there and if it's negative,
then you've got pain or
some fear or something
like that going up there.
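The up-tree competition just described can be sketched as a tournament on (label, weight) chunks: at each node, the chunk of greater magnitude wins, but it carries upward the sum of the two weights, so the weight reaching the root is the sum of all the weights below. This is a minimal reading of the mechanism, ignoring addresses and the tree's timing.

```python
def compete_up(chunks):
    """Run the up-tree tournament on a list of (label, weight) chunks.
    At each pairing, the chunk with the larger |weight| wins, but its
    weight becomes the sum of both competitors' weights; the survivor
    therefore carries the sum of all the leaf weights."""
    while len(chunks) > 1:
        next_round = []
        for i in range(0, len(chunks) - 1, 2):
            (label_a, w_a), (label_b, w_b) = chunks[i], chunks[i + 1]
            winner = label_a if abs(w_a) >= abs(w_b) else label_b
            next_round.append((winner, w_a + w_b))
        if len(chunks) % 2:          # odd chunk gets a bye this round
            next_round.append(chunks[-1])
        chunks = next_round
    return chunks[0]

# Pain at -5 vs. joy at +3: pain has the greater magnitude, so pain
# goes up, but with weight -5 + 3 = -2.
print(compete_up([("pain", -5), ("joy", 3)]))    # ('pain', -2)
```

Note that when pain and joy have equal and opposite weights, the surviving chunk carries weight 0, matching the balance point in the laughing-gas discussion below.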
I'm trying to show you what happens with pain and joy. Things change and move around in this model. This is a picture: I found this frying pan. You see the fire down there, and the intent is to show you that things are bubbling at the bottom; there's constantly stuff going on down there.
>> Does this mean you'll
only feel pain or fear once,
but not multiple things?
>> Yeah, so that's interesting.
You'll either feel the pain
or you'll feel the fear,
but the weight will be
an accumulation of these weights.
That's something that you can test,
that presumably you can test.
Now, you have to understand
that because of this bubbling,
you can go back and
forth between pain and
fear and that will come up also,
or you may have many pains, but according to this model you only notice one of them, though with the weight of all of them, okay?
Then, another point over
here is that you see this
triple over here: address,
information, weight.
This is the information
that's carried up to the top.
The pink part, the information, is the actual chunk that's in your short-term, your working, memory; that's what you know about.
But, for various reasons you need at
every node to have both
the address of where
the information came from,
the information itself,
which is this very short description
like of the house,
of the gist of what it is
supposed to get up there,
and a weight which you may
or may not be aware of.
It's the information
that's the chunk.
Here is the complete
model, pretty much,
and still external input is not
connected to short-term memory.
Let me just mention something else here. Here are some experiments you can do. You can ask: according to this model, if you have severe pain and real pleasure, they should cancel if they're about the same magnitude.
So, wow! That's kind of interesting.
Have you ever been to a dentist
and had laughing gas,
it's a wonderful experience
let me tell you, laughing gas.
So, the dentist is told to give enough gas that you don't feel the pain, because if they give too little, you're going to be feeling the pain.
If you give too much,
then the patient will be
bouncing off the walls and
you won't be able to
keep him or her down.
So, you've got to give just the amount where they balance, and that's essentially what this is suggesting: when pain and joy are about equal, they should not get up there.
Here's the model.
Let me just mention: you see these lines here. There are 10 to the 11th processors down there, and if you link them all up (as you'll see, they point in one direction or the other), 10 to the 11th squared is 10 to the 22nd; you are getting close to Avogadro's number.
You cannot have all of those links,
it's just not possible.
So, there's a question of how you decide which links go in. Maybe some of them are built in, and in fact that is the case, some of them are built in. But by and large what happens, I think, and in the model, is that some processor asks the question, "What's her name?" Another processor comes up with the name, and that's sort of saying there's information flowing from one long-term memory processor to another. If enough information flows in that direction, that will grow a link from the one that has the information to the one that asks.
These links would build up with time.
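One way to picture this link-growing rule: count how often an answer flows from one long-term memory processor to another via the stage, and grow a direct link once the count crosses a threshold. The class and the threshold value below are illustrative assumptions; the talk only says links grow when enough information flows.

```python
from collections import defaultdict

class LinkBuilder:
    """Grow a direct link from an answering processor to an asking one
    after enough answers have flowed between them via broadcast.
    The threshold is an illustrative parameter, not from the talk."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.traffic = defaultdict(int)   # (answerer, asker) -> count
        self.links = set()                # established direct links

    def record_answer(self, answerer, asker):
        self.traffic[(answerer, asker)] += 1
        if self.traffic[(answerer, asker)] >= self.threshold:
            self.links.add((answerer, asker))

    def has_link(self, answerer, asker):
        return (answerer, asker) in self.links
```

Once a link exists, the asking processor can get its answer without the broadcast, which is how the conscious part can drop out of a practiced skill like tournament ping pong.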
Okay, let's keep on going here.
Here's another thing that
comes out of this model.
Notice that the input
does not go directly to
the working memory and
the working memory does not
go directly to the output.
Everything is via
this long-term memory down below.
So, information from the input
can come down and then go
directly to the output.
This happens, for example, when you touch a burning hot stove and your hand pulls back. You know that you're not consciously pulling the hand back; the information is going up to your spinal cord and then back down.
I used to wonder why
does it go so far?
Why don't they just go up
to the wrist and back?
But you don't want to do this,
you want to really pull
the arm back anyway.
So, notice you are not
consciously involved in this.
It's going down and it's going
right back up to the output.
So, that's the model.
In some sense what's going
on in short-term memory is
looking over what's going
on in long-term memory.
It sees what's going on,
it gets some information from
below and it broadcasts it.
According to this model,
you become aware of
that burning hot stove after you've
already pulled your arm back,
and it's kind of nice if
that's actually the case.
Yes, Daniel.
>> How would you explain something like Korsakoff's, or in fact-
>> Like what?
>> Korsakoff's syndrome.
So, the person has short-term memory, but he can't lay the new long-term memories down.
>> Oh, yeah, HM.
>> Yeah, exactly. So, here it's implying that you have to go through the long-term to get to the short-term.
>> He didn't have the capability of putting new memories down there, but those processors were still working, and he could make use of the old memories.
>> So, the processors can still run,
but it's not updating the memory?
>> That's right. They're just
not updating their memory.
Okay, so there are many details
here of course. Yes?
>> On the previous slide, the red links you mentioned hadn't formed, but do they go away after some time, or are there automatically going to be more red links?
>> See, there's stuff happening,
oh you mean the ones down below?
>> Yeah, the ones down below.
>> So, in this model,
each time a question gets answered
by a processor that
begins the process of
generating a link and
the link is generated,
in fact, that may take
the conscious part completely
out of the picture.
You sort of learn to play ping
pong and then you can leave
the conscious out of it,
it's handled totally
unconsciously. Yes?
>> Is this also related to the Emotion Machine? [inaudible] has this idea of [inaudible].
>> Whose model is this?
>> It is The Emotion Machine, from [inaudible].
>> Does The Emotion Machine have a central executive? I should read it again.
>> [inaudible].
>> What?
>> What I'm asking about is The Society of Mind.
>> Yes.
>> [inaudible] Model
there are bilingualism.
Classical type of
machine soldering crowd,
and there's a broadcast model
[inaudible] distributed
process the order of the binary
tree and put it here [inaudible].
>> Yes. Yes.
>> For the [inaudible] last point,
I would have liked to
ask about opportunistic solutions.
That's simpler.
>> Yes.
>> Anyway, I think the symmetry
of the downward broadcast
at a low bandwidth,
but anyway back to
the table [inaudible].
>> So, he has this book called
emotion machine, that
he is talking about-.
>> Yes, the emotion machine.
>> All old-timers have
read the old books.
So, he's talking about the
emotions fighting each other,
shutting off some emotions.
They're going to laugh at somebody
[inaudible] and you can start thinking about
that one of the fact people [inaudible]
>> Yes, so here I was
talking about pain and fear,
and how they cancel each other.
Let me just mention that another
story that I'll get to here is,
have you read Oliver Sacks?
He's a wonderful writer.
Really worth reading
and just terrific.
Oliver Sacks, he likes to hike,
and he talks about going up into
the mountains on a hike,
and he got to a fenced-off field
with a sign, "Beware of the bull."
He looked and looked, no bull,
he walked into the field, and he
says he was just ambling
along very nicely,
and then all of a
sudden he saw the bull,
and the bull was getting
up on its haunches.
So that scared the living
daylights out of him.
He turned around and ran as fast
as he could out of that field,
and only noticed once
he had left the field,
that he'd actually torn a
ligament, or several ligaments.
It was a very bad tear.
He was in the hospital for
more than a month after this.
Basically, the fear took over.
It's only once he's out of
that field that the fear subsides,
but the pain can get up there,
and that will be part
of this model here.
Anyway, the details.
How do the processors choose weights?
A pain processor, in the case of
a scraped knee, sets its weight so that
the absolute value is
proportional to the number of
fibers that fire and
the frequency of their firing,
as you might expect.
A context processor has
a relatively high fixed weight,
high enough to keep its information,
what's called the scene, just onstage,
except when concentrated attention
is required for the task at hand.
If you really are concentrated
on proving that theorem,
you may very well forget
where you are in doing that,
because you need that memory
for other things.
A task that has been
put off increases its weight.
You ever have the experience,
you have a test the end
of the week or something,
and you're not too worried about it,
but the next day you are a
little bit more worried,
and then a little bit more worried,
and so it goes and these
weights increase with time.
The closer you get to
the test, the greater the weight.
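The three weight rules just described might be written out as follows; the functional forms track the talk, but every constant and name here is made up for illustration.

```python
# Hedged sketches of the weight rules described above.

def pain_weight(num_fibers, firing_rate):
    # Magnitude proportional to how many fibers fire and how fast;
    # pain carries a negative sign.
    return -1.0 * num_fibers * firing_rate

CONTEXT_WEIGHT = 50.0  # fixed, high enough to keep "the scene" onstage

def deferred_task_weight(base, hours_until_test):
    # A put-off task gains weight as the deadline approaches.
    return base / max(hours_until_test, 1.0)
```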
Details: There's a threshold
for allowing processes into
short-term memory,
and this is kind of interesting.
If the threshold is set too low,
then the ideas just bubble up.
Ideas that you haven't
thought about that well,
bubble up and you have
the case of mania.
People who are in the manic state
have lots of ideas.
None very well thought about,
and they're bouncing off the walls.
If the threshold is set too high,
there's no ideas, there's
an absence of ideas,
and I call that depression.
It's just no ideas.
It is depressing to not
have any ideas at all.
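The threshold story above is easy to state as a sketch; the threshold values below are arbitrary.

```python
def ideas_reaching_stm(candidate_weights, threshold):
    """Only ideas whose weight clears the threshold enter STM."""
    return [w for w in candidate_weights if abs(w) >= threshold]

# Threshold set too low:  everything bubbles up (the mania regime).
# Threshold set too high: nothing gets in (the depression regime).
```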
It could be anywhere in this zone.
Here's something else that
comes up in the dynamics when
processes enter that
short-term memory,
they slowly lose weight.
We need an equation to show that
[inaudible] slowly lose weight,
and when they are
kicked off the stage,
they slowly regain their weight.
This is going on all the time,
and it means that if you have
many processes of about equal
weight, they'll take turns.
One will go up, it'll be dropped,
another one will come in,
and you make cycles through them.
If you have several pains,
you'll be aware of one,
then of the next one,
then of the next one, and
it will keep going round.
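That take-turns dynamic (slow decay on stage, slow recovery off stage) can be simulated directly; the decay and recovery rates here are arbitrary choices, not anything from the model.

```python
def simulate_cycling(initial_weights, steps, decay=0.7, regain=1.15):
    """Each step, the heaviest process holds the stage; it loses
    weight while up there, and the others regain weight (up to their
    original value) while waiting. Returns the stage-holder per step."""
    cap = list(initial_weights)   # each regains at most its original weight
    w = list(initial_weights)
    history = []
    for _ in range(steps):
        stage = max(range(len(w)), key=lambda i: w[i])
        history.append(stage)
        for i in range(len(w)):
            if i == stage:
                w[i] *= decay                      # fades while on stage
            else:
                w[i] = min(w[i] * regain, cap[i])  # recovers while off
    return history
```

With several pains of nearly equal weight, awareness cycles through them, just as described.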
That leads to cycling.
And this is: processor A links up
to B when A answers B's call,
which is what I've
talked to you about.
What's this? I haven't said very much
about what the long-term memory
processor itself is.
The important thing is that
each processor down there gets inputs
from other processors, from
the short-term memory,
and from the external world,
and gives outputs also to
those places; those are the links.
One thing I haven't
mentioned is the interrupt,
and that's very important.
It is possible to have an interrupt,
and this became important in
thinking about that extreme pain
you have when you tear a ligament.
What is causing that extreme pain?
The answer in part is that,
extreme pain will interrupt
all of the processors.
It's different from broadcast.
With broadcast, the input is
like a typical input that you
have from the world,
which you may or may
not pay attention to.
An interrupt basically forces
the processor to put everything on
the stack and to pay attention,
maybe just for an instant,
but it has to pay attention.
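The broadcast/interrupt distinction reads like an ignorable input versus a forced context switch; a minimal sketch, with an invented placeholder policy:

```python
class LTMProcessor:
    """Sketch of the distinction described above: a broadcast is an
    input the processor may ignore; an interrupt forces it to shelve
    its current work on a stack and attend."""
    def __init__(self):
        self.current = None     # what the processor is working on
        self.stack = []         # suspended work

    def on_broadcast(self, info):
        if self.cares_about(info):       # may or may not pay attention
            self.current = info

    def on_interrupt(self, info):
        self.stack.append(self.current)  # everything goes on the stack
        self.current = info              # forced attention, if only briefly

    def cares_about(self, info):
        return False    # placeholder policy: ignore ordinary input
```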
>> Why do you need those exceptions?
Why isn't it just something with
a very high weight? [inaudible] .
>> Yesterday we were
talking about that,
and I don't have
an answer for you yet.
My answer for that extreme
pain was the interrupt,
but you might be right,
and I hope that you're right
because that means I
can simplify this.
That would be nice. Yes?
>> [inaudible] .
>> For anything? Yes.
>> -from the external world- before
they were all- [inaudible] .
>> Well, short-term memory is paying
attention to what's going on.
The input from the external world
can come down to a processor.
It can talk to another processor,
and they can go right up
to the external world.
>> [inaudible]
>> Yes, that's separate from
the inputs coming down
to the processors,
and inputs going up.
Whatever's in short-term memory
also has its tree to get down,
and information can get up
there. They're separate.
>> [inaudible] .
>> I think that's something I
would like to talk to you about.
Okay, well, details of the dynamics.
This is where Avrim comes in.
So, the thing I really
like about the reason
that Avrim got into this is, I
mentioned that there's
this model that we have.
You're just comparing numbers
and going up, and that's
basically what Avrim's looking at
with his sleeping experts.
His theorem presented
an optimal algorithm to dynamically
refine weights assigned to
information by competing processors.
So, an important thing here
is that each processor
decides by itself how
important some information is.
Some information is more important,
some is less important,
each processor decides for itself.
But, some processors could be very
optimistic and they think
that their information is the
most important of all, and what
this algorithm does is
lower the value of those weights.
Another processor may be very weak,
maybe sort of
not so sure of itself,
but it has useful information,
and maybe its weight will be raised.
So, this sleeping experts
theorem is just what one
wants for making sure that these
weights are treated properly.
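The flavor of such an update can be sketched with a generic multiplicative-weights rule. To be clear, this is not Avrim Blum's actual sleeping-experts algorithm or its optimality guarantee, just the shape of the idea: experts who stay silent ("sleep") keep their weight, and awake experts that speak up wrongly get their weight cut.

```python
def sleeping_experts_round(weights, predictions, truth, beta=0.5):
    """One round of a generic sleeping-experts-style update (sketch).

    `weights`     maps expert -> current weight
    `predictions` maps only the *awake* experts to their prediction

    Sleeping experts keep their weight; awake-and-wrong experts are
    penalized multiplicatively, so overconfident processors get
    knocked down while quietly correct ones gain relative weight."""
    new = dict(weights)
    for expert, guess in predictions.items():
        if guess != truth:
            new[expert] *= beta
    return new
```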
Okay. So, there's this algorithm.
By the way, there's
Foundations of Data Science,
which you cannot buy at Amazon,
but you can get it for
free on the web;
Avrim Blum's name is
attached to it:
Blum, Hopcroft, and Kannan.
Consciousness.
So, consciousness in
the model is the content
of short-term memory.
How reasonable is this definition
of consciousness?
I mean, why should
that be consciousness?
That is what's in short-term memory.
That's what we're defining as
consciousness in the formal definition.
You are consciously aware of
what's in short-term
memory, nothing else.
Why is this at all like what we
sense as being conscious?
The answer is that all
of these processors
are aware of what's going on
in that short-term memory.
The reason it feels
the way it does is because
every single processor in your brain
knows what's going on there.
If your consciousness were due
to even just one of those processors,
you would be conscious.
But, it's the fact that
all of them are getting
that information and that stage
activity can be persistent.
Many questions. Why is
short-term memory so tiny?
Indeed, why is short-term
memory so tiny?
Seven plus or minus two; nowadays
it's three plus or minus one,
but that's okay, you get
the idea.
Our answer for it is that you want
all these processors paying
attention to the same thing.
If you had a large memory,
you could have some processors
paying attention to
this and other processors
paying attention to that,
and you don't want that,
you want them all paying attention
to exactly the same thing.
What's a chunk? Yes?
>> Some of you [inaudible].
>> You probably. So, computer.
>> [inaudible].
>> Could be smaller, yeah.
Maybe our computers could have
a larger short-term memory
because they're so much bigger.
Although it's interesting how
one of our large computers
would compare to the brain.
Did you know that the Titan computer
has a number of transistors just
about exactly equal to the number
of synapses in the brain?
Anyway, we
won't get into that.
Why is short-term memory so tiny?
What is a chunk?
A chunk is just that information that
gets up there into
the short-term memory.
I think of a chunk really
as being like a pointer;
you remember that it consists of
an address, information, and a weight,
and the important thing
to me is that a chunk
is the information that
you're conscious of,
but the address is
the pointer back to where
that information came from,
and that is what a chunk is.
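As a data structure, a chunk as just described is tiny (address, information, weight), and the bulky content stays below; all the concrete names here are illustrative inventions.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    address: int    # pointer back to the LTM processor the info came from
    info: str       # the small piece you are actually conscious of
    weight: float   # how strongly it competes for the stage

# Toy long-term store: the heavy content lives down below.
ltm = {17: list("abcdefg")}

chunk = Chunk(address=17, info="the alphabet", weight=3.0)
# The chunk itself is just a pointer-sized summary; following its
# address reels off the full stored content.
recalled = ltm[chunk.address]
```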
So, the psychologist
George Miller, when he defined
chunk, said, "Well,
a chunk is a digit or a letter,
or could even be the alphabet,
if you know the alphabet."
Well, the point is that the chunk
is a pointer to where
that alphabet is
stored and you can now reel off a,
b, c, d, e, f, g, the alphabet.
Isn't it interesting that
the brain figures out
how to store the data,
what data structure to use?
How come the alphabet, a, b, c, d,
e, f, g, is stored as
a singly-linked list?
If you want to go
backwards, very hard.
I tried at some point to
memorize going backwards.
I could do that; I made up a ditty: z,
y, x, w, v,
u, t, s, r, q.
It doesn't matter, you can make up
a ditty and learn it backwards.
What I discovered was there's
a singly-linked list going forward and
I have a singly-linked
list going back.
I don't have a doubly-linked
list; I can't tell you
what comes before p
without going forward.
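The forward-only alphabet is exactly the singly-linked-list asymmetry: following `next` is one step, but finding a predecessor means scanning from the head.

```python
import string

# The alphabet as a singly-linked list: each letter points only forward.
NEXT = dict(zip(string.ascii_lowercase, string.ascii_lowercase[1:]))

def successor(letter):
    return NEXT.get(letter)          # O(1): follow the single link

def predecessor(letter):
    # O(n): there are no back-links, so walk forward from 'a'
    # until some letter's link points at the one we want.
    cur = "a"
    while cur is not None:
        if NEXT.get(cur) == letter:
            return cur
        cur = NEXT.get(cur)
    return None
```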
Anyway, but why am I
interested in consciousness?
Well, it's obviously
useful to humans,
I mean, we are conscious.
It's useful to humans
and I think it's
useful to a lot more animals
not just humans.
It focuses the LTM
processors on creating
the current best
interpretation of the world.
We are constantly interpreting
that world and it's a checker
on that interpretation.
There's some wonderful examples
where one eye sees Einstein,
the other eye sees Lincoln,
and you keep on going back and
forth between Einstein and Lincoln,
and the brain is confused.
It doesn't know which one it is.
It's a checker on
the interpretation that you have.
It gives the entity
the ability to solve
unanticipated problems to deal
with a complex world using
all the tools at its disposal.
I think that is really
very important for this.
The ability to solve
unanticipated problems.
We want to have that and
this model does that by basically
giving the important information
to all the processors and
having them all work
in parallel, okay.
So, I already talked
to you about this,
an example by what's your name.
I talked to you about
Oliver Sacks, dreams.
Something interesting
that I won't get into,
but I kind of love this,
I will tell you. Do
you have your hand up?
No? If you saw
that picture of the house,
you can recall it right?
You can close your eyes and imagine
that upside down now or maybe better,
you can recall the front
door to your house.
What it looks like, and
everything around it,
but that recollection is not the
same as seeing the actual thing,
just not that strong.
You can dream about the front of
your house and when
you dream about it,
it will look like what you see,
it will be much more like
what you actually see.
You as an architect would
want to be able to
imagine and see exactly what
you would really see,
at least the way you can in a dream.
You'd like that.
So, one of the questions
that comes up is why not?
I will give you my understanding.
There was a time when I
really tried to remember
my dreams and everything
was just fine.
I could manage to be able to
remember more and more
until at some point,
I realized I didn't remember if what
I saw was real or just in the dream.
I couldn't distinguish
and at that point,
I realized you don't want to
remember your dreams and I
stopped trying to remember
them because I couldn't tell.
I think that our visualization,
which an architect would
surely want to be able to do,
is also not as good as the real thing,
not as good as what we can do in
our dreams, because otherwise
you would be confused between
what you're actually seeing
and what you're imagining.
That is essentially what
happens in schizophrenia,
where you hear things or see
things that aren't there,
and these people are
confused. Okay. Where are we?
>> [inaudible] People imagine things.
Let's talk about [inaudible].
>> You want to talk about?
>> Imagination.
>> Imaginations.
>> [inaudible] discriminate
that from reality.
Let's talk about
imagination for a second in
the architecture. This is imagining.
It's probably powerfully important
[inaudible] create
the world civilization magically.
It's not well that I could imagine.
>> What could be?
>> What could be.
What's going on with the LCS and
the [inaudible] conversation.
>> Yeah. I certainly will not
be able to answer that now.
Let's see, I think I'm
past overtime already.
So, I am going to go fast now.
One of the examples is
grasping the well-understood
proof of a theorem.
You know how, with a theorem,
when you've [inaudible] to
the proof, you feel like you've
got it inside your hand,
but actually what you've
got is a node at the top.
This one is: the square root
of two is not rational.
What you have at the top is that
the proof starts with: assume
to the contrary
that the square root
of two is A over B,
a ratio of [inaudible], and the fact
that you can follow this thing
down to anything you
want is what comes up out
of this thing. Oh god!
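For reference, the node at the top of that proof tree unpacks into the standard argument:

```latex
\noindent\textbf{Claim.} $\sqrt{2}$ is irrational.

\noindent\textit{Proof.} Assume to the contrary that $\sqrt{2} = a/b$
with $a, b$ integers in lowest terms. Then $a^2 = 2b^2$, so $a^2$ is
even, hence $a$ is even: $a = 2c$. Substituting gives $4c^2 = 2b^2$,
so $b^2 = 2c^2$ and $b$ is even too, contradicting lowest terms.
Hence $\sqrt{2}$ is irrational. \qed
```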
I was going to be talking
to you about this.
The Hard Problem. So, consider
the Hard Problem for
the special case of pain.
How might the conscious
Turing Machine experience pain?
So, we tried.
Let me just say that we
tried many explanations.
We, Lenore and I tried
many explanations.
Here are suggestions; there
are many that don't work,
as attested to, for
example, by the pain asymbolics.
Pain might arise from observing
unconscious reactions such
as grimacing, crying out,
and such, but the asymbolics, too,
do grimace and cry out,
and it doesn't bother them.
Responses to painful situations,
such as a finger pulling
away from a flame,
which you are not aware of until
after your hand is pulled back.
Sweat and increased heart rate,
muscles that vibrate.
These are things that might be
causing the pain. They are not.
They are not behind
the pain because there are
people who do this stuff,
but they don't suffer from the pain.
So, here are the suggestions
for extreme pain.
The first is broadcasts.
Extreme pain is an actor that
takes over all Short Term Memory.
It prevents all other actors
from reaching the stage.
Pain messages and
only pain messages are broadcast.
Every processor knows of the pain.
So, in the case of
Oliver Sacks and the bull,
fear got up there, and fear
refused to let anything
else up on the stage,
so that he didn't even notice
the pain that was being caused.
So, that's part of it.
Another confirmation.
[inaudible].
Let me just mention that
there are many things that
this model predicts and there are
some things that it does not.
Clearly, I'm not telling you about
the things that it does not because I
just want to give you
the positive view of this.
Anyway, confirmation
of this business.
Under conditions that
produce great pain,
asymbolic can think
while normals cannot.
You can become asymbolic by
getting a knock on your head
as an older person, when you
already know how to play
chess, and even under great pain
you can still play
chess perfectly well.
Whereas normals, under great pain,
cannot really play
a good game of chess.
Here's another thing that
I call a confirmation.
In a Darwinian design,
you might expect pain to lead to
construct- if you're in great pain,
you need to think.
That's important to be able to think.
What kind of a design
is it that under
extreme pain you cannot
think about anything else?
The statement is that extreme
pain comes about from
pain on the stage not letting
anything else up on the stage.
How can that be?
It's kind of anti-Darwinian.
You might expect pain to lead
to constructive thinking,
but agony actually inhibits
constructive thinking.
It forces you actually to rely
entirely on your unconscious self.
Is this confirmation?
I don't know. Yes.
>> I don't think it's
a reason of that because
I feel like pressure
under extreme pain [inaudible] you
might do anything you want to avoid it.
But if you're in agony and if you're
[inaudible] on those experiences,
then you're actually trying to
extract pleasure out of it.
Could be giving [inaudible]
You'll find happiness from it.
>> From extreme pain?
>> Yeah. So, in
every way you're doing
whatever you can to escape that pain.
So that's my [inaudible]
>> You could imagine many explanations.
So, I'll put one of them as an example.
In evolutionary history,
if someone was in
extreme pain, better
to get rid of them.
Social maximization
[inaudible] selected for.
Another route would be a deep and
urgent call for social
contact and assistance.
That's more valuable than
solving the problem yourself.
You could have these kinds of
socio-biological explanations,
what of the tribe.
You can imagine
that [inaudible] first thing.
>> Yeah. I will, okay.
Okay. So our suggestions for
sudden extreme pain is interrupts,
but already you've told me maybe
we don't need these interrupts.
Interrupts, which suddenly put
everything on the stack
and then pay attention,
which is different from broadcast.
Broadcast, it's an input that
you can pay attention to,
but interrupt, you're forced to
put everything on the stack.
>> Something about [inaudible] ,
could somebody say after this class
is Turing Machine [inaudible].
>> Yes, yes, yes.
>> Why do you need
the actual experience?
You have the [inaudible] sign
to all this beautifully well,
cleanly, separate broadcasts,
without ever being in it.
>> Okay, so again
the argument is that,
absolutely every processor in
your brain is aware of
what's going on there.
>> By a machine built by
Maytech sort of wire it up,
and it's this too kind of
interesting [inaudible] ,
but if you think as the listener,
going from [inaudible] every possible of a
diverse number of [inaudible] listening,
why is that each experience and you
haven't actually subjected volume?
Now, my [inaudible] somewhere is
a missing definition that says,
it is the same, but
it's unclear to me.
>> Yes, and I'm claiming
that that's all you need.
Now, we may very well
disagree on that.
>> I don't see the actual-
>> The only problem I have is that,
I said that every
single processor knows
about what's going on
there, and that if any one of them is
responsible for consciousness,
you will be conscious of that;
but I still haven't answered
your question, I agree.
>> I'm sitting here wondering,
if a young version of Manuel Blum
were sitting [inaudible] back now,
would he be satisfied,
or would he [inaudible].
>> He would be satisfied,
because this actually
answers lots of questions I had.
I mean, it's a much better model
than the finite automata;
it does answer a lot of questions.
Yes, there are some cases where
it doesn't seem to be right,
but as I pointed out,
if pain and joy have
about equal value,
then they cancel each other out.
There are things like that,
many things like that,
that seems to be answers.
Anyway, here's
the instant shock of pain,
the hard problem for joy,
won't go into it,
that's Lenore's thing,
and there's free will,
a beautiful explanation of free will.
>> [inaudible] .
>> What? You're way out of time.
>> It's very important, Bob.
>> Should we let people go,
and then we can talk about?
>> [inaudible] .
>> With this? Okay.
>> We usually end at 4:30.
We'll end this 4:40.
>> This will take three minutes.
We'll be fine. I'm saying that
free will is not a problem.
I was very happy with
my explanation, which
goes back to chess,
which I like to use as an example.
is you know if you're playing chess,
you find yourself in a position
that you have to make a move,
and you don't know whether you
should move this piece there,
or that piece there.
You don't know which you're
going to do until you've done
some computation to figure
out the result of each.
You have free will until
you've done it.
You have free will to
choose what to do,
until the moment arises
that you've figured out
which move is better,
and then you give up
your free will to make the move.
That's my explanation of free will.
Then I found this wonderful
explanation by Dehaene,
which is exactly what I'm saying.
He says "Our brain
states are clearly not
uncaused and do not escape
the laws of Physics,
nothing does." Okay with that.
"But our decisions are
genuinely free whenever
they are based on a
conscious deliberation,
that proceeds autonomously without
any impediment carefully
weighing the pros and cons,
before committing to
a course of action.
When this occurs, we are correct in
speaking of a voluntary decision,
even if it is of course
ultimately caused by our genes."
We have free will because we are only
aware of what's going on
in that short-term memory,
and the long-term memory is
just a huge collection of apps,
and we don't know how
they are working,
and we are open to
whichever possibility turns out
to be important, better. Yes?
>> [inaudible]
>> The what?
>> The act of conscious
deliberation. [inaudible]
>> In the case of chess there,
you have some processor,
for all these several processors,
that are evaluating
the result of these moves.
>> [inaudible]
>> Yes. I'm not saying
that they are finite automata,
and they aren't finite automata
[inaudible] here.
They are finite [inaudible],
it's just that the brain
itself is not to be viewed as
a finite [inaudible].
It doesn't make sense
to say that you know
every language in the world
when you're born,
and you just have to
find the particular one.
Anyway, that's the end.
>> So we have a mic for the capture.
Any questions? I can't
believe there's actually
questions after all that. Okay, so-
>> I had a question
about why you have
different processes for
pain, and joy, and fear.
Are these or [inaudible]
functions of the same thing?
So I can take joy as
the negative of pain,
and fear as probably
future predicted pain?
>> Your brain at
least has processors.
Your brain, your entire brain
is responsible for pain,
and it knows about pain in
the elbow through a different processor;
it knows about pain in your hand.
There are lots of
different processors
that are concerned with pain.
>> Okay, But I can see-.
>> Lots of different ones
concerned with joy.
>> Yes. So I can see that joy
is the negative of pain,
and then you [inaudible]
>> You're asking about why some
are positive and some are
negative? That's built in.
Pain is built in as negative;
that's easy
enough to have.
Joy, it's perfectly possible to
have built in as positive.
>> So, I had a follow-up
on the Emotion Machine.
So the Emotion Machine was
a book that was never published,
but it was online from
Marvin Minsky.
For a few years we all had to go
through it as part of the
Affective Computing class.
One of the things he brought out,
he argues, is that,
"For a long time,
mankind has used these umbrella words
to describe things that
they don't understand."
So for instance he says,
"Consciousness is an umbrella term"
that we agreed exists.
It's similar to the term
that physicists coined,
ether, to explain
the propagation of waves.
>> Sure.
>> How do you know that consciousness
is not [inaudible], something
that doesn't exist, but
we basically keep
worrying about it because
we can't explain it?
>> I mean, fine, you understand
that I'm just talking about
consciousness in the model.
I defined consciousness to be
what's in short-term memory, period,
that's it. That's what you
are consciously aware of.
You're saying, different people
have different notions
of consciousness?
Yes, that's true. In this model,
that's what I take
to be consciousness.
>> No. What I'm saying instead,
the exercise in defining
consciousness might be
trying to define something
that does not exist.
>> Okay. But it does in the model.
>> I'm curious how attention
fits into your models.
So, I can sit here and
I can visually scan
my body from my toe-tips to my head,
looking at my knees, my elbows and
experiencing the pain or
comfort at each thing.
So, where would that
plug in to the model?
Is that short-term memory or
where does attention fit?
>> Yeah. So, the actors that are up
on the stage are pulling
your attention together.
What was in the scene?
Did you see the dog in
the scene, that house?
Some actor up there is paying
attention to some part of that scene.
So, that's coming from below.
Somehow, that actor that
has that question has
managed to get up on the stage
and is asking the questions.
>> Module- [inaudible] .
>> Way down there,
which has finally managed to
get up there and there are
many different modules
that would like to have
attention and different things.
Some processors are interested
in vision, other processors
are interested in hearing, and so on.
So, it depends on which one
finally gets up there.
>> I feel like my personal subjective
experience is that attention is in
the conscious part because
I'm deciding that okay,
I'm going to pay
attention to my big toe
and how it's feeling right now.
Now, I'm going to pay attention to
my elbow, my shoulder- [inaudible]
>> I'm saying that
that comes from below.
>> Okay.
>> This side.
>> Since I have the mic and I have
a question, I'm going to ask it.
So, you talked about
distinguishing whether
somebody's simulating pain
or actually has a pain,
whether the person who asks for
the pills really has a pain.
So, do you have
any suggestions towards that?
>> For deciding whether it's
simulation or it's the real thing?
>> Yes.
>> Yes, I sort of already suggested
it when I said that a person
who is really feeling
it, cannot think.
Whereas the person who's
simulating it, can.
That's a pain asymbolic.
>> Give me the mic, just go for it.
>> Have you created like a
computational model where [inaudible] .
>> Mark Wegman at
IBM Watson wants to build
this and I have
a Chinese collaborator,
Wang Hong Xi and Hurdin
who's started the process
of simulating it.
This is just starting.
Until now, my interest
is in mainly making sure
the general form of the model
is correct for what I want.
>> Yes. [inaudible]
>> Yes.
>> First off, I just
want to say thanks for
coming to speak to us
about this [inaudible]
>> It's been a pleasure.
>> Something I'm curious about
is how did you and Lenore
arrive at the dimensions of
this model, and whether
other intermediate models
were considered,
the ones that didn't work-.
>> Yeah, I have lots of slides about
things we considered
and then got rid of.
Part of it was trying
to make sure we have
enough that we can
explain a lot of stuff,
and part of it is making
sure we don't have too much
because I was so happy when I
was able to get rid of stuff.
For example, Bernard Baars talks
about a processor down below
getting up on the stage,
but these processors
don't actually move,
the processor doesn't
get up on the stage.
I'm saying that the only thing
that gets up on the stage is
that little bit of information
that it wants to put
up there for processing
and so I needed to
use the pointers to be able to say
where it's coming from
and who has the stage.
I've managed to get rid
of all of that. So, yes.
There were many models along
the way and it's sort
of settling down now.
>> It's an interesting model.
There's this one thing
that kind of [inaudible]
is this idea that every
[inaudible] is a single
positive or negative axis.
Even then, it seems
like there are a lot of
emotions that don't restrict
to positive or negative, like-
>> Absolutely.
>> Curiosity or-.
>> Absolutely.
>> So how would you
mention those training
are doing - [inaudible]
and curiosity.
>> Yeah. So, it's sort
of easy when I talk about
pain and joy because clearly,
one's negative, the other positive.
What about curiosity?
It could be a positive thing,
could be a negative thing
and it's really
the processor itself that's
deciding the weight,
it's also deciding the sign.
>> But this idea that you can have
the weights of things
that come close to
my favorite line-up [inaudible].
>> Yeah. It's kind of nice
though that in the end
what's up on top is the sum of
all the weights at the bottom.
>> [inaudible]
>> I have the microphone now.
>> It just seems
intuitive to me, and
I'll talk more about
it over dinner tonight,
that it's not just a push but a
pull model from the executive.
I know it introduces a complexity,
but this would capture
notions of vital information,
notions of the fact of
a singular experience which
seems to be on the stage
and the conscious part is not
all just little information bits
but this idea that you're actually
also broadcasting but pulling
and actually even
pulling with addresses.
It's not just all push
from excited neural apps,
but it's a pull too from
executive and I love to
see that attitude model.
I think it's going to be
needed, this would again-.
>> Yeah, I'm hoping you're
wrong, but then okay-.
>> Why [inaudible] just to broadcast?
>> Why not just broadcast?
>> The push system broadcast.
>> Yeah, but now pushes a broadcast?
Fine. My sense is sometimes there's
an integrative approach across
all the pushes, which might
define a context or
a state, and then there is
an active pull among
resources, right?
So, maybe the idea is you can have
a broadcast of a need, right?
Not just perception, but an
actual attentional signal, right?
So, this idea right now
that seems to be part of
this model, that you have
a listening short-term stage
that's doing some integrating,
listening to stuff
being pushed, with some sort of
scoring mechanism bringing
stuff up, which is great.
Once you have integration that's
not the end of the story,
that's part of the building
of a picture which
might not just be all push
built but pull built.
>> Yeah, sorry. I meant
pull is a broadcast.
>> Yeah, but now we're hearing about
a pull broadcast is asking
for certain things, right?
Requesting certain help resources.
>> Does [inaudible] in the model?
>> If it gets up there.
He was broadcasting out
all sorts of perceptual
information, right?
>> [inaudible] If anyone
knows [inaudible].
>> Yeah, I see. Does anybody know?
But it's supposed to
direct it with an address.
>> If you are in the jungle
and you get a tiger
and all of a sudden you
are looking around.
If you see the tiger [inaudible].
>> You pull with the index,
not just general broadcasts.
Unless something special property of
the broadcast that gives you
at least some efficiencies.
I think there's modeling
and an executive.
>> You're going to have
to explain push-pull.
>> What if you mean
it, you broadcast and
everyone gets a [inaudible] pull
and- [inaudible]
>> With high efficiencies.
>> Okay. So we'll publish it,
we'll just end there.
Thank you very much.
I think your father would be proud of
your answering this question.
We're trying to- did he ask
you how the brain works?
