[MUSIC]
>> Now, speaking of great people
and kind people, nice people.
I would like to introduce
Christos Papadimitriou.
I'm really, really
thrilled and honored
that he agreed to come and
speak to us for this event.
He's really a giant in Computer
Science and one of my heroes,
and he's won the Knuth prize,
he's won the John von Neumann Medal,
he's won the Gödel Prize,
he's a fellow of the ACM, a member of the National Academy of Engineering, a member of the National Academy of Sciences, and a member of the American Academy of Arts and Sciences.
He's also a long time friend
and has served a lot at DIMACS.
He was on one of the
original boards for DIMACS.
So I have this e-mail from
Christos to Fred from 1997,
giving his evaluation of how well
DIMACS is doing in research.
He's written five textbooks,
many, many technical
articles on many topics.
He co-founded my field,
Algorithmic Game Theory,
introducing the analysis of worst-case equilibria and the price of anarchy.
As a grad student he settled the complexity of the Euclidean traveling salesman problem, and then, much more recently, he settled the complexity of computing a Nash equilibrium.
He has eight honorary doctorates in addition to his PhD from Princeton.
He's also a novelist,
he's written three novels.
One of which is Logicomix,
a graphic novel.
I have a copy here.
This one's personally special to me.
I got a copy when it first came out.
My younger daughter is
interested in graphic novels,
so I started reading it to her.
It's not a kids' book,
and she was eight at the time.
But I glossed over some of the more adult-type issues; she found it fascinating, and I kept reading it to her.
I think there was probably a year or so where she wanted to read this every night.
For some reason, she got
really interested in
the barbershop paradox.
I think I still have a
bookmark on that page.
We kept going back to
the barbershop paradox
and rereading it, rereading it.
So this one is special to
me and I have a few copies.
Christos has graciously
agreed to sign some copies.
So he'll sign mine for my daughter.
So thank you. A few of you have said they would want a copy already, but if anyone else wants a signed copy of Logicomix, I have a few copies, so let me know.
So with that, I would love
to have Christos come up
and we can get started on his
talk on Brain computation.
>> Excellent. Thank you very much; I'm delighted to be here. I have been involved with DIMACS at various times in my career, especially in the '90s. DIMACS was the center of the action in my field, and I owe much to DIMACS, and of course I have a long history of collaborations with MSR. Thanks. These are two amazing groups, so I'm glad to be here.
One question that is probably on your mind is, why would a computer scientist work on the brain? And you are right: computer scientists work on other things, right? And we did, for the longest time. For the first six decades of our field, we worked on the computer; we were obsessed with the computer. We were so obsessed with the computer that we made the mistake of naming [inaudible]. In retrospect, it's very strange that such a large group of intelligent adults was so stuck on the computer, but we were; I was. We thought it was amazing. We understood computation like it was never understood before. These were six long decades of incredible, very creative work. We solved problems that were very real: databases, compilers. You know, it was a major in electrical engineering departments.
Then, of course, it all ended, because the Internet came, and we immediately realized that that was the whole point: the computer was the gadget that brought us the Internet. This is what we're really interested in. Of course, when we focused on the Internet, our minds wandered, because from there it's an easy step. Actually, I don't think that many of you realize how similar the universe is to the Internet. They are the same thing.
All right. This is something that has been called the algorithmic lens on the sciences, or computation as a lens on the sciences. By this I mean the following, in very simple terms: occasionally, when there is a challenging problem in sciences of all kinds, social, biological, physical, and a computer scientist looks at this problem, then occasionally, not always, not even often, but occasionally, unexpected progress happens. The trick, simplified a lot, is to fantasize that the scientific object, be it the universe, the cell, the market, or evolution, is trying to compute something, and then to use our insights into what it means to compute things.
This is a trick I'm going
to play with the brain,
and before that, let me just
tell you a couple of things.
Let me give a couple of examples from game theory and economics. As you know, one of the most important moments, in some sense the birth of a new era of economics, was in the 1950s, when John Nash proved his famous theorem. So we know that all games have a Nash equilibrium, and this was in some sense the start of modern economics. Part of the reason is that economists were inspired by it to prove their theorems, but not only that: economists in some sense got a license, whenever they think about something, to say, let's see what happens at equilibrium. This was an important methodological and theoretical tool. Now, it turns out that finding a Nash equilibrium is an intractable problem, and this in some sense cancels the license. In other words, the license is temporarily revoked for computational reasons, and that's a nice compliment to computation.
All right.
Something else: if you look at evolution with this computational lens, you realize unexpected things. For example, the evolution of a population's genotypes is mathematically equivalent to the genes of the species playing a game, and the strategies in the game are the alleles of every gene. The probabilities with which they play these strategies are the frequencies of the alleles in the genetic pool of the population. It's a repeated game, because of the repetition of the generations of the species. The amazing thing is that it is played through multiplicative weights updates, as in AdaBoost.
I remember my co-author Umesh Vazirani telling me, once we realized that this is how evolution works, knowing what a useful and powerful mathematical tool this is, "You know, Christos, now I have much more confidence in evolution."
Another thing you get out of this, just by applying convex programming duality: you see something very interesting, namely that what every gene in every generation optimizes is a trade-off between cumulative fitness and entropy, in other words, the diversity of the population. Genetic diversity is one of the things that are being explicitly optimized during evolution.
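To make the multiplicative-weights picture concrete, here is a minimal sketch in Python (my own illustration, not code from the talk; the function name and the selection-strength parameter `eps` are hypothetical):

```python
def mwu_step(freqs, fitnesses, eps=0.01):
    """One generation viewed as a multiplicative weights update: each
    allele's frequency is multiplied by (1 + eps * fitness) and the
    distribution is renormalized. Fitter alleles gain frequency, but
    only slowly, which keeps the population diverse."""
    weighted = [f * (1.0 + eps * fit) for f, fit in zip(freqs, fitnesses)]
    total = sum(weighted)
    return [w / total for w in weighted]

# Two alleles, the first slightly fitter; run 100 generations.
freqs = [0.5, 0.5]
for _ in range(100):
    freqs = mwu_step(freqs, fitnesses=[1.0, 0.0])
```

With a small `eps` the fitter allele gains ground only gradually, so the less fit allele keeps a nonzero frequency for a long time; that residual diversity is the entropy term in the fitness-entropy trade-off mentioned above.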
These are all insights that were just a few equations away from the standard mathematics of population genetics that we all knew, but it took computer scientists to get them.
So frankly this is the question
that I'm interested in.
How does behavior, cognition,
intelligence, language,
how do these things
come from molecules,
cells, synapses and stuff?
The way I visualize this question, I call it the sunset-from-the-Berkeley-Hills question, in the following sense: every time I see this, and I have had the opportunity to see it many times, what comes to my mind is, why isn't everybody stopping what they're doing and coming here to see this? When you work on the brain, you really look at other people and say, "Wow, strange that they are working on other stuff." About five years ago, with Santosh Vempala, we decided to seriously look at this problem.
The first thing you do is buy this book. It's called Principles of Neural Science; it's the Kandel and Schwartz book. It's a beautiful, extensive compendium of what is known about the brain. Every four years it is rewritten, and as a result it becomes 200 pages thicker.
If you think about it, that's not right. The title is Principles of Neural Science, and principles, as time progresses, should shrink. When the Greeks decided to define the principles of geometry, they ended up with four lines. There is something wrong here.
My hero in this is Richard Axel, now my colleague at Columbia. When I read this, I said, "Wow." I could not pray for anything better for my research program than for Axel to say this. He said the following in an interview: "We do not have a logic for the transformation of neural activity into thought. I view discerning this logic as the most important future research direction in neuroscience."
This is amazing. This man, who is sort of the pope of experimental neuroscientists, is telling us that what we need is a formal system. The question is, what kind of formal system would fill this bill?
I have an idea. That's
what I'm going to present.
Is this clear? This
is what I'm after.
Good. So let's start somewhere really humble. This is about fruit flies. They have an apparatus, they have a nose, and they have a way of remembering smells. It's the following. There are 50 kinds of odors, and there are neurons that actually sense these odors. Eventually, they are projected to these neurons here; that's 50 neurons.
Then something very interesting happens, something that should make computer scientists immediately sit up in their chairs.
There is a projection from a 50-dimensional space to a 2,000-dimensional space. It's a random projection: they're randomly projected to a much higher-dimensional space, 40 times higher.
Then another very
interesting thing happens.
Very computer sciencey too.
There is an inhibitory neuron here which, once excited by the havoc that happens here, immediately intervenes and calms them down, and out of these 2,000 neurons, only the 100 highest survive. In other words, the signal becomes a zero-one vector: only the top 100 are chosen, and everything else goes back to zero.
This is the process, and in some sense it is the basis of what I'm going to tell you. We call this random projection followed by cap, RP&C: 100 winners out of 2,000.
An interesting question: how do we know that this is a random bipartite graph? You're not going to believe this, but six years ago this man, Axel, did it. He reconstructed this bipartite graph; this is the adjacency matrix. The bottom line is that it passes all kinds of statistical tests, and it is a random bipartite graph, with biases in the degrees; other than that, it's a random bipartite graph. In fact, this is the left nostril, and the left and the right are different. Different individuals are different too. It's amazing that randomness is so important that this creature probably has in its DNA a random number generator. It's mind-boggling.
Randomness, of course, is the basis of what I'm going to tell you. By random projection and cap, I mean the following: you have a Bernoulli random projection, and you get something that looks like this. Then, out of the output, you take the highest k; these become one, and everything else becomes zero.
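As a concrete sketch, the RP&C step can be written in a few lines of Python (my own toy, not code from the talk; the fly-like sizes 50, 2,000, and 100 come from the talk, while the connection probability `p` and the function name are assumptions):

```python
import random

def rp_and_cap(active_inputs, n_out, k, p=0.05, seed=0):
    """Random projection followed by cap: drive n_out neurons through
    a random bipartite graph (each edge present with probability p),
    then let inhibition keep only the k most strongly driven ones."""
    rng = random.Random(seed)
    drive = [0] * n_out
    for i in sorted(active_inputs):      # fixed order, reproducible graph
        for j in range(n_out):
            if rng.random() < p:
                drive[j] += 1
    # The cap: the top k by total input fire; ties broken by index.
    winners = sorted(range(n_out), key=lambda j: (drive[j], j), reverse=True)[:k]
    return set(winners)

# Fly-like numbers: 50 odor-coding neurons project to 2,000 cells,
# and inhibition lets only the top 100 survive.
cap = rp_and_cap(active_inputs=range(50), n_out=2000, k=100)
```

The returned set is exactly the zero-one pattern described above: membership in `cap` means the neuron fires, everything else is silenced.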
Good. So one amazing thing: Sanjoy Dasgupta and a couple of neuroscientist co-authors from San Diego wrote a very interesting paper three years ago. They showed, essentially, that this preserves similarity. What I'm saying is that if you have two similar smells, where the measure of similarity is their intersection, then the outputs will be two rather similar sets of neurons, about equally similar. That's an interesting observation; it was an empirical observation in their paper. In fact, they noticed that it has better recall properties than the very clever, sophisticated algorithms for locality-sensitive hashing that came out of Stanford and MIT. This was a very interesting surprise.
In fact, there is an underlying mathematical reason. In other words, if alpha is the fraction of intersection of the two smells, then the fraction of intersection of the two sets of neurons that are going to record them is this. It looks like a very nonlinear function, except that for the values of n and k of the fly, it turns out to be very linear. In fact, we suspect that this is no accident, that this is the right answer. Empirically, if this is the 45-degree line, which says that similarity is exactly preserved, this is what the fly does. It's very, very close.
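The preservation of similarity is easy to probe empirically. Here is a minimal sketch (my own, not from the talk or the Dasgupta et al. paper; the per-input seeding is just a trick so both stimuli see the same random graph, and all parameters are assumptions):

```python
import random

def cap_overlap(n_in, n_out, k, alpha, p=0.05, seed=1):
    """Project two stimuli sharing a fraction alpha of their active
    inputs through the SAME random bipartite graph, cap each at the
    top k, and return the overlap of the two caps (0.0 to 1.0)."""
    def project(stimulus):
        drive = [0] * n_out
        for i in stimulus:
            # One fixed random row per input neuron, so inputs shared
            # by the two stimuli contribute identically to both caps.
            row = random.Random(seed * 1_000_003 + i)
            for j in range(n_out):
                if row.random() < p:
                    drive[j] += 1
        top = sorted(range(n_out), key=lambda j: (drive[j], j), reverse=True)[:k]
        return set(top)
    shared = int(alpha * n_in)
    a = set(range(n_in))
    b = set(range(shared)) | set(range(n_in, 2 * n_in - shared))
    return len(project(a) & project(b)) / k

# Identical smells give identical caps; disjoint smells give a much
# smaller, chance-level overlap.
same = cap_overlap(50, 2000, 100, alpha=1.0)
disjoint = cap_overlap(50, 2000, 100, alpha=0.0)
```

Sweeping `alpha` between 0 and 1 traces out the curve the talk describes: the overlap of the output caps tracks the overlap of the input smells.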
But enough with the fly; let's do something else. The question is, does something homologous happen in mammals? It turns out that it does, and there is no evolutionary connection, so these were two independent discoveries of very similar mechanisms, except that in mammals there is a catch, and this catch is also the beginning of my story. It turns out that mammals, for example the mouse (this is the head of a mouse), can sense 1,500 smells. We can sense 500 smells; we lost 1,000 because we don't need them. But mice have 1,500 different odorants, and they have 15,000 different cells that are olfactory receptors, all in their nostrils.
Now, in our nostrils we have billions of cells, and each one of these cells specializes in one of the 500 smells. If we smell something, these neurons project through their axons to a place here, a pea-sized part of the cortex called the olfactory bulb. The olfactory bulb is divided into glomeruli; so, for example, ammonia is projected to one glomerulus.
Then the interesting stuff happens. This is something that, again, Axel described in a 2011 paper of his, a paper that has been an inspiration for us. Basically, what happens is that these project to this ancient form of cortex called the piriform cortex. In other words, the neurons here, their axons go here, and to many other places, but most significantly here. There they create a permanent memory, which is then transferred elsewhere.
But let's look at this particular mechanism. How is this done? Here is how it is described in the discussion section of that paper. An odorant may cause a small subset of the piriform cortex neurons to fire. Inhibition is triggered by this activity; that's exactly what happens in the fly. It will prevent further firing, so a particular set of neurons fires. But now comes the interesting part: this small fraction of cells will then generate sufficient recurrent excitation to recruit a larger population of neurons. Here's the secret. In flies, among those 2,000 cells there were no connections, no synapses between them. Here, there are recurrent synapses, so the thing will not stabilize immediately. This small fraction of cells is going to recruit more cells, because now there is additional synaptic input from the newly recruited ones. So it will move, and then move again, and move again. In the extreme, some cells would receive enough recurrent input to be able to fire without receiving the initial input.
So something very interesting is happening here. Imagine that you have a set of spiking neurons. I think that the mechanism I propose is very general; this is something that mammals do, that we do. It's the following: we have this set of spiking neurons in some area of our brain, and there is a different area, such that there is a bipartite graph of connections between the two, actually a random bipartite graph (random is very important for what I'm going to tell you), and then a random recurrent graph within the second area. What will happen is that in the beginning, as in the fly, a set of winners will come up, precisely by a random projection and cap. But then these will start firing also, and now this whole area is going to receive inputs both from here and from there, and as a result there is going to be a new set of winners. Now these two sets are going to fire; this one is going to be forgotten and lost; these two are going to fire, and as a result a third one will come, and so on.
So the crucial question is, will this converge? Is a stable memory going to be formed? It turns out that it does converge, and that it preserves similarity. Before telling you the results, just a little suspense: let me tell you that maybe this is one of the most interesting parts of what we're doing.
We have a model of the brain, ambitious as it sounds. Imagine that it has a finite number of brain regions, each containing n neurons. Inhibition means that in each one of these regions, only k out of n fire, and think of k as the square root of n. Some pairs of areas are connected by directed random bipartite graphs, and each area is connected recurrently by a directed G(n, p) graph. That's the model, nothing else. The question is, what computation can be done with this model? Suppose that neurons fire in discrete steps, which means the influence of [inaudible] is immediate. It's not true; it's one of these assumptions that is both extremely useful and productive and also indefensible, because neurons don't fire that way. But suppose they do. The neurons that fire are selected by random projection and cap. We assume that these arrows can be enabled and disabled, that there is some control somewhere. Then you have plasticity; that's the new element of what I'm going to tell you. If neuron i and neuron j are in the same area, or j is in a downstream area, and i fires and in the next step j fires, then the weight of (i, j) is multiplied by something; it gets increased. Then you have homeostasis, forgetting, and so on, but these don't interfere; it works with or without them. This is our model of the brain.
Incidentally, when I say a model of the brain: a lot of people have mathematical models of the brain. Let me tell you where our emphasis is, importantly I think, and in an enabling way, different. People have been interested in modeling the sensory cortex: the visual cortex, the auditory cortex, and so on. Why? Because this is what is accessible to experiments. You wire an animal, you show it things, you make it take some decisions, lick some sugar, and then you have a paper. This has been the dominant mode for the past 50 years. Since [inaudible], this has been the bread and butter of all neuroscience.
I am more interested, first of all, in the human brain, and I'm more interested in something that only has an old-fashioned name, because it's not the beloved subject of anybody. It's called the association cortex. It means anything that is beyond the senses; if you wish, it starts from the hippocampus and beyond. In other words, it's the inner life of our brain. If I'm interested in thinking about language, storytelling, reasoning, planning, and so on, this is what I'm interested in. And it's pitifully little studied.
So the parameters: I suppose that every one of these areas has about 10 million excitatory neurons, but inhibition brings the number that fire down to the square root of that.
Imagine that the probability of connection is about one in 100,000, and that the multiplicative increase due to plasticity is 1.1. These are the ballpark figures that I'm interested in.
>> [inaudible].
>> Sorry.
>> [inaudible].
>> There's no geometry. But of course, I'm thinking of it as, yes, a graph.
So there are three ideas: randomness; selection, which is what I call the random projection and cap; and plasticity. These are the three forces that I'm interested in, and they drive everything else I'll tell you.
So the theorem is the following: the process that I showed you does converge, exponentially fast, with high probability. The total number of cells ever involved, which is a measure of how slowly it converges, behaves as follows: if beta is bigger than beta star, which is this value, about 0.15, then it converges very fast. Otherwise, it still converges, as long as beta is not zero. So plasticity, it turns out, helps convergence, and if you think about it, that's not surprising: plasticity is stability. If these cells have been firing, then we help them a little more, and therefore they're going to keep firing and they're going to be stable. Good, and here are our simulations and so on.
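The convergence experiment can be reproduced at toy scale. Here is a minimal sketch of my own (with small hypothetical parameters n, k, p, beta rather than the talk's ballpark figures): a fixed stimulus of k firing neurons drives an area of n neurons through random connections, the area has a random recurrent graph, only the top k fire at each step, and a synapse's weight is multiplied by 1 + beta whenever its source fired at one step and its target fires at the next:

```python
import random

def project_assembly(n=500, k=22, p=0.05, beta=0.2, steps=30, seed=0):
    """Simulate projection into one area and track the "support", the
    total number of distinct neurons that have ever fired; convergence
    means the support stops growing."""
    rng = random.Random(seed)
    # Afferent drive from a fixed stimulus of k always-firing neurons.
    stim = [float(sum(1 for _ in range(k) if rng.random() < p)) for _ in range(n)]
    # Random recurrent graph inside the area, with plastic weights.
    out_nbrs = [[j for j in range(n) if j != i and rng.random() < p]
                for i in range(n)]
    weight = {(i, j): 1.0 for i in range(n) for j in out_nbrs[i]}
    winners, support, history = set(), set(), []
    for _ in range(steps):
        drive = stim[:]
        for i in winners:
            for j in out_nbrs[i]:
                drive[j] += weight[(i, j)]
        # The cap: only the k most strongly driven neurons fire.
        new = set(sorted(range(n), key=lambda j: (drive[j], j), reverse=True)[:k])
        # Hebbian plasticity: strengthen recurrent synapses from last
        # step's winners into this step's winners (plasticity on the
        # stimulus synapses is omitted for brevity).
        for i in winners:
            for j in out_nbrs[i]:
                if j in new:
                    weight[(i, j)] *= 1.0 + beta
        winners = new
        support |= new
        history.append(len(support))
    return winners, history

winners, history = project_assembly()
```

Plotting `history` shows the behavior the theorem describes: the support grows for a few steps as new cells are recruited, then flattens out once a stable set of winners, the assembly, keeps firing.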
Let's call the result of such a projection an assembly: a set of k neurons in a brain area whose firing in a pattern is tantamount to our thinking of a particular memory, concept, name, word, episode, etc. It's not my idea; in other words, nothing here is new. Everybody believes that. The extent to which people think it's the key to how the brain works differs, but everybody believes that roughly this is what's happening. Hebb proposed it 70 years ago.
Since then, assemblies have been championed by people who were interested in them. Then there is Buzsáki, who was of course at Rutgers when he discovered them; I mean, they discovered them, they saw them. There are many such findings. Basically, what we do that is novel is that we use assemblies in a way that had not been anticipated, that had not been systematically worked out before. Our group, together with a group in Austria, has simulations, not in our model, which is very schematic, but in a very biologically accurate model. We know that they exist.
By the way, you remember the concept of a grandmother cell: basically, a neuron that encodes your grandmother's image. I don't know how many of you have picked this up. It used to be neuroscience folklore that we have a grandmother cell. Actually, if you think about it, this cannot happen, because if your grandmother were a cell, there would be no grandmother, because a cell cannot do anything. A cell has no power; only big gangs of cells have power. So really, it's a grandmother assembly. Nowadays it's not called the grandmother cell; it's called the Jennifer Aniston cell, or a concept cell, because in the 2000s they caught somebody whose neuron was firing only when he saw a picture of Jennifer Aniston, or even heard the voice of Jennifer Aniston, and so on. Incidentally, by the way, after my talk you'll have not only Jennifer Aniston cells; you're going to have Jennifer Aniston cell cells. Think about it, but don't go that way.
So let's go back to the business of trying to understand computation in the brain. The question is, what is the right level? Molecules: of course, computation happens at the molecular level. Spiking neurons and synapses: yes, definitely. People have been pursuing these two levels forever. There are people who tell you that the real computation happens in the dendrites, in other words, where the pre-synaptic axons meet the post-synaptic neuron. That's where the computation happens, because there are many meeting points, and God knows how the overall effect is calculated: is it additive, multiplicative, max, or what, or a whole computation tree? Yes, a lot of computation happens in the dendrites as well. Then there is whole-brain computation. That's very useful in cognitive science, where basically you do some experiments and then say, you see, the subject's brain behaves as if it executed this program. So these are all useful, but there is something in between that is missing. Let's remember again what the man said: we need a logic. I think that the logic Axel is looking for lives there.
So here is my bet; here is what I think. We call it the assembly hypothesis: there is an intermediate level of brain computation. It's implicated in carrying out the higher cognitive functions of humans, such as reasoning. It's also important for animals, but it would be a waste of our time to focus on non-human animals. Reasoning, planning, language, storytelling, math, music, you know, science: it's implicated in the good things. Assemblies are its basic representations, its main data structure, I mean its data type. The question is, what are the fundamental operations? We have to come up with the fundamental operations, the basic operations.
By the way, this should not be an idle exercise of saying, well, let's see, perhaps assemblies can do such-and-such. The operations must be useful and plausible. When I say useful, I mean that they should explain experiments; otherwise, why have them? Plausible means that you have to convince yourself that they can eventually be implemented by neurons and synapses, because otherwise it's not something you should venture into. So one operation is projection, and I told you what it is: you have an assembly in one area, and it can be projected, it can create a copy in a second area. It's a little like assignment in programming languages, except that when you say x gets y, x and y can go on to live independent lives, whereas these two assemblies are forever tied together: whenever this one fires, that one will fire as well, if this connection is enabled. This is the idea.
Other operations: it turns out that two assemblies may associate by sharing cells. In other words, you have two assemblies in the same area, and they may share cells. Why would they share cells? If each has square root of n cells, they shouldn't share any cells if they're random. They come to share cells because there is an affinity. I want to show you an experiment: these two assemblies fire together, and as a result they move toward each other. This one was a copy, a projection, of this; that one was the projection of that; and these two fire together. This means that they co-occur, they have some affinity, and as a result these two come to overlap.
So here is an amazing experiment; this is one that was also eye-opening for us. They recorded from one neuron in one subject's brain, and that neuron turned out to be, like, an Eiffel Tower neuron: every time the subject saw the Eiffel Tower, it fired. Actually, they recorded from a couple of dozen patients and from several hundred neurons, but let's focus on this one neuron. So they show it other interesting things, familiar things: nothing. The Eiffel Tower: always. Then they show him famous people. He used to be the president of the US; you remember that. Okay, good. So: nothing. Then they do something clever: they Photoshop Obama in front of the Eiffel Tower; they probably did it in a better way, with more art, than I did. Of course the neuron sees the Eiffel Tower, and of course it fires. Then they show the Eiffel Tower alone: of course it fires. The other stuff: nothing. Then Obama alone: what? The neuron fired. You get this? This patient learned that Obama has been to the Eiffel Tower, and you discover this experimentally. This is amazing. The only plausible explanation is the previous picture: this neuron was in the Eiffel Tower assembly but not in the Obama assembly. Because Obama and the Eiffel Tower co-occurred, it jumped to the Obama assembly. Now it's in both the Eiffel Tower assembly and the Obama assembly. The intersection increased: it becomes something like 10 percent, from zero, where it used to be.
Is the association preserved under projection? Yes, the association is preserved. So once they are associated, when they are projected to a different area, they're still going to be associated.
>> So does that association happen right away, or is it organic?
>> It happens as I showed you, I think. We can show by math that if you have two sets of nodes in a random graph and they fire together, then their projections will come closer. These are all things that you can prove. All right. So that's good.
So here is another operation: merge. It's more complicated. Basically, assemblies x and y from different areas project to create one new assembly, call it z, in another area. This new assembly z has ample synaptic connectivity to and from x and y. What you have done is build a tree whose leaves are x and y, with z an internal node of the tree. So it's useful for hierarchies; it's useful for language. This is what happens: these are the parents, and they fire; then you create two-way dependencies between these three nodes. The very interesting thing is that this is by far the most sophisticated operation we have (we have a couple more), but it requires the careful synchronization of five brain areas, and very strong synaptic connectivity and plasticity between these three areas.
One question, and I'll come back to it: does it need enhanced hardware? In other words, I believe that this is something that probably happens in animal brains also, but I believe that humans are especially equipped to use it a lot, to use it many hundreds of times every second. The question is, do you need special hardware? This is a huge fiber that connects two brain areas that are important for language, Wernicke's area and Broca's area. It's much bigger in humans than in apes. And here's a surprise: the left version is much bigger than the right version. That's the only anatomical asymmetry that we have seen in the mammalian brain. So something's happening, but we don't know what it is.
So, good. The assembly operations are these, and there are some control operations which we need. Pattern completion means that if two assemblies are associated and one fires, there is a good chance that the other one will fire next. If you remember Obama, then you may remember the Eiffel Tower next.
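A closely related ability, recovering a whole stored assembly from a partial cue, can be sketched in a few lines. This is a deliberately idealized toy of my own (not code from the talk): plasticity is assumed to have already made the synapses inside the stored assembly strong and dense, so a single cap step recovers the whole assembly from half of it:

```python
import random

def pattern_complete(n=300, k=17, p=0.1, strong=5.0, seed=0):
    """Store one assembly by making its internal recurrent synapses
    strong and dense (the idealized end state of plasticity), fire
    half of it, apply one top-k (cap) step, and report the fraction
    of the assembly recovered."""
    rng = random.Random(seed)
    assembly = set(range(k))
    weight = {}
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if i in assembly and j in assembly:
                weight[(i, j)] = strong   # strengthened by plasticity
            elif rng.random() < p:
                weight[(i, j)] = 1.0      # background random connectivity
    cue = set(range(k // 2))              # fire only half of the assembly
    drive = [0.0] * n
    for i in cue:
        for j in range(n):
            if (i, j) in weight:
                drive[j] += weight[(i, j)]
    winners = sorted(range(n), key=lambda j: (drive[j], j), reverse=True)[:k]
    return len(set(winners) & assembly) / k

recovered = pattern_complete()
```

Because every assembly neuron receives strong drive from the cue while outside neurons receive only weak background input, the cap lands exactly on the stored assembly; with sparser or weaker intra-assembly weights the recovery would be partial rather than perfect.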
The question is, how powerful is this system? Under our assumptions, it can perform arbitrary square-root-of-n-space computation. So it's very powerful: if n is a million, that's a lot of computation.
All right. Cool.
How much time do I have?
Okay, excellent.
That's where I'm going: language. When I tell my neuroscientist friends, including Axel, that I'm studying language, basically what they tell me is: why? That's the hardest thing that any brain has done. Why don't you wait until we figure the brain out, and then we'll sit back, light a cigar, and think about language? That will be the last cherry on the cake. I don't think so.
I think that language is an immense opportunity, because our brain reflects the environment, but this is an environment that was created by us. We have not evolved much since language, language being the last big adaptation. But language itself has evolved, tremendously and extremely rapidly. Think about this: imagine a little girl in Japan and a little girl in England. They speak very different languages. These languages diverged over 4,000 mothers, grandmothers, and so on: two thousand mothers ago, they were speaking the same language, so the difference between Japanese and English was mediated by 4,000 mothers. That's remarkable. This says that language has adapted a lot. Adapted to what? I think it has adapted to our brain.
Languages are supposed to be easily teachable by mothers to babies. I believe that language is invaluable as we study the brain, and in fact there is an incredible number of recent experiments.
Let me show you another experiment. That's David Poeppel at NYU, a wonderful experimental neuroscientist. Here's his experiment. He read this in a very neutral voice at four syllables per second, four hertz; this is precisely the frequency with which I'm speaking now. It's amazing: everyone in the world speaks at four hertz. People think that the syllable is not a linguistic concept but a brain concept, a neuroscientific concept. So what happens is that they read this nonsense sequence of words, and the subjects' brains are recorded by many different technologies. What do you get? You take the Fourier transform, and you get a peak at four hertz, because four times a second your brain has to ask, what on earth does this mean? There is activity for this, and it's recorded; no surprises here.
Then Poeppel did something clever.
So he repeated the experiment
except that every four words
make sense. Guess what?
All right. So of course
every four times a second,
you have to recognize a word,
but one time every second,
you've to say, "Hey, a sentence."
That's apparently a
different part of the brain.
Twice every second, you say a phrase,
because basically, what
I think happens is this.
Right? That's to me a parsimonious
explanation of this experiment.
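One can sketch why such peaks would appear with a small numpy simulation. This is not Poeppel's actual analysis pipeline; the sampling rate and the relative amplitudes of the word-, phrase-, and sentence-level responses below are made-up illustration values.

```python
import numpy as np

fs = 64                      # sampling rate in Hz (illustrative)
T = 32                       # seconds of simulated listening
t = np.arange(0, T, 1 / fs)

# Hypothetical neural response: tracking at the word rate (4 Hz),
# the phrase rate (2 Hz), and the sentence rate (1 Hz).
signal = (np.cos(2 * np.pi * 4 * t)           # "recognize a word"
          + 0.6 * np.cos(2 * np.pi * 2 * t)   # "hey, a phrase"
          + 0.4 * np.cos(2 * np.pi * 1 * t))  # "hey, a sentence"

spectrum = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=1 / fs)

# The three strongest spectral peaks sit exactly at 1, 2, and 4 Hz.
peaks = freqs[np.argsort(spectrum)[-3:]]
print([float(f) for f in sorted(peaks)])  # -> [1.0, 2.0, 4.0]
```

In the first condition, with a nonsense sequence of words, only the 4 Hz term would be present, so only the 4 Hz peak survives.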
I remember having lunch with
Chris Manning two years ago,
and at some point,
I was overcome by temptation.
I turned to him and looked
him in the eye, and I told him,
"For the love of God, Chris,
do we have trees in our brain?"
He came back immediately,
"Of course, we do."
Chris Manning is, sort of, among
the computational linguists,
the one who would be least
inclined to say that.
So I think we have
trees in our brain.
You know the Poeppel
experiment came later.
So here is another experiment.
They read to the
subjects things like,
"The ball hit the truck" and
'"The truck hit the ball."
Different areas of the
superior temporal gyrus
responded to "truck"
in the two sentences.
So there is a special
area for objects,
there is a special area for subjects,
and I'm sure a special
area for verbs.
Not only that, but the first area
also responded to "The
truck was hit by the ball."
So in other words,
our brain does not have an area for
objects but it has an
area for deep objects.
In other words, it says here,
forget the passive, the stupid
passive voice maneuver.
The "truck" is the object
here, not the subject.
So this is something that the
Chomskyans would not like, but it's there.
Another one, so "The
completion of phrases,
and especially, of sentences
lights up parts of Broca's area."
So here is Wernicke's area.
Here is Broca's area,
these two areas, and this
is the arcuate fasciculus.
This is the huge bundle of
nerves that I told you
about before; it joins the two.
You want to hear something amazing?
This bundle of nerves,
this big fiber becomes myelinated,
which means it becomes ready for use
at the 20th month of the baby's life,
exactly when sentences
are beginning to form.
These are sort of amazing data.
These are incredible. I mean,
I know it's very tempting to
do ridiculous speculation.
So here is one thing I'm interested in:
how sentences are processed.
I think parsing is an
interesting problem,
but it's really a reverse
engineering of generation;
generation is the interesting problem.
The real problem in language is
when you see this, what do you do?
How do you record this as a fact?
Basically, what you do is you
probably look for the verb first,
and once you find it,
you project it to
Wernicke's area.
Then you find maybe the
subject and you project it,
then you find maybe the
object and project it,
and then you create a merge
between "kick" and "ball" and
you call this 'verb phrase'.
Then you create a sentence
which is "Boy kicks ball."
So what I'm saying is that
there are wild speculations
that you can make about how
this is done using assemblies,
and they agree with everything,
with the experimental data that we
have about language in the brain.
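The procedure just described, find the verb and project it, then find the subject and object and project them, then merge bottom-up into a verb phrase and a sentence, can be written down as a toy symbolic sketch. Here Python sets stand in for neural assemblies, and the area labels are illustrative assumptions, not established anatomy:

```python
# Toy stand-in for assembly operations: a "brain" maps area labels
# to lists of assemblies, and an assembly is just a set of words.

def project(word, area, brain):
    """Form an assembly for `word` in `area` and record it."""
    assembly = {word}
    brain.setdefault(area, []).append(assembly)
    return assembly

def merge(a, b, label, brain):
    """Combine two assemblies into a new one in area `label`."""
    assembly = a | b
    brain.setdefault(label, []).append(assembly)
    return assembly

brain = {}
verb = project("kicks", "VERB", brain)  # look for the verb first
subj = project("boy", "SUBJ", brain)    # then find the subject
obj = project("ball", "OBJ", brain)     # then find the object

vp = merge(verb, obj, "VP", brain)      # "kicks" + "ball" -> verb phrase
s = merge(subj, vp, "S", brain)         # subject + VP -> "Boy kicks ball"

print(sorted(s))  # -> ['ball', 'boy', 'kicks']
```

The two merges build exactly a tree, (boy (kicks ball)), which is the point of the question to Chris Manning.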
So ultimately, I don't know how
many of you know the story,
but the founders of
neuroscience were two scientists,
an Italian and a Spaniard:
Camillo Golgi and
Santiago Ramón y Cajal.
They both saw the neuron
at the same time, using
a technique that Golgi had
created. Then they disagreed.
They parted company
at that time because
basically, Golgi said, "Okay.
What I see here is a network.
This is called a reticulism."
I mean, the network theory
that the brain is a network.
There are some dense
masses here but who cares.
I mean, obviously, the brain is
a network and we have
to study the network.
Whereas Ramon y Cajal said,
"This is the neuron, folks,
and you should spend the next
300 years studying the neuron."
They completely disagreed and they
had an epic fight at the
Nobel Prize ceremony.
But I mean, the question
is, who was right?
Of course, it's obvious
now that they were
both right and they were
both wrong to reject so
vehemently the opposing view.
But in another, more real way,
Ramón y Cajal was right because
there were no scientists in
1900 ready to study
the network but there
were a lot of physicians who
wanted to study the neuron,
and therefore that's the
point of view that won.
So I guess the assembly
hypothesis says, "Wait a minute.
Maybe there is something in
between, and when we focus there,
we'll get new insights
about this thing.
So maybe there are sets of neurons,
assemblies of neurons that
can explain a few things that
could not be explained before.
The study of the brain is
fascinating and bottomless.
Assemblies and their
operations may be
one productive path to thinking
about computation in the brain.
Are they the seat of Axel's logic?
We don't know how assemblies
learn and predict.
Unless we nail that,
the theory is incomplete.
How can one test, verify,
falsify the assembly hypothesis
through experiments?
We have the NSF grant that was just
approved to work with
cognitive scientists
and neuroscientists
to try to see these things.
Frankly, my point of view
is that theories like that,
they are never falsified, right?
They just die in absence of
supporting evidence, right?
For all I know, that's
what's going to happen.
These are the people who I work with.
Larry Abbott is the leader of
theoretical neuroscience at Columbia.
Santosh Vempala, we started
together 3-5 years ago.
This is our student, Dan Mitropolsky.
My colleague, my linguist,
Mike Collins at Columbia,
and Wolfgang Maass, who also
started me on this path.
He's a theoretical computer
scientist who 25 years ago
decided to become
a neuroscientist, and it became
one of his areas of competence.
So he did what I'm
doing, 20 years earlier.
That's it. Thank you very much.
>> So with your language,
other mammals also have language?
>> No. I disagree.
>> So whales communicate, right?
>> Yeah. By language,
I mean something very specific, okay,
and the chimpanzees have
warning cries and so on.
Language is something
that has, I mean, grammar,
recursion, an unbounded number
of possibilities. Yes.
>> You're distinguishing
communication from language.
>> Absolutely. I mean
and that's important.
Language as communication is
only maybe 80,000 years old.
People believe that language
maybe existed half a
million years ago,
but if you think about it,
we spend much more time talking to
ourselves than to anybody else.
All right, so for
half a million years maybe we
were talking only to ourselves.
Okay, but that has
a lot of advantages,
because, you know, it helps you;
it's a lot of what is
revolutionary about us, you know.
It helps you plan, better understand,
make more complete
models of the world,
so therefore it will
spread if you have it.
Okay, and eventually the
pressure is such, because
you have so much to say,
that every little mutation
in your facial muscles and
vocal cords gets selected.
Actually, speech is apparently
not very difficult.
There are a lot of non-human
primate researchers
who believe that many
monkeys are ready for it.
>> If you are studying the brain,
can you verify some of the models
by looking at all those issues?
>> Yeah.
Right, so the question is, can you?
Yes, and this is the difficulty,
you know, because for humans
you can only do very
limited experiments.
I mean, to do experiments
like the Obama-and-Eiffel-Tower
experiment, you need to find
patients whose epilepsy is
intractable, and that's rare.
So these are rare data,
very valuable data.
But everything else is much more
difficult and needs a lot of data
science to analyze the results.
The other experiments
I showed you are
essentially fMRI and EEG experiments.
Okay, and they're not as crisp
as ECoG experiments,
so you really need a lot of
analysis to get to conclusions.
>> Let's have a couple more questions
and then we're going to have
a series of lightning talks.
Anymore questions? Glen.
>> Can you use
any of these operations
or logic in social networks?
>> I see. I hadn't thought of that.
So, it would be
interesting to know
how much of our language mechanisms
are duplicated in other artifacts.
Okay, so you know, that's
important. I'll think about that.
>> I have a question.
You had one slide about
the first time you create a
memory, and it's recorded.
Is there a hypothesis that there is
a random number generator at
that time to create
the random projection?
>> How the first memory is created?
About this we know a lot, okay.
The first thing happens
in the hippocampus.
Okay, so we know that
every second of our
lives, sort of, you know,
a new memory is created, and most of
them of course don't survive,
but some of them are
projected further
and then they start having
a life of their own.
Okay, so how are these first
memories created?
They are created from
the highest order,
from the highest levels of
sensory processing.
Okay, so we know, for example,
for whole objects, memories from
the visual cortex are formed
in an area called IT,
the inferior temporal cortex,
which is sort of the highest level,
the root of the tree that is vision.
Okay, so these always project to
the hippocampus, and this
seems to be the gateway.
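That projection step can be sketched as the "random projection and cap" operation from the assembly-calculus work: sparse random synapses carry the upstream assembly's activity downstream, and inhibition keeps only the top k responders. A minimal numpy sketch; the sizes are toy values, and p is inflated well above biological sparsity so the small toy still works:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes. Real areas are far larger, and cortical connection
# probability is more like 1 in 1,000; p is inflated here on purpose.
n, k, p = 1000, 50, 0.05

# Random connectivity from a sensory area to the hippocampus.
W = (rng.random((n, n)) < p).astype(float)

# A stimulus: an assembly of k active neurons in the sensory area.
stimulus = np.zeros(n)
stimulus[rng.choice(n, size=k, replace=False)] = 1.0

# Each downstream neuron sums its random synaptic input,
drive = W @ stimulus
# and inhibition keeps only the k most-driven neurons (the "cap").
winners = np.argsort(drive)[-k:]

# Those k winners form the new assembly: the nascent memory.
print(len(winners))  # -> 50
```

The winners, by construction, receive well above the average synaptic drive, which is what lets the new assembly be stable and re-excitable.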
If the brain is a tree where
it has input and output,
this is the root.
Okay so the hippocampus is the root.
If we lose it, what happens?
I mean, there is the patient H.M.,
a famous case.
At some point, when people didn't
know what the hippocampus was,
the surgeons cut it,
and after that he could not
create new memories.
Okay, and he lived a happy
life for 50 years, and he saw
his doctor twice a
week, and every time he
would introduce himself again.
But he learned to play piano,
so there are some things that
we don't understand, okay.
>> Help me picture how literally
we should take the word "random".
>> Yeah.
>> All right.
>> Yeah, of course great question.
>> Yeah, I mean, at
one end you could just say:
I don't really need random.
I just use randomness in order
to do the mathematical analysis,
but really it's deterministic.
>> Yeah.
>> On the other end you could say:
maybe the brain actually
has a mechanism, deliberately,
and uses it to
bring about this stuff
with genuine randomness.
>> Right, yeah. So that's
an amazing question,
and of course I told
you that Axel's team was able
to measure this, okay,
and it seems to be pretty random.
Okay. In the mammalian brain
we know that there are several
deviations from uniform randomness.
Okay, for example, if you
have two neurons, okay,
the chance that they are connected,
let's say, is one in a thousand, okay.
If they are connected,
the chance that
a reciprocal connection
exists is like one in a hundred.
Or you can have triangle completion:
if these two edges exist,
then this third one is very likely to exist.
So these have been studied,
and there are reasons to believe
that they are architecturally
important, that they enhance algorithms,
okay, make algorithms
run more robustly.
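A quick way to see that these are genuine deviations from uniform randomness: in an Erdős–Rényi directed graph with the same density, the chance that an edge is reciprocated is just p again, about 1 in 1,000, not the 1 in 100 reported for cortex. A small simulation with toy sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 0.001   # toy network; p matches the ~1-in-1,000 figure

# Uniformly random directed graph: each edge i->j present with prob p.
A = rng.random((n, n)) < p
np.fill_diagonal(A, False)

edges = int(A.sum())
reciprocal = int((A & A.T).sum())   # edges whose reverse edge also exists

# Under uniform randomness, P(reverse edge | edge) is simply p again:
print(reciprocal / edges)   # close to 0.001, far below the ~0.01 in cortex
```

So the tenfold enhancement of reciprocal connections, like triangle completion, is a real architectural bias, not something a uniform random wiring diagram would produce.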
Okay, so it's a big question:
what is the origin of this?
Is it geometry?
Is it plasticity?
I mean, why do these biases,
these deviations from uniform
randomness, happen, okay?
It's still a matter of conjecture,
but I feel that that's
a very important topic.
>> Okay, great.
Thank you so much, Christos.
