Recently we've had a request asking: can
Computerphile do something on finite-state
automata? And in considering what
to do about that, it did seem to me that it
would be a good idea to look at where
finite-state automata sit in the scheme
of things. We've done a lot about
Turing machines, for example; we've
covered the fact that really every
single computer, of any sort nowadays, is
a Turing machine. So what I'd like to do
is to refer to a diagram which shows, if
you like, the various types of Turing machine -
a hierarchy where, as you go further inside,
you make fewer and fewer demands on what you need.
And, if you look at this set of circles
here, I can even relate it back to some of
the videos we've done previously. You will
remember that when we were doing the Ackermann 
function we eventually decided that it's
of the Recursive sort, which means it will
terminate but could take an awfully long time.
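For reference, this is the standard two-argument Ackermann-Péter formulation, as a minimal Python sketch (the video itself doesn't show code): it always terminates, but the recursion explodes.

```python
def ackermann(m, n):
    # Total - always terminates - but grows explosively;
    # even ackermann(4, 2) is utterly impractical to compute this way.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
```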
If you remember, in leading up to that, I said there's a certain sort of
Turing Machine, outside that, which is a
real so-and-so, which says: "Sometimes the
algorithm you give me will give an
answer - and you're happy - but sometimes I
will go into a loop. And when you say 'How
can I detect, in general, that you are in a
loop?', the answer is 'You can't!'"
And we've done other videos, with my colleague Mark Jago,
all about the Halting Problem, as it's
called. And then we went one worse:
we've been out in the outer perimeter of
hyperspace here, to ask: are there some
problems so awful that no algorithm
can exist? And we did the Busy Beaver,
if you remember, which is a sort of encoding
of a particular kind of Turing Machine. It says: "Look, there's
not a general algorithm here - I'm not trying to
do n factorial. What I'm
asking is, for machines of this sort, can you
predict how many zeros they will print out?" And the
answer is there isn't an algorithm that can say
how those Busy Beaver
programs will behave in general - if only there
were! What you have to do is to run them
all and just exhaustively say "I don't know,
there isn't an algorithm - just try them".
What happened after
Gödel and Turing and others, in the nineteen
thirties, did all this, is people started
saying: "Well, these Turing Machines, y'know -
it's wonderful.
They're a pencil-and-paper thing, but you could
imagine building hardware to do them, and
of course those are general-purpose
computers as we now know them." But people
said: "Is there a sort of subset of Turing
Machines where you can say 'it doesn't need
more than a definite amount of RAM -
guaranteed'? That would be nice to know."
And those turn
out to be in this inner circle of Type 1, here.
Then people started to say: "Hey, there's
this thing called a 'pushdown store', which
the Americans call a 'stack'. That's a one-ended
memory device. You can't dip into it
arbitrarily. You can either take
something off the top or push something new onto the
top. So any addition to, or
reading of, your memory can only be done at the
top of the stack. Is that a sort of special
case?" Yes, it is.
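To make that one-ended discipline concrete, here's a minimal sketch of a pushdown store in Python (the class and method names are mine, not from the video):

```python
class PushdownStore:
    """A one-ended memory device: every operation happens at the top."""

    def __init__(self):
        self._items = []

    def push(self, item):
        # Put something new on the top.
        self._items.append(item)

    def pop(self):
        # Take something off the top - there's no way
        # to dip into the middle of the store.
        return self._items.pop()

    def is_empty(self):
        return not self._items

store = PushdownStore()
store.push('a')
store.push('b')
print(store.pop())  # 'b' - last in, first out
```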
>> Sean: And we've looked at that with
your Towers of Hanoi, haven't we?
>> DFB: Yes, Towers of
Hanoi is a classic example of
something where you just want to get
hold of the whole bunch of discs and sort them in your
hand, like RAM, y'know - just park that one there,
store that one there, think about it,
put them back together and plonk them
back on the rod. But you can't do that!
You can only do it one-ended. And for
something where, y'know, "I could do that in two
or three moves if only I could take all the
discs off", you end up having to do 64
moves.
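For reference, here's the classic recursive solution as a Python sketch (the peg names are mine): n discs always cost 2**n - 1 single-disc moves, which is exactly the penalty of that one-ended restriction.

```python
def hanoi(n, source, spare, target):
    # Move n discs from source to target, one disc at a time,
    # never putting a larger disc on a smaller one.
    if n == 0:
        return
    hanoi(n - 1, source, target, spare)            # clear the way
    print(f"move disc {n}: {source} -> {target}")  # move the biggest disc
    hanoi(n - 1, spare, source, target)            # re-stack on top of it

hanoi(3, 'A', 'B', 'C')  # prints 2**3 - 1 = 7 moves
```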
And then there's one right in the middle, in the inner circle here,
that needs no memory at all - in principle. And
that's what these finite-state automata
are all about. You might ask "Who
discovered all this - who filled in all
these gaps?" Because there we are with Turing,
who perversely discovers the most general
thing going, in the nineteen thirties. But
people don't know the simpler story underneath.
Well the person who discovered it is still
with us. I think he must be in his late
eighties now - his name is Noam Chomsky
and I think my friends say that you
ought to pronounce it "Homski" like the "ch" in 
[Scottish] "loch". But I'm happy to be put right on
that. He's a genius - near-genius - guy. I think he's
been at Harvard, MIT, places like that
ever since he was young. He really was
talented. He's a 'linguistician'. If you
study the structure of natural languages
- any languages, computer languages even - then, I think
I'm right in saying, you're a 'linguistician'. Well,
in the late nineteen fifties he started
saying: "Look, to understand natural
languages better, I'm going to look at the
most restrictive form of language I can
think of". Y'know really simple things.
How about a language whose words are just
strings of the letter 'i'. So 'iii' is a 
word; five i's, 'iiiii, 'is a word. Any number of
i's is a different word. How simple can
that be?
Yes, very simple. And then he went on to
say things like "Yeah, what's a bit more
complicated than that?" Because those very
simple languages as we'll find, next
time, don't need any memory at all - they
really don't.
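As a taster for next time, here's why: a recogniser for that strings-of-i language needs no counting at all, just a fixed number of states. A minimal Python sketch (assuming the empty string doesn't count as a word):

```python
def accepts(word):
    # Two states are enough: 'start' (nothing read yet)
    # and 'seen_i' (at least one 'i' read - the accepting state).
    state = 'start'
    for ch in word:
        if ch != 'i':
            return False  # any other letter rejects immediately
        state = 'seen_i'
    return state == 'seen_i'

print(accepts('iiiii'))  # True
print(accepts('iix'))    # False
print(accepts(''))       # False - assuming words need at least one 'i'
```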
And what's the one that sits outside? He
did more investigations
and said: "Ah! There's one where a one-ended
memory will work."
Yeah - those are the Chomsky Type 2.
So, remember, Chomsky's numbering goes, as it were, the
opposite way around. Type 0 is the most
general, right - the Recursively Enumerable languages.
A subset of Type 0
is the Recursively Enumerable ones that really do
terminate - e.g. Ackermann. Type 1 is
the one where it needs RAM but you can predict
how much RAM. So he discovered that sort -
that's the Type 1, i.e. Turing Machines with
a predictable and finite amount of RAM
requirement. And he just filled in the whole picture.
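To summarise the nesting, here's the standard correspondence between Chomsky's types and machine models, as a Python sketch (the machine names are the textbook ones - the video doesn't use all of them):

```python
# Chomsky hierarchy, outermost (most general) to innermost:
hierarchy = {
    0: ("recursively enumerable", "Turing machine - may never halt"),
    1: ("context-sensitive", "linear bounded automaton - a predictable, finite amount of RAM"),
    2: ("context-free", "pushdown automaton - one-ended stack memory"),
    3: ("regular", "finite-state automaton - no memory beyond the current state"),
}

for t, (languages, machine) in sorted(hierarchy.items()):
    print(f"Type {t}: {languages} languages -- {machine}")
```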
In the period from about 1959 to the mid-1970s
a huge amount of work went into filling in
the middle of this diagram. And that
includes things like computer languages -
Algol - how to parse them, how to compile them.
And it was all filled in, in the middle
of this diagram - but all basically
referring back to that work that Chomsky
did in 1959, saying: "These are the
language varieties".
