- Now, I first heard about Don Knuth when
I was a mathematician, originally a pure mathematician.
And when I moved into computing, I came under
the tutelage, at Southampton, of Professor David
Barron.
Don will remember that David was the editor of Software:
Practice and Experience for a long time.
A student of Maurice Wilkes and a great fan
of TeX.
Don just said he and David never met, but
they corresponded regularly.
David was a great letter writer.
And I see the books here, interesting props
for him. He's a true Renaissance man:
computer scientist, author, historian, musician.
And his life's work crosses the border of
science and art.
Looking at him there standing like a young
man, still.
He got his Turing award in 1974, so he must
have been in short trousers when he won it.
He was actually quite young by today's standards.
I'm afraid, Don, today you'd probably win the
ACM Prize first, before the Turing, at that
age.
But clearly, one of the most important people
in our community.
It is my great, great pleasure to welcome
to the stage Donald Knuth, the 1974 Turing Laureate.
- Don't applaud, just read my books.
So, she didn't mention the title of my talk
this morning.
And actually, I don't remember the exact words
of the title.
But it was something like, I'm supposed to
prove to you that computer science is actually
a respectable discipline.
That has lots and lots of great results.
Now, actually that doesn't need any proof.
So I don't know why I should need any time
to do it.
Instead, what I wanna do is tell you just
a few stories, but mostly I'll open up for
questions.
So, there's two microphones in the middle
aisle.
And after I finish my opening remarks, I plan
to just answer one question after another.
Try to spend about 30 seconds to a minute
on each one.
Unless there are no questions, in which case
we can all enjoy a break or something.
Okay so, in the sixties my mathematician friends
would come to me and say, "What're you doing?"
And I'd say "Oh, computer science."
"Computer science, what's that?"
And at that time it consisted of either artificial
intelligence, numerical analysis, or programming
languages.
But anyway, the main thing I wanted to mention
is that my colleague at Princeton told me,
"I'll believe that computer science is really
a science as soon as it has a thousand deep
theorems."
So, I don't know exactly what a "deep theorem"
is, but it's something different than is discovered
by deep learning I think.
But, at the time I couldn't claim a thousand
deep results.
And this was, you know, late 1960s.
But I think that threshold, by any definition
of deep, was probably passed in the middle
seventies.
And now we have thousands and thousands of
things.
So computer science shares with mathematics,
the great privilege that we can invent the
problems we work on.
We don't have to base it on the way nature
happens to have decided to go.
So we can invent our own universe, as we can
design our own languages and our own axioms.
On the other hand, I'm not sure it will ever
be decided whether computer science is a subset
of mathematics or mathematics is a subset of
computer science.
But I suspect it'll actually confirm
what I've always believed: that basically
they're two parallel subjects, and neither
one gobbles the other up.
And there's a lot in common, still there's
a distinct difference.
It's certainly clear, from the very beginning
of our computer science departments in universities,
that there were lots of people who could never
get tenure in mathematics, who could get tenure
in computer science.
And vice versa.
So in fact, vice versa was probably even more
so than versa.
So that was the situation when we started
up and I think that's probably going to continue.
We have lots in common, but we also have this...
We're studying things on which we have total
control of what we're doing.
So that makes it nice because we can actually
know that we've done something when we got
a result.
But physicists can never be absolutely sure.
You can't go to the sun and measure things,
you can only guess.
So till your dying day you don't know exactly
whether what you've done was right.
But with mathematics and with computer science,
we have this opportunity once in a while.
Okay now they said if you want to ask me any
questions, please go to the microphone and
I'm gonna start answering them.
But while you're walking to the microphone,
I wanna show you something from the day that
I got my Turing award in 1974.
Now in those days, the amount of money that
went with the prize was one million dollars
less than it is today.
But I did get a very nice silver Tiffany bowl.
And my wife and I use it to serve strawberries
all the time.
Strawberries taste actually, much better out
of it.
But what I wanna show you is a present that
I got from my publishers on that day.
So when I got the Turing award, they presented
me with these three beautifully leather bound
volumes.
I'm gonna hold 'em up so that you can see
the gorgeous binding and everything.
Look at the typesetting in those old fonts.
So anyway, this was about the nicest thing
I could have received at the time.
I remember that when they gave it to me though,
I immediately looked inside to see the copyright
page to see which printing they had.
Now I see people are lined up, so I'm ready
to answer questions.
- [Audience Member] Hi, it's kind of a silly
question, but since it's my only opportunity
to ask one of my favorite doctors, I was just
wondering...
In the beginning of your book on algorithms,
you establish a grading system for your exercises
and I was wondering, have you tried applying
the system to other aspects of your life?
For instance, would writing a grant application
be a fifty task or a zero-zero task?
Thank you.
- Alright, yeah.
Actually, every time I look at an exercise
I have to give it another number.
But what exercise did you specifically ask
about?
- [Audience Member] Well for instance, writing
a grant application, would it be a fifty
or a zero-zero?
It's probably zero-zero for you, but it's more
like a fifty for me.
- My life has been about as far removed from
Wall Street, or from money, as it could be.
So the main reason that I was very happy to
retire is that I no longer had to worry about
finances.
In fact, I never knew why people bought computers
or paid for computers or anything, I just
happened to be around where there was a computer
to use.
But I'm completely worthless as an advisor
about anything that has to do with economics.
To me, I have to admit, it's a game and I've
been able to take advantage of it.
Now okay, onto the next question.
- [Audience Member] Thank you very much.
- Yeah.
- [Audience Member] So I recall that when
you published TeX you published with it a
very bold wager.
As to the number of bugs that would ever be
found in it.
A wager that doubled with every additional
bug being found.
This is the only really useful system I have
ever heard of that was advertised with such
a bold claim.
How on earth did you do it?
- So I stopped doubling when it reached 32768.
But the beautiful thing about it was that
this brought people out of the woodwork and
told me about the bugs early on.
And so I now review them; I think the next
time is 2021.
But for the bugs, there's a group of volunteers
that reviews the reports, and if they think it's
something worthwhile for me to look at, they file
it away.
And every once in a while, since I believe in batch
processing rather than swap in, swap out,
I get a batch of these requests. At
first they would come every year, then every
two years, every three years, and so on.
So the last time was, I think, 2014.
And so then seven years from then is 2021.
So I'm gonna look at bug reports again then.
And I think last time it was still at 32768.
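[As an aside for readers of this transcript: the arithmetic behind that figure checks out. The reward famously started at $2.56, one "hexadecimal dollar" (256 cents), and doubled with each round; seven doublings reach the 32768-cent cap, i.e. $327.68. A one-loop sketch:

```python
# The wager's reward schedule: start at $2.56 ("one hexadecimal dollar",
# 256 cents) and double with each round of bug reports, until Knuth
# stopped the doubling at 32768 cents.
reward_cents = 256
doublings = 0
while reward_cents < 32768:
    reward_cents *= 2
    doublings += 1

print(doublings, reward_cents / 100)  # 7 doublings, capped at $327.68
```
]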
- [Audience Member] The real question was
how were you able to design a real complex
system with such purity that you could dare
to make that wager?
- Well there's something called literate programming
that helped a lot.
But for a full example, I wrote a paper called
The Errors of TeX.
Which goes through and classifies exactly
the first fifteen hundred bugs that I removed.
And classifies them into a dozen categories
and tries to figure out: was it because gotos
were harmful?
Or something like this?
It turned out, yes, goto was harmful but that
was only 2% of the bugs.
So look for that paper.
By the way, it was published in the journal
that she mentioned, Software: Practice and
Experience.
- [Audience Member] Sorry, I don't have any
jokes but I have two questions.
First, do you think computers will become
good composers of music?
- Okay so, I think that...
I know that they're wonderful aids to composition.
And some people may want to trust an algorithm
to do the whole thing, but to me that's beside
the point.
What I like is to have a computer help me.
And then I have a smaller space of possibilities
to think about.
And I choose the one that actually works for
me.
- [Audience Member] So that's related to my
second question.
You proved hundreds of theorems in your life.
Do you see a role for computer assisted theorem
proving in the future?
- Yes, also in the present.
I mean I've got a program running now at home
that's gonna help me prove one I hope.
- [Audience Member] So you talked about a
thousand theorems as the difference between
mathematics and computer science and that
made me--
- No, I didn't mean the difference.
I meant when does computer science reach maturity
or worth consideration as a subject and so
on.
- [Audience Member] Thank you for that clarification.
The term I've heard about social scientists
is physics envy.
And that's in respect to reproducibility and
things like that.
I was wondering, do you think that we have
an equivalent physics envy or mathematics
envy or something else in computer science?
And how should we, especially as far as reproducibility
of research, and how should we be addressing
that?
- Ah, well those are certainly good questions.
I don't think I have a particular answer for
them.
Certainly for reproducibility of computer
results, I'm a strong advocate of literate
programming.
So take a look on the web for all the things
about literate programming.
And I know the people in the statistics community
are all embracing this because when they write
their papers, they want the programs to be
understandable.
And so for the idea of literate programming: I
got the idea from structured programming, which
Dijkstra and Hoare and Dahl, three of our Turing
laureates, wrote a magnificent book about,
called Structured Programming.
And a few years later I came up with the idea
that I'd have something called literate programming,
so that people would have to use it.
Otherwise, they'd be accused of writing illiterate
programs.
The idea is that you don't think of a program
as something that you're presenting to a machine
to execute, but you're presenting to other
human beings to understand.
And so this is a big boost for reproducibility
I think.
When it comes down to it, we have to know how
to understand programming if we're going to
do anything like this.
But I don't know how to...
I think my own thinking process is probably
different from people in other fields.
And I think our field is special
because a lot of people have the same peculiarity
of thinking that I have, and that the people in
the room here have.
And so we discovered each other and we come
together and we can talk at high bandwidth
with each other.
But if we have other ways of thinking, then
I'm not the right person to really design
a system for that, I'd rather work in collaboration
with people from other groups.
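[Editorial aside: the spirit Knuth describes, writing a program as an explanation addressed to human readers, can be hinted at even in plain Python. This toy is not Knuth's WEB/CWEB system, just an illustration of the style, with the prose carrying the narrative and the code fragments serving it:

```python
# The problem: count how often each word occurs in a text.
#
# First we need to agree on what a "word" is. We lower-case the text and
# keep only maximal runs of alphabetic characters, so that "The" and
# "the" are the same word and punctuation is ignored.
def words_of(text):
    """Split text into lower-case alphabetic words."""
    word, words = [], []
    for ch in text.lower():
        if ch.isalpha():
            word.append(ch)
        elif word:
            words.append(''.join(word))
            word = []
    if word:                       # flush a word ending at end-of-text
        words.append(''.join(word))
    return words

# With the words in hand, counting is a single accumulating pass.
def word_counts(text):
    counts = {}
    for w in words_of(text):
        counts[w] = counts.get(w, 0) + 1
    return counts

print(word_counts("The cat, the hat."))
```
]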
- [Audience Member] Hi Don.
Thanks for your magnificent volumes.
Maybe some day you'll hatch another one.
In the meantime...
My question is, what do you consider to be
the most important theorem in computer science
both from a practical and a theoretical point
of view?
- That's the one question I hate the most.
- [Audience Member] What is the question you hate the most?
- There we go. I mean, it's
like asking a parent which of their children
they like the best.
But probably my favorite theorem wouldn't
be one of my own.
On the other hand, when it comes to a favorite
algorithm, it turns out I do have a favorite.
And that's Bob Tarjan's algorithm for strong
components.
And it's just the most gorgeous, beautiful,
elegant algorithm that I...
There might be a topper for it someday, but
just for sheer...
It's short, it's deep, and I use it for lots
and lots of stuff.
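[For readers unfamiliar with the algorithm Knuth singles out here, the following is a standard textbook-style rendering of Tarjan's strongly connected components algorithm, not Knuth's or Tarjan's own code: a single depth-first search maintains a stack and "low-link" values, and each component is popped off the stack when its root vertex is detected.

```python
def tarjan_scc(graph):
    """Return the strongly connected components of a directed graph
    given as a dict {vertex: [successors]}."""
    index_of = {}          # discovery index of each visited vertex
    lowlink = {}           # lowest index reachable from the vertex's subtree
    stack, on_stack = [], set()
    components = []
    counter = 0

    def strongconnect(v):
        nonlocal counter
        index_of[v] = lowlink[v] = counter
        counter += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index_of:          # tree edge: recurse
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:            # edge back into the stack
                lowlink[v] = min(lowlink[v], index_of[w])
        if lowlink[v] == index_of[v]:      # v is the root of a component
            component = []
            while True:
                w = stack.pop()
                on_stack.remove(w)
                component.append(w)
                if w == v:
                    break
            components.append(component)

    for v in graph:
        if v not in index_of:
            strongconnect(v)
    return components

# Example: a 3-cycle a -> b -> c -> a, plus d pointing into it,
# yields one component {a, b, c} and a singleton {d}.
print(tarjan_scc({'a': ['b'], 'b': ['c'], 'c': ['a'], 'd': ['c']}))
```

The recursion keeps the sketch short; a production version for large graphs would use an explicit stack to avoid Python's recursion limit.]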
- [Audience Member] Thank you.
- [Audience Member] Hi Don, I just wanted
you to look forward a little bit and--
- [Don] No.
Next question!
- [Audience Member] Okay, nevermind.
So these days a lot of people are talking
about the existential threats potential--
- [Don] I'm sorry?
- The potential existential threats from AI
and so forth.
- [Don] Existential threats, okay.
- And a lot of us, well I won't put in my
opinion, a lot of us view this a certain way.
But I wondered what your thoughts were on
that?
- [Don] About what?
- [Audience Member] About that AI could run
away and become an existential threat.
- Right, yeah.
I can go both ways on it.
I can become optimistic and pessimistic.
And Stuart Russell mentioned yesterday that
he's spending most of his time on that.
And I think he has the most sensible ideas
of anybody.
But it still scares me when I look at his
ideas and I see that they're based on the
assumption that human beings are rational.
And then I look at election results...
- [Audience Member] Enough said.
- But I'm very serious.
All of these scenarios where they plan
to say, "Here's how we're gonna keep machines
from taking over,"
are based on the assumption that human beings,
the people in charge, are gonna
make some intelligent decisions.
And so we have to get people who don't think
like us, and work with them in order to control
them, before we're able to control the robots.
- [Audience Member] Thanks.
- [Audience Member] Hi Don, I'm asking my
question on behalf of all future computer
scientists.
So based on what you know now, if you could
do it all over again, your whole career, what
would you do differently?
- I think I would use decimal internally in
TeX instead of binary.
- [Audience Member] Hi Don.
You've done lots of splendid work concerning
the history of our subject.
But I know that you've expressed concerns
about some of the recent ways in which so-called
historians of science have been investigating
and writing about our subject.
I was wondering what you thought were the
major topics, historical topics, that were
worth analyzing and documenting just now?
And essentially what guidance you would give
to the future generation of graduate students
in that subject?
- Okay, I only have a minute and I promised
to answer in 30 seconds, but--
- [Audience Member] You have an extra five!
Okay so, most of what I said is available
on the web in a video called Let's Not Dumb
Down the History of Computer Science.
It's a talk that I gave two or three years ago
in honor of Tom Kailath, who has annual lectures
named for him at Stanford.
And basically, I start out by saying, "Why
is history of computer science really important
to people who are doing computer science?"
And the main reason is that we learn by osmosis.
If we can see what other people did, how they
got their ideas, and we can watch other creative
work in progress, then that helps us understand
how to do creative work ourselves.
But there's been a sea change in the way
computer science history is done.
In the sixties there were papers written about
computer science and that actually talked
about algorithms and programming and things
like that.
But now, if you look at the papers that are
written about the history of computer science,
it says here's the way somebody got funding.
Here's the way somebody made a partnership
with somebody else, or maybe somebody had
problems with their parents or something.
But they don't ever talk about anything that
a non computer scientist would also understand.
And the reason is that the whole field of
history of science has changed over from what
they call internal history to external history.
And internal history is being played down.
And one of the reasons is that historians
now in order to publish in journals, have
to write external history.
With this recent, really wonderful book about
the history of the ENIAC, the authors say
that they couldn't have published any of that
in journals of the history of science.
Now the universities are falling down.
There isn't a single university in America
that supports a computer science department
that supports a historian of computer science
to train the next generation.
And so I'm trying to get Stanford to step up,
at least, for that.
I recently read a PhD thesis that's going
to be published, by a man named van den
Hove, who studied Edsger Dijkstra's Algol
60 compiler.
Which, as you know, was written not in a
higher-level language, but in machine code.
And he studied very carefully and he makes
a wonderful description of exactly the innovations
in it and the data structures, and all the
problems in the organization and so on.
And this is the kind of thing that I believe
is what deserves to be called history of computer
science.
- [Audience Member] Thank you, that's very
helpful.
Not least because I'm the examiner on that.
- [Audience Member] Hi Don, how are you?
You're surviving up there fantastically.
You said you were gonna wing it and look,
you're winging it!
Okay, so I have concerns and I would guess
that you might also have concerns about the
lack of focus in CS education now.
And I think, though I love many of the things
that you have done, what I find most inspirational
is the way in which you've done things.
Which is intense focus.
We rarely get to see you because you're always
focusing.
And trying to get out the next volume and
the next theorem and the next algorithm.
I'm wondering what you would say to those
much younger than us here about the way in
which you approach computer science?
- The hardest thing for me is to decide between
two hypotheses.
One hypothesis is that you can take anybody
and teach them to be a good computer scientist.
And the other hypothesis, that 2% of the world
were born to be geeks and the other 98% were
not.
And depending on which way you go, the jury
is out.
But let's suppose the second is true.
It's not easy for a person who's a geek himself
to understand it.
But the people who measure personality, also
are only good at measuring the personality
types of themselves.
So it's very hard to actually prove it either
way.
But let's suppose only 2%.
Then only 2% of the world is able to teach
computer science in a way where they understand
what they're teaching.
And so when I read in the latest Communications
about this lack of focus, that we're supposed
to teach computational thinking, what is it?
And I'm thinking it might be that the teachers
are saying, "I don't know how to do computational
thinking myself so how am I supposed to teach
it?"
And that to me is the hub of the problem.
That deserves the most emphasis.
I can't go on much further.
By the way, I have one more joke prepared
so I hope your question, Alfred...
- [Audience Member] I don't know, maybe this
will be general enough Don, that you can weave
your joke into the answer.
I hope so.
But this is hopefully useful to the breadth
of the ACM community.
And while you're known, really for mathematical
computer science and really creating the field
of the analysis of algorithms more than anyone
else, your book is also entitled The Art of
Computer Programming.
And I believe your department is part of a
school of engineering, as is the case for
many of us here, that our schools are part
of engineering.
In addition, I think it's the case that our
field's becoming increasingly empirical.
And that we use all sorts of data science,
machine learning, other things, to learn from
the world around us, from the reaction of
people.
So I'm getting back to the point, you compared
us to mathematics and only that.
Shouldn't we be compared and judged also with
respect to the natural sciences as we learn
from data about the world?
Also the engineering sciences, because we
build such remarkably complex edifices using
abstraction, design, and lots of tools?
- So when I gave the Turing lecture, I tried
to explain what I meant by The Art
of Computer Programming.
And the idea is that it's art in the sense
of something made by people rather than found
in nature, as well as in the sense of being beautiful.
Your point is that I was comparing it only
to mathematics and not to other things and
that leads to the actual joke that I prepared
because--
- [Audience Member] You're welcome.
- I think tonight we're all gonna get a free
copy of Communications of the ACM, the latest
issue.
So take a look at Peter Denning's article,
which is very well written, but it has a nice
typo: it talks about "deep earning".
- One last round of applause for a great man.
