>> Okay. Today is my great pleasure
to introduce Kevin Buzzard.
He's a professor of pure
mathematics at Imperial College.
He received the Whitehead Prize for
his distinguished work in
number theory in 2002.
He also received the Senior
Berwick Prize in 2008.
He's a number theorist.
He's also interested in
formalizing mathematics,
AI for math and how technology
can improve math education.
Kevin is extremely popular
with his students.
I met him and many of
them last year at FLoC.
As a last remark, as many of you know,
Daniel Selsam is starting
the International
Mathematical Olympiad
Grand Challenge, where the goal
is to get a gold medal in
this challenging and
prestigious competition.
A few weeks ago, we found out that
Kevin is a master puzzle solver.
He got a perfect score, a gold medal,
when he was a student.
Today, Daniel and I had
the pleasure to see him in
action solving some
of these puzzles, and
it was human ingenuity
at its best; we saw
how many magical steps are needed for
solving the problems
in this competition.
The competition has six
mathematical puzzles.
Only the top students participate
and many of them can solve
one or two of these problems.
If you think Go is hard, Go has
a finite action space;
this competition has
an infinite action space, with
many magical moves that
demonstrate human ingenuity.
Today, Kevin is going to
talk about the future of
mathematics and how technology
is affecting it. Thank you.
>> Thank you for the invitation.
Thank you for the introduction.
So as he says,
I'm a professor of pure
mathematics at Imperial College.
So I'm an algebraic
number theorist, so I have
no background in computing at all.
I've used computers
completely informally.
I've written Python
code to solve games on
my phone and things like this.
But no programming background at all.
I've used computer algebra packages.
In particular I've
used Sage, written by
William Stein, to do calculations.
So to compute: say I want to check
a million examples of something.
But really I'm interested
in theorem proving.
There's theorem-proving
software nowadays and I work in
a regular maths
department and no one in
my maths department has heard of
this theorem proving software.
Some of this stuff even has
"theorem" in the title.
It's supposed to
be for mathematicians, but
no mathematicians use it.
So in some sense,
the most surprising part
of the slide is that
a mathematician actually started
to engage with this software,
and all my colleagues
thought I was mad.
So I started goofing around with
the Lean theorem prover which is
the software that Leo has written,
that Microsoft Research has written.
The reason I did it, I'll get to.
But the reason I did it was
not for my research,
but because I was given an
introduction-to-proof course to teach.
I was told to teach
the basic introduction to proof
course and I started thinking
about what a proof was.
These computer theorem provers
have an extremely precise
definition of what a proof is.
So that was how I got
interested: for educational reasons.
Then it slowly dawned on
me what this software can
do and now I'm absolutely
convinced that this software
has an integral part to play
in the future of mathematics.
Many of my peers don't
think this yet, but
I think it's inevitable.
That's the point and I think it's
only a matter of time before
mathematicians start waking
up and smelling the coffee.
So there are many theorem provers,
and I'm not saying
this because I'm here;
it's more likely that I'm here
because I believe this.
So I started using
Lean two years ago but
six months ago I thought
I should probably
try and look at all the
software available,
and I looked at all the
software available just with
this one goal in mind of
doing all of mathematics,
somehow attempting to do
all of pure mathematics.
I'm absolutely convinced that
if you want to do all
of pure mathematics,
then Microsoft is currently
the winner, hands down.
That might not be true
forever, but with Lean,
there's some technical issue
to do with what kind of
type theory it's using
and how one can use it.
But at the minute I think
you're sitting on a goldmine.
In 10 years time things
might be very different.
So I'm not saying Lean
is going to solve
all the problems but tools such as
Lean are going to
change the way mathematics
works and probably quite soon.
Maybe in the near future,
they're going to be helping us.
How? Well, as a mathematician,
I'm doing my research.
I have some technical question
about very, very complicated objects.
So you can't Google for the
answer to that question.
Google is useless for this,
and Bing as well, probably.
These things just don't cut it.
If I've got a technical question how
has the following technical
statement been proved?
I know exactly how I'm
going to answer that.
I ask my friends,
I ask my peers and if none of
them know I ask the elders.
I ask the experts in my area.
Ideally it would be nice
to do that automatically,
and this is one of the many
things that I think that software
might be able to do once
we've made the database.
I think also they can
change the way we teach.
So every year now I
get put in front of
270 undergraduates who have all
just come out of high school and
the vast majority of them haven't
got a clue what a proof is,
a rigorous logical
mathematical proof.
They see heuristic arguments.
They are told that the
derivative of sine is cosine,
and the proof involves drawing
a very thin triangle
and heuristically arguing
that if you have a very,
very thin triangle, a triangle
with an extremely small angle then
the two long sides are approximately
the same length which
is intuitively obvious.
But that's not the kind of thing you
can type into the
Lean theorem prover.
So they come with a very
vague idea about what
a proof is and what they can get
away with and all of a sudden,
they find out in
university that they can't
get away with anywhere
near as much as
they used to be able to. It is
true that the derivative
of sine is cosine,
but we feel that this needs a proof.
The proof that we're going to give
them is very different; in fact,
the proof I was told at school,
that the derivative
of sine is cosine,
was circular.
Because it involves, you do
the algebra and you end up
reducing to the case of zero.
You need to prove that the derivative
of sine at zero is cosine of zero.
So you have to compute
sine of x divided by
x and then take the limit
as x tends to zero.
The proof of this involves
someone drawing the graph of
sine and saying well it
looks intuitively obvious.
Then you say and actually
later on we'll prove,
after a while you forget about
the question that the
derivative of sine is cosine.
The question is what's
the limit as x tends to
zero of sine x divided by x.
Then once people have forgotten
that that was relevant you
say well later on we'll prove that
the derivative of sine is cosine.
So that means that this slope
must be one because the cosine of
zero is one and some other
circular argument has slipped in.
So we get told rubbish at
school and then I have
to pick up the pieces.
I got interested in Lean because
kids like software, right?
When I started 20 years ago,
kids didn't use software
because there wasn't software.
Now there's loads of software
and kids have computers
in their pockets,
and I figured great let's show them
what proof is using software
because they'll like it.
So that's how I started doing
Lean and I think that this
is almost certainly going to
happen and this might happen.
I just don't know, this is
your problem, not my problem.
Can you actually start
making these tools
generate new theorems
or generate new proofs?
I don't know but then again, I've
seen computers do things like
beating humans at chess and
beating humans at go and
so maybe they can
beat humans at maths.
So I just don't know what's possible
but I want to be
there when it happens and I want
to understand what's going on.
So in April I met this guy called
Christian Szegedy at Google, and
he thinks it's all going to happen
within 10 years, he says.
He has a PhD in
combinatorics so he's an expert
in a certain kind of maths.
I've got a PhD in
algebraic number theory,
which is a completely
different kind of maths.
I'm rather more skeptical.
I think that when we talk about
mathematics we might be
meaning different things.
But he seems very bullish and one of
his papers has got 15,000
citations so you've got
to take him seriously.
But he has a team
that's using Isabelle, not Lean,
and he's going to do stuff.
I don't think Isabelle
is good enough to
understand all of mathematics.
But it might be good
enough to understand all
of what he thinks is mathematics.
So that's the situation.
So I'm just going to
expand on various things.
So who am I?
I'm an algebraic number theorist.
So ultimately we're interested in
questions about the
non-negative integers
and it turns out there are questions
like Fermat's last theorem
which are very simple to state
and extremely difficult to prove.
It took us 350 years between
the statement and the proof.
It's a very innocuous-looking statement,
completely of no use to anyone.
This is just a puzzle.
But it's just a great example
of something that turned out
to be extremely difficult.
I'm very lucky because while I was
a PhD student, my adviser
and his adviser proved it.
So I was all of a sudden in
the middle of everything.
I'd learned a whole
bunch of techniques that
all of a sudden turned out
to be extremely fashionable,
because mathematics has fashions.
>> Hey do you trust the proof?
>> Oh I've read the proof.
>> It's good?
>> It's fine.
>> I was always worried
this is so complicated.
>> Yeah.
>> That it takes someone like you.
>> Yeah, it takes an expert.
>> To say oh it's right.
>> Yeah. But there are lots of us.
>> There's room for
errors slipping in.
>> No, there are lots of us. So I
read the Wiles and Taylor work.
So that part is fine, but
there's 50 references.
>> That's where Lean can help, right?
>> Yeah. So let me fast
forward a little bit.
How does the proof of
Fermat's last theorem go?
It breaks into two steps; firstly,
you prove that the semi-stable
Taniyama Shimura conjecture
implies Fermat's last theorem.
Secondly, you prove the semi-stable
Taniyama Shimura conjecture.
That's the strategy.
Now the big problem with
software as it stands
is that even though it's
existed for 30 years,
this theorem proving software,
nobody has got round to even
stating the semi-stable
Taniyama Shimura conjecture.
So we can't even write the first line
of the proof currently and these
things have existed for 30 years.
So the proof of Fermat's last theorem
is structurally complex;
even though it's a statement
about non-negative integers,
the objects that are involved in
the proof are incredibly complicated,
they take pages and pages
and pages to define.
So we've had to invent complex
structures in order to
prove a simple looking statement
about simple structures.
That is a general
phenomenon of mathematics.
Number theory is
undecidable in general.
So certain simple questions about
the natural numbers you can
solve easily, and then
for some you somehow
have to work much harder.
This is the nasty possible
nature of mathematics.
Because there are some
statements which are
easy to state but hard to prove,
and then there are some
statements which are easy
to state but impossible to prove.
The proof of Fermat's
last theorem is long,
as you pointed out, but that's
nothing compared to what
we're doing nowadays.
Because people are collaborating,
it's easy to pass LaTeX
documents around.
People just write 200-page proofs,
stick them up on the arXiv,
and they look very professional
because they're all written
in professional typesetting software.
The moment they're on the arXiv,
other people can start citing them,
even though they have
not been refereed.
If you trust the authors.
Why would I cite a paper on
the arXiv that
hasn't been refereed?
One of two reasons; if I trust
the author I'll cite it.
If I'm desperate for
the results and this is
the only reference I know,
then I'll cite it.
But who cares if anyone's
checked it because
our currency is theorems,
we want new theorems.
So I just started to get nervous
and that was another reason.
Nervousness about the
reliability of the literature
is one thing that started
making me think about
using theorem provers,
and then being told to
teach a course about
proof made it actually happen.
So very briefly, the background.
I knew some Python
but people told me that Haskell was
cool because it's
completely different.
I tried it but I didn't get it,
and then I was given
this course to teach.
So then I thought I
should go back to Haskell
because it was more
mathematical, I don't know.
I tried it again, and the
second time it went much better.
I read this Learn You a
Haskell for Great Good,
and the second time I
managed to get through it.
It was hilarious.
Then somebody asked some question
about a fancy
theorem in mathematics.
The odd order theorem
is a theorem with a
very long proof, and
it was formalized in
a rival system, a system called Coq.
The proof is formalized,
it took years to do.
I thought obviously this
is cool because that's
a relatively respectable
theorem, the odd order theorem.
It got John Thompson
the Fields Medal.
The proof is quite low level.
The odd order theorem is a question
about finite groups which are
relatively simple
mathematical objects,
and the proof is lots of arguments
involving finite groups.
So it's different to
Fermat's last theorem.
Fermat's last theorem is a
question about numbers but
the proof involves lots of
complicated structures.
This involves lots and lots of
ingenious arguments about
finite groups and it's a
theorem about finite groups.
So I goofed around with
this and I thought,
"Yeah, this is nice."
There was a tactic framework
and proofs of simple things,
the things I wanted to
teach my undergraduates.
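The sort of simple tactic proof he has in mind for undergraduates might look like this (a minimal sketch in Lean 3 tactic mode):

```lean
-- A first-year-style statement, proved step by step with tactics.
example (P Q : Prop) (hP : P) (hPQ : P → Q) : Q :=
begin
  apply hPQ,  -- reduce the goal Q to the goal P
  exact hP,   -- close it with the hypothesis hP
end
```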
Then Tom Hales gave a talk in
Cambridge in 2017 which was live
streamed by the Newton Institute.
These are links by the way,
if I distribute the PDF afterwards
you can click on these links.
Tom Hales announced that
he was going to make
a gigantic mathematical
database that had
all the pure mathematics
in it, and he didn't mention
any software by name, but
at the end somebody asked him what
software was he going to use.
He said my proposal is Lean,
and I thought I'd never even
heard of Lean but I trust Hales.
So I thought let's forget
Coq, let's learn Lean.
So by this stage,
I'm convinced that one can
try and formalize all of mathematics.
I've seen what Coq can
do, and I've realized that
all the mathematics I
know is just based on
logic, and Coq understands logic.
So I think yeah why not let's
formalize all the mathematics.
Sounds like an interesting project
and so because Hales chose Lean,
I think I'm going to choose Lean.
I'll give it a try.
So Coq I tried and then
moved away from; but Lean,
I tried and never
moved away from because it seemed
to me to be the right tool.
Now I believe it's the only tool
that's currently available,
other tools will of course appear.
So I started with Lean,
and now I don't want to do
mathematics with pen and paper
anymore, because I don't
trust it, and I don't trust
other people that use
it, which is everyone.
I'm having a midlife crisis.
I think that's what's
really happening.
But if given a choice between using
free and open source software
and buying a Ferrari,
this is somehow a cheaper option.
So I'm going to stick
with Lean. Now, my job,
because I'm one of three
mathematicians in the UK
that's heard of this. By
mathematician I don't mean
these people doing category
theory and type theory,
people that say they're
mathematicians but whom we proper
mathematicians regard
with deep suspicion.
When I say mathematician I
mean a proper mathematician,
someone who does things like
topology or analysis.
None of these category theory people.
These people are dying out in my country;
they're retiring from maths departments.
They're not being replaced. They're
finding their way elsewhere;
they're being hired by
computer science departments,
philosophy departments.
But for 20 years we had no foundational
people at all at Imperial,
and the foundational people
that taught me in Cambridge,
Peter Johnstone, who wrote the
book about topos theory,
he wrote the Bible of topos theory.
He's retired and he's
not been replaced.
We don't care about these silly things;
we don't understand logic;
there's no point in logic.
Somebody checked that
logic worked for
mathematics and so now we can
just get on with our mathematics.
That's how we proper
mathematicians think about it.
So I'm one of
two or three proper mathematicians in
my country that knows
about this software,
And because I think it's going to turn
the entire world upside down,
I think it's my job to try
and tell them about it.
So let's start with
the undergraduates.
I love teaching, both my
parents were teachers so
I thought I'll try and
teach the undergraduates
to use this software.
That was the plan. So I
just threw myself in.
I started a blog.
I'd never done anything
like that before.
I'm on Twitter, and I started
an undergraduate club
and basically said, come
along and we'll figure out
how to use
this software together,
because I'm a very visible lecturer;
all the students know me because
I've been teaching this
first-year course for
several years and it's
a compulsory course.
So if I tell
a bunch of students to go and do
something, a lot of them will do it.
So I said, come to my club,
and like 70 people showed up
in the first week, and then we
realized that actually the software
is really hard to use, and we
don't quite know what we're
doing, and it's a very high entry bar.
So lots and lots of
people came in week
one, and by the end of
the semester there were
about six people coming, and then
by the beginning of the next semester
there were about two people coming.
The two people were Chris
Hughes and Kenny Lau,
and what was quite funny was that by
February they were both better than
me, because they're young.
I'm 50 years old, I can
still learn things but it
just takes me some time.
I have to be told things three
times for them to sink in,
and I'm not used to undergraduates
being better than me. I'm [inaudible].
I'm an expert puzzle solver,
I'm a professor of
algebraic number theory,
I spent all my time dealing
with these undergraduates
and they can never tell
me anything I don't know.
That's been true for 20 years,
and absolutely, I dominate
the undergraduates.
They can ask me the hardest
question they can come up
with and I can answer it, because
I found the material trivial.
For the first time in my life,
undergraduates are telling me
things that I'm interested in.
So I thought this is
really interesting from
a pedagogical point of view:
I'm actually trying
to use this software,
I'm getting stuck,
and they're telling
me the way I should be
thinking about mathematics.
So this was extraordinary,
and these two people stuck with it;
one of them is now a maintainer
of Lean's maths library, and
these people are very smart.
So two years ago neither
of them knew anything
about type theory or
computers or anything,
they were at school and now they're
writing professional software.
They're writing PRs.
They've written 30,000
lines of professional code.
So I think that's kind of weird.
So then my institution
started noticing that I was
doing something a bit strange.
They had a pedagogy
transformation grant,
they have money for summer
projects and I've got three kids.
I'd spent 20 years never doing
any summer projects because
I was having to babysit.
My partner is a surgeon,
so she's very busy.
So I would babysit, because
I'm an academic.
I would just take the summer
off and do the childcare.
My kids are all teenagers now.
So for the first time last summer
I thought I can actually
go to work during
the summer and just leave them at
home and just hope they
don't burn the place down.
Because I'd spent 20 years
doing no summer projects at
all, I said, I'll do 20;
will you pay for them?
Imperial came up with money
to pay for 20 summer projects, and all
of a sudden I had 20
undergraduates, so I said,
right, let's just digitize
bits of the curriculum.
Take any piece of math that
you thought was interesting.
Let's type it up into Lean.
We had a really good
time, and then in October
2018 the new first-years come in,
and I'm teaching my course
again, and I've got this group of
10 students that actually
know what they're doing.
So we've got 70 people
in week one, and
then I've got 50 people in week two,
and the students are
talking to each other
about mathematics, which is perfect.
So the surge took
a lot longer to die
down, and now there's
a core group of 10.
By the time the exams
come, it's only
the hardcore people who come.
So I'd gone from two hardcore
people to 10 hardcore people
and in October it's going to
start again and now I've
got lots and lots of people
that know what they're doing,
including several
people that know what
they're doing better than me.
So that's what's happening
with the undergraduates
at Imperial College London.
What have they actually done?
We arrived at a perfect time,
because Lean had just got
all the prerequisites available to
start doing mathematics the
way a mathematician does,
pretty much; not quite.
Things like a theory of finite
sets that we think of as trivial:
that was thousands of lines of
type theory code, and it was there
in the maths library already. So
I just told the students, why don't
you just take some random
undergraduate piece of
maths and formalize it, because this is all
undergraduate-level mathematics,
and none of it was there;
the complex numbers
weren't even in Lean when I started.
My first achievement in Lean
was defining the complex numbers
as pairs of real numbers and
then proving pretty basic facts
about the complex numbers,
and I'm just thinking, if I can
do this I can do anything,
because that's exactly how
I think about mathematics.
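A sketch of that first step, in Lean 3 style syntax (the definitions here are illustrative, not the ones that ended up in mathlib):

```lean
import data.real.basic

-- the complex numbers as pairs of real numbers
structure complex : Type :=
(re : ℝ) (im : ℝ)

namespace complex

-- componentwise addition
def add (z w : complex) : complex :=
⟨z.re + w.re, z.im + w.im⟩

-- a "pretty basic fact": addition is commutative,
-- because addition of real numbers is
theorem add_comm (z w : complex) : add z w = add w z :=
by cases z; cases w; simp [add, _root_.add_comm]

end complex
```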
So I just said take a random
theorem, why don't you prove it.
Some of these theorems
were quite hard to prove.
Some of these are
substantial pieces of work,
but they're all done and
the undergraduates did
it with very little
help from me really.
They just got the hang of it and they
talked to each other
and they worked it out.
This software has been
around for decades,
but as a teacher I'm
interested in: where's the
undergraduate mathematics degree?
That would be, for me, a very natural
thing to formalize, and it's
not there in any of the systems;
people just do bits and bobs.
They choose their area
and they do that.
Why can't we have one
coherent body where all of
the mathematics in the
normal undergraduate degree
is there in one library, or
at least in one system?
Thirty years this software's been
around; no one's ever done that.
For the proof of the odd order theorem,
they needed bits and bobs from
an undergraduate curriculum.
They needed the third-year
group theory course
and half of the third-year
algebraic number theory course.
So they formalized those bits but
they were not trying
to make a library,
they were trying to prove a theorem.
So they just did
bits of it, whereas we're
trying a different
way of doing things.
Why don't we just
digitize our curriculum.
I don't know what that will
produce, but it will be fun, right?
It might teach the students
something. I don't know.
It might create an object that's
never been created before.
I'm a blue sky
researcher, that's great.
I write normal proposals:
I say I've got to
prove some theorems in
the area that we're all
proving theorems in,
and I want to prove a
similar-looking new theorem.
I think this is much more exciting.
I want to make a new object;
I want to make an
undergraduate degree.
So there you go. So here are
some undergraduates.
I'm at a really cosmopolitan,
international university.
Ellen is German,
Sangwoo is Korean and
Kenny is from Hong
Kong, Guy is French,
Amelia is British, Jean is from
Taiwan, Rohan is Australian,
Blair is Chinese, Abhi is
Indian, Calle is Swedish.
All these people have written code.
It's now in Lean's maths library.
All these people, every single
one of them without exception,
none of them knew a damn thing about
type theory or coding or anything.
They're all mathematics students,
and I've taught them all to
write professional code,
and these people here: Sian,
Anca, Clara, and Ramon,
they've done projects with me.
This is Ramon's MSc project:
formalizing the
definition of a scheme,
which is an MSc-level mathematical
object that was in no system.
Thirty years we've had this software.
Vladimir Voevodsky
got a Fields Medal for proving
a theorem about schemes,
and then he went off and
invented univalent type theory.
Ten years later, had he formalized
the definition of a scheme?
No, he hadn't. What is going on?
An MSc student
formalized the
definition of a scheme,
then I started looking around
and seeing how it was done in
the other systems and I
found that it wasn't there
in the other systems.
This is publishable work,
and so there's a four-author paper:
me and three undergraduates.
I formalized
it with two undergraduates
in a poor way; we made
some poor design
decisions because we're
amateurs, and once I'd learned
how we should have done it,
I gave it to Ramon as an
MSc project to fix up,
to do what we should have done.
I'm collaborating with undergraduates.
Maybe that's normal in
computer science departments;
it's absolutely unheard of in
pure mathematics, because
we dominate the students.
They have to be doing PhDs before
they can even get to the
point where they're
asking questions which I
find interesting.
So this is very strange for me and
very exciting as well as an educator.
So I've written a piece
recently in the London
Maths Society newsletter.
The UK is a small place
and I'm a known person,
and I was asked by
the London Maths Society
to write a piece for
their newsletter about
this software, and I did.
It's a small, five-page thing,
but I talk more about undergraduates
there because I figured
that a lot of general mathematicians
in the UK reading this will not be
so interested in my great
research or whatever,
but they might be
interested in teaching
because that's what's
paying the bills for us.
So what do the
undergraduates not like?
They're smart, the
undergraduates in my university.
They got the hang of the software,
but the documentation
is hard to read because
it's written by computer scientists
for computer scientists.
So somebody needs to write
documentation that's more
mathematician appropriate.
So I can do that, I just
haven't done it yet.
The other thing is
they all use Windows,
typically
cheap Windows laptops,
and to install the software,
they need to open
the command line and
type things like
git clone or something.
These are really exotic commands
to mathematicians;
we don't use computers.
We don't use anything.
One of the major parts of my job
over the summer was to teach
20 undergraduates how to use
Git, because it's a really,
really essential tool
which is not taught,
which is just so ridiculous.
The easiest way to install
Lean is via the command line;
you just download some compiled
binaries using command-line tools.
I got a bit frustrated
after a while and we
use something called CoCalc
instead, which works really well.
This is multi-player Lean.
VS Code has some multiplayer option,
but it doesn't work with Lean.
Did you ever try using
multiplayer Lean?
In VS Code, several people can
edit a file at the same time,
but the Lean server is only
running on one computer.
It's running on the host computer,
and the guest computer can use
the server, but it can't do
all the imports,
because it attempts to
do the imports locally,
and then there are some
compatibility issues.
CoCalc has solved all these.
So I've run multiplayer CoCalc
webinars where we go through a
file and we all edit it together.
It works really well,
but you have to pay, and
I can't pay for 270
undergraduate licenses.
I mean I could ask my department to,
but my department has already
given me tens of thousands of
pounds. They might buy it.
Well, one thing I've learned is
that you can engage undergraduates,
and even with no
programming background,
you can teach them.
It's exhausting but you can
teach them to use this software.
The other thing that
my department did:
I asked for specialists in
education to come and
examine what I'm doing.
When I was giving lectures,
the last year I gave this course,
I would fire up Lean,
I would state theorems and then
I would prove them in Lean,
and I would give out notes that
had the traditional PDF proofs.
But I would do live Lean
coding in lectures.
These people watched
what I was doing,
interviewed 40 of the
undergraduates, did surveys,
and they're currently
writing an education paper
about my intervention as it's called.
But it's not ready
yet. It's coming out
beginning of October apparently.
They've been very tight lipped
about what they're going to say.
I don't really know. They might
well say this guy's
an idiot, right?
But who knows. But it'll be
interesting to see what happens.
So my course starts again in October,
and now I know what I'm doing.
So all the notes will be formalized,
all the problems sheets
will be formalized,
and they will simply have a choice.
You can do it in the
old-fashioned way with pen
and paper or you can
do it on a computer.
The course has been reorganized.
Other people are
teaching parts of it,
and I took precisely
the parts which are
easiest and
most fun to do in Lean,
and we're going to do those first.
So the first eight lectures
that the new undergraduates
in my university,
the new maths undergraduates,
will see will all be given by me.
They are just going to one
course and it's my course,
but it's going to be
heavily Lean related,
and we will just see.
What will the notes look like?
There's a guy called Patrick Massot.
Let's see,
where are we going?
It's here somewhere.
Here it is. He's already done this.
He gave an introduction
to proof course as well.
There's some basic stuff
about equivalence relations.
But down here, here's a proof,
and you can fold everything
up and then it just
looks like a normal
mathematical proof.
But then you can unfold
things and you can see
the tactic state before and after.
So there is before,
and there is after that next line,
and there's after that next line.
What I've learned
very quickly is that
mathematicians find this extremely
hard to follow, because
these are Lean tactics.
It's language-specific
things like this
quotient.exists_rep.
Students find this hard to follow,
but within a couple of
hours of my course,
students can understand
what's going on
here, because this is
just normal mathematics;
it's all done in Unicode,
which is crucial, really.
Students understand what this
says with very little training.
Even if they can't follow this stuff,
you can just click through.
Keep clicking on these
gray things here,
and they can watch the
tactic state change,
and they can follow it, and
they can see what's happening,
and they can begin to
learn what a proof is.
So that's one of the reasons
I got interested in Lean.
So how do we put this
together? Literate proving.
Patrick Massot wrote some Python code
which takes a Lean
file with comments,
but the comments are
written in LaTeX.
The source code for this
is a Lean file which
compiles and runs with
comments written in LaTeX,
and he runs it through
some Python code,
and it produces a Web page
that looks like this.
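The source format might look something like this (an illustrative sketch, not the actual syntax of Patrick Massot's tool): a Lean file that compiles, with the prose written as LaTeX inside Lean comments.

```lean
/- Recall that an \emph{equivalence relation} on a type $\alpha$ is a
   relation $r$ that is reflexive, symmetric and transitive.
   Reflexivity is the first component of the bundled definition: -/

example (α : Type) (r : α → α → Prop) (h : equivalence r) (a : α) :
  r a a :=
h.1 a
```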
So that problem is solved.
So that's great for me,
because I just need to
type up my eight lectures in Lean,
put in comments in LaTeX,
which I can just knock
off very quickly.
Then you compile it,
and then I have this.
That's what I'm going to be
showing the students and it won't
be in French either, even better.
So there we go. So there's
the undergraduates,
and as I say, I'm convinced
that it shows them a
new way of thinking.
It turns proving into a game.
That's what it is,
and games are great.
If you watch the students,
when they finish doing
math, they play games.
They play games on their phone,
they play games on their PC,
they play nerdy games,
they play Minecraft.
They play all things.
But that's how I sell this software.
It's something I need
to teach and I'm
turning it into a game
so it's more fun.
Then, as well as that,
I want to digitize the curriculum.
So in three years' time,
when we're finished, because
it's ticking along,
we'll be able to say:
Imperial College,
all theorems formally verified
in Lean. It'll be a cool thing.
This is me two years ago.
First question on the
first example sheets,
is it true that if
X is a real number,
and X squared minus 3X
plus 2 is 0, then X is 1.
The question has an
implication sign in it.
X squared minus 3X plus 2
equals 0 implies X equals 1.
So I've failed to typeset the
main point of the question.
This should be an implication sign
because I'm trying to
teach them what that
symbol means, X implies Y.
So what I expect the
students to say is,
"No, this is not true
because X can be 2."
So you type that into Lean,
if you're me two years
ago learning this stuff.
You say, "X equals 2."
Then Lean says, "Great."
So now you've reduced
the question to proving that
2 squared minus 3
times 2 plus 2 is 0,
and now you've got to
prove that 2 isn't 1.
So as a student,
how are you going to
prove that 2 isn't 1?
It seems impossible, because
it's just an obvious fact.
Initially, I was thinking
this is a flaw in my plan,
because now the students are going
to say this software is stupid,
because how does it not
know that 2 isn't 1.
But now I've turned that around.
I said excuse me, how do
you know that 2 isn't 1?
Is that an axiom?
But 53 isn't 28,
is that an axiom?
Is that a rule? Is that a theorem?
What's actually going on here?
Initially, I thought the software
was just being annoying,
but now I've realized
that actually there's
a pedagogical point to this now.
I'm showing the students
that the things
they thought were obvious
about the real numbers,
we need to find a way
of justifying it.
There's a tactic that
solves these now.
When I first ran into this issue,
I couldn't do it.
I mean, I was stuck,
and I asked on the Lean
chat and they said, "Oh,
yeah that's quite hard because
the real numbers are
really complicated."
If these were integers,
it would be fine,
but unfortunately, I'd asked
a question about real numbers.
Last year, I changed the question:
I let X be an integer.
But this year, I changed
it back again to let X be
a real number because now we
have the tools to do this.
There are tactics that resolve
questions like this.
So I raised this on the Lean chat:
I said mathematics undergraduates
can't use your software
unless this is made easy.
So, right, the student side
went really well. I love the
students and that was really great.
So now what about the staff?
So initially, they were just
like Kevin's doing stupid thing,
Kevin's doing something
we don't understand.
But I started telling
people about it,
like how this was going
to change the world
and we should all learn
and it's very important.
Again and again, I get this question,
can it actually tell us anything
new because that's what
we're interested in.
We're not interested in the
odd order theorem, that's old.
We're interested in new theorems.
So where are the new theorems
that this software is generating?
My answer to this
always used to be, no.
I've realized that that's
not a very good answer,
it doesn't sell it very well.
So I've changed my answer,
and now I say, not yet,
which is also valid,
it's equally true,
but it's a lot more positive and
it more accurately
reflects my beliefs.
Not yet. But it's still
enough to make 90 percent of
people stop listening instantly,
because that's what
mathematicians want,
they want new theorems.
That's what excites
us. So there we go.
There's this other system called Coq.
It's an older system.
It's the same type theory as Lean,
but with more design errors.
Lean, I guess, learned a lot from Coq.
So there's this proof
of four color theorem,
which is a famous theorem
because the proof is weird because
it was like thousands
of special cases
and mathematicians were
uncomfortable about it.
Then there's this odd order theorem
and that's also checked in Coq.
The proof of this is low level.
It's hundreds of theorems about
finite groups put
together in a clever way.
So this has been formalized in Coq,
and the conclusion of this is that
the software can handle theorems
which take hundreds of pages.
These theorems are hundreds of pages long,
and the software can handle it
and compile it in an hour or two.
The software is ready to
handle proofs which are
hundreds of pages long,
that's an important data point.
This work is phenomenally important
because it means that the
whole thing is feasible.
As I said, these have been
formalized in some older software,
but if Coq can do it,
then Lean can do it.
I don't know anything
that Coq can do that
Lean can't in theory do as well.
But why aren't people interested?
One reason is because
we don't care about
the odd order theorem because
it was proved 50 years ago,
and 10 years after
that they proved this
classification theorem for
finite simple groups which answered
the big unsolved questions
about finite groups,
and then everyone left the
area, and the area became dead.
So the area has been
dead since the 1980's.
Then in 2012, all these
computer scientists like "Oh,
we formally verified
the odd order theorem."
Well, there's nobody left
who's interested anymore.
They've just gone,
they've retired and gone.
The area is dead.
We care about fashionable mathematics
and finite groups are
no longer fashionable.
In some sense, this
was a very poor choice,
but in another sense it
taught us a lot of things.
I'm interested in marketing.
I'm interested in
marketing your software to
my people and this is a poor choice.
The other thing, of course,
is that we're also
sufficiently arrogant.
We know what a proof is.
A proof is okay if the
elders say it's okay.
Do the elders think that this proof
of the odd order theorem is okay?
Yes, they do think it's okay.
They gave Thompson a
Fields Medal. That's it.
The committee of elders gave him
the Fields Medal, and so we're done.
That means that the proof is fine.
Who cares if it's correct?
The question is, is it accepted?
That's what we want to know.
This theorem is accepted.
So we don't care if it's correct,
because it's good enough.
Is this theorem correct?
Sure, it's correct.
We all knew it was correct,
because everything that is
accepted is correct apart
from the small number of
errors that we've made,
and we don't know
where the errors are.
But history says that
the number of errors is
extremely small because we're
good. We're mathematicians.
We're good at smelling problems.
If some argument doesn't
quite look right,
if it looks incomplete,
or your gut feeling
is that what's known,
isn't going to be enough
to prove that result,
then you can smell it.
If you smell a bad smell,
then you've got this 200 page paper.
You don't read it from the beginning.
You start looking at the
statements of the lemmas.
I believe that, I believe
that, I believe that.
Oh, this looks optimistic,
and you read that
lemma very carefully.
If the proof is fine,
you've learned
something, if the proof
isn't fine, you reject the paper.
So we're very good at
reading gigantic papers in
extremely non-linear way and
trying to work out
what's right or not.
So we knew this was right,
because this was read
by hundreds of people.
I think there might be some problems,
but I don't think it's all wrong.
I think the chances of something
big that we believe
turning out to be wrong
are actually quite small.
So I don't think that's
the way of selling
this software to mathematicians.
So Fermat's Last Theorem.
The statement is trivial,
but what does the proof look like?
First, we spent 300 years fiddling
around inventing
algebraic number theory,
basic algebraic number theory.
I guess that gets us
to the early 1900's.
Then by the 1930's,
we invent elliptic curves
and modular forms.
Then we invent 10 other things.
This is the 60's, 70's, and 80's.
Each definition relies
on earlier definitions.
So by the end of it, some of
these definitions are huge.
Then we put all of
these things together.
So 350 years so far.
We pull all these things
together, and we get it.
But now here's the problem:
I can't get my
undergraduates to do
these bits, because this is what you
learn when you're doing a PhD.
I can't get them to formalize
the definition of a p-adic
Galois representation,
because they don't know what it is.
My experience is that you
shouldn't try and teach somebody
some new interesting and
complicated mathematics while
simultaneously getting them to
formalize it. That's the thing.
You should get them to formalize
stuff that they understand.
That's what works best.
First, they understand it,
then they formalize it,
and then they really understand it,
and it stays forever.
So this is just too much.
I could spend a year supervising
an MSc student to learn
what these objects are,
but then we've got no time
left for the formalizing.
So the proof goes like this.
You invent all these
complex structures,
you prove some
relationships between them,
and then the theorem
just comes out in
the wash. That's how it works.
That's what mathematics looks like.
You invent sufficiently
complex tools,
and then you let the tools do
the heavy lifting for you.
So the proof of course,
takes thousands of
pages but no one's really read it.
So it could be formalized.
There's a project.
I know the proof, or
I know bits of it.
I know the Wiles–Taylor part.
I don't know all the proofs
in all the references,
but I could formalize the proof.
I could lead that team.
I've led teams before.
I built up number theory
from nothing in London.
I could build up this stuff here,
but it will cost a
huge amount of money.
The problem is, we don't care
because the elders
have said it's fine.
The elders have said
the proof is fine.
So what's the point in checking it?
So there's one obstruction to
doing it: no mathematician cares,
so no funding agency will
fund the mathematics either.
How does it work? I send my proposal
off and then they send
it to mathematicians, who say:
what's the point of this? That's an
absurd amount of money to do
something there's no point in doing.
So that's not the way
to sell it either.
We don't care because
the elder say it's fine.
We know it's fine, because
we gave Wiles a prize.
It's interesting because no
human actually understands it.
Mathematics is quite modular.
This is a side remark.
No mathematician
understands the proof
of Fermat's Last
Theorem, the full-proof.
I mean many of them have read
Wiles's paper or the references,
but I've never read the proof of
the Langlands–Tunnell theorem,
because it's just a lot
of harmonic analysis.
But I understand the statement
and I can invoke the statement.
I can apply the statement,
but I don't know the proof
because I don't know
enough of the analysis.
So the proof is modular and
different people that we
trust in the community,
have verified different pieces of it.
So now we say it's fine.
It's getting worse.
As I say nowadays people
are verifying proofs where,
if an elder says,
"This is forthcoming work,
but I'm putting it
on the archive now."
I know two instances just in
the last couple of years where
people have brazenly said,
"There are 100 missing
pages." Literally 100.
I've asked authors, "You say
this is forthcoming work.
How long do you think that work will
be and when is it going to appear?"
I've had 100 pages of
just missing stuff.
It's fine. This is the
way the system works.
It's the way the system works and it's
difficult to disrupt that.
I say, "Look at that.
Is this really fine?
Are we actually doing mathematics?"
The answer is, "Yes,
of course we are."
That's how it's always worked.
It's fine. They will appear.
In 10 years time, those 100
pages would probably have been
written by some poor PhD
student or post-doc.
Yeah, it's a technical
result that everybody knows.
All the experts say,
the techniques are there.
What's even the point of writing it.
We'll get a PhD student to write it.
We'll teach them the area. Yes, an
efficient way of going about it.
So it's funny. Do you
trust a PhD student?
I don't know, because I'm
getting a bit paranoid.
That's not the way to
sell the software.
So we accepted the proof in 1970.
We gave John Thompson a Fields Medal
and that's how we know
the odd order theorem is true.
We don't care that it was
formally checked in Coq.
We don't even understand
what that mean,
99 percent of the people I work
with don't even understand
what that means.
"We formally verified the proof"
means nothing to them,
because they don't know set theory.
They just do mathematics the
way Gauss and Euler did it.
Gauss and Euler didn't have
axioms and foundations,
and they were still
doing mathematics.
We're doing it like that still.
The set theorists checked that
there is some way of making
it work with foundations,
but we don't care.
So they don't really know
what the thing means.
So what are we interested in?
We're interested in
perfectoid spaces,
that's what we're interested
in, which are fancy new objects
that were invented
about five years ago.
So here's the table.
That is the proof of
the odd order theorem,
that was what you guys thought
was going to be exciting.
That's the thing that
really is exciting.
They both got Fields Medals,
but where have they gone?
No one in my department
cares about the odd order theorem,
because there's no one
doing finite group theory anymore.
But my PhD students all know
perfectoid spaces inside out,
and so does the person in
the office next to me.
Perfectoid spaces are exciting.
I formalized a scheme.
That was 50-year-old mathematics.
I was like, "This is great. This is
MSc-level mathematics
in the theorem prover."
I told you algebraic [inaudible].
I learned what a scheme was
when I was an MSc student,
and I realized I had to go higher.
So perfectoid spaces
are vastly complicated
mathematical structures,
and we did it.
I thought let's really learn that.
This is what taught me how to use
the software in a serious way.
Me and a topologist and a post-doc in
number theory all of whom I met
online in the Lean
theorem prover chat.
All of us are mathematicians.
It took us eight months.
We formalized the definition
of a perfectoid space.
So 10,000 lines of code.
They love it, mathematicians love it.
They don't understand it,
but they know it's different.
So I get asked to give colloquia
all around the place,
it's absolutely great.
>> Question.
>> [inaudible]
>> So is there a whole bunch
of other stuff you are
importing? From one place?
>> No, there's a whole bunch
of other stuff that we
formalized and that's why
it took us eight months.
>> So it goes all the way down to?
>> It goes all the way
down to the axioms.
>> So.
>> [inaudible]
>> Okay.
>> So it compiles, it's
sorry-free. It compiles.
>> Okay, sorry-free.
>> Yeah, we had to
formalize a whole bunch of
Bourbaki's General Topology,
a very formal math textbook.
The topologists did that.
Then we did the graduate
level number theory
on top: me and the number theorists.
This guy here is a post-doc.
He's put his job on the line here.
This work, who referees this?
Somebody who understands type
theory and perfectoid spaces.
Who's that? That's me.
That's me and this guy.
I'm the only valid referee.
We've written some weird
paper that just falls between
the cracks, because type
theory is a big area, right?
Theoretical computer
science is a big area,
Algebraic number
theory is a big area,
number of people in both is tiny.
So we don't quite know
what to do when
writing the paper. We'll see,
we'll write a paper and we'll
stick it in the system and
we might just completely fail
to publish it, but I don't care.
If the computer scientists
don't see the worth,
I'm not targeting the computer
scientists, you get it.
At least you understand
what the software's doing.
This is a PR exercise for me.
I need to get more mathematicians
involved because I have a vision,
I have a gigantic vision
that I'm one person.
Perfectoid spaces took eight months,
and do you know how many
definitions there are?
There are hundreds, there are
thousands of definitions.
This is vast.
I specifically chose
perfectoid spaces
because I knew that
mathematicians found it
sexy and I was right.
I've formalized perfectoid space
and I've done some stuff
with undergraduates.
So I'm great.
This gets me colloquia. I'm giving
a colloquium in Durham
in two weeks' time, and in Exeter.
I mean, I'm going the
length and breadth of
the country, and the EU as well.
I gave a talk in Germany
about it all, it's great.
I'm just basically being an
evangelist for your software.
But it doesn't tell us anything new.
Again, it's the first question:
does it tell us any new theorems?
I'll tell you what it does, it
enables us to state old theorems.
We can state the theorems.
Scholze proved a theorem
about perfectoid spaces
and got a Fields Medal.
Now we can state that theorem.
But you can't prove it.
Can we prove a new theorem?
Not yet. If you can't state
it, where do you begin?
Now we can state it,
that's where you have to start.
That's why I'm doing it,
because you have to start
somewhere, and it seems ridiculous
to computer scientists that
all you've made is a definition.
It's like I'm just writing a class.
Lean enables you to reason about
code and you can
prove a hard theorem.
You can prove that this code does
a certain complicated thing.
You can prove a hard theorem
and that's the triumph.
I'm just defining a class
but anyone can do that.
Maybe I've made a mistake.
Sure. But I mean,
maybe the definition is not right,
in the sense that the definition has
changed since it was
made; it was tinkered with.
If it gets tinkered
with again, that's fine,
I'll just tweak
my code. I don't care.
Whether it's right or
not, we have something.
So, this is, yeah.
I mean, let's skip this bit
because I want to stop on time.
Formalizing definitions is difficult.
>> Question.
>> Sorry, hello.
>> So there's a problem of trust?
>> Yeah.
>> You trust mathematicians,
now you need to trust software?
>> Yeah, I trust software more
than I trust mathematicians.
It's as simple as that
because I found mistakes.
This is what's really going on.
I referee papers and
once I had a really bad
experience refereeing a paper.
I spent two years
arguing with the authors
that their proof was incomplete
and eventually I won.
But I was the referee and
this took a lot out of me.
I had young children at the time.
The authors were experts and it
destroyed my trust in humans.
My trust in Lean software,
as in, has not yet been
destroyed by any means.
I think it's an order of
magnitude more reliable.
I don't care that it's
not a 100 percent.
>> You mean it doesn't have an ego?
>> I mean it doesn't have an ego.
So why do I want them to use
it, because I've got a plan.
Before we can start
proving new theorems,
we need to be able to
state old theorems
and that's what we can't
do because we can't state
the semistable Taniyama–Shimura
conjecture
because we haven't even
defined the definitions.
All the definitions:
there's not millions,
there's only hundreds or thousands.
There's not that many. I've
done one; it took me months,
I'd get better at it now though,
it won't take me eight
months to do the next one.
I've just written an EU grant proposal
where I propose to do 10 more.
What you would love to do really
is just feed in the book.
Just take the book
apart, feed in the PDF,
let the computer read it,
and then interpret
the natural language.
Then that's great.
It's just that's a million
miles away I think.
I'm not an expert in this
but my impression is that
computers cannot read
human-generated mathematics.
I think you actually have
to write it properly.
To get mathematicians to do that,
you have to teach mathematicians
to write in this
programming language.
It's no good teaching
them to write in English.
You have to and that's
what I'm doing.
I can't really get the
software to do it yet,
so that's why I'm getting
the students to do it.
There has to be a new culture.
We have to start making
students write in this.
If they want, they should
be able to write in
this language because I've got
1,000 definitions I want to do.
Maths and computer science,
like grant agencies should love
this because it's synergy.
But let's find out,
let's see if it actually works.
I mean, there are problems.
So as I say, this software has
been around for 30 years,
but if you ask a random mathematician
about the problem they're working on,
they'll state a theorem
and it will be impossible to
state that theorem in any
of these theorem provers because it
will be about objects that computer
scientists don't understand.
It's as simple as that.
Because computer scientists
are not research-level
Mathematicians.
We're learning about
p-adic Banach spaces
and we're proving lemmas
about p-adic Banach spaces.
There's no definition of
a p-adic Banach space.
So the software has
been around for
30 years and it still
can't understand the statements
of what we're doing.
That's how bad things are.
It's farcical. Thirty years.
It's time we got together.
It's time we got together.
So we don't see the objects
that we think are sexy.
So I've just written an
EU grant proposal.
Two million euros to
just give me a couple of
post-docs who can just spend
their time formalizing
definitions and formalizing
famous mathematical conjectures
and famous mathematical theorems.
This has gone to the EU and
on the 31st of October,
the UK's going to crash
out of the EU with
no deal and my grant application
will instantly become invalid.
Because if we come out with a deal,
we would be some kind of member.
If we come out with no deal,
then my grant proposal has failed.
Trump loves Johnson and
the idea is that we are going
to be friends with America now.
So remember me on
Halloween, because apparently
America are our friends.
So if you want to give
me $2 million or even
$0.5 million just to find a post-doc,
Microsoft funded post-doc,
who's just quietly going to
come work with me at Imperial
and just change the world,
that would be great.
Tom Hales says we
should make a database.
Tom Hales is a very
smart mathematician.
We should just make a database,
that's where to start.
Let's just make a database
of mathematical objects but
then let's state the theorems that
relate these objects to each other.
I think that's the next
thing you ought to do.
Lean is the right tool currently,
Lean is the only viable
tool currently to do this.
Let's tell it what modern
pure mathematics is.
Let's not prove anymore big
theorems at 50 years old.
Let's just define some modern stuff.
Now who's going to do that?
People like me that know the
mathematics, and people that I'm
making, people that I'm training.
I'm a professional trainer,
I've graduated 15 PhD
students and I've
taught 15 undergraduates how
to write math library code.
I can train these people.
When we've done that, it'll happen
somehow and when we've done that,
we can make a database of stuff
and a database of links
between that stuff.
This implies this;
every X with property Y gives
us something with properties.
We can just state the things
that the elders believe.
I don't care about
proving, that's harder.
We can just state it
and then we've made
a big database of our beliefs.
We're writing down our belief system
and that database
doesn't exist anywhere.
This is a completely
new kind of database.
Then what we can do, we can start
attaching things to the labels
saying this was proved by this person
in this journal and then suddenly,
we can start searching it because
you guys can do that stuff.
Then we can start
asking basic questions.
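The database he's describing, objects, formally stated links between them, and provenance labels you can search, can be sketched like this. A toy illustration of mine; all the names here (Statement, Database, the fields) are hypothetical, not part of any real system:

```python
from dataclasses import dataclass, field

@dataclass
class Statement:
    """One link in the database: an implication the elders believe,
    plus provenance saying who proved it and where."""
    name: str          # e.g. a theorem's usual name
    hypothesis: str    # objects and properties assumed
    conclusion: str    # objects and properties concluded
    proved_by: str = ""   # provenance: person
    reference: str = ""   # provenance: journal or paper

@dataclass
class Database:
    statements: list = field(default_factory=list)

    def add(self, s: Statement):
        self.statements.append(s)

    def search(self, keyword: str):
        """Find every statement mentioning an object by name:
        the 'basic questions' step."""
        return [s for s in self.statements
                if keyword in s.hypothesis or keyword in s.conclusion]
```

The point of doing it in Lean rather than in strings, of course, is that the hypotheses and conclusions would be formal statements rather than text, so the links could be checked and searched precisely.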
I don't know what we
can do. That seems
trivial and that would be great.
That will help mathematicians in
the third world.
I have access to the elders;
some of them work in
the same building as me.
But people that don't
have access to the elders,
they can use the database.
Maybe you make tools.
Undergraduates: if I could persuade
the undergraduates to all
work in the software,
then I wouldn't have to mark their work.
Instant feedback,
that's what they want.
Two hundred and seventy
students, and just me;
I find it very very difficult
to give them feedback.
This is one of the biggest
criticisms of my university.
Not enough staff, too many students.
We love the students because
they're paying our salaries.
Too many students and
they want feedback.
They do some Lemmas,
they write some PDF stuff,
they write some stuff on paper,
it goes into the machine,
a week and a half later
it comes back with
some PhD student who's
attempted to mark it,
not always accurately.
They're paying 9,000
pounds a year for that.
This gives them instant feedback.
I think that's worth something,
and then of course,
world domination.
Then I give up. That's it. Thank you.
>> Can you say a little bit
about the features of Lean
that made it well-suited
for this compared to the
other systems you tried?
>> So I looked at four kinds
of foundational theory,
the set theory, which is
what all mathematicians,
if they learn anything at
all, they learn set theory.
Automation is very
difficult with set theory,
and I need automation.
So I gave up on the
set theory on this.
There's simple type theory, Isabelle,
the other big player
really in my mind,
and Isabelle, simple type
theory is too simple.
It's really great for doing
100-year-old analysis.
It's really great for proving
theorems about spheres and things.
But you can't, I mean, you can,
it will be extremely
painful to define a
perfectoid space in Isabelle
because the fundamental
object is a dependent type.
There's a sheaf of rings
on a topological space.
So I've got a topological space,
a very open subset, I've got a ring,
and this is a dependent type,
and to do it in a
simple type theory,
you need an ordinary function.
You can't have a dependent
type, you need a function.
So you take the disjoint
union of all your rings,
and now you've got an
object which isn't really
a sensible mathematical structure.
Then you put ring structures
on subtypes of it,
and now all of a sudden you
can't use any of the type class
machinery because it's
not actually a ring,
it's a disjoint, it's a
big collection of rings.
So you lose the type class system.
So I don't want to do that.
I mean it's fast, it's faster
than Lean because it's simple,
and it's good for
some of mathematics,
but I definitely want
to do all of this.
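The dependent type in question can be sketched in Lean. This is a toy structure of my own for illustration (the names ring_presheaf, F, ring_str, res are mine), not mathlib's actual presheaf definition, which is built through the category theory library:

```lean
import topology.opens

open topological_space

-- A toy "presheaf of rings" on a space X: to each open set U we
-- attach a type F U carrying a ring structure, plus restriction
-- maps. The field `F : opens X → Type*` is exactly the dependent
-- type that simple type theory cannot express directly.
structure ring_presheaf (X : Type*) [topological_space X] :=
(F        : opens X → Type*)
(ring_str : ∀ U, comm_ring (F U))
(res      : ∀ (U V : opens X), V ≤ U → F U → F V)
```

In simple type theory you would instead take one big type containing all the F U at once and carve out subtypes, which is the disjoint-union workaround described above.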
So then there's univalence,
univalent type theory,
and I'll buy that.
I think the univalent
idea looks good.
But they've been around:
Vladimir Voevodsky and his chums
have been around for 15 years,
and you look at what they've
done, and they've done nothing.
They've defined a ring and a module.
If it's taken them 15
years and they've done
nothing, then I'm very skeptical.
Fifteen years, and in Coq
they were really doing stuff.
But Lean's math library,
when I arrived in 2017,
that software was only a
couple of years old really,
and when was Lean 3 released?
>> 2016.
>> 2016.
So within one year,
the amount of stuff in Lean's
math library just completely
dominated what had been done
in 15 years in the univalent world.
I don't know what's wrong
with univalent mathematics.
Of course, I hope that it will work.
Something else just happened.
What was it called? Arend.
Someone's made a new system.
I would love to see univalence work
because univalence is really useful.
We have to work around the fact
that we don't have univalence.
But we can do it because
the system is powerful
enough to work around it.
That was one of the mistakes I
made in formalizing schemes,
it's that I just assumed that
univalence was obvious
because I'm a mathematician.
X was isomorphic to y,
and I'd proved the theorem about x,
and I wanted to deduce
the theory about y.
Now, I was told that
life wasn't so easy
in dependent type theory, and
that's why I started again.
I'd written this formalization for
scheme with two undergraduates,
and I realized that we'd made some,
we had to work around, we had
to do a lot of diagram chasing.
Then I gave it to another
undergraduate and said,
"This is the mistake we've made,
do it in a bit more of a
universal property way."
They did it and it worked fine.
I have lots of evidence that Lean is
good enough: perfectoid spaces.
So simple type theory is too simple.
Univalent type theory, it might work,
but someone has to do the groundwork,
and I'm not going to do finite sets;
finite sets are trivial.
But in Lean, they'd all been done for
me by the people that came before me.
I want to do proper mathematics.
So no univalent system seems ready,
and I think that's it, I
can't think of any more.
I need dependent types,
and I need a sufficiently
advanced system.
>> In terms of Coq.
>> Coq is just not as good as Lean.
Coq doesn't have quotients.
A perfectoid space is a quotient
with a dependent type on it,
that's the answer to your question.
A perfectoid space is a quotient
with a dependent type on it,
and that hence immediately
rules out Coq and Isabelle.
>> Why not?
>> You can't make the quotient.
You can have a big space
with a relation on it,
but you can't make the quotient type.
You just have to work
with a big space.
With the equivalence relation,
you're constantly jumping.
You're in something
called setoid hell.
That's what it's informally known as.
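By contrast, Lean's quotient types let you form the quotient as a first-class type. A minimal sketch of mine in Lean 3 syntax (the names parity and zmod_two are illustrative):

```lean
import data.int.basic

-- An equivalence relation on ℤ: equal remainders mod 2.
instance parity : setoid ℤ :=
{ r     := λ a b, a % 2 = b % 2,
  iseqv := ⟨λ a, rfl,
            λ a b h, h.symm,
            λ a b c h₁ h₂, h₁.trans h₂⟩ }

-- The quotient is a first-class type: its elements are honest
-- equivalence classes, not a big type dragging a relation around.
def zmod_two : Type := quotient parity

-- Two related elements genuinely become equal in the quotient.
example : (⟦(0 : ℤ)⟧ : zmod_two) = ⟦2⟧ :=
quotient.sound (show (0 : ℤ) % 2 = 2 % 2, by norm_num)
```

Without quotient types you would carry the setoid around forever and prove everything respects it by hand, which is exactly the setoid hell being described.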
So yeah, I don't know.
It's just that Lean is
sexier; it has Unicode.
It's a ridiculous thing,
but for mathematicians,
it matters, right?
I'm coming from a different
world where I'm trying to sell
stuff and it just matters
that we have unicode.
Unicode is what we write in.
We write upside-down As on our
bits of paper and in our PDFs,
and that's what I see
on the screen. Hello.
>> Speaking of unicode,
did you consider Agda at
all as a potential tool
for formalizing that?
>> I didn't. I was told that
Agda wasn't really
for mathematicians.
I was told it was more of an
experiment in type theory.
There's no tactic
framework, is that right?
Is it just all terms?
>> Not that I know of.
It looks very different to something
like Coq or Lean as well.
I actually sort of
loved how it looks.
>> The underlying type
theory is similar,
right? It's close to Lean.
But they have more exotic types.
You have these inductive-recursive
types, which aren't in Lean.
I see people talking about
these things on the Lean chats,
and you think they sound interesting,
higher inductive types or whatever.
But actually what I
realized when I formalize
perfectoid spaces was that we
don't need any of that stuff.
Mathematicians just want
structures. You know that?
I was talking to Sebastian,
it's quite interesting.
I said, "You know we
never use induction."
He was like, of course,
you use the natural numbers,
and I'm just thinking,
I don't think we do use
the natural numbers,
not in the definition
of a perfectoid space,
we don't need the natural numbers.
There's no numbers involved.
There's a prime number at some
point later on the theory.
But we're dealing with
complicated objects which are
typically infinite-dimensional.
There's no numbers involved
at all in our number theory.
That's what number theory has become.
So how funny.
So I don't need what Agda
has to offer. Hello.
>> So I wrote a paper a few
years ago about some folks who
had done formal proofs
of Euclidean geometry.
>> Yeah.
>> One of the things
that they found was
when they went and took all of
the rigorous proofs, using
Hilbert's axioms that were
designed to get all the geometric
intuitions out of Euclid,
they found that in fact Euclid had
smuggled in a lot of
intuitions, and they
found a bunch of holes.
I'm curious with your
students doing number theory,
did they find a lot of
systematic holes in the proofs
that appeared in textbooks?
>> Yeah, they find omissions.
An omission is fine, right?
That's the difficult thing,
if you're a PhD student,
you're reading a paper,
and you don't understand it,
it's just because they left a gap
that you can't traverse yourself.
You ask your PhD adviser,
and they just say we can
just do this and this,
you just do the standard technique.
That's interesting because
very early on it gets drilled
into your head that if
you don't understand
something it's probably your fault.
It's only once I've been in
the game for about 10 or 15 years
that I realized that actually if
I don't understand something,
it might be because it's wrong.
It might be because the people
that are writing the paper might
not actually understand the
subject as well as I do.
But that's not the philosophy.
Yes, mistakes are normal.
I got one of my summer students
to formalize all of
that Euclidean stuff.
Hundreds of lines of Lean
code formalizing Euclid's,
I mean Hilbert's, axioms.
They put a whole bunch of
stuff, it was amazing.
That's one of the many things
we learned, all unpublished,
all on some private GitHub
repository completely unavailable.
Yeah, it's one of the many things.
Yeah, sure there are mistakes,
but at the undergraduate level,
we know how to fix them, right?
At the research level, some of
them really are mistakes.
But that's normal.
You get away with it,
that's the point, you
get away with it.
So don't dump on me. Hello.
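A tiny hedged sketch of what such a formalization can look like in Lean (hypothetical names, not the student's actual code): Hilbert-style incidence axioms package naturally as a Lean structure.

```lean
-- Illustrative sketch (hypothetical names): Hilbert-style incidence
-- axioms bundled as a Lean structure, in the spirit of the student
-- project described above.
structure IncidenceGeometry (Point Line : Type) where
  liesOn : Point → Line → Prop
  -- any two distinct points lie on a common line ...
  lineExists : ∀ p q : Point, p ≠ q →
    ∃ l : Line, liesOn p l ∧ liesOn q l
  -- ... and that line is unique
  lineUnique : ∀ p q : Point, p ≠ q → ∀ l m : Line,
    liesOn p l → liesOn q l → liesOn p m → liesOn q m → l = m
  -- there exist three points not all on one line
  nondegenerate : ∃ p q r : Point, ∀ l : Line,
    ¬ (liesOn p l ∧ liesOn q l ∧ liesOn r l)
```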
>> So it relates to
my earlier question.
One thing I've been thinking
about as you've been
talking about the different systems,
and then as well as how the
proof development in the English
or French or any other language pros
would pair with that
and be communicated.
In some sense, the thing you
might want to share wouldn't
be say a Lean proof development
or Coq proof development,
but what is generated from that.
Ideally, it would be checked by
many different systems.
>> That would be nice.
>> As a mathematician,
do you care whether a
particular proof development,
if you sat down and read it,
maybe stepped through
it line by line,
actually carried
any kind of human level
of semantic content,
or is that something that lives
alongside some documentation?
Those things can also
drift sometimes,
say in regular programming.
So what do you think
about that in terms of a future
for using proof assistants?
>> So I've talked to a
lot of mathematicians
about what a proof is and
what they want for a proof.
I'm a formalist, so as
far as I'm concerned,
if some computer came
up and generated
a one billion line long proof
of the Riemann hypothesis,
in some type theory
that could be formally
checked to be correct but gave
us no understanding at all,
I'd be super excited about that.
But I know a whole bunch
of you will either be
super cross or might even reject it.
So within the community,
there's huge diversity of opinions
as to whether we care about
semantic contents of a proof.
But actually, talking to
Leo and Daniel this week,
I did realize that I'm very
selective about which proofs I read.
I tend to only read proofs if I
want to know what the idea is.
Because 99 percent of what's
published is mathematicians just
generalizing ideas that they've
learned in other places,
and applying them in either
different situations
or slightly more general situations.
So you have to learn some proofs.
I'm torn. The formalist part
of me says I don't care.
I just think that why
don't we just make
computers do it all, then
we don't have to work.
No human understands
how their phone works.
Well, most humans don't
understand how their phone works.
Do we really need to
understand how math works?
Then on other days I think,
well, actually that's the point.
There's loads of different
answers to your question.
I personally am a formalist,
but I know people that
would actively be
cross if you suggested that
we should start taking
proofs away from them.
Because for them the beauty is the
idea and the idea can get lost.
But beauty is one thing and
truth is another thing,
and I'm more worried about
truth, that's why I'm here.
I've met mathematicians that say
that you shouldn't believe
any of their papers,
their papers are just
telling stories.
You should enjoy the
beauty of the story,
and they don't really care
if it's right anyway,
they know they've got
it past the system,
so they can put it on their CV.
But if somebody told them that
their theorem was wrong, they
wouldn't be too bothered.
It's extraordinary, the people.
When you actually start
asking delicate questions,
they claim that they don't
really even care if
they're right or wrong.
It's just the fun is the ideas,
and that's what we're doing.
We're just passing
around abstract ideas.
Yeah, because abstract
math is beautiful, right?
A perfectoid space is a
really beautiful thing,
if you're one of the
very small number
of people that can understand that.
So funny, yeah I don't know.
I reject beauty, I want rigor.
But not all of us do.
Is that an answer?
I don't know. Hello.
>> Have you tried saying this to
theoretical computer scientists?
>> I thought that was you guys.
Yeah, I'm speaking at something
called ITP next week,
yeah, in Portland, Oregon,
and I think that will be full of
theoretical computer scientists.
So yeah, I do talk to them sometimes
but I mostly talk to mathematicians.
Yeah, what should I tell them?
Are you theoretical computer
scientist? >> Yeah, I am.
>> Okay. Yeah, what
did they want to hear?
>> They'll love this.
>> Okay. Good. Because the
mathematicians are skeptical,
but yeah, I don't know where
I am between the cracks now.
I want to just make lots of noise
until somebody eventually realizes
that what I'm doing is important,
if it turns out to be important.
But am I crazy?
Surely this stuff will
have an impact in my area.
It's so obvious to me.
It must inevitably have an
impact in my area, right?
That's the selling point,
I just need to convince people.
My impression is that
theoretical computer scientists
have taken it as far as they want.
It's great, they've done
the odd order theorem.
They've done loads of stuff,
but now someone else
has to have a go.
That's my impression. I think
they know that their software
is growing maybe, I don't know.
They're interesting to talk to.
They asked me weird questions.
They start asking me about, how
I come up with ideas and things.
I spent 20 years talking
to number theorists,
and we talked about
technical number theory.
Now all of a sudden
I've talked to people,
they are asking me
really weird questions
about the process I go through
when I'm doing mathematics,
so really interesting talking
to a new group of intelligent people.
