There's a new way to do Mathematics,
which has a number of advantages,
and this new way of doing mathematics
is basically very close to just using a programming language.
[Computer Science ∩ Mathematics]
In Mathematics, people prove theorems -
I mean, that's the productivity of mathematicians -
and there's a problem: sometimes theorems turn out to be wrong!
So it's very embarrassing: people publish something and provide a proof,
but then there's something wrong with the proof. That happens lots of times.
And now we have a new opportunity, using Computer Science technology
We have something called Interactive Proof Assistants
and what you can do with those is, basically, have a dialogue with a computer:
you produce a proof - a mathematical proof -
which can be stored like a computer program
and checked by this or another program to make sure that you haven't cheated.
So that's... Very cool. And there is increasing use of this:
There are actually some conferences where you can submit an artifact, which is this formal proof -
like, computer code which actually proves the theorem -
and then people can check it, and you can be very, very sure that your proof is correct.
People are getting lazy these days, you know? You get all these referee requests:
"I can't read this proof, this is too much! I don't have the time!"
But now we have these computer systems, one of them is called Coq
it's a French system. I think they called it that to upset English speakers.
We could just do mathematics as it has been done using Set Theory
and we'd learn about Sets and Elements and that sort of thing
and just formalize it that way
but I think if we did this, we'd really miss an opportunity:
This Coq system, for example, is actually based on a programming language called Type Theory.
So this Type Theory is at the same time a programming language and a logic, so you can do logical reasoning,
and it's a replacement for the language Mathematicians use, called Set Theory.
It's a more recent development, going back to the 70s,
by the Swedish Mathematician and Philosopher
Per Martin-Löf.
He wanted to capture what is called Constructive Mathematics.
By Constructive Mathematics we mean that we can always actually obtain a result -
that we don't just claim something: classically, for example, you may have a function but no way to calculate its answer.
In Constructive Mathematics there is always a way to obtain an answer.
If you say there exists a number of some property, in classical mathematics you may not know the number.
In constructive Mathematics you can only make this claim if you can actually provide a concrete number.
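As a minimal sketch of this point in Lean (a proof assistant in the same family as Coq; the example is ours, not from the talk): a constructive existence claim is made by handing over a concrete witness together with evidence.

```lean
-- Constructively, "there exists n with n > 3" is proved by
-- supplying a concrete number (4) plus evidence that 4 > 3.
example : ∃ n : Nat, n > 3 := ⟨4, by decide⟩
```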
and Per Martin-Löf, he had nothing to do with computer science, he was really a Philosopher and Mathematician
a very interesting guy
But then people thought: ok, this idea of always getting an answer -
doesn't this correspond to, like, writing a computer program?
And he said: "Yeah, this is like a computer program."
And this has been made more precise and has been used,
so people started basically to implement this Type Theory on a computer.
Type Theory uses the types we have in programming languages, like, for example,
the type of Booleans, the type of Integers...
or the type of functions from Integers to Booleans:
Int → Bool
Input an Integer and output a Boolean.
And so on. These are types.
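For example, a value of the function type Int → Bool can be written in Lean like this (the name `isPositive` is just for illustration):

```lean
-- Put an Integer in, get a Boolean out.
def isPositive : Int → Bool := fun n => decide (n > 0)

#eval isPositive 7     -- true
#eval isPositive (-2)  -- false
```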
And a very important observation is, that we can use types in order to model logical reasoning.
To model logical propositions.
So if you have a proposition, a logical expression,
For example, "There are infinitely many ℙrime Numbers"
∀p∈ℙ : ∃q∈ℙ | q > p
is a proposition
or, for every two ℕatural Numbers, one is less than, equal to or greater than the other
∀a,b∈ℕ: a<b ∨ a=b ∨ a>b
These are propositions.
And the observation is that, for every proposition, we can associate a type.
And that's the type of evidence or proof for this proposition.
This is called the Curry-Howard Equivalence
So whenever you have a proposition,
you find a type such that,
if there is a program of the corresponding type, then you know the proposition holds.
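To make the correspondence concrete, here is a minimal Lean sketch (our example, not from the talk): the proposition "A and B implies B and A" is a type, and a program of that type is a proof of it.

```lean
-- Curry-Howard: this program *is* a proof of A ∧ B → B ∧ A.
-- It takes the evidence pair apart and swaps the components.
def andSwap (A B : Prop) : A ∧ B → B ∧ A :=
  fun ⟨a, b⟩ => ⟨b, a⟩
```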
Sean: So this is computing getting its elbows in and getting stuck into some mathematics
Yes, and I would say, ok, using Type Theory is very natural to do on a computer,
whereas Set Theory seems to be rather alien to use on computers.
In Set Theory, for example, there is the notion of a function, but it's a derived concept:
You have a set of pairs with certain properties
{p|p=(a,b), *certain properties hold*}
There is not this idea like in Type Theory
That a function is something where you put something in and get something out
f : a → b
Which corresponds to how we think in Computer Science
So a function in Type Theory is something we can actually compute with,
whereas a function in Mathematics and Set Theory is something like
the set of all pairs of inputs with corresponding outputs -
but a function in Mathematics may be something you cannot compute.
So I'm always saying this is not a good name, to call it a `function`, because
a function which doesn't function shouldn't be called a function.
Sean: dysfunctional functions
Thorsten: Yeah dysfunctional functions
Type Theory has got some other interesting features, which have been
discovered or made explicit more recently
and this is work which was started by a mathematician, actually:
He's called Vladimir Voevodsky, and he's a Fields Medalist at the Institute for Advanced Study in Princeton.
He is doing something called Homotopy Theory
So Homotopy Theory is a quite abstract area of Mathematics
where we try to understand geometric objects - very abstract geometric objects -
and you can identify geometric objects which you can transform like plastic... or play dough,
where you can transform one into the other with continuous transformations.
and first of all that doesn't seem to have anything to do with the Type Theory I mentioned
but Vladimir Voevodsky observed that
there actually is a close connection between Type Theory and Homotopy Theory
which leads to this new idea called Homotopy Type Theory (HoTT)
and the idea here is, that when, in Type Theory, you have a proof that two things are equal,
this is like surfaces where you have a path between them
and then you can also ask yourself: "Are two equalities equal?" - "Is there a path between paths?"
A continuous transformation of paths
Ok, that's...
Sean: It's getting quite abstract.
Thorsten: It's getting very abstract.
Sean: Is this like a... these play dough objects are a bit like a way of visualizing these ideas? Is that what he has discovered?
Thorsten: No, it's not really. It's rather surprising. Because we started with something that is called
Type Theory which is a programming language and a logic as I have explained.
And then there is this Homotopy Theory, which has completely different roots, in
what is called Algebraic Topology.
Topology is an idea of describing continuous phenomena, like the ℝeal numbers
and surfaces and so on, which are important in physics.
And then, in Homotopy Theory, people do this more and more abstractly.
They try to distill the essence of these objects.
They are infinite-dimensional - there's no really good intuition for what's going on.
But then, the surprising thing - and this is one of the surprising reuses of mathematics:
you go completely abstract in one direction and you come out somewhere completely different.
And this is a relation between Type Theory and Homotopy Theory, ok?
So it's not very direct; it's not like we have a type and then we have a picture which corresponds to this type
It's not like this.
It's much more indirect
but the mathematical principles underlying this Homotopy Theory and Type Theory are matched up
Surprisingly well.
Maybe one question is "What is the pay-off?", I mean, what does this buy us?
And I think there are some very exciting opportunities now,
where Type Theory shows its superiority over Set Theory.
And this has to do with abstraction:
If you want to build large mathematical libraries,
you have to climb a ladder of abstract reasoning
You don't want to do everything on a concrete level, you have to do things abstractly.
That's the power of Mathematics.
Type Theory, and especially Homotopy Type Theory, is in its very structure designed in a way
that you cannot talk about the details of how things are actually implemented.
so for example, ℕatural Numbers: In Set Theory, ℕatural Numbers are Sets. That's maybe surprising.
But in Set Theory, Everything is a Set.
And there is a particular Encoding of ℕatural Numbers using Sets using {curly brackets}
but there is more than one! So you can use Sets to encode ℕatural Numbers in various ways.
So, for example, in the standard encoding in Set Theory there is a very strange artifact:
one number is an element of every bigger number.
It's true that 2 is an element of 3:
0∈1∈2∈3∈4∈...
That doesn't really make any sense, right?
It's just the way it's set up. Which means, when you work in Set Theory, you can see the implementation details.
How numbers are actually represented using {curly brackets}.
Now that's something that doesn't really matter when you think about numbers: 2∈3 is silly.
But you can't hide this
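For reference, in the standard (von Neumann) encoding each number is the set of all smaller numbers, which is exactly why 2 ∈ 3:

```
0 = {}
1 = {0}       = {{}}
2 = {0, 1}    = {{}, {{}}}
3 = {0, 1, 2} = {{}, {{}}, {{}, {{}}}}
```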
In Type Theory, when you introduce ℕatural Numbers in Type Theory,
you cannot talk about the encoding - how they are actually represented -
So for example, numbers are often represented following an idea of an Italian mathematician
called Peano, who said:
0 is a ℕatural Number:
0∈ℕ
and every other ℕatural Number is a previous number +1:
∀n∈ℕ : succ(n)∈ℕ (where succ(n) is n+1)
and that's a way we can build all ℕatural Numbers.
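Peano's two rules translate directly into an inductive datatype; here is a minimal Lean sketch (the name `Peano` is ours):

```lean
-- 0 is a number, and succ n ("n + 1") is a number for every number n.
inductive Peano where
  | zero : Peano
  | succ : Peano → Peano

-- The number 3 is succ (succ (succ zero)).
def three : Peano := .succ (.succ (.succ .zero))
```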
But there are other ways to do this, right? When you write numbers in this system, they become very long,
because you write +1+1+1+1; that's not a very good system.
So what we'd rather like to do is use a binary encoding of numbers,
using 0 and 1. That's much shorter - I mean, still longer than decimal, but
it's much shorter than this +1+1+1.
And these are two ways of encoding the ℕatural Numbers.
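The second encoding can be sketched in Lean too (the constructor names `b0`/`b1` are our own, and this naive version allows leading zeros, so representations are not unique):

```lean
-- A naive binary encoding: b0 n stands for 2*n, b1 n for 2*n + 1.
inductive Bin where
  | zero : Bin          -- the number 0
  | b0   : Bin → Bin    -- append binary digit 0 (doubles the number)
  | b1   : Bin → Bin    -- append binary digit 1 (doubles and adds one)

-- 6 is 110 in binary: b0 (b1 (b1 zero)).
def six : Bin := .b0 (.b1 (.b1 .zero))
```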
Now in Type Theory, you cannot express the difference between these two encodings.
Because they are actually in a way the same:
They exhibit the same concept.
So you automatically hide implementation details, like, that's what we wanted to do in Computer Science.
We want to hide implementation details because, like, if you want to change one implementation for another,
we don't want to have to go all the way up and rewrite everything which uses it.
We want to say "Ok I'll just plug something else in and it works."
So you want to be able to plug in, if we use the Peano Numbers and say "that's too cumbersome"
"and I'll plug in my binary numbers" and nothing should change."
And that's how Type Theory is designed.
And from this Homotopy Type Theory comes a very important principle
which is called the Univalence Principle
(A = B) ≅ (A ≅ B)
And it basically says that if two things are equivalent (≅), like the binary and the Peano numbers,
then they are equal (=).
So not only can we not talk about implementation details;
because we cannot talk about implementation details,
two things which behave the same from the outside (≅) are actually considered to be the same (=).
Sean: Is this massively changing the way that people think about things in other areas, then?
Thorsten: Yes, so let's see:
It has the potential of doing this.
So Homotopy Type Theory has been suggested as a new Foundation of Mathematics.
People can just move into it, they can learn logical reasoning and abstract mathematics
just coming from this programming angle.
So I think there is an exciting opportunity.
And I think, looking back at Set Theory: we have these new tools, Proof Assistants, to do interactive proofs,
and in a way it would be a shame if we used this new technology to
formalize the mathematics of the 20s and 30s - the mathematics of, let's say, yesterday.
But there is an exciting new opportunity here, to really use new mathematical ideas, new foundations
when we formalize mathematics.
...in Type Theory means you can prove the Excluded Middle for this particular predicate:
you can prove, for all numbers, either prime(n) or not prime(n)...
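That closing remark can be made concrete in Lean; here is a sketch using evenness instead of primality, so it runs without extra libraries (`evenOrNot` is our name):

```lean
-- For a decidable predicate, the Excluded Middle holds constructively:
-- Decidable.em computes which side of p ∨ ¬p actually holds.
theorem evenOrNot (n : Nat) : n % 2 = 0 ∨ ¬ (n % 2 = 0) :=
  Decidable.em (n % 2 = 0)
```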
