Zulip Chat Archive
Stream: new members
Topic: beginner question about proof by function
rzeta0 (Jan 15 2025 at 12:44):
I am trying to learn about "dependent type theory" and ideas like "functions as proof".
I was watching this computerphile video: "Propositions as Types" https://www.youtube.com/watch?v=SknxggwRPzU
A key point made is that a computer can't complete a proof of "for all natural numbers n, a property P(n) holds" simply by verifying it for 0, 1, 2, 3, ... and so on.
So that motivates another kind of proof.
This is the point where I am less certain.
I think the idea is that:
- n is of type ℕ and the proposition P(n) is of type Prop.
- We need to show that if an object has type ℕ then there exists a function that maps it to an object of type P(n).
- Because this is "there exists", any function will do, as long as it is indeed ℕ → Prop(n).
Is this correct? Is finding such a function sufficient? It seems "too good to be true".
I guess my uncertainty arises from the fact that, from a set theory perspective, proving equality of sets usually means showing both directions: x ∈ A → f(x) ∈ B and y ∈ B → f⁻¹(y) ∈ A.
Could there be elements of P(n) that aren't in ℕ?
Edward van de Meent (Jan 15 2025 at 13:46):
As you say, n : ℕ gives you P(n) : Prop. This means we have P : ℕ → Prop, meaning that n has to be of type ℕ or else P(n) doesn't type check!
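As a minimal Lean sketch of this type-checking point (the predicate P below is a made-up example, and Mathlib is assumed for the ℕ notation):
import Mathlib   -- assumed, for the ℕ notation

def P (n : ℕ) : Prop := n + 0 = n   -- a concrete, made-up predicate P : ℕ → Prop

#check P 3        -- P 3 : Prop, because 3 : ℕ
-- #check P "hi"  -- would not type check: "hi" is not a ℕ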
Bolton Bailey (Jan 15 2025 at 13:51):
We need to show that if an object has type ℕ then there exists a function that maps it to an object of type P(n)
This is a slightly verbose way to say it; you could just say "We need to show there exists a function that maps it to an object of type P(n)".
Because this is "there exists" any function will do, as long as it is indeed ℕ → Prop(n).
Yes. By way of analogy, there are many different proofs of the Pythagorean theorem, but you only need one.
Could there be elements of P(n) that aren't in ℕ?
Well, the elements of P(n) are elements of P(n); I'm not sure why you would think they would be in ℕ. If you mean are there arguments to P that aren't in ℕ, then Edward is right, the answer is no.
Edward van de Meent (Jan 15 2025 at 13:57):
btw, i feel there is an important distinction between ℕ → P(n) and (n : ℕ) → P(n)
Edward van de Meent (Jan 15 2025 at 13:58):
strictly speaking, the proof that for all n, P(n) is true has type (n : ℕ) → P(n)
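A minimal Lean sketch of this, with n + 0 = n standing in for P(n) and Mathlib assumed for the ℕ notation:
import Mathlib   -- assumed, for the ℕ notation

-- a proof of "for all n, n + 0 = n" is literally a dependent function:
-- it takes n : ℕ and returns a proof of the proposition n + 0 = n
example : (n : ℕ) → n + 0 = n := fun n => rfl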
rzeta0 (Jan 15 2025 at 17:14):
Thanks both.
Edward - yes I can see the distinction, thanks.
Bolton - thanks for clarifying. This is all new and unfamiliar so it will take some time to sink in.
What I think will help is a "plain English" example of a "proof by function" - and I can't seem to think of one.
Here is me thinking aloud:
- Task: prove that 1 + 2 + 3 + ... + k is k * (k+1)/2, where k is a natural number.
- A few days ago I would have done this by induction or algebra.
- Now I need to find a function that maps from something to something. What is the domain and range of this function?
- Since I want to prove "for all natural numbers k" I guess the domain of that function must be (k : ℕ), as Edward clarified above.
- Actually I think the previous line is wrong. We need to prove for all expressions of the form "1 + 2 + .. + k". So the domain is the set of expressions of the form "1 + 2 + .. + k". This happens to match 1-to-1 with all natural numbers k.
- The range is the set of expressions of the form "k*(k+1)/2".
- So what does such a function look like? It takes, for example, an input "1+2" and outputs "2*(2+1)/2".
- Here is a Python example of such a function:
def f(expression):
    # take the last summand of "1+2+...+k" as the string k
    k = expression.split("+")[-1].strip()
    # build the string "k*(k+1)/2"
    return k + "*(" + k + "+1)/2"
This takes an input 1+2 and returns 2*(2+1)/2, or an input 1+2+3 to 3*(3+1)/2, and so on.
However this function is not a proof. It is just string manipulation.
So I'm clearly missing something important - help?!
Bolton Bailey (Jan 15 2025 at 18:08):
This k*(k+1)/2 example is tricky because the usual proof is by induction, and it is hard to wrap your mind around the idea that you are constructing a function that produces a proof of equality, but no actual data.
The example that helps me when thinking about propositions as types is "there are infinitely many primes". In various Lean tutorials and in mathlib, this proposition is written as
theorem exists_infinite_primes (n : ℕ) : ∃ p, n ≤ p ∧ Prime p
If you read this closely, you see that what this is saying more precisely is "for all natural numbers n, there exists a p such that p is at least n and p is prime".
So what does a proof of this fact look like? In the proposition-as-types philosophy, it is a function that takes a natural number n and returns another natural number p. (Technically, it also returns other things like a proof that p >= n and a proof that p is prime, but if that is confusing, I would ignore it temporarily until you understand the function between the naturals)
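A small Lean sketch of this function reading, assuming Mathlib (where the statement above lives as Nat.exists_infinite_primes):
import Mathlib   -- assumed; provides Nat.exists_infinite_primes

-- applying the proof, as a function, to the number 100 gives a proof
-- of the corresponding statement about 100
example : ∃ p, 100 ≤ p ∧ Nat.Prime p := Nat.exists_infinite_primes 100

-- inside another proof, the p it "returns" can be unpacked
example : ∃ p : ℕ, 100 ≤ p := by
  obtain ⟨p, hp, -⟩ := Nat.exists_infinite_primes 100
  exact ⟨p, hp⟩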
Bolton Bailey (Jan 15 2025 at 18:21):
What concretely is this function that takes a natural n and returns a prime greater than n? To find out you have to look at the proof. The answer is:
Spoiler
Bolton Bailey (Jan 15 2025 at 18:23):
If Lean were not finicky, you would actually be able to evaluate this by doing something like
#eval (Nat.exists_infinite_primes 5).1.1
to get 11. Alas, the creators of Lean have no sense of adventure.
Bolton Bailey (Jan 15 2025 at 18:27):
Once you understand this, you can go deeper. You realize that n ≤ p is defined as "there exists a natural a such that n + a = p". So actually, the return type of Nat.exists_infinite_primes contains two naturals hidden within it.
Bolton Bailey (Jan 15 2025 at 18:32):
But the confusing part is maybe "what is the type of proofs of n + a = p"? At this point, we are sort of at the primitive base of type theory. I'm not sure there's a better explanation than to say that for any x and y, if x really does equal y, we pretend that there is a set containing one element which is "the definitive proof that x equals y", and the set containing this one element is "the set of proofs that x = y", which is what we write in Lean/Type theory as simply x = y. If x isn't really equal to y, we declare by fiat that "the definitive proof that x equals y" doesn't exist, and that "the set of proofs that x = y" is empty.
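A tiny Lean illustration of that last idea (a sketch; core Lean only):
-- 2 + 2 = 4 is a type with, in effect, one canonical inhabitant: rfl
example : 2 + 2 = 4 := rfl

-- 2 + 2 = 5 is also a perfectly good type, but it is empty:
-- no proof term of it can ever be written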
Bolton Bailey (Jan 15 2025 at 18:38):
So to return to your question:
- Now I need to find a function that maps from something to something. What is the domain and range of this function?
- Since I want to prove "for all natural numbers k" I guess the domain of that function must be (k : ℕ), as Edward clarified above.
Yes, the domain is the type of natural numbers. The range is funny because it is different depending on what the input is (that's why they call it dependent type theory). But for input k, the range is simply "the set of proofs that 1 + 2 + 3 + ... + k = k * (k+1)/2". This may strike you as talking in circles :shrug:.
The range isn't a set of expressions. It's prudent to think that for every value of k, the corresponding range set will have exactly either 0 elements or 1 element.
nrs (Jan 15 2025 at 19:02):
#new members > How does one actually construct proofs without using "by"? might be of interest
Mitchell Lee (Jan 15 2025 at 19:45):
A great way to learn how to construct proof terms is to play this game: https://adam.math.hhu.de/#/g/trequetrum/lean4game-logic
rzeta0 (Jan 16 2025 at 14:03):
Thanks again everyone for the replies. I will have to get my head around it slowly and also try the links you suggested.
I'm also watching YouTube videos to see if someone says a form of words that gives me the "aha" moment!
Matt Diamond (Jan 19 2025 at 03:13):
However this function is not a proof. It is just string manipulation.
@rzeta0 In a sense it is a proof... it's a proof that if you have a string, you can produce a string. In other words, it's a proof that the type String
is nonempty.
In fact, you can think of any function as a proof that the return type of the function is nonempty, with the output serving as the witness to this fact.
Giacomo Stevanato (Jan 19 2025 at 10:12):
rzeta0 said:
- A few days ago I would have done this by induction or algebra.
- Now I need to find a function that maps from something to something. What is the domain and range of this function?
These two are not exclusive to each other. The way you'll implement that function will likely be by using induction (also called recursion when using functions). Just like when you compute a Fibonacci number you can use F (n - 1) to get the (n-1)th Fibonacci number, when doing proofs using functions you can (kinda) use my-theorem (n - 1) to get a proof that the theorem holds for the case n - 1.
(In practice you won't exactly use my-theorem (n - 1) because n - 1 might not be defined, but the idea is similar and will also involve handling the base case where n = 0.)
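Here is a minimal Lean sketch of that idea for the running sum example; sumTo and sum_formula are made-up names, the statement is phrased as 2 * sum = k * (k + 1) to avoid natural-number division, and Mathlib is assumed for the ℕ notation and the ring tactic:
import Mathlib   -- assumed, for the ℕ notation and the ring tactic

def sumTo : ℕ → ℕ               -- "1 + 2 + ... + k" as a recursive function
  | 0     => 0
  | k + 1 => sumTo k + (k + 1)

-- the proof is itself a recursive function: the k + 1 case calls
-- sum_formula k (the "my-theorem (k)" step), just like Fibonacci calls itself
theorem sum_formula : (k : ℕ) → 2 * sumTo k = k * (k + 1)
  | 0     => rfl                -- base case: 2 * 0 = 0 * 1
  | k + 1 => by
    have ih := sum_formula k    -- the proof for the smaller case
    simp only [sumTo]
    rw [mul_add, ih]
    ring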
rzeta0 said:
- Since I want to prove "for all natural numbers k" I guess the domain of that function must be (k : ℕ), as Edward clarified above.
- Actually I think the previous line is wrong. We need to prove for all expressions of the form "1 + 2 + .. + k". So the domain is the set of expressions of the form "1 + 2 + .. + k". This happens to match 1-to-1 with all natural numbers k.
There isn't much of a difference between the two, because considering all natural numbers k and then the expression 1 + 2 + ... + k is equivalent to considering all the expressions of the shape 1 + 2 + ... + k, except the former is much easier to state.
rzeta0 said:
So what does such a function look like? It takes, for example an input "1+2" and outputs "2*(2+1)/2".
This is wrong. The function should not output 2 * (2 + 1) / 2 but instead a proof that 1 + 2 is equal to 2 * (2 + 1) / 2. This means the return type should be 1 + 2 = 2 * (2 + 1) / 2.
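In Lean, this tiny special case even checks directly, since both sides compute to 3 in natural-number arithmetic (a sketch; core Lean only):
-- the return type is the equation itself; rfl is a term of that type
example : 1 + 2 = 2 * (2 + 1) / 2 := rfl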
rzeta0 (Jan 19 2025 at 13:50):
Matt Diamond said:
In fact, you can think of any function as a proof that the return type of the function is nonempty, with the output serving as the witness to this fact.
This is zooming into the core of my misunderstanding. I can understand that such a function provides a witness and therefore is useful for a "there exists ∃" proof.
But what kind of functions can support a "for all ∀" proof?
rzeta0 (Jan 19 2025 at 13:53):
Giacomo Stevanato said:
This is wrong. The function should not output 2 * (2 + 1) / 2 but instead a proof that 1 + 2 is equal to 2 * (2 + 1) / 2. This means the return type should be 1 + 2 = 2 * (2 + 1) / 2.
OK - this is again zooming into the core of my misunderstanding. We have found something I didn't understand so will spend some time thinking about this specific point - that the return type must be A=B, not just B.
rzeta0 (Jan 19 2025 at 14:01):
I'm trying to read the overview paper by P. Wadler that lots of sources link to (pdf).
It is a great paper, and generally not too difficult to read - except it introduces this topic a bit too quickly for me. Perhaps everyone else "gets it" and I just don't.
In 1934, Curry observed a curious fact, relating a theory of functions to a theory of implication [13]. Every type of a function (A → B) could be read as a proposition (A ⊃ B), and under this reading the type of any given function would always correspond to a provable proposition. Conversely, for every provable proposition there was a function with the corresponding type. Subsequently, Curry and Feys [14] extended the correspondence from not merely types and propositions to also include terms and proofs, and to hint at the relation between evaluation of terms and simplification of proofs.
Eric Wieser (Jan 19 2025 at 14:07):
rzeta0 said:
But what kind of functions can support a "for all ∀" proof?
A theorem statement with a ∀ x is just another way of saying that it is a function that takes an argument x.
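For example (a minimal sketch, with n + 0 = n as a stand-in statement and Mathlib assumed for the ℕ notation), the ∀ spelling and the function-type spelling elaborate to exactly the same type, and the proof term is a fun taking the argument n:
import Mathlib   -- assumed, for the ℕ notation

example : ∀ n : ℕ, n + 0 = n := fun n => rfl                      -- the proof is a function of n
example : (∀ n : ℕ, n + 0 = n) = ((n : ℕ) → n + 0 = n) := rfl     -- two spellings, one type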
Philippe Duchon (Jan 19 2025 at 15:49):
Matt Diamond said:
rzeta0 In a sense it is a proof... it's a proof that if you have a string, you can produce a string. In other words, it's a proof that the type String is nonempty.
Actually, it is not even that. You can define the identity function on an empty type. (It does prove that if you have a string then you can produce a string, but that in itself does not prove that strings exist)
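A minimal sketch of this point, using Lean's standard uninhabited type Empty:
-- an "identity" function exists even on the uninhabited type Empty,
-- so merely having some function of type α → α says nothing about α being inhabited
example : Empty → Empty := fun x => x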
Dan Velleman (Jan 19 2025 at 16:01):
I would suggest that you read (again?) Chapters 3 and 4 of TPIL. The ideas are all there, although it may take time to wrap your mind around them. The key points are:
- If P is a proposition, then it is a type, and an object of type P is a proof of P. This may seem strange. You might think it would make more sense to say that for each proposition P, there is a corresponding type of proofs of P. But it turns out to be convenient to say that P is literally the type of proofs of P. In particular, it leads to the following simplifications:
- The arrow for implication is literally the same as the arrow for function types. In other words, if P and Q are propositions, then the implication P → Q is the type of functions from P to Q. Again, this seems strange, but it is convenient because it means that all rules for using functions apply to implications. For example, consider the following rule for working with functions: if you have f : α → β and x : α, then x is in the domain of f, so you can apply f to x to get f x, which has type β. Similarly, if you have h1 : P → Q and h2 : P, then h2 is in the domain of h1, so you can apply h1 to h2, getting the term h1 h2, which has type Q. This is the logical rule modus ponens: if you have proofs of P → Q and P, then you can combine them to get a proof of Q. So there is no need to tell Lean the modus ponens rule; you get it for free by identifying implications with function types. Also, if you can show how, given a proof of P, to produce a proof of Q, then you are justified in asserting the implication P → Q. (This is the "introduction rule" for →.) There is no need to tell Lean this; you get it for free, because you can use the fun notation to define a function mapping proofs of P to proofs of Q, and that function will have type P → Q, which means it is a proof of P → Q.
- The universally quantified proposition ∀ (x : α), P x is, literally, the function type (x : α) → P x. Again, this means that there is no need to state the logical rules for dealing with universal quantifiers; you get them for free from the rules for functions. For example, if you have h : ∀ (x : α), P x, which means h : (x : α) → P x, then if you also have a : α, then a is in the domain of h, so you can apply h to a to get h a, which has type P a. This is the logical rule of universal instantiation (or universal elimination): if you know ∀ (x : α), P x, then you can conclude P a, for any a of type α. Also, if you can show how, given an arbitrary x of type α, to produce a proof of P x, then you are justified in concluding ∀ (x : α), P x. (This is the introduction rule for the universal quantifier.) There is no need to define this as a new rule in Lean; you can use fun notation to define a function that, given any x of type α, produces a proof of P x, and that function has type (x : α) → P x, which is the same as ∀ (x : α), P x, so it is a proof of ∀ (x : α), P x.
Things didn't have to be done that way in Lean. Implication and universal quantification could have been defined more like the way the other logical symbols are defined. But then you'd need logical rules for working with them. Those rules would say, in effect, that implication and universal quantification work just like function types. The designers of Lean decided that, if implication and universal quantification were going to work just like function types, then it was simpler to simply define them to be function types. It takes a little getting used to, but it is actually quite convenient.
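A compact Lean sketch of the four rules described above (the propositions and the predicate here are arbitrary placeholders):
example (P Q : Prop) (h1 : P → Q) (h2 : P) : Q := h1 h2                      -- modus ponens = application
example (P Q : Prop) (hq : Q) : P → Q := fun _ => hq                         -- → introduction = fun
example (α : Type) (P : α → Prop) (h : ∀ x : α, P x) (a : α) : P a := h a    -- ∀ elimination = application
example (α : Type) : ∀ x : α, x = x := fun x => rfl                          -- ∀ introduction = fun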
rzeta0 (Jan 19 2025 at 17:33):
Thanks Dan for taking the time to write out a significant explanation.
I will print it out and work through it with pen and paper tomorrow.
(Wadler's paper section 6 finally clarified what introduction and elimination rules are - I think it is very well explained there, if any other beginner is struggling with TPIL's explanation).
Ilmārs Cīrulis (Jan 19 2025 at 17:47):
Soon enough you will have a better understanding than I do. :thumbs_up: :octopus:
I just follow my nose; tinkering with the Rocq prover (formerly Coq) and now Lean for many years helps, without much deeper understanding, just "intuition".
Vlad Tsyrklevich (Jan 19 2025 at 18:13):
https://www.cambridge.org/core/books/type-theory-and-formal-proof/0472640AAD34E045C7F140B46A57A67C is also a resource you may be interested in. It's an introductory book that introduces type theory, develops a pen-and-paper dependent-type formal system, and then actually goes and writes some proofs in it. I read it as I was getting started with Lean and it helped clear up some ideas.
Matt Diamond (Jan 19 2025 at 18:49):
Philippe Duchon said:
Matt Diamond said:
rzeta0 In a sense it is a proof... it's a proof that if you have a string, you can produce a string. In other words, it's a proof that the type String is nonempty.
Actually, it is not even that. You can define the identity function on an empty type. (It does prove that if you have a string then you can produce a string, but that in itself does not prove that strings exist)
Good point, you're right. A function String → String is more like Nonempty.intro: if you give me a value of type String, then I can give you a proof that String is nonempty.
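In Lean terms, a small sketch of the difference:
-- the bare function proves nothing about String being inhabited...
example : String → String := fun s => s

-- ...but applying Nonempty.intro to an actual string does
example (s : String) : Nonempty String := Nonempty.intro s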
rzeta0 (Jan 20 2025 at 16:11):
I have now re-read both Dan's reply and Wadler's exposition paper.
I am happy with almost everything that has been said and have narrowed down my confusion to the following.
I think the error I was making before was trying to think of examples in my head that used simple objects and not propositions.
For example, given a proof objective P → Q, I was trying to think of a function that is the proof, and I was imagining P to be something like 3 : ℕ and Q to be something like 5 : ℝ.
I now understand P and Q need to be propositions, which seems obvious in retrospect. And the function-as-a-proof needs to take a proposition as input, and a proposition as output. In fact a proposition of type P as input and a proposition of type Q as output.
So here is a better (I hope) example:
- proof objective: "n is a natural number" → "2n is even"
- this is of the form P → Q
- we need to find a function that maps P → Q
- I had trouble with this if I think about functions like y(x) = x^2
- I think the solution to this trouble is to think of programs with procedural steps - BUT which also conform to commonly agreed logically valid steps
- so I can now use things like lemmas and logical inferences in my function ... which is unfamiliar when thinking about school functions like y(x) = x^2 .. indeed I can use the introduction and elimination rules we have talked about
- this allows me to take the input "n is a natural number" and construct "2n is a natural number"; we can think of this as a function g1
- I can then construct a proposition "there exists a natural number k such that 2k = 2n", evidenced by choosing k = n, which conforms to the definition of "even". I can call this step a function g2
- so applying g1 and then g2 (function composition) to the input P gives me the output Q - and so my function is indeed a function-as-proof
Have I finally got that right?
As a thought experiment I'm trying to think what elements (terms) of type "n is a natural number" look like. I can't think of any.
Dan Velleman (Jan 20 2025 at 16:33):
rzeta0 said:
And the function-as-a-proof needs to take a proposition as input, and a proposition as output.
One small correction: it takes a proof of a proposition as input, and gives a proof of a proposition as output.
I know that when you see h : P in the tactic state, it is tempting to think that means that h is the proposition P. But what it really means is that h is a proof of the proposition P.
Julian Berman (Jan 20 2025 at 16:45):
Another minor bit of pedantry is that "n is a natural number" is not a proposition, nor is n : ℕ a proof of a proposition, so strictly speaking you might instead think of "n, a natural number, with no other restriction" or something as your proposition (or otherwise you have to again widen what you've learned to realize P can be any type, and then your example function is correct as-is).
Dan Velleman (Jan 20 2025 at 17:01):
Another way to put Julian's point: Your example "if n is a natural number then 2n is even" would probably be written in Lean as ∀ (n : ℕ), Even (2 * n) -- a universally quantified statement, not an implication. But it is still the case that a proof of this proposition is a function; it takes a natural number n as input, and gives a proof of Even (2 * n) as output.
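A sketch of that function in Lean, assuming Mathlib for ℕ and Even, and choosing the witness k = n as suggested above:
import Mathlib   -- assumed, for ℕ and Even

-- the proof takes n as input and returns a proof of Even (2 * n);
-- the witness for the "exists" hidden inside Even is n itself (k = n)
example : ∀ (n : ℕ), Even (2 * n) := fun n => ⟨n, two_mul n⟩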
rzeta0 (Jan 20 2025 at 17:13):
Dan Velleman said:
One small correction: it takes a proof of a proposition as input, and gives a proof of a proposition as output.
I know that when you see h : P in the tactic state, it is tempting to think that means that h is the proposition P. But what it really means is that h is a proof of the proposition P.
Yeah - I'm still getting used to this :)
rzeta0 (Feb 01 2025 at 23:09):
Vlad Tsyrklevich said:
https://www.cambridge.org/core/books/type-theory-and-formal-proof/0472640AAD34E045C7F140B46A57A67C is also a resource you may be interested in. It's an introductory book that introduces type theory, develops a pen-and-paper dependent-type formal system, and then actually goes and writes some proofs in it. I read it as I was getting started with Lean and it helped clear up some ideas.
I got a copy of this - and I'm only a few pages in, and so far it is really good.
If there is a list of recommended resources on the main Lean website, this should be on there, imho.
It provides a "from basics" explanation of lambda calculus, and works up to the calculus of constructions.
More than the content itself, it is written (so far) in an easy to read way, and I think it will answer all those questions about "how does it work" newcomers to Lean like me have - all in one coherent place.
Last updated: May 02 2025 at 03:31 UTC