Zulip Chat Archive

Stream: Type theory

Topic: why proof irrelevance?


Ping J (Jul 15 2025 at 12:51):

what purpose does proof irrelevance serve?

i.e. what if we treat theorems as normal types and each distinct proof as a distinct object?
what problems would such an implementation introduce?

is proof irrelevance there for logical reasons or for efficiency?

thanks~

Kenny Lau (Jul 15 2025 at 12:57):

I'm not really familiar with this but I suspect it might have to do with allowing propositions to quantify over any type

Aaron Liu (Jul 15 2025 at 12:58):

I think it's so we don't have to worry about how something is proved

Riccardo Brasca (Jul 15 2025 at 13:01):

It makes things like Fin.ext_iff true. Otherwise one can have i.val = j.val but with different proofs. This is completely crazy from a mathematician's point of view.
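
For instance, in Lean 4 this is exactly why the following works (a sketch, using the core lemma Fin.ext): the isLt proofs carried by i and j are identified automatically by proof irrelevance, so equality of the val fields is enough.

example (n : Nat) (i j : Fin n) (h : i.val = j.val) : i = j := Fin.ext h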

Fernando Chu (Jul 15 2025 at 13:01):

Proof irrelevance, in particular uniqueness of identity proofs (i.e. (p q : a = b) -> p = q), makes types behave more like sets, otherwise they behave more like infinity groupoids.
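
In Lean, Eq lives in Prop, so uniqueness of identity proofs even holds definitionally; a minimal check:

example (α : Type) (a b : α) (p q : a = b) : p = q := rfl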

Fernando Chu (Jul 15 2025 at 13:03):

Here is an example: if you have p q : a = b and t : P a then p ▸ t will no longer be (automatically) the same as q ▸ t. You would have to prove that p = q and then the lemma p = q -> p ▸ t = q ▸ t.
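
In Lean, with proof irrelevance built in, the two transports agree by rfl; a sketch:

example (α : Type) (P : α → Type) (a b : α) (p q : a = b) (t : P a) :
    p ▸ t = q ▸ t := rfl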

Fernando Chu (Jul 15 2025 at 13:04):

So in particular, whenever you have a variable X : Type you will also need to assume (a b : X) (p q : a = b) -> p = q.

Ping J (Jul 15 2025 at 13:05):

i see.

Ping J (Jul 15 2025 at 13:06):

cheers to all :_)

Notification Bot (Jul 15 2025 at 13:06):

Ping J has marked this topic as resolved.

Chris Hughes (Jul 15 2025 at 15:44):

It also means that things like the powerset are in the same universe. If Set X := X -> Type then Set X : Type 1.
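
Concretely (a sketch; Mathlib's Set is defined essentially the first way):

def PropSet (X : Type) : Type := X → Prop    -- stays in the same universe as X
def TypeSet (X : Type) : Type 1 := X → Type  -- using Type instead forces a universe bump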

suhr (Jul 15 2025 at 17:47):

A more technical answer: https://rocq-prover.org/doc/V9.0.0/stdlib/Stdlib.Logic.Berardi.html

Impredicativity ∧ excluded middle → proof irrelevance. And you want impredicativity for power and convenience.
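
Impredicativity here means that quantifying over all propositions still yields a proposition; in Lean:

#check ∀ P : Prop, P → P   -- : Prop, not Type 1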

Notification Bot (Jul 16 2025 at 02:01):

Ping J has marked this topic as unresolved.

Ping J (Jul 16 2025 at 02:01):

hmmm, deep, dark, and interesting.

i'm not sure i fully understood the last contradiction

Florent Schaffhauser (Jul 16 2025 at 06:23):

Riccardo Brasca said:

It makes things like Fin.ext_iff true. Otherwise one can have i.val = j.val but with different proofs. This is completely crazy from a mathematician's point of view.

So crazy that it does not happen :-) The equality in Nat is provably decidable, so by Hedberg's theorem Nat satisfies UIP. In particular, any two proofs of i.val = j.val can be identified, which means that Fin.ext_iff is provable without assuming proof irrelevance.
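
In Lean terms (a sketch): decidability of equality on Nat is exactly the hypothesis Hedberg's theorem needs, and since Lean's proof irrelevance is definitional the conclusion even holds by rfl.

example : DecidableEq Nat := inferInstance
example (m n : Nat) (p q : m = n) : p = q := rfl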

Florent Schaffhauser (Jul 16 2025 at 06:25):

Chris Hughes said:

It also means that things like the powerset are in the same universe. If Set X := X -> Type then Set X : Type 1.

You could also define Set X := X -> hProp, where hProp is now the subtype of propositions in Type, and then Set X : Type as desired?

I guess you need propositional resizing to justify that hProp is Type-small, though.

Riccardo Brasca (Jul 16 2025 at 11:12):

Florent Schaffhauser said:

Riccardo Brasca said:

It makes things like Fin.ext_iff true. Otherwise one can have i.val = j.val but with different proofs. This is completely crazy from a mathematician's point of view.

So crazy that it does not happen :-) The equality in Nat is provably decidable, so by Hedberg's theorem Nat satisfies UIP. In particular, any two proofs of i.val = j.val can be identified, which means that Fin.ext_iff is provable without assuming proof irrelevance.

OMG you really switched to the dark side :D OK let's consider ℝ≥0 then.

Kenny Lau (Jul 16 2025 at 11:13):

do we even need that, like I feel like their ℝ might already be different from our ℝ?

𝚠𝚘𝚓𝚌𝚒𝚎𝚌𝚑 𝚗𝚊𝚠𝚛𝚘𝚌𝚔𝚒 (Jul 19 2025 at 17:35):

suhr said:

A more technical answer: https://rocq-prover.org/doc/V9.0.0/stdlib/Stdlib.Logic.Berardi.html

Impredicativity ∧ excluded middle → proof irrelevance. And you want impredicativity for power and convenience.

One more thing to note here is that Berardi gives you propositional proof irrelevance, whereas Lean has definitional proof irrelevance which makes a difference to the metatheory.
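
The contrast can be seen in Lean itself (a sketch): propositional proof irrelevance is a theorem you cite, whereas definitional proof irrelevance means the kernel closes the same goal by rfl.

#check @proofIrrel   -- ∀ {a : Prop} (h₁ h₂ : a), h₁ = h₂
example (P : Prop) (h₁ h₂ : P) : h₁ = h₂ := rfl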

Trebor Huang (Jul 28 2025 at 05:54):

Kenny Lau said:

do we even need that, like I feel like their ℝ might already be different from our ℝ?

For the Cauchy reals: without proof irrelevance we need more axioms for quotient types to completely specify their behavior (unless quotients are definable as an inductive type, like SetQuotient in cubical Agda, in which case zero axioms are needed), but then the real numbers should be the same. In particular the equality type has at most one element.

Of course, HoTT people adopt propositional proof irrelevance too, since it's the very definition of being a homotopy proposition. So a relevant (no pun intended) question is "Why definitional proof irrelevance?"
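
For reference, Lean's quotients are built in, with a single identifying axiom:

#check @Quot.sound
-- ∀ {α : Sort u} {r : α → α → Prop} {a b : α}, r a b → Quot.mk r a = Quot.mk r b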

Mr Proof (Aug 30 2025 at 03:31):

Is this the same as definitional equality? Or not?
i.e. a function taking a term of type T will also accept a term of type T' as long as T is definitionally equal to T'. By which I believe is meant that T and T' reduce to the same normal form once all definitions are expanded out. Please correct me if I am wrong.

Niels Voss (Aug 30 2025 at 04:07):

Yes, that is basically what definitional equality is (although I think there might be some nuance, in that some pathological expressions in Lean don't have normal forms). The proof irrelevance being discussed here is a special case of definitional equality that applies in Lean, but not in every other DTT-based theorem prover.
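
A quick illustration in Lean (a sketch): T delta-reduces to Nat, and (fun x => x + 0) 2 beta- and iota-reduces to 2, so both checks close by rfl.

def T := Nat
def f (x : T) : Nat := x        -- a Nat is accepted where a T is expected
example : f 2 = 2 := rfl
example : (fun x : Nat => x + 0) 2 = 2 := rfl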

Kyle Miller (Aug 30 2025 at 05:04):

Yeah, Lean's reduction rules don't normalize. Proof irrelevance is in fact a cause for this: https://arxiv.org/abs/1911.08174

By which I believe is meant, that by reducing T and T' to their normal forms by means of expanding out all definitions.

More precisely, there's definitional equality the relation and definitional equality the algorithm. The ideal definitional equality does not need normalization: terms are definitionally equal if there's any sequence of basic reductions (either applied forward or backward) that brings the first term to the second. The algorithm however does its best to see that terms are definitionally equal, but it might fail even if they really are definitionally equal. It uses heuristics to try to limit the number of reductions done.

Proof irrelevance ("Axiom K") is one of the rules that you can choose definitional equality to have. Unlike beta/eta/zeta/delta reduction, it doesn't really have a sense of direction, it's not a "reduction", but a mere "judgment". It's just that you can substitute one proof for an arbitrary other proof of the same proposition. It's not compatible with thinking about definitional equality in terms of reducing to normal forms.

An interesting use of proof irrelevance is in reducing recursors. Suppose you have a proof h : a = a. Then Eq.refl a is definitionally equal to h by proof irrelevance. So, the term Eq.recOn h r is definitionally equal to Eq.recOn (Eq.refl a) r is definitionally equal to r. The first is by proof irrelevance and the second is "iota reduction" (the "computation rule" for the recursor applied to a constructor). The fact that we know Eq.refl a is a proof of a = a from the type alone justifies reducing Eq.recOn h r to r. Lean puts this reasoning to use:

example (α : Type) (p : α → Type) (a : α) (h : a = a) (r : p a) :
    h ▸ r = r := rfl

Last updated: Dec 20 2025 at 21:32 UTC