Zulip Chat Archive

Stream: Type theory

Topic: Why use propositions as types?


Robin Allison (Nov 08 2020 at 21:47):

This isn't a question about Lean per se, but given that Lean is based on CiC, I thought people in this community might be able to answer a question I've had for a while: What is the importance of the Curry-Howard isomorphism / propositions-as-types representation mechanism for formal proof verification?

There are plenty of proof assistants that don't follow this paradigm, so that makes me ask: why follow this (to most mathematicians) unusual formalism at all? I mean, it must have some very compelling advantages and I wonder what those are.

The term that immediately comes to mind to explain its advantages is "computational content". Formalisms based on the lambda calculus have computational content because terms of the lambda calculus are programs and "run" through beta reduction. I don't see how this benefits formalization at all. How can a computer use this to verify a proof is correct? It is easy to conceive of some program that can run through the steps of a proof and verify it is all correct. I can't conceive of proofs themselves being programs as somehow helping this process. The only thing remotely relevant in my mind would be how a lambda term denoting
an arithmetic expression, e.g. (2+2)*3, can be evaluated to an integer by running the program it is encoded as.
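(For concreteness, in Lean this kind of "running" looks like

    #eval (2 + 2) * 3      -- executes the compiled term in the VM, printing 12
    #reduce (2 + 2) * 3    -- normalises the term by reduction in the kernel

but that seems like a very small payoff.)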

Now evidently I don't entirely understand what this term means, as people will say that dependent types, for instance, have a computational interpretation, and a big problem apparently is that axioms added to these systems, for instance function extensionality or the univalence axiom, may not have a computational interpretation. My understanding at present is that each term describes an algorithm for constructing a term of a given type. The function type operator is a primitive constructor or algorithm for constructing a term of one type out of a term of another type. But what is to stop you from asserting that other axioms have a computational interpretation?

For example function extensionality would be a term of the type
∀ (A B : Type) (f g : A → B), (∀ x : A, f x = g x) → f = g
What is to stop us from regarding this as a primitive construction? In terms of classical logic this is like adding an extra axiom and I would ask "is this consistent?", but I'm not sure what the right question to ask here would be. It must be that the formal system either has or lacks certain properties by which we deem such an axiom to have computational content or not. Such formal properties may suggest what advantages "computational content" confers upon a proof assistant, but I don't even know what these properties would be at this point. (canonicity property?... and how is it even possible to have a closed term of type int that doesn't evaluate to a numeral?)
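To make the question concrete: in Lean syntax such a postulate would just be an extra axiom with no accompanying reduction rule (the primed name is only for illustration; Lean in fact derives funext from its quotient axiom):

    axiom funext' : ∀ {A B : Type} {f g : A → B},
      (∀ x : A, f x = g x) → f = g

Nothing in the syntax stops you from writing this; the issue is what terms built out of funext' are supposed to reduce to.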

Along these lines is the decidability of type checking. This is important because type checking corresponds to proof checking. As I understand it, type checking is decidable if we have an algorithm to determine whether a term in a given context has a given type, but isn't this redundant in a proof assistant where the entire point is to interactively construct a term of a given type? You wouldn't need to check that a term has a given type because your construction is already an algorithm to verify this.

The second possible line of advantages in my mind is that type theory just more closely reflects how most mathematicians think. Cue remarks on synthetic Euclidean geometry and nonsense terms you can form in set theory such as the intersection of + with the number 9. I could regard this as a compelling reason for CiC, just not in isolation. So really I'm more curious about the computational angle here.

Notification Bot (Nov 08 2020 at 23:22):

This topic was moved here from #Lean for the curious mathematician 2020 > Why use propositions as types? by Bryan Gin-ge Chen

Kevin Buzzard (Nov 08 2020 at 23:29):

My personal take on this as a mathematician is that if you're going to formalise, you have to choose your foundations, and there currently seem to be three viable choices: set theory (e.g. metamath), HOL (e.g. Isabelle/HOL) or dependent type theory (e.g. Coq, Lean). Metamath has no tactics, and Isabelle has no dependent types which makes it very good for 19th century mathematics but problematic for a bunch of 20th century mathematics. So if you want to do 20th and 21st century mathematics you're forced to use dependent types, and now this whole Curry-Howard thing is just part of the furniture. One big difference between Coq and Lean is that in Coq they place a great emphasis on constructive logic; in Lean we seem to have firmly rejected this point of view, even to the extent that polynomials are not computable. I think it's no coincidence that this aggressively non-computational viewpoint has enabled us to prove a ton of theorems which aren't in any other prover, but has given us a maths library which is very bad at doing computations, or perhaps "not optimised for computations" is a better way of putting it. We do not usually prove things using "heavy refl"s, preferring to reason with rewrites. In short then, propositions as types is part of the furniture, but often we don't care about the computational consequences of this because we are using the axiom of choice and things like functional extensionality and LEM all over the place in the maths library, thus disabling the ability to compute anyway. I rant about this more here. Hopefully someone else will come along and give another point of view though -- this is just my personal take on things.

Mario Carneiro (Nov 09 2020 at 01:05):

I agree with Kevin for the most part. I would say that the mathlib community is not at all representative of "real type theory", and we tend to use lean despite DTT rather than because of it. Coq is a much better place to find DTT true believers.

Nevertheless, you have already identified some of the small advantages of having a computational interpretation, or more specifically a nontrivial "definitional equality" judgment: it allows you to silently unfold things when applying other operations, for example, if some definition unfolds to an and (a conjunction), then you can directly pattern match on it to construct or destruct it. Overuse of this feature is considered a code smell, but it is used in myriad small ways in mathlib.
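A toy example of what I mean (illustrative names, not from mathlib):

    -- between is *defined* as a conjunction, so the anonymous constructor and
    -- cases see through the definition without any explicit unfolding
    def between (a b c : ℕ) : Prop := a ≤ b ∧ b ≤ c

    example : between 1 2 3 := ⟨dec_trivial, dec_trivial⟩

    example {a b c : ℕ} (h : between a b c) : a ≤ b :=
    by { cases h with hab hbc, exact hab }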

As for "heavy refl"s, which is to say computations that perform many reduction steps in the kernel, this is very rarely taken advantage of in mathlib, and the few places we do use it are failed experiments (ring2) or current bottlenecks (omega), largely because lean's kernel is not really built to handle heavy computation efficiently. Instead, computation tactics produce proofs on the fly, which is a mechanism that works in any theorem prover, whether or not it has a computational interpretation.

Finally, there is the question of "propositions as types" itself. Since lean has definitional axiom K (proof irrelevance), while propositions are types they are really trivial types, basically just true = {trivial} and false = {}. This does allow for some unification of theories between propositions and types, via the Sort u universe, which is parametric over Prop (propositions) and Type u ("proper" types). However, this also leads to some messy universe level arithmetic, and furthermore a large fraction of mathlib is restricted to Type u because small elimination makes propositions the "odd man out" in a number of ways, so it's not clear to me whether this amounts to a net gain.
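To see the "trivial types" point concretely, definitional proof irrelevance means the following checks by rfl:

    -- any two proofs of the same proposition are definitionally equal
    example (p : Prop) (h₁ h₂ : p) : h₁ = h₂ := rfl

whereas the corresponding statement for two elements of a Type is certainly not provable this way.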

Mario Carneiro (Nov 09 2020 at 01:14):

and how is it even possible to have a closed term of type int that doesn't evaluate to a numeral?

here's one:

#reduce @eq.rec_on Prop (true = true) (λ a, a → ℕ) true
  (eq_true_intro (eq.refl true)) (λ _, 2) trivial

The evaluation of the numeral gets stuck at the eq_true_intro, which unfolds to propext, which is an axiom

Mario Carneiro (Nov 09 2020 at 01:14):

it's provably equal to 2 but not definitionally equal to 2

Jeremy Avigad (Nov 10 2020 at 17:02):

It is true that constructivists often find mystical significance in the conflation of propositions and data, but from my point of view, propositions as types is just a convenient way of representing mathematics, even for classical mathematicians. It is easy to interpret propositions as types in conventional mathematical terms, so there is nothing metaphysically dubious going on. (See the explanation in TPiL.) But it is often an advantage to use the same notation, parsing, type-checking, unification, and elaboration procedures for objects, data types, propositions, and proofs, i.e. it is nice to have a uniform language for defining all those things. And it is often convenient to put data and axiomatic assumptions in the same definition of a structure.
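As a small illustration of that last point (a made-up bundled structure, not mathlib's actual algebra hierarchy):

    structure my_monoid (M : Type) :=
    (mul       : M → M → M)                                    -- data
    (one       : M)                                            -- data
    (mul_assoc : ∀ a b c, mul (mul a b) c = mul a (mul b c))   -- axiom
    (one_mul   : ∀ a, mul one a = a)                           -- axiom
    (mul_one   : ∀ a, mul a one = a)                           -- axiom

The data fields land in Type and the axiom fields in Prop, but the same elaboration and type-checking machinery handles all of them.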

Reid Barton (Nov 10 2020 at 20:09):

Some more comments about canonicity/computational interpretations.

  • There are plenty of closed terms of type nat which we don't know how to reduce to a literal: for example, n : nat defined as the maximum rank of an elliptic curve over the rationals, or 0 if there are curves of arbitrarily large rank. In some sense, math is entirely about terms like this one which we can't simply evaluate, and not about terms like (2+2)*3. (I've seen computer scientists and even philosophers of mathematics write things along the lines of "one of the fundamental principles of mathematics is that each closed term denoting a natural number can be reduced to a constant", but it's completely at odds with how classical mathematics operates. One place where we do have common ground with computer scientists is that we agree that the process of proof checking is, in principle, completely mechanical, but this tends to be regarded as "merely" logic or foundations or outside of mathematics entirely.)

  • I'm not sure whether the notion of "computational interpretation" has an exact meaning, but I think one manifestation of it is having a specific algorithm which can reduce any closed term of type nat to a literal. If you are a computer scientist and think of a term of type nat as a program which produces a natural number, then you would certainly expect to have such an algorithm! One of the design goals of type theories like CiC is that we do in fact have such an algorithm which works for the base system without any added axioms. However, if this algorithm encounters an application of an axiom, such as the univalence axiom, it gets stuck. The problem then is to find a better algorithm that works even in the presence of the univalence axiom. It doesn't do any good to merely "assert" the existence of a computational interpretation of univalence, because it would change the meaning of "algorithm".

  • As a practical matter, the canonicity property relies on the ability to unfold definitions and so it appears to be incompatible with abstraction and implementation hiding. In mathematics, we frequently prove that there exists a unique object with some property, and then regard that as defining the object rather than working directly with its construction. For example, the reals are the unique complete ordered field and it would be weird for a mathematician to have an opinion about whether the reals are "really" defined using Cauchy sequences or Dedekind cuts. But it seems that a system with the canonicity property would have to expose this choice.

Mario Carneiro (Nov 10 2020 at 20:21):

But it seems that a system with the canonicity property would have to expose this choice.

That depends on how "exposed" the computational interpretation is to the logic. If it is like a regular programming language where you don't reason about it at all, you just run programs at the end of the day, then there is no need for the particular implementation to be accessible, so you can still have a well defined API with "implementation details" that change the way something evaluates but not the theorems that can be proven about the definition.

Kevin Buzzard (Nov 10 2020 at 20:21):

An easier example than Reid's -- every sufficiently large natural number is known to be the sum of 7 natural cubes, and there are infinitely many natural numbers which are not the sum of three natural cubes. So G(3), the smallest value of n for which every sufficiently large natural number is the sum of n natural cubes, is known to be between 4 and 7, but we don't know what it is and as far as I know there is no known algorithm which will figure it out in a finite time. Known results about this function are tabulated on Wikipedia's page on Waring's problem. Hilbert (himself a proponent of non-constructivism) proved that the function existed, but we still have no formula for it, although there is an MSC code for the problem, indicating that it is still an active area of research.

Joe Hendrix (Nov 10 2020 at 20:29):

One topic I didn't realize until watching one of Kevin's talks is that there is a fair bit of overlap between how working mathematicians view these matters and how systems programmers (as opposed to the more language-oriented computer scientists) do. Even if there is a decision procedure or algorithm, it may be too slow for actual usage.
The ability to unfold definitions for reasoning purposes is often really important in proving properties about systems, because they are often sort of defined to meet some particular feature set and often do not have explicit nice invariants to reason about. That's true for mathematics too, but it seems like a lot more thought is put into what is being defined.

Reid Barton (Nov 10 2020 at 20:34):

Yeah, I didn't state the third point very well. Maybe it would be better to just say that the premise behind canonicity is that things reducing is good and stuck terms are bad, but in practice, too much reduction or the wrong kind of reduction can also be bad.

Kevin Buzzard (Nov 10 2020 at 21:20):

There was some noise relatively recently about some conjecture in type theory being disproved and people were saying "so, this puts Lean in a bad position, right?" and I was just thinking "I have no idea what this conjecture says but I don't think it has anything to do with mathematics"

Kevin Buzzard (Nov 10 2020 at 21:24):

https://artagnon.com/articles/leancoq says "There are three axioms in Lean: propositional extensionality, quotient soundness, and choice; however, these don't block computation, since computation is done in a VM. They do, however, break good type theoretic properties like strong normalization, subject reduction, and canonicity — this was a conscious design choice. " but I am a Lean user doing mathematics in Lean without a clue about what strong normalization, subject reduction and canonicity even mean.

Kevin Buzzard (Nov 10 2020 at 21:24):

Perhaps I should add that I use the three axioms all the time in my work, and indeed some of the theorems I prove would not be provable without them. They are all regarded as completely uncontroversial by most working mathematicians.

Mario Carneiro (Nov 10 2020 at 21:28):

They do, however, break good type theoretic properties like strong normalization, subject reduction, and canonicity — this was a conscious design choice. " but I am a Lean user doing mathematics in Lean without a clue about what strong normalization, subject reduction and canonicity even mean.

You haven't heard about them because they don't hold for lean so you wouldn't be able to leverage them anyway. It's just a thing a type system can have that lean doesn't

Reid Barton (Nov 10 2020 at 21:44):

I like the Waring's problem example better. These examples are really just versions of "if P holds then 1, else 0", but dressed up in such a way that it would never occur to most mathematicians that there could be some doubt about whether it actually denotes a natural number.
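Spelled out in Lean (P is a placeholder for your favourite open problem):

    open classical
    local attribute [instance] prop_decidable

    constant P : Prop  -- e.g. "the ranks of elliptic curves over ℚ are unbounded"

    -- a perfectly good closed term of type ℕ that nobody knows how to evaluate
    noncomputable def mystery : ℕ := if P then 1 else 0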

Alena Gusakov (Nov 16 2020 at 17:56):

a maths library which is very bad at doing computations, or perhaps "not optimised for computations"

Oh yeah I noticed this, I did wonder whether one could build some kind of support or plugin for that

Alena Gusakov (Nov 16 2020 at 17:58):

As someone who's really inexperienced I welcome corrections or criticisms but I wonder if it would be difficult to set up some sort of system within Lean where you can disable axioms you don't want to use or something like that

Mario Carneiro (Nov 16 2020 at 17:59):

You can use #print axioms to see if you have avoided an axiom
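For example:

    #print axioms nat.add_comm   -- should report no axioms
    #print axioms classical.em   -- should report propext, classical.choice, quot.sound

so you can at least audit, after the fact, which axioms a particular proof ended up using.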

Mario Carneiro (Nov 16 2020 at 18:00):

However, lean has very few axioms, which makes it difficult to carve out lots of logical subsystems. For example, you can't have just the implicational fragment of intuitionistic logic without or, because or is axiomatized as an inductive type, which counts as no axioms

Mario Carneiro (Nov 16 2020 at 18:04):

There are really only three subsystems of interest: no axioms, propext + quot.sound, and propext + quot.sound + classical.choice. However avoiding choice in lean also means avoiding the law of excluded middle, so it's not like ZF, and use of classical.choice is not directly related to whether or not the term is computable in the sense of generating bytecode, so it's all a bit useless

Mario Carneiro (Nov 16 2020 at 18:06):

The no axioms fragment satisfies canonicity (AFAIK), but the others do not, and none of them are strongly normalizing

Reid Barton (Nov 16 2020 at 18:16):

Alena Gusakov said:

a maths library which is very bad at doing computations, or perhaps "not optimised for computations"

Oh yeah I noticed this, I did wonder whether one could build some kind of support or plugin for that

The state of the art for this in Lean is norm_num, which can "compute" in the real numbers (for example) even though the ring structure of the reals is not computable.
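For example:

    import data.real.basic

    -- the ring structure on ℝ is noncomputable, but norm_num proves these anyway
    example : (2 : ℝ) + 2 = 4 := by norm_num
    example : (7 : ℝ) / 2 > 3 := by norm_num

norm_num builds an ordinary propositional proof; nothing ever tries to evaluate a real number inside the kernel.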

Reid Barton (Nov 16 2020 at 18:18):

There's "computation" in the type theorists' sense of terms reducing by definitional equalities to other terms, and then there's "computation" in the computer algebra sense which is more generally about algorithms for calculating or proving equalities which might be only propositional, and is closer to what mathematicians mean by "computation" (which I won't attempt to define!)

Reid Barton (Nov 16 2020 at 18:20):

and I guess technically Lean's computable is not the same as any of these, but it's closest to the type theory one in that it involves a built-in evaluation procedure

Kevin Buzzard (Nov 16 2020 at 18:24):

I am pretty sure I could convince most mathematicians that ring was "doing computation".

Mario Carneiro (Nov 16 2020 at 18:24):

well, be careful they don't agree with that statement too easily - computers compute

Kevin Buzzard (Nov 16 2020 at 18:27):

"Lean is not a computer" -- K. Lau, 2017.

Kevin Buzzard (Nov 16 2020 at 18:29):

Kenny would occasionally say these things to me when he was a first year, and they would increase my understanding of what Lean was. The other one I'm fond of quoting was "Lean does not do magic" (when I claimed that simp solving a goal was magic)

Reid Barton (Nov 16 2020 at 18:29):

Kevin Buzzard said:

I am pretty sure I could convince most mathematicians that ring was "doing computation".

Right but even worse is that a mathematician will "compute" some homotopy group or whatever using the Serre spectral sequence, even though there's no algorithm involved because we might have to use some arbitrary other information to decide whether some differential exists

Alena Gusakov (Nov 16 2020 at 18:29):

When you say "computation" in the type theorists' sense, is it the same thing as decision problems? Or is that the second one (I guess it is about algorithms so I'm guessing the second)

Mario Carneiro (Nov 16 2020 at 18:31):

Decision procedures can be expressed in lean as computations in either sense

Mario Carneiro (Nov 16 2020 at 18:31):

norm_num is a decision procedure for less than on natural numbers in the second sense; dec_trivial is a decision procedure for less than on natural numbers in the first sense
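In code:

    import tactic.norm_num

    -- second sense: the tactic constructs a propositional proof term on the fly
    example : 123 < 456 := by norm_num

    -- first sense: the kernel reduces the decidability instance for < on ℕ
    -- until it reaches is_true
    example : 123 < 456 := dec_trivial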

Pedro Minicz (Nov 16 2020 at 23:38):

Mario Carneiro said:

The no axioms fragment satisfies canonicity (AFAIK), but the others do not, and none of them are strongly normalizing

How is the no axioms fragment not strongly normalizing?

Bryan Gin-ge Chen (Nov 16 2020 at 23:44):

See this thread.

Nasos Evangelou-Oost (May 22 2021 at 02:31):

Is it something intrinsic that makes Lean (4) incompatible with HoTT or is it just the conventions of mathlib? @Mario Carneiro said above that Lean has definitional axiom K.
But if we chose to have propositions in Type instead of Prop, would that be sufficient to develop HoTT? Would higher inductive types be definable? Could we define and use the univalence axiom?

Scott Morrison (May 22 2021 at 04:29):

The equation compiler is intrinsically un-HoTT, apparently, and you can't really "turn it off". This is why the hott3 repository linked above uses an attribute, which via some metaprogramming inspects each declaration you make to see if it uses incompatible features of the language.

Andrew Ashworth (May 22 2021 at 06:29):

You could do it synthetically, but you would get no help from the built-in commands and tactics. At which point - why bother?

Mario Carneiro (May 22 2021 at 08:52):

This is another rephrasing of Scott's point, but the existence of Eq in Prop (which exists in the core library, and even if you replaced core it can still be written as an inductive without using any "axioms") implies that if you define Eq' in Type then Eq' also satisfies axiom K (not definitionally, but with either Eq or Eq' equality). So you have to watch out for a whole collection of constants that lean thinks are "no-axioms" fine, which is what the hott3 mode does.
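For reference, the definition in question is just the Type-valued analogue of eq (the name eq' is ad hoc):

    inductive eq' {α : Type} (a : α) : α → Type
    | refl : eq' a

Even though this declaration uses no axioms, the Prop-valued eq (with its definitional K) lets you prove the corresponding K/UIP statement about eq', so moving "propositions" into Type doesn't escape it.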

Mario Carneiro (May 22 2021 at 08:55):

Would higher inductive types be definable?

Not really. Lean has built in quotients, which turn out to be strong enough to do a wide variety of HITs (@Floris van Doorn has a paper on this IIRC), but there is no inductive support for HITs, and you don't get any HITs that can't be built from regular inductives and quotients.
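The built-in part is just a handful of constants:

    #check @quot        -- Π {α : Sort u}, (α → α → Prop) → Sort u
    #check @quot.mk     -- Π {α : Sort u} (r : α → α → Prop), α → quot r
    #check @quot.lift   -- lift an r-respecting function α → β to quot r → β
    #check @quot.ind    -- the elimination principle
    #check @quot.sound  -- the axiom: r a b → quot.mk r a = quot.mk r b

everything else (quotient, setoid, the HIT encodings) is ordinary definitions on top of these.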

Floris van Doorn (May 22 2021 at 17:44):

Yeah, you get most set-level ("axiom K-compatible") higher inductive types just from inductive types + quotients. There are some recursive set-level HITs that cannot easily be defined this way; they are usually called "quotient inductive types". For example, you can constructively define initial algebras of infinitary algebraic theories. In this case you can classically also construct these using the axiom of choice, but there might also be quotient inductive types that cannot be defined in Lean.


Last updated: Dec 20 2023 at 11:08 UTC