Zulip Chat Archive

Stream: maths

Topic: Univalence


Bas Spitters (Mar 08 2020 at 20:40):

I believe univalence can consistently be added to lean, since classical logic is valid in the simplicial set model
(https://www.math.uwo.ca/faculty/kapulkin/notes/LEM_in_sSet.pdf).
Moreover, that model also interprets strict propositions.
One would of course need to disable UIP for universes. Would that be possible?

Chris Hughes (Mar 08 2020 at 20:46):

Have you seen https://github.com/gebner/hott3 ?

Kevin Buzzard (Mar 08 2020 at 20:46):

I thought that the issue was that bool = bool was a singleton in Lean but bool ≃ bool was not.

Chris Hughes (Mar 08 2020 at 20:47):

That's why you have to disable Prop, or at least large elimination from Props

Chris Hughes (Mar 08 2020 at 20:47):

And redefine equality to be a Type

Kevin Buzzard (Mar 08 2020 at 20:47):

and this breaks all of mathlib immediately, right?

Chris Hughes (Mar 08 2020 at 20:47):

Yes

Kevin Buzzard (Mar 08 2020 at 20:56):

and also it means that it would be much harder to make it all again, because the cool tricks we can get away with because any two terms of type P : Prop are equal won't work any more, so it might not even be possible to make parts of mathlib again if we have univalence.

Reid Barton (Mar 08 2020 at 22:33):

I understand why equality can't be a Prop but once that is changed I don't understand why Prop itself causes any problems.

Mario Carneiro (Mar 08 2020 at 22:38):

If eq : A -> A -> Prop exists then you can prove that eq' : A -> A -> Type is equivalent to it and hence also satisfies UIP

Chris Hughes (Mar 08 2020 at 22:38):

Because this is inconsistent right?

inductive eq2 {α : Type} (a : α) : α → Type
| refl : eq2 a

example {α : Type} (a b : α) (h₁ h₂ : eq2 a b) : eq2 h₁ h₂ :=
by cases h₁; cases h₂; exact eq2.refl _

Mario Carneiro (Mar 08 2020 at 22:39):

which is basically what Chris's proof does, because cases is implicitly using eq

Reid Barton (Mar 08 2020 at 22:51):

I was going to say maybe this just means you need to adjust the rules for large elimination of Props to not allow smuggling in eq, but possibly that eliminates all interesting uses?

Mario Carneiro (Mar 08 2020 at 22:51):

If prop looked like type there wouldn't be any issues

Reid Barton (Mar 08 2020 at 22:52):

Well then it might as well not exist, right?

Mario Carneiro (Mar 08 2020 at 22:53):

I am not sure if the impredicativity is also bad but if you don't have UIP in prop then I think it would be bad

Reid Barton (Mar 08 2020 at 22:53):

I guess the question is how much stuff can you put into a definitionally proof-irrelevant Prop while remaining consistent with univalence, and is that amount enough to be useful

Mario Carneiro (Mar 08 2020 at 22:54):

You could remove all large elimination, and just have all inductive types act like exists

Mario Carneiro (Mar 08 2020 at 22:54):

then Prop becomes a one-way path so it can't cause problems... I hope

Reid Barton (Mar 08 2020 at 22:55):

Right, I should go back to: how permissive can the large elimination rules be

Reid Barton (Mar 08 2020 at 22:55):

E.g. I imagine that large elimination for true is ok

Reid Barton (Mar 08 2020 at 22:55):

But it's also not that exciting

Mario Carneiro (Mar 08 2020 at 22:55):

Some large elimination principles are provable without assuming them, like and
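
For instance, here is a minimal Lean sketch of that point for and: the Type-valued eliminator is definable, with no axioms, from the two projections, which themselves only eliminate into Prop (and.elim_large is a name made up for this example).

universe u

-- derivable "large elimination" for `and`: no axioms needed, since the
-- projections `and.left` / `and.right` only use small elimination into Prop
def and.elim_large {a b : Prop} {C : Sort u} (h : a ∧ b) (f : a → b → C) : C :=
f h.left h.right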

Reid Barton (Mar 08 2020 at 22:56):

There are surely going to be issues with propext next, I guess

Mario Carneiro (Mar 08 2020 at 22:56):

but those are basically the only okay ones

Reid Barton (Mar 08 2020 at 22:58):

Oh maybe there is no issue with propext as long as its type itself doesn't have large elimination

Mario Carneiro (Mar 08 2020 at 22:58):

propext is actually univalence for propositions
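
For reference, that is literally the shape of the axiom in Lean, with ↔ playing the role that ≃ plays one universe up:

-- Lean's propext; univalence would replace Prop by Type and ↔ by ≃
#check @propext   -- propext : ∀ {a b : Prop}, (a ↔ b) → a = b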

Reid Barton (Mar 08 2020 at 23:03):

Ok, I think what I was imagining is not a real issue

Reid Barton (Mar 08 2020 at 23:06):

I guess if you can track the h-levels of types syntactically then you can also allow large elimination when appropriate. Is this what Arend does?

Reid Barton (Mar 08 2020 at 23:06):

Hmm this isn't very obvious actually

Kevin Buzzard (Mar 08 2020 at 23:27):

This theorising is all well and good but you seem to be saying "if lean's core logic is changed a bit then it would be interesting to see what still works" and I'm merely pointing out that it might be much much harder to develop a proper maths library, which is my impression of what happens the moment you go univalent: you can compute homotopy types of synthetic spheres and then the only thing you can do is higher topos theory

Kevin Buzzard (Mar 08 2020 at 23:28):

Because you can't use cases any more

Mario Carneiro (Mar 09 2020 at 00:06):

in principle, lean tactics could be modified to be less HoTT-unfriendly. In practice, the people who are capable of doing a good job of it are not interested

Bas Spitters (Mar 09 2020 at 08:50):

@Chris Hughes Thanks. Does one still have classical logic?
Instead of singleton elimination, I guess one wants this (@Gaëtan Gilbert is also involved in that):
https://jesper.sikanda.be/files/definitional-proof-irrelevance-without-K.pdf

Bas Spitters (Mar 09 2020 at 08:56):

@Kevin Buzzard Most of basic maths happens at the lowest type level, so things wouldn't change much. Only the first universe becomes a groupoid, the second a 2-groupoid, etc.
One would notice it when formalizing presheaves, and there I would expect it to be the more natural thing to do. Unless one is "evil":
https://ncatlab.org/nlab/show/principle+of+equivalence

Kevin Buzzard (Mar 09 2020 at 09:28):

Presheaves are just a fancy word for functors and I'm sure there will be no problems with functors in a HoTT theory. I want to go much further than this. I want to see the proof that the presheaf of rings on an affine scheme is a sheaf, ie not just a definition of presheaves but a proof that they can be used. This took a team of complete amateurs a few months in Lean -- we were all mathematicians who knew nothing about computer proof systems or type theory one year earlier. I genuinely don't know whether the reason this isn't done in any computer HoTT system is because the systems aren't capable or because there just isn't anyone interested in doing it. I believe that there are classical hypotheses everywhere in the argument.

Kevin Buzzard (Mar 09 2020 at 09:37):

The reason I am banging on about this is that if your original assertion is supposed to imply that adding univalence does not in theory break anything, then I am pointing at an explicit piece of MSc level mathematics whose proof will be completely broken by it and am trying to ask the question "is there any evidence that it can actually be fixed in practice?". I am arguing that it took a team of amateurs a few months to make this proof in Lean in their spare time, so it does not seem like an unreasonable practical challenge. In my mind, one of the main things lean has going for it is a whole bunch of examples of mathematics at MSc level and above being done with relative ease by the community -- a proof that the proof assistant is usable in practice to do "fashionable mathematics".

However this might well be a long way from the point you were trying to make originally and so I'll now stop banging on about this on this thread. Sorry to derail.

I think Chris' original comment on Gabriel's work on HoTT in Lean 3 is the place to start. Both he and Floris van Doorn are experts in doing HoTT in Lean, but currently I don't think either of them are doing any, and perhaps this is only because the community is not there yet.

Kevin Buzzard (Mar 09 2020 at 09:41):

I could envisage someone setting an undergraduate project on doing some HoTT formalisation in Lean using Gabriel's library and then a stream being set up here for HoTT In Lean and one could just see how easy it is to do mathematics in practice in this mode. It would definitely be an interesting and worthwhile project. One could imagine the UniMath library and the HoTT book as being useful resources and one could ask whether this Lean framework is easier to use than the corresponding Coq framework

Bas Spitters (Mar 09 2020 at 10:08):

Of course, I would start from the HoTT library, not UniMath. It is better engineered and doesn't have the imho arbitrary restriction on the use of e.g. record types.
https://github.com/HoTT/HoTT

Kevin Buzzard (Mar 09 2020 at 10:45):

Thanks for reminding me of this library! I see that it is actively growing too :D People are currently thinking about groups, judging by recent commits. Can you state Sylow's theorems? Can you prove them?

To give you some sort of data point for comparison, when I arrived at Lean they had in their maths library a theory of groups, rings and topological spaces, and a good theory of finite sets. They had the naturals, integers, rationals and reals, and perhaps not too much more of interest to a "working mathematician" (although my memory might be inaccurate). This was definitely enough to get me interested though. Within a year Chris Hughes had proved Sylow's theorems, and he was a 1st year undergraduate mathematician with no training in computer proof systems; Kenny Lau had set up a theory of localisation of rings, and he was also a 1st year undergraduate mathematician with no prior training in computer proof systems, and the three of us had proved this result about schemes. The reason I'm mentioning it is that one could ask what the "bare minimum" is that one needs to get mathematicians involved. And in my experience it's undergraduate mathematicians you can start with, especially when you say to them "hey there's this cool system and there's a whole bunch of undergraduate level maths in it which you can do and which we need".

Are you considering the possibility of porting this library to Lean? Is that the question you're really asking -- whether the logics of the systems are sufficiently compatible?

Chris Hughes (Mar 09 2020 at 10:52):

Impredicativity isn't compatible with HoTT right? This is something that seems quite important for doing normal maths.

Mario Carneiro (Mar 09 2020 at 11:03):

Actually, impredicativity can be done in HoTT but it doesn't look the same. There is a well known result that says that having two impredicative universes is inconsistent, and one impredicative universe is System F, which does not have the "standard semantics" we like to have for function types and such. In the standard HoTT foundations you can define what it means for a type to be a proposition, but you end up with a subset of the universe at each level that are propositions, and there may be "more propositions" at the higher levels; this is what we mean by predicativity.

In order to reflect these propositions down to the lowest level to get something more impredicative, there is Voevodsky's "universe resizing" rule. IIRC it says that if A : Type u and is_prop A (which we would render as subsingleton A), then there exists A' : Type 0 such that equiv A A' (and a fortiori is_prop A' as well).
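
A rough Lean-style rendering of that rule, purely for illustration (propositional_resizing is a made-up name, and this is not an axiom of Lean or mathlib):

import data.equiv.basic
universe u

-- hypothetical statement of the resizing rule described above
axiom propositional_resizing (A : Type u) [subsingleton A] :
  ∃ A' : Type, nonempty (A ≃ A')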

Mario Carneiro (Mar 09 2020 at 11:07):

In classical logic, this is trivial, because every proposition is either true or false and both true and false can be found in the lowest universe.

Bas Spitters (Mar 09 2020 at 11:12):

@Chris Hughes
Impredicativity is called the resizing axiom in the HoTT book. It holds in most (classical) models of HoTT, e.g. simplicial sets.

Mario Carneiro (Mar 09 2020 at 11:15):

But it is an axiom, yes? This is in contrast to the way lean's impredicativity works, via a different typing rule for Pi

Bas Spitters (Mar 09 2020 at 11:15):

@Mario Carneiro In the simplicial set model, the hProps are not the booleans, so the issue is somewhat more subtle:
Any contractible type is equivalent to a point (e.g. the unit type in the lowest universe)

Mario Carneiro (Mar 09 2020 at 11:16):

I suspect that Kevin et al really want the hProps to be equivalent to bool

Bas Spitters (Mar 09 2020 at 11:18):

One could use strict propositions for that, as developed by @Gaëtan Gilbert in Coq

Mario Carneiro (Mar 09 2020 at 11:20):

is there a motivating example for why hProp can't be bool, in a classical HoTT model?

Mario Carneiro (Mar 09 2020 at 11:21):

I would have thought equality would be an example but it's not even an hProp unless the type is an hSet

Mario Carneiro (Mar 09 2020 at 11:23):

I'm not really sure how strict propositions (by which I assume you mean Coq's SProp) differ from lean's Prop, and in particular what they do to avoid the exact same constructions shown by Chris above that trivialize the equality relation

Bas Spitters (Mar 09 2020 at 11:38):

@Mario Carneiro @Chris Hughes Indeed, with classical logic, hProp ~ bool. Cor 4 of:
https://www.math.uwo.ca/faculty/kapulkin/notes/LEM_in_sSet.pdf

Johan Commelin (Mar 09 2020 at 11:44):

@Bas Spitters Does this mean that we can assume all types (or at least all types X with is_Set X) are either empty or have a term?

Bas Spitters (Mar 09 2020 at 12:34):

@Johan Commelin Yes, that should be provable. Assuming classical logic and choice, hSets form a model of Lawvere's structural set theory.

Johan Commelin (Mar 09 2020 at 12:35):

Maybe with the new tooling it becomes easier to switch back and forth between mathlib and the hott3 project...

Johan Commelin (Mar 09 2020 at 12:35):

I wouldn't mind playing with it.

Bas Spitters (Mar 09 2020 at 12:37):

@Mario Carneiro Consider a huge type with paths between all terms. This is contractible, so an hProp, but this is obviously not a boolean.

Mario Carneiro (Mar 09 2020 at 12:37):

isn't it equivalent to true?

Mario Carneiro (Mar 09 2020 at 12:37):

what makes it "huge"?

Mario Carneiro (Mar 09 2020 at 12:38):

or are you saying it has higher structure like loops or something

Bas Spitters (Mar 09 2020 at 12:42):

@Johan Commelin We can also consider the universe of strict sets. Here is a constructive development, which will not be easy to read, but may give an idea of the type theory.
http://www.cse.chalmers.se/~coquand/bishop.pdf

Johan Commelin (Mar 09 2020 at 12:43):

What is the difference between an hSet and a strict set?

Mario Carneiro (Mar 09 2020 at 12:45):

two equalities of fixed values in a strict set are defeq

Mario Carneiro (Mar 09 2020 at 12:45):

in other words, lean types

Johan Commelin (Mar 09 2020 at 12:46):

Aha... but can you have those in a HoTT system?

Mario Carneiro (Mar 09 2020 at 12:47):

that is based on skimming Bas's link

Mario Carneiro (Mar 09 2020 at 12:47):

I don't think that type theory is classical, although probably you can just throw that in

Mario Carneiro (Mar 09 2020 at 12:49):

but the upshot of all this is still not clear to me. It sounds like you are offering most of the features lean has already. What are we giving up? Clearly something has to give since univalence in lean is disprovable
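
For the record, here is a rough Lean 3 sketch of that disproof, assuming mathlib's equiv and taking a hypothetical ua together with its computation rule as hypotheses:

import data.equiv.basic

def bnot_equiv : bool ≃ bool :=
⟨bnot, bnot, λ b, by { cases b; refl }, λ b, by { cases b; refl }⟩

-- any `ua` with the expected computation rule contradicts proof irrelevance of `eq`
example (ua : Π {α β : Type}, α ≃ β → α = β)
  (ua_cast : ∀ {α β : Type} (e : α ≃ β) (a : α), cast (ua e) a = e a) : false :=
have hp : ua (equiv.refl bool) = ua bnot_equiv := proof_irrel _ _,
have h : tt = ff, from calc
  tt  = cast (ua (equiv.refl bool)) tt : (ua_cast (equiv.refl bool) tt).symm
  ... = cast (ua bnot_equiv) tt        : by rw hp
  ... = ff                             : ua_cast bnot_equiv tt,
bool.no_confusion h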

Mario Carneiro (Mar 09 2020 at 12:52):

the stuff about simplicial sets or cubical set models is not relevant for the lean user, since we are working in the theory not on the theory. (Formalizing type theories would be a worthy project but quite independent of tweaks to lean foundations.)

Bas Spitters (Mar 09 2020 at 14:08):

One would have a type theory with two kinds of propositions, strict ones and hProps.
Set-level mathematics should go through at the bottom universe, but the other universes can be univalent.
So one would need to give up UIP for the higher universes.

Mario Carneiro (Mar 10 2020 at 01:12):

Wait, so the lowest universe is not univalent in this scheme? Isn't that where all the interesting maths would go? For example, I guess one desideratum would be to have real = real x {0} be provable, as well as similar statements for the corresponding elements of a fully bundled type Ring := Sigma (A : Type 0) (mul : A -> A -> A), .... As I understand it this is the main selling point of having univalence around, but these objects are h-sets, possibly strict sets if you prefer, and live in the bottom universe. (Ring lives in the next universe but equality of rings, which should mean ring isomorphism, depends on equality in Type 0, and so needs the bottom universe to be univalent.)
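
For reference, the equivalence itself is already easy in mathlib today; what univalence at Type 0 would add is the ability to upgrade it to an equality. A small sketch, writing {0} as punit:

import data.real.basic data.equiv.basic

-- provable now: the equivalence; the equality `ℝ = ℝ × punit` would need univalence
example : ℝ ≃ ℝ × punit :=
{ to_fun := λ x, (x, punit.star),
  inv_fun := prod.fst,
  left_inv := λ x, rfl,
  right_inv := λ p, by { cases p with x u, cases u, refl } }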

Mario Carneiro (Mar 10 2020 at 01:16):

While UIP is tremendously convenient, it doesn't actually enable any particular kind of maths, as long as you can obtain the (non-judgmental) equality by other means. It would still require a lot of engineering effort to recover the same level of ease of use that you get with UIP, but at least in principle I can see giving it up provided that things like the resizing axiom are around to allow us to do long well founded recursions and such.

Reid Barton (Mar 10 2020 at 01:19):

I think @eq A a b is a Prop if A : Type 0. That doesn't prevent Type itself from satisfying univalence

Bas Spitters (Mar 13 2020 at 11:24):

We could have a universe of strict sets, and a universe of hSets living side by side.
@Gaëtan Gilbert has been working on the type theory for this. Some things will simplify in the classical model of simplicial sets, but I'm not sure anyone has gone through the details. It will be a simplification of Thierry's note.

Johan Commelin (Mar 13 2020 at 11:26):

What would "living side by side" mean in practice? Would it be easy to transport data and properties from one side to the other? Or would the gap be about as big as the gap between two different libraries right now (e.g., mathcomp and mathlib)?

Bas Spitters (Mar 13 2020 at 11:26):

This paper discusses a decade long research program on implementing transfer in proof assistants (Coq, Isabelle,...)
The Marriage of Univalence and Parametricity
https://arxiv.org/abs/1909.05027
and claims that the approach using HoTT is most convenient. It would be interesting to know whether there are any initiatives to implement transfer in lean.

Johan Commelin (Mar 13 2020 at 11:27):

@Cyril Cohen has been working on this about a year ago.

Bas Spitters (Mar 13 2020 at 11:28):

Reference?

Johan Commelin (Mar 13 2020 at 11:28):

But so far we don't have any usable tools.

Johan Commelin (Mar 13 2020 at 11:28):

https://github.com/CohenCyril/mathlib/commits/param

Bas Spitters (Mar 13 2020 at 11:32):

I guess that is the CoqEAL approach? Has it been proved to work with classical logic? I don't recall seeing that result explicitly.

Johan Commelin (Mar 13 2020 at 11:35):

I really don't know.

Johan Commelin (Mar 13 2020 at 11:35):

This is stuff that I never thought about as a mathematician

Johan Commelin (Mar 13 2020 at 11:35):

It's just an extremely annoying problem that you discover once you start using ITP's

Bas Spitters (Mar 13 2020 at 11:36):

@Johan Commelin I don't think the type theory has been worked out. It would be somewhat like the interaction of sProp and Prop in Coq and agda.
Semantically, we can check what is going on by looking at the classical groupoid model.

Bas Spitters (Mar 13 2020 at 11:38):

Strange, I would imagine you would know about Artin gluing
https://drops.dagstuhl.de/opus/volltexte/2019/10532/pdf/LIPIcs-FSCD-2019-25.pdf

Johan Commelin (Mar 13 2020 at 11:39):

I think it would be very helpful if some HoTT library would demonstrate that it's useful for maths that doesn't look like homotopy theory. It doesn't have to be perfectoid spaces. But something like the Hilbert basis theorem, or even better fundamental theorem of calculus. Those are low-level from a maths point of view, but my impression is that they are currently challenging for HoTT libs.

Johan Commelin (Mar 13 2020 at 11:39):

Hmmm... let me check that link

Bas Spitters (Mar 13 2020 at 11:40):

Unary, binary and machine integers low level enough for you :-)
That's the examples they look at in the univalent parametricity paper.

Johan Commelin (Mar 13 2020 at 11:41):

@Bas Spitters Ok, I've heard of things like gluing sheaves or topoi, or what not. But (for a simple mathematician) it's not clear how that helps us with "A and B are isomorphic semirings, B is a noetherian integral domain, hence A is as well".

Johan Commelin (Mar 13 2020 at 11:41):

And mathematicians completely don't know what you need to do to prove such a statement. They only know a 1-word proof: "Obvious".

Johan Commelin (Mar 13 2020 at 11:42):

Bas Spitters said:

Unary, binary and machine integers low level enough for you :-)
That's the examples they look at in the univalent parametricity paper.

No, that won't do. We need to know that the maths that appears in bachelor courses is at least as easy in HoTT as it is in DTT like Lean.

Bas Spitters (Mar 13 2020 at 11:42):

@Johan Commelin There was a smiley missing in my last comment. I just wanted to point out that the "annoying problem" actually hides beautiful maths that you know well.

Johan Commelin (Mar 13 2020 at 11:43):

So it has to be some calculus or commutative algebra or something. Binary integers are too low level.

Johan Commelin (Mar 13 2020 at 11:43):

Sure... but I'm talking about the public relations / advertisement side of things. To get things rolling you need an eye-catching result.

Bas Spitters (Mar 13 2020 at 11:47):

Here are some examples from algebra:
http://www.cse.chalmers.se/~nad/publications/coquand-danielsson-isomorphism-is-equality.pdf
We are currently working on a formalization of universal algebra in HoTT. The isomorphism theorems become "identification theorems".

Kevin Buzzard (Mar 13 2020 at 12:08):

This is not what is taught in 1st year undergraduate classes. What Johan and I are trying to say is this: if HoTT is supposed to be able to do mathematics, let's see some mathematics which is taught in standard undergraduate mathematics classes. Where is quadratic reciprocity in any HoTT theory? The proof of this is not straightforward. You need a robust theory of finite sets. You need to be able to count. Where is the proof of Lagrange's theorem? Where is "normal mathematics"? Machine integers and universal algebra etc. are exactly what we are not talking about.

Kevin Buzzard (Mar 13 2020 at 12:10):

Navel-gazing about equality is something else which is not done at all in a mathematics department, away from esoteric courses.

Bas Spitters (Mar 13 2020 at 12:36):

We may be talking past each other, but I am saying that in HoTT we can prove @Johan Commelin's statement about integral domains by univalent transfer. Since A and B are isomorphic algebraic structures, the record types are equal, and hence we can transport the property of being a noetherian integral domain.
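
Schematically, and with every name below a placeholder (Ring, noetherian_domain, iso and the structure-identity map ua_ring are all hypothetical, and the axiom is not available in Lean as it stands):

constant Ring : Type 1
constant noetherian_domain : Ring → Prop
constant iso : Ring → Ring → Type
-- the structure-identity principle one would get from univalence (hypothetical)
axiom ua_ring : Π {A B : Ring}, iso A B → A = B

-- transport of the property along the resulting equality
example {A B : Ring} (e : iso A B) (h : noetherian_domain B) : noetherian_domain A :=
(ua_ring e).symm ▸ h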

Johan Commelin (Mar 13 2020 at 12:39):

Exactly! And that is why HoTT is so enticing!

Johan Commelin (Mar 13 2020 at 12:39):

But if that is the "only" thing it can do... it's no good (of course I'm exaggerating with "only").

Johan Commelin (Mar 13 2020 at 12:40):

We need a system that gives us a powerful transfer. But we also need a system that gives us quadratic reciprocity, fund.thm.calc., lagrange, hilbert basis, and all the rest.

Johan Commelin (Mar 13 2020 at 12:41):

So far we have all the classical (not in the logic sense, but in the temporal sense) systems that struggle with transfer, and we have HoTT which doesn't.
But HoTT isn't proving that you can do the "classical" theorems of mathematics in it.

Johan Commelin (Mar 13 2020 at 12:42):

Universal algebra is nice. But can it be applied?

Johan Commelin (Mar 13 2020 at 12:42):

If you can give me a formalisation of quadratic reciprocity as a 10 line consequence of universal algebra. Sure, then I will be very excited about the universal algebra library.

Johan Commelin (Mar 13 2020 at 12:43):

Especially if the fundamental theorem of calculus is another 30-line consequence

Johan Commelin (Mar 13 2020 at 12:43):

But I doubt that that is true.

Johan Commelin (Mar 13 2020 at 12:43):

To use some categorical language: we want the pushout of HoTT and mathlib.

Johan Commelin (Mar 13 2020 at 12:46):

We need another paper "The marriage of univalence and material set theory".

Bas Spitters (Mar 13 2020 at 12:47):

I am aware universal algebra is close to logic/CS, i.e. its use in effectful computation.
My PhD-student Andreas has been working on integrating math-comp and HoTT.
This is currently stuck on universe polymorphism:
https://github.com/coq/coq/issues/10513
Perhaps Coq is trying to be too clever, and the more conservative approach of lean would work better. Also, there is less legacy code in lean.

Bas Spitters (Mar 13 2020 at 12:50):

Theorem 10.1.15 of the HoTT book:
If the axiom of choice holds then the category Set is a well-pointed boolean elementary topos with choice.

This is called Lawvere set theory. What else do you want from set theory?

Johan Commelin (Mar 13 2020 at 12:54):

I think that's good. That's good enough for the theoretical foundations. What we need to see is that it's usable in practice.

Johan Commelin (Mar 13 2020 at 12:55):

How many lines of code does mathlib use to prove quadratic reciprocity? (I dunno, but we can find out.)
Can we do a 1-1 translation to HoTT?

Johan Commelin (Mar 13 2020 at 12:55):

If not 1-to-1, what is the factor that proofs scale by? Should we expect less lines of code, or more?

Johan Commelin (Mar 13 2020 at 12:56):

That's the kind of question that Kevin and I are interested in.

Johan Commelin (Mar 13 2020 at 12:57):

Of course we could try to figure this out ourselves. But we are also wondering why HoTT experts are not figuring this out.

Johan Commelin (Mar 13 2020 at 12:57):

(Also, unfortunately I need to spend lots of time on paper maths projects, and diving into developing a theory of finite sets in HoTT would not make my boss happy.)

Sebastien Gouezel (Mar 13 2020 at 13:02):

I have the following analogy. It is consistent with ZF to assume that every set of the real line is measurable. Then, one can add this axiom, and do all measure theory in an extremely nice way (much nicer than what is usually done because you won't need to check measurability any time). Except that then you can not integrate this measure theory library with other traditional parts of maths that require choice (for instance the existence of the algebraic closure), as assuming measurability of everything is contradictory with choice. In this situation, it seems to me that doing measure theory in this way would be cheating, and mostly useless to build a whole library of mathematics.

As to whether this is a good analogy with HoTT, I won't say anything, because I don't understand anything about HoTT.

Bas Spitters (Mar 13 2020 at 13:05):

Unless I am overlooking something, lean is already consistent with HoTT under this translation (which uses --indices-matter) (@Gaëtan Gilbert?)
https://coq.discourse.group/t/alpha-announcement-coq-is-a-lean-typechecker/581
Of course, that does not directly give univalence, but it embeds lean in the strict propositions that we discussed before.

Mario Carneiro (Mar 13 2020 at 13:06):

Yes but Coq isn't HoTT, it's some poorly defined union of a bunch of type theory extensions

Bas Spitters (Mar 13 2020 at 13:07):

@Mario Carneiro Coq implements pcuic. What do you mean?

Mario Carneiro (Mar 13 2020 at 13:08):

Coq doesn't "just" implement pcuic, it has extra stuff

Mario Carneiro (Mar 13 2020 at 13:08):

I have literally never seen a definition of Coq that didn't finish "oh and there's also some other stuff we won't go into here"

Mario Carneiro (Mar 13 2020 at 13:11):

I'm not 100% positive I know what the exact theory you are calling PCUIC is (usually people call "that thing Coq implements" CIC instead) but I think that SProp is not mentioned in it

Gaëtan Gilbert (Mar 13 2020 at 13:14):

Unless I am overlooking something, lean is already consistent with HoTT under this translation (which uses --indices-matter) (@Gaëtan Gilbert?)

I'm pretty sure the translation breaks when using -indices-matter
Lean can't be consistent with HoTT since Lean has UIP

Bas Spitters (Mar 13 2020 at 13:15):

@Gaëtan Gilbert What is the status of strict sets?

Bas Spitters (Mar 13 2020 at 13:17):

Pcuic:
https://hal.inria.fr/hal-01615123v2/document
SProp is a recent addition described in Gaetan's thesis.

Mario Carneiro (Mar 13 2020 at 13:18):

I know

Mario Carneiro (Mar 13 2020 at 13:18):

hence my claim that it's the union of a bunch of type theory extensions that have not all been examined as a whole

Mario Carneiro (Mar 13 2020 at 13:19):

I really sincerely don't understand why there isn't some reference linked from the coq manual or something that defines the whole axiomatic system and stays up to date with new versions

Bas Spitters (Mar 13 2020 at 13:20):

@Sebastien Gouezel First, besides your point, measure theory should be developed algebraically/categorically e.g. in line with the theory of von Neumann algebras. See the remarks by people like Rota and Tao.
But to address your point, which axiom would you like to add that is inconsistent with Lawvere set theory?

Johan Commelin (Mar 13 2020 at 13:22):

All this talk about foundations is nice and well. My question remains: suppose that all of mathlib were translated by magic to HoTT. Would it be longer or shorter?

Bas Spitters (Mar 13 2020 at 13:25):

It should be shorter, as we have more axioms (UA) at our disposal. However, I cannot completely gauge the influence of strict equality at the moment.

Johan Commelin (Mar 13 2020 at 13:27):

The number of axioms is not the only factor. We lose definitional proof irrelevance. Will automation apply it for us instead? How often will I need to prove that my type is an hSet? Or is that automated? Is there type class inference? Is there a simplifier? Do they work?

Mario Carneiro (Mar 13 2020 at 13:28):

for that argument to work, you have to be able to mimic/model/implement every single one of lean's axioms and defeqs. The "coq is a lean typechecker" result suggests that this is very nearly possible, but it seems like you still have to give up some things, like lean's strong form of the axiom of choice

Johan Commelin (Mar 13 2020 at 13:28):

But "coq is a lean typechecker" was not a univalent coq, was it?

Mario Carneiro (Mar 13 2020 at 13:29):

no

Mario Carneiro (Mar 13 2020 at 13:29):

I think Bas is claiming that there is a way to have both strict sets and univalence in the same system

Mario Carneiro (Mar 13 2020 at 13:30):

but I'm pretty sure that nonempty A -> A breaks everything

Mario Carneiro (Mar 13 2020 at 13:31):

the hott version of the axiom of choice is much more guarded, something like (Pi x, nonempty (A x)) -> nonempty (Pi x, A x)
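
For comparison, in Lean the guarded form follows at once from the unguarded axiom; a minimal check:

universes u v

-- Lean's unguarded choice:
#check @classical.choice   -- Π {α : Sort u}, nonempty α → α

-- the guarded, HoTT-style form:
example {ι : Sort u} {A : ι → Sort v} (h : ∀ i, nonempty (A i)) :
  nonempty (Π i, A i) :=
⟨λ i, classical.choice (h i)⟩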

Sebastien Gouezel (Mar 13 2020 at 13:39):

Bas Spitters said:

Sebastien Gouezel First, besides your point, measure theory should be developed algebraically/categorically e.g. in line with the theory of von Neumann algebras. See the remarks by people like Rota and Tao.

It definitely shouldn't. The von Neumann algebra point of view is interesting, but there are many statements of down to earth measure theory that are just not true in this generality.

Sebastien Gouezel (Mar 13 2020 at 13:40):

Bas Spitters said:

But to address your point, which axiom would you like to add that is inconsistent with Lawvere set theory?

As I was saying, I don't know anything about HOTT. But my impression as an outsider is that you have to sacrifice some versions of choice, which is not an option for me. I would be happy to be proven wrong!

Mario Carneiro (Mar 13 2020 at 13:43):

By the way, lean's version of the axiom of choice isn't even ok in ZFC. It's known as "global choice", positing the existence of a proper class function that is a choice function on every set

Mario Carneiro (Mar 13 2020 at 13:43):

The actual axiom of choice only allows you to choose from set-many nonempty sets at once

Mario Carneiro (Mar 13 2020 at 13:45):

However, in set theory you do have definable definite description, so unlike lean you don't have to use choice to define things like lim when the limit is unique

Gabriel Ebner (Mar 13 2020 at 13:46):

I'm not sure this is true though since a Lean term can only talk about finitely many choice constants. Lean has a choice.{u} constant for every universe u. There is no relation between choice.{u} and choice.{u+1} (except for what you get out of the type). And each choice.{u} is definable in ZFC + enough inaccessibles.

Mario Carneiro (Mar 13 2020 at 13:46):

Oh, that's an interesting point

Mario Carneiro (Mar 13 2020 at 13:48):

Still, choice.{-} has a bit of a peculiar status as a family then, since it will not be uniformly definable

Gabriel Ebner (Mar 13 2020 at 13:48):

Is Type.{-} uniformly definable?

Mario Carneiro (Mar 13 2020 at 13:49):

In all the models that I've considered for lean in set theory, yes. It's usually an explicitly agreed upon sequence, or perhaps the sequence "the n-th inaccessible"

Mario Carneiro (Mar 13 2020 at 13:52):

I guess it's not really that different from the choice constants though. At some point this sequence breaks down - it's well defined but no longer points at anything because you've run out of inaccessibles in the outer model, say - and you can do something similar with the choice sequence where you start with an assumption that gives you the first 42 choice constants and the rest of them are just nonsense

Bas Spitters (Mar 13 2020 at 13:53):

@Sebastien Gouezel Lawvere set theory includes choice...

Sebastien Gouezel (Mar 13 2020 at 13:58):

Great!

Mario Carneiro (Mar 13 2020 at 13:58):

which choice?

Bas Spitters (Mar 13 2020 at 14:00):

https://ncatlab.org/nlab/show/ETCS
https://ncatlab.org/nlab/show/fully+formal+ETCS#the_theory_etcs

Gaëtan Gilbert (Mar 13 2020 at 14:13):

Mario Carneiro said:

I really sincerely don't understand why there isn't some reference linked from the coq manual or something that defines the whole axiomatic system and stays up to date with new versions

what about https://coq.inria.fr/distrib/current/refman/language/cic.html ?

Gaëtan Gilbert (Mar 13 2020 at 14:13):

Bas Spitters said:

Gaëtan Gilbert What is the status of strict sets?

I'm not working on them

Assia Mahboubi (Mar 13 2020 at 14:18):

Mario Carneiro said:

I really sincerely don't understand why there isn't some reference linked from the coq manual or something that defines the whole axiomatic system and stays up to date with new versions

I guess it is related to the fact that the first released version of Coq is more than 30 years old. Nit-picking: other parts of a proof assistant, for which a paper description is seldom available, have at least as much impact on users and on their act of faith in formal statements. E.g. the elaborator, the parser etc. Since I am more interested in maths than in immediate applications in, e.g., security, I am not too worried though :).

Mario Carneiro (Mar 13 2020 at 14:21):

I am interested in specification for those aspects as well, but I think the logical foundation is the first stage, and in principle should be the easiest to obtain, being as well studied as it is

Mario Carneiro (Mar 13 2020 at 14:23):

@Gaëtan Gilbert That is a nice link. It is a bit tutorial-like though, and in particular the propensity to defer more complex aspects of the system until later makes it unclear when the description of the axioms is "done"

Mario Carneiro (Mar 13 2020 at 14:24):

I don't think you could write a Coq typechecker from that description

Bas Spitters (Mar 13 2020 at 14:25):

@Assia Mahboubi I'm interested in both math and security and that's why I think it's important that the logical foundations are well specified.
I believe the information is there for the Coq system, but it could be collected in a more central place.

Mario Carneiro (Mar 13 2020 at 14:28):

I am willing to believe that every piece of Coq foundations has been described in some paper written over the past 30 years. But there are also probably some bits of black magic in the ML code that never got air-time in a paper, and also the lack of central description means a lack of (proof of) global coherence, which is of course critical for soundness

Bas Spitters (Mar 13 2020 at 14:34):

@Assia Mahboubi Do you happen to know more about the parametricity plugin for lean and how it relates to classical logic?

Mario Carneiro (Mar 13 2020 at 14:34):

from the coq CIC manual:

the terms structurally smaller than y are:

  • case(c, P, f_1 … f_n) when each f_i is structurally smaller than y.

Isn't this the rule that caused a Coq soundness bug a while back?

Mario Carneiro (Mar 13 2020 at 14:37):

this one

Mario Carneiro (Mar 13 2020 at 14:42):

@Bas Spitters The parametricity plugin written by Cyril will prove the parametricity theorem for "any lean term". It works by recursion over the term syntax of the language, but it can get stuck when it gets to certain things. For axioms, there is nowhere to recurse so you have to manually provide a proof of parametricity. And choice simply doesn't satisfy the parametricity theorem, so any terms built using choice (and there are a lot of them) are similarly blocked. There are also complications in some inductive types, but last I saw there were no real theoretical barriers.

Pierre-Marie Pédrot (Mar 13 2020 at 14:43):

@Mario Carneiro Strictly speaking, this was not a soundness bug. It made Coq inconsistent with the standard Set model, which is indeed pretty bad, but it doesn't allow you to derive False either.

Mario Carneiro (Mar 13 2020 at 14:45):

I realize that another way to phrase this is "Coq is anti-classical", and that is what is done in the thread itself. But for all the lean users and mathematicians around here, that's basically just proving false things.

Mario Carneiro (Mar 13 2020 at 14:46):

Stated another way, it allows you to derive False in Coq + axioms that everyone wants to have and are unwilling to give up

Bas Spitters (Mar 13 2020 at 14:46):

@Mario Carneiro so, that approach to Transfer does not seem to be very suitable for lean...

Mario Carneiro (Mar 13 2020 at 14:51):

Right, somehow the pure "abstract nonsense" proof of parametricity doesn't work in practice because choice is too deeply ingrained. But for more engineering minded folks like myself the news isn't all bad. Although we use choice a lot to do proofs, it's not like we have choicy functions all over the place. Mathematicians by and large tend to avoid constructions that are not "canonical" in some sense and this means that "random functions" are quite rare. So one might hope to blockade the region around choice by proving, by alternate means, the parametricity theorem for certain terms that use choice in their definition, and then being able to stop the recursion when you get there rather than peeking into the definition and getting stuck on choice.

In particular, much of our usage of choice comes via classical.em, which I think satisfies the parametricity theorem because of proof irrelevance

Pierre-Marie Pédrot (Mar 13 2020 at 14:53):

@Mario Carneiro I am not sure what you refer to, but anti-classical has a well-defined meaning and this doesn't apply to Coq. Coq is compatible with classical logic, but it doesn't imply it. So, parametricity breaks in presence of classical logic, but that's not preventing the extension of Coq with classical principles. Furthermore, adding effects to a language just weakens parametricity, there are still weaker presentations that can be salvaged.

Mario Carneiro (Mar 13 2020 at 14:54):

When I talk about proving soundness of Coq, it may not be obvious but I don't mean "no axioms Coq", I mean "all common axioms Coq"

Mario Carneiro (Mar 13 2020 at 14:55):

I'm pretty sure there is a file in the coq standard library called Axioms.v or something like that containing a bunch of things commonly believed to be ok. I want those to all be consistent with Coq

Pierre-Marie Pédrot (Mar 13 2020 at 14:56):

This is ill-defined, people do use conflicting "common" axioms. Is UIP common? Is univalence common?

Mario Carneiro (Mar 13 2020 at 14:57):

If there are mutually incompatible supported subsets, they should be explicitly determined and the (probably quite distinct) proofs of consistency of each maximal consistent subset should be shown

Mario Carneiro (Mar 13 2020 at 14:57):

And also there should be GIANT WARNING FLAGS on these axioms to prevent someone from accidentally using both in the same development

Pierre-Marie Pédrot (Mar 13 2020 at 14:57):

This doesn't make sense, by Gödel's theorem there is no maximally consistent extension...

Mario Carneiro (Mar 13 2020 at 14:58):

The set of axioms in the coq standard library is finite

Pierre-Marie Pédrot (Mar 13 2020 at 14:58):

There is a nice graph somewhere in the doc, for set-theoretic axioms. Let me find it.

Mario Carneiro (Mar 13 2020 at 14:59):

I would have hoped that all the axioms taken together are consistent. Univalence is not in the coq standard library, I think

Pierre-Marie Pédrot (Mar 13 2020 at 14:59):

You can have a look at https://github.com/coq/coq/wiki/The-Logic-of-Coq and https://github.com/coq/coq/wiki/CoqAndAxioms.

Mario Carneiro (Mar 13 2020 at 15:00):

All 7 axioms on that page are consistent with each other in lean, and I hope in Coq as well

Pierre-Marie Pédrot (Mar 13 2020 at 15:00):

There is no such thing as "true mathematics" so it doesn't make sense to ask our users to abide by a specific set of axioms.

Pierre-Marie Pédrot (Mar 13 2020 at 15:01):

Compatibility is a strength. But if you like sets, you can throw in whatever set-theoretical axiom you fancy.

Pierre-Marie Pédrot (Mar 13 2020 at 15:02):

And I am not a crazy intuitionist, I would actually like to have some extension of CIC to classical reasoning that preserves its good computational properties. But it's far from being an easy task. And no, I don't consider that what Lean does is the right way.

Mario Carneiro (Mar 13 2020 at 15:03):

As long as the union of all provided axioms is still consistent, it's not so bad to work with subsets. You can choose not to use any of them, or just one, or just a different one, and all the libraries are compatible. As soon as you throw in an axiom that is inconsistent with the others, you bifurcate the libraries and the community

Pierre-Marie Pédrot (Mar 13 2020 at 15:04):

So what? There are people writing thread-safe code in C++, that doesn't mean that you can't also write code that is thread-unsafe.

Mario Carneiro (Mar 13 2020 at 15:04):

I actually don't particularly like lean's approach to axioms either. There are 3 axioms, and for almost all work you basically are forced to accept them all. There is very little room for playing with subsets

Mario Carneiro (Mar 13 2020 at 15:06):

In metamath there are something like 20-25 axioms, and while many of the subsets are not interesting, you can still play with avoiding the axiom of choice, or the axiom of foundation, or replacement, or first order logic, or intuitionistic logic.

Mario Carneiro (Mar 13 2020 at 15:07):

However, the union of all available axioms is TG set theory which is rather strong but still believed to be consistent

Mario Carneiro (Mar 13 2020 at 15:07):

so you don't need to have two separate libraries

Pierre-Marie Pédrot (Mar 13 2020 at 15:08):

This may look like an alien statement to the mathematicians around here, but I do think this kind of belief of "the one metatheory to rule them all" is 19th century thinking. If you accept proof-as-programs, then it makes sense to have several programming languages enforcing contradictory invariants. And this has nothing to do with the old feud between intuitionistic crackpots and the rest of the mathematical community, this is just about accepting the world as it is.

Johan Commelin (Mar 13 2020 at 15:09):

Who cares? If you want to get mathematicians on board, you'll have to accept that they don't care about foundations and just want to prove their theorems.

Johan Commelin (Mar 13 2020 at 15:09):

I know many mathematicians who really don't care about foundations. (I can't even list the axioms of ZFC myself.)

Pierre-Marie Pédrot (Mar 13 2020 at 15:09):

... or maybe we can teach the mathematicians about computer science to make them understand it's not a sterile debate?

Mario Carneiro (Mar 13 2020 at 15:09):

I'm all for supporting multiple foundations, but I also want to avoid erecting walls between formal developments

Johan Commelin (Mar 13 2020 at 15:09):

They don't care about correctness either. As long as proofs converge within a reasonable amount of time, that's fine.

Mario Carneiro (Mar 13 2020 at 15:10):

so compatibility is key

Johan Commelin (Mar 13 2020 at 15:10):

Pierre-Marie Pédrot said:

... or maybe we can teach the mathematicians about computer science to make them understand it's not a sterile debate?

They aren't interested in that either.

Johan Commelin (Mar 13 2020 at 15:10):

I don't say you should change. But in that case, don't get your hopes up on having mathematicians among your users.

Pierre-Marie Pédrot (Mar 13 2020 at 15:10):

@Mario Carneiro the situation is not as dire as you think from the point of view of foundations in the Coq community, there are mainly the Set people and the HoTT people.

Johan Commelin (Mar 13 2020 at 15:10):

What I am interested in is a proof assistant that helps me do research in arithmetic geometry.

Johan Commelin (Mar 13 2020 at 15:11):

Currently no such a thing exists.

Pierre-Marie Pédrot (Mar 13 2020 at 15:11):

@Mario Carneiro technical issues regarding the design choices of libraries are a much bigger issue than foundations.

Johan Commelin (Mar 13 2020 at 15:11):

And as long as I need to spend brain power on foundational issues, it won't happen either.

Mario Carneiro (Mar 13 2020 at 15:11):

Again, compatibility is key

Mario Carneiro (Mar 13 2020 at 15:11):

This is why mathlib is a monorepo

Pierre-Marie Pédrot (Mar 13 2020 at 15:14):

Once again, I think we should turn to software engineering to draw inspiration. It is not uncommon to see various styles emerge. Then people tend to turn this state of affairs into the dreaded "platform" thingy, so that you effectively get another programming language atop of the base one, enforcing more constraints like style and whatnot. If you need to make two libraries using different platforms interact, then you're toast.

Mario Carneiro (Mar 13 2020 at 15:14):

If it really is the Set people and the HoTT people, then asking mathematicians to use HoTT is like asking them to jump over the wall. And they very reasonably point out that they like the things on this side of the wall and don't want to give them up, even though there are some cool things on the other side too.

Mario Carneiro (Mar 13 2020 at 15:15):

At least there is the C ABI for interop when it comes to cross library compatibility in software

Pierre-Marie Pédrot (Mar 13 2020 at 15:15):

You can do Set theory in HoTT, but this is more like bondage than anything else.

Pierre-Marie Pédrot (Mar 13 2020 at 15:15):

As for C ABI, this is not a magical entity that happened overnight, this is the result of history and social interaction.

Pierre-Marie Pédrot (Mar 13 2020 at 15:16):

My personal belief is that MLTT plus or minus is a rather good minimal system.

Mario Carneiro (Mar 13 2020 at 15:16):

but you can't actually share results in a base theory, because the theorems need more than the base theory

Pierre-Marie Pédrot (Mar 13 2020 at 15:16):

There are French people believing that you should do everything via λΠ-modulo.

Mario Carneiro (Mar 13 2020 at 15:17):

If everyone worked in MLTT then they could be "universal donors" but there is no industrial strength implementation of MLTT

Pierre-Marie Pédrot (Mar 13 2020 at 15:17):

Well, so what? So be it. If I have code that uses threads, then it will break libraries relying on the absence of threads. That's life.

Pierre-Marie Pédrot (Mar 13 2020 at 15:18):

You don't consider Coq to implement MLTT, or that it's not industrial-strength?

Johan Commelin (Mar 13 2020 at 15:19):

@Pierre-Marie Pédrot Here is a result that I find extremely interesting: https://en.wikipedia.org/wiki/Resolution_of_singularities
I would like to know what you think is the shortest path to getting such a statement and proof in a computer proof assistant. I have every reason to believe that Lean is currently the proof assistant that is moving in that direction the fastest.

Mario Carneiro (Mar 13 2020 at 15:19):

Coq implements a too strong theory, hence it cannot export results elsewhere

Mario Carneiro (Mar 13 2020 at 15:20):

The reason "Coq is a Lean typechecker" and not the other way around is because Coq is stronger. Strength is not a strength

Pierre-Marie Pédrot (Mar 13 2020 at 15:20):

@Johan Commelin foundation-wise, there is no real difference between Lean and Coq. Anything you can do with one, you can do with the other. @Mario Carneiro well, but nobody prevents you from defining subsystems.

Pierre-Marie Pédrot (Mar 13 2020 at 15:20):

Coq is not stronger than Lean, @Gaëtan Gilbert had to specifically use unsafe features to implement Lean.

Mario Carneiro (Mar 13 2020 at 15:21):

It's very difficult to have subsystems of Coq where it matters. Can you turn off inductive types? Fancy pattern matches? Type n for n > 3?

Pierre-Marie Pédrot (Mar 13 2020 at 15:21):

If anything, modulo implemementation details, Lean is an extension of Coq.

Pierre-Marie Pédrot (Mar 13 2020 at 15:22):

What system do you want to get? As far as I am concerned, if you remove stuff like SProp, modules and template polymorphic inductive types, and only use eliminators, I get a reasonable foundation.

Mario Carneiro (Mar 13 2020 at 15:22):

The truth is that neither Coq or Lean is a subsystem of the other because DTT is too sensitive to minor differences

Mario Carneiro (Mar 13 2020 at 15:22):

Can I do something with equivalent axiomatic strength to HOL in Coq?

Mario Carneiro (Mar 13 2020 at 15:23):

Or at least not surpassing ZFC (without going intuitionistic / predicative)?

Pierre-Marie Pédrot (Mar 13 2020 at 15:23):

Coq is conjectured to be strictly weaker than ZFC + ω universes btw.

Mario Carneiro (Mar 13 2020 at 15:24):

you are talking about no-axioms Coq though

Pierre-Marie Pédrot (Mar 13 2020 at 15:24):

At the risk of repeating myself, there is no such thing as "Coq + Axioms".

Mario Carneiro (Mar 13 2020 at 15:24):

I want all the axioms on that page

Mario Carneiro (Mar 13 2020 at 15:24):

Coq + those axioms

Pierre-Marie Pédrot (Mar 13 2020 at 15:24):

Yes, but then you get a crappy theory.

Mario Carneiro (Mar 13 2020 at 15:25):

Why? It's the theory mathematicians are used to

Pierre-Marie Pédrot (Mar 13 2020 at 15:25):

It's not a programming language anymore.

Mario Carneiro (Mar 13 2020 at 15:25):

or at least, the best approximation you can get to it within the constraints of dependent type theory

Pierre-Marie Pédrot (Mar 13 2020 at 15:25):

Having axioms in SProp does look like a promising way though.

Mario Carneiro (Mar 13 2020 at 15:26):

You know, I don't think saying "it's not a programming language anymore" will get you much sympathy around here. We don't want a programming language, we want a proof language

Pierre-Marie Pédrot (Mar 13 2020 at 15:26):

As for HOL, the Dedukti people wrote a few translators from various systems. You could try playing with it.

Mario Carneiro (Mar 13 2020 at 15:26):

You can always use computable things if you want to compute

Pierre-Marie Pédrot (Mar 13 2020 at 15:27):

This is why Coq has the distinction Prop / Type. Add your proof axioms in Prop, compute in Type.

Mario Carneiro (Mar 13 2020 at 15:28):

Sure, and none of those axioms interferes with this AFAIK

Pierre-Marie Pédrot (Mar 13 2020 at 15:28):

And FTR I don't really care about the sympathy of mathematicians. I have enough "real math" friends of mine who make fun of type theory and computer assistants. It's a social issue, not a technical one.

Mario Carneiro (Mar 13 2020 at 15:29):

I just don't see why they have to be at odds

Pierre-Marie Pédrot (Mar 13 2020 at 15:29):

Lean Santa Claus version of choice "inhabited A -> A" is very bad in the Prop / Type distinction though.

Mario Carneiro (Mar 13 2020 at 15:30):

Sure, it's noncomputable

Mario Carneiro (Mar 13 2020 at 15:30):

The cool thing is that if you use it only in proofs it becomes computable again

Mario Carneiro (Mar 13 2020 at 15:30):

that is, classical.em is computable

Pierre-Marie Pédrot (Mar 13 2020 at 15:30):

Well, breaking SR, strong normalization and whatnot.

Pierre-Marie Pédrot (Mar 13 2020 at 15:31):

I am afraid this is a dealbreaker.

Pierre-Marie Pédrot (Mar 13 2020 at 15:31):

But anyways, if a development uses choice, then will you consider this as valid?

Mario Carneiro (Mar 13 2020 at 15:31):

Axioms don't break SR, proof irrelevance and large elimination do

Pierre-Marie Pédrot (Mar 13 2020 at 15:31):

In a noncomputable way, I mean?

Mario Carneiro (Mar 13 2020 at 15:31):

strong normalization doesn't matter

Pierre-Marie Pédrot (Mar 13 2020 at 15:32):

(You can't shift the blame onto one axiom, it's the whole interaction of rules.)

Mario Carneiro (Mar 13 2020 at 15:32):

That concerns the behavior of #reduce which we literally never use because it's impractically slow on almost all examples

Pierre-Marie Pédrot (Mar 13 2020 at 15:32):

strong normalization doesn't matter

Pierre-Marie Pédrot (Mar 13 2020 at 15:32):

for some people it does

Pierre-Marie Pédrot (Mar 13 2020 at 15:33):

so, please answer the question: assuming some subcommunity wants to restrict to computable functions

Pierre-Marie Pédrot (Mar 13 2020 at 15:33):

for some reason, doesn't matter

Pierre-Marie Pédrot (Mar 13 2020 at 15:33):

what does mathlib tell them?

Mario Carneiro (Mar 13 2020 at 15:33):

What matters is evaluating computable functions that were written to be computable, not evaluating all terms anywhere in the proof assistant including proofs and stuff

Pierre-Marie Pédrot (Mar 13 2020 at 15:34):

OK, this foundation stuff is a red herring. Let me put it another way. Assume there are people deciding to go ssr-style in Lean.

Pierre-Marie Pédrot (Mar 13 2020 at 15:35):

What happens? A fork?

Pierre-Marie Pédrot (Mar 13 2020 at 15:35):

That's essentially what happened to Coq a long time ago.

Mario Carneiro (Mar 13 2020 at 15:36):

Lean has a marker, called noncomputable, that you are required to put on any definition that cannot be executed. There is a way to put this at the start of your file to say "I don't care about computability", and most mathematicians do this for their theories of category theory, commutative algebra and whatnot. If you care about computability, you write computable defs, which don't have this marking, and lean will generate bytecode on the spot. These functions can be called via #eval with concrete arguments and you get values out just like any programming language.
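
A concrete illustration of the two sides (some_real and double are names made up for this example):

import data.real.basic

-- a choicy definition must carry the marker ...
noncomputable def some_real (h : ∃ x : ℝ, x > 0) : ℝ := classical.some h

-- ... while an ordinary definition compiles to bytecode and can be run
def double (n : ℕ) : ℕ := 2 * n
#eval double 21   -- 42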

Pierre-Marie Pédrot (Mar 13 2020 at 15:37):

Right, right. But my point is about the existence of diverging design choices in libraries in software.

Pierre-Marie Pédrot (Mar 13 2020 at 15:37):

Doesn't even have to be Lean.

Mario Carneiro (Mar 13 2020 at 15:38):

Moreover, there are ways to compute even with noncomputable terms, using tactics, and this is in fact the preferred method (by comparison to proofs by reflection). This is exemplified in tactics like norm_num that will prove things like 100 + 100 = 200 by cobbling proofs together, even though yes, you could prove it by rfl, the proof takes too long
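
Concretely:

import tactic.norm_num

-- the tactic builds a small proof over binary numerals instead of asking the
-- kernel to unfold the addition step by step
example : 100 + 100 = 200 := by norm_num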

Pierre-Marie Pédrot (Mar 13 2020 at 15:38):

Invariably, there will be people disagreeing with whatever choice done in mathlib.

Pierre-Marie Pédrot (Mar 13 2020 at 15:39):

the proof takes too long

really? this sounds suspicious to me

Mario Carneiro (Mar 13 2020 at 15:39):

It unfolds into unary. It's bad

Pierre-Marie Pédrot (Mar 13 2020 at 15:39):

yes, but tactics will have to generate similarly big proof terms

Pierre-Marie Pédrot (Mar 13 2020 at 15:40):

doesn't it mean that the reduction machine is not really optimized in Lean?

Mario Carneiro (Mar 13 2020 at 15:40):

The short story is that in Lean the tactics are fast and the kernel reduction (plus elaborator reduction) is comparatively slow. In Coq it is the other way around and the cost benefit analysis goes the other way

Pierre-Marie Pédrot (Mar 13 2020 at 15:41):

OK, this is surprising.

Mario Carneiro (Mar 13 2020 at 15:41):

Checking big proof terms is not that bad, as long as the types all line up

Pierre-Marie Pédrot (Mar 13 2020 at 15:42):

well, type-checking is at least linear in the size of the term

Mario Carneiro (Mar 13 2020 at 15:42):

yes, that's fast if the term is small

Pierre-Marie Pédrot (Mar 13 2020 at 15:43):

yeah but I fail to see how you can build a reasonable proof of "100 + 100 = 200" that is not generating a term requiring more work than mere evaluation, in unary that is

Mario Carneiro (Mar 13 2020 at 15:44):

Note also that if you provide a proof term you have a lot of flexibility in how to do the computation. For example when norm_num proves or disproves primality, the first thing it does is find out if the number is actually prime, and if not, it has a very short proof of the negation. A reduction based proof would have to do the whole primality thing even in the composite case
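
A hedged Lean 3 sketch of the composite case, assuming mathlib's data.nat.prime and its lemma nat.prime.eq_one_or_self_of_dvd: exhibiting a single factor is enough, and the arithmetic side conditions are discharged by norm_num.

import data.nat.prime
import tactic.norm_num

-- 91 = 7 * 13, so the factor 7 is a short certificate of compositeness
example : ¬ nat.prime 91 :=
λ h, by cases h.eq_one_or_self_of_dvd 7 ⟨13, by norm_num⟩ with h7 h7; norm_num at h7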

Pierre-Marie Pédrot (Mar 13 2020 at 15:44):

depends how you write it

Mario Carneiro (Mar 13 2020 at 15:44):

You can produce a proof of 100 + 100 = 200 in size linear in the size of the expression, which is written in binary after the parser has had a go

Pierre-Marie Pédrot (Mar 13 2020 at 15:45):

the ssr people are very good at this kind of stuff

Reid Barton (Mar 13 2020 at 15:45):

The literal "100" itself is not in unary, as a term

Bas Spitters (Mar 13 2020 at 15:46):

@Mario Carneiro @Johan Commelin Is it possible that what mathematicians in Lean are really using is Lawvere's version of choice as in ETCS, and not Hilbert's epsilon, which is non-structural?

Mario Carneiro (Mar 13 2020 at 15:48):

Let's suppose you want to reduce prime N to true or false where N is some big number, say 50 digits. You have no choice but to push the kernel through a bunch of states to test the primality, while a tactic can just call GMP or mathematica to first factorize it and then prove the correctness of this untrusted guess

Pierre-Marie Pédrot (Mar 13 2020 at 15:49):

@Mario Carneiro that's assuming you go full reflection, but precisely the ssr people are arguing for interleaved computation with tactics.

Mario Carneiro (Mar 13 2020 at 15:50):

Is that so? I didn't realize that. I usually have a tough time being able to insert a proof term in the middle of a defeq computation

Pierre-Marie Pédrot (Mar 13 2020 at 15:51):

I am probably not qualified enough to talk about this but this is precisely the whole story of the "small-scale" nature of this approach. I'd recommend asking @Assia Mahboubi for instance.

Mario Carneiro (Mar 13 2020 at 15:53):

@Bas Spitters In a literal sense, of course they are not; lean provides one choice axiom and they are using it. But more likely you mean that they would be equally happy with a Lawvere style axiom of choice, and I'm not sure this is true. Among other things, we use choice to do definite descriptions. So things like sum (f : nat -> real) : real will be defined using choice. (It would also be used here to do an outer if statement: if the sum is not defined then return 0 else return the unique sum of the sequence.)
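
The shape Mario describes is "definite description with a default value"; mathlib's tsum follows it. A self-contained sketch with an illustrative name (pick_or_default is not a mathlib definition):

universe u

local attribute [instance] classical.prop_decidable

-- if some a satisfies p, return one such a (definite description via choice); otherwise return d
noncomputable def pick_or_default {α : Type u} (p : α → Prop) (d : α) : α :=
if h : ∃ a, p a then classical.some h else d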

Reid Barton (Mar 13 2020 at 15:55):

I think this "naked choice" is potentially problematic even just in the context of Lean itself. For example (as noted earlier, I think) it blocks automatic syntax-directed proofs of parametricity.

Reid Barton (Mar 13 2020 at 15:55):

It would be better to split off axiom_of_choice (the Prop version) and unique_choice into separate axioms
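
In Lean 3 notation, the split Reid proposes might be stated roughly as follows (hypothetical axiom names, shown only as a sketch):

universes u v

-- Prop-valued choice: a choice function exists, but only inside an existential
axiom ac_prop {α : Sort u} {β : α → Sort v} (r : Π a, β a → Prop)
  (h : ∀ a, ∃ b, r a b) : ∃ f : Π a, β a, ∀ a, r a (f a)

-- unique choice: escape from Prop only when the data is uniquely determined
axiom unique_choice {α : Sort u} [subsingleton α] : nonempty α → α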

Bas Spitters (Mar 13 2020 at 15:57):

@Johan Commelin I am not an expert on this topic, but the people in constructive/computer algebra have considered it. E.g.
https://arxiv.org/abs/1304.6770
What is interesting is that the same abstractions are useful in topos theory (Coste, Roy, Lombardi, Wraith, ...) and in computer algebra (Magma, GAP, ...).
@Assia Mahboubi may know more.
I would imagine that a more algebraic development is more suitable for formalization. However, it is possible that we do not have all the right abstractions yet.

Reid Barton (Mar 13 2020 at 15:57):

Then some definitions like (hypothetically, I don't think this is the actual mathlib definition) the dimension of a vector space as "pick a basis, and take its cardinality" can be formulated in terms of unique_choice, but it requires proving that the result is independent of the choice. Which is also the rule in ordinary math.

Reid Barton (Mar 13 2020 at 15:58):

(I think the actual mathlib definition is "the smallest cardinality of a basis"? but surely there are other examples)

Mario Carneiro (Mar 13 2020 at 16:00):

sum mentioned above is an example. There are two uses of choice in that definition, and they are both definite descriptions

Gabriel Ebner (Mar 13 2020 at 16:00):

I'm not sure how the dimension of a (real) vector space could ever be computable. What is the dimension of the span of a real number x? This requires computing whether x is nonzero. But alas, that's noncomputable.

Mario Carneiro (Mar 13 2020 at 16:00):

It's not computable, unique_choice would still be an axiom

Gabriel Ebner (Mar 13 2020 at 16:02):

Ah, sorry didn't follow the discussion too much. unique_choice [subsingleton A] : nonempty A -> A then?

Reid Barton (Mar 13 2020 at 16:02):

This also fits better with my mental model of the semantics--I want to say you get a model of Lean from a model of ZFC+omega inaccessibles, not from such a model together with some specific choice of choice functions.

Reid Barton (Mar 13 2020 at 16:02):

Of course full choice would probably be needed/convenient for some purposes, but at least you could track it separately

Mario Carneiro (Mar 13 2020 at 16:04):

You can also do unique_choice : unique A -> A

Gabriel Ebner (Mar 13 2020 at 16:05):

This version of unique_choice doesn't require an axiom though, since unique extends inhabited.
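
Indeed, assuming mathlib's unique class (which extends inhabited), Gabriel's observation is a one-line, axiom-free definition (unique_elem is an illustrative name):

import logic.unique

universe u

def unique_elem (A : Sort u) [unique A] : A := default A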

Mario Carneiro (Mar 13 2020 at 16:06):

oh

Mario Carneiro (Mar 13 2020 at 16:06):

nonempty (unique A) -> A? Doesn't have the same ring

Bas Spitters (Mar 13 2020 at 16:06):

Corollary 3.9.2 (The principle of unique choice). Suppose a type family P : A → U such that
(i) for each x, the type P(x) is a mere proposition, and
(ii) for each x we have ‖P(x)‖.
Then we have ∏(x:A) P(x).

Mario Carneiro (Mar 13 2020 at 16:08):

The use of a type family there seems superfluous

Mario Carneiro (Mar 13 2020 at 16:09):

Note that lean's choice used to be more complicated, and it was simplified to the current nonempty A -> A

Mario Carneiro (Mar 13 2020 at 16:10):

I think the original axiom was indefinite_description {α : Sort u} (p : α → Prop) : (∃ x, p x) → {x // p x}

Mario Carneiro (Mar 13 2020 at 16:17):

If I ignore the thing about the type family (set A = unit), then it seems that the HoTT version reduces to what Gabriel posted

Gabriel Ebner (Mar 13 2020 at 16:23):

The Lean version of unique choice is pretty strong since we have more subsingletons, e.g. it implies that all types have decidable equality. Does the HoTT version allow you to write λ x : ℝ, if x = 0 then tt else ff?

Bas Spitters (Mar 13 2020 at 16:25):

@Gabriel Ebner I believe so in the presence of PEM, since the term is definable in ETCS using unique choice.
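
For reference, the function Gabriel mentions is definable on the Lean side, but only noncomputably (a minimal sketch; test_zero is an illustrative name):

import data.real.basic

local attribute [instance] classical.prop_decidable

noncomputable def test_zero : ℝ → bool := λ x, if x = 0 then tt else ff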

Mario Carneiro (Mar 13 2020 at 16:26):

I think the proof of unique choice in HoTT is similar to the lean proof of [subsingleton A] : trunc A -> A
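
A sketch of that Lean proof, assuming the trunc API (trunc.lift) available via mathlib's data.quot; no axiom is needed because any two elements of A are equal:

import data.quot

universe u

def trunc_out_of_subsingleton {A : Sort u} [subsingleton A] : trunc A → A :=
trunc.lift id (λ a b, subsingleton.elim a b)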

Kevin Buzzard (Mar 13 2020 at 17:04):

Pierre-Marie Pédrot said:

There is no such thing as "true mathematics" so it doesn't make sense to ask our users to abide to a specific set of axioms.

@Pierre-Marie Pédrot in my department the vast majority of us think ZFC is true mathematics, and I am certain that many other mathematics departments think the same way. The fact that logicians and computer scientists have invented a whole bunch more options and then decided to call them mathematics is problematic, because it means that discussions can degenerate into axiomatic issues which, as far as a mathematician is concerned, are completely solved. The axiom of choice is an axiom of maths as practiced in mathematics departments. Johan is absolutely right when he says that mathematicians are not interested in this stuff. You say

... or maybe we can teach the mathematicians about computer science to make them understand it's not a sterile debate?

but it is much worse than that -- it is not a sterile debate -- we finished debating 100 years ago and decided what mathematics was, and the fact that you guys are still arguing over it is of no interest to us. From your point of view this debate is very important, because if you want to design a new computer proof system then of course foundations are absolutely a major issue. But from a mathematician's point of view, ZFC works, constructivism doesn't work, ZF doesn't work, and these topics are not being studied in mathematics departments other than by a small niche group of people. We mathematicians are looking for a system where we can reason the way we reason on paper and, having tried several systems, I currently believe that Lean 3 is the closest we have to this; I can name explicit problems with every other system, either of the form "not enough foundations yet" or "can't do some part of modern mathematics in a way which is practical". For Coq it is "quotients", which are absolutely everywhere in modern mathematics.

Once again, I think we should turn to programming engineering to draw inspiration.

We mathematicians cannot turn to programming languages to draw inspiration because for you guys it's really important to have lots of different programming languages, because different languages are good at different things. We are attempting to make one coherent body of mathematics, and it's really important that everything is compatible with everything else. In particular every paper in number theory has to be accessible to me if I am a number theorist so it's really important that they all have the same foundations, and these foundations are decided -- they are ZFC -- and sure, add universes if you like, it doesn't matter, because it turns out that the kind of mathematics we do can sometimes be made easier with universes but we actually know tricks to get it back into ZFC. But we will not settle for anything less. LEM and AC are embedded. We are not going back.

And FTR I don't really care about the sympathy of mathematicians. I have enough "real math" friends of mine who make fun of type theory and computer assistants. It's a social issue, not a technical one.

What I am trying to do in Lean is to bring mathematicians in and get them using this software. In particular I need to hide type theory if I want to get them to use computer assistants. It is a social issue and it is one which I think is extremely important, but we as proof assistant users need to tread carefully when attracting mathematicians into the area. I simply tell mathematicians that a type is a set where the elements aren't necessarily sets any more but could just be "atoms", and suggest that actually this is the way they think about stuff they call sets.

I want to meet computer scientists "half way" and so I can have some dialogue with them about the future of these systems -- but there are some ground rules, and AC and LEM and noncomputability are part of them. Any suggestion that a system which doesn't have them can be used to do modern mathematics is ridiculous to our ears, so the conversation cannot even start. This is why I use Lean and this is why I tell other mathematicians to use Lean.

Pierre-Marie Pédrot (Mar 13 2020 at 18:06):

LEM and AC are embedded. We are not going back.

This is great, because that's what some people in the community are trying to provide in type theory in a way that actually works. Disclaimer: I am one of those, and I am not advocating against EM + AC. I am just constantly disappointed by the lack of consideration from mathematicians on the topic of foundations. And I am not saying that anybody should turn into a type theorist, but at least that they should be listened to, which is very different.

So far, the logical choices of Lean are the dependent type theory equivalent of PHP. Yes, it kind of works, until you reach the size of Facebook and realize that you wrote your stuff on quicksand, and that disregard for foundational issues has a long-term cost you'll only realize at that point. And now, they're switching parts of their code to ReasonML because the trade-off is long overdue.

What I am trying to do in Lean is to bring mathematicians in and get them using this software.

I am not complaining about that. It is better to use whatever proof assistant than nothing, and the state of a few areas of mathematics is depressing. Needing a Fields medal to dismiss a broken proof is not a great situation, if you ask me.

What I am trying to do in Lean is to bring mathematicians in and get them using this software. In particular I need to hide type theory if I want to get them to use computer assistants. [...] actually this is the way they think about stuff they call sets.

There is wide evidence that no mathematician ever uses ZFC anyways, so I don't think it's very hard to make them forget about sets, except maybe for some weird Hilbertian fetishism.

For Coq it is "quotients", which are absolutely everywhere in modern mathematics.

I will call your bluff, because you keep spreading this lie around, and I am starting to be slightly annoyed. As demonstrated by @Gaëtan Gilbert, it is perfectly possible to get quotients in Coq with the very same rules, if you opt in to a broken feature that is hardwired in Lean. Even Santa Claus choice can be obtained by similar tricks. I have actually heard rumours that you didn't even try Coq thoroughly, and I even believe there is a written account of that. You indeed don't have to try everything out there to settle on a tool, and it's nice to see math users playing around with any type-theoretic proof assistant.

Yet, I am a type theorist, and I would never go around making random claims about some arbitrary area of mathematics.

But once again, I claim the minimal divergence of foundations between Lean and Coq is of little relevance to the formalization of mathematics. From what I gather, Lean has not reached Gonthier's famous wall of complexity. By that, he means that a lot of a priori ridiculous technical impediments stand in the way of mathematical formalization. I am genuinely willing to see an actual large mathematical development in Lean, to satisfy my own computer scientist fetishism, namely scalability. It might be changing, but Coq proofs are still a cottage industry. Can Lean do better?

Sebastien Gouezel (Mar 13 2020 at 18:18):

Pierre-Marie Pédrot said:

As demonstrated by Gaëtan Gilbert, it is perfectly possible to get quotients in Coq with the very same rules, if you opt in to a broken feature that is hardwired in Lean. Even Santa Claus choice can be obtained by similar tricks.

I don't think the question is whether they can be constructed in Coq. To an outsider to type theory like me, the systems are sufficiently similar that I have little doubt that something natural that can be made with one can be made with the other. The question is rather whether they are used in the libraries, i.e., whether the main libraries shipped with the software take a constructivist point of view, or don't care and go fully classical right away. Here I can see a really different approach currently between the Coq and Lean communities. I think it is very healthy that different viewpoints are tried and put forward, but mathematicians will be more easily attracted to libraries that use choice and LEM from the very beginning, because it will sound more familiar to them.

Kevin Buzzard (Mar 13 2020 at 18:19):

You are right about my Coq experience. I heard about this "setoid hell" from various people but I am certainly not an experienced Coq user. I will try and get some more concrete facts.

Mathematicians are terrible about foundations. It is not regarded as an interesting topic by most people, because "all the interesting questions were solved when forcing was invented" or something :-/ You are also right that we don't use ZFC in practice; however, it is true that there is a general belief that whatever it is that we are doing, it is possible to translate it down to ZFC (and classical logic) in theory. My impression is that in the areas I am interested in (number theory, arithmetic geometry) it is possible to find people (they are rare, but they exist) who actually know how to do this, but it is not something which is deemed particularly important. Somehow the workers in each area know the API, and then a few people know how to get from the API to the foundations, but what is taught is the API.

I am not necessarily saying that Lean is the solution to everything. But I think it is close enough to being able to simulate the kind of mathematics done by mathematicians that it might be enough to get people interested in the theory in general. My general line of thought is this. Which system is best for modern mathematics? We don't really know. So let's try and get mathematicians actually making mathematics in as many systems as possible, because my impression is that when we get stuck the computer scientists are able to understand what the problem is and try to solve it. You know we have some typeclass problems in Lean 3, which I think were basically discovered because of mathematicians trying to push the system? Hopefully these will be fixed in Lean 4.

I guess it will take time before we have a large development. I guess we are working on mathlib as being this development?

Pierre-Marie Pédrot (Mar 13 2020 at 18:25):

mathematicians will be more easily attracted to libraries that use choice and LEM from the very beginning, because it will sound more familiar to them.

I don't know why Coq has this aura of knife-between-their-teeth intuitionists in the mathematical community. Just like mathematicians, most computer scientists don't care at all about foundations. Most serious computer science Coq developments I know of actually use classical axioms everywhere.

Kenny Lau (Mar 13 2020 at 18:28):

Is knife-between-their-teeth a French phrase?

Kevin Buzzard (Mar 13 2020 at 18:29):

It's clear what is meant :-)

Kenny Lau (Mar 13 2020 at 18:30):

I actually wasn't able to infer it from context

Kevin Buzzard (Mar 13 2020 at 18:31):

I remember one of the first things I noticed when reading the Coq odd order paper was that they said that the result was proved constructively, and I thought "what does that even _mean_?" (this was a few years ago, before I knew the definition). And then later on they started explaining how they couldn't use the complex numbers and I was totally speechless. I think it is more the issue that constructivism is simply not taught in mathematics departments, so we find anyone who talks about it slightly scary.

Kevin Buzzard (Mar 13 2020 at 18:32):

I literally had no idea 5 years ago that the complex numbers were somehow illegal for some people.

Pierre-Marie Pédrot (Mar 13 2020 at 18:36):

It's strange, because I am fairly sure mathcomp uses classical logic locally. Dually to Lean, they mark classical statements by quantifying over a proof of EM. Anyway, AFAIR it does not even matter for the odd order theorem, since it's a negative statement, and those are essentially equivalent in intuitionistic and classical logic.

Kevin Buzzard (Mar 13 2020 at 18:37):

I guess the point is that all the constructions they used in the proof were also constructive.

Pierre-Marie Pédrot (Mar 13 2020 at 18:38):

See Chapter 3.3 of https://math-comp.github.io/mcb/book.pdf.

Sebastien Gouezel (Mar 13 2020 at 18:40):

When I started with proof assistants (I wanted to do some measure theory), I tried both Coq and Isabelle and I gave up very quickly with Coq, one of the reasons being that in the standard library many files talking about the reals start with the word "Constructive".

Kevin Buzzard (Mar 13 2020 at 18:41):

The link is just the sort of thing that mathematicians find intimidating. We do not teach decidability in any serious way; I think I met it once in the 2nd year logic course and then it was never mentioned again (and I am certain that my 3rd year lecturers did not know what it was). I learnt very quickly when talking to mathematicians about Lean that the moment you start mentioning these things they think you've become some kind of crank.

Kevin Buzzard (Mar 13 2020 at 18:43):

I think this is the main reason I am so edgy about the univalence axiom. It starts with a discussion of univalence, and within minutes the conversation has either turned to elementary topoi (about which mathematicians know nothing), or constructivism (about which mathematicians know nothing), or some discussion about how things can be equal in more than one way (which just sounds like complete craziness). It is so far from what happens in practice in maths departments.

Pierre-Marie Pédrot (Mar 13 2020 at 18:45):

many files talking about the reals start with the word "Constructive"

This is funny, because the Coq real library in the stdlib is precisely the classical one, to please the mathematicians. I guess merely uttering the word "constructive" is enough to deter mathematicians then...

Sebastien Gouezel (Mar 13 2020 at 18:48):

Absolutely. Because it means that, at some point, you will have to make some effort to stay constructive (otherwise, you wouldn't put the word "constructive" in the file name). So, the definition starts with some constructive stuff, and then goes classical. To me, if you can express a statement or a proof while remaining constructive but it requires 1% more effort, then it should be avoided (both by the library builder, and by the user who will have to skim the library), simply because there's no point. Maybe I am a little extreme there, but I think it is pretty reasonable in fact.

Johan Commelin (Mar 13 2020 at 18:50):

On this forum, I can say that I see the value of constructive maths (for example, constructive maths holds in any topos). But when I'm having lunch with my colleagues, they don't want to hear anything about giving up the C in ZFC. So yes:

I guess merely uttering the word "constructive" is enough to deter mathematicians then...

My goal is to get them engaged with proof assistants. And for that, mentioning the word "constructive" is quite often an unfortunate obstacle.

Pierre-Marie Pédrot (Mar 13 2020 at 18:52):

@Sebastien Gouezel well, until very recently Coq reals were fully axiomatic, so there was not even a hint of constructivism in there. Now they're trying to have some subsystem that's constructive in order to plug into C-CoRN.

Bas Spitters (Mar 13 2020 at 18:52):

Georges seems to take it as a matter of pride to give a constructive proof. He has a deep appreciation for the use of computation to prove mathematical theorems.

Kevin Buzzard (Mar 13 2020 at 18:54):

I am wondering whether the mathlib people take pride in the fact that the development is 100% axiom-free. Sure we use Lean's inbuilt axioms (because this is just "maths mode") but there are no instances where we build a mathematical object by just adding axioms.

Pierre-Marie Pédrot (Mar 13 2020 at 18:54):

Somewhat unrelated to the discussion, but @Sebastien Gouezel we could try to have more discussions on the topic in person, we're just one building away IIUC... (when they reopen the uni, that is.)

Bas Spitters (Mar 13 2020 at 18:56):

There is now a substantial development of constructive measure theory in Coq, following Bishop, who was a great functional analyst.
https://github.com/coq/coq/pull/9185
The classical theorems are then derived quickly from the constructive ones. One does not need to do it that way, but some people appreciate it.

Pierre-Marie Pédrot (Mar 13 2020 at 18:57):

they don't want to hear anything about giving up the C in ZFC.

One of the advantages of type theories is that they are finer-grained than sets, and it's easier to understand what is what. The usual joke about the understandability of choice vs. Zorn vs. well-ordering makes sense there. CIC validates choice phrased as "you can pick elements of existentials".

Sebastien Gouezel (Mar 13 2020 at 19:02):

Pierre-Marie Pédrot said:

Somewhat unrelated to the discussion, but Sebastien Gouezel we could try to have more discussions on the topic in person, we're just one building away IIUC... (when they reopen the uni, that is.)

Definitely! In May or June, maybe :-)

Johan Commelin (Mar 13 2020 at 19:04):

Pierre-Marie Pédrot said:

One of the advantages of type theories is that they are finer-grained than sets, and it's easier to understand what is what. The usual joke about the understandability of choice vs. Zorn vs. well-ordering makes sense there. CIC validates choice phrased as "you can pick elements of existentials".

That's a good point. Thanks for pointing it out! Still, it's only useful for mathematicians if you can also have transport (not only along equiv's, but also other isos) and have a sizable library of "basic maths".

Bas Spitters (Mar 13 2020 at 19:05):

@Johan Commelin Peter Johnstone once wrote an apology for locale theory, and the benefits of working classically, but without choice.
The point of pointless topology
https://projecteuclid.org/euclid.bams/1183550014
Choice is only used to construct the points of a locale. The locale itself can be constructed without choice. The use of the axiom of choice is a choice.
This has applications in e.g. functional analysis.

Bas Spitters (Mar 13 2020 at 19:12):

@Sebastien Gouezel I had a quick look at your homepage.
https://www.math.sciences.univ-nantes.fr/~gouezel/
You seem to be working in a beautiful part of analysis that is not very structural, i.e. somewhat distanced from, say, functional analysis.
Moreover, it does not seem to be very computational/numerical.
So, constructive methods may be less natural to you.

Patrick Massot (Mar 13 2020 at 19:14):

:open_mouth: I think this is the first I see anyone claiming to know which part of mathematics Sébastien works in.

Bas Spitters (Mar 13 2020 at 19:15):

About the Hausdorff metric: my PhD student once used it to make a 1980s-style graphing calculator that was provably correct. The printing of graphs of functions was done using the Coq notation system.
https://www3.risc.jku.at/publications/download/risc_3448/SCSS2008_Proceedings.pdf
Russell O’Connor. A Computer Verified Theory of Compact Sets. SCSS 2008, RISC-Linz Report Series 08–08: 148–162, Jul 2008, Proceedings

Sebastien Gouezel (Mar 13 2020 at 19:16):

I use a lot of functional analysis (in the sense of operator theory on spaces of functions, so with concrete applications to other problems in mind), but not on the abstract side of things (I know what a trace on a C^* algebra is, but I don't use them in my research). So, I am away from the finite world (where decidable methods are appealing because they can let you compute for free on examples), and away from the more abstract sides (higher homotopy theory for instance) where foundational discussions become relevant. So yes, I am definitely biased away from these questions. But I guess that this is also the case of maybe 90% working mathematicians.

Bas Spitters (Mar 13 2020 at 19:17):

While I'm at it: one can use constructive analysis to do certified computation with classical real numbers:
Cezary Kaliszyk and Russell O’Connor. Computing with Classical Real Numbers. Journal of Formalized Reasoning, Vol. 2, No. 1, 2009, Pages 27–29
http://jfr.cib.unibo.it/article/view/1411/932

How does one prove in Lean that pi ≈ 3.14159265359?

Johan Commelin (Mar 13 2020 at 19:18):

We can only do the first 4 digits (-;

Johan Commelin (Mar 13 2020 at 19:18):

Maybe we pushed it to 7 digits on last year's Pi Day.

Bas Spitters (Mar 13 2020 at 19:19):

@Johan Commelin So, that's one reason you should have used constructive reals :-)

Johan Commelin (Mar 13 2020 at 19:20):

Or we just need some tactic that can spit out proofs.

Bas Spitters (Mar 13 2020 at 19:20):

Arguably, one can also connect with interval computation "by hand", as in the impressive work by Fabian Immler in Isabelle proving that the Lorenz attractor is chaotic.

Patrick Massot (Mar 13 2020 at 19:21):

I really don't understand that bit. Say I prove that some series $\sum a_n$ of rational numbers converges to pi. I can do this 100% classically. And then I prove an estimate on the remainder $\sum_{n\geq N} a_n$. Then I can prove the first $k$ digits are correct. Where do I need anything constructive, as long as I can prove all this and compute with rational numbers?

Bas Spitters (Mar 13 2020 at 19:28):

For most basic analysis such proofs would be naturally constructive, and one would combine the proof of convergence with the estimate on the remainder.
You are right, though: it is not necessary to do it that way.

Sebastien Gouezel (Mar 13 2020 at 19:29):

In Isabelle, which is completely non-constructive but more mature, you would prove it with

lemma ineq3:
  "(pi :: real) ≤ 3.141592658"
by (approximation 40)

where approximation is a tactic that will do all the dirty interval arithmetic for you.

Chris Hughes (Mar 13 2020 at 19:38):

So my understanding is that if I had a constructive proof of convergence of $\sum a_n$, then given some $\epsilon$ I could compute an $N$ such that the sum of the first $N$ terms was within $\epsilon$. But the problem is that whenever I do a continuity proof or a convergence proof I will have to think about providing a minimal $\delta$ or a minimal $N$, in order to make sure the computation time is reasonable; often, with the standard proofs, you'll end up computing an $N$ way bigger than it really needs to be and not really practical to compute with.

Does an unfolded proof of convergence actually provide a practical way of approximating the limit most of the time?
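
A hedged Lean 3 sketch of the "constructive convergence proof carries a modulus" idea Chris describes (an illustrative structure, not mathlib's cau_seq): the modulus turns a requested precision ε into an index beyond which the terms are ε-close.

import data.rat.order

structure cau_seq_with_modulus :=
(seq     : ℕ → ℚ)
(modulus : ℚ → ℕ)
(is_cau  : ∀ ε : ℚ, 0 < ε → ∀ m n : ℕ,
             modulus ε ≤ m → modulus ε ≤ n → abs (seq m - seq n) < ε)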

Bas Spitters (Mar 13 2020 at 19:40):

@Sebastien Gouezel Coq has a similar package:
http://coq-interval.gforge.inria.fr/

Sebastien Gouezel (Mar 13 2020 at 19:42):

Yes I am not surprised: Coq and Isabelle are much more mature than Lean, so most things that should be there are already there.

Bas Spitters (Mar 13 2020 at 19:45):

@Chris Hughes With a bit of engineering this can be made practical. However, with epsilon-delta proofs one is computing backwards from the precision of the result to the precision of the input.
For optimal speed, one wants to compute forwards: start with a small interval and see whether the result is precise enough; if not, start over with doubled precision. This is what iRRAM does:
https://github.com/norbert-mueller/iRRAM
I believe Mathematica does something similar.

Bas Spitters (Mar 13 2020 at 21:08):

I hope I wasn't too zealous; I just wanted to point out that constructive mathematics has its uses in the formalization of mathematics, even if one is not a diehard constructivist.

