Zulip Chat Archive
Stream: general
Topic: scalability and maintainability
Maarten Derickx (Aug 18 2023 at 09:32):
Mario Carneiro said:
FWIW I consider this a strong negative aspect of lean in general: it is not suitable for doing anything "archival quality" because in a year you will have difficulty getting it to compile. Mathematics is supposed to "stay proved", and lean overhauling itself on a regular basis does not help in this regard. Not all theorem provers have this issue, although the approaches used to deal with it vary.
Is the problem here really lean overhauling itself or is it mathlib overhauling itself?
Scott Morrison (Aug 18 2023 at 09:33):
It's a pretty severe problem / feature on both sides. :-) Mathlib is worse, and unlikely to get better. Regular official releases of Lean 4 itself should start soon.
Maarten Derickx (Aug 18 2023 at 09:46):
P.s. I realized that Joseph Myers' post right before that of Mario Carneiro is actually a better context for the discussion I want to start. He raises some excellent challenges, and I am just curious what kind of things can be done to address those challenges.
Joseph Myers said:
I think having a new list of formalization targets is worthwhile, but ideally it would look somewhat different to the old list, reflecting the different challenges for formalization today. The challenge now isn't so much "can we formalize X?" for various X - by now it's clear that just about any individual result X can be formalized on its own with enough effort. And the old list effectively provided lots of examples of such individual results.
Rather, I think challenges now are more about issues such as scaling, breadth, integration, maintenance. Can mathlib scale to 100 times its present size, with a community 100 times its present size and commits going in at 100 times the present rate? Can it reach the point where it has not merely most of an undergraduate degree but many of the key results used by people working in all fields of mathematics? Can all the results be formalized not just on their own, but with all the underlying mathematics being properly integrated in mathlib? Will the proofs be maintained afterwards (arguably several of the entries on Freek's original list should now have some kind of red flag in the list of Lean proofs, that they were only done in Lean 3 in an external repository and so haven't been done in Lean 4 / with current mathlib4 - and I'd guess much the same applies for some of the other theorem provers that have been used to formalize entries on that list)?
Considering those challenges suggests that maybe some of the following would make sense for a new list of formalization targets. (a) It should have a lot more than 100 theorems, to provide broad coverage of different areas of mathematics. (b) It should routinely get more theorems added (whenever new results come to prominence), rather than being a fixed and frozen list (but once an entry is there, it should remain there). (c) While "interesting oddities" as described in a comment on #6091 are worth including, maybe also get people working in (or familiar with) many different fields of mathematics to suggest lists of major theorems in their fields for inclusion. (You might get the "interesting oddities" by counting "mathematics that attracts popular interest or coverage" as being a field there.) (d) There should be an associated convention that a theorem on the list isn't considered done until all the underlying mathematics is fully integrated in the relevant libraries for the theorem prover being used, with the result itself being somewhere that gets maintained on an ongoing basis independent of the original authors.
Jon Eugster (Aug 18 2023 at 10:00):
Maarten Derickx said:
Is the problem here really lean overhauling itself or is it mathlib overhauling itself?
To a certain extent, 'mathlib overhauling itself' probably also reflects mathematics in general overhauling itself; it's just less obvious there. E.g. I've just been working on some lecture notes and they are full of comments about incompatible notions coming from different papers over time, or some authors giving a noun/adjective a completely different meaning than others do.
Maarten Derickx (Aug 18 2023 at 10:01):
For example, I myself have no idea how to actually write maintainable proofs in Lean. I know techniques for doing this in other programming languages. For example:
One important principle in software engineering is isolation. When you are designing a system with multiple components, it is usually a good idea to isolate them from each other so that a change in one component doesn’t have undesired effects on other components.
In a lot of programming languages this kind of isolation is supported by the language itself. I know that Lean has the `private` keyword to help facilitate this. However, it is not explained properly in Theorem Proving in Lean 4, and it is also rarely used in mathlib (my grepping skills seem to indicate that less than 1% of all theorems and definitions in mathlib are private). So basically everything in our API is considered fair game to depend on, which makes it really hard to do refactors.
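A minimal sketch of the isolation `private` provides (the names below are invented for illustration): the helper lemma is invisible outside its own file, so downstream code can only depend on the public theorem, and the helper can be renamed or deleted without breaking anything.

```lean
-- Hypothetical example: `aux_zero_le` is file-local, so only `zero_le_add_one`
-- is part of the public API that a refactor needs to preserve.
private theorem aux_zero_le (n : Nat) : 0 ≤ n :=
  Nat.zero_le n

theorem zero_le_add_one (n : Nat) : 0 ≤ n + 1 :=
  Nat.le_trans (aux_zero_le n) (Nat.le_succ n)
```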
Scott Morrison (Aug 18 2023 at 10:33):
I think you might be mistaken about refactors here. Yes, everything is public. On the other hand, the fact that everything is so thoroughly spec'd (i.e. by having theorems about the definitions), and Lean's type theory is able to pinpoint exactly where things fail, means that it is actually possible to do refactors.
The first time you make a change deep in mathlib and only have to wait a few minutes before Lean tells you about a consequence way off in a different subject, is kind of amazing.
Ruben Van de Velde (Aug 18 2023 at 10:56):
It's easy enough to refactor things within mathlib, but that does have a negative impact on code outside mathlib (which is one of the reasons we're in favor of putting as much as possible in mathlib)
Maarten Derickx (Aug 18 2023 at 12:16):
I'm just a little bit worried that this is something that will come back to make it more difficult for us as mathlib grows and scales. If there is no thought about isolation, at some point certain parts will become de facto unchangeable, just because changing them will cause so much breakage higher up. And getting everything into mathlib as a solution to avoid causing problems for things outside mathlib also doesn't feel like an approach that scales; it will only work as long as mathlib is small enough. Especially since code in PRs also counts as code outside mathlib, so if more people start contributing, one will also notice that more PRs go stale and need updating before they can be merged.
Maarten Derickx (Aug 18 2023 at 12:18):
Anyway, maybe time will tell whether these kinds of things will actually be problems or are just hypothetical. What I am more interested in is knowing, for myself, how to write maintainable code in Lean. I tried to find some resources on this, but couldn't really find any. Does someone have any good pointers for this?
Eric Wieser (Aug 18 2023 at 12:19):
If there is no thought about isolation, at some point certain parts will become de facto unchangeable, just because changing them will cause so much breakage higher up
To some extent this is already the case, but these cases are typically refactors of the interfaces between "isolated" components. The one that comes to mind is when I tried to add another data field (`op_smul`) to docs#Algebra, which meant that every single downstream instance then needed to expose a larger interface than before.
Martin Dvořák (Aug 18 2023 at 13:20):
My two cents on "archival quality" and "theorems should stay proved":
- I think there is a great value in having "all da mafs" in one library. Yes, it comes at a cost. Maintenance load is typically superlinear w.r.t. the size of the library. However, I am not trying to justify "let it grow organically". There are certainly some strategic decisions that can significantly influence the overhead. I am not convinced, however, that isolation is a principle that should be broadly applied to mathlib.
- I think that pen-and-paper proofs are less likely to stand the test of time than Lean proofs. For a Lean proof, even if it depends on a very old version of mathlib, it can always be unambiguously resolved which definitions and lemmas it depends on. Yes, it might sometimes be hard to make it compatible with the present version of mathlib, especially if it was abandoned in a "private archive" (any repo that leanprover-community does not maintain), but it does "stay proved". In contrast, pen-and-paper proofs have a lot of "implicit dependencies". As @Jon Eugster said, many nouns and adjectives mean different things to different people. And even if there was a consensus about their meaning 10 years ago, the default meaning might be different for researchers now. Zeitgeist also influences which informal proof techniques are accepted by the community. Leaving out certain steps might be OK in one epoch but frowned upon later. When working in Lean, we may get more automation over time, but that doesn't mean old proofs that don't use the automation become inferior. I believe that Lean proofs are immortal.
Alex J. Best (Aug 18 2023 at 13:20):
Maarten Derickx said:
Anyway, maybe time will tell whether these kinds of things will actually be problems or are just hypothetical. What I am more interested in is knowing, for myself, how to write maintainable code in Lean. I tried to find some resources on this, but couldn't really find any. Does someone have any good pointers for this?
I think these things will become problems eventually; we shall see if we can come up with technical or organisational solutions in time. I'm quite hopeful, though: we've already overcome several hurdles.
As for writing maintainable proofs, part of it is helping people fix the proof later if it breaks, i.e. using `have` and `suffices` statements and focusing `·`s to make the proof structure clear and simple to fix. Also, using tactics like `ring` and `linarith` that don't care precisely what the input is can be more robust compared to individual rewrites / very manual proofs. On the flip side, not using tactics whose output can easily change (i.e. unbounded `simp`s) in the middle of a proof (use `simp only [blah]` instead) is a best practice that we try to follow.
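A hedged sketch of that style (the statement is invented for illustration, and a broad `import Mathlib.Tactic` is assumed): structure the proof with focusing `·` and named `have` steps, close goals with robust tactics like `linarith`, and use `simp only [...]` rather than a bare `simp` mid-proof.

```lean
import Mathlib.Tactic

example (a b : Int) (h : a ≤ b) : (a + 0) + 1 ≤ b + 2 ∧ a ≤ b + 1 := by
  constructor
  · -- keep the simp set explicit, so this step doesn't silently change
    -- when the default simp lemmas do
    simp only [add_zero]
    linarith
  · -- name the intermediate fact, so a future failure points at a clear step
    have hb : b ≤ b + 1 := by linarith
    linarith
```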
Siddhartha Gadgil (Aug 18 2023 at 14:03):
My experience is much more limited than others' here, but from my little experience, mainly during the port: extreme golfing of proofs seems to make them more fragile. If there is a chain of steps, one can more easily see where it breaks and why.
Eric Wieser (Aug 18 2023 at 14:32):
Note though that the same is true, in a different way, for breakages to a lemma that used to be proved by `simp`; in both the super-golfed case and the super-automated case, you can't tell what the proof was supposed to do without further inspection of the working version.
Jireh Loreaux (Aug 18 2023 at 15:22):
Eric, I think this is true, but only to some extent. During the port, I think our `simp` debugging was enormously hindered by `simp`'s (recently fixed!) silent failure mode.
Joseph Myers (Aug 18 2023 at 22:30):
Maarten Derickx said:
Especially since code in PRs also counts as code outside mathlib, so if more people start contributing, one will also notice that more PRs go stale and need updating before they can be merged.
I think good refactoring tools will help a lot here. (Google apparently has such tools to help maintaining their enormous monorepo.) Say mathlib is 100 times its present size, developing at 100 times its present speed, and someone changes the definition of a group. (a) It would be very helpful if 99% of the consequent changes in mathlib are done by an automated refactoring tool. (b) The instructions given to that tool should be preserved, and associated with the commit to mathlib master that does the refactor, rather than lost as a transient part of developing that commit. (c) There should then be automation that makes it easy, when updating a PR to a new mathlib version, to apply (to the new/changed code in the PR only) all the automated refactoring instructions for commits newly merged into the PR. (d) Likewise, there should be automation that makes it easy to apply such automated refactoring changes to another project depending on mathlib, when updating its mathlib version.
(That's just one guess as to how such scalability issues might be addressed - but it seems plausible, given various reports on how Google handles a huge monorepo.)
Scott Morrison (Aug 19 2023 at 01:29):
I really hope that we can move towards golfing proofs much less.
Wouter Smeenk (Aug 19 2023 at 05:09):
I once found this tool for Java: https://spoon.gforge.inria.fr/ . Reading the discussion here made me think about this tool. It might be useful to have a similar approach here. I think it would be very powerful to reason about Lean code in Lean. This would allow you to do complicated rewrites of the entire codebase.
Joseph Myers (Aug 19 2023 at 10:23):
A lot of the time, golfing is a good thing. When PR review turns a 50-line proof into a 5-line proof, with three new lemmas with two-line proofs being split out from the original proof, the result is probably faster to build and more maintainable and more useful to subsequent contributors and helps the original contributor learn about how to improve their future proofs.
Ruben Van de Velde (Aug 19 2023 at 10:52):
I think we're sometimes confusing ourselves by using "golfing" to mean both "reduce to the absolute least number of characters, ignoring readability and maintainability" (which is arguably the original meaning in this context) and what you said, which is more about splitting out reusable parts of a proof and reusing existing results in the library - I think we approve of the latter but not necessarily of the former
Scott Morrison (Aug 19 2023 at 13:06):
Yes, breaking out lemmas has nothing to do with golfing.
Scott Morrison (Aug 19 2023 at 13:08):
I'm more interested in the "let's turn every `apply x; rw [y]; exact z` into a `x (y ▸ z)`". The former gives clues about how the author knew how to construct the proof that have vanished in the latter.
Trebor Huang (Aug 19 2023 at 13:17):
Interestingly, that seems to be the routine for Agda users. In effect, the encouraged workflow of Agda lets the user enter a series of keychords (enter `x`, hit C-c C-r, enter `subst y`, hit C-c C-r, enter `z` and C-c C-g), and the resulting thing is `x (subst y z)`, with the construction process vanishing.
Arthur Paulino (Aug 19 2023 at 13:42):
I have this gut feeling that automated proofs (even `simp`s) should be somehow replaced by their expanded form. If not in the code directly, then somewhere else (a mechanic that does not exist in Lean, at least not yet). This would make compilation faster (because searching would no longer be needed) and also more robust (because the proofs become static code).
For example, if you have `theorem foo : goo := by simp`, then somewhere else, like an automated cache file, you'd have `foo : goo := by simp only [zoo]`. Then Lean would look for the cached proof first, and if something goes wrong with it, retry with the original code (just `simp`).
Eric Wieser (Aug 19 2023 at 13:48):
Scott Morrison said:
I'm more interested in the "let's turn every `apply x; rw [y]; exact z` into a `x (y ▸ z)`". The former gives clues about how the author knew how to construct the proof that have vanished in the latter.
I don't really agree with this (at least for this contrived example of only three lemmas); there's pretty much only one way to end up at `x (y ▸ z)`, and it's exactly that sequence of tactics. Furthermore, I think you can see all the same goal states by putting the cursor in the appropriate places in the term.
Eric Wieser (Aug 19 2023 at 13:49):
Admittedly `rw` gives you better error messages than `▸` when something breaks.
Eric Wieser (Aug 19 2023 at 13:56):
But the places where you'd write `x (y ▸ z)` in Lean are presumably the places where you'd write "it follows from `x`, `y`, and `z`" on paper; the details are boring anyway.
Niels Voss (Aug 19 2023 at 18:33):
I have this gut feeling that automated proofs (even `simp`s) should be somehow replaced by their expanded form. If not in the code directly, then somewhere else (a mechanic that does not exist in Lean, at least not yet). This would make compilation faster (because searching would no longer be needed) and also more robust (because the proofs become static code).
This could be very useful, but I think it would only make sense to cache lists of lemmas used by "searching tactics" like simp. If we reduce the proof to its term mode and cache that, we'd probably insist on making sure that the term is up to date, and we'd have to recompile it all the time. Then this feature would basically just be olean files. (Actually I'm not sure oleans store the proof but it would probably be just as expensive as producing them)
I'm also not sure how we would prevent proofs from going out of sync with the cache. If the cache breaks and the original works, it isn't that big of a deal, but if the original breaks and the cache works, we wouldn't find out until much later. That might not be that big of a deal, but we would still have a broken proof in mathlib. Checking that the original is still in sync with the cached one has to be at least as expensive as compiling it.
Johan Commelin (Aug 19 2023 at 19:04):
But this expensive compilation has to be done only once (per mathlib commit), by some powerful CI machine. The rest of the world could just use the cache and benefit from it.
Johan Commelin (Aug 19 2023 at 19:04):
To some extent, these ideas can already be used right now, using the `says` tactic combinator in mathlib.
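A small sketch of how `says` is used (the statement and recorded lemma here are illustrative, and `import Mathlib.Tactic` is assumed to bring the combinator in): the automation call sits to the left of `says`, its recorded expansion to the right; by default only the recorded call is run, and CI can be configured to re-run the left-hand side and check that its suggestion still matches.

```lean
import Mathlib.Tactic

example (n : Nat) : n + 0 = n := by
  -- `simp?` produced the `simp only [...]` call once; afterwards only the
  -- recorded call is executed, so the proof stays fast and static.
  simp? says simp only [Nat.add_zero]
```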
Mac Malone (Aug 19 2023 at 21:49):
Mario Carneiro said:
it is not suitable for doing anything "archival quality" because in a year you will have difficulty getting it to compile.
This criticism feels misplaced. With git, `lean-toolchain`, and `lake-manifest.json`, all requirements should be pinned indefinitely. So it should be a simple matter of `lake build` to get something (even very old) to compile in Lean 4. And, if mathlib keeps its cache around indefinitely, a `lake exe cache get; lake build` should also work.
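For concreteness, a hedged sketch of such a pinned setup (the package and library names and the revision string are placeholders; the committed `lean-toolchain` and `lake-manifest.json` do the rest of the pinning):

```lean
-- lakefile.lean (illustrative)
import Lake
open Lake DSL

package «archive_demo»

-- pin mathlib to a specific revision; the string below is a placeholder
require mathlib from git
  "https://github.com/leanprover-community/mathlib4" @ "<pinned-commit-or-tag>"

@[default_target]
lean_lib «ArchiveDemo»
```

With that in place, `lake exe cache get` followed by `lake build` should reproduce the build later, provided the cache for that revision is still available.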
Mac Malone (Aug 19 2023 at 21:51):
This even applies to pre-Lake Lean code. My old leanpkg project Papyrus still builds two years later (on all platforms where LLVM 12 is available, which is not a limitation of Lean).
Mario Carneiro (Aug 19 2023 at 22:13):
This is still a far cry from other projects that can compile code 10 or 20 years later. 2 years is a long time in lean years
Mario Carneiro (Aug 19 2023 at 22:14):
if you want to archive an old lean project you should probably not just pin a version of lean but also vendor it
Mario Carneiro (Aug 19 2023 at 22:18):
and the fact that Lean requires LLVM and LLVM has a short shelf life is absolutely a design decision that lean made which is anti-archival.
Scott Morrison (Aug 20 2023 at 04:30):
I don't want to insist too much on the `x (y ▸ z)` example, which obviously is at the very short end because I was too lazy to come up with a good example.
Nevertheless, I think there are two very different ways to produce that term!
apply x
rw [y]
exact z
and
have h := z
rw [y] at h
have := x h
assumption
Both schemes are reasonable (backwards reasoning vs forwards reasoning), but when golfed they turn into indistinguishable terms that obfuscate whether it was obvious to the human that one needed to work backwards, starting by using `x`, or that one needed to work forwards, starting by using `z`.
(i.e. it might not be obvious that `z` is relevant until you start thinking about `x`, or vice versa, and it is very helpful to reflect this!)
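For a concrete (invented) instance of the two routes above: both tactic proofs below compile, read in opposite directions, and can each be golfed to the same term `x b (y ▸ z)`, at which point the direction of reasoning is no longer visible.

```lean
-- backwards reasoning: start from the goal and apply `x`
example (P Q : Nat → Prop) (a b : Nat)
    (x : ∀ n, P n → Q n) (y : a = b) (z : P a) : Q b := by
  apply x
  rw [← y]
  exact z

-- forwards reasoning: start from the hypothesis `z` and push it forward
example (P Q : Nat → Prop) (a b : Nat)
    (x : ∀ n, P n → Q n) (y : a = b) (z : P a) : Q b := by
  have h := z
  rw [y] at h
  have := x b h
  assumption

-- the golfed term, which hides which of the two routes was taken
example (P Q : Nat → Prop) (a b : Nat)
    (x : ∀ n, P n → Q n) (y : a = b) (z : P a) : Q b :=
  x b (y ▸ z)
```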
Mac Malone (Aug 20 2023 at 05:12):
Mario Carneiro said:
and the fact that Lean requires LLVM and LLVM has a short shelf life is absolutely a design decision that lean made which is anti-archival.
Lean's LLVM is built from source and thus is long-lasting. My example was about bindings to an external LLVM, which is where that shelf-life problem came in. That is, normal Lean is not tied to LLVM's shelf life. Sorry for any confusion I caused with that mention.
Mac Malone (Aug 20 2023 at 05:28):
Mario Carneiro said:
This is still a far cry from other projects that can compile code 10 or 20 years later.
Getting anything to last 20 years in CS requires substantial independent resources to vendor and maintain old technology. 20 years ago virtually all user applications were 32-bit only, which modern OSes do not even support (e.g., Windows 11 and Mac post-Catalina). Even 10 years is a big ask. Much technology that old has also long since passed end of life (and the threshold for any long-term maintenance). Even code in well-maintained, largely platform-independent languages would likely not last 20 years (e.g. the Python 2 to Python 3 breakage). As such, I am curious as to your point of reference here.
Patrick Massot (Aug 20 2023 at 05:29):
TeX?
Mario Carneiro (Aug 20 2023 at 05:30):
The one on my mind is metamath, which, by virtue of being a spec that has been left largely unchanged and has many independent implementations, has managed to weather a number of technological waves without any required changes to the archive itself - you can use 20 year old tech to check today's set.mm and vice versa
Mario Carneiro (Aug 20 2023 at 05:32):
here archival is decidedly not achieved by simply pinning old everything
Mario Carneiro (Aug 20 2023 at 05:32):
but rather by decoupling the archival language from the tech stack
Mario Carneiro (Aug 20 2023 at 05:32):
i.e. having a standard
Mario Carneiro (Aug 20 2023 at 05:35):
plus, if we're comparing with paper mathematics we have to compete with shelf lives on the order of 100 years
Johan Commelin (Aug 20 2023 at 05:36):
But note that @Martin Dvořák put some valid qualifications on the archival quality of paper maths. See https://leanprover.zulipchat.com/#narrow/stream/113488-general/topic/scalability.20and.20maintainability/near/385911381
Mario Carneiro (Aug 20 2023 at 05:37):
I think that is quite overblown, one can routinely and reasonably read papers which are 50 years old with only a modest amount of difficulty
Mac Malone (Aug 20 2023 at 05:38):
Patrick Massot said:
TeX?
At least in my experience with TeX, there are a number of old packages that old code is fond of using that do not work properly on more modern systems, so code there still seems to have an expiry date. Furthermore, TeX package management is lacking so it is easy to sometimes get the wrong copy of a package and then everything breaks hard.
Johan Commelin (Aug 20 2023 at 05:38):
@Mario Carneiro But there are also a whole bunch of examples that you can no longer read. And it's exactly those that are problematic.
Johan Commelin (Aug 20 2023 at 05:39):
Unless a newer variant was written by someone who went through the trouble of updating the language and notation.
Mario Carneiro (Aug 20 2023 at 05:39):
sure, you might need to "port" old maths just like with software
Johan Commelin (Aug 20 2023 at 05:40):
But it might be easier to port old Lean than to port old paper maths
Mac Malone (Aug 20 2023 at 05:40):
Mario Carneiro said:
plus, if we're comparing with paper mathematics we have to compete with shelf lives on the order of 100 years
I would think that is more like being able to read code versus being able to compile it.
Mario Carneiro (Aug 20 2023 at 05:40):
but the stuff Mac is talking about (where you have slight incompatibilities and need to work a bit to get things working) happens around the 2 year mark for lean, the 5 year mark for general compiler software, 5-20 years for tex development and 20-200 years for paper mathematics
Mac Malone (Aug 20 2023 at 05:43):
@Mario Carneiro My example had no incompatibilities? The Lean code compiled with no issues. The problem was that the goal of the project was to bind to LLVM, which had since passed end-of-life on MSYS2. So the problem was with my choice of project and Windows MSYS2's shelf life for LLVM. (MSYS2 does not actually officially support anything but the most recent version of a package, and their unofficial archive only holds a few old versions.)
Mario Carneiro (Aug 20 2023 at 05:43):
I'm talking specifically about lean for mathematics
Mario Carneiro (Aug 20 2023 at 05:44):
it's a much more complicated story when using lean for other purposes because then the usefulness of the tool depends on the state of the tech world
Mario Carneiro (Aug 20 2023 at 05:45):
but my original claim that started this was "mathematics should stay proved"
Mac Malone (Aug 20 2023 at 05:46):
@Mario Carneiro Yes. That was sort of the point I was trying to make. :sweat_smile: As long as the goal is just to run old Lean, afaik, all existing Lean 3 / Lean 4 projects can still be compiled today without changes.
Mario Carneiro (Aug 20 2023 at 05:46):
the forward compatibility story is pretty bad though
Mac Malone (Aug 20 2023 at 05:47):
(Less sure about Lean 3, because I did not have much experience with package management in it, but at least getting the toolchain still works via elan).
Mario Carneiro (Aug 20 2023 at 05:47):
it's good enough to identify a version
Mario Carneiro (Aug 20 2023 at 05:48):
I don't have any sense that we are converging on a language for lean 4 that will stabilize
Mario Carneiro (Aug 20 2023 at 05:48):
I expect changes to just keep coming for the foreseeable future
Mac Malone (Aug 20 2023 at 05:49):
@Mario Carneiro For "mathematics should stay proved" isn't the goal just to be able to verify that the proof was correct (i.e., in Lean, to be able to compile and run the proof through a verifier)? Why would forward compatibility matter in that?
Mario Carneiro (Aug 20 2023 at 05:49):
because you want to use that proof from a masters thesis 4 years ago in your project which uses a lemma from last month's mathlib
Mario Carneiro (Aug 20 2023 at 05:50):
I don't want every nightly to be a new incompatible dialect of a language
Mac Malone (Aug 20 2023 at 05:51):
@Mario Carneiro Sure, but at least to my understanding, that would be a separate goal from "mathematics should stay proved".
Mario Carneiro (Aug 20 2023 at 05:51):
is it?
Mac Malone (Aug 20 2023 at 05:51):
Yes, at least in how I would understand the concept.
Mario Carneiro (Aug 20 2023 at 05:52):
it should still be true and usable, just like old theorems are still usable today
Mario Carneiro (Aug 20 2023 at 05:52):
we aren't reinventing the logic every year and throwing away the library of accumulated mathematics
Mac Malone (Aug 20 2023 at 05:53):
@Mario Carneiro If that is the goal, wouldn't we need interoperability with every other proof system as well, so that a proof in one system is usable for all time as well?
Mario Carneiro (Aug 20 2023 at 05:53):
of course I want that
Mac Malone (Aug 20 2023 at 05:54):
In fact, one could argue the need for formalization itself indicates that paper proofs were insufficient for this, since theorem provers cannot directly import a proof from a text document.
Mac Malone (Aug 20 2023 at 05:55):
Similarly, translating a proof from one natural language to another, or from print to digital, also violates this goal.
Mario Carneiro (Aug 20 2023 at 05:56):
I don't disagree with that, formalizations are certainly value added on the original. But to the extent that people prefer to read the paper proof than a formalization when re-formalizing in another system, the formal language is also lacking in exactly the same way
Mario Carneiro (Aug 20 2023 at 05:57):
the formalization "should" be automatically facilitating its translation into other formal (or informal) languages, that's supposed to be the value add of digitizing the mathematics
Mario Carneiro (Aug 20 2023 at 05:59):
if the artifact is intimately tied to details of the system and is not exportable anywhere else because of this, then it is anti-archival, because one day the system will be out of fashion and people will have difficulty upgrading the proof
Mac Malone (Aug 20 2023 at 06:00):
Mario Carneiro said:
But to the extent that people prefer to read the paper proof than a formalization when re-formalizing in another system, the formal language is also lacking in exactly the same way
This is hard to measure objectively, though. Mathematicians initially learn proofs from papers and natural language, so there is an inherent bias there that affects this result. It is a bias that could very well be inverted if the opposite occurred (in fact, the differing attitude of CS people towards math may suggest this is likely the case).
Mario Carneiro (Aug 20 2023 at 06:01):
I'm willing to condition that statement on people who are very comfortable with the formal language
Mac Malone (Aug 20 2023 at 06:02):
Mario Carneiro said:
the formalization "should" be automatically facilitating its translation into other formal (or informal) languages, that's supposed to be the value add of digitizing the mathematics
I personally agree that this is a great feature. However, my understanding from the formal verification community is that the goal is generally certainty in correctness, not reproducibility itself (except to the extent it assists the former).
Mario Carneiro (Aug 20 2023 at 06:02):
I've done a fair amount of that kind of work myself, and I will usually split 50/50 between a formalization and the informal version when trying to understand the proof and recreate it
Mac Malone (Aug 20 2023 at 06:06):
Which, I think, accurately measures the degree to which you are a CS person and a math person (i.e., 50/50). :laughing:
Mario Carneiro (Aug 20 2023 at 06:07):
It's more like, it reflects the degree to which I am not a computer and cannot read text which requires a computer to process, especially if I can't get it to compile and have to compile it in my head (because it's too old and something about the setup is broken now)
Mac Malone (Aug 20 2023 at 06:08):
Well I guess I am a computer then. :laughing: I always much prefer reading code (and formal proofs) than paper descriptions.
Mario Carneiro (Aug 20 2023 at 06:08):
I think you might not have tried to read some lean/coq/hol light proofs then
Mario Carneiro (Aug 20 2023 at 06:09):
it's not about preference, the context required to understand the words just isn't there
Mario Carneiro (Aug 20 2023 at 06:09):
reading mathlib without lean running is a real challenge
Mario Carneiro (Aug 20 2023 at 06:12):
I personally agree that this is a great feature. However, my understanding from the formal verification community is that the goal is generally certainty in correctness, not reproducibility itself (except to the extent it assists the former).
Goals vary. The point I am making is that archival is often not a goal, and it is easy to lose sight of it, but in the long run people will wish we paid more attention to it. In the CS world the "long run" isn't even that long, and lean is one of the shortest shelf life languages I have ever worked in
Mario Carneiro (Aug 20 2023 at 06:14):
In fact, archival quite often pulls in the opposite direction to "progress", which is why it is no surprise it is not a priority for a language like lean. But we are giving up some important properties in doing so, including ones that affect the growth of the language into other areas where this is a user requirement
Mac Malone (Aug 20 2023 at 06:16):
@Mario Carneiro On the proof point, I actually kind of agree. I do find Lean/Coq/HOL proofs hard to read, because they make heavy use of automation tactics where I have a hard time figuring out what they actually did. But, in my opinion, that is because they are not really formal proofs, but programs to generate a formal proof. I remember Patrick demonstrating a really cool HTML proof explorer that took a Lean theorem and converted it into an HTML page containing a formal natural-language proof that you could arbitrarily drill down into. That seemed to me to be essentially the perfect merger of all three styles.
Mario Carneiro (Aug 20 2023 at 06:17):
As much as I like that display, it is not really archival either though. I'm not sure you could reconstruct the formal proof from it
Mac Malone (Aug 20 2023 at 06:20):
Mario Carneiro said:
lean is one of the shortest shelf life languages I have ever worked in
[...]
In fact, archival quite often pulls in the opposite direction to "progress",
I agree, and this is why I consider it to be one of Lean's great features. Most languages take forever to update, which means a lot of really cool ideas are left on the cutting-room floor and a lot of time is wasted working around deficiencies of the language. With Lean, a lot of this is solved by frequent language updates, and this, combined with the ability to easily pin a Lean toolchain, means that downstream code does not necessarily need to keep up the pace (mitigating most of the cons of such an approach, in my view).
Mario Carneiro (Aug 20 2023 at 06:20):
But, in my opinion, that is because they are not really formal proofs, but a program to generate a formal proof
I agree with this 100%, and this is one of the key reasons metamath proofs have such a long shelf life compared to the programs that generated the proofs. Speccing a proof generation language stably is way harder than speccing a proof language
Mario Carneiro (Aug 20 2023 at 06:22):
Lean is great for people heavily involved in the language process itself like you and me. For people who are only watching on the fringes it's a tornado of activity and not fun to bump against if you want to make and maintain a project
Mac Malone (Aug 20 2023 at 06:26):
Mario Carneiro said:
Speccing a proof generation language stably is way harder than speccing a proof language
I agree, which is why I think they should generally be separate things. In Lean 3, the "proof language" was the output format. Unfortunately, Lean adds a number of new features to the proof language (e.g., mutual definitions) that make the old format insufficient. Designing a new proof language that is interoperable with Lean 4 and external checkers (and maybe one that can be shared also by other proof systems outside of Lean) seems like a good idea.
Mac Malone (Aug 20 2023 at 06:32):
Mario Carneiro said:
For people who are only watching on the fringes it's a tornado of activity and not fun to bump against if you want to make and maintain a project
Why, exactly? When writing projects as a user, my approach has been to pin some version at the start of the project and only update when I wanted some new feature. In that way, it works very much like any other language. The same more or less holds true for dependencies: get whatever version is available at the start and only update when you want new features, which is pretty much the same approach I had in other languages. Even in languages with semantic versioning, a lot of major projects don't follow it properly (i.e., they use zero versioning), so updating everything can still be a hassle.
Mario Carneiro (Aug 20 2023 at 06:33):
most libraries and projects I do in other languages (and lean too) I try to keep up to date
Mario Carneiro (Aug 20 2023 at 06:33):
because I want them to be used by others
Mario Carneiro (Aug 20 2023 at 06:34):
I might bump them only infrequently or on request, but I don't want to leave a project that is known out of date unless I am really abandoning it
Mac Malone (Aug 20 2023 at 06:35):
@Mario Carneiro I try to do so as well for much the same reason, but I have always found that to be potentially challenging in every language I've used. Furthermore, when doing research on patching security holes in old software, I/we found many of the most popular software repositories rarely had maintenance releases, so this seems to be a general problem.
Mario Carneiro (Aug 20 2023 at 06:36):
the difficulty depends heavily on the language's backward compatibility stance. For C projects you basically never have to revisit them, for rust you might want to fix some warnings, for lean it's broken again every month
Mario Carneiro (Aug 20 2023 at 06:37):
this is basically by design in all three cases
Mac Malone (Aug 20 2023 at 06:38):
@Mario Carneiro For JavaScript (i.e., web dev), it's often also broken every month, due to small incompatibilities in the ginormous number of packages a web project generally uses.
Mario Carneiro (Aug 20 2023 at 06:38):
if you write vanilla then the breakage is essentially zero
Mario Carneiro (Aug 20 2023 at 06:39):
but sure, if you are chasing the trends then god help you
Mac Malone (Aug 20 2023 at 06:39):
@Mario Carneiro lol, vanilla JavaScript is not a thing at scale.
Mac Malone (Aug 20 2023 at 06:40):
I know, I've tried and it just made everything harder.
Mac Malone (Aug 20 2023 at 06:40):
(and eventually I had to switch to a framework anyway.)
Mario Carneiro (Aug 20 2023 at 06:40):
I believe you. But you know what the archival choice is? Vanilla.
Mac Malone (Aug 20 2023 at 06:41):
(which was fun, because then the code became a mix of framework code and vanilla, since we did not have the time to rewrite everything, which made it all even more unwieldy to work with.)
Mario Carneiro (Aug 20 2023 at 06:42):
well, actually it's not so bad with the frameworks since at least in the web dev world you are vendoring everything anyway
Mario Carneiro (Aug 20 2023 at 06:42):
but your build tools might rot
Mac Malone (Aug 20 2023 at 06:42):
what do you mean by "vendoring" in this context?
Mac Malone (Aug 20 2023 at 06:42):
That term is heavily overloaded in web dev
Mario Carneiro (Aug 20 2023 at 06:42):
e.g. shipping a specific version of jquery along with your jquery app
Mario Carneiro (Aug 20 2023 at 06:43):
you have to do this in every javascript app these days, you can't just assume the target has react, it's a security issue
Mario Carneiro (Aug 20 2023 at 06:43):
so you bundle react along with your program
Mario Carneiro (Aug 20 2023 at 06:44):
which means that 20 years later it will still work as long as the browser still understands the JS
Mac Malone (Aug 20 2023 at 06:44):
In that case, it is not much different from Lean. You download packages, run a build, and the final product is a bundle of everything. The main difference with Lean, of course, is that the bundle is currently a binary blob (which is platform-dependent in the executable case).
Mario Carneiro (Aug 20 2023 at 06:45):
we don't typically actually ship lean executables with our projects
Mac Malone (Aug 20 2023 at 06:45):
Mario Carneiro said:
which means that 20 years later it will still work as long as the browser still understands the JS
Note that JS has switched much closer to a Lean model in its revisions.
Mario Carneiro (Aug 20 2023 at 06:46):
not sure what you mean, JS has backward compatibility AFAIK
Mac Malone (Aug 20 2023 at 06:46):
i.e. it is quite easy for old JS to break on newer browser versions.
Mac Malone (Aug 20 2023 at 06:46):
Absolutely not, browsers remove/disable old features of JS all the time.
Mac Malone (Aug 20 2023 at 06:46):
(often for security and/or privacy reasons)
Mario Carneiro (Aug 20 2023 at 06:47):
that is still covered as backward compatibility though
Mac Malone (Aug 20 2023 at 06:47):
what do you mean by that?
Mario Carneiro (Aug 20 2023 at 06:48):
the features aren't removed such that errors are generated, they are neutered to avoid the security issues but still exist
Mac Malone (Aug 20 2023 at 06:48):
security patches are famous for their tendency to be backwards incompatible
Mac Malone (Aug 20 2023 at 06:48):
@Mario Carneiro no, many features are flat removed and break tons of sites.
Mac Malone (Aug 20 2023 at 06:49):
Think third-party cookies, Adobe Flash, Java Widgets, IFrame changes, Content Security Policies, etc.
Mario Carneiro (Aug 20 2023 at 06:51):
I see. In that case you probably shouldn't be using those kind of features if you want things to work long term
Mario Carneiro (Aug 20 2023 at 06:51):
web 1.0 stuff still works
Mac Malone (Aug 20 2023 at 06:51):
Also, pop-ups and event handlers. It is also important to note that while many changes are ostensibly for security, a lot of legitimate use cases depended on such features to provide their services.
Mario Carneiro (Aug 20 2023 at 06:51):
the boring stuff always lasts the longest
Mac Malone (Aug 20 2023 at 06:53):
@Mario Carneiro IFrames, Flash, and Widgets were abundant in Web 1.0?
Mario Carneiro (Aug 20 2023 at 06:53):
I mean plain HTML
Mario Carneiro (Aug 20 2023 at 06:54):
but even granting your premise, I'm not sure what it is intended to show. All languages struggle with balancing backward compatibility and language evolution
Mario Carneiro (Aug 20 2023 at 06:54):
and ignoring it completely doesn't mean that it isn't still important for a specific set of use cases
Mac Malone (Aug 20 2023 at 06:56):
My point was about forward compatibility. It is a very rare feature that almost never lasts 10-20 years without explicit dedication. I agree it is important, but it generally requires an additional effort separate from the main progression track.
Mario Carneiro (Aug 20 2023 at 06:58):
almost never lasts 10-20 years without explicit dedication
of course! that's also my point, you have to try to make it happen, it won't just happen on its own
Mac Malone (Aug 20 2023 at 07:00):
Yes, and it's generally accomplished through a project whose primary purpose is to forward-port a specific old technology to the new era, rather than being an official part of the technology itself (e.g., emulators, remasters, reworks, etc.)
Mario Carneiro (Aug 20 2023 at 07:02):
I can't think of any case where that actually worked at scale, compared to just incorporating backward compatibility in the language itself
Mac Malone (Aug 20 2023 at 07:03):
@Mario Carneiro Some examples: video game console emulators and the Newgrounds flash player.
Mario Carneiro (Aug 20 2023 at 07:04):
and it's certainly not achieving the goal of keeping theorems interoperable
Mac Malone (Aug 20 2023 at 07:04):
In Lean, though, I do hope an archival format will eventually be developed and supported. I agree that it would be a great boon.
Mario Carneiro (Aug 20 2023 at 07:05):
an emulator is just like a low level port of the source system
Mac Malone (Aug 20 2023 at 07:05):
Yes?
Mario Carneiro (Aug 20 2023 at 07:05):
i.e. like binport running on lean 3 code
Mario Carneiro (Aug 20 2023 at 07:06):
there are a lot of reasons the result is difficult to use
Mac Malone (Aug 20 2023 at 07:06):
Yep, mathport was essentially one of these kinds of projects. Just with more official support than most get.
Mario Carneiro (Aug 20 2023 at 07:07):
binport, not synport. Binport would port the exprs, not the syntax
Mac Malone (Aug 20 2023 at 07:07):
I know, but I think both qualify as the kind of project I was describing.
Mario Carneiro (Aug 20 2023 at 07:08):
so you get oleans only, and the notations and attributes wouldn't really work
Mario Carneiro (Aug 20 2023 at 07:08):
Synport would be more like a source to source translator or auto-fix migration assistant
Mac Malone (Aug 20 2023 at 07:10):
As far as I can tell, there is always going to be a trade-off between stability and innovation. If you choose the stable route, it will be hard to get new features. If you choose innovation, it will be hard to keep things stable. An archival format focuses on stability, whereas actively developed technology tends to focus on innovation.
Mario Carneiro (Aug 20 2023 at 07:10):
You should read the rust edition RFC, aka "stability without stagnation"
Mac Malone (Aug 20 2023 at 07:11):
Yes, but that is still a trade-off between man-hours spent on maintaining stability versus man-hours spent on revolutionary improvements.
Mac Malone (Aug 20 2023 at 07:13):
Which is why there is always a new popular language every so many years. The old one gets bogged down maintaining stability and the new one gets to exemplify the best modern technology can offer (in some way).
Mario Carneiro (Aug 20 2023 at 07:14):
yep, it's great when you don't have any pesky users to worry about and can focus on making new things
Mac Malone (Aug 20 2023 at 07:15):
My hope is that Lean can eventually break this cycle through its powerful metaprogramming, allowing files to be fixed to a specific stable spec that can interoperate with code running newer versions. However, determining when to start spending man-hours maintaining support for an old code base is a hard decision.
Mario Carneiro (Aug 20 2023 at 07:15):
lol no, lean isn't even trying to solve this problem
Mac Malone (Aug 20 2023 at 07:15):
At the moment, no. Lean does make regular breaking changes.
Mac Malone (Aug 20 2023 at 07:16):
After all, Lean 4 still has not had a proper official release. :laughing:
Mac Malone (Aug 20 2023 at 07:22):
Once Lean has a stable release schedule, I imagine more guarantees will be made.
Mac Malone (Aug 20 2023 at 07:23):
The first goal will likely be to avoid regressions, but hopefully stuff like Rust editions will eventually be possible.
Bulhwi Cha (Aug 20 2023 at 10:51):
Mac Malone said:
Once Lean has a stable release schedule, I imagine more guarantees will be made.
But when will Lean have a stable release?
Mario Carneiro said:
lol no, lean isn't even trying to solve this problem
Is it even a good idea to use Lean to build a huge library of mathematics in the long run?
Mario Carneiro (Aug 20 2023 at 18:29):
I think mathlib can weather the changes pretty well, it itself is the cause of more churn than lean 4 core and has enough weight to avoid the worst issues with backward incompatibility (e.g. if a nightly breaks something requiring too much mathlib change, we put the bump on hold and reconsider the lean 4 feature upstream). But it means that it's difficult for single users to build libraries atop mathlib that don't rot quickly
Kayla Thomas (Aug 21 2023 at 04:47):
What is changing in Lean? Is it just the surrounding front end or is it the core logic too?
Mario Carneiro (Aug 21 2023 at 04:50):
the core logic of lean 4 has basically not changed at all since the beginning of lean 4 (it was changed somewhat from lean 3)
Mario Carneiro (Aug 21 2023 at 04:50):
but the front end changes routinely
Kayla Thomas (Aug 21 2023 at 04:54):
Is there a low level spec that the proof tactics get expanded to that remains unchanging?
Mario Carneiro (Aug 21 2023 at 04:55):
yes, that's the lean kernel
Kayla Thomas (Aug 21 2023 at 04:57):
Could that act as the archive?
Kayla Thomas (Aug 21 2023 at 05:13):
Given a program to translate it back to a standardized set of tactics, if that is theoretically possible.
Kayla Thomas (Aug 21 2023 at 05:18):
Sorry, I'm probably in over my head.
Scott Morrison (Aug 21 2023 at 05:34):
The kernel is very low level. There wouldn't be much hope of translating back to anything recognisable.
Kayla Thomas (Aug 21 2023 at 05:37):
Could the tactics expand to some intermediate recognizable standard on their way to being expanded to the kernel?
Mario Carneiro (Aug 21 2023 at 05:43):
in theory yes, in practice tactics don't really target any such intermediate level
Last updated: Dec 20 2023 at 11:08 UTC