Zulip Chat Archive

Stream: general

Topic: syntax


Reid Barton (Jun 04 2018 at 13:22):

Fun fact: notation applies even in namespace commands, so if you write notation `foo` := bar and then namespace foo, it means namespace bar

Simon Hudon (Jun 04 2018 at 13:24):

Cool! :)

Simon Hudon (Jun 04 2018 at 13:25):

I don't know if that's a good thing but I was properly surprised when I realized what you meant

Reid Barton (Jun 04 2018 at 13:26):

If you don't want this to happen (like I didn't) then you can write namespace «foo»

Simon Hudon (Jun 04 2018 at 13:27):

Curiously enough, it also works on binders: ∀ foo, f foo actually means ∀ bar, f bar, which is inconvenient when the notation is a shorthand for an expression, not an identifier

Simon Hudon (Jun 04 2018 at 13:27):

Thanks for the fix

Sebastian Ullrich (Jun 04 2018 at 13:31):

o.O

Simon Hudon (Jun 04 2018 at 13:32):

I take it that's unintended behavior?

Reid Barton (Jun 04 2018 at 13:34):

(I was actually using local notation, in case that matters)

Sebastian Ullrich (Jun 04 2018 at 13:37):

I get "identifier expected" in both cases

constants foo bar : Type
local notation `foo` := bar
#check ∀ foo, foo  -- error: identifier expected
namespace foo      -- error: identifier expected

Reid Barton (Jun 04 2018 at 13:38):

Oh, I might have jumped to conclusions.
What actually happened was that I was inside namespace foo and then I defined foo as local notation for something, and then end foo stopped working.

Reid Barton (Jun 04 2018 at 13:39):

with some error like "identifier expected"
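
A minimal Lean 3 sketch of that scenario, with `bar` standing in for "something":

constant bar : Type
namespace foo
local notation `foo` := bar
-- `end foo` now fails with "identifier expected";
-- the «foo» workaround mentioned above forces it to parse as an identifier:
end «foo»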

Simon Hudon (Jun 04 2018 at 13:42):

Yes, you're right. That's what happens for me too. Sorry about the confusion

Sebastian Ullrich (Jun 04 2018 at 13:44):

I see. This will probably be fixed in Lean 4 automatically since we will be using a parser combinator without a scanner, so we can skip notations where we only want to parse identifiers.

Simon Hudon (Jun 04 2018 at 13:45):

How is that going by the way?

Sebastian Ullrich (Jun 04 2018 at 13:51):

Let's... say that we're transitioning from the planning phase to the implementation phase (parser, compiler, and other refactorings)

Simon Hudon (Jun 04 2018 at 13:56):

Exciting :D I can't wait to see where that will end up!

Tim Daly (May 29 2019 at 22:54):

Is there an EBNF description of the Lean syntax?

Wojciech Nawrocki (May 29 2019 at 22:56):

Maybe this: https://leanprover.github.io/reference/lexical_structure.html ? I'm not sure if it's actually EBNF

Tim Daly (May 29 2019 at 22:59):

That doesn't describe, for example, the syntax of optional types, etc. It is more like a tokenizer syntax.

Wojciech Nawrocki (May 29 2019 at 23:03):

Oh, that's true

Tim Daly (May 29 2019 at 23:04):

I'd like to accept Lean syntax as part of a system I'm building, making it easier to export proofs to Lean

Wojciech Nawrocki (May 29 2019 at 23:08):

Perhaps the Pygments syntax description might help?

Tim Daly (May 29 2019 at 23:14):

Wow, the person who wrote that code is multilingual. Still, it is only a tokenizer. I was hoping that I didn't have to create the EBNF from Leonardo's source code.

Mario Carneiro (May 30 2019 at 07:44):

I think @Thales wrote a Lean grammar for Menhir, which is probably the closest approximation yet. Part of the problem is that Lean isn't actually context-free, so you have to make a bunch of approximations and it's not actually guaranteed to work (or even likely to work without a lot of hard-coding) on a real Lean file

Tim Daly (Jun 01 2019 at 06:55):

Well, I read the sources looking for EBNF. Unfortunately(?) Lean uses Pratt parsing. You can change syntax on the fly.

Mario Carneiro (Jun 01 2019 at 08:03):

right, that's the notation command
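
For illustration, a throwaway Lean 3 notation (`myop` and `⊗` are invented here); the grammar is extended as soon as the command is parsed:

def myop (a b : ℕ) : ℕ := a * b + 1
notation a ` ⊗ ` b := myop a b
#check 2 ⊗ 3  -- ℕ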

Mario Carneiro (Jun 01 2019 at 08:04):

You can still do decently if you know what syntaxes are defined, and parse up to the next command keyword

Tim Daly (Jun 01 2019 at 09:27):

This causes a portability problem. I'd like to have a parser / exporter that can communicate freely with Lean. But if the notation command is used, the associations can change dynamically, making previously accepted communications fail. The obvious workaround would be to export the led-nud tables, but I don't see anything in the source code that provides that. Another alternative would be to communicate using only "unsugared syntax". I will read more about that level of detail soon.
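
One way to see an unsugared form in Lean 3, as a sketch: turning off notation in the pretty printer shows the underlying head symbols.

set_option pp.notation false
#check 1 + 2  -- has_add.add 1 2 : nat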

Tim Daly (Jun 01 2019 at 09:47):

I am trying to combine proofs with code. This is a "30 year horizon" effort so it is important that the proofs are repeatable and stable (think LaTeX, Common Lisp, and PDFs). The ability to modify syntax is one problem. The ability to modify tactics is another. Proofs are difficult, time consuming, and require a lot of effort. The hope is that today's proof will be accepted by the system years from now with the same result.

Tim Daly (Jun 01 2019 at 09:57):

At the moment the only path seems to be to extract the Lean kernel and, for every proof, have the system dump a "kernel version" that used no syntax or tactics, just kernel-level I/O. That would enable proofs to be developed at the higher levels but stored and replayed as kernel code. The assumption is that the kernel is small and "30 year horizon" stable.

Andrew Ashworth (Jun 01 2019 at 10:02):

The API is likely to change drastically regardless when Lean v4 arrives later this year. There will be many breaking changes. At a minimum, mutually inductive types are going to move into the kernel for performance reasons, and there is no reason to believe other things won't move into the kernel as well if necessary for usability.

Tim Daly (Jun 01 2019 at 10:02):

My code uses theorems from the 1890s so long-term proof stability is a vital issue.

Andrew Ashworth (Jun 01 2019 at 10:07):

I can't think of a single proof language with that kind of stability, unfortunately.

Tim Daly (Jun 01 2019 at 10:19):

How can I build up chains of Definitions and Lemmas proving code correctness if the whole thing fails a week later? I don't know how to do computational mathematics on that basis. I have common lisp source code for my computer algebra system that still compiles and executes with the same results today that it had when it was written in 1980. I have latex documents from then. It seems perfectly reasonable to expect that a "proof" by machine code would continue to be checkable in 2049.

Andrew Ashworth (Jun 01 2019 at 10:30):

If you're willing to never upgrade or import results from the proof community at large, you could fix a version of <your favorite proof language>. Hardly appealing, I know.

Tim Daly (Jun 01 2019 at 10:37):

It might be useful to consider a "standards effort" in proof technology. There used to be many competing lisp dialects and they eventually agreed on Common Lisp. I used lisp 1.5, maclisp, lispvm, psl, and many others and was good at "porting code". Now the Common Lisp standard makes that pointless. Perhaps there needs to be a Common Proof effort that provides a stable basis for (at least the primitive syntax of) proofs that can be universally accepted. Computational mathematics IS mathematics so it seems that long-term "30 year horizon" stability ought to be a major goal.

Tim Daly (Jun 01 2019 at 10:44):

Competing on "features" is for game programmers, not mathematicians.

Patrick Massot (Jun 01 2019 at 10:46):

I think you don't quite understand where we stand now. Freezing stuff at such an early stage would probably be very counter-productive. See https://deducteam.github.io/ however (but keep in mind that this is also very much work in progress)

Scott Morrison (Jun 01 2019 at 10:54):

At the moment we are still before the beginning of history. Theorem provers are not yet useful for mathematics.

Scott Morrison (Jun 01 2019 at 10:54):

(That is not to say they are not worth investing significant time in.)

Scott Morrison (Jun 01 2019 at 10:55):

But any serious plans for backwards compatibility should wait (implementation at least, not discussion) until we have something.

Tim Daly (Jun 01 2019 at 11:08):

Systems that use CIC, for instance, ought to have a compatible "raw" language for accepting each other's proofs. At minimum, a system ought to accept its own "raw" logic no matter what else changes. The logic is stable. The issue isn't "backward compatibility" but "forward compatibility". I can still compile and run my C code from my PDP-11/03 (1976). It seems obvious that a proof system should always accept a "proven" proof forever.

Andrew Ashworth (Jun 01 2019 at 11:12):

In fact, the underlying logic isn't stable. For example, Coq only recently got universe polymorphism in version 8.5, and the underlying implementation is still subject to change.

Tim Daly (Jun 01 2019 at 11:15):

So Coq doesn't have a stable proven kernel of logic?

Andrew Ashworth (Jun 01 2019 at 11:18):

No. Extensions to CIC are allowed (as in Lean) as long as the developers think it's a good idea.

Tim Daly (Jun 01 2019 at 11:19):

Do they invalidate prior proofs? (Will Flyspeck become junk?)

Andrew Ashworth (Jun 01 2019 at 11:28):

I'm just looking at Flyspeck's project history and the most recent source code change was on Jan 25, 2019, despite the fact the project was finished five years ago.

Patrick Massot (Jun 01 2019 at 11:29):

According to Manuel Eberl, the Flyspeck project is compiled every day against the current version of the relevant proof assistant.

Patrick Massot (Jun 01 2019 at 11:29):

(automatically of course)

Tim Daly (Jun 01 2019 at 11:36):

It seems my understanding of "the trusted kernel" is bogus. I'll read the source code and try to get a better grounding. Thanks.

Reid Barton (Jun 01 2019 at 13:41):

At the moment the only path seems to be to extract the Lean kernel and, for every proof, have the system dump a "kernel version" that used no syntax or tactics, just kernel-level I/O. That would enable proofs to be developed at the higher levels but stored and replayed as kernel code. The assumption is that the kernel is small and "30 year horizon" stable.

lean --export is something like this
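
A sketch of the invocation, using Lean 3's export flag:

lean --export=foo.export foo.lean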

Reid Barton (Jun 01 2019 at 13:45):

There are external type checkers that can check the output of lean --export, for example https://github.com/dselsam/tc

Reid Barton (Jun 01 2019 at 13:47):

So there is at least a very small amount of portability, namely, to outside the Lean kernel itself

Mario Carneiro (Jun 01 2019 at 14:45):

@Tim Daly I am sad I missed this discussion. Lean is not the place to look for stability, but there are proof languages that are designed for long-term stability. Long future archiving and maintenance in perpetuity is among the selling points of Metamath. It has a standard, written down in a published book from the 1990s, that still remains valid (although additions have grown in the gaps, like HTML). There is as a result a whole ecosystem of independent verifiers, and in fact no single verifier "dominates the market", which is not something I can say about any other proof language I know. The axiom system is not completely stable but has changed only a few times in its 20-year history, and is based on an old mathematical standard, ZFC.

The downside to this kind of long term standardization is that it doesn't leave a lot of room for "syntax sugar" and building conveniences into the language, which is where Lean and Coq are coming from. I try to provide some assurance of "monotonicity" of proofs that are submitted to mathlib, but it's not easy. The axiom system is dependent on subtleties that make the proofs not portable to similar systems (such as Coq or other DTT systems), the syntax changes every few years, and we've literally thrown away the library and started afresh since the previous version, and while I don't intend to repeat that experience with lean 4 the language changes are just as traumatic. This is done in the quest for an easy to use language, where you can just sit down and write proofs and it's easy, and I respect that goal; but I think that this need not conflict with standardization.

Consider HTML and JavaScript. This is a standard from the early 90s that we still use today. It has changed, but slowly and with a clearly defined standards body behind it. It has not hampered our ability to build new and glitzy things on top, but the key is that no one actually writes HTML/JS directly anymore. Instead the glitzy front end is written in some framework or language that didn't exist a year ago, and is hooked to a compiler that produces standards-conformant HTML. I think this approach is very powerful, and I wish people would think more about having a separate back end with a well understood logic, and a core verifier that anyone can write.

Johan Commelin (Jun 01 2019 at 14:55):

In principle I really like this idea... but it seems very non-trivial to build this. Quite a bit harder than HTML, I would think

Johan Commelin (Jun 01 2019 at 14:56):

And already with the HTML + JS example you see that every new year brings a new framework, but it's hard to make them interact

Kevin Buzzard (Jun 01 2019 at 14:56):

I still write html ;-)

Johan Commelin (Jun 01 2019 at 14:57):

If proofs are written in some framework that still compiles in 50 (or 500) years, that is of course fabulous. But if those theorems can't be applied, because the framework they are written in is outdated, I don't see how this is any better than saying "Ooh, Flyspeck (or whatever) compiles with $ANCIENT version of $THEOREM_PROVER".

Mario Carneiro (Jun 01 2019 at 15:13):

The theorems can be applied, but they may not be easily modified. For a mathematical theorem that's actually not that bad, and not all that different from conventional mathematical practice

Mario Carneiro (Jun 01 2019 at 15:15):

The software analog is a library written in $OUTDATED that does something useful for you, so you link it to your project written in the new hotness

Johan Commelin (Jun 01 2019 at 15:16):

They can be applied "in theory". Just like wordpress and joomla can probably be made to work together "in theory". But you can't just load a decades old joomla plugin into wordpress.

Johan Commelin (Jun 01 2019 at 15:17):

Summary: I will have to see this work... until then, I remain somewhat skeptical. That "maths stack" is orders of magnitude more layered than any web-dev stack.

Mario Carneiro (Jun 01 2019 at 15:32):

I'm working on it. I actually hope to have a demo soonish

Johan Commelin (Jun 01 2019 at 16:38):

@Mario Carneiro Cool. I'm looking forward to it.

Tim Daly (Jun 01 2019 at 19:40):

Axiom has a "30 Year Horizon" project goal, looking to develop the techniques and technology for reliable, proven computational mathematics.

I'm trying to prove Axiom Sane (rational, judicious, sound, logical) by constructing axioms and specifications for functions (actually, only the GCD functions, since I'm not gonna live long enough to do much more :-) )

It is clearly important to construct proofs that can be checked at compile time, every time, to ensure that Axiom is still mathematically sound despite changes.

Apparently this kind of "30 Year Horizon" stability isn't a project goal for a lot of the systems. I find this surprising, since Gentzen's rules seem like a solid logic kernel you'd have to implement.

Robinson's resolution paper was in 1965, so proof technology is as old as computer algebra (SAINT/SIN). After 50+ years, it seems reasonable to think about standards.

Add all the features you want. Just give me a low-level, long-term language that still proof-checks 30 years from now.

Johan Commelin (Jun 02 2019 at 03:54):

Axiom has a "30 Year Horizon" project goal, looking to develop the techniques and technology for reliable, proven *computational* mathematics.

(emphasis mine) @Tim Daly The problem is that most mathematicians are in love with the 95% of mathematics that is not computational. Lean is a new system (not perfect, but getting close :wink:) that gives us powerful tools and explicitly doesn't give a thing if we use the axiom of choice or are as noncomputable as we want. That is a new combo that other systems didn't give us.
It's too early to claim victory. (In fact, as others pointed out, we need systems that are even better than Lean.) But it cannot be denied that in its rather short life span Lean has made a rather impressive impact on the mathematical community compared to what other theorem provers have accomplished. (I didn't check any numbers, but I guess that the ratio $\frac{\text{mathematicians}}{\text{computer scientists}}$ is higher in the Lean community than in the other theorem prover communities.)

I understand your wish for stability. It is also on my wish list. But my wish list is rather long (-;

Tim Daly (Jun 02 2019 at 08:58):

About "victory"...

There are well over 100 "computer algebra systems" (I made a CD for a conference containing them). There are only a few large, general purpose ones (Mathematica, Maple, Axiom, Maxima, Magma). All of them had one thing in common... they had a reliable source of money. Sage was growing rapidly but the money ran out. (Oh, and nobody gets tenure writing software).

Look to see how reliable, long term, and deep-pocketed the money source is. Winning systems need constant funding of about 2 million dollars a year that will continue for 15+ years. The 2 million dollars will support 6-8 "core researchers".

When the core researchers are gone the software dies. Nobody ever, EVER, bothers to document the code. How many people know about the reference counting macros used in the Lean data structures? It is clever but unless you study the code and understand reference counting garbage collection, it is quite opaque. Who can maintain that software? (I'm trying to reverse-engineer the syntax, which is considered a "trivial" task, just so my software can use Lean.)

On the mathematical side, what logic rules are actually implemented? Can a non-mathematician even read the papers that MIGHT show what SOME of the code does? Who can maintain that software?

Will Lean be "one of the large, general purpose systems" in 15 years? To misquote Zhou Enlai (when asked if the French Revolution was a good thing) ... "It is too soon to tell". But based on my experience, you just need to follow the money.

Tim Daly (Jun 02 2019 at 09:09):

Interesting paper: Asperti et al., "Crafting a Proof Assistant", Proc. TYPES 2006: Conference of the Types Project, 2006

Tim Daly (Jun 02 2019 at 09:56):

Also: Barthe and Elbers "Towards Lean Proof Checking"

"Our implementation of Oracle types is very flexible and allows rewriting to be perfomred either inside Lego or by Reduce, an efficient symbolic computation system. In our view, the main novelty of our approach is to combine a sound theoretical foundation with an efficient implementation. Besides, our work provides the first attempt to combine symbolic computation systems with theorem provers such as Coq and Lego, which are based on intensional type theories."

Patrick Massot (Jun 02 2019 at 11:20):

But it cannot be denied that in its rather short life span Lean has made a rather impressive impact on the mathematical community compared to what other theorem provers have accomplished. (I didn't check any numbers, but I guess that the ratio $\frac{\text{mathematicians}}{\text{computer scientists}}$ is higher in the Lean community than in the other theorem prover communities.)

Johan, I appreciate your enthusiasm, but let's not forget we are discussing statistics on a negligible subset of mathematicians. For the sake of discussion, let's take the simplistic definition that we count people having a PhD in mathematics. Then, thinking about it for a couple of minutes, I can't get more than 10 mathematicians using Lean. Maybe I forgot one or two (the list I have in mind is Bryan, Jan David, Johan, Kevin, Neil, Patrick, Sander, Scott, Sébastien, Tom, although I guess we could add Reid anyway). It seems there are at least 100 000 living mathematicians, so the ratio of Lean users among them is at most 1 out of 10 000. And taken all together, they have written 1 paper about formalizing maths in Lean (the cap set paper).

That's comparing Lean mathematicians with all mathematicians. Of course comparing to other proof assistants is more impressive. But we shouldn't forget there are, if I understand correctly, a number of mathematicians (using that simplistic definition) who do homotopy type theory, and it seems they almost all use Coq. There may very well be more than ten people in this category, say around Steve Awodey. There may also be people around Carlos Simpson, but I'm not sure he is still doing this. So I'm not sure at all we have more mathematicians than Coq. We probably have a better derivative, but still a tiny one. And we still miss having one landmark formalization. Computer scientists are not impressed at all by the perfectoid spaces project, for instance. We claim it's something completely new, but we have completely failed to convince anyone of this so far.

At the Edinburgh workshop, we really freaked out other proof assistant communities by bringing half a dozen mathematicians. The other gangs were not expecting this at all, and seemed thoroughly confused by this fact. But Jeremy was among people suggesting names to the main organizer. And we still don't understand why there were only 2 Coq users (Gonthier and Kerjean) at this workshop (other people were philosophers or Isabelle users or one person per "exotic" proof assistant like Naproche or PVS).

Tim Daly (Jun 02 2019 at 12:50):

I agree with your point. I'm quite happy with Lean. I'm spending time reading the source code so I'm investing time in it.

Tim Daly (Jun 02 2019 at 12:58):

There have been attempts to use Oracle systems to speed up proofs that involve computation. The problem is that the Oracle systems (usually a computer algebra system) are not proven, which undermines the premise that the proof is sound. However, if the GCD algorithms in Axiom were proven correct using Lean, then Lean could use Axiom's GCD computation as a proven Oracle. That changes the whole game.
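
As a toy Lean 3 illustration of the kind of GCD correctness statement involved, using the core nat.gcd lemmas:

example (m n : ℕ) : nat.gcd m n ∣ m ∧ nat.gcd m n ∣ n :=
⟨nat.gcd_dvd_left m n, nat.gcd_dvd_right m n⟩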

Tim Daly (Jun 02 2019 at 13:02):

And if the code for GCD was extracted from Lean it is even more interesting. See this paper:

Berger and Schwichtenberg, "The Greatest Common Divisor: A Case Study for Program Extraction from Classical Proofs", LNCS 1158, pp. 36-46, 1996

Tim Daly (Jun 02 2019 at 13:08):

Efforts invested in a "landmark formalization" are interesting but are essentially a one-off effort. But efforts to tightly cooperate with a proven computer algebra system vastly increases the range of possible uses, both for the proof community and the computer algebra community.
We really need these two towers of computational mathematics to cooperate.

Kevin Buzzard (Jun 02 2019 at 13:11):

I spoke to @William Stein (who wrote SAGE) about Lean last year and he was super-enthusiastic, even going to the extent of making CoCalc work with Lean. I've used Lean in a multi-user environment within CoCalc just because of William's enthusiasm. But my aims were far less lofty than yours -- I just wanted a controlled place where students could use Lean without the hassle of having to install it. I'm sure he'd be very open-minded about further integration.

Tim Daly (Jun 02 2019 at 14:46):

Axiom also works with Sage.

William Stein (Jun 02 2019 at 15:26):

Hi Tim -- I'm glad to hear that you're learning and using Lean!


Last updated: Dec 20 2023 at 11:08 UTC