Zulip Chat Archive

Stream: lean4

Topic: Partial evaluation of a file


Tomas Skrivan (Mar 30 2022 at 22:50):

I'm having a hard time working with a file with some heavy computations. I have a few questions regarding this:

  1. Is it possible to evaluate a file only up to the cursor position? I do not want to waste computation on recomputing all the heavy computations that I'm not currently working on. In Coq (Proof General mode in Emacs) you can 'confirm' line by line, which gives you a bit more control over when stuff gets evaluated.

  2. When I modify a proof, is it evaluated from scratch or from the last valid non-modified point?

I have an MWE for this:

class A (n a b : Nat)

instance (a b : Nat) : A 0 a b := ⟨⟩
instance (a b : Nat) [A n a b] : A (n+1) a b := ⟨⟩

theorem foo (a b n : Nat) [A 4000 a b] : a * n + b * n = (a + b) * n := sorry

set_option synthInstance.maxHeartbeats 500000
set_option synthInstance.maxSize 100000
set_option maxRecDepth 10000

example (n : Nat) : 1 * n + 1 * n + 1 * n + 1 * n = 4 * n :=
by
  rw[foo]
  conv =>
    lhs
    enter [1]
  rw[foo]
  rw[foo]
  done

When you modify enter [1] to enter [1,1] the goal should change from (1 + 1) * n + 1 * n to (1 + 1) * n, and I would expect this to happen immediately as it is just traversing an expression. However, it looks like each time you modify anything, rw[foo] gets re-evaluated, and that takes a long time :(

Are there some editor options on how a file is evaluated?

Leonardo de Moura (Mar 30 2022 at 22:56):

Is it possible to evaluate a file only up to the cursor position?

Simple workaround: put an #exit command at the place you want Lean to stop. It helps for files with a bunch of theorems/definitions. It will not help in the example above.
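
For example, a minimal sketch of how this looks (the theorem names here are just placeholders):

theorem easy : 1 + 1 = 2 := rfl

#exit

-- nothing below the #exit is elaborated while it is in place
theorem heavyStuff : True := by
  trivial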

When I modify a proof, is it evaluated from scratch or from the last valid non-modified point?

Lean starts from the last checkpoint (i.e., at the beginning of the current command).

Tomas Skrivan (Mar 30 2022 at 23:00):

Leonardo de Moura said:

Lean starts from the last checkpoint (i.e., at the beginning of the current command).

That is odd, because even if the code looks like this:

example (n : Nat) : 1 * n + 1 * n  = 2 * n :=
by
  conv =>
    enter [1]
  rw[foo]
  done

When changing enter [1] to enter [1,1], it takes quite some time for the goal to update from 1 * n + 1 * n to 1 * n. It looks like the goal view is updated only once the whole proof is traversed. (This might be a problem with Emacs, I didn't test it in VS Code.)

Leonardo de Moura (Mar 30 2022 at 23:04):

The checkpoint is at the beginning of the example command. Every modification on this command will force Lean to re-elaborate the command from the beginning. The rw [foo] step is quite expensive because Lean needs to synthesize [A 4000 a b].

Leonardo de Moura (Mar 30 2022 at 23:05):

I am using Emacs, and it seems to be working as expected. For every modification I make at example, I get the orange bar only at this command.

Tomas Skrivan (Mar 30 2022 at 23:08):

My use case is that I'm working with a quite complicated expression that I simplify with a quite expensive simp call. Then I need to navigate to its subexpression with conv to modify it to the desired final form. Figuring out the right enter [??] to navigate to the correct subexpression is difficult without interactivity.

Tomas Skrivan (Mar 30 2022 at 23:10):

Leonardo de Moura said:

I am using Emacs, and it seems to be working as expected. For every modification I make at example, I get the orange bar only at this command.

I'm seeing the same behavior. I just didn't know that checkpoints are only at commands. A long time ago, when I was playing with Coq, it had checkpoints after every tactic call, and I was hoping that Lean would have that too.

Mario Carneiro (Mar 30 2022 at 23:11):

It is difficult to integrate this with Lean's elaboration mechanism since tactics can appear in subterms and generally trade off elaboration with other tactic blocks or subterms

Mario Carneiro (Mar 30 2022 at 23:12):

it wouldn't even be syntactically correct to just stop in the middle of a tactic proof and then pick it up later

Tomas Skrivan (Mar 30 2022 at 23:13):

How do people write complicated proofs if every modification to it makes it re-evaluate?

Mario Carneiro (Mar 30 2022 at 23:16):

they use more lemmas

Mario Carneiro (Mar 30 2022 at 23:17):

or use sorry { ... } to stub out subproofs
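
In Lean 4 syntax, stubbing out one subgoal might look something like this (a toy goal, just to show the shape):

example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  constructor
  · exact hp
  · sorry  -- stubbed-out subproof; fill it in once the rest of the proof is stable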

Mario Carneiro (Mar 30 2022 at 23:18):

very large proofs are a bad idea for several reasons so it doesn't bother me so much that this live response slowdown gently prods people to break up big proofs into smaller pieces

Tomas Skrivan (Mar 30 2022 at 23:22):

:cry: I'm sad about this. I want to use tactic mode to do something like what computer algebra systems do, where you step by step modify an expression to a desired form. Some manipulations can be quite expensive, and you do not know the intermediate forms needed to break it up into several pieces.

Tomas Skrivan (Mar 30 2022 at 23:23):

Would it be possible to have a tactic checkpoint that would cache the state?

Mario Carneiro (Mar 30 2022 at 23:23):

in your example, you can work around this by writing the intermediate form after the expensive simp [...], and then do suffices : ... by sorry in place of the simp
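
A rough sketch of that workaround on a toy goal (the intermediate form here is hypothetical, standing in for whatever the expensive simp actually produces):

example (n : Nat) : 1 * n + 1 * n = 2 * n := by
  -- state the intermediate form the expensive `simp [...]` would produce,
  -- and stub out the justification while iterating on the rest
  suffices h : n + n = 2 * n by
    sorry
  -- continue working interactively on the cheaper goal `n + n = 2 * n`
  sorry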

Mario Carneiro (Mar 30 2022 at 23:24):

There have been talks about plans to checkpoint between tactics, but I don't know what stage they are in

Mario Carneiro (Mar 30 2022 at 23:25):

I would guess it's not a very high priority

Mario Carneiro (Mar 30 2022 at 23:26):

I wonder whether it is possible to write such a tactic in userland using IO references

Mario Carneiro (Mar 30 2022 at 23:26):

which IIUC are not rolled back along with the environment

Tomas Skrivan (Mar 30 2022 at 23:26):

Mario Carneiro said:

in your example, you can work around this by writing the intermediate form after the expensive simp [...], and then do suffices : ... by sorry in place of the simp

The problem is that the intermediate form is often quite complicated, and just copy-pasting it does not work as it is quite often missing some type annotations.

Mario Carneiro (Mar 30 2022 at 23:27):

my impression was that Lean 4 is better at pp roundtripping, but pp.all is a hammer to solve the problem

Tomas Skrivan (Mar 30 2022 at 23:29):

Yay the expression I'm working with is 570 lines of code with pp.all :tada:

Mario Carneiro (Mar 30 2022 at 23:30):

maybe unfolding everything wasn't a good idea...?

Tomas Skrivan (Mar 30 2022 at 23:37):

It is just a complicated algebraic expression where every operation contains the projections, e.g. from VectorSpace to AddCommGroup to AddGroup to SubNegMonoid to Sub. That totally explodes the expression.

Mario Carneiro (Mar 30 2022 at 23:44):

turning off implicits should help with that. Those subterms are shared anyway

Tomas Skrivan (Mar 30 2022 at 23:54):

Mario Carneiro said:

I wonder whether it is possible to write such a tactic in userland using IO references

Not sure how to do a checkpoint, but maybe something like a cached simp would be possible. It would store the state before and after you execute simp. The next time it gets evaluated, you first check whether the state corresponds to the cached one and just load the state from after.

Mario Carneiro (Mar 30 2022 at 23:55):

that's what I'm trying now. the hard part is that the state is completely different with regard to metavariable and local variable unique names

Mario Carneiro (Mar 30 2022 at 23:55):

you can't just use the old expression

Tomas Skrivan (Mar 30 2022 at 23:56):

You're a step ahead :) Yeah, that is indeed a problem.

Tomas Skrivan (Mar 31 2022 at 00:00):

What would be sufficient for me is to cache only conv commands. If I understand it right, it would be enough to store the final form and the equality to the original after a rewrite. This should get rid of metavariables. The problem with local variables stays.

Mario Carneiro (Mar 31 2022 at 00:01):

my plan is to support macro "checkpoint" " => " tacs:tacticSeq : tactic => ...
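
A rough sketch of that macro shape, with no caching behind it (checkpoint' is a placeholder name, chosen so it doesn't clash with the checkpoint tactic that was added later):

-- just runs the tactics; a real implementation would save and restore the
-- elaboration state around them
macro "checkpoint'" " => " tacs:tacticSeq : tactic => `(tactic| ($tacs))

example : True := by
  checkpoint' => trivial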

Mario Carneiro (Mar 31 2022 at 00:02):

which works like what you said about the cached simp

Mario Carneiro (Mar 31 2022 at 00:02):

you could have a version of this for conv too

Tomas Skrivan (Mar 31 2022 at 00:02):

Having that would be really great!

Patrick Massot (Mar 31 2022 at 06:11):

Mario Carneiro said:

very large proofs are a bad idea for several reasons so it doesn't bother me so much that this live response slowdown gently prods people to break up big proofs into smaller pieces

I think this opinion comes from the fact that most of mathlib is still laying down very elementary foundations. In real mathematics most proofs simply can't be broken up like this. I consider this a major issue of Lean 3. Of course this isn't blocking mathport so it isn't a priority in Lean 4, but it could very well be the very top priority from a user perspective after the port.

Patrick Massot (Mar 31 2022 at 06:12):

I used to laugh at Coq's curtain going up and down the file, but nowadays I really wish I could have this in Lean so that I would get an immediate tactic state update when typing.

Mario Carneiro (Mar 31 2022 at 06:17):

The reasons to break up proofs also have to do with readability to other people, not just Lean. These huge proofs are really not desirable in any sense. Of course we should try to address these issues with tooling support where possible, but it's papering over the issue. I am reminded of an adage I learned from who knows where: if you hit a system limit like a timeout or stack overflow, you should first consider whether you are doing something wrong before increasing the limit

Mario Carneiro (Mar 31 2022 at 06:20):

The "elementarity" of the proof has nothing at all to do with it. You can have large proofs and modular proofs at the high level and the low level equally well

Mario Carneiro (Mar 31 2022 at 06:23):

in CS, a common rule of thumb is to not let your functions get too large or too deeply nested. Saying "it's okay because I'm building on a big framework" is not an excuse, and the rule is not related to the effectiveness of the compiler on your code (although if you let it get really bad then you might hit a compiler limit).

Patrick Massot (Mar 31 2022 at 06:38):

Having abstract principles like this sounds nice, but math simply doesn't work like this.

Patrick Massot (Mar 31 2022 at 06:40):

And cutting a proof into ten lemmas that have the same assumptions and are used only once doesn't increase readability.

Mario Carneiro (Mar 31 2022 at 06:42):

it's difficult to get concrete in the absence of an example, but I have done enough big proofs to say that it's possible to have the discipline to use definitions when needed to break proofs into parts, even if some of the lemmas have no reasonable names

Mario Carneiro (Mar 31 2022 at 06:42):

they at least give you a place to hang documentation

Tomas Skrivan (Mar 31 2022 at 06:44):

Another way to phrase what I'm doing: I write down a program specification as a goal and then use a bunch of tactics to generate executable code.

The whole point is that the user can write down a simple, short specification that is guaranteed to be correct/exactly what the user wants, and Lean helps you build the program you want.

Breaking it down into multiple steps kind of defeats the purpose.

Losing interactivity after a few steps of program synthesis is a major problem here.

Mario Carneiro (Mar 31 2022 at 06:45):

Another way to phrase what I'm doing: I write down a program specification as a goal and then use a bunch of tactics to generate executable code.

I had a feeling that this was the reason for your code (which was why the claim that it was about algebra was surprising to me)

Mario Carneiro (Mar 31 2022 at 06:47):

I am probably incurably of the opinion that anything that takes more than a few seconds to check is Doing It Wrong but I am clearly biased by my experiences with metamath and MM0

Tomas Skrivan (Mar 31 2022 at 06:48):

I'm building an ODE solver right now, and the simp tactic was doing a complicated gradient computation of an energy function.

After the differentiation I want to massage the resulting gradient into a form that is fast to execute.

Mario Carneiro (Mar 31 2022 at 06:48):

Surely that can be done without simp

Mario Carneiro (Mar 31 2022 at 06:49):

if it is based on syntactically matching the input then it should be doable in ~linear time (aka very fast) using lemma applications

Mario Carneiro (Mar 31 2022 at 06:50):

and that's including the conv bits afterward

Tomas Skrivan (Mar 31 2022 at 06:55):

Not sure what 'syntactic matching' is, but I don't think it is that simple. In particular, I'm taking a differential w.r.t. a function (not a simple variable) and then computing an adjoint of the resulting differential (a linear function) to obtain the gradient.

Tomas Skrivan (Mar 31 2022 at 06:58):

Some rewrite steps need proofs that certain functions are linear. Or expressions with sums and Kronecker deltas appear, and to eliminate them I need proofs that certain functions are invertible so that the sum can be reindexed and the Kronecker delta eliminated.

Mario Carneiro (Mar 31 2022 at 06:58):

By syntactic matching, I mean that the form of the output is a function of the syntax that you are given on input (i.e. you recurse on the structure of the term). Most things that can be done with typeclasses have this form
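
A toy illustration of that idea (E and Evaluates are made up for this sketch, not Tomas's actual code): the output is computed by recursing on the structure of the input term, with typeclass search doing the recursion.

-- toy expression type; the "output" Nat is determined by the shape of the term
inductive E where
  | lit (n : Nat)
  | add (a b : E)

class Evaluates (e : E) (n : outParam Nat) : Prop

instance (n : Nat) : Evaluates (.lit n) n := ⟨⟩
instance (a b : E) (m n : Nat) [Evaluates a m] [Evaluates b n] :
    Evaluates (.add a b) (m + n) := ⟨⟩

-- each node of the input term matches exactly one instance, so the search is
-- roughly linear in the size of the term
example : Evaluates (.add (.lit 1) (.lit 2)) (1 + 2) := inferInstance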

Mario Carneiro (Mar 31 2022 at 06:59):

all of those things can be handled, especially if you have a closed set of things to handle

Mario Carneiro (Mar 31 2022 at 06:59):

even if you don't, you can use an attribute

Tomas Skrivan (Mar 31 2022 at 07:03):

I have a bunch of theorems as rewrite rules that I want to apply. So what am I supposed to use, if not simp, to apply these theorems?

Mario Carneiro (Mar 31 2022 at 07:04):

norm_num is written like this BTW. You match on the input syntax, and in each case you know exactly which rule you want to apply

Mario Carneiro (Mar 31 2022 at 07:05):

so it's not really just throwing a bag of rewrites at the term until things stop changing

Sebastien Gouezel (Mar 31 2022 at 07:07):

Mario Carneiro said:

very large proofs are a bad idea for several reasons so it doesn't bother me so much that this live response slowdown gently prods people to break up big proofs into smaller pieces

As a data point, in my paper on the formalization of the Morse lemma in hyperbolic geometry in Isabelle, the proof of the main lemma (after having isolated everything that could reasonably be split off) is more than 1000 lines long -- it is an intricate double induction, which it wouldn't make any sense to break into separate lemmas. In Isabelle, it works very smoothly because the system doesn't recompile the whole proof every time; it caches everything up to your first modification.

Mario Carneiro (Mar 31 2022 at 07:09):

I would actually have just the double-induction framework in the main theorem and break the body into a separate lemma

Mario Carneiro (Mar 31 2022 at 07:09):

that's a pretty clear split point

Mario Carneiro (Mar 31 2022 at 07:10):

aka: I'm glad your compiler can handle 8 levels of nested brackets but I sure can't

Mario Carneiro (Mar 31 2022 at 07:13):

the Con-NF proof also contains a really huge transfinite induction, which spans most of the paper. In the Lean version this is broken out into a bunch of definitions and theorems, with only a single-line definition to tie the knot at the end

Tomas Skrivan (Mar 31 2022 at 07:21):

norm_num is written like this BTW. You match on the input syntax, and in each case you know exactly which rule you want to apply

Once I know the set of rules I want to use, I might implement it like norm_num, but right now I'm still figuring out the set of rewrite rules, their order, and their priorities.

Sebastien Gouezel (Mar 31 2022 at 07:23):

Mario Carneiro said:

I would actually have just the double-induction framework in the main theorem and break the body into a separate lemma

I'm not sure that's a reasonable course of action when the induction assumptions would be like 80 lines.

Mario Carneiro (Mar 31 2022 at 07:23):

you would put the assumptions in a definition in that case

Sebastien Gouezel (Mar 31 2022 at 07:24):

It's not one definition, it's literally 80 lines of properties. Or build up an 80-line definition, but I'm not sure it's a big gain.

Mario Carneiro (Mar 31 2022 at 07:24):

I mean an 80-line definition, or possibly multiple levels of definitions if it's that bad

Mario Carneiro (Mar 31 2022 at 07:25):

the idea is to limit what has to stick around in the reader's head all at once

Kevin Buzzard (Mar 31 2022 at 08:38):

Mario I think that what's happening here is that the kind of maths you've seen formalised is of a different nature to the kind of maths Patrick and Sebastien are talking about.

Mario Carneiro (Mar 31 2022 at 08:50):

Perhaps. The proof of the prime number theorem was on similar levels of hairy though, so I'm not really convinced

Kevin Buzzard (Mar 31 2022 at 08:55):

The prime number theorem is undergraduate mathematics (at my university).

Mario Carneiro (Mar 31 2022 at 08:56):

yes, it's "elementary"

Mario Carneiro (Mar 31 2022 at 08:56):

doesn't make it a small proof though

Mario Carneiro (Mar 31 2022 at 08:57):

it ended up as something like 23 numbered lemmas in order with ~20 hypotheses in most of them

Kevin Buzzard (Mar 31 2022 at 10:11):

I agree it doesn't make it a small proof. But it might make it a conceptually simpler proof somehow. I think this is a very interesting question. I finished my course with a proof of the Hilbert Basis Theorem and, just like Sebastien was saying, beyond some point it's extremely difficult to start factoring out sublemmas; you could, I guess, just do extract_goal at some arbitrary point, but then you will end up with very ugly single-use lemmas; beyond some point in history, delicate and complex arguments became more commonplace. Whether or not things scale is something we'll find out soon enough, I suspect.

Sebastien Gouezel (Mar 31 2022 at 10:20):

It's not even clear it scales for mathematicians. I mean, papers of 100 pages doing one single proof are not uncommon today, and essentially no-one can check these seriously.

Arthur Paulino (Mar 31 2022 at 10:23):

I had a good experience with CoqIDE's curtain, as Patrick called it. It felt particularly good when I was reading a Coq book (from the Software Foundations series) in it because it gave me a sense of progression. The only problem was that it didn't save my progress, so I had to remember where I was and re-run it every time I closed CoqIDE; I ended up leaving it open (and idle) for several hours.

I think the best solution would aggregate the best of both worlds in an on/off toggle fashion. For common programming tasks I prefer Lean to just eat up the entire file every time I change something. But for heavy typechecking tasks, such as advanced mathematics, there might be advantages to having cached states in between tactics so Lean doesn't have to compile the same thing over and over.

Mario Carneiro (Mar 31 2022 at 10:25):

I think that even if the best you can do is extract_goal this is still useful in the sense that it keeps the context bounded in size and you can choose optimal cut-points to keep things manageable

Mario Carneiro (Mar 31 2022 at 10:25):

the lemma might have no useful name, that's okay

Mario Carneiro (Mar 31 2022 at 10:26):

I mean, if the process of making definitions for groups of stuff helps you discover a new concept, great, but it's a useful exercise even if it's just lemma_1, foo.aux_2 stuff

Tomas Skrivan (Mar 31 2022 at 10:27):

I was getting an odd sense of satisfaction from every line that turned green in Coq, and I was missing that when I switched to Lean :)

Mario Carneiro (Mar 31 2022 at 10:28):

And to be clear, I am not at all against checkpointing within a proof, if we can find a way around the technical hurdles. That's a straight win. But it is in service of proofs I would not consider "good"

Arthur Paulino (Mar 31 2022 at 10:29):

Mario Carneiro said:

I think that even if the best you can do is extract_goal this is still useful in the sense that it keeps the context bounded in size and you can choose optimal cut-points to keep things manageable

Might have been my lack of skill, but I've had trouble with extract_goal when there were metavariables involved

Mario Carneiro (Mar 31 2022 at 10:29):

yes, checkpointing has the same issues

Mario Carneiro (Mar 31 2022 at 10:29):

if you want to be able to checkpoint at an arbitrary position, at least

Sebastian Ullrich (Mar 31 2022 at 10:30):

Arthur Paulino said:

I think the best solution would aggregate the best of both worlds. For common programming tasks I prefer Lean to just eat up the entire file every time I change something. But for heavy typechecking tasks, such as advanced mathematics, there might be advantages to having cached states in between tactics so Lean doesn't have to compile the same thing over and over.

I'm not sure I understand your distinction. "Should we check the entirety of the remaining file on each change" (which is the only mode in Lean 4 mainly because of limitations in editor implementations of LSP) and "can we restore a tactic's state when the change is strictly below it" are essentially orthogonal issues.

Sebastian Ullrich (Mar 31 2022 at 10:31):

Tomas Skrivan said:

I was getting an odd sense of satisfaction from every line that turned green in Coq, and I was missing that when I switched to Lean :)

Moving from an orange bar for the non-processed parts to a green bar for the processed parts would be a comparatively trivial change :). IIRC Isabelle/jEdit does this?

Arthur Paulino (Mar 31 2022 at 10:34):

I see. Let me rephrase it: I think it would be nice to have an option to toggle the automatic processing on and off. But I didn't know it was a limitation of VS Code.

Sebastian Ullrich (Mar 31 2022 at 10:38):

Yeah, this would unfortunately mean that the semantic highlighting and other metadata would never update

Arthur Paulino (Mar 31 2022 at 10:40):

I wish it wasn't so laboriously boring to develop our own IDE :upside_down:

Leonardo de Moura (Apr 01 2022 at 03:57):

We pushed the checkpoint tactic, but it is very experimental, and we don't have infrastructure for testing it in the CI yet.
The test file based on @Tomas Skrivan's example looks more responsive now:

class A (n a b : Nat)
instance (a b : Nat) : A 0 a b := ⟨⟩
instance (a b : Nat) [A n a b] : A (n+1) a b := ⟨⟩
theorem foo (a b n : Nat) [A 4000 a b] : a * n + b * n = (a + b) * n := sorry
set_option synthInstance.maxHeartbeats 500000
set_option synthInstance.maxSize 100000
set_option maxRecDepth 10000

set_option trace.Meta.debug true
example (n : Nat) : 1 * n + 1 * n + 1 * n + 1 * n = 4 * n := by
  checkpoint
    rw[foo]
  checkpoint
    rw [foo]
  trace "hello world"
  rw [Nat.add_comm]

Tomas Skrivan (Apr 01 2022 at 09:02):

@Leonardo de Moura Really cool, it works! Unfortunately it is a bit inconsistent. When you modify the string in the example to "hello world123456789" one character at a time, you get a quick response on odd numbers but a slow one on even numbers.

Arthur Paulino (Apr 01 2022 at 11:26):

Leonardo de Moura said:

We pushed the checkpoint tactic, but it is very experimental

Oh wow! This is a biggie IMO :heart:

Leonardo de Moura (Apr 01 2022 at 12:55):

@Tomas Skrivan I pushed a fix. Could you please try again?

Tomas Skrivan (Apr 01 2022 at 13:26):

@Leonardo de Moura Great! Works as expected now!

A few problems remain when you modify in between checkpoints. This might be important when you already have a complicated proof done but have to modify it due to some changes above.

Here the tracing message does not update when you modify it (probably because tracing does not change the state):

example (n : Nat) : 1 * n + 1 * n + 1 * n + 1 * n = 4 * n := by
  checkpoint
    rw[foo]
  trace "hello world123456789"
  checkpoint
    rw[foo]
  rw [Nat.add_comm]

Something that changes the state has to wait until all the following tactics are finished. Here, modifying enter [1] has to wait for the second rw[foo]:

example (n : Nat) : 1 * n + 1 * n + 1 * n + 1 * n = 4 * n := by
  checkpoint
    rw[foo]
  conv =>
    enter [1,1]
  checkpoint
    rw[foo]
  rw [Nat.add_comm]

Of course, the solution is to comment out the rest. Is this a limitation of LSP, that the state can't be updated until the whole command is processed?

Leonardo de Moura (Apr 01 2022 at 13:33):

What about the following hack?

macro "stop" s:tacticSeq : tactic => `(repeat sorry)

Then, you put the stop before the second checkpoint.

example (n : Nat) : 1 * n + 1 * n + 1 * n + 1 * n = 4 * n := by
  checkpoint
    rw[foo]
  conv =>
    enter [1,1]
  stop
  checkpoint
    rw[foo]
  rw [Nat.add_comm]

Tomas Skrivan (Apr 01 2022 at 17:36):

Using stop is a lot less messy than commenting and uncommenting the remainder of the proof. I will play around with it and see how comfortable it is to use.


Last updated: Dec 20 2023 at 11:08 UTC