Zulip Chat Archive

Stream: new members

Topic: Lean acts slowly when a proof gets long


YH (Dec 14 2019 at 00:35):

Like, when a proof gets to about 70 lines Lean starts to be slow. Is there anything I can do?

Alex J. Best (Dec 14 2019 at 01:19):

Sure, but it depends on exactly what the problem is, which is hard to diagnose without seeing the code. It could be any one of many problems, so please post an example.

YH (Dec 14 2019 at 01:25):

https://gist.github.com/3abc/5a1aa9aff302bfb6d2f1aab41c2b9e6e
OK here is what I had.

YH (Dec 14 2019 at 01:27):

It's just that at 60 lines, whenever I add a new line it hangs for 10–15 seconds.

Simon Hudon (Dec 14 2019 at 01:27):

Which line do you add?

Simon Hudon (Dec 14 2019 at 01:28):

It's likely not a matter of length so much as what kind of work the proof requires from Lean

YH (Dec 14 2019 at 01:28):

Any newline, even if I add a random thing and remove it.

Simon Hudon (Dec 14 2019 at 01:36):

What if you do change _?

Alex J. Best (Dec 14 2019 at 03:32):

I have added a modified version of your proof in a gist: https://gist.github.com/alexjbest/5e21dea71b7ea3c4be346fd89c216951 . The short answer is that the proof is long and complicated and uses lots of tactics, which is fine, but it's often better to break things up in some way or other. I'm definitely not an expert speeder-upper, but here are some comments in no particular order; maybe not all of them are useful, but I hope some are:

  • Use separate lemmas for subproofs; these won't be re-checked each time you change the main proof. This is probably the most important thing (see the sketch after this list).
  • You sometimes use let for Props instead of have, which seems unnecessary.
  • When you have a lot of hypotheses in the context it can be hard for tactics like linarith to find a contradiction, so you can use linarith only [blah] to tell linarith which things will give a contradiction.
  • Alternatively (though this is also good in general), you can structure your proof so that when you are done with a statement it drops out of the context. For instance, your h was only used to prove hhhh but was cluttering up the context afterwards, so I moved h into a proof block for hhhh; same for hh, hhh and LHS_nonneg.
  • After identifying that some linarith calls are just simple contradictions, we can replace them; e.g. linarith only [hhhh, a2] can become
      apply lt_irrefl ((m ^ 4) ^ 3),
      exact lt_of_lt_of_le a2 hhhh,

or even exact lt_irrefl ((m ^ 4) ^ 3) (lt_of_lt_of_le a2 hhhh),

  • You can use set_option profiler true to see some profiling information about what is taking time.
  • Instead of have blah : blaaaah, begin blah end you can use have blah : blaaaah := begin blah end; for one thing, this makes it easier to use the profiler.
  • simp is great, but once you are happy with the proof it is good to change it to squeeze_simp, which will output a string simp only [blah] that should give a more minimal proof.
  • In the last part of the proof you used norm_num and ring, which are pretty heavy-handed. I tried just using simp instead; it worked but left a 0^3 in there, so simp [zero_pow] replaced both norm_num and ring here. Then with squeeze_simp I got to simp only [zero_pow, nat.succ_pos', zero_mul, zero_add, mul_zero]; the profiler says this took 22ms instead of over 600ms for the original tactics. In summary, norm_num, ring and other tactics are great if they are really needed, but sometimes simp is just as good.
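
To make a few of these concrete, here is a minimal Lean 3 sketch; the lemma aux, the hypothesis names and the statements are hypothetical, just for illustration:

import tactic.linarith

set_option profiler true  -- prints "elaboration of ... took ..." for each declaration

-- a subproof extracted as its own lemma: it is checked once, and not
-- re-checked every time the proof below is edited
lemma aux (x : ℚ) (hx : 0 ≤ x) : 0 ≤ x + 1 :=
by linarith only [hx]  -- only hx is offered to linarith

example (x : ℚ) (hx : 0 ≤ x) : 0 ≤ x + 2 :=
begin
  -- have ... := begin ... end keeps the subproof as a single unit,
  -- which is easier to work with under the profiler
  have h : 0 ≤ x + 1 := begin exact aux x hx end,
  linarith only [h],
end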

YH (Dec 14 2019 at 06:28):

  • You sometimes use let for Props instead of have, which seems unnecessary.

This is something I wanted to ask: what is the difference between have and let? I felt have is only for one goal and let has a larger scope, but I am not sure.

Alex J. Best (Dec 14 2019 at 06:41):

let remembers the definition of the term, whereas have does not. For statements (Props) the definition/proof is not relevant once it is constructed; all that matters is that a term has been constructed. Most of the time, if you are introducing terms of types that are not Prop, i.e. some natural or real number, you want to remember the definition so that you can later substitute or otherwise work with it.
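
A minimal toy sketch of the difference (the names here are hypothetical):

example : ∃ x : ℕ, 0 < x :=
begin
  -- let: the context shows x : ℕ := 1, so the definition of x can still be
  -- unfolded later
  let x : ℕ := 1,
  -- have: only the statement 0 < x is kept; the proof term is forgotten,
  -- which is all that matters for a Prop
  have hx : 0 < x := nat.succ_pos 0,
  exact ⟨x, hx⟩,
end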

Kevin Buzzard (Dec 14 2019 at 07:24):

So as Alex says, usually the problem when Lean gets sluggish like this is that you have used an expensive tactic like simp, and the convoluted algorithm it discovered to prove your goal (which can involve trying literally thousands of stupid ideas before it finds the right one) gets repeated again and again. The fault is not some inbuilt Lean issue; it's an argument which mathematicians would call logically fine and which computer scientists would call bad code. It's normally just one line which is causing the problem. Alex has listed several suggestions, but the main idea for fixing an issue like this is either to refactor (i.e. prove your lemmas before your theorems, not as sub-lemmas) or to replace an expensive tactic with the proof it found. Sometimes it can be the typeclass system though, and that can be harder to work around.
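
"Replace an expensive tactic with the proof it found" is exactly what the squeeze_simp tactic Alex mentioned automates. A minimal sketch:

import tactic.squeeze

example (n : ℕ) : n + 0 = n :=
by squeeze_simp  -- closes the goal and prints a "Try this: simp only [...]" suggestion

Pasting the suggested simp only [...] call in place of simp means Lean replays a short, targeted proof instead of redoing the whole simp search on every recompile.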

Kevin Buzzard (Dec 14 2019 at 07:54):

In particular, your original claim "when it gets to 70 lines Lean gets slow" is not an accurate diagnosis of the situation. You can tell this can't be the case because some of the files in mathlib are 1000 lines long. What is happening in those files is that 98% of the proofs are two lines long. It's a bit of a pain to diagnose what's making a 70-line proof slow -- what happens with me in practice is that the moment a proof starts to become slow, I stop and try to figure out what I did wrong.

YH (Dec 14 2019 at 07:59):

OK. I'm looking at Alex's modified version of my proof.
Actually, moving one thing out as a lemma reduces the whole elaboration time by 2 seconds. (The one that gets from a ≥ 0 and ¬(a > 0) to a = 0.) I do believe many things can be improved in that proof.
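
A minimal sketch of that extracted lemma over ℝ (the name is hypothetical, and mathlib may well have this in some form already):

import data.real.basic

lemma eq_zero_of_nonneg_of_not_pos (a : ℝ) (h1 : a ≥ 0) (h2 : ¬ a > 0) :
  a = 0 :=
le_antisymm (not_lt.mp h2) h1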

Kevin Buzzard (Dec 14 2019 at 08:04):

One thing you can do with your 70-line proof is this:

Step 1: put set_option profiler true just before the statement of the lemma.
Step 2: insert the line repeat {sorry} end #exit as the first line of the proof.
Step 3: switch VS Code to "Info View: Display Messages (Ctrl+Shift+Alt+Enter)" (now the Lean Messages window prints debugging information about the entire file rather than just what's going on in the middle of one tactic-mode proof).
Step 4: move the repeat {sorry} end #exit line around until you spot a big jump in the time taken to elaborate ag3.
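
A minimal sketch of this setup, with a hypothetical stand-in statement for the real ag3:

set_option profiler true

lemma ag3 (a b : ℕ) : a + b = b + a :=  -- stand-in statement for illustration
begin
  repeat {sorry} end #exit
  -- everything below the #exit line is never even parsed
  rw nat.add_comm,
end

Each time you move the repeat {sorry} end #exit line down past another tactic, the Lean Messages window reports a fresh "elaboration of ag3 took ..." time; a sudden jump pinpoints the expensive line.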

YH (Dec 14 2019 at 08:06):

Oh cool

YH (Dec 14 2019 at 08:06):

Step 2: insert the line repeat {sorry} end #exit as the first line of the proof (or the lemma?).

Kevin Buzzard (Dec 14 2019 at 08:07):

This is just a random line which has a good chance of just dumping you out of the proof immediately.

YH (Dec 14 2019 at 08:08):

How do I do step 3 on a Mac?

Kevin Buzzard (Dec 14 2019 at 08:08):

It doesn't always work: e.g. if you have a begin/end block within another one, you'll get syntax errors. This is why it's usually better to catch these things as they happen -- i.e. the moment things start getting slow, stop what you're doing and fix them.

Kevin Buzzard (Dec 14 2019 at 08:09):

view.png Click the button that looks like that in VS Code

YH (Dec 14 2019 at 08:10):

pasted image
Cannot find any button like that...

Kevin Buzzard (Dec 14 2019 at 08:10):

Try the three dots in the top right?

YH (Dec 14 2019 at 08:10):

pasted image

Kevin Buzzard (Dec 14 2019 at 08:11):

You have a tab which says "Lean Messages". If you close it, how can you open it again?

Kevin Buzzard (Dec 14 2019 at 08:11):

My guess is that on a Mac it will be command + all those other keys I mentioned

YH (Dec 14 2019 at 08:11):

Oh yeah, when I close it the button appears

Kevin Buzzard (Dec 14 2019 at 08:12):

There are two different windows displaying Lean messages, both helpfully with the title "Lean Messages" and explaining different things :-/

YH (Dec 14 2019 at 08:12):

pasted image
Is this the info view?

Kevin Buzzard (Dec 14 2019 at 08:12):

Yes!

YH (Dec 14 2019 at 08:12):

Oh OK. It was always open here...

Kevin Buzzard (Dec 14 2019 at 08:13):

Now compile your proof and see if you have something like "elaboration of ag3 took 10 seconds" somewhere

YH (Dec 14 2019 at 08:14):

elaboration of ag3 took 4.99s

Kevin Buzzard (Dec 14 2019 at 08:15):

So there's your problem. There are just one or two bad lines of code in there where you use something like simp to go from a < b to ¬ b ≤ a; it takes Lean about 2 seconds to do this, and you could cut it down to 0.002 seconds by changing that line to rw lt_iff_not_le or something.
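
A minimal sketch of that kind of replacement, here using mathlib's not_le (whatever the exact lemma name, the idea is the same):

import order.basic

example (a b : ℕ) (h : a < b) : ¬ b ≤ a :=
not_le.mpr h  -- a one-step term proof instead of a simp search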

Kevin Buzzard (Dec 14 2019 at 08:16):

The problem is that it turns out that maths does not go at infinite speed.

Kevin Buzzard (Dec 14 2019 at 08:16):

That's why they have induction as an axiom ;-)

Kevin Buzzard (Dec 14 2019 at 08:16):

If we know P 0 is true and P n -> P (n + 1) then we can instantly deduce P 10^10^10^10
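
A minimal sketch of that in Lean (the huge numeral is never evaluated; the recursor application just records the induction):

example (P : ℕ → Prop) (h0 : P 0) (hs : ∀ n, P n → P (n + 1)) :
  P (10 ^ 10 ^ 10 ^ 10) :=
nat.rec_on (10 ^ 10 ^ 10 ^ 10) h0 hs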

YH (Dec 14 2019 at 08:16):

There are some things which I repeated three times, like proving some properties of the 4th root of a number.
I will see if I can write a function into subtypes of ℝ and get a 4th root map together with its properties.

Kevin Buzzard (Dec 14 2019 at 08:16):

because we won't ever bother writing the proof down

Kevin Buzzard (Dec 14 2019 at 08:17):

These things are a pain to debug though. Sometimes it's just little things building up. There might be some killer line in there somewhere and rewriting it will save you 4 seconds. But it might just be a case of an annoying refactor.

Alex J. Best (Dec 14 2019 at 08:20):

Is the 4.99s the old version or the one I modified? I think I got it down to 1.2s on my computer (a laptop), so if it's the modified one it could just be that your computer can't run Lean that fast.

YH (Dec 14 2019 at 12:32):

The old one.

YH (Dec 14 2019 at 18:31):

The new one actually took 3.17s on my computer. (If it is the CPU that matters, it's a 2.3 GHz Intel Core i7.)

Kevin Buzzard (Dec 14 2019 at 19:33):

@YH

import data.real.basic
import data.real.nnreal
import data.rat
open real nnreal

lemma ag2 (a b : ℝ) : 2 * a * b ≤ a ^ 2 + b ^ 2 :=
begin
  apply le_of_sub_nonneg,
  convert mul_self_nonneg (a - b),
  ring
end

Reid Barton (Dec 14 2019 at 19:35):

also, IIRC, $a^3 + b^3 + c^3 - 3abc = \frac 12 (a + b + c) ((a - b)^2 + (b - c)^2 + (c - a)^2)$

Kevin Buzzard (Dec 14 2019 at 19:36):

spoken like a true IMOer.

Kevin Buzzard (Dec 14 2019 at 19:38):

example (a b c : ℝ) : a ^ 3 + b ^ 3 + c ^ 3 - 3 * a * b * c =
1 / 2 * (a + b + c) * ((a - b) ^ 2 + (b - c) ^ 2 + (c - a) ^ 2) := by ring

Kevin Buzzard (Dec 14 2019 at 19:52):

lemma ag4 (a b c d : ℝ) : 4 * (a * b * c * d) ≤ a ^ 4 + b ^ 4 + c ^ 4 + d ^ 4 :=
calc 4 * (a * b * c * d)
     = 2 * (2 * (a * b) * (c * d)) : by ring
... ≤ 2 * ((a * b) ^ 2 + (c * d) ^ 2) : (mul_le_mul_left $ by norm_num).2 $ ag2 _ _
... = 2 * a ^ 2 * b ^ 2 + 2 * c ^ 2 * d ^ 2 : by ring
... ≤ (a ^ 2) ^ 2 + (b ^ 2) ^ 2 + 2 * c ^ 2 * d ^ 2 : add_le_add_right (ag2 _ _) _
... ≤ (a ^ 2) ^ 2 + (b ^ 2) ^ 2 + ((c ^ 2) ^ 2 + (d ^ 2) ^ 2) : add_le_add_left (ag2 _ _) _
... = a ^ 4 + b ^ 4 + c ^ 4 + d ^ 4 : by ring

You can get ag4 from ag2 purely in calc mode

Kevin Buzzard (Dec 14 2019 at 19:54):

The trick is to know exactly what the ring tactic can and can't do. I switch between term-mode proofs like add_le_add_right (ag2 _ _) _ -- which corresponds to what a mathematician means when they say "now apply AG2 and we deduce that..." -- and by ring steps for the pure algebra.

Kevin Buzzard (Dec 14 2019 at 19:54):

All the equalities I did with ring, and on the other steps I moved as little as possible.

Kevin Buzzard (Dec 14 2019 at 19:55):

@YH your ag3 code compiles with no warnings about sorrys now :grinning:

YH (Dec 14 2019 at 20:10):

also, IIRC, $a^3 + b^3 + c^3 - 3abc = \frac 12 (a + b + c) ((a - b)^2 + (b - c)^2 + (c - a)^2)$

Yeah I know this one. I just want to write a proof that can be generalized.

Alex J. Best (Dec 14 2019 at 23:49):

I don't want to spoil your fun, so feel free not to look at this! But I got a bit addicted, and I think I got the general AM-GM to work. It's a bit ugly at the moment (I should have proved more lemmas about nnreal from the start) and was quite hard (for me): ~450 lines. But if anyone is curious, I followed the proof in https://math.stackexchange.com/a/1665056/31917 and ended up with https://gist.github.com/alexjbest/da0a2e2acf3ba252659531ca77a3b5a7

Alex J. Best (Dec 15 2019 at 00:18):

Oh :oops: I see @Yury G. Kudryashov has added some of these useful lemmas for nnreal already. I really should update mathlib more often!

YH (Dec 15 2019 at 00:21):

I don't want to spoil your fun, so feel free not to look at this! But I got a bit addicted, and I think I got the general AM-GM to work. It's a bit ugly at the moment (I should have proved more lemmas about nnreal from the start) and was quite hard (for me): ~450 lines. But if anyone is curious, I followed the proof in https://math.stackexchange.com/a/1665056/31917 and ended up with https://gist.github.com/alexjbest/da0a2e2acf3ba252659531ca77a3b5a7

Oh nice. I looked at the stackexchange proof. What I had in mind is a different one. Since $n \le 2^n$ and we have AM-GM for any $n = 2^m$: given arbitrary non-negative $a_1, \ldots, a_n$, make them into $2^n$ numbers $b_1, \ldots, b_n, m, m, \ldots, m$ with the property $b_k^{2^n} = a_k^n$, $m^{2^n} = \dfrac{\sum_k a_k^n}{n}$. Now put these into AM-GM for $2^n$ numbers and simplify. The good thing about this is that it bypasses the definition of the $n$-th root and the fancy lemmas in nnreal (you only need the square root), and it also bypasses the fancy induction (that proves it for $2^n$ and goes down). But I'm still struggling to get Lean to do basic algebraic operations.

Alex J. Best (Dec 15 2019 at 00:29):

Ah, that also sounds like a nice proof. I agree that avoiding the fancy induction seems good; the thing that attracted me to the one I picked is that the "padding" inductive step only involves replacing a couple of elements of the list, which seemed marginally easier to prove things about. It would be really interesting to compare them when done.

Yury G. Kudryashov (Dec 15 2019 at 02:23):

I almost have the weighted version of AM-GM ready.

Yury G. Kudryashov (Dec 15 2019 at 02:24):

I'll submit a series of PRs tonight.

Yury G. Kudryashov (Dec 15 2019 at 02:24):

Together with a few other inequalities.

Yury G. Kudryashov (Dec 15 2019 at 02:29):

We already have Jensen's inequality in convex, so we only need to prove that exp is convex_on univ.


Last updated: Dec 20 2023 at 11:08 UTC