Zulip Chat Archive

Stream: Lean for the curious mathematician 2020

Topic: (Deterministic) timeout at an exercise


Pim Spelier (Jul 18 2020 at 07:16):

Hello,
I'm currently doing the exercise showing that the derivative of the determinant at the identity matrix is the trace. Twenty lines in I had done just enough work to arrive at the goal ∥(B + 1).det - 1 - (⇑(matrix.trace_clm n) (B + 1) - ⇑(matrix.trace_clm n) 1)∥ ≤ c * ∥B + 1 - 1∥, but at the next step I got a (deterministic) timeout. I don't feel like I'm doing anything weird in my code; I squeezed my simps, and all the other steps seem fairly simple. Is there some way I can see what is taking Lean so long, so that I can try to speed up the lines that need it the most?

I guess the best solution is to first prove some lemmas that I could then use in the proof in order to make it shorter and hence faster, but I'd still like to understand why Lean is taking a lot more time than I expected.

The minimal working example is pretty long, so I'm not sure sharing it would be very helpful, but let me know if I should!

Kevin Buzzard (Jul 18 2020 at 09:06):

Hi Pim! I think you were in one of the breakout rooms with me yesterday but never said a word to me :-) I'm glad you're speaking now! Yury suggested to me yesterday that this proof would be long to formalise in Lean, but definitely possible. I'm afraid that deterministic timeouts can occur for all manner of reasons in Lean, and it will be very hard to diagnose without you posting code. Just stick your code up as a gist or paste it in some other way, and maybe people will take a look. Sometimes one tiny typo can throw the elaborator into complete confusion and send it off looking for answers to questions which you didn't mean to ask it at all.

Pim Spelier (Jul 18 2020 at 09:18):

Hi Kevin!
My code is:


Scott Morrison (Jul 18 2020 at 09:18):

There are three "legitimate" reasons for a deterministic timeout:

  • you call a tactic that does something complicated, or that could just try many things (e.g. tidy or library_search)
  • you call simp with a bad simp set that goes into a loop (see the sketch below)
  • you call refl (or something that calls refl under the hood), which sends Lean off unfolding definitions like crazy ("near misses" on definitional equality are often the culprit: usually the solution is to identify something that should be @[irreducible])

But at least as often it's because of something "dumb", often a syntactical mishap in entering a term or tactic.
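
For the second one, a minimal sketch of a looping simp set (f and g here are hypothetical, just for illustration):

example (f g : ℕ → ℕ)
  (h1 : ∀ n, f n = g n) (h2 : ∀ n, g n = f n) :
  f 0 = g 0 :=
by simp [h1, h2]   -- the two rewrites undo each other, so simp loops until the timeout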

Pim Spelier (Jul 18 2020 at 09:25):

On further experimentation, it seems the problem might in fact be with the change command (the line commented out in the previous version). Does that call refl under the hood?

Kevin Buzzard (Jul 18 2020 at 09:26):

Yes! If you try to change A to B and A and B are not in fact definitionally equal, then when Lean uses refl to try and prove that they are equal it might go crazy.
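
A minimal sketch with Peano naturals, where change only goes through up to definitional equality:

example (n : ℕ) : n + 0 = n :=
begin
  change n = n,     -- fine: `n + 0` unfolds to `n` by the definition of nat.add
  refl,
end

example (n : ℕ) : 0 + n = n :=
begin
  -- change n = n,  -- this would fail: `0 + n` does not reduce for a variable `n`
  exact nat.zero_add n,
end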

Pim Spelier (Jul 18 2020 at 09:27):

Before the change command, the hypothesis hA is of type dist (B + 1) 1 < eps, and 0 + 1 is definitionally equal to 1, right?

Kevin Buzzard (Jul 18 2020 at 09:27):

Can you please post an #mwe in the sense of the link?

Kevin Buzzard (Jul 18 2020 at 09:28):

PS whether 0 + 1 is definitionally equal to 1 will depend on what the types of 0 and 1 are. If they are naturals, then yes. If they are reals, I would be far less sure.

Kevin Buzzard (Jul 18 2020 at 09:28):

import data.real.basic

example : (0 : ℝ) + 1 = 1 := rfl -- fails

Pim Spelier (Jul 18 2020 at 09:32):

They are in fact n×n matrices, so it does make more sense that 0 + 1 is not definitionally equal to 1

Pim Spelier (Jul 18 2020 at 09:36):

We even have:

example : (1 : matrix n n ℝ) = 0 + 1 := rfl -- fails

that shows it's not definitionally equal

Pim Spelier (Jul 18 2020 at 09:38):

Weirdly enough,

example : (1 : matrix n n ℝ) = 0 + 1 := begin
  library_search,
end

also times out...

Pim Spelier (Jul 18 2020 at 09:45):

but zero_add luckily proves this :)
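
Something like this (assuming n is a fintype with decidable equality, which matrix's 1 needs):

import data.real.basic
import linear_algebra.matrix

variables {n : Type*} [fintype n] [decidable_eq n]

-- `zero_add` gives `0 + 1 = 1`, so its `symm` closes the goal
example : (1 : matrix n n ℝ) = 0 + 1 := (zero_add 1).symm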

Pim Spelier (Jul 18 2020 at 09:53):

Still, is there a way to explicitly see what is taking Lean so long? I would like to be able to see automatically where the timeout occurred; is that possible?

Kevin Buzzard (Jul 18 2020 at 09:55):

It's extremely hard to diagnose what's going on until you post a #mwe

Kevin Buzzard (Jul 18 2020 at 09:55):

I.e. I am too lazy to do the work which you can do easily

Pim Spelier (Jul 18 2020 at 09:58):

Well, here the diagnosis, thanks to you and Scott, is that I was trying to change to something non-definitionally equal, so my specific problem is solved.

Pim Spelier (Jul 18 2020 at 09:59):

I was just wondering whether it is in general possible to automatically see where a timeout occurs.

Scott Morrison (Jul 18 2020 at 10:00):

Delete lines from the end of the proof?

Scott Morrison (Jul 18 2020 at 10:00):

Use extract_goal to pull a suspicious step out into a separate lemma?
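
For instance (a toy example, not from the proof in question):

import tactic

example (x : ℕ) (h : 0 < x) : 0 ≤ x :=
begin
  extract_goal,   -- prints the goal, with its hypotheses, as a standalone lemma you can paste elsewhere
  exact le_of_lt h,
end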

Scott Morrison (Jul 18 2020 at 10:01):

Try adding in implicit arguments by hand using @?

Scott Morrison (Jul 18 2020 at 10:01):

None of these are automatic, but all helpful for running it down.

Kenny Lau (Jul 18 2020 at 10:03):

import data.real.basic
import linear_algebra.matrix

#check (rfl : (0 + 1 : ℝ) = 1) -- fails
attribute [reducible] real real.comm_ring_aux
#check (rfl : (0 + 1 : ℝ) = 1) -- works
#check (rfl : (0 + 1 : matrix bool bool ℝ) = 1) -- fails

Kenny Lau (Jul 18 2020 at 10:03):

don't quite understand why the last one fails

Pim Spelier (Jul 18 2020 at 10:05):

Ok, thanks! Deleting lines from the end of the proof already sounds like it should work most of the time, especially knowing that even something simple like change can cause timeouts.

Scott Morrison (Jul 18 2020 at 10:06):

change sounds simple, but recall how unboundedly bad "it's obvious" proofs by human mathematicians can become. change is pretty much the same thing.

Pim Spelier (Jul 18 2020 at 11:18):

I guess the lesson for me is indeed that even definitional equality can be arbitrarily complicated

Kevin Buzzard (Jul 18 2020 at 11:19):

If you're a mathematician then you shouldn't get too hung up about definitional equality. It is an implementation issue. No mathematician would believe that n + 0 = n and 0 + n = n were any different as statements; however, if you set things up the way Peano did then one is definitional and the other isn't. The important thing for mathematicians is that they are both simp lemmas.
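
Concretely, for Peano naturals:

import tactic

example (n : ℕ) : n + 0 = n := rfl      -- definitional: nat.add recurses on its second argument
example (n : ℕ) : 0 + n = n := rfl      -- fails: not definitional
example (n : ℕ) : 0 + n = n := by simp  -- but zero_add is a simp lemma, so simp closes it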

Johan Commelin (Jul 18 2020 at 13:06):

@Pim Spelier You might be looking for

set_option profiler true

Put it above your lemma...
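
For example:

import tactic

set_option profiler true   -- Lean reports elaboration and tactic execution times from here on

example : 2 + 2 = 4 :=     -- the timing output shows up in the messages for this declaration
by norm_num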

Pim Spelier (Jul 18 2020 at 13:54):

That seems to work, thanks!

Mario Carneiro (Jul 18 2020 at 17:05):

@Kevin Buzzard I know what you mean, but to be fair I don't think Peano deserves the blame for this sordid state of affairs. I don't think such a convoluted notion of "truth by definition" for recursive functions appears in his axiomatization. You could blame lambda calculus but here it's not so much "unfolding definitions" as "computing" when you perform beta reduction. I don't know where this idea came from prior to CIC itself


Last updated: Dec 20 2023 at 11:08 UTC