Zulip Chat Archive

Stream: Berkeley Lean Seminar

Topic: Bug in lean??


Shreesha S (Jun 21 2020 at 21:20):

I'm currently trying to solve --0079 in 09_limits_final.lean
This is my proof:

lemma bdd_below_segment {f : ℝ → ℝ} {a b : ℝ} (hf : ∀ x ∈ Icc a b, continuous_at_pt f x) :
∃ m, ∀ x ∈ Icc a b, m ≤ f x :=
begin
  have negf : ∀ x ∈ Icc a b, continuous_at_pt (λ x, -(f x)) x,
  {
    intros x hx,
    exact continuous_opposite (hf x hx),
  },
  cases bdd_above_segment negf with negm hnegm,
  use (-negm),
  intros x xab,
  specialize hnegm x xab,
  -- works until here
  -- now, I just need to move terms around an inequality
  linarith,
  -- linarith fails
end

Does anybody know what might be wrong with this?
The solution file does something very similar and linarith works in a similar context.

Thanks in advance!

Kyle Miller (Jun 21 2020 at 21:30):

You've run into a bug in the tutorial that I ran into too (one I've been meaning to submit a pull request to fix). First, to see what's going on, comment out the set_option pp.beta true line at the top of the file.

Kyle Miller (Jun 21 2020 at 21:34):

When pp.beta is true, the goal window will automatically beta reduce expressions, which can lead to some bewilderment when things that look like they should work don't. The issue is that you were seeing

hnegm : -f x ≤ negm

when, secretly, it was actually

hnegm : (λ (x : ℝ), -f x) x ≤ negm

This is a non-beta-reduced term. To beta reduce it (and, I think, nothing more), do dsimp only at hnegm.
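In your proof that would go just before linarith, something like this (a sketch of the tail end of your proof above):

  specialize hnegm x xab,
  dsimp only at hnegm,  -- beta reduces (λ (x : ℝ), -f x) x in hnegm to -f x
  linarith,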

Patrick Lutz (Jun 21 2020 at 21:34):

So I guess first doing dsimp at hnegm will make linarith work as expected here?

Patrick Lutz (Jun 21 2020 at 21:34):

oops, sent at the same time as your last message I guess

Kyle Miller (Jun 21 2020 at 21:36):

This is not a bug in Lean because linarith won't beta reduce expressions for you by design, since beta reduction can make terms arbitrarily complicated (lambda calculus can compute all recursively enumerable functions, after all).

Shreesha S (Jun 21 2020 at 21:38):

Wow that works! thanks a lot
So what exactly does set_option pp.beta true do?

Kyle Miller (Jun 21 2020 at 21:39):

Do you understand what hnegm : (λ (x : ℝ), -f x) x ≤ negm means in my message above?

Kyle Miller (Jun 21 2020 at 21:39):

But, specifically, it is setting a pretty printer option to beta reduce terms before showing them in the goal window.

Shreesha S (Jun 21 2020 at 21:40):

I think I do.. It was a lambda function but the final goal was not. It had to be reduced..
Ahh got it...

Kyle Miller (Jun 21 2020 at 21:40):

Beta reduction is the rule that says when you apply a lambda expression to an argument, you can substitute that argument into the body of the lambda expression. It models function application.
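A tiny made-up example (not from the tutorial), where the two sides are definitionally equal precisely by beta reduction:

-- (λ x : ℕ, x + 1) 2 beta reduces to 2 + 1, so rfl closes the goal
example : (λ x : ℕ, x + 1) 2 = 2 + 1 := rfl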

Shreesha S (Jun 21 2020 at 21:42):

I think I get it now... thanks a lot!

Kyle Miller (Jun 21 2020 at 21:44):

I'd looked into linarith, and what it does is look for linear inequalities and locate all the "atoms", which are the individual terms of the inequalities. It does some really basic equality checking of these atoms, and it won't factor anything out, even where it later could. For example, (λ (x : ℝ), -f x) x ends up being one of the atoms of hnegm; while this is equal to -f x, it won't notice that f x is a better atom. Anyway, in your case the atoms were negm, f x, and (λ (x : ℝ), -f x) x, and that last one is unrelated to f x as far as linarith can tell, so it's not able to prove the inequality.
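Here's a minimal made-up reproduction of that behaviour (assuming the usual imports, e.g. data.real.basic and tactic):

import data.real.basic
import tactic

-- With the unreduced hypothesis, linarith sees (λ x, -f x) x as an opaque atom
-- unrelated to f x, so plain linarith fails here.
example (f : ℝ → ℝ) (x m : ℝ) (h : (λ x, -f x) x ≤ m) : -m ≤ f x :=
by { dsimp only at h,  -- h is now -f x ≤ m
     linarith }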

Patrick Lutz (Jun 21 2020 at 21:45):

By the way, there is another tactic, linarith!, which is more aggressive about looking for atoms. But it also fails in this case

Kyle Miller (Jun 21 2020 at 21:46):

I got the impression from perusing the source that it fails because the atom is -f x and it doesn't know how to continue factoring things out. (I really had hoped linarith! would work, but it sort of makes sense not trying to "fix" it if dsimp works well enough.)

Patrick Lutz (Jun 21 2020 at 21:50):

It's a little clunky, but linarith [show -f x ≤ negm, from by dsimp at hnegm; exact hnegm], also works

Kyle Miller (Jun 21 2020 at 21:52):

By the way, it's sort of a relief that I'm not the only one that had this issue with these tutorials. I myself spent a couple frustrating hours trying to get to the bottom of it (and I was about ready to wait another few years to learn a computer proof assistant -- it's no fun fighting with invisible representations of things in a language with a complicated type system!). Apparently Patrick Massot's students never had this issue, but I wonder if it's more likely to appear with PhD students rather than undergrads because a PhD student is more likely to have confidence about what should actually work. (That is, it appears for undergrads, too, but they'll just try something else because they think they're doing something wrong.)

Kyle Miller (Jun 21 2020 at 21:53):

Another thing instead of dsimp is

change -f x ≤ _ at hnegm,

if you want some control over the final representation.
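A made-up illustration (change accepts anything definitionally equal to the current statement, which includes its beta-reduced form):

example (h : (λ n : ℕ, n + 1) 2 = 3) : 2 + 1 = 3 :=
by { change 2 + 1 = 3 at h,  -- definitional (beta) change of h's statement
     exact h }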

Patrick Lutz (Jun 21 2020 at 21:54):

Kyle Miller said:

By the way, it's sort of a relief that I'm not the only one that had this issue with these tutorials. I myself spent a couple frustrating hours trying to get to the bottom of it (and I was about ready to wait another few years to learn a computer proof assistant -- it's no fun fighting with invisible representations of things in a language with a complicated type system!). Apparently Patrick Massot's students never had this issue, but I wonder if it's more likely to appear with PhD students rather than undergrads because a PhD student is more likely to have confidence about what should actually work. (That is, it appears for undergrads, too, but they'll just try something else because they think they're doing something wrong.)

I think the existence of the zulip chat is actually one of the main things that makes Lean tolerable to learn.

Kyle Miller (Jun 21 2020 at 21:55):

Yeah, this is certainly an invaluable resource. I get a lot of help through even just the search box at the top.

Patrick Lutz (Jun 21 2020 at 21:56):

I don't think I've ever asked a question on the new members stream which didn't get some kind of answer within 5 minutes

Patrick Lutz (Jun 21 2020 at 21:56):

But I do feel like the documentation available online for Lean is a bit lacking right now

Kyle Miller (Jun 21 2020 at 21:58):

Some of it's for Lean 2, which is confusing. Yesterday I was trying to figure out how to do the different specialized rewrite rules, but I'm thinking they don't exist anymore.

Patrick Lutz (Jun 21 2020 at 22:04):

Some of them definitely still work, not sure about all of them

Patrick Lutz (Jun 21 2020 at 22:07):

Oh and I guess some of them still exist but now have different notation? Like it looks like rw → h used to be rw -h?

Kyle Miller (Jun 21 2020 at 22:14):

Patrick Lutz said:

Oh and I guess some of them still exist but now have different notation? Like it looks like rw → h used to be rw -h?

I can't get either to work. The documentation suggests ← is the only thing you can do. I think maybe those Lean 2 rewrite rules were replaced by conv.
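For instance, this is essentially the example from mathlib's conv documentation, rewriting only the second factor on the left-hand side:

example (a b c : ℕ) : a * (b * c) = a * (c * b) :=
begin
  conv
  begin          -- | a * (b * c) = a * (c * b)
    to_lhs,      -- | a * (b * c)
    congr,       -- two goals:  | a   and   | b * c
    skip,        -- | b * c
    rw mul_comm, -- | c * b
  end
end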

Patrick Lutz (Jun 21 2020 at 22:17):

Oh, I made a typo. I meant rw ← h.

Patrick Lutz (Jun 21 2020 at 22:17):

But for instance, rw h at * still works I think

Kyle Miller (Jun 21 2020 at 22:46):

Patrick Lutz said:

But for instance, rw h at * still works I think

Yeah, that's part of its grammar:

rewrite ([ (←? expr), ... ] | ←? expr) (at (* | (⊢ | id)*))?

This says every rewrite can optionally end in an at clause, consisting of either at * or at followed by any number of identifiers or ⊢ (for the goal).

This ends up being defined using parser combinators in interactive.lean line 359 (which is used in the meta def rewrite a bit further down.)
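Made-up examples of those forms (the hypotheses are invented, just to show the syntax):

example (a b : ℕ) (h : a = b) (h1 : a + 1 = 2) (h2 : 0 + a = a) : a + 1 = 2 :=
begin
  rw h at h1 h2 ⊢,  -- rewrite a to b in h1, in h2, and in the goal
  exact h1,         -- the goal and h1 are now both b + 1 = 2
end

example (a b : ℕ) (h : a = b) (h2 : b + 1 = 2) : a + 1 = 2 :=
by { rw ← h at h2, exact h2 }  -- ← rewrites right to left: b back to a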

Kyle Miller (Jun 21 2020 at 22:52):

Oops, that file only defined everything before the "at". The meta def rewrite also refers to interactive_base.lean line 75 for the at part.


Last updated: Dec 20 2023 at 11:08 UTC