Zulip Chat Archive

Stream: general

Topic: Unfolding carefully


view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 09:07):

Suppose I have the following goal f a = c (f (d a)). What I am looking to do is to unfold1 f but only on the left hand side. Currently I do generalize hack : f (d a) = x, unfold1 f, rw <- hack. Apart from the fact that it's a hack, the generalize takes a couple of seconds to execute. I tried using conv { to_lhs ... } but unfold1 is not an option in conv (and simp fails with deterministic timeout). a) Why on Earth is generalize so slow in this case - is it because c is some horrible dependently typed function with explicit well founded termination proofs? b) Is there a nicer trick to invoke unfold1 on left hand side only?

view this post on Zulip Kenny Lau (Apr 11 2018 at 09:09):

use change?

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 09:10):

that would involve me having to explicitly state the effect of the unfolding, if I understand your suggestion correctly?

view this post on Zulip Simon Hudon (Apr 11 2018 at 13:28):

What about using transitivity instead of generalize?

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:14):

ooh that is actually brilliant! what I ended up doing is something I'm ashamed of but it saves me 15 seconds on each recompile - I reformulated the lemma such that it contains a pre-generalized expression and an equality (which I will, of course, change now :P)

view this post on Zulip Simon Hudon (Apr 11 2018 at 14:16):

Great! Glad to be of help! So, the way I use transitivity to protect parts of my formulas is to do transitivity, blah, blah, refl
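A minimal sketch of the pattern (the definitions `f` and `g` here are hypothetical, just to illustrate): transitivity splits the goal at a middle term, the unfolding happens in the first goal, and the right-hand side sits untouched in the second goal.

```lean
-- Hypothetical definitions, purely for illustration.
def g : ℕ → ℕ := λ x, 2 * x
def f : ℕ → ℕ := λ x, g x + 1

example (a : ℕ) (c : ℕ → ℕ) (h : g a + 1 = c (f a)) : f a = c (f a) :=
begin
  transitivity (g a + 1),  -- goals: `f a = g a + 1` and `g a + 1 = c (f a)`
  { unfold f },            -- only the first goal mentions the original LHS
  exact h                  -- second goal: the RHS was never unfolded
end
```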

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:17):

Yeah I have never thought of that. Thanks a lot. I wish conv was a bit less constrained.

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:19):

if this worked, then conv { to_lhs, dsimp1 f } would be just fine for me - but interactive.conv.* only has a bunch of tactics for reasons I'm not sure I want to know about :P

view this post on Zulip Simon Hudon (Apr 11 2018 at 14:19):

If you have something more specific that you want to use, let's say g instead of =, you can also make your own protection lemmas:

lemma protect (x y : α) (z : β)
  (h : x = y)
  (h' : g y z)
: g x z := ...

This allows you to select exactly what part of the formula you want to rewrite when you expect something very specific. And z is protected.
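Filling in the sketch above (with hypothetical `α`, `β` and `g`), a complete version might look like:

```lean
variables {α β : Type} (g : α → β → Prop)

lemma protect (x y : α) (z : β) (h : x = y) (h' : g y z) : g x z :=
by rw h; exact h'

-- Rewriting only the first argument of `g`; `z` is never touched.
example (a b : α) (z : β) (hab : a = b) (hg : g b z) : g a z :=
protect g a b z hab hg
```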

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:20):

ah, right; this is basically the general version that I ended up doing, but I think the transitivity is such a neat hack :P

view this post on Zulip Simon Hudon (Apr 11 2018 at 14:20):

if this worked, then conv { to_lhs, dsimp1 f } would be just fine for me - but interactive.conv.* only has a bunch of tactics for reasons I'm not sure I want to know about :P

I think Coq is more targeted like that rewrite h at 1,3. I found I missed that at first but I'm not sure it makes for stable proofs

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:21):

yes, in Coq I can rewrite where I want

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:21):

I can also rewrite under binders

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:21):

(with setoid rewrites)

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:21):

in Lean I've only used simp to rewrite under binders, not sure how else it can be even done

view this post on Zulip Simon Hudon (Apr 11 2018 at 14:21):

simp helps there with binders and congruence lemmas but it's a bit less controlled
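For instance, `simp only` can push a pointwise hypothesis under a `λ` (a small illustration; simp supplies the `funext`/congruence step itself):

```lean
example (f g : ℕ → ℕ) (h : ∀ x, f x = g x) :
  (λ x, f x + 0) = (λ x, g x) :=
by simp only [h, add_zero]
```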

view this post on Zulip Simon Hudon (Apr 11 2018 at 14:22):

... yeah, exactly

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:22):

stability of proofs has never been an issue for me, because, I, urm... write them once and then I... forget about them...

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:22):

I may need to try to write Coq in "industrial" settings :P

view this post on Zulip Simon Hudon (Apr 11 2018 at 14:23):

Question: I've only used Coq as a noob so I never experienced scaling up proofs that use the at 1,3 notation. Do you find that your proofs break a lot when you do that?

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:23):

yes, they do

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:23):

it's the absolute worst thing one can do

view this post on Zulip Simon Hudon (Apr 11 2018 at 14:25):

In my temporal logic tactics language, I have played with a tactic you call as rw_using : f x = g x, { /- proof -/ }. For some reason, I liked not clearing a temporary assumption.

view this post on Zulip Simon Hudon (Apr 11 2018 at 14:26):

In temporal logic, that was doubly useful because the proof of equality was typically done in normal Lean logic

view this post on Zulip Simon Hudon (Apr 11 2018 at 14:27):

Otherwise, I really haven't found a drop in replacement for that Coq notation

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:27):

I wish I understood exactly how simp works but I have never had the willpower to look at it

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:28):

I believe it's been modeled to resemble the Isabelle heuristics you get when you're within a single theory section thing

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:29):

where theory just aggregates a set of related constants, definitions, proofs, etc. and you abuse this implicit relation in automation

view this post on Zulip Simon Hudon (Apr 11 2018 at 14:29):

I feel like I really don't understand what Isabelle does with any given proof. It's hard for me to understand that comparison

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:30):

you basically help guide automation in Isabelle by aggregating related stuff together in "theories", which is something like making a hint database in Coq

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:32):

(so like a beefy namespace that implicitly provides hints for automation)

view this post on Zulip Simon Hudon (Apr 11 2018 at 14:32):

Ah I see. Does it make things clearer to use the theories or the databases?

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:33):

databases are less implicit so you have to go to the trouble of creating them

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:33):

so in Isabelle you get nice behaviour by default but it's less controlled because it's just "whatever you put in this theory"

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:34):

I think people with a purely math background would have an easier time using Isabelle, simply because you get nice implicit heuristic-guided behaviour out of the box (+ the Sledgehammer)

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:34):

if I understand Kevin correctly, he uses Lean because he wants dependent types

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:35):

(or I should say, he's not using Isabelle because he wants dependent types)

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:35):

and simp in Lean takes after Isabelle default simplifier, I believe

view this post on Zulip Simon Hudon (Apr 11 2018 at 14:36):

Any chance that something like sledgehammer would work with dependent types? Or is it more of a matter of structuring databases like Isabelle?

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:37):

it would work, Sledgehammer is almost like an outside tool that ships proof obligations to SMT solvers and ATPs

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:37):

Sledgehammer is really the most unimpressive piece of Isabelle; it just tries to invoke every automating algorithm it can get its grabby mittens on

view this post on Zulip Sebastian Ullrich (Apr 11 2018 at 14:44):

That may be the understatement of the century

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 14:44):

:D

view this post on Zulip Sebastian Ullrich (Apr 11 2018 at 14:45):

Sledgehammer is the result of a multi-year research project that produced a good amount of papers http://www.cl.cam.ac.uk/~lp15/papers/Automation/

view this post on Zulip Sebastian Ullrich (Apr 11 2018 at 14:46):

The hard parts are relevance filtering of lemmas to pass to the external solver, and of course translating between HOL and FOL

view this post on Zulip Moses Schönfinkel (Apr 11 2018 at 15:21):

ok ok I will change it to 2 out of 10 on the scale of impressiveness, making it the second least impressive Isabelle feature right after its bundled development IDE :grinning:

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 19:57):

I use Lean because when I had never heard of any theorem provers apart from Coq, and never used any at all, I watched a live stream of Tom Hales in Cambridge talking about (amongst other things) FAbstracts, and someone asked him which language he would be using, and he said "...Lean?" and I thought "OK that'll do for me"

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 19:57):

and I certainly didn't know what a dependent type was at that point

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 19:57):

I just decided to jump in

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 19:59):

I wish I understood exactly how simp works but I have never had the willpower to look at it

I wish I understood more about how simp works but I learnt a couple of set_option options and wrote https://github.com/leanprover/mathlib/blob/master/docs/extras/simp.md

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:00):

Looking at the output of set_option trace.simplify true gave me some sort of idea about the absolute inanity of what simp was doing

view this post on Zulip Chris Hughes (Apr 11 2018 at 20:04):

I wish I understood exactly how simp works but I have never had the willpower to look at it

I wish I understood more about how simp works but I learnt a couple of set_option options and wrote https://github.com/leanprover/mathlib/blob/master/docs/extras/simp.md

That bit at the end sounds interesting. Does that mean if I had a + b ≡ c [MOD n] and h : a ≡ d [MOD n] I could rewrite to get a + d ≡ c [MOD n]? How would that work with complicated examples, with lots of addition and multiplication?

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:09):

I have never used that MOD business so I'm not sure

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:10):

Oh, Gabriel said that

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:10):

I didn't understand it so I didn't use it

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:10):

[and you mean d + b = c of course]

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:11):

Aah but I do remember that this might work in calc mode.

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:11):

If your relation is tagged with trans then I think calc supports it

view this post on Zulip Chris Hughes (Apr 11 2018 at 20:12):

Can I mix equality and congruences in calc?

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:12):

I haven't ever seen this done

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:12):

I think that either in the reference manual or in TPIL they are pretty formal about what you can do with calc

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:14):

https://leanprover.github.io/theorem_proving_in_lean/quantifiers_and_equality.html#calculational-proofs

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:14):

searching TPIL for calc doesn't find that

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:15):

so I always look at https://github.com/kbuzzard/mathlib/blob/WIP_docs/docs/WIPs/calc.md

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:15):

and indeed there is an example with different operators, = and < and <=

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:15):

searching TPIL for calc doesn't find that

It does actually, but you don't recognize it

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:16):

3rd result

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:17):

I know this because I filed a Sphinx bug report before understanding this :flushed:

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:18):

Oh! So "It's somewhere in Chapter 4" is the best I get?

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:18):

yes

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:18):

"PS here's the first occurrence of the string Calc"

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:18):

"feel free to look and see if there are any others"

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:19):

Web pages of TPIL are too big

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:19):

from this point of view

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:19):

That's one interpretation

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:19):

Another is "search is crap"

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:19):

I know we've all been spoilt by Google...

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:20):

google would show you the same

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:20):

or maybe section 4.3 should be tagged calc

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:20):

and the search should prioritise tags

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:20):

one hit for this page, and not all occurrences on the search result page

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:20):

is that possible in Sphinx?

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:21):

Google might have a better idea about moving the "right" answer to #1 in the list...

view this post on Zulip Chris Hughes (Apr 11 2018 at 20:21):

Can I mix equality and congruences in calc?

I tried it and you can.

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:24):

Go to Google and type in the search bar: site:https://leanprover.github.io/theorem_proving_in_lean/ calc

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:24):

Google is smarter than Sphinx

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:25):

Can I mix equality and congruences in calc?

@[trans] protected theorem trans : a ≡ b [MOD n] → b ≡ c [MOD n] → a ≡ c [MOD n] := eq.trans

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:25):

and lo and behold

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:25):

import data.nat.modeq
variables (a b c d m : ℕ)

example (H1 : a ≡ b [MOD m]) (H2 : b = c) (H3 : c ≡ d [MOD m]) : a ≡ d [MOD m] :=
calc a ≡ b [MOD m] : H1
... = c : H2
... ≡ d [MOD m] : H3

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:26):

Google is smarter than everything :-/

view this post on Zulip Chris Hughes (Apr 11 2018 at 20:26):

what if I have two relations that are "co-transitive" in the same sense that le and lt are? Can I make them work with calc?

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:27):

Right, e.g. a congruence mod 8 might imply a congruence mod 4

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:27):

I have no idea how Lean is doing the < and <= thing

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:27):

calc knowing about that would be pretty cool

view this post on Zulip Chris Hughes (Apr 11 2018 at 20:27):

I thought that was a useless question, but obviously not.

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:27):

I know from experience that it will let you prove a < d by writing a <= b < c <= d
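Concretely, the mixed chain looks like this:

```lean
example (a b c d : ℕ) (h₁ : a ≤ b) (h₂ : b < c) (h₃ : c ≤ d) : a < d :=
calc a ≤ b : h₁
   ... < c : h₂
   ... ≤ d : h₃
```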

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:27):

so it knows lt_of_le_of_lt

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:27):

yes, this is already really cool

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:27):

but it wouldn't surprise me if this were hard wired in somehow

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:27):

as a common use case

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:28):

but I suspect this special case is hard-wired

view this post on Zulip Chris Hughes (Apr 11 2018 at 20:28):

lt_of_lt_of_le is tagged with trans. So it might just be a case of doing that.

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:30):

Oh well spotted!

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:30):

indeed

view this post on Zulip Chris Hughes (Apr 11 2018 at 20:31):

def r : ℕ → ℕ → Prop := sorry
def s : ℕ → ℕ → Prop := sorry

local infix `r'`:50 := r
local infix `s'`:50 := s

@[trans] lemma r.trans {a b c : ℕ} : r a b → r b c → r a c := sorry
@[trans] lemma rs.trans {a b c : ℕ} : r a b → s b c → r a c := sorry

@[trans] lemma rs.trans1 {a b c : ℕ} (h₁ : r a b) (h₂ : s b c) : r a c := calc
  a r' b : h₁
  ... s' c : h₂

view this post on Zulip Chris Hughes (Apr 11 2018 at 20:32):

That pasted from VScode without any extra spaces.

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:32):

:hushed:

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:33):

import data.nat.modeq
variables (a b c d m : ℕ)

@[trans] theorem helpful (a b c : ℕ) : a ≡ b [MOD 8] → b ≡ c [MOD 16] → a ≡ c [MOD 4] := sorry
example (H1 : a ≡ b [MOD 8]) (H2 : b = c) (H3 : c ≡ d [MOD 16]) : a ≡ d [MOD 4] :=
calc a ≡ b [MOD 8] : H1
... = c : H2
... ≡ d [MOD 16] : H3

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:33):

(deleted)

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:33):

so did that

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:34):

So it's really not hard coded in, there is some dark art with trans tags

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:34):

Let's keep that in Lean 4

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:34):

Conclusion in my example must be a = c mod 8 after second line

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:35):

it uses this:

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:35):

#check trans_rel_left -- ∀ (r : ?M_1 → ?M_1 → Prop), r ?M_2 ?M_3 → ?M_3 = ?M_4 → r ?M_2 ?M_4

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:35):

(I know because some messing around gave me some explicit error messages)

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:35):

and then it uses my trans-tagged theorem for 3rd line

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:36):

To the docs/extras/calc!

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:36):

Yes, this deserves to be better known

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:37):

I mean it's mentioned in TPIL but somehow it hadn't dawned on me how far you could push this

view this post on Zulip Chris Hughes (Apr 11 2018 at 20:37):

So now I know the point of @[trans] and @[refl] but I'm still not sure why you would tag something @[symm]

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:39):

https://leanprover.zulipchat.com/#narrow/stream/113488-general/topic/simp.20is.20amazing

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:39):

Gabriel's comment is in that thread

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:39):

but I don't know any longer if it was about simp

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:40):

If you tag something symm then there's some tactic called symmetry which will work ;-)

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:43):

We need to formalize something using this calc power

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:43):

What about some version of Hensel's lemma?

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:43):

There should be chains of congruence modulo different stuff there

view this post on Zulip Patrick Massot (Apr 11 2018 at 20:44):

And it may even be useful for perfectoids, who knows?

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:44):

theorem X (a b m : ℕ) (H : a ≡ b [MOD m]) : b ≡ a [MOD m] := by symmetry;assumption

view this post on Zulip Chris Hughes (Apr 11 2018 at 20:45):

theorem X (a b m : ℕ) (H : a ≡ b [MOD m]) : b ≡ a [MOD m] := H.symm

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:45):

symmetry tactic replaces a hypothesis with another one if the fact that one implies the other is marked with symm

view this post on Zulip Chris Hughes (Apr 11 2018 at 20:46):

I know what it does, but it seems fairly useless.

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 20:47):

Simon's cool use of transitivity is above, and refl is used everywhere, but who knows what symmetry is for :-)

view this post on Zulip Chris Hughes (Apr 11 2018 at 20:48):

Simon's cool use of transitivity is above, and refl is used everywhere, but who knows what symmetry is for :-)

What's Simon's cool use of transitivity?

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 21:13):

At the start of this thread

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 22:03):

To the docs/extras/calc!

https://github.com/kbuzzard/mathlib/blob/WIP_docs/docs/WIPs/calc.md

view this post on Zulip Chris Hughes (Apr 11 2018 at 22:05):

Can I use calc if I haven't defined an infix for my relation?

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 22:06):

I don't know!

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 22:06):

As you can see I defined infixes for mine.

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 22:09):

@Mario Carneiro I wrote some calc docs (see link above) extending what is written in TPIL, together with some notes for things I struggled with myself after reading TPIL (e.g. I would sometimes get into a real mess with some syntax error or proof error manifesting itself as a red squiggle under a random ...). Three questions:

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 22:09):

1) Shall I PR to mathlib?

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 22:09):

2) Chris asks if calc can be used with operators that aren't infix

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 22:10):

3) I made some guesses as to how calc works. In particular I note that trans_rel_right and trans_rel_left are not tagged [trans]. Are these special cases which are tried first?

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 22:12):

And an idle question -- would it be possible to break calc by proving e.g. a < b -> b < c -> a <= c and tagging with [trans]?

view this post on Zulip Kevin Buzzard (Apr 11 2018 at 22:14):

And @Patrick Massot After Mario accepted my last mathlib PRs I just nuked the entire repo and forked it again (do you remember I had a bad commit history because I never branched?). Now I have a branch with my WIPs. I know I can google for how to do this but I'm sure you will know instantly -- what is the best way to just PR the calc.md file? Sorry to bother you.

view this post on Zulip Patrick Massot (Apr 12 2018 at 06:39):

Since you have only one file to PR and don't care about keeping the history of this file, here's the simplest route: copy that calc.md somewhere else (say /home/kevin/), then, inside mathlib,

git checkout master
git checkout -b docs-calc
cp /home/kevin/calc.md docs/extras/

then edit docs/extras.md to add a link to docs/extras/calc.md

git add docs/extras.md docs/extras/calc.md
git commit
git push

The last command will complain that you should explicitly say to create an upstream branch. Copy-paste the suggested command (I never remember the syntax since git always helps me here).

view this post on Zulip Patrick Massot (Apr 12 2018 at 06:41):

I forgot: don't forget to make sure your master is in sync with Mario's before git checkout -b docs-calc

view this post on Zulip Patrick Massot (Apr 12 2018 at 06:42):

assuming you followed conventional names, that would mean git pull upstream master after git checkout master

view this post on Zulip Mario Carneiro (Apr 12 2018 at 06:42):

It's not a big problem if you PR off an old version of mathlib, since I usually rebase it on the current head when I merge the PR anyway

view this post on Zulip Mario Carneiro (Apr 12 2018 at 06:42):

unless there's a conflict of course

view this post on Zulip Mario Carneiro (Apr 12 2018 at 06:42):

but docs don't usually cause conflicts if they are new

view this post on Zulip Patrick Massot (Apr 12 2018 at 06:44):

sure

view this post on Zulip Patrick Massot (Apr 12 2018 at 06:44):

I'm only trying to explain good practice

view this post on Zulip Mario Carneiro (Apr 12 2018 at 06:51):

I agree with everything you said.

I would add that if you intend to PR some addition, you should write it "in place", i.e. with the file located in mathlib where you want it to be, by first checking out master then checking out a new branch (i.e. the first two lines of Patrick's script), then making any modifications there. Then any commits you make will branch from master nicely. If you need to set the work aside, you can just commit what you have to the branch and move to the current master or somewhere else, and come back to your PR when you are ready to resume work with git checkout my-pr.

view this post on Zulip Patrick Massot (Apr 12 2018 at 07:07):

No -b in your last sentence

view this post on Zulip Patrick Massot (Apr 12 2018 at 07:08):

Otherwise of course this is the correct way. I was explaining how to fix the mess.

view this post on Zulip Mario Carneiro (Apr 12 2018 at 07:12):

oops, typo (copy-o?)

view this post on Zulip Kevin Buzzard (Apr 12 2018 at 15:52):

OK done. Thanks both for the advice. Mario -- it's a bit of a WIP because I guessed how the transitivity worked in calc mode. First I guessed that Lean went from A op1 B op2 C op3 D to A op4 D via "reading from left to right", i.e. first attempting to figure out how A op1 B op2 C can become A op5 C and then merging this with C op3 D (i.e. I don't know if it does anything clever like trying to merge two random consecutive theorems in the middle), and secondly I guessed that when trying to merge two ops to become another one it first uses rw if one is = and then tries lemmas tagged trans if neither op is =. These are just plain guesses.

view this post on Zulip Chris Hughes (Apr 12 2018 at 16:22):

I did some testing and I don't think it tries to do anything clever with changing the order. L2 doesn't work below, but the rest do. You can also state that three relations are transitive together.

def r : ℕ → ℕ → Prop := sorry
def s : ℕ → ℕ → Prop := sorry
def t : ℕ → ℕ → Prop := sorry

local infix `r'`:50 := r
local infix `s'`:50 := s
local infix `t'`:50 := t
variables {a b c d : ℕ}

@[trans] lemma rrr.trans {a b c : ℕ} : r a b → r b c → r a c := sorry
@[trans] lemma srr.trans {a b c : ℕ} : s a b → r b c → r a c := sorry
@[trans] lemma sts.trans {a b c : ℕ} : s a b → t b c → s a c := sorry
@[trans] lemma rss.trans {a b c : ℕ} : r a b → s b c → r a c := sorry
@[trans] lemma trs.trans {a b c : ℕ} : t a b → r b c → s a c := sorry

lemma L1 (h₁ : s a b) (h₂ : t b c) (h₃ : r c d) : r a d :=
calc a s' b : h₁
   ... t' c : h₂
   ... r' d : h₃

lemma L2 (h₁ : r a b) (h₂ : s b c) (h₃ : t c d) : r a d :=
calc a r' b : h₁
   ... s' c : h₂
   ... t' d : h₃

lemma L2' (h₁ : r a b) (h₂ : s b c) (h₃ : t c d) : r a d :=
rss.trans h₁ (sts.trans h₂ h₃)

lemma L3 {a b c : ℕ} (h₁ : t a b) (h₂ : r b c) : s a c :=
calc a t' b : h₁
   ... r' c : h₂

view this post on Zulip Kevin Buzzard (Apr 12 2018 at 16:45):

Thanks for checking. That was one way of doing it. The other way is to read the source code, but I sort-of suspect (perhaps incorrectly) that it will be in C++ not Lean.


Last updated: May 10 2021 at 16:20 UTC