Zulip Chat Archive

Stream: Is there code for X?

Topic: iterate.comm


Damiano Testa (Apr 08 2021 at 16:14):

Dear All,

I think that this lemma is not part of logic/function/iterate. If it is not there, do you think that it would be a good idea to add it?

Thanks!

lemma comm {m n : ℕ} : f^[n]^[m] = (f^[m]^[n] : α → α) :=
begin
  rw [← iterate_mul, nat.mul_comm, iterate_mul],
end

Alex J. Best (Apr 08 2021 at 16:21):

Seems reasonable. I wonder if we should establish a simp-normal form for these sorts of things, though. I see there are docs#function.iterate_add and docs#function.iterate_mul, both of which insert more iteration symbols rather than arithmetic operations on nats, and neither of which is @[simp]; so I would think adding the symm of those lemmas as simp lemmas would help when working with an expression like this.
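
(For reference, the two lemmas linked above are roughly of the following shape, and the proposed simp direction would be their symm; the statements below are quoted from memory and may differ in minor details from the mathlib versions.)

import logic.function.iterate

variables {α : Type*}

-- approximate shapes of the existing lemmas (read left-to-right they introduce
-- composition/iteration rather than arithmetic on the exponents):
--   function.iterate_add f m n : f^[m + n] = f^[m] ∘ f^[n]
--   function.iterate_mul f m n : f^[m * n] = (f^[m])^[n]

-- the suggestion would be to register the reversed forms with simp, e.g.:
example (f : α → α) (m n : ℕ) : f^[m] ∘ f^[n] = f^[m + n] :=
(function.iterate_add f m n).symm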

Yakov Pechersky (Apr 08 2021 at 16:38):

I don't agree that the symm of those is the right direction for simp. There are basically no lemmas about f^[2], for example. But if you splat it out, you're more likely to have lemmas about application or composition

Damiano Testa (Apr 08 2021 at 18:31):

PR #7121

Bryan Gin-ge Chen (Apr 08 2021 at 18:33):

Just noticed docs#function.commute.iterate_iterate_self which seems related.

Damiano Testa (Apr 08 2021 at 18:34):

I do not know much about what would be best for formalization, but my instinct would be to transfer everything to the "exponents" and then act there. So, I would have formulated iterate_add and iterate_mul in the reverse direction to the current one. Since Yuri made this choice, I am probably wrong!

Damiano Testa (Apr 08 2021 at 18:42):

Bryan, the statement that you mention seems indeed very close, but I am failing to use it...

Bryan Gin-ge Chen (Apr 08 2021 at 18:47):

Right, I'm not sure how commute is supposed to be used, but it looks like there's some useful API around it.

Damiano Testa (Apr 08 2021 at 18:48):

I've gotten to having to prove f^[n]^[m] x = f^[n] (f^[m] x), but I am struggling to make progress.

Damiano Testa (Apr 08 2021 at 18:49):

Wait, that is equivalent to f^[n * m] x = f^[n + m] x: this is not what I want! No wonder I was not making progress!

Bryan, I think that the one that you mention is addition of exponents, not multiplication!

Bryan Gin-ge Chen (Apr 08 2021 at 18:49):

Ah, sorry about that!

Yakov Pechersky (Apr 08 2021 at 18:50):

It's commutativity of multiplication in the group of endos

Damiano Testa (Apr 08 2021 at 18:50):

No worries! I find it somewhat confusing myself.

Yakov Pechersky (Apr 08 2021 at 18:51):

function.commute is missing a helper lemma about application

Damiano Testa (Apr 08 2021 at 18:52):

Yakov, I would like to understand better your last statement: would you mind expanding a bit and giving me a hint as to what kind of lemma you are thinking of?

Damiano Testa (Apr 08 2021 at 18:52):

I would be happy to add it to my "one-lemma PR"!

Yakov Pechersky (Apr 08 2021 at 18:59):

You have, via function.comm, that the composition of f and g is the same as composing g and f. congr_fun with that equality gives you that, for all x, (f o g) x = (g o f) x.
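
(A minimal sketch of that congr_fun step, using only core Lean; the names f, g, h below are placeholders.)

-- from an equality of functions, congr_fun recovers the equality at a point;
-- function.comp then unfolds definitionally to give the applied form
example {α : Type*} {f g : α → α} (h : f ∘ g = g ∘ f) (x : α) :
  f (g x) = g (f x) :=
congr_fun h x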

Yakov Pechersky (Apr 08 2021 at 19:00):

So you have the tools now to make the proof you need. You can use apply congr_fun to generalize over whatever you're applying on. The helper lemma would just state that directly, something like

Yakov Pechersky (Apr 08 2021 at 19:02):

function.commute.apply (h : function.commute f g) (x : α) : g (f x) = f (g x) := congr_fun h x

Yakov Pechersky (Apr 08 2021 at 19:02):

Or something

Yakov Pechersky (Apr 08 2021 at 19:03):

Then your proof becomes (function.commute.iterate_iterate_self f m n).apply x

Damiano Testa (Apr 08 2021 at 19:03):

Ok, let me try! Thanks a lot!

Yakov Pechersky (Apr 08 2021 at 19:03):

And there, you're using that projection notation

Yakov Pechersky (Apr 08 2021 at 19:04):

Does that make sense? We're constructing a proposition and right away using a lemma about it. And because of namespaces, we can use the projection

Yakov Pechersky (Apr 08 2021 at 19:06):

Ah hold on, your original lemma is about exponentiation. I misunderstood

Damiano Testa (Apr 08 2021 at 19:06):

Yakov, it turns out that the proof of your lemma is even easier! Maybe this is why Yuri did not add it:

lemma function.commute.apply (h : function.commute f g) (x : α) :
  g (f x) = f (g x) :=
(h _).symm

Yakov Pechersky (Apr 08 2021 at 19:07):

Yes, you're using the defeq of h there

Damiano Testa (Apr 08 2021 at 19:07):

Yes, I think that the one that I want is not the one that Bryan mentioned, but it still took me some time to figure out that it wasn't, so I would like to understand how to get better at spotting these things!

Damiano Testa (Apr 08 2021 at 19:07):

Indeed, I unfolded twice before I removed them.

Yakov Pechersky (Apr 08 2021 at 19:07):

The defeq is via the definition of semiconj

Damiano Testa (Apr 08 2021 at 19:08):

here is the non-minimal proof:

lemma function.commute.apply (h : function.commute f g) (x : α) :
  g (f x) = f (g x) :=
begin
  apply congr_fun _ x,
  unfold function.commute at h,
  unfold semiconj at h,
  ext,
  exact (h _).symm,
end

Yakov Pechersky (Apr 08 2021 at 19:08):

Right. I think it's still nice to provide the API that wraps the defeq operations, just in case someone wants to generalize the definition in a way that defeq no longer works. Others might disagree

Damiano Testa (Apr 08 2021 at 19:09):

Ok, so you are saying that it is better to have the explicit unfold steps?

Damiano Testa (Apr 08 2021 at 19:10):

I can see why this proof is probably the "other" candidate:

lemma function.commute.apply (h : function.commute f g) (x : α) :
  g (f x) = f (g x) :=
begin
  unfold function.commute at h,
  unfold semiconj at h,  -- at this stage, h : ∀ (x : α), f (g x) = g (f x)
  exact (h _).symm,
end

Yakov Pechersky (Apr 08 2021 at 19:10):

Of course, maybe the lemma should be the symm of that. It's not clear a priori which is the "inside" function in commute f g

Yakov Pechersky (Apr 08 2021 at 19:10):

Ah, I meant for users of the API. In the making of the actual API, it's fine to use defeq

Yakov Pechersky (Apr 08 2021 at 19:10):

(in my opinion)

Damiano Testa (Apr 08 2021 at 19:11):

I can add this lemma to the PR, even if it is about "addition" instead of "multiplication".

Damiano Testa (Apr 08 2021 at 19:11):

(If you think that it would be useful to have: adding "unused" lemmas is rarely an issue with CI building.)

Yakov Pechersky (Apr 08 2021 at 19:12):

Right. Regarding reduction of exponents: since we're operating solely in the nat monoid, any such reduction only ever grows the exponent. So it'd be harder to deal with an f^3 than with f^2 o f.

Damiano Testa (Apr 08 2021 at 19:14):

I see, so this is why these lemmas are stated in the direction that feels "wrong" to me: thanks for the explanation!

Yakov Pechersky (Apr 08 2021 at 19:14):

So while the expression is "simpler" with a reduced exponent, it's harder to prove things about it.

Damiano Testa (Apr 08 2021 at 19:17):

Yakov, I added your function.commute.apply lemma as well!

Alex J. Best (Apr 08 2021 at 21:20):

Yakov Pechersky said:

Right. Regarding reduction of exponents: since we're operating solely in the nat monoid, any such reduction only ever grows the exponent. So it'd be harder to deal with an f^3 than with f^2 o f.

I think this is the key point I was missing here: the example I had in my head of things "cancelling out" when you combine the exponents really can't happen with nat-indexing. So maybe the best form is the version currently there; having a simp-normal form still seems useful, though?

Yakov Pechersky (Apr 08 2021 at 21:31):

Is simp a tactic for proving things or for simplifying expressions? If the former, you might want to make the proof you use break the monoidal term by 2s or by 3s; that will lead to fewer rewrites.

Yakov Pechersky (Apr 08 2021 at 21:32):

We break off by ones because of the nat.succ structure. But execution of a function that has been raised to a nat power is more efficient if you execute by twos, and even more so by threes. But it's harder to write the method to do that.
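
(For reference, the one-step "peeling" lemmas look roughly like this; names and statements are quoted from memory.)

import logic.function.iterate

-- the nat.succ recursion strips one application at a time, on either side:
example {α : Type*} (f : α → α) (n : ℕ) (x : α) :
  f^[n + 1] x = f^[n] (f x) :=
function.iterate_succ_apply f n x

example {α : Type*} (f : α → α) (n : ℕ) (x : α) :
  f^[n + 1] x = f (f^[n] x) :=
function.iterate_succ_apply' f n x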

Eric Wieser (Apr 08 2021 at 22:06):

Are you sure function.commute.apply is a useful lemma? Does docs#function.commute really unfold to composition?

Eric Wieser (Apr 08 2021 at 22:08):

Yeah, if you have a term h : function.commute f g, then h x is syntactically f (g x) = g (f x) - the lemma is pointless

Yakov Pechersky (Apr 08 2021 at 22:08):

variables (f g : α → α)

#reduce (function.commute f g)
/-
∀ (x : α), f (g x) = g (f x)
-/

Yakov Pechersky (Apr 08 2021 at 22:09):

When I'm working with a new prop, I don't always know what it reduces to. And to find out here that (h : function.commute f g) x would mean f (g x) = g (f x) would require going down some search path.

Eric Wieser (Apr 08 2021 at 22:10):

Or just #check

Yakov Pechersky (Apr 08 2021 at 22:10):

It's not indicated in the docs anywhere. One way to indicate it explicitly is just to have a lemma that says exactly what it means.

Yakov Pechersky (Apr 08 2021 at 22:10):

#check (function.commute f g)
/-
function.commute f g : Prop
-/

Eric Wieser (Apr 08 2021 at 22:10):

I mean, add an x in that check

Yakov Pechersky (Apr 08 2021 at 22:11):

Right, one needed to know that that was a valid thing to say in the first place

Eric Wieser (Apr 08 2021 at 22:14):

To be clear, I meant

variables (h : commute f g) (x)
#check h x

Eric Wieser (Apr 08 2021 at 22:14):

That's basically a problem for any definition that expands to a function type - eg docs#matrix
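
(For instance, with matrices; a small illustration assuming data.matrix.basic is imported.)

import data.matrix.basic

-- `matrix m n α` is by definition `m → n → α`, so a matrix can be applied
-- directly to its indices, even though the docs entry does not spell that out
variables (M : matrix (fin 2) (fin 2) ℕ)

#check M 0 1   -- ℕ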

Eric Wieser (Apr 08 2021 at 22:15):

So maybe a doc-gen solution is needed here?

Yakov Pechersky (Apr 08 2021 at 22:18):

Yes, I understand what you meant. This example works because of how commute is defined, and how semiconj is defined. In #7118, I redefine perm.disjoint to be disjoint f.support g.support instead of def disjoint (f g : perm α) := ∀ x, f x = x ∨ g x = x. So previous proofs that used cases (h : disjoint f g) x with hfx hgx won't work straight out of the box. But if there were a disjoint.apply or whatever, it would be stable under the definitional change.
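
(A sketch of the kind of wrapper Yakov means, written against the old pointwise definition quoted above; the name disjoint.apply is hypothetical and the exact namespace/imports may differ.)

-- hypothetical API lemma: users write `h.apply x` instead of `h x`, so if
-- `disjoint` is later redefined via supports, only this one proof has to change
lemma equiv.perm.disjoint.apply {α : Type*} {f g : equiv.perm α}
  (h : equiv.perm.disjoint f g) (x : α) :
  f x = x ∨ g x = x :=
h x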

Eric Wieser (Apr 08 2021 at 22:41):

What does h x now produce?

Eric Wieser (Apr 08 2021 at 22:41):

(for perm.disjoint)

Yakov Pechersky (Apr 09 2021 at 00:08):

An error, _root_.disjoint expects a set membership prop

Damiano Testa (Apr 09 2021 at 01:36):

My experience with the commute issue has been that it took me a while to realise that

variables (h : commute f g) (x)
#check h x

that is, applying h to x, was even a thing. Of course, I figured it out in the end, but I would have liked this information to be more prominent. As a not-so-experienced user, I still struggle with seeing through the definitions.

Maybe just a pointer in a doc-string would be enough.

Yury G. Kudryashov (Apr 09 2021 at 04:58):

Feel free to improve docstrings. About LHS vs RHS in iterate_add etc, AFAIR it was not me who made the choice. I moved these lemmas from nat.iterate_* and did not think about LHS vs RHS.

Yury G. Kudryashov (Apr 09 2021 at 04:59):

I don't know which direction is better.

Damiano Testa (Apr 09 2021 at 05:01):

I will not change the direction of the lemmas, but I will make an attempt at doc strings!

Damiano Testa (Apr 09 2021 at 05:38):

For those interested, I added the second line to the doc string of commute:

/-- Two maps `f g : α → α` commute if `f ∘ g = g ∘ f`.
Given `h : function.commute f g` and `a : α`, the term `h a` has type `f (g a) = g (f a)`. -/
def commute (f g : α → α) : Prop := semiconj f g g

Any comments are more than welcome! I personally find it very helpful when doc strings give "expected usages", since often definitions are a little opaque.

Damiano Testa (Apr 09 2021 at 06:38):

Ok, the doc string appears to have generated lots of support!

At the moment, the lemma directly below the definition of commute is

lemma commute.apply {f g : α → α} (h : function.commute f g) (x : α) :
  f (g x) = g (f x) :=
h _

and, given the doc string, I am now leaning towards Eric's idea that this is superfluous: it simply allows one to write h.apply x instead of h x.

:thumbs_up: this comment if the lemma should stay. :thumbs_down: this comment if the lemma should go.

Eric Wieser (Apr 09 2021 at 06:49):

A commute.comp_eq lemma that applies funext might still be a useful addition

Eric Wieser (Apr 09 2021 at 06:50):

Assuming it doesn't exist already

Gabriel Ebner (Apr 09 2021 at 06:51):

I believe this lemma (commute.apply) would be useful if you made f (and maybe also g) explicit and put h at the end. Then you could do rw commute.apply f x and get a subgoal for function.commute f g.
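
(A sketch of the reordering Gabriel suggests; the primed name and the exact binders are just for illustration, assuming commute/semiconj come from logic.function.conjugate.)

import logic.function.conjugate

-- hypothetical restatement: `f` and `x` explicit, the `commute` hypothesis last,
-- so that `rw function.commute.apply' f x` performs the rewrite and leaves
-- `function.commute f g` as a side goal
lemma function.commute.apply' {α : Type*} (f : α → α) {g : α → α} (x : α)
  (h : function.commute f g) :
  f (g x) = g (f x) :=
h x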

Damiano Testa (Apr 09 2021 at 06:58):

As far as I can see, commute.comp_eq does not exist, but there is a similar one for semiconj (see the code below, and recall
def commute (f g : α → α) : Prop := semiconj f g g).

Thus, I am tempted not to introduce commute.comp_eq, but I may revive commute.apply with explicit function(s), if people agree that this is wanted. I personally have not worked much with this API, so I am in no position to judge whether it is useful or not.

/-- We say that `f : α → β` semiconjugates `ga : α → α` to `gb : β → β` if `f ∘ ga = gb ∘ f`. -/
def semiconj (f : α → β) (ga : α → α) (gb : β → β) : Prop := ∀ x, f (ga x) = gb (f x)

namespace semiconj

variables {f fab : α → β} {fbc : β → γ} {ga ga' : α → α} {gb gb' : β → β} {gc gc' : γ → γ}

protected lemma comp_eq (h : semiconj f ga gb) : f ∘ ga = gb ∘ f := funext h

Yury G. Kudryashov (Apr 09 2021 at 07:02):

If you have h : commute f g, then you have h.comp_eq. It unfolds commute to semiconj, then applies semiconj.comp_eq.
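
(Concretely, a minimal example of what Yury describes, again assuming the semiconj/commute file logic.function.conjugate.)

import logic.function.conjugate

-- `commute f g` unfolds to `semiconj f g g`, so projection notation reaches
-- `function.semiconj.comp_eq` and yields the composition-level equality
example {α : Type*} {f g : α → α} (h : function.commute f g) :
  f ∘ g = g ∘ f :=
h.comp_eq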


Last updated: Dec 20 2023 at 11:08 UTC