Zulip Chat Archive
Stream: maths
Topic: reversing the coefficients
Kenny Lau (Jun 08 2020 at 10:13):
in maths this would be notated as x ^ deg f * f(1/x)
Kenny Lau (Jun 08 2020 at 10:13):
this reverses the coefficients
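Concretely, if f = a_0 + a_1 x + ... + a_d x^d with a_d ≠ 0, then x ^ deg f * f(1/x) = a_d + a_{d-1} x + ... + a_0 x^d.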
Kenny Lau (Jun 08 2020 at 10:14):
this is a monoid isomorphism if the ring is an integral domain I think
Kenny Lau (Jun 08 2020 at 10:14):
should we put this in mathlib?
Johan Commelin (Jun 08 2020 at 11:06):
Looks good to me. polynomial.reverse?
Kevin Buzzard (Jun 08 2020 at 11:18):
this does not preserve addition of polynomials of different degrees
Kevin Buzzard (Jun 08 2020 at 11:19):
and it is not even a bijection: the reverse of X^n is 1
Johan Commelin (Jun 08 2020 at 11:19):
Right. But it shows up sufficiently often that it should have its own definition in mathlib.
Kenny Lau (Jun 08 2020 at 11:28):
I think we can be sure that reverse^3 = reverse
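For a concrete instance with this convention: take f = X^3 + 2X. Then reverse f = X^3 * (1/X^3 + 2/X) = 2X^2 + 1, reverse^2 f = X^2 * (2 + 1/X^2) = X^2 + 2, which is not f, and reverse^3 f = X^2 * (1 + 2/X^2) = 2X^2 + 1 = reverse f. So reverse is not an involution, but in this example a third application lands back on the first reversal.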
Kenny Lau (Jun 08 2020 at 11:28):
(what is the name for this)
Kevin Buzzard (Jun 08 2020 at 11:29):
Galois insertion
Mario Carneiro (Jun 08 2020 at 11:38):
Maybe this should be parameterized on the maximum degree and treated as a bijection on the submodule of deg <= n polynomials?
Mario Carneiro (Jun 08 2020 at 11:39):
I don't think it is true that reverse^3 = reverse
Kenny Lau (Jun 08 2020 at 11:39):
why not?
Kenny Lau (Jun 08 2020 at 11:39):
Mario Carneiro said:
Maybe this should be parameterized on the maximum degree and treated as a bijection on the submodule of deg <= n polynomials?
then we lose the fact that reverse is a monoid homomorphism
Mario Carneiro (Jun 08 2020 at 11:40):
add monoid hom?
Mario Carneiro (Jun 08 2020 at 11:40):
or monoid hom
Kenny Lau (Jun 08 2020 at 11:40):
monoid hom
Kenny Lau (Jun 08 2020 at 11:41):
reverse(pq) = x^(deg(pq)) (pq)(1/x) = x^(deg p + deg q) p(1/x) q(1/x) = [x^(deg p) p(1/x)] [x^(deg q) q(1/x)] = reverse(p) reverse(q)
Kenny Lau (Jun 08 2020 at 11:41):
the equality deg(pq) = deg p + deg q needs an integral domain
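Without that hypothesis the degree can drop: in (Z/4)[X], (2X + 1) * (2X + 1) = 4X^2 + 4X + 1 = 1, so deg(pq) = 0 while deg p + deg q = 2.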
Mario Carneiro (Jun 08 2020 at 11:42):
I don't think you lose the monoid hom property, you just get some kind of graded multiplicativity property among the elements of the family
Mario Carneiro (Jun 08 2020 at 11:43):
Johan Commelin said:
Right. But it shows up sufficiently often that it should have its own definition in mathlib.
citation?
Johan Commelin (Jun 08 2020 at 11:43):
My algebra course notes
Mario Carneiro (Jun 08 2020 at 11:44):
To me without fixing the power of x this seems like a very awkward operation for the reasons Kevin noted
Mario Carneiro (Jun 08 2020 at 11:45):
and I thought we had already established that deg is not a very well behaved function and the graded family of submodules deg p <= n plays much better
Johan Commelin (Jun 08 2020 at 11:45):
@Kenny Lau If you fix the degree, it will become
reverse (deg p + deg q) (p * q) = reverse (deg p) p * reverse (deg q) q
Mario Carneiro (Jun 08 2020 at 11:46):
I think it's reverse (m + n) (p * q) = reverse m p * reverse n q, right?
Kenny Lau (Jun 08 2020 at 11:46):
depends on how you totalize reverse m I guess
Kenny Lau (Jun 08 2020 at 11:46):
but sure
Mario Carneiro (Jun 08 2020 at 11:47):
I think the right behavior is to drop coefficients that are flipped past zero
Kenny Lau (Jun 08 2020 at 11:47):
does this satisfy the property?
Mario Carneiro (Jun 08 2020 at 11:53):
Not without additional assumptions: reverse 1 (x^2 + 1) = 1, but reverse 2 (x^4 + 2x^2 + 1) = x^2 + 2
Mario Carneiro (Jun 08 2020 at 11:54):
I think it holds as long as at least one of m and n is more than its polynomial's degree
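The easy case is when no truncation happens on either side, i.e. deg p ≤ m and deg q ≤ n: then reverse (m + n) (p * q) = x^(m+n) * p(1/x) * q(1/x) = [x^m * p(1/x)] * [x^n * q(1/x)] = reverse m p * reverse n q, with no coefficient pushed below degree 0 anywhere.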
Kenny Lau (Jun 08 2020 at 11:57):
that is a bit unfortunate
Mario Carneiro (Jun 08 2020 at 11:58):
Obviously this all works out much better in Laurent series
Kenny Lau (Jun 08 2020 at 11:59):
looks like there just is no section to the monoid surjection R[X] -> R[X]/(X^n)
Kenny Lau (Jun 08 2020 at 12:01):
wait how does this work in laurent series
Kenny Lau (Jun 08 2020 at 12:01):
I don't think the result is still a Laurent series
Jalex Stark (Jun 08 2020 at 12:02):
Sending x^n to x^{d-n} sounds like it takes Laurent series to Laurent series to me
Mario Carneiro (Jun 08 2020 at 12:02):
Ah, I mean Laurent polynomial
Kenny Lau (Jun 08 2020 at 12:02):
ah ok
Jalex Stark (Jun 08 2020 at 12:03):
(is a Laurent polynomial a Laurent series with finitely many nonzero terms?)
Kenny Lau (Jun 08 2020 at 12:03):
yeah that sounds like cheating though
Mario Carneiro (Jun 08 2020 at 12:03):
yes
Mario Carneiro (Jun 08 2020 at 12:04):
It pretty much is cheating, it's basically solving the problem by fiat
Jalex Stark (Jun 08 2020 at 12:05):
It's pushing the work into defining the appropriate truncation map from Laurent polynomials to polynomials
Mario Carneiro (Jun 08 2020 at 12:05):
This probably extends to group rings where the monomials take exponents in a group
Kenny Lau (Jun 08 2020 at 12:06):
this is just a special case of R[G] being a bifunctor
Kevin Buzzard (Jun 08 2020 at 12:37):
The only times I've had to reverse polynomials, they were char polys, so monic and the degree was a constant which was everywhere
Johan Commelin (Jun 08 2020 at 12:45):
And I guess typically the constant coefficient was nonzero...
Kenny Lau (Jun 08 2020 at 13:20):
I was actually reminded of this when it was implicitly used in the paper I linked in the Witt vector thread, where every monic polynomial splits in some ring and, as a consequence, every polynomial with constant coefficient 1 splits
Kevin Buzzard (Jun 08 2020 at 13:30):
So I was in exactly this situation where the only polynomials I ever reversed were either monic or had constant coeff 1. It's a bijection between these two things, perhaps even an iso of multiplicative monoids
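For example, the monic polynomial X^2 + 3X + 2 reverses to 2X^2 + 3X + 1, which has constant coefficient 1, and reversing that gives back X^2 + 3X + 2.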
Yury G. Kudryashov (Jun 08 2020 at 15:42):
I vote for having a name for x ^ n * f(1 / x) where deg f ≤ n. If it has better properties in case n = deg f, then we can have two names.
Mario Carneiro (Jun 08 2020 at 15:47):
My thoughts are to associate the name to x ^ n * f(1 / x) without the constraint, but with deg f ≤ n being the soft domain of the function. I don't think it will be used enough to justify the added convenience of no side assumptions in the multiplication theorem
Aaron Anderson (Aug 04 2020 at 18:47):
Reviving this, I'll note that some version of reverse would have been immensely useful when defining next_coeff
Aaron Anderson (Aug 04 2020 at 18:48):
and sooner or later we will need to define Laurent polynomials and Laurent series. I imagine defining Laurent polynomials as monoid_algebra _ Z would be fine, but I'm not sure if there's any way to flesh out their API except copy-pasting the ridiculous amount of code on polynomials
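A minimal sketch of that idea, written with add_monoid_algebra so the exponents live in ℤ and add under multiplication (the name laurent_polynomial and the import path are only illustrative guesses, not mathlib's actual API):
import algebra.monoid_algebra

-- Sketch only: Laurent polynomials as finitely supported ℤ-indexed coefficients
-- with the convolution product. Marking the definition reducible lets the
-- (semi)ring instances on add_monoid_algebra be found automatically at use sites.
@[reducible] def laurent_polynomial (R : Type*) [comm_ring R] : Type* :=
add_monoid_algebra R ℤ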
Aaron Anderson (Aug 04 2020 at 18:49):
doubly-infinite Laurent series could similarly be defined by duplicating the API of power series
Aaron Anderson (Aug 04 2020 at 18:51):
and Laurent series could be defined either as a subring of doubly-infinite Laurent series or as pairs (order : Z, x : power_series) where x has a nonzero constant term
Adam Topaz (Aug 04 2020 at 18:51):
You can't multiply arbitrary doubly infinite series.
Adam Topaz (Aug 04 2020 at 18:56):
What do you mean by "doubly-infinite Laurent series"?
Adam Topaz (Aug 04 2020 at 19:01):
In case it's not clear:
Alex J. Best (Aug 04 2020 at 19:08):
And then Puiseux series and then Hahn series :smile:
Adam Topaz (Aug 04 2020 at 19:24):
Since polynomial rings, power series rings, and localizations already exist, why not define the Laurent polynomials resp. Laurent series as the localization of polynomials resp. power series at the element X?
Aaron Anderson (Aug 04 2020 at 21:53):
You can do some things with doubly-infinite Laurent series, defined as above, but you’re right, they’re not a ring, so they’re probably better made ad hoc for complex analysis applications later
Aaron Anderson (Aug 04 2020 at 21:54):
Defining the others with localisation, however, sounds like it could be done now
Kevin Buzzard (Aug 04 2020 at 22:00):
If you have some sort of topology on your base ring making it complete then you can look at doubly infinite power series which converge in certain regions, and these can be rings
Kevin Buzzard (Aug 04 2020 at 22:00):
Such things come up in p-adic Hodge theory
Adam Topaz (Aug 04 2020 at 22:00):
These usually show up as completions of localizations though.
Kevin Buzzard (Sep 02 2020 at 12:29):
Mario Carneiro said:
and I thought we had already established that deg is not a very well behaved function and the graded family of submodules deg p <= n plays much better
I probably was one of the people helping to establish this -- but I now realise that deg is very well-behaved: it's a valuation (or more precisely 37^(-deg) is). You should think of it as the size of the pole at 0 of f(T^{-1}). Note that in p-adic numbers the valuation is 37^(+something). The sign difference means that the nice submodules coming from the p-adic valuation are v >= n, whereas the nice submodules coming from the degree valuation are d <= n.
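Concretely, over an integral domain deg(fg) = deg f + deg g, and always deg(f + g) <= max(deg f, deg g); up to the sign convention these are exactly the axioms of an (additive) valuation, which is what makes the deg p <= n filtration behave like the p-adic v >= n one.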
Sebastien Gouezel (Sep 02 2020 at 12:30):
You are not allowed to use 0.37 as a junk value, only 37. So your post should read "37^(-deg) is".
Kevin Buzzard (Sep 02 2020 at 12:32):
Fixed, thanks.
Damiano Testa (Sep 02 2020 at 12:33):
Kevin Buzzard said:
Mario Carneiro said:
and I thought we had already established that deg is not a very well behaved function and the graded family of submodules deg p <= n plays much better
I probably was one of the people helping to establish this -- but I now realise that deg is very well-behaved: it's a valuation (or more precisely 37^(-deg) is). You should think of it as the size of the pole at 0 of f(T^{-1}). Note that in p-adic numbers the valuation is 37^(+something). The sign difference means that the nice submodules coming from the p-adic valuation are v >= n, whereas the nice submodules coming from the degree valuation are d <= n.
Indeed! This is why I called at_infty the reverse that I was defining. But I think that it is not yet implemented, unfortunately.
Mario Carneiro (Sep 02 2020 at 12:42):
deg only makes mathematical sense if it is allowed to take a -infty like value at zero
Mario Carneiro (Sep 02 2020 at 12:42):
but that doesn't play well with lean's types
Kevin Buzzard (Sep 02 2020 at 12:42):
yes, and that's exactly why Johan wrote group_with_zero
Mario Carneiro (Sep 02 2020 at 12:44):
I agree that deg is basically a valuation, but I'm not convinced valuations are any easier to work with than deg
Kevin Buzzard (Sep 02 2020 at 12:45):
no, but it frames your understanding of what you can expect from them.
Mario Carneiro (Sep 02 2020 at 12:45):
the family of submodules approach seems much more formal friendly
Kevin Buzzard (Sep 02 2020 at 12:46):
but if you've written thousands of lines of code in a perfectoid project whose line 1 is the definition of a valuation, then you get a feeling for them :-)
Aaron Anderson (Sep 02 2020 at 16:12):
Perhaps the lesson is that the family of submodules approach is useful for valuations, among which is deg...
Damiano Testa (Sep 03 2020 at 07:11):
In case people find it useful, I have defined a function at_infty taking as input a polynomial p and returning the polynomial at_infty p, whose coefficients are reversed, starting from nat_degree p. The 0 polynomial is sent to itself. I have not yet proved that at_infty maps products of polynomials to products, though.
The relevant Lean file is here:
https://gist.github.com/adomani/f1c468b8812a55e12e95bff954b3596f
Johan Commelin (Sep 03 2020 at 07:17):
Nice! @Damiano Testa If you clean this up a bit, I think it would make a really nice first PR
Damiano Testa (Sep 03 2020 at 07:17):
Johan Commelin said:
Nice! Damiano Testa If you clean this up a bit, I think it would make a really nice first PR
I will try! Sadly, I thought that I had already cleaned it up... Ahahaha
Johan Commelin (Sep 03 2020 at 07:18):
It's hard to learn those tricks out of the blue. It might be very helpful to do the cleaning up in a pair programming session
Johan Commelin (Sep 03 2020 at 07:18):
I'll be leaving on a camping trip in 30 minutes... so I can't really help before Monday.
Damiano Testa (Sep 03 2020 at 07:18):
If you are offering, I am happy to do it now
Damiano Testa (Sep 03 2020 at 07:18):
ok, then I will wait!
Johan Commelin (Sep 03 2020 at 07:18):
But I'm sure others can help with it.
Damiano Testa (Sep 03 2020 at 07:18):
have fun on your camping trip!
Johan Commelin (Sep 03 2020 at 07:19):
Thanks!
Kyle Miller (Sep 03 2020 at 08:16):
@Damiano Testa I did a cleanup pass to put it in my understanding of mathlib style.
- Make sure to use two spaces for indentation
- Curly braces and operators need one space around them
- If you use simp or tidy, make sure it is a finishing tactic. Otherwise use squeeze_simp to get a simp only [...], or tidy? to get some tactic block to clean up.
- Join consecutive rw tactics into a single rw [...] tactic. Sometimes you can merge this with simp.
- It's usually better to use named fields rather than positional ones.
- I prefer parentheses for quantifier variables, but I don't think that's required.
- Usually, by exact is unnecessary, and you can set the proof term with :=.
import tactic
import data.polynomial.degree.basic
open polynomial
lemma finset_of_bounded {S : set ℕ} (N : ℕ) {nN : ∀ (n ∈ S), n ≤ N} : S.finite :=
set.finite.subset (set.finite_le_nat N) nN
lemma le_degree_of_mem_supp {R : Type*} [comm_ring R] {p : polynomial R} (a : ℕ) :
a ∈ p.support → a ≤ nat_degree p :=
begin
rw p.mem_support_to_fun,
contrapose,
push_neg,
exact coeff_eq_zero_of_nat_degree_lt,
end
def rev_at (N : ℕ) : ℕ → ℕ := λ (i : ℕ), (N - i) + i * min 1 (i - N)
lemma rev_invol {N i : ℕ} : rev_at N (rev_at N i) = i :=
begin
unfold rev_at,
by_cases i ≤ N,
{ have min10 : min 1 0 = 0 := nat.min_zero 1,
have NiN : N - i - N = 0 := nat.sub_eq_zero_of_le (nat.sub_le N i),
convert nat.sub_sub_self h,
simp [nat.sub_eq_zero_of_le h, min10, NiN], },
{ rw not_le at h,
have iN : 1 ≤ i - N := nat.succ_le_of_lt (nat.sub_pos_of_lt h),
have Ni0 : (N - i) = 0 := by omega,
have min1iN : min 1 (i - N) = 1 := min_eq_left iN,
simp [min1iN, Ni0, min1iN], },
end
lemma rev_support {R : Type*} [comm_ring R] (p : polynomial R) :
{a : ℕ | ∃ b ∈ p.support , a = rev_at (nat_degree p) b}.finite :=
begin
apply finset_of_bounded (nat_degree p),
rintros n ⟨rn, rnsupp, rfl⟩,
unfold rev_at,
have zerr : 0 ≤ rn := zero_le rn,
have rle : rn ≤ nat_degree p := le_degree_of_mem_supp rn rnsupp,
have rneq : (rn - p.nat_degree) = 0 := nat.sub_eq_zero_of_le rle,
have : min 1 (rn - p.nat_degree) = 0,
{ rw rneq, exact nat.min_zero 1, },
rw [this, mul_zero, add_zero],
exact (nat_degree p).sub_le rn,
end
noncomputable def at_infty {R : Type*} [comm_ring R] (p : polynomial R) : polynomial R :=
⟨(rev_support p).to_finset, (λ (i : ℕ), p.to_fun (rev_at (nat_degree p) i)), begin
intros a,
simp only [exists_prop, set.finite.mem_to_finset, finsupp.mem_support_iff, ne.def, set.mem_set_of_eq],
split,
{ rintros ⟨b, hb₁, hb₂⟩ h,
apply hb₁,
subst a,
rw ← h,
apply congr_arg p.to_fun,
exact rev_invol.symm, },
{ intro h,
use rev_at p.nat_degree a,
exact ⟨h, rev_invol.symm⟩, },
end⟩
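In words, rev_at N i mirrors the indices up to N and leaves the rest alone: for i ≤ N the second summand vanishes (min 1 (i - N) = 0 in ℕ), so rev_at N i = N - i, while for i > N we have N - i = 0 and min 1 (i - N) = 1, so rev_at N i = i. For example rev_at 3 1 = (3 - 1) + 1 * 0 = 2 and rev_at 3 5 = 0 + 5 * 1 = 5.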
Kyle Miller (Sep 03 2020 at 08:26):
One more simplification: in rev_support, one sequence of tactics had the effect of substituting a variable in another expression. If you use rintros, you can place rfl where that equality is to auto-substitute. (I updated the message above.)
Sebastien Gouezel (Sep 03 2020 at 08:29):
Also, in le_degree_of_mem_supp, the variable a can be implicit, as it can be inferred from a ∈ p.support.
Kyle Miller (Sep 03 2020 at 08:32):
With Sebastien's suggestion, a few more simplifications, some (questionably better?) renamings, and using function.involutive:
import tactic
import data.polynomial.degree.basic
open function polynomial
lemma finset_of_bounded {S : set ℕ} (N : ℕ) {nN : ∀ (n ∈ S), n ≤ N} : S.finite :=
set.finite.subset (set.finite_le_nat N) nN
lemma le_degree_of_mem_supp {R : Type*} [comm_ring R] {p : polynomial R} {a : ℕ} :
a ∈ p.support → a ≤ nat_degree p :=
begin
rw p.mem_support_to_fun,
contrapose,
push_neg,
exact coeff_eq_zero_of_nat_degree_lt,
end
def rev_at (N : ℕ) : ℕ → ℕ := λ (i : ℕ), (N - i) + i * min 1 (i - N)
lemma rev_at_invol {N : ℕ} : involutive (rev_at N) :=
begin
intro i,
unfold rev_at,
by_cases i ≤ N,
{ convert nat.sub_sub_self h,
have NiN : N - i - N = 0 := nat.sub_eq_zero_of_le (nat.sub_le N i),
simp [nat.sub_eq_zero_of_le h, NiN], },
{ rw not_le at h,
have Ni0 : N - i = 0 := by omega,
have iN : 1 ≤ i - N := nat.succ_le_of_lt (nat.sub_pos_of_lt h),
have min1iN : min 1 (i - N) = 1 := min_eq_left iN,
simp [Ni0, min1iN], },
end
lemma rev_at_support {R : Type*} [comm_ring R] (p : polynomial R) :
{a : ℕ | ∃ (b ∈ p.support), a = rev_at (nat_degree p) b}.finite :=
begin
apply finset_of_bounded (nat_degree p),
rintros n ⟨rn, rnsupp, rfl⟩,
unfold rev_at,
have rle : rn ≤ nat_degree p := le_degree_of_mem_supp rnsupp,
have : min 1 (rn - p.nat_degree) = 0,
{ rw nat.sub_eq_zero_of_le rle, exact nat.min_zero 1, },
rw [this, mul_zero, add_zero],
exact p.nat_degree.sub_le rn,
end
noncomputable def at_infty {R : Type*} [comm_ring R] (p : polynomial R) : polynomial R :=
⟨(rev_at_support p).to_finset, (λ (i : ℕ), p.to_fun (rev_at (nat_degree p) i)), begin
intros a,
simp only [exists_prop, set.finite.mem_to_finset, finsupp.mem_support_iff, ne.def, set.mem_set_of_eq],
split,
{ rintros ⟨b, hb₁, rfl⟩ h,
apply hb₁,
rw ← h,
apply congr_arg p.to_fun,
exact (rev_at_invol b).symm, },
{ intro h,
use rev_at p.nat_degree a,
exact ⟨h, (rev_at_invol a).symm⟩, },
end⟩
Damiano Testa (Sep 03 2020 at 08:54):
Dear @Kyle Miller and @Sebastien Gouezel, thank you very much for your input! I was with a student, but I will take a look at your improvements and will learn them for future use!
Kyle Miller (Sep 03 2020 at 08:54):
One more thing that's needed is to add /-- documentation comments -/ for the definitions.
Kyle Miller (Sep 03 2020 at 09:03):
I guess also p.to_fun is sort of against the API for polynomials. It should probably be p.coeff instead:
noncomputable def at_infty {R : Type*} [comm_ring R] (p : polynomial R) : polynomial R :=
⟨(rev_at_support p).to_finset, (λ (i : ℕ), p.coeff (rev_at (nat_degree p) i)), begin
intros a,
simp only [exists_prop, set.finite.mem_to_finset, finsupp.mem_support_iff, ne.def, set.mem_set_of_eq],
split,
{ rintros ⟨b, hb₁, rfl⟩ h,
apply hb₁,
rw ← h,
apply congr_arg p.coeff,
exact (rev_at_invol b).symm, },
{ intro h,
use rev_at p.nat_degree a,
exact ⟨h, (rev_at_invol a).symm⟩, },
end⟩
Reid Barton (Sep 03 2020 at 09:23):
You should also prove a lemma which identifies the coefficients of at_infty p, along with other lemmas that are likely to be useful (if you have a particular use in mind, it may suggest such lemmas).
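For instance, the coefficient lemma could be stated along the following lines; this is only a sketch on top of the at_infty and rev_at definitions above, with a made-up name and the proof left as a sorry:
lemma coeff_at_infty {R : Type*} [comm_ring R] (p : polynomial R) (n : ℕ) :
  (at_infty p).coeff n = p.coeff (rev_at p.nat_degree n) :=
sorry -- sketch: proof omitted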
Damiano Testa (Sep 03 2020 at 09:25):
Just so we are all on the same page: we are not aiming for readability of the final code, correct?
for instance is
lemma rev_at_invol {N : ℕ} : involutive (rev_at N) :=
begin
intro i,
unfold rev_at,
by_cases i ≤ N,
{ simp [nat.sub_eq_zero_of_le (nat.sub_le N i),nat.sub_eq_zero_of_le h, nat.sub_sub_self h], },
{ rw not_le at h,
simp [nat.sub_eq_zero_of_le (le_of_lt (h)), min_eq_left (nat.succ_le_of_lt (nat.sub_pos_of_lt h))], },
end
preferable to
lemma rev_at_invol {N : ℕ} : involutive (rev_at N) :=
begin
intro i,
unfold rev_at,
by_cases i ≤ N,
{ convert nat.sub_sub_self h,
have NiN : N - i - N = 0 := nat.sub_eq_zero_of_le (nat.sub_le N i),
simp [nat.sub_eq_zero_of_le h, NiN], },
{ rw not_le at h,
have Ni0 : N - i = 0 := by omega,
have iN : 1 ≤ i - N := nat.succ_le_of_lt (nat.sub_pos_of_lt h),
have min1iN : min 1 (i - N) = 1 := min_eq_left iN,
simp [Ni0, min1iN], },
end
?
Damiano Testa (Sep 03 2020 at 09:27):
Reid Barton said:
You should also prove a lemma which identifies the coefficients of at_infty p, along with other lemmas that are likely to be useful (if you have a particular use in mind, it may suggest such lemmas).
Ok, I will include that as well! I was going to also prove that at_infty is "close to" an involution
Kyle Miller (Sep 03 2020 at 09:34):
Damiano Testa said:
Just so we are all on the same page: we are not aiming for readability of the final code, correct?
I think it depends on how readable you think it should be. Part of mathlib style seems to be to not care about readability of trivial statements.
I usually avoid writing proof terms that are too nested, which is why I kept it in the second form rather than putting it into the first. If it were rw rather than simp, I might go with the first, but that's because you can still step through each rewrite, where simp does things in a more opaque way.
Damiano Testa (Sep 03 2020 at 09:35):
Kyle Miller said:
I usually avoid writing proof terms that are too nested, which is why I kept it in the second form rather than putting it into the first. If it were rw rather than simp, I might go with the first, but that's because you can still step through each rewrite, where simp does things in a more opaque way.
Ok, thank you for your input!
Kyle Miller (Sep 03 2020 at 09:37):
(These are just my tastes in functional programming -- I'm not an authority in any way, to be clear!)
Damiano Testa (Sep 03 2020 at 09:43):
Sure, but your comments help me, nonetheless!
Kevin Buzzard (Sep 03 2020 at 11:32):
Re readability: I think the philosophy is that short assertions which are obvious to any mathematician can be golfed to obscurity, because nobody wants to read the actual proof that deg(fg) = deg(f) + deg(g) in an integral domain, it's obvious. Longer proofs which might naturally be proved in tactic mode can still be heavily golfed, because the shorter the code, the quicker it will compile, but if you want to flag something about your proof then you can do it in the module docstring at the top of the file.
Damiano Testa (Sep 03 2020 at 11:39):
Kevin Buzzard said:
Re readability: I think the philosophy is that short assertions which are obvious to any mathematician can be golfed to obscurity, because nobody wants to read the actual proof that deg(fg) = deg(f) + deg(g) in an integral domain, it's obvious. Longer proofs which might naturally be proved in tactic mode can still be heavily golfed, because the shorter the code, the quicker it will compile, but if you want to flag something about your proof then you can do it in the module docstring at the top of the file.
Thanks! I like the suggestion of obscuring in the code the steps that are completely clear! As I find that most of these results are pretty straightforward mathematically, I may opt for a shorter version, when I can find one!
Damiano Testa (Sep 03 2020 at 11:40):
Is there a way to have rw only rewrite something, but not everything?
Patrick Massot (Sep 03 2020 at 11:41):
https://leanprover-community.github.io/extras/conv.html
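For example, something along these lines rewrites just one of the two occurrences (the names a, b, h, k are made up for illustration; the snippet assumes mathlib's tactic import):
import tactic

example (a b : ℕ) (h : a = b) (k : a + b = 0) : a + a = 0 :=
begin
  -- `rw h` on its own would turn both `a`s on the left-hand side into `b`;
  -- `conv_lhs` lets us descend to just the second summand.
  conv_lhs { congr, skip, rw [h] },
  exact k,
end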
Patrick Massot (Sep 03 2020 at 11:42):
Damiano Testa (Sep 03 2020 at 11:42):
Ah, I had read about conv and have never been able to use it... I will try once more! Thanks!
Patrick Massot (Sep 03 2020 at 11:43):
If you post a #mwe where you can't get conv to work we could help.
Damiano Testa (Sep 03 2020 at 11:50):
Patrick Massot said:
If you post a #mwe where you can't get conv to work we could help.
While trying to produce a #mwe, I realized that I may not have found the minimal path to my proof. I will try to streamline everything and will post something again, once I have a shorter question.
Thanks for the offer of help, though!
Damiano Testa (Sep 03 2020 at 13:00):
I would like to prove that the degree of at_infty p is degree p - p.support.inf. Unfortunately, this assertion is riddled with with_bots and options and I cannot even produce a #mwe. Of course, the only issue arises with p = 0, since otherwise all the with_bots and options really are not there. Nevertheless, I am unable to navigate this. Can someone write for me a lemma asserting the mathematical fact that degree (at_infty p) = degree p - p.support.inf, whenever there are no weird options, and that it is none otherwise?
Thank you very much!
Johan Commelin (Sep 05 2020 at 16:10):
@Damiano Testa There is also nat_degree... does that make your life easier?
Johan Commelin (Sep 05 2020 at 16:10):
It takes values in nat, and it is equal to p.degree, except for p = 0.
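Concretely, the nat_degree version of the statement could be sketched like this; the lemma name is made up, nat_trailing_degree is my guess for the role played by p.support.inf (if it is not available, p.support.min' with a nonzero hypothesis does the same job), and the proof is left as a sorry:
lemma nat_degree_at_infty {R : Type*} [comm_ring R] (p : polynomial R) :
  (at_infty p).nat_degree = p.nat_degree - p.nat_trailing_degree :=
sorry -- sketch: proof omitted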
Damiano Testa (Sep 05 2020 at 16:29):
Thanks! I found the command, but have been having some difficulties navigating between the two: many lemmas intertwine between the two notions and I have been unable to separate them effectively
Johan Commelin (Sep 05 2020 at 16:32):
Yup... it's a bit of a dark art to figure out which one is best for the application at hand (-;
Damiano Testa (Oct 02 2020 at 10:04):
Dear All,
in case anyone is interested, I wrote a proof of the equality reverse (f*g) = reverse f * reverse g in the case in which the leading coefficients do not multiply to zero. This assumption can be dropped by modifying slightly the definition of reverse (in the file, the modification is called reflect _).
The proof is essentially by induction on the number of terms of a polynomial: this might already exist in Lean, in which case the file can be substantially shortened!
If anyone has any comments, I would be more than happy to hear them, though maybe in an hour, since I am going for lunch right now!
https://gist.github.com/adomani/4490b69b4a8627e24dae54decaf3f888
Johan Commelin (Oct 02 2020 at 10:43):
Awesome! I don't think we have that induction principle. Looking forward to the PRs (-;
Damiano Testa (Oct 02 2020 at 16:19):
Johan Commelin (Oct 02 2020 at 16:26):
@Damiano Testa Great! But it's a rather big PR, I think. Can you split it in two?
Damiano Testa (Oct 02 2020 at 16:28):
I can probably move stuff from there to data.polynomial.degree.basic, though maybe it will have to wait until Monday
Damiano Testa (Oct 02 2020 at 16:33):
Btw, @Johan Commelin does moving stuff around count as splitting or not?
Johan Commelin (Oct 02 2020 at 16:50):
@Damiano Testa I think that with PRs that are <= 300 lines you get more and better reviews. See also the following blog post
https://unhashable.com/stacked-pull-requests-keeping-github-diffs-small/
Aaron Anderson (Oct 02 2020 at 20:16):
I left some reviews on the first few lemmas, and agree that a lot of stuff should be moved, and then you should split it into multiple PRs, classified mostly on which files you're editing.
Damiano Testa (Oct 02 2020 at 20:17):
Ok, do I need to create separate branches for each file that I want to edit?
Damiano Testa (Oct 02 2020 at 20:18):
Also, thank you very much for the reviews and the interest!
Aaron Anderson (Oct 02 2020 at 20:23):
Damiano Testa said:
Ok, do I need to create separate branches for each file that I want to edit?
No, but I'd try to get 2 or 3 PRs each editing only 2 or 3 files if you can.
Aaron Anderson (Oct 02 2020 at 20:24):
For instance, all the stuff that's really about finsets or finset.min', finset.max' without getting into polynomials could be a good PR to start with.
Damiano Testa (Oct 02 2020 at 20:25):
Ok, I will learn how to split the PR!
Damiano Testa (Oct 02 2020 at 20:25):
I am reading your comments right now: they are super helpful!
Damiano Testa (Oct 02 2020 at 20:26):
I will answer and edit the file properly once I am on the computer, likely on Monday
Aaron Anderson (Oct 02 2020 at 20:26):
Damiano Testa said:
Ok, I will learn how to split the PR!
I think the easiest thing to do is just to start a new branch and copy-paste, then once that PR merges into master, you merge master back into your original PR. I'm no git expert though, so other people might have better suggestions.
Damiano Testa (Oct 02 2020 at 20:31):
This is already a better suggestion than what I had in mind: I was going to copy the files outside of the branch and then gradually add a trickle of lemmas in small chunks!
Yakov Pechersky (Oct 02 2020 at 20:31):
Johan's link of https://unhashable.com/stacked-pull-requests-keeping-github-diffs-small/ is a great git guide to how to stack the PR. That way you can break it out in manageable pieces in more robust ways than just copy-paste.
Yakov Pechersky (Oct 02 2020 at 20:32):
But rather, add things piecewise through commits. A git commit doesn't have to commit an entire file, it can be just a piece of it (often called a hunk).
Damiano Testa (Oct 02 2020 at 20:35):
Ok, I was reading the file, but will need to experiment with it, before I feel comfortable with its content!
Scott Morrison (Oct 03 2020 at 00:57):
Because mathlib uses the "squash merge" strategy for dealing with PRs, remember that your commit history on a PR branch is completely irrelevant. You can make it as messy as you like!
In particular, I'd recommend not bothering with git rebase (and please don't force push to PR branches). You can just keep merging master into later branches as earlier PR branches get merged.
Aaron Anderson (Oct 03 2020 at 05:59):
Is there a good strategy for using git to revert the accidental edits to the weird vscode file?
Johan Commelin (Oct 03 2020 at 06:16):
git checkout master -- path/to/weird/vscode/file
git commit
Yury G. Kudryashov (Oct 03 2020 at 06:16):
What versions do you have in git history?
Yury G. Kudryashov (Oct 03 2020 at 06:17):
If you want to revert all your edits in a file, restoring the version from HEAD, then :up:
Yury G. Kudryashov (Oct 03 2020 at 06:18):
If you want to revert only some of your edits, then it's less trivial but is still possible.
Yury G. Kudryashov (Oct 03 2020 at 06:18):
There are git GUIs that let you revert specific chunks of the current git diff.
Yury G. Kudryashov (Oct 03 2020 at 06:20):
Of course, you can do the same on the command line (e.g., add good edits using git add -p, then commit, then see @Johan Commelin's answer), but it's less convenient.
Yury G. Kudryashov (Oct 03 2020 at 06:23):
I use Emacs + magit, and it allows me to compare any two revisions (or compare any revision to the current version) and apply/revert specific hunks of the diff. I guess, other GUIs and editor plug-ins should have similar functionality.
Damiano Testa (Oct 03 2020 at 07:08):
Dear All,
I still do not feel comfortable with doing several small PRs, but I have broken up the main file into 4 files. Two of them, to_basic and to_trailing_degree, contain lemmas that I think should go into the corresponding files. The remaining two files are called erase_leading and reverse.
erase_leading introduces and establishes some properties of the polynomial f - f.leading_coeff * X ^ f.nat_degree.
reverse proves that reversal of the coefficients is essentially multiplicative.
I hope that this is a reasonable compromise while I learn how to produce smaller PRs!
Damiano Testa (Oct 03 2020 at 07:16):
In case it helps navigate the PR, the implications among the files are a linear order:
to_basic < to_trailing_degree < erase_leading < reverse_mul
Damiano Testa (Oct 06 2020 at 12:23):
Dear All,
I took into account the comments that I received on the big PR and have now broken it into small bits. As I am not completely sure about how to do incremental PRs, I have decided to simply upload the files one at a time. The first one is #4475, which contains a few extra lemmas added to the file data/polynomial/degree/basic. Once this one receives some (hopefully positive!) feedback, I will gradually introduce the rest of the code.
Johan Commelin (Oct 06 2020 at 12:59):
@Damiano Testa I left some comments.
Damiano Testa (Oct 06 2020 at 13:00):
Thank you! I will look at them right now!
Last updated: Dec 20 2023 at 11:08 UTC