Zulip Chat Archive
Stream: Is there code for X?
Topic: Convolution of mu and Lambda
Gareth Ma (Mar 08 2024 at 07:14):
Do we have $\mu * \Lambda = -\mu \cdot \log$?
Gareth Ma (Mar 08 2024 at 07:23):
The proof I am aware of uses Dirichlet series, $\sum_n \mu(n) n^{-s} = 1/\zeta(s)$ and $\sum_n \Lambda(n) n^{-s} = -\zeta'(s)/\zeta(s)$, copying from wiki.
Gareth Ma (Mar 08 2024 at 07:25):
And also $\sum_n \mu(n)\log(n)\, n^{-s} = \zeta'(s)/\zeta(s)^2$.
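(Sketch of that Dirichlet series argument, for $\mathrm{Re}(s) > 1$: the Dirichlet series of $\mu * \Lambda$ is the product $(1/\zeta(s)) \cdot (-\zeta'(s)/\zeta(s)) = -\zeta'(s)/\zeta(s)^2$, while differentiating $\sum_n \mu(n) n^{-s} = 1/\zeta(s)$ term by term gives $\sum_n (-\mu(n)\log n)\, n^{-s} = -\zeta'(s)/\zeta(s)^2$ as well; so $\mu * \Lambda$ and $-\mu \cdot \log$ have the same Dirichlet series, and the identity follows by comparing coefficients.)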
Michael Stoll (Mar 08 2024 at 07:59):
Maybe one can deduce that from docs#ArithmeticFunction.log_mul_moebius_eq_vonMangoldt or one of its variants?
Gareth Ma (Mar 08 2024 at 08:01):
I can't think of any way to derive it directly from that ($\log * \mu = \Lambda$), but maybe my number theory is too weak :tear: I will just use the proof I gave, especially since your EulerProduct has LSeries_vonMangoldt_eq_deriv_riemannZeta_div,
i.e. $\sum_n \Lambda(n) n^{-s} = -\zeta'(s)/\zeta(s)$.
Ruben Van de Velde (Mar 08 2024 at 08:24):
There's an apparently elementary proof on page 97 of https://eclass.uoa.gr/modules/document/file.php/MATH717/04.%20%CE%92%CE%BF%CE%B7%CE%B8%CE%AE%CE%BC%CE%B1%CF%84%CE%B1/Analytic-number-theory-Hildebrand.pdf - not sure that's easier
Gareth Ma (Mar 08 2024 at 08:27):
I need to look more closely, but that gave me inspiration for a new proof lol. Just do it for prime powers $p^k$, then for $n = 1$ (should be 0 on both sides), then use IsMultiplicative.
Gareth Ma (Mar 08 2024 at 08:28):
But these two proofs should both be easier to formalise in Lean.
Gareth Ma (Mar 08 2024 at 08:54):
I will lay out the details here so it's easier for me to formalise...
For $n = p$: LHS $= \mu(1)\Lambda(p) + \mu(p)\Lambda(1) = \log p$, RHS $= -\mu(p)\log p = \log p$
For $n = p^k$, $k \ge 2$: LHS $= \Lambda(p^k) - \Lambda(p^{k-1}) = 0$, RHS $= -\mu(p^k)\log(p^k) = 0$
And then use IsMultiplicative.mul
Gareth Ma (Mar 08 2024 at 08:55):
Should be easier than proving IsSummable :)
Gareth Ma (Mar 08 2024 at 09:04):
... no, $\mu * \Lambda$ is not multiplicative.
Gareth Ma (Mar 08 2024 at 11:59):
https://github.com/grhkm21/PrimeNumberTheoremAnd/commit/d787740b3a33087579dffac7a96ea5868fc2c6b8 almost done
Gareth Ma (Mar 10 2024 at 14:36):
It was not a pleasant experience
Kevin Buzzard (Mar 10 2024 at 14:59):
Writing a 100 line monolithic proof is never a pleasant experience. I see lots of haves in your proof. Maybe factor them out as sublemmas and it might help you to see the wood for the trees?
Gareth Ma (Mar 10 2024 at 15:01):
I’ll translate my Lean into a latex outline later
Arend Mellendijk (Mar 10 2024 at 17:38):
Here's my first attempt at a golf. I hope the two sorries aren't too hard to prove.
lemma ArithmeticFunction.mul_prime_pow_apply {R : Type*} [Semiring R]
(f g : ArithmeticFunction R) (p : ℕ) (hp : p.Prime) (n : ℕ) :
(f * g) (p ^ n) = ∑ k in Finset.Icc 0 n, f (p ^ k) * g (p ^ (n - k)) := sorry
lemma Nat.Coprime.divisors_inter_eq {m n : ℕ} (hmn : m.Coprime n) (hm : m ≠ 0) (hn : n ≠ 0) : m.divisors ∩ n.divisors = {1} := by
ext d
simp only [mem_inter, mem_divisors, ne_eq, mem_singleton]
constructor
· rintro ⟨⟨hdm, _⟩, hdn, _⟩
exact Nat.eq_one_of_dvd_coprimes hmn hdm hdn
· rintro rfl;
exact ⟨⟨one_dvd _, hm⟩, ⟨one_dvd _, hn⟩⟩
lemma not_isPrimePow {m n d : ℕ} (hmn : m.Coprime n) (hd : d ∣ m * n) (hdm : ¬ d ∣ m) (hdn : ¬ d ∣ n) :
¬ IsPrimePow d := by
sorry
example {n : ℕ} : (μ * Λ) n = -μ n * log n := by
induction n using Nat.recOnPosPrimePosCoprime with
| hp p k hp hk =>
rw [ArithmeticFunction.mul_prime_pow_apply _ _ p hp]
by_cases hk1 : k = 1
· simp [hk1, hp, ← Nat.Ico_succ_right, Finset.sum_range_succ, vonMangoldt_apply_prime,
moebius_apply_prime]
simp [ArithmeticFunction.moebius_apply_prime_pow, hp, hk.ne.symm, if_neg hk1]
trans ∑ x in Finset.range 2, μ (p ^ x) * Λ (p ^ (k - x))
· refine Finset.sum_subset ?_ ?_ |>.symm
· intro i; simp only [mem_range, mem_Icc, _root_.zero_le, true_and]; omega
intro i; simp only [mem_Icc, _root_.zero_le, true_and, mem_range, not_lt, _root_.mul_eq_zero,
Int.cast_eq_zero]
intro _ h2i
left
rw [moebius_apply_prime_pow hp (by omega), if_neg (by omega)]
simp [Finset.sum_range, vonMangoldt_apply_pow, hp, hk.ne.symm, show k-1 ≠ 0 by omega, moebius_apply_prime]
| h0 => simp
| h1 => simp
| h m n hm hn hmn h_ind_n h_ind_m =>
have hm0 : m ≠ 0 := by omega
have hn0 : n ≠ 0 := by omega
calc
(μ * Λ) (m * n) = ∑ d in (m*n).divisors, μ (m * n / d) * Λ d + 0 := by
erw [add_zero, ArithmeticFunction.mul_apply, Nat.sum_divisorsAntidiagonal' (fun x y ↦ μ x * Λ y)]
_ = ∑ d in m.divisors ∪ n.divisors, μ (m*n / d) * Λ d
+ ∑ d in m.divisors ∩ n.divisors, μ (m * n / d) * Λ d := by
congr 1
· refine Finset.sum_subset ?_ ?_ |>.symm
· apply Finset.union_subset
· apply Nat.divisors_subset_of_dvd (by positivity) (dvd_mul_right _ _)
· apply Nat.divisors_subset_of_dvd (by positivity) (dvd_mul_left _ _)
simp only [mem_divisors, ne_eq, _root_.mul_eq_zero, mem_union, Int.cast_eq_zero, and_imp]
intro x hxmn _ hx_not
right
rw [ArithmeticFunction.vonMangoldt_eq_zero_iff]
rw [and_comm, and_comm (a := x ∣ n)] at hx_not
push_neg at hx_not
apply not_isPrimePow hmn hxmn (hx_not.1 hm0) (hx_not.2 hn0)
· rw [Nat.Coprime.divisors_inter_eq hmn hm0 hn0]
simp
_ = ∑ d in m.divisors, μ (m * n / d) * Λ d + ∑ d in n.divisors, μ (m * n / d) * Λ d := Finset.sum_union_inter
_ = μ n * ∑ d in m.divisors, μ (m / d) * Λ d + μ m * ∑ d in n.divisors, μ (n / d) * Λ d := by
rw [Finset.mul_sum, Finset.mul_sum]
congr 1
· apply Finset.sum_congr rfl
simp only [mem_divisors, ne_eq, and_imp]
intro d hdm _
rw [← mul_assoc, mul_comm m n, Nat.mul_div_assoc _ hdm]
congr; norm_cast
rw [IsMultiplicative.map_mul_of_coprime (by arith_mult) (hmn.coprime_div_left hdm).symm]
· apply Finset.sum_congr rfl
simp only [mem_divisors, ne_eq, and_imp]
intro d hdn _
rw [← mul_assoc, Nat.mul_div_assoc _ hdn]
congr; norm_cast
rw [IsMultiplicative.map_mul_of_coprime (by arith_mult) (hmn.coprime_div_right hdn)]
_ = μ n * (μ * Λ) m + μ m * (μ * Λ) n := by
simp_rw [ArithmeticFunction.mul_apply]
iterate 2 erw [Nat.sum_divisorsAntidiagonal' (fun x y ↦ μ x * Λ y)]
_ = -(μ n * (μ m * log m) + μ m * (μ n * log n)) := by
rw [h_ind_n, h_ind_m]
ring
_ = -(μ (m * n)) * log (m * n) := by
simp [log_apply, neg_add_rev, cast_mul, neg_mul, hmn]
rw [IsMultiplicative.map_mul_of_coprime (by arith_mult) hmn,
Real.log_mul (by norm_cast) (by norm_cast)]
push_cast
ring
Gareth Ma (Mar 10 2024 at 18:17):
:tear: I will have to study it carefully haha(haha...)
I can't even prove the first lemma. I am stuck on this:
example {p n : ℕ} {f g : ℕ → ℕ} (hp : p.Prime) :
∑ x in divisorsAntidiagonal (p ^ n), f x.1 * g x.2 = ∑ k in Icc 0 n, f (p ^ k) * g (p ^ (n - k)) := by
sorry
In particular I don't know how to deal with sums when they are over different sets.
Gareth Ma (Mar 10 2024 at 18:17):
I can construct an equivalence between the (subtypes of the) two sets:
/- these are subtypes -/
def thing {p : ℕ} (hp : p.Prime) (n : ℕ) : Finset.Icc 0 n ≃ (p ^ n).divisors where
toFun := fun ⟨d, hd⟩ ↦ by
refine ⟨p ^ d, mem_divisors.mpr ⟨?_, ?_⟩⟩
· exact pow_dvd_pow _ (mem_Icc.mp hd).right
· exact (pow_pos hp.pos _).ne.symm
invFun := fun ⟨d, hd⟩ ↦ by
refine ⟨d.factorization p, mem_Icc.mpr ⟨Nat.zero_le _, ?_⟩⟩
exact factorization_le_of_le_pow (le_of_dvd (pow_pos hp.pos _) (dvd_of_mem_divisors hd))
left_inv := fun d ↦ by simp [hp]
right_inv := fun d ↦ by
ext
obtain ⟨k, _, asdf⟩ := (Nat.dvd_prime_pow hp).mp (mem_divisors.mp d.prop).left
simp [asdf, hp]
But Lean doesn't like my subtype :(
Gareth Ma (Mar 10 2024 at 18:19):
Or even say I have this:
lemma asdf {p n : ℕ} {f g : ℕ → ℕ} (hp : p.Prime) {d : ℕ × ℕ} :
d ∈ (p ^ n).divisorsAntidiagonal ↔ ∃ k ∈ Finset.Icc 0 n, d = ⟨p ^ k, p ^ (n - k)⟩ := by
sorry
How do I use this to prove the lemma I'm stuck on? (I think this is a purely Lean question, not a Math question)
Gareth Ma (Mar 10 2024 at 18:39):
Also for the second sorry, here's the proof I came up with:
lemma not_isPrimePow {m n d : ℕ} (hmn : m.Coprime n) (hd : d ∣ m * n) (hdm : ¬ d ∣ m) (hdn : ¬ d ∣ n) :
¬ IsPrimePow d := by
by_contra! hd_pp
obtain ⟨p, k, hp_prime, hk_pos, rfl⟩ := hd_pp
rw [← prime_iff] at hp_prime
have hm : m ≠ 0 := by contrapose! hdm; subst hdm; exact dvd_zero _
have hn : n ≠ 0 := by contrapose! hdn; subst hdn; exact dvd_zero _
rw [hp_prime.pow_dvd_iff_le_factorization] at hdm hdn <;> try assumption
have h : m.factorization p = 0 ∨ n.factorization p = 0 := by
by_contra!
have : p ∣ m.gcd n := dvd_gcd_iff.mpr ⟨?_, ?_⟩
rw [hmn, dvd_one] at this
exact hp_prime.ne_one this
all_goals apply dvd_of_factorization_pos; tauto
rw [hp_prime.pow_dvd_iff_le_factorization $ mul_ne_zero_iff.mpr ⟨hm, hn⟩,
Nat.factorization_mul hm hn, Finsupp.coe_add, Pi.add_apply] at hd
cases h <;> linarith
But I think it's also pretty terrible and I'm not sure how to write it nicely :( (That's 6 by_contra lemmas :sweat: )
Arend Mellendijk (Mar 10 2024 at 18:39):
This works for the first sorry:
lemma ArithmeticFunction.mul_prime_pow_apply {R : Type*} [Semiring R]
(f g : ArithmeticFunction R) (p : ℕ) (hp : p.Prime) (n : ℕ) :
(f * g) (p ^ n) = ∑ k in Finset.Icc 0 n, f (p ^ k) * g (p ^ (n - k)) := by
rw [ArithmeticFunction.mul_apply, sum_divisorsAntidiagonal (f · * g ·),
divisors_prime_pow hp, Finset.sum_map, Finset.range_eq_Ico, Nat.Ico_succ_right]
apply Finset.sum_congr rfl
simp only [mem_Icc, _root_.zero_le, true_and, Function.Embedding.coeFn_mk]
intro k hk
rw [Nat.pow_div hk hp.pos]
In my experience, equivalences between finsets are really ill-behaved, and you're much better off working with Finset.map and Finset.image. Once I found docs#Nat.divisors_prime_pow, the rest of the proof fell in place.
Gareth Ma (Mar 10 2024 at 18:42):
Ohhhhh okay divisors_prime_pow is exactly what I was missing... This will be useful in many of my other proofs, thanks sooo much
Gareth Ma (Mar 10 2024 at 21:47):
Okay I learned a lot from your code, especially that erw can rewrite inside binders (to some extent?). I golfed it further and reproved the lemma using your framework. Here's what I got:
example {n : ℕ} : (Λ * μ) n = -μ n * log n := by
induction n using Nat.recOnPosPrimePosCoprime with
| hp p n hp_prime hn_pos =>
simp_rw [mul_comm Λ _, mul_prime_pow_apply _ _ _ hp_prime, intCoe_apply]
by_cases hn : n = 1
· simp [show Icc 0 1 = {0, 1} by rfl, hn, Nat.div_self hp_prime.pos,
moebius_apply_prime hp_prime, vonMangoldt_apply_prime hp_prime]
· trans ∑ x in Finset.range 2, μ (p ^ x) * Λ (p ^ (n - x))
· refine (sum_subset (fun d hd ↦ ?_) fun d _ hd ↦ ?_).symm
· simp at hd ⊢; omega
· have : 2 ≤ d := le_of_not_lt $ mem_range.not.mp hd
rw [moebius_apply_prime_pow hp_prime (by omega)]
simp [show d ≠ 1 by omega]
· simp [sum_range, moebius_apply_prime hp_prime]
rw [vonMangoldt_apply_pow, vonMangoldt_apply_pow, add_neg_self,
moebius_apply_prime_pow hp_prime]
all_goals simp [hn]; try omega
| h0 => simp
| h1 => simp
| h a b ha hb hab ha' hb' =>
have ha₀ : a ≠ 0 := by omega
have hb₀ : b ≠ 0 := by omega
calc (Λ * μ) (a * b)
_ = ∑ d in (a * b).divisors, Λ d * μ (a * b / d) := by
erw [mul_apply, sum_divisorsAntidiagonal (Λ · * μ ·)]
_ = ∑ d in a.divisors ∪ b.divisors, Λ d * μ (a * b / d)
+ ∑ d in a.divisors ∩ b.divisors, Λ d * μ (a * b / d) := by
simp [hab.divisors_inter_eq ha₀ hb₀]
refine (sum_subset ?_ fun d hd hd' ↦ ?_).symm
· apply union_subset <;> apply divisors_subset_of_dvd (mul_ne_zero ha₀ hb₀) (by simp)
· simp [mem_divisors] at hd hd'
apply mul_eq_zero_of_left (vonMangoldt_eq_zero_iff.mpr $ not_isPrimePow hab ?_ ?_ ?_)
all_goals tauto
_ = ∑ d in a.divisors, Λ d * μ (a * b / d) + ∑ d in b.divisors, Λ d * μ (a * b / d) := by
exact sum_union_inter
_ = μ b * ∑ d in a.divisors, Λ d * μ (a / d) + μ a * ∑ d in b.divisors, Λ d * μ (b / d) := by
rw [mul_sum, mul_sum]
congr 1 <;> apply sum_congr rfl fun d hd ↦ ?_
· have hd_dvd := (mem_divisors.mp hd).left
rw [mul_comm a b, Nat.mul_div_assoc _ hd_dvd, isMultiplicative_moebius.right, Int.cast_mul]
ring_nf
exact hab.symm.coprime_dvd_right ⟨d, (Nat.div_mul_cancel hd_dvd).symm⟩
· have hd_dvd := (mem_divisors.mp hd).left
rw [Nat.mul_div_assoc _ hd_dvd, isMultiplicative_moebius.right, Int.cast_mul]
ring_nf
exact hab.coprime_dvd_right ⟨d, (Nat.div_mul_cancel hd_dvd).symm⟩
_ = μ b * (Λ * μ) a + μ a * (Λ * μ) b := by
simp_rw [mul_apply, intCoe_apply, sum_divisorsAntidiagonal (Λ · * μ ·)]
_ = -μ (a * b) * log (a * b) := by
rw [ha', hb', ← mul_assoc, ← mul_assoc, mul_neg, mul_neg,
isMultiplicative_moebius.right hab, mul_comm (μ b : ℝ) (μ a : ℝ), ← mul_add,
@log_apply (a * b), cast_mul, Real.log_mul]
norm_cast
all_goals exact cast_ne_zero.mpr (by omega)
10L shorter (using dark magic such as all_goals tauto)
Terence Tao (Mar 11 2024 at 03:39):
One can deduce the identity from the Leibniz rule $L(f*g) = (Lf)*g + f*(Lg)$ (where $(Lf)(n) = f(n)\log n$), Mobius inversion $\mu * 1 = \delta$ (with $1$ the constant function and $\delta$ the convolution identity), and $L1 = \log = 1 * \Lambda$. Indeed, applying Leibniz to Mobius inversion one gets $(L\mu) * 1 + \mu * 1 * \Lambda = 0$, hence $(L\mu + \mu * \Lambda) * 1 = 0$. Now cancel the convolution with $1$ by another Mobius inversion. (This is just the Dirichlet series proof in disguise, but perhaps these little lemmas about Dirichlet convolution can be helpful for other manipulations of this type.)
A less traditional proof (see this blog post of mine) is to work in the dual number system $\mathbb{R}[\varepsilon]/(\varepsilon^2)$ and note that the relevant functions are all multiplicative in this system. One can then establish the identity by checking on prime powers and then using multiplicativity; then the identity you want follows by extracting the $\varepsilon$ coefficient. This approach is more complicated than traditional approaches, but scales better when one wants to consider "higher order" versions of such identities, as discussed in the blog post.
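(One way to make this concrete, with a particular choice of functions that is only one possible reading of the argument: over $\mathbb{R}[\varepsilon]/(\varepsilon^2)$, the functions $\mu$, $\delta + \varepsilon\Lambda$ and $n \mapsto \mu(n)(1 - \varepsilon\log n)$ are all multiplicative, where $\delta$ is the convolution identity. At $n = p$ both $\mu * (\delta + \varepsilon\Lambda)$ and $\mu\cdot(1 - \varepsilon\log)$ equal $-1 + \varepsilon\log p$, and at $n = p^k$ with $k \ge 2$ both vanish, so multiplicativity gives $\mu * (\delta + \varepsilon\Lambda) = \mu\cdot(1 - \varepsilon\log)$ everywhere; comparing $\varepsilon$-coefficients yields $\mu * \Lambda = -\mu\cdot\log$.)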
Gareth Ma (Mar 11 2024 at 11:07):
Just to be clear, here $L$ is the operator $(Lf)(n) = f(n)\log n$?
Update: yes
Antoine Chambert-Loir (Mar 11 2024 at 11:35):
And to state the (possibly) obvious: $L$ is a derivation on arithmetic functions such that the natural morphism of algebras from (reasonably bounded) arithmetic functions to (germs of) holomorphic functions on the complex plane (which maps addition to addition, and convolution to multiplication) is a morphism of differential algebras.
The operator $\exp(\varepsilon L)$ corresponds with that of the (infinitesimal) Taylor formula.
Gareth Ma (Mar 11 2024 at 13:40):
@ Lean people, just to be clear, is the above not in Lean? I can't find it with f.deriv (for f : ArithmeticFunction R)
Michael Stoll (Mar 11 2024 at 14:11):
It should just be -pmul log f.
Gareth Ma (Mar 11 2024 at 14:12):
You probably want it as .deriv or something like that, so that you can write (f * g).deriv = f.deriv * g + f * g.deriv, for example.
Terence Tao (Mar 11 2024 at 14:56):
I think a modest amount of API for this deriv operator on arithmetic functions (e.g., the derivation axioms, the relation with derivatives of Dirichlet series, and the fact that f.deriv = g.deriv if and only if f = g + C • 1 for some C, so in particular 1.deriv = 0) would be useful to have somewhere in Mathlib.NumberTheory.ArithmeticFunction.
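(A minimal sketch of what such a definition could look like; deriv is a hypothetical name here and the Leibniz rule is only stated, not proved:)
import Mathlib

open ArithmeticFunction

-- Hypothetical definition (not currently in Mathlib): the "derivative" f ↦ f · log
-- from the discussion above, as the pointwise product with ArithmeticFunction.log.
-- (One could equally take the opposite sign, to match d/ds of the L-series.)
noncomputable def ArithmeticFunction.deriv (f : ArithmeticFunction ℝ) : ArithmeticFunction ℝ :=
  pmul f log

-- The Leibniz rule one would then want as part of the API (statement only).
example (f g : ArithmeticFunction ℝ) :
    (f * g).deriv = f.deriv * g + f * g.deriv := by
  sorry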
Gareth Ma (Mar 11 2024 at 15:03):
I am trying to improve the API for convolution inverses too (there's one lemma about convolution inverses), since you can do stuff with those too. And I got stuck already :sweat: https://leanprover.zulipchat.com/#narrow/stream/113489-new-members/topic/Proving.20termination.20of.20sum.20over.20properDivisors/near/425921747
But I can try to work on this, though I won't be as fast as others at Lean
Gareth Ma (Mar 11 2024 at 22:09):
@Michael Stoll I saw that you have written code about completely multiplicative functions. Do you think there should be an extra typeclass for {Is}CompletelyMultiplicative? In your EulerProducts code, it is expressed as ℕ →*₀ ℕ, but it doesn't interact well with ArithmeticFunction and that API at all.
Gareth Ma (Mar 12 2024 at 00:23):
I have to catch the bus soon but here are some thoughts after working on this for a bit. This is my first time writing "library code", so sorry if it's very not in Mathlib style. But:
- There should be an ArithmeticFunction.inv (convolution inverse); See L105 of below
- There should be IsCompleteMultiplicative; See L166 of below
- Most lemmas in Mathlib can/should be phrased in terms of ArithmeticFunction R instead of just over ℤ, e.g. the isMultiplicative ones (over a [CommRing R]) or yeah; See Line 248 of below
I have a very terrible framework for now! this is the "below"
- I need some help with structuring my code into a Mathlib-worthy version
- I need help with making the API more user AND developer(?) friendly
-> Currently the API is borderline unusable - in particular, it requires Invertible (f 1), meaning I have to pass this fact around. I chose a typeclass [Fact Invertible (f 1)], but that causes all kinds of problems, and you need a lot of auxiliary have : Invertible (ζ 1) := ... or whatever.
- I need some sanity check (on the code, not me): is this a reasonable approach? Especially with the mkUnit functions
Gareth Ma (Mar 12 2024 at 00:24):
There's a lot of Lean/Mathlib code and very little Math, to be honest
Gareth Ma (Mar 12 2024 at 00:33):
Or is the correct way to write it using \frac1 and the Invertible typeclass (as in [Invertible f])?
Update: Probably not
Gareth Ma (Mar 12 2024 at 02:06):
Sorry for the spam, feel free to mute this stream :sweat_smile:
My justification for (3.) was that some functions are defined over different codomains (e.g. ζ maps to ℕ while μ maps to ℤ), so it'll be simpler if everything is stated as ArithmeticFunction R when every function is casted to a common ring already. But this results in long statements:
lemma inv_totient : (φ : ArithmeticFunction R).inv = (ζ : ArithmeticFunction R) * (pmul (μ : ArithmeticFunction R) id) := by
When all I want to say is
lemma inv_totient : φ.inv = ζ * pmul μ id := by
Plus the problem can be solved by using natCoe_apply and intCoe_apply, so I don't know if my idea (3.) is good at all.
Arend Mellendijk (Mar 12 2024 at 02:47):
Note:
example : IsMultiplicative (ζ : ArithmeticFunction R) := by arith_mult
/- Try this: exact IsMultiplicative.nat_cast isMultiplicative_zeta -/
example : IsMultiplicative (ζ : ArithmeticFunction R) := by arith_mult?
example : IsMultiplicative (ζ : ArithmeticFunction R) := isMultiplicative_zeta.nat_cast
So the API isn't that bad at the moment. That said I do think that ζ and μ should be defined for any type R where 0 and 1 (and -1 for μ) make sense, just to save on casts.
Gareth Ma (Mar 12 2024 at 02:59):
Okay isMultiplicative is the worst example I can give, but any other example should be better
Yaël Dillies (Mar 12 2024 at 08:53):
Gareth, why don't you define an Inv (ArithmeticFunction R) notation which sends f to its true inverse if f 1 = 0 and to 0 (or f or anything you want really) if not?
Yaël Dillies (Mar 12 2024 at 08:54):
If you can make it so that you can construct a DivisionMonoid (ArithmeticFunction R) instance, you get an extra point
Michael Stoll (Mar 12 2024 at 10:21):
Yaël Dillies said:
Gareth, why don't you define an Inv (ArithmeticFunction R) notation which sends f to its true inverse if f 1 = 0 and to 0 (or f or anything you want really) if not?
...that should probably read "if f 1 is invertible"
Michael Stoll (Mar 12 2024 at 10:23):
Defining ζ, μ etc. as ArithmeticFunction R for varying R will mean you have to write ζ R, μ R etc., so there is some trade-off.
Michael Stoll (Mar 12 2024 at 10:41):
In any case, I had very similar problems related to the fact that when there is a coercion from R to R', this does not imply that there is also one from ArithmeticFunction R to ArithmeticFunction R'. It is OK when R = ℕ or R = ℤ, but not, e.g., when R = ℝ and R' = ℂ, which is relevant when we want to consider the L-series of the von Mangoldt function, which is Λ : ArithmeticFunction ℝ. My first attempt to make this work was #10725, which did not enjoy a friendly reception. To make it work for L-series, I then introduced the notation ↗f, which will turn f into a function from ℕ to ℂ if at all possible (this was recently merged into Mathlib and is available in file#Mathlib/NumberTheory/LSeries/Basic).
One problem with setting this up more generally for ArithmeticFunctions is that you need the coercion map from R to R' to preserve some structure (being a ZeroHom is the bare minimum, but in most cases, you want a ring homomorphism or an algebra map). Another one is that trying to set this up for general R and R', there are type class synthesis problems; see the discussion here.
Gareth Ma (Mar 12 2024 at 11:12):
I should've tried that :man_facepalming: it's how 1/0 is defined anyways... sorry for that. I'll try that
Gareth Ma (Mar 12 2024 at 11:37):
variable (f : ArithmeticFunction R)
def invertible := IsUnit (f 1) /- (1) is this correct (in Lean)? -/
@[simp]
def invFun (hf : invertible f) : ℕ → R
| 0 => 0
| 1 => (f 1)⁻¹ /- (2) How do I extract the unit? Using `[Invertible (f 1)]` -/
/- I can just write \frac1 ..., but not here -/
| n => (f 1)⁻¹ * ∑ d in n.properDivisors.attach,
have := Nat.mem_properDivisors.mp d.prop; f (n / d) * invFun hf d
decreasing_by simp_wf; exact this.right
/- (3) Is this correct? -/
noncomputable instance : Decidable (invertible f) := Classical.dec _
@[simp]
noncomputable def inv : ArithmeticFunction R :=
if hf : invertible f then ⟨f.invFun hf, rfl⟩ else 0
I wrote this down but faced the problems (1) (2) (3)
Michael Stoll (Mar 12 2024 at 11:58):
This might work:
import Mathlib
namespace ArithmeticFunction
variable {R} [CommRing R]
variable (f : ArithmeticFunction R)
def invertible := IsUnit (f 1) /- (1) is this correct (in Lean)? -/
open BigOperators
@[simp]
noncomputable
def invFun (hf : invertible f) : ℕ → R
| 0 => 0
| 1 => (hf.unit⁻¹ :) /- (2) How do I extract the unit? Using `[Invertible (f 1)]` -/
/- I can just write \frac1 ..., but not here -/
| n => (hf.unit⁻¹ :) * ∑ d in n.properDivisors.attach,
have := Nat.mem_properDivisors.mp d.prop; f (n / d) * invFun hf d
decreasing_by simp_wf; exact this.right
/- (3) Is this correct? -/
-- noncomputable instance : Decidable (invertible f) := Classical.dec _
@[simp]
noncomputable def inv : ArithmeticFunction R := by
classical
exact if hf : invertible f then ⟨f.invFun hf, rfl⟩ else 0
But maybe it is better to do it like this:
import Mathlib
namespace ArithmeticFunction
variable {R} [inst : CommRing R]
variable (f : ArithmeticFunction R)
def invertible : Prop := IsUnit (f 1) /- (1) is this correct (in Lean)? -/
open BigOperators
-- Add decidability assumptions where needed
-- We can always use `classical` in proofs later.
noncomputable def inv_leading [DecidablePred (@invertible R inst)] : R :=
if hf : invertible f then (hf.unit⁻¹ :) else 0
noncomputable def invFun [DecidablePred (@invertible R inst)] : ℕ → R
| 0 => 0
| 1 => f.inv_leading
| n => f.inv_leading * ∑ d in n.properDivisors.attach,
have := Nat.mem_properDivisors.mp d.prop; f (n / d) * invFun d
decreasing_by simp_wf; exact this.right
noncomputable def inv [DecidablePred (@invertible R inst)] : ArithmeticFunction R :=
⟨f.invFun, rfl⟩
The real test is whether it is easy to show that the so-defined inverse really is an inverse w.r.t. convolution...
Gareth Ma (Mar 12 2024 at 11:59):
I proved that already
Gareth Ma (Mar 12 2024 at 12:00):
(when things are invertible)
Gareth Ma (Mar 12 2024 at 12:25):
Now everything's noncomputable because IsUnit doesn't carry data about an explicit unit (?) :( But that's fine. Thanks for the help
Gareth Ma (Mar 12 2024 at 13:00):
Yaël Dillies said:
If you can make it so that you can construct a DivisionMonoid (ArithmeticFunction R) instance, you get an extra point
This requires (f^-1)^-1 = f, but that only holds when f is invertible.
Gareth Ma (Mar 12 2024 at 13:00):
(deleted)
Michael Stoll (Mar 12 2024 at 13:28):
Gareth Ma said:
Now everything's noncomputable because IsUnit doesn't carry data about an explicit unit (?) :( But that's fine. Thanks for the help
Yes: extracting the unit from IsUnit is noncomputable.
Gareth Ma (Mar 12 2024 at 14:48):
I'm using your second definition and defined this:
noncomputable instance : Inv (ArithmeticFunction R) where
inv := fun f ↦ if invertible f then ⟨f.invFun, rfl⟩ else f
However, this requires [DecidablePred (@invertible R hR)], even if it's given by the user. Is there any way to avoid that?
Gareth Ma (Mar 12 2024 at 14:48):
If that makes any sense.
Gareth Ma (Mar 12 2024 at 14:51):
Michael Stoll (Mar 12 2024 at 14:53):
If you do it like in my second definition, you don't need the if in the definition of the inv field.
In any case, whenever you use if, you need a decidability instance for the predicate you base the decision on. This is how docs#ite (or docs#dite) is defined. (And you cannot really avoid this when you want to produce "data" like in the inv field above).
Sébastien Gouëzel (Mar 12 2024 at 14:53):
You could use open Classical in on the line before your definition, to use the classical instance.
Gareth Ma (Mar 12 2024 at 14:54):
Oh :man_facepalming: I'm using your inv_leading and invFun from the second definition, but the if-then-else from the first definition.
Gareth Ma (Mar 12 2024 at 14:54):
Sébastien Gouëzel said:
You could use open Classical in on the line before your definition, to use the classical instance.
Yeah but I try not to do that. Maybe I shouldn't care that much?
Michael Stoll (Mar 12 2024 at 14:55):
You probably can write down something like
noncomputable instance [DecidablePred (@IsUnit R inst)] : DecidablePred (@invertible R inst) := ...
then it should be inferred when R is something like the integers or the rationals.
Michael Stoll (Mar 12 2024 at 14:56):
The point of defining inv_leading is that the definition of the inverse works also when f 1 is not invertible and gives you the zero function (since f.inv_leading = 0 in this case).
So the case distinction is moved to inv_leading.
Gareth Ma (Mar 12 2024 at 14:57):
Then f.inv.inv won’t always be f. Is that fine? I defined it to be f.inv=f when it’s not invertible.
Michael Stoll (Mar 12 2024 at 15:04):
I don't think it makes sense to require inv to be an involution in this context.
When R is a (commutative) ring, there will be a (commutative) ring structure on ArithmeticFunction R, and the relevant statement should be
lemma isUnit_iff : IsUnit f ↔ invertible f
and you can use general API for units in rings.
I don't think there is a reasonable way to produce a DivisionMonoid structure on ArithmeticFunction R.
Gareth Ma (Mar 12 2024 at 15:05):
Oh! That's smart, so the user just deals with (IsUnit f), which has a ton of existing API
Gareth Ma (Mar 12 2024 at 15:06):
Okay that I can do, I will work on it when I get back :) thanks a lot
Sébastien Gouëzel (Mar 12 2024 at 15:27):
Gareth Ma said:
Sébastien Gouëzel said:
You could use open Classical in on the line before your definition, to use the classical instance.
Yeah but I try not to do that. Maybe I shouldn't care that much?
Is there a reason you're trying not to do that? Unless in very specific cases (when you're writing a function that the kernel should be able to evaluate -- which will never be the case with complex numbers), it's perfectly reasonable and safe.
Gareth Ma (Mar 12 2024 at 15:54):
Computability is the biggest reason, yeah; it'll be nice if I can just #eval φ⁻¹ 12 and verify it's what I want :) But yeah it doesn't bother me
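(For reference, using the inv_totient formula stated earlier in the thread, $\varphi^{-1} = \zeta * \operatorname{pmul} \mu\, \mathrm{id}$, the expected value would be $\varphi^{-1}(12) = \sum_{d \mid 12} \mu(d)\, d = 1 - 2 - 3 + 0 + 6 + 0 = 2$.)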
Yaël Dillies (Mar 12 2024 at 18:17):
Gareth Ma said:
Yaël Dillies said:
If you can make it so that you can construct a DivisionMonoid (ArithmeticFunction R) instance, you get an extra point
This requires (f^-1)^-1 = f, but that only holds when f is invertible.
Not if you define f⁻¹ := f when f isn't invertible.
Gareth Ma (Mar 12 2024 at 18:17):
Then the other requirements fail, e.g.:
Gareth Ma (Mar 12 2024 at 18:18):
Screenshot-2024-03-12-at-18.18.01.png
:(
Yaël Dillies (Mar 12 2024 at 18:18):
Yes, so this is what I'm saying: you would get bonus points if you managed to write such an instance :smile:
Gareth Ma (Mar 12 2024 at 18:19):
I think I have something slightly better now using Michael's idea of using IsUnit for the API
Yaël Dillies (Mar 12 2024 at 18:19):
I'm not yet convinced it's impossible. There's probably some clever trick of junk value that makes it work. And then it'd be worth using that junk value since it makes so many more theorems true.
Gareth Ma (Mar 12 2024 at 18:20):
I was thinking the opposite, that you should be able to prove it's impossible to set such junk values
Yaël Dillies (Mar 12 2024 at 18:23):
I'd be interested in either!
Yaël Dillies (Mar 12 2024 at 18:24):
(bonus points for you in either case :fishing: :carrot:)
Gareth Ma (Mar 12 2024 at 18:24):
For "And then it'd be worth using that junk value since it makes so many more theorems true." do you mean e.g. we can use the whole DivisionMonoid API?
Gareth Ma (Mar 12 2024 at 18:41):
Not sure what I was thinking, but it's indeed impossible, for quite a simple reason
Gareth Ma (Mar 12 2024 at 18:41):
An invertible f satisfies f * f⁻¹ = 1, so f⁻¹ is determined.
So you need to define f⁻¹ as that Dirichlet inverse, and indeed this is the only division that's needed (for invertible f, the inverse g satisfies f * g = 1, so rearrange and you get g = f⁻¹).
Yaël Dillies (Mar 12 2024 at 19:14):
Your argument only shows what f^-1 is when f is invertible. What I'm interested in here is what we can set f^-1 to when f is not invertible.
Michael Stoll (Mar 12 2024 at 19:16):
Compare docs#ZMod.inv . There is no docs#DivisionMonoid instance on ZMod n, however.
Kevin Buzzard (Mar 12 2024 at 19:39):
Similar problems show up when inverting 2*2 matrices.
Terence Tao (Mar 12 2024 at 19:41):
For commutative rings, such as the convolution ring, using the junk value f⁻¹ := f for non-invertible f seems to obey the DivisionMonoid axioms without exception, if I am not mistaken.
Terence Tao (Mar 12 2024 at 19:42):
Oh, wait, it doesn't. Grr.
Michael Stoll (Mar 12 2024 at 19:42):
How do you get that f * g = f * g⁻¹ when g is invertible, but f is not?
Terence Tao (Mar 12 2024 at 19:45):
How painful would it be to remove (f⁻¹)⁻¹ = f from the axioms of a division monoid, and only recover this fact from the other axioms for invertible f? Then one can just set f⁻¹ = 0 for non-invertible f.
Michael Stoll (Mar 12 2024 at 19:48):
It looks like the only way Mathlib knows to get a docs#DivisionMonoid is from a group (with zero); all other instances just serve to make sure that certain constructions involving DivisionMonoids are DivisionMonoids themselves. So I'm not convinced that this is really a good notion...
Kevin Buzzard (Mar 12 2024 at 19:57):
Aren't things coming from groups and groups with zero docs#DivInvMonoid s?
Michael Stoll (Mar 12 2024 at 20:04):
"A DivisionMonoid is a DivInvMonoid with involutive inversion and such that (a * b)⁻¹ = b⁻¹ * a⁻¹ and a * b = 1 → a⁻¹ = b." So a DivisionMonoid
adds some structure.
Terence Tao (Mar 12 2024 at 20:05):
For what it's worth, ZMod n does have a division monoid structure if one defines the inverse suitably in terms of writing each element as a divisor of n times a unit coprime to n, though this is rather artificial and not really a point in favor of keeping the current DivisionMonoid definition.
Michael Stoll (Mar 12 2024 at 20:07):
I guess my point above was that inverses in ZMod n are defined in this artificial way, yet there is no DivisionMonoid instance.
Terence Tao (Mar 12 2024 at 20:11):
Actually I think the current inverse definition is slightly different (it sets the inverse to a value coming from the extended gcd rather than the one above), so it doesn't obey the DivisionMonoid axioms as currently formulated.
Terence Tao (Mar 12 2024 at 20:16):
I suspect that there are very few applications in which complete involutionarity of the inverse operation of a DivisionMonoid is actually useful, and the special case of invertible elements can be deduced from the other axioms, so it may actually not be too painful to refactor that away, so that (among other things) Rings now become DivisionMonoids (by using the zero junk value).
Yaël Dillies (Mar 12 2024 at 21:18):
Note that the motivating example for the introduction of DivisionMonoid is docs#Finset.divisionMonoid
Yaël Dillies (Mar 12 2024 at 21:19):
I am writing a paper about it, but I'm afraid it will still be a month or so before I have something to show
Michael Stoll (Mar 12 2024 at 21:19):
But are there any results where it is relevant that taking the inverse is involutory even on non-invertible elements?
Yaël Dillies (Mar 12 2024 at 21:22):
Plenty. I encourage you to try removing that axiom and see what breaks.
Yaël Dillies (Mar 12 2024 at 21:23):
I'm not against having an extra typeclass which would be DivisionMonoid without the inv_inv axiom, but it would have to be a separate typeclass
Gareth Ma (Mar 12 2024 at 22:48):
To be honest, I will just stick with the plain stuff (proving theorems only for invertible functions) so that I get some math out. Making it fancy and milking extra features can be done later :D
Last updated: May 02 2025 at 03:31 UTC