Zulip Chat Archive
Stream: new members
Topic: finite recursive -1/12=1+2+3+... accepted!
Rob Fielding (Nov 17 2024 at 08:18):
I am not sure what to make of this. If I have the theorem accepted, what can be objected to thereafter? The main thing to make it work is to not write things like "+...", but to actually NAME it as a function and calculate what it must be. ie:
-1/12 = S
= 0 + Tail_S(0)
= 1 + Tail_S(1)
= 1+2+Tail_S(2)
= 1+2+3+Tail_S(3)
...
= 1+2+3+...+n+Tail_S(n)
= Sum_S(n)+Tail_S(n)
Where this is a general phenomenon for convergent and divergent series:
X = Sum_X(n) + Tail_X(n)
the actual theorem where Sum(n) := n*(n+1)/2 is:
-1/12 = Sum_S(n) + Tail_S(n)
where most people would write it with: "+..." = "+Tail_S(n)", which would not work because it's critical to track how many expansions, n, were done. Writing "+..." loses that, and the algebra then breaks.
This is a FINITE proof. No use of infinity, or advanced math.
This statement in particular is so controversial, that it needs to avoid infinity and Riemann Zeta completely. Finite, in code, and accepted in Lean4 is best.
It is ACCEPTED BY LEAN4. This is what is accepted:
-1/12 = S = (1 + 2 + 3 + 4 + ... + n) + Tail_S n
-1/12 = S = Sum_S n + Tail_S n
Tail_S having a value might be argued with.
import Mathlib
/-
This is a FINITE proof. No use of infinity, or advanced math.
It is ACCEPTED BY LEAN4.
-1/12 = S = (1 + 2 + 3 + 4 + ... + n) + Tail_S n
-1/12 = S = Sum_S n + Tail_S n
-/
def Tail_B (n : ℤ):ℚ := (2*n+1)*(-1)^2/4
def Sum_B (n : ℤ) : ℚ := (1/(1 - (-1))^2) - Tail_B n
def Sum_S (n : ℤ):ℚ := n*(n+1)/2
def Tail_S (n:ℤ):ℚ := ((4*Sum_S n) - (Sum_S (2*n) + Tail_B (2*n)))/3 - Sum_S n
def Bf (n:ℤ):ℚ := Sum_B n + Tail_B n
def Sf (n:ℤ):ℚ := Sum_S n + Tail_S n
/- We have the first hint on how to isolate Sf to not depend on n ... -/
theorem sumMatch : Sum_S (2*n) - Sum_B (2*n) = 4*(Sum_S n) := by
unfold Sum_S Sum_B Tail_B
simp
ring
/- If the sum and tail have this same pattern, then we can solve for S -/
theorem tailMatch : Tail_S (2*n) - Tail_B (2*n) = 4*(Tail_S n) := by
unfold Tail_S Tail_B Sum_S
simp
ring
theorem sVal1 : (4*Sum_S n - (Sum_S (2*n) + Tail_B (2*n)))/3 = -1/12 := by
unfold Sum_S Tail_B
simp
ring
/- This is ACCEPTED by Lean4. The form of Sum_S n determines
how parentheses are used. n is the number of expansions
of the series.
-/
theorem sVal2: Sum_S n + Tail_S n = -1/12 := by
rw [ ← sVal1]
unfold Tail_S
simp
ring
/- Notes:
B comes from:
A = 1 + x A
= (x^0 + x^1 + x^2 + ... + x^n) + x^{n+1} A
= Sum_A n + Tail_A n
= 1/(1-x)
F = dA/dx
= 1/((1-x)^2)
B = F(-1)
= 1/((1-(-1))^2) = 1/4
= 1 + -2 + 3 + -4 + 5 + -6 + ... + n*(-1)^{n-1} + (d[Tail_A (n+1)]/dx @ -1)
And it's useful because, it's fully known: B = 1/4.
It was an arbitrary choice of a recursive sequence that helps solve for S.
There may be other proofs that make no use of B.
-/
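A minimal Lean sketch of the finite fact these notes rely on, multiplied through by (1 - x) so no division or infinity is involved; it is not part of the posted code, and it assumes nothing beyond Finset.sum_range_succ and induction:
import Mathlib
/- Sketch (not from the thread): (1 - x) * (x^0 + x^1 + ... + x^(n-1)) + x^n = 1,
   i.e. "A = Sum_A n + Tail_A n" with the tail written out explicitly. -/
example (x : ℚ) (n : ℕ) :
    (1 - x) * (∑ i ∈ Finset.range n, x ^ i) + x ^ n = 1 := by
  induction n with
  | zero => simp
  | succ n ih =>
    rw [Finset.sum_range_succ]
    linear_combination ih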
Etienne Marion (Nov 17 2024 at 09:36):
If I just look at Lean, what you proved is that some polynomial expression of n simplifies to -1/12. But this says nothing about 1+2+3+…
Edward van de Meent (Nov 17 2024 at 12:42):
indeed, i could similarly provide a function Tail_S' : Nat -> Int which has similar recursion properties, but additionally has Sum_S n + Tail_S' n = 0.
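One way to realize what Edward describes, as a hedged sketch with the signatures adapted to the ℤ → ℚ style used above (Tail_S' here is a hypothetical name, not code from the thread):
import Mathlib
def Sum_S (n : ℤ) : ℚ := n * (n + 1) / 2
/- pick the "tail" so the total comes out to 0 instead of -1/12 -/
def Tail_S' (n : ℤ) : ℚ := -Sum_S n
theorem Sgoal' (n : ℤ) : Sum_S n + Tail_S' n = 0 := by
  unfold Tail_S' Sum_S
  ring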
Edward van de Meent (Nov 17 2024 at 12:43):
proving the series converges would be done by showing that Tail_S goes to 0 for larger n.
Edward van de Meent (Nov 17 2024 at 12:43):
(which in this case can't be done)
Rob Fielding (Nov 17 2024 at 16:49):
Sum_S + Tail_S = 0 won't work. If you expand B, you can see that it must have a value of 1/4, and force S=-1/12. Note that because S can be solved like S=1+13S, S is a base13 number that is all 1 digits as well. "S=13^0+13^1+13^2+...=1+2+3+...". Those are far more easily evaluated by Lean.
B can't be assigned a different number than 1/4, and it expands like 1 + -2 + 3 + -4 + 5 + -6 + ... It comes from here:
A = 1 + x A
(1-x)A = 1
A = 1/(1-x) -- at value -1, it's 1/2
= (x^0 + x^1 + x^2 + ... + x^n) + x^{n+1}A
dA/dx differentiates everything, including the tail. That is why we can't just write "+...", or else we lose information. Note that differentiating A gives increasing numbers, to accommodate the differentiation:
dA/dx = F = 0 x^{-1} + 1 x^0 + 2 x^1 + 3 x^2 + ... + n x^{n-1} + d[Tail_A(n)]/dx
= 1/( (1-x)^2 )
and just assign B = F(-1). this gives:
F = 1/( (1 - (-1))^2)
= 1/4
= 1 + -2 + 3 + -4 + 5 + -6 + .... + n(-1)^{n-1} + d[Tail_B(n)]
And you can verify that Sum_B and Tail_B have exactly this behavior. You lean on the definition of B to solve for S,
because you find quickly that:
Sum_S (2n) - Sum_B (2n) = 4 Sum_S(n)
And this is the critical part where writing "+..." breaks the algebra. It is important that the right-hand side has half as many expansions as the left-hand side. That is because it's doing this in the waffly "internet proof"
ie: S-B:
1 + 2 + 3 + 4 + 5 + 6 + ...
-1 + 2 + -3 + 4 + -5 + 6 + ...
=
0 + 4 + 0 + 8 + 0 + 12 + ...
Using the closed forms for Sum_B and Tail_B neatly works around people just re-arranging and re-parenthesizing. This keeps it valid.
Similarly, the reason for the complicated expression (4*Sum_S(n) - (Sum_S(2n) + Tail_B(2n)))/3 is that it comes from solving for the only value that S can have. The n values cancel out, and you get -1/12. Everybody is describing this wrong. It is not S that is the Sum. It's (S - Tail) = Sum. And it's a consequence of (S = Sum + Tail). It is true that BEFORE you introduce B, S could be anything. But once you calculate (S - B) to be 4S, S can have no other value than -1/12. And it's not a paradox at all, because "-1/12 = 1 + 2 + 3 + ..." is not correctly stated. It's the DIFFERENCE between them that is the sum, because the sum is expressed as a closed finite form:
S - Tail_S(n) = Sum_S(n) = n*(n+1)/2
Here is a simpler example:
T = 1 + 2*T
= (2^0 + 2^1 + 2^2 + 2^3 + ... + 2^n) + 2^{n+1}T
T - 2^{n+1}T = 2^0 + 2^1 + 2^2 + 2^3 + ... + 2^n
T(1 - 2^{n+1}) = Sum_T(n)
(-1)(1 - 2^{n+1}) = Sum_T(n)
2^{n+1} - 1 = Sum_T(n)
T =
-1 = (1) - 2
= (1 + 2) - 4
= (1 + 2 + 4) - 8
= (1 + 2 + 4 + 8) - 16
= Sum_T(n) + Tail_T(n)
ie:
-1 + 2 = 1
-1 + 4 = 1 + 2
-1 + 8 = 1 + 2 + 4
....
See? It gave a closed form for T! T=-1, but you need to subtract the "+..." to get the Sum_T(n). It is critical to not get lazy and just write "+...", because when you do, then the algebra becomes wrong.
It's even more straightforward when it converges. When it converges, the tail goes to 0 for large n
Z = 9/10 + (1/10)Z
= 0.9 + (0.1)(0.9 + 0.1 Z)
= 0.99 + (0.01)Z
...
10 Z = 9 + Z
9Z = 9
Z = 1
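For what it's worth, the finite identity behind the T example really is provable in Lean with a short induction and no use of infinity; a minimal sketch (not from the thread), stated over ℤ:
import Mathlib
/- 1 + 2 + 4 + ... + 2^n = 2^(n+1) - 1, the closed form Sum_T(n) above. -/
example (n : ℕ) :
    ∑ i ∈ Finset.range (n + 1), (2 : ℤ) ^ i = 2 ^ (n + 1) - 1 := by
  induction n with
  | zero => norm_num [Finset.sum_range_succ, Finset.sum_range_zero]
  | succ n ih =>
    rw [Finset.sum_range_succ, ih]
    ring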
Scott Carnahan (Nov 17 2024 at 17:33):
I think what is missing here is some justification, in Lean, for why the functions you defined are uniquely characterized by the properties you seem to want in your informal description. For example, I see some formal power series and derivatives at work, but they don't seem to show up in the Lean code.
Edward van de Meent (Nov 17 2024 at 17:35):
indeed. you define some function, but there is nothing written in lean that convinces me that Sum_S indeed gives the tail, and not secretly ActualSum_S + c for some constant c.
Rob Fielding (Nov 17 2024 at 17:38):
Yes. That is because this is the first time I ever got Lean4 to accept anything. I want to start from taking a derivative on A, and do explicit calc. But ChatGPT and Claude can't help. I finally got it accepted from fiddling with it.
Sum_S(n) = n*(n+1)/2 is just the formal sum of 1+2+3+...+n. I convinced Lean that -1/12 = Sum_S(n) + Tail_S(n) for all n. But yes, people reading it are too skeptical to accept the theorem prover accepting it. I need to get better at using the tactics system. But I have been doing this on paper and in LaTeX for like a year; and am certain that no use of infinity is required, because of the recursion. What I take issue with is writing "+..." on the end, without giving "..." a name.
S = Tail_S(0)
= 1 + Tail_S(1)
= (1+2) + Tail_S(2)
= (1+2+3) + Tail_S(3)
....
= (1+2+3+...+n) + Tail_S(n)
This idea applies generally to convergent and divergent series. I found about a dozen different cases that it lets me solve.
I cannot stress this enough, the idea that S is a paradox is just wrong. You can take this with a limit of n going to infinity if you want to talk about that. It's critical to add the negative of the unwritten continuation:
S - Tail_S(n) = Sum_S(n) = n*(n+1)/2
Rob Fielding (Nov 17 2024 at 17:53):
In any case, given that Lean4 accepts this; I think it proves something. Too many real mathematicians disagree on whether this is even real, but Lean4 accepts it. The mathematicians that disagree cannot all be correct. "+Tail_S(n)" is a principled way of writing "+..." that keeps the algebra intact.
import Mathlib
/-
This is a FINITE proof. No use of infinity, or advanced math.
It is ACCEPTED BY LEAN4.
-1/12 = S = (1 + 2 + 3 + 4 + ... + n) + Tail_S n
-1/12 = S = Sum_S n + Tail_S n
-/
def Tail_B (n : ℤ):ℚ := (2*n+1)*(-1)^2/4
def Sum_B (n : ℤ) : ℚ := (1/(1 - (-1))^2) - Tail_B n
def Sum_S (n : ℤ):ℚ := n*(n+1)/2
def Tail_S (n:ℤ):ℚ := ((4*Sum_S n) - (Sum_S (2*n) + Tail_B (2*n)))/3 - Sum_S n
def Bf (n:ℤ):ℚ := Sum_B n + Tail_B n
def Sf (n:ℤ):ℚ := Sum_S n + Tail_S n
/- We have the first hint on how to isolate Sf to not depend on n ... -/
theorem sumMatch : Sum_S (2*n) - Sum_B (2*n) = 4*(Sum_S n) := by
unfold Sum_S Sum_B Tail_B
simp
ring
/- If the sum and tail have this same pattern, then we can solve for S -/
theorem tailMatch : Tail_S (2*n) - Tail_B (2*n) = 4*(Tail_S n) := by
unfold Tail_S Tail_B Sum_S
simp
ring
theorem sVal1 : (4*Sum_S n - (Sum_S (2*n) + Tail_B (2*n)))/3 = -1/12 := by
unfold Sum_S Tail_B
simp
ring
/- This is ACCEPTED by Lean4. The form of Sum_S n determines
how parentheses are used. n is the number of expansions
of the series.
-/
theorem sVal2: Sum_S n + Tail_S n = -1/12 := by
rw [ ← sVal1]
unfold Tail_S
simp
ring
/- Notes:
B comes from:
A = 1 + x A
= (x^0 + x^1 + x^2 + ... + x^n) + x^{n+1} A
= Sum_A n + Tail_A n
= 1/(1-x)
F = dA/dx
= 1/((1-x)^2)
B = F(-1)
= 1/((1-(-1))^2) = 1/4
= 1 + -2 + 3 + -4 + 5 + -6 + ... + n*(-1)^{n-1} + (d[Tail_A (n+1)]/dx @ -1)
And it's useful because, it's fully known: B = 1/4.
It was an arbitrary choice of a recursive sequence that helps solve for S.
There may be other proofs that make no use of B.
-/
Ruben Van de Velde (Nov 17 2024 at 17:57):
And what exactly do you say you've proved that people might disagree with?
Rob Fielding (Nov 17 2024 at 18:02):
Stuff like: "Mathologer is just wrong.". You can't reject it because you don't like it. If it's accepted by Lean, then you have some explaining to do if you reject it. This problem says some really deep things about the "instruction set" of mathematics. It takes "a=b" very seriously, as a bidirectional rewrite rule. It takes recursion seriously. It doesn't care at all that "I added integers and got a negative fraction". Finite fields do exactly this as well. I think that's what is going on.
S = -1/12
-12 S = 1
(1-13)S = 1
S = 1 + 13 S
S is a base13 number that is all 1 digits. That is why multiplying it times 12 gives you -1, in its infinite-digit representation. I am a CS major; this is all deep common-sense to me. I don't really see what people object to. Adding all powers of 2 to get -1 is just common sense if you build computer hardware.
Edward van de Meent (Nov 17 2024 at 18:12):
so your "common sense" says that adding all positive integers gives a negative integer...
Edward van de Meent (Nov 17 2024 at 18:12):
rather than something positive
Rob Fielding (Nov 17 2024 at 18:15):
Yes. Because this happens in real hardware, when you implement adders out of logic circuits. It's a deep physical common-sense notion you get from doing code.
T = 1 + 2 T
= 1 + 2( 1 + 2 T)
= 1 + 2 + 4 T
= 1 + 2 + 4(1 + 2 + 4 T)
= 1 + 2 + 4 + 8 + 16 T
When you get S=-1/12=1+2+3+...+n+Tail_S(n), that "-1/12" can be represented as some infinite digit number. It's "-1/12" because if you multiply it times 12, you get -1. That's because in base13, (12) is the highest possible digit. Add 1, and it all ripple-carries to 0. Here is the powers of 13 expansion of -1/12:
-1/12 = V = 13^0 + 13^1 + 13^2 + ... + 13^n + 13^{n+1}V
Here is the closed form for finite n:
Sum_V(n)
=
(-1/12)(1 - 13^{n+1})
=
(13^{n+1} - 1)/12
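The closed form Sum_V(n) claimed here is itself a finite statement that Lean can check directly; a minimal sketch (not from the thread), by induction:
import Mathlib
/- 13^0 + 13^1 + ... + 13^n = (13^(n+1) - 1)/12, i.e. Sum_V(n) above. -/
example (n : ℕ) :
    ∑ i ∈ Finset.range (n + 1), (13 : ℚ) ^ i = (13 ^ (n + 1) - 1) / 12 := by
  induction n with
  | zero => norm_num [Finset.sum_range_succ, Finset.sum_range_zero]
  | succ n ih =>
    rw [Finset.sum_range_succ, ih]
    ring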
Edward van de Meent (Nov 17 2024 at 18:23):
how do you mean "if you multiply it times 12, you get -1"? what exactly is the calculation? something like 12 * (13^0 + 13^1 + 13^2 + ...)? that's fine, but how do you get to this equaling -1?
Scott Carnahan (Nov 17 2024 at 18:24):
Pretty much all of these manipulations admit some mathematical context in which they can be formally justified, but gluing them together is not so easy. This is where people usually fall back on complex analysis, e.g., analytically continuing the zeta function.
Rob Fielding (Nov 17 2024 at 18:25):
Sure! But I got Lean to accept it. That seems meaningful.
Rob Fielding (Nov 17 2024 at 18:26):
No Zeta, and no infinity is the point. I am a computer programmer. I just don't believe in an infinite number of iterations, because you will never be around to see the last iteration; by definition.
Edward van de Meent (Nov 17 2024 at 18:27):
the thing that you got lean to accept is "there is some function Tail_S : Int -> Rat that complements n*(n+1)/2 to add to -1/12". That's not a revelation about sums. that's just a fact about being able to calculate differences.
Rob Fielding (Nov 17 2024 at 18:28):
In base13, the digits are: 0 1 2 3 4 5 6 7 8 9 A B C. C is base13 "12".
12 * (13^0 + 13^1 + 13^2 + ...)
written in base13 is an infinite number of C digits to the left.
....CCCCCCCC
In the same way that in base10, (.....99999) will ripple carry to (.....00000) if 1 is added, so will (.....CCCCCC) base13.
So, -1/12 is not mysterious at all. It's a thing that if you multiply times 12, you get -1. If it is an infinite-digit representation of -1, you add 1 to it and you get 0. This is totally common sense to EE and CS majors. We baffle at math notation sometimes; because of what we see happening in hardware computation.
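The two ordinary rational facts being invoked here (multiplying -1/12 by 12 gives -1, and -1/12 is a fixed point of x ↦ 1 + 13x) are, of course, one-liners in Lean; a trivial sketch, not from the thread:
import Mathlib
example : (12 : ℚ) * (-1 / 12) = -1 := by norm_num
example : (-1 / 12 : ℚ) = 1 + 13 * (-1 / 12) := by norm_num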
Scott Carnahan (Nov 17 2024 at 18:29):
Your particular choice of Tail_S function is justified by a non-formalized manipulation that involves identifying the rational function F with its Taylor expansion at zero and evaluating at -1.
Edward van de Meent (Nov 17 2024 at 18:29):
sure, but where does 1+2+... come into the mix in that reasoning? because i don't understand how you justify that being equal to 12 * (13^0 + 13^1 + 13^2 + ...)
Edward van de Meent (Nov 17 2024 at 18:32):
Rob Fielding said:
We baffle at math notation sometimes; because of what we see happening in hardware computation.
similarly, we baffle at the shortcuts taken in hardware computation sometimes to represent an infinite type in finite space, making a "bad" representation.
Kyle Miller (Nov 17 2024 at 18:32):
It's one thing to say you can reason using the 13-adic numbers that in that number system, 1 + 2 + 3 + ... is -1/12 (that seems plausible to me, though I didn't check), but it's another to say (1) that this has anything to do with the real numbers and (2) that this reasoning is accurately captured in the Lean.
The situation with (2) is that Lean only checks that what you give it is correct, not that what you give it is what you think it is.
Edward van de Meent (Nov 17 2024 at 18:34):
(13-adic numbers refers to the number system where you can write base 13 numbers of infinite length and have "infinite" roll-overs)
Kyle Miller (Nov 17 2024 at 18:35):
More about 2: what if I told you I was very very sure that I proved Fermat's Last Theorem because I wrote "fermats_last_theorem" in Lean and Lean accepts it?
def FermatsLastTheorem := ∀ x, x ^ 2 ≥ 0
theorem fermats_last_theorem : FermatsLastTheorem := by
intro; omega
Rob Fielding (Nov 17 2024 at 18:35):
Just from S, you can solve it for 1+13 S.
S = T=-1/12
-12 T = 1
(1 - 13)T = 1
T = 1 + 13 T
That is the justification for "-1/12 is a base13 number of all 1 digits". It's just pointing out what the recursive definition shows: you can't just add "..." to it. You must spell out what it is. You can DIFFERENTIATE Tail, etc. It is completely unambiguous. And in making it unambiguous, you force S to have an infinite-digit representation (....111111_13). Which has the property that if you multiply it times 12, you get (......CCCCC_13) ... an infinite digit representation of -1 (just as (.....11111_2) is in binary). That is why it is -1/12. This -1/12 is an "infinite digit integer" if you like. This is no more strange than 1=(0.9....). There is an uncarried infinite digit representation, and a carried finite digit representation. If you like, there are still an infinite number of zeroes to the left and to the right.
// I call this nonsense....
S = n*(n+1)/2 + ...
// this is proper.
S = n*(n+1)/2 + Tail_S(n)
Rob Fielding (Nov 17 2024 at 18:42):
If Lean accepts your FermatsLastTheorem, then that means that it chained together the inequality and the logic. Yes, I would believe that Lean proved: that def exactly as stated. I would wonder if it was stated correctly though.
Eric Wieser (Nov 17 2024 at 18:42):
I'll note that with the definition of summation used in mathlib, I can prove that the sum is not -1/12:
import Mathlib
/-- `0 + 1 + 2 + ...` is not `Summable`, because the sequence diverges. -/
theorem not_summable : ¬ Summable fun x : ℕ => (x : ℝ) := by
rw [not_summable_iff_tendsto_nat_atTop_of_nonneg fun _ => Nat.cast_nonneg _]
apply Filter.tendsto_atTop_atTop_of_monotone
· refine monotone_of_le_succ fun a ha => ?_
simp_rw [Nat.succ_eq_succ, Finset.sum_range_succ]
rw [←Nat.cast_sum, ←Nat.cast_add]
norm_cast
apply le_self_add
· intro x
refine ⟨⌈x⌉₊.succ, ?_⟩
rw [Finset.sum_range_succ, ←Nat.cast_sum]
refine Nat.le_ceil _ |>.trans ?_
conv_lhs => rw [← zero_add ⌈x⌉₊, Nat.cast_add]
gcongr
positivity
/-- `0 + 1 + 2 + ... = 0`, because the sequence is not summable. -/
example : ∑' x : ℕ, (x : ℝ) = 0 :=
tsum_eq_zero_of_not_summable not_summable
(for ease I included 0 + in the sum, but hopefully you agree this is not relevant)
Rob Fielding (Nov 17 2024 at 18:43):
is the sum n*(n+1)/2 for finite n?
Eric Wieser (Nov 17 2024 at 18:48):
Sounds like a nice lean challenge for you to have a go at:
import Mathlib
-- this one is harder because the `/` rounds down
theorem lets_find_out {n : ℕ} : ∑ i ≤ n, i = n*(n + 1) / 2 := by
sorry
theorem lets_find_out_no_rounding {n : ℕ} : ∑ i ≤ n, ↑i = (n*(n + 1) / 2 : ℚ) := by
sorry
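A hedged sketch in the spirit of that challenge, stated over Finset.range (n + 1) instead of ∑ i ≤ n and over ℚ so that / does not round; a variant, not necessarily the intended solution:
import Mathlib
example (n : ℕ) :
    ∑ i ∈ Finset.range (n + 1), (i : ℚ) = n * (n + 1) / 2 := by
  induction n with
  | zero => norm_num [Finset.sum_range_succ, Finset.sum_range_zero]
  | succ n ih =>
    rw [Finset.sum_range_succ, ih]
    push_cast
    ring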
Rob Fielding (Nov 17 2024 at 18:48):
"because the sequence is not summable". I worked around this by using closed form for Sum_S(n), so that there's no putting in 0 in between random spots, doing parenthesis... n*(n+1)/2 nails it down exactly for every n. I leaned on the definition of B to force S to have a value. B = 1/4 = 1 + -2 + 3 + -4 + 5 + -6 + "..." has a structure that you can mix in to get the -1/12. There can't be a different value. In fact, Lean detects an error right away. Try to set it to 0, and it won't be accepted. The algebra literally cancels out all n, and you are left with -1/12. Sum_S(n) is huge positive, and Tail_S(n) is huge negative. They differ by -1/12
Andrew Yang (Nov 17 2024 at 18:49):
No Zeta, and no infinity is the point. I am a computer programmer. I just don't believe in an infinite number of iterations, because you will never be around to see the last iteration; by definition.
Note that mathematicians agree with you on this. ∑_{k=1}^∞ a_k = L does not mean "do the addition an infinite amount of times, and you will get a number called L". It means "there exists a number L such that for all ε > 0, the partial sums ∑_{k=1}^n a_k are within ε of L for all large enough n". The ∑ and the ∞ are just convenience notations and nothing more.
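Roughly how Mathlib makes that precise (a paraphrase, not a verbatim quote of its source): HasSum f a says the finite partial sums over ever-larger finite index sets tend to a, Summable f says such an a exists, and ∑' i, f i returns that limit, falling back to the junk value 0 otherwise, which is why Eric's example above evaluates to 0.
import Mathlib
example (f : ℕ → ℝ) (a : ℝ) :
    HasSum f a ↔
      Filter.Tendsto (fun s : Finset ℕ => ∑ i ∈ s, f i) Filter.atTop (nhds a) :=
  Iff.rfl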
Rob Fielding (Nov 17 2024 at 18:51):
Yes. I take "a = b" to literally mean "a -> b" and "b -> a" as rewrite rules. You therefore can't say that anything is equal to "+ ...".
Rob Fielding (Nov 17 2024 at 18:51):
And that's what forces the -1/12 value to come out the other end. You need B to isolate S. Use recursion to remove the ambiguity.
Rob Fielding (Nov 17 2024 at 18:52):
1+2+3+...+n + f(n)
Rob Fielding (Nov 17 2024 at 18:55):
This has been written up as Python code for a long time, btw. I still have more work to formalize everything beyond the fact that -1/12 = Sum_S(n) + Tail_S(n) is true. The tail is nailed down by the constraints imposed by 1/4 = Sum_B(n) + Tail_B(n), which you can show is the alternating sequence, justified as being the derivative of A at -1.
Rob Fielding (Nov 17 2024 at 18:56):
A = 1 + x A
(1-x)A = 1
A = 1/(1-x)
dA/dx = F
B = F(-1)
the choice of B is not important other than it helps extract S.
Rob Fielding (Nov 18 2024 at 02:09):
reading theorem not_summable : ¬ Summable fun x : ℕ => (x : ℝ) ... i do not see how that's relevant to whether the pattern "1+2+3+...+n+Tail_S(n)" can be rewritten as -1/12 or not. doing some research to see anybody that thinks that this result is "actually zero"; and why Ramanujan thought it was -1/12.
Rob Fielding (Nov 18 2024 at 02:11):
it implies that you get a 0, not because 0 is the value; but simply that it's "not summable", which seems to mean "no answer". But if you are trying to figure out what the answer is after an infinite number of iterations... that is completely true! there is no answer if you are asking what the answer is after a number of iterations which is unreachable by definition.
Rob Fielding (Nov 18 2024 at 02:12):
and that is exactly why i went with recursion; to not rely on an infinite process. even if it's "allowed", you can't actually do that on a computer.
Rob Fielding (Nov 18 2024 at 02:20):
(deleted)
Rob Fielding (Nov 18 2024 at 03:13):
This is as terse as I can get it, because I can't do circular references between the definitions. It's just mysterious how I figured out theorem S.
import Mathlib
/-
1/4 = B = 1 + -2 + 3 + -4 + 5 + -6 + ... + n*(-1)^{n-1} + Tail_B n
-1/12 = S = (1 + 2 + 3 + 4 + ... + n) + Tail_S n
-/
def Tail_B (n : ℤ):ℚ := (2*n+1)*(-1)^2/4
def Sum_B (n : ℤ) : ℚ := 1/4 - Tail_B n
def Sum_S (n : ℤ):ℚ := n*(n+1)/2
/- n cancels out to solve for a constant. this is the justification for the equation
3S = 4 SumS n - (SumS 2n + TailB 2n)
4S = S + 4 SumS n - SumS 2n - TailB 2n
4 TailS n = TailS 2n - TailB 2n
because
SumS 2n - SumB 2n = 4 SumS n
which means S - B = 4S, which needs S=-1/12
-/
theorem S : (4*Sum_S n - (Sum_S (2*n) + Tail_B (2*n)))/3 = -1/12 := by
unfold Sum_S Tail_B
simp
ring
def Tail_S (n:ℤ):ℚ := -1/12 - Sum_S n
theorem Sgoal : Sum_S n + Tail_S n = -1/12 := by
unfold Tail_S Sum_S
simp
Abraham Solomon (Nov 18 2024 at 06:43):
Is there any argument at all to convince you otherwise? It seems like there is literally nothing that could convince you otherwise, which seems like a bad idea.
Rob Fielding (Nov 18 2024 at 06:51):
well, all this messing with this caused something to click with how Lean works; massively better understanding. look at the tactics and unfold every definition you still see; and then simp, maybe ring. that's 90% of what i need for now. I am now not having any trouble getting theorems accepted, and can focus on making sure they are correctly stated:
import Mathlib
def Tail_B (n : ℤ):ℚ := (2*n+1)*(-1)^2/4
def Sum_B (n : ℤ) : ℚ := 1/4 - Tail_B n
def Bf (n:ℤ):ℚ := Sum_B n + Tail_B n
def Sum_S (n : ℤ):ℚ := n*(n+1)/2
/- therefore ... -/
def Sf (n:ℤ):ℚ := (4*Sum_S n - (Sum_S (2*n) + Tail_B (2*n)))/3
def Tail_S (n:ℤ):ℚ := Sf n - Sum_S n
/- because Sf is equiv to Sum_S + Tail_S -/
theorem SfWhy : Sf n = Sum_S n + Tail_S n := by
unfold Sf Sum_S Tail_B Tail_S Sf Sum_S Tail_B
simp
/- B = 1 + -2 + 3 + -4 + 5 + -6 + ... + n*(-1)^(n-1) + Tail_B n -/
theorem Bgoal : Sum_B n + Tail_B n = 1/4 := by
unfold Sum_B Tail_B
simp
/- In base13, Sf n is all 1 digits -/
theorem B13 : Sf n = 1 + 13 * (Sf n) := by
unfold Sf Sum_S Tail_B
simp
ring
/- S = 1 + 2 + 3 + ... + n + Tail_S(n)
= -1/12
= s = 13^0 + 13^1 + 13^2 + ... + 13^m + Tail_s(m)
-/
theorem Sgoal : Sum_S n + Tail_S n = -1/12 := by
unfold Sum_S Tail_S Sf Tail_B Sum_S
simp
ring
Rob Fielding (Nov 18 2024 at 07:07):
∀ {n : ℤ}, Sum_S n + Tail_S n = -1 / 12
=
1+2+3+...+n + Tail_S n
=
1 + 2 + 3 + ... + n + ((4Sum_S(n)-(Sum_S(2n)+Tail_B(2n)))/3 - n(n+1)/2)
:rolling_on_the_floor_laughing:
I have had this working as Python code for months. nothing on earth is more convincing than code that runs and gives the answers you expect. If it's really strange, then that's just your brain resisting something new going in.
So, the whole point of this is that if the theorem prover is sound; that you should put in the most bizarre thing you can get it to accept. That's how you figure out what the "instruction set" of the math really is. If fitting a recursive pattern gives an answer that you find highly offensive; then that's an amazing way to nail down the subtle details on how it really works.
Vincent Beffara (Nov 18 2024 at 07:24):
You are not using theorem S at all...
Vincent Beffara (Nov 18 2024 at 07:24):
Removing it, your code reduces to this:
import Mathlib
def Sum_S (n : ℤ):ℚ := n*(n+1)/2
def Tail_S (n:ℤ):ℚ := -1/12 - Sum_S n
theorem Sgoal : Sum_S n + Tail_S n = -1/12 := by
unfold Tail_S Sum_S
simp
Vincent Beffara (Nov 18 2024 at 07:25):
That is certainly true, but so is this:
import Mathlib
def Sum_S (n : ℤ):ℚ := n*(n+1)/2
def Tail_S (n:ℤ):ℚ := 37 - Sum_S n
theorem Sgoal : Sum_S n + Tail_S n = 37 := by
unfold Tail_S Sum_S
simp
Rob Fielding (Nov 18 2024 at 07:29):
the last one i posted is cleaned up, now that i am having no trouble getting them accepted. These observations hold for ANY sequence:
X = Sum_X(n) + Tail_X(n)
= 1 + Tail_X(1)
= 1 + 2 + Tail_X(2)
= 1 + 2 + 3 + Tail_X(3)
....
You can only write a finite number of them. If you have a closed form, then you can write a Sum and Tail for it. Keep in mind that S is not the sum. Sum is the sum...
S - Tail_S(n) = n*(n+1)/2.
Remember: S = Sum_S(n) + Tail_S(n) needs to be taken literally.
lim n->infty[ S - Tail_S(n) = Sum_S(n) ]
when people argue that the sum is infinity, that's what they mean.
That's the thing that people are getting wrong.
Example:
T = 1 + 2 T.
= 1 + 2 + 4 + 8 + ... + 2^n + 2^(n+1)T
T - 2^(n+1)T = 1 + 2 + 4 + 8 + ... + 2^n
T(1 - 2^(n+1)) = Sum_T(n)
(-1)(1 - 2^(n+1)) = Sum_T(n)
2^(n+1) - 1 = 1 + 2 + 4 + 8 + ... + 2^n
Vincent Beffara (Nov 18 2024 at 07:44):
Nobody is disputing that if you say def Sum_B (n : ℤ) : ℚ := 1/4 - Tail_B n then Sum_B n + Tail_B n = 1/4 for all n.
Rob Fielding (Nov 18 2024 at 07:53):
Actually... Sf came from S-B. But I had to define the function, then justify it later with SfWhy. Sf n = Sum_S n + Tail_S n, but that would be a tautology that doesn't give a value. So I just set it to the thing that forces it to be -1/12.
ie:
S - B ... can be split up into Sum and Tail. But there's a super-subtle point in the proofs where people casually use "+...". Because they got 0+4+0+8+0+12 ... and are not taking the number of expansions into account. That's why it's Sum_S(2*n) to represent expanding twice as far. You need to preserve that for the algebra to work.
So, just considering sums.... you get 4 times the sum but for half as many terms expanded out:
Sum_S (2n) - Sum_B (2n) = 4* Sum_S n
And because Tail_S isn't defined yet, you do the same for tails, but substitute Tail_S n = (S - Sum_S n) to make it work...
Tail_S (2n) - Tail_B (2n) = 4 * Tail_S n
=
(S - Sum_S (2n)) - Tail_B (2n) = 4 * (S - Sum_S n)
...
which you solve for S to get:
S = (4Sum_S(n) - (Sum_S(2n)+Tail_B(2*n)))/3
And all the n cancel out, and you get a constant of -1/12. It's the reason why B is involved in this at all.
def Sf (n:ℤ):ℚ := (4*Sum_S n - (Sum_S (2*n) + Tail_B (2*n)))/3
def Tail_S (n:ℤ):ℚ := Sf n - Sum_S n
/- because Sf is equiv to Sum_S + Tail_S -/
theorem SfWhy : Sf n = Sum_S n + Tail_S n := by
unfold Sf Sum_S Tail_B Tail_S Sf Sum_S Tail_B
simp
It is true that I am not referencing SfWhy somewhere, but i state it so that I get an error if it's not true. But it's true. I am still learning how to use the tactics. I can do this on paper; and am confident that I can get it working in Lean....
Rob Fielding (Nov 18 2024 at 08:14):
There is a real application for the divergent sums btw: It lets you calculate closed forms. Sum of powers of 13:
s = 13^0 + 13^1 + 13^2 + ... + 13^n + 13^(n+1)s
s(1 - 13^(n+1)) = 13^0 + 13^1 + 13^2 + ... + 13^n
(-1/12)(1 - 13^(n+1)) = 13^0 + 13^1 + 13^2 + ... + 13^n
=
(13^(n+1) - 1)/12
I only started fiddling with this because I was trying to write code to turn infinite series into recursion only. I thought it would be simple; but finding recursive equivalents is not easy in general.
Rob Fielding (Nov 18 2024 at 08:36):
I need to encode the justification for B=1/4. It's derived like this, but I don't know enough Lean to start from A. I have to just define it, and have a theorem justify it later (like I did to get the only possible value for S, -1/12)
/-
A = 1 + x A
(1-x)A = 1
A = 1/(1-x)
= x^0 + x^1 A
= x^0 + x^1 + x^2 + x^3 + ... + x^n + x^(n+1)/(1-x)
F = dA/dX = 0 x^(-1) + 1 x^0 + 2 x^1 + 3 x^2 + ... + n x^(n-1) + d[Tail_A n]/dx
B = F(-1)
-/
Rob Fielding (Nov 19 2024 at 01:40):
Greetings! This is sorry-free. It does not plug in any constants directly. It uses theorems to justify the choices I made. If Lean accepts it, then the question is what is actually stated; and what gaps there are in the theorems.
The conclusion I get:
∀ {n : ℤ}, Sum_S n + Tail_S n = -1 / 12
=
(1+2+3+...+n) + Tail_S n
Where you can think of + Tail_S n as an actual definition of + ... so that it can be manipulated in the algebra. Note that I actually had to differentiate + Tail_A, which most people would be writing as + ....
Now I am not sure how theorems are supposed to be linked together; because you sometimes need to define something, and justify its definition with a theorem later on. I don't yet know how to differentiate A to get F, to get the constant value for B. So I just diff A manually. I don't know how to link together non-equal steps, so I just do a theorem per-step. The fact that I "don't use" the theorems isn't true. I use them so that I get an error when the assumption I am making is not true.
Contrary to comments, I can't just assign Sf whatever value I like; because the theorems will fail if I define Sf=0, etc. The same for B.
import Mathlib
def Af (x : ℚ):ℚ := 1/(1-x) /- A = 1 + x*A = x^0 + x^1 + x^2 + ... + x^n + x^(n+1)A = 1/(1-x) -/
def Ff (x : ℚ):ℚ := 1/((1-x)^2) /- dA/dx = F = 0 x^(-1) + 1 x^0 + 2 x^1 + 3 x^2 + ... + n x^(n-1) + d[Tail_A x n] -/
/- B = F(-1) = 1 + -2 + 3 + -4 + 5 + -6 + ... + (Tail_F (-1) n) -/
def Tail_B (n : ℤ) : ℚ := (2*n+1)*(-1)^n/4
def Sum_B (n : ℤ) : ℚ := (Ff (-1)) - Tail_B n
def Bf (n:ℤ):ℚ := Sum_B n + Tail_B n
def Sum_S (n : ℤ) : ℚ := n*(n+1)/2
def Sf (n:ℤ):ℚ := (4*Sum_S n - (Sum_S (2*n) + Tail_B (2*n)))/3
def Tail_S (n : ℤ) : ℚ := Sf n - Sum_S n
/- Sum_S is related to a multiple of itself -/
theorem SumDiff : Sum_S (2*n) - Sum_B (2*n) = 4*Sum_S (n) := by
unfold Sum_S Sum_B Tail_B Ff
simp
ring_nf
/- Tail is also related to a multiple of itself, in same relation -/
theorem TailDiff : Tail_S (2*n) - Tail_B (2*n) = 4*Tail_S n := by
unfold Tail_B Tail_S Sum_S Sf Sum_S Tail_B
simp
ring_nf
/- SumDiff and TailDiff imply: S-B=4S -/
theorem SminusBis4S : Sf (2*n) - Bf (2*n) = 4*(Sf n) := by
unfold Sf Sum_S Tail_B Bf Sum_B Tail_B Ff
simp
ring
/- leads to justifySf -/
theorem TailDiffBySum : (Sf (2*n) - Sum_S (2*n)) - Tail_B (2*n) = 4*(Sf n - Sum_S n) := by
unfold Sf Sum_S Tail_B
simp
ring
/- n cancels to give the constant, and only possible value for Sf -/
theorem justifySf : (4*Sum_S n - (Sum_S (2*n) + Tail_B (2*n)))/3 = Sum_S n + Tail_S n := by
unfold Tail_B Sum_S Tail_S Sum_S Sf Sum_S Tail_B
simp
/- Sf is all 1 digits in base13, which makes it the sum of all ℕ powers of 13! -/
theorem base13allOnes : Sf n = 1 + 13 * (Sf n) := by
unfold Sf Sum_S Tail_B
simp
ring
/- 1+2+3+...+n + Tail_S = -1/12 = S = 13^0 + 13^1 + 13^2 + ... + 13^m + Tail_s(m) = s -/
theorem Sgoal : Sum_S n + Tail_S n = -1/12 := by
unfold Sum_S Tail_S Sf Sum_S Tail_B
simp
ring
Rob Fielding (Nov 19 2024 at 03:12):
So, was Ramanujan a crazy person?
Yakov Pechersky (Nov 19 2024 at 03:43):
Here's a way to start to try proving some of your comments:
import Mathlib
example (x : Rat) (n : Nat) : 1/(1 - x) = ((Finset.range (n + 2)).image (fun k => x ^ k)).sum := by sorry
Vincent Beffara (Nov 19 2024 at 09:43):
AFAICT, you are still proving an identity between two explicitly defined expressions (which is true but does not have a lot of mathematical content), and listing many intermediate theorems that you never use (but are indicative of some math that wants to get out).
I have a proposal: since what you want to do is show that any "extended sum" applied to n gives - 1 / 12, let's actually show this:
import Mathlib
structure Reasonable (S : (ℕ → ℝ) → ℝ) where
-- Linearity
h1 : ∀ (a b : ℕ → ℝ), S (fun n => a n + b n) = S a + S b
h2 : ∀ (c : ℝ) (a : ℕ → ℝ), S (fun n => c * a n) = c * S a
-- Shift
h3 : ∀ (a : ℕ → ℝ), S a = a 0 + S (fun n => a (n + 1))
-- Splitting even / odd
h4 : ∀ (a : ℕ → ℝ), S a = S (fun n => a (2 * n)) + S (fun n => a (2 * n + 1))
-- Plus whatever rules you want to add
theorem twelfth {S : (ℕ → ℝ) → ℝ} (hS : Reasonable S) : S (fun n => n) = - 1 / 12 := by
sorry
The "right" proof of that theorem will involve the manipulations that you are doing, and would be satisfying to have.
Be careful though that the definition of Reasonable I am giving here is totally wrong, no summation operator is reasonable:
theorem caution {S : (ℕ → ℝ) → ℝ} (hS : Reasonable S) : False := by
have bug := hS.h3 (fun n => 1)
linarith
The point is to get your - 1 / 12 from the manipulations that you are doing. An actual definition would have a summation operator with values in Option ℝ or something, and you would have to state that "if the sum is actually defined, it is equal to - 1 / 12".
With an actually well chosen definition of Reasonable and an accepted proof of twelfth I believe that you would have something pretty nice.
Mario Carneiro (Nov 19 2024 at 09:47):
@Rob Fielding, I think this conversation would be improved if you were not posting a page with every response and not obviously engaging with the other people in those responses. If you say one sentence at a time, people have an opportunity to respond before you post a whole paragraph, codeblock or mathematical derivation that doubles down on a misapprehension
Rob Fielding (Nov 19 2024 at 23:51):
This is sorry-free. All stated theorems are validated to be true. There are no red error markers. But it is now clear that there has to be a way to link these up. So, I don't know exactly what you look for when you see a proof without any complaints. It appears that a human still can come in and cast doubt on it.
I thought that this deals with just blatantly setting Sf n = -1/12 and then making theorems to verify it later. Is this insufficient to define S?
def Sf (n:ℤ):ℚ := (4*Sum_S n - (Sum_S (2*n) + Tail_B (2*n)))/3
Because later, it's justified as being the same as 1+2+3+...+n + Tail_S(n)
theorem justifySf : (4*Sum_S n - (Sum_S (2*n) + Tail_B (2*n)))/3 = Sum_S n + Tail_S n := by
I could have just set Sf to -1/12 and use a theorem to justify it later. Because it's recursive, you have to define some things, and do the recursive definitions in theorems later. They will turn red if they are false. I get here because I only know how to do equalities like: (a=b). I don't know how to properly link up uses of = as just another binary operator. I want to use that to create 5*(a=b) to make 5*a = 5*b, then (5*a=5*b)+X to get 5*a+X = 5*b+X, etc. The definition of justifySf comes from stating Sum_S (2*n) - Sum_B (2*n) = 4*Sum_S n to show how S without tails is related to itself. Then from there, if S - B = 4*S, then the tails have the same relationship. Solving Tail_S (2*n) - Tail_B (2*n) = 4 * Tail_S n for S has a super-subtle point in it. Without the use of infinity, you can see that the right-hand side expands half as many times as on the left-hand side. The + ... proofs can't capture this in the notation.
Matt Diamond (Nov 20 2024 at 00:08):
I don't know how to properly link up uses of = as just another binary operator. I want to use that to create 5*(a=b) to make 5*a = 5*b, then (5*a=5*b)+X to get 5*a+X = 5*b+X, etc.
sounds like maybe docs#congrArg could help here?
Matt Diamond (Nov 20 2024 at 00:14):
import Mathlib
example (a b n : ℕ) : a = b → n * a = n * b := fun h => congrArg (n * ·) h
example (a b n : ℕ) : a = b → a + n = b + n := fun h => congrArg (· + n) h
Rob Fielding (Nov 20 2024 at 00:41):
Oh! Thank you! Amazing @Matt Diamond .
I think i can use just this for almost everything that i do in practice.
The last theorem is just to ask what S is, and it must be -1/12.
theorem justifySf2:
(Bf n = 1/4) ∧
(Sum_S (2*n) - Sum_B (2*n) = 4*Sum_S n) ∧
(Sum_S n + Tail_S n = Sf n) ∧
(4*Sum_S n - (Sum_S (2*n) + Tail_B (2*n))/3 = Sf n)
→ 4*Sum_S n - Sum_S (2*n) - Tail_B (2*n) = 3*(Sf n)
→ (Sf n) - Sum_S (2*n) - Tail_B (2*n) = 4*(Sf n) - 4*Sum_S n
→ (Sf (2*n)) - Sum_S (2*n) - Tail_B (2*n) = 4*Tail_S n
→ Tail_S (2*n) - Tail_B (2*n) = 4*Tail_S n := by
unfold Sum_S Sum_B Ff Tail_B Sf Sum_S Tail_B Tail_S Sf Sum_S Tail_B
simp
Rob Fielding (Nov 20 2024 at 02:21):
it was accepted because I had a bug in the definition of Tail_B, where (-1)^2 was where (-1)^n should have been. I can't figure out how to get rw [pow_mul, neg_one_sq] to work. It isn't necessary to solve for S, but it's necessary to do a #eval Sum_B 1 to validate that B is the alternating sequence. It was always this that stopped me, that ring can't handle (-1)^(2*n)=1 properly.
Matt Diamond (Nov 20 2024 at 02:27):
I can't figure out how to get rw [pow_mul, neg_one_sq] to work.
Could you provide a statement of what you have and what you're trying to get to using rw [pow_mul, neg_one_sq]? It's hard to help without context.
Rob Fielding (Nov 20 2024 at 02:57):
hi! the help I am getting from you guys is tremendous. i went from being totally stuck, to thinking that I can do almost all algebra this way... as long as it's not relying on things like (-1)^2=1=1^n. Claude is unbelievably good at Lean, especially when you consider how bad ChatGPT is at it. I punted on linking A to F to show Tail_F is d[Tail_A]/dx ... Which would explain where Tail_B came from.
import Mathlib
def Af (x : ℚ):ℚ := 1/(1-x) /- A = 1 + x*A = x^0 + x^1 + x^2 + ... + x^n + x^(n+1)A = 1/(1-x) -/
def Ff (x : ℚ):ℚ := 1/((1-x)^2) /- dA/dx = F = 0 x^(-1) + 1 x^0 + 2 x^1 + 3 x^2 + ... + n x^(n-1) + d[Tail_A x n] -/
/- B = F(-1) = 1 + -2 + 3 + -4 + 5 + -6 + ... + (Tail_F (-1) n) -/
def Tail_B (n : ℤ) : ℚ := ((2*n+1)*(-1)^n)/4
def Sum_B (n : ℤ) : ℚ := (Ff (-1)) - Tail_B n
def Term_B (n:ℤ):ℚ := Sum_B (n) - Sum_B (n-1)
def Bf (n:ℤ):ℚ := Sum_B n + Tail_B n
def Sum_S (n : ℤ) : ℚ := n*(n+1)/2
def Sf (n:ℤ):ℚ := (4*Sum_S n - (Sum_S (2*n) + Tail_B (2*n)))/3
def Tail_S (n : ℤ) : ℚ := Sf n - Sum_S n
def Term_S (n:ℤ):ℚ := Sum_S (n) - Sum_S (n-1)
theorem BisConstant : Bf n = Bf (n+1) := by
unfold Bf Sum_B Tail_B
simp
theorem justifySf:
((Bf n = 1/4) ∧
(Sum_S (2*n) - Sum_B (2*n) = 4*Sum_S n) ∧
(Sum_S n + Tail_S n = Sf n) ∧
(4*Sum_S n - (Sum_S (2*n) + Tail_B (2*n))/3 = Sf n))
→ 4*Sum_S n - Sum_S (2*n) - Tail_B (2*n) = 3*(Sf n)
→ (Sf n) - Sum_S (2*n) - Tail_B (2*n) = 4*(Sf n) - 4*Sum_S n
→ (Sf (2*n)) - Sum_S (2*n) - Tail_B (2*n) = 4*Tail_S n
→ Tail_S (2*n) - Tail_B (2*n) = 4*Tail_S n := by
unfold Sum_S Sum_B Ff Tail_B Sf Sum_S Tail_B Tail_S Sf Sum_S Tail_B
simp
/- Everything above is sorry-free and nothing is red. Code below fails because I can't get rid of ((-1)^2) or ((1)^n) -/
theorem SisConstant : Sf n = Sf (n+1) := by
unfold Sf Sum_S Tail_B
simp
rw [zpow_mul, zpow_mul, neg_one_sq]
ring_nf
Rob Fielding (Nov 20 2024 at 06:20):
oh my God is Claude good at writing Lean. My problem is that with functions that take Z as input and output Q, theorems whose tactic proofs encounter integer powers of -1 are COMICALLY hard to write. Claude got it all sorted out for me. I would say that the handling of integer powers of -1 is by far the biggest barrier for using Lean for routine work. If ring doesn't do it, then we need a tactic that does. The proof is much larger, and it is much much more solid. Claude thinks it's a really good proof, because it really deeply groks what I am doing. I am making fine distinctions between the algebra, the limits (ie: S-(Tail_S n)=Sum_S n) as n gets large, and keeping the distinction on how many term expansions happened. Claude totally groks S=1+13S as the reason why this is an infinite digit number (sum of powers of 13... all 1s in base13).
The only remaining bit is the comments about the mystery of why B was chosen like it was. But how S was chosen is very very clear. But my theorems are a total mess because ring can't handle things like (-1)^n and 1^n correctly. It only worked previously because I had a bug and wrote Tail_B to use (-1)^2 instead of (-1)^n, which caused most things to work, but failed to give me the right terms for B until it was fixed.
Ruben Van de Velde (Nov 20 2024 at 06:20):
Add zpow_ofNat before neg_one_sq
Ruben Van de Velde (Nov 20 2024 at 06:21):
The exponent 2 is an integer rather than a natural number, which is a problem to lean
Rob Fielding (Nov 20 2024 at 06:23):
Here is where Claude got me, with zpow_ofNat added. It looks tight other than the definition of B not really being justified as the alternating sequence.
import Mathlib
def Af (x : ℤ):ℚ := 1/(1-x) /- A = 1 + x*A = x^0 + x^1 + x^2 + ... + x^n + x^(n+1)A = 1/(1-x) -/
def Ff (x : ℤ):ℚ := 1/((1-x)^2) /- dA/dx = F = 0 x^(-1) + 1 x^0 + 2 x^1 + 3 x^2 + ... + n x^(n-1) + d[Tail_A x n]/dx -/
/- B = F(-1) = 1 + -2 + 3 + -4 + 5 + -6 + ... + (Tail_F (-1) n) -/
def Tail_B (n : ℤ) : ℚ := ((2*n+1)*(-1)^n)/4
def Sum_B (n : ℤ) : ℚ := (Ff (-1)) - Tail_B n
def Term_B (n:ℤ):ℚ := Sum_B (n) - Sum_B (n-1)
def Bf (n:ℤ):ℚ := Sum_B n + Tail_B n
def Sum_S (n : ℤ ) : ℚ := n*(n+1)/2
def Sf (n:ℤ):ℚ := (4*Sum_S n - (Sum_S (2*n) + Tail_B (2*n)))/3
def Tail_S (n : ℤ) : ℚ := Sf n - Sum_S n
def Term_S (n:ℤ):ℚ := Sum_S (n) - Sum_S (n-1)
theorem BisConstant : Bf n = Bf (n+1) := by
unfold Bf Sum_B Tail_B
simp
theorem justifySf:
((Bf n = 1/4) ∧
(Sum_S (2*n) - Sum_B (2*n) = 4*Sum_S n) ∧
(Sum_S n + Tail_S n = Sf n) ∧
(4*Sum_S n - (Sum_S (2*n) + Tail_B (2*n))/3 = Sf n))
→ 4*Sum_S n - Sum_S (2*n) - Tail_B (2*n) = 3*(Sf n)
→ (Sf n) - Sum_S (2*n) - Tail_B (2*n) = 4*(Sf n) - 4*Sum_S n
→ (Sf (2*n)) - Sum_S (2*n) - Tail_B (2*n) = 4*Tail_S n
→ Tail_S (2*n) - Tail_B (2*n) = 4*Tail_S n := by
unfold Sum_S Sum_B Ff Tail_B Sf Sum_S Tail_B Tail_S Sf Sum_S Tail_B
simp
theorem even_power_neg_one (n : ℤ) : (-1:ℚ)^(2*n) = 1 := by
rw [zpow_mul, zpow_ofNat, neg_one_sq]
simp
theorem SisConstant : Sf n = Sf (n+1) := by
unfold Sf Sum_S Tail_B
simp
rw [even_power_neg_one n, even_power_neg_one (n+1)]
ring
theorem S_base13 : (Sf n) = 1 + 13*(Sf n) := by
unfold Sf Sum_S Tail_B
simp
rw [even_power_neg_one n]
ring_nf
theorem Sgoal : Sum_S n + Tail_S n = -1/12 := by
unfold Sum_S Tail_S Sf Sum_S Tail_B
simp
rw [even_power_neg_one n]
ring
Rob Fielding (Nov 20 2024 at 06:47):
Claude likes zpow_ofNat. it is dramatically simpler. it works. Claude even suggested a better tactic that it calls rational_algebra.
Mario Carneiro (Nov 20 2024 at 10:07):
@Rob Fielding said:
This is sorry-free. All stated theorems are validated to be true. There are no red error markers.
It seems that you are still missing the point. There is not a single person who has questioned the correctness of the lean theorems. The comments are all of the form "you are proving something other than what you want to show". In particular, the statement of the main theorem you have written in lean looks nothing like -1/12 = 1+2+3+... and this should be a red flag. It means you have to work much harder to "interpret" the expression you are actually proving, and the claim I would make is that you are making a mistake in this process. So I suggest you focus your efforts on that: try to break down in little steps why you think that the statement you have written in lean is equivalent to the informal mathematical statement so that we can tease this apart.
Rob Fielding (Nov 20 2024 at 18:21):
It holds for all n (number of expansions): (Sum_S n) + (Tail_S n) = -1/12. I don't understand how that does not mean this:
Sgoal says this:
-1/12 = (Sum_S 0) + (Tail_S 0) = 0 + Tail_S 0
= (Sum_S 1) + (Tail_S 1) = 1 + Tail_S 1
= (Sum_S 2) + (Tail_S 2) = 1 + 2 + Tail_S 2
= (Sum_S 3) + (Tail_S 3) = 1 + 2 + 3 + Tail_S 3
= (Sum_S 4) + (Tail_S 4) = 1 + 2 + 3 + 4 + Tail_S 4
...
It doesn't say "infinity", but no matter how many expansions you choose. +Tail_S n
is an explicit replacement for + ...
. Even if you don't like this result, the tail has to be defined as something. And the final number is all 1s in base13.
S = 1 + 13 S
= 13^0 +13^1 S
= 13^0 + 13^1 + 13^2 S
= 13^0 + 13^1 + 13^2 + ... + 13^n + 13^(n+1)S
...
= 1 + 13 S
(1-13)S = 1
-12 S = 1
S = -1/12
In base 13 there are only 13 digits. Call them 0,1,2,3,4,5,6,7,8,9,a,b,c. (.....cccc_13) is a representation of
12*S = 12*(....1111_13) = (......cccccc_13) = -1
So, all the complaints that this is impossible neglect that -1/12 has an infinite digit representation. It's just that it's a specific infinite number. It's a thing that if you multiply it times 12, you get an infinite digit representation of -1, which for every number base would be the highest digit plus base times itself:
-1 =
V = 1 + 2 V
T = 2 + 3 T
...
U = 9 + 10 U
...
12S = 12 + 13 (12S)
...
Z = (base-1) + base*Z = -1
In other words, if you add 1 to it, it will ripple-carry to become 0. This is common sense to somebody used to building computers. The high bit in binary denotes a negative number, the same as "carry the 1 out to infinity"; it's a similar behavior. I don't know how to formalize an explanation of this, so that you can see that it's true without even reading it. And that is kind of the point of Lean. If you can get Lean to accept it, then you don't need to spend time checking it.
Chris Birkbeck (Nov 20 2024 at 18:50):
The confusion is that what we normally mean by "+..." and what you have is different. When you write -1/12 = 1+2+3+..., mathematically that would be the statement:
lemma A : -(1 : ℚ) / 12 = limUnder (atTop) (fun k : ℕ => ∑ m in Finset.range k, (m : ℚ)) := by sorry --this is false
but what you are proving is (I expect) essentially
-1/12 = (Sum_S 0) + (Tail_S 0) = 0 + (-1/12 - 0)
= (Sum_S 1) + (Tail_S 1) = 1 + (-1/12 - 1)
= (Sum_S 2) + (Tail_S 2) = 1 + 2 + (-1/12 - 1 -2)
= (Sum_S 3) + (Tail_S 3) = 1 + 2 + 3 + (-1/12 - 1 -2 -3)
= (Sum_S 4) + (Tail_S 4) = 1 + 2 + 3 + 4 + (-1/12 - 1 -2 -3 -4)
...
which everyone (including lean) is happy to accept is true for any n, but also doesn't say much since we can change the -1/12 (on both sides) for anything and importantly has nothing to do with the real meaning of the "+..." of the first statement.
Vincent Beffara (Nov 20 2024 at 18:50):
If your goal is to remove all the + ... from the explanation and notation, you might also want to remove it from the (......cccccc_13) as well as the implicit ones in "infinite digit expansion" so that we understand better what you mean.
Eric Paul (Nov 20 2024 at 18:53):
Along the same lines as what @Chris Birkbeck said, look at this different definition of Sum_S and Tail_S I provide in the following. I am able to prove the same goal you do, but it is not showing anything. This is why it would be good to formalize the full claim being made in lean to make sure you do mean what you want to say.
import Mathlib
def Sum_S (n : ℤ) : ℤ := n * (n+1) / 2
def Tail_S (n : ℤ) : ℚ := -Sum_S n - 1/12
theorem Sgoal : Sum_S n + Tail_S n = -1/12 := by
simp [Sum_S, Tail_S]
ring
Rob Fielding (Nov 20 2024 at 18:54):
I completely agree with Chris Birkbeck's explanation. The problem is that the Lean Proof nails down what the final value is. And for some reason, that final value is -1/12. It uses the fact that the sum of the alternating sequence is 1/4. In other words, a thing that if you multiply it times 4, you get an infinite digit representation of 1.
Philippe Duchon (Nov 20 2024 at 18:55):
I will just repeat what Mario said with other words, but nobody is disputing the fact that, as defined, Sum_S n + Tail_S n = -1/12 holds for all n, or even that Sum_S n is the sum of naturals from 1 to n.
The point that is up to discussion, and which you still have not tried to address, is the link between Sum_S n and giving a meaning to "the sum of all naturals". Nothing in your Lean development even approaches that. Classical mathematics gives an unambiguous answer to this question, in a theory that gives meaning to (some) infinite sums, and in which not all manipulations that are valid for finite sums have a valid translation for infinite sums.
You don't want to go this way; fine, but then you have to understand that until you give some link between the finite sums that you do manipulate correctly, and whatever meaning you are intending to give to infinite sums, what you are doing is not breaking much ground; most people who know the classical theory of infinite sums will take a quick look at your development, and not find anything in there to shake their beliefs.
Rob Fielding (Nov 20 2024 at 18:58):
Go into the proof and explicitly set it to a value other than -1/12. All of the theorems will then fail. The definition of (Sf n) is a recursive definition involving Sum_S and Tail_B. It forces (Sf n) to take on the value of -1/12 for n. If you only define S=-1/12 alone, then yes, it would work for any other number. But it has to be consistent with other definitions; where B=1/4 due to a much simpler explanation of A=1+x*A, then F=dA/dx, then B=F(-1).
Rob Fielding (Nov 20 2024 at 18:59):
You get -1/12, because that complicated definition of (Sf n) is special. All the uses of n cancel, and only a constant of -1/12 remains. I don't know why it had to be a number such that it's all 1s in base13 though; but you can see that it is!
Kyle Miller (Nov 20 2024 at 19:00):
Go into the proof and explicitly set it to a value other than -1/12. All of the theorems will then fail.
No one is disputing this.
The definition of (Sf n) is a recursive definition involving Sum_S and Tail_B.
That is incorrect. "Recursive" means the right-hand side of the definition refers to what is being defined. The right-hand side of Sf does not refer to Sf:
def Sf (n:ℤ):ℚ := (4*Sum_S n - (Sum_S (2*n) + Tail_B (2*n)))/3
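For contrast, here is what a genuinely recursive definition looks like in Lean, where the right-hand side refers to the function being defined (sumUpTo is a hypothetical name, just for illustration):
import Mathlib
def sumUpTo : ℕ → ℚ
  | 0 => 0
  | n + 1 => sumUpTo n + (n + 1 : ℚ)
#eval sumUpTo 4 -- 10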
Chris Birkbeck (Nov 20 2024 at 19:03):
Rob Fielding said:
I completely agree with Chris Birkbeck's explanation. The problem is that the Lean Proof nails down what the final value is. And for some reason, that final value is -1/12. It uses the fact that the sum of the alternating sequence is 1/4. In other words, a thing that if you multiply it times 4, you get an infinite digit representation of 1.
Sorry, I don't understand what the statement "the lean proof nails down the final value" means. What lean proof are you talking about? Or what final value are you talking about? To me the final value would be what the thing on the right hand side of lemma A is.
Kyle Miller (Nov 20 2024 at 19:05):
12*S = 12*(....1111_13) = (......cccccc_13) = -1
It's possible to do reasoning like this, but not with the integers/rationals/reals in base 13. This is a completely different number system.
In the real numbers, you can have base-whatever expansions where there are infinitely many digits after the decimal point, but there's no such thing as numbers with infinitely many digits before the decimal point (other than repeated 0's forever).
Yes, you can imagine ...9999999999.0 in base 10 as being -1, but you can't work with it as if it were literally -1, because ...9999999999.0 converges to infinity in the real number system. You would need to work with a different number system with little to no relation to the real number system.
Rob Fielding (Nov 20 2024 at 19:05):
That's the sort of recursion you see in code all the time. Mutual recursion. The trickiest part of it that keeps you from using +... is that you need to keep track of how many times you expanded it.
Sum_S (2*n) - Sum_B (2*n) = 4*(Sum_S n)
handles the tricky situation where people justify adding
1 + 2 + 3 + 4 + 5 + 6 + ...
-1 + 2 + -3 + 4 + -5 + 6 + ...
0 + 4 + 0 + 8 + 0 + 12 + ...
So, that is why I insist on using recursion. Only with recursion will you still have an algebra that you can correctly manipulate.
Kyle Miller (Nov 20 2024 at 19:07):
I have never seen recursion in code where the body of the definition(s) doesn't call the function(s) being defined.
Kyle Miller (Nov 20 2024 at 19:08):
Neither Sum_S nor Tail_B call Sf, so there is no mutual recursion either.
Rob Fielding (Nov 20 2024 at 19:14):
That's literally how compilers are made. BNF is a huge recursion, where larger structures refer to smaller structures. I am taking a CS view on this. And in this view, you need to treat a=b as "a can be rewritten as b" and "b can be rewritten as a". It's clear that a lot of mathematicians have something else in mind. But you can't make it work on a computer if you don't treat it this way.
Y = 0.9 + 0.1 Y
(1.0 - 0.1) Y = 0.9
0.9 Y = 0.9
Y = 1
Z = 1 + 2 Z
(1 - 2)Z = 1
-1 Z = 1
Z = -1
= 1 + 2 Z
= 2^0 + 2(1 + 2Z)
= (2^0 + 2^1 + 2^2 + ... + 2^n) + 2^(n+1)Z
Z - 2^(n+1)Z = Sum_Z(n)
Z(1 - 2^(n+1)) = Sum_Z(n)
(-1)(1 - 2^(n+1)) = Sum_Z(n)
2^(n+1) - 1 = Sum_Z(n)
= 1 + 2 + 4 + 8 + (Sum_Z(n) - Sum_Z(3))
Use recursion, and you can take everything super-literally; because you are not using infinite processes where the manipulation can be kind of waffly. But a saying in computer security "Undefined behavior becomes defined by an implementation" is so true.
Rob Fielding (Nov 20 2024 at 19:30):
may i suggest that this is a difference in view on what a = b means. Take B = 1/4 as an example:
B=1/4
4 B = 1
(1+3)B = 1
B = 1 + (-3)B
= (-3)^0 + (-3)^1 + (-3)^2 + (-3)^3 + ... + (-3)^n + ((-3)^(n+1))B
1/4 can be represented in base (-3) as a string of all 1s. This is the kind of reasoning that leads you to S=-1/12. It's an equation that you can manipulate. It does not mean that if you keep adding the next highest number that you will encounter -1/12. It just means this:
S = -1/12
-12 S = 1
(1-13)S = 1
S = 1 + 13 S
... [and through some complicated manipulations involving n*(n+1) and B, you find...]
= 1 + Tail_S(1)
= 1 + 2 + Tail_S(2)
= 1 + 2 + 3 + Tail_S(3)
...
= 1 + 2 + 3 + ... + n + Tail_S(n)
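The finite identity behind the base (-3) expansion of 1/4 above can be checked the same way; a minimal sketch (not from the thread), with the tail written out as (-3)^(n+1) * B = (-3)^(n+1)/4:
import Mathlib
example (n : ℕ) :
    (1 : ℚ) / 4 = (∑ i ∈ Finset.range (n + 1), (-3 : ℚ) ^ i) + (-3) ^ (n + 1) / 4 := by
  induction n with
  | zero => norm_num [Finset.sum_range_succ, Finset.sum_range_zero]
  | succ n ih =>
    rw [Finset.sum_range_succ]
    linear_combination ih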
Kyle Miller (Nov 20 2024 at 19:31):
I think the difference in view is simply that all these reasonings in these text blocks should be represented directly in Lean somehow — that's the standard this Lean community expects.
Rob Fielding (Nov 20 2024 at 19:34):
I am working on it. I got a ton of help from you guys. Claude is helpful. But honestly, it doesn't matter where B came from. I just use it because people doubt the definition in spite of
#eval Term_B 1 -- 1
#eval Term_B 2 -- -2
#eval Term_B 3 -- 3
#eval Term_B 4 -- -4
#eval Term_B 5 -- 5
and similarly for Term_S. From there, you manipulate the definitions to get something that Ramanujan already told us; yet there is a chorus of people that don't believe in recursion.
Kyle Miller (Nov 20 2024 at 19:36):
Don't get me wrong -- working with formal power series is really cool and you can reason about things in cool ways (combinatorics makes use of them too for counting arguments), but that doesn't mean that you can evaluate formulas for formal power series outside their radii of convergence and expect it to mean anything. (Though sometimes, with very careful reasoning, you can extract some meaning! This happens in combinatorics.)
Kyle Miller (Nov 20 2024 at 19:38):
Like the formula 1 + x + x^2 +... = 1/(1-x) is only true if the absolute value of x is less than 1. That's usually covered in a second-semester calculus course I think.
Thomas Browning (Nov 20 2024 at 19:38):
@Rob Fielding Here's a "proof" that 1 + 2 + 3 + ... = -1/8: https://www.reddit.com/r/mathmemes/comments/r36cws/as_you_can_clearly_see_the_sum_of_all_natural/
And I can even give a "proof" in lean:
import Mathlib
def Sum_S (n : ℤ) : ℚ := n*(n+1)/2
def Sf (n : ℤ) : ℚ := (9*Sum_S n - (Sum_S (3*n + 1)))/8
def Tail_S (n : ℤ) : ℚ := Sf n - Sum_S n
theorem Sgoal : Sum_S n + Tail_S n = -1/8 := by
unfold Tail_S Sf Sum_S
simp
ring
Kyle Miller (Nov 20 2024 at 19:40):
(Thanks @Thomas Browning, I was spending some time going down the route of trying to get other "obvious" values for what ...99999 in base 10 would be.)
Rob Fielding (Nov 20 2024 at 19:46):
You need a justifySf to prove that: (9*Sum_S n - (Sum_S (3*n + 1)))/8 = Sum_S n + Tail_S n
Rob Fielding (Nov 20 2024 at 19:47):
S is pinned down by: Sum_S (2*n) - Sum_B (2*n) = 4*Sum_S n as well.
You get that just from the definition of the Sums, before Tail_S is defined at all. From there, if the sums go on forever, that means the tails have the same relationship
Sum_S (2*n) - Sum_B (2*n) = 4*(Sum_S n)
if it goes on forever, then this is true, and we can solve it for S, and it implies (Sf n) - (Bf n) = 4*(Sf n), where since Sf and Bf are constant, it doesn't matter what their values for n are, as the n cancel out.
Tail_S (2*n) - Tail_B (2*n) = 4*(Tail_S n)
((Sf n) - Sum_S (2*n)) - Tail_B (2*n) = 4*((Sf n) - (Sum_S n))
.... [see justifySf] where this is then solved for (Sf n),
..... before (Tail_S n) has a definition other than ((Sf n) - (Sum_S n))
Thomas Browning (Nov 20 2024 at 19:56):
Rob Fielding said:
You need a justifySf to prove that:
(9*Sum_S n - (Sum_S (3*n + 1)))/8 = Sum_S n + Tail_S n
import Mathlib

def Sum_S (n : ℤ) : ℚ := n*(n+1)/2
def Sf (n : ℤ) : ℚ := (9*Sum_S n - (Sum_S (3*n + 1)))/8
def Tail_S (n : ℤ) : ℚ := Sf n - Sum_S n

theorem justifySf : (9*Sum_S n - (Sum_S (3*n + 1)))/8 = Sum_S n + Tail_S n := by
  unfold Sum_S Tail_S Sf Sum_S
  simp

theorem Sgoal : Sum_S n + Tail_S n = -1/8 := by
  unfold Tail_S Sf Sum_S
  simp
  ring
Rob Fielding (Nov 20 2024 at 19:57):
The whole reason I introduced B is that before you link it to B, S can be anything at all. B is known to be 1/4. That's why you investigate S-B first.
Mario Carneiro (Nov 20 2024 at 19:59):
The thing that is most troubling about your proofs @Rob Fielding is that each move you make is changing the statement of the theorem. That's not how proofs are supposed to work. For example, normally it would be... concerning... that both you and @Thomas Browning have proved that a certain sum is equal to two different values. But it's not surprising here because the manipulations are all happening in the statement of the theorem - your Sum_S and Tail_S are not the same as @Thomas Browning's.
Mario Carneiro (Nov 20 2024 at 20:00):
You should be able to state the theorem in such a way that it doesn't require proving the theorem first
Rob Fielding (Nov 20 2024 at 20:01):
But there is recursion involved. You know that S = Sum_S n + Tail_S n, so you subtract Sum_S n from both sides to get a vacuous definition that needs help from elsewhere to define S. And it is well known that calculating S - B will get you there, because it's S - 1/4 = 4*S
Mario Carneiro (Nov 20 2024 at 20:02):
The point is that before you do any of that, subtracting things from both sides and whatnot, first you have to decide: what does 1+2+3+... mean as an expression
Mario Carneiro (Nov 20 2024 at 20:02):
you should be able to write down a definition of this expression which doesn't involve explaining how you intend to manipulate that sum
Mario Carneiro (Nov 20 2024 at 20:03):
That is: what is S? Maybe it's equal to Sum_S n + Tail_S n, but that's not the definition, that's a theorem you need to prove about it
Mario Carneiro (Nov 20 2024 at 20:03):
informally, S is 1+2+3+...
Rob Fielding (Nov 20 2024 at 20:04):
S - 1/4 = 4*S
4S = S - 1/4
S = (1/4)S - 1/16
You need a recursion that relates S to itself to solve for S. If you want to plug in a value arbitrarily, you need to solve for what that value is.
Mario Carneiro (Nov 20 2024 at 20:04):
I don't want to solve for S, I want to define S
Mario Carneiro (Nov 20 2024 at 20:04):
What is S supposed to describe?
Mario Carneiro (Nov 20 2024 at 20:04):
Definitions come before proofs
Rob Fielding (Nov 20 2024 at 20:05):
S is the "final value". Sum_S(n)+Tail_S(n)
means that it has the same value no matter what n is. This value is unreachable by iteration, by definition. It's a matter of finding all the constraints that solve for S. If you just make up a value for S, it's going to conflict with others that are already defined, like B.
Mario Carneiro (Nov 20 2024 at 20:06):
Okay what's the "final value" supposed to denote?
Mario Carneiro (Nov 20 2024 at 20:07):
Let's take a simpler example where the algebraic manipulations are actually correct. Let's prove that 1 + 1/2 + 1/4 + 1/8 + ... = 2.
Here's the informal mathematics proof:
- Let S be the sum 1 + 1/2 + 1/4 + 1/8 + ...
- Then S = 1 + 1/2 + 1/4 + 1/8 + ... = 1 + 1/2(1 + 1/2 + 1/4 + ...) = 1 + S/2
- Therefore S = 2.
Kyle Miller (Nov 20 2024 at 20:08):
(The formal mathematical proof of that of course needs to reason about the partial sums and show they converge too, or at least you need a bunch of theory built up for term-wise manipulations.)
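(Aside: the convergent case Kyle mentions is fully worked out in Mathlib already; a minimal sketch, assuming the lemma name tsum_geometric_of_lt_one, which has varied between Mathlib versions:)
import Mathlib

-- 1 + 1/2 + 1/4 + ... = 2, stated with an honest infinite sum (tsum), not a recursion.
-- Sketch only: the geometric-series lemma is `tsum_geometric_of_lt_one` in recent
-- Mathlib; older versions call it `tsum_geometric_of_lt_1`.
example : (∑' n : ℕ, (1 / 2 : ℝ) ^ n) = 2 := by
  rw [tsum_geometric_of_lt_one (by norm_num) (by norm_num)]
  norm_num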
Mario Carneiro (Nov 20 2024 at 20:08):
In this example, the answer to my question, what is S, is that S = 1 + 1/2 + 1/4 + 1/8 + ..., the limit of the partial sums.
Rob Fielding (Nov 20 2024 at 20:09):
For all n: S = n*(n+1)/2 + Tail_S(n). You could just define the end result to be anything. But you need to justify that it's consistent with other known facts. B is just used because it easily gives a relationship to itself. S - B = 4S, which literally solves for -1/12.
When you increment n, all you are doing is moving the next term from Tail to Sum, leaving the final value unchanged. That doesn't mean you can just go set it to -1/8 though. You need to find places where S is related to itself in a way that tells YOU the value.
Mario Carneiro (Nov 20 2024 at 20:09):
which is to say, what we are using algebraic manipulations to prove is that S = 2, where S is an abbreviation for the infinite sum 1 + 1/2 + 1/4 + 1/8 + ...
Eric Wieser (Nov 20 2024 at 20:10):
But you need to justify that it's consistent with other known facts
This isn't allowed in Lean; you have to justify it as being consistent with facts known to lean
Kyle Miller (Nov 20 2024 at 20:11):
I think strictly speaking @Eric Wieser, that's allowed in Lean, because Lean has nothing to say about what's not put into Lean.
Mario Carneiro (Nov 20 2024 at 20:11):
Also it's not really true that you have to justify that definitions are consistent with known facts when defining things in lean
Mario Carneiro (Nov 20 2024 at 20:11):
that's not part of definitional principles
Eric Wieser (Nov 20 2024 at 20:12):
For instance, it's "well known" that , but in lean it's
Mario Carneiro (Nov 20 2024 at 20:12):
The way definitions work is you write down an expression involving previously defined things and give it a name and voila you have a new definition
Mario Carneiro (Nov 20 2024 at 20:13):
But it is very important to make definitions correctly, because they control whether you are proving the right things
Mario Carneiro (Nov 20 2024 at 20:14):
for instance, with my infinite sum example, I could have also defined S := 2, and now the proof that S = 2 becomes a lot easier
Mario Carneiro (Nov 20 2024 at 20:14):
but the proof is no longer a theorem about infinite sums if I do this, it is just a triviality
Rob Fielding (Nov 20 2024 at 20:15):
1 + 2V = V
1 = (1-2)V
1 = -1 V
-1 = V
V = 1 + 2V
= 1 + 2(1 + 2V)
= 1 + 2 + 4 V
= 1 + 2 + 4(1 + 2 + 4 V)
= 1 + 2 + 4 + 8 + 16 V
= 2^0 + 2^1 + 2^2+ 2^3 + ... + 2^n + (2^(n+1)V)
(-1) = 1 + 2(-1)
= 1 + 2 + 4(-1)
= 1 + 2 + 4 + 8(-1)
= 1 + 2 + 4 + 8 + 16(-1)
and indeed 1 + 2 + 4 + 8 = 15 = (-1) + 16.
This proves the point rather well.
V - (2^(n+1)V) = V(1 - 2^(n+1)) = 2^(n+1) - 1 = Sum_V(n)
This is an actual application of divergent series! You can use it to compute closed forms on the sums, because of how it is recursively defined.
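(Aside: the finite content of this manipulation does check out in Lean; a short illustrative sketch, independent of the gist's definitions, with the sum written as an actual ∑ rather than a closed form:)
import Mathlib

-- With V = -1, the "tail cancellation" V - 2^(n+1)*V is a plain algebraic identity ...
example (n : ℕ) : (-1 : ℤ) - 2 ^ (n + 1) * (-1) = 2 ^ (n + 1) - 1 := by ring

-- ... and 2^(n+1) - 1 really is the closed form of the finite sum 2^0 + 2^1 + ... + 2^n.
example (n : ℕ) : (∑ i ∈ Finset.range (n + 1), (2 : ℤ) ^ i) = 2 ^ (n + 1) - 1 := by
  induction n with
  | zero => norm_num
  | succ n ih => rw [Finset.sum_range_succ, ih]; ring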
Mario Carneiro (Nov 20 2024 at 20:15):
but if I hand you a goal like
example : sum (fun n => 1/2^n) = 2 := by
  sorry -- fill me in
there is no way to cheat by changing definitions, because the input doesn't reference any S in the first place
Mario Carneiro (Nov 20 2024 at 20:16):
you can introduce an S in the proof if you like, but you can't change the theorem statement
Kyle Miller (Nov 20 2024 at 20:19):
I just proved S = 0. Here's my text block:
S = 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + ...
= (1 + 2 + 3) + (4 + 5 + 6) + (7 + 8 + 9) + ...
= 6*1 + 6*2 + 6*3 + ...
= 6*S
S - 6*S = 0
(1 - 6)*S = 0
=> S = 0
Rob Fielding (Nov 20 2024 at 20:21):
but i used closed forms so you can't just throw parentheses around things. n*(n+1)/2 in the sum, and S - n*(n+1)/2 solves for S, and it just happens to be complicated enough to find S that people will argue over a Lean proof that does it.
Kyle Miller (Nov 20 2024 at 20:22):
I'm using closed forms too, but I'm evaluating it at every third n rather than every n. Why should that make a difference?
Edward van de Meent (Nov 20 2024 at 20:24):
your original definition of S also evaluates the sequence corresponding to B every second number...
Rob Fielding (Nov 20 2024 at 20:26):
make that a closed form where each n is a parenthesis.... this is exactly why i want to use recursion. "+..." is so error-prone.
oh... i got it. track the expansion rate of terms... it's something like... (Sf 6n) = 6(Sf 3)
"+..." is an undefined bullshit procedure. use recursion.
Rob Fielding (Nov 20 2024 at 20:27):
if you expand at 1/3 the rate, it's not the same.
Thomas Browning (Nov 20 2024 at 20:27):
so why is it ok to expand at 1/2 the rate?
Mario Carneiro (Nov 20 2024 at 20:28):
Rob Fielding said:
+... is so error-prone.
On this we definitely agree
Mario Carneiro (Nov 20 2024 at 20:29):
The point I've been trying to get to here is that the lean translation of 1+2+3+... is sum (fun n => n) (give or take a few details), not the polynomial expressions you have been writing
Mario Carneiro (Nov 20 2024 at 20:30):
or in informal mathematical language, the equivalent of 1 + 2 + 3 + ... without dots in it is $\sum_{n=1}^{\infty} n$
Mario Carneiro (Nov 20 2024 at 20:31):
this should be the starting point, and then you perform algebraic manipulations on this expression in order to show that it is equal to something
Rob Fielding (Nov 20 2024 at 20:34):
i don't doubt it at all that if you re-parenthesize, you get a different number. I am trying (Sf (3n)) = 6(Sf n) ... and stuff like that. I am guessing that it ends up working.
Rob Fielding (Nov 20 2024 at 20:36):
but sum 1 to n is n*(n+1)/2. i am confused. for finite arithmetic, you don't have any of these bizarre problems. when you use recursion, you don't encounter the algebra problems with infinity either. but you do discover self-relationships that give numbers that say things like "an infinite number such that if you multiply it times 12, you get -1 represented as an infinite number"
Mario Carneiro (Nov 20 2024 at 20:36):
That depends on what you mean by "is" :)
Mario Carneiro (Nov 20 2024 at 20:37):
the sum of the numbers from 1 to n is $\sum_{i=1}^{n} i$
Rob Fielding (Nov 20 2024 at 20:37):
i reached this point by thinking of "a rewrites to b, and b rewrites to a" as literally as humanly possible.
Mario Carneiro (Nov 20 2024 at 20:37):
it happens to be equal to n*(n+1)/2 but that's not the definition
Thomas Browning (Nov 20 2024 at 20:37):
Rob Fielding said:
but sum 1 to n is n*(n+1)/2. i am confused. for finite arithmetic, you don't have any of these bizarre problems. when you use recursion, you don't encounter the algebra problems with infinity either. but you do discover self-relationships that give numbers that say things like "an infinite number such that if you multiply it times 12, you get -1 represented as an infinite number"
the problem is that you also discover self-relationships that give numbers that say things like "an infinite number such that if you multiply it times 8, you get -1 represented as an infinite number"
Kyle Miller (Nov 20 2024 at 20:38):
The fundamental problem is like what Mario says: we haven't established that there even is such a number. We only established what it could equal, if it existed.
Rob Fielding (Nov 20 2024 at 20:39):
S = 1 + 13 S ... it constructs this number though.
Mario Carneiro (Nov 20 2024 at 20:39):
I haven't even gotten to that point :) I'm still talking about how the statement is not correct because it assumes what it wants to show or variants thereof
Kyle Miller (Nov 20 2024 at 20:39):
That's not a construction but a characterization. It's something that you established the possibly-existing number should satisfy.
Mario Carneiro (Nov 20 2024 at 20:40):
It is really critical to understand that when you translate "the sum of x and y is 5" to lean you write "x + y = 5" and not "5 = 5"
Mario Carneiro (Nov 20 2024 at 20:40):
you should strive to translate each word in the mathematics as directly as possible to lean expressions
Rob Fielding (Nov 20 2024 at 20:40):
well, i did not plug in S=-1/12. I defined B, then I defined S. I showed that if you calculate S-B, it leads you to solve S for that complicated recursion. And by that point, #eval Sf 2 (or any number) is -1/12.
Kyle Miller (Nov 20 2024 at 20:41):
Do you agree with the following @Rob Fielding: if you add two positive numbers then their sum is positive
Mario Carneiro (Nov 20 2024 at 20:41):
The definition of S does not involve B in any way
Mario Carneiro (Nov 20 2024 at 20:41):
the definition of S is "the sum of the natural numbers", it has no B in it
Edward van de Meent (Nov 20 2024 at 20:41):
Mario Carneiro said:
The definition of S does not involve B in any way
or well, should not
Mario Carneiro (Nov 20 2024 at 20:42):
maybe you can introduce a B and prove something about how it relates to S but that comes after the theorem statement is completed
Rob Fielding (Nov 20 2024 at 20:43):
S=-1/12 does not contradict that though! that's what I am saying:
S = -1/12 = 13^0 + 13^1 + 13^2 + ... + 13^n + (13^(n+1)S)
it's 12*S = -1 = (....111111_2)
It's the highest base2 integer. If you add 1, it ripple carries to 0. exactly like a machine.
Edward van de Meent (Nov 20 2024 at 20:43):
there is no reason why it should ripple like a machine tho
Vincent Beffara (Nov 20 2024 at 20:43):
Rob Fielding said:
S = 1 + 13 S ... it constructs this number though.
Well this equation is also satisfied for S being "infinity" (in the usual sense as it happens in limits for instance, i.e. infinity + infinity = infinity, 2 * infinity = infinity and so on - here there is no issue with "indefinite forms" like 0 * infinity to take care of).
Rob Fielding (Nov 20 2024 at 20:43):
have you ever built an adder out of AND and OR and NOT gates? try it.
Kyle Miller (Nov 20 2024 at 20:44):
Yes
Mario Carneiro (Nov 20 2024 at 20:44):
Fun fact, ripple carry adders have a finite number of gates
Edward van de Meent (Nov 20 2024 at 20:44):
Rob Fielding said:
have you ever built an adder out of AND and OR and NOT gates? try it.
not one that accurately reflects how natural numbers work
Edward van de Meent (Nov 20 2024 at 20:45):
Mario Carneiro said:
Fun fact, ripple carry adders have a finite number of gates
and more importantly, states
Rob Fielding (Nov 20 2024 at 20:45):
it's not "INFINITY". It's an specific number with an infinite number of digits.
in base 10:
....222222_10 != ......1111111_10
They are specific numbers with their own properties. 9*(.....111111_10) = -1
Kyle Miller (Nov 20 2024 at 20:45):
Come on Mario, be patient, we can wait for the ripple carry to get through all the infinite number of gates :-)
Mario Carneiro (Nov 20 2024 at 20:45):
How do you know that those two are not equal? Or that they are even numbers?
Yakov Pechersky (Nov 20 2024 at 20:45):
Is there a difference between 1.0 and 0.99999...?
Vincent Beffara (Nov 20 2024 at 20:45):
What do you mean by ... in that sentence?
Johan Commelin (Nov 20 2024 at 20:46):
Rob Fielding said:
have you ever built an adder out of AND and OR and NOT gates? try it.
My daughter built one when she was 8.
Yakov Pechersky (Nov 20 2024 at 20:47):
Just because they are digit-by-digit different in terms of how the decimal expressions are written doesn't mean the closed terms are different. That's my counterexample to your claim that ...1111 and ...2222 are different.
Kyle Miller (Nov 20 2024 at 20:47):
"10s complement" representations are only for modulo arithmetic @Rob Fielding, where you take everything modulo 10^n for some n, just like how twos complement is for 2^n
Rob Fielding (Nov 20 2024 at 20:47):
2*(.....111111_10) = (.....22222_10), etc.
1 = (0.99999....) for similar reasons.
there is ALWAYS an infinite number of digits. all zeros to the left is a finite number. all 9s to the right is an integer. there are carried and uncarried representations that are the same value. this happens so much in cryptography and finite fields, it's not at all surprising that the negative inverse of 12 shows up.
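(Aside: the finite, uncontroversial core of the "all 9s" claim can be checked in Lean; an illustrative sketch, not from the gist, of the fact that 9 times an n-digit block of ones is 10^n - 1, so the "equals -1" reading only appears once the leading carry is discarded:)
import Mathlib

-- 9 * (111...1 with n ones) = 10^n - 1, i.e. nine times a repunit is "all nines".
example (n : ℕ) : (9 : ℤ) * ∑ i ∈ Finset.range n, 10 ^ i = 10 ^ n - 1 := by
  induction n with
  | zero => norm_num
  | succ n ih => rw [Finset.sum_range_succ, mul_add, ih]; ring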
Johan Commelin (Nov 20 2024 at 20:47):
Can someone remind me what the point of this thread is? @Rob Fielding posted some Lean code. Lean was happy. So let's move on.
Kyle Miller (Nov 20 2024 at 20:48):
Sorry, I got caught up in having fun finding different values for 1 + 2 + 3 + 4 + ...; gotta get back to some Lean metaprogramming...
Mario Carneiro (Nov 20 2024 at 20:49):
has anyone posted a correct version of the theorem statement here such that one can actually prove it is equal to -1/12?
Vincent Beffara (Nov 20 2024 at 20:49):
Rob Fielding said:
S = 1 + 13 S ... it constructs this number though.
More seriously: this equation S = 1 + 13 S has exactly one solution in the real numbers (-1/12), one solution in Z / 13 Z (namely 1), and no solution in Z / 2 Z - it all depends on the nature of that S. Here you are talking about infinite expansions to the left, so we are definitely not talking about the reals. So what is the nature of this S?
Rob Fielding (Nov 20 2024 at 20:50):
The main thing is that you can write something that Lean is totally happy with, and if somebody doesn't believe it before they read it, then they won't believe it after reading it either.
Mario Carneiro (Nov 20 2024 at 20:50):
well no, that's not how proofs are supposed to work
Vincent Beffara (Nov 20 2024 at 20:51):
Rob Fielding said:
The main thing is that you can write something that Lean is totally happy with, and if somebody doesn't believe it before they read it, then they won't believe it after reading it either.
Everybody here agrees that those of your statements that are actually written in lean are correct. This is kind of beside the point that we are trying to make.
Johan Commelin (Nov 20 2024 at 20:51):
@Rob Fielding Nobody is disbelieving your Lean code. That has been pointed out to you before.
So let's move on.
Rob Fielding (Nov 20 2024 at 20:51):
Anyways, I am trying to figure out what the heck it is that you can do so that Lean is the judge, not a person.
Mario Carneiro (Nov 20 2024 at 20:52):
The way it works is you write down the statement, pause, confirm that the statement is correct, then prove it
Mario Carneiro (Nov 20 2024 at 20:52):
you have been doing the steps in the opposite order
Vincent Beffara (Nov 20 2024 at 20:52):
That's actually a subtle issue, lean is not the one that is judging definitions, it is only checking proofs.
Mario Carneiro (Nov 20 2024 at 20:53):
if you write the statement after the proof you can easily get lean to be a yes-man
Rob Fielding (Nov 20 2024 at 20:53):
(Sf n) = Sum_S n + Tail_S n is a perfectly fine statement. the theorems picked Sf for us. it chose -1/12 for some reason.
Mario Carneiro (Nov 20 2024 at 20:53):
That is a perfectly fine statement. Which has nothing to do with infinite sums
Rob Fielding (Nov 20 2024 at 20:53):
correct. this is about recursively defined sums... that make the infinite pattern.
Mario Carneiro (Nov 20 2024 at 20:54):
I don't see anything that resembles 1 + 2 + 3 + ... in there
Mario Carneiro (Nov 20 2024 at 20:54):
so whatever you prove about that expression has no bearing on the famous theorem about -1/12
Rob Fielding (Nov 20 2024 at 20:55):
n*(n+1) is literally that. if you don't want to see, then you can't. i don't see how Lean is removing human judgement though.
Vincent Beffara (Nov 20 2024 at 20:55):
Rob Fielding said:
n*(n+1) is literally that. if you don't want to see, then you can't. i don't see how Lean is removing human judgement though.
It is not!
Mario Carneiro (Nov 20 2024 at 20:55):
No, n*(n+1) is a product of two numbers, it is neither a finite sum nor an infinite sum
Thomas Browning (Nov 20 2024 at 20:56):
@Rob Fielding If you want to see what you proved, just put your cursor on the line after unfold. Lean tells you that what you proved is the purely algebraic statement ↑n * (↑n + 1) / 2 + ((4 * (↑n * (↑n + 1) / 2) - (↑(2 * n) * (↑(2 * n) + 1) / 2 + (2 * ↑(2 * n) + 1) * (-1) ^ (2 * n) / 4)) / 3 - ↑n * (↑n + 1) / 2) = -1 / 12
Thomas Browning (Nov 20 2024 at 20:56):
You proved that in Lean, and no-one disputes that
Rob Fielding (Nov 20 2024 at 20:56):
note that Sum_S(n) := n*(n+1)/2 is the finite part. The tail deals with everything not expanded. So it's not surprising that Tail_S(n) has weirder properties.
Mario Carneiro (Nov 20 2024 at 20:56):
But you aren't expanding anything
Thomas Browning (Nov 20 2024 at 20:56):
The issue is that what you proved makes no mention of 1+2+3+...
Mario Carneiro (Nov 20 2024 at 20:57):
you have just made up two expressions which have nothing a priori to do with a certain infinite sum
Mario Carneiro (Nov 20 2024 at 20:59):
It's not so hard to fix the definition of Sum_S if you want it to be the relevant finite sum, just write
def Sum_S (n : ℕ) : ℚ := ∑ i ≤ n, i
instead of
def Sum_S (n : ℕ) : ℚ := n*(n+1)/2
Mario Carneiro (Nov 20 2024 at 20:59):
the former is a finite sum, the latter is a polynomial expression
Mario Carneiro (Nov 20 2024 at 21:00):
lean has notation for sums and you aren't using it
Mario Carneiro (Nov 20 2024 at 21:01):
and this is the cause of a lot of correct proofs about incorrect statements
Mario Carneiro (Nov 20 2024 at 21:06):
Here's a theorem statement for the theorem you are trying to prove:
import Mathlib

theorem sum_the_naturals : (∑' i : ℕ, i : ℚ) = -1/12 := by
  sorry -- fill this in!
I hope you can agree that this statement says that the sum of the natural numbers is -1/12. (There are subtleties here which I will leave you to discover, if you get that far.) Can you prove this theorem without changing the statement of the theorem? If so, then I will be convinced that -1/12 = 1+2+3+....
Mario Carneiro (Nov 20 2024 at 21:07):
if you want to say that the sum can be broken into a finite part and a tail part, then you should prove it, in lean. That's what it's there for
Rob Fielding (Nov 20 2024 at 21:10):
put it this way: the sum of all integers 1+2+3+... going to infinity is not S. It's (S - Tail_S(n))=Sum_S(n). Because S=Sum_S(n)+Tail_S(n). Because Sum_S(n) is a function of n, it "goes to infinity" and never takes on a value as n "goes to infinity".
S != 1+2+3+...
because
S = Sum_S(n) + Tail_S(n)
S - Tail_S(n) = Sum_S(n)
it happens to be that... S = -1/12.
another sequence, little-s, s = -1/12:
s = 1 + 13s
s = Sum_s(n) + Tail_s(n)
  = 13^0 + 13^1 + 13^2 + ... + 13^n + (13^(n+1) s)
s - (13^(n+1) s) = 13^0 + 13^1 + 13^2 + ... + 13^n
s(1 - 13^(n+1)) = (-1/12)(1 - 13^(n+1)) = (13^(n+1) - 1)/12
as far as I know, s=S is really hard to relate to each other other than having the same value. But you could think of them as different parenthesizations of 1+1+1+1+... And we have already seen that the parentheses change the value if it is not finite.
I don't get how $\sum_{i=1}^{n} i = n*(n+1)/2$ is irrelevant here. If a=b means "a can substitute over b, and b can substitute over a", then they should be equivalent for finite arithmetic.
Johan Commelin (Nov 20 2024 at 21:11):
I think you should refrain from writing ... unless it is in a compiling Lean block.
Rob Fielding (Nov 20 2024 at 21:12):
yes. i totally do not trust my (or anyone else's) ability to reason safely with +...! That's why I use recursion. That's also the only way I know how to write code that deals with it.
Mario Carneiro (Nov 20 2024 at 21:13):
Around here the way we handle +... is not recursion, it is using sum expressions containing ∑
Mario Carneiro (Nov 20 2024 at 21:13):
recursive formulas and the like are theorems one proves about this object, not assumed as basic
Mario Carneiro (Nov 20 2024 at 21:14):
this is really important when it comes to infinite sums, because not all the "obvious things" about infinite sums are provable
Rob Fielding (Nov 20 2024 at 21:15):
You can calculate in O(1) time with n*(n+1)/2, and O(n) time with summation. Summation is an awful, awful idea when you have a closed form available as an alternative. So, they are not equal in that way at least.
Mario Carneiro (Nov 20 2024 at 21:15):
luckily this is mathematics, we don't have to care about the runtime
Rob Fielding (Nov 20 2024 at 21:16):
hahahaha... could not be more wrong. every major advance in science seems to come from taking an O(n^2) idea and making it run in O(n lg n). FFT. ... neural networks. We would still be living in the 1950s without it.
Mario Carneiro (Nov 20 2024 at 21:16):
Do you agree that ∑ i ≤ n, i = n*(n+1)/2 is not true by definition?
Rob Fielding (Nov 20 2024 at 21:16):
O(2^n) is not a solution.
Mario Carneiro (Nov 20 2024 at 21:16):
It is true, but there is some work to be done here
Rob Fielding (Nov 20 2024 at 21:17):
if they are not the same, then what does '=' mean?
Mario Carneiro (Nov 20 2024 at 21:17):
it's not just obviously true by the nature of the objects involved
Mario Carneiro (Nov 20 2024 at 21:17):
= means provably having the same value
Mario Carneiro (Nov 20 2024 at 21:17):
Lean actually makes a clear distinction between "provably equal" and "equal by definition"
Mario Carneiro (Nov 20 2024 at 21:18):
you will see these referred to as "propositional equality" and "definitional equality" respectively
Rob Fielding (Nov 20 2024 at 21:18):
Anyways, I am going to keep learning Lean. It's good to have a nailed down "instruction set". It blows my mind that in 2024, we are only barely getting away from believing people that work on paper.
Mario Carneiro (Nov 20 2024 at 21:19):
I am claiming that ∑ i ≤ n, i = n*(n+1)/2 is not a definitional equality, which you can test in lean by observing that rfl is not a proof of it
Mario Carneiro (Nov 20 2024 at 21:19):
example (n : ℕ) : ∑ i ≤ n, i = n*(n+1)/2 := rfl -- fail
Mario Carneiro (Nov 20 2024 at 21:21):
open Nat

example (a b : ℕ) : a + succ b = succ (a + b) :=
  rfl -- ok, because this is true *by definition of plus*

example (a b : ℕ) : succ a + b = succ (a + b) :=
  rfl -- fail, because this is not true *by definition*

example (a b : ℕ) : succ a + b = succ (a + b) :=
  Nat.succ_add .. -- it is true though, so there is a proof
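(Aside: for completeness, the provable-but-not-definitional equality Mario mentions can be proved by induction; a small sketch using Finset.range rather than the ∑ i ≤ n notation above:)
import Mathlib

-- The Gauss formula as a theorem, not a definition: provable, but not by `rfl`.
example (n : ℕ) : (∑ i ∈ Finset.range (n + 1), (i : ℚ)) = n * (n + 1) / 2 := by
  induction n with
  | zero => norm_num
  | succ n ih =>
    rw [Finset.sum_range_succ, ih]
    push_cast
    ring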
Kyle Miller (Nov 20 2024 at 21:25):
By the way, if you want to see a place where that instruction set is written down @Rob Fielding, Mario's thesis #leantt is one of the main references.
Rob Fielding (Nov 21 2024 at 03:43):
Instead of re-posting updates, I am maintaining the latest Lean-cleared copy as a github gist. I credit Claude for checking it, and getting fixes to deal with integer powers of -1, etc.
https://gist.github.com/rfielding/cbfdca49a6320685cb8e7cfc6bbb0449
Rob Fielding (Nov 21 2024 at 05:11):
I asked Claude to write a tactic called rational_algebra that handles the pattern of "unfold all, handle -1 and 1 powers, simp, ring_nf" so that in most practical cases, you can just do ":= by rational_algebra", and it gave me something. That would be fantastic for doing practical things; especially if it could include a simple implicit diff operator.
Giacomo Stevanato (Nov 21 2024 at 10:38):
Rob Fielding said:
You can calculate in O(1) time with n*(n+1)/2, and O(n) time with summation. Summation is an awful awful idea when you have an alternative to use a closed sum. So, they are not equal in that way at least.
FYI multiplication between arbitrarily sized natural numbers is not O(1). This is kinda offtopic though.
Rob Fielding (Nov 22 2024 at 07:02):
wow... Lean is beautiful.
Screenshot from 2024-11-22 01-58-22.png
https://gist.github.com/rfielding/cbfdca49a6320685cb8e7cfc6bbb0449
Rob Fielding (Nov 23 2024 at 03:24):
I added a proof that uses a recursively defined 1+2+3+...+n = n*(n+1)/2 theorem, and added in examples that show the "1 + 2 + 3 + ...", and "1 + (-2) + 3 + (-4) + 5 + (-6) + ...", and the difference between them, "0 + 4 + 0 + 8 + 0 + 12 + ...", which creates the difference in expansion rates that you lose when you write it with "+ ...". That is why Sum_S (2*n) - Sum_B (2*n) = 4*(Sum_S n) nails down exactly what (Sf n) must be, so that Sf n = Sum_S n + Tail_S n does not let you set Sf n to whatever you want. The link in the above post is updated as I add more theorems, and explanations as to how it relates to -1/12 = 13^0 + 13^1 + 13^2 + ... = 1 + 13 S.
Michael Bucko (Nov 23 2024 at 10:27):
Kyle Miller schrieb:
By the way, if you want to see a place where that instruction set is written down Rob Fielding, Mario's thesis #leantt is one of the main references.
I will read this paper. Thank you for recommending it!
Nonetheless, this example made me very curious. For now, I've been assuming that when a theorem has no goals left, then it's correct. If that's still true, what's the relationship between this sum not being a definitional equality (which can be tested using rfl), and the no-goals-left tactic state?
Philippe Duchon (Nov 23 2024 at 11:49):
Definitional equalities are those where one side is defined to be the other side (sometimes this is actually several steps of definitions, combined with notations, so it's not 100% obvious; see for instance Kevin Buzzard's course "Formalising mathematics" for examples, esp. in the "Sets" chapter).
Some other equalities are actually theorems. If you take addition on the naturals, it's not defined as an associative operation (I don't believe you could define it as associative, though I might be wrong on this; but the actual definition in Lean does not do this), but you end up proving a theorem that says it turns out to be associative.
Michael Bucko (Nov 23 2024 at 12:31):
Philippe Duchon schrieb:
Definitional equalities are those where one side is defined to be the other side (sometimes this is actually several steps of definitions, combined with notations, so it's not 100% obvious; see for instance Kevin Buzzard's course "Formalising mathematics" for examples, esp. in the "Sets" chapter).
Some other equalities are actually theorems. If you take addition on the naturals, it's not defined as an associative operation (I don't believe you could define it as associative, though I might be wrong on this; but the actual definition in Lean does not do this), but you end up proving a theorem that says it turns out to be associative.
So, soundness gives us: no goals ⇒ correct proof ⇒ both definitional and theorem-proven equalities are applied correctly.
Definitionally true (d-true) gives us: d-true ⇒ auto-resolvable.
Can you please explain what I am still missing?
Mario Carneiro (Nov 23 2024 at 14:02):
that's more or less true, although it's not simply about being "auto-resolvable", because tactics can be smart and even some things which are not true by definition can be solved automatically
Mario Carneiro (Nov 23 2024 at 14:03):
definitional equality is a very strict notion of equality which allows you to unfold definitions and not much else
Mario Carneiro (Nov 23 2024 at 14:03):
in particular, you can't do inductive proofs to show something is a definitional equality
Mario Carneiro (Nov 23 2024 at 14:04):
so while x + y = y + x is provable, it's not possible to prove this by simply unfolding the definition of plus until both sides look identical
Rob Fielding (Nov 24 2024 at 01:54):
https://gist.github.com/rfielding/cbfdca49a6320685cb8e7cfc6bbb0449
It seems like there are difficulties in using deriv; it does not seem to be a straightforward algebraic translation? I want to use it to connect the definition of A to F here. I thought that the derivative operator was a mostly straightforward recursion down the operators:
d[a+b] = d[a]+d[b].
d[a b] = d[a] b + a d[b].
d[a^b] = b a^(b-1) d[a] + log_e[a] a^b d[b].
d[log_a[b]] = (-log_e[b] d[a])/(a log_e[a]^2) + (d[b])/(b log_e[a]).
F = dA/dx = d[1/(1-x)]/dx.
It seemed such a pain that I just left it at defining F, and assuming that proving that it's dA/dx was not important to the proof. But it's useful, because on A, I differentiated the whole thing... including the Tail_A, which is normally written as "+ ...", something that I am objecting to in this proof. Because B=F(-1), it tells you where the definition of Sum_B and Tail_B came from; they were not made-up. And the definition of B is what forces S to be -1/12 later.
https://gist.github.com/rfielding/cbfdca49a6320685cb8e7cfc6bbb0449
Etienne Marion (Nov 24 2024 at 08:51):
(deleted)
Vincent Beffara (Nov 24 2024 at 17:09):
The way that deriv is defined in Mathlib - i.e. deriv f x is defined to be 0 (a junk value) if f is not differentiable at x - means the relations like deriv (f + g) = deriv f + deriv g are not true in general. They do hold wherever both f and g are differentiable of course, see docs#deriv_add and the like. Same for product, composition, and so on.
One could define deriv f x to be something like NaN where f is not differentiable, but it would change the signature of deriv and would be generally very painful to use. Moreover, it is possible that f + g is differentiable where neither f nor g is, so it would not help for having deriv (f + g) = deriv f + deriv g.
Vincent Beffara (Nov 24 2024 at 17:14):
Closer to what you would need: it is not true in general that sum and derivative commute, even when all the functions involved are C^∞: it is not difficult to construct a sequence of functions, each equal to 0 on a neighborhood of 0 (so having derivative 0 at 0), in such a way that the series converges uniformly to the identity function (with derivative 1 at 0).
Rob Fielding (Nov 30 2024 at 02:11):
how can i show that these are related by a deriv?
/- from A = 1 + x*A -/
def Sum_A (x : ℚ) (n : ℤ) : ℚ := (1 - x^(n+1))/(1-x)
def Tail_A (x : ℚ) (n : ℤ) : ℚ := x^(n+1)/(1-x)
/- F = dA/dX -/
def Sum_F (x : ℚ) (n : ℤ) : ℚ := (1 - (n+1)*x^n + n*x^(n+1))/((1-x)^2)
def Tail_F (x : ℚ) (n : ℤ) : ℚ := ((n+1)*x^n - n*x^(n+1))/((1-x)^2)
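(Aside: one way to phrase "F is the derivative of A" in Mathlib is with HasDerivAt over ℝ for a fixed n; a hedged sketch, where the primed names are hypothetical real-valued variants of the definitions above and the proof is left as a sorry:)
import Mathlib

-- Hypothetical ℝ-valued variants of Sum_A / Sum_F, for a fixed n.
noncomputable def Sum_A' (n : ℕ) (x : ℝ) : ℝ := (1 - x ^ (n + 1)) / (1 - x)
noncomputable def Sum_F' (n : ℕ) (x : ℝ) : ℝ :=
  (1 - (n + 1) * x ^ n + n * x ^ (n + 1)) / ((1 - x) ^ 2)

-- "Sum_F' n is the derivative of Sum_A' n away from x = 1": a statement, not yet a proof.
example (n : ℕ) (x : ℝ) (hx : x ≠ 1) :
    HasDerivAt (Sum_A' n) (Sum_F' n x) x := by
  sorry -- would go through HasDerivAt.div, hasDerivAt_pow, and friends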
Rob Fielding (Nov 30 2024 at 03:53):
and this difference is infuriating. powers of -1 and 1 again.
theorem Sincreases :
    1 + (Term_S n) = Term_S (n+1) := by
  unfold Term_S Sum_S
  simp
  ring_nf

theorem Balternates :
    1 + (Term_B n) = -1*(Term_B (n+1)) := by sorry
Rob Fielding (Dec 14 2024 at 08:11):
(deleted)
Patrick Massot (Dec 14 2024 at 08:32):
@Rob Fielding what is the relationship between this and the discussion topic lean.nvim?
Notification Bot (Dec 14 2024 at 10:30):
2 messages were moved here from #general > lean.nvim by Johan Commelin.
Rob Fielding (Dec 14 2024 at 19:16):
i had accidentally posted this update into the wrong room. this is a major simplification that does not get distracted by powers of -1, because it uses Nat rather than Int I think. Notice that instead of coming out and calling it "-1/12" explicitly, it instead says that this full sequence is such that if you multiply times 12 and add 1, then you get 0. This works, because 12*S is an infinite digit representation of -1.
import Mathlib.Tactic

def Af (x:ℤ):ℚ := 1/(1 - x)
def Sum_A (x:ℤ) (n:ℕ):ℚ := (x^(n+1)-1)/(x-1)
def Tail_A (x:ℤ) (n:ℕ):ℚ := Af x - Sum_A x n
def Term_A (x:ℤ) (n:ℕ):ℚ := Sum_A x n - Sum_A x (n-1)

def Ff (x:ℤ):ℚ := 1/((1-x)^2) -- d[Af x]/dx
def Sum_F (x:ℤ) (n:ℕ):ℚ :=
  (n*(x^(n+1)) - (n+1)*(x^n) + 1)/((1-x)^2) -- d[Sum_A x n]/dx
def Tail_F (x:ℤ) (n:ℕ):ℚ := Ff x - Sum_F x n -- d[Tail_A x n]/dx
def Term_F (x:ℤ) (n:ℕ):ℚ := Sum_F x n - Sum_F x (n-1)

def Tail_B (n:ℕ):ℚ := (2*n+1)*(-1)^n/4
def Sum_B (n:ℕ):ℚ := Ff (-1) - Tail_B n
def Bf (n:ℕ):ℚ := Sum_B n + Tail_B n
def Term_B (n:ℕ):ℚ := Sum_B n - Sum_B (n-1)

def Sum_S (n:ℕ):ℚ := (n+1)*n/2
def Sf (n:ℕ):ℚ :=
  (4*Sum_S (n) - (Sum_S (2*n) + Tail_B (2*n)))/3
def Tail_S (n:ℕ):ℚ := Sf n - Sum_S n
def Term_S (n:ℕ):ℚ := Sum_S n - Sum_S (n-1)

theorem Ainf : Af x = Sum_A x n + Tail_A x n := by
  unfold Af Sum_A Tail_A Sum_A Af
  simp

theorem Finf : Ff x = Sum_F x n + Tail_F x n := by
  unfold Ff Sum_F Tail_F Sum_F Ff
  simp

theorem Binf : Bf n = Sum_B n + Tail_B n := by
  unfold Bf Sum_B Tail_B Ff
  simp

theorem Sinf : Sf n = Sum_S n + Tail_S n := by
  unfold Sf Sum_S Tail_B Tail_S Sum_S Sf Sum_S Tail_B
  simp

theorem AisConst : Sum_A x n + Tail_A x n = Sum_A x (n+1) + Tail_A x (n+1) := by
  unfold Sum_A Tail_A Sum_A Af
  simp

theorem FisConst : Sum_F x n + Tail_F x n = Sum_F x (n+1) + Tail_F x (n+1) := by
  unfold Sum_F Tail_F Ff Sum_F
  simp

theorem BisConst : Bf n = Bf (n+1) := by
  unfold Bf Sum_B Tail_B
  simp

theorem SisConst : Sf n = Sf (n+1) := by
  unfold Sf Sum_S Tail_B
  simp
  ring_nf

theorem Arecursion (x:ℤ) (ha: (1 - x)*(Af x)=1): Af x = 1 + x * (Af x) := by
  rw [← ha]
  unfold Af
  simp
  ring_nf

theorem Spows13 : Sf n = 1 + 13 * (Sf n) := by
  unfold Sf Sum_S Tail_B
  simp
  ring_nf

theorem whySf :
    (
      (Sf n = Sum_S n + Tail_S n) ∧
      (Bf n = Sum_B n + Tail_B n) ∧
      (Sf n = Sf (n+1)) ∧
      (4*(Sum_S n) - (Sum_S (2*n) + Tail_B (2*n)))/3 = (Sf n)
    ) →
    (4*(Sum_S n) - (Sum_S (2*n) + Tail_B (2*n))) = 3*(Sf n) →
    Sf n + (4*(Sum_S n) - (Sum_S (2*n) + Tail_B (2*n))) = 4*(Sf n) →
    Sf n + 4*(Sum_S n) - Sum_S (2*n) - Tail_B (2*n) = 4*(Sf n) →
    Sf n - Sum_S (2*n) + 4*(Sum_S n) - Tail_B (2*n) = 4*(Sf n) →
    Sf (2*n) - Sum_S (2*n) + 4*(Sum_S n) - Tail_B (2*n) = 4*(Sf n) →
    Tail_S (2*n) + 4*(Sum_S n) - Tail_B (2*n) = 4*(Sf n) →
    Tail_S (2*n) - Tail_B (2*n) = 4*(Sf n) - 4*(Sum_S n) →
    (
      Tail_S (2*n) - Tail_B (2*n) = 4*(Tail_S n) ∧
      Sum_S (2*n) - Sum_B (2*n) = 4*(Sum_S n)
    ) →
    Sf n - Bf n = 4*(Sf n) := by
  unfold Sum_S Tail_B Sf Sum_S Tail_B Bf Sum_B Tail_B Ff Tail_S Sf Sum_S Tail_B
  simp
  ring_nf
  simp

theorem S1vsS2 : Sum_A 13 n + Tail_A 13 n = Sum_S n + Tail_S n := by
  unfold Tail_A Tail_S Sum_A Sum_S Af Sf Sum_S Tail_B
  simp
  ring_nf

theorem SnegOneTwelfth : 12*(Sum_S n + Tail_S n) + 1 = 0 := by
  unfold Tail_S Sf Sum_S Tail_B
  simp
  ring_nf
Matt Diamond (Dec 15 2024 at 04:51):
your theorem whySf doesn't need any of its assumptions:
theorem whySf :
    Sf n - Bf n = 4*(Sf n) := by
  unfold Sf Bf Sum_S Sum_B Tail_B Ff
  simp
  ring
Rob Fielding (Dec 15 2024 at 05:06):
for sure i could just do that. but there's a huge subtlety to capture. I suspect that I could just as well use ∧ for everything as well. What I am trying to do with → is a construct like: ?+X, or ?*Y, etc. for a well-defined non-equal sequence. My Lean chops are still kind of weak. I still can't figure out how to just assign F as the derivative of A, which would help to explain why B was chosen.
if you simply calculate S - B, you get this, corresponding to "0+4+0+8+0+12+0+...". And it's important to note that because S is constant, (Sf n) == (Sf (2*n)), so you must be more careful about when you can immediately calculate this....
Sum_S (2*n) - Sum_B (2*n) = 4*(Sum_S n)
2n vs n is important for the individual tails; though it doesn't matter for constants Sf and Bf. What is interesting is that at first, it looks like S could be any value you like. But then when B is introduced, it forces a particular value upon S. And from there, the value of S is picked by the relationship that lets you solve for S, given that S=SumS n + TailS n:
Tail_S (2*n) - Tail_B (2*n) = 4*(Tail_S n)
This is what finite recursion gets right that all the "+..." proofs on the internet get wrong. The expansion rate is 1/2 because of the sum being alternating with 0 and 4S. But once the sum and tail are shown to have this pattern, it's unavoidable to calculate S-B=4S , where the number of expansions, n, cancel out.
Matt Diamond (Dec 15 2024 at 05:13):
for sure i could just do that. but there's a huge subtlety to capture.
If you have a theorem with unnecessary assumptions, those assumptions have no relation to the information content of the theorem. They add no value and there's no reason to have them there.
Rob Fielding (Dec 15 2024 at 05:21):
I am still confused as to why even with an accepted Lean statement of:
Sf n = Sum_S n + Tail_S n
Sum_S n = n*(n+1)/2
there are still major objections that the result can't be correct. the S=1+13 S relationship is necessary to conceptually explain how S is a finite number with an infinite-digit representation (all 1s in base13), which resolves the "paradox" causing people to dismiss it without even reading it.
This is the larger context that this is in. The last comment in the gist is the cleaned up version. But there is a much more verbose version, and a LaTeX version as well. I don't think this proof has the slightest thing to do with infinity. It's more like... a self-similar pattern in the integers. "-1/12" is more like what you see in a finite field... where it's an infinite digit thing such that if you multiply it times 12 and add 1, you get 0. And this is a totally routine idea in finite fields. My math background is mostly in practical cryptography.
https://gist.github.com/rfielding/cbfdca49a6320685cb8e7cfc6bbb0449
Mario Carneiro (Dec 15 2024 at 05:22):
there are still major objections that the result can't be correct
For the 5th time, the objection is NOT that the result is not correct, it is that it proves something unrelated to the claim
Mario Carneiro (Dec 15 2024 at 05:22):
What you are doing is analogous to
/-- Riemann Hypothesis is true, plz give $1000000 -/
theorem riemann_hypothesis : 1 + 1 = 2 := rfl
but more obscured
Mario Carneiro (Dec 15 2024 at 05:23):
the theorem you proved is true, but it's not the theorem you want to prove
Rob Fielding (Dec 15 2024 at 05:26):
If Sum_S n = 1+2+3+...+n, and you can make n as large as you like; I still can't fathom how it's not related. I am a computer programmer. If I see an infinite sum, my first instinct is to turn it into recursion.
T = (1/2) + (1/2)T
(1 - 1/2)T = 1/2
T = 1
... from the point of view of exact computation, infinite sums seem to actually convey less than recursion.
1 + 2 + 3 + ...
where the dots are an undefined procedure say less than
S = 1 + 2 + 3 + ... + n + Tail_S(n)
where everything has a definition.
Mario Carneiro (Dec 15 2024 at 05:26):
The dots are not an undefined procedure
Mario Carneiro (Dec 15 2024 at 05:26):
they are defined to mean the limit of the finite sums
Mario Carneiro (Dec 15 2024 at 05:27):
and your sequence has no limit
Mario Carneiro (Dec 15 2024 at 05:27):
because Tail_S n
does not approach 0
Mario Carneiro (Dec 15 2024 at 05:28):
What you are doing is proving relations about finite sums, without saying anything about the infinite sum which is actually the one claimed to be -1/12 (or not)
Rob Fielding (Dec 15 2024 at 05:29):
That's because it's just way more explicit. S is not the sum. S-Tail_S n = Sum_S n .... that's the entire point of having to use recursion for this:
lim n-> inf [ S - Tail_S n ] = Sum_S n
In cases where it converges, you can conflate S with Sum_S n. But when Tail_S n does not go to 0 for large n, you can't conflate them. That's the whole point of getting rid of "+ ...", as it's unsound and undefined.
Mario Carneiro (Dec 15 2024 at 05:29):
Rob Fielding said:
lim n-> inf [ S - Tail_S n ] = Sum_S n
is not well scoped, n is bound on the left side and free on the right
Rob Fielding (Dec 15 2024 at 05:30):
err... lim n -> inf [ S - Tail_S n = Sum_S n ] .... yeah, n for both sides of a=b.
Rob Fielding (Dec 15 2024 at 05:30):
that's the thing that goes to infinity. S is still -1/12.
Mario Carneiro (Dec 15 2024 at 05:31):
But in this case there is no relation between S and Sum_S
Matt Diamond (Dec 15 2024 at 05:32):
FYI if you're trying to prove things about Ramanujan summation in Lean, there's a way to do that, but it's not what you're doing
Mario Carneiro (Dec 15 2024 at 05:32):
you've just invented a function Tail_S which describes the difference between them; S and Sum_S could be anything
Rob Fielding (Dec 15 2024 at 05:33):
in other words, Tail_S n written as "..."
S = Sum_S n + Tail_S n
B = Sum_B n + Tail_B n
lim n -> inf [ S - "..." = Sum_S n]
Tail is a name for the "...". You need a name for multiple reasons. One is that
A = Sum_A x n + Tail_A x n
F = diff[A]
Mario Carneiro (Dec 15 2024 at 05:34):
What is the definition of S?
Mario Carneiro (Dec 15 2024 at 05:34):
And don't say Sum_S n + Tail_S n
Mario Carneiro (Dec 15 2024 at 05:34):
because that depends on n and S doesn't
Mario Carneiro (Dec 15 2024 at 05:35):
aka it's a scope error
Rob Fielding (Dec 15 2024 at 05:35):
in this case, because there's recursion involved,
Sf n = (4*(Sum_S n) - (Sum_S (2*n) + Tail_B (2*n)))/3
and I use whySf to justify that... it comes from finding
Sum_S (2*n) - Sum_B (2*n) = 4*(Sum_S n)
and using that to solve the tails to find Sf.
Rob Fielding (Dec 15 2024 at 05:37):
in all these sequences, which may or may not converge....
X = SumX n + TailX n
X is the "final value". SumX is the finite number of terms expanded out, and TailX is "the rest".
Mario Carneiro (Dec 15 2024 at 05:37):
what does "final value" mean, as a concrete definition?
Mario Carneiro (Dec 15 2024 at 05:37):
what is the thing you are trying to compute?
Mario Carneiro (Dec 15 2024 at 05:38):
until you have a definition for S this whole derivation is not doing anything
Rob Fielding (Dec 15 2024 at 05:38):
V = 1 + 2V
(1 - 2)V = 1
-V = 1
V = -1
= 1 + 2(1 + 2V)
= 1 + 2 + 4V
= 1 + 2 + 4 + ... + 2^n + 2^(n+1)V
etc. there's dozens of things that are easily solved this way.
Mario Carneiro (Dec 15 2024 at 05:38):
and many of them are wrong
Mario Carneiro (Dec 15 2024 at 05:39):
this kind of manipulation of sums without paying attention to convergence is a notorious source of false proofs
Mario Carneiro (Dec 15 2024 at 05:39):
the point where lean would usually catch you is when you try to manipulate sum expressions, but you never wrote one down
Rob Fielding (Dec 15 2024 at 05:39):
not sure what we mean by wrong, when the algebra behaves like this. if you take "a=b" seriously, to mean that a can be rewritten as b, and b can be rewritten as a; then you end up with this. So, when exactly can you override the judgement of the algebra on auto-pilot?
Mario Carneiro (Dec 15 2024 at 05:40):
When such a proof is false, the mistake is usually on the first line, when we assumed V existed
Mario Carneiro (Dec 15 2024 at 05:41):
having a definition for V keeps us honest here
Rob Fielding (Dec 15 2024 at 05:42):
it does mean that you must be careful with interpreting it!
S2 = 1 + 13 S2
= 13^0 + 13^1 + 13^2 + ... + 13^n + 13^(n+1) S2
S2 ( 1 - 13^(n+1)) = 13^0 + 13^1 + 13^2 + ... + 13^n
(-1/12)(1 - 13^(n+1)) = (13^(n+1) - 1)/12 = 13^0 + 13^1 + 13^2 + ... + 13^n
S2 = 1 + 13 S2
(1 - 13) S2 = 1
-12 S2 = 1
S2 = -1/12
Mario Carneiro (Dec 15 2024 at 05:43):
actually your example with V is indeed one of those wrong proofs, if you think it is a proof that 1 + 2 + 4 + ... = -1
Rob Fielding (Dec 15 2024 at 05:43):
(-1) = 1 + 2(-1)
= 1 + 2 + 4(-1)
= 1 + 2 + 4 + 8(-1)
= 1 + 2 + 4 + 8 + 16(-1)
it's just a matter of taking a=b literally.
Mario Carneiro (Dec 15 2024 at 05:44):
All of those steps are true. They also do not tell you that the infinite sum adds to -1
Mario Carneiro (Dec 15 2024 at 05:44):
because at no point is an infinite sum on the left or right hand side of the equation
Rob Fielding (Dec 15 2024 at 05:45):
you can represent a finite number with an infinite number of digits though.
V = 1 + 2 V
means.... in base 2, all 1 digits.
(....111111) + 1 = 0
in other words, it's "the highest" integer, such that if you add 1, you get a ripple carry. the number system acts like computer registers, because they are real physical machines.
Mario Carneiro (Dec 15 2024 at 05:46):
No, they don't. What you are describing is the behavior of the 2-adic numbers, but the real numbers have a different convergence structure and ....111111 is not a number
Mario Carneiro (Dec 15 2024 at 05:46):
it is in fact possible to prove that this sequence does not converge in lean
Rob Fielding (Dec 15 2024 at 05:47):
of course, you need to be careful when you interpret it! X + 1 = 0 leads you to call X "-1". But there are infinite-digit representations of -1.
...2222 != ...1111
ie... two different infinite digit values. not equal.
F = 2 + 10 F .... all 2s base10
G = 1 + 10 G .... all 1s base10
their ordering may surprise you
Mario Carneiro (Dec 15 2024 at 05:47):
No, these are false assertions
Mario Carneiro (Dec 15 2024 at 05:47):
I'm not sure what else to say, you don't seem to want to engage here
Rob Fielding (Dec 15 2024 at 05:48):
F(1-10) = 2
F = -2/9
G(1-10) = 1
G = -1/9
where is the wrong step?
Mario Carneiro (Dec 15 2024 at 05:48):
the assumption that F exists
Rob Fielding (Dec 15 2024 at 05:49):
F = -2/9
-9F = 2
(1-10)F = 2
F = 2 + 10 F
Rob Fielding (Dec 15 2024 at 05:49):
think harder about recursion. you can't avoid it if you take "a=b" seriously
Rob Fielding (Dec 15 2024 at 05:50):
F = 2 + 10( 2 + 10 F)
= 2 + 20 + 100 F
Mario Carneiro (Dec 15 2024 at 05:50):
If your definition is F = -2/9 then sure, F*(1-10) = 2. But F is not an infinite sequence of 2s; this is asserting that -2/9 = 2 + 20 + 200 + ... and this is false because 2 + 20 + 200 + ... doesn't exist
Mario Carneiro (Dec 15 2024 at 05:51):
you can prove that this sum does not converge
Mario Carneiro (Dec 15 2024 at 05:52):
What you are proving is that if 2 + 20 + 200 + ... exists then it equals -2/9
Rob Fielding (Dec 15 2024 at 05:53):
well, in any case: it's definitely a computer science view that recursion is less suspect than infinite iteration. but there are applications for non-convergent series:
S = -1/12
-12 S = 1
(1-13)S = 1
S = 1 + 13 S
= 13^0 + 13^1 + 13^2 + ... + 13^n + 13^(n+1)S
S(1 - 13^(n+1)) = 13^0 + 13^1 + 13^2 + ... + 13^n
= (-1/12)(1 - 13^(n+1))
= (13^(n+1) - 1)/12
for instance, these divergent numbers are crucial for calculating the closed form sum as a function of n.
Mario Carneiro (Dec 15 2024 at 05:54):
Sure, but if you like recursion more than infinite sums then maybe don't make claims about infinite sums
Mario Carneiro (Dec 15 2024 at 05:54):
because infinite sums are not just recursive functions
Rob Fielding (Dec 15 2024 at 05:54):
of course, you have to be careful when you interpret these things! but the algebra is doing what it's doing, because there's something really there!
Mario Carneiro (Dec 15 2024 at 05:55):
they are recursive functions + convergence
Mario Carneiro (Dec 15 2024 at 05:55):
and you've only dealt with half of the job here
Mario Carneiro (Dec 15 2024 at 05:56):
The reason to be picky about convergence in these kind of infinite sums is precisely because you can use the exact same reasoning to get different values for S
Rob Fielding (Dec 15 2024 at 06:00):
if you use recursion, then the difference between convergence vs divergence is pretty small.
V = Sum_V n + Tail_V n
presume that: lim n-> infty[ Tail_V n ] = 0
V = lim n-> infty [ Sum_V n ]
.... but in the general case, just DONT conflate the two.
X = Sum_X n + Tail_X n
(X - Tail_X n) = Sum_X n
as n goes to infinity, Sum_X n might go to infinity. If that's the case, then tail will cancel it out, so that we still have some constant X.
Mario Carneiro (Dec 15 2024 at 06:03):
Consider the following: suppose you have a setup such as that, where Tail_X is blowing up to cancel Sum_X and yielding an apparently meaningful constant X. I would like to argue that Sum_X adds to Y instead. So I define Tail_Y n = Tail_X n + (Y - X), and now Y = Tail_Y n + Sum_X n. So for the exact same structure, I have shown that Sum_X n sums to Y, modulo Tail_Y which also blows up to infinity.
Rob Fielding (Dec 15 2024 at 06:04):
When Tail does not go to 0 for large n, you get "divergence". If it goes to 0, it converges. Where we differ is whether the value X is meaningful. But just keep in mind
Sum_X = Final_X - Tail_X
Mario Carneiro (Dec 15 2024 at 06:04):
When the sum is convergent, you can't do this trick, because Tail_X n goes to 0 but Tail_Y n goes to Y - X ≠ 0, so this allows you to pick out the unique value satisfying that the tail goes to 0
Mario Carneiro (Dec 15 2024 at 06:10):
import Mathlib
open BigOperators
def Sum_S (n : Nat) : Rat := ∑ i < n, 2^i
def Tail_S (n : Nat) : Rat := 37 - Sum_S n
example (n : Nat) : Sum_S n + Tail_S n = 37 := by simp [Sum_S, Tail_S]
def Tail_S' (n : Nat) : Rat := 42 - Sum_S n
example (n : Nat) : Sum_S n + Tail_S' n = 42 := by simp [Sum_S, Tail_S']
This is a proof in your style that the infinite sum is equal to 37, and also 42. This is an absurd conclusion, and therefore we have to conclude that there is a reasoning principle we should not be allowing here.
Mario Carneiro (Dec 15 2024 at 06:12):
I argue that the bad reasoning principle is to have no constraints on what Tail_S is, such that I can just make it be the difference between what I have and what I want, and so conclude any desired value
Rob Fielding (Dec 15 2024 at 06:26):
you can't just assign whatever value you want. That's the point of including B = F(-1), where F = dA/dx. Because B already has to be 1/4, it forces S to be -1/12. It's the whole reason anybody ever came up with that number.
Mario Carneiro (Dec 15 2024 at 06:26):
The link between the 1+2+3+... divergent sum and -1/12 is quite complicated and involves the riemann zeta function or ramanujan summation
Mario Carneiro (Dec 15 2024 at 06:27):
the methods you are employing are not powerful enough to give this answer without also giving all other answers
Rob Fielding (Dec 15 2024 at 06:29):
for sure! T = 1 + 2 T is far, far more convincing. T=-1 is just as nonsensical as S=-1/12 ... you can't possibly like T if you don't like S. But if you don't like it; I suggest that you need to interpret it differently.
T = 1 + 2 T ... means 2^0+2^1+2^2+... = -1. There is no way around it. But the reasoning is far far simpler. But S still is -1/12. It's just that if the proof isn't simple, then people will doubt it. Even if you do the reasoning in Lean.
Mario Carneiro (Dec 15 2024 at 06:29):
Okay, we can talk about T instead. I agree it's simpler
Mario Carneiro (Dec 15 2024 at 06:30):
You are right that I "don't like it" for exactly the same reason
Mario Carneiro (Dec 15 2024 at 06:31):
T = 1 + 2 T means 2^0+2^1+2^2+... = -1. There is no way around it.
The trouble is that you would like to interpret the ... in this expression as a finite sum, and it's simply not
Rob Fielding (Dec 15 2024 at 06:31):
But the way to make sense of it... S is the "final value" that you can never reach through iteration; what infinite actually means... it means "never". This final value (fixed point), is an infinite string of 1 digits in base13. Because of that, if you multiply it times 12, you get an infinite digit representation of -1. That means that if you add 1, it all ripple-carries to give 0.
Mario Carneiro (Dec 15 2024 at 06:32):
I thought we were going to talk about T?
Rob Fielding (Dec 15 2024 at 06:33):
T is an infinite digit representation of -1.
12 S = T = -1 = 1 + 2 T = 9 + 10 T = 12 + 13 T etc
Mario Carneiro (Dec 15 2024 at 06:33):
This final value (fixed point), is an infinite string of 1 digits in base13.
This is a notion of convergence by the way: you are invoking convergence of an infinite digit sequence where each digit eventually stabilizes to a value after finitely many steps
Mario Carneiro (Dec 15 2024 at 06:34):
it's just not the standard notion of convergence on the reals
Rob Fielding (Dec 15 2024 at 06:34):
in general: Z = (n-1) + n * Z = -1 .... -1 is an interesting case, because you add 1 to it, and it ripple-carries to 0. The carried digit to the left is unreachable.
Mario Carneiro (Dec 15 2024 at 06:34):
this is convergence in the 13-adic (for S) or 2-adic (for T) numbers
Mario Carneiro (Dec 15 2024 at 06:34):
this "ripple carry" thing also occurs in the 2-adic numbers
Mario Carneiro (Dec 15 2024 at 06:36):
But there is no number like ...1111 in the real numbers, because it would have to be larger than every natural number and it is a property of the real numbers that for any real number there is always a natural number larger than it
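(Aside: that Archimedean fact is a one-liner in Mathlib:)
import Mathlib

-- For every real number there is a natural number strictly above it.
example (x : ℝ) : ∃ n : ℕ, x < n := exists_nat_gt x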
Rob Fielding (Dec 15 2024 at 06:37):
yes. in every base, it happens like: Z = (n-1) + n * Z.
Z1 = 1 + 2 Z1
Z2 = 2 + 3 Z2
...
Z9 = 9 + 10 Z9
...
ZA = 12 + 13 ZA.
A way to think of ZA is ... the digits available are.... 0 1 2 3 4 5 6 7 8 9 A B C ..... base13, all C digits. it's "full". if you add 1, it rolls over to 0.
Mario Carneiro (Dec 15 2024 at 06:37):
this is called the archimedean property, and the 2-adic numbers are a non-archimedean field
Mario Carneiro (Dec 15 2024 at 06:37):
Note though that you have to pick a base, the 2-adic numbers are not 13-adic and vice versa
Rob Fielding (Dec 15 2024 at 06:38):
but if you think about it that way, -1/12 is not even remotely nonsensical. Like in a finite field, a fraction means that you multiply times an int, and get an int.
12 S + 1 = 0
Mario Carneiro (Dec 15 2024 at 06:39):
I didn't say -1/12 is nonsensical, I said ...1111 is not a real number
Mario Carneiro (Dec 15 2024 at 06:39):
-1/12 is of course a real number
Rob Fielding (Dec 15 2024 at 06:39):
sure. That's why S = 1 + 13 S is easier to write.
Mario Carneiro (Dec 15 2024 at 06:39):
That's an equation for -1/12, not ....1111
Mario Carneiro (Dec 15 2024 at 06:40):
or ...CCCC or what have you
Mario Carneiro (Dec 15 2024 at 06:40):
that number does not exist in the real numbers, and as such it can't be equal to -1/12
Rob Fielding (Dec 15 2024 at 06:41):
S = 1 + 13 S
= 13^0 + 13 S
= 1 + 13(1 + 13 S)
= 1 + 13^1 + 13^2 S
= 1 + 13^1 + 13^2 (1 + 13 S)
= 13^0 + 13^1 + 13^2 + ...
S = -1/12
-12 S = 1
(1-13)S = 1
S = 1 + 13 S
That is all 1 digits base13
Mario Carneiro (Dec 15 2024 at 06:42):
-1/12 is a number satisfying S = 1+13 S, and this means you can give it a base expansion of the form <big number>CCCC in base 13 for any finite number of C's, but as you make the number of C's larger the big number gets even bigger and you can't just hide it away in a '...'
Mario Carneiro (Dec 15 2024 at 06:43):
because small variations in the big number completely change the value of the -1/12 coming out
Rob Fielding (Dec 15 2024 at 06:43):
the whole reason i learned Lean was to understand this phenomenon.
Mario Carneiro (Dec 15 2024 at 06:43):
So in that case maybe you should learn about how infinite sums work in lean
Mario Carneiro (Dec 15 2024 at 06:43):
starting point -> docs#tsum
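A sketch of how the claim would have to be phrased in those terms (stating it is easy; the content is entirely in the proof, and this particular sorry cannot be filled in, because the series is not summable in ℝ):
import Mathlib

-- hedged sketch only: the tsum form of the claim; ∑' assigns a junk value (0) to
-- non-summable families, so this statement is in fact unprovable
example : (-1/12 : ℝ) = ∑' n : ℕ, (n : ℝ) := by
  sorry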
Rob Fielding (Dec 15 2024 at 06:43):
i ran into this trying to turn integration into this kind of recursion
Rob Fielding (Dec 15 2024 at 06:45):
this here...
12 S + 1 = 0
justifies calling S=-1/12. but if S is represented with an infinite number of digits; then that is fine.
Mario Carneiro (Dec 15 2024 at 06:46):
real numbers have a decimal expansion that extends to the right, not the left
Mario Carneiro (Dec 15 2024 at 06:47):
because when you go to the right the numbers get smaller, and they converge to something
Mario Carneiro (Dec 15 2024 at 06:47):
when you go left they don't converge
Rob Fielding (Dec 15 2024 at 06:47):
X = 0.9 + 0.1 X
(0.9) X = 0.9
X = 1
Mario Carneiro (Dec 15 2024 at 06:47):
why do you keep showing algebra at me
Mario Carneiro (Dec 15 2024 at 06:47):
this is unnecessary
Rob Fielding (Dec 15 2024 at 06:47):
but that's the infinite digits to the right. it's the same exact issue.
Rob Fielding (Dec 15 2024 at 06:48):
and nobody doubts that 0.9999... = 1
Mario Carneiro (Dec 15 2024 at 06:48):
I don't think you understand the proof of 0.999... = 1
Rob Fielding (Dec 15 2024 at 06:49):
but it's the same form:
J = Sum_J n + Tail_J n
except, in convergent cases, it happens to be that Tail_J n would go to 0 for large n.
Mario Carneiro (Dec 15 2024 at 06:49):
you probably think the proof goes something like:
S = 1/10(9+S)
10S = 9+S
9S = 9
S = 1
But that's not a proof, as it does not prove that the sum converges
Rob Fielding (Dec 15 2024 at 06:50):
i would write it like:
S = 9/10 + (1/10)S
but the algebra is straightforward. "all 9s to the right in base 10"
Mario Carneiro (Dec 15 2024 at 06:51):
the algebra is straightforward but that's not the whole proof
Rob Fielding (Dec 15 2024 at 06:51):
but, if S solves for both 0.9999... and for 1, then that's enough for a computer.
Mario Carneiro (Dec 15 2024 at 06:52):
No, because it's a proof which assumes that 0.999... exists
Mario Carneiro (Dec 15 2024 at 06:52):
you end up with a conditional result saying that if this sum exists then it is equal to 1
Rob Fielding (Dec 15 2024 at 06:54):
heh! so.... i don't like infinite iteration. that's why i use recursion. if i don't use infinite iteration, then i don't need to wrestle with whether it's right or not. recursion is totally believable though.
S = 9/10 + 1/10 S
10 S = 9 + S
(10 - 1)S = 9
9 S = 9
S = 1
S = 9/10 + 1/10 S
= 0.9 + 0.1 S
= 0.9 + 0.1(0.9 + 0.1 S)
= 0.99 + 0.01 S
...
Rob Fielding (Dec 15 2024 at 06:57):
I would not go as far as Norman Wildberger in arguing that it's wrong. But... when you write code, recursion is far superior to reason about, and to write code for.
Mario Carneiro (Dec 15 2024 at 06:59):
The actual proof looks more like this.
Let a_n = 0.99...9 (with n nines), i.e. a_n = 9/10 + 9/100 + ... + 9/10^n. Then (by some algebra) a_(n+1) = a_n + 9/10^(n+1) > a_n, so the sequence is increasing and bounded above by 1, therefore it has a limit, call it L. (This L is 0.999... .) In fact, because the difference between a_n and 1 is 1/10^n, which approaches 0, the sequence also approaches 1. By uniqueness of limits, L = 1.
Mario Carneiro (Dec 15 2024 at 07:00):
the equations you have been showing roughly correspond to the (by some algebra) step in this proof
Rob Fielding (Dec 15 2024 at 07:00):
But, I just used simple algebra. Unless I divide by 0, or assign conflicting values as assumptions; then on what basis would it get rejected?
Mario Carneiro (Dec 15 2024 at 07:01):
the point is that (by some algebra) is only a small part of the proof
Mario Carneiro (Dec 15 2024 at 07:01):
there is a whole bunch of other stuff, significantly less algebra-looking, that one has to do to prove facts about infinite sums
Mario Carneiro (Dec 15 2024 at 07:02):
The mistake in your last post is essentially the ...
at the end
Rob Fielding (Dec 15 2024 at 07:03):
sure, there's stuff that I can't figure out how to do recursively. i would like to figure out general integration. A = 1 + xA isn't a lot to work with. F = SumF n + TailF n ... is better. But you can't express everything like this; such as infinite products.
Mario Carneiro (Dec 15 2024 at 07:03):
You can't express infinite sums like this either
Mario Carneiro (Dec 15 2024 at 07:03):
you can express finite sums like this
Mario Carneiro (Dec 15 2024 at 07:04):
infinite sums are limits of finite sums
Mario Carneiro (Dec 15 2024 at 07:04):
so to prove something about an infinite sum you have to talk about addition and you have to talk about limits
Mario Carneiro (Dec 15 2024 at 07:04):
if you only have addition you will never prove anything about an infinite sum
Rob Fielding (Dec 15 2024 at 07:05):
A = 1 + x A
= 1 + x(1 + x A)
= x^0 + x^1 + ... + x^n + x^(n+1)A
(1-x)A = 1
A = 1/(1-x)
= x^0 + x^1 + x^2 + ... + x^n + x^(n+1)/(1-x)
Rob Fielding (Dec 15 2024 at 07:06):
i don't see what's objected to in recursion.
Mario Carneiro (Dec 15 2024 at 07:06):
infinite sums are not recursive
Mario Carneiro (Dec 15 2024 at 07:06):
finite sums are
Mario Carneiro (Dec 15 2024 at 07:06):
you can use recursive methods to prove things about finite sums
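For example, here is a minimal sketch of the kind of finite statement recursion does prove: the identity behind A = 1 + x*A, with the denominator cleared so no division or convergence is involved (the induction mirrors the expansion step):
import Mathlib

-- finite geometric identity: (1 - x) * (x^0 + ... + x^(n-1)) + x^n = 1
example (x : ℚ) (n : ℕ) :
    (1 - x) * (∑ k ∈ Finset.range n, x ^ k) + x ^ n = 1 := by
  induction n with
  | zero => simp
  | succ n ih =>
    rw [Finset.sum_range_succ]
    linear_combination ih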
Mario Carneiro (Dec 15 2024 at 07:07):
you have to then use quantifier reasoning to prove things about infinite sums
Rob Fielding (Dec 15 2024 at 07:07):
but... it is the same pattern. when does it end?
A = x^0 + x^1 + x^2 + ... + x^n + x^(n+1) * (x^0 + x^1 + x^2 + ... + x^n + x^(n+1)A)
Mario Carneiro (Dec 15 2024 at 07:08):
The notation is possibly misleading, it's not really an addition process that goes on forever
Mario Carneiro (Dec 15 2024 at 07:09):
it's more like an estimation of where the addition process seems to be headed toward
Rob Fielding (Dec 15 2024 at 07:09):
but that's why it's superior. you don't think of it as iterating infinitely. infinite iteration is by definition "never" at the last term.
Mario Carneiro (Dec 15 2024 at 07:10):
that's true but not sufficient to characterize it
Mario Carneiro (Dec 15 2024 at 07:11):
the infinite sum is a specific value with the property that for any little region around it the finite sum process eventually enters that region (after finitely many steps)
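In symbols: "the infinite sum is L" means that for every ε > 0 there is an N such that for all n ≥ N, |(a_0 + a_1 + ... + a_n) - L| < ε.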
Mario Carneiro (Dec 15 2024 at 07:13):
for example, a sequence like 1, 3/2, 7/4, 15/8, ... (that is, 2 - 1/2^n) eventually gets arbitrarily close to 2 as n grows. I can pick any positive value like 1/4 and observe that after 3 terms the sequence is within 1/4 of 2, and from then on it is never farther than 1/4 from 2
Mario Carneiro (Dec 15 2024 at 07:13):
Therefore we say the infinite sum 1 + 1/2 + 1/4 + 1/8 + ... exists, and is equal to 2
Rob Fielding (Dec 15 2024 at 07:15):
anyways: for this particular application of 1+2+3+... , it's way more believable than
1 + 2 + 3 + 4 + 5 + 6 + ...
- 1 + -2 + 3 + -4 + 5 + -6 + ...
0 + 4 + 0 + 8 + 0 + 12 + ...
what I hate about that proof is that it ignores what's going on with the zeroes. that indicates it is expanding at half the rate
(Sf (2n) - Bf (2n)) = 4 * (Sf n)
that's the whole reason I started learning Lean. People wrangling infinite sums are using the most waffly logic I can imagine.
Mario Carneiro (Dec 15 2024 at 07:15):
on the other hand, the sequence of partial sums of 1 + 2 + 3 + ... grows indefinitely, and it has no upper bound - for any bound N I can take N + 1 steps and then the sum will definitely be bigger than N. So it does not converge, and more specifically we say it diverges to infinity, written 1 + 2 + 3 + ... = ∞
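(Concretely, the partial sums are 1, 3, 6, 10, ..., i.e. n(n+1)/2, which is at least n, so they eventually pass any bound.)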
Mario Carneiro (Dec 15 2024 at 07:16):
You need to learn about infinite sums from a math textbook, not from bad proofs on the internet
Mario Carneiro (Dec 15 2024 at 07:16):
All three infinite sums you just wrote do not converge
Rob Fielding (Dec 15 2024 at 07:17):
But this also grows indefinitely:
S - Tail_S n = Sum_S n
If you have the tail go to 0, you get away with conflating the value with the Sum. But when it diverges, you can't conflate them.
Mario Carneiro (Dec 15 2024 at 07:17):
I don't know what you mean by conflating. S is the sum, by assumption
Mario Carneiro (Dec 15 2024 at 07:18):
If the tail does not go to zero then the sum does not converge
Mario Carneiro (Dec 15 2024 at 07:19):
If S is not used to denote the sum but rather a specific real number you would like to prove the sum equal to (e.g. S := -1/12), then it suffices to just prove that the Tail_S function in that equation goes to 0
Rob Fielding (Dec 15 2024 at 07:20):
X = (1/2) + (1/2) X
= 1 -- it solves for 1
= 1/2 + 1/4 + 1/8 + ....
Sum_X n = 1 - (1/2)^n
X = Sum_X n + Tail_X n
implies
X - Tail_X n = Sum_X n
if Tail_X n goes to 0 as n grows, then you can confuse these
X = Sum_X n
X is not the sum. Sum is the sum; Sum_X n specifically
Mario Carneiro (Dec 15 2024 at 07:20):
I mean the infinite sum you are interested in calculating
Rob Fielding (Dec 15 2024 at 07:21):
oops... but the idea is that these strange values, when the series does not converge, are a real phenomenon. you need to plug the value in to get the closed form as a function of n...
Mario Carneiro (Dec 15 2024 at 07:21):
You write X = (1/2) + (1/2) X
as the first line of the proof, but lean would not accept that as the first line of a proof, because you haven't defined X yet
Mario Carneiro (Dec 15 2024 at 07:22):
is this an assumption or a theorem?
Mario Carneiro (Dec 15 2024 at 07:22):
Math proofs introduce variables with a sentence that says "Let X be ...". You don't have sentences in your proofs and this is a problem
Rob Fielding (Dec 15 2024 at 07:23):
ok... but ...
X = 1
(1/2)X = (1/2)
(1 - 1/2)X = 1/2
X = (1/2) + (1/2) X
Mario Carneiro (Dec 15 2024 at 07:23):
I recommend you read a textbook on logic as it will help teach you how proofs are structured
Mario Carneiro (Dec 15 2024 at 07:24):
I recommend Mendelson
Rob Fielding (Dec 15 2024 at 07:24):
you can turn any number into one of those equations btw. and that equation always proposes an infinite digit representation. this is basic CS.
Rob Fielding (Dec 15 2024 at 07:25):
i am taking "a=b" completely literally. if you write 12=0, it's not a contradiction unless you also propose that 12 != 0... because if 12=0 is an assumption, then you are in mod 12.
Mario Carneiro (Dec 15 2024 at 07:25):
All of those equations are equivalent, I agree
Mario Carneiro (Dec 15 2024 at 07:26):
none of them say anything about an infinite sum
Mario Carneiro (Dec 15 2024 at 07:26):
I think you need to learn how to use lean to refute your claims
Mario Carneiro (Dec 15 2024 at 07:27):
example (h : 12 = 0) : False := by cases h
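As a side note, a sketch of why the "Let ... be ..." sentence matters here: the same literal equation is absurd or true depending on the ambient type.
import Mathlib

-- 12 = 0 is false in ℕ but true in ℤ/12ℤ; the assumption only "puts you in mod 12"
-- if you said up front that that is where you are working
example : (12 : ZMod 12) = 0 := by decide
example : (12 : ℕ) ≠ 0 := by decide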
Rob Fielding (Dec 15 2024 at 07:27):
S = -1/12
-12 S = 1
(1 - 13)S = 1
S = 1 + 13 S
every value set to a rational number tells you how to make an infinite digit representation. it might even be the case that all infinite numbers can be represented as a finite number. (maybe?)
S = 13^0 + 13^1 + 13^2 + ... = 1 + 13 S = 1 + 13(1 + 13(1 + 13S)) ...
Rob Fielding (Dec 15 2024 at 07:28):
i am taking recursion very very literally, equational reasoning requires it.
Mario Carneiro (Dec 15 2024 at 07:28):
You are not doing proofs, you are writing a sequence of equations
Mario Carneiro (Dec 15 2024 at 07:28):
proofs require sentences
Rob Fielding (Dec 15 2024 at 07:29):
perhaps. but this is our number system.
Rob Fielding (Dec 15 2024 at 07:29):
and if it produces strange stuff, that's not my problem. it does mean that i need to bend my mind around interpreting it though.
Mario Carneiro (Dec 15 2024 at 07:29):
if you wrote a sentence you would have to make an assumption which made clear which number system you are talking about
Mario Carneiro (Dec 15 2024 at 07:29):
but if the number system you are talking about is the real numbers, then infinite numbers do not exist
Rob Fielding (Dec 15 2024 at 07:30):
but if you feed gibberish into a Python interpreter, it's going to do something. in computer security there is a saying...
"An implementation defines the undefined stuff"
Mario Carneiro (Dec 15 2024 at 07:30):
Yes, that's fine but you have to play by the rules of the system
Rob Fielding (Dec 15 2024 at 07:31):
and i don't see how to reject S = -1/12, etc. it's simple algebra.
Mario Carneiro (Dec 15 2024 at 07:31):
That's because you aren't listening
Mario Carneiro (Dec 15 2024 at 07:32):
The simple algebra is fine, but it doesn't have the magical ability to prove things about limits
Mario Carneiro (Dec 15 2024 at 07:32):
and S is a limit
Rob Fielding (Dec 15 2024 at 07:32):
but you can't object to the result. you can only object to a rule that you followed, and replace it with a rule that you don't object to. that's the whole problem with objecting to things that people can clearly do in the algebra.
X = 1 + 2 X
is -1.... and that's ... just how it is. 1+2+4+8+... = -1
Mario Carneiro (Dec 15 2024 at 07:33):
Why is your first step X = 1 + 2 X? Where did that come from?
Mario Carneiro (Dec 15 2024 at 07:33):
You should start the proof by a "Let X be ..." statement
Mario Carneiro (Dec 15 2024 at 07:33):
that's a sentence
Mario Carneiro (Dec 15 2024 at 07:33):
use words not equations
Mario Carneiro (Dec 15 2024 at 07:34):
do not respond by posting a bunch of equations again
Rob Fielding (Dec 15 2024 at 07:34):
X = -1
(1-2)X = 1
X = 1 + 2 X
that's where it came from
Mario Carneiro (Dec 15 2024 at 07:34):
:man_facepalming:
Rob Fielding (Dec 15 2024 at 07:35):
but that's where it came from. i just assigned a value to a name, and you can make a recursive equation by following the rules. and these are numbers that have infinite digit representations. what was Ramanujan thinking? it isn't something that i dreamed up myself
Mario Carneiro (Dec 15 2024 at 07:36):
So your theorem is "If X = -1, then X = 1 + 2X"?
Rob Fielding (Dec 15 2024 at 07:36):
exactly
Mario Carneiro (Dec 15 2024 at 07:36):
This doesn't have any infinite digit representations in it
Rob Fielding (Dec 15 2024 at 07:37):
look... write a computer program that follows legitimate rules at random. the program will find that.
Mario Carneiro (Dec 15 2024 at 07:37):
it's "just simple algebra"
Rob Fielding (Dec 15 2024 at 07:37):
so, you can't just say "a/b" without also asserting "b != 0". if you object to something, then the meta rule needs to prevent it.
Mario Carneiro (Dec 15 2024 at 07:38):
(in lean you can but let's set that aside)
Rob Fielding (Dec 15 2024 at 07:38):
you can't object to the result. you can only object to rules and axioms. no?
Mario Carneiro (Dec 15 2024 at 07:38):
I can object that the result isn't what you say it is
Mario Carneiro (Dec 15 2024 at 07:39):
if you point at a 3 and tell me it's a 7 I will disagree
Rob Fielding (Dec 15 2024 at 07:39):
so, is this a "proof by beatdown/authority" then?
Mario Carneiro (Dec 15 2024 at 07:39):
if you point at X = 1 + 2X and tell me it's an infinite sum I will disagree
Rob Fielding (Dec 15 2024 at 07:39):
does "a=b" mean:
- a can be rewritten as b
- b can be rewritten as a
?
Mario Carneiro (Dec 15 2024 at 07:40):
a = b means a and b are equal (assuming that a and b are well formed), which means that both rewrites are legal
Rob Fielding (Dec 15 2024 at 07:41):
but if you had nothing but a C compiler, how would you implement "a=b"? it's a rewrite rule, no?
Mario Carneiro (Dec 15 2024 at 07:42):
That's completely irrelevant? Equalities are not implementations
Mario Carneiro (Dec 15 2024 at 07:42):
I'm not really sure where you are going with this
Rob Fielding (Dec 15 2024 at 07:45):
I got something as weird as -1/12=1+2+3+... accepted even as a Lean proof.
S = 1+2+3+...+n + Tail_S(n)
if n can be as large as you like, Tail_S(n) has to be defined as something. Because B=1/4, and the outcome of (Sf 2n) - (Bf 2n) = 4*(Sf n) .... it's going to be -1/12. But the objection is ridiculous imho. It just means that you need to be careful to think about objects that you can multiply by 12 and add 1 to get 0. And that totally is what all 1s in base13 is.
Rob Fielding (Dec 15 2024 at 07:46):
S-Tail_S n = Sum_S n
that is the actual sum... not S.
Mario Carneiro (Dec 15 2024 at 07:47):
I wish you would actually prove this thing about base13, that's an interesting result in the 13-adic integers but you don't seem to be getting close to a proof
Mario Carneiro (Dec 15 2024 at 07:47):
Because Tail_S(n) actually does converge to 0 in the 13-adic integers
Rob Fielding (Dec 15 2024 at 07:47):
Z = 5 + 10 Z
(.....55555555_(10)) .... all 5s in base 10
Z = 5 + 10 ( 5 + 10 Z)
= 5 + 50 + 100Z
Mario Carneiro (Dec 15 2024 at 07:48):
(actually I'm not sure that's the case, it's true when you use powers of 13 but not for the sum of all the numbers)
Mario Carneiro (Dec 15 2024 at 07:49):
You've started a proof with an equation again
Mario Carneiro (Dec 15 2024 at 07:49):
never do that, this is not how math proofs work
Mario Carneiro (Dec 15 2024 at 07:50):
The first step is to introduce your variables with "Let"
Rob Fielding (Dec 15 2024 at 07:50):
this even works if you pick arbitrary definitions of the number, btw.... that blew my mind.
-1 = V = 1 + 2 V = 2 + 3 V = 3 + 4 V ....
V = 1 + 2(2 + 3 V)
= 1 + 4 + 6 V
-5 V = 5
V = -1
Mario Carneiro (Dec 15 2024 at 07:51):
I've exceeded my "someone is wrong on the internet" budget. Have a good day, and maybe read some Mendelson
Rob Fielding (Dec 15 2024 at 07:51):
:-)
Rob Fielding (Dec 15 2024 at 07:51):
it works though! even in Lean! maybe it's wrong anyway!
Rob Fielding (Dec 15 2024 at 07:56):
S = -1/12
-12 S = 1
(1 - 13)S = 1
S = 1 + 13 S
= 13^0 + 13(1 + 13 S)
= 13^0 + 13^1 + 13^2 + ... + 13^n + 13^(n+1) S
(-1/12) = 1 + 13(-1/12)
= 1 + 13 + 13^2(-1/12)
....
that's the proof that this number is also representable as all 1s in base13
Sébastien Gouëzel (Dec 15 2024 at 07:59):
Unless you fill in the dots in the following sentence, you're not doing mathematics:
Given an arbitrary sequence of real numbers (u_n), we say that its series converges to a real number S if ...
Then, with this definition of convergence, let us show that the series associated to u_n = n converges to -1/12. Here is the proof: ...
Note how the proof will have to refer to the definition in the first sentence. If you fill in the dots with "we say that its series converges to a real number S if ...", then showing the convergence of the series to -1/12 is easy, but uninteresting. You need first to come up with an interesting definition, before talking about proofs.
Rob Fielding (Dec 15 2024 at 08:02):
the "..." is finite btw. i totally agree with you btw. on the end of an expression "+ ..." is too ambiguous for me.
Rob Fielding (Dec 15 2024 at 08:05):
S = 1 + 13 S
= -1/12
S = (13^0 + 13^1 + 13^2 + ... + 13^n) + 13^(n+1)S
S*(1 - 13^(n+1)) = 13^0 + 13^1 + 13^2 + ... + 13^n
(-1/12)*(1 - 13^(n+1)) = (13^(n+1) - 1)/12 = Sum_S n
even though you get "nonsense" about powers of 13 adding to -1/12, S is the critical value to define the closed form Sum_S n here.
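For what it's worth, that finite split does check in Lean as plain algebra; a minimal sketch (note it is a statement about every finite n, not about an infinite sum):
import Mathlib

-- -1/12 = (closed-form partial sum) + 13^(n+1) * (-1/12), for every n
example (n : ℕ) :
    ((13:ℚ) ^ (n + 1) - 1) / 12 + 13 ^ (n + 1) * (-1/12) = -1/12 := by
  ring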
Rob Fielding (Dec 15 2024 at 08:06):
but it is not nonsense. 12*S = -1 is the key. it's just that if -1 is represented with infinite digits, it still works because you add 1, and it carries to produce 0
Rob Fielding (Dec 15 2024 at 08:10):
what i am really saying is that maybe you should reconsider your understanding of infinite numbers. all 2s in base 10 is not all 1s in base 10. it's not like an undefined number that keeps increasing every time you look at it... it is a well-defined number. and you can define it as an equation.
Chris Birkbeck (Dec 15 2024 at 08:47):
If you can fill in this sorry
import Mathlib
open Filter
lemma A : -(1 : ℚ) / 12 = limUnder (atTop) (fun k : ℕ => ∑ m in Finset.range k, (m : ℚ)) := by sorry
I guarantee you that everyone on this thread will say you are right and we are wrong.
Sébastien Gouëzel (Dec 15 2024 at 08:59):
One last remark: there are many rigorous ways to define the sum of a series which is divergent in the usual sense. For some of these definitions, indeed the sum of 1 + 2 + 3 + ... is -1/12, but this has to be justified in terms of the definition, not formal algebraic manipulations which you haven't justified. A standard reference for this is the book "Divergent series" by Hardy, where you will see many ways to fill in the dots.
Rob Fielding (Dec 16 2024 at 17:27):
This is Sorry Free and no errors. It is using recursion all over the place in the theorems.
https://gist.github.com/rfielding/cbfdca49a6320685cb8e7cfc6bbb0449
Sébastien Gouëzel (Dec 16 2024 at 17:32):
Please, please, go and read the introduction of Hardy, Divergent Series (https://sites.math.washington.edu/~morrow/335_17/Hardy-DivergentSeries%202.pdf), especially Section 1.3, highlighting the need to define things before giving any proof about them -- note that it's not trivial, it took mathematicians several centuries to understand how important this is, but now that we have understood it you should take it seriously.
Sébastien Gouëzel (Dec 16 2024 at 17:34):
The bottom of Page 5 and the top of Page 6, especially, are completely non-technical but explain perfectly what is going on here.
Rob Fielding (Dec 17 2024 at 01:17):
X = 1 + 2 X
= 1 + 2(1 + 2 X)
= 1 + 2 + 4 X
= 1 + 2 + 4 + ... + 2^n + 2^(n+1) X
X - 2^(n+1) X = Sum_X n
(-1)(1 - 2^(n+1)) = Sum_X n
-- this is what people mean when they say "infinite sum" gives infinity
Sum_X n = 2^(n+1) - 1 = X - Tail_X n
-- this is what i don't like about "+ ..." on the end. Tail must be defined
(-1) = 1 + 2(-1)
= 1 + 2 + 4(-1)
= 1 + 2 + 4 + ...
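The closed form for the finite partial sums is the uncontroversial part; a minimal sketch of it via Mathlib's geometric-sum lemma (with x = 2 this is the Sum_X n = 2^(n+1) - 1 above, and nothing here involves the "..."):
import Mathlib

-- partial sums of x^0 + x^1 + ... + x^n in closed form, for any x ≠ 1
example (x : ℚ) (hx : x ≠ 1) (n : ℕ) :
    ∑ k ∈ Finset.range (n + 1), x ^ k = (x ^ (n + 1) - 1) / (x - 1) :=
  geom_sum_eq hx (n + 1)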
Matt Diamond (Dec 17 2024 at 01:34):
FYI, we're all familiar with alternative summation methods, including the ways that 1 + 2 + 4 + 8... can be summed to -1. These are interesting ideas, and if you would like to learn about how to explore these ideas in Lean, people would be happy to help you. The reason you've been encountering resistance is that the code you've written does not properly capture these concepts in Lean.
Rob Fielding (Dec 17 2024 at 18:56):
you are just making things up. "i can just set S to 37!" ... no you cannot. the whole reason it gets the value it does...
S = 1 + 2 + 3 + 4 + 5 + 6 + ...
B = 1 + -2 + 3 + -4 + 5 + -6 + ...
S-B =
0 + 4 + 0 + 8 + 0 + 12 + ...
Which looks like it is expanding at half the rate, but multiplied by 4: suggesting that we may find S-B = 4 S
in some form.
The whole reason you get the relationship is that the _finite_ sum Sum_S (2*n) - Sum_B (2*n) reduces to 4 * (Sum_S n) with no other assumptions. From there, you find that Tail_S (2*n) - Tail_B (2*n) = 4*(Tail_S n) solves for the value Sf n, because all the n cancel out. And -1/12 is what is left.
I did not pick that number out of thin air. It's a consequence of S and B.
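A minimal sketch of just that finite relationship, with Sum_S and Sum_B written out inline from the definitions used in the thread, and with (-1)^(2n) already simplified to 1 (that simplification is the only extra step taken here):
import Mathlib

-- Sum_S (2n) - Sum_B (2n) = 4 * Sum_S n, as plain algebra over ℚ
example (n : ℚ) :
    (2*n) * (2*n + 1) / 2 - (1/4 - (2*(2*n) + 1) / 4) = 4 * (n * (n + 1) / 2) := by
  ring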
Mario Carneiro (Dec 17 2024 at 18:59):
What was your takeaway from the recommended reading above?
Rob Fielding (Dec 17 2024 at 19:01):
S is not arbitrarily set. It's a consequence of B, which happens to be 1/4.
Mario Carneiro (Dec 17 2024 at 19:02):
Who are you responding to?
Mario Carneiro (Dec 17 2024 at 19:04):
Sébastien Gouëzel said:
Please, please, go and read the introduction of Hardy, Divergent Series (https://sites.math.washington.edu/~morrow/335_17/Hardy-DivergentSeries%202.pdf), especially Section 1.3, highlighting the need to define things before giving any proof about them -- note that it's not trivial, it took mathematicians several centuries to understand how important this is, but now that we have understood it you should take it seriously.
:this: do this and come back with thoughts about it so that we can have a sensible discussion
Rob Fielding (Dec 17 2024 at 19:04):
the poster of this
import Mathlib
open BigOperators
def Sum_S (n : Nat) : Rat := ∑ i < n, 2^i
def Tail_S (n : Nat) : Rat := 37 - Sum_S n
example (n : Nat) : Sum_S n + Tail_S n = 37 := by simp [Sum_S, Tail_S]
def Tail_S' (n : Nat) : Rat := 42 - Sum_S n
example (n : Nat) : Sum_S n + Tail_S' n = 42 := by simp [Sum_S, Tail_S']
Rob Fielding (Dec 17 2024 at 19:04):
you will get a contradiction if you don't let the value be set by something else that is known.
Mario Carneiro (Dec 17 2024 at 19:05):
(you can use "quote reply" so that people can follow the thread)
Mario Carneiro (Dec 17 2024 at 19:06):
I don't think responding to your comment here is going to be productive until we get on the same page about what a proof is
Rob Fielding (Dec 23 2024 at 05:58):
note an almost identical situation if instead of F=dA/dx, you integrate instead for dG/dx=A. From there you get the alternating series G:
G = 1/1 + -1/2 + 1/3 + -1/4 + 1/5 + -1/6 + ... = ln(2)
And you can (more carefully than this!) figure out a self-similarity relationship:
H = 1/1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + ...
G = 1/1 + -1/2 + 1/3 + -1/4 + 1/5 + -1/6 + ...
-
==================================
H-G= 0 + 1/1 + 0 + 1/2 + 0 + 1/3 + 0 + ....
it's H again, but notably expanding at half the rate!
llllvvuu (Dec 23 2024 at 21:02):
Firstly, your definition of Tail_S is overcomplicated. It is just -1/12 - n(n+1)/2.
theorem Tail_S_eq (n : ℕ) : Tail_S n = -1 / 12 - n * (n + 1) / 2 := by
simp [Tail_B, Sum_B, Sum_S, Tail_S]
ring
With this, your file can be simplified as all of your definitions are unnecessary:
import Mathlib
def Tail_S (n : ℕ) : ℚ := -1 / 12 - n * (n + 1) / 2
theorem S_val (n : ℕ) : n * (n + 1) / 2 + Tail_S n = -1 / 12 := by
simp [Tail_S]
Now, we are missing the step where we claim that n * (n + 1) / 2 + Tail_S n represents 1 + 2 + 3 + ...:
theorem S_def (n : ℕ) : n * (n + 1) / 2 + Tail_S n = ∑' x : ℕ, (x : ℚ) := by
sorry
This hinges on the claim that Tail_S n represents (n + 1) + (n + 2) + ...:
theorem Tail_S_eq_sum (n : ℕ) : Tail_S n = ∑' x : ℕ, (x + n + 1 : ℚ) := by
sorry
This hinges on the following:
(n + 1) + (n + 2) + ... = (n + 1) + Tail_S (n + 1) = Tail_S (n)
Here we have assumed the claim for Tail_S (n + 1) to prove the claim for Tail_S (n). I am going to tell you here that this is not a valid induction, it goes the wrong way and there is no base case, i.e. it is an infinite regress.
But you are welcome to try to fill in these proofs.
llllvvuu (Dec 23 2024 at 21:59):
In your full "proof" you have used the Taylor expansion, but this is only valid within the radius of convergence, see: https://en.wikipedia.org/wiki/Taylor%27s_theorem
As it so happens, the radius of convergence of f(x)=1/(1-x) at 0 is -1 < x < 1. So Taylor expanding at 0 is not valid for f(-1).
In other words, you've simply pushed the issue from Tail_S to Tail_B:
-- Tail_S_eq_sum is claimed to follow from Tail_B_eq_sum,
-- but Tail_B_eq_sum is false since -1 is outside the radius of convergence
theorem Tail_B_eq_sum (n : ℕ) : Tail_B n =
∑' x : ℕ, (x + n + 1 : ℚ) * (-1) ^ (x + n) := by
sorry
BTW, this exact "proof" is addressed in the Wiki article: https://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B%AF#Heuristics
More here: https://www.smithsonianmag.com/smart-news/great-debate-over-whether-1234-112-180949559/
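Concretely, the tail of the geometric expansion is x^(n+1)/(1-x): for |x| < 1 it shrinks to 0, which is what turns the finite identity into a statement about a limit; at x = -1 it is (-1)^(n+1)/2 (and the differentiated tail is ±(2n+1)/4), neither of which ever gets small.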
llllvvuu (Dec 23 2024 at 23:51):
Regarding this step:
0 + 4 + 0 + 8 + ... = 4S
This is a very nice technique indeed. We can use it to prove an even more astounding result:
S = 1 + 1 + 1 + 1 + ...
= 1 + S
=> 0 = 1
Rob Fielding (Dec 24 2024 at 00:38):
hah! for sure... I could come out and just say: S = -1/12, and then prove that S=Sum_S n + Tail_S n later. But there is a human element. I wanted it to be extremely obvious that whatever constant (Sf n) is, I did not "pick" it. It is clear that S is arbitrary UNTIL you relate it to another sequence like B. Most of what is in this file is justifying the choice of B.
Rob Fielding (Dec 24 2024 at 00:48):
if you think about it.... all infinite sums can be turned into S = 1+1+1+1+... with arbitrary parentheses, and sums that don't use negative numbers. But that's the key reason why we should use a closed formula where n is the number of expansions. The whole reason for doing this is so that:
S = Sum_S (1) + Tail_S (1)
= Sum_S (2) + Tail_S (2)
...
= Sum_S(n) + Tail_S (n)
If you do this, then it's clear that Tail means "...", and it's not so much that n is "infinity", but it's any value you want, no matter how large. It resolves all the paradoxes, because it's now clear what we mean by an infinite value, versus the final actual value:
S = Sum_S (n) + Tail_S (n)
S - Tail_S (n) = Sum_S (n)
Now, you can carefully distinguish between taking a limit on n, versus the recursive definition. This way, we completely avoid the concept of an infinite number of iterations for any purpose at all. What this means is that it is not at all alarming that we never "approach" -1/12 with larger n; because you mean S - Tail_S n when you are thinking of "infinity"; but S is a number that is related to calculating the closed form sums.
A = Sum_A n + Tail_A n
A - Tail_A n = Sum_A n
That's how we get things like the sum of all powers getting a closed form like (x^(n+1)-1)/(x-1).
Eric Wieser (Dec 24 2024 at 00:52):
Mario Carneiro said:
:this: do this and come back with thoughts about it so that we can have a sensible discussion
@Rob Fielding, have you done this yet?
llllvvuu (Dec 24 2024 at 00:58):
Rob Fielding said:
S = Sum_S (1) + Tail_S (1) = Sum_S (2) + Tail_S (2) ... = Sum_S(n) + Tail_S (n)
If you do this, then it's clear that Tail means "...", and it's not so much that n is "infinity", but it's any value you want; no matter how large. It resolves all the paradoxes, because it's now clear what we mean by an infinite value, versus the final actual value: S = Sum_S (n) + Tail_S (n), S - Tail_S (n) = Sum_S (n)
sure, we understand. the issue is that Tail_S is wrong.
Rob Fielding (Dec 24 2024 at 01:00):
This conversation isn't really so much about the S=-1/12 result, but about what it even means to formalize something. This technique of X = Sum_X n + Tail_X n gives me the same answers for convergent series, and for all the divergent ones that I can find. X = Sum_X n + ... is a problem in that it supposes that it does not matter what is in "...".
I mean, just from assuming that X = -1
...
X = -1
-X = 1
(1-2)X = 1
X - 2X = 1
X = 1 + 2 X
= (1 + 2(1 + 2 X))
= 1 + 2 + 4 X
= 1 + 2 + 4(1 + 2 X)
= 1 + 2 + 4 + 8 X
= 1 + 2 + 4 + ... + 2^n + 2^(n+1) X
X - 2^(n+1) X = 1 + 2 + 4 + ... + 2^n
(-1)(1 - 2^(n+1)) = Sum_X n
2^(n+1) - 1 = Sum_X n
This is an actual application of divergent series; calculating a closed form. I cannot believe for a second that this is illegitimate; because it dispenses with dependencies on high-level math. This is EXACTLY how you get things to work on a computer; by turning it into tedious middle-school math.
llllvvuu (Dec 24 2024 at 01:06):
no one disputes the correctness of Sum_S = n(n + 1)/2, which is indeed middle school math. but you have given incorrect solutions for Tail_B and Tail_S. Your “solution” uses calculus which is neither middle school level nor correct.
llllvvuu (Dec 24 2024 at 01:15):
BTW if recursion is more your speed, try proving it this way:
import Mathlib
def tail (n : ℕ) : ℚ := n + 1 + tail (n + 1)
decreasing_by sorry
theorem tail_S (n : ℕ) : tail n = -1 / 12 - n * (n + 1) / 2 := by sorry
theorem S : tail 0 = -1 / 12 := by simp [tail_S]
Rob Fielding (Dec 24 2024 at 01:16):
What is missing from the proofs of S=-1/12 you see on the internet is this very subtle distinction in how far out things were expanded. As soon as you calculate this FINITE relationship:
Sum_S (2*n) - Sum_B (2*n) = 4 * (Sum_S n)
Because there is no use of "..." on the end, there is nothing complicated in this relationship to argue over. It means that if you use Sf n = Sum_S n + Tail_S n and Bf n = Sum_B n + Tail_B n, you can take:
Tail_S (2*n) - Tail_B (2*n) = 4 * (Tail_S n)
This actually solves for Sf n without doing any summation at all. It's a result of the structure of the number system. All the n cancel, and there is a -1/12 left over. Then all this "extra unnecessary stuff" like the theorem Sf n = 1 + 13 * (Sf n) is just more checks that it is not wrong.
Of course, no matter how many terms you add, you never "get closer to -1/12". It's just saying that the "final" value is a NUMBER with an infinite number of digits that forces you to express it as an equation:
12 * (Sf n) + 1 = 0
And of course, it means that whatever you think of (Sf n), you multiply it by 12 and add 1 to get it to ripple-carry to produce 0. People can argue about what it "means", but representing -1 as an infinite-digit number works. Most of the math I do relates to cryptography and finite fields. I am not bothered by an infinite sum totalling a bizarre fraction that is not even in the set of the types being added. S - Tail_X n is what people mean when they say that Sum_X n is "infinity".
llllvvuu (Dec 24 2024 at 01:19):
None of us are bothered by the idea that 1+2+3... could equal -1/12. Were you to provide a correct proof, we would accept it. You should be able to write all of the above in Lean, and I've just provided you a template for doing so.
Matt Diamond (Dec 24 2024 at 01:23):
Perhaps it would be more productive to acknowledge what we do agree on. We agree that Rob's Lean code proves a number of algebraic equalities.
The disagreement, as far as I can tell, is that Rob believes these equations have some meaning related to infinite sums and the rest of us don't. (Either that or we disagree on how their meaning relates to infinite sums.)
Rob Fielding (Dec 24 2024 at 01:26):
nobody seems to argue that Sum_S n is wrong. maybe not even arguing with the whole concept of Xf n = Sum_X n + Tail_X n. Just doubts about the assignment of the tail values. In the end, it's basically saying something like this:
Infinity is not a number. But there are numbers with an infinite number of digits. If you add together an infinite number of natural numbers, it may have a result that is not in the natural numbers.
What I am feeling right now is that even though I can calculate as a Lean4 proof that Sum_S (2*n) - Sum_B (2*n) = 4 * (Sum_S n), and then use that hint to solve Tail_S (2*n) - Tail_B (2*n) = 4 * (Tail_S n) for the actual value of Sf n... no amount of formalization will convince somebody until a rocket blows up on the basis of this calculation. It's not in the Hardy book after all! Sf n was chosen for me, against my will; as soon as I calculated S-B, I have to accept that Sf n = -1/12, and there's nothing I can do about it.
llllvvuu (Dec 24 2024 at 01:27):
Tail_S (2*n) - Tail_B (2*n) = 4 * (Tail_S n)
This equation is incorrect as explained previously.
llllvvuu (Dec 24 2024 at 01:28):
Or rather, it holds for the incorrect "solutions" to Tail_S and Tail_B, but the justification based on 0 + 4 + 0 + 8 + ... is wrong
Rob Fielding (Dec 24 2024 at 01:32):
Lean readily calculates Sum_S (2*n) - Sum_B (2*n) = 4 * (Sum_S n). That's a hint on what to try. Remember that the definitions mean that adding Sum and Tail cancels out n. The sum part is not controversial. The issue with Tail_S is that it depends on the sequence B being correct. And note that the value of Tail_S was not something I made up. It was forced upon me by this relationship:
A
F = dA/dx
B = F(-1)
S - B
I don't think anybody disagrees with the definition of A.
I have a theorem, justifySf. justifySf shows that the S-B tail part forces one exact value for Sf. I can't come up with a different value. It has to be -1/12.
Rob Fielding (Dec 24 2024 at 01:34):
if you go the other way and integrate A, you get: G = 1/1 + -1/2 + 1/3 + -1/4 + ... = ln(2). By the same exact technique. I can't get a value out of it for H = 1/1 + 1/2 + 1/3 + 1/4 + ...
llllvvuu (Dec 24 2024 at 01:42):
Rob Fielding said:
A, F = dA/dx, B = F(-1), S-B
This is wrong on many counts, as pointed out earlier. Also, you should not do this math in comments, as this defeats the purpose of Lean.
Anyways, if you are confident in your proof, you should try to fill in Tail_S_correct? below:
import Mathlib
def Tail_S (n : ℕ) : ℚ := n + Tail_S (n + 1)
decreasing_by sorry
def Sum_S (n : ℕ) : ℕ := ∑ i ∈ .range n, i
theorem Sum_S_eq (n : ℕ) : Sum_S n = n * (n - 1) / 2 := by
simp [Sum_S, Finset.sum_range_id]
def S : ℚ := Tail_S 0
theorem S_eq (n : ℕ) : S = Sum_S n + Tail_S n := by
induction n with
| zero =>
unfold Sum_S Tail_S S
nth_rw 1 [Tail_S]
simp
| succ n ih =>
nth_rw 1 [ih, Tail_S]
simp [Sum_S, Finset.sum_range_succ]
ring
def Tail_S_wrong (n : ℕ) : ℚ := -1 / 12 - n * (n - 1) / 2
theorem Tail_S_correct? (n : ℕ) : Tail_S n = Tail_S_wrong n := by sorry
theorem wrong : S = -1 / 12 := by simp [S_eq 1, Tail_S_correct?, Tail_S_wrong, Sum_S]
If you are comfortable with recursion, and your proof is indeed correct, you should not find it hard to do this at all. This is the only way anyone will be convinced.
Rob Fielding (Dec 24 2024 at 03:33):
note this about how (Sf n) is defined:
Sf n = (4*Sum_S n - (Sum_S (2*n) + Tail_B (2*n)))/3
Tail_B is the deriv of Tail_A, then evaluated at -1. So, if you believe that Sum_S is correctly defined, then the definition of Tail_B is what you want checked. Why was this chosen?
A is a sequence of powers:
A = 1 + x * A
A = x^0 + x^1 + x^2 + ... + x^n + x^(n+1)A
A trivially solves for 1/(1-x), but you can get 1*x^0 + 2*x^1 + 3*x^2 + ... by differentiating it. And you can try to plug 1 into it....
(d/dx)(x^0 + x^1 + x^2 + ... + x^n + x^(n+1)/(1-x))
=
1 x^0 + 2 x^1 + 3 x^2 + ... + n x^(n-1) + (d/dx)(x^(n+1)/(1-x))
for x=1, we get
1 + 2 + 3 + ... + n + (d/dx)(x^(n+1)/(1-x))
And we think we got the value of S directly. But there's a problem: the closed-form value 1/(1-x) has a divide by 0 for x=1. So, we use B = (d/dx)A, and evaluate that at -1 to get the alternating series. Divide by zero means that it could be any value, so using the alternating series is a better way around. With that, we find S-B gives us something that lets us compute (Sf n).
In any case, if you don't like Tail_B n, then just remember that it is deriv Tail_A (-1) x. And there is nothing controversial about the Tail_A definition. Tail_S n = (Sf n) - Sum_S n means that you can't argue with the definition of Tail. You can assign Sf n to anything you like, until you have the S - B relationship, and anything but -1/12 will create contradictions. Basically all my theorems break for anything but S=-1/12.
The definitions for S are like this, where justifySf does the algebraic manipulations that are caused by S-B. We solve for (Sf n), and miraculously, all n cancel out and give a constant:
def Sum_S (n : ℕ) : ℚ := n*(n+1)/2
/- justifySf is why this defintion was chosen. -/
def Sf (n : ℕ):ℚ := (4*Sum_S n - (Sum_S (2*n) + Tail_B (2*n)))/3
def Tail_S (n : ℕ) : ℚ := Sf n - Sum_S n
def Term_S (n : ℕ):ℚ := Sum_S (n) - Sum_S (n-1)
Note that I could just make my justification for the choice of B be "well, it happens to work! it causes us to relate S to a multiple of itself." But I could not get deriv to just work on this function with Rat -> Nat -> Rat. So I do the deriv calculation using some axioms. If you think it invalidates the proof, then we can say that I magically found this B with no explanation of how I found something that "just works".
I get it that there are places where I also "do math in the comments", but that's just because I could not get deriv to "just work", and have to describe what the far more verbose theorems are doing. But it IS being done in the theorems.
This is the URL at which I maintain the latest version of the code, as I add more and more theorems. It is AMAZING that you can state these complicated formulas, and all n cancel out and leave you with a constant. It's a pretty stout check that there isn't a typo making it work.
This general method gives me the SAME values that I see elsewhere for divergent sums. It also gives the same value for convergent sums, so this method of X = Sum_X n + Tail_X n appears to work in general. It's interesting that I can't google around and find a value for H = 1/1 + 1/2 + 1/3 + 1/4 + ..., and that even though I can compute the alternating series G = 1/1 + -1/2 + 1/3 + -1/4 + ... = ln(2), the algebra so far has evaded me assigning a value to H.
https://gist.github.com/rfielding/cbfdca49a6320685cb8e7cfc6bbb0449
llllvvuu (Dec 24 2024 at 04:14):
Rob Fielding said:
In any case, if you don't like Tail_B n, then just remember that it is deriv Tail_A (-1) x. And there is nothing controversial about the Tail_A definition.
This is not correct.
Rob Fielding said:
A trivially solves for 1/(1-x).
This is not correct unless |x| < 1.
Rob Fielding said:
I get it that there are places where I also "do math in the comments", but that's just because I could not get deriv to "just work", and have to describe what the far more verbose theorems are doing. But it IS being done in the theorems.
This is not correct.
Rob Fielding (Dec 24 2024 at 04:17):
A = 1 + x * A
.... do not assume we are doing calculus. If that's the definition of A, then you have to admit this:
A = 1 + x * A
A - x * A = 1
A * (1 - x) = 1
A = 1/(1-x) -- (1-x) != 0
A = 1 + x * A
= 1 + x ( 1 + x ( 1 + x A ))
= x^0 + x^1 + x^2 + x^3 A
The recursion matches the infinite pattern, so we are dealing with infinite sums.
That's so super-trivial.
Jireh Loreaux (Dec 24 2024 at 04:22):
Rob Fielding said:
A = 1 + x * A
I can't resist. I will not interact further than this single post. This is the only thing I will say: that's not a definition of A. You haven't defined A at all. You've assumed that a certain A exists which satisfies this equality, and then you showed (assuming x ≠ 1), that A = 1 / (1-x). This is what Mario tried to explain so many posts ago.
Rob Fielding (Dec 24 2024 at 04:22):
it's defined in the Lean code ffs. I guess the basic point that my Lean code is making is that the definition of equality should mean that a=b says that a can overwrite b, and b can overwrite a.
Even just naming a fraction gives recursive equations like this:
-1/12 = S
1 = -12 S
1 = (1 - 13) S
1 + 13 S = S
An infinite series such that multiplying it by 13 and adding 1 gives you back the same S would fit the definition of -1/12. It's all a result of taking = seriously. You can duck-type a language btw. You can accumulate all the operations performed, and create a list of what the types could be.
llllvvuu (Dec 24 2024 at 04:24):
(deleted)
Jireh Loreaux (Dec 24 2024 at 04:38):
Mario Carneiro said:
Sébastien Gouëzel said:
Please, please, go and read the introduction of Hardy, Divergent Series (https://sites.math.washington.edu/~morrow/335_17/Hardy-DivergentSeries%202.pdf), especially Section 1.3, highlighting the need to define things before giving any proof about them -- note that it's not trivial, it took mathematicians several centuries to understand how important this is, but now that we have understood it you should take it seriously.
:this: do this and come back with thoughts about it so that we can have a sensible discussion
Rob Fielding (Dec 24 2024 at 04:38):
I read through it. It's useless for evaluating recursive definitions of sums; because he doesn't do that anywhere, or at all.
llllvvuu (Dec 24 2024 at 04:55):
Rob Fielding said:
for x=1, we get
1 + 2 + 3 + ... + n + (d/dx)(x^(n+1)/(1-x))
You can simplify this:
B = F(-1) = 1/4 = 1 - 2 + 3 - 4 + ... + n * (-1) ^ (n + 1) + (2n + 1)/4 * (-1)^n
However, this doesn't really prove anything about 1 - 2 + 3 - 4 + .... Yet, you have claimed that 1 - 2 + 3 - 4 + ... is B.
The problem is that this tail term (2n + 1)/4 * (-1)^n is not really the result of the recursive continuation that you speak of (in this case, the recursive continuation would be, in Python, tail_B = lambda n : (n + 1) * (-1) ** n + tail_B (n + 1), which BTW doesn't evaluate). So it is not valid to put tail_B to be (2n + 1)/4 * (-1)^n.
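As a purely finite identity, the bookkeeping above does close: when n is even the partial sum 1 - 2 + 3 - ... equals -n/2 and the tail term (2n + 1)/4 * (-1)^n equals (2n + 1)/4; when n is odd the partial sum equals (n + 1)/2 and the tail term equals -(2n + 1)/4. In both cases the total is exactly 1/4, for every finite n; the question is only whether that finite identity says anything about the infinite series.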
Rob Fielding (Dec 24 2024 at 04:57):
B = 1/4 is in the Hardy book, btw; in multiple places.
I could just say that I used it because it happens to work. But starting by differentiating A means that Sum_A and Tail_A determine what Sum_B and Tail_B are.
Rob Fielding (Dec 24 2024 at 04:59):
def Af (x:ℤ):ℚ := 1/(1 - x)
def Sum_A (x:ℤ) (n:ℕ):ℚ := (x^(n+1)-1)/(x-1)
def Tail_A (x:ℤ) (n:ℕ):ℚ := Af x - Sum_A x n
def Term_A (x:ℤ) (n:ℕ):ℚ := Sum_A x n - Sum_A x (n-1)
def Ff (x:ℤ):ℚ := 1/((1-x)^2) -- d[Af x]/dx
def Sum_F (x:ℤ) (n:ℕ):ℚ :=
(n*(x^(n+1)) - (n+1)*(x^n) + 1)/((1-x)^2) -- d[Sum_A x n]/dx
def Tail_F (x:ℤ) (n:ℕ):ℚ := Ff x - Sum_F x n -- d[Tail_A x n]/dx
def Term_F (x:ℤ) (n:ℕ):ℚ := Sum_F x n - Sum_F x (n-1)
def Tail_B (n:ℕ):ℚ := (2*n+1)*(-1)^n/4
def Sum_B (n:ℕ):ℚ := Ff (-1) - Tail_B n
def Bf (n:ℕ):ℚ := Sum_B n + Tail_B n
def Term_B (n:ℕ):ℚ := Sum_B n - Sum_B (n-1)
def Sum_S (n:ℕ):ℚ := (n+1)*n/2
def Sf (n:ℕ):ℚ :=
(4*Sum_S (n) - (Sum_S (2*n) + Tail_B (2*n)))/3
def Tail_S (n:ℕ):ℚ := Sf n - Sum_S n
def Term_S (n:ℕ):ℚ := Sum_S n - Sum_S (n-1)
I used theorems in the lean file to link B = F(-1). F = dA/dx.
llllvvuu (Dec 24 2024 at 05:03):
I understand where 1/4 comes from. I'm saying the "recursive evaluation" of 1 - 2 + 3 - 4 + ... is not 1/4. The relation to 1/4 in Hardy is a different relation, which is what we've been trying to say. 1 + 2 + 3 + 4 ... does have an associated value -1/12 in a well-defined framework (analytic continuation), and within these frameworks also 1 - 2 + 3 - 4 + ... has the associated value 1/4. But that framework is not the standard framework, nor is your "recursive evaluation" framework well-defined like analytic continuation is.
Rob Fielding (Dec 24 2024 at 05:04):
and a really crucial point: if I used A with "..." on the end, I would have a problem. i would not be able to differentiate the tail because i didn't name it! that's what started this whole journey.
Note that the tail should ALSO be differentiated, but we never named it:
A = x^0 + x^1 + x^2 + ...
dA/dx = 0 + 1 x^0 + 2 x^1 + ...
vs
A = (x^0 + x^1 + x^2) + Tail_A(2)
dA/dx = d[Sum_A x n]/dx + d[Tail_A x n]/dx
And as soon as I had to do that, I discovered that I can use recursion to deal with infinite sums in a much better way.
llllvvuu (Dec 24 2024 at 05:26):
This is true:
1/(1 - x) = x^0 + x^1 + ... + x^n + Tail_A(n)
This is not (unless |x| < 1), but you are relying on it to be:
x^0 + x^1 + ... + x^n + ... = x^0 + x^1 + ... + x^n + Tail_A(n)
The issue is that both:
1/(1 - x)
x^0 + x^1 + ... + x^n + ...
are solutions to A = 1 + x * A. But they are different solutions; the latter is the solution
f(x) = NaN if |x| >= 1 else 1/(1 - x). This also satisfies the equation: NaN = 1 + x * NaN, but it is not the same. So if you define A = 1/(1 - x) you cannot also use it as x^0 + x^1 + ... + x^n + ... This is the same as the famous 0 = 1 proof which leverages the fact that sqrt has two solutions.
Now, f(x) = 1/(1-x) is the unique analytic continuation of x^0 + x^1 + ... + x^n + ... to the connected set ℂ \ {1}, since it agrees on the open disk |x| < 1. It is correct to say that the analytic continuation of x^0 + 2x^1 + 3x^2 + ... evaluated at -1 is 1/4, but x^0 + 2x^1 + 3x^2 + ... itself evaluated at -1 is NaN.
llllvvuu (Dec 24 2024 at 05:58):
By the way, you can write the equation for B without using differentiation, as it is just an arithmetico-geometric series:
B = xB + A = xB + 1/(1-x) => B = 1/(1-x)^2 (warning: multiple solutions as you can put NaN in places and get a different solution)
Rob Fielding (Dec 24 2024 at 06:07):
i tried to not introduce a B at all. i tried to use (dA/dx [-1]), but if i want sorry-free Lean, i just run into too many cases where i can't fill out the proofs in the theorems. unfold, simp, ring_nf is about all i can deal with. i wish there was a rational_algebra tactic, and i wish that when i did a deriv across +,-,*,/,log,^ ... that was just a symbolic translation.
While I am learning TLA+, I gave up on putting it into Lean. There needs to be an option to automatically verify simple things; where you just need to know that it IS true. ie: Sf n = Sum_S n + Tail_S n := by rational_algebra
. What I would really want out of a theorem prover is to mostly just declare something I think is true, and extract a sample path when I have to.
Rob Fielding (Dec 24 2024 at 06:12):
it may not be obvious that the whole point is to completely avoid all advanced math. computers deal with finite symbol manipulation. An O(2^n) solution is useless. An O(infty) one is even more so. I got here trying to completely eliminate infinite sums and products. I was trying to turn integration into a finite process. I can't use Riemann Zeta and Analytic Continuations, because I don't understand that. If you can't write the code, then you don't understand it.
llllvvuu (Dec 24 2024 at 06:14):
Rob Fielding said:
it may not be obvious that the whole point is to completely avoid all advanced math.
No, it's completely obvious. But it just has holes unfortunately. Such as the one pointed out above that A = 1 + x * A has multiple solutions because A can take NaN values at more places than just x = 1 and still be a solution, which means that you cannot assert A = 1 / (1 - x) to be the solution.
Rob Fielding (Dec 24 2024 at 06:18):
more like: when you assert A = 1/(1-x), x != 1 is a side-condition. You get a contradiction if you then assert x = 1 in the same environment. It's just that if you calculate F(1) = 1 + 2 + 3 + ... + n + Tail_F(n), you need to use something other than 1/((1-x)^2) to get the value. In some cases, there will be two options available, and you need to go down the path that does not have a contradiction in it.
Rob Fielding (Dec 24 2024 at 06:21):
ie: d[x^n] = n/x x^n dx
or d[x^n] = n x^(n-1) dx
... something I ran into when trying to write d[] for Geometric Algebra (non-commutative args). Just because there is a divide by 0 following a choice that was made, does not mean that you can't pick a different option to get around the problem.
Rob Fielding (Dec 24 2024 at 06:22):
in geometric algebra, that example, n might be a.... vector!
llllvvuu (Dec 24 2024 at 06:28):
does not mean that you can't pick a different option
that's exactly the issue.
Given A = 1 + x * A you have picked A = 1/(1-x) if x != 1 else undefined. Thus you conclude that x = -1 => dA/dx = 1/4.
I (and the rest of the mathematical world) have picked A = 1/(1-x) if -1 < x < 1 else undefined, which also satisfies A = 1 + x * A. Thus I conclude that x = -1 => dA/dx = undefined.
The issue is that your equation A = 1 + x * A does not pin down A.
We're not doing advanced mathematics because we want to. In math, and especially in the Lean community, we do prefer to KISS. But without the field of analysis we really have no way to dictate which solution to A = 1 + x * A is the correct one to describe 1 + x + x^2 + ...
Mario Carneiro (Dec 25 2024 at 01:43):
Rob Fielding said:
i tried to not introduce a B at all. i tried to use (dA/dx [-1]), but if i want sorry-free Lean, i just run into too many cases where i can't fill out the proofs in the theorems. ufold, simp, ring_nf is about all i can deal with. i wish there was a rational_algebra tactic, and i wish that when i did a deriv across +,-,*,/,log,^ ... that was just a symbolic translation.
I think this discussion would go a lot better if you attempted to do something like this, got stuck, and asked for help with it instead of trying to avoid it entirely and using a flawed proof methodology instead. It is actually possible to prove things about derivatives, logarithms etc. in lean, and it is very important to attempt it even though the methods are different from what you are used to, because they actually allow you to have lean "force your hand" rather than using equations in comments as a justification mechanism.
Mario Carneiro (Dec 25 2024 at 01:48):
Rob Fielding said:
it may not be obvious that the whole point is to completely avoid all advanced math. computers deal with finite symbol manipulation. An O(2^n) solution is useless. An O(infty) one is even less so.
I'm not sure whether you are completely appreciating this, but all of lean is finite symbol manipulation, including when talking about infinite sums and so on. Your whole proof is O(1), and this is still true if it's doing a proof about infinite sums or Ramanujan summation or analytic continuation, or rational algebra. There is no O(infty); a proof which never finishes proving is not a proof at all. Proofs are by necessity finite objects.
llllvvuu (Dec 25 2024 at 09:05):
BTW, the fallacy with A is known as affirming the consequent.
The full proposition is:
- If 1 + x + x^2 + … sums to some value A, then A satisfies A = 1 + x * A.
It is valid to take the contrapositive:
- If no A satisfies A = 1 + x * A, then 1 + x + x^2 + … does not sum to any value.
It is not valid to take the converse:
- If some A satisfies A = 1 + x * A, then 1 + x + x^2 + … sums to A.
1 and 2 can actually be stated and proved in Lean (docs#sum_add_tsum_nat_add, docs#Mathlib.Tactic.Contrapose.contrapose), and 3 can be stated and disproved (docs#not_summable_of_ratio_test_tendsto_gt_one).
The claim that 1 + 2 + 4 + 8 + … = -1 is based on the same fallacy (indeed it’s just x=2).
The following variant of 3 is true for Euler summation:
- If some A satisfies A = 1 + x * A, then 1 + x + x^2 + … (E) sums to A.
Though the corresponding variants of 1 and 2 are not true, since for x=1 we have 1 + 1 + 1 + … (E) sums to -1/2, though A = 1 + A has no solution in reals.
Last updated: May 02 2025 at 03:31 UTC