Zulip Chat Archive
Stream: mathlib4
Topic: Why is a sequence of polynomials noncomputable?
Darij Grinberg (Oct 26 2025 at 23:14):
Looking at https://leanprover-community.github.io/mathlib4_docs/Mathlib/RingTheory/Polynomial/Pochhammer.html#descPochhammer , I am surprised to see that descPochhammer is marked noncomputable. This is a sequence of polynomials defined by an explicit recursion. Does noncomputability have any advantages?
Eric Wieser (Oct 26 2025 at 23:15):
docs#Polynomial.X is noncomputable
Darij Grinberg (Oct 26 2025 at 23:15):
oh. why?
Eric Wieser (Oct 26 2025 at 23:15):
https://github.com/leanprover-community/mathlib4/wiki/Computation-models-for-polynomials-and-finitely-supported-functions is possibly relevant
Eric Wieser (Oct 26 2025 at 23:16):
But to answer your direct question: because it's implemented with the coefficient function @ite _ (i = 1) (Classical.decEq _ _ _) 0 1 instead of @ite _ (i = 1) (Nat.decidableEq _ _) 0 1
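The distinction can be illustrated with a minimal Lean sketch (hypothetical definitions, not mathlib's actual code): an if-then-else whose Decidable instance comes from the classical axioms has no executable content, so Lean forces the noncomputable marker, while the same function with the derived instance compiles and runs.

```lean
-- Hypothetical sketch, not mathlib's actual definitions.
-- `Classical.propDecidable` is an axiom-backed instance with no code,
-- so this definition must be marked `noncomputable`:
noncomputable def coeffClassical (i : Nat) : Nat :=
  @ite Nat (i = 1) (Classical.propDecidable _) 1 0

-- With the ordinary derived `Decidable` instance on `Nat` equality,
-- the same function compiles and `#eval` works:
def coeffDecidable (i : Nat) : Nat :=
  if i = 1 then 1 else 0

#eval coeffDecidable 1  -- 1
```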
Eric Wieser (Oct 26 2025 at 23:18):
#25273 takes a small step in a direction that will let us fix this, but right now too much code depends on this to change it
Darij Grinberg (Oct 26 2025 at 23:18):
interesting discussion... I think a quotient type is the only right way
Darij Grinberg (Oct 26 2025 at 23:18):
not necessarily by generators
Darij Grinberg (Oct 26 2025 at 23:19):
I'd use a direct limit of "n-th degree polynomials" as n -> \infty
Darij Grinberg (Oct 26 2025 at 23:19):
which is still a quotient, but mod a simpler relation ((p_0, p_1, ..., p_k) ~ (p_0, p_1, ..., p_k,0))
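A sketch of that relation in Lean (hypothetical names, nothing here is in mathlib): two coefficient lists are identified when they agree at every index, reading missing entries as zero, which is exactly the trailing-zeros identification.

```lean
import Mathlib

-- Hypothetical sketch of the trailing-zeros identification;
-- none of these names exist in mathlib.
-- Two coefficient lists are related when they agree at every index,
-- with missing entries read as 0:
def TrailingZeroRel (R : Type*) [Zero R] (p q : List R) : Prop :=
  ∀ i, p.getD i 0 = q.getD i 0

-- The polynomial type as the quotient of coefficient lists:
def ListPoly (R : Type*) [Zero R] : Type _ :=
  Quot (TrailingZeroRel R)
```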
Darij Grinberg (Oct 26 2025 at 23:20):
anyway, requiring decidability of R for support is unavoidable and completely expected. deciding it for X is a heavy code smell if you ask me
Eric Wieser (Oct 26 2025 at 23:20):
Note also that we're trying to share most of the construction between Polynomial and MvPolynomial
Darij Grinberg (Oct 26 2025 at 23:21):
I sympathize with this, but in practice it turns out to be too burdensome for most algebra texts
Darij Grinberg (Oct 26 2025 at 23:21):
The easiest way to define multivariate polys is as a monoid algebra.
Eric Wieser (Oct 26 2025 at 23:22):
Well, then your question becomes "Why is the embedding from G to R[G] noncomputable", which is really the same thing
Darij Grinberg (Oct 26 2025 at 23:23):
it is only an embedding if R is nontrivial, so this isn't that surprising
Darij Grinberg (Oct 26 2025 at 23:23):
or do you mean the map itself? the map itself should be computable
Eric Wieser (Oct 26 2025 at 23:23):
I mean the map, docs#MonoidAlgebra.of
Darij Grinberg (Oct 26 2025 at 23:25):
but that one's computable, right?
Darij Grinberg (Oct 26 2025 at 23:25):
or, well, should be
Aaron Liu (Oct 26 2025 at 23:27):
It also depends on docs#Finsupp.single which uses the classical decidability
Darij Grinberg (Oct 26 2025 at 23:27):
Yeah, this looks bad. Should require G to be decidableEq
Aaron Liu (Oct 26 2025 at 23:29):
We also need to test whether the given value is zero so we know whether to include it in the support or not
Darij Grinberg (Oct 26 2025 at 23:30):
support (and similar functions, such as the degree of a polynomial) should require R to be decidableEq too
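For illustration, a minimal Lean sketch of the point being made here (hypothetical names, not mathlib's Finsupp API): the function underlying single needs decidable equality on the index type, while computing its support additionally needs a decidable zero test on the coefficients.

```lean
import Mathlib

-- Hypothetical sketch, not mathlib's `Finsupp.single`.
-- The underlying function needs `DecidableEq G` for the index test:
def singleFun {G R : Type*} [DecidableEq G] [Zero R] (a : G) (b : R) :
    G → R :=
  fun x => if x = a then b else 0

-- Computing the support additionally needs to decide `b = 0` in `R`:
def singleSupport {G R : Type*} [DecidableEq R] [Zero R] (a : G) (b : R) :
    List G :=
  if b = 0 then [] else [a]
```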
Darij Grinberg (Oct 26 2025 at 23:30):
but this doesn't have to infect the rest of the functionality
Darij Grinberg (Oct 26 2025 at 23:30):
constructivists talk of "a polynomial of degree \le n" without having a notion of the (exact) degree of a polynomial
Aaron Liu (Oct 26 2025 at 23:36):
there are many classically equivalent but constructively inequivalent ways to implement polynomials
Aaron Liu (Oct 26 2025 at 23:37):
they don't all become equivalent even if everything has decidable equality
Ruben Van de Velde (Oct 26 2025 at 23:37):
I'd consider reading any of the last dozen threads about computability and polynomials before continuing the thread
Bryan Gin-ge Chen (Oct 27 2025 at 00:03):
Since this comes up fairly frequently, maybe we should add a brief explanation or a link to the wiki page to one of the docstrings or the module doc.
Kevin Buzzard (Oct 27 2025 at 05:16):
@Darij Grinberg who cares what is computable? Why do you care? It's not stopping us from proving theorems. The point of mathlib isn't to compute. We've optimised for proving and this strategy has been very successful. Computing is a really hard problem and the idea of solving it by writing tactics instead of making things computable in the sense of logic has worked really well.
Darij Grinberg (Oct 27 2025 at 05:23):
@Kevin Buzzard It makes me worried about #eval for checking things, and about simplification as a tactic. Though I don't know to what extent the latter uses computability
Kevin Buzzard (Oct 27 2025 at 05:33):
I see. Yes I never use #eval and have no idea how it is affected by the issues being talked about here. Are ring and norm_num not enough for you? My impression is that people who do care about actually computing numbers have other techniques, and when they don't work then they extend the techniques until they do, rather than fretting about computability. I think the underlying argument is that making something abstractly computable is very different to making it performant, so problems involving computation should be done with bespoke tactics implementing good algorithms rather than ensuring that underlying representations pass some internal computability check.
Yakov Pechersky (Oct 27 2025 at 10:54):
Perhaps it is important to note that "computable" here necessarily means kernel computable, and #eval is a cherry-on-top VM calculation. But we can also get VM calculation via #simp or #norm_num, which are able to introspect on expressions and take steps that are more computationally efficient
Darij Grinberg (Oct 27 2025 at 15:04):
I see. So computability is viewed as some kind of legacy feature.
Well, I can live with that -- I just thought it was a pretty heavy code smell. I use #eval to convince myself with numerical examples that definitions are doing what I mean them to do. Probably worthless in analysis, but very convenient in anything algebraic. I don't care too much about performance when my examples are like "4 choose 2 is 6".
Darij Grinberg (Oct 27 2025 at 15:04):
And I don't know enough about Lean to tell the difference between computable and kernel computable.
Darij Grinberg (Oct 27 2025 at 15:04):
I guess the former includes simplification tactics?
Kenny Lau (Oct 27 2025 at 15:12):
#eval is not the only tool available, and it seems undesirable to base your whole philosophy on one feature
Violeta Hernández (Oct 27 2025 at 15:18):
I'm increasingly on the side of computability as a distraction. We tried for a long while to make combinatorial games computable, so we could automatically prove inequalities. Eventually I realized I could write a thin wrapper around simp to do this, and all was good.
Alfredo Moreira-Rosa (Oct 27 2025 at 15:28):
I read in the docs that nothing would prevent a Finsupp implementation that is computable.
Interestingly, DFinsupp has been made computable. So that's what I used for lean-units, even though I didn't need the dependent aspect of DFinsupp.
Violeta Hernández (Oct 27 2025 at 16:14):
If we truly must have a computable Finsupp, surely merging DFinsupp and Finsupp is the path of least resistance?
Violeta Hernández (Oct 27 2025 at 16:18):
I recall hearing there were some long-term plans for this but maybe I'm misremembering
Yaël Dillies (Oct 27 2025 at 16:49):
Indeed, merging them is the plan, but this is hard to do so long as Finsupp is used in places where it shouldn't, eg MonoidAlgebra
Hagb (Junyu Guo) (Oct 28 2025 at 06:54):
For your information, here are several implementations of kernel- and/or VM-computable polynomials:
and .
Hagb (Junyu Guo) (Oct 28 2025 at 07:03):
Finsupp was refactored to be classical in 2019. If I read the discussions from that time correctly, one of the reasons to make it classical was that it got stuck when synthesizing instances? I don't know whether that was because of the exponential time "in the presence of diamonds and divergence in the presence of cycles" in Lean 3, which should have been solved in Lean 4 (https://arxiv.org/abs/2001.04301).
Antoine Chambert-Loir (Nov 02 2025 at 11:25):
@Kevin Buzzard if the point of mathlib is to not-compute, how much harder will it be to implement complicated computational proofs?
Aaron Liu (Nov 02 2025 at 11:28):
what you would have to do is to make a computationally efficient implementation of polynomials, and then prove it's isomorphic to Polynomial
Aaron Liu (Nov 02 2025 at 11:28):
then you can transfer all your computations using the isomorphism
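A hedged Lean sketch of this transfer idea (FastPoly and e are hypothetical stand-ins, nothing here is in mathlib): once a ring isomorphism to Polynomial R is proved, any identity verified by computation in the fast representation moves across it.

```lean
import Mathlib

-- Hypothetical setup: `FastPoly` stands in for an efficient computable
-- representation, and `e` for a proven ring isomorphism to `Polynomial R`.
variable {FastPoly : Type*} [CommRing FastPoly]
variable {R : Type*} [CommRing R] (e : FastPoly ≃+* Polynomial R)

-- An identity checked by computation on the fast side transfers along `e`:
example (p q : FastPoly) (h : p * q = q * p) :
    e p * e q = e q * e p := by
  rw [← map_mul, ← map_mul, h]
```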
Kevin Buzzard (Nov 02 2025 at 11:31):
I think that the reason mathlib has caused such a stir is that in contrast to many other prover communities who have by and large put computability first and proofs second, we have put proofs first and computability second and this has enabled us to go very far in new directions.
I think that ultimately mathlib will have to figure out how to compute. But right now you can't expect it to compute, because people have not put the effort in to make the library computable. Computing is not as easy as it sounds because different computational implementations of objects like polynomials will have very different scaling properties and will be good at different things. Big decisions will have to be made and some people will inevitably be unhappy (there will be people saying "polynomials are now computable but Lean still can't do (x^1000000+1)^2 even though it's trivial" or whatever other trivial thing turns out to be hard because it's nontrivial with the implementation which ends up being chosen).
There is a huge amount to be done, but until it is done, I think a fair answer to "why can't Lean compute" is "don't expect it to compute".
Alfredo Moreira-Rosa (Nov 02 2025 at 13:42):
Aaron Liu said:
then you can transfer all your computations using the isomorphism
I think this is a good idea. Should these isomorphisms live outside Mathlib, or should there be alternative (computable) bridges directly integrated into mathlib?
One example: I made an alternative to Nat.nth Nat.Prime that is computable and proved that they are the same. It has value for my project, but maybe also to others, or maybe not. Extracting it to a standalone lib would be bad; keeping it in my project means nobody will use it. So there's maybe a missing middle ground.
Maybe for people interested in computable versions, a "Mathlib Computable" extension library could come into being.
Just an idea.
Eric Wieser (Nov 02 2025 at 14:23):
You could contribute it to mathlib in the form of a norm_num extension
Kevin Buzzard (Nov 02 2025 at 16:23):
Indeed, right now I think that a far more profitable and useful approach to "computing" in Lean is not to worry that polynomials etc are noncomputable, forget about #eval, and instead write tools which compute with them anyway (like norm_num extensions).
Kevin Buzzard (Nov 02 2025 at 16:24):
For example at LLL on Friday Bhavik was showing us how to compute logs of things like to 30 decimal places, despite the fact that Real is noncomputable, Real.log is noncomputable and Real.sqrt is noncomputable.
Kenny Lau (Nov 02 2025 at 16:25):
are there plans to upstream that to mathlib?
Bhavik Mehta (Nov 02 2025 at 19:27):
Not immediately, some of the underlying maths has been upstreamed and some more of it will be, but it's not clear to me that this functionality is the kind of thing mathlib should contain.
Jireh Loreaux (Nov 02 2025 at 20:35):
Why? Because the tools are too specialized?
Kenny Lau (Nov 02 2025 at 20:38):
I would be a bit hesitant to believe that there aren't some results in more applied fields that use the fact that log(2) is between 0.301 and 0.302...
Artie Khovanov (Nov 02 2025 at 21:07):
What is the performance of #eval like? That's another reason to use tactics potentially.
Bhavik Mehta (Nov 02 2025 at 21:17):
Kenny Lau said:
I would be a bit hesitant to believe that there isn't some results in more applied fields that use the fact that log(2) is between 0.301 and 0.302...
I added this much (for the natural log) to mathlib a few years ago already!
Bhavik Mehta (Nov 02 2025 at 21:20):
Jireh Loreaux said:
Why? Because the tools are too specialized?
I'm happy for my mind to be changed here, but computing precise estimates for logs of arbitrary constants seems too specialised and ad-hoc to me, especially since a more general interval arithmetic setup would be strictly better. My code is already public, and I'm happy to publicise it or make it easier to import in other projects if there's interest though.
Darij Grinberg (Nov 02 2025 at 21:30):
FWIW, I'm interested in #eval for other reasons than some of you seem to be. I want it as a quick sanity check for algebraic objects, to ensure that they are what I think they are and catch misunderstandings. I'm not saying proofs should rely on the most straightforward evaluation of something. I would certainly not use it to prove bounds for real numbers.
Kenny Lau (Nov 02 2025 at 21:36):
Darij Grinberg said:
I want it as a quick sanity check for algebraic objects
the tactics are designed to be able to do that as well, such as norm_num
Kenny Lau (Nov 02 2025 at 21:37):
import Mathlib
def Real.fib : ℕ → ℝ
| 0 => 13
| 1 => 3
| n+2 => fib n + fib (n + 1)
example : Real.fib 10 = sorry := by simp [Real.fib]; norm_num
Darij Grinberg (Nov 02 2025 at 21:58):
ah!
Darij Grinberg (Nov 02 2025 at 21:59):
sorry is more flexible than I thought
Shubakur Mitra (Nov 13 2025 at 17:06):
Kevin Buzzard said:
For example at LLL on Friday Bhavik was showing us how to compute logs of things like to 30 decimal places, despite the fact that Real is noncomputable, Real.log is noncomputable and Real.sqrt is noncomputable.
Are this and other lectures from this year's LLL available somewhere? The last update on the xena project yt channel is two years ago.
Kevin Buzzard (Nov 13 2025 at 18:55):
No, sorry, we're doing LLL in person going forward, it's the last Friday of the month somewhere in London and we don't record.
Timothy Chow (Nov 13 2025 at 21:33):
I'm late to this discussion, but even if you only care about "proving" and not about "computing" per se, computability can make some things easier to prove because then you can invoke a powerful tactic to finish off a proof for you. The first example below works but the second doesn't, and I think this is traceable to polynomials being noncomputable.
example (x : ZMod 2) : x + x = 0 := by
decide +revert
example (x : GaloisField 2 1) : x + x = 0 := by
decide +revert
This doesn't necessarily mean that the extra work needed to make things computable is always worth it, but it does illustrate that "computing vs. proving" is not as stark a dichotomy as it might seem.
Ruben Van de Velde (Nov 13 2025 at 22:06):
But then you run into the issue that decide is an algorithm, but not necessarily a good algorithm
Kevin Buzzard (Nov 14 2025 at 09:30):
Yeah I always feel conflicted here: part of me thinks that if we make polynomials computable-in-the-sense-of-Lean then we will just replace the people complaining that polynomials aren't computable with people complaining that our algorithms perform very poorly. However on the other hand maybe most uses of computability will just be people working out small examples rather than people trying to use Lean as a computer algebra system.
Wrenna Robson (Nov 14 2025 at 10:20):
Darij Grinberg said:
I'd use a direct limit of "n-th degree polynomials" as n -> \infty
This would be my preferred approach FWIW.
Wrenna Robson (Nov 14 2025 at 10:21):
(It isn't what we will end up using for polynomials I think but I want to use it in a similar place.)
Artie Khovanov (Nov 14 2025 at 14:45):
Yeah I agree with @Kevin Buzzard - isn't there a polynomial / algebra tactic in the works?
Last updated: Dec 20 2025 at 21:32 UTC