Zulip Chat Archive
Stream: mathlib4
Topic: Why is `Polynomial` noncomputable?
Jihoon Hyun (Feb 19 2024 at 15:36):
In Mathlib.Data.Polynomial.Basic, the structure Polynomial is defined in a noncomputable section. Is there any reason for not applying noncomputable only to the definitions that actually need it? None of the other building blocks of Polynomial seem to be noncomputable.
Anne Baanen (Feb 19 2024 at 15:39):
The reason is that we decided not to care about computability of polynomials when they were defined. Now there is a lot more reason to care about computability but it's going to be a lot of work to do so nicely. See e.g. https://leanprover.zulipchat.com/#narrow/stream/113489-new-members/topic/.E2.9C.94.20Polynomial.20uncomputable
Anne Baanen (Feb 19 2024 at 15:39):
Here is a wiki page with even more information: https://github.com/leanprover-community/mathlib4/wiki/Computation-models-for-polynomials-and-finitely-supported-functions
Eric Wieser (Feb 19 2024 at 16:19):
Please add this Zulip thread to the list of Zulip topics on that page :)
Eric Wieser (Feb 19 2024 at 16:19):
(having a record of this being asked for is helpful in justifying the change in future, if it is ever made)
Jihoon Hyun (Feb 19 2024 at 17:57):
While staring at the wiki page, I couldn't figure out how de-classified Finsupp, DFinsupp, and DFinsupp' achieve f i (aka p.coeff i) without DecidableEq ι. How can it compute f j if f = single i 1 and i = j, when we don't have a DecidableEq ι instance?
Jihoon Hyun (Feb 19 2024 at 18:08):
I'd also like to share my idea of a computable Finsupp. Let ι be a type from which we can choose one element (when nonempty) without classical choice, for example zero. Also let Finsupp ι R contain f, a finite set of functions from ι to ι, which is likely to be computable. This will require the same restrictions as the 'quotient of inductive type' method for the operations listed in the table, while not being inductive.
Eric Wieser (Feb 19 2024 at 18:22):
Jihoon Hyun said:
While staring at the wiki page, I couldn't figure out how de-classified Finsupp, DFinsupp, and DFinsupp' achieve f i (aka p.coeff i) without DecidableEq ι. How can it compute f j if f = single i 1 and i = j, when we don't have a DecidableEq ι instance?
Because the DecidableEq obligation appears at f = single i 1, not at f i
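For concreteness, here is a minimal sketch of that point (hypothetical definitions, not Mathlib's actual Finsupp): evaluation is bare function application and needs no decidability, while constructing single is exactly where the if j = i test, and hence DecidableEq ι, comes in.

import Mathlib

structure MyFinsupp (ι R : Type*) [Zero R] where
  toFun : ι → R
  -- mere existence of a finite support: a Prop, so no decidability needed
  support' : ∃ s : List ι, ∀ i, toFun i ≠ 0 → i ∈ s

-- evaluation is plain function application
def MyFinsupp.eval {ι R : Type*} [Zero R] (f : MyFinsupp ι R) (i : ι) : R :=
  f.toFun i

-- the equality test happens when building single, hence DecidableEq ι here
def MyFinsupp.single {ι R : Type*} [Zero R] [DecidableEq ι] (i : ι) (r : R) :
    MyFinsupp ι R where
  toFun j := if j = i then r else 0
  support' := ⟨[i], fun j h => by
    by_cases hj : j = i
    · simp [hj]
    · simp [hj] at h⟩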
Eric Wieser (Feb 19 2024 at 18:23):
I don't really understand your suggestion (I think it has at least one typo, because you don't mention R in your definition)
Jihoon Hyun (Feb 19 2024 at 18:43):
Eric Wieser said:
Jihoon Hyun said:
While staring at the wiki page, I couldn't figure out how de-classified Finsupp, DFinsupp, and DFinsupp' achieve f i (aka p.coeff i) without DecidableEq ι. How can it compute f j if f = single i 1 and i = j, when we don't have a DecidableEq ι instance?
Because the DecidableEq obligation appears at f = single i 1, not at f i
Thank you for the explanation! Then it seems like deterministic equality checking of the index is not an issue when working with f i, right? Why is this the case?
Jihoon Hyun (Feb 19 2024 at 18:57):
Eric Wieser said:
I don't really understand your suggestion (I think it has at least one typo, because you don't mention R in your definition)
I meant that single i r and + in the table don't require DecidableEq for either type, while f i requires DecidableEq ι and support needs both DecidableEq ι and DecidableEq R, just like the 'quotient of inductive type'.
I'll explain more about the method. Each element F of my Finsupp ι R will hold functions from ι to ι, each f in F denoting an index f 0, along with a value r. single i r is constructible by choosing a singleton set F which contains a function f mapping 0 to i, along with r. + can be performed lazily without comparisons on ι. If we don't need to check equality deterministically, as stated above, then it seems like the DecidableEq ι can also be cancelled out, but for the support operation we still need both DecidableEq constraints.
Junyan Xu (Feb 19 2024 at 19:13):
It's hard to understand your idea in words. I wrote down some possible implementations of Finsupp here (in Lean 3). Can you take a look and explain how your approach differs, preferably with some actual code?
Junyan Xu (Feb 19 2024 at 19:18):
If you want to compute with polynomials, there is the polynomial tactic as well as a new approach of mine that works for polynomials over types with decidable equality and computable arithmetic operations, like ℕ, ℤ, and ℚ.
Jihoon Hyun (Feb 21 2024 at 10:30):
It seems like my idea was not that good after all. I tried creating an equivalence relation on ι → ι, then noticed that there is a bijection between the resulting equivalence classes and ι itself, which implies that what I had in mind was actually similar to DFinsupp.
Kenny Lau (May 26 2025 at 20:43):
i think we should refactor polynomials to make them more computable
Aaron Liu (May 26 2025 at 20:46):
I still don't see how making polynomials computable would be a huge issue
Kevin Buzzard (May 26 2025 at 20:46):
Well I don't think that's going to happen any time soon because the moment you do this then 1000 things will break and it will be very hard to fix them, and then a bunch of people will complain that you implemented the wrong kind of computability for their application (because there are several different ways that you can make them computable)
Kevin Buzzard (May 26 2025 at 20:51):
Kenny is old enough to remember why we made polynomials noncomputable, to fix slowdowns and constructivity issues. I can't imagine ever going back to be honest, whatever people want. It would be a gigantic mountain and at the end of it mathlib would compile more slowly and people would say "what exactly have we gained here and is it worth paying the price?" and my bet is that the answer will be "no".
Andrew Yang (May 26 2025 at 20:51):
I think the solution is to have some kind of axiom-free version of csimp or implemented_by where you map everything under an iso that preserves certain operations to a computable model, do kernel reduction on that model, and then map back.
Kevin Buzzard (May 26 2025 at 20:52):
But that's just my guess as to what would happen, obviously we'll only know for sure if someone does it. First you have to bikeshed about whether your representation has overflowing memory or not.
Kevin Buzzard (May 26 2025 at 20:54):
Yes Andrew's idea is what we should really do, we don't want to go back to Finsupp.
Kenny Lau (May 26 2025 at 20:54):
yeah, how does Nat outsource addition externally?
Andrew Yang (May 26 2025 at 20:55):
Lean4 now trusts GMP :)
Kenny Lau (May 26 2025 at 20:55):
I suppose a good enough normal form algorithm for polynomial rings (last time I checked this was missing) is equivalent to computability.
Aaron Liu (May 26 2025 at 20:58):
Kenny Lau said:
I suppose a good enough normal form algorithm for polynomial rings (last time I checked this was missing) is equivalent to computability.
This doesn't help with #eval-ing
Andrew Yang (May 26 2025 at 20:59):
There is always ring, which in some sense is a normal form algorithm for multivariate polynomials. But I really wish it supported grouping things with respect to a chosen variable (e.g. instead of expanding everything out).
Kenny Lau (May 26 2025 at 21:00):
Aaron Liu said:
Kenny Lau said:
I suppose a good enough normal form algorithm for polynomial rings (last time I checked this was missing) is equivalent to computability.
This doesn't help with #eval-ing
yes, but you don't use #eval in proofs.
Andrew Yang (May 26 2025 at 21:01):
Aaron Liu said:
This doesn't help with #eval-ing
Continuing with my proposition, you could then create an #eval with model command that applies the iso and the relevant lemmas showing which computable function each operation in the expression corresponds to under the iso, then runs #eval, and then outputs iso.symm <the eval result>.
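As a toy illustration of the shape this could take (all names here are hypothetical, and the transport lemma is only stated, not proved): a dense coefficient-list model for ℤ[X], a computable operation on the model, and the interpretation map that such an #eval with model command would push results back through.

import Mathlib

-- a dense computable model for ℤ[X]: coefficient list, constant term first
abbrev DensePoly : Type := List ℤ

-- computable addition on the model
def DensePoly.add : DensePoly → DensePoly → DensePoly
  | [], q => q
  | p, [] => p
  | a :: p, b :: q => (a + b) :: DensePoly.add p q

-- the (noncomputable) interpretation back into Mathlib's polynomials
noncomputable def DensePoly.toPoly : DensePoly → Polynomial ℤ
  | [] => 0
  | a :: p => Polynomial.C a + Polynomial.X * DensePoly.toPoly p

-- the transport lemma such a command would apply, stated but not proved:
-- theorem toPoly_add (p q : DensePoly) :
--     DensePoly.toPoly (DensePoly.add p q) =
--       DensePoly.toPoly p + DensePoly.toPoly q := ...

#eval DensePoly.add [1, 2] [3, 0, 5]  -- [4, 2, 5]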
Kenny Lau (May 26 2025 at 21:02):
yes please have ComputablePolynomial
Andrew Yang (May 26 2025 at 21:03):
and then you can also have a different model for different kinds of operations (e.g. for polys and matrices we probably at least want a dense representation and a sparse one)
Aaron Liu (May 26 2025 at 21:03):
Kenny Lau said:
yes, but you don't use #eval in proofs.
Lean is a programming language too
Aaron Liu (May 26 2025 at 21:03):
Andrew Yang said:
Aaron Liu said:
This doesn't help with #eval-ing
Continuing with my proposition, you could then create an #eval with model command that applies the iso and the relevant lemmas showing which computable function each operation in the expression corresponds to under the iso, then runs #eval, and then outputs iso.symm <the eval result>.
This is starting to sound like something I would want
Kevin Buzzard (May 26 2025 at 21:09):
Yeah this is the way to fix Polynomial, I don't think we should touch mathlib but if we want Lean to be a CAS as well then we can build ComputablePolynomial and if we want extreme awesomeness then we somehow get Andrew's ideas implemented to join the two concepts
Andrew Yang (May 26 2025 at 21:15):
I personally hope Lean could be a CAS, or at least hope there will eventually be a day where some CAS could output a proof term along with the results. (Most CAS already output some kind of proof certificate but the "proof certificates" are mostly more numbers that one needs to write another program to check and one could only hope there are no bugs in either of the programs)
But emphasis on "hope"; I am not actually invested enough to work on it myself :P
Kenny Lau (May 26 2025 at 21:22):
if only mathematica made its algorithm public
Kenny Lau (May 26 2025 at 21:24):
Kevin Buzzard said:
Kenny is old enough to remember why we made polynomials noncomputable, to fix slowdowns and constructivity issues.
That was Lean3.
Anatole Dedecker (May 26 2025 at 21:48):
If we want to be fast, aren't there multiple pertinent ways to represent polynomials depending on the situation? In the same way that computer scientists have a notion of "sparse matrix" where they don't keep track of all the zeros? This would make for a(nother) good argument to not choose any specific representation for docs#Polynomial but have multiple implementations sitting on the side.
Eric Wieser (May 26 2025 at 22:13):
I think a lot of time we don't care about being fast, and we just want to evaluate something very small by any means
Bhavik Mehta (May 27 2025 at 12:43):
There's earlier discussion on computable polynomial and finsupp here as well: https://github.com/leanprover-community/mathlib4/wiki/Computation-models-for-polynomials-and-finitely-supported-functions
Kim Morrison (May 27 2025 at 22:02):
I think that issue goes down the wrong track. Multiset and Finset should be avoided in the first place: sparse polynomials (operations, not theory) should be a pre-Mathlib library built on top of HashMap and TreeMap.
Now that we have ExtHashMap, the groundwork is there. We need a wrapper ExtHashMapD which forbids values equal to zero (or default, depending on the design), which is then equivalent to finitely supported functions but really fast.
Then we need an analogue of the mergeWith function that lets you implement (pointwise) multiplication. (The current function just copies values that appear in only one map, so can only be used for addition.)
Then we need to do all of the above with TreeMap (including the extensional version, which is missing), because polynomials usually have a monomial order and using this speeds up operations.
None of this should touch Mathlib.
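A rough sketch of the wrapper's shape (ExtHashMapD is hypothetical, as is the exact invariant API; the instance assumptions are a guess at what Std.ExtHashMap needs):

import Std.Data.ExtHashMap
import Mathlib

-- an extensional hash map that never stores 0, modelling a finitely
-- supported function ι → R
structure ExtHashMapD (ι R : Type) [BEq ι] [Hashable ι]
    [EquivBEq ι] [LawfulHashable ι] [Zero R] where
  toMap : Std.ExtHashMap ι R
  zero_not_stored : ∀ i : ι, toMap[i]? ≠ some 0

-- lookup defaults to 0 off the support
def ExtHashMapD.get {ι R : Type} [BEq ι] [Hashable ι]
    [EquivBEq ι] [LawfulHashable ι] [Zero R]
    (f : ExtHashMapD ι R) (i : ι) : R :=
  f.toMap[i]?.getD 0

-- addition would be a mergeWith-style combinator plus filtering out the
-- zeros created by cancellation; pointwise multiplication needs the
-- intersection-style merge noted as missing above.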
Eric Wieser (May 27 2025 at 22:26):
I added a new section to that wiki page that shows how to make kernel- and interpreter-computable polynomials today in two lines
Eric Wieser (May 27 2025 at 22:27):
I don't see how a HashMap/TreeMap implementation will help with things like
example :
    ∀ p ∈ ({3*X^2, 2*X^3, 3*X + 1} : Finset (Polynomial Int)), p ≠ 0 := by
  decide
Andrew Yang (May 27 2025 at 22:43):
The point of doing this in Lean is to get provable algorithms and proof terms as certificates IMO, which HashMap cannot do and TreeMap might be able to, but it needs significant changes to make it kernel-reducible.
Eric Wieser (May 27 2025 at 22:44):
I just added another example to that page for multivariate polynomials
example :
    ∀ p ∈ ({3*X 0^2, 2*X 1^3, 3*X 2 + 1} : Finset (CMvPolynomial (Fin 3) Int)), p ≠ 0 := by
  decide
Kevin Buzzard (May 27 2025 at 22:47):
Another issue with "computable polynomials" is that there seem to be several genuinely different interpretations of the details of what the phrase actually means
Kenny Lau (May 27 2025 at 22:47):
it seems to not matter if we only use it meta-ly
Eric Wieser (May 27 2025 at 22:48):
... which is one meaning, but it's ergonomically not the same as "I want to use #eval"-ly, and certainly not the same as "kernel-y"
Eric Wieser (May 27 2025 at 22:50):
I actually suspect you could replace Polynomial with CPolynomial in mathlib without too much work, as long as you were willing to incrementally make things computable rather than all at once
Eric Wieser (May 27 2025 at 22:51):
That is, replace the ofFinsupp :: toFinsupp constructor with ofDirectSum :: toDirectSum, adjust the ring structure, then define ofFinsupp and toFinsupp manually and prove they are still well-behaved
Eric Wieser (May 27 2025 at 22:51):
Everything downstream should either use only the ring structure, or go via toFinsupp, so there should be minimal breakage except where defeqs are exploited
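Concretely, the first step might look something like this (a sketch only; CPolynomial is hypothetical and all the instance and API work is elided):

import Mathlib

-- same shape as today's Polynomial, but wrapping the computable
-- DFinsupp-based representation instead of Finsupp
structure CPolynomial (R : Type*) [Semiring R] where
  ofDirectSum ::
  toDirectSum : Π₀ _ : ℕ, R

-- the old constructor survives as an ordinary definition via the
-- Finsupp/DFinsupp correspondence, to be proven well-behaved afterwards
noncomputable def CPolynomial.ofFinsupp {R : Type*} [Semiring R]
    (f : ℕ →₀ R) : CPolynomial R :=
  ⟨f.toDFinsupp⟩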
Notification Bot (May 27 2025 at 23:09):
45 messages were moved here from #Is there code for X? > Z[(1+sqrt(1+4k))/2] by Eric Wieser.
Yaël Dillies (May 28 2025 at 06:22):
Filip Jonsson Kling from Stockholm Uni was proving results about the resultant and the corresponding resultant matrix. I made him define the resultant matrix of A B : R[X] as a matrix Matrix (Fin <| A.natDegree + B.natDegree) (Fin <| A.natDegree + B.natDegree) R. Then he tried to compute the resultant of X + a and X + b and instantly got blocked by the fact that Matrix (Fin <| (X + C a).natDegree + (X + C b).natDegree) (Fin <| (X + C a).natDegree + (X + C b).natDegree) R doesn't reduce to Matrix (Fin 2) (Fin 2) R to use docs#Matrix.det_fin_two
Yaël Dillies (May 28 2025 at 06:23):
I am aware of @Kenny Lau's kckennylau/resultant branch, which sidesteps the issue by allowing padding the resultant matrix.
Kenny Lau (May 28 2025 at 06:24):
@Anne Baanen and I independently came up with the same idea of having the size of the matrix be free
Kenny Lau (May 28 2025 at 06:26):
but for natDegree <| X + C a to reduce to 1, it would require us to do case matching on Nontrivial R... which now that I think about it, can be solved by having a DecidableEq on R... which on further thought seems to be undesirable
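For reference, the dependence Kenny describes is visible in the existing lemma natDegree_X_add_C, which assumes Nontrivial R; over the trivial ring X + C a = 0 and natDegree 0 = 0, so the hypothesis is essential:

import Mathlib

open Polynomial in
example {R : Type*} [Semiring R] [Nontrivial R] (a : R) :
    (X + C a).natDegree = 1 := by
  simp [natDegree_X_add_C]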
Kenny Lau (May 28 2025 at 06:27):
i suppose this is one of the "problems" of kernel reduction
Kenny Lau (May 28 2025 at 06:27):
we can't just have a reduction rule that says if Nontrivial R then reduce etc.
Kenny Lau (May 28 2025 at 06:27):
having conditional reduction rules might be nice
Jz Pan (May 28 2025 at 07:10):
Yaël Dillies said:
Filip Jonsson Kling from Stockholm Uni was proving results about the resultant and the corresponding resultant matrix. I made him define the resultant matrix of A B : R[X] as a matrix Matrix (Fin <| A.natDegree + B.natDegree) (Fin <| A.natDegree + B.natDegree) R. Then he tried to compute the resultant of X + a and X + b and instantly got blocked by the fact that Matrix (Fin <| (X + C a).natDegree + (X + C b).natDegree) (Fin <| (X + C a).natDegree + (X + C b).natDegree) R doesn't reduce to Matrix (Fin 2) (Fin 2) R to use docs#Matrix.det_fin_two
Oh defeq problem... What about resultant A B m hm : Matrix (Fin m) (Fin m) R where hm : m = A.natDegree + B.natDegree?
Yaël Dillies (May 28 2025 at 07:15):
Yes, and then you notice that actually assuming hm : A.natDegree + B.natDegree ≤ m instead doesn't hurt, and then you think for a second and you fully drop the assumption
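A signature-level sketch of that progression (hypothetical names, bodies omitted with sorry):

import Mathlib

open Polynomial

variable {R : Type*} [CommRing R]

-- step 1: carry the size as data plus an equality proof
noncomputable def resultantMatrix₁ (A B : R[X]) (m : ℕ)
    (hm : m = A.natDegree + B.natDegree) : Matrix (Fin m) (Fin m) R :=
  sorry

-- step 2: weaken the equality to ≤, padding the extra rows and columns
noncomputable def resultantMatrix₂ (A B : R[X]) (m : ℕ)
    (hm : A.natDegree + B.natDegree ≤ m) : Matrix (Fin m) (Fin m) R :=
  sorry

-- step 3: drop the hypothesis entirely; the size is a free parameter
noncomputable def resultantMatrix₃ (A B : R[X]) (m : ℕ) :
    Matrix (Fin m) (Fin m) R :=
  sorry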
Kenny Lau (May 28 2025 at 07:16):
(the resultant actually changes when you enlarge the degrees, and it might be useful to actually consider those cases as well)
Andrew Yang (May 28 2025 at 07:19):
Similarly we also have docs#Polynomial.reflect and docs#Polynomial.reverse so I think this is the right (and battle-tested) way to go.
Matthew Ballard (May 28 2025 at 13:54):
A polynomial k-algebra on a set is the result of applying the left adjoint to the forgetful functor from commutative k-algebras to sets.
To me, this says we need to break up Polynomial into an implementation layer and the desired invariants. Something like PrePolynomial and IsPolynomial. Ideally a downstream user could plug in any PrePolynomial implementation that is proven to satisfy IsPolynomial and get access to the vast majority of the API.
import Mathlib.Algebra.Algebra.Hom

structure PrePolynomial (X : Type*) where
  carrier : Type*
  mon : X → carrier

class IsPolynomialAlg (R X : Type*) [CommSemiring R] (P : PrePolynomial X)
    [Semiring P.carrier] [Algebra R P.carrier] where
  lift (Y : Type*) [Semiring Y] [Algebra R Y] (f : X → Y) :
    ∃ φ : P.carrier →ₐ[R] Y, φ ∘ P.mon = f
  unique (Y : Type*) [Semiring Y] [Algebra R Y] (φ₁ φ₂ : P.carrier →ₐ[R] Y) :
    φ₁ ∘ P.mon = φ₂ ∘ P.mon → φ₁ = φ₂
Running heavily via decide seems like too much lock-in.
Kenny Lau (May 28 2025 at 13:57):
@Matthew Ballard we already have external capabilities, calculations on Nat are all done externally (proof: see how fast #eval 641 * 6700417 is); we just haven't done this for polynomials, and I don't know how to; but we shouldn't make things like PrePolynomial inside Lean
Matthew Ballard (May 28 2025 at 14:09):
Yes, I am well aware of the kernel-accelerated arithmetic operations, compiler-generated representations, tactic-generated representations, etc. I also know how to do something similar (in broad strokes) for the current model of docs#Polynomial.
The main difference between Nat and Polynomial is that we all agree on the implementation of Nat (at least on this zulip instance). For polynomials, this is not the case. See the wiki. I think that is a good thing. Different goals prioritize different representations. It is most useful broadly to be able to address them generally and abstractly.
Matthew Ballard (May 28 2025 at 14:13):
I think the whole computable vs noncomputable issue is also an incarnation of being too close to a particular implementation of polynomials and being forced to carry around too much to get tasks done.
Eric Wieser (May 28 2025 at 14:39):
Matthew Ballard said:
is that we all agree on the implementation of Nat (at least on this zulip instance)
This isn't true? Some of us use GMP, the rest (Windows) use a lean-internal nat implementation
Matthew Ballard (May 28 2025 at 14:40):
I was saying in Lean itself but you are also making my point more generally.
Eric Wieser (May 28 2025 at 14:48):
I don't really buy this argument though; I think a lot of us here don't agree on how Int is defined (I believe Kevin has wanted to teach it as a quotient in the past), but the API is such that we immediately stop caring
Eric Wieser (May 28 2025 at 14:57):
If we redefined Int such that #eval (1 + 2 : Int) complained, this would surely be very unpopular
Eric Wieser (May 28 2025 at 14:58):
So I don't understand the argument for why we shouldn't fix #eval (1 + 2 : R[X]), especially if the fix doesn't affect the API
Matthew Ballard (May 28 2025 at 14:59):
I agree that Int is not a mathematician's implementation. I am not sure we stopped caring but more so just lived with it. But polynomials also have more moving parts than integers. It seems more feasible to me to directly translate and transport invariants between different models of the integers than different models of polynomials.
It would be pretty rough to tell someone who showed up that we don't support their model of the polynomials and that they have to roll their own verification, at best cribbing off mathlib.
Kyle Miller (May 28 2025 at 14:59):
Do any of the computational models on the wiki avoid the fundamental issue that polynomial operations result in closures that are not memoized?
If your point, @Eric Wieser, is "yes, but computational polynomials help discharge goals about small polynomials", then a big difference between polynomials and Nat/Int is that Nat/Int are good at all scales. Why should mathematical theory have additional complications (decidability assumptions for example) just to support a computational model that is not good for large computations?
Eric Wieser (May 28 2025 at 15:00):
My claim is that no such assumptions are needed to fix +, *, and X
Kyle Miller (May 28 2025 at 15:01):
What about the rest of the operations that the rest of the theory would want to use?
Matthew Ballard (May 28 2025 at 15:01):
Eric Wieser said:
So I don't understand the argument for why we shouldn't fix #eval (1 + 2 : R[X]), especially if the fix doesn't affect the API
Do we just want that to give a number?
Eric Wieser (May 28 2025 at 15:01):
Kyle Miller said:
What about the rest of the operations that the rest of the theory would want to use?
We can decide that on a case-by-case basis, just like we do for other types
Eric Wieser (May 28 2025 at 15:02):
At some point we can say "no, the [Decidable] tradeoff isn't worth it for Polynomial.foo", like we do for Finset.preimage
Kyle Miller (May 28 2025 at 15:03):
It would be really nice if this doesn't have to be decided case-by-case so that theory never has to make any compromises.
Regarding Finset, I really want there to be a FiniteSet type that similarly is pure theory (a Set plus a Set.Finite proof).
Kyle Miller (May 28 2025 at 15:04):
Do you have any answer about my closure point? Or is it still "that doesn't matter for small polynomials, and computable just means small polynomials"?
Eric Wieser (May 28 2025 at 15:05):
You're correct that the closure is present in all of them today, but I believe you could implement Polynomial.add a b as { toDirectSum := memoize (a.toDirectSum + b.toDirectSum) }
Kyle Miller (May 28 2025 at 15:07):
Hopefully memoization doesn't cause any issues for kernel reduction
Eric Wieser (May 28 2025 at 15:07):
Could csimp solve that?
Eric Wieser (May 28 2025 at 15:08):
Regarding Finset, it would be a shame for FiniteSet to replace it and leave me unable to #eval (Finset.offDiag {1, 2, 3}).powersetCard 2
Eric Wieser (May 28 2025 at 15:10):
(to be clear: the un-memoized closures only matter for execution, not for kernel-reduction, right?)
Kyle Miller (May 28 2025 at 15:15):
Kernel reduction is probably ok with unmemoized closures since it memoizes everything itself.
Kyle Miller (May 28 2025 at 15:17):
Eric Wieser said:
Regarding Finset, it would be a shame for FiniteSet to replace it and leave me unable to #eval (Finset.offDiag {1, 2, 3}).powersetCard 2
It wouldn't replace it. It's just a missing corner of the 2x2 quadrant for finiteness of types/sets and computable/theoretical.
Independently, I think there should be a more powerful csimp that can be used to give mathematical expressions computational content.
Eric Wieser (May 28 2025 at 15:18):
It sort of shares that corner with Set.Finite I guess?
Kyle Miller (May 28 2025 at 15:26):
Maybe there's a 2x2x2 with bundled/unbundled as well.
Jireh Loreaux (May 28 2025 at 15:38):
Kyle Miller said:
Independently, I think there should be a more powerful csimp that can be used to give mathematical expressions computational content.
What do you mean by this?
Kyle Miller (May 28 2025 at 15:49):
@Jireh Loreaux Imagine something like this:
- There is a system to associate "computational types" to types, perhaps via a typeclass. The typeclass includes a coercion function compute : T -> C from the type to its computational type, for theoretical purposes.
- There is a system of lemmas for pushing compute inwards.
- There is a metaprogram to drive the pushing.
Back in Lean 3 I tried doing this with just typeclasses (and worked on the Lean 3 noncomputability checker to get Lean to see that the compute function was computable), but it was too ineffective. Some sort of systematic compiler would be better. (And maybe it can be implemented almost entirely with simp!)
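In Lean 4 terms, the skeleton might look like this (every name here is hypothetical):

import Mathlib

-- associate a computational model to a type
class HasComputeModel (T : Type*) where
  -- the computational type associated to T
  Model : Type*
  -- the theoretical coercion into the model
  compute : T → Model

export HasComputeModel (compute)

-- a type that is already computational can model itself
instance : HasComputeModel ℕ where
  Model := ℕ
  compute := id

-- the "pushing" lemmas would be stated one operation at a time, e.g.
-- (shape only, not proved):
-- theorem compute_add [Add T] [inst : HasComputeModel T] [Add inst.Model]
--     (a b : T) : compute (a + b) = compute a + compute b := ...
-- with a simp set or metaprogram driving compute toward the leaves.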
Kyle Miller (May 28 2025 at 15:50):
Unlike csimp, the lemmas could be conditional on typeclasses that synthesize for specific types, giving better algorithms than a lowest common denominator.
Hagb (Junyu Guo) (May 28 2025 at 18:56):
Kevin Buzzard said:
Kenny is old enough to remember why we made polynomials noncomputable, to fix slowdowns and constructivity issues. I can't imagine ever going back to be honest, whatever people want. It would be a gigantic mountain and at the end of it mathlib would compile more slowly and people would say "what exactly have we gained here and is it worth paying the price?" and my bet is that the answer will be "no".
I viewed the discussion at #general > timeout when working with ideal of polynomial ring, where the timeout happened while Lean was searching for an instance. I still don't know why making polynomials classical "fixed" it. Did it just avoid some ineffective/slow search routes?
(The way polynomials were computed was inefficient, but that seems unrelated to such a timeout in instance search.)
Matthew Ballard (May 29 2025 at 14:24):
What Kyle describes sounds pretty cool. But, in the big picture, what would be the goal here? Are we just making it so people stop asking the titular question? Are we looking at generating something performant for some set of real-world use cases? If so, how good?
Kyle Miller (May 29 2025 at 16:05):
Big picture, think of this as a framework for a decide-like tactic that reduces some mathematical goal into a routine computation (two possibilities here: good for kernel reduction, or good for #eval). This could subsume decide: in this framework, that tactic uses Bool as the computation type of Prop.
While I think this would all be useful, more realistically, when it comes to polynomials, having a normalg tactic that does what norm_num does but normalizes anything algebraic, like a CAS, would solve most problems. (The common case for decide is simple things, not heavy calculations, right? How often would it be better to use a heavy compute rather than a normalg?)
On the flip side, to prove things about programs that use, for example, computable polynomials, you'd want a "proofify" tactic to turn the program into a mathematical specification that uses the mathematical polynomials.
Matthew Ballard (May 29 2025 at 16:15):
Thanks. That is a good clarification of the scope: computation in service of verification.
Mario Carneiro (May 29 2025 at 20:10):
normalg is also known as ring_nf though...
Kyle Miller (May 29 2025 at 20:23):
ring_nf is an element of a normalg, yes, but there's more to algebra than rings and polynomials with natural number exponents
Kevin Buzzard (May 29 2025 at 20:48):
PARI/GP is free software, covered by the GNU General Public License, and comes WITHOUT ANY WARRANTY WHATSOEVER.
Type ? for help, \q to quit.
Type ?18 for how to get moral (and possibly technical) support.
parisize = 8000000, primelimit = 500000, nbthreads = 20
? 1/(1-x)+O(x^10)
%1 = 1 + x + x^2 + x^3 + x^4 + x^5 + x^6 + x^7 + x^8 + x^9 + O(x^10)
? 1/7+O(2^10)
%2 = 1 + 2 + 2^2 + 2^4 + 2^5 + 2^7 + 2^8 + O(2^10)
? exp(x^2)+O(x^10)
%3 = 1 + x^2 + 1/2*x^4 + 1/6*x^6 + 1/24*x^8 + O(x^10)
?
etc etc
Tomas Skrivan (May 30 2025 at 06:49):
Kyle Miller said:
And maybe it can be implemented almost entirely with simp!
It is definitely possible. I have a few experiments translating reals to floats that do this.
What sort of stopped me was dealing with structures. If you define a structure/inductive type that uses reals, then you need to define a new structure that uses the equivalent floats. Then you need to write down all the theorems for all the projections etc. It gets really tedious, and one should write some metaprograms to do this automatically.
Also, this translation is often functorial, and it might be advantageous to write a custom extension to simp that is better at applying composition theorems than Lean's unification.
Wrenna Robson (Jun 18 2025 at 09:55):
There is another way to compute polynomials - as a DirectLimit rather than a DirectSum, essentially. I somewhat assume we have a link between these two things already, though.
Last updated: Dec 20 2025 at 21:32 UTC