Zulip Chat Archive
Stream: general
Topic: Groebner bases
Matthew Ballard (Jun 23 2023 at 16:14):
I took the week to build a working (in the sense that it generates machine code, not that it is functionally correct with any certainty) version of Buchberger's algorithm at mattrobball/gb.
Some thoughts:
- If you want computable data, it is reasonably easy to strip out (or reimplement away) the uses of Classical.dec. Unfortunately, since DecidableXXX is not part of the signature in many declarations in mathlib, you cannot just csimp these things away.
- However, if you want to verify things, then you will quickly cast eyes longingly at the body of results in mathlib4 for the existing implementation.
- As mentioned, I have no confidence in its correctness atm. It wouldn't be too bad to verify it given proper access to mathlib. I don't even have reasonable unit tests right now.
- Things are clearly not optimized. I did use a slightly more sophisticated procedure (Gebauer-Moeller) to deal with the critical pairs (a schematic version of the basic pair loop is sketched below). Ultimately, one of the goals is to see how fast native Lean can go, so there is more work to do. One target is M4GB.
- Another goal (of which I have only thoughts and not work) is to connect up an existing C library like MSolve.
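For orientation, here is a minimal, schematic sketch of the basic Buchberger pair loop over an abstract polynomial type P. The names buchbergerLoop, sPoly, and reduce are hypothetical parameters, not the mattrobball/gb implementation; Gebauer-Moeller and M4GB-style algorithms improve on this shape mainly by pruning the pair queue and batching reductions.

-- Schematic Buchberger loop (hypothetical; `sPoly` computes S-polynomials and
-- `reduce basis p` is assumed to return a normal form of `p` modulo `basis`).
partial def buchbergerLoop {P : Type} [BEq P]
    (zero : P) (sPoly : P → P → P) (reduce : List P → P → P) :
    List P → List (P × P) → List P
  | basis, [] => basis
  | basis, (f, g) :: rest =>
    let r := reduce basis (sPoly f g)
    if r == zero then
      -- the S-polynomial reduces to zero, so the pair is discarded
      buchbergerLoop zero sPoly reduce basis rest
    else
      -- a nonzero remainder joins the basis and spawns new critical pairs
      buchbergerLoop zero sPoly reduce (r :: basis)
        (rest ++ basis.map (fun h => (r, h)))

def buchberger {P : Type} [BEq P]
    (zero : P) (sPoly : P → P → P) (reduce : List P → P → P)
    (gens : List P) : List P :=
  -- the naive starting queue; real implementations deduplicate and prune pairs
  buchbergerLoop zero sPoly reduce gens
    ((gens.map (fun f => gens.map (fun g => (f, g)))).join)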
Matthew Ballard (Jun 23 2023 at 17:03):
Right now this is a strange middle ground where the types want to keep contact with mathlib while not actually proving much.
There is a lot to do to make this useful in any sense and I would happily welcome any help.
Heather Macbeth (Jun 23 2023 at 17:10):
Since polyrith can outsource the Groebner basis work for proving that an algebraic equation is implied by a collection of other algebraic equations, I have sometimes wondered whether a big use of a Lean-native Groebner implementation would be the other direction: showing that False (or a specified algebraic equation) is not implied by a collection of other equations, i.e. that there exists a point satisfying a collection of equations (optionally: and also not satisfying a specified equation). (Assuming [IsAlgClosed K], I guess.) Is this something on your radar?
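For context, the criterion behind this suggestion is the weak Nullstellensatz: for $K$ algebraically closed,

$$\exists\, x \in K^n,\; f_1(x) = \cdots = f_m(x) = 0 \;\iff\; 1 \notin \langle f_1, \dots, f_m \rangle \;\iff\; \text{the reduced Groebner basis of } \langle f_1, \dots, f_m \rangle \text{ is not } \{1\},$$

so the same Groebner basis computation that proves implications can also certify satisfiability.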
Matthew Ballard (Jun 23 2023 at 17:50):
I hadn't really thought of that but that is a good use. Mainly I just want to see how performant a Lean-native & Mathlib-compatible GB implementation might be.
Matthew Ballard (Jun 23 2023 at 17:54):
It might be that we are just SOL because GB computations are so expensive, but anecdotally Lean 4 has been surprisingly performant.
Matthew Ballard (Jun 23 2023 at 18:01):
On a related tack, are there people who really like the current non-computable MvPolynomial? (Also, shouldn't it be MVPolynomial?)
Like most people, I have no trouble dropping a Classical.dec in a proof, but doing so when constructing a term for something that carries data is another matter.
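As a toy illustration of that distinction (the names below are made up, and core's Classical.propDecidable stands in for Classical.dec): a classical decidability instance inside a data-carrying definition forces noncomputable, while a real instance keeps the term evaluable.

-- Toy illustration (hypothetical names, no imports needed): a classical
-- decidability instance in a data-carrying definition makes it noncomputable,
-- so `#eval` cannot run it. Mathlib's `Classical.dec`/`Classical.decEq`
-- behave the same way.
noncomputable def indicatorClassical {α : Type} (a b : α) : Nat :=
  haveI := Classical.propDecidable (a = b)
  if a = b then 1 else 0

-- Threading a genuine instance through the signature keeps it executable.
def indicatorDecidable {α : Type} [DecidableEq α] (a b : α) : Nat :=
  if a = b then 1 else 0

#eval indicatorDecidable 2 2      -- 1
-- #eval indicatorClassical 2 2   -- rejected: the definition is noncomputable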
Yury G. Kudryashov (Jun 23 2023 at 18:15):
I think that it's hard to have one computable MvPolynomial (yes, we should fix the name; I'm wrong about the name, see below) implementation that fits different algorithms.
Matthew Ballard (Jun 23 2023 at 18:29):
I agree that different implementations are more natural for different algorithms.
But I am asking a smaller question, I think. As an example, consider Finsupp.single below:
def single (a : α) (b : M) : α →₀ M where
  support :=
    haveI := Classical.decEq M
    if b = 0 then ∅ else {a}
  toFun :=
    haveI := Classical.decEq α
    Pi.single a b
  mem_support_toFun a' := by
    classical
    ...
all the decidability assumptions are hidden from the outside. I know that not explicitly passing [DecidableEq α] and [DecidableEq M] in Finsupp.single was a design decision, and I am trying to understand the motivation for it.
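For comparison, here is a minimal sketch of the alternative design with the instances made explicit. This is hypothetical, not mathlib's definition (and it spells out toFun directly instead of using Pi.single); the point is only that [DecidableEq α] and [DecidableEq M] would then appear in the signature and in every lemma about it.

import Mathlib.Data.Finsupp.Defs

-- Hypothetical alternative to `Finsupp.single` (not mathlib's definition):
-- the decidability assumptions are part of the signature, so every use site
-- and every lemma has to supply or generalize over them.
def single' {α M : Type} [Zero M] [DecidableEq α] [DecidableEq M]
    (a : α) (b : M) : α →₀ M where
  support := if b = 0 then ∅ else {a}
  toFun := fun a' => if a' = a then b else 0
  mem_support_toFun := fun a' => by
    by_cases hb : b = 0 <;> by_cases ha : a' = a <;> simp [hb, ha]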
Ruben Van de Velde (Jun 23 2023 at 18:31):
Most likely the motivation is "in practice, it's annoying to work with the instance"
Kyle Miller (Jun 23 2023 at 18:31):
MVPolynomial seems fine for math, but it's not a good data structure for computations, and I don't believe there's any way to make it good for computations without changing it completely into something else (like lists of monomial/coefficient pairs with respect to some monomial ordering, or dense multidimensional arrays of coefficients).
The problem is that it's using finitely supported functions. (1) The finite support uses a Finset, which is a data structure, but in pretty much any practical situation there are much better data structures, and (2) the function does not cache any of its values, making it very easy to accidentally recompute things repeatedly. For example, in this instance you don't precompute any of the sums, so every time you index into it you'll recompute them. Plus, the closure stores v and w, so there's a memory leak.
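To make the second point and the closure issue concrete, here is a stripped-down model (hypothetical, not the mathlib code) of function-backed addition: nothing is precomputed, every coefficient lookup of the sum redoes the addition, and the result closes over both operands.

-- A stripped-down model (not the mathlib definition): the coefficient
-- function is stored as a bare function, so there is no caching.
structure FunPoly (σ R : Type) where
  coeff : (σ → Nat) → R   -- exponent vector ↦ coefficient

def FunPoly.add {σ R : Type} [Add R] (v w : FunPoly σ R) : FunPoly σ R :=
  -- every lookup recomputes `v.coeff m + w.coeff m`, and the closure keeps
  -- both `v` and `w` alive for as long as the sum is reachable
  { coeff := fun m => v.coeff m + w.coeff m }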
Jireh Loreaux (Jun 23 2023 at 18:32):
Regarding the name, we did actually talk about this before (i.e., MV vs Mv) and I think this one was a toss-up. The capitalization is for acronyms, but since 'multivariate' is a single word, I think that's why we opted for v instead of V.
Jireh Loreaux (Jun 23 2023 at 18:34):
Note, according to the naming conventions, if we do switch from MvPolynomial to MVPolynomial, we need to switch all occurrences in theorem names and the like to mvpolynomial instead of the current mvPolynomial.
Matthew Ballard (Jun 23 2023 at 18:37):
I totally agree that finitely supported functions are not the way to go if you are seriously performance conscious.
Matthew Ballard (Jun 23 2023 at 18:39):
Funnily enough, I didn't actually think about the actual name and just parsed it as 'multi' + 'variable'.
Matthew Ballard (Jun 23 2023 at 18:42):
More generally, what do people think the role of mathlib is for "real(ish) world" code given the advances with Lean 4?
Kyle Miller (Jun 23 2023 at 18:45):
This is "performance conscious" at the bare minimum of "can it perform at all". You can expect accidentally exponential algorithmic complexity in the number of operations, unless you use a function to cache the results periodically (I don't think this function exists yet for MvPolynomial). Finitely supported functions are mathematically nice, so I wouldn't want to change MvPolynomial, but I think we should stop trying to make it computable and we ought to develop a separate polynomial type for doing calculations, along with some theory to be able to carry results back and forth between these types.
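As a sketch of what such a calculation-oriented type could look like (hypothetical, not an existing mathlib or gb API): store the terms as (exponent vector, coefficient) pairs sorted by a monomial order, so that every coefficient is materialized once and addition is a single merge pass. The function is marked partial only to keep the termination argument out of the sketch.

-- Hypothetical sketch of a calculation-oriented representation: a polynomial
-- is a list of (exponent vector, coefficient) terms, sorted by the assumed
-- monomial order `cmp`, with no zero coefficients stored.
partial def mergeAdd {R : Type} [Add R] [BEq R] [OfNat R 0]
    (cmp : Array Nat → Array Nat → Ordering) :
    List (Array Nat × R) → List (Array Nat × R) → List (Array Nat × R)
  | [], q => q
  | p, [] => p
  | (m₁, c₁) :: p, (m₂, c₂) :: q =>
    match cmp m₁ m₂ with
    | .lt => (m₁, c₁) :: mergeAdd cmp p ((m₂, c₂) :: q)
    | .gt => (m₂, c₂) :: mergeAdd cmp ((m₁, c₁) :: p) q
    | .eq =>
      let c := c₁ + c₂
      -- drop cancelled terms to keep the "no zero coefficients" invariant
      if c == (0 : R) then mergeAdd cmp p q else (m₁, c) :: mergeAdd cmp p q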
Matthew Ballard (Jun 23 2023 at 18:49):
That probably makes more sense. I was wondering about the ease of "carrying results back and forth". What is the model for making this less painful?
Kyle Miller (Jun 23 2023 at 18:49):
For an example of the issue, this loop would create an MvPolynomial that does something like 2^100 additions on the last line:
let mut p : MvPolynomial := 1
for i in [0:100] do
  p := p + p
return p.coeff 0
Kyle Miller (Jun 23 2023 at 18:50):
and, on top of that, the final p would transitively refer to all of the intermediate p's (that's a memory leak)
Kyle Miller (Jun 23 2023 at 19:06):
Matthew Ballard said:
What is the model for making this less painful?
I don't really know. At the least, you'd want a ring isomorphism between the math type and the computational type, along with a proof that it's coeff-preserving.
I don't fully understand what the impact would be on the theorem-proving side. If you're doing a verified Groebner basis calculation, you could at least show one-way containment of the resulting ideal using these faster polynomials: run the algorithm, get a Groebner basis, take every original generator, calculate how it's generated by the new basis, and then use ring to generate the proofs post hoc. For the reverse containment, if you can calculate the reverse inclusion along the way and prove it with ring afterwards, then great. Otherwise, you'd probably need to run a verified version of the algorithm by a slower method (reduction), or you could probably just work with symbolic evaluation of MvPolynomials at the meta level. I'm fairly ignorant about how to carry these things out, so having computable polynomials could very well unlock much more.
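A sketch of the interface described in the first paragraph, with a hypothetical computable type C and coefficient function coeffC taken as parameters (neither exists in mathlib):

import Mathlib

-- Hypothetical bridging interface (not in mathlib): a ring isomorphism from a
-- computable polynomial type `C` to `MvPolynomial σ R`, together with a proof
-- that it preserves coefficients.
structure PolyModel (σ R : Type) [CommSemiring R]
    (C : Type) [CommSemiring C] (coeffC : C → (σ →₀ ℕ) → R) where
  equiv : C ≃+* MvPolynomial σ R
  coeff_equiv : ∀ (p : C) (m : σ →₀ ℕ), (equiv p).coeff m = coeffC p m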
Matthew Ballard (Jun 23 2023 at 19:19):
Instead of transfer, I think it might be easier to build a common specification from which you can derive the correctness of the algorithm on both sides.
Jireh Loreaux (Jun 23 2023 at 23:01):
But then you would have to prove that your new structure satisfies all the properties? E.g., you would need to prove that the multiplication on your new type is associative.
Yury G. Kudryashov (Jun 23 2023 at 23:08):
You just need to provide an Equiv that preserves operations.
Yury G. Kudryashov (Jun 23 2023 at 23:09):
Then you can prove correctness using Function.Injective.* and/or Function.Surjective.*.
Yury G. Kudryashov (Jun 23 2023 at 23:10):
Also, you can prove nothing at all: just rephrase your answer in terms of MvPolynomials and provide enough information to verify the answer using linear_combination.
Yury G. Kudryashov (Jun 23 2023 at 23:10):
(or ring or whatever)
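A toy instance of this pattern: the combination handed to linear_combination (here just h₁ + h₂) is exactly the kind of certificate an unverified external computation could produce, and the tactic checks it after the fact with ring.

import Mathlib.Tactic.LinearCombination

-- Toy example of certificate checking: the combination `h₁ + h₂` could come
-- from an unverified Groebner-basis computation; `linear_combination`
-- verifies it by calling `ring` on the residual goal.
example (x y : ℚ) (h₁ : x + y = 2) (h₂ : x - y = 0) : 2 * x = 2 := by
  linear_combination h₁ + h₂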
Jireh Loreaux (Jun 24 2023 at 19:08):
Right, that's what Kyle was arguing for, and I think it's what makes sense. I interpreted Matthew's comment about a "common specification" as essentially: "show that both are rings (or whatever structure is needed)".
Hyunsang Hwang (Jun 25 2023 at 17:33):
I have been working on a project related to this. Though I am not finished, I would like to share my code: https://github.com/hyunsanghwang/matroid. I used Dfinsupp rather than Finsupp.
Last updated: Dec 20 2023 at 11:08 UTC