Zulip Chat Archive

Stream: condensed mathematics

Topic: meta


Peter Scholze (Jun 17 2021 at 20:51):

Here's a meta question. How much would have to happen before Lean (or something similar) is able to

a) do such computations as yesterday's, once it has been told the definition of C(A) (it has known this definition since early January...);

b) sift through the literature to figure out that this complex in fact goes back to Eilenberg and MacLane (and has arisen repeatedly since)? Or well, at least suggest relevant references.

I gather b) is much, much harder than a) and would definitely require significant upfront work of "digitizing" the whole literature in the first place. I became aware of the reference because one of my postdocs (Johannes Anschütz) was just looking at that paper of MacLane, and mentioned this to me over lunch.

But a) sounds more realistic to me. It feels weird that you have to code from scratch in a different language; why not simply let Lean compute some examples?

Bryan Gin-ge Chen (Jun 17 2021 at 23:14):

(I've moved Patrick's post to the correct topic.)

Mario Carneiro (Jun 17 2021 at 23:31):

(a) is significantly more difficult and a lot less automatic than you think. The gap between a mathematical definition, even a "computational" one as perceived by a mathematician, and an actual program that can calculate the result, especially if it needs to be efficient and not just computable in principle, is quite large. I relied quite heavily on Johan's Python program as a baseline implementation. In some cases the naive implementation performs well enough to run on the examples that matter, in which case you might get away with just writing the program in a plausibly computable way in Lean, but most of the time the mathematical description is entirely separate from the computation.
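
As a toy illustration of that gap (nothing to do with C(A) itself), here is the same function twice in Lean 4: first as the textbook recurrence, computable in principle but exponentially slow, then as the program one would actually run.

```lean
-- The textbook definition: correct and "computable", but exponential time.
def fibSlow : Nat → Nat
  | 0 => 0
  | 1 => 1
  | n + 2 => fibSlow (n + 1) + fibSlow n

-- The program one actually runs: same values, linear time.
def fibFast (n : Nat) : Nat :=
  (go n).1
where
  go : Nat → Nat × Nat
    | 0 => (0, 1)
    | n + 1 => let (a, b) := go n; (b, a + b)

#eval fibFast 100  -- instant; `fibSlow 100` would never finish in practice
```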

Johan Commelin (Jun 18 2021 at 04:29):

Still, as far as I understand, Lean 4 might be a language in which we can do both. It might still be a separate program that implements certain definitions in a specific way to take advantage of the context (e.g., linear algebra over $\mathbb{F}_2$ admits much more efficient algorithms than the general case). But it would be the same language in which you prove and compute.
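
To make "prove and compute in the same language" concrete, here is a trivial Lean 4 sketch (toy names, nothing LTE-specific): the same definition is executed by #eval and reasoned about by a theorem in the same file.

```lean
-- One definition serves both purposes.
def double (n : Nat) : Nat := 2 * n

#eval double 21  -- runs the definition: 42

-- ...and a proof about the very same definition, in the same file.
theorem double_add (m n : Nat) : double (m + n) = double m + double n := by
  simp only [double]  -- unfold the definition
  omega               -- close the resulting linear-arithmetic goal
```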

Johan Commelin (Jun 18 2021 at 04:35):

Let me put it like this: all the small h(p,i) that I computed by brute force in Sage should be computable just as well in Lean (once it becomes as fast as Lean 4). All that is needed for that is an implementation of Gaussian elimination (and maybe a proof that it actually is a way to compute dimensions?).
But the next observation was to do a randomized computation. This was human input, so I don't think Lean will quickly come up with that idea when you just tell it "compute this".
Still, it is conceivable that Lean can compute h(3,3) and h(7,2) with the standard definitions and a little bit of extra code to do the randomized computation.
But the computation of h(2,4) required a completely different implementation of linear algebra. We would have to teach Lean: hey, if you are working over a field with 2 elements, please use these faster algorithms instead of the default ones (and here's a proof that they do the right thing).
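
As a minimal sketch of that first ingredient, here is Gaussian elimination as a rank computation in plain Lean 4 (no mathlib; the names powMod, invMod and rankModP are invented here, and the correctness proof Johan mentions is exactly what this sketch omits):

```lean
-- Modular exponentiation and inverse (Fermat); fine for small primes.
def powMod (a : Nat) : Nat → Nat → Nat
  | 0, p => 1 % p
  | b + 1, p => a % p * powMod a b p % p

def invMod (a p : Nat) : Nat := powMod a (p - 2) p

-- Rank over F_p: rows are lists of naturals, read mod p.
partial def rankModP (p : Nat) (rows : List (List Nat)) : Nat :=
  match rows.filter (fun r => !r.isEmpty) with
  | [] => 0
  | rows' =>
    match rows'.partition (fun r => r.head! % p == 0) with
    -- no pivot in the first column: discard that column and recurse
    | (zs, []) => rankModP p (zs.map (·.tail!))
    -- eliminate the first column using `pivot`, then recurse on the tails
    | (zs, pivot :: rest) =>
      let pinv := invMod (pivot.head! % p) p
      let sub (x y : Nat) : Nat := (x % p + p - y % p) % p
      let reduce (r : List Nat) : List Nat :=
        let c := r.head! % p * pinv % p
        (r.tail!.zip pivot.tail!).map fun (x, y) => sub x (c * y)
      1 + rankModP p (zs.map (·.tail!) ++ rest.map reduce)

#eval rankModP 2 [[1, 1, 0], [0, 1, 1], [1, 0, 1]]  -- 2: the rows sum to zero
```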

Mario Carneiro (Jun 18 2021 at 04:40):

that last bit isn't that unusual; in lean we would normally write the algorithm generically over (computable) fields, and for h(3,3) and h(7,2) we are using zmod 3 and zmod 7, while for h(2,4) we are using bool
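
A sketch of what "generically over (computable) fields" might look like (the interface ElimField and its name are invented for illustration; in mathlib the mod-p instances would come from ZMod p): elimination only needs an equality test, subtraction, and division by a nonzero pivot, so the algorithm can be written once against a small interface and instantiated per field.

```lean
-- Invented interface: just what elimination needs from a computable field.
class ElimField (α : Type) extends BEq α where
  zero : α
  sub  : α → α → α
  div  : α → α → α  -- only ever called with a nonzero divisor

-- F_2 as `Bool`: subtraction is `xor`, and dividing by the only nonzero
-- element (`true`) is the identity.
instance : ElimField Bool where
  zero := false
  sub  := Bool.xor
  div  := fun a _ => a

#eval ElimField.sub true true  -- false: 1 - 1 = 0 in F_2
```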

Johan Commelin (Jun 18 2021 at 04:48):

But it's not just that we use bool. We also cut out a lot of computations that would otherwise just be multiplying by 1 all the time.
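
A toy version of that F_2-specific trick (a sketch, not the actual LTE code): pack each row into the bits of a single Nat, so that "subtract c times the pivot row" collapses into one XOR and the multiplications by 1 disappear entirely.

```lean
-- Each row is a bitmask in a `Nat`; a row operation over F_2 is one XOR.
partial def rankF2 (rows : List Nat) : Nat :=
  match rows.filter (· != 0) with
  | [] => 0
  | pivot :: rest =>
    -- isolate the lowest set bit of the pivot row
    let b := pivot ^^^ (pivot &&& (pivot - 1))
    -- one XOR per row clears that bit below the pivot; zero rows get
    -- filtered out on the next call
    1 + rankF2 (rest.map fun r => if r &&& b != 0 then r ^^^ pivot else r)

#eval rankF2 [0b110, 0b011, 0b101]  -- 2: the third row is the XOR of the others
```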

Chris Birkbeck (Jun 18 2021 at 08:04):

Interesting. So, if Lean 3/4 were to do these computations, would the result just be, say, the rank of the subspace, or would it also provide a full proof of this fact? I.e., would it give a full proof detailing each step in the Gaussian elimination? Perhaps this depends on how one makes Lean do the computation.

Johan Commelin (Jun 18 2021 at 08:19):

Yes, it depends on what you ask Lean to do. It could generate a proof along the way, but it would be a massive proof term.
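
The two modes Johan distinguishes show up already at toy scale: #eval runs compiled code and produces no proof object at all, while decide makes the kernel evaluate a Decidable instance, so the result is genuinely certified; that kernel evaluation is what blows up on a real elimination.

```lean
#eval 2 ^ 20  -- 1048576: just an answer, no proof object exists

-- Here the kernel re-does the computation while checking the proof, so the
-- certificate is real but the checking cost scales with the computation.
example : 2 ^ 20 = 1048576 := by decide
```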

Chris Birkbeck (Jun 18 2021 at 08:39):

Yeah, in these cases one maybe only wants a "quantum proof", i.e., one that only exists while the computation is running but is never observed by the user or recorded. Or you compress the proof enough that it can be stored, but that might not always be possible.
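
Lean 4's native_decide tactic is close to this "quantum proof" in spirit: the computation runs once in compiled code at elaboration time, and all that gets recorded is a tiny certificate appealing to the compiler-trusting axiom Lean.ofReduceBool, never the computation trace itself.

```lean
-- The stored proof is a one-line certificate via `Lean.ofReduceBool`
-- ("the compiled evaluator returned `true`"); the arithmetic itself runs
-- once and is never recorded.
example : 123456789 * 987654321 = 121932631112635269 := by native_decide
```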

