Zulip Chat Archive

Stream: condensed mathematics

Topic: homology refactor


Johan Commelin (Jul 25 2022 at 19:32):

At the moment C.X_prev i is not defeq to C.X j for any j. Besides that, it is only defined for categories with a zero object, which we do not always have (e.g. FreeAb 𝓐).

In particular C.d_to i : C.X_prev i ⟶ C.X i is not defeq to C.d j i for any j. This means that all sorts of simp lemmas don't apply.
One solution would be to add a function c.pred to complex_shape that sends i to the correct j whenever it exists, and to i otherwise. The upshot is that C.d (c.pred i) i is the correct function in all meaningful cases and equal to 0 in all the edge cases (because C.d j i = 0 when ¬ c.rel j i).

This definition has good defeqs, and it works in categories without zero objects.
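
A minimal Lean sketch of this proposal (mathlib 3 notation; the names complex_shape.pred and d_to' are hypothetical here, and whether pred lives as a field of complex_shape or as a plain definition is exactly the design question discussed below):

import algebra.homology.homological_complex

open category_theory
open_locale classical
noncomputable theory

variables {ι : Type*} {V : Type*} [category V] [limits.has_zero_morphisms V]
variables {c : complex_shape ι}

-- a total `pred`, falling back to `i` itself when `i` has no predecessor
def complex_shape.pred (c : complex_shape ι) (i : ι) : ι :=
if h : ∃ j, c.rel j i then h.some else i

-- the incoming differential is then an honest `C.d`; it is `0` in the edge
-- cases, since `C.d j i = 0` whenever `¬ c.rel j i`
def d_to' (C : homological_complex V c) (i : ι) : C.X (c.pred i) ⟶ C.X i :=
C.d (c.pred i) i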

I think we can attempt this refactor orthogonally to the other proposed homology refactors.

I welcome feedback from @Adam Topaz @Joël Riou @Scott Morrison @Andrew Yang

Andrew Yang (Jul 25 2022 at 19:37):

Why do we need X_prev in the first place? Would it make sense for the homology functor to take i j k and proofs that i + 1 = j and j + 1 = k?
If so, do we need X_prev at all?
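
As a signature sketch of that suggestion (the name homology_at is hypothetical; assuming an abelian category):

import algebra.homology.homology
import category_theory.abelian.basic

open category_theory

variables {ι : Type*} {V : Type*} [category V] [abelian V] {c : complex_shape ι}

-- homology taking explicit indices together with the relation proofs
noncomputable def homology_at (K : homological_complex V c) (i j k : ι)
  (hij : c.rel i j) (hjk : c.rel j k) : V :=
homology (K.d i j) (K.d j k) (K.d_comp_d i j k)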

Johan Commelin (Jul 25 2022 at 19:43):

Then you can't talk about H₀ of an ℕ-indexed complex.

Yaël Dillies (Jul 25 2022 at 19:46):

Why don't you take a proof that i = pred j and k = succ j?

Yaël Dillies (Jul 25 2022 at 19:47):

That is, rely on the truncatedness of nat subtraction (or more generally on the behavior of docs#order.pred on minimal elements).

Andrew Yang (Jul 25 2022 at 19:48):

This doesn't generalize to arbitrary index types though. You would need at least an order on them?

Andrew Yang (Jul 25 2022 at 19:49):

But maybe that generality is not needed?

Yaël Dillies (Jul 25 2022 at 19:49):

That's the whole problem, right? When you don't have anywhere to go, you shouldn't try to go anywhere. When you're the maximal element, you shouldn't try to look at the next bit.

Andrew Yang (Jul 25 2022 at 19:50):

I meant that there might not be a notion of "maximal element" in every case that we currently support?

Yaël Dillies (Jul 25 2022 at 19:51):

Is it possible to consider cyclic complexes?

Andrew Yang (Jul 25 2022 at 19:51):

It is currently possible, but I doubt we have ever used it.

Andrew Yang (Jul 25 2022 at 19:52):

I'm still a bit inclined to support only ℤ-indexed complexes, provide some wrapper for padding with zeroes at the front and the end, and leave it at that.
I think someone gave a reason why this is bad, but I can't find it now.

Yaël Dillies (Jul 25 2022 at 19:52):

If you do not use it, then you aren't losing any generality by assuming an order on the indices, because you could recover one by defining a ≤ b := ∃ n, X_succ^[n] a = b.
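
As a sketch, with succ standing for the hypothetical successor map on the index type:

import logic.function.iterate

-- the order recovered from iterating `succ`
def index_le {ι : Type*} (succ : ι → ι) (a b : ι) : Prop :=
∃ n : ℕ, (succ^[n]) a = b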

Joël Riou (Jul 25 2022 at 21:06):

I do not mind replacing X.X_prev i by X.X i when c.prev i = none, but do we really need this? More precisely, do we want to attempt defining homology when we do not even have a zero object?

More seriously, if I interpret your suggestion properly, you would like to replace c.prev, which sends i to the unique j such that c.rel j i when it exists, by some datum c.pred which would be a certain choice of j. (Of course, mathematically speaking, there is only one such j, but two choices may not be defeq.) I think it is not a good idea; I strongly prefer the current design. For example, for cochain_complex C ℤ, we would presumably set c.pred i to be i-1 and similarly for c.succ, so that the homology at i of a complex K would be defined as the homology of the "short_complex" K.X (i-1) ⟶ K.X i ⟶ K.X (i+1)? The obvious problem I see is that if in some theorem we want to study the cohomology in degree n+2, then this complex will be K.X (n+2-1) ⟶ K.X (n+2) ⟶ K.X (n+2+1), and n+2-1 is not definitionally equal to n+1, nor is n+2+1 defeq to n+3.
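
The defeq issue in miniature (a standalone illustration, not from the discussion):

import tactic.ring

-- these equalities hold propositionally ...
example (n : ℤ) : n + 2 - 1 = n + 1 := by ring
-- ... but not definitionally: replacing `by ring` with `rfl` fails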

I think that the best approach is to keep the current design for the definition of the homology, and use isomorphisms like prev_functor_iso_eval and next_functor_iso_eval that I introduced in for_mathlib/short_complex_homological_complex.lean. For example, if i, j, k are indices such that c.rel i j and c.rel j k, it is easy to deduce from these isomorphisms that the homology functor at j functorially identifies with the homology of the "short_complex" K.X i ⟶ K.X j ⟶ K.X k, and of course, in the case of integers, one may get an isomorphism with the homology of K.X (n-1) ⟶ K.X n ⟶ K.X (n+1), if this is what we want? I do not really see what we would gain by making this the definition of the homology.

I have encountered a similar matter recently while trying to check the axioms of model categories for the category of bounded above complexes in an abelian category C with enough projectives. In order to obtain lifting properties, it was useful to introduce the hom_complex F G of two cochain complexes F and G (indexed by ℤ). Mathematically speaking, an n-cochain of hom_complex F G is the datum of maps F.X p ⟶ G.X (p+n) for all p. In Lean, working with that definition soon becomes an absolute nightmare, because when doing computations you have to use a lot of eq_to_hom. In the end, I chose to define an n-cochain of hom_complex F G as the datum of maps F.X p ⟶ G.X q for all p, q with hpq : p + n = q. Doing so, I was able to make some computations with cocycles and coboundaries: for example, if F is bounded above and consists of projective objects and G is acyclic, I can show that hom_complex F G is acyclic. For the associativity of the composition of cochains, the statement looks like this:

lemma comp_assoc {n₁ n₂ n₃ n₁₂ n₂₃ n₁₂₃ : ℤ}
  (z₁ : cochain F G n₁) (z₂ : cochain G K n₂) (z₃ : cochain K L n₃)
  (h₁₂ : n₁₂ = n₁ + n₂) (h₂₃ : n₂₃ = n₂ + n₃) (h₁₂₃ : n₁₂₃ = n₁ + n₂ + n₃) :
  cochain.comp (cochain.comp z₁ z₂ h₁₂) z₃ (show n₁₂₃ = n₁₂ + n₃, by rw [h₁₂, h₁₂₃]) =
    cochain.comp z₁ (cochain.comp z₂ z₃ h₂₃)
      (show n₁₂₃ = n₁ + n₂₃, by rw [h₂₃, h₁₂₃, add_assoc]) := ...

As the associativity of the addition in ℤ is not a definitional equality, I had to introduce these n₁₂, n₂₃ and n₁₂₃. A situation where n₁₂ et al. must be defeq to n₁+n₂, etc., seems to me like attempting to define c.pred n = n-1. Based on my experience with hom_complex, I believe this may cause more harm than relief.
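
For reference, a minimal sketch of the cochain datum described above (the name cochain' is hypothetical):

import algebra.homology.homological_complex

open category_theory

variables {C : Type*} [category C] [limits.has_zero_morphisms C]

-- an `n`-cochain: a map `F.X p ⟶ G.X q` for each `p + n = q`, carrying the
-- proof as an argument instead of transporting along `eq_to_hom`
def cochain' (F G : cochain_complex C ℤ) (n : ℤ) : Type* :=
Π (p q : ℤ) (hpq : p + n = q), F.X p ⟶ G.X q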

Johan Commelin (Jul 26 2022 at 03:07):

I think that we should keep all the isomorphisms that are currently there. One of the issues that I am trying to solve is that

(F.map_homological_complex C).X_prev i
-- and
(F.map_homological_complex C).d_from i

are not of the form F.obj blah and F.map blah (not even propositionally equal). So you lose access to a whole lot of library lemmas.
Of course we can write a whole bunch of specialised lemmas to deal with F.map_iso (C.X_prev_iso i) etc., but with my proposed change all the "combinatorics" is shifted to calculations in the indexing type ι.
As you rightly point out, we don't get good defeqs in ι. But the situation certainly doesn't get worse than before.
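
Spelled out, the defeq chain the refactor is after (a sketch, assuming X_prev i is redefined as X (c.prev i)):

-- ((F.map_homological_complex c).obj C).X_prev i
--   = ((F.map_homological_complex c).obj C).X (c.prev i)  -- by definition
--   = F.obj (C.X (c.prev i))                              -- by definition
--   = F.obj (C.X_prev i)                                  -- by definition
-- so the usual functor simp lemmas apply again.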

For actual computations with homology, I think we need another refactor (or 2), to implement a mix of your homology_(pre)_datum and my has_homology, and maybe also Adam's idea.

Adam Topaz (Jul 26 2022 at 03:13):

Here's some sketchy code with a potential idea:

Adam Topaz (Jul 26 2022 at 03:16):

The idea is that a homological complex is built up of a bunch of small complexes glued together with isomorphisms.
The morphisms are tedious to write in Lean with this approach. The benefit is that mapping this via an additive functor is easy, and it solves Johan's issue that (F.map_homological_complex C).X_prev i is not defeq to F.obj _.

Adam Topaz (Jul 26 2022 at 03:24):

(There may be some compatibilities missing with the isomorphisms involved in the definition of homological_complex', but I hope the idea is clear enough.)

Johan Commelin (Jul 26 2022 at 03:25):

But doesn't this mean that now there are isomorphisms everywhere?

Johan Commelin (Jul 26 2022 at 03:26):

I think we should have something like this when doing abstract homological algebra. But I don't know if it should be built right into the defn of complexes.

Johan Commelin (Jul 26 2022 at 03:27):

Having an unbundled version, where you get to choose the defeqs of the homology object (instead of using choice) is very useful, I think.

Adam Topaz (Jul 26 2022 at 03:38):

We already have a bunch of isomorphisms to deal with, because that's how we can relate C.X_next i to C.X (i+1) (for example). I guess I'm just suggesting that these isomorphisms be part of the structure itself.

Adam Topaz (Jul 26 2022 at 03:39):

For the homology issue, I think we should mimic the way you can describe an explicit limit with good defeq properties in terms of an explicit cone and an explicit term of is_limit .... That's roughly what homology_gadget is trying to accomplish.
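
For comparison, the mathlib pattern being referenced: rather than the chosen limit F, one works with any explicit cone t plus a term of is_limit t, whose fields are roughly:

-- structure is_limit (t : cone F) :=
-- (lift : Π (s : cone F), s.X ⟶ t.X)
-- (fac' : ∀ (s : cone F) (j : J), lift s ≫ t.π.app j = s.π.app j . obviously)
-- (uniq' : ...)
--
-- `homology_gadget` would play the same role for homology: an explicit object
-- with data witnessing that it is the homology, instead of a chosen one.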

Adam Topaz (Jul 26 2022 at 03:41):

With this approach to complexes, we would no longer have C.d i j for arbitrary i, j. In fact, all we will have is C.d_to i and C.d_from i

Joël Riou (Jul 26 2022 at 04:24):

About has_homology, my opinion is that it is excellent for proving that homology is a self-dual notion, but it is not practical enough for computations, because the burden of proof needed to get a has_homology is too heavy, as we experienced in the LTE. My proposal of homology_iso_datum was not to take a kernel of g and a cokernel of f, but to compute the homology as a quotient of a subobject: first taking a kernel K of g, and then a cokernel of S.L ⟶ K. It is not self-dual, but computations are easy...
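
A minimal sketch of that computation path (the name homology_as_quotient is hypothetical; abelian case):

import category_theory.abelian.basic

open category_theory category_theory.limits

variables {𝒜 : Type*} [category 𝒜] [abelian 𝒜]
variables {A B C : 𝒜} (f : A ⟶ B) (g : B ⟶ C) (w : f ≫ g = 0)

-- first the cycles as a kernel `K` of `g`, then the homology as a cokernel
-- of the induced map `A ⟶ K`: a quotient of a subobject
noncomputable def homology_as_quotient : 𝒜 :=
cokernel (kernel.lift g f w)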

About the definition of complexes, my proposal would be the following, which seems closest to the way mathematicians usually think about complexes:

structure homological_complex'' (c : complex_shape ι) :=
(X : ι → C)
(d : Π i j (hij : c.rel i j), X i ⟶ X j)
(d_comp_d' : ∀ i j k hij hjk, d i j hij ≫ d j k hjk = 0 . obviously)

(I think @Kevin Buzzard suggested this also.)
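
Morphisms for this variant remain easy to state; a sketch:

structure hom'' (K L : homological_complex'' c) :=
(f : Π i, K.X i ⟶ L.X i)
(comm' : ∀ i j (hij : c.rel i j), K.d i j hij ≫ f j = f i ≫ L.d i j hij)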

In category theory, we should focus more on morphisms than on objects. Adam's definition may solve some difficulties, but I believe it creates more problems because of the impractical notion of morphisms it would require. I can imagine how painful this would be for the proof of the Dold-Kan correspondence.

I think that what homology in mathlib needs most is a more functorial approach. Introducing the category short_complex unlocked the verification of naturality properties. Anyway, the definition of homology is intricate, so I do not think we need to change the defs: we can use a few isomorphisms.

Joël Riou (Jul 26 2022 at 04:58):

Another problem I see is that we currently have chain_complex C α, which is defined when [add_right_cancel_semigroup α] and [has_one α]. In the case of ℤ, one may define c.pred i = i-1 and c.succ i = i+1, but with a general α, this would not work.
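
For reference, roughly how this shape is built in mathlib (a sketch from memory):

import algebra.homology.complex_shape

-- the shape underlying `cochain_complex C α`: only `+ 1` is used, and a
-- general `α` has no subtraction, hence no candidate `c.pred i = i - 1`
def up'' (α : Type*) [add_right_cancel_semigroup α] [has_one α] :
  complex_shape α :=
{ rel := λ i j, i + 1 = j,
  next_eq := λ i j j' hj hj', hj.symm.trans hj',
  prev_eq := λ i i' j hi hi', add_right_cancel (hi.trans hi'.symm) }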

The only possible gain I may see in Johan's approach is that the "short_complex" around i attached to a complex K is mapped by an additive functor F to the "short_complex" around i attached to F.map_homological_complex K. It seems it would be a definitional equality (if we allow the irrelevant differential K.d i i, which I would like to get rid of...). In the LTE, I have obtained precisely this isomorphism with the current API: short_complex.functor_homological_complex C c i ⋙ F.map_short_complex ≅ F.map_homological_complex c ⋙ short_complex.functor_homological_complex D c i. It was not that painful! Then, is it really worth setting c.pred and c.succ a priori only for that purpose? I think that the more "on-demand" approach that I have suggested above is enough.

Johan Commelin (Jul 26 2022 at 12:36):

I just opened #15690 as an experiment. It implements my idea, and the result is a +148 −331 diff.
Very little API surface has changed, but proofs became a lot shorter. I have not removed lemmas/definitions that are now just thin wrappers around other defs/lemmas. We could easily remove more stuff in the future.

As I said before, I am very much in favour of the other proposals. I think they can coexist with this one. If anything, my PR will mean that future refactors will be easier.

Joël Riou (Jul 26 2022 at 14:54):

I have nothing to object to this PR, as long as next and pred are defined using choice, and are not part of a datum that would be fields of the complex_shape structure. (Of course, some name changes shall be necessary, like X_prev_iso_zero!)

Johan Commelin (Jul 26 2022 at 15:22):

@Joël Riou Can you explain why you think they shouldn't become fields of complex_shape? Up to propositional equality there is no difference at all. And if we provide default arguments, you could pretend that those fields are not there at all.

Joël Riou (Jul 27 2022 at 04:21):

I have tried to give some arguments above. One is about (co)chain_complex, and another is about annoying indices like (n+1)-1 that will pop up. Could you explain what the arguments in favour of this are?

Johan Commelin (Jul 27 2022 at 04:30):

Sure, we will have those problems. But we also have them now (in mathlib master, and in my refactor).
In fact, if you make succ and pred fields of complex_shape, then complex_shape.pred (1:ℕ) = 0 would be defeq, but with the current refactor it isn't.
But I'm happy to keep the current version. I just don't think that extra defeqs will do any harm.
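
The contrast in miniature (nat_pred is a standalone illustration):

-- a `pred` defined by pattern matching reduces definitionally ...
def nat_pred : ℕ → ℕ
| 0       := 0
| (n + 1) := n

example : nat_pred 1 = 0 := rfl
-- ... whereas a `prev` chosen via `exists.some` does not reduce by `rfl`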

Joël Riou (Jul 27 2022 at 05:50):

The homology is a subquotient of K.X i. For example, K.boundaries i is defined as the image subobject of K.d_to i, which is K.d (c.prev i) i with your refactor. When we want to use this, we have boundaries_eq_image_subobject and boundaries_iso_image, which take as input a relation c.rel j i to express the boundaries as the image of K.d j i. I do not see how setting a preferred c.pred would make the situation any better, because I doubt that this chosen c.pred would be the best possible in most situations (because of the (n+1)-1 type of issues). Setting a priori a presumably "better" def of c.pred does not seem that useful in my opinion. I would rather keep the c.prev of your PR and have a more "on-demand" approach, which is already provided by lemmas such as boundaries_iso_image.

(Note also that for succ, even though 0+1 = 1 is a definitional equality, Lean may sometimes have difficulties identifying the two, which would mean more use of erw rather than rw/simp. If we have to work on homology in degree 0, it may still be more convenient to say explicitly that we want to work with 1 rather than 0.succ or 0+1.)

It may not do any harm, but this would still be more work when defining a complex_shape (and it may break (co)chain_complex, as I suggested above). Then, is it really worth it?

Johan Commelin (Jul 27 2022 at 05:58):

Yes, I agree that any potential benefits of having pred and succ as fields of complex_shape are not so large. So I'm very happy to leave them as definitions, like I do in the current PR.

Johan Commelin (Jul 27 2022 at 17:31):

The linter is now happy with

refactor(algebra/homology): better defeqs, less case splitting #15690


Last updated: Dec 20 2023 at 11:08 UTC