Zulip Chat Archive

Stream: condensed mathematics

Topic: thm95.double_complex


Johan Commelin (Mar 18 2021 at 14:44):

I just pushed a definition of the double complex that is the main protagonist of the proof of 9.5

Johan Commelin (Mar 18 2021 at 14:44):

There are still a couple of sorrys left

Johan Commelin (Mar 18 2021 at 14:44):

But the big skeleton is there.

Johan Commelin (Mar 18 2021 at 14:45):

All of this is taking place in the subdirectory thm95/. At some point we might want to give some of these directories/files more canonical names.

Johan Commelin (Mar 18 2021 at 14:46):

The proof strategy is to show inductively that this double complex satisfies normed_spectral_conditions. Once that is done, we can apply normed_spectral (=9.6) and we are done. Sounds easy, right?
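
A minimal Lean sketch of that interleaved induction, with purely made-up names (cond, conclusion and normed_spectral below are placeholders; nothing here is taken from the repo):

-- toy sketch: `cond m` stands for "the double complex at stage m satisfies
-- normed_spectral_conditions", `conclusion m` for the statement of 9.5 at
-- stage m, and `normed_spectral` plays the role of 9.6
example (cond conclusion : ℕ → Prop)
  (normed_spectral : ∀ m, cond m → conclusion m)
  (base : cond 0)
  (step : ∀ m, conclusion m → cond (m + 1)) :
  ∀ m, conclusion m :=
begin
  intro m,
  apply normed_spectral,
  induction m with m ih,
  { exact base },
  { exact step m (normed_spectral m ih) },
end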

Johan Commelin (Mar 18 2021 at 14:48):

There are basically three conditions to check:

  • row_exact: this one is easy. It comes from the induction hypothesis. I hope it will be a 1-liner, even in Lean.
  • col_exact: this one is a lot trickier. This is where we need 8.17, 8.19, and 9.2. It relies on a hypercover argument.
  • a homotopy argument, scattered over 4 or 5 fields of normed_spectral_conditions. Here we need 9.13.
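
Purely as an illustration of how such conditions get packaged (the names below are invented, and the repo's actual normed_spectral_conditions carries more data, in particular the homotopy part spans several fields), the shape is a Prop-valued structure whose fields are the individual conditions:

-- invented toy structure, not the repo's definition
structure toy_conditions (row_ok col_ok homotopy_ok : Prop) : Prop :=
(row_exact : row_ok)       -- comes from the induction hypothesis
(col_exact : col_ok)       -- via 8.17, 8.19, 9.2 and the hypercover argument
(htpy      : homotopy_ok)  -- the homotopy condition, via 9.13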

Johan Commelin (Mar 18 2021 at 15:51):

Ooh, I just realized that I didn't produce a system_of_double_complexes but only a cochain_complex ℕ system_of_complexes. :see_no_evil:

Johan Commelin (Mar 18 2021 at 18:29):

lemma double_complex.row (i : ℕ) :
  (double_complex BD c' r r' V Λ M N).row (i+1) =
  (BD.system c' r V r'
    (Hom (polyhedral_lattice.conerve.obj
    (PolyhedralLattice.diagonal_embedding Λ N) (i+1)) M)) := rfl

Johan Commelin (Mar 18 2021 at 18:29):

This lemma confirms that (apart from row 0) all the rows in the double complex are of the form that appears in the statement of theorem 9.5

Johan Commelin (Mar 18 2021 at 18:29):

So we can apply induction to prove that they are bounded exact

Johan Commelin (Mar 18 2021 at 18:30):

row 0 is also of that form, but for a different lattice. The rows are defined by case distinction
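
A tiny self-contained illustration (with made-up names, nothing from the repo) of what "defined by case distinction" means here: row 0 gets its own definition, and the rows i + 1 share another, so for positive indices the identification is a rfl, just like double_complex.row above.

-- made-up example: `row_zero` plays the role of the row-0 system and
-- `row_succ i` the rows i+1 built from the Čech conerve
def toy_row (row_zero : Type) (row_succ : ℕ → Type) : ℕ → Type
| 0       := row_zero
| (i + 1) := row_succ i

-- the analogue of `double_complex.row`: the case distinction evaluates
-- definitionally on a successor index, so the proof is `rfl`
example (A : Type) (B : ℕ → Type) (i : ℕ) : toy_row A B (i + 1) = B i := rfl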

Johan Commelin (Mar 19 2021 at 20:12):

I just pushed an update that contains the global outline of the induction argument for the proof of 9.5.

Johan Commelin (Mar 19 2021 at 20:13):

We need to fill in these sorries: https://github.com/leanprover-community/lean-liquid/blob/master/src/thm95/default.lean#L60
And also the other 37 sorries in the repo (-;

Johan Commelin (Mar 19 2021 at 20:13):

Note that this NSC (normed_spectral_conditions) definition that I link to is still missing some hypotheses... we'll figure those out along the way.

Johan Commelin (Mar 19 2021 at 20:14):

@Peter Scholze I'm gathering all the constants in one file: https://github.com/leanprover-community/lean-liquid/blob/master/src/thm95/constants.lean

Johan Commelin (Mar 19 2021 at 20:15):

Currently a lot of these constants are still missing their definitions. Again, we'll figure them out along the way.

Johan Commelin (Mar 20 2021 at 13:21):

I tried to flag all the deceptive sorrys that currently aren't provable, and added some remarks close to others that should be provable

Johan Commelin (Mar 20 2021 at 13:34):

If anyone is looking for new targets for this weekend (but feel free to enjoy the sun, or Milano-San Remo):

  • the sorry at the bottom of polyhedral_lattice/cech.lean
  • the sorrys in system_of_complexes/rescale.lean
  • the 3 sorrys at the bottom of thm95/double_complex.lean

Kevin Buzzard (Mar 20 2021 at 17:52):

I'm going to have a go at the rescale.lean stuff in about ten minutes. I'm just mentioning that here in case anyone else has started.

Kevin Buzzard (Mar 20 2021 at 17:58):

PS is this GPT AI thing just generally up and running in the repo? What do I type to see what the AI thinks the next line of the proof should be? Has anyone actually found this useful in practice yet?

Johan Commelin (Mar 20 2021 at 17:59):

Yes, Patrick (I think) set it up

Alex J. Best (Mar 20 2021 at 18:06):

gptf or neuro_eblast are the two main tactics implemented there, I believe

Kevin Buzzard (Mar 20 2021 at 18:11):

It's hard to think of a theorem which could withstand a neuro_eblast! I'll give it a go!

Kevin Buzzard (Mar 20 2021 at 18:40):

Hmm, it says I don't have an API key. I've just signed up for one following the instructions I found in the lean-gptf stream. I'm assuming that's the right thing to do!

Alex J. Best (Mar 20 2021 at 19:16):

Yes, and you can either set it as an environment variable, or you should be able to just pass it in as an argument to the tactic, like

gptf {api_key := "MY_KEY"}

Kevin Buzzard (Mar 20 2021 at 20:13):

OK so I've just pushed a sorry-free system_of_complexes/rescale.lean, and I didn't even use neuro_eblast. These things take me forever, but with the infoview and constant use of the delta and dsimp tactics I can usually get to the bottom of what's going on. I tried to add some helpful refl lemmas, but rewriting is full of dangers here. One thing I discovered: if v : A and you have lemma h : A = B := rfl, then if the goal depends on v, rw h at v might give you two v's and motive trouble later, but change B at v works fine. To avoid change I just did rintro (v : rescale r (C c i)), -- this does a very nice definitional rewrite behind the scenes and doesn't leave any chaos in its wake.
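
For anyone who hasn't seen that rintro trick before, here is a self-contained toy version (wrapped is a stand-in for rescale r (C c i); nothing here is from the repo): the type ascription in the pattern performs the definitional change without invoking rw.

import tactic

-- `wrapped` plays the role of `rescale r (C c i)`: a definitional wrapper
def wrapped (α : Type) : Type := α

example (α : Type) (p : α → Prop) (h : ∀ a : α, p a) :
  ∀ v : wrapped α, p v :=
begin
  -- the ascription `(v : α)` changes the type of `v` definitionally,
  -- avoiding the motive trouble that `rw` can cause
  rintro (v : α),
  exact h v,
end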

Johan Commelin (Mar 20 2021 at 21:12):

Thanks a lot!


Last updated: Dec 20 2023 at 11:08 UTC