# 2021 - The Bottom Line

Since the mathlib repository was created in summer 2017, each year has been bigger than the last. As an end-of-year retrospective, we look at how the mathlib library and community have developed in 2021.

## Statistics

Since early 2019, mathlib has followed a "squash and merge" strategy for pull requests. One commit to the main branch corresponds to one merged pull request.
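Because every merge is a squash, per-period pull-request counts can be read directly off the commit history with `git rev-list`. A minimal sketch — it builds a throwaway repository so it is self-contained; in an actual mathlib checkout only the final command (or its dated variant) is needed:

```shell
# Under squash-and-merge, one commit on the main branch == one merged PR,
# so PR counts can be read off with `git rev-list`.
# We build a throwaway repository here so the snippet is self-contained.
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "feat(order): first squashed PR"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "feat(algebra): second squashed PR"
git rev-list --count HEAD   # prints 2: two commits, hence two merged "PRs"
# In a mathlib checkout, restrict the count to one year with:
#   git rev-list --count --since=2021-01-01 --until=2022-01-01 HEAD
```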

|                             | Dec 31, 2020 | Dec 31, 2021 | Increase      |
|-----------------------------|--------------|--------------|---------------|
| Number of files             | 1498         | 2329         | 831 (55%)     |
| Lines of code in repository | 455152       | 745259       | 290107 (64%)  |
| SLOC in `/src`              | 284441       | 473644       | 189203 (67%)  |
| Lines of comments           | 65219        | 108658       | 43439 (67%)   |
| Total commits               | 5751         | 10960        | 5209 (91%)    |
| Definitions                 | 15437        | 24861        | 9424 (61%)    |
| Theorems                    | 44038        | 72847        | 28809 (64%)   |
| Contributors                | 132          | 209          | 77 (58%)      |

### Records

December 2021 set a record as the month with the highest number of commits, at 565. Four of the five most active months happened in 2021; December, October, and September were mathlib's first, second, and third most active months, respectively.

Yury Kudryashov was the year's most prolific contributor, with 662 commits, surpassing his previous year's record of 624. Yaël Dillies deserves a special shoutout: in their first year of contributing, they became the year's fourth most prolific contributor, with 301 commits.

## New contributors to mathlib

77 people contributed to mathlib for the first time in 2021. In total, 864 commits were from new contributors. Many thanks to:

## New mathlib maintainers

The group of mathlib maintainers has also grown! In 2021, Rémy Degenne, Markus Himmel, Bhavik Mehta, Oliver Nash, Adam Topaz, and Eric Wieser joined the team, which now numbers 23. (With the latter three, the team finally includes surnames from the second half of the alphabet.)

## New field in mathlib: probability theory

The measure theory and analysis libraries have been developed enough to allow the definition of some probability theory concepts. mathlib now contains definitions of independence, probability density functions, conditional expectations, martingales and stopping times. We don't have many theorems about them yet and many other basic definitions are still missing, but the foundations are in place and 2022 should see a large development of probability theory in mathlib.
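For a flavour of what is now available, here is a sketch of the new probability API. The declaration names below are quoted from memory and may not match the library exactly; they are meant as signposts, not verbatim mathlib.

```lean
-- Sketch of the probability-theory API in mathlib (late 2021).
-- Names are approximate and may differ slightly from the library.
--
-- independence of two random variables `f g : α → β` under a measure `μ`:
--   probability_theory.indep_fun f g μ
-- conditional expectation of `f` with respect to a sub-σ-algebra `m`:
--   measure_theory.condexp m μ f
-- martingales and stopping times with respect to a filtration `ℱ`:
--   measure_theory.martingale f ℱ μ
--   measure_theory.is_stopping_time ℱ τ
```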

## New Lean 3 community releases

The community has continued to maintain its version of Lean 3. Lean 3.24.0c was released Jan 4, 2021; as of Dec 31, the current version was 3.35.1c. Changes include:

• Parts of the core library were transitioned to mathlib to ease maintenance.
• Features were changed to support the transition to Lean 4, including coercion handling, ite/dite, and export formats.
• Many bugs were squashed and much documentation was added.
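As an illustration of the `ite`/`dite` pair mentioned above: `ite` is the ordinary if-then-else, while `dite` ("dependent if-then-else") makes the proof of the condition (or its negation) available in each branch. This example is purely illustrative; the actual Lean 3 changes concerned how these constructs are elaborated.

```lean
-- `ite`: the decision is discarded.
example (n : ℕ) : ℕ :=
if n = 0 then 1 else n          -- sugar for `ite (n = 0) 1 n`

-- `dite`: the proof is available in each branch;
-- here `h : ¬(n = 0)` can be used in the else branch.
example (n : ℕ) : ℕ :=
if h : n = 0 then 1 else n - 1
```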

The Lean 3 repository includes a full change log.



This says that a certain system of complexes `(BD.data.system κ r V r').obj (op $ of r' (Mbar r' S))` satisfies the technical condition of weak bounded exactness with respect to certain parameters. This system of complexes fundamentally depends on a semi-normed group `V` and a certain space `Mbar r' S`. We will say a bit more about this mysterious object `Mbar r' S` later.

To complete the final challenge, we need to remove the `sorry` from the following statement:

```lean
variables (p' p : ℝ≥0) [fact (0 < p')] [fact (p' ≤ 1)] [fact (p' < p)] [fact (p ≤ 1)]

theorem liquid_tensor_experiment (S : Profinite.{1}) (V : pBanach.{1} p)
  (i : ℕ) (hi : 0 < i) :
  Ext i (ℳ_{p'} S) V ≅ 0 :=
sorry
```

In this statement, `ℳ_{p'} S` is morally a space of real-valued measures depending on the real parameter `p'`, and the `p`-Banach space `V` is an example of the semi-normed groups showing up in `first_target` above. But the `Ext` in this statement is really about the Ext-groups of condensed abelian groups, so both `ℳ_{p'} S` and `V` are viewed as condensed abelian groups in this theorem.

## Homological algebra

To be able to state this theorem (e.g., to make sense of `Ext`), we need to develop the theory of derived functors for arbitrary abelian categories with enough injectives/projectives. This was done by Scott Morrison, who contributed definitions of Ext and Tor to mathlib about half a year ago. We also need to show that the category of condensed abelian groups satisfies these conditions. Recently, Adam Topaz finished a formalization of the fact that on an arbitrary site (= category + Grothendieck topology), the category of sheaves with values in a suitable abelian category is itself an abelian category. In particular, the category of abelian sheaves, and hence the category of condensed abelian groups, is an abelian category. The fact that condensed abelian groups have enough projectives is almost done, so that finally the `Ext` in the statement above no longer depends on unformalized assumptions.
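For readers curious what this looks like in the library, here is a rough sketch of the derived-functor Ext. The signature is paraphrased from memory rather than quoted, so names and universe annotations may differ from the actual mathlib source.

```lean
-- Rough sketch (paraphrased, not verbatim mathlib):
-- for an `R`-linear abelian category `C` with enough projectives,
-- `Ext R C n` is a bifunctor computed from projective resolutions
-- in the first variable.
--
--   def Ext (R : Type*) [comm_ring R] (C : Type*) [category C]
--     [abelian C] [linear R C] [enough_projectives C] (n : ℕ) :
--     Cᵒᵖ ⥤ C ⥤ Module R
--
-- The Liquid Tensor Experiment statement instantiates this (in the
-- ℤ-linear case) with `C` the category of condensed abelian groups.
```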
To be able to compute the Ext-groups, we need a library of results about homological complexes, long exact sequences, and such. A fundamental input is of course the snake lemma, and this was formalized by Riccardo Brasca and Adam Topaz in the fall. Andrew Yang is currently working towards a formal proof that the homotopy category of complexes is a pretriangulated category. This will give us important tools to manipulate the Ext-groups into a shape that reduces to `first_target`.

## Spaces of measures

As mentioned above, the space `ℳ_{p'} S` is a real condensed vector space that is morally a space of measures. One of the crucial insights in the proof is that it is a quotient of an arithmetic analogue that we've been calling the space of Laurent measures: `ℳ(S, ℤ((T))_{r'})`, with `(1/2)^{p'} = r'`. This quotient map turns out to be the cokernel of an injective endomorphism of the space of Laurent measures. Together, these two maps form a short exact sequence, which leads to a long exact sequence of Ext-groups. Hence the vanishing of the Ext-groups above can be reduced to a question about similar Ext-groups, but this time involving these spaces of Laurent measures.

In turn, these spaces of Laurent measures naturally admit a free module as submodule, and the quotient is the mysterious space `Mbar r' S` that appears in `first_target`. Together, these two short exact sequences are crucial inputs that will be combined with the homological tools mentioned above to reduce the main theorem to `first_target`. Filippo A.E. Nuccio has been working arduously on the formalization of these results, which amount to Theorem 6.9 of Analytic.pdf.

## Breen–Deligne resolutions and MacLane's Q'-construction

Finally, to compute the Ext-groups, at some point projective resolutions must enter the picture. The proof in Analytic.pdf relies on so-called Breen–Deligne resolutions, which have the following two crucial properties:

1. It is a functorial construction `A ↦ C(A)` that sends an abelian group (or sheaf) to a complex of abelian groups (or sheaves).
2. When viewed as a functor to the homotopy category of complexes, it is additive. In other words, `C(A ⊕ A)` is naturally homotopic to `C(A) ⊕ C(A)`.

In addition to these properties, Breen–Deligne resolutions have the favourable property that the construction yields a projective resolution `C(A)` for every `A`. For our formalization project, they also come with the significant downside that the proof of the existence of Breen–Deligne resolutions relies on technical results from homotopy theory, which haven't been formalized yet.

But to our delight, it turns out that there are related constructions satisfying (1) and (2) above which are good enough for our purposes and which can be formalized directly. Indeed, the functorial complex known as "MacLane's Q'-construction" is one such example. On top of that, the following result holds.

**Lemma.** Let $A$ and $B$ be two abelian groups (or sheaves). If $\text{Ext}^i(Q'(A), B) = 0$ for all $i \ge 0$, then $\text{Ext}^i(A, B) = 0$ for all $i \ge 0$.

This lemma is not yet formalized, and to our knowledge it does not appear in the literature. We hope to fix both of these issues in the near future. So far, we have formalized the claim that the Q'-construction satisfies properties (1) and (2). The proof of the lemma uses the tensor-hom adjunction and the fact that the homotopy category of complexes is pretriangulated. The formalisation of these two prerequisites is active work in progress. You can follow our progress on all the remaining tasks at the following GitHub project.

# Update on mathport (Dec 2021)

mathport is the tool we're planning on using to help us port mathlib to Lean 4. It has mostly been written by Mario Carneiro and Daniel Selsam, and Gabriel Ebner and I have been making some fixes.
To provide some context, mathlib is the primary library for mathematics in Lean 3, containing over 700k lines of code and growing fast! Lean 4 now has a preliminary release, and we would really like to transition to building a mathematics library in Lean 4. While the type theory and kernel in Lean 4 are quite similar from a user point of view to Lean 3, it is certainly not the case that we can run Lean 3 code in Lean 4. Our aspiration is to achieve a "semi-automated" port.

The mathport tool first of all provides a complete binary-level port of mathlib (i.e. generating a Lean 4 .olean file for every .olean file in mathlib). This means that you will be able to import all files from mathlib, as long as you don't expect to be able to look at the actual sources!

We have largely rejected the idea of a dual-compilation setup, in which existing code stays in Lean 3 while new code is written in Lean 4. The binary port could make this viable, but it seems complicated and fragile, and the community is excited to take full advantage of Lean 4. (A dual-compilation setup would not allow Lean 3 files to import from Lean 4 files, so the version transition would have to be a cut across the import graph.)

Thus mathport further attempts a "best-effort" port of the source files, translating Lean 3 syntax to Lean 4 syntax. Right now, that "best effort" is not quite good enough! Our goal is that this will steadily improve, until we reach a point at which it becomes viable for humans to finish the migration in an incremental fashion.

Notably, because of the size of mathlib, and its growth rate, we have decided it is not desirable to "freeze" mathlib3 until very late in the process. (Throughout this document, mathlib refers to the current mathlib repository, but I'll try to consistently include the 3 to disambiguate!) Thus we will be regularly running mathport on a continuously evolving mathlib3 repository, and for an initial period not depositing the output in the mathlib4 repository.
Nevertheless, mathport takes as input both mathlib3 and mathlib4, and attempts to automatically make use of the parts of the library which have already been ported, by "aligning" definitions. Thus once the output quality of mathport is sufficiently good, we expect to be able to move files across to the mathlib4 repository (starting from the bottom of the import hierarchy), while continuing to re-run mathport on the still-evolving mathlib3 repository! Eventually, however, we will declare "flag day", at which point the mathlib3 repository will stop accepting PRs for new material, and we will have a (hopefully brief!) intensive period of completing the migration.

Notably, mathport makes no attempt to automatically translate the tactics that have been written in mathlib3. Metaprogramming is sufficiently different in Lean 4 that this would be impossible, and moreover there are significant stylistic differences when writing tactics in Lean 4, so a literal translation would not be desirable. This means that we have a huge amount of work remaining to re-implement all the mathlib3 tactics in Lean 4. We've already done a few important ones (notably simp is implemented in Lean 4, and we have ring in mathlib4). We would really like to preserve feature parity as we make the transition, and so are hoping to re-implement tactics, rather than "dumb down" proofs, wherever possible.

The occasion for this blog post is that we now have continuous integration set up for mathport, and a reasonably easy-to-use setup that lets you work with the output of mathport without having to run it yourself. I'll describe that setup below, but first explain what sort of efforts are probably most useful right now to help the mathlib port. They are approximately in priority order, in terms of my guess about what will hold up the port the most.

• Porting missing tactics from mathlib3 to mathlib4. This is still a huge task, and will not be automated in any way.
If you've contributed tactics to mathlib3, please consider trying to port them to Lean 4. If you're interested in learning some Lean 4 metaprogramming, what better way to do that than porting tactics? We'll write more about this in a future post, with some pointers about places to get started, and how to hook up new tactic implementations to the existing tactic parsers that Mario has already ported to mathlib4.
• Resolve outstanding issues in mathport. (On some issues there's already an indicated fix, but it needs implementing/testing. Other issues still need diagnosis.)
• Open the mathlib3port repository (instructions below), look at files (probably starting with "low-level" files), and identify things that mathport should be doing better. Check there isn't already an open issue, then open an issue.
  • Note: at this stage I think it would be a bad idea to actually take a file from mathlib3port, clean it up, and PR it to mathlib4. That will hopefully come later, but we need to fix many mathport issues first.
  • Instead, it is fine to make changes that you think mathport should be doing already and commit these on a branch, so that you can link to diffs when opening mathport issues.
  • For now, don't worry too much about the state of proofs. We'd like to get to a state where the vast majority of statements are correctly translated as soon as possible.
• There are still many alignment problems between Lean 3 / mathlib3 declarations and Lean 4 / mathlib4 declarations. Sometimes these can be fixed by adding #align commands in mathlib4. Sometimes they may turn out to be mathport bugs. Sometimes they will reflect deeper design problems we're going to need to talk about!

## Background: what is mathport?

mathport consists of two loosely coupled components: binport (largely Daniel's work) and synport (largely Mario's work).

• binport constructs Lean 4 .olean files from Lean 3 .olean files.
It largely works, and means that you can import mathlib3 content into Lean 4 (as long as you don't expect to have source files!). This is what lets us do things like:

```lean
import Mathbin

#lookup3 algebraic_geometry.Scheme
#check AlgebraicGeometry.Scheme
```

Yay, Lean 4 has schemes! :-) To see this file in action, you should check out a copy of the mathlib3port repository described below, and make a new file there.
• synport constructs Lean 4 .lean files, on a "best effort" basis. (It uses both the output from binport, and Lean 3's --ast output, to guide it.) We should not expect that this will ever converge to a perfect translator. Instead the hope is that it gives us something that humans can plausibly improve to a complete translation of mathlib3.

To understand how mathport (mostly talking about the synport part from here on) works, it's important to understand that it is translating mathlib3 to Lean 4 source code "modulo" the current content of mathlib4. That is, the premise is that as we progressively construct mathlib4 (whether by translating by hand, moving content from mathport's output to mathlib4, or adding #align statements), the output from running mathport on mathlib3 will change.

In particular, as mathport is translating each declaration, it checks to see if a corresponding declaration in mathlib4 already exists, and is defeq. If so, mathport will instead just use that declaration. If not, mathport will make a copy, appending a `ₓ` to the name. Sometimes these misalignments are due to an unintentional non-defeq that can be fixed in mathlib4. Other times, we genuinely want to change something in mathlib4 (e.g. to use Lean 4's multiple parent structures, which are better than old_structure_cmd). As a result, we expect that some misalignments will persist throughout the mathport-assisted stage of the port, and only afterwards will we polish these away.

A detailed account of how binport and synport are working is beyond the scope of this blog post.
Hopefully we'll have one eventually, but in the meantime Mario's talk is very helpful.

## What should I look at?

Please note that mathport takes considerable resources to run on mathlib3: approximately 3.5 hours, and 32 GB of RAM. So you'll probably want to look at artifacts generated by CI rather than running it yourself. There are four GitHub repositories you can look at:

• mathlib3port should be most people's first stop. This contains a copy of the .lean files produced by a recent run of synport, in the Mathbin directory. You should just be able to check out a copy of this repository and open the folder in VS Code to see the current state of mathport output. You can also try out the above example with #check AlgebraicGeometry.Scheme in a fresh file here.
  • Remember the synported files are expected to be horribly broken; most tactics aren't implemented, and there are bugs around parenthesization of arguments and name resolution!
  • Good luck finding even a single file that compiles cleanly right now.
• lean3port is the corresponding repository containing a copy of a recent run of mathport on the Lean 3 core library. It's perhaps less interesting, but also smaller and easier to inspect.
• mathport contains the code for mathport itself, as well as the continuous integration setup that runs mathport on Lean 3 core and mathlib3 every time there is either a PR to mathport or a commit to master. The artifacts produced by CI appear at https://github.com/leanprover-community/mathport/releases, and the two repositories listed above have a lakefile.lean that will download and unpack these artifacts.
• mathlib4 is the current preliminary port of mathlib to Lean 4. Later, when mathport begins to stabilize, we will begin moving files from the output of mathport into this repository, but not yet. For now, this repository serves several purposes:
  • Primarily, it is the home for ports of tactics implemented in mathlib3 to Lean 4.
  • In order to build these tactics, a certain minimal library is necessary (e.g. for the ring tactic, we need the notion of a ring!), and we are constructing this by hand.
  • There is scope for experimental developments, but please don't just port parts of mathlib3 to Lean 4 for the sake of doing so. If you're investigating how some new Lean 4 feature might be used in the eventual port of mathlib, it's okay to do so in this repository. For now, the mathlib4 repository has fairly low standards: we don't expect the full review process used in mathlib3.

Notable parts of the mathlib4 repository relevant for mathport are:

• Mathlib/Mathport/SpecialNames.lean contains a sequence of #align statements, for cases where a definition in Lean 4 core has a different name from the name that would be produced automatically by changing case conventions from Lean 3 or mathlib3.
• Mathlib/Mathport/Syntax.lean contains definitions of all the syntaxes of tactics currently implemented in mathlib3. These have been written by hand by Mario, and we should take care to keep them up to date. Please be careful editing this file, and remember to update it if any further changes to tactics land in mathlib3. Because this file contains only the syntax declarations, if you try to use the tactics here you will get Tactic not implemented yet errors.

If you would like to work on porting tactics, this file is essentially the TODO list! You should take the relevant syntax definitions, move them to their own file in the Mathlib/Tactic/ directory, and provide an implementation. Conversely, please do not start porting a mathlib3 tactic without faithfully reproducing the syntax, as this will then cause problems for mathport. It's completely fine to provide an interim port that throws errors when encountering some of the bells and whistles specified by a syntax declaration.

## How do I run mathport?
The Makefile in https://github.com/leanprover-community/mathport is currently the best available documentation for running mathport. You will need to make sure you have curl, git, cmake, and elan installed on your system. Basic usage is make build source predata port. These stages are:

• build: compile mathport (which is written in Lean 4) itself
• source: pull the relevant commits of Lean 3 and mathlib3, and do a little preparatory work in those directories
• predata: recompile the Lean 3 library and mathlib3, with lean --ast --tlean, to generate the auxiliary files mathport needs
• port: run mathport on Lean 3 and mathlib3

Running all of them in sequence is necessary if you're starting from scratch, but is painfully slow. You don't really want to run make predata yourself. Typically you don't want to run make port on the entire library either: you'd prefer to download an artifact containing the results, but then re-run mathport on a single file, in order to try out a bugfix to mathport.

We provide artifacts for various stages of the build on the releases page of the mathport repository. The script ./download-release.sh nightly-YYYY-MM-DD downloads one of these, after which you can skip the make predata and/or make port steps (you will still need to run make build and make source).

If you've already got a local copy of the output of make port (either by running it yourself, or by using ./download-release.sh), you can also use the make TARGET=data.nat.bitwise port-mathbin-single target (similarly for port-lean-single) to run mathport on a single file. This is useful if you are testing a change to mathport.

# Backstage with Yaël Dillies

We're introducing a new category of blog post: backstage interviews with mathlib's active contributors! Today, Johan Commelin interviews Yaël Dillies.

JMC: Please tell us a bit about yourself, about your background.

YD: Hi! I am Yaël Dillies. I was brought up in France, in Nantes. That's in the west, near the coast.
I've basically lived there all my life, and went to school there. My first memorable contact with maths was via the Kangaroo math competition, when I was 13 or 14 years old. I came 3rd out of 25000 participants, and was invited to an olympiad math camp. That got me hooked, and I certainly learned a lot there. It was a really nice environment, and the competitions pushed me forward into mathematics.

After high school I had to decide: either I would stay in the French system, which meant going to prépa [JMC: two years of preparation for the grandes écoles in France], which is very intense. Also, part of what they do there is not math, but I knew that I was into math, and that's what I wanted to do. Instead, I decided to go to Cambridge. That's where I am now, in my second year of studies.

JMC: How did you first learn about Lean? How did you get involved?

YD: Leanwise, it all started with Kevin's [JMC: Kevin Buzzard] talk last February that he gave for the Archimedeans in Cambridge. I had already heard about Lean at that point, from the Chalkdust article, from which I understood that it was some game that you can play. That made me wonder: "Who is making the levels? What is the end goal?"

JMC: Wow, so in a couple of months you went from beginner to expert contributor!

YD: [Chuckles] If you say so.

JMC: Did you have prior experience with programming?

YD: I learned Java in 2015, and it's the language I fall back on if I need to do something. But I don't program so much anyway.

JMC: Ok, thanks. Let's get back to how you got involved with Lean.

YD: So Kevin's talk made me realise that there's a lot more going on, and that it's not just a game. Mathwise, I figured out a lot along the way. Leanwise, I learned a lot from Bhavik [JMC: Mehta], from him supervising me.

JMC: How did you meet him?

YD: Just around that time, there were summer internships in Cambridge advertised on some webpage.
I applied for an internship on an Isabelle project, and at the same time I rushed through Kevin's workshop. In the end, I didn't get that internship, but I continued doing the tutorials. After that, I hung around on the Xena Discord server, where I asked a tonne of questions. I was probably quite annoying at that point. But Bhavik helped me a lot. Back then, Bhavik was working on a formalisation of Sperner's lemma. At some point I asked him if I could provide any help, and he said "Yes, sure!". So then we started hacking on the sperner-again branch.

JMC: So this was in April, or?

YD: The middle of March, actually. And that's what I worked on all my Easter holidays, which are very long in Cambridge. Basically till the end of April. I wrote about 6000 lines of code, most of which is pretty crappy, to be honest.

JMC: Around that time you also started contributing to mathlib. Which parts have you worked on?

YD: It can be divided into three areas. The first is convex analysis. As I said, I was working on Sperner's lemma, which never hit mathlib. But we might continue it soon, so maybe it will then. Anyway, for this work, we needed several lemmas that were missing from mathlib. I also enjoy formalizing random bits of convex analysis, because it is elementary and I understood how to do it right. And in September I led the convexity refactor. It's not completely done yet. I plan to do the next steps during my Christmas holidays.

The second part is order theory. I think it's the first place that I really understood to the core: how the hierarchy is built; how the typeclasses interact; what kind of properties we expect from each thing; and just how everything fits together. Here I contributed the definitions of circular orders and locally finite orders, and many missing lemmas. There always are missing lemmas. And together with Yakov [JMC: Pechersky], I'm working on refactoring bits of this hierarchy, such as conditionally complete orders.
The third area I worked on is combinatorics. Not much of my work there is in mathlib yet. What I did this summer is Szemerédi's regularity lemma, and the stuff that follows from it, such as Roth's theorem. But this is all living on a branch that Bhavik and I are working on. And then I'm working on graph theory. But there's the usual problem with graph theory in mathlib: there are lots of interested people, lots of ideas, but nothing gets PR'd. Finally, I'm taking Bhavik's old Kruskal-Katona branch, and turning that into PRs to mathlib.

And all of these are related. With Bhavik, we are now considering formalizing a proof of Szemerédi's theorem, which generalises Roth's theorem. For this we will need Hales-Jewett, which David Wärn recently contributed, Sperner's theorem, which is in Bhavik's Kruskal-Katona branch, and a generalisation of that called Density Hales-Jewett. We are studying the Polymath-inspired proof.

JMC: That's a lot. Probably the refactors have been the most visible to the outside. Can you give us some insight into the brain of someone who does a refactor? What goes on behind the scenes?

YD: Hmmmm. Everything starts with an annoyance. You are on your way to somewhere, and suddenly your foot stumbles upon something, something that is not quite right. You inspect closer, you look onto the ground; and BAM! there is a rabbit hole. And of course, you have to look into it. You can't just leave it there, that's not something one should do. You have to inspect the rabbit hole, and maybe fill it up. I'm very prone to doing that, because I guess I'm pathologically perfectionistic. So I go down this rabbit hole, and there's an entire cave behind it. And I just can't leave anymore: I have to figure out how to deal with it. And that's how it happened with the convexity refactor, with the finite intervals, and all the others.
For example, the PR on Minkowski functionals depended on 12 other PRs that I made, which were all filling up little gaps that I stumbled upon.

JMC: So the rabbit hole, the annoyance, that's how you get drawn into it. But in the end, something has to happen. You need to write code, it has to be PR'd, in order to improve mathlib. And this can be nontrivial. So how do you move from observing the rabbit hole and the cave behind it, to some effective actions that improve the situation?

YD: Well, you map the cave. Maybe you go outside for a while, bring your friends in, chat around a picnic. For example, I discussed a lot with Yakov about the convexity refactor. In fact, he was probably the person who got me started with it. You'd better have that conversation now, before you find yourself with a crappy remblai [JMC: French for "backfill"]. The cave is there. It's not going anywhere. You know you will eventually have to fill it up, but there is no urge to do so. There's usually little incentive in rushing a refactor.

JMC: Can you briefly sketch what the rabbit hole was in the case of the convexity refactor?

YD: So, what the rabbit hole was: until August 2021, convexity in mathlib was only defined for sets in real vector spaces. This is very restrictive, for various reasons. Sometimes you want to talk about convexity over the rational numbers. Yakov wanted to do tropical geometry, so then you only have an ordered semiring, not an ordered ring. There is no additive inverse in that setting. That's why he originally asked for the refactor. Similarly, Yury [JMC: Kudryashov] wanted to apply convexity to measures. But measures are a positive thing, so again you don't have additive inverses. So a first generalization could be to replace the real numbers by a linearly ordered field. But as the last two examples show, that's not sufficient. It doesn't bring much.
So the bulk of the work is to take those big files with all the lemmas, and separate them out, stratify them, according to how much structure is needed to prove them. When I started, I had quickly figured out that ordered rings should be sufficient for almost all of it. But then Yakov pointed out to me that even an ordered semiring should be enough (because he had tropical geometry in mind). And that's how it started.

So for each lemma, you look at the typeclass assumptions, and try to replace them by something weaker, until something breaks. And if it breaks but you think it shouldn't, then you try to fix the proof. Now repeat that, like 500 times.

I got it wrong the first time. I didn't localise my changes, as we say. I started by replacing all of the convex stuff, by changing all the lemmas to assume ordered semirings, everywhere; and then tried to fix all the problems as they showed up. And I ended up changing 40 files, which wasn't very efficient, and it didn't go anywhere. So then I thought about it a bit harder, and decided that I would only change the first file, which defines convexity. So I did that, and PR'd it. And after some discussion, it got merged. (The discussion brought up another rabbit hole, concerning affine spaces.) And when the first file was done, I checked which files were one step higher in the import hierarchy. And then I generalized those files. And in this fashion you work your way through the files, going further and further.

In the end, I found the process quite straightforward. And along the way I learned a lot about scalar actions in mathlib: actions, modules, associativity constraints when different objects act on each other simultaneously... I'm very glad I did this refactor. Most of the pain came from waiting for the poor reviewers to look at the 500-line diffs on GitHub.

JMC: So first there is the rabbit hole, and then your first attempt at refactoring, which failed.
Then you tried again, in a more systematic way, which worked better. And then you start PR'ing things. By that time you've gone through lots of different thoughts and ideas and options, and settled on a particular approach. And now you have to convey your ideas to the rest of the community, and make them understand that this is the best direction. How did that go?

YD: For the convexity refactor, it went quite well, because in that case it was a clear win. Everything we could do before [the refactor], we could do after. The only cost was that we now have to write the ring of scalars explicitly when stating that a set is convex, because there are potentially multiple candidate rings of scalars.

One thing that I took away was that people wanted me to do smaller PRs. Localising your changes is really the way to go. Maintainers don't want to have several thousand lines of code to digest. And they might want to say no to a small portion of it, but be ok with the rest. And the only way to go about that is either to block the PR entirely, which means the process becomes much slower, or to split the PR up. And this splitting is a part where you have to think a bit, because you have to figure out along which lines you can cut up your PR. In my case there was an obvious choice. Because I wasn't functionally changing anything, only generalizing things, I could just do it file by file; and that's how it happened.

There is also another pragmatic fact. Mathlib grows fast, and files get touched often. If you want to avoid your PR rotting away in a queue of "too big refactors", you need to make it small, so that it goes through quickly and unconflicted. The time that it takes to review a PR is not proportional to its size, but maybe to the square of its size, or something. That's just an empirical observation.

JMC: Yes, that sounds true to me. So, you've talked about three areas that you've worked on with Lean. If you had to choose, which one are you the most proud of?
YD: Well, I guess they didn't bring the same things to me. The work in order theory just satisfies my need for tidiness. Convex analysis got me into the right way of doing PRs: it taught me the process, and how to get your stuff accepted by others. And finally, combinatorics brought me neither of those, because, firstly, none of it is in mathlib, and secondly, there are lots of design decisions that are arguable and I don't think it's in its final form yet. But it brought me recognition from outside the formalisation community, the latest of course being Tim Gowers with his tweet mentioning my work with Bhavik. I actually met Tim last week, by accident. And he's really interested in what we are doing. Formal theorem proving is now something that mathematicians can do and get something out of.

JMC: But if you had to choose one? I guess Szemerédi, right?

YD: Yes, it's very fashionable. It's definitely something I'm proud of.

JMC: Has contributing to mathlib changed the way you think about any part of mathematics?

YD: Oh yes, definitely. Formalising in general, maybe not contributing to mathlib in particular, really takes you backstage of mathematics. You start to notice structures that wouldn't have occurred to you otherwise. For example, hierarchies. They are ubiquitous. Personally, I had not realised how structuring it is for mathematics that mathematical structures have some kind of order between them: that vertically there is a natural way to go from one to the next, and horizontally a natural way to build up the same structure over more and more complicated types. This brings me to monadic structures. It's a nice way to think about things. Once you empirically understand how monads work (you don't even have to get the theory), you start seeing them all over the place; many things that I was defining had a monadic structure to them. For example, I needed something about some gadget, which turned out to be its bind operation.
That gadget is built on top of other gadgets, so what I did was to prove things about their bind operations first, and I wouldn't have thought about that before. Knowing about monads makes you consider these statements [JMC: as useful building blocks].

Another thing is that people abuse language. And it's not a thing that as a beginner you really grasp. I was trying to learn a bit of category theory, maybe a year ago. And I kind of got stuck at functors, for a stupid reason: because people use the notation $F(A)$ to mean the image of the object $A$ and also $F(f)$ to mean the image of the morphism $f$. And it's really dumb, but as a beginner the confusion prevents you from seeing that there is something happening. And in mathlib it's spelt out: a functor is two functions, and they behave correctly.

Finally, I think it brings an organic approach to proof-writing. Because you really get that each thing has one definition. And you understand how to interact with that definition. You start to understand how the API around it works, that is, the wrapper around it that allows you to do basic stuff with the definition. And from this API it dawns on you how proofs about a specific object work. There's actually some math that I learned while doing Lean. Topology, with Kevin's workshop. So I found out things: to use compactness you need an open cover, and you get a finite subcover, and you can use that to do other things. And it becomes automatic at some point. You have this program, and it leads you, it leads your intuition through the proofs. In some sense, that's also why TabNine and Copilot [JMC: two editor plugins] are so impressive. They really work quite well at guessing what is coming next. And working with formal proofs makes you have some sort of Copilot in your head.

JMC: What are your plans for the next year?

YD: Actually, there are several things going on. I'm really trying to get Bhavik's branch into mathlib. And also, of course, Szemerédi's regularity lemma.
Both of these will take a while; hopefully they will be done by the end of January, but I wouldn't be surprised if it takes longer. There is also the sperner-again branch, which has lots of stuff that is ready for mathlib. Some of that stuff is seven months old, but the process of getting things into mathlib can be excruciatingly slow.

JMC: Yes, certainly. Which is connected to the next few questions. I will just ask them as a batch, and then you can pick in which order you answer them. What change would you like to see in the community? What would help you to work more effectively with mathlib and/or Lean? What do you enjoy the most? What could increase the fun?

YD: Honestly, the community is great! And Zulip is very effective. There is some stuff that I would like to see changed, but I realise that it's not very feasible, because people have a life. I would just like to see PRs get through the review process more quickly, because it arguably takes very long. I'm trying to review PRs myself, but sometimes it's hard. Part of the reason is, we're not that many.

JMC: I agree, I would also like to see it go faster. At the same time, mathlib has a pretty fast reviewing process already. But I certainly recognise the feeling: "Ooh, my PR has been sitting there for 3 days. What's going on?!" So, do you see some actionable thing that could be changed?

YD: I think there is not much that we can do by ourselves. I think what we need is more people who are capable of reviewing PRs, which will come with a bigger community. On the other hand, I'm not sure that improves the situation, because more people means more PRs!

JMC: Is there something that could be done to make it easier for people to review PRs?

YD: Initially I didn't review PRs, because I didn't feel confident that I could add something. But now that I've made many PRs myself, I know what is going on, and I'm much more active in the review process. So it's not something for beginners.
It's a subtle question. There's knowledge that is not easy to acquire; you have to go through the process. Maybe we could have some sort of event, where people can, maybe not review, but at least read through PRs, and see if they can catch anything mathematically meaningful that they would like to see changed. That would be something where people can get a sense of the [other side of the] reviewing process. Maybe once a month? First Saturday of the month? A read-a-PR day.

JMC: It's certainly a tricky problem to solve. But it would be fantastic if every PR could be reviewed in a couple of days.

YD: Anyway, it's hard to complain about such a community. There are so many things that work well. The "new members" stream on Zulip; people actually care about new members. They are taken on board, even if most questions are quite repetitive and maybe pointless to an expert eye. People still take the time to answer them. And in general, if you ask a question on Zulip, you will get an answer in less than 10 minutes.

JMC: So you mentioned TabNine and Copilot already. Are there any other things that would make you work more effectively?

YD: Something like Sledgehammer, for when you just want to bash through a proof. That would be fantastic. Another thing is that Lean is slow. I notice that I have an upper bound on my coding speed, because it takes a while before Lean updates the goal. And it's really painful when you have to wait 3 seconds at each keystroke. Had it been faster, I could have done some random thing in 10 minutes, but I spent 1h30 instead. And this happens for several reasons: long proofs and long files. If Lean could provide support for more granular recompilation of long proofs, that would be great. By now, many of the large files have been split into smaller pieces, and I am working on some of the remaining ones.

JMC: Thanks for all your answers! It's time for the final two questions. Do you have a question for the next interviewee?
YD: One thing I really like about mathlib is that it makes you appreciate the power of collaboration. Were it not for building on the work of others, I could never have achieved anything in Lean. My question is: in which ways did you find the community helped you?

JMC: And do you have any parting words or proverbial wisdom that you want to share with us?

YD: Localize your changes, split your PRs.

JMC: A great suggestion. Thanks a lot for your time!

# Semilinear maps

Since linear maps appear everywhere in mathematics, it comes as no surprise that they have been part of mathlib for quite some time. However, as we were working on adding the basics of functional analysis to mathlib, a drawback quickly became apparent: conjugate-linear maps could not directly be expressed as linear maps. This meant that some constructions could not be formulated in their most natural way: for example, the map that takes an operator to its adjoint on a complex Hilbert space is a conjugate-linear map, and so is the Riesz representation that maps a vector to its dual. This was also preventing us from developing the orthogonal group, the unitary group, etc., properly.

A few options were considered to introduce conjugate-linear maps. One possible way was to define the conjugate space of E as a type copy where scalar multiplication is conjugated; a conjugate-linear map is then a standard linear map to the conjugate space. This would have enabled us to reuse the API of linear maps without having too much to refactor, but an early attempt to do this was abandoned when converting between the conjugate space and the original space proved to be unwieldy. A further disadvantage is that the type copy would have also appeared in the real case for constructions involving is_R_or_C. Another potential solution was to define conjugate-linear maps separately from linear maps. The big drawback here is that the API for linear maps would effectively have to be duplicated for those new maps.
This left the more arduous option: redefining linear_map to also include semilinear maps. A semilinear map f is a map from an R-module to an S-module, together with a ring homomorphism σ from R to S, such that f (c • x) = (σ c) • (f x). If we plug in the identity for σ, we get regular linear maps, and if we plug in complex conjugation, we get conjugate-linear maps. There are also other examples where this generality is useful (e.g. Frobenius-linear maps) which are covered by this formulation. This implied a major refactor: we had to replace the basic definition of R-linear maps E →ₗ[R] F by σ-semilinear maps E →ₛₗ[σ] F, while keeping the original notation for plain linear maps, and deal with the various problems that this inevitably created further down the import tree. The same also had to be done for linear equivalences, continuous linear maps/equivalences, and linear isometries. This idea had first been proposed by Yury Kudryashov about a year ago, but it then took several months to build up the motivation to embark on this project. Last July, Heather Macbeth, Rob Lewis and I finally managed to start working on it, and the result was merged into mathlib in late September.

The main issue that we had to overcome involved composition of semilinear maps, and symm for linear equivalences. If we have f : E₁ →ₛₗ[σ₁₂] E₂ and g : E₂ →ₛₗ[σ₂₃] E₃, we naturally end up with g.comp f : E₁ →ₛₗ[σ₂₃.comp σ₁₂] E₃. However, in most cases of interest, this is very awkward. Suppose, for example, that we have defined the adjoint as a conjugate-linear map adjoint : (E →ₗ[ℂ] F) →ₛₗ[conj] (F →ₗ[ℂ] E), and want to express the fact that the adjoint of the adjoint is the identity; in other words, we want a lemma like:¹

lemma adjoint_adjoint : adjoint.comp adjoint = (id : E →ₗ[ℂ] F)

However, the general composition lemma for semilinear maps wouldn't give us this: the id on the right-hand side would actually be of type E →ₛₗ[conj.comp conj] F!
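As a rough Lean 3 sketch, the refactored definition has the following shape (simplified from mathlib's actual linear_map, which bundles the additive part through add_hom; the notation lines are indicative only):

```lean
-- Simplified sketch of the semilinear `linear_map`; mathlib's real
-- definition extends `add_hom` and carries more bundling.
structure linear_map {R S : Type*} [semiring R] [semiring S]
  (σ : R →+* S) (E F : Type*)
  [add_comm_monoid E] [add_comm_monoid F] [module R E] [module S F] :=
(to_fun    : E → F)
(map_add'  : ∀ x y, to_fun (x + y) = to_fun x + to_fun y)
(map_smul' : ∀ (c : R) (x : E), to_fun (c • x) = σ c • to_fun x)

-- Plain linear maps are then the special case where σ is the identity,
-- and conjugate-linear maps the case where σ is conjugation, e.g.:
--   E →ₗ[R] F   is   linear_map (ring_hom.id R) E F
--   E →ₛₗ[σ] F  is   linear_map σ E F
```

The point is that the scalar compatibility axiom map_smul' now routes the scalar through σ before acting on the target module.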
A similar problem arises with symm for a semilinear equivalence. If we have a semilinear equivalence e : E ≃ₛₗ[σ] F, then e.symm will naturally be of type F ≃ₛₗ[σ.symm] E. Again, this is undesirable in interesting cases. Suppose we have defined the Riesz representation of a vector (i.e. the map that takes a vector v : E to its dual λ x, ⟪v, x⟫ in a Hilbert space) as a conjugate-linear equivalence to_dual : E ≃ₛₗ[conj] (dual E). Then, of course, we want to_dual.symm to be of type (dual E) ≃ₛₗ[conj] E, but the general lemma regarding symm will yield a map of type (dual E) ≃ₛₗ[conj.symm] E.

To solve these two issues, we created two typeclasses to make Lean infer the right ring homomorphism. The first one is [ring_hom_comp_triple σ₁₂ σ₂₃ σ₁₃], which expresses the fact that σ₂₃.comp σ₁₂ = σ₁₃, and the second one is [ring_hom_inv_pair σ₁₂ σ₂₁], which states that σ₁₂ and σ₂₁ are inverses of each other. The ring homomorphism σ₁₃ (resp. σ₂₁) is inferred silently by the typeclass system using out_param. Then, to make our two examples go through properly, we just need to add instances for ring_hom_comp_triple conj conj id and ring_hom_inv_pair conj conj. There is also a third typeclass, [ring_hom_surjective σ], which is a necessary assumption to generalize some basic lemmas.

This refactor is now mostly complete ("mostly" because there are still lots of lemmas left to generalize!), and we have also added notation specifically for conjugate-linear maps: E →ₗ⋆[ℂ] F denotes conjugate-linear maps from E to F. Such maps are now slowly starting to appear, with the Riesz representation in PR #9924, and the adjoint coming soon!

1. The examples given here have been simplified to get to the core of the issue; in reality, these maps would involve continuous linear maps, we would most likely have to specify the type of adjoint for Lean to infer the correct types, etc.

# This month in mathlib (Nov 2021)

This post summarizes some of the activity that happened in mathlib in November.
- Geometry and algebraic topology.
  - The series PR #10284, PR #10297, PR #10303, PR #10328, PR #10334, and PR #10401 constructs the sheafification of presheaves with respect to a Grothendieck topology on a category, culminating with a construction of colimits in the category SheafedSpace. It is an important building block in setting up algebraic geometry, as well as a requirement for the Liquid Tensor Experiment (LTE). A corollary is that the category of sheaves with values in a suitable concrete abelian category is itself an abelian category. This has been formalised for LTE, and should appear in mathlib in the near future.
  - Barycentric coordinates have been further studied in a series of PRs, including for instance PR #10320 relating them to determinants.
  - PR #10381 proves that the orthogonal group is generated by reflections.
  - PR #10195 defines the fundamental groupoid, as a functor from Top to Groupoid.
- Number theory.
  - PR #9071 defines the class number of number fields and function fields. This finishes the finiteness proof of the class group of a number field and function field, described in a previous blog post.
  - PR #8820 adds the Lucas primality test.
- Group theory.
  - PR #10059 proves that (infinite) Sylow subgroups are isomorphic.
  - PR #10283 proves the full Schur-Zassenhaus theorem.
  - PR #10249 defines the exponent of a group.
- Combinatorics.
- Linear algebra and functional analysis.
  - The sequence of PRs PR #9840, PR #9995, PR #10317 establishes the connection between eigenvalues and the Rayleigh quotient, and proves the diagonalization theorem for self-adjoint endomorphisms on finite-dimensional inner product spaces.
  - PR #10145 introduces $C^*$-algebras.
  - PR #10530 proves properties of the spectrum in a Banach algebra.
  - PR #9097 defines the Minkowski gauge functional.
- Analysis.
  - PR #9791 proves the Picard-Lindelöf/Cauchy-Lipschitz theorem about ordinary differential equations.
  - PR #10057 defines Vitali families, and PR #10101 shows the main theorem on differentiation of measures: given two measures ρ and μ on a finite-dimensional real vector space, the ratio ρ (ball x r) / μ (ball x r) converges μ-almost everywhere to the Radon-Nikodym derivative of ρ with respect to μ as r tends to 0. The theorem is in fact proved in the more general context of Vitali families, as in Federer's famous book on geometric measure theory.
  - PR #10073 proves convergence of a sequence which does not oscillate infinitely.
  - PR #10258 proves that if $u_n$ is subadditive, then $u_n / n$ converges.

# Dedekind domains and class number in Lean

Pull request #9701 marks the completion of a string of additions to mathlib centered around formalizing Dedekind domains and class groups of global fields (those words will be explained below). Previous PRs had shown that nonzero ideals of a Dedekind domain factor uniquely into prime ideals, and had defined class groups in some generality. The main result in this PR is the finiteness of the class group of a global field (and in particular of the ring of integers of a number field). Formalizing these subjects has been one of my long-term goals for mathlib, and as far as we are aware, Lean is the first system in which this level of algebraic number theory is available. These formalizations have been joint work: the most notable contributors on the Lean side were Ashvni Narayanan, Filippo Nuccio and myself, with Sander Dahmen developing a new finiteness proof of the class group specially for this project. Of course, we could not have done this without the assistance of the entire mathlib community. Sander, Ashvni, Filippo and I wrote a paper on the formalization project for the ITP 2021 conference; this blog post goes through the highlights of the paper.
The term "algebraic number theory" can be parsed in two ways. Parsing it as (algebraic (number theory)), we get a subarea of number theory, the study of the integer numbers, that uses algebraic techniques to solve equations such as $x^2 + 2 = y^3$. Alternatively, we can parse it as ((algebraic number) theory): the area of mathematics studying algebraic numbers, those satisfying a polynomial equation $f(\alpha) = 0$ for some nonzero polynomial $f$ with rational coefficients. Algebraic numbers are found in number fields, which are finite extensions of the field of rational numbers, or equivalently fields generated by adjoining an algebraic element $\alpha$ to $\mathbb{Q}$ (by virtue of the primitive element theorem). Much like $\mathbb{Q}$ contains the integers $\mathbb{Z}$ as a subring, a number field $K$ contains a ring of integers $O_K$, which consists of exactly those $x \in K$ that are the solution to a monic polynomial with integer coefficients. Algebraic number theory also considers function fields, which are fields isomorphic to finite extensions of $\mathbb{F}_q(t)$, the field of rational functions over some finite field $\mathbb{F}_q$ of cardinality $q$. Number fields and function fields together are called global fields, and they have a number of important characteristics in common. Like the ring of integers $O_K$ in a number field $K$, a function field $F$ has a ring of integers $O_F$, although in the function field case the definition depends on the realisation of $F$ as a finite extension of $\mathbb{F}_q(t)$. In this case the integers are defined as those $x \in F$ that are the solution to a monic polynomial with coefficients in $\mathbb{F}_q[t]$. Throughout our formalization process, we made sure to treat all global fields as uniformly as possible.
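To illustrate the definition of $O_K$ with a standard example (not from the post): the ring of integers can be strictly larger than the obvious candidate, because the defining polynomial only needs to be monic with integer coefficients.

```latex
% For K = \mathbb{Q}(\sqrt{5}), the element (1 + \sqrt{5})/2 is an
% algebraic integer, being a root of the monic polynomial x^2 - x - 1, so
\[
  O_{\mathbb{Q}(\sqrt{5})}
  \;=\; \mathbb{Z}\!\left[\tfrac{1 + \sqrt{5}}{2}\right]
  \;\supsetneq\; \mathbb{Z}[\sqrt{5}].
\]
```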
We spent a lot of effort on creating a useful interface to deal with field extensions such as $\mathbb{Q} \subseteq K$ and subrings such as $O_K \subseteq K$, without having to transport everything to a subtype of a "largest relevant field" such as $\mathbb{C}$. The algebra and is_scalar_tower typeclasses were invaluable in automating away all kinds of messy detail checking. In this context I would like to name Eric Wieser as someone whose skill in (ab)using typeclasses prevented many headaches.

The structure of a ring of integers is not quite as nice as the structure of $\mathbb{Z}$. For example, while every nonzero integer factorizes uniquely into a product of prime powers, in the ring $\mathbb{Z}[\sqrt{-5}] = O_{\mathbb{Q}(\sqrt{-5})}$ the number $6$ factorizes non-uniquely as $6 = 2 \cdot 3$ and as $6 = (1 + \sqrt{-5})(1 - \sqrt{-5})$. We can recover some of the useful properties by considering instead the ideals of $O_K$. Indeed, recovering unique factorization was Kummer's motivation for studying "ideal numbers", the predecessor to the modern notion of ideals. Nonzero ideals of $O_K$ do indeed factorize uniquely into products of prime ideals. This unique ideal factorization property follows from the fact that a ring of integers is a Dedekind domain. Thus, we formalized that a ring of integers is a Dedekind domain and that Dedekind domains have unique ideal factorization.

A question we might ask is how far away a Dedekind domain is from unique element factorization. In a principal ideal ring, where all ideals are principal, i.e. generated by one element, unique factorization of elements and unique factorization of ideals are essentially the same property. Therefore the number of non-principal ideals tells us something about the failure of unique factorization. The class group $\mathrm{Cl}(R)$ of a ring $R$ is defined as the quotient of all ideals modulo the principal ideals: $I$ and $J$ are in the same class iff $(x)I = (y)J$ for some nonzero $x, y \in R$.
In a ring of integers of a global field, the class group is always finite, so it makes sense to talk about the class number of a number field $K$, which is the finite cardinality of $\mathrm{Cl}(O_K)$; by the arguments above, the class number is equal to one if and only if all ideals are principal, if and only if $O_K$ has unique element factorization.

The pen-and-paper proof that the class number is finite requires some results, like Minkowski's theorem, that were not yet available in mathlib, and differs significantly between the number field and function field cases. Sander designed a new finiteness proof that works uniformly for all global fields, as long as there exists something we call an admissible absolute value. Very specifically, we formalized that the class group of $S$ is finite if $S$ is the integral closure of a Dedekind domain with Euclidean algorithm $R$ in a finite separable extension $L$ of the fraction field $K$ of $R$, provided $R$ has an admissible absolute value; and the final PR #9701 shows this is indeed the case whenever $S$ is the ring of integers of a global field.

Before that final pull request could be merged, we ran into an unexpected issue that delayed it by about a month: the definition of a function field relies on a field of rational functions $\mathbb{F}_q(t)$, which we denoted in Lean as fraction_ring (polynomial Fq). Both fraction_ring and polynomial, and more importantly their field resp. ring structure, are quite complicated definitions. This is no problem when working with them normally; however, when there are missing typeclass instances, Lean can end up desperately unfolding all of these definitions into their basic axioms, causing timeouts during error reporting. We want errors to be reported quickly, and indeed mathlib has a linter that ensures missing instances fail in the expected way, so we needed to fix this timeout issue before the PR could get merged. In the end, I contributed a new definition of rational functions, ratfunc.
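The shape of the fix can be sketched as follows (a simplified sketch; mathlib's actual ratfunc definition carries more assumptions and a carefully designed API on top):

```lean
-- Wrapping the fraction field in a one-field structure gives the type a
-- name of its own, so instance search no longer eagerly unfolds it into
-- the underlying `fraction_ring (polynomial Fq)` definition.
structure ratfunc (Fq : Type*) [field Fq] :=
of_fraction_ring ::
(to_fraction_ring : fraction_ring (polynomial Fq))
```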
Since ratfunc is a structure type, ratfunc Fq will not be unfolded so drastically, and the timeout is resolved. This is an example of the timeout issues I'm running into frequently, suggesting that mathlib is running into the limitations of Lean 3's simple instance search mechanism. Hopefully Lean 4's improved algorithm solves these issues without workarounds like having to introduce new structure types.

Having formalized the class number opens up a number of areas of future research. My next goal is to formally compute the class group of some simple rings of integers like $\mathbb{Z}[\sqrt{-5}]$ "by hand". Once we have a good understanding of how to do so manually, I hope to automate these calculations as much as possible, in the end creating a tactic that takes a number field and returns its class number (perhaps interfacing with computer algebra systems to do the hard computations, and certifying the results within Lean). The end goal for this kind of automation is a system where you can enter an equation like $x^2 + 2 = y^3$ for $x, y \in \mathbb{Z}$, and Lean would output a formally verified set of solutions.

# Contributions to mathlib from LTE about normed groups

When the Liquid Tensor Experiment started, in December 2020, mathlib already had a decent theory of normed spaces. With this post I want to show how mathlib can benefit from projects like LTE, by showing what we added to the theory of normed spaces in almost one year of work (this is only a small part of what has been added to mathlib from LTE; for a more complete list, see the history of the for_mathlib folder). Besides several small missing lemmas, we added the following notions.

- normed_group_hom: we already had the operator norm, but no bundled hom between normed groups. We introduced normed_group_hom G H, which is itself a normed group. We also introduced kernels and images of normed group homs.
- semi_normed_group: a seminorm generalizes a norm by allowing nonzero vectors of zero (semi)norm.
This notion is needed in LTE, and we introduced it in mathlib (providing a reasonable API). Since normed_group depends on metric_space, which in turn depends on emetric_space, we first of all had to introduce pseudometric spaces and their extended counterparts. We also introduced semi_normed_space and similar related notions.
- normed_group_quotient: the theory of quotients of (semi)normed groups was completely missing. We now have a good API for it.
- normed_group_hom.completion: similarly to normed_group_quotient, mathlib did not have completions of normed groups. Using the already existing theory for topological groups, we added an API for completions of normed groups.
- nnnorm: sometimes it is useful to consider the norm as taking values in the nonnegative reals. We introduced the class has_nnnorm, with the obvious instances, and wrote an API for it.
- SemiNormedGroup: we introduced SemiNormedGroup, the category of seminormed groups, as a preadditive category with kernels and cokernels. We promoted normed_group_hom.completion to a functor, showing its universal property.

Working with cokernels, an interesting problem arose: if f : X → Y is a normed group hom, one usually considers the cokernel coker(f) = Y/Im(f) with the quotient norm, and it is obvious that the projection π : Y → coker(f) satisfies ∥π∥ ≤ 1. This is often needed in computations, but the category theory API doesn't promise any particular model of the cokernel, so one can for example scale the quotient norm by any positive factor, ending up with another cokernel whose natural projection has norm bigger than 1. If f itself has norm at most 1, one can work with SemiNormedGroup₁, the category of seminormed groups and norm-nonincreasing morphisms (which we proved has cokernels), but in general we ended up providing explicit_cokernel f, an explicit choice of cokernel, which has good properties with respect to the norm.
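For reference, the quotient seminorm on coker(f) appearing above is the standard one (not spelled out in the post), from which the norm bound on the projection follows immediately:

```latex
% Quotient seminorm on coker(f) = Y / Im(f); taking x = 0 shows that the
% projection satisfies \|\pi(y)\| \le \|y\|, hence \|\pi\| \le 1.
\[
  \bigl\| \, y + \operatorname{Im}(f) \, \bigr\|
  \;=\; \inf_{x \in X} \, \| y - f(x) \|
  \;\le\; \| y \|.
\]
```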
This was enough for LTE, but is still not completely satisfactory, since one cannot directly use the category theory API for explicit_cokernel.

# This month in mathlib (Oct 2021)

This post summarizes some of the activity that happened in mathlib in October.

## Highlighted PRs

- In algebraic geometry, there has been a surge in activity. After the definition of schemes entered mathlib, the development stalled for a while, because the Liquid Tensor Experiment took up a lot of energy and attention. But new contributors showed up, and are pushing the library forward. Two highlights this month:
  - PR #9416: global sections of the spectrum of a ring are isomorphic to that ring.
  - PR #9861: closed points in the prime spectrum correspond to maximal ideals.
- In commutative algebra:
  - Algebraic independence and transcendence are introduced in PR #9229, and PR #9377 proved the existence of transcendence bases. See the Stacks project for an informal account.
  - PR #9817 and PR #9834 fill a glaring hole in the field theory library: finite fields of the same cardinality are isomorphic.
- In group theory:
  - PR #8976 proves the Jordan-Hölder theorem. It also proves a version for submodules.
  - PR #9662 proves Frattini's argument: if $N$ is a normal subgroup of $G$, and $P$ is a Sylow $p$-subgroup of $N$, then $\mathrm{N}_G(P)N = G$.
- In number theory, PR #9646 and PR #9702 prove that Liouville numbers form a dense $G_\delta$ set which has measure zero.
- In combinatorics, PR #7825 proves a generalized version of Hall's marriage theorem using a compactness argument. Using the language of category theory, there is an inverse system of solutions to the marriage problem when restricted to finite subsets of the domain, and the limit of this kind of inverse system is nonempty.
- In measure theory, the series of PRs PR #9462, PR #9461, PR #9576, PR #9679 and PR #9680 proves the Vitali and Besicovitch covering theorems (with the optimal constant for the Besicovitch covering theorem, following Füredi and Loeb, On the best constant for the Besicovitch covering theorem). These theorems ensure that, from a covering of a (not necessarily measurable) set $s$ by nice enough sets (e.g. balls), one can extract a disjoint subfamily covering almost all of $s$. They are instrumental in forthcoming results on differentiation of measures.
- In probability:
  - PR #9323 defines probability density functions,
  - PR #9378 proves that the conditional expectation of real functions equals the Radon-Nikodym derivative,
  - PR #9469 brought us notations that really demonstrate we're now doing probability in mathlib!
- In algebraic topology, PR #9252 and PR #9141 set up some basic homotopy theory.
- In topological algebra, some foundational parts of the perfectoid project were moved to mathlib after a long time of inactivity:
  - PR #9521 defines the adic topology associated to an ideal.
  - PR #9589 defines the topology associated to a valuation on a ring and proves its fundamental properties. In particular, a valuation on a field 𝕂 is extended to the completion of 𝕂 with respect to the valuation topology.
- In differential calculus:
  - PR #9496 and PR #9811 prove the divergence theorem on a box, for a Henstock-style integral and the Bochner integral respectively. This is a foundational result that will be used in complex analysis and differential topology.
  - PR #9440 defines superpolynomial decay of a function.
- In functional analysis:
  - PR #9540 proves the open mapping theorem for maps between affine spaces.
  - PR #9924 states the Riesz representation theorem uniformly over $ℝ$ and $ℂ$. In 2020, Frédéric Dupuis proved this theorem over both $ℝ$ and $ℂ$ (see PR #4379), but, since mathlib lacked conjugate-linear maps at that time, the $ℂ$-version was stated in a rather awkward way.
Conjugate-linear maps were added to mathlib last month, and now the $ℝ$ and $ℂ$ versions of the theorem can be stated uniformly.
- The core version of Lean was bumped twice in the past month. Most of the changes are in preparation for porting mathlib to Lean 4.