Zulip Chat Archive
Stream: mathlib4
Topic: leanup design discussion
Mario Carneiro (Jan 14 2024 at 17:16):
I'm thinking about what a "mathport4" would look like, which migrates projects from old lean 4 to new lean 4. The provisional name of the project is `leanup`, since it's a bit snappier. Here's an overall sketch:
- It is triggered by the user where they would otherwise use `lake update`. Importantly, we are currently on an old version of lean 4 and the current project builds. If the project is not on a supported version, we require that they manually upgrade or downgrade to the nearest supported version first.
- First, we build the project and generate an "elaborator dump", a JSON version of the AST and the InfoTree. This will be a highly compatible format, shared between all supported lean versions and versioned at high granularity.
- We apply "migrations" sequentially to hop from one supported version to the next. We don't modify the elaborator dump or the text directly, but rather build up a semantic diff which expresses the changes to be made.
- We upgrade the project to the target supported version.
- We apply the changes to the text, making use of block indent motion where possible to avoid unnecessary textual diffs.
- The user can run `lake update` if there is still a gap to the latest version; this part is done without assistance.
Step 2 is the interesting one. (Steps 1 and 4 are also 80% of the work needed for a code reformatter.) Possible migrations are:
- A theorem is renamed. This will be kept symbolic, but during step 3 we can take into account the namespaces and opens to ensure that we refer to the name in a natural way.
- A theorem is deleted. Unfortunately there isn't much we can do in this case, it will have to be handled in the migration management stage.
- A theorem has arguments reordered, added or removed. This is an especially common kind of syntactic transformation that we can have special support for.
- A piece of syntax (a command, term elab or tactic) is modified or restructured. These will have to be handled by custom code ("migration plugins") for the most part.
The key piece in this mechanism is the migration list. This project will not depend on mathlib (and will likely be a dependency of mathlib, for reasons described below), so the migration list is something that has to be provided externally through data files. `leanup` itself can provide migrations for lean changes, but mathlib will want to add its own migrations. The structure would be something like this:
- `migrations/all.json` is of the form `[[commit, migration...]...]` where `commit` is the SHA of the next supported version and `migration` is either a file reference like `"v4.3.0-rc2.json"` or an inline migration list. (Multiple migrations can be provided for a commit; these are applied sequentially. This is sometimes important if effects need to stack up in the right order.)
- `migrations/v4.3.0-rc2.json` would have the form `{"changes": [[removed, added]...], "extra": "Mathlib.Migrations.V4_3_0_rc2"}` where:
  - `changes` is a list of changes where `removed` and/or `added` are fully qualified names or `null` to indicate additions/removals/changes
  - `extra`, if provided, is a reference to a lean file which defines additional programmatic migrations. These files are compiled by lake as an additional library target.
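To make the shape concrete, here is a minimal Python sketch of the two formats just described. The commit SHAs, theorem names, and the `renames` helper are all invented for illustration; the actual schema is only what the message above specifies.

```python
import json

# Sketch of the proposed migration format (all concrete values hypothetical).
# migrations/all.json: [[commit, migration...]...], where each migration is
# either a file reference (string) or an inline migration object.
ALL_JSON = """
[
  ["a1b2c3d4", "v4.3.0-rc2.json"],
  ["e5f6a7b8", {"changes": [["Nat.lt_of_lt_of_le", "Nat.lt_of_lt_of_le'"]]}]
]
"""

# A per-version migration file, with [removed, added] pairs;
# null (None) on one side marks a pure addition or removal.
V430_RC2 = {
    "changes": [
        ["Nat.succ_le_of_lt", "Nat.succ_le.mpr"],  # rename (made up)
        ["Nat.old_deleted_lemma", None],            # deletion
        [None, "Nat.brand_new_lemma"],              # addition
    ],
    "extra": "Mathlib.Migrations.V4_3_0_rc2",
}

def renames(migration: dict) -> dict:
    """Extract just the rename map, skipping pure additions/deletions."""
    return {old: new for old, new in migration["changes"]
            if old is not None and new is not None}

supported = json.loads(ALL_JSON)
print([commit for commit, *_ in supported])  # the ordered supported versions
print(renames(V430_RC2))
```

The point of keeping `changes` as plain data is that the common case (renames) needs no Lean compilation at all; only `extra` files would be built by lake.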
The final component is the management of migrations, for which we need another tool, `leanup-build`, which is run by maintainers to add an additional supported version to the list, and will run (or extend) `leaff` to construct an initial attempt at the migration between two versions of mathlib, which is then manually refined. It will use heuristics and `#align` statements to try to minimize the number of theorems marked as "deleted". I would expect there to be essentially no deletions in mathlib migrations, because even a mostly wrong 'rename' to a theorem with a completely different type but which is morally related is better than nothing.
What do people think about this overall design and workflow? There are a lot of moving parts here, but nothing that I think is impossible to implement. I think the biggest change to the status quo here is that mathlib will be maintaining a migrations database, which may be a big ask, which is why I think the `leanup-build` program is important: we need to ensure the cost of maintenance is low, and that we can add additional supported points after the fact, before, after, and in between existing points, without too much difficulty. `leanup` itself is tasked with maintaining the migrations database for lean and possibly also std.
Wrenna Robson (Jan 14 2024 at 18:09):
For reference, what is the difference between "old Lean 4" and "new Lean 4"?
Yaël Dillies (Jan 14 2024 at 18:20):
Lean 4 from a few months ago vs current Lean 4, so the difference is whatever commits have happened between them
Patrick Massot (Jan 14 2024 at 18:27):
It sounds great!
Kim Morrison (Jan 14 2024 at 22:47):
I'm unclear from the description how the migrations are indexed.
You said:
`migrations/all.json` is of the form `[[commit, migration...]...]` where `commit` is the SHA of the next supported version
but I don't understand what "next supported version" is referring to. Presumably it is a Lean toolchain version? Does `migrations/all.json` contain information about moving from old toolchains to the current toolchain, or information about moving from one (old) toolchain to another toolchain (possibly the current one)?
Are you envisioning this strictly being at toolchain granularity, or could upstream projects have independent versioning schemes? (e.g. if my project used lean `4.37.0` and dependency `X` `v17`, and I want to move to `4.38.0` and `v18`, I could look up migration instructions from Lean for the `4.37.0` -> `4.38.0` change, and look up instructions from `X` for the `v17` to `v18` change).
Mario Carneiro (Jan 15 2024 at 01:29):
@Scott Morrison The idea is that there are a bunch of `commit`s listed in chronological order in the file; these are the "supported versions". They are mathlib commits, and need not align with toolchain versions, but I'm imagining for the initial MVP we'll focus on toolchain-tagged mathlibs, since more supported commits means more data-gathering effort. `leanup` will upgrade projects from any supported version to any later one.
Kim Morrison (Jan 15 2024 at 01:31):
Ah, okay. So it would look at each upstream dependency (e.g. Mathlib), and find all the commits mentioned in Mathlib's `migrations/all.json` that occur after the currently referenced commit (in the lake-manifest of the project we're updating), and apply those.
Mario Carneiro (Jan 15 2024 at 01:31):
The names of migration files (like `v4.3.0-rc2.json` and `Mathlib.Migrations.V4_3_0_rc2` here) are purely up to the library author to choose; they need not line up with anything else. I would recommend that they contain dates or some other globally ordered thing in them, though, as opposed to just being SHAs, so that people can binary search them and understand what order they are applied in
Mario Carneiro (Jan 15 2024 at 01:47):
One task that leanup will need to do which I'm not entirely sure about is to locate where a given commit lies chronologically in a list of SHAs. That is, assuming for simplicity that the master branch is linearly ordered like mathlib, and that the supported commit list is already ordered correctly, we have to find out which of those commits is after the current one and which is before. (If the current commit is supported then that's easy, but we also have to handle the case that the user is on an unsupported commit between two supported versions of mathlib.) This needs some kind of git request, and what's more it needs to work on shallow clones since that's what lake checks out.
Eric Wieser (Jan 15 2024 at 05:29):
Presumably it would be fine for this tool to un-shallow the clones
Eric Wieser (Jan 15 2024 at 05:30):
Since you can't do anything with the history in a shallow clone anyway
Mario Carneiro (Jan 15 2024 at 05:31):
it might be nice if we could provide some quick API endpoints to avoid having to download too much for this
Eric Wieser (Jan 15 2024 at 05:57):
Scott Morrison said:
So it would [...] find all the commits mentioned in Mathlib's `migrations/all.json`
Just to check I understand Mario's proposal correctly, I think this json file does not live in mathlib itself (or at least, not in the master branch), as this gives us a way to easily correct past migrations?
Mario Carneiro (Jan 15 2024 at 06:03):
No, my proposal is to put that file in mathlib itself
Mario Carneiro (Jan 15 2024 at 06:05):
We would use the migrations file from the target commit (in normal usage, that means the latest master or whatever `lake update` would select)
Mario Carneiro (Jan 15 2024 at 06:07):
One of the reasons we want this to be in the git repo itself is because this solves the issue I kind of glossed over regarding linear history: if you have a long lived branch you can have migrations for it too, reflecting the history of the branch back to the initial commit or wherever
Mario Carneiro (Jan 15 2024 at 06:08):
that doesn't really solve the issue of merge commits though
Thomas Murrills (Jan 15 2024 at 08:24):
I love this! Five points/thoughts:
- I’d echo the need to keep maintenance workload low. To that end, I think it might be useful if each PR is required to come with its own migration, one way or another: either the trivial migration; a `leanup-build` autogenerated one; a manual migration plugin; or a special “deferred” migration (like a `sorry`) when we don’t want to deal with it now (but want to record & communicate the present lack of one, which is better than simply not supporting the commit—hopefully these would be infrequent, but would probably be necessary sometimes!). I think this could be relatively painless if we invest in a little GitHub tooling to check if the migration is needed, attempt to autogenerate/commit one with leanup-build, and automatically attach a status when possible. The maintainers would then be responsible only for the deferred migrations (unless someone else volunteered, of course), and it would be easy to keep track of what work there was still to do.
- Many changes to mathlib4 that require migrations—e.g. theorem renames—could simply be migrations in the first place. It would be useful to have a workflow/tools which allowed someone to write the migration first, then apply the migration to mathlib4 (and use the resulting changes as a mathlib4 PR)! (Something like `leanup self`?)
- Re: lean and std changes, I think it’d be useful to subsume adaptation PRs by adaptation migrations as much as possible. That is, when making a std4 or lean4 PR, instead of making a corresponding PR to mathlib4/std4 adapting to the changes manually, we’d simply create a lean4 or std4 migration, and then run `leanup` on mathlib4/std4 when bumping. Tooling should test (at lean4/std4 PR time) that the associated migration works as expected. E.g., `lean-pr-testing-NNNN` would start with a `leanup`ped (`lean`ed-`up`?) version of mathlib4, applying all previous migrations as well as the migration associated with PR #NNNN (which the PR author would provide, somehow). And hopefully in many cases a migration would be all that’s necessary!
- There would still be cases (esp. with core changes, I’m guessing) that require fully manual fixes (where even migration plugins aren’t sufficient). What would we do in these cases? It might make sense to have leanup emit a provided message when attempting to migrate past that commit, e.g. “Breaking change to `rw`: [explanation of how it behaves differently]”. (Possibly only with `leanup -v`?) Maybe `leanup -i` (?) would stop at this point and ask if you needed to fix things up before proceeding (or maybe this would be default behavior).
- I’m a little unsure if having the migrations located within the repository would be incompatible with (1.). With squash commits, that seems to mean (1.) would need a commit history that goes [commit A]; [automated leanup commit for commit A]; [commit B]; [automated leanup commit for commit B]; etc. This could be avoided either by maintaining a separate repository (with more tooling for synchronizing the corresponding branches and PRs to implement (1.)), which would also let us fix up past commits—and/or by not using commits as keys for the migrations, but instead storing the current “leanup key” in the project itself, so that it can be updated in the same commit that we add the migration in. The key could be a (long) version number that was kept updated appropriately by GitHub tooling (assuming that’s possible given merge queues, I’m not sure). (The latter option would also solve the “finding where the current commit is” issue, as presumably these version numbers would be interpretable by lean and be orderable.)
Mario Carneiro (Jan 15 2024 at 08:26):
I don't think we should ever really "defer" a migration. We should always have some migration, and we can post-hoc improve the quality of autogenerated migrations independently from the usual PR traffic
Thomas Murrills (Jan 15 2024 at 08:27):
Even for migrations of tactics or things requiring a plugin, for instance? That’s what I was thinking of.
Mario Carneiro (Jan 15 2024 at 08:27):
Having some todo that doesn't block the PR is just a recipe for getting a bunch of work to pile up without a clear mechanism to clear it
Mario Carneiro (Jan 15 2024 at 08:28):
It would be nice to have migrations for syntax stuff, but I don't think we necessarily want to require it, because it can be almost arbitrarily complex depending on how you want to handle it
Mario Carneiro (Jan 15 2024 at 08:29):
We may want to have a low effort thing which indicates that syntax changed (so that we can find it later) without actually doing a real migration, maybe you could call that a todo
Mario Carneiro (Jan 15 2024 at 08:30):
certainly one of the challenges with improving migrations after the fact will be locating when things happened
Thomas Murrills (Jan 15 2024 at 08:30):
True—fwiw I was envisioning deferring migrations as a kind of “last resort” option for when we want to avoid introducing a ton of friction to getting a PR in, but still want to remember to eventually provide a migration of some sort and/or alert leanup users that, hey, this change isn’t covered by the migrations. I think that lines up roughly with what you’re saying (in sentiment at least)?
Mario Carneiro (Jan 15 2024 at 08:33):
it's a good point that using `leanup` to conduct std and mathlib updates will naturally improve the quality of migrations because incentives are aligned
Mario Carneiro (Jan 15 2024 at 08:34):
But keep in mind that like mathport, I expect leanup to only have a 95% success rate, some manual adjustment will probably be required in many if not most migrations
Mario Carneiro (Jan 15 2024 at 08:38):
Re (4): I was also thinking about a `leanup -i` similar to interactive rebase, which would take smaller steps and also pause between steps to get the code compiling again. My plan for handling changes that can't be automatically fixed is to just do nothing about them; this would cause breakage in the project and you would either fix them on the spot or after all migrations have been applied. The important part is that the easy stuff is out of the way so that you can just focus on these breakages without everything else
Mario Carneiro (Jan 15 2024 at 08:39):
But for sure, we could also have a "migration" which is just a note which is printed to stdout
Mario Carneiro (Jan 15 2024 at 08:43):
the leanup key idea would certainly help with the commit locating issue, although a downside is that it's not backward compatible, we wouldn't be able to retroactively handle early lean 4 stuff
Mario Carneiro (Jan 15 2024 at 08:47):
The major drawback of putting anything outside the repo is that it means we can't just seamlessly work on any github project, we would need some external infrastructure which would become an additional point of failure
Mario Carneiro (Jan 15 2024 at 08:48):
I think we could do something like a hybrid of the "leanup key" idea and commit identifiers for historical stuff and ex post facto migrations
Thomas Murrills (Jan 15 2024 at 09:40):
That all makes sense! (Btw, I’m technically away, so I might not participate in this discussion much further, but I opened zulip, happened to see this thread, and thought it was cool… :grinning_face_with_smiling_eyes:)
Mario Carneiro (Jan 16 2024 at 00:50):
It just occurs to me that for mathlib specifically, a good choice for a "leanup key" which is not commit-based is the PR number, although it will not be monotonic since PRs end up on the main branch in the order they are landed but are named based on the order they were created
Mario Carneiro (Jan 16 2024 at 00:54):
From an avoiding-conflicts angle I think the best kind of in-repo migration tracking is just to have a bunch of independent files and no `all.json` which collects them together (because that file would be a frequent source of merge conflicts). The downside is that this requires additional postprocessing to be usable by leanup, since you would have to use a lot of indirect information, like when files appeared, to put an order on things
Mario Carneiro (Jan 16 2024 at 00:56):
The "leanup key" is also prone to merge conflicts, if there is a file which contains the current key which is used to find where a commit lies in the list
Mario Carneiro (Jan 16 2024 at 01:08):
I wonder if we can use bors to automatically put a leanup commit (if necessary) at the end of every staging commit, so people can just put `"XXX"` for the commit ID and the bot will fill this in with the actual commit as calculated on the staging branch. That way we don't waste any extra CI time on the [commit A], [leanup commit for commit A], [commit B], [leanup commit for commit B] issue, and we only have one leanup commit per batch instead of per commit
Mario Carneiro (Jan 16 2024 at 01:09):
I'm starting to think that using actual commit IDs like this together with appropriate automation are the only viable method subject to the no-external-infrastructure constraint
Thomas Murrills (Jan 16 2024 at 04:33):
Can bors run a script to add a commit to a branch just before merging? If so we could:
- use a long version number as a leanup key, e.g. 4.7.1.103, ordered lexicographically
- have each migration file be independent but in a common `migrations/` directory, and named by some (translation of) the leanup key, e.g. `v4-7-1-103.lean` (or maybe directories, e.g. `4/7/1/103.lean`?)
- the leanup key of the project is inferred as the highest leanup key in `migrations/` instead of being written in a file
- the migration associated with a given PR (if there is one) is `migrations/_currentMigration.lean` (or some special name)
- right before bors merges a branch into master, it finds the leanup key of the master branch, increments it (4.7.1.104), and renames the `_currentMigration.lean` file accordingly (if it exists)

(If we’re incrementing any but the last component (e.g. moving to 4.7.2), we can do that manually.)
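The lexicographic ordering of such keys falls out naturally if they are parsed into integer tuples. A minimal sketch (the key format is the one proposed above; the `parse_key` helper is invented):

```python
def parse_key(key: str) -> tuple:
    """Parse a dotted leanup key like '4.7.1.103' into an integer tuple."""
    return tuple(int(part) for part in key.split("."))

# Python compares tuples lexicographically, and a shorter tuple that is a
# prefix of a longer one sorts first. This means keys can be extended after
# the fact to squeeze a migration between two existing ones:
keys = ["4.7.1.104", "4.7.1.103", "4.7.1.103.1"]
print(sorted(keys, key=parse_key))
# ['4.7.1.103', '4.7.1.103.1', '4.7.1.104']
```

Plain string sorting would get this wrong (e.g. `"4.7.1.9" > "4.7.1.103"` as strings), which is why the numeric parse matters.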
Mario Carneiro (Jan 16 2024 at 04:34):
bors actions are controlled by `mk_build_yml.sh`, so yes, we can make it do what we want
Mario Carneiro (Jan 16 2024 at 04:39):
Note that in my plan most migrations would be json, you would only need .lean migrations for complex things involving custom code. This is important because if all goes well there will presumably be hundreds to thousands of migrations for mathlib and we don't want to have to compile all of those lean files if 90% of them are autogenerated leaff-based migrations
Thomas Murrills (Jan 16 2024 at 04:52):
oh, wait—are there some contexts in which people might not have permission to list all files in a directory to infer the leanup key? or would that be pretty rare?
(though the inference of the leanup key suggested above is really just to prevent merge conflicts; we could just as well update an explicitly-stored leanup key with automation, and live with the merge conflicts. or, I wonder if there’s a way to configure a file in mathlib4 to tell git to not modify the leanup key file when merging (as it will just be overwritten by automation)? not sure what `.gitignore` files and friends are capable of.)
Mario Carneiro (Jan 16 2024 at 04:52):
That doesn't cover the case of retrofitting a migration that was skipped though. Suppose that we have v4-7-1-103 and then five commits and then v4-7-1-104, and then we decide that actually we want to split the 104 migration in two because commit number two in the series happened to be very popular / lacks a migration in hindsight. It doesn't have a unique leanup key, the algorithm would say that it has key v4-7-1-103 just like the first commit, but we want to add an intermediate v4-7-1-103-deadbeef migration
Mario Carneiro (Jan 16 2024 at 04:52):
I think listing files is not an issue; we can get `git` to list the files even if the filesystem doesn't support it (but I don't think any supported OS has an issue like this)
Thomas Murrills (Jan 16 2024 at 04:56):
Hmm, we could just extend the numbering: 4.7.1.103 < 4.7.1.103.1 < 4.7.1.104. (Though actually in this scenario there probably should be a special delimiter between the automatically-handled portion of the key and the manual version number, e.g. 4.7.1-103.1, just so things never get confused.)
Mario Carneiro (Jan 16 2024 at 04:56):
regarding directory separation, I'm thinking that `v4-7-1/103.lean` is a good split which doesn't make too many folders or too many files
Mario Carneiro (Jan 16 2024 at 04:57):
I agree that we can use the fact that lex order is dense to our advantage here, but that's not the problem, the problem is that commit number two was not marked as 4.7.1.103.1 in this example. It's an immutable commit and the last file in it is 4.7.1.103
Mario Carneiro (Jan 16 2024 at 04:59):
the `-deadbeef` there is a commit SHA, which we can use to identify commit number two even though it doesn't have a unique key
Mario Carneiro (Jan 16 2024 at 04:59):
It's also worth mentioning that the same solution that works for retrofitted migrations like this also works for historical migrations, from commits prior to the introduction of `migrations/`
Thomas Murrills (Jan 16 2024 at 05:07):
Ah, okay, I think I see…but just to be clear, in what scenarios do we actually need the deadbeef-as-commit-SHA info? This is if we’re leanupping (leaning up?) to the aforementioned commit 2 and are specifying as such using the commit SHA? Then: we look at master, check if the SHA appears in deadbeef position, and otherwise infer the leanup key at commit 2?
Thomas Murrills (Jan 16 2024 at 05:15):
Another option would be to maintain a single file somewhere associating special commits to “artificial” leanup keys, e.g. pairs like `[<aforementioned commit 2 SHA>, 4.7.1-103.1]`, which would also work for historical migrations. But this is kind of just a matter of where we keep the SHA data (the filename itself or in a file)
Mario Carneiro (Jan 16 2024 at 06:02):
The workflow is that we start out at an arbitrary commit, which may or may not have a migration, and have to find the interval of migrations given the git repo, that commit, and a target commit (which will usually be latest master, but can be earlier, especially during a `leanup -i` interactive migration)
Mario Carneiro (Jan 16 2024 at 06:05):
Let's say we have commits A B C D E, and A is v0 and E is v1, and later we decide to make C v0.5. The master branch describes the migrations v0 -> v0.5 and v0.5 -> v1 and also points at A, C, E somehow, and we need to deduce that if we are on commits A or B then we apply both migrations but if we are on commits C or D then we only apply the v0.5 -> v1 migration
Mario Carneiro (Jan 16 2024 at 06:09):
Prior to the introduction of v0.5 we would have had only one v0 -> v1 migration which is equivalent to the composition of the v0 -> v0.5 and v0.5 -> v1 migrations (but maybe of lower quality), and we would apply this migration in order to go from any of A B C D to E. We would introduce v0.5 if we find that people on C and D are getting messed up by the incorrectly applied v0 -> v0.5 part of the migration
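The claim that a single v0 -> v1 migration is equivalent to the composition of v0 -> v0.5 and v0.5 -> v1 can be made concrete for the rename-map case. A toy sketch, with all theorem names invented:

```python
def compose(r1: dict, r2: dict) -> dict:
    """Compose two rename maps: apply r1 first, then r2."""
    # Names renamed by r1 get chased through r2 as well.
    out = {old: r2.get(mid, mid) for old, mid in r1.items()}
    # Names untouched by r1 but renamed by r2 also survive composition.
    for old, new in r2.items():
        if old not in r1 and old not in r1.values():
            out[old] = new
    return out

v0_to_v05 = {"Nat.foo": "Nat.bar"}                       # hypothetical renames
v05_to_v1 = {"Nat.bar": "Nat.baz", "Nat.qux": "Nat.quux"}
print(compose(v0_to_v05, v05_to_v1))
# {'Nat.foo': 'Nat.baz', 'Nat.qux': 'Nat.quux'}
```

This also illustrates why the composed migration can be "of lower quality": applying the composite to a project that is already past v0.5 would wrongly re-run the v0 -> v0.5 half.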
Mario Carneiro (Jan 16 2024 at 06:23):
"Artificial leanup keys" are I think more or less what we need here, but it isn't a straight lookup because we also have to be able to handle commits which are not marked but lie somewhere next to one of these artificial keys. We end up with a sort of two level hierarchy: The commits are separated into regions with the same leanup key (i.e. A,B,C,D are v0 and E is v1 in the above example), and then artificial keys are just marked commits like C, which we may associate to a name like v0.5 if we like but it isn't really used for ordering. (EDIT: Oh, except that once we have located the start we still want to apply a big interval of migrations in order without any further git lookups, so we do need the ordering here.)
Given initial commit D, we can see that the leanup key is v0 because that's directly in the repo, and then we check the artificial keys for v0 to see that C is marked; then we walk up the parent commits as long as we are still on v0 until we either hit the root or an artificial key, and in this case we reach C first, so D is on v0.5, while if we started at B we would fall off the end without finding any artificial keys so we know we are on v0.
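The lookup described above can be prototyped over an abstract parent map. In real code the parent links would come from git; here everything is a toy in-memory model of the A..E example:

```python
# Toy model of the two-level lookup: commits A..E form a linear history,
# A is marked v0 and E is marked v1 in the repo itself, and C was later
# marked with the "artificial" key v0.5.
PARENT = {"B": "A", "C": "B", "D": "C", "E": "D"}  # A is the root
MARKED = {"A": "v0", "E": "v1"}          # keys recorded in the repo
ARTIFICIAL = {"C": "v0.5"}               # retrofitted marks

def effective_key(commit: str) -> str:
    """Walk up parent commits until we hit an artificial or recorded key."""
    while True:
        if commit in ARTIFICIAL:
            return ARTIFICIAL[commit]
        if commit in MARKED:
            return MARKED[commit]
        commit = PARENT[commit]

print(effective_key("D"))  # hits the artificial mark at C first -> v0.5
print(effective_key("B"))  # falls through to the recorded mark at A -> v0
```

So starting from D we would apply only the v0.5 -> v1 migration, while starting from B we would apply both, matching the example above. A real implementation would need git access (and, per the earlier discussion, possibly un-shallowing the clone) to enumerate parents.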
Thomas Murrills (Jan 16 2024 at 09:01):
Makes sense!! :) (Btw, it strikes me that by the time we “fix up” a migration like this, the commits will already be in place and immutable. So we can have the artificial key suffix simply be the number of commits since the root commit: e.g. B would be allocated v0.1, C would get v0.2, and D v0.3. (Ofc, only C’s artificial key of v0.2 would actually be used in the above situation.))
Mario Carneiro (Jan 16 2024 at 09:07):
We could, but "n commits past commit X" is not an indexing method supported by git, so it would be difficult to locate and cross reference things that way. (Also consider that this whole discussion extends to early history commits, these all being in the "version zero" regime. It would get a bit unwieldy to refer to things as "commit 1404 since the dawn of mathlib4".)
Eric Wieser (Jan 16 2024 at 09:09):
Can you remind me why we need "leanup keys" that are different from git hashes? Is it really that expensive to compute the order of hashes, and if so, can we store that order in a generated file?
Mario Carneiro (Jan 16 2024 at 09:11):
The issue is that when you are writing a migration, you don't know yet what commit it has. Plus we want the migrations themselves to be neatly ordered and organized in the repo
Mario Carneiro (Jan 16 2024 at 09:12):
I think computing the order of all hashes will indeed be quite expensive. Ideally all the git searching should be relatively local in scope, looking for the next migration tag, rather than collating every commit in the repo
Eric Wieser (Jan 16 2024 at 09:23):
Mario Carneiro said:
It just occurs to me that for mathlib specifically, a good choice for a "leanup key" which is not commit-based is the PR number, although it will not be monotonic since PRs end up on the main branch in the order they are landed but are named based on the order they were created
PR numbers can be converted into git hashes without too much work; perhaps users can use the PR number as a key, and a bot can annotate the SHA in a cron job after the PR lands?
Mario Carneiro (Jan 16 2024 at 09:26):
I think the real question is whether we want to enforce ordering of file names, or whether they should just be ordered "randomly" / by PR in the folder and we have some external data like a list of commits or PRs which gives the ordering
Mario Carneiro (Jan 16 2024 at 09:28):
Thomas's design with just files and no global index except for "fixups" after the fact means there are fewer merge conflicts. Even if a bot manages the global index file it can still cause conflicts when rebasing and such
Eric Wieser (Jan 16 2024 at 09:35):
If the only thing that ever updates the index is a bot running against master, I don't see any conflicts arising
Eric Wieser (Jan 16 2024 at 09:36):
(and we could have CI to enforce the index is not touched by hand in PRs)
Mario Carneiro (Jan 16 2024 at 09:36):
true
Thomas Murrills (Jan 16 2024 at 09:36):
Mario Carneiro said:
We could, but "n commits past commit X" is not an indexing method supported by git
Gotcha—this wouldn’t be useful for indexing, then, but just for giving us an order for free while making sure that there are always enough migration spots open and saving us a bit of annoyance (if we called C v0.1 but then wanted to create an artificial key for B, we’d have to bump C—or call B v0.0.1, i guess).
I feel there’s also at least a little value to being able to order any two keys intrinsically, without checking another file
(Also consider that this whole discussion extends to early history commits, these all being in the "version zero" regime. It would get a bit unwieldy to refer to things as "commit 1404 since the dawn of mathlib4".)
True, I was only considering non-historical ones here. I was imagining we’d create the overall artificial leanup keys for historical commits manually (possibly with artificial versions as well) and only use this scheme when fixing up things between those manually-identified migration points! :) But there are many other ways to do this.
Mario Carneiro (Jan 16 2024 at 09:41):
one "drawback" of naming migrations after PRs is that there is an implication that the migrations are caused by that PR and nothing else, which excludes the possibility of doing chunks of PRs at a time for larger hops
Mario Carneiro (Jan 16 2024 at 09:43):
("drawback" being in quotes because it's not necessarily a bad thing to try to enforce every PR to come with a migration, but it would hurt more than it helps while we are still getting up to speed and migrations are few and far between)
Thomas Murrills (Jan 16 2024 at 09:45):
I also think there’s value to introducing versioning into the repo itself—if we did have an only-bot-updated file in the mix here, it’d be great if it contained a sensible version number; that way we could say `@[deprecated since <version>]`, check the current version number of a package in lean itself, etc.
Plus, being able to use a version number as a leanup target instead of (or in addition to) a commit SHA or PR might be appealing, as you can match the major features and changes up against release notes (if they exist)
Mario Carneiro (Jan 16 2024 at 09:46):
I definitely agree with that, we've probably put off versioning for mathlib long enough. That said I don't want a versioning policy to be a prerequisite for using leanup
Thomas Murrills (Jan 16 2024 at 10:26):
I suppose an initial folder (of some name?) in `migrations/` wouldn’t necessitate versioning; if the repo wanted to introduce versioning later on and apply it retroactively, the artificial keys file could be used for this task as well somehow (e.g. if the initial folder is called `init/`, the artificial keys file could say that the existing `init/50.json` is now the start of `v1.0.0`, `init/20.json` is the start of `v0.1.0`, etc.)
Mario Carneiro (Jan 16 2024 at 10:30):
let's not consider the case of changing key names after the fact just yet - this design is complicated enough as it is
Mario Carneiro (Jan 16 2024 at 10:32):
I think @Eric Wieser is right that with a bot we can probably just use commit SHAs as in the original design
Thomas Murrills (Jan 16 2024 at 19:06):
Don’t we still wind up with [commit A] [leanup commit for A] [commit B] [leanup commit for B] like that?
Mario Carneiro (Jan 16 2024 at 19:08):
we can still use the bot optimization: only one leanup commit per batch, and no additional CI requirements
Mario Carneiro (Jan 16 2024 at 19:09):
I started to put pen to paper on this program and I immediately realized something else: leanup is also going to need to have many version branches and recompile itself for the target lean version
Mario Carneiro (Jan 16 2024 at 19:10):
I'm thinking about stashing it inside the `.elan/toolchains/VERSION/leanup` directory
Mario Carneiro (Jan 16 2024 at 19:13):
The reason is because the first thing it does is orchestrate a lake build of the project with additional data collection, which means it needs to use the lake API and it needs to work on the old version of lean
Thomas Murrills (Jan 16 2024 at 22:19):
This is probably easy to figure out, but with only commit SHAs as keys and only a bot to update the index, how do we do fixup migrations? Maybe a separate temporary file that specifies the insertion that must be made to the index, then the bot figures out where to insert it in the index, inserts it into the index, and then deletes the temporary file?
Eric Wieser (Jan 17 2024 at 02:16):
Can't we just edit the migration file to do a fixup?
Eric Wieser (Jan 17 2024 at 02:17):
(that is, if we have files with SHAs as their names that describe past migrations, can't we just... edit those files?)
Thomas Murrills (Jan 17 2024 at 02:18):
Here I’m using “fixup migration” to mean “we didn’t have any migration associated with a prior commit, but want to introduce one retroactively” (which maybe isn’t the best terminology)
Eric Wieser (Jan 17 2024 at 02:19):
I think my comment basically still applies; a new file can be created, and the bot can regenerate the index from scratch every time
Eric Wieser (Jan 17 2024 at 02:20):
Crawling a git history is far cheaper than building mathlib anyway
Thomas Murrills (Jan 17 2024 at 02:22):
Though wait, is your proposal to have files which are named by the commit SHA as well, as opposed to an index pointing to arbitrarily-named files as in the original proposal?
Eric Wieser (Jan 17 2024 at 02:23):
SHAs are probably bad names vs PR numbers, but I envisage each file containing a reference to the sha it migrates to
Eric Wieser (Jan 17 2024 at 02:24):
Thomas Murrills said:
Don’t we still wind up with [commit A] [leanup commit for A] [commit B] [leanup commit for B] like that?
Is "leanup commit" "add migration config" or "apply the migration to the source"?
Thomas Murrills (Jan 17 2024 at 02:42):
By “leanup commit” I mean “commit where we instantiate placeholders for a migration’s own commit SHA (in the case where the migration is introduced in the same PR/squash-commit it refers to, which should be most migrations)” (is this what you mean by “add migration config”?)
Mario Carneiro (Jan 17 2024 at 06:57):
Eric Wieser said:
SHAs are probably bad names vs PR numbers, but I envisage each file containing a reference to the sha it migrates to
IMO we shouldn't put the SHA in the individual migration file. I am imagining a migration file with a name like `v4.5.0-rc1/pr_1234.json` which has no SHA in it, and then the index file `all.json` contains a mapping like `[["1234abcd", "v4.5.0-rc1/pr_1234.json"], ...]`, which is managed by the bot. To create a retrofit migration you edit the index file directly to add a past SHA, and to create a new migration you just add a file without adding it to the index, and the bot will ensure that all unindexed migration files are added to the mapping in the commit that added them.
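As a rough sketch of what such a bot step might do (the file layout and function name here are hypothetical illustrations, not an actual leanup implementation):

```python
import json
from pathlib import Path

def update_index(migrations_dir: Path, commit_sha: str) -> None:
    """Hypothetical bot step: add every unindexed migration file to
    all.json, keyed by the commit SHA that introduced it."""
    index_path = migrations_dir / "all.json"
    index = json.loads(index_path.read_text()) if index_path.exists() else []
    indexed = {filename for _, filename in index}
    # Any migration file not yet in the mapping is assumed to have
    # been added by the commit we are currently processing.
    for f in sorted(migrations_dir.rglob("*.json")):
        rel = f.relative_to(migrations_dir).as_posix()
        if rel != "all.json" and rel not in indexed:
            index.append([commit_sha, rel])
    index_path.write_text(json.dumps(index, indent=2))
```

A retrofit migration would then be a manual edit of `all.json`, exactly as described above; since the bot's pass is idempotent, re-running it on an already-indexed tree changes nothing.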
Eric Wieser (Jan 17 2024 at 10:38):
The advantage of my proposal is that the index file contains no new data and is strictly an optimization; and so could potentially be kept on azure/AWS rather than within the repo
Chris Wong (Jan 28 2024 at 00:48):
Was looking for automated migration proposals (after watching Lean Together) and found this.
What I'm wondering is why we're building a new format and infrastructure for this?
For example, Rust uses lint suggestions for its edition upgrades. And Kotlin lets you include the replacement code in its `@Deprecated` annotation:
https://readyset.build/kotlin-deprecation-goodies-a35a397aa9b5
I know Lean has some really powerful code rewriting tools in its macros-by-example and simprocs. So it seems strange to me that we're not repurposing any of that.
Mario Carneiro (Jan 28 2024 at 01:31):
The problem with the fully-in-lean approaches is that they don't allow migration across lean versions
Mario Carneiro (Jan 28 2024 at 01:32):
So we have to transform the data into something durable across versions and do a little bit of work with old lean and a little bit of work with new lean and hopefully we can survive the trip
Mario Carneiro (Jan 28 2024 at 01:33):
Rust and Kotlin solve this problem by actually being backward compatible. It would be nice if lean did this too but it seems like too much to ask in the near term
Mario Carneiro (Jan 28 2024 at 01:41):
The deprecation mechanisms available in Rust and Kotlin work when the library changes but the compiler doesn't. But when coming back to an old project, Lean changes are approximately as likely to be causing migration pains as Mathlib changes, and furthermore even if we ignored the Lean changes it would not do to ignore the mathlib changes that happened on old versions of lean and are no longer applicable because we can't migrate the migrations (!)
Jannis Limperg (Jan 28 2024 at 11:19):
If Lean were to guarantee that version `x+1` can still parse (not necessarily elaborate) code from version `x`, that would suffice, right? Then the syntax of version `x+1` would be a superset of the syntax of version `x` and so a migration could be given as a syntax-to-syntax translation (or a hint that something has to be done manually).
Mario Carneiro (Jan 28 2024 at 11:19):
syntax-to-syntax translations are heavily limited in accuracy though (c.f. mathport), I'd rather not be limited by that
Mario Carneiro (Jan 28 2024 at 11:20):
Plus, I'm not sure we can extract anything like that promise in the first place
Mario Carneiro (Jan 28 2024 at 11:21):
Also does it count if the syntax still parses but the syntax tree is different?
Mario Carneiro (Jan 28 2024 at 11:21):
because in that case you would have to render and reparse the syntax in order to transform the tree to a new valid AST
Jannis Limperg (Jan 28 2024 at 11:28):
I think the syntax tree doesn't have to be the same since the migration is defined in version `x+1`. So suppose `simp (config := ...)` is renamed to `simp (options := ...)`. In Lean version `x+1`, `simp` allows both variants and a migration offers to rewrite `(config := ...)` to `(options := ...)`. Version `x+2` then drops `(config := ...)`.
Jannis Limperg (Jan 28 2024 at 11:28):
Let me think about how this would work for other common migrations...
Mario Carneiro (Jan 28 2024 at 11:31):
As I'm envisioning it, the migration itself operates on an elaborated AST thingy, i.e. text stuffed up with as much contextual data as possible because we don't expect it to re-elaborate without user assistance and we don't want to ask for that until after we're done applying all migrations
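For illustration only, a single entry of that "text stuffed up with contextual data" might look something like this (every field name here is invented; no actual leanup dump format exists yet):

```python
# A hypothetical elaborator-dump node: the surface syntax of one
# identifier plus the contextual facts a migration would need in order
# to act without re-elaborating the file.
dump_node = {
    "kind": "ident",
    "text": "add_comm",                # what the source actually says
    "span": {"file": "Main.lean", "start": 120, "end": 128},
    "resolved": "Nat.add_comm",        # fully qualified resolved name
    "open_namespaces": ["Nat"],        # so a rename can be re-rendered naturally
}
```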
Mario Carneiro (Jan 28 2024 at 11:32):
So it's not like we would run a single migration to edit some text, pause, then try to reparse the text on version `x+1` and hope that things still work
Mario Carneiro (Jan 28 2024 at 11:35):
The scenario you described is nice for users who are manually bumping things, but I wouldn't make it a mandatory part of leanup operation, because historically most syntax changes have been of the more traumatic kind where you just change `simp (config := ...)` to `simp (options := ...)` in core and then just fix the world
Mario Carneiro (Jan 28 2024 at 11:36):
And I would prefer leanup to be able to work on lean as it exists and not just on the lean we hope it to one day become
Mario Carneiro (Jan 28 2024 at 11:37):
(because I would argue that it is the nature of migration tools that they have to deal with imperfection, otherwise there would be nothing to migrate)
Mario Carneiro (Jan 28 2024 at 11:39):
Also, even in your example we still have to deal with the "traumatic change" of removing `(config := ...)` in version `x+2`, because the migration for `x+2` has to say what to do with code that still says `config :=` for some reason
Mario Carneiro (Jan 28 2024 at 11:41):
I suppose that points to the need for "pre-migrations", which are migrations which run on the previous version and don't change semantics but remove syntax or theorem uses that are about to disappear and which we don't want to handle rewriting on the new version
Mario Carneiro (Jan 28 2024 at 11:42):
that is, the rewrite for `simp (config := ...)` to `simp (options := ...)` would be a version `x+2` pre-migration, which runs on version `x+1` and hence is able to talk about both syntaxes at once
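As a toy illustration of such a pre-migration (a real one would rewrite nodes of the elaborated dump, not raw text; the function name is made up):

```python
import re

def pre_migrate_simp_config(source: str) -> str:
    """Toy x+2 pre-migration, run on version x+1 where both spellings
    still parse: rewrite the form that is about to disappear."""
    # Purely textual for illustration; a tree-based rewrite would not
    # risk touching strings or comments that happen to match.
    return re.sub(r"simp \(config :=", "simp (options :=", source)
```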
Jannis Limperg (Jan 28 2024 at 11:48):
Indeed my design (well, sketch) relies on doing the migrations one by one and fixing up any errors as they come. This is less nice for big jumps (and it involves downloading some intermediate Lean versions, which is not ideal but perhaps not a deal breaker). However, I'm somewhat optimistic that most small changes can be migrated automatically. And for bigger changes (e.g. a tactic was removed entirely with no one-to-one replacement), I'm not sure whether I want a migration tool to attempt a big jump. It seems easier, both on the tool and perhaps the user, to fix up any errors immediately rather than try to carry a broken state forward.
Jannis Limperg (Jan 28 2024 at 11:49):
Mario Carneiro said:
Also, even in your example we still have to deal with the "traumatic change" of removing `(config := ...)` in version `x+2`, because the migration for `x+2` has to say what to do with code that still says `config :=` for some reason
This would mean that the user hasn't fully applied the migrations in version `x+1`. That's their problem imo. It's like ignoring a deprecation warning.
Mario Carneiro (Jan 28 2024 at 11:50):
I think it would be pretty prohibitive for mathlib, if there are 10 commits a day and you skip a month that means you have to recompile the project 300 times
Mario Carneiro (Jan 28 2024 at 11:51):
It's true that if there are big changes you may want to stop and recalibrate more often (aka `leanup -i`), but I think support for big jumps where not much relevant to your project changes is also important
Mario Carneiro (Jan 28 2024 at 11:52):
it's basically the same consideration as with interactive rebase
Jannis Limperg (Jan 28 2024 at 11:52):
Mario Carneiro said:
I think it would be pretty prohibitive for mathlib, if there are 10 commits a day and you skip a month that means you have to recompile the project 300 times
Fair, but are commit-to-commit migrations for Mathlib necessary? I would think most dependent projects want to jump between monthly releases at most. And the migrations could still be defined as the changes are made (I think?).
Mario Carneiro (Jan 28 2024 at 11:53):
Jannis Limperg said:
Mario Carneiro said:
Also, even in your example we still have to deal with the "traumatic change" of removing `(config := ...)` in version `x+2`, because the migration for `x+2` has to say what to do with code that still says `config :=` for some reason

This would mean that the user hasn't fully applied the migrations in version `x+1`. That's their problem imo. It's like ignoring a deprecation warning.
They might have been on version `x+1` to start with
Mario Carneiro (Jan 28 2024 at 11:53):
plus, there's no guarantee `config :=` was even deprecated in `x+1`
Jannis Limperg (Jan 28 2024 at 11:55):
Okay, I assume that `config :=` would be deprecated if it's slated to disappear in the next version. So the user would get a warning.
Jannis Limperg (Jan 28 2024 at 11:56):
Mario Carneiro said:
It's true that if there are big changes you may want to stop and recalibrate more often (aka `leanup -i`), but I think support for big jumps where not much relevant to your project changes is also important

Fair. If little of relevance changes, I would hope that the migrations are correspondingly automatic (like `rebase -i` auto-merging), but they may indeed take a long time.
Mario Carneiro (Jan 28 2024 at 11:56):
Jannis Limperg said:
Fair, but are commit-to-commit migrations for Mathlib necessary? I would think most dependent projects want to jump between monthly releases at most. And the migrations could still be defined as the changes are made (I think?).
I think we will want to start with monthly migrations, but ramp up to per-PR migrations once we have a good process in place, because that makes more sense for maintenance. Plus, more accurate migrations means that we can upgrade projects that are on arbitrary commits, and one important class of those is old mathlib PRs
Mario Carneiro (Jan 28 2024 at 11:57):
Jannis Limperg said:
Mario Carneiro said:
It's true that if there are big changes you may want to stop and recalibrate more often (aka `leanup -i`), but I think support for big jumps where not much relevant to your project changes is also important

Fair. If little of relevance changes, I would hope that the migrations are correspondingly automatic (like `rebase -i` auto-merging), but they may indeed take a long time.
I think that re-elaborating frequently is not only very costly, it also significantly increases the probability of an inessential breakage that will break the flow
Jannis Limperg (Jan 28 2024 at 12:00):
I see. If commit-to-commit migrations are the desired mode, I guess that kills the design.
Mario Carneiro (Jan 28 2024 at 12:01):
We can still glob up a sequence of commit-to-commit migrations and try to apply them as a block, but this is not necessarily correct if some theorem undergoes multiple renames in that period
Mario Carneiro (Jan 28 2024 at 12:01):
basically, as long as all the diffs are commutative this is okay, but it's hard to predict how often that is really the case
Mario Carneiro (Jan 28 2024 at 12:02):
with some kind of changes like add/rename/delete we can possibly calculate the composition correctly but migrations defined by code are not analyzable in this way
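For the analyzable kinds of change, the composition is simple enough to sketch. A minimal model of composing two rename migrations (hypothetical; renames modeled as plain dicts):

```python
def compose_renames(first: dict, second: dict) -> dict:
    """Compose two rename migrations, applying `first` then `second`.
    A chained rename (a -> b, then b -> c) collapses to a -> c, which
    is precisely the case where naively applying the batched diffs in
    one pass would go wrong."""
    composed = {old: second.get(mid, mid) for old, mid in first.items()}
    # Names untouched by `first` but renamed by `second` still apply.
    for old, new in second.items():
        if old not in composed and old not in first.values():
            composed[old] = new
    return composed
```

When every name is renamed at most once across the whole range, the diffs commute and this reduces to a simple union, which matches the "as long as all the diffs are commutative this is okay" observation above.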
Mario Carneiro (Jan 28 2024 at 12:05):
Another nifty property of migrations operating on an elab dump instead of lean text is that you don't have inter-file dependencies, it becomes a fully parallel problem
Jannis Limperg (Jan 28 2024 at 13:40):
Oh yes, that's actually very nice.
Patrick Massot (Jun 22 2024 at 08:09):
We had quite a bit of discussion among mathlib maintainers about this topic and the `#align` statements. We need to see whether anyone wants to help here.
The current situation is:
- The `#align` statements take up space, they do not look nice, and they incur a maintenance cost.
- The `#align` statements are no longer useful for porting stuff from Lean 3, because we have not seen any port recently and porting now would be very difficult because so many things changed during the past year. I think I didn’t see anyone denying this.
- The `#align` statements are recording some potentially useful information. This information could be useful to help update projects from an old Lean/Mathlib 4 to a newer one.
- But this information is currently not used.
- The issue of having easier migration tools is a really important one. There are more and more important Lean projects beyond Core/Batteries/Mathlib, in all directions we are interested in.
- There are ambitious plans discussed in this thread.
- But nobody is working on leanup currently. This seems unlikely to change in the near future.
- The recent improvements of the deprecation attributes are really nice but they will never do everything leanup is meant to do.
- Recently we got a bot posting some diff information as comments to GitHub PRs. But this diff information is extracted by a shell script based on git logs and regexp heuristics. There is no Lean meta-programming involved here.
- Alex Best’s leaff tool is not super actively maintained but is actually very stable. It depends on almost nothing, and the bump commits we see in the git history of that project are almost only changing the `lean-toolchain` file.
So the question is: can we get rid of `#align` without losing information that could be useful in a future migration tool?
It seems that the following could be a short term workflow:
- whenever a PR is opened on GitHub, a bot runs a Lean meta-program that computes which declarations were renamed. Maybe this uses leaff, maybe not. The goal here is to compare with the `#align` baseline, so it should be more reliable than the shell script in the sense that it only produces valid fully qualified Lean names, but not necessarily much fancier (although it seems that also getting the Lean 4 name before renaming would be an easy win compared to `#align`). Then the bot creates some json file recording this information, and pushes a commit putting this file in some folder of the mathlib repository that is not the main `Mathlib` folder.
- during the PR reviewing process, we can ask the bot to redo its computation and update this file. Of course human beings can also update the file by hand, but hopefully this should not be needed.
- when bors is ready to merge a PR, it checks that the renaming information is still reliable. If yes then it puts the current date in the json file and merges. If not then it calls the diff bot again. If the diff bot fails then bors refuses to merge the PR and human intervention is required.
- the result is that Mathlib gains a `migration` folder with one json file per PR. This information is not used for anything yet, but it is strictly better than what `#align` records, removes the ugly `#align` lines from the main files and, hopefully, does that with less overhead than the current `#align` maintenance duties.
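To make the proposal concrete, one json file in that folder might look like this (the exact schema is of course up for discussion; these field names and values are only illustrative):

```python
import json

# Hypothetical contents of a per-PR migration file: the renames one PR
# performed, as recorded by the bot, plus the date stamped at merge.
pr_migration = {
    "pr": 12345,
    "merged": "2024-06-22",
    "renames": [
        {"from": "Foo.bar_comm", "to": "Foo.bar_comm'"},
    ],
}
print(json.dumps(pr_migration, indent=2))
```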
An important thing to discuss is how this compares to strictly enforcing that every rename comes with a deprecation alias (say the diff bot would simply complain about missing deprecation aliases and not create the json file). I think the main immediate difference is that deprecations are meant to be transient whereas the json files would remain in the repo forever. But this is probably worth the trouble only if we think that those migration files will one day become something like the leanup dream.
Now the big question: is anyone willing to work on implementing this? I think the required meta-programming is very light, so this is mostly a job for CI/GitHub bots enthusiasts. If nobody volunteers then we will probably simply remove the align statements in the near future.
Kim Morrison (Jun 22 2024 at 08:19):
Please can we not introduce automation that pushes commits to PRs without very strong reasons for doing so? This introduces a whole new layer of complication and fragility to our system.
Kim Morrison (Jun 22 2024 at 08:20):
If information beyond deprecation attributes would be useful, surely we can track it without committing it!
Patrick Massot (Jun 22 2024 at 08:26):
Alternative designs are welcome! I was only trying to find a compromise here. I am about to board a transatlantic flight so I won’t be able to participate in this discussion for some time.
Michael Stoll (Jun 22 2024 at 08:40):
See #12410 for an indication (by now about two months old, but if desired, I can merge master and bench again) of what the performance impact on building Mathlib is.
Ruben Van de Velde (Jun 22 2024 at 08:49):
I'm not sure I quite follow the relevance of `#align` in this discussion - is it because we could theoretically build a tool that uses changes from `#align foo bar` to `#align foo baz` as evidence of a rename `bar -> baz`?
Damiano Testa (Jun 22 2024 at 08:54):
The shell script that `move-decls` uses and the one that the Mathlib reports uses can be combined easily to give a complete diff of the fully-qualified names between any two commits that have a working cache.
Trying to get also the signatures of the declarations into the mix is possible, but would require more work.
Eric Wieser (Jun 22 2024 at 10:14):
Kim Morrison said:
Please can we not introduce automation that pushes commits to PRs without very strong reasons for doing so? This introduces a whole new layer of complication and fragility to our system.
I agree that this usually makes a mess of CI; perhaps the json could go in the PR description above the fold, and CI could check it is there (edit: having it in the repo makes it possible to retrospectively fix past migrations, so perhaps that is better after all).
Eric Wieser (Jun 22 2024 at 10:17):
Patrick Massot said:
An important thing to discuss is how this compares to strictly enforcing that every rename comes with a deprecation alias.
Occasionally we end up with a rename (foo, bar) -> (bar, foo), or (foo, bar) -> (bar, baz). In these cases `@[deprecated]` can't help us, but `#align` or a suitable replacement can
Jon Eugster (Jun 22 2024 at 10:46):
As a very low-tech suggestion, one could also extract all `#align`s into one file (e.g. `Mathlib.Mathport.Aligns`) and let them rot there: if somebody tried to use mathport again they might be essential, but they wouldn't bother anybody elsewhere in mathlib.
It sounds a bit like the suggestion to help migrating across recent mathlib versions and the renames therein isn't directly linked to the `#align` statements existing today.
Damiano Testa (Jun 22 2024 at 11:02):
Is the idea that there should be a persistent naming across commits, so that you can track the name of each declaration?
The `#align`s were partly handling this task between latest Lean 3 and latest Lean 4, but now the idea is that these should track each declaration and its renames across every commit?
This would certainly be useful for `refactor` to then automate bumps.
Damiano Testa (Jun 22 2024 at 11:04):
As for automating this, if a commit is just a rename, with no change in the type signature, then this is probably straightforward. If the signature changes, though, then I can see how an automated process can be fooled easily.
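The straightforward case can be sketched as matching unchanged signatures between two snapshots (a toy model with declarations as name-to-signature dicts; the ambiguous case is exactly where automation gets fooled):

```python
def detect_renames(before: dict, after: dict) -> dict:
    """Guess renames by matching a removed name to an added name with
    an identical type signature. Signatures shared by several new
    declarations are skipped, since the match would be ambiguous."""
    added = {n: s for n, s in after.items() if n not in before}
    # Group the newly added names by their signature.
    by_sig: dict = {}
    for name, sig in added.items():
        by_sig.setdefault(sig, []).append(name)
    renames = {}
    for old, sig in before.items():
        if old not in after and len(by_sig.get(sig, [])) == 1:
            renames[old] = by_sig[sig][0]
    return renames
```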
Yaël Dillies (Jun 22 2024 at 11:30):
Eric Wieser said:
perhaps the json could go in the PR description above the fold, and CI could check it is there (edit: having it in the repo makes it possible to retrospectively fix past migrations, so perhaps that is better after all).
Would be nice if bors could pick it up from the PR description and put it into a json file in the repo
Damiano Testa (Jun 23 2024 at 07:16):
I have implemented a Lean version of `move-decls`: #14046.
The PR has a couple of problems, though.
- I would like to move this test to a separate workflow that only runs after the cache has been built and uploaded. Right now it is an extra step of CI, right after the cache is uploaded. This seems to be impossible without some administrator changing some branch protection rule or the script already being on master, i.e. merging "blindly" a PR, hoping that CI would "just work".
- For the simple reason of obtaining the PR number from GitHub actions, I added that the Mathlib CI should run on `pull_request` as well as `push`. The effect of this is that CI shows basically every step twice (although I am not sure whether it really runs them all twice).
- The step is not especially fast, as the relevant CI step takes approximately 2 mins (plus an overhead of about 20 sec to check out the full Mathlib repository, instead of just the head commit of the PR).
Making CI run on both `pull_request`s and `push`es also has the effect that the PR fails CI, even though all the steps appear to be successful. I suspect that these are some later steps that cancel some "repeated" version of themselves that had started earlier.
In conclusion, the PR shows
- the "old" declaration diff, which picks up the new Lean command, even though it is in `scripts`, and
- the "new" declaration diff, which picks up on the changed namespace for one of the declarations.
Ironically, the declaration that changed names is not the "obvious" one, as that one is marked `unsafe` and is ignored by the diff. You can also see what the diff of all declarations is by looking at the history of the added comment: the very first run was a full diff, not just the "reasonable" declarations.
Kim Morrison (Jun 23 2024 at 07:53):
Re: 2, this is running everything twice, so is a no-go.
Damiano Testa (Jun 23 2024 at 07:54):
Ok, I think that if I managed to make the step a separate one, then I could make that step run only on `pull_request`s, which would fix the rest.
Damiano Testa (Jun 23 2024 at 07:55):
However, I think that the only way for this to be testable is to actually merge the extra, separate CI workflow without knowing if it will really work.
Damiano Testa (Jun 23 2024 at 07:55):
(I think, information about this is very hard to obtain.)
Damiano Testa (Jun 23 2024 at 07:57):
I tried lots of variations of the GitHub actions that supposedly wait on other actions to finish and none of them actually worked. I also read that they work if the "wait-for-the-action-to-finish" logic is already on master.
Damiano Testa (Jun 23 2024 at 07:57):
Alternatively, there seems to be a workaround, but it needs the main CI step on mathlib to be named, and (I think) it currently is not.
Jireh Loreaux (Jul 09 2024 at 17:02):
@Damiano Testa I think the declaration diff script is getting something wrong. For example, here it says no declarations were harmed, but that's not true: the namespace changed from `StarAlgHom` to `NonUnitalStarAlgHom`.
Damiano Testa (Jul 09 2024 at 17:04):
This `decl_diff` is text-based, so it only notices if the very first layer of surface syntax changes. In particular, it is not aware of namespaces. So, what you can deduce is that, very likely, you did not lose lemmas, but they may have changed by something "invisible".
Damiano Testa (Jul 09 2024 at 17:05):
The script that computes the actual diff of the declaration names, using typing information, is stuck since I am not yet sure how to avoid duplicating the CI run to get the relevant information...
Jireh Loreaux (Jul 09 2024 at 17:18):
I suppose you could just keep track of `namespace Foo` and `end Foo`, but that means you need to track state, even if only per file.
Damiano Testa (Jul 09 2024 at 17:26):
Right, it is of course possible to extend it, but the idea of this script was to be a light and quick patch before leaff...
Damiano Testa (Jul 09 2024 at 17:27):
Also, tracking `namespace` means also being aware of `_root_`, and slowly you are trying to become a parser for Lean syntax.
Ruben Van de Velde (Jul 09 2024 at 17:28):
Yeah, I think putting engineering effort in something lean-based would be better. I'm not sure what - if anything - is stopping us from building something leaff-based
Damiano Testa (Jul 09 2024 at 17:32):
Besides, despite the simple-mindedness of the script, I am fairly surprised by how well it seems to perform. You need to know some quirks (like the namespace issue that you mention), but, taking that into account, the script is essentially a regex parsing of `git diff`.
Jireh Loreaux (Jul 09 2024 at 19:00):
yes, its simplicity and immediacy are nice, despite the drawbacks. I just didn't realize its scope and initially thought this (ignoring namespaces) was a bug.
Last updated: May 02 2025 at 03:31 UTC