Zulip Chat Archive
Stream: mathlib4
Topic: API for 'multiverse' metatheorems?
James E Hanson (Feb 08 2026 at 16:22):
This is related to the conversation in the thread about non-standard analysis, but I was actually also thinking about something related independently.
There are a number of places in mathematical logic where one builds multiple models of ZFC and then uses the fact that standard theorems relativize to these various models, the three most common constructions being forcing extensions, inner models, and ultrapowers. This is typically used for independence results, but it also has a couple of applications to proving positive theorems:
- Non-standard analysis can be formalized in terms of 'taking an ultrapower of the whole universe' (which doesn't seem too hard to define in terms of the existing ultrapower machinery in Mathlib).
- Shoenfield absoluteness allows you to transfer certain low-complexity statements from forcing extensions and inner models to the ground model. For example, you can use this to turn a proof of a statement about the natural numbers in ZFC + CH into a ZFC theorem (and even a ZF theorem). Historically this has actually been used a couple of times in the context of model theory (although one can typically find a proof that avoids this technique).
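As a point of reference for the first bullet, here is a minimal sketch of the ultrapower machinery already in Mathlib (the names Filter.Germ, Filter.hyperfilter, and ZFSet are current Mathlib; the ZFSet line is only an illustration, with no transfer API behind it):

```lean
import Mathlib

open Filter

-- Mathlib's `Hyperreal` is literally an ultrapower of ℝ: germs of
-- ℕ-indexed sequences modulo a fixed nonprincipal ultrafilter on ℕ.
example : Type := Germ (hyperfilter ℕ : Filter ℕ) ℝ

-- The same construction applied to `ZFSet` would be one reading of
-- 'an ultrapower of the whole universe' -- but nothing in Mathlib
-- currently lets you transfer theorems across this construction.
example : Type 1 := Germ (hyperfilter ℕ : Filter ℕ) ZFSet
```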
In the context of independence results, this is also important whenever you want to prove the independence of something that needs a lot of machinery from outside set theory. For instance, in order to prove the independence of the Whitehead problem, one step you need to take is proving that every Whitehead group is free in L. While L is relatively easy to define in the ZFSet library, there is no mechanism I know of to then immediately start talking about the category of abelian groups inside L.
I know this is a pretty open-ended question because these constructions might end up looking pretty different from each other in Lean, but has anyone thought about the feasibility of doing something like this? For instance, given a small model of a fragment of Lean or ZFC, I want to be able to just talk about its internal category of abelian groups or its internal version of measure theory, and in particular apply existing theorems in Mathlib there.
Mirek Olšák (Feb 08 2026 at 16:39):
I already mentioned this in the topic on dream projects: it should be possible to build "logical quotation". As an inspiration, take the Qq library, where we can write things like:
```lean
import Qq
open Qq

def e : Q(Prop) := q(∀ a b c n : Nat, n > 2 → a ≠ 0 → b ≠ 0 → a^n + b^n ≠ c^n)

#check e
#print e
```
So we construct e by writing it naturally, but it is stored as a (meta) expression, not a Lean object. So we can decompose it and observe that it starts with a quantifier, etc. However, since Qq is used for metaprogramming rather than for logic, it doesn't store definitions with it (only the names of the definitions occurring in the expression).
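For instance, a small sketch of such decomposition (standard Lean metaprogramming; Q(Prop) unfolds to Lean.Expr, and getUsedConstants shows that only the names of definitions are stored, not the definitions themselves):

```lean
import Qq
open Lean Qq

def e : Q(Prop) := q(∀ a b c n : Nat, n > 2 → a ≠ 0 → b ≠ 0 → a^n + b^n ≠ c^n)

-- `Q(Prop)` is definitionally `Lean.Expr`, so we can inspect its shape:
#eval Expr.isForall e  -- true: the statement starts with a quantifier

-- Only the *names* of the constants occur in the expression,
-- not their definitions:
#eval Expr.getUsedConstants e
```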
So I envision a logical quotation that would also capture all the definitions the expression depends on. With that, we could then make logical claims about such expressions, such as unprovability.
I don't think it is necessary for non-standard analysis -- at first I thought it was, but later I realized that a lot of non-standard analysis (and ultraproduct constructions in general) can be done natively in Lean. But I see logical quotation as the only sensible way to talk about undecidability.
And by the way, regarding universes, I hope we agree we can mostly just live in Lean's Type (or up to Mathlib's 5 universes); if we really want large cardinals, we could assume them (as assumptions, or axioms) inside Type 0.
James E Hanson (Feb 08 2026 at 16:45):
I was also thinking about the q( ) macro.
James E Hanson (Feb 08 2026 at 16:46):
Mirek Olšák said:
And by the way, regarding universes, I hope we agree we can just mostly live in Lean's Type (or up to Mathlib's 5 universes), if we really want large cardinals, we could assume them (as assumptions, or axioms) inside Type 0.
I mean, is there any reason to think that the number of universes used by Mathlib isn't going to grow over time?
Adam Topaz (Feb 08 2026 at 16:47):
Many people have thought about such things for a long time. Let me ping @𝚠𝚘𝚓𝚌𝚒𝚎𝚌𝚑 𝚗𝚊𝚠𝚛𝚘𝚌𝚔𝚒 specifically who seems to be doing related things at the moment.
Adam Topaz (Feb 08 2026 at 16:50):
Personally, I think a viable approach to such things is to go via the syntactic category construction, and rely more heavily on categorical logic semantics, although I recognize that not everyone is comfortable with the category theory library.
Mirek Olšák (Feb 08 2026 at 16:50):
James E Hanson said:
I mean, is there any reason to think that the number of universes used by Mathlib isn't going to grow over time?
I just meant that there is no reason to stretch Lean's universe system to its limits (when it cannot reach very large cardinals anyway). No problem if a few more universes get added over time for convenience / out of ignorance.
James E Hanson (Feb 08 2026 at 16:52):
Adam Topaz said:
Personally, I think a viable approach to such things is to go via the syntactic category construction, and rely more heavily on categorical logic semantics, although I recognize that not everyone is comfortable with the category theory library.
How would you envision this working with something like L?
James E Hanson (Feb 08 2026 at 16:53):
Oh sorry, I misunderstood. You're talking about applying this at the level of syntax, not semantics.
Adam Topaz (Feb 08 2026 at 16:54):
yes exactly.
Adam Topaz (Feb 08 2026 at 16:54):
If you want abelian groups in L, you would define the category of abelian groups internally in the syntactic category of L.
Adam Topaz (Feb 08 2026 at 16:54):
(If needed, you can then pass to models by applying functors, etc.)
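A hedged illustration of what 'internal' can mean with Mathlib's existing categorical machinery (Mon_ C, Mathlib's type of monoid objects in a monoidal category, stands in here; a syntactic category of ZFC + V = L, and internal abelian group objects in it, would still have to be built):

```lean
import Mathlib

open CategoryTheory

-- Monoid objects internal to a monoidal category C.  Instantiating C with a
-- (hypothetical, not-yet-formalized) syntactic category of ZFC + V = L would
-- yield 'monoids in L'; abelian groups in L would need the analogous
-- internal (abelian) group objects.
example (C : Type*) [Category C] [MonoidalCategory C] : Type _ := Mon_ C
```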
James E Hanson (Feb 08 2026 at 17:04):
A lot of classical mathematical logicians would have a bit of difficulty defining the word functor. I'm concerned that if you built the framework for doing independence results using a lot of category theory, you'd have difficulty finding, say, set theorists willing to work with it.
Adam Topaz (Feb 08 2026 at 17:04):
yeah, I understand. I think this is where good automation like analogues of q(...) could help.
Eric Wieser (Feb 08 2026 at 17:14):
One difference here is that the q() elaborator is only used in tactics, so it is typically not considered part of the trusted codebase. If you're using something to form theorem statements, you're now trusting a complex elaborator to really build the right expression in your metatheory.
Adam Topaz (Feb 08 2026 at 17:51):
This is of course always true whenever an elaborator is used to create a definition.
𝚠𝚘𝚓𝚌𝚒𝚎𝚌𝚑 𝚗𝚊𝚠𝚛𝚘𝚌𝚔𝚒 (Feb 09 2026 at 03:40):
Adam Topaz said:
Many people have thought about such things for a long time. Let me ping 𝚠𝚘𝚓𝚌𝚒𝚎𝚌𝚑 𝚗𝚊𝚠𝚛𝚘𝚌𝚔𝚒 specifically who seems to be doing related things at the moment.
The q(..)-like construct we have there is an elaborator tm%{e} (currently on this branch) that, given a Lean term e that is well-typed w.r.t. a subset of Lean's type theory (e.g. without Prop), produces a Lean proof of this well-typedness fact (according to rules for MLTT that we have formalized). In principle this can be used to reason internally to any model of type theory (we also have formalized certain classes of models).
I don't know much about the constructible hierarchy, so I can't say whether it's relevant. Additionally, for talking about 'abelian groups in C' one may prefer to use something that doesn't require a full model of type theory (?), e.g. Lawvere theories.
𝚠𝚘𝚓𝚌𝚒𝚎𝚌𝚑 𝚗𝚊𝚠𝚛𝚘𝚌𝚔𝚒 (Feb 09 2026 at 03:43):
Mirek Olšák said:
So I envision a logical quotation that would capture also all the definitions this depends on, With that, we could then make logical claims about such expressions such as unprovability.
I am confused about a detail: why do you want to record all definitions used there, rather than just the axioms? In the elaborator I mentioned above, we record typing derivations w.r.t. a 'theory signature', namely a list of the axioms used, since those are ultimately what needs to be justified manually in any model construction.
Mirek Olšák (Feb 09 2026 at 15:02):
𝚠𝚘𝚓𝚌𝚒𝚎𝚌𝚑 𝚗𝚊𝚠𝚛𝚘𝚌𝚔𝚒 said:
I am confused about a detail: why do you want to record all definitions used there, rather than just the axioms? In the elaborator I mentioned above, we record typing derivations w.r.t. a 'theory signature', namely a list of the axioms used, since those are ultimately what needs to be justified manually in any model construction.
Because the term only has its full meaning together with the definitions. The specific envisioned use case I was showing was stating:
```lean
theorem continuum_hypothesis_undecidable :
    Undecidable lq(Cardinal.aleph 1 = Cardinal.beth 1) := by
  sorry
```
To capture the meaning of this statement, we need to know the definitions of Cardinal.aleph and Cardinal.beth. If they are represented only as strings (Lean.Names, to be more precise), the statement has no meaning.
Mirek Olšák (Feb 09 2026 at 15:30):
I imagined this being done so that any such definition would get an automatically constructed meta-variant (for example, when requested by the logical quotation). So it would create Cardinal.aleph.meta, RelEmbedding.trans.meta, etc., which contain the analog of Expr (but including definitions). Of course, for meta-reasoning projects, it would be useful to precompute certain basic constants.
𝚠𝚘𝚓𝚌𝚒𝚎𝚌𝚑 𝚗𝚊𝚠𝚛𝚘𝚌𝚔𝚒 (Feb 09 2026 at 18:06):
Mirek Olšák said:
I imagined this being done so that any such definition would get automatically constructed meta-variant (for example when requested by the logical quotation). So it would create Cardinal.aleph.meta, RelEmbedding.trans.meta, etc which contains the analog of Expr (but including definitions). Of course, for meta-reasoning projects, it would be useful to precompute certain basic constants.
Right, I see! In HoTTLean this is achieved by a @[reflect] attribute that produces someDefinition.reflection, a structure containing the Expr and the typehood proof. Then when you refer to someDefinition inside lq(..), someDefinition.reflection is used under the hood.
Last updated: Feb 28 2026 at 14:05 UTC