Zulip Chat Archive

Stream: general

Topic: Highest universe in mathlib


Yaël Dillies (Dec 24 2021 at 12:28):

We all know we have infinitely many universes in Lean, and basically everything is universe-polymorphic in mathlib, but how many universes do we really need currently? That is, to what point can we squish down all universes in mathlib and still have everything typecheck? Kevin argued that Type 1 was enough, but I suspect some constructions in category theory might need something higher.

Johan Commelin (Dec 24 2021 at 12:29):

Certainly Type 2 will be good enough. I think that's what's needed for LTE.

Yaël Dillies (Dec 24 2021 at 14:05):

I must say I'm surprised we don't need anything higher!

Kevin Buzzard (Dec 24 2021 at 15:23):

I'm not. If you're a set theorist then you think everything is a set, so everything's in Type. Things like the ordinals and the category of all sets/groups etc. are called classes, and they're all in Type 1.

Kevin Buzzard (Dec 24 2021 at 15:23):

Classically in mathematics you only deal with sets and classes

Kevin Buzzard (Dec 24 2021 at 15:27):

Remember that historically mathematicians didn't even need sets! People like Gauss and Euler worked with terms and functions. Then Galois came along and said "I think it would be helpful if we could identify the permutation group of +-sqrt(2) with the permutation group of +-sqrt(3) because I think I'm on to something". It was only around then that we started needing to talk about some kind of holder type G for a collection of terms g. So in some sense before that we didn't even need Type. We just had concrete examples like the naturals, reals and complexes

Kevin Buzzard (Dec 24 2021 at 15:28):

The push to develop abstract group theory and then abstract algebra and ideas like homomorphisms of groups are historically very recent. This is when people started to want to talk about maps between abstract types -- functions had of course existed for centuries before that but again only between concrete types such as the sine and cosine function on the reals

Kevin Buzzard (Dec 24 2021 at 15:30):

So types became "things" and then set theory gave a name to the things, but Russell's paradox showed that you had to be careful so then we got things like ZFC and at that point mathematics was going on purely within Type.

Kevin Buzzard (Dec 24 2021 at 15:32):

Then category theory and homological algebra and Grothendieck came along and then people wanted to start talking about abstract mathematical objects such as the category of all sets, which weren't even in Type. Grothendieck fixed this by adding an extra axiom to set theory to enable more than one universe of types, or of sets as he called them, and even then you don't need infinitely many, you just need to allow yourself to do the occasional universe bump. Hence Type 1
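
For concreteness, the "one bump" is visible directly in Lean 3 (a quick check, assuming mathlib's `set_theory.ordinal.basic` for `ordinal`):

```lean
import set_theory.ordinal.basic

#check Type         -- Type : Type 1
#check ordinal.{0}  -- ordinal : Type 1  (the class of ordinals lives one level up)
```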

Mario Carneiro (Dec 24 2021 at 20:10):

I've been curious about this question for a while, so I decided to answer it for real instead of theorizing.

open tactic

meta inductive level_spec
| const : level → level_spec
| parametric : list name → level → level_spec

meta def level.has_any_param : level → bool
| (level.succ l)     := l.has_any_param
| (level.max l₁ l₂)  := l₁.has_any_param || l₂.has_any_param
| (level.imax l₁ l₂) := l₁.has_any_param || l₂.has_any_param
| (level.param _)    := tt
| l                  := ff

meta def level.at_zero : level → nat
| (level.succ l)     := l.at_zero + 1
| (level.max l₁ l₂)  := max l₁.at_zero l₂.at_zero
| (level.imax l₁ l₂) := let n := l₂.at_zero in if n = 0 then 0 else max l₁.at_zero n
| _                  := 0

meta def level_spec.at_zero : level_spec → nat
| (level_spec.const l) := l.at_zero
| (level_spec.parametric _ l)  := l.at_zero

meta mutual def get_expr_level, get_const_spec (env : environment) (r : ref (name_map level_spec))
with get_expr_level : expr → level → tactic level
| v l := v.fold (pure l) $ λ e _ r, r >>= λ l,
  match e with
  | expr.const n us := do
    sp ← get_const_spec n,
    match sp with
    | level_spec.const l' := pure $ l.max l'
    | level_spec.parametric us' l' := pure $ l.max $ l'.instantiate (us'.zip us)
    end
  | expr.sort l' := pure $ l.max l'
  | _ := pure l
  end
with get_const_spec : name → tactic level_spec
| n := do
  m ← read_ref r,
  match m.find n with
  | some sp := pure sp
  | none := do
    let process_decl (n : name) (us : list name) (l : level) : tactic level_spec := (do
      let l := l.normalize,
      let sp := if l.has_any_param then level_spec.parametric us l else level_spec.const l,
      m ← read_ref r,
      write_ref r (m.insert n sp),
      pure sp),
    d ← env.get n,
    match d with
    | (declaration.defn n us t v _ _) :=
      get_expr_level t level.zero >>= get_expr_level v >>= process_decl n us
    | (declaration.thm n us t v) :=
      get_expr_level t level.zero >>= get_expr_level v.get >>= process_decl n us
    | (declaration.cnst n us t _) := get_expr_level t level.zero >>= process_decl n us
    | (declaration.ax n us t) := get_expr_level t level.zero >>= process_decl n us
    end
  end

run_cmd
  using_new_ref mk_name_map $ λ m, do
env ← tactic.get_env,
(n, z) ← env.fold (pure (name.anonymous, 0)) $ λ d r, (do
  p ← r,
  let n := d.to_name,
  sp ← get_const_spec env m n,
  let z := sp.at_zero,
  pure $ if p.2 < z then (n, z) else p),
  trace (n, z)

Mario Carneiro (Dec 24 2021 at 20:10):

For the core library, the output is (sigma.mk.inj_eq, 3), which is to say, if you were to specialize every definition to every concrete universe, then the largest subterm of the form Sort u that appears in the hierarchy of definitions leading to sigma.mk.inj_eq.{0, 0} is Sort 3, aka Type 2. sigma.mk.inj_eq is the only definition that gets as high as 3.

For mathlib, the output is (measure_theory.L1.set_to_L1_congr_left', 4), and I'm checking now to see if there are any others at height 4. So the official answer seems to be that you need Type 3 to compile mathlib, or alternatively, mathlib can be translated to work in ZFC with 4 inaccessible cardinals.

Mario Carneiro (Dec 24 2021 at 20:12):

It looks like there are a huge number of definitions at height 4 (and measure_theory.L1.set_to_L1_congr_left' is a random selection). I will look for minimal ones.

Kevin Buzzard (Dec 24 2021 at 20:30):

Just to be clear here -- if a random definition takes four types in, in universes u v w x, that doesn't artificially push the bound up to 4?

Kevin Buzzard (Dec 24 2021 at 20:30):

docs#measure_theory.L1.set_to_L1_congr_left'

Mario Carneiro (Dec 24 2021 at 20:30):

Here is the list of level 4 definitions that do not depend on any level 4 definitions (i.e. they are level 4 because they either directly use a large universe or reference a level 3 definition with a universe bump in the substitution):

ordinal.omin._match_1
nhds_is_measurably_generated
finprod_mem_def
finprod_eq_prod_of_mul_support_subset
add_monoid_hom.map_finsum
pSet.definable
pSet.arity.equiv._main
finprod_mem_univ
algebraic_geometry.LocallyRingedSpace.coproduct._proof_4
finprod_mem_empty
category_theory.types_glue._proof_1
category_theory.types_glue._proof_2
ordinal.lift.principal_seg._proof_1
hahn_series.summable_family.hsum._proof_1
smooth_partition_of_unity.mk
partition_of_unity.mk
bump_covering.to_pou_fun
finprod_cond_nonneg
finsum_eq_sum_of_support_subset
partition_of_unity.sum_eq_one'
partition_of_unity.sum_le_one'
smooth_partition_of_unity.sum_eq_one'
partition_of_unity.sum_nonneg
smooth_partition_of_unity.sum_le_one'
finsum_mem_def
finsum_mem_univ
finsum_mem_congr
algebraic_geometry.Scheme.basic_open_is_open_immersion
uchange
finsum_mem_induction
algebraic_geometry.LocallyRingedSpace.has_coequalizer.image_basic_open._proof_2
algebraic_geometry.LocallyRingedSpace.has_coequalizer.image_basic_open._proof_4
algebraic_geometry.LocallyRingedSpace.has_coequalizer.image_basic_open._proof_8
algebraic_geometry.LocallyRingedSpace.has_coequalizer.coequalizer_π_app_is_local_ring_hom
finsum_mem_empty
algebraic_geometry.LocallyRingedSpace.is_open_immersion.forget_preserves_pullback_of_left
algebraic_geometry.LocallyRingedSpace.coproduct._proof_3
category_theory.subcanonical_types_grothendieck_topology
algebraic_geometry.PresheafedSpace.is_open_immersion.to_LocallyRingedSpace._proof_4
algebraic_geometry.LocallyRingedSpace.coequalizer._proof_3
classical.all_definable._main._proof_1
finsum_mem_zero
monoid_hom.map_finprod
pSet.embed._match_1
finprod_mem_induction
classical.all_definable._main._proof_2
at_bot_is_measurably_generated
ordinal.typein_iso._match_1
ordinal.typein_iso._proof_1
pSet.arity.equiv._main._meta_aux
finsum_dmem
finprod_mem_one
finprod_mem_congr
category_theory.flat_iff_Lan_flat
finprod_dmem
at_top_is_measurably_generated

Mario Carneiro (Dec 24 2021 at 20:31):

@Kevin Buzzard No, having lots of universe parameters does not bump the level, but instantiating those parameters (directly or indirectly) with a u+1 somewhere does
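
A minimal Lean 3 sketch of the distinction (the names `many` and `bumped` are made up for illustration):

```lean
universes u v w x

-- four universe parameters, but nothing forces any of them above 0:
-- the specialization many.{0 0 0 0} only ever mentions `Type`
def many {α : Type u} {β : Type v} {γ : Type w} {δ : Type x} (a : α) : α := a

-- whereas this use instantiates u := 1 (because α := Type), so the
-- specialized term mentions `Type 1` -- that is what bumps the level
def bumped : Type := @many (Type) punit punit punit nat
```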

Kevin Buzzard (Dec 24 2021 at 20:32):

measure_theory.L1.set_to_L1_congr_left' just looks to me like some random technical lemma in measure theory that has nothing to do with universes

Mario Carneiro (Dec 24 2021 at 20:32):

The new list is probably more useful

Johan Commelin (Dec 24 2021 at 20:34):

How the heck can something like finsum_mem_empty need a nontrivial number of universes?

Mario Carneiro (Dec 24 2021 at 20:34):

One thing that is a bit dicey about any calculation like this is that if you just chop off the universe hierarchy, then not every term has a type. Right now, the program is asserting for every definition that its type and value have to be typecheckable; perhaps it would be better to skip the type for definitions and theorems

Kevin Buzzard (Dec 24 2021 at 20:34):

finprod_mem_empty needs four universes? I still don't think I understand the question :-)

Mario Carneiro (Dec 24 2021 at 20:38):

finsum_mem_empty.{u_1, u_4} has max universe max (u_1+4) (u_4+2)

Johan Commelin (Dec 24 2021 at 20:39):

docs#finsum_mem_empty

Kevin Buzzard (Dec 24 2021 at 20:40):

lol what is going on

Yaël Dillies (Dec 24 2021 at 20:41):

Wow! Thanks, Mario.

Mario Carneiro (Dec 24 2021 at 20:42):

  • finsum_mem_empty.{u_1 u_4} has max universe max (u_1+4) (u_4+2) because it uses finsum.{u_4 u_1+1}
  • finsum.{u_1 u_2} has max universe max (u_1+2) (u_2+3), still not sure where this is coming from, I will make a program to unravel it

Kevin Buzzard (Dec 24 2021 at 20:46):

lol I wrote finprod_mem_empty

Kevin Buzzard (Dec 24 2021 at 20:47):

oh! Oh so the issue is finsum, not my lemma

Yaël Dillies (Dec 24 2021 at 20:47):

You're in the highest universe, Kevin!

Kevin Buzzard (Dec 24 2021 at 20:48):

golly, I apparently wrote finprod too

Yaël Dillies (Dec 24 2021 at 20:49):

Mario Carneiro said:

still not sure where this is coming from, I will make a program to unravel it

The answer is Kevin :upside_down:

Kevin Buzzard (Dec 24 2021 at 20:49):

I'm pretty sure that other people told me what to write in that definition :-)

Kevin Buzzard (Dec 24 2021 at 20:50):

What is all this plift stuff?

Mario Carneiro (Dec 24 2021 at 20:54):

Here's the program for outputting the level of a definition and its direct dependencies:

meta def level_spec.level : level_spec → level
| (level_spec.const l) := l
| (level_spec.parametric _ l)  := l

meta def level_spec.args : level_spec → list name
| (level_spec.const l) := []
| (level_spec.parametric us _)  := us

set_option pp.all true
run_cmd
  using_new_ref mk_name_map $ λ m, do
  env ← tactic.get_env,
  let n := `finsum,
  l ← get_const_spec env m n,
  trace (l.args, l.level),
  d ← env.get n,
  let map := d.value.fold mk_name_set $ λ e _ m,
    if e.is_constant then m.insert e.const_name else m,
  map.fold (pure ()) $ λ n r, do
    r,
    l ← get_const_spec env m n,
    trace (n, l.args, l.level)

Heather Macbeth (Dec 24 2021 at 20:57):

Kevin Buzzard said:

finprod_mem_empty needs four universes? I still don't think I understand the question :-)

I don't think I understand most of these examples either ... is it possible that many of them are just "mistakes"? People mis-using universes when writing the lemma/definition?

Mario Carneiro (Dec 24 2021 at 20:59):

  • finsum_mem_empty.{u_1 u_4} has max universe max (u_1+4) (u_4+2) because it uses finsum.{u_4 u_1+1}
  • finsum.{u_1 u_2} has max universe max (u_1+2) (u_2+3), because it uses set.finite.to_finset.{u_2}
  • set.finite.to_finset.{u} has max universe u+3, because it uses set.finite.fintype.{u}
  • set.finite.fintype.{u} has max universe u+3, because it uses set.has_coe_to_sort.{u}
  • set.has_coe_to_sort.{u} has max universe u+3, because it uses has_coe_to_sort.mk.{(max (u+1) 1) u+2}
  • has_coe_to_sort.mk.{u v} has max universe max u (v+1), because its type uses out_param.{v+1}
  • out_param.{u} has max universe u, because it mentions Sort u

Kevin Buzzard (Dec 24 2021 at 21:00):

It seems that the index types alpha, beta, iota are allowed to be Sorts, but finset takes a Type; that's what all the plift nonsense is about.

Mario Carneiro (Dec 24 2021 at 21:01):

I don't think that anything like that is going to cause spurious universe bumps

Kevin Buzzard (Dec 24 2021 at 21:01):

set.has_coe_to_sort.{u} has max universe u+3, because it uses has_coe_to_sort.mk.{(max (u+1) 1) u+2}

lol what is this?

Mario Carneiro (Dec 24 2021 at 21:01):

whether the indexing starts at 0 or 1 for a given universe variable doesn't really matter

Mario Carneiro (Dec 24 2021 at 21:01):

yeah that one is a bit surprising

Mario Carneiro (Dec 24 2021 at 21:01):

but you can see it clearly in the pp.all

Mario Carneiro (Dec 24 2021 at 21:06):

Note that in the has_coe_to_sort.mk step, I'm actually looking at the type of the definition rather than the value (since it's a constructor). Looking at types can cause universe bumps, since Type u : Type (u+1) - if you insist that the type be well formed whenever the value is, then you will get no upper bound

Mario Carneiro (Dec 24 2021 at 21:18):

Here's another way to look at the derivation:

  • finsum_mem_empty.{0 0} uses finsum.{0 1}
  • finsum.{0 1} uses set.finite.to_finset.{1}
  • set.finite.to_finset.{1} uses set.finite.fintype.{1}
  • set.finite.fintype.{1} uses set.has_coe_to_sort.{1}
  • set.has_coe_to_sort.{1} uses has_coe_to_sort.mk.{2 3}
  • has_coe_to_sort.mk.{2 3} uses out_param.{4}
  • out_param.{4} contains Sort 4

The last two examples might help to understand what's happening:

#check @finsum_mem_empty.{0 0}
#check @finsum.{0 1}
#check @set.finite.to_finset.{1}
#check @set.finite.fintype.{1}
#check @set.has_coe_to_sort.{1}
#check @has_coe_to_sort.mk.{2 3}
-- has_coe_to_sort.mk.{2 3} : Π {a : Type 1} {S : out_param.{4} (Type 2)}, (a → S) → has_coe_to_sort.{2 3} a S
#check @out_param.{4}
-- out_param.{4} : Type 3 → Type 3

out_param is just an identity function, but in order to apply the identity function to Type 2 you need a Type 3 to write the type of the argument

Heather Macbeth (Dec 24 2021 at 21:19):

If the uses of set.has_coe_to_sort were replaced by subtype, would this get rid of the +3 universe bump?

Mario Carneiro (Dec 24 2021 at 21:21):

here's a simplified example of how you can get lots of bumping without really doing anything weird

def foo : Prop := @id (@id (@id (@id (@id (Type 3) (Type 2)) (Type 1)) Type) Prop) true

Mario Carneiro (Dec 24 2021 at 21:29):

@Heather Macbeth Yes, you have to eliminate the uses of set.has_coe_to_sort in dependents as well, but this version drops the level of set.finite.fintype from u+3 to u+1:

inductive set.finite' {α} (s : set α) : Prop
| intro : fintype (subtype s) → set.finite'

lemma {u} set.finite_def' {α : Type u} {s : set α} : set.finite' s ↔ nonempty (fintype (subtype s)) := ⟨λ ⟨h⟩, ⟨h⟩, λ ⟨h⟩, ⟨h⟩⟩

noncomputable def {u} set.finite.fintype' {α : Type u} {s : set α} (h : set.finite' s) : fintype (subtype s) :=
classical.choice $ set.finite_def'.1 h

Heather Macbeth (Dec 24 2021 at 21:31):

Hmm, maybe set.has_coe_to_sort should be considered harmful ...

Mario Carneiro (Dec 24 2021 at 21:33):

Probably a better approach for determining the real axiomatic strength of mathlib would be to allow for higher universes that don't support inductive types at all. As long as you only use them in id applications like this that might be sufficient.

Mario Carneiro (Dec 24 2021 at 22:23):

Looking at this some more, finsum.{0 1} : Π {M α : Type}, ... which looks normal, but it references

set.finite.to_finset.{1} : Π {α : Type 1} {s : set α}, s.finite → finset α

which seems gratuitous (why are we dealing with sets of elements in Type 1?). I suspect the reason is plift : Sort u -> Type u. Perhaps we should try changing that to plift : Sort u -> Sort (max u 1) and see what breaks

Mario Carneiro (Dec 24 2021 at 22:26):

Oh, that won't work:

@[irreducible] noncomputable def finsum {M : Type u} {α : Sort v} [add_comm_monoid M] (f : α → M) : M :=
if h : finite (support (f ∘ plift.down)) then ∑ i in h.to_finset, f i.down else 0

Here support (f ∘ plift.down) is needed because support takes a Type u', and if you had a variant of plift returning Sort (max v 1) then it wouldn't unify with Type u' because u' + 1 = max v 1 does not have a solution for u' in level arithmetic
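
A Lean 3 sketch of the unification point (`plift'` is hypothetical):

```lean
universe v

-- today's plift : Sort v → Type v unifies with the `Type u'` that `support`
-- expects, by taking u' := v:
example (α : Sort v) : Type v := plift α

-- a hypothetical plift' : Sort v → Sort (max v 1) would not: solving
-- `u' + 1 = max v 1` for u' is impossible in level arithmetic
```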

Kevin Buzzard (Dec 24 2021 at 23:02):

try induction v? ;-)

Kevin Buzzard (Dec 24 2021 at 23:02):

Whose idea was it to sum over Props anyway?

Yury G. Kudryashov (Dec 25 2021 at 01:28):

It was my idea

Yury G. Kudryashov (Dec 25 2021 at 01:29):

This way you can write \sum^f n < 5, n

Kevin Buzzard (Dec 25 2021 at 01:34):

We need pfinsum :-)

Yury G. Kudryashov (Dec 25 2021 at 02:28):

Why? You won't get nice notation if you don't use the same def for sums over n : nat and hn : n < 5.

Yury G. Kudryashov (Dec 25 2021 at 02:28):

Do you really care about universe bump here?

Johan Commelin (Dec 25 2021 at 04:55):

For me, I find ∑ᶠ n < 5, n more important than avoiding universe bumps.

Anne Baanen (Dec 25 2021 at 19:18):

I have heard of people who believe in the Axiom of Choice up to some aleph-n (n = 2 or 3 I think?). I haven't heard of anyone who doesn't believe in at most 3 inaccessible cardinals. So I'm curious why we care

Kevin Buzzard (Dec 25 2021 at 21:25):

Because I tell ZFC people that obviously it all works in ZFC?

Mario Carneiro (Jun 18 2025 at 23:58):

I recently gave a talk at Big Proof about this topic, so I may as well resuscitate the meta program so we can run it on modern mathlib:

import Lean
open Lean Elab.Command

def LevelSpec := List Name × Level
  deriving Inhabited

def Lean.Level.atZero : Level → Nat
  | .succ l     => l.atZero + 1
  | .max l₁ l₂  => l₁.atZero.max l₂.atZero
  | .imax l₁ l₂ => let n := l₂.atZero; if n = 0 then 0 else l₁.atZero.max n
  | _           => 0

namespace GetConstSpec

structure State where
  visited : NameSet := {}
  levels : NameMap LevelSpec := {}

abbrev M := StateRefT State IO

def getExprLevel (r : NameMap LevelSpec) (v : Expr) (l := Level.zero) : Level := runST fun _ => do
  (·.snd) <$> (StateT.run (s := l) <| v.forEach fun
    | .const n us =>
      match r.findD n default with
      | ([], l') => modify fun l => mkLevelMax' l l'
      | (us', l') => modify fun l => mkLevelMax' l <| l'.instantiateParams us' us
    | .sort l' => modify fun l => mkLevelMax' l l'
    | _ => pure ())

partial def ensureConstSpec (env : Environment) (n : Name) : M Unit := do
  if (← get).visited.contains n then return
  modify fun s => { s with visited := s.visited.insert n }
  let some ci := env.find? n | unreachable!
  let v := ci.value?.getD ci.type
  v.getUsedConstants.forM (ensureConstSpec env)
  let r := (← get).levels
  let l := (getExprLevel r v .zero).normalize
  let sp := if l.hasParam then (ci.levelParams, l) else ([], l)
  modify fun s => { s with levels := s.levels.insert n sp }

def getConstSpec (env : Environment) (n : Name) : M LevelSpec := do
  ensureConstSpec env n
  return (← get).levels.find! n

def M.run {α} (x : Environment → M α) : CommandElabM α := do
  (x (← getEnv)).run' {}

def explainDefinition (n : Name) : CommandElabM Unit := do
  logInfo <| ← M.run fun env => do
    let l ← getConstSpec env n
    let mut msg := m!"{n}.{l.1}: {l.2}\n===="
    let some ci := env.find? n | throw <| .userError "unknown"
    let v := ci.value?.getD ci.type
    for n in v.getUsedConstants.qsort Name.lt do
      let l ← getConstSpec env n
      msg := msg ++ m!"\n{n}.{l.1}: {l.2}"
    pure msg

run_cmd explainDefinition ``Int.noConfusionType.withCtorType

def getMinimalDefinitionsAbove (limit : Nat) : CommandElabM (Array Name) := M.run fun env => do
  let ls ← env.constants.foldM (init := #[]) fun p _ d => do
    let n := d.name
    if n.isInternal || d.isUnsafe || d.isPartial then return p
    let sp ← getConstSpec env n
    let z := sp.2.atZero
    if z < limit then return p
    let some ci := env.find? n | throw <| .userError "unknown"
    let mut consts := ci.type.getUsedConstantsAsSet
    if let some val := ci.value? then
      consts := val.foldConsts consts fun c cs => cs.insert c
    for n in consts do
      let l ← getConstSpec env n
      if l.2.atZero ≥ limit then return p
    return p.push n
  return ls.qsort Name.lt

#eval do logInfo m!"{← (← getMinimalDefinitionsAbove 3).mapM mkConstWithLevelParams}"

def getMaximalDefinitions : CommandElabM (Nat × Array Name) := M.run fun env => do
  let (n, ls) ← env.constants.foldM (init := (0, #[])) fun p _ d => do
    let n := d.name
    if n.isInternal || d.isUnsafe || d.isPartial then return p
    let sp ← getConstSpec env n
    let z := sp.2.atZero
    return if p.1 < z then (z, #[n]) else if p.1 = z then (p.1, p.2.push n) else p
  return (n, ls.qsort Name.lt)

#eval do logInfo m!"{← getMaximalDefinitions}"

Bhavik Mehta (Jun 19 2025 at 00:06):

That says the highest universe in Lean is 2, and the highest in Mathlib is 3 for me, with many of the things achieving 3 being about condensed sets

Mario Carneiro (Jun 19 2025 at 00:13):

To my surprise, the number on mathlib has decreased to 3 since I ran this test 4 years ago. To take the example of finsum_mem_empty which was previously the poster child of a level 4 definition, it is now just level 1 with the following trace:

  • finsum_mem_empty.[u_1, u_5]: u_1+1
  • finsum.[u_7, u_8]: u_7+1
  • wrapped._@.Mathlib.Algebra.BigOperators.Finprod._hyg.33.[u_7, u_8]: u_8+1
  • definition._@.Mathlib.Algebra.BigOperators.Finprod._hyg.33.[u_2, u_4]: u_2+1
  • Set.Finite.toFinset.[u]: u+1
  • Set.Elem.[u]: u+1 :exclamation:
  • Set.[u]: u+1

The Set.Elem line replaces the previous use of set.has_coe_to_sort, which was responsible for a +2 bump; it is now only a +1 bump because coercions in Lean 4 are eagerly unfolded.
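
For reference, in current Mathlib `Set.Elem` is a plain definition unfolding to a subtype (the import path here is an assumption):

```lean
import Mathlib.Data.Set.Defs

universe u

-- no coercion-class bundling: `Set.Elem s` is definitionally the subtype,
-- so the only bump left is `Set α : Type u` itself
example {α : Type u} (s : Set α) : s.Elem = {x // x ∈ s} := rfl
```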

Mario Carneiro (Jun 19 2025 at 00:23):

Here is the list of minimal level 3 definitions:

CondensedSet
Cardinal.univ
CommRingCat.moduleCatExtendScalarsPseudofunctor
CommRingCat.moduleCatRestrictScalarsPseudofunctor
Ordinal.univ
PartialFun.instInhabited
RingCat.moduleCatRestrictScalarsPseudofunctor
Cardinal.lt_univ.match_1_1
Cardinal.lt_univ'.match_1_1
Cardinal.lt_univ'.match_1_3
CategoryTheory.Cat.instInhabited
CategoryTheory.Quiv.instInhabited

Mario Carneiro (Jun 19 2025 at 00:23):

I am pleased to see that they actually have something to do with universes now

Mario Carneiro (Jun 19 2025 at 00:37):

and a derivation for CondensedSet:

  • CondensedSet.{0} uses CategoryTheory.types.{1}
  • CategoryTheory.types.{1} uses CategoryTheory.LargeCategory.{1}
  • CategoryTheory.LargeCategory.{1} uses Type 2 aka Sort 3

Mario Carneiro (Jun 19 2025 at 00:37):

actually the type of CondensedSet.{0} is already Type 2
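
Concretely (hedged on the exact import path):

```lean
import Mathlib.Condensed.Basic

#check CondensedSet.{0}  -- : Type 2
```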

Mario Carneiro (Jun 19 2025 at 00:41):

Cardinal.univ is the existence of a "small" inaccessible, so I think level 3 definitions correspond to things that should require 1 inaccessible on top of ZFC. Level 2 definitions seem to be roughly ZFC level

James E Hanson (Jun 19 2025 at 01:37):

Mario Carneiro said:

Cardinal.univ is the existence of a "small" inaccessible, so I think level 3 definitions correspond to things that should require 1 inaccessible on top of ZFC. Level 2 definitions seem to be roughly ZFC level

What do you mean by a small inaccessible specifically?

Joël Riou (Jun 19 2025 at 11:55):

Indeed, CommRingCat.moduleCatExtendScalarsPseudofunctor (and the similarly named "pseudofunctor" declarations) is quite a big object: it is the pseudofunctor which sends a commutative ring R : Type u to the category of modules over R (which is itself a large category).

Andrew Yang (Jun 19 2025 at 12:00):

I wonder if it would be more interesting to see the list of declarations whose value/proof term has a higher universe than the ones mentioned in the type/statement.

Mario Carneiro (Jun 19 2025 at 14:23):

@Andrew Yang that's what this list is

Mario Carneiro (Jun 19 2025 at 14:23):

the full list of level 3 definitions is longer, this is just the ones that are minimal in the dependency order, meaning that all direct dependencies are level 2 or lower

Mario Carneiro (Jun 19 2025 at 14:26):

James E Hanson said:

Mario Carneiro said:

Cardinal.univ is the existence of a "small" inaccessible, so I think level 3 definitions correspond to things that should require 1 inaccessible on top of ZFC. Level 2 definitions seem to be roughly ZFC level

What do you mean by a small inaccessible specifically?

It asserts that an object which is inaccessible is an element of another type (not a universe), so I'm not sure how you would be able to model that without real inaccessibles. That is, if you have Cardinal alone then you can argue that this is a proper class in ZFC land, but if you define Cardinal.univ := |Cardinal| and assert Cardinal.univ : Cardinal now you've asserted that this inaccessible thing is itself an element of a set so it has to be a small (i.e. "real") inaccessible

Mario Carneiro (Jun 19 2025 at 14:29):

It's possible that you can still argue that this isn't really using inaccessibles by saying that this higher structure is just formal stuff and the universe so constructed is very deficient, but so far I haven't found an analysis good enough to do that

Mario Carneiro (Jun 19 2025 at 14:34):

Andrew Yang said:

I wonder if it would be more interesting to see the list of declaration whose value/proof term has higher universe than the ones mentioned in the type/statement.

Oh, I realize I misinterpreted this question. I agree that looking at types is a bit sketchy, because it obviously leads to universe bumping in some situations. The problem is that it's not clear how to analyze axiomatic definitions (like inductive types and constructors) without looking at the types, since they have no definition to inspect

Mario Carneiro (Jun 19 2025 at 14:39):

The test is easy enough to perform though. Removing the type checking for definitions, theorems, and opaques but leaving it for inductives and axioms actually reduces the max level in mathlib to 2, and the list of level 2 definitions is entirely composed of higher order class definitions:

#[Alternative, AlternativeMonad, Applicative, Bifunctor, Bind, Bitraversable, Class, CompHausLike,
  CondensedMod, CondensedSet, ContextFreeGrammar, EquivFunctor, Functor, Locale, Monad, MonadCont,
  MonadFinally, MonadShareCommon, MonomialOrder, MvPFunctor, Pure, QPF, Seq, SeqLeft, SeqRight, Traversable,
  small_type, Aesop.MonadStats, CategoryTheory.Bicategory, CategoryTheory.Bundled, CategoryTheory.Category,
  CategoryTheory.CategoryStruct, CategoryTheory.Groupoid, CategoryTheory.Mat_, Computability.Encoding,
  Computability.FinEncoding, Condensed.instHasLimitsOfSizeModuleCat, CondensedMod.instHasLimitsOfSizeModuleCat,
  Lean.MonadError, Lean.MonadQuotation, Lean.MonadRecDepth, Lean.MonadRef, Lean.MonadRuntimeException,
  Lean.MonadWithOptions, PartialFun.of, Plausible.SampleableExt, Lean.Syntax.MonadTraverser,
  SetTheory.PGame.ListShort, SetTheory.PGame.Short, Lean.Compiler.LCNF.MonadScope, Lean.Compiler.LCNF.TraverseFVar,
  Std.Internal.IO.Async.Selectable]

Mirek Olšák (Jun 19 2025 at 15:02):

I tried to answer Andrew's question on my own (I think it was asking something a bit different), and I am getting confused by Expr.foldM -- it only returns the direct subterms, not all deep subterms of the term, right? Is then getExprLevel really searching through the entire term?

Mirek Olšák (Jun 19 2025 at 15:07):

btw, I thought the question was: For each constant calculate two numbers:

  1. Maximal universe used both in type and value
  2. Maximal universe used in type only

Find the maximum difference (1) - (2).

Mario Carneiro (Jun 19 2025 at 15:14):

the difference can't be more than 1 I think

Mirek Olšák (Jun 19 2025 at 15:16):

In principle it can -- it is possible to state the consistency of ZFC + 42 inaccessibles using only finite structures (safely within Type 0), but then you need more than 42 universes to prove it. Of course we expect such things not to happen much in practice, but it would be nice to know.

Mario Carneiro (Jun 19 2025 at 15:17):

oh, wait I misread 2, I thought you meant value only

Mario Carneiro (Jun 19 2025 at 15:17):

I'm doing tests with value and value + type

Mario Carneiro (Jun 19 2025 at 15:19):

type only is not very relevant, since I'm trying to figure out what it takes to do the proof (what it takes to write the statement is accounted for in the definitions involved in the statement anyway)

Andrew Yang (Jun 19 2025 at 15:22):

To me, constructing a term of type Sort 3 of course involves 3 universes, and this isn’t really doing anything fishy. But if there is a result that is statable in Sort 3 but the proof involves Sort 5 then something weird is going on and I would consider this as “using two extra universes”

Mirek Olšák (Jun 19 2025 at 15:25):

My philosophical take is that I don't really care about highly abstract objects that are using high universes, and about their interpretations. I have a statement that I understand as a set-theorist, such as FLT, and I want to be able to tell (after Kevin succeeds), if I will have a ZFC proof of it, or not. Only the occurrence of these "weird abstract objects" inside proofs / objects that should be ZFC-understandable forces me to give them an interpretation (depending on how they are used). So it would be nice to see some examples of this happening.

David Michael Roberts (Jun 19 2025 at 17:14):

Maybe someone needs to formalise McLarty's paper that derived functor cohomology of a topos (from SGA) is do-able in higher-order arithmetic.

Mirek Olšák (Jun 19 2025 at 17:28):

I am fine with ZFC, no need to push it to arithmetic so far :-). But no problem, I can imagine there are some points in mathematical proofs that require large universes but can be circumvented. Looking up the largest difference (value-universe) minus (type-universe) could expose such moments that deserve attention. Perhaps sometimes, the universe reduction could be automated, sometimes it requires manual care.

Mario Carneiro (Jun 19 2025 at 17:59):

Mirek Olšák said:

I tried to answer Andrew's question on my own (I think it was asking something a bit different), and I am getting confused by Expr.foldM -- it only returns the direct subterms, not all deep subterms of the term, right? Is then getExprLevel really searching through the entire term?

You're right, this was a bug, I've fixed it above. Unfortunately it now takes a lot longer, and I haven't run it on mathlib yet. The maximum universe used in lean is now reported as 3, with almost all of the level 3 definitions being generated functions from the Int.noConfusionType.withCtorType family:

@[reducible] protected def Int.noConfusionType.withCtorType.{u} : Type u → Nat → Type (max 1 u) :=
fun P ctorIdx => bif ctorIdx.blt 1 then PUnit → Nat → P else PUnit → Nat → P

Mario Carneiro (Jun 19 2025 at 18:02):

the reason this is level 3 is that it uses cond.{3}: the type of the two arms of the bif is Type 1 : Type 2
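
A minimal illustration (not the generated code itself):

```lean
-- the arms of the `bif` live in `Type 1`, so `cond` is instantiated with
-- α := Type 1, and `Type 1 : Type 2` (that is, `Sort 3`):
example : Type 2 := Type 1
#check fun b : Bool => bif b then (PUnit.{2} : Type 1) else PUnit.{2}
```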

Mario Carneiro (Jun 19 2025 at 18:04):

and the reason the arms are in Type 1 instead of Type is because of the use of PUnit.{2} for no apparent reason

Mario Carneiro (Jun 19 2025 at 18:42):

Results from mathlib (it takes about 15 minutes): Highest universe is 5 now! Here are the winners:

instCartesianClosedCondensedSet
Cardinal.lift_lt_univ
CondensedMod.epi_iff_locallySurjective_on_compHaus
CondensedMod.epi_iff_surjective_on_stonean
CondensedSet.epi_iff_locallySurjective_on_compHaus
CondensedSet.epi_iff_surjective_on_stonean
ENat.card_plift
Nat.card_plift
PSet.rank_eq_wfRank
PartENat.card_plift
ZFSet.rank_eq_wfRank
CondensedMod.LocallyConstant.instIsIsoCondensedSetMapForgetAppCondensedModuleCatCounitDiscreteUnderlyingAdjObjFunctor

Damiano Testa (Jun 19 2025 at 18:56):

So, is it always one more than the Lean version?

Joachim Breitner (Jun 19 2025 at 19:07):

Mario Carneiro schrieb:

and the reason the arms are in Type 1 instead of Type is because of the use of PUnit.{2} for no apparent reason

I wrote that code recently (https://github.com/leanprover/lean4/pull/8037), and I put in the PUnit here to get all the various branches of withCtorType to the same universe level. Maybe this isn't actually needed? It was in my testing. Or maybe I'm just off by one?

Mario Carneiro (Jun 19 2025 at 19:08):

I think you are off by one

Mario Carneiro (Jun 19 2025 at 19:08):

You could also use ULift to ensure they have the same level
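A minimal sketch of that alternative, lifting both arms into a common universe with ULift instead of padding with PUnit arguments:

```lean
universe u v

-- `ULift.{r} α : Type (max _ r)`, so both arms land in `Type (max u v)`
example (b : Bool) (α : Type u) (β : Type v) : Type (max u v) :=
  bif b then ULift.{v} α else ULift.{u} β
```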

Joachim Breitner (Jun 19 2025 at 19:08):

PR welcome in that case!

Mario Carneiro (Jun 19 2025 at 19:09):

Here's a traceback for instCartesianClosedCondensedSet:

  • instCartesianClosedCondensedSet.{0} uses
  • CategoryTheory.instCartesianClosedFunctorType_1.{0, 1} uses
  • CategoryTheory.cartesianClosedFunctorToTypes.{0, 1, 1} uses
  • CategoryTheory.instCartesianClosedFunctorType.{1} uses
  • CategoryTheory.instCartesianClosedFunctorType._proof_1.{1} uses
  • CategoryTheory.Presheaf.isLeftAdjoint_of_preservesColimits.{1,1,2} uses
  • CategoryTheory.Presheaf.instIsLeftKanExtensionFunctorOppositeTypeIdCompYonedaOfPreservesColimitsOfSizeOfHasPointwiseLeftKanExtension.{1,1,2} uses
  • CategoryTheory.Presheaf.isLeftKanExtension_of_preservesColimits.{1,1,2} uses
  • CategoryTheory.Presheaf.isLeftKanExtension_along_yoneda_iff.{1,1,2} uses
  • CategoryTheory.Presheaf.preservesColimitsOfSize_leftKanExtension.{1,1,1,2,1} uses
  • CategoryTheory.Functor.instIsLeftKanExtensionLeftKanExtensionLeftKanExtensionUnit.{1,2,2,1,1,1} uses
  • CategoryTheory.Limits.initialIsInitial.{2,2} uses
  • CategoryTheory.Limits.initialIsInitial._proof_2.{2,2} uses
  • CategoryTheory.Limits.colimit.hom_ext.{0,0,2,2} uses
  • CategoryTheory.Limits.IsColimit.hom_ext.{0,2,0,2} uses
  • eq_of_heq.{4} uses
  • rfl.{5} uses
  • Eq.refl.{5} uses
  • Eq.{5} uses Sort 5

Joachim Breitner (Jun 19 2025 at 19:09):

I tried ULift but it worked less well when I tried it. Don't remember the details, though. Maybe max vs imax or so

Mario Carneiro (Jun 19 2025 at 19:27):

I was surprised by the CategoryTheory.Limits.IsColimit.hom_ext line; it seems this is the consequence of using congr to rewrite the statement including an instance inst : (CategoryTheory.Category.{2, 2} C : Type 3). And eq_of_heq has a proof that actually involves a universe bump because it uses equality of types while proving something about equality of values
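A sketch of where that bump appears: the hypothesis of such a rewrite is an equality between the types themselves, i.e. an Eq.{u+1} at Sort u, one level above the values being compared:

```lean
universe u

-- `h : α = β` is an `Eq.{u+1}` at `Sort u`, even though the conclusion
-- only compares the values `h ▸ a` and `a`
example {α β : Sort u} (h : α = β) (a : α) : HEq (h ▸ a) a := by
  subst h
  exact HEq.rfl
```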

Kyle Miller (Jun 19 2025 at 19:44):

@Mario Carneiro How does congr! 3; exact funext w do in that proof? I think congr! might avoid the type equality proofs, but I'm not sure.

Mario Carneiro (Jun 19 2025 at 19:45):

I think it is the same

Mario Carneiro (Jun 19 2025 at 19:45):

the problem is that the congruence lemma for IsColimit.desc involves all the arguments, even though we aren't rewriting the parameters like the category instance

Mario Carneiro (Jun 19 2025 at 19:46):

and these are proved in a roundabout way using eq_of_heq HEq.rfl which itself involves an unnecessary bump

Mario Carneiro (Jun 19 2025 at 19:50):

oh no you are right, congr! produces a proof term without the bump

Mirek Olšák (Jun 21 2025 at 14:58):

I was investigating why the relatively innocent-looking Int.Linear.Poly.denote'_go_eq_denote (from Lean core) has maximal recursive universe level 3. It turns out that Lean.Data.AC.Variable could have been defined to have Sort (max 1 u) instead of Type u.

  • Would it be possible to define Lean.Data.AC.Variable as Sort (max 1 u) in the library? Is it desired? I was not able to keep Lean.Data.AC.Context as Type u then (and I am not certain of the purpose of Lean.Data.AC).
  • Could the analysis tool detect such universe lifts? I guess it might be non-trivial to get through the PLift (unless explicitly stated).

Mario Carneiro (Jun 21 2025 at 15:00):

can you just remove the type ascription on that structure?

Mario Carneiro (Jun 21 2025 at 15:01):

all of these are Lean internals, and the ones involved in writing tactic code don't matter. This one does matter, I think, since Int.Linear.Poly theorems are generated by omega

Mirek Olšák (Jun 21 2025 at 15:01):

Yes, the trouble is with later putting Variables into a List. Somehow, Lists do not like to carry objects of type Sort (max 1 u)

Aaron Liu (Jun 21 2025 at 15:02):

Yes, since List takes a Type u as argument you can only put Type _ into a list
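A minimal sketch of that restriction and the usual workaround (List : Type u → Type u, so inhabitants of a Sort must be repackaged as a Type first, e.g. via PLift):

```lean
universe u

-- `PLift.{u} : Sort u → Type u` wraps a `Sort` element so it fits in a `List`
example (α : Sort u) (a b : α) : List (PLift.{u} α) := [⟨a⟩, ⟨b⟩]
```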

Jovan Gerbscheid (Jun 21 2025 at 15:03):

You can write α : Type u instead of α : Sort u, which I think solves the awkward universe lift

Mirek Olšák (Jun 21 2025 at 15:03):

Aaron Liu said:

Yes, since List takes a Type u as argument you can only put Type _ into a list

That's why we need inaccessible cardinals :laughing:

Mirek Olšák (Jun 21 2025 at 15:04):

Jovan Gerbscheid said:

You can write α : Type u instead of α : Sort u, which I think solves the awkward universe lift

Yes, that is an option. I am not sure whether Variable ever wants to take Sort 0 as a parameter; perhaps not.
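A sketch of the trade-off, with hypothetical VariableS/VariableT standing in for the two possible signatures of Lean.Data.AC.Variable:

```lean
universe u

-- a `Sort u` parameter forces the structure up to `Sort (max 1 u)` …
structure VariableS (α : Sort u) : Sort (max 1 u) where
  val : α

-- … while a `Type u` parameter keeps everything at `Type u`
structure VariableT (α : Type u) : Type u where
  val : α
```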

Mario Carneiro (Jun 21 2025 at 15:05):

are you saying you don't do linear arithmetic over the vector space of proofs of a proposition?

Mirek Olšák (Jun 22 2025 at 11:42):

What is wrong with Boolean algebras? Apparently

run_cmd explainDefinition ``compl_compl  -- compl_compl.[u]: u+3
-- ...
-- BooleanAlgebra.toBiheytingAlgebra.[u]: u+3
-- ...

Can you share the code that produces the trace?

Mario Carneiro said:

  • instCartesianClosedCondensedSet.{0} uses
  • CategoryTheory.instCartesianClosedFunctorType_1.{0, 1} uses
  • CategoryTheory.cartesianClosedFunctorToTypes.{0, 1, 1} uses
    ...

By the way, I tried to understand where Nat.card_plift uses universe level 5. Part of the story is that to calculate Nat.card_plift, we first calculate the Cardinal in the appropriate universe, and then convert it first to ENat and then to Nat. The conversion itself has relatively tame universe levels, but then we are "for fun" proving that it is an order & additive & multiplicative homomorphism, which sometimes causes universe bumps, and one of them is the mentioned BooleanAlgebra thing.

Mirek Olšák (Jun 22 2025 at 21:26):

After further investigation, the excessive universe levels in boolean algebras boil down to our good old friend Lean.Data.AC.Variable. Perhaps I should really edit Lean Core...

Mario Carneiro (Jun 22 2025 at 21:29):

Mirek Olšák said:

Can you share the code that produces the trace

I'm afraid I used a sophisticated neural network for that (using biological materials), it's a bit hard to reproduce

Mirek Olšák (Jun 22 2025 at 21:31):

no problem, I will implement some AI to replace it then

Mirek Olšák (Jun 22 2025 at 22:28):

Nat.card_plift.[u_3]: u_3+5
====
Nat.card.[u_3+1]->u_3+5
Nat.card_congr.[u_3+1, u_3]->u_3+5

Nat.card.[u_3]: u_3+4
====
Cardinal.toNat.[u_3]->u_3+4

Cardinal.toNat.[u_1]: u_1+4
====
Cardinal.toENat.[u_1]->u_1+4

Cardinal.toENat.[u]: u+4
====
Cardinal.toENat.proof_2.[u]->u+4
Cardinal.toENat.proof_3.[u]->u+4
Cardinal.toENat.proof_4.[u]->u+4

Cardinal.toENat.proof_2.[u_1]: u_1+4
====
Cardinal.canLiftCardinalNat.[u_1]->u_1+4
Cardinal.one_le_iff_ne_zero.[u_1]->u_1+4
Cardinal.toENatAux_eq_top.[u_1]->u_1+4
Cardinal.toENatAux_eq_zero.[u_1]->u_1+4

Cardinal.canLiftCardinalNat.[u_1]: u_1+4
====
Cardinal.lt_aleph0.[u_1]->u_1+4

Cardinal.lt_aleph0.[u_1]: u_1+4
====
Cardinal.nat_lt_aleph0.[u_1]->u_1+4

Cardinal.nat_lt_aleph0.[u]: u+4
====
Cardinal.instSuccOrder.[u]->u+4
Cardinal.nat_succ.[u]->u+4

Cardinal.instSuccOrder.[u_1]: u_1+4
====
Cardinal.instConditionallyCompleteLinearOrderBot.[u_1]->u_1+4
Cardinal.instWellFoundedLT.[u_1]->u_1+4

Cardinal.instConditionallyCompleteLinearOrderBot.[u_1]: u_1+4
====
Cardinal.instWellFoundedLT.[u_1]->u_1+4
WellFoundedLT.conditionallyCompleteLinearOrderBot.[u_1+1]->u_1+4

Cardinal.instWellFoundedLT.[u]: u+4
====
Cardinal.lt_wf.[u]->u+4

Cardinal.lt_wf.[u]: u+4
====
Function.Embedding.min_injective.[u+1, u]->u+4

Function.Embedding.min_injective.[u, v]: max (u+3) (v+3)
====
zorn_subset.[max u v]->(max u v)+3
Maximal.eq_of_subset.[max u v]->max 3 ((max u v)+2)
Set.instIsReflSubset.[max u v]->(max u v)+3
Set.subset_sUnion_of_mem.[max u v]->max 3 ((max u v)+2)

zorn_subset.[u_1]: u_1+3
====
zorn_le₀.[u_1]->u_1+3
Set.instCompleteAtomicBooleanAlgebra.[u_1]->max 3 (u_1+2)

zorn_le₀.[u_1]: u_1+3
====
zorn_le.[u_1]->u_1+3

zorn_le.[u_1]: u_1+3
====
exists_maximal_of_chains_bounded.[u_1]->u_1+3

exists_maximal_of_chains_bounded.[u_1]: u_1+3
====
maxChain_spec.[u_1]->u_1+3

maxChain_spec.[u_1]: u_1+3
====
SuccChain.[u_1]->u_1+3
SuperChain.[u_1]->u_1+3
ChainClosure.isChain.[u_1]->u_1+3
ChainClosure.succ_fixpoint_iff.[u_1]->u_1+3
IsChain.superChain_succChain.[u_1]->u_1+3
Set.instHasSSubset.[u_1]->u_1+3
Set.instIsIrreflSSubset.[u_1]->u_1+3
maxChain_spec.match_1.[u_1]->u_1+3

SuccChain.[u_1]: u_1+3
====
SuperChain.[u_1]->u_1+3

SuperChain.[u_1]: u_1+3
====
Set.instHasSSubset.[u_1]->u_1+3

Set.instHasSSubset.[u]: u+3
====
BooleanAlgebra.toBiheytingAlgebra.[u]->u+3
Set.instBooleanAlgebra.[u]->max 3 (u+2)

BooleanAlgebra.toBiheytingAlgebra.[u]: u+3
====
GeneralizedBooleanAlgebra.toGeneralizedCoheytingAlgebra.[u]->u+3
BooleanAlgebra.toBiheytingAlgebra.proof_3.[u]->u+3

GeneralizedBooleanAlgebra.toGeneralizedCoheytingAlgebra.[u]: u+3
====
GeneralizedBooleanAlgebra.toGeneralizedCoheytingAlgebra.proof_2.[u]->u+3

GeneralizedBooleanAlgebra.toGeneralizedCoheytingAlgebra.proof_2.[u_1]: u_1+3
====
inf_sdiff_self_left.[u_1]->u_1+3
inf_sdiff_self_right.[u_1]->u_1+3
_private.Mathlib.Order.BooleanAlgebra.0.sdiff_sup_self'.[u_1]->u_1+3
Lean.Data.AC.Context.eq_of_norm.[u_1+1]->u_1+3

inf_sdiff_self_left.[u]: u+3
====
inf_sdiff_self_right.[u]->u+3

inf_sdiff_self_right.[u]: u+3
====
sdiff_inf_sdiff.[u]->u+3

sdiff_inf_sdiff.[u]: u+3
====
Lean.Data.AC.Context.eq_of_norm.[u+1]->u+3

Lean.Data.AC.Context.eq_of_norm.[u_1]: max (u_1+2) (imax (max 1 u_1) u_1)
====
instDecidableEqNat.[]->2
instDecidableEqNat.[]->2
List.instBEq.[0]->2
List.instBEq.[0]->2
List.instLawfulBEq.[0]->2
Nat.instLawfulBEq.[]->2
Lean.Data.AC.eval.[u_1, u_1+1]->max (max 2 (u_1+1)) (imax (max 1 u_1) u_1)
Lean.Data.AC.eval.[u_1, u_1+1]->max (max 2 (u_1+1)) (imax (max 1 u_1) u_1)
Lean.Data.AC.evalList.[u_1, u_1+1]->max (max 2 (u_1+1)) (imax (max 1 u_1) u_1)
Lean.Data.AC.instContextInformationContext.[u_1]->u_1+2
Lean.Data.AC.instContextInformationContext.[u_1]->u_1+2
Lean.Data.AC.instEvalInformationContext.[u_1]->u_1+2
Lean.Data.AC.instEvalInformationContext.[u_1]->u_1+2
Lean.Data.AC.norm.[u_1+1]->max 2 (u_1+1)
Lean.Data.AC.norm.[u_1+1]->max 2 (u_1+1)
Lean.Data.AC.Context.eval_norm.[u_1]->max (u_1+2) (imax (max 1 u_1) u_1)

instDecidableEqNat.[]: 2
====
Nat.decEq.[]->2

Nat.decEq.[]: 2
====
Nat.beq.[]->2
Nat.eq_of_beq_eq_true.[]->2
Nat.ne_of_beq_eq_false.[]->2

Nat.beq.[]: 2
====
Nat.below.[1]->2
Nat.brecOn.[1]->2

Nat.below.[u]: max 2 (u+1)
====
Nat.rec.[(max 1 u)+1]->(max 1 u)+1

Nat.rec.[u]: max 1 u
====
Nat.[]->1
Nat.succ.[]->1
Nat.zero.[]->1

Nat.[]: 1
====

Joachim Breitner (Jun 24 2025 at 09:03):

Mario Carneiro schrieb:

I think you are off by one

https://github.com/leanprover/lean4/pull/8972. Will wait for the mathlib ci infrastructure to work again before merging, in case that uncovers issues with this.

Joachim Breitner (Jun 24 2025 at 14:13):

Mario Carneiro schrieb:

You could also use ULift to ensure they have the same level

Tried that as well, and it wasn’t a drop-in replacement, as ULift lifts two Types to their max.

If I generalize ULift to wrap a Sort instead:

- structure ULift.{r, s} (α : Type s) : Type (max s r) where
+ structure ULift.{r, s} (α : Sort s) : Sort (max s (r+1)) where

and also avoid my construction for inductive props (which don't have interesting noConfusionTypes anyway), then I can use ULift.

Furthermore, because PUnit u → … involves imax while ULift involves max, this fixes lean4#8962, although it probably doesn't solve all issues of that kind while level equality checking is incomplete.

This ULift generalization looks harmless and backwards compatible as long as the second level parameter is inferred (which the docstring for ULift points out as an explicit feature of that parameter order). Do you see a problem with that generalization?
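A small illustration of the max/imax difference mentioned above: the imax in an arrow type collapses to 0 when the codomain is a Prop, whereas ULift's max never collapses.

```lean
-- `Sort (imax 2 0) = Sort 0`, so the padded arrow is itself a `Prop`
example (p : Prop) : Prop := PUnit.{2} → p

-- `ULift` uses `max`, so the level never collapses
example : Type 3 := ULift.{3} Nat
```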

Mario Carneiro (Jun 24 2025 at 14:14):

I think ULift is better than using PUnit → here, because the latter introduces an imax you probably want to avoid

Mario Carneiro (Jun 24 2025 at 14:14):

I suggested a change to your version to use Sort (max s r 1) instead, but this will change the indexing for downstream uses

Mario Carneiro (Jun 24 2025 at 14:17):

maybe it would be better to take your version of ULift and make mine PULift

Joachim Breitner (Jun 24 2025 at 14:21):

Mario Carneiro schrieb:

but this will change the indexing for downstream uses

What do you mean by indexing? Explicit instantiation of the r parameter?

Mario Carneiro (Jun 24 2025 at 14:22):

yes

Robin Arnez (Jun 24 2025 at 14:22):

structure PULift.{r, s} (α : Sort s) : Sort (max r s 1) where
  up :: down : α

I assume

Mario Carneiro (Jun 24 2025 at 14:23):

incidentally, this also subsumes PLift

Mario Carneiro (Jun 24 2025 at 14:24):

and if we can get people away from PLift.{u} that will prevent some more spurious universe bumps

Mario Carneiro (Jun 24 2025 at 14:24):

(PLift.{0} is fine, but also subsumed if we have these new things)

Joachim Breitner (Jun 24 2025 at 14:26):

Mario Carneiro schrieb:

incidentally, this also subsumes PLift

Either variant subsumes PLift, right?

Mario Carneiro (Jun 24 2025 at 14:26):

yes

Joachim Breitner (Jun 24 2025 at 14:40):

Put the variant that breaks less code up at PR https://github.com/leanprover/lean4/pull/8975

Mirek Olšák (Jun 24 2025 at 21:15):

Another typical reason for an unnecessary extra universe is the default way recursion is compiled using below and brecOn. Compare:

def lowLevGet? {α : Type u} (l : List α) (n : Nat) : Option α :=
  @List.rec.{u+1,u} α (fun _ => Nat → Option α) (fun _ => Option.none) (
    fun h _t getFromTail? n => match n with
    | 0 => h
    | Nat.succ n' => getFromTail? n'
  ) l n

def recursGet? {α : Type u} (n : Nat) : List α → Option α
  | [] => none
  | h::t => match n with
    | 0 => h
    | Nat.succ n' => recursGet? n' t

run_cmd explainDefinition ``lowLevGet?  -- lowLevGet?.[u]: u+1
run_cmd explainDefinition ``recursGet?  -- recursGet?.[u]: u+2

But I suppose here the analysis tool should probably detect that the motive didn't need to be recursively defined...
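The extra level can be seen directly in the auxiliary constructions the equation compiler uses: below packages all recursive results one universe above where rec itself targets, which is where compiled definitions pick up the bump (a sketch; exact signatures depend on the core version):

```lean
#check @List.rec    -- the motive targets `Sort u`
#check @List.below  -- targets `Sort (max 1 u)`, one level up
```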


Last updated: Dec 20 2025 at 21:32 UTC