Zulip Chat Archive
Stream: Program verification
Topic: universes
Johan Commelin (Oct 14 2021 at 07:12):
Naive newbie question: Lean has a hierarchy of universes. Do these ever show up in software development? Or is Prop + Type all you need?
Mario Carneiro (Oct 14 2021 at 07:14):
It comes up in some naive higher order programming, e.g. if you want a monad transformer with a coherence property that universally quantifies over all monads
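A minimal Lean sketch of the bump Mario describes (the name TransformerWithLift and its fields are made up for illustration, not a real API): packaging a transformer together with a property quantified over all monads pushes the structure up a universe.

universe u

structure TransformerWithLift where
  trans : (Type u → Type u) → Type u → Type u
  -- this field quantifies over every `m : Type u → Type u`, a type living in
  -- `Type (u+1)`, so the whole structure is forced into `Type (u+1)` as well
  lift : ∀ (m : Type u → Type u) [Monad m] (α : Type u), m α → trans m α

#check TransformerWithLift.{0}   -- lives in Type 1, one universe above its contents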
Mario Carneiro (Oct 14 2021 at 07:16):
I remember struggling with a definition of io that didn't bump the universe, because it existentially quantifies over a monad satisfying some interface operations
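A sketch of why the naive packaging bumps the universe (IOLike is a hypothetical name; Lean's actual IO avoids this by being a concrete state monad over IO.RealWorld, so it stays in Type):

structure IOLike : Type 1 where
  m     : Type → Type      -- "some monad", chosen by the implementation
  monad : Monad m          -- the Monad instance stored as data
  print : String → m Unit  -- one example interface operation
-- the fields `m` and `monad` live in Type 1, so IOLike cannot live in Type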
Sebastian Ullrich (Oct 14 2021 at 07:18):
Existentials also come up in containers of (runtime) heterogeneous elements
#check List (Σ α : Type, Σ _ : ToString α, α) -- list of printable things
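A usage sketch of such a list (this assumes the Σ notation from the snippet above is available; printables and renderAll are made-up names):

def printables : List (Σ α : Type, Σ _ : ToString α, α) :=
  [⟨Nat, ⟨inferInstance, 3⟩⟩, ⟨String, ⟨inferInstance, "hello"⟩⟩]

def renderAll (xs : List (Σ α : Type, Σ _ : ToString α, α)) : List String :=
  xs.map fun ⟨_, inst, x⟩ => @toString _ inst x  -- use the stored ToString instance

#eval renderAll printables   -- expected: ["3", "hello"]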
Mario Carneiro (Oct 14 2021 at 07:18):
If we wrote haskell code like the lens library we would have universe bumps all over the place:
type Traversal s t a b = forall f. Applicative f => (a -> f b) -> s -> f t
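A rough Lean rendering of that Haskell type (a sketch, not a library definition; Traversal is just an illustrative name): quantifying over every applicative functor pushes the definition up one universe.

universe u

def Traversal (s t a b : Type u) : Type (u + 1) :=
  -- the binder `f : Type u → Type u` already lives in Type (u+1),
  -- so the whole pi type cannot fit in Type u
  ∀ (f : Type u → Type u) [Applicative f], (a → f b) → s → f t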
Mario Carneiro (Oct 14 2021 at 07:18):
runST also has a higher order type quantification
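For comparison, Haskell's runST has type (forall s. ST s a) -> a. In Lean, the analogous argument type, a computation that works for every state token, already lives one universe up:

#check ((σ : Type) → ST σ Nat)   -- : Type 1, because it quantifies over the type σ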
Sebastian Ullrich (Oct 14 2021 at 07:22):
In practice though, for anything you ever want to stick in a monad, Type is the only viable kind :upside_down:. Because bind requires alpha and beta to be in the same universe, you have to agree on a single "monadic universe" for the whole program.
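For reference, this is the shape of Lean core's Bind class (reproduced here with a primed name so it does not clash with the real one): both type arguments share the single universe u.

universe u v

class Bind' (m : Type u → Type v) where
  -- α and β are forced into the same universe u, so a chain of binds
  -- fixes one universe for every intermediate result
  bind : {α β : Type u} → m α → (α → m β) → m β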
Mario Carneiro (Oct 14 2021 at 07:23):
I rather think that all these examples don't want more universes, but instead system F (which can be summarized as Type 1 = Type)
Johan Commelin (Oct 14 2021 at 07:37):
Aha, thanks for the replies!
Kevin Buzzard (Oct 14 2021 at 13:46):
So the conclusion is that Lean has infinitely many universes only to make its type theory consistent, and neither mathematicians nor computer scientists need anything other than Prop or Type?? Given that universes sometimes cause issues, isn't this an argument for just being much less polymorphic? :-)
Eric Wieser (Oct 14 2021 at 17:43):
Doesn't the problem usually manifest as trouble _writing new_ polymorphic lemmas, rather than _applying_ old ones? So there'd be no advantage to de-polymorphizing existing code.
Eric Wieser (Oct 14 2021 at 17:43):
Why does bind require the universe to stay the same?
Kyle Miller (Oct 14 2021 at 18:04):
Eric Wieser said:
Why does bind require the universe to stay the same?
Iirc, it's because a monad is a particular functor, and you can't quantify over universe variables (you can say F.{u} for a specific level u is a monad, but you can't require that F.{u} is a monad for all u.)
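Concretely, with List standing in for Kyle's F:

universe u

#check Monad List.{0}   -- fine: a Monad requirement at one fixed level
#check Monad List.{u}   -- fine: u is a single level variable for this command
-- but there is no way to state `∀ u, Monad List.{u}` as a type:
-- universe levels are not terms, so they cannot be quantified over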
Matthew Ballard (Oct 14 2021 at 19:38):
The agitation in Rust for type constructors associated to traits is a good sign for the practical value for devs. (Please correct me if I am confused here.)
Mario Carneiro (Oct 15 2021 at 03:38):
@Kevin Buzzard There is another solution, which is to not have universes at all, or only have one. You then have to deal with the fact that some types are "small" and others are "large" and the large ones don't have a type in the formal system. This is how ZFC works and IMO it's quite possible to make do with this
Chris B (Oct 15 2021 at 05:53):
What do you do when you want to make a statement about a large type? Are there any proof assistants that use this approach?
Kyle Miller (Oct 15 2021 at 06:26):
Chris B said:
What do you do when you want to make a statement about a large type? Are there any proof assistants that use this approach?
Maybe related is (at least one version of) Automath, which only had the Prop and Type type universes. What happens is that not every theorem/definition has a type -- essentially it has two classes of functions, the usual lambda abstractions (which have types), and, not sure what to call them, metafunctions (which don't).
Imagine a modification to Lean, where in place of explicit universe levels you can have other kinds of arguments:
lemma id.(α : Type) : α → α := λ x, x
The id function is too big to have a type itself in a two-universe system, but, for example, id.(ℤ) would have the type ℤ → ℤ : Type.
(One annoyance is that this id cannot be used with Props, which is something the Sort hierarchy solves in Lean.)
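For comparison, Lean's actual id is polymorphic over Sort u, so each use site picks a level, much like specializing the hypothetical id.(α) above:

#check @id.{1}           -- {α : Type} → α → α
#check (id : Int → Int)  -- the specialization corresponding to id.(ℤ)
#check @id.{0}           -- {α : Prop} → α → α, covered thanks to the Sort hierarchy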
Kyle Miller (Oct 15 2021 at 06:30):
Just like how in Lean you can't have functions that take arguments that quantify over universe levels (and instead you have to specialize a given function with some specific universe levels, possibly universe variables), you can't have functions that take these metafunctions as arguments without specializing all of the metafunction arguments to specific values. (I wonder how you'd deal with monads on Type?)
Chris B (Oct 15 2021 at 06:32):
Thanks, I'll check automath out. The metafunction/"fixed at point of use" idea is sort of what I figured it might be.
Mario Carneiro (Oct 15 2021 at 06:41):
@Chris B In metamath, the distinction I am making corresponds to sets versus classes. You can prove theorems about classes, but class quantification is always implicit and always a universal closure, similar to how universe parametric definitions in lean work. (You can't prove that there exists a class satisfying such and such, but you can construct a concrete class and prove it satisfies such and such.) In a type theory, it would probably work out that large types can appear in the context, and they can be substituted but not quantified over, and the typing judgment for large types would be something like |- A type instead of A : Type, because Type is not an object in this language.
Mario Carneiro (Oct 15 2021 at 06:43):
Kyle's example is right on the money. The distinction between regular arguments and const-generic/type arguments in Rust (and early vs late bound lifetimes) is also of a similar flavor.
Mac (Oct 15 2021 at 15:49):
Sebastian Ullrich said:
In practice though, for anything you ever want to stick in a monad, Type is the only viable kind :upside_down:. Because bind requires alpha and beta to be in the same universe, you have to agree on a single "monadic universe" for the whole program.
Could this just be solved with a more general HBind? For example:
class HBind (m : Type u1 → Type v1) (n : Type u2 → Type v2) where
  hBind {α : Type u1} {β : Type u2} : m α → (α → n β) → n β

@[defaultInstance] instance [Bind m] : HBind m m := ⟨bind⟩
Such a bind would also be useful for asynchronous programming (as its signature is compatible with that of bindAsync).
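If that compiles (it relies on Lean auto-binding the universe parameters u1, v1, u2, v2), the default instance makes hBind degenerate to ordinary bind on a single monad, e.g.:

#eval HBind.hBind (some 2) (fun x => some (x + 1))   -- some 3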
Chris B (Oct 15 2021 at 17:07):
@Mario Carneiro
So it seems like it's along the lines of mm0's first/second order distinction, where first order would be small and second order is large. The strategy in peano.mm1 of using hand-rolled metaprograms to get ad-hoc polymorphism (ala eqtac) seems to work pretty well even in the absence of a fancier typeclass/resolution system.
In an mm1 project where there are a large number of sorts, can you use annotations to fake a {A}, List A that does monomorphization at points of use to get you List <concrete sort>, or would that blow up too quickly?
Mario Carneiro (Oct 15 2021 at 17:09):
You actually don't need metaprograms to get the required instantiations here; instantiation just directly works on a theorem with second order free variables (in fact, this is the preferred way to state lemmas). You would only need a metaprogram to simulate third order variables
Mario Carneiro (Oct 15 2021 at 17:11):
You would also need a metaprogram to get sort-polymorphism in MM0, this is not directly supported. That would be one (very much not recommended) way to implement a rich type system in the sorts, by just minting a new sort whenever you need a different type. I guess you could call that monomorphization
Mario Carneiro (Oct 15 2021 at 17:13):
It's not recommended though because a lot of the architecture assumes that the number of sorts is a small finite number (like, less than 128), so a scheme like that would quickly blow the implementation limits
Chris B (Oct 15 2021 at 17:22):
Boo. Thanks for humoring me.
Mario Carneiro (Oct 15 2021 at 17:31):
If users started pushing for that it wouldn't be that hard to increase the limit, but I'm trying to discourage that kind of use. It makes about as much sense as having thousands or tens of thousands of syntax categories
Mario Carneiro (Oct 15 2021 at 17:32):
Or Type 10000 for that matter
Mac (Oct 15 2021 at 17:47):
Mario Carneiro said:
It makes about as much sense as having thousands or tens of thousands of syntax categories
Personally, this makes a lot of sense to me. :laughing: I am actually working on something privately that does this kind of thing.
Mac (Oct 15 2021 at 17:49):
The library essentially uses syntax categories as a substitute for type classes (and inductives -- so types in general).
Chris B (Oct 15 2021 at 18:11):
Mario Carneiro said:
If users started pushing for that it wouldn't be that hard to increase the limit, but I'm trying to discourage that kind of use. It makes about as much sense as having thousands or tens of thousands of syntax categories
This makes sense in the context of mm0 as a theorem prover prover. If you were to import something like mathlib or another large formalization from a different system, I assume you would keep their declarations as "things that are stated in terms of the sorts set out in lean.mm1".
Mario Carneiro (Oct 15 2021 at 18:30):
Right. Each importer might have a different set of sorts, but it is a function only of the formal system itself so it's usually actually a small number. For lean I think there are 3 sorts, for terms, levels and inductive specifications
Last updated: Dec 20 2023 at 11:08 UTC