Zulip Chat Archive

Stream: new members

Topic: more more basics


Iocta (Mar 16 2020 at 22:13):

inductive natural : Type
| zero : natural
| succ : natural → natural


open natural


def pred (n : natural) : natural :=
natural.rec_on n zero (λ n pred_n, n)


def plus (m n : natural) : natural
| _ zero := m
| zero _ := n
| _ _ := plus (pred m) (succ n)
08.02.le:15:3: error: function expected at
  plus
term has type
  natural (lean-checker)
08.02.le:15:10: error: ill-formed match/equation expression (lean-checker)
08.02.le:16:3: error: function expected at
  plus
term has type
  natural (lean-checker)
08.02.le:17:3: error: function expected at
  plus
term has type
  natural (lean-checker)
08.02.le:17:10: error: function expected at
  plus
term has type
  natural (lean-checker)

Iocta (Mar 16 2020 at 22:13):

What's wrong?

Iocta (Mar 16 2020 at 22:17):

looks like it wants a function in those _s, but why/which function?

Reid Barton (Mar 16 2020 at 22:19):

Stuff to the left of the colon like (m n : natural) doesn't participate in the pattern match. You need to write def plus : natural -> natural -> natural | ...

Reid Barton (Mar 16 2020 at 22:19):

That stuff is also fixed/implicitly passed to the recursive call, which is why Lean is saying that plus is not a function

Reid Barton (Mar 16 2020 at 22:20):

Moving the arguments to the right of the colon will fix that as well.

Iocta (Mar 16 2020 at 22:22):

ok that's progress

Iocta (Mar 16 2020 at 22:26):

def plus : natural → natural → natural
| a zero := a
| zero b := b
| a b := plus (pred a) (succ b)

now it wants ⊢ natural.sizeof (pred a) < natural.sizeof a which I guess makes sense

Mario Carneiro (Mar 16 2020 at 22:26):

Don't use pred a like that

Mario Carneiro (Mar 16 2020 at 22:26):

have a case for succ a and use a

Reid Barton (Mar 16 2020 at 22:27):

(pred a is not smaller than a when a = 0, and Lean doesn't "know" that a is not equal to 0 in the last case. Plus this definition would be awkward to prove things about anyways.)

Mario Carneiro (Mar 16 2020 at 22:27):

def plus : natural → natural → natural
| a zero := a
| zero b := b
| (succ a) b := plus a (succ b)

Iocta (Mar 16 2020 at 22:27):

yeah that works

Mario Carneiro (Mar 16 2020 at 22:28):

(the first case is unnecessary)
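(A sketch of what that means, not from the thread: without the a zero clause, the two remaining patterns are still exhaustive on the first argument, so the definition goes through unchanged.)

def plus : natural → natural → natural
| zero b := b
| (succ a) b := plus a (succ b)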

Iocta (Mar 16 2020 at 22:28):

mhmm

Iocta (Mar 16 2020 at 22:33):

What's happening with the scope in the | ... := clauses here? Looks like it's referencing zero and succ but also defining a and b.

Iocta (Mar 16 2020 at 22:34):

In other words, how does it know whether to bind a name or reference a name?

Iocta (Mar 16 2020 at 22:37):

perhaps it's just "define a name unless it's already defined, then use it"?

Mario Carneiro (Mar 16 2020 at 22:37):

it's a pattern

Mario Carneiro (Mar 16 2020 at 22:37):

you can use the names of constructors, but everything else becomes a new bound variable

Mario Carneiro (Mar 16 2020 at 22:38):

it helps here if you have ever used a functional programming language with pattern matching capabilities

Iocta (Mar 16 2020 at 22:40):

I see, | a succ := succ doesn't rebind succ because succ is a constructor

Iocta (Mar 16 2020 at 22:41):

but given def foo := 5 I can rebind "foo" with | a foo := foo

Iocta (Mar 16 2020 at 22:41):

because foo isn't a constructor of natural

Iocta (Mar 16 2020 at 22:42):

right?

Iocta (Mar 16 2020 at 22:43):

I mean like

def foo := 5

def plus : natural → natural → natural
| zero foo := foo
| (succ a) b := plus a (succ b)

Iocta (Mar 16 2020 at 22:43):

ok this makes sense

Reid Barton (Mar 16 2020 at 22:49):

Iocta said:

perhaps it's just "define a name unless it's already defined, then use it"?

Yes. To be fair, this is different from how it works in Haskell; I don't remember what ML-family languages do here

Mario Carneiro (Mar 16 2020 at 22:50):

You actually have to be a bit careful here because if the constructor is in a namespace, then lean will interpret it as a variable instead. The following defines the constant A function, not the identity function:

inductive foo
| A
| B

example : foo → foo
| A := A
| B := B -- equation compiler error, equation #2 has not been used in the compilation, note that the left-hand-side of equation #1 is a variable

Mario Carneiro (Mar 16 2020 at 22:51):

the identity function is actually this:

example : foo → foo
| foo.A := foo.A
| foo.B := foo.B

Reid Barton (Mar 16 2020 at 22:51):

wait I'm confused, isn't your example also the identity function, but for the wrong reason?

Mario Carneiro (Mar 16 2020 at 22:51):

aha, you're right

Mario Carneiro (Mar 16 2020 at 22:52):

if I had | A := foo.A it would be the constant function but that's too obvious

Mario Carneiro (Mar 16 2020 at 22:52):

| A := B and | B := A would make the issue more clear since on the first line you would get "unknown identifier 'B'"

Iocta (Mar 17 2020 at 03:56):

inductive natural : Type
| zero : natural
| succ : natural → natural

open natural


def pred (n : natural) : natural :=
natural.rec_on n zero (λ n pred_n, n)


def add : natural → natural → natural
| m zero := m
| m (succ n) := succ (add m n)



theorem add_zero : ∀ (m : natural), add m zero = m
| (zero) := rfl
| (succ m) := by rw [add]

theorem zero_add : ∀ (m : natural), add zero m = m
| zero := rfl
| (succ m) := by rw [add, zero_add]

theorem succ_add : ∀ (m n : natural), add m (succ n) = succ (add m n)
| zero n := rfl
| (succ m) n := by rw add


theorem add_succ : forall (m n: natural), add (succ m) n = succ (add m n)
:= sorry

How do I fill the sorry?

Mario Carneiro (Mar 17 2020 at 04:42):

succ_add should be provable by rfl

Mario Carneiro (Mar 17 2020 at 04:43):

add_succ can be proven similar to zero_add

Mario Carneiro (Mar 17 2020 at 04:43):

(your names for succ_add and add_succ appear reversed)
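To spell the hint out (an illustrative sketch, not from the thread, using a primed name to avoid clashing with the statements above): since add recurses on its second argument, the theorem stated as succ_add above, add m (succ n) = succ (add m n), is exactly the second defining equation, so rfl closes it, while the statement called add_succ, add (succ m) n = succ (add m n), needs induction on its second argument, mirroring zero_add.

example (m n : natural) : add m (succ n) = succ (add m n) := rfl

theorem add_succ' : ∀ (m n : natural), add (succ m) n = succ (add m n)
| m zero := rfl
| m (succ n) := by rw [add, add_succ', add]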

Iocta (Mar 17 2020 at 05:23):

theorem succ_add : forall (m n: natural), add (succ m) n = succ (add m n)
| m zero := rfl
| m (succ n) := by rw [add_succ, add, succ_add]

works. I didn't expect to be able to access succ_add in the rw, but apparently that's ok

Mario Carneiro (Mar 17 2020 at 06:33):

it's because you are using the equation compiler to write the theorem. It's just like when you used add in the second clause for the definition of add

Mario Carneiro (Mar 17 2020 at 06:33):

If you attempted to use succ_add in the wrong way to make a circular proof, the equation compiler would complain about nontermination

Iocta (Mar 17 2020 at 06:51):

ok

Kevin Buzzard (Mar 17 2020 at 08:29):

The equation compiler is just a glorified rec
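For concreteness, a sketch (not from the thread): the zero_add proof above can be written directly with natural.rec_on, which is roughly the shape the equation compiler elaborates it to.

theorem zero_add' : ∀ (m : natural), add zero m = m :=
λ m, natural.rec_on m rfl (λ n ih, by rw [add, ih])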

Iocta (Mar 17 2020 at 21:14):

theorem add_comm : ∀ (a b : natural), add a b = add b a
| zero zero := rfl
| (succ a) zero := by rw [zero_add, add_zero]
| zero (succ b) := by rw [zero_add, add_zero]
| (succ a) (succ b) := by rw [add, add, succ_add, succ_add, add_comm]

proves the same point as

theorem add_comm (m n : natural ) : add m n = add n m :=
natural.rec_on n
(by rw [add_zero, zero_add ])
(assume (n: natural) ,
  (assume ih: add m n = add n m,
    show add m (succ n) = add (succ n) m,
    from calc
    add m (succ n) = succ (add m n) : rfl
    ... = succ (add n m) : by rw ih
    ... = add (succ n) m : by apply succ_add))

but in a different style. Can lean tell that these are similar, or can I convince it that they are?

Iocta (Mar 17 2020 at 21:16):

given one of them, can it generate the other in case I want to see the calc version?

Iocta (Mar 17 2020 at 21:18):

more generally, given a tactic proof can it generate a term proof?

Bryan Gin-ge Chen (Mar 17 2020 at 21:23):

You can see the term that Lean generates from a tactic proof with #print add_comm.

Iocta (Mar 17 2020 at 21:34):

Oh neat

Iocta (Mar 17 2020 at 21:38):

Would it be hard to write a function that transforms that output into a calc-based proof?

Bryan Gin-ge Chen (Mar 17 2020 at 21:51):

It's certainly too hard for me to do.

Iocta (Mar 17 2020 at 21:54):

Sounds like it's pretty hard then :-)

Iocta (Mar 17 2020 at 21:57):

Why does inductive vector (α : Type u) : nat → Type u have \to Type u and not \to \a?

Iocta (Mar 17 2020 at 21:57):

inductive vector (α : Type u) : nat → Type u
| nil {} : vector 0
| cons   : Π {n}, α → vector n → vector (n+1)

Patrick Massot (Mar 17 2020 at 21:59):

For each natural number n you want the type of vectors of objects of type alpha with length n

Patrick Massot (Mar 17 2020 at 21:59):

This is indeed in Type u

Iocta (Mar 17 2020 at 22:01):

ok I see

Iocta (Mar 17 2020 at 22:55):

I had

def append {α : Type} :
  Π {m n : nat}, vector α m → vector α n → vector α (m + n)
| 0 0 a b := nil
| 0 (nat.succ n) a b := _
| (nat.succ m) n a b := _

and I thought this would help

def zero_add {α : Type} :
 Π {n}, vector α (0 + n) → vector α n
| 0 nil  := nil
| (nat.succ n) (h::t) := (h::t)

but I'm not sure how to eliminate the 0 +

08.04.le:40:28: error: equation compiler failed to create auxiliary declaration 'zero_add._main'
nested exception message:
type mismatch at application
  a_a :: a_a_1
term
  a_a_1
has type
  vector α (nat.add 0 n)
but is expected to have type
  vector α 0 (lean-checker)

Kenny Lau (Mar 17 2020 at 22:58):

you need to use zero_add

Kevin Buzzard (Mar 17 2020 at 22:59):

Welcome to dependent type theory.

Kevin Buzzard (Mar 17 2020 at 23:12):

You can do it like this

universe u

inductive vector (α : Type u) : nat → Type u
| nil {} : vector 0
| cons   : Π {n}, α → vector n → vector (n+1)

namespace vector

def zero_add {α : Type} : Π {n}, vector α (0 + n) → vector α n :=
λ n v, by rw zero_add at v; exact v

end vector

but this is a complete disaster because I used tactics to define data, so zero_add will be hard to use in practice. The computer scientists know tricks around this. Here is a situation where this issue is really quite embedded and you have to dig hard to get out.

Kenny Lau (Mar 17 2020 at 23:18):

universe u

inductive vector (α : Type u) : nat → Type u
| nil {} : vector 0
| cons   : Π {n}, α → vector n → vector (nat.succ n)

def vector.zero_add {α : Type} :
 Π {n}, vector α (0 + n) → vector α n
| 0 vector.nil  := vector.nil
| (nat.succ n) (vector.cons h t) := vector.cons h (vector.zero_add t)

Kenny Lau (Mar 17 2020 at 23:18):

this is what I meant

Kevin Buzzard (Mar 17 2020 at 23:20):

This is some sort of a dark art, which fortunately doesn't seem to come up in the kind of mathematics I tend to formalise (which is one of the reasons I'm not very good at it)

Iocta (Mar 17 2020 at 23:21):

hmm I'd actually tried

def zero_add {α : Type} :
 Π {n}, vector α (0 + n) → vector α n
| 0 nil  := nil
| (nat.succ n) (h::t) := h::(zero_add t)

which I think is the same, but it didn't work for me:

08.04.le:37:5: error: equation compiler failed to create auxiliary declaration 'zero_add._main'
nested exception message:
type mismatch at application
  eq.rec (id_rhs (vector α (0 + 1)) (a_a :: ))
term
  id_rhs (vector α (0 + 1)) (a_a :: )
has type
  vector α (0 + 1)
but is expected to have type
  (λ (a : vector α (0 + nat.succ n)), vector α (nat.succ n)) (a_a :: a_a_1) (lean-checker)
08.04.le:40:30: error: type mismatch at application
  zero_add t
term
  t
has type
  vector α 0
but is expected to have type
  vector α (0 + ?m_1) (lean-checker)

Kenny Lau (Mar 17 2020 at 23:23):

the resulting type of vector.cons had to be vector (nat.succ n)

Iocta (Mar 17 2020 at 23:26):

What is the implication of that?

Kenny Lau (Mar 17 2020 at 23:38):

Lean fails to unify nat.succ n with ?1 + 1

Reid Barton (Mar 17 2020 at 23:39):

Is it possible to use @ with the recursive call? @zero_add n t?

Kevin Buzzard (Mar 17 2020 at 23:50):

Just to remark that these issues aren't there with what Mario calls the "untyped approach" in that CDGA thread linked to above:

universe u

def vector (α : Type u) (n : ℕ) : Type u := {l : list α // l.length = n}

def append {α : Type} :
  Π {m n : nat}, vector α m → vector α n → vector α (m + n) :=
λ m n v w, ⟨v.1 ++ w.1, by rw [list.length_append, v.2, w.2]⟩ -- easy

Iocta (Mar 18 2020 at 00:00):

I don't see Mario's comments in that thread

Iocta (Mar 18 2020 at 00:01):

scratch that

Iocta (Mar 18 2020 at 06:56):

Just to clarify in case I missed the point: what was the conclusion of this conversation? Is there a way to fill this?

universe u

inductive vector (α : Type u) : nat → Type u
| nil {} : vector 0
| cons   : Π {n}, α → vector n → vector (nat.succ n)


def append {α : Type} :
  Π {m n : nat}, vector α m → vector α n → vector α (m + n)
| 0 0 a b := nil
| 0 (nat.succ n) a b := _
| (nat.succ m) n a b := _

Mario Carneiro (Mar 18 2020 at 08:39):

Here's a definition that sidesteps the definitional equality problems:

def append {α : Type} :
  Π {m n : nat}, vector α m → vector α n → vector α (n + m)
| _ n vector.nil b := b
| _ n (@vector.cons _ m x a) b := vector.cons x (append a b)

Mario Carneiro (Mar 18 2020 at 08:41):

To make this work, you need the recursion to follow the same structure on the vectors and on nat. Since addition is defined by recursion on the second argument, but append is defined by recursion on the first vector, that's why the addends are swapped

Mario Carneiro (Mar 18 2020 at 09:47):

If you really want the addends to be in the right order, I don't think you can do better than just having a cast function that transports along equality for the type. (This is actually a special case of eq.subst, but it computes better if you really want to do this dependent type stuff.)

def append' {α : Type} :
  Π {m n : nat}, vector α m → vector α n → vector α (n + m)
| _ n vector.nil b := b
| _ n (@vector.cons _ m x a) b := vector.cons x (append' a b)

def vector.cast {α : Type} :
  Π {m : nat}, vector α m → ∀ {n : nat}, m = n → vector α n
| _ vector.nil 0 _ := vector.nil
| _ (@vector.cons _ m x a) (nat.succ n) e :=
  vector.cons x (vector.cast a (nat.succ.inj e))

def append {α : Type} {m n : nat} (a : vector α m) (b : vector α n) : vector α (m + n) :=
(append' a b).cast (nat.add_comm n m)

Iocta (Mar 18 2020 at 18:57):

Thanks, gonna have to process this :-)

Kevin Buzzard (Mar 18 2020 at 18:59):

I have seen this phenomenon several times now -- great, you can now append, but is this really what you want to be doing? You will presumably have to fight dependent type theory when you want to make your next definition on vectors. The approach where you define them as lists of a certain length removes all this headache.

Marc Huisinga (Mar 18 2020 at 19:05):

Iocta is probably working through TPIL, where you're asked to implement append for vectors in 8.9.5

Kevin Buzzard (Mar 18 2020 at 19:05):

harsh!

Kevin Buzzard (Mar 18 2020 at 19:06):

Is the ability to write this sort of code important in formalising? I just try and avoid it...

Mario Carneiro (Mar 18 2020 at 19:06):

I think you are only asked to write down the type of vector append in TPIL

Marc Huisinga (Mar 18 2020 at 19:07):

"Following the examples in Section 8.6, define a function that will append two vectors. This is tricky; you will have to define an auxiliary function."

Kevin Buzzard (Mar 18 2020 at 19:07):

oh mario posted spoilers!

Marc Huisinga (Mar 18 2020 at 19:08):

it's probably not very important, but the vector example is used very commonly when motivating DTT, especially in programming languages with DTT. My only DTT experience is with Lean - is this stuff less painful in any other DTT language?

Mario Carneiro (Mar 18 2020 at 19:09):

I don't think so, the issues that come up are basically to do with DTT itself and not lean's implementation of it

Kevin Buzzard (Mar 18 2020 at 19:10):

I think it's telling that both the definitions of vector that we have in mathlib are not this one.

Kevin Buzzard (Mar 18 2020 at 19:10):

We have "lists of length n" and "functions fin n -> alpha"

Marc Huisinga (Mar 18 2020 at 19:11):

kind of weird that everyone uses the vector example to motivate DTT then, if it isn't practical anywhere

Mario Carneiro (Mar 18 2020 at 19:11):

Marc Huisinga said:

the vector example is used very commonly when motivating DTT

I also think it's telling that the very things that motivate DTT are the things that are most difficult to deal with in DTT

Kevin Buzzard (Mar 18 2020 at 19:11):

They should just get beginners doing commutative algebra :-)

Marc Huisinga (Mar 18 2020 at 19:12):

When I started formalizing things for my BSc thesis, I also started working with vectors when I wanted to carry around some length invariant.
I think I wrote two definitions and then decided that this isn't a good use of my time :p

Patrick Massot (Mar 18 2020 at 20:04):

At Big Proof 2, Gilles Dowek was ranting against vectors being presented as the motivation for DTT. He claims that ∀ x, P x is the actual motivation.

Kevin Buzzard (Mar 18 2020 at 20:04):

Exactly! The statement of Fermat's Last Theorem!

Kevin Buzzard (Mar 18 2020 at 20:04):

For every n>=3, some statement which depends on n is true.

Reid Barton (Mar 18 2020 at 20:26):

But systems based on, say, first order logic don't have any trouble with such statements either

Mario Carneiro (Mar 18 2020 at 20:36):

I think that if you follow the "untyped style" with data + proofs, then you are essentially rejecting Curry-Howard, because proofs and data become entirely separate classes, with the thing that makes dependent type theory actually dependent, namely the use of terms inside types (not counting props), being avoided. If you follow this subset strictly, I think you land essentially in HOL.

But then this makes me wonder: we've argued before that HOL is too weak to represent some "real maths", like schemes and such. If that's the case, then this must be using "real dependent types", and there is probably an associated dependent type hell to be had. So how does it manifest?

Kevin Buzzard (Mar 18 2020 at 20:55):

Recall that a sheaf of types is, amongst other things, a way of associating a type to each open subset of a topological space. We can make these types rings or abelian groups, and then talk about sheaves of modules over a sheaf of rings. I've always suspected that this is problematic in HOL because you won't be able to use typeclasses.

Mario Carneiro (Mar 18 2020 at 21:03):

Right, so my question is what happens after that, when you do it in DTT. My general prediction is that anything that is impossible in HOL is DTT hell in DTT

Mario Carneiro (Mar 18 2020 at 21:04):

I think in this case it shows up in some struggles you had about a year ago with using the restriction map for equal sets like U \cap U = U

Kevin Buzzard (Mar 18 2020 at 21:06):

I suspect that sheaves of modules over a fixed sheaf of rings should be some monoidal category (except that I only just found out what these were the other day). That might be a nice exercise :-)

Mario Carneiro (Mar 18 2020 at 21:08):

and this is reminding me of some recent comments of Thorsten Altenkirch on the cross-mailing list DTT thread:

But more practically: indeed you are right if you accept this point (i.e. univalence) then you have to give up some properties of equality, in particular that it is propositional, i.e. a property. This can be annoying because it may mean that you have to deal with coherence issues when using equality, but I suspect you should be used to this. However, as long as you only deal with simple objects, i.e. as in the “world of numbers” everything is fine and nothing changes. It is only when you leave the realm of “set level Mathematics” and talk about structures (as in the Grothendieck example) you encounter equalities which are not propositions (but structures themselves).

Mario Carneiro (Mar 18 2020 at 21:09):

"coherence issues when using equality" sounds like trouble

Kevin Buzzard (Mar 18 2020 at 21:10):

This monoidal category stuff is exactly some sort of attempt to get around it -- Type is a monoidal category because product is not associative, it's only associative up to canonical isomorphism. But carrying around canonical isomorphisms is less painful than carrying around isomorphisms

Patrick Massot (Mar 18 2020 at 21:10):

Mario, did you carefully look at what Sébastien explained to us in Pittsburgh, and the corresponding mathlib code?

Mario Carneiro (Mar 18 2020 at 21:11):

probably not? I'm not sure what you are referring to

Patrick Massot (Mar 18 2020 at 21:12):

Sébastien's motivation for leaving the automated world of Isabelle is not sheaf theory.

Mario Carneiro (Mar 18 2020 at 21:13):

I'm not saying that HOL is the answer, but rather that DTT has only slightly improved the situation from "you can't do it" to "you can do it, but only if you are a masochist"

Mario Carneiro (Mar 18 2020 at 21:14):

(or to put it more positively: if you like tricky technical puzzles)

Reid Barton (Mar 18 2020 at 21:15):

Is it clear that vector (terms inside types) is really the same kind of issue as sheaves (types inside terms)?

Mario Carneiro (Mar 18 2020 at 21:16):

that's an interesting way to put it

Mario Carneiro (Mar 18 2020 at 21:17):

DTT (as implemented by lean) brings more than just terms in types, it also brings universes, which allows you to have types as terms. This use isn't particularly problematic AFAIK

Kevin Buzzard (Mar 18 2020 at 21:19):

It should be noted that we have not yet defined Picard groups in Lean (isomorphism classes of certain sheaves of modules, with group law being tensor product)

Mario Carneiro (Mar 18 2020 at 21:19):

is that actually a group? It sounds like a 2-group

Reid Barton (Mar 18 2020 at 21:20):

It's a group because of the phrase "isomorphism classes of". It is true it is the π₀ of a 2-group or something.

Mario Carneiro (Mar 18 2020 at 21:22):

aren't those isomorphism classes proper?

Kevin Buzzard (Mar 18 2020 at 21:22):

It's isomorphism classes of so-called invertible sheaves of modules. This saves you.

Kevin Buzzard (Mar 18 2020 at 21:23):

All sheaves of modules, you're doomed. An example of a scheme is: choose a field K. Now let the top space be 1 point, and let the sheaf of rings send the empty set to the zero ring, and the whole space to K.

Kevin Buzzard (Mar 18 2020 at 21:24):

A sheaf of modules for that scheme is just a K-vector space. An invertible sheaf of modules is a 1-dimensional K-vector space.

Kevin Buzzard (Mar 18 2020 at 21:24):

Tensor product of two 1-dimensional spaces is 1-dimensional; if V is 1-dimensional then the linear dual of V is its inverse. But it turns out they're all isomorphic to K anyway, so the Picard group is trivial.

Mario Carneiro (Mar 18 2020 at 21:26):

it's still a proper class if you are talking about all 1D K-vector spaces, but I guess there is a small set of representatives

Kevin Buzzard (Mar 18 2020 at 21:26):

The group is the isomorphism classes so it's trivial

Kevin Buzzard (Mar 18 2020 at 21:26):

and in general you do some small set trickery to show the group is a set

Reid Barton (Mar 18 2020 at 21:31):

If you took all the definitions literally in ZFC + one universe U (used to interpret "small" things) then the Picard group of a field would be a group with a single element which is not a member of U (hence neither is the group). But the group is isomorphic to the trivial group, so we pretend it is the trivial group and in particular small.

Reid Barton (Mar 18 2020 at 21:33):

Similarly if you wrote down all the definitions in Lean you would get a group in Type 1 (or Type (u+1)) but it's isomorphic (as a group) to some "ordinary"-sized group

Kevin Buzzard (Mar 18 2020 at 22:32):

I envisage a future where we're doing etale cohomology or Picard schemes or condensed sets in Lean and every time the answer is in type u+n this means that there are n problems for the ZFC guys to sort out at a future date

Reid Barton (Mar 18 2020 at 22:39):

Well there are two issues here. One is how to remove the "+U" part of ZFC+U when you want to "export" a Lean proof to ZFC. But even if you are happy to assume ZFC+U, you will probably still want operations like Pic, K-theory, etc. to not increase the universe level.

Kevin Buzzard (Mar 18 2020 at 22:46):

Yeah but I figure that it's not my problem.

Kevin Buzzard (Mar 18 2020 at 22:47):

I've worked through enough of these arguments to know that it will all be fine and hey, that's what universe polymorphism is for, right?

Kevin Buzzard (Mar 18 2020 at 22:47):

The people who are interested in reading section 4 of Scholze's Étale cohomology of diamonds can fix it up later

Kevin Buzzard (Mar 18 2020 at 22:49):

I wonder which universe the ideal class group of a number field will end up in

Reid Barton (Mar 18 2020 at 22:50):

See, I expect the second one will eventually become your problem, though I admit I don't have a compelling example at the moment

Reid Barton (Mar 18 2020 at 22:55):

The best I can do so far is the red-shift conjecture which, if you take it as a statement about iterated K-theory, can't be stated in Lean without arranging for K(R) to live in the same universe as R, but in the general formulation described on the nlab can be.

Kevin Buzzard (Mar 18 2020 at 23:15):

Maybe we should ask Leo for a universe for every countable ordinal?

Jasmin Blanchette (Mar 19 2020 at 07:30):

@Mario Carneiro There are actually things that feel OK in HOL and are considered hell in DTT. Namely, inductive predicates. HOL's inductive predicates are built on top of Knaster-Tarski and not encoded as inductive types (Curry-Howard). They work quite smoothly. Whereas in CIC, one ends up with the evil "vector" essentially, and dependently typed matching, etc. Maybe this is an argument against Curry-Howard -- or maybe it just shows that Lean's "induction/cases/match" could be improved.

I tried (with Johannes) to build a course that would almost exclusively remain in the HOL fragment of Lean, but in the end I had to give up because of inductive predicates. Now I'll even teach the (suboptimal) way of doing "vector", so that I can show the parallel with inductive predicates -- Curry-Howard in action.

Jasmin Blanchette (Mar 19 2020 at 07:32):

@Mario Carneiro Concerning "you _can_ do it, but only if you are a masochist", I think things like dependent pairs (and structures) and higher-rank polymorphism are useful and not hellish.

Kevin Buzzard (Mar 19 2020 at 07:42):

Equality of structures is hellish but in a lot of mathematics it turns out that we don't use the concept. We do use _some_ kind of equivalence relation, but it seems to be not any of the ones that these systems make available to us

Mario Carneiro (Mar 19 2020 at 07:52):

the major hellish aspect of DTT here is what I call "rigid parameters" like the n in vector A n. Here we have a type nat about which we have a bunch of useful (propositional) equalities, but by putting the parameter in a type dependency, only defeq matters, and this isn't what we want, so we have to fight it with casts and things and it's downhill from there.

Mario Carneiro (Mar 19 2020 at 07:54):

This is different from the issue that HoTT tries to solve, which is how to replace isomorphism of various kinds with equality (not defeq). You still have to use casts on rigid parameters, and the casts carry information now so you have coherences all over the place

Johan Commelin (Mar 19 2020 at 07:57):

Why? You could transfer a vector A n to a vector A m along h : n = m, right?

Mario Carneiro (Mar 19 2020 at 07:59):

that's just another way to use cast

Mario Carneiro (Mar 19 2020 at 07:59):

the problem is that because vector is data, that cast is not being used in a proof, it is being used in a definition and that means you will have to destructure this later

Mario Carneiro (Mar 19 2020 at 07:59):

and that usually means second order cast

Johan Commelin (Mar 19 2020 at 08:01):

Well, sure. But my impression was that HoTT has better support for higher order cast. It's what you're supposed to be doing.

Mario Carneiro (Mar 19 2020 at 08:01):

For example, I wrote down a definition of append above, using a cast. Proving that append is associative is both not fun and impossible because of type constraints
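A sketch of the type constraint (not from the thread, reusing append and vector.cast from above, with the proof left as sorry): the two sides of associativity live in different types, vector α ((l + m) + n) and vector α (l + (m + n)), so a plain equality between them does not even typecheck; one side has to be transported along nat.add_assoc first.

example {α : Type} {l m n : nat}
  (a : vector α l) (b : vector α m) (c : vector α n) :
  (append (append a b) c).cast (nat.add_assoc l m n) = append a (append b c) :=
sorry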

Johan Commelin (Mar 19 2020 at 08:02):

Do you run into the same problem with HoTT?

Johan Commelin (Mar 19 2020 at 08:02):

(I don't know, I never tried it.)

Mario Carneiro (Mar 19 2020 at 08:02):

Yes, HoTT encourages analysis of the structure of higher order cast. That's why everyone gets distracted with this and forgets to get back to the maths

Johan Commelin (Mar 19 2020 at 08:02):

My guess is that with HoTT you would be able to write down a higher homotopy that proves the assoc. And probably it's not even ugly.

Mario Carneiro (Mar 19 2020 at 08:05):

one theoretical fact that's not so nice is that you can never be "done" with your library of theorems about equality. You can prove that eq is symmetric, but that's a definition; now you want to prove that this definition has an inverse, and that's another higher order thing, then you want that your associator satisfies a pentagon lemma, and it just keeps going

Mario Carneiro (Mar 19 2020 at 08:05):

with a proof irrelevant equality it bottoms out at the first level, and then all equalities you need are just true

Mario Carneiro (Mar 19 2020 at 08:06):

in HoTT it feels like you can never just say that a fact is true, because how it was proved always comes back later

Johan Commelin (Mar 19 2020 at 08:07):

Sure, but it shifts the problem. Because when you're doing tensor products of modules, you end up with those pentagon lemmas anyway (but with lesser support). Higher-order equality is a real thing in math. And we need to be able to rewrite along it effortlessly.

Mario Carneiro (Mar 19 2020 at 08:08):

I'm not convinced that HoTT actually makes dealing with any of it nicer than you can already do in lean, it just makes these problems pervasive

Mario Carneiro (Mar 19 2020 at 08:12):

To put it another way, when you have Prop, every theorem tells you a little bit more about the mathematical structures of interest. In HoTT, every theorem is a definition, every answer results in several more questions, and no matter how much you work at it there is always a "raw end" where you haven't proven all the important theorems about some definition, because it's turtles all the way down. It's psychologically unsatisfying

Mario Carneiro (Mar 19 2020 at 08:13):

which is why it's no surprise that people get distracted with the higher order stuff

Johan Commelin (Mar 19 2020 at 08:14):

Well, you can prove that definition X is an hProp (assuming that it is the analogue of some Lean Prop). And you can prove once and for all that hProp foobar is an hProp for all foobar. So I think the turtles do become extremely trivial at some point.

Johan Commelin (Mar 19 2020 at 08:16):

In mathlib we have unique_unique, but not unique_unique_unique...

Mario Carneiro (Mar 19 2020 at 08:16):

But it's not like the h-levels exhaust the type universe, and why make an assumption if it's not needed? So you study general types instead and there is no bottom there

Kevin Buzzard (Mar 19 2020 at 09:21):

I think that ultimately there is a big question: what happens if you try to do the mathematics which Johan and I think of as normal, but which turns out to be a million miles from anything done in these provers, and I don't think it's clear what the answer will be. I don't need the pentagon axiom to do Picard groups because an element of that group is an isomorphism class of sheaves of modules and all I need to know for associativity in that group is that (A tensor B) tensor C is isomorphic to A tensor (B tensor C) and I don't seem to care about compatibilities. For this particular topic mathematicians are happy to truncate. For others, we just don't know. Do I need the pentagon axiom to prove Fermat's Last Theorem? I'm not so sure.

Scott Morrison (Mar 19 2020 at 16:11):

This question is all about your "category number", i.e. what level in the hierarchy of n-categories you can/need/want to truncate at.

Scott Morrison (Mar 19 2020 at 16:11):

Nearly all mathematicians' category number is one higher than they realise it is. :-)

Scott Morrison (Mar 19 2020 at 16:12):

But maybe 50% of mathematicians have category number 0, another 40% category number 1, then 9% 2. (And then add +1 to most of those numbers when it's time to formalise their hunches about how things work. :-)

Scott Morrison (Mar 19 2020 at 16:13):

Scott Morrison said:

Nearly all mathematicians' category number is one higher than they realise it is. :-)

Sadly, this is one of those rules that sometimes still applies even after you've taken into account that it applies.... It's a slippery slope!

Scott Morrison (Mar 19 2020 at 16:14):

(And of course those percentages are total garbage.)

Ulrik Buchholtz (Mar 21 2020 at 17:08):

I just saw this thread now, after Kevin pinged me. Of course many people are now interested in homotopified mathematics and brave new algebra and what-not, but the large majority are happy with sets and groupoids and the occasional 2-type.

Kevin Buzzard (Mar 21 2020 at 17:13):

working mathematicians certainly work like this. Is it possible to work like this in a HoTT theory without any inconvenience? I guess I am hoping for concrete examples. I am hoping that Johan or others will soon show me such things.

Ulrik Buchholtz (Mar 21 2020 at 17:15):

I don't see why not, we already have quite a bit of algebra done like this in some of the HoTT libraries.

Kevin Buzzard (Mar 21 2020 at 17:19):

Which ones and what algebra? Can you answer in the thread on the HoTT Zulip? I am interested in a detailed overview of some of these systems, the README.md for these HoTT repos does not give this.

Kevin Buzzard (Mar 21 2020 at 17:20):

I was thinking about defining Noetherian rings and then it occurred to me that some of these systems might not currently have a workable definition of a finite type. This is thousands of lines of Lean code.

Ulrik Buchholtz (Mar 21 2020 at 17:20):

Kevin Buzzard said:

Which ones and what algebra? Can you answer in the thread on the HoTT Zulip?

OK, let's continue there.

