Zulip Chat Archive
Stream: lean4
Topic: Issue using a Type 1 in forIn
James Gallicchio (Mar 01 2022 at 21:12):
Weird test case, but here's an MWE:
inductive Node (τ : Type u)
| Node2 : τ → τ → Node τ
inductive FingerTree : Type u → Type (u+1)
| Empty {τ} : FingerTree τ
| Deep {τ} : τ → FingerTree (Node τ) → τ → FingerTree τ
def FingerTree.snoc (f : FingerTree τ) (a : τ) : FingerTree τ := sorry
def testAll : IO Unit := do
  let mut q : FingerTree Nat := FingerTree.Empty
  for i in [:100] do
    q := FingerTree.snoc q i
I have a type for finger trees (much more complicated than here) which is indexed by a Type u (and therefore needs to live in Type (u+1)). I thought this would be fine, but now I can't use it in the test cases I'm writing! (This works fine if it's in Type u instead.)
James Gallicchio (Mar 01 2022 at 21:14):
It also works fine if the last line isn't in a for, so I think something must be up with the signature of forIn
James Gallicchio (Mar 01 2022 at 21:15):
I can probably (?) rewrite this to be indexed by Nat instead of the type itself, to avoid the increase in universe level, but this definition is much more elegant so I'd like to keep it if possible
Leonardo de Moura (Mar 02 2022 at 02:15):
Some remarks:
- IO is not universe polymorphic, and has type Type -> Type
- for ... in ... do notation uses an auxiliary type class ForIn, and this type class forces the elements being updated to be in the universe taken by the monad.
- The testAll function is equivalent to
def testAll : IO Unit := do
  let q := FingerTree.Empty;
  let col := { start := 0, stop := 100, step := 1 : Std.Range};
  let q ← forIn col q fun i q => pure (ForInStep.yield (FingerTree.snoc q i))
  pure PUnit.unit
- The same example will work with a universe polymorphic monad, but a slightly more complicated example will not without universe lifts which are a pain to use.
def testAll' : Id PUnit := do
  let mut q : FingerTree Nat := FingerTree.Empty
  for i in [:100] do
    q := FingerTree.snoc q i
The following fails (a ULift-based workaround is sketched below):
def testAll' : Id PUnit := do
  let mut q : FingerTree Nat := FingerTree.Empty
  let mut s := 0
  for i in [:100] do
    q := FingerTree.snoc q i -- q : FingerTree Nat is in Type 1
    s := s + i -- s : Nat is in Type
- The universe bump at FingerTree is quite annoying. If I remember correctly Coq has an extension that allows us to avoid the bump.
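For reference, the universe-lift workaround mentioned above might look roughly like this (an untested sketch, reusing the FingerTree MWE from the top of the thread): the Nat accumulator is wrapped in ULift so that both mutable variables live in Type 1, the universe forced by FingerTree Nat.
def testAll'' : Id PUnit := do
  let mut q : FingerTree Nat := FingerTree.Empty
  let mut s : ULift Nat := ULift.up 0
  for i in [:100] do
    q := FingerTree.snoc q i
    s := ULift.up (s.down + i)  -- every update of s goes through ULift.up / ULift.down
The wrapping and unwrapping on every update is exactly the pain being referred to.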
James Gallicchio (Mar 02 2022 at 04:01):
Hm. Okay. So the goal should really be to get FingerTree within the same universe as its input.
I did redefine it a bit to index the family on height Nat instead of the node type, which compiles, but is super annoying to work with. As you said, in Coq the definition just works:
https://github.com/coq-contribs/finger-tree/blob/master/DependentFingerTree.v#L154
I'll look around for some documentation of Coq's universe constraint generation
Mario Carneiro (Mar 02 2022 at 04:04):
It should be possible to make the nat indexed finger tree work mostly painlessly with the right setup. What does your version look like?
James Gallicchio (Mar 02 2022 at 04:10):
def NodeTree (τ : Type u) : Nat → Type u
| 0 => τ
| (n+1) => Node (NodeTree τ n)
inductive FingerTree (τ : Type u) : Nat → Type u
| Empty : FingerTree τ n
| Single : NodeTree τ n → FingerTree τ n
| Deep : Digit (NodeTree τ n) → FingerTree τ (n+1) → Digit (NodeTree τ n) → FingerTree τ n
It's... fine? I guess? Since unification gives us essentially the same type signatures:
cons : FingerTree τ n → NodeTree τ n → FingerTree τ n
can unify to
cons : FingerTree τ 0 → τ → FingerTree τ 0
Mario Carneiro (Mar 02 2022 at 04:21):
You might also consider using
def NodeTree (τ : Type u) : Nat → Type u
| 0 => τ
| n+1 => NodeTree (Node τ) n
which might work better for defeq
James Gallicchio (Mar 02 2022 at 04:49):
Hrm, I think Node (NodeTree ...) is working better because I can peel off the nodes one layer at a time pretty easily. But now I'm having issues getting Lean to simplify NodeTree τ 0 to τ consistently without explicit casts :concerned:
James Gallicchio (Mar 02 2022 at 04:49):
I'll see how far I can take this though
Mario Carneiro (Mar 02 2022 at 05:14):
What is your application? Maybe there is another approach altogether that doesn't involve these type changing shenanigans
Mario Carneiro (Mar 02 2022 at 05:36):
For finger trees, I would go with a slightly less strongly typed version for the base operations, and enforce the structural constraints separately:
inductive Node (τ : Type u)
| _1 : τ → Node τ
| _2 : Node τ → Node τ → Node τ
| _3 : Node τ → Node τ → Node τ → Node τ
inductive Digit (τ : Type u)
| _1 : τ → Digit τ
| _2 : τ → τ → Digit τ
| _3 : τ → τ → τ → Digit τ
| _4 : τ → τ → τ → τ → Digit τ
inductive FingerTree (τ : Type u) : Type u
| empty : FingerTree τ
| single : Node τ → FingerTree τ
| deep : Digit (Node τ) → FingerTree τ → Digit (Node τ) → FingerTree τ
def Node.isRegular : Node τ → Nat → Prop
| Node._1 _, n => n = 0
| _, 0 => False
| Node._2 a b, n+1 => a.isRegular n ∧ b.isRegular n
| Node._3 a b c, n+1 => a.isRegular n ∧ b.isRegular n ∧ c.isRegular n
def Digit.isRegular : Digit (Node τ) → Nat → Prop
| Digit._1 a, n => a.isRegular n
| Digit._2 a b, n => a.isRegular n ∧ b.isRegular n
| Digit._3 a b c, n => a.isRegular n ∧ b.isRegular n ∧ c.isRegular n
| Digit._4 a b c d, n => a.isRegular n ∧ b.isRegular n ∧ c.isRegular n ∧ d.isRegular n
def FingerTree.isRegular : FingerTree τ → Nat → Prop
| FingerTree.empty, n => True
| FingerTree.single a, n => a.isRegular n
| FingerTree.deep a m b, n => a.isRegular n ∧ m.isRegular (n+1) ∧ b.isRegular n
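One way to package the externally enforced invariant is as a subtype pairing a tree with a proof of its regularity (an illustrative, untested sketch on top of the definitions above; RegularFingerTree is a made-up name):
def RegularFingerTree (τ : Type u) := { t : FingerTree τ // t.isRegular 0 }

-- the empty tree is trivially regular
def RegularFingerTree.empty : RegularFingerTree τ :=
  ⟨FingerTree.empty, True.intro⟩
Operations would then be written on the raw FingerTree and separately shown to preserve isRegular.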
James Gallicchio (Mar 02 2022 at 16:45):
Goal is to implement fast vectors :) the version indexed by tau is quite elegant IMO, so I'm trying to preserve that as much as possible. The nat-indexed trees have been okay so far, no major issues yet.
I suspect externally enforcing the constraints would be a pain, because you'd end up with lots of unreachable branches. But if I can't get the nat indexed version to work I'll try that out
Mario Carneiro (Mar 02 2022 at 18:40):
@James Gallicchio I think if you want fast vectors you should build on Array, not trees
Mario Carneiro (Mar 02 2022 at 18:42):
I suspect externally enforcing the constraints would be a pain, because you'd end up with lots of unreachable branches. But if I can't get the nat indexed version to work I'll try that out
If you have the regularity condition as a hypothesis, you can prove the unreachable branches are unreachable (and the equation compiler will often allow you to omit these branches)
Reid Barton (Mar 02 2022 at 18:44):
Do you know if that means the compiler will emit code that skips the branches at runtime?
Mario Carneiro (Mar 02 2022 at 18:45):
But if you can write the code without the regularity condition as a hypothesis and just prove that the straightforward implementation preserves the regularity conditions, I think that would be even better
Mario Carneiro (Mar 02 2022 at 18:46):
If it's a two-way branch vs three I don't think there is any difference
Mario Carneiro (Mar 02 2022 at 18:47):
It's probably just wishful thinking on my part that False.rec compiles to C UB which is useful for optimization. More likely it's all a runtime unreachable check
Mario Carneiro (Mar 02 2022 at 18:47):
Generated Lean code is pretty defensive
Mario Carneiro (Mar 02 2022 at 18:48):
I wish there was a mode to allow lean to actually trust its dynamic semantics and proofs
Mario Carneiro (Mar 02 2022 at 18:50):
Although, it is possible to refactor an if statement with an unreachable branch into a have statement, which is zero-cost and leaves no branch
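Concretely, the refactoring might look like this on a plain Array (a minimal sketch, not tied to the finger tree code): the first version keeps a branch whose else arm can never fire, the second discharges the bound with a proof and emits no branch at all.
-- branching version: the else arm is dead code, but still present
def lastIf (arr : Array Nat) (h : arr.size ≠ 0) : Nat :=
  if hlt : arr.size - 1 < arr.size then arr[arr.size - 1]'hlt else unreachable!

-- have-based version: the bound is established as a proof, so no branch is needed
def lastHave (arr : Array Nat) (h : arr.size ≠ 0) : Nat :=
  have hlt : arr.size - 1 < arr.size := by omega  -- uses h : arr.size ≠ 0
  arr[arr.size - 1]'hlt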
Gabriel Ebner (Mar 02 2022 at 18:54):
Apparently False.rec is UB already:
set_option trace.compiler.ir.result true
def foo (x : Nat) : Nat :=
  if 10 * x < 100 then
    nomatch show False from sorry
  else
    42
#eval foo 3 -- returns 42 although 10 * 3 < 100
Mario Carneiro (Mar 02 2022 at 18:56):
I recall fantasizing early in lean 4 about autogenerated functions T.discr : T -> Nat and T.get_0_1 : (t : T) -> t.discr = 0 -> A (the first projection of the first variant of the inductive type T) which would compile to the good things, which you could use to implement more efficient and flexible variations on match
Mario Carneiro (Mar 02 2022 at 18:59):
@Gabriel Ebner It's certainly UB according to the lean compiler (it uses csimp lemmas and other things that rely on the general soundness of lean code), but I don't think it transmits that UB to the C code. My guess is that the optimization you are observing happens in lean (well, the IR trace indicates as much)
Gabriel Ebner (Mar 02 2022 at 19:07):
Oh right, this optimization certainly happens on the Lean side. Unreachable instructions that are not optimized away are compiled to panics.
James Gallicchio (Mar 02 2022 at 19:18):
Mario Carneiro said:
I suspect externally enforcing the constraints would be a pain, because you'd end up with lots of unreachable branches. But if I can't get the nat indexed version to work I'll try that out
If you have the regularity condition as a hypothesis, you can prove the unreachable branches are unreachable (and the equation compiler will often allow you to omit these branches)
Is the lean4 equation compiler better in that regard than the lean 3 one? I remember having difficulty getting lean 3 to do it automatically on a different project. Might be user error though :joy:
James Gallicchio (Mar 02 2022 at 19:20):
Mario Carneiro said:
James Gallicchio I think if you want fast vectors you should build on Array, not trees
Yeah, heavily depends on the use case. For persistent use, trees will vastly outperform arrays, and for very large vectors it can be beneficial to not have a contiguous memory allocation and instead have some fragmentation. I'm aiming to replicate Scala's vector implementation, which is a wide radix-balanced finger tree or something (whatever that means). So these finger trees are just a step towards replicating that
Mario Carneiro (Mar 02 2022 at 19:21):
Here are some examples of the equation compiler being helpful:
inductive Foo | A | B | C
def is_A : Foo → Prop
| Foo.A => True
| _ => False
example : (f : Foo) → is_A f → Nat
| Foo.A, _ => 1
example : (f : Foo) → f ≠ Foo.A → Nat
| Foo.B, _ => 1
| Foo.C, _ => 1
Mario Carneiro (Mar 02 2022 at 19:24):
James Gallicchio said:
Mario Carneiro said:
James Gallicchio I think if you want fast vectors you should build on Array, not trees
Yeah, heavily depends on the use case. For persistent use, trees will vastly outperform arrays, and for very large vectors it can be beneficial to not have a contiguous memory allocation and instead have some fragmentation. I'm aiming to replicate Scala's vector implementation, which is a wide radix-balanced finger tree or something (whatever that means). So these finger trees are just a step towards replicating that
Naturally. Did you mean persistent vectors? One of the reasons FP languages rely so much on persistent data structures is that they do imperative ones badly, but I think lean is fairly good at native support for imperative data structures, and imperative data structures usually outperform persistent ones when persistence isn't a requirement
Mario Carneiro (Mar 02 2022 at 19:25):
In fact, I wonder whether Array A would be a better choice than Digit A in the finger tree implementation
James Gallicchio (Mar 02 2022 at 19:27):
Totally agreed, I'm pretty impressed overall with lean 4 handling imperative code. Hoping to get some benchmarks today RE: how much faster Array is than this finger tree for ephemeral use
Mario Carneiro (Mar 02 2022 at 19:27):
Personally, I would look to Rust to find data structures that match lean's requirements
Mario Carneiro (Mar 02 2022 at 19:28):
like, maybe B-trees are where it's at
James Gallicchio (Mar 02 2022 at 19:29):
As an aside, is there a reason Array models a List A instead of Vector A n? Non-resizable native arrays would be super nice for some of my stuff :/
James Gallicchio (Mar 02 2022 at 19:29):
like I totally would use array if i could say Array A 4 instead of Digit A
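The kind of fixed-length wrapper being wished for can be approximated by bundling an Array with a proof about its size (a hypothetical sketch; FixedArray is a made-up name, not an existing library type):
structure FixedArray (α : Type u) (n : Nat) where
  data : Array α
  size_eq : data.size = n

-- indexing by Fin n: the bound is discharged via size_eq, which is erased at runtime
def FixedArray.get (a : FixedArray α n) (i : Fin n) : α :=
  a.data[i.val]'(by rw [a.size_eq]; exact i.isLt)
Unlike a primitive Array A 4 would be, this still stores the length at runtime inside the underlying Array.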
Mario Carneiro (Mar 02 2022 at 19:31):
That's the way it was in lean 3, and I don't have the whole story for what happened in lean 4. It's possible that the extra index causes bad code generation, but it's definitely annoying when n is supposed to be fixed or known externally
Mario Carneiro (Mar 02 2022 at 19:32):
Although I'm not sure that fixed length arrays would be a suitable substitute for Digit A since it can have length 1-4
James Gallicchio (Mar 02 2022 at 19:32):
Mario Carneiro said:
like, maybe B-trees are where it's at
Agreed. I think long term the right approach is to have two collection hierarchies, one for persistent structures and one for ephemeral structures. But I kinda wish there were some way to enforce linear usage of ephemeral structures like you can in Rust.
With the current status quo if you use an array persistently you just get hit with bad performance without any indication why...
James Gallicchio (Mar 02 2022 at 19:33):
Mario Carneiro said:
Although I'm not sure that fixed length arrays would be a suitable substitute for Digit A since it can have length 1-4
Fair point -- I assume Lean's arrays store their length, which I think means Digit A is faster & more compact? Unclear
James Gallicchio (Mar 02 2022 at 19:36):
Lemme finish up this FingerTree implementation and then get some quick benchmarks of its performance against Array
Mario Carneiro (Mar 02 2022 at 19:44):
With the current status quo if you use an array persistently you just get hit with bad performance without any indication why...
You can use dbgTraceIfShared to detect non-persistent use
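For context, dbgTraceIfShared is an identity function that prints its message when the value it is given is shared (reference count greater than one), i.e. exactly when the next update would have to copy instead of mutating in place. A minimal sketch:
def pushTraced (arr : Array Nat) (x : Nat) : Array Nat :=
  (dbgTraceIfShared "array is shared; push will copy" arr).push x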
Mario Carneiro (Mar 02 2022 at 19:44):
I wonder if you can panic in a dbgTraceIfShared message
Wojciech Nawrocki (Mar 02 2022 at 23:39):
@James Gallicchio regarding finger trees in general, are you familiar with Claessen's paper? It explains them quite nicely. I implemented it the other day here. The code is untested and unverified, thus possibly buggy. In case it is of any use to you, though, feel free to use any of it. (It does have the annoying universe bump.)
James Gallicchio (Mar 03 2022 at 00:18):
Oh, that's awesome! The paper is really nice too, I think I'll go back and delete Digit 4 now that it's unnecessary :confetti:
James Gallicchio (Mar 03 2022 at 01:02):
Preliminary speed results:
           | Ephemeral | Persistent
FingerTree |       604 |          1
Array      |       127 |         39
so around 5x as slow in the ephemeral case (which just pushes many elements and then pops them in order). I'm actually surprised it's not slower, given that these are not very wide trees. A much wider tree should narrow that gap quite a lot
James Gallicchio (Apr 05 2022 at 08:19):
Mario Carneiro said:
I wonder if you can panic in a dbgTraceIfShared message
implemented arrays with fixed length which panic if used persistently :p
James Gallicchio (Apr 05 2022 at 08:23):
(deleted)
Leonardo de Moura (Apr 06 2022 at 00:10):
Gabriel Ebner said:
Apparently False.rec is UB already:
set_option trace.compiler.ir.result true
def foo (x : Nat) : Nat :=
  if 10 * x < 100 then
    nomatch show False from sorry
  else
    42
#eval foo 3 -- returns 42 although 10 * 3 < 100
I missed this thread. The Lean compiler interprets False.rec as unreachable code. I think this is a quite reasonable interpretation and enables useful optimizations. As you probably noticed, the compiler eliminates any case statement if all but one alternative is unreachable. This is exactly what is happening here. The key problem here is that you are essentially saying "trust me this is unreachable", but this is not true. In my point of view, this is an incorrect use of sorry.
Leonardo de Moura (Apr 06 2022 at 00:33):
James Gallicchio said:
Mario Carneiro said:
I wonder if you can panic in a dbgTraceIfShared message
implemented arrays with fixed length which panic if used persistently :p
We know that dbgTraceIfShared is far from ideal for debugging code that is accidentally performing nondestructive updates. We want to have better support for this in the future, but we do not want to modify the Lean kernel since this feature is only relevant to users that want to use Lean as a programming language. Our current plan is to modify the type checker in Meta that is outside the kernel. It would support annotations similar to the ones in Idris 2 (https://www.type-driven.org.uk/edwinb/linearity-and-erasure-in-idris-2.html). Unfortunately, this will not happen this year unless we get help from volunteers.
Mario Carneiro (Apr 06 2022 at 01:44):
Leonardo de Moura said:
The Lean compiler interprets False.rec as unreachable code. I think this is a quite reasonable interpretation and enables useful optimizations. As you probably noticed, the compiler eliminates any case statement if all but one alternative is unreachable. This is exactly what is happening here. The key problem here is that you are essentially saying "trust me this is unreachable", but this is not true. In my point of view, this is an incorrect use of sorry.
By the way, in case my earlier comments didn't make it clear, I think there is no problem at all treating False.rec as undefined behavior to execute and optimizing in accordance with this. This is a very useful effect of having proofs in the source language. I was lamenting above that it doesn't lower to C undefined behavior, meaning that although you get lean-level optimizations you don't get any LLVM optimizations from the existence of a proof of false in the function (because the proofs are erased and any unoptimized unreachable branches are compiled to a runtime panic, which LLVM has to treat as having defined behavior).
Mario Carneiro (Apr 06 2022 at 01:48):
One way to assist with this would be to lower some have statements to llvm.assume or whatever produces that effect in C
Leonardo de Moura (Apr 06 2022 at 02:04):
Thanks for clarifying, but I was confused by Gabriel's comment that False.rec is UB (undefined behavior) in Lean when its behavior is well defined (it is unreachable code). I think you two are using UB as "the compiler is free to do whatever it wants", correct?
Mario Carneiro (Apr 06 2022 at 02:08):
yes
Mario Carneiro (Apr 06 2022 at 02:09):
this is UB as in C
Mario Carneiro (Apr 06 2022 at 02:11):
Even though False.rec has a specific implementation which prints a message and so on, it is undefined behavior to call, which means that the program might not print that message even if you call it. This is what licenses optimizations like the 1-branch match example
Leonardo de Moura (Apr 06 2022 at 02:17):
I view it a bit differently. False.rec is unreachable. The message is there only for users that are using sorry for stuff that is not really true. In the ideal (sorry-free) world, it would be impossible to reach a False.rec at runtime.
Mario Carneiro (Apr 06 2022 at 02:20):
If "unreachable" means "the compiler knows it cannot be reached", then that's what we would normally call undefined behavior. For example, if i < arr.size then arr[i] else panic
is a bounds checked array access, but if i < arr.size then arr[i] else unreachable
is not bounds checked (because the compiler will optimize the branch away)
Mario Carneiro (Apr 06 2022 at 02:22):
since the lean compiler will (in some cases) compile the second to the first (which is a valid lowering), it removes the undefined behavior before it gets to C, which means that LLVM won't get a chance to optimize that branch away (because it would not be legal to do so)
Leonardo de Moura (Apr 06 2022 at 02:26):
Are you referring to the Array.get! implementation? The else branch there is definitely reachable.
Mario Carneiro (Apr 06 2022 at 02:28):
In those two examples it doesn't matter which arr[i] we're talking about because inside the if it's definitely in bounds
Mario Carneiro (Apr 06 2022 at 02:28):
but if I had to make a parallel, the first one is Array.get! and the second one is Array.get
Mario Carneiro (Apr 06 2022 at 02:30):
(Array.get isn't written like that because the else unreachable is superfluous, but you could rewrite it into such a form and my point is that the compiler is permitted to rewrite it back to eliminate the if statement)
Leonardo de Moura (Apr 06 2022 at 02:30):
Not sure what you are suggesting anymore. Are you suggesting we change the implementation of Array.get!? If yes, to what?
Mario Carneiro (Apr 06 2022 at 02:31):
No, both of the functions are fine. I'm talking about codegenning unreachable to something like *NULL which is known to C to be undefined behavior
Mario Carneiro (Apr 06 2022 at 02:32):
and possibly also removing bounds checks from lean_array_fget since they shouldn't be needed
Mario Carneiro (Apr 06 2022 at 02:33):
actually scratch that last part, looks like the bounds check is already under a debug flag
Leonardo de Moura (Apr 06 2022 at 02:33):
Mario Carneiro said:
and possibly also removing bounds checks from lean_array_fget since they shouldn't be needed
It does not have any.
Mario Carneiro (Apr 06 2022 at 02:38):
for example, I think that if you match on a three-variant inductive where one of the variants is a nomatch, the lean compiler will still emit a three-way branch with an explicit unreachable in one branch, and then LLVM won't be able to reduce it to just a two-way branch even though it knows how to do so
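A concrete instance of this, reusing the Foo inductive from a few messages up (a sketch; the trace should show a case on f whose Foo.A arm is lowered to an unreachable/panic instruction rather than the case being narrowed to two arms):
set_option trace.compiler.ir.result true in
def pick : (f : Foo) → f ≠ Foo.A → Nat
  | Foo.B, _ => 1
  | Foo.C, _ => 2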
Leonardo de Moura (Apr 06 2022 at 02:45):
Yes, this seems useful in practice.
BTW, would it be useful for users if we distinguish False.rec applications that are sorry (and user axioms) free from the ones that are not?
Mario Carneiro (Apr 06 2022 at 02:48):
heh, how would we be able to demonstrate what UB looks like in that case? :)
Mario Carneiro (Apr 06 2022 at 02:48):
Probably it would be better to just have it be a global compiler option
Mario Carneiro (Apr 06 2022 at 02:49):
in debug, it should be a panic and in release it should be unreachable, and with a set_option you can pick how you want this function to compile
Leonardo de Moura (Apr 06 2022 at 02:49):
- False.rec applications that are sorry free are treated as "unreachable" code in the Lean and LLVM compiler passes.
- False.rec applications that contain sorry or depend on user axioms are treated as panic "unreachable" in all compiler passes.
- Extra: allow users to reduce this case to the first one using a compiler option.
Mario Carneiro (Apr 06 2022 at 02:51):
it's hard to say what to do about user axioms. It wouldn't be nice to pessimize their code, they might have a good reason to, say, assume the correctness of their external register allocator or something, but it is also not nice if you actually hit UB because you will probably get a weird crash
Leonardo de Moura (Apr 06 2022 at 02:53):
BTW, we currently don't have a solution for different code generation modes (debug, release, etc). We want to support that in the future, but we are not there yet.
Mario Carneiro (Apr 06 2022 at 02:54):
Personally I think it's fine to treat it as UB even if it uses sorry; you don't want sorry MWE-ification to cause nonlocal changes to the compilation of a function
Leonardo de Moura (Apr 06 2022 at 02:57):
How do you feel about the panic?
Mario Carneiro (Apr 06 2022 at 02:58):
even if you don't have release/debug modes, you could still have a set_option to control this; I'm not sure what the default should be but I'm leaning toward panic by default
Mario Carneiro (Apr 06 2022 at 02:59):
I think people are more likely to not know what they are doing and stumble on it by accident (or on purpose) compared to folks who really want to eliminate bounds checks, and the panic message is good discoverability
James Gallicchio (Apr 06 2022 at 17:12):
Leonardo de Moura said:
Our current plan is to modify the type checker in Meta that is outside the kernel. It would support annotations similar to the ones in Idris 2 (https://www.type-driven.org.uk/edwinb/linearity-and-erasure-in-idris-2.html). Unfortunately, this will not happen this year unless we get help from volunteers.
By this, do you mean essentially embedding a linear language in Lean via metaprogramming? Or would this be a linter that operates outside/separate from the compiler's typechecker?
James Gallicchio (Apr 06 2022 at 17:19):
(In either case, I'd be quite willing to help where I can, but I'll need to learn a lot before then...)
Henrik Böving (Apr 06 2022 at 17:20):
I just so happen to have been looking into linear types as well since this week, so I'd definitely be in for that (with the same remark on learning, of course^^)
Leonardo de Moura (Apr 07 2022 at 01:36):
@James Gallicchio
By this, do you mean essentially embedding a linear language in Lean via metaprogramming? Or would this be a linter that operates outside/separate from the compiler's typechecker?
The goal was to encode the Idris2-like annotations in our Expr objects. It is not clear right now what the best encoding is. We could try to use Expr.mdata, identity-like functions (i.e., the same approach we use for optParam), etc. Then, write a new Meta.check that also checks the new annotations. Just having this checker would be a big improvement. Then, when we move the rest of the compiler code to Lean, we make sure we don't lose the annotations in the compiler passes, and may even eventually emit code that does not need to check whether the RC = 1 or not before performing destructive updates.
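To make the "identity-like functions" option concrete: optParam is essentially an identity function on types that the elaborator knows to treat specially, and a multiplicity annotation could be encoded the same way (a purely illustrative sketch; linearParam is a made-up name, not a real or planned API):
-- identity on types, in the style of optParam; a future Meta.check could be
-- taught to read it as "this argument must be used linearly"
@[reducible] def linearParam (α : Sort u) : Sort u := α

-- hypothetical usage in a signature; ordinary elaboration just sees Array Nat
def push1 (xs : linearParam (Array Nat)) : Array Nat := Array.push xs 1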
Henrik Böving (Apr 07 2022 at 08:34):
I have a question regarding the Idris 2 approach though. In this section https://www.type-driven.org.uk/edwinb/linearity-and-erasure-in-idris-2.html#erasure the author details how the compiler knows what to erase and what not to erase, in particular that all variables not explicitly annotated have multiplicity 0 by default (which makes sense, of course). But right now the Lean approach is to just automatically generate a regular implicit argument, so it would be a breaking change if done like Idris 2, right? The correct way to go in Lean would probably be to be liberal about multiplicity bindings and instead bind with unrestricted multiplicity per default, correct?
Sebastian Ullrich (Apr 07 2022 at 11:56):
FYI, @Marc Huisinga is planning to investigate the uniqueness typing part for his master thesis under my supervision. It's not clear to me yet whether this should also entail erasure, I'll leave that for Marc to find out!
Henrik Böving (Apr 07 2022 at 13:08):
Ah, we probably better leave it to Marc then^^
James Gallicchio (Apr 07 2022 at 18:17):
(Personally, I'd be a big fan of some sort of inference mechanism that looks at the usage of the variable and automatically assigns the smallest possible multiplicity; so that parameters which only appear in types can be erased, etc.)
Sebastian Ullrich (Apr 07 2022 at 19:38):
That's just dead code/parameter elimination, which Lean already does. Multiplicity annotations are for when you cannot infer (and/or want to guarantee) this behavior, especially with higher-order functions.
PaleChaos (Sep 09 2023 at 22:40):
(deleted)
PaleChaos (Sep 18 2023 at 23:09):
Leonardo de Moura said:
James Gallicchio said:
Mario Carneiro said:
I wonder if you can panic in a dbgTraceIfShared message
implemented arrays with fixed length which panic if used persistently :p
We know that dbgTraceIfShared is far from ideal for debugging code that is accidentally performing nondestructive updates. We want to have better support for this in the future, but we do not want to modify the Lean kernel since this feature is only relevant to users that want to use Lean as a programming language. Our current plan is to modify the type checker in Meta that is outside the kernel. It would support annotations similar to the ones in Idris 2 (https://www.type-driven.org.uk/edwinb/linearity-and-erasure-in-idris-2.html). Unfortunately, this will not happen this year unless we get help from volunteers.
What is the current status of the implementation of linear types in Lean?
Henrik Böving (Sep 19 2023 at 09:15):
There is no change in status that I am aware of
Sebastian Ullrich (Sep 19 2023 at 09:58):
Sebastian Ullrich said:
FYI, Marc Huisinga is planning to investigate the uniqueness typing part for his master thesis under my supervision. It's not clear to me yet whether this should also entail erasure, I'll leave that for Marc to find out!
This part progressed: https://pp.ipd.kit.edu/uploads/publikationen/huisinga23masterarbeit.pdf
Last updated: Dec 20 2023 at 11:08 UTC