Zulip Chat Archive
Stream: general
Topic: What is the subject reduction debate?
Jason Rute (Mar 06 2022 at 23:12):
A Coq user I know recently said they don’t particularly like Lean because Lean breaks subject reduction. I’ve heard about this in many online discussions (GitHub, Lean Zulip, Coq Zulip, as well as smaller discussions on a blog, Hacker News, More Hacker News, and PA Stack Exchange). But at the same time there is really nothing that clearly explains the debate from both sides (at least not without wading through long back-and-forths between @Mario Carneiro and @Pierre-Marie Pédrot). Can someone explain the issue from both sides? (I have also seen places where @Karl Palmskog has suggested that this be documented properly.)
Jason Rute (Mar 06 2022 at 23:12):
To start, here is my very vague understanding of what is going on, probably filled with many misunderstandings: There is a property called subject reduction that relates to definitional equality. It is desired, because without it, two things which should be defeq aren’t. Because Lean extends definitional equality to quotients, when a quotient is applied to `Prop` it can break subject reduction and therefore break definitional equality (maybe even make defeq undecidable?). It is important at this point to note that I believe Mario’s thesis shows this doesn’t break consistency, or soundness relative to the usual set-theoretic model. I’m not sure it really even breaks soundness relative to any constructive models, at least at the `Prop` level. (Does it affect anything about which `Prop`s are provable with or without axioms?)
Jason Rute (Mar 06 2022 at 23:12):
Now, for Lean users, it seems this is viewed as a non-issue which doesn’t come up in practice. I believe that to produce this error you have to take a quotient of a `Prop`, which is a no-op that no one would do, right? And since the issue is just defeq failing, you can always prove it another way. Moreover, it is easy to have defeq fail just because it timed out. (I recall Lean has trouble even adding numbers using definitional equality.) And of course Lean users are happy to use non-constructive axioms, which I think also get around this issue somehow (although I don’t understand why). I also assume all this is just a consequence of other nice properties one gets in Lean, but I'm not sure I could name those. In short, I think a typical Lean user would say there isn’t really anything obviously broken here.
Jason Rute (Mar 06 2022 at 23:12):
On the other hand, Coq users seem to see this at least as a “code smell” where this nice property of the type theory is broken for no good reason, and it doesn’t bode well for Lean. (Actually, I think subject reduction is also broken in Coq via coinductive types, and they know this, but the difference in their minds is that they are trying to fix it while Lean developers don’t care about such issues.) Also, I think Coq developers have a very different model in their heads, and a set-theoretic soundness proof wouldn’t suffice. Instead the soundness proof they have in mind is proof theoretic, and to do this they need properties like subject reduction. I also think because defeq is undecidable in Lean (if I have this right), that this means that Lean is more like extensional type theory than intensional type theory (which is maybe another code smell for type theorists).
Jason Rute (Mar 06 2022 at 23:12):
I also think there is a big part of this that comes down to computation. I think (and maybe I’m mistaken) that Coq’s kernel is where Coq does its computation. So for example, I think the four color theorem was a giant defeq proof all done in the kernel. For Lean users, computing in the kernel via `rfl` is more like a tactic. If it works, great; if it fails or times out, try something else. No big deal. For Lean 3 the bulk of computation happens in the VM (or for Lean 4 in compiled code). I think both Coq and Lean users (especially Lean 4 users, since it is a programming language) would be horrified if, say, they could write a simple function which should compute but crashes. The difference seems to be where they expect this code to compute. For Coq users this should compute in the kernel, but for Lean users it should compute in the VM (or compiled code in Lean 4). And I think for Lean subject-reduction is not an issue in the VM for some reason.
Jason Rute (Mar 06 2022 at 23:13):
But I’m still not sure if I’m mixing subject reduction up with other issues (or, even worse, I have no idea what I'm talking about here). I’m also not sure if subject reduction issues are easy to fix in Lean (e.g. just don’t allow quotients over `Prop`)? Also, Mario seems to claim that quotients aren’t really the issue, but subsingleton elimination + proof irrelevance is. Is this all a necessary evil to get nice properties (e.g. defeq quotients)? Or is subject reduction just one small part of what Coq users see as a larger problem stemming from various design choices of Lean? And is there some sense that this is particularly not an issue for Lean users specifically because Lean users typically use classical logic?
Jason Rute (Mar 06 2022 at 23:13):
I know I’m being too careless with the terms “Coq users” and “Lean users”. Neither is a monolithic group and there is, of course, non-trivial overlap.
Mario Carneiro (Mar 06 2022 at 23:16):
There is a property called subject reduction that relates to definitional equality. It is desired, because without it, two things which should be defeq aren’t.
Not exactly. Subject reduction is the property that if you replace a subterm of a term with a defeq one (especially if the subterm is the result of reduction), the resulting big term remains typecheckable. This fails in lean because if you reduce some of the identities in `@id A (@id B (@id C t))` you can deduce transitivity of defeq, so by applying one of the counterexamples to transitivity you get a term such that reducing the internal identity functions results in another term that doesn't typecheck.
Jason Rute (Mar 06 2022 at 23:19):
Ok, so it is about type checkability. I see.
Mario Carneiro (Mar 06 2022 at 23:19):
Because Lean extends definitional equality to quotients, when a quotient is applied to Prop it can break subject reduction and therefore break definitional equality (maybe even make defeq undecidable?).
Quotients in `Prop` are a red herring. The quotient of a `Prop` is not a thing people ever do, and it's not what makes the Coq folks upset. This is a silly oversight in the lean 3 axioms, and you can remove it without breaking mathlib by making quotients have the type `Sort (max 1 u)` instead of `Sort u`.
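For concreteness, the change being described would look something like this (a sketch; `quot'` here is a hypothetical constant for illustration, not part of Lean):

```lean
universe u

-- Lean 3's built-in quotient former can itself land in Prop when u = 0:
--   quot : Π {α : Sort u}, (α → α → Prop) → Sort u
-- Bumping the target universe means a quotient is never itself a
-- proposition, while quotients of actual types are unaffected:
constant quot' : Π {α : Sort u}, (α → α → Prop) → Sort (max 1 u)
constant quot'.mk : Π {α : Sort u} (r : α → α → Prop), α → quot' r
```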
Mario Carneiro (Mar 06 2022 at 23:22):
Jason Rute said:
It is important at this point to note that I believe Mario’s thesis shows this doesn’t break consistency, or soundness relative to the usual set-theoretic model. I’m not sure it really even breaks soundness relative to any constructive models, at least at the `Prop` level. (Does it affect anything about which `Prop`s are provable with or without axioms?)
The quotient axioms are actually quite load-bearing in lean, since they are used to derive function extensionality. If you were to simply remove quotient types from lean you would want to put this back in as an axiom.
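For reference, that derivation is short; the following is a condensed sketch of the proof in Lean 3 core (`init/funext.lean`), with the auxiliary names simplified:

```lean
universes u v
variables {α : Sort u} {β : α → Sort v}

-- Relate functions that agree pointwise.
def fun_rel (f g : Π x, β x) : Prop := ∀ x, f x = g x

-- Map back out of the quotient by pointwise equality; the lift is legal
-- precisely because pointwise-equal functions agree at each `x`.
def extfun_app (q : quot (@fun_rel α β)) (x : α) : β x :=
quot.lift (λ (f : Π x, β x), f x) (λ f g h, h x) q

-- `extfun_app (quot.mk _ f)` reduces to `f`, so `quot.sound` does the work.
theorem funext' {f g : Π x, β x} (h : ∀ x, f x = g x) : f = g :=
show extfun_app (quot.mk _ f) = extfun_app (quot.mk _ g),
from congr_arg extfun_app (quot.sound h)
```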
Mario Carneiro (Mar 06 2022 at 23:25):
I also think because defeq is undecidable in Lean (if I have this right), that this means that Lean is more like extensional type theory than intensional type theory (which is maybe another code smell for type theorists).
This depends on what you mean by "defeq". In my thesis I separate two notions of "defeq": the ideal one, which is transitive and satisfies subject reduction and is undecidable, and the "algorithmic" one, which is (approximately) what lean implements, is decidable, non-transitive and fails SR.
Mario Carneiro (Mar 06 2022 at 23:30):
And I think for Lean subject-reduction is not an issue in the VM for some reason.
Subject reduction doesn't apply because the VM is not a type system, it is an evaluation semantics. What matters is that VM computation does not get stuck, and this happens exactly when the lean term is not `noncomputable`. One open question that still remains from the thesis is whether and in what sense the VM is type-preserving. We would certainly like it to be true: if we have a closed term of type `nat` we want it to compute to a `nat`, not a `string` or something else. But the VM will execute right through `eq.cast` operations as if they were identity functions, which means that you can get terms applied to things that would not type check in lean, at which point it becomes hard to see how to get back on the well-typed train.
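To illustrate the last point (a minimal sketch, not an example from the thread):

```lean
-- `cast h a` typechecks only because of the proof `h : α = β`, but the
-- VM erases `h` and runs `cast` as the identity function, leaving the
-- runtime representation of `a` untouched.
def coerce {α β : Type} (h : α = β) (a : α) : β := cast h a

#eval coerce rfl (3 : ℕ) -- 3
```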
Jason Rute (Mar 06 2022 at 23:45):
So it sounds like all the issues come down to non-transitivity of defeq in the "algorithmic" version of defeq produced by Lean's kernel. I assume Coq has (or is aiming for) a kernel where defeq is (in theory with enough time and resources) transitive?
Kevin Buzzard (Mar 06 2022 at 23:48):
I have this vague idea that subject reduction is related to the idea that given a term you'd like to be able to put it into some kind of unique canonical normal form. But most mathematical objects don't have this property -- for example, the word problem is not solvable even for finitely presented groups, which means that, faced with a random element of such a group, "does this reduce to the identity?" is not algorithmically decidable. Perhaps it's because mathematicians spend their lives manipulating such objects that they don't have a particular desire to try to always put objects into some kind of canonical normal form.
Mario Carneiro (Mar 06 2022 at 23:57):
I have this vague idea that subject reduction is related to the idea that given a term you'd like to be able to put it into some kind of unique canonical normal form.
That's a different desirable property of type systems called "canonicity". This is important for anything with a proper evaluation semantics, for example VM computation - it would be no good if you could write computable functions producing `nat` and the VM just crashes somewhere in the middle of computation. Lean's defeq reduction relation fails canonicity if you use any axioms (`propext`, `quot.sound`, and `choice` all break it).
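A minimal illustration of that failure (a sketch): a closed term of type `ℕ` built with `choice`, whose reduction gets stuck before ever reaching a numeral.

```lean
-- `classical.choice` is an axiom with no reduction rule, so this closed
-- term of type ℕ does not reduce to a numeral:
noncomputable def some_nat : ℕ := classical.choice ⟨0⟩

#reduce some_nat -- stuck on `classical.choice _`
```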
Mario Carneiro (Mar 07 2022 at 00:01):
But even if you have canonicity in principle, it can still be surprisingly tricky to make use of this fact. For example in the POPL panel discussion you were in with Favonia, he mentioned the "Brunerie number" which is essentially a stress test of the canonicity property of certain cubical type theories. The fact that you can still get stuck due to implementation limits is one reason why I don't consider canonicity and strong normalization to be particularly useful properties in their bare form with no runtime bounds.
Mario Carneiro (Mar 07 2022 at 00:04):
It's also why I always mention that Metamath / MM0 have linear time verification, because this is a property very much absent from DTT systems. They can at least promise that verification is decidable, but this is an impossibly weak statement - the asymptotic complexity of DTT verification is the fastest growing computable function I know how to describe
Kyle Miller (Mar 07 2022 at 00:08):
What's a simple concrete example of failure of subject reduction? It doesn't seem like any have been posted to zulip before.
Jason Rute (Mar 07 2022 at 00:09):
This is one example which I think was copied from Mario's thesis. It uses quotients. I'm curious, given the above, whether it can be done without quotients (or at least without quotients in `Prop`).
Jason Rute (Mar 07 2022 at 00:10):
(I assume this is a failure of SR, but I'm still not sure of these things.)
Mario Carneiro (Mar 07 2022 at 01:04):
```lean
variables {A : Type} {R : A → A → Prop} (x : A) (h : acc R x)

def my_rec : ∀ x : A, acc R x → ℕ := @acc.rec A R (λ _, ℕ) (λ _ _ _, 1)
def inv {x : A} (h : acc R x) : acc R x := acc.intro x (λ y h', acc.inv h h')

example : inv h = h := rfl -- ok
#reduce my_rec x (inv h)   -- 1
#reduce my_rec x h         -- acc.rec _ h

-- failure of transitivity
#check (rfl : my_rec x (inv h) = 1)          -- ok
#check (rfl : inv h = h)                     -- ok
#check (rfl : my_rec x (inv h) = my_rec x h) -- ok
#check (rfl : my_rec x h = 1)                -- fail

-- failure of SR:
#check @id (my_rec x h = 1) (@id (my_rec x (inv h) = 1) rfl) -- ok
#check @id (my_rec x h = 1) (@id (1 = 1) rfl)                -- fail

-- fooling tactics into producing type incorrect terms:
def T (X : 1 = my_rec x h → Type) :
  X (@id (1 = my_rec x (inv h)) rfl) = X (@id (1 = my_rec x (inv h)) rfl) :=
by { dsimp, refl }
-- kernel failed to type check declaration 'T' this is usually due to a buggy tactic or a bug in the builtin elaborator
-- elaborated type:
--   ∀ {A : Type} {R : A → A → Prop} (x : A) (h : acc R x) (X : 1 = my_rec x h → Type), X _ = X _
-- elaborated value:
--   λ {A : Type} {R : A → A → Prop} (x : A) (h : acc R x) (X : 1 = my_rec x h → Type), id (eq.refl (X rfl))
-- nested exception message:
-- type mismatch at application
--   X rfl
-- term
--   rfl
-- has type
--   1 = 1
-- but is expected to have type
--   1 = my_rec x h
```
Karl Palmskog (Mar 07 2022 at 06:54):
@Jason Rute if we want a public summary of the issue on both sides, the Proof Assistant Stack Exchange may be a good neutral ground. Pierre-Marie already talks about subject reduction in his answer here: https://proofassistants.stackexchange.com/questions/583/what-is-the-state-of-coinductive-types-and-reasoning-in-coq
Karl Palmskog (Mar 07 2022 at 08:03):
Gaëtan Gilbert linked to the following 2019 paper by Abel & Coquand in an earlier debate ("Failure of Normalization in Impredicative Type Theory with Proof-Irrelevant Propositional Equality"): https://arxiv.org/abs/1911.08174
Jason Rute (Mar 07 2022 at 12:00):
@Karl Palmskog I missed that question. Indeed Pierre-Marie’s answer is helpful, especially the section on Subject Reduction. For example, here is a snippet:
On the practical side, this is not a black-and-white issue. It actually depends on the magnitude of SR breakage. One can survive within a locally broken system (case in point: Coq), but the more failures of SR, the less practical the proof assistant. Indeed, failure of SR means that type-checking is not stable by the equational theory of the language, a clear source of non-modularity. A perfectly fine proposition may become ill-typed after substituting a subterm by another convertible one. This is particularly infuriating when it happens because it is virtually impossible to debug, as one has to manually apply the typing rules to see where it fails.
Jason Rute (Mar 07 2022 at 12:17):
@Karl Palmskog I would very much be open to having this be a question on that stack exchange. I almost went there first, but given I understood so little about it, I thought it was best to get some more clarity here first.
Patrick Massot (Mar 07 2022 at 12:21):
I haven't looked at the source, but that quotation of Pierre-Marie sounds really fishy. It seems to imply that the situation is OK in Coq but not in Lean. However, we still haven't seen any example where this "issue" arose except for specifically hand-crafted cases designed to exhibit this phenomenon.
Jason Rute (Mar 07 2022 at 12:22):
That source was only about Coq, so that is (at least one reason) why it doesn't mention Lean.
Jason Rute (Mar 07 2022 at 12:27):
In particular it is about coinductive types, which is where SR breaks in Coq, I believe.
Jason Rute (Mar 07 2022 at 12:32):
@Mario Carneiro Given that your example seems to depend on proof irrelevance of `Prop` and the small (subsingleton) elimination of `Prop` used in the definition of docs#acc, do you think this would also be a problem for Coq's `SProp`? I don't know enough to understand the difference between Coq's `SProp` and Lean's `Prop`. It seems `SProp` is proof irrelevant, but I don't know how elimination works. Maybe the first thing would be to see if one could define docs#acc in `SProp`...
Jason Rute (Mar 07 2022 at 12:34):
Also, I thought Arend has a Prop type which is proof irrelevant and supports subsingleton elimination (even to subsingletons in Type). I wonder if they have problems, or if HoTT comes to save the day.
Jason Rute (Mar 07 2022 at 12:38):
Oh, from Arend's 1.2.0 release:
We decided to make \Prop proof relevant in general (but it is still proof irrelevant in some cases). The implementation in 1.1.0 was incorrect and it turns out that it is impossible to implement it without loss of the normalization property.
Kevin Buzzard (Mar 07 2022 at 12:54):
but the more failures of SR, the less practical the proof assistant.
Lean has turned out to be a really practical system for doing hard mathematics, and my understanding is that SR is completely broken in Lean. So this word "practical" might be highly subject-dependent.
Jason Rute (Mar 07 2022 at 12:59):
@Patrick Massot and @Kevin Buzzard I think it would at least be nice to know where the boundary is that one has to exploit (or avoid) to get into this case. If nothing else, it gives a better argument for why it isn't a big deal in practice. (For Coq it seems to be using positive coinductive types in a certain way.) I used to think that in Lean it was taking quotients of Props. Now I think it is exploiting proof irrelevance of certain kinds of syntactic subsingleton definitions, or something like that.
Patrick Massot (Mar 07 2022 at 13:00):
The conjecture is: the boundary is somewhere in pure type theory land, it has nothing to do with formalized mathematics.
Sebastien Gouezel (Mar 07 2022 at 13:02):
Just like we know by Gödel that maths are undecidable, but in practice this never shows up unless you are working in very specific areas of set theory.
Jason Rute (Mar 07 2022 at 13:07):
Note the same argument can be made about any obscure bug in Lean. I guess your point is that there is no way for someone to exploit this "bug" to trick Lean into accepting a bad proof (unlike the recent unsoundness bug in Lean 4). So it just comes down to, if you do weird stuff Lean might crash with an incomprehensible error, but that is just normal for computers. (Edit: Crash is too strong a word. Error is more correct.)
Arthur Paulino (Mar 07 2022 at 13:07):
Patrick Massot said:
The conjecture is: the boundary is somewhere in pure type theory land, it has nothing to do with formalized mathematics.
Following this reasoning and given the fact that Lean 4 will be (and already is) a powerful and genuine programming language, I wonder if this issue has a higher chance of showing up in other domains. For example, if someone wants to use Lean 4 to create some unorthodox application or programming language or something like that
Patrick Massot (Mar 07 2022 at 13:11):
Jason, this has nothing to do with soundness. This simply never happened.
Patrick Massot (Mar 07 2022 at 13:13):
Arthur, this is a perfectly valid concern; I have no idea. I guess the Coq people who think subject reduction is important have good reasons to think so, related to whatever they are doing with their proof assistant. If we start to use Lean for whatever they are doing where subject reduction is important, then we may have to care.
Jason Rute (Mar 07 2022 at 13:13):
I guess that is my point. The boundary is that if it isn’t a soundness issue (which I agree this isn’t and hasn’t ever been) then it is just a usability question. If there are some funky side issues with the type theory which don’t break soundness, then Lean users can live with those if they don’t come up in practice.
Patrick Massot (Mar 07 2022 at 13:14):
Frankly we could even live with a soundness issue if it were as artificial as what we are currently discussing.
Jason Rute (Mar 07 2022 at 13:15):
I’m not sure you speak for the community on that point.
Kevin Buzzard (Mar 07 2022 at 14:11):
We're trying to type in proofs here, not exploit the system :-)
Arthur Paulino (Mar 07 2022 at 14:25):
I think the concern, in case of unsoundness, is accidental/unintentional exploitation. Weird scenario: an AI learns how to abuse it
Mario Carneiro (Mar 07 2022 at 15:30):
The theoretical existence of soundness holes in the theory can make proof translation much more difficult, because a proof of soundness translates fairly directly to a proof export algorithm. As long as soundness is an open question (like in Coq), it is unclear how to move the theorems into a known-to-be-sound foundation.
A variation on this issue appears in Lean, in the claim that "Type is all you need". This is certainly a plausible claim given how we normally use mathlib, so if we want to translate Lean proofs to ZFC then the analogue of a soundness hole would be a construction of a ZFC model with an inaccessible or something like that. That is, it's something that is entirely possible to do, but difficult to accidentally stumble upon (because we aren't usually exploiting such strong induction principles in lean). A translation procedure then has to figure out what kind of subsystem captures all the things done in practice (including higher universes than `Type`, because they do show up in practice) even though there are clear ways to break it.
Mario Carneiro (Mar 07 2022 at 15:37):
Sebastien Gouezel said:
Just like we know by Gödel that maths are undecidable, but in practice this never shows up unless you are working in very specific areas of set theory.
I'm not sure I agree with this statement. Maths is undecidable, and this property shows up all the time in the fact that mathematicians still have a job and we did not find a way to push-button automate the whole thing away. (Hm, this seems to be a conflicting meaning of "undecidable" since I think you mean "independent of the base theory".) Still, I would say that in just the same way as above, the theoretical existence of independent statements makes it much harder to figure out how to decide in practice large classes of problems with a uniform method.
Mario Carneiro (Mar 07 2022 at 15:42):
Kevin Buzzard said:
but the more failures of SR, the less practical the proof assistant.
Lean has turned out to be a really practical system for doing hard mathematics, and my understanding is that SR is completely broken in Lean. So this word "practical" might be highly subject-dependent.
SR is not "completely broken" in lean. I would say the situation is pretty similar to that described in Coq. The main evidence that SR is a thing to watch out for is in the fact that dsimp
and change
put an id
in the proof term (they wouldn't need to do that if SR always happened reliably), and sometimes unfolding things with dsimp
can make defeq problems much harder which can cause slow or unusable proofs (this came up one or two times in LTE in some joint coding sessions with @Johan Commelin). So to me it is a hazard sign that I have been aware of for a long time but have only had to come near a small handful of times in all my leaning.
Sebastien Gouezel (Mar 07 2022 at 15:42):
By undecidable, I mean that some statements are neither provable nor refutable (which implies in particular that there is no algorithm to decide the truth of all statements). And I don't think this has anything to do with mathematicians having a job: even if everything were decidable (and even decidable by a reasonable time algorithm), this would not solve the problem of finding the interesting questions and the fruitful theories, which is the heart of our job.
Mario Carneiro (Mar 07 2022 at 15:44):
In fact, I would say that the issues with unfolding are far more commonly performance issues than actual rejected proofs. It isn't too hard to come up with examples where `x = y` and `y = z` are fast to check but `x = z` is super slow; even if it might resolve to defeq eventually, you can't put it in your proof, and so you are working around the lack of "effective transitivity".
Johan Commelin (Mar 07 2022 at 15:47):
Mario Carneiro said:
sometimes unfolding things with `dsimp` can make defeq problems much harder, which can cause slow or unusable proofs (this came up one or two times in LTE in some joint coding sessions with Johan Commelin).
The memories :scream:
Jason Rute (Mar 27 2022 at 23:02):
This discussion came up again on the new Proof Assistant StackExchange (not by me).
Jason Rute (Mar 27 2022 at 23:06):
I realize however that the real question is not something like "why is Lean broken" or "how is Lean usable given that it is broken". We know from Mario that subsingleton elimination + proof irrelevance is the issue and that is a relatively small issue. I think the real question is what does having subsingleton elimination and proof irrelevance give you in Lean? This may be obvious to others, but I'm not certain. I have the (possibly) wrong impression that they are there to allow you to write computable functions using (Prop) classical logic and run them in the VM, and maybe also to ignore proofs during computation, making computation faster. But maybe I'm totally mistaken.
Eric Wieser (Mar 27 2022 at 23:22):
What exactly does "subsingleton elimination" mean here?
Eric Wieser (Mar 27 2022 at 23:23):
Presumably it doesn't refer to `subsingleton.elim`, as I don't think the lean 3 kernel knows about that.
Jason Rute (Mar 27 2022 at 23:31):
I'm referring to what this document calls subsingleton elimination, and what Mario explicitly used in his example above to eliminate docs#acc to `nat` (which doesn't go through if you define a similar thing to `acc` in Coq's `SProp`).
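The rule in question: an inductive `Prop` may eliminate into arbitrary `Sort`s when it has at most one constructor and every argument of that constructor is either a `Prop` or appears in the result type. Both `eq` and `acc` satisfy this, which is why their recursors can target `Type`; a quick check in Lean 3:

```lean
-- Both recursors have motives landing in `Sort l`, not just `Prop`:
#check @eq.rec
--   Π {α : Sort u} {a : α} {C : α → Sort l}, C a → Π {b : α}, a = b → C b
#check @acc.rec
--   here too the motive lands in `Sort l`, which is what `my_rec` above used
```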
Kyle Miller (Mar 28 2022 at 00:00):
Maybe it's worth calling it "syntactic subsingleton elimination" just for sake of discussion.
Mario Carneiro (Mar 28 2022 at 04:01):
Reposting my comment from the SE question:
"why does Lean have proof irrelevance": Without proof irrelevance, I think it is hard to argue that your logic acts like classical mathematics, because subtypes don't act like subtypes anymore. Things are much more like HoTT at this level. (It doesn't have to be definitional proof irrelevance though. Coq gets by just fine with axiomatic proof irrelevance. But definitional proof irrelevance is unquestionably convenient, and even Coq recognized this in adding SProp.)
"why... syntactic subsingleton elimination": The main application of subsingleton elimination is constructing definitions by well founded recursion where the proof of well-foundedness is a Prop and hence erased by the VM/compiler, which is important in practice. If well_founded was data, then the noncomputable checker would have to get more sophisticated with an effect system and ghost annotations so that noncomputable proofs of well foundedness can be used in computable recursive functions.
Leonardo de Moura (Mar 28 2022 at 15:06):
@Mario Carneiro
Yes, the motivation for including `id` in `dsimp` and `change` was performance because of unfolding. Before we had `id` there, we would often have performance problems with them. I don't recall ever having added an auxiliary `id` to work around the SR issue.
I am glad you are always willing to explain the "SR issue" to users.
Not sure how to convince others this is a nonissue and soundness is not affected. Should we create a FAQ for it?
BTW, I found the issue about "type checking stability" raised by one of the Coq guys very disingenuous. To trigger the SR issue, we need to use types such as `acc` (the accessibility predicate for well-founded recursion), which is mainly used by Lean to justify termination and is barely used by users.
Leonardo de Moura (Mar 28 2022 at 15:13):
Regarding "subsingleton elimination", the main motivation was extra definitional equalities when using structural recursion.
```lean
inductive Vector (α : Type u) : Nat → Type u where
  | nil : Vector α 0
  | cons : α → Vector α n → Vector α (n+1)

def Vector.map2 (f : α → β → σ) : Vector α n → Vector β n → Vector σ n
  | nil, nil => nil
  | cons a as, cons b bs => cons (f a b) (map2 f as bs)

theorem Vector.map2_cons_eq : map2 f (cons a as) (cons b bs) = cons (f a b) (map2 f as bs) :=
  rfl -- It would not hold without subsingleton elimination
```
Leonardo de Moura (Mar 28 2022 at 15:38):
Arthur Paulino said:
Patrick Massot said:
The conjecture is: the boundary is somewhere in pure type theory land, it has nothing to do with formalized mathematics.
Following this reasoning and given the fact that Lean 4 will be (and already is) a powerful and genuine programming language, I wonder if this issue has a higher chance of showing up in other domains. For example, if someone wants to use Lean 4 to create some unorthodox application or programming language or something like that
No, there isn't a higher chance. To trigger the issue, one has to use types such as `Acc` (the accessibility predicate for defining well-founded recursion). It is indirectly used by Lean to justify that a function terminates. The whole process is automated by Lean: Lean converts the definition to one using `WellFounded.fix` to prove termination, proves the equation theorems, and compiles the code. When reasoning about the code, one uses the equalities proved automatically by Lean.
Note that even when (very advanced) users want to use `Acc` to show that a new relation `R` is well-founded, the SR issue does not occur there. It is a very localized use. They would prove that the relation is well-founded, and Lean would use the fact as a "black box". The file `src/Init/WF.lean` contains examples.
Leonardo de Moura (Mar 28 2022 at 15:40):
Arthur Paulino said:
I think the concern, in case of unsoundness, is accidental/unintentional exploitation. Weird scenario: an AI learns how to abuse it
It was mentioned above, but it does not hurt to repeat: there is no soundness issue.
Last updated: Dec 20 2023 at 11:08 UTC