Zulip Chat Archive
Stream: CSLib
Topic: deriving dot notation
Chris Henson (Nov 08 2025 at 19:42):
Let's continue the CSlib-specific discussion from #Is there code for X? > Deriving dot notation from class instances here.
Chris Henson (Nov 08 2025 at 20:07):
If desired, I can look at the extension discussed there of creating an abbrev for each compatible field (this seems not hard). Writing it as an attribute is straightforward because the code was already written in MetaM. There should hopefully be no problems with grind, since these are abbrevs.
I do think @Ching-Tsun Chou has a point about complexity. I think it got obscured by being too entwined with the specifics of Acceptor, and I wouldn't phrase it as "extending Lean", but in general we should weigh the maintenance cost of meta code against its benefit. I am hesitant when the upside is additional dot notation. I also think that when writing meta code that creates declarations, it is good to think about the user experience.
Fabrizio Montesi (Nov 08 2025 at 20:33):
There are two discussions at play here.
- Whether to have an Acceptor typeclass to establish a common interface (guaranteeing the same def names), reuse code (not much for now, but it might expand in the future, e.g., when finiteness assumptions are given as parameters), and force that things (like language) are defined in the same way.
- Whether we sometimes wanna have dot notation when using typeclasses. More specifically, as in this case, when we're basically using a class instance to implement a trait/interface with default methods à la OOP, where people are used to getting dot notation.
This triggered my curiosity to explore whether a programmatic solution for (2) can be provided, to support the pattern.
@Ching-Tsun Chou had some doubts about whether having Acceptor is worth it, so the discussions got mixed. Also, (1) is specific whereas (2) is a much more general point.
I see value in doing (1) even without doing (2). See the points I list in (1), and there are already 7 instances of Acceptor (I'm gonna merge it with OmegaAcceptor). So I think it's worth a shot. We can always revert it to having duplicated code if it doesn't pay off.
Chris Henson (Nov 08 2025 at 20:40):
Yes, my hope was to untangle these two things a bit. My inclination is also to try (1) and leave (2) on the table for later.
Ching-Tsun Chou (Nov 08 2025 at 23:35):
The current alternatives are all conveniently available in different PRs:
- cslib#142 is a design without the "acceptor" classes, where the definition language and the theorem mem_language are directly on each of the 6 possible combinations of {DA,NA}.{FinAcc,Buchi,Muller}.
- cslib#144 is a design with the "acceptor" classes, where language and mem_language are only in the "acceptor" file, but we end up having to write an expression like Cslib.Automata.Acceptor.language nfa = l rather than the much shorter nfa.language = l, or Acceptor.language na.toDAFinAcc = Acceptor.language na rather than na.toDAFinAcc.language = na.language.
- cslib#145 is a modification of cslib#144, where I added back the definition language and the theorem mem_language for each of the 6 possible combinations of {DA,NA}.{FinAcc,Buchi,Muller}. (To be sure, I did only {DA,NA}.{FinAcc} in the PR so far, but the idea is to do it for all six combinations.) So we get the shorter, more readable expressions back at the expense of the same additional code as in cslib#142 (on top of the code for the "acceptor" classes).
Ching-Tsun Chou (Nov 08 2025 at 23:54):
Here's my suggestion: we go for cslib#145, so that we can actually start developing some automata theory, in which we can use natural notations like nfa.language. In the meantime, if @Fabrizio Montesi is so inclined, he can pursue using Lean metaprogramming to eliminate the duplicated code. The duplicated code is localized and can be easily removed with a new PR. On the other hand, if that approach proves unsuccessful, the acceptor code can also be easily removed because of the presence of the duplicated code.
Ching-Tsun Chou (Nov 09 2025 at 00:01):
The approach above has the advantage that the development of automata theory is not held up by the finer points in automata representation. It has the additional advantage that as the code base of automata theory grows, the metaprogramming code will get more test cases and reality checks as it is being developed.
Fabrizio Montesi (Nov 09 2025 at 06:27):
I think the situation is much better than you think:
Some of the problems you mention are just scoping issues. Here's how you can make the code look nice: https://github.com/leanprover/cslib/pull/144/commits/45a57cd5855915eafbbecad6be899e211846f563
The actual question is really whether we want so badly to be able to write nfa.language instead of language nfa. The same problem arises in many parts of Lean. If it is so important, I think that the right way of doing it is to just add this single line (with the right parameter type) for each of the 7 instances:
abbrev language (a : FinAcc State Symbol) := Acceptor.language a
This gives dot notation for language. There's no need to do the same for the theorems, since grind will find them (by expanding the abbrev automatically; I've tested this: the proofs work with no modifications). And even when we reference them manually, in general we should refer to theorems under the class namespace (to make explicit in the proof that we're using a shared concept; this is a general observation, independent of Acceptor).
In summary:
- We can write simple things like language nfa.
- If we want nfa.language, it's just one line away.
I'll add the abbrevs to cslib#144, so that we get dot notation for language and we can go ahead with more stuff on automata. Is that reasonable, aesthetics-wise?
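A minimal, self-contained sketch of the abbrev pattern described above; the names Acceptor', FinAcc', and the placeholder instance are made up for illustration and are not the actual Cslib declarations:
import Mathlib

-- Simplified stand-in for the acceptor class: `language` lives in the class.
class Acceptor' (A : Type) (Word : outParam Type) where
  language : A → Set Word

-- Simplified stand-in for one concrete kind of automaton.
structure FinAcc' (State Symbol : Type) where
  accept : Set State

-- Placeholder instance, just so the sketch elaborates.
instance {State Symbol : Type} : Acceptor' (FinAcc' State Symbol) (List Symbol) where
  language _ := ∅

namespace FinAcc'

-- The one-line abbrev that restores dot notation for this instance.
abbrev language {State Symbol : Type} (a : FinAcc' State Symbol) : Set (List Symbol) :=
  Acceptor'.language a

end FinAcc'

-- `a.language` now works and unfolds to the class function definitionally.
example {State Symbol : Type} (a : FinAcc' State Symbol) :
    a.language = Acceptor'.language a := rfl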
Chris Henson (Nov 09 2025 at 06:32):
I personally would prefer just writing language nfa. For me the simplicity of fewer declarations outweighs the very slight benefit of dot notation. As you say, once you open the scope they are nearly identical already.
Fabrizio Montesi (Nov 09 2025 at 06:37):
Yes and in this case language a reads quite nicely as 'the language of a'.
Ching-Tsun Chou (Nov 09 2025 at 08:01):
@Fabrizio Montesi Please test your abbrev-based dot notation on the following example. I have verified that the language ... + open ... in version works. The files that Regular.lean imports plus Cslib.Computability.Automata.OmegaAcceptor should provide all the definitions needed.
variable (State1 State2 State3 : Type*) in
example
(a1 : DA.FinAcc State1 Symbol) (a2 : DA.Buchi State2 Symbol) (a3 : DA.Buchi State3 Symbol) :
a1.language * a2.language = a3.language := by
sorry
Ching-Tsun Chou (Nov 09 2025 at 08:03):
If the example works, I think the solution is good enough and we can move on.
Fabrizio Montesi (Nov 09 2025 at 11:43):
Yes, it works as expected.
This also works btw, with no need for weird parentheses:
open Acceptor ωAcceptor in
variable (State1 State2 State3 : Type*) in
example
(a1 : DA.FinAcc State1 Symbol) (a2 : DA.Buchi State2 Symbol) (a3 : DA.Buchi State3 Symbol) :
language a1 * language a2 = language a3 := by
sorry
And, just out of curiosity, statements and proofs seem to work unaffected even when the two styles are mixed (but we should never do this!):
open Acceptor in
/-- The `DA` constructed from an `NA` has the same language. -/
@[scoped grind =]
theorem toDAFinAcc_language_eq {na : NA.FinAcc State Symbol} :
na.toDAFinAcc.language = language na := by
Fabrizio Montesi (Nov 09 2025 at 12:05):
I've been trying to find examples of similar approaches to the abbrev here, and this seems to be one: https://github.com/leanprover-community/mathlib4/blob/29692a968705f26dc0f7432961647c38f620ae1f/Mathlib/Order/Category/DistLat.lean#L60
(The ones pointed out in the github thread so far are slightly different from what we're doing here.)
Chris Henson (Nov 09 2025 at 12:24):
There's a little more going on there than just dot notation; I'm not sure I would take that as an example.
Fabrizio Montesi (Nov 09 2025 at 12:25):
Testing the abbrev solution: it all works, but there's a subtle effect worth pointing out.
With all the abbrevs in place, we can no longer use the plain 'class syntax with open' discussed previously in some situations. For example, this gives an error:
namespace Automata.εNA.FinAcc
[...]
open Acceptor in
theorem toNAFinAcc_language_eq {ena : εNA.FinAcc State Symbol} :
language ena.toNAFinAcc = ena.language := by
This gives an error for language ena.toNAFinAcc, because language now also exists under Automata.εNA.FinAcc (the abbrev), and the name in the current namespace takes priority over the opened Acceptor.language. One then has to write Acceptor.language ena.toNAFinAcc explicitly (which works).
So if one wants to use the non-dot syntax, this looks like:
namespace Automata.εNA.FinAcc
[...]
open Acceptor in
theorem toNAFinAcc_language_eq {ena : εNA.FinAcc State Symbol} :
Acceptor.language ena.toNAFinAcc = Acceptor.language ena := by
So the price to pay for dot notation here is that if one wants to use normal function application notation, it gets a bit worse.
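A minimal sketch of this shadowing effect, mirroring what is reported above (Acceptor', Foo', and Bar' are illustrative names, not Cslib code):
import Mathlib

class Acceptor' (A : Type) where
  language : A → Set Nat

structure Foo' where
  x : Nat

structure Bar' where
  y : Nat

instance : Acceptor' Foo' where language _ := ∅
instance : Acceptor' Bar' where language _ := ∅

namespace Foo'

-- The abbrev that provides dot notation for `Foo'`.
abbrev language (f : Foo') : Set Nat := Acceptor'.language f

-- Inside `namespace Foo'`, a bare `language` resolves to `Foo'.language` even
-- with `open Acceptor'`, so for a `Bar'` the class function must be qualified.
open Acceptor' in
example (b : Bar') : Acceptor'.language b = ∅ := rfl

end Foo'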
Chris Henson (Nov 09 2025 at 12:28):
I think that pretty much settles the case against those abbrevs; just not worth it IMO.
Fabrizio Montesi (Nov 09 2025 at 13:02):
Yes, language a without the abbrev seems to be a better path towards choosing the 'right' language def. Let's go ahead with that for now. PR updated (and I've dealt with the other comments).
I suggest we merge and get back to working on results about LTS and automata theory for now. I see more stuff in LTS that should really become a class, it'll provide further food for thought later.
Ching-Tsun Chou (Nov 09 2025 at 23:01):
I am at the current "main" and I want to prove the following theorem:
import Cslib.Computability.Automata.DA
namespace Cslib.Automata.DA
variable {State : Type _} {Symbol : Type _}
open Acceptor ωAcceptor in
@[scoped grind _=_]
theorem buchi_eq_finAcc_omegaLim {da : DA State Symbol} {acc : Set State} :
language (Buchi.mk da acc) = (language (FinAcc.mk da acc))↗ω := by
ext xs ; constructor
· sorry
· sorry
end Cslib.Automata.DA
rintro does not work at the first sorry. What can I do to unpack the LHS?
Ching-Tsun Chou (Nov 09 2025 at 23:17):
The following works:
open Filter
open scoped Computability Cslib.FLTS
  Cslib.Automata.Acceptor Cslib.Automata.ωAcceptor
  Cslib.Automata.DA Cslib.Automata.DA.FinAcc Cslib.Automata.DA.Buchi
namespace Cslib.Automata.DA
variable {State : Type _} {Symbol : Type _}
open Acceptor ωAcceptor in
@[scoped grind _=_]
theorem buchi_eq_finAcc_omegaLim {da : DA State Symbol} {acc : Set State} :
language (Buchi.mk da acc) = (language (FinAcc.mk da acc))↗ω := by
ext xs ; constructor
· intro h
have : ∃ᶠ k in atTop, da.run xs k ∈ acc := by grind
sorry
· sorry
But if I have to explicitly state the intermediate result (∃ᶠ k in atTop, da.run xs k ∈ acc) every time I need it, then the supposed saving from eliminating the "duplicated code" is just illusory, isn't it?
Furthermore, I have to open scoped a bunch of namespaces to make grind work. To be sure, the above list of open scoped is not the minimal list. But how should I figure out what the minimal list is?
Eric Wieser (Nov 10 2025 at 00:19):
I've very much skimmed this thread, but one advantage of having language dfa and language nfa is that you can then have a typeclass that says language (n1 + n2) = language n1 + language n2 or similar, with instances for all sorts of automata.
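A rough sketch of the kind of typeclass Eric describes, with made-up names (Acceptor', SumLanguage, and language_add are hypothetical, not Cslib API): once language comes from a class, a law like this is stated once and instantiated for many kinds of automata.
import Mathlib

-- Simplified stand-in for the acceptor class (illustrative only).
class Acceptor' (A : Type) (Word : outParam Type) where
  language : A → Set Word

open Acceptor'

-- Hypothetical law: automata with a sum whose language is the union
-- of the component languages.
class SumLanguage (A : Type) (Word : outParam Type) [Acceptor' A Word] [Add A] : Prop where
  language_add : ∀ a b : A, language (a + b) = language a ∪ language b

-- Anything proved from `SumLanguage` then applies to every instance at once.
example {A Word : Type} [Acceptor' A Word] [Add A] [SumLanguage A Word] (a : A) :
    language (a + a) = language a := by
  rw [SumLanguage.language_add, Set.union_self]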
Chris Henson (Nov 10 2025 at 01:00):
Let me try to separate out your questions @Ching-Tsun Chou
rintro does not work at the first sorry. What can I do to unpack the LHS?
It is not clear to me what kind of pattern you want here, could you explain?
But if I have to explicitly state the intermediate result (∃ᶠ k in atTop, da.run xs k ∈ acc) every time I need it, then the supposed saving from eliminating the "duplicated code" is just illusory, isn't it?
I believe this is just missing API, as what you've written here is equal to the hypothesis. To be explicit, this works:
import Cslib
open Filter Cslib Automata ωAcceptor DA Buchi
variable {α β : Type*}
@[grind →]
theorem foo {toDA : DA α β} {accept : Set α} (h : ys ∈ language { toDA, accept : Buchi α β }) :
∃ᶠ k in atTop, toDA.run ys k ∈ accept := h
If you would like this to be inferred by grind, add this and annotate it as a forward rule as I have done above.
Furthermore, I have to open scoped a bunch of namespaces to make grind work. To be sure, the above list of open scoped is not the minimal list. But how should I figure out what the minimal list is?
I don't have a great answer to this. I don't believe there is a way to minimize open scoped in this way. As I described earlier, I would probably just write
open Computability Cslib FLTS Automata Acceptor ωAcceptor DA FinAcc Buchi
for brevity at the outset.
Ching-Tsun Chou (Nov 10 2025 at 02:04):
@Chris Henson Your theorem foo is basically one direction of the theorem mem_language specialized to DA.Buchi. If I have to prove that theorem as an API lemma anyway, what "code deduplication" has this whole business of "acceptors" achieved anyway? We could have taken the route in cslib#142 and ended up with the same amount of code (actually less code, because there wouldn't be separate Acceptor and ωAcceptor classes), and we wouldn't need to open Acceptor ωAcceptor before language can be used without qualifiers (because each of {DA,NA}.{FinAcc,Buchi,Muller} would have its own language method and the dot notation could be used).
Ching-Tsun Chou (Nov 10 2025 at 02:29):
Put another way, if I have to state and prove the following theorem (or some version of it) explicitly anyway:
open ωAcceptor in
theorem mem_language (a : Buchi State Symbol) (xs : ωSequence Symbol) :
xs ∈ language a ↔ ∃ᶠ k in atTop, a.run xs k ∈ a.accept :=
Iff.rfl
what "code deduplication" have the "acceptors" bought us? I could have defined DA.Buchi.language and proved the above theorem and dispensed with the annoying open ωAcceptor in by using the dot-notation. The only "duplicated" code is the definition of DA.Buchi.language, which is just one or two lines anyway. In the end, because we don't need "acceptors" anymore, we end up with fewer files and about the same number of lines of code. And a reader of the code has one fewer concept to comprehend.
Chris Henson (Nov 10 2025 at 02:51):
I guess I am not seeing exactly why you need that have or the separate theorem you've written. Is the rest of that proof branch short enough that you could finish it to demonstrate what grind isn't picking up?
I'll leave the more general decisions about Acceptor to you and Fabrizio. I believe the intent in merging cslib#144 was to move past this design discussion so you could continue actually working on automata. If after some further work you still feel it is not adding anything, we can certainly remove Acceptor. I think it is just a bit easier to see this with some concrete downstream proofs in place rather than rehashing the discussion beforehand.
Ching-Tsun Chou (Nov 10 2025 at 02:54):
Chris Henson said:
I guess I am not seeing exactly why you need that have or the separate theorem you've written. Is the rest of that proof branch short enough that you could finish it to demonstrate what grind isn't picking up?
Please tell me how I can make progress in proving buchi_eq_finAcc_omegaLim if I don't have the have or a separate theorem like mem_language. How do I transform the LHS of buchi_eq_finAcc_omegaLim into something I can use? We have a similar problem for the RHS as well.
Chris Henson (Nov 10 2025 at 03:00):
I am not necessarily contradicting this; I am just saying it would be helpful for me to see the rest of the proof (with the have) to better understand the problem.
Ching-Tsun Chou (Nov 10 2025 at 03:42):
Consider the following partial proof:
open Acceptor ωAcceptor in
@[scoped grind _=_]
theorem buchi_eq_finAcc_omegaLim {da : DA State Symbol} {acc : Set State} :
language (Buchi.mk da acc) = (language (FinAcc.mk da acc))↗ω := by
ext xs ; constructor
· simp only [Buchi.mem_language]
intro h
apply Frequently.mono h
intro n h_n
simp only [FinAcc.mem_language]
sorry
· sorry
Just before the first sorry, the tactic state looks like:
State : Type u_1
Symbol : Type u_2
da : DA State Symbol
acc : Set State
xs : ωSequence Symbol
h : ∃ᶠ (k : ℕ) in atTop, (da.run xs) k ∈ acc
n : ℕ
h_n : (da.run xs) n ∈ acc
⊢ da.mtr da.start (xs.extract 0 n) ∈ acc
Now if we comment out the two simps, the tactic state just before the first sorry looks like:
State : Type u_1
Symbol : Type u_2
da : DA State Symbol
acc : Set State
xs : ωSequence Symbol
h : xs ∈ ωAcceptor.language { toDA := da, accept := acc }
n : ℕ
h_n : ({ toDA := da, accept := acc }.run xs) n ∈ { toDA := da, accept := acc }.accept
⊢ xs.extract 0 n ∈ Acceptor.language { toDA := da, accept := acc }
The second tactic state is much less readable than the first one, isn't it? Upon seeing the first tactic state, you immediately see that the proof boils down to showing that da.mtr da.start (xs.extract 0 n) = (da.run xs) n. Would you be able to see that from the second tactic state? I can't. To be sure, the two simps may end up being not strictly necessary in the end, but they surely help, don't they? Which means that I need to prove the mem_language lemmas for DA.FinAcc and DA.Buchi, just to keep myself sane when doing proofs. Now if I need those lemmas explicitly, then the whole "code deduplication" argument is just illusory, isn't it?
Fabrizio Montesi (Nov 10 2025 at 05:18):
Instead of simp only [Buchi.mem_language] you do simp only [ωAcceptor.mem_language], and if you wanna reveal the definition of Accepts, you do simp only [ωAcceptor.mem_language, ωAcceptor.Accepts].
So for example:
open Acceptor ωAcceptor in
@[scoped grind _=_]
theorem buchi_eq_finAcc_omegaLim {da : DA State Symbol} :
language (Buchi.mk da acc) = (language (FinAcc.mk da acc))↗ω := by
ext xs; constructor
· simp only [ωAcceptor.mem_language, ωAcceptor.Accepts]
intro h
apply Frequently.mono h
intro n h_n
simp only [Acceptor.mem_language, Acceptor.Accepts]
sorry
· sorry
which takes you to:
State : Type u_1
Symbol : Type u_2
acc : Set State
da : DA State Symbol
xs : ωSequence Symbol
h : ∃ᶠ (k : ℕ) in atTop, (da.run xs) k ∈ acc
n : ℕ
h_n : (da.run xs) n ∈ acc
⊢ da.mtr da.start (xs.extract 0 n) ∈ acc
which is what you want.
Ching-Tsun Chou (Nov 10 2025 at 08:45):
I didn't mention that I actually have @[simp] annotations on the lemmas DA.FinAcc.mem_language and DA.Buchi.mem_language. So simp alone already does the job for me. Then simp? told me exactly which simp lemmas were used and I got the simp only with a single click. I tried putting @[simp] annotations on both language and mem_language in both Acceptor.lean and OmegaAcceptor.lean (and removed the @[simp] annotations on DA.FinAcc.mem_language and DA.Buchi.mem_language). This is how far simp can get me with that approach:
State : Type u_1
Symbol : Type u_2
da : DA State Symbol
acc : Set State
xs : ωSequence Symbol
h : xs ∈ {xs | ωAcceptor.Accepts { toDA := da, accept := acc } xs}
n : ℕ
h_n : ({ toDA := da, accept := acc }.run xs) n ∈ { toDA := da, accept := acc }.accept
⊢ xs.extract 0 n ∈ {xs | Acceptor.Accepts { toDA := da, accept := acc } xs}
Not very useful, is it?
Furthermore, lemmas like DA.FinAcc.mem_language and DA.Buchi.mem_language can be used in more ways than just by rw or simp. For example, I can get their mp or mpr direction and use that implication in forward or backward reasoning. How am I supposed to do that if I don't have those lemmas already proved?
Fabrizio Montesi (Nov 10 2025 at 11:09):
(deleted)
Fabrizio Montesi (Nov 10 2025 at 12:04):
If you want simp to help you, you can just annotate these things. Here's how to do it post-definition:
attribute [simp] ωAcceptor.mem_language
attribute [simp] Cslib.Automata.DA.Buchi.instωAcceptor
As for the second question, can't you do it in the usual way? I tested this in your proof:
haveI := (ωAcceptor.mem_language (Buchi.mk da acc) xs).mp
haveI := (ωAcceptor.mem_language (Buchi.mk da acc) xs).mpr
Ching-Tsun Chou (Nov 10 2025 at 17:41):
Is that your idea of "code deduplication"? If I need the mp direction 3 times and the mpr direction 2 times, I'll (basically) re-prove DA.Buchi.mem_language 5 times. And every time I do it, I'll have to reconstruct in my mind what the theorem I'm proving actually looks like. As to your suggestion about simp, it doesn't really work. I played with it and the best it can do is:
State : Type u_1
Symbol : Type u_2
da : DA State Symbol
acc : Set State
xs : ωSequence Symbol
h : xs ∈ {xs | ∃ᶠ (k : ℕ) in atTop, (da.run xs) k ∈ acc}
n : ℕ
h_n : (da.run xs) n ∈ acc
⊢ xs.extract 0 n ∈ {xs | Acceptor.Accepts { toDA := da, accept := acc } xs}
Also note that the name instωAcceptor is an instance name generated by Lean.
Anyway, I'm putting the mem_language lemmas back. It is me who's doing the proofs and I'll do them in a way that is most comfortable to me.
Fabrizio Montesi (Nov 10 2025 at 17:46):
That's weird, it worked for me by writing simp at the beginning of the relevant case. What does your proof look like?
Ching-Tsun Chou (Nov 10 2025 at 19:03):
You need to add:
attribute [simp] Cslib.Automata.Acceptor.mem_language
attribute [simp] Cslib.Automata.DA.FinAcc.instAcceptor
attribute [simp] Cslib.Automata.ωAcceptor.mem_language
attribute [simp] Cslib.Automata.DA.Buchi.instωAcceptor
to make everything work.
As I wrote above, I'm putting the mem_language lemmas back. I'm not going to spend my days going through level upon level of indirection just to prove some painfully obvious facts implicitly. I'll keep my proofs simple and stupid by explicitly spelling out those painfully obvious facts. Life is too short and I want to get on with doing some actual development of automata theory.
Fabrizio Montesi (Nov 10 2025 at 19:06):
Ching-Tsun Chou said:
You need to add:
attribute [simp] Cslib.Automata.Acceptor.mem_language
attribute [simp] Cslib.Automata.DA.FinAcc.instAcceptor
attribute [simp] Cslib.Automata.ωAcceptor.mem_language
attribute [simp] Cslib.Automata.DA.Buchi.instωAcceptor
to make everything work.
As I wrote above, I'm putting the mem_language lemmas back. I'm not going to spend my days going through level upon level of indirection just to prove some painfully obvious facts implicitly. I'll keep my proofs simple and stupid by explicitly spelling out those painfully obvious facts. Life is too short and I want to get on with doing some actual development of automata theory.
I still don't get what the problem is with just putting a @[simp] over precisely those definitions (instances included, you can annotate them), since that's what you're doing elsewhere.
My example with 'attribute' was to keep it self-contained.
Ching-Tsun Chou (Nov 10 2025 at 19:52):
@Fabrizio Montesi You claim that having "acceptors" would lead to "code deduplication". But with all the @[simp] annotations, the open scoped Acceptor lines to allow language to be used without qualifiers, and the overhead of the "acceptor" code itself (Acceptor.lean and OmegaAcceptor.lean), are you actually reducing the lines of code overall? Sure, you'll have fewer explicit definitions and theorems (6 language definitions and 6 mem_language theorems, to be precise). But when you end up writing things like:
haveI := (ωAcceptor.mem_language (Buchi.mk da acc) xs).mp
you are essentially proving the theorem DA.Buchi.mem_language on the fly. Why not prove it once and for all and refer to it later? After all, there are only 6 such theorems. Perhaps you are smart enough to see the line above and know immediately that it is the following theorem:
xs ∈ language a → ∃ᶠ k in atTop, a.run xs k ∈ a.accept
I'm not that smart, and I suspect most future readers of the code are not that smart, either.
And does the "acceptor" actually offer any abstraction at a conceptual level? If you read the code and strip away all irrelevant details, you'll see that an "acceptor" consists of three things:
- A predicate P
- A set defined by the predicate P: s := { x | P x }
- A theorem relating P and s: x ∈ s ↔ P x
That's it. I am highly skeptical that any useful conceptual abstraction can be gotten out of that meager set of data. I also don't think that explicitly spelling out each instance of "acceptor" (and there are only 6 of them) makes the code unnecessarily redundant or duplicative or harder to understand.
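Spelled out as a minimal, self-contained sketch (with the illustrative name Acceptor', not the actual Cslib code), those three pieces are:
import Mathlib

-- The predicate `P`, packaged as a class field.
class Acceptor' (A : Type) (Word : outParam Type) where
  Accepts : A → Word → Prop

namespace Acceptor'

variable {A Word : Type} [Acceptor' A Word]

-- The set defined by the predicate: `s := { x | P x }`.
def language (a : A) : Set Word := { w | Accepts a w }

-- The theorem relating `P` and `s`: `x ∈ s ↔ P x`.
theorem mem_language (a : A) (w : Word) : w ∈ language a ↔ Accepts a w := Iff.rfl

end Acceptor'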
Fabrizio Montesi (Nov 17 2025 at 11:07):
Maybe this will help the discussion. I've taken your cslib#152 and made it work without the additional lemmas, without (importantly) changing your proofs: https://github.com/leanprover/cslib/commit/a62c4c838f87f77edd2752738d36c892673bac8b
(Sidenote: buchi_eq_finAcc_omegaLim can be proven by grind if you open OmegaLanguage, but I think using simp is perfectly fine.)
Back to the general topic:
What I'm trying to do is understand whether you can keep proving things simply while, at the same time, we make the code easier to review.
Specifically, the 'deduplication' given by (Omega)Acceptor that I'm interested in here is not so much about lines of code. It's more about enforcing that everybody always calls these things language and mem_language and that they are always defined in the same way. So I'm looking for what in OOP is an interface with default methods, or a trait.
We can do something else if this leads us to doing stupid things, but for now it still seems fine.
Btw, if you have code where I can see the problem with using mp/mpr directions of OmegaAcceptor, please feel free to point me to it. It might happen because of doing a simplification too early (but I'd need to see an example to really have an opinion).
Q for everyone: Is adding @[simp] to an instance stupid?
Chris Henson (Nov 17 2025 at 11:18):
Note that I did approve cslib#152 based on my review comments being addressed, but I did not intend to bypass this discussion about (Omega)Acceptor.
I have seen some instances marked as simp in Mathlib, but I'm not sure of best practices around that.
Fabrizio Montesi (Nov 17 2025 at 12:23):
No problem, I'm gonna merge cslib#152, I think it's very nice otherwise. I'll then make a PR of my own with the commit above.
Chris Henson (Nov 17 2025 at 13:37):
Your cslib#161, especially considering it doesn't change any proofs, seems reasonable to me.
Ching-Tsun Chou (Nov 17 2025 at 22:03):
I tested cslib#161 manually at various places and it seems to work. So I've approved it.
Last updated: Dec 20 2025 at 21:32 UTC