Zulip Chat Archive

Stream: iris-lean

Topic: Issues with typeclasses in the proof mode


Michael Sammler (Dec 10 2025 at 15:32):

Hi, I've been looking into the proof mode and in particular how the proof mode uses typeclasses. As part of this, I noticed some differences between how typeclasses work in Rocq and Lean and some issues caused by this. Does someone with more knowledge about typeclasses in Lean have thoughts on the points below and how one can resolve them?

Issue 1: Typeclasses in Lean cannot instantiate mvars

The first issue is that typeclasses in Lean do not instantiate mvars that were created before the search was started, while in Rocq the typeclass search can instantiate such mvars. In general, I think that Lean has made the right choice here; however, this causes problems when implementing the proof mode via typeclasses as in Rocq. In particular, it is very common in Iris proofs to first create a bunch of mvars and then instantiate them via proof mode tactics.

The following example illustrates the problem (the equivalent Rocq code works):

/--
error: failed to synthesize
  ProofMode.FromAssumption false (Q 1) (Q ?w)

Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
---
trace: [Meta.synthInstance] 💥️ ProofMode.FromAssumption false (Q 1) (Q ?w)
  [Meta.synthInstance] new goal ProofMode.FromAssumption false (Q 1) (Q ?w)
    [Meta.synthInstance.instances] #[@ProofMode.fromAssumption_exact]
  [Meta.synthInstance] 💥️ apply @ProofMode.fromAssumption_exact to ProofMode.FromAssumption false (Q 1) (Q ?w)
    [Meta.synthInstance.tryResolve] 💥️ ProofMode.FromAssumption false (Q 1)
          (Q ?w) ≟ ProofMode.FromAssumption ?m.25 ?m.27 ?m.27
-/
#guard_msgs in
theorem exact_lean_test [BI PROP] (Q : Nat → PROP) : ∃ x, Q 1 ⊢ Q x := by
  apply Exists.intro
  iintro HQ
  set_option trace.Meta.synthInstance true in
  iexact HQ

Question 1: Is there a way to instantiate mvars using typeclasses in Lean?

Issue 2: Typeclasses in Lean cannot create new mvars

The dual of the previous issue: Typeclasses in Rocq can create new mvars and the proof mode uses this, for example for the IntoWand instance for ∀ quantifiers (notice the x). Iris Lean has the corresponding instance here, but this instance cannot be used to create new goals with mvars in them since Lean rejects typeclass search solutions that introduce mvars with the "skip answer containing metavariables" error. However, this instance is quite important for the proof mode to automatically handle magic wands that start with a universal quantifier without forcing the user to create an mvar for the quantifier explicitly.
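
To make the shape concrete, here is a rough pure-Lean analogue (the Step class and its instance are made up for illustration; this is not the actual IntoWand instance):

class Step (A B : Prop) where
  run : A → B

-- the witness x plays the role of the quantifier instantiation in IntoWand
instance step_forall {P : Nat → Prop} {x : Nat} : Step (∀ y, P y) (P x) :=
  ⟨fun h => h x⟩

-- fine: B is known, so unification determines x
example (P : Nat → Prop) (h : ∀ y, P y) : P 1 := Step.run h

If nothing determines x, the search would have to return an answer that still contains a metavariable for it, which is exactly the kind of answer Lean skips.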

Question 2: Is there a way to create mvars using typeclasses in Lean?

Issue 3: Input / Output handling of typeclasses is more rigid in Lean than in Rocq

Lean allows only one set of input / output annotations per typeclass, whereas in Rocq a typeclass can be used with different sets of inputs and outputs, and the Iris proof mode makes use of this feature. For example, for FromAssumption the argument P is sometimes an input and sometimes an output. (Shown a bit by the Hint Mode here.) When translated to Lean, this can cause infinite loops in the typeclass search. In particular, fromAssumption_intuitionistically_l can create such a loop since it keeps adding a persistent modality in front of the output P. The Rocq version resolves this by introducing the KnownLFromAssumption typeclass with a Hint Mode that prevents fromAssumption_intuitionistically_l from applying when P is an output.
Also for IntoWand, the argument P can be an input (when an explicit hypothesis is given to iSpecialize / iApply) or an output (when no explicit hypothesis is given to iSpecialize / iApply).
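
To see how such a loop can arise, here is a toy Lean illustration (the FA class and the ¬¬ wrapper are made up; the real instances involve the persistence modality instead):

class FA (P Q : Prop) where
  fa : P → Q

instance fa_refl {P : Prop} : FA P P := ⟨id⟩

-- same shape as fromAssumption_intuitionistically_l: peel one wrapper off P
instance fa_not_not {P Q : Prop} [inst : FA P Q] : FA (¬¬P) Q :=
  ⟨fun h => inst.fa (Classical.byContradiction fun hn => h hn)⟩

-- fine when P is an input: the search peels the ¬¬ and terminates
example (P : Prop) (h : ¬¬P) : P := FA.fa h

But for a query where P is treated as an output, i.e. something like FA ?P Q with ?P an open mvar, fa_not_not can keep instantiating ?P with ¬¬?P' and recursing on ?P' forever; the Rocq side rules this out via the mode on KnownLFromAssumption.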

Question 3: What is the best way to implement such typeclasses with multiple different input / output specifications in Lean?

Overall, given all these questions, I am wondering if the type class system in Lean is the best match for implementing the proof mode of Iris or if one should use some other metaprogramming feature in Lean instead. Any thoughts on this?

Markus de Medeiros (Dec 10 2025 at 23:53):

I don't have the technical knowledge to answer your questions (I'm not even sure who does actually) but re. your overall point I get the impression that the typeclass search in Iris-Rocq is only really used as a well-supported and extensible way to do a backtracking search. You're more experienced with this, do you feel the same?

If so, then yeah I think we could try rolling our own to avoid some of these problems.

Michael Sammler (Dec 11 2025 at 07:42):

For the questions, maybe @Mario Carneiro or @Henrik Böving might be able to say something?

Markus de Medeiros said:

I get the impression that the typeclass search in Iris-Rocq is only really used as a well-supported and extensible way to do a backtracking search. You're more experienced with this, do you feel the same?

My feeling is that typeclasses are basically the only extensible proof search mechanism in Rocq, so all kinds of proof search get shoehorned into typeclasses, and this is why the proof mode uses them. That said, if you can use the existing type class mechanism it is quite nice since one can reuse quite a bit of infrastructure. In Lean, we have the advantage that we can implement our own extensible mechanism if we want to, but this is quite a bit of work. So before doing this, I would like to make sure that this is indeed the best option and there is no way to use existing mechanisms (like type classes) to do what we want.

Henrik Böving (Dec 11 2025 at 08:26):

I think Issue 1 is not correct. Lean can (and has to) instantiate mvars based on type class search sometimes. Though it might of course be that it doesn't work for the specific mvars that you are interested in.

Issue 2 is correct I think.

For issue 3 you can try to split it up into multiple type classes to simulate the different fundeps that you are interested in.

More generally speaking though, if you are interested in an extensible proof search mechanism I think doing aesop annotations could be just fine?

Michael Sammler (Dec 11 2025 at 09:55):

Henrik Böving said:

I think Issue 1 is not correct. Lean can (and has to) instantiate mvars based on type class search sometimes. Though it might of course be that it doesn't work for the specific mvars that you are interested in.

Could you clarify in which cases Lean's type class search can instantiate mvars? My understanding is that tc search increases the mvar depth, so no mvar that was created before the tc search can be instantiated by the search. Is this correct? Is there a way to prevent this increase of the depth?

Here is a pure Lean example that shows the issue. I would be interested if there is a way to make this example work:

class Impl (A B : Prop) where
  impl_correct : A → B

instance {A} : Impl A A := by grind

example (P : Nat → Prop) : P 1 → ∃ x, P x := by
  intro HP
  apply Exists.intro
  set_option trace.Meta.synthInstance true in
  apply (Impl.impl_correct HP) -- error here since the instance does not apply

Henrik Böving said:

For issue 3 you can try to split it up into multiple type classes to simulate the different fundeps that you are interested in.

This could work, but it would probably require duplicating some instances for the multiple type classes. I will need to look into this.
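
To sketch what this might look like (all names here are hypothetical): one class per mode, with the loop-prone instances only on the input side, and a bridging instance to avoid restating the shared ones:

-- P is an input here: instances may inspect it
class FromAssumIn (P Q : Prop) where
  fa : P → Q

-- P is an output here: instances must determine it
class FromAssumOut (P : outParam Prop) (Q : Prop) where
  fa : P → Q

-- re-export output-mode instances into the input mode; anything not covered
-- by this bridge has to be stated twice, which is the duplication I mean
instance {P Q : Prop} [FromAssumOut P Q] : FromAssumIn P Q :=
  ⟨FromAssumOut.fa⟩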

Henrik Böving said:

More generally speaking though, if you are interested in an extensible proof search mechanism I think doing aesop annotations could be just fine?

Thanks for the pointer! I've not looked into aesop yet, but will do so.

Zongyuan Liu (Dec 11 2025 at 10:05):

Michael Sammler said:

Here is a pure Lean example that shows the issue. I would be interested if there is a way to make this example work:

class Impl (A B : Prop) where
  impl_correct : A → B

instance {A} : Impl A A := by grind

example (P : Nat → Prop) : P 1 → ∃ x, P x := by
  intro HP
  apply Exists.intro
  set_option trace.Meta.synthInstance true in
  apply (Impl.impl_correct HP) -- error here since the instance does not apply

In this specific example, can you make B an output parameter?

Henrik Böving (Dec 11 2025 at 10:22):

Could you clarify in which cases Lean's type class search can instantiate mvars?

It can instantiate mvars for outParam parameters. In fact the way in which they are implemented is that, even if they are already determined at the call site, the argument to an outParam gets forcibly swapped with a metavariable that TC search is expected to fill. If you can't do outParams it becomes more difficult though yes.

Michael Sammler (Dec 11 2025 at 11:47):

Thanks for pointing this out! The example indeed works if B is an outParam (but not for a semiOutParam). I will have to see if I can use this to address the problem.
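
For reference, the variant that goes through is roughly the earlier example with B marked as an outParam:

class Impl (A : Prop) (B : outParam Prop) where
  impl_correct : A → B

instance {A} : Impl A A := by grind

example (P : Nat → Prop) : P 1 → ∃ x, P x := by
  intro HP
  apply Exists.intro
  -- B is swapped for a metavariable, the search fills it with P 1,
  -- and apply then unifies the goal P ?w with it
  apply (Impl.impl_correct HP)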

Markus de Medeiros (Dec 11 2025 at 12:22):

Just one other thing I will throw in which I've already told Michael about but I want to put in text.

In Lean it does not seem possible or idiomatic to mix typeclass backtracking search and tactic-level backtracking search (see #general > Search for typeclass with Qq @ 💬). Iris-Rocq tactics frequently use the tc_solve tactic from stdpp, which searches over the set of all possible typeclass solutions and is consequently used to find a typeclass instance such that a higher-level Iris tactic (like apply) succeeds.

If it's possible to move all of the typeclass synthesis upfront, or if there is only ever one solution to the typeclass search problem, then this is no problem and the same design as in Rocq works in Lean. For the simple tactics this is the case. But if we really want (or need) to preserve the possibility that the typeclass search returns a solution that we want to reject for other reasons, then this might be another reason to write our own search.

Mario Carneiro (Dec 11 2025 at 14:57):

For that, you don't necessarily need to write your own search; you just need to call a more low-level entrypoint to TC search

Michael Sammler (Dec 11 2025 at 15:02):

This sounds interesting. Could you elaborate on what you mean by "that" and which low-level entrypoint to TC search you have in mind?
In fact, I looked for more low-level entry points to TC search that might give more control, but only found the functions here, and I am not sure which of them I could call other than the top-level synthInstance family of functions.

Mario Carneiro (Dec 11 2025 at 15:07):

We might have to write it ourselves, but the capability fundamentally exists

Mario Carneiro (Dec 11 2025 at 15:08):

it would probably be best to get it upstreamed though

Michael Sammler (Dec 11 2025 at 15:12):

I see. Just to clarify, which of the issues do you think one could solve by tweaking type class search (with a more low-level entry point)? The issue of calling tactics that Markus mentioned, (one of) the 3 issues in the original post, or all of them?

Mario Carneiro (Dec 11 2025 at 15:13):

Markus's issue, not your 3

Mario Carneiro (Dec 11 2025 at 15:14):

I think it may be possible to solve 1 and/or 2 by setting a flag which allows setting metavariables outside the usual stack discipline

Mario Carneiro (Dec 11 2025 at 15:15):

I think issue 3 is under control already, I don't see evidence it's a major hindrance

Michael Sammler (Dec 12 2025 at 08:13):

Michael Sammler said:

Henrik Böving said:

More generally speaking though, if you are interested in an extensible proof search mechanism I think doing aesop annotations could be just fine?

Thanks for the pointer! I've not looked into aesop yet, but will do so.

I've looked into aesop and it seems like it would indeed be possible to use it for the proof automation. It allows instantiating mvars (issue 1) and can also create new mvars (issue 2). The proof search can also be customized with custom tactics.

My main concern about aesop is performance since it does a lot of things that we do not need / want. E.g. it calls simp at every step of the proof search. I've not looked into this, but maybe with the right flags one can get a stripped down version of aesop that is similar to what tc search does, but more customizable.


Last updated: Dec 20 2025 at 21:32 UTC