Zulip Chat Archive

Stream: new members

Topic: MetaEval failed to be synthesized for elaboration result


Adomas Baliuka (Aug 17 2024 at 03:10):

Looking at the metaprogramming book, ch.3, the following works:

import Lean

open Lean Nat

def nat : Expr := .const ``Nat []

def addOne : Expr :=
  .lam `x nat
    (mkAppN (.const ``Nat.add []) #[.bvar 0, mkNatLit 1])
    BinderInfo.default

def mapAddOneNil : Expr :=
  mkAppN (.const ``List.map [levelOne, levelOne])
    #[nat, nat, addOne, .app (.const ``List.nil [levelOne]) nat]

elab "mapAddOneNil" : term => return mapAddOneNil

#check mapAddOneNil
-- List.map (fun x => Nat.add x 1) [] : List Nat

set_option pp.universes true in
set_option pp.explicit true in
#check mapAddOneNil
-- @List.map.{1, 1} Nat Nat (fun x => Nat.add x 1) (@List.nil.{1} Nat) : List.{1} Nat

#reduce mapAddOneNil
-- []

However, adding #eval mapAddOneNil gives the error

<...> has type List ℕ but instance MetaEval (List ℕ) failed to be synthesized

Even though there definitely is such an instance. Why does this happen?

Sebastian Ullrich (Aug 17 2024 at 10:46):

The example code is faulty: all the levelOnes should be levelZeros, since List takes a Type u and Nat is a Type 0
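A minimal sketch of the fix (reusing nat and addOne from the snippet above; the name mapAddOneNilFixed is just for illustration):

def mapAddOneNilFixed : Expr :=
  mkAppN (.const ``List.map [levelZero, levelZero])
    #[nat, nat, addOne, .app (.const ``List.nil [levelZero]) nat]

elab "mapAddOneNilFixed" : term => return mapAddOneNilFixed

#eval mapAddOneNilFixed
-- expected output: []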

Adomas Baliuka (Aug 17 2024 at 11:22):

Right, thanks! Made a PR on the book repo (which doesn't seem to be very actively maintained though)

In that case, why do #check and #reduce work, though? And how does one do a .const of a Sort 0 (i.e. Prop)?

Sebastian Ullrich (Aug 17 2024 at 11:38):

Adomas Baliuka said:

In that case, why do #check and #reduce work, though?

The plumbing level of Lean necessarily assumes that all input expressions are well-formed; if a metaprogram breaks that assumption, some things may break and others may not.

And how does one do a .const of a Sort 0 (e.g. Prop)?

If, unlike List, a type is defined in Sort u, then substituting u with levelZero does give you Prop. It entirely depends on the definition of the constant.
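A small illustration (assuming the import Lean / open Lean context from the first snippet; the elab name punitAsProp is made up): PUnit is declared in Sort u, so instantiating its level with levelZero yields a Prop.

elab "punitAsProp" : term => return Expr.const ``PUnit [levelZero]

#check punitAsProp
-- expected output: PUnit : Prop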

Adomas Baliuka (Aug 17 2024 at 11:57):

Last question: Is there a way to make the kernel "check everything" somehow and get a proper error message? For example

import Lean
open Nat Lean
abbrev tmp (a : Sort _)  := a

def expr_1eq1 : Expr :=
  .app (.const ``tmp [0]) $  .app (.app (.app (.const ``Eq [1]) (.const ``Nat [])) (mkNatLit 1)) (mkNatLit 1)

elab "elab_1eq1" : term => return expr_1eq1

example : elab_1eq1 := by
  unfold tmp
  rfl

works but having a wrong universe level, e.g., switching .const ``Eq [1] to .const ``Eq [0] gives an error

application type mismatch
  Eq
argument has type
  Type
but function has type
  {α : Prop} → α → α → Prop

which I guess even makes sense when you think about it, but is there a way to find out that elab_1eq1 is already malformed? The output of #check elab_1eq1 looks misleadingly correct and #eval just fails to synthesize Decidable.

Kyle Miller (Aug 17 2024 at 16:41):

You could put in an explicit type check in your elaborator:

def expr_1eq1 : Expr :=
  .app (.const ``tmp [0]) $  .app (.app (.app (.const ``Eq [0]) (.const ``Nat [])) (mkNatLit 1)) (mkNatLit 1)

elab "elab_1eq1" : term => do
  let e := expr_1eq1
  Meta.check e
  return e

example : elab_1eq1 := by
        --^ "application type mismatch ..."
  unfold tmp
  rfl

Adomas Baliuka (Aug 17 2024 at 16:43):

That's certainly nice, but I was hoping for something that doesn't require changing the elaborator (or even knowing which elaborator or macros are called while elaborating something)... I guess that's going into the direction of lean4checker?

Another question: Why does #reduce elab_1eq1 not unfold tmp?

Kyle Miller (Aug 17 2024 at 16:47):

I don't think it's going in that direction — without the Meta.check, you see the type error on example. There's a typecheck happening somewhere without need for any external tools. (I don't remember if the example command itself is doing it, or if it's when the definition is passed to the kernel).

Kyle Miller (Aug 17 2024 at 16:50):

Potentially there could be some option you could set for debugging term elaborators, to type check every result from every elaboration, but elaborators can circumvent these checks in any number of ways, since they can, for example, assign unchecked expressions to metavariables. Without intercepting every metavariable assignment too, it's only at the very end of term elaboration that you can see that there's a type error.

Adomas Baliuka (Aug 17 2024 at 16:51):

I guess I probably just want a version of #check that actually makes the kernel check the type (which I would naively have expected it to do)

Kyle Miller (Aug 17 2024 at 16:52):

Or at least have the elaborator check the type (Meta.check is the elaborator's type checker, which is aware of things like metavariables, but then there's also the kernel type checker)

Kyle Miller (Aug 17 2024 at 16:52):

There's an assumption that elaborators produce type-correct terms, throwing an error themselves if the user supplied something that can't type check.

It seems like it would make sense to have #check do a Meta.check just to be sure
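Something along these lines, as a rough sketch (the command name #checkT and the exact plumbing are illustrative only, not how the built-in #check is implemented):

import Lean
open Lean Elab Command

elab "#checkT " t:term : command => runTermElabM fun _ => do
  let e ← Term.elabTerm t none
  Term.synthesizeSyntheticMVarsNoPostponing
  let e ← instantiateMVars e
  Meta.check e  -- elaborator-level type check of the elaboration result
  logInfo m!"{e} : {← Meta.inferType e}"

#checkT elab_1eq1
-- with the faulty `Eq [0]` variant above, this should surface the application type mismatch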

Kyle Miller (Aug 17 2024 at 16:53):

#reduce doesn't reduce types by default. Here:

#reduce (types := true) elab_1eq1

Kyle Miller (Aug 17 2024 at 17:09):

(lean4#5079 adds typechecking to #check and #reduce)

Adomas Baliuka (Aug 17 2024 at 17:22):

Thanks a lot! I hope this gets merged.

This is still not a kernel check, right? How could one "call the kernel explicitly"? It doesn't seem like the metaprogramming book explains this (though I haven't read all of it carefully...). Why are there separate "kernel" and "elaborator" versions of type checking?

Kyle Miller (Aug 18 2024 at 02:27):

It's still not a kernel check. Regarding the difference between the elaborator and the kernel type checkers, the kernel type checker is only for fully elaborated terms without any metavariables, and the elaborator type checker is able to handle metavariables (and in general every expression that can appear partway through elaboration).

As far as I know, the only way to invoke the kernel type checker is to create a declaration and try to add it to the environment.

import Lean

open Lean Elab Term Meta

elab tk:"#typecheck " t:term : command => Command.runTermElabM fun vars => do
  let e ← withSynthesize <| elabTerm t none
  let e ← mkLambdaFVars vars e (usedOnly := true)
  let e ← Term.levelMVarToParam (← instantiateMVars e)
  if e.hasMVar then
    throwError "expression contains metavariables{indentD e}"
  withoutModifyingEnv do
    addDecl <| .defnDecl {
      name := `_typecheck
      levelParams := (← getLevelNames)
      type := (← inferType e)
      value := e
      hints := ReducibilityHints.abbrev
      safety := DefinitionSafety.safe
    }
  logInfoAt tk m!"{e} : {← inferType e}"

#typecheck ?_
-- expression contains metavariables

elab "bad" : term => return .app (.const ``Option [0]) (.const ``True [])
#typecheck bad
-- application type mismatch

variable (x : Nat)
#typecheck 2 + x
-- fun x => 2 + x : Nat → Nat

Adomas Baliuka (Aug 18 2024 at 12:03):

Thanks, this is great!

The docstring of inferType says that it

Returns the inferred type of the given expression, or fails if it is not type-correct.

What is the meaning of "type-correct" here? Or is the docstring wrong, since it does actually accept bad (if the addDecl step is removed)?

Sebastian Ullrich (Aug 18 2024 at 13:35):

It will certainly detect type errors that make it impossible to infer a type, but there are no guarantees that it will find all of them, no
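To make that concrete, a small sketch (assuming the import Lean / open Lean context from the examples above; the #eval wrapper is only for illustration): inferType assigns a type to the ill-formed application from bad instead of flagging the mismatch.

#eval show MetaM Unit from do
  let bad := Expr.app (.const ``Option [0]) (.const ``True [])
  let ty ← Meta.inferType bad
  logInfo m!"{ty}"
-- expected output: Type  (even though True : Prop is not a Type)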

Adomas Baliuka (Aug 18 2024 at 13:39):

So "type-correct" is a vaguely defined concept here? Meaning something like "type-correct enough, as judged by this function's internals"?

Sebastian Ullrich (Aug 18 2024 at 14:09):

Essentially yes. Detecting type errors is not its goal.

Kyle Miller (Aug 18 2024 at 18:48):

I think the documentation is not correct here; it should say it fails only if it is not type-correct (or rather something like "or fails if it detects it is not type-correct"). Type-correctness shouldn't be a vaguely defined concept in a proof checker!

lean4#5087 has a clarified docstring.


Last updated: May 02 2025 at 03:31 UTC