Zulip Chat Archive

Stream: general

Topic: mathematics in lean error


Gihan Marasingha (Jul 30 2020 at 12:28):

Section 3.1 of 'Mathematics in Lean' contains the code

variables {α : Type*} (r s t : set α)

example : s ⊆ s :=
by { intros x xs, exact xs }

example : s ⊆ s := λ x xs, xs

The last line causes an error in vscode and in the community web editor:

type mismatch, term
  λ (x : ?m_1) (xs : ?m_2[x]), xs
has type
  Π (x : ?m_1), ?m_2[x] → ?m_2[x]
but is expected to have type
  s ⊆ s

I don't understand why there's an error message.

@Jeremy Avigad

Jalex Stark (Jul 30 2020 at 12:33):

notice that the following works

import data.set.basic
variables {α : Type*} (s : set α)
example : s ⊆ s := by exact λ x xs, xs

(In case it's not clear, the difference here is that exact will use something like refl to unify the goal and the target)

Jalex Stark (Jul 30 2020 at 12:34):

this doesn't work

import data.set.basic
variables {α : Type*} (s : set α)
local attribute [reducible] set.subset
example : s ⊆ s := λ x xs, xs

Jalex Stark (Jul 30 2020 at 12:39):

and this does work

import data.set.basic
variables {α : Type*} (s : set α)

example : s ⊆ s := set.subset_def.mpr $ λ x xs, xs

Kenny Lau (Jul 30 2020 at 12:46):

variables {α : Type*} (s : set α)
example : s ⊆ s := λ x, id

Jeremy Avigad (Jul 30 2020 at 13:13):

This is a quirk of Lean's parser: example is parsed like def rather than theorem, which means that the elaborator doesn't use the statement of the theorem to elaborate the proof term. That's why exact works where the proof term doesn't. It will also work if you change example to theorem foo.

I don't remember why Leo implemented example that way, except maybe that he also wanted it to work for definitions, like example := 3. I also don't know why this example made it past the test suite, but we'll fix it. @Gihan Marasingha, thanks for the correction.
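
For concreteness, a minimal sketch of the theorem workaround (Lean 3, same setup as above; the name foo is arbitrary):

import data.set.basic
variables {α : Type*} (s : set α)

-- `theorem` elaborates the proof term against the stated type,
-- so the bare lambda is accepted here:
theorem foo : s ⊆ s := λ x xs, xs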

Gihan Marasingha (Jul 30 2020 at 13:19):

Thanks @Jeremy Avigad . This was really confusing me.

Sebastian Ullrich (Jul 30 2020 at 13:32):

Jeremy Avigad said:

I don't remember why Leo implemented example that way, except maybe that he also wanted it to work for definitions, like example := 3.

I would guess so as well. Fortunately it won't matter in Lean 4 because a def with a specified type will behave exactly like theorem does in Lean 3 (except for the parallelism, currently).
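
For reference, a sketch of a Lean 4 analogue of the original snippet (assuming current Mathlib import names):

import Mathlib.Data.Set.Basic

variable {α : Type*} (s : Set α)

-- the bare term is elaborated against the stated type, so no `exact` is needed
example : s ⊆ s := fun x xs => xs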

Mario Carneiro (Jul 30 2020 at 14:41):

This is a surprise to me. Not that example is parsed like def but that def ignores the expected type. Why would it do that? I would have expected that it behaves the same as def f := show s ⊆ s, from λ x xs, xs

Sebastian Ullrich (Jul 30 2020 at 15:21):

...good question. I hadn't looked very closely at the actual example. I don't think it completely ignores it, but there must be some subtle, probably unintended difference.

ZHAO Jinxiang (Aug 16 2020 at 16:21):

Section (3.3. Negation) of 'Mathematics in Lean' contains the code

import data.real.basic

-- BEGIN
example :
  ¬ ∀ {f : ℝ → ℝ}, monotone f → ∀ {a b}, f a ≤ f b → a ≤ b :=
begin
  intro h,
  let f := λ x : ℝ, (0 : ℝ),
  have monof : monotone f,
  { intros x y h1, dsimp [f], reflexivity},
  have h' : f 1 ≤ f 0,
    dsimp [f],
    reflexivity,
  have h1: 1 ≤ 0, {
    -- exact h monof h',
    sorry,
  },
  linarith,
end
-- END

If I remove the sorry and use exact h monof h', it tells me:

unnamed_1180.lean:15:10
invalid type ascription, term has type
  ?m_1 ≤ ?m_2
but is expected to have type
  1 ≤ 0
state:
h : ∀ {f : ℝ → ℝ}, monotone f → ∀ {a b : ℝ}, f a ≤ f b → a ≤ b,
f : ℝ → ℝ := λ (x : ℝ), 0,
monof : monotone f,
h' : f 1 ≤ f 0
⊢ 1 ≤ 0

I don't understand why there's an error message.

@Jeremy Avigad

Mario Carneiro (Aug 16 2020 at 16:28):

The statement h1 says (1 : nat) <= 0, because there is nothing to suggest what the type should be so lean picks nat by default. Change that line to have h1: (1 : ℝ) ≤ 0,
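
Putting that fix into the full example from above gives a version that should compile (the commented-out exact was the intended step):

import data.real.basic

example :
  ¬ ∀ {f : ℝ → ℝ}, monotone f → ∀ {a b}, f a ≤ f b → a ≤ b :=
begin
  intro h,
  let f := λ x : ℝ, (0 : ℝ),
  have monof : monotone f,
  { intros x y h1, dsimp [f], reflexivity },
  have h' : f 1 ≤ f 0,
  { dsimp [f], reflexivity },
  -- the annotation forces the statement into ℝ, so `h monof h'` applies:
  have h1 : (1 : ℝ) ≤ 0,
  { exact h monof h' },
  linarith,
end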

Bryan Gin-ge Chen (Aug 16 2020 at 16:31):

That error message is incredibly unhelpful though.

ZHAO Jinxiang (Aug 16 2020 at 16:31):

@Mario Carneiro Thank you very much.

ZHAO Jinxiang (Aug 16 2020 at 16:37):

Is the 1 here a polymorphic 1, not nat and not real?

Kevin Buzzard (Aug 16 2020 at 16:42):

It's polymorphic until lean gives up on the idea that you're going to tell it which one, and then it uses nat
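
A quick way to see that default in action (Lean 3; the import is only needed for the ℝ line):

import data.real.basic

-- with no constraint anywhere, the numerals default to ℕ:
#check 1 ≤ 0          -- 1 ≤ 0 : Prop, with both sides in ℕ
-- a single annotation pins the whole statement to ℝ:
#check (1 : ℝ) ≤ 0    -- 1 ≤ 0 : Prop, with both sides in ℝ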

ZHAO Jinxiang (Aug 16 2020 at 16:45):

So the ?m_1 here is (1:nat)

Mario Carneiro (Aug 16 2020 at 16:46):

The 1 at the bottom is 1:nat, the ?m_1 at the top is ?m_1:real and that's why there is a type unification error

Mario Carneiro (Aug 16 2020 at 16:46):

and yes, the message is incredibly unhelpful, I only know the problem because I've seen it many times

ZHAO Jinxiang (Aug 17 2020 at 02:04):

I have another question about this:

import data.real.basic

-- BEGIN
example :
  ¬ ∀ {f : ℝ → ℝ}, monotone f → ∀ {a b}, f a ≤ f b → a ≤ b :=
begin
  intro h,
  let f := λ x : ℝ, (0 : ℝ),
  have monof : monotone f,
  { intros x y h1, dsimp [f], reflexivity},
  have h' : f 1 ≤ f 0,
    dsimp [f],
    reflexivity,
  -- have h1: (1:ℝ) ≤ 0,
  --   exact h monof h',
  have h'':monotone f → ∀ {a b}, f a ≤ f b → a ≤ b,
    exact h,
  have h''': ∀ {a b : ℝ}, f a ≤ f b → a ≤ b,
    exact h'' monof,
  -- linarith,
end
-- END

Error message:

invalid type ascription, term has type
  f ?m_1 ≤ f ?m_2 → ?m_1 ≤ ?m_2
but is expected to have type
  ∀ {a b : ℝ}, f a ≤ f b → a ≤ b

Floris van Doorn (Aug 17 2020 at 02:22):

replace the line where you declare h'' by

  have h'' : monotone f → ∀ a b, f a ≤ f b → a ≤ b,

Floris van Doorn (Aug 17 2020 at 02:29):

The curly braces mean these are implicit arguments, so h'' monof is interpreted as @h'' monof _ _.
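
Concretely, a sketch against the example above; only the h'' line changes, and the exact calls stay the same since the binder kinds don't block unification of the stated types:

  have h'' : monotone f → ∀ a b, f a ≤ f b → a ≤ b,
    -- `exact` still closes this; the binders of `h` just become explicit here
    exact h,
  have h''' : ∀ {a b : ℝ}, f a ≤ f b → a ≤ b,
    -- `h'' monof` is no longer over-applied with metavariables
    exact h'' monof,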

Floris van Doorn (Aug 17 2020 at 02:29):

You can also use strict implicit arguments, using {{a b}} or ⦃a b⦄.

Kenny Lau (Aug 17 2020 at 05:53):

we really need {{}} to be the default behaviour

Johan Commelin (Aug 17 2020 at 06:00):

Can we swap the meaning of {{ }} and { } in C++?

Johan Commelin (Aug 17 2020 at 06:00):

That would save us a huge refactor

Kenny Lau (Aug 17 2020 at 06:04):

yeah that's what I mean

Mario Carneiro (Aug 17 2020 at 06:18):

I don't know about this. I think there is a significant fraction of { } that should stay { }, compared to a small fraction where {{ }} is noticeably better or more appropriate

Johan Commelin (Aug 17 2020 at 06:42):

@Mario Carneiro I'm slow today. Can you give an example where {{ }} is clearly the wrong choice?

Mario Carneiro (Aug 17 2020 at 06:45):

Something like ∀ {{x y}}, f x y = 0

Mario Carneiro (Aug 17 2020 at 06:46):

really anything where there are no explicit arguments after the semi implicit args

Mario Carneiro (Aug 17 2020 at 06:46):

because then you can't apply the semi implicit args even if you want to

Mario Carneiro (Aug 17 2020 at 06:46):

you would have to write @foo _ _
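
A small made-up illustration (Lean 3; the names f and foo are hypothetical):

-- `foo` has only strict-implicit arguments and nothing explicit after them,
-- so no application can trigger their insertion; specializing it means
-- going through `@`:
example (f : ℕ → ℕ → ℕ) (foo : ∀ {{x y : ℕ}}, f x y = 0) : f 1 2 = 0 :=
@foo 1 2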

Johan Commelin (Aug 17 2020 at 06:47):

Fair enough. Personally, I wouldn't mind having these explicit.

Johan Commelin (Aug 17 2020 at 06:48):

For quite some time, I've been thinking that we're marking too many things implicit.

Mario Carneiro (Aug 17 2020 at 06:48):

For a local hypothesis that makes sense, but it is not uncommon for type arguments to be implicit like this

Mario Carneiro (Aug 17 2020 at 06:50):

my ballpark estimate is that today with { } default we use { } 99% of the time and prefer {{ }} about 1% of the time, while if {{ }} was the default then we would use it 90% of the time and would find reasons to use { } 10% of the time

Johan Commelin (Aug 17 2020 at 06:52):

I'm also talking about things like

lemma submodule.neg_mem (hx : x ∈ p) : -x ∈ p := -- x is implicit

but I realise that this is somewhat off topic in this thread.

These statements are annoying when you don't have hx yet, and -x ∈ p is also not exactly the expected type.

ZHAO Jinxiang (Aug 17 2020 at 12:46):

import data.real.basic

def fn_ub (f : ℝ → ℝ) (a : ℝ) : Prop := ∀ x, f x ≤ a
def fn_has_ub (f : ℝ → ℝ) := ∃ a, fn_ub f a

open_locale classical

variable (f : ℝ → ℝ)

-- BEGIN
example (h : ¬ ∀ a, ∃ x, f x > a) : fn_has_ub f :=
begin
  push_neg at h,
  exact h
end

example (h : ¬ fn_has_ub f) : ∀ a, ∃ x, f x > a :=
begin
  simp only [fn_has_ub, fn_ub] at h,
  push_neg at h,
  exact h
end
-- END

If I change simp only [fn_has_ub, fn_ub] at h to simp only [fn_has_ub] at h, simp only [fn_ub] at h, it tells me:

unnamed_1338.lean:19:29
simplify tactic failed to simplify
state:
f : ℝ → ℝ,
h : ¬Exists (fn_ub f)
⊢ ∃ (a : ℝ), ∀ (x : ℝ), f x > a

Kevin Buzzard (Aug 17 2020 at 12:56):

If this is not an error in the mathematics in Lean book, can I request that we move this to a new thread?

Jeremy Avigad (Aug 17 2020 at 15:44):

I am doing my late-morning Zulip catch-up, so I don't know if this moved to a new thread. A long time ago I suggested exactly this change to Leo, and he tried it and said that lots of things broke with the change. IIRC he came to the conclusion that neither was clearly the more natural default. This is not an argument against trying it again and seeing what happens. It's just a (vaguely remembered) data point.

Floris van Doorn (Aug 17 2020 at 15:50):

I think normal implicit is a better default, but there are plenty of cases where strict implicit is desirable. If you like to write inv_inj.mpr or, more generally, any_proof_of_an_iff_without_explicit_arguments.mp, then you have a hard time with strict implicit arguments.

DBE (Nov 07 2023 at 16:58):

Beginner questions. I'm following the guide at this link:
https://leanprover-community.github.io/mathematics_in_lean/C05_Elementary_Number_Theory.html
using Lean "4.2.0-rc4".
1. At the end of section 5.1 there is a mention of Nat.count_factors_mul_of_pos, which I can't #check and which is not used at all in the following example:

example {m n k r : ℕ} (nnz : n ≠ 0) (pow_eq : m ^ k = r * n ^ k) {p : ℕ} (prime_p : p.Prime) :
    k ∣ r.factorization p :=

2. In this example, the prime_p hypothesis is not used in the proof (as provided in the solutions).
3. I don't understand why I can't execute

#eval factorization 20 5

The InfoView says: "failed to compile definition, consider marking it as 'noncomputable' because it depends on 'factorization', and it does not have executable code". I suspect there is a good reason for this behavior (or is it a bug?), but I think the guide could offer some advice to clarify it.

Ruben Van de Velde (Nov 07 2023 at 17:28):

  1. In Lean, there are definitions that you can compute in the sense of #eval, because machine code is generated for them, and other definitions that you can't. factorization is one of the latter
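
For what it's worth, a possible workaround (Lean 4; this assumes your Mathlib version still has Nat.factors, the list of prime factors with multiplicity, which was renamed later):

import Mathlib.Data.Nat.Factorization.Basic

-- Nat.factorization produces a finitely supported function with no compiled
-- code, so #eval rejects it; counting in the factor list gives the same number:
#eval Nat.factors 20             -- expected: [2, 2, 5]
#eval (Nat.factors 20).count 5   -- expected: 1, the exponent of 5 in 20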

Patrick Massot (Nov 07 2023 at 17:55):

DBE, could you open an issue at https://github.com/avigad/mathematics_in_lean_source about the first two points?

DBE (Nov 07 2023 at 18:07):

Done: https://github.com/avigad/mathematics_in_lean_source/issues/151


Last updated: Dec 20 2023 at 11:08 UTC