Zulip Chat Archive

Stream: mathlib4

Topic: porting `fprop` tactic


Tomas Skrivan (Jan 15 2024 at 21:55):

I have started porting the fprop tactic from SciLean to mathlib. Can I please get write permission for the user name lecopivo?

Right now, I do not have much to report, but once I have some minimal version working I will share it here.

Tomas Skrivan (Jan 15 2024 at 22:01):

bump @maintainers

Yury G. Kudryashov (Jan 15 2024 at 22:15):

I'll send an invitation in a minute or two

Yury G. Kudryashov (Jan 15 2024 at 22:18):

Done.

Tomas Skrivan (Jan 16 2024 at 22:24):

I have an initial version.

To set up fprop you just mark the theorems and the function property, like Continuous, with the fprop attribute.

This commit does that for Continuous and automates a few proofs in Topology/Constructions.lean.

Here is a test file I use for debugging, which does a minimal setup of fprop.

Tomas Skrivan (Jan 24 2024 at 14:39):

I have a version that is ready to be tested! As a stress test, fprop can automatically prove this lemma from the sphere eversion project:

theorem corrugation.contDiff' {n : ℕ∞} {γ : G → E → Loop F} (hγ_diff : 𝒞 n γ) {x : H → E}
    (hx : 𝒞 n x) {g : H → G} (hg : 𝒞 n g) : 𝒞 n fun h => 𝒯 N (γ <| g h) <| x h := by
  unfold corrugation
  fprop

From the trace you can see the steps taken by fprop

The main features of fprop are:

  • general tactic to prove stuff like Continuous, Differentiable, ContDiff, IsLinearMap etc.
  • fast
  • handles local hypotheses about n-ary functions, i.e. it can work with (hf : Continuous fun (x,y) => f x y)
  • handles bundled morphisms; in the example above notice that γ is actually a function of three arguments and hγ_diff says it is jointly differentiable in all three arguments

I would appreciate help from other people by:

  • pointing out theorems in mathlib that could be automated
  • testing it; it is available under the branch lecopivo/fprop
  • setting it up for other function properties like Measurable; soon I will write a guide on how to set it up, but for now you can follow my test file

Johan Commelin (Jan 24 2024 at 14:41):

That trace looks really good! So I guess show_term fprop would also show a pretty nice proof term?

Tomas Skrivan (Jan 24 2024 at 14:44):

The proof term of corrugation.contDiff' is:

code

I guess that is somewhat reasonable

Jireh Loreaux (Jan 24 2024 at 14:47):

"somewhat reasonable" is a gross understatement. That's excellent!

Tomas Skrivan (Jan 24 2024 at 21:21):

I wrote a short tutorial on how to set up fprop for Measurable. It can be found as a literate file here.

Tomas Skrivan (Jan 24 2024 at 21:21):

Here is Zulip version:

The first step in using fprop is to mark the desired function property with the fprop attribute.
In this example we work with Measurable.

attribute [fprop] Measurable

Now we can start marking theorems about Measurable with the @[fprop] attribute. It is best to start with the basic lambda calculus rules. There are five of these rules in total:

  • identity rule Measurable fun x => x
  • constant rule Measurable fun x => y
  • composition rule Measurable f → Measurable g → Measurable fun x => f (g x)
  • apply rule Measurable fun f => f x
  • pi rule ∀ i, Measurable (f · i) → Measurable fun x i => f x i

You do not have to provide them all. For example, IsLinearMap does not have the constant rule. However, to have any hope of using fprop successfully you need to provide at least the identity and composition rules.

attribute [fprop]
  measurable_id'        -- identity rule
  measurable_const      -- constant rule
  Measurable.comp'      -- composition rule
  measurable_pi_apply   -- apply rule
  measurable_pi_lambda  -- pi rule
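
For example, with just these rules marked, fprop should already be able to handle a plain composition goal like this one (a minimal sketch):

example {α β γ} [MeasurableSpace α] [MeasurableSpace β] [MeasurableSpace γ]
    (f : β → γ) (g : α → β) (hf : Measurable f) (hg : Measurable g) :
    Measurable fun x => f (g x) := by fprop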

Measurability also behaves nicely with respect to taking products. Let's mark the product constructor

attribute [fprop]
  Measurable.prod_mk -- Measurable f → Measurable g → Measurable fun x => (f x, g x)

When it comes to product projections, their properties are usually stated in two different ways:

measurable_fst : Measurable fun x => Prod.fst x

or

Measurable.fst : Measurable f → Measurable fun x => Prod.fst (f x)

The fprop tactic can work with both versions and it should be sufficient to provide just one of them. It does not hurt to provide both of them though.

attribute [fprop]
  measurable_fst
  Measurable.fst
  measurable_snd
  Measurable.snd
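
For example, after marking these, a goal combining the product constructor with a local hypothesis should go through (again a minimal sketch):

example {α β} [MeasurableSpace α] [MeasurableSpace β] (f : α → β) (hf : Measurable f) :
    Measurable fun x => (f x, x) := by fprop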

A silly example on which measurability fails and fprop succeeds. Let's turn on tracing to see what is going on

set_option trace.Meta.Tactic.fprop true in
example {α} [MeasurableSpace α] (f : α → α → α) (hf : Measurable fun (x,y) => f x y) (a : α) :
    Measurable (fun x => (f x a, f (f x x) (f (f x x) x))) := by (try measurability); fprop

To give more complicated examples we mark theorems about arithmetic operations with @[fprop].

Again we mark both versions of the theorems. Internally, fprop says that theorems like measurable_add are in "uncurried form" and theorems like Measurable.add are in "compositional form".

attribute [fprop]
  measurable_add
  measurable_mul
  measurable_neg
  measurable_div
  measurable_smul

  Measurable.add
  Measurable.mul
  Measurable.neg
  Measurable.div
  Measurable.smul
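
Concretely, the two forms look roughly like this (paraphrased, not the verbatim mathlib statements):

measurable_add : Measurable fun p : α × α => p.1 + p.2
Measurable.add : Measurable f → Measurable g → Measurable fun x => f x + g x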

An example of measurability of some arithmetic function

example : Measurable fun x : ℝ => (x * x - 1) / x + (x - x*x) := by fprop

So far we talked about two types of theorems:

  1. theorems about basic lambda calculus terms
  2. theorems about defined functions

There are two other kinds of theorems fprop can work with:

  1. transition theorems - theorems that imply e.g. measurability from continuity
  2. morphism theorems - theorems about bundled morphisms

When you mark a theorem with the @[fprop] attribute, you can check the type of the theorem by turning on the option trace.Meta.Tactic.fprop.attr.
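
For example (Measurable.sub is used here only because it is not marked elsewhere in this setup; any suitable lemma works):

set_option trace.Meta.Tactic.fprop.attr true in
attribute [fprop] Measurable.sub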

Transition theorems prove one function property from another. We already mentioned that continuity implies measurability, but there are many more. For example, differentiability implies continuity, a linear map between finite-dimensional spaces is continuous, etc.

The theorem proving measurability from continuity is Continuous.measurable so let's mark it with @[fprop]

attribute [fprop]
  Continuous.measurable -- Continuous f → Measurable f

Now we can prove one of the earlier examples assuming the function is continuous instead of measurable.

example (f : ℝ → ℝ → ℝ) (hf : Continuous fun (x,y) => f x y) (a : ℝ) :
    Measurable (fun x => (f x a, f (f x x) (f (f x x) x))) := by fprop

To keep fprop performant it is important to keep these "transition theorems" in the form P f → Q f i.e. the conclusion has to talk about a single free variable f. Furthermore, the "transition theorems" should not form a cycle.

Lastly there are "morphism theorems". These are really just theorems about the properties of DFunLike.coe and are treated somewhat specially.

Let's make continuous linear maps work with measurability. The function DFunLike.coe is a function of two arguments f : E →L[K] F and x : E. Mathlib currently states measurability of DFunLike.coe in f and x separately.

The theorem ContinuousLinearMap.measurable states measurability in x in uncurried form.
The theorem ContinuousLinearMap.measurable_comp states measurability in x in compositional form.
The theorem ContinuousLinearMap.measurable_apply states measurability in f in uncurried form.
The theorem Measurable.apply_continuousLinearMap states measurability in f in compositional form.

attribute [fprop]
  ContinuousLinearMap.measurable       -- Measurable fun (x : E) => DFunLike.coe L x
  ContinuousLinearMap.measurable_comp  -- Measurable φ → Measurable fun (x : E) => DFunLike.coe L (φ x)
  ContinuousLinearMap.measurable_apply -- Measurable fun (f : E →L[K] F) => DFunLike.coe f x
  Measurable.apply_continuousLinearMap -- Measurable φ → Measurable fun (x : α) => DFunLike.coe (L x) v

A silly example showing that everything works together as expected:

set_option trace.Meta.Tactic.fprop true in
example (f : ℝ → ℝ → (ℝ →L[ℝ] ℝ)) (hf : Continuous (fun (x,y) => f x y)) :
    Measurable fun x => (f (x / x) (x * x) 1 + x) := by fprop

In the current state of fprop, morphism theorems have to be stated in compositional form. Sometimes they might work in uncurried form but fprop is not designed that way right now.

In other cases the function property of DFunLike.coe can be stated jointly in f and x. This is the case of ContDiff n and continuous linear maps. The theorem is ContDiff.clm_apply.

#check ContDiff.clm_apply -- {f : E → F →L[K] G} → {g : E → F} →  ContDiff K n f → ContDiff K n g → ContDiff K n fun x => DFunLike.coe (f x) (g x)

If possible, fprop theorems about DFunLike.coe should be stated in this way.
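
So if you were setting up fprop for ContDiff, you would presumably mark it directly (a sketch, separate from the Measurable setup above):

attribute [fprop] ContDiff.clm_apply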

That should be all about fprop, I hope you will enjoy using it :)

Jireh Loreaux (Jan 24 2024 at 22:59):

@Tomas Skrivan what about "variants" (e.g., docs#ContinuousOn, docs#ContinuousAt, docs#ContinuousWithinAt, etc., and similarly for differentiability, measurability, ...)? Are these capable of being handled with "transition theorems", or not because they don't satisfy the "single free variable" condition?

Jireh Loreaux (Jan 24 2024 at 23:01):

I guess I'm wondering more generally:

  1. what are the limits of fprop?
  2. can these limits be circumvented (perhaps at the cost of performance)?
  3. if yes, can we turn that on optionally in order to have the best of both worlds?

Tomas Skrivan (Jan 25 2024 at 01:41):

Right now the tactic is designed to prove properties that are "obvious", like: yeah, yeah, it is smooth just because it is a composition of smooth functions.

Variants like ContinuousAt are not yet tested. I'm planning on providing a custom discharger to prove assumptions like x ≠ 0. Then I think fprop could work for them too.

I would appreciate an example of some nontrivial proofs of e.g. ContinuousAt that you feel should be easy to automate.

Another limitation is that fprop is not doing any kind of reduction (except beta, eta and sometimes iota).

David Renshaw (Jan 25 2024 at 01:46):

Here's a continuity proof that ought to be possible to write more succinctly with proper tactic support. It seems that the continuity tactic does not currently support ContinuousOn.

Tomas Skrivan (Jan 25 2024 at 01:52):

Thank you, that is an excellent example! I need more theorems like that.

Tomas Skrivan (Jan 25 2024 at 02:01):

Jireh Loreaux said:

Tomas Skrivan what about "variants" (e.g., docs#ContinuousOn, docs#ContinuousAt, docs#ContinuousWithinAt, etc., and similarly for differentiability, measurability, ...)? Are these capable of being handled with "transition theorems", or not because they don't satisfy the "single free variable" condition?

The single free variable condition is about the function argument only, not about other arguments like the set on which you want continuity. For example, you should not state that a composition of linear functions is continuous; that would break the single free variable condition, as the conclusion talks about a function composition. Stating that Continuous implies ContinuousOn is fine.
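
Roughly, and as a sketch only (double-check the exact mathlib statements):

#check @Continuous.continuousOn -- Continuous f → ContinuousOn f s; fine, the conclusion mentions only f
-- whereas something like IsLinearMap 𝕜 f → IsLinearMap 𝕜 g → Continuous (f ∘ g) would not qualify,
-- since the conclusion talks about the composite f ∘ g rather than a single free variable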

Tomas Skrivan (Jan 25 2024 at 02:16):

The issue with ContinuousOn is that I do not know if there is a tactic that would be able to prove the intermediate (h : Set.MapsTo f s t) coming from ContinuousOn.comp.

But in any case, fprop can just try to prove the continuity anyway and pass all the MapsTo goals to the user.

Jireh Loreaux (Jan 25 2024 at 02:35):

I was thinking docs#Continuous.comp maybe wouldn't be included, but docs#Continuous.comp_continuousOn would be. The canonical situation would be things involving ⁻¹ or log. For instance, it would be cool if fprop could determine that fun x => x * (Real.log x) ^ 2 - Real.exp x / x is continuous on the complement of zero. (Artificial example)

Tomas Skrivan (Jan 25 2024 at 02:44):

It should be enough to state most of the ContinuousOn theorems in the "compositional form". Then they are effectively specialized versions of Continuous.comp_continuousOn and the general composition theorem can still be ContinuousOn.comp for full power.
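
For reference, the two composition lemmas being compared (signatures paraphrased, possibly not verbatim):

#check @Continuous.comp_continuousOn -- Continuous g → ContinuousOn f s → ContinuousOn (g ∘ f) s
#check @ContinuousOn.comp            -- ContinuousOn g t → ContinuousOn f s → Set.MapsTo f s t → ContinuousOn (g ∘ f) s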

Tomas Skrivan (Jan 25 2024 at 02:50):

Tomorrow I will try to prove continuity of fun x => x * (Real.log x) ^ 2 - Real.exp x / x. It should be doable.

Jannis Limperg (Jan 25 2024 at 09:47):

Tomas Skrivan said:

Another limitation is that fprop is not doing any kind of reduction (except beta, eta and sometimes iota).

Most tactics work up to reducible defeq, so it would be really nice if fprop could support this as well.

Tomas Skrivan (Jan 25 2024 at 15:28):

Jannis Limperg said:

Most tactics work up to reducible defeq, so it would be really nice if fprop could support this as well.

Good point, I do some unfolding but I'm having a hard time getting it right, likely because I do not use whnf as it destroys let bindings. Maybe I should try again, as we now have WhnfCoreConfig.

Floris van Doorn (Jan 25 2024 at 15:45):

That measurable test file looks amazing!

Floris van Doorn (Jan 25 2024 at 15:47):

I'm also very happy to hear that it doesn't unfold semireducible (default) definitions. I am skeptical that any other behavior can be fast in "real-world" cases.

Floris van Doorn (Jan 25 2024 at 16:02):

Did you write refined discrimination trees yourself? It sounds like there are some ideas in there that would be useful for other tactics using discrimination trees. Link: https://github.com/leanprover-community/mathlib4/blob/lecopivo/fprop/Mathlib/Tactic/FProp/RefinedDiscrTree.lean

Tomas Skrivan (Jan 25 2024 at 16:09):

Floris van Doorn said:

Did you write refined discrimination trees yourself? It sounds like there are some ideas in there that would be useful for other tactics using discrimination trees. Link: https://github.com/leanprover-community/mathlib4/blob/lecopivo/fprop/Mathlib/Tactic/FProp/RefinedDiscrTree.lean

Oh no no, all the glory and fame for that goes to @Jovan Gerbscheid, there is std4 PR 394 for it. It works wonderfully for this purpose.

Tomas Skrivan (Jan 25 2024 at 19:22):

You can now provide a custom discharger and this works now:

example : ContinuousOn (fun x => x * (Real.log x) ^ 2 - Real.exp x / x) {0}ᶜ := by fprop (disch:=aesop)
example (y : ℝ) (hy : y ≠ 0) : ContinuousAt (fun x => x * (Real.log x) ^ 2 - Real.exp x / x) y := by fprop (disch:=aesop)

trace

Here is the complete set up of ContinuousAt and ContinuousOn. I had to slightly reformulate certain theorems to a form fprop expects.

code

Tomas Skrivan (Jan 25 2024 at 19:58):

I have trouble proving

example : ContinuousOn (fun (t : ℝ) ↦ 1 / (1 + (1 - t) + t * (1 - t))) (Set.Icc 0 1) := by fprop (disch:=simp)

the subgoal is

example : ∀ x ∈ Set.Icc 0 1, 1 + (1 - x) + x * (1 - x) ≠ 0 := by simp

which is provable on its own but when I run simp as part of fprop it is not doing the same stuff.

I must be doing something wrong in running a tactic in meta code. I'm using this function to turn Syntax into an Expr → MetaM (Option Expr). It is a bastardized version of a similar function I found in simp. What did I do wrong?

edit: everything is ok, the standalone example accidentally talks about natural numbers instead of reals. Over the reals it is not provable by simp.

Jovan Gerbscheid (Jan 25 2024 at 20:01):

In my most recent update, I made it so that the discrimination tree indexes id and fun x => x in the same way. I was thinking about doing something similar for composition, and more generally improving the indexing so that terms a user would think of as the same are indexed the same. Would that help with not requiring alternative forms of the theorems?

Tomas Skrivan (Jan 25 2024 at 20:14):

I guess it would help a little, but I can deal with it myself fairly easily. I'm not sure if such a capability should be part of the discrimination tree or not.

Tomas Skrivan (Jan 25 2024 at 22:53):

David Renshaw said:

Here's a continuity proof that ought to be possible to write more succinctly with proper tactic support. It seems that the continuity tactic does not currently support ContinuousOn.

Done :) just add this to the previous code that sets up ContinuousAt and ContinuousOn:

noncomputable def S (a b c d : ℝ) : ℝ :=
    a / (a + b + d) + b / (a + b + c) +
    c / (b + c + d) + d / (a + c + d)

noncomputable def T (t : ℝ) : ℝ := S 1 (1 - t) t (t * (1 - t))

example : ContinuousOn T (Set.Icc 0 1) := by
  unfold T S
  fprop (disch:=(rintro x ⟨a,b⟩; nlinarith))

Tomas Skrivan (Jan 25 2024 at 23:05):

When you do not provide a discharger the trace looks like:

trace

You can clearly see what went wrong from:

[Meta.Tactic.fprop] [] applying: ContinuousOn.div''
   [Meta.Tactic.fprop] [] ContinuousOn (fun x ↦ 1) (Set.Icc 0 1)
     ...
   [Meta.Tactic.fprop] [] ContinuousOn (fun x ↦ 1 + (1 - x) + x * (1 - x)) (Set.Icc 0 1)
     ...
   [Meta.Tactic.fprop] [] discharging: ∀ x ∈ Set.Icc 0 1, 1 + (1 - x) + x * (1 - x) ≠ 0

Tomas Skrivan (Jan 25 2024 at 23:10):

I want more theorems to prove :) Any other theorems about continuity (differentiability, ...) that should be automated? Maybe something involving bundled morphisms?

Jireh Loreaux (Jan 26 2024 at 18:13):

@Tomas Skrivan here's a real-world example from #9767. Can fprop take care of proving h₀ and h₁?

import Mathlib.Analysis.Calculus.IteratedDeriv.Lemmas

variable
  {𝕜 : Type*} [NontriviallyNormedField 𝕜]
  {F : Type*} [NormedAddCommGroup F] [NormedSpace 𝕜 F]
  {R : Type*} [Semiring R] [Module R F] [SMulCommClass 𝕜 R F] [ContinuousConstSMul R F]
  {n : ℕ} {x : 𝕜} {s : Set 𝕜} (hx : x ∈ s) (h : UniqueDiffOn 𝕜 s) {f g : 𝕜 → F}

theorem iteratedDeriv_const_smul {n : ℕ} {f : 𝕜 → F} (h : ContDiff 𝕜 n f) (c : 𝕜) :
    iteratedDeriv n (fun x => f (c * x)) = fun x => c ^ n • iteratedDeriv n f (c * x) := by
  induction n with
  | zero => simp
  | succ n ih =>
    funext x
    have h₀ : DifferentiableAt 𝕜 (iteratedDeriv n f) (c * x) :=
      h.differentiable_iteratedDeriv n (Nat.cast_lt.mpr n.lt_succ_self) |>.differentiableAt
    have h₁ : DifferentiableAt 𝕜 (fun x => iteratedDeriv n f (c * x)) x := by
      rw [← Function.comp_def]
      apply DifferentiableAt.comp
      · exact h.differentiable_iteratedDeriv n (Nat.cast_lt.mpr n.lt_succ_self) |>.differentiableAt
      · exact differentiableAt_id'.const_mul _
    rw [iteratedDeriv_succ, ih h.of_succ, deriv_const_smul _ h₁, iteratedDeriv_succ,
      ← Function.comp_def, deriv.scomp x h₀ (differentiableAt_id'.const_mul _),
      deriv_const_mul _ differentiableAt_id', deriv_id'', smul_smul, mul_one, pow_succ']

Ruben Van de Velde (Jan 26 2024 at 18:17):

Heh, I thought I recognised that lemma

Yury G. Kudryashov (Jan 26 2024 at 18:18):

I think that this lemma should be true without ContDiff assumption.

Yury G. Kudryashov (Jan 26 2024 at 18:19):

We should have a lemma about iteratedDeriv n (f ∘ e), where e is a ContinuousLinearEquiv.

Yury G. Kudryashov (Jan 26 2024 at 18:19):

Also, it should work for c : R, not only c : 𝕜 (possibly, this version requires ContDiffAt).

Ruben Van de Velde (Jan 26 2024 at 18:20):

And here I just wanted a lemma about exp

Ruben Van de Velde (Jan 26 2024 at 18:24):

Please comment on the PR (preferably with proof) if you want those generalisations before I land it :)

Jireh Loreaux (Jan 26 2024 at 18:41):

@Yury G. Kudryashov I think we may only have API currently for docs#IteratedFDeriv. I found docs#ContinuousLinearEquiv.iteratedFDerivWithin_comp_right

Tomas Skrivan (Jan 26 2024 at 19:54):

Jireh Loreaux said:

Tomas Skrivan here's a real-world example from #9767. Can fprop take care of proving h₀ and h₁?

This is a very nice example! I had to modify fprop as it had a hard time inferring m when using the theorem ContDiff.differentiable_iteratedDeriv.

Also, I do not know which tactic can prove ∀ {n : ℕ}, (n : ℕ∞) < (n : ℕ∞) + 1 automatically, so I just stated it as theorem lt_succ_self' and used the discharger exact lt_succ_self'.

Here is the complete setup:

code

Yury G. Kudryashov (Jan 26 2024 at 19:55):

I meant that we should add this if we don't have it yet.

Tomas Skrivan (Jan 26 2024 at 20:04):

This also works

example (hf : ContDiff K (n+1) f) (hf : ContDiff K (n+1) g) (c x : K) :
    DifferentiableAt K (iteratedDeriv n (f ∘ g)) (c * x) := by fprop (disch:=(exact lt_succ_self'))

I'm not sure how to approach this when the level of differentiability is not the same; these currently fail:

example (hf : ContDiff K (n+2) f) (hf : ContDiff K (n+1) g) (c x : K) :
    DifferentiableAt K (iteratedDeriv n (f ∘ g)) (c * x) := by fprop (disch:=(exact lt_succ_self'))

example (hf : ContDiff K (n+1) f) (hf : ContDiff K (n+2) g) (c x : K) :
    DifferentiableAt K (iteratedDeriv n (f ∘ g)) (c * x) := by fprop (disch:=(exact lt_succ_self'))

Michael Stoll (Jan 26 2024 at 20:09):

You have two assumptions named hf. Apparently, fprop can access the shadowed one.

Tomas Skrivan (Jan 26 2024 at 20:11):

Ohh right, I missed that. A tactic using shadowed variables is not a problem, no?

Tomas Skrivan (Jan 26 2024 at 20:11):

When writing a tactic you really do not care about the names of variables.

Yury G. Kudryashov (Jan 26 2024 at 20:13):

It becomes a bit of a problem if you add fprop? version which prints the generated proof.

Tomas Skrivan (Jan 26 2024 at 20:15):

I see, fprop? can easily check whether the proof only uses free variables with valid names and warn the user about it.

Tomas Skrivan (Jan 26 2024 at 20:40):

I think for the purpose of automation the theorem ContDiff.differentiable_iteratedDeriv should be stated as

theorem ContDiff.differentiable_iteratedDeriv' {f : K → F} {n : ℕ} (h : ContDiff K (n+1) f) :
    Differentiable K (iteratedDeriv n f) := ContDiff.differentiable_iteratedDeriv n h (Nat.cast_lt.mpr n.lt_succ_self)

and then I have to think about how to properly apply "transition theorem"

theorem ContDiff.of_lt {n m} {f : K → F} (h : ContDiff K m f) (h' : n < m) : ContDiff K n f := ...

Tomas Skrivan (Jan 29 2024 at 23:47):

I have created PR #10040; I would appreciate feedback/review.

Kim Morrison (Jan 30 2024 at 01:48):

@Tomas Skrivan, I left a few fairly superficial comments.

This is a great contribution, and I think we should merge soon. I'm pretty happy if we plan to shake out any problems as they arise. The test files are really great, and despite not planning on reading the meta code in detail, I'm happy to merge within a week or so if no other reviews eventuate!

Tomas Skrivan (Jan 30 2024 at 01:54):

I forgot to mention that RefinedDiscrTree and StateList are not really part of this PR. There is std4#394 for it by @Jovan Gerbscheid. fprop might be a good motivation for having another discrimination tree, as that was a concern of that PR.

Kim Morrison (Jan 30 2024 at 02:00):

I think the consensus though is that the new DiscrTree variants (which are potentially really great!) should incubate first in Mathlib. So std4#394 is unlikely to move fast.

The only question is whether you want to split it into a separate Mathlib PR, or just deal with it in this PR. (My comment above about wanting to merge applies in either case.)

Tomas Skrivan (Jan 30 2024 at 02:08):

I'm ok having it in one PR, and it is probably easier for me. However, it is @Jovan Gerbscheid's code so it is probably his call if he wants to move the std PR over to mathlib.

Jovan Gerbscheid (Jan 30 2024 at 02:14):

I'm fine with it either way.

Tomas Skrivan (Jan 30 2024 at 02:16):

The easiest thing for me is just keep it in the Mathlib.Tactic.FProp namespace. We can always move it later on.

Tomas Skrivan (Jan 30 2024 at 02:37):

The tactic name fprop might not be the best, fun_prop or funprop might be more descriptive. Should I rename it? I'm open to any other name suggestions too.

Tomas Skrivan (Jan 30 2024 at 02:37):

/poll Name of the tactic
fprop
funprop
fun_prop

Eric Wieser (Jan 30 2024 at 07:01):

fun_prop seems inconsistent with funext (the tactic)

Mario Carneiro (Jan 30 2024 at 07:23):

I don't really like the prop suffix, it's a bit ambiguous (proposition or property?) and also vague about what it does

Eric Wieser (Jan 30 2024 at 07:24):

fun_rules?

Mario Carneiro (Jan 30 2024 at 07:25):

How does this differ from apply_rules? Is it just strictly better?

Mario Carneiro (Jan 30 2024 at 07:26):

I was also thinking about naming along the lines "higher order apply_rules"

Mario Carneiro (Jan 30 2024 at 07:28):

Is this a tactic users are meant to use directly, or is it meant to be wrapped? That is, will it replace measurability, or will measurability be reimplemented as a macro tactic for it?

Tomas Skrivan (Jan 30 2024 at 07:33):

It is meant to be used directly.

Tomas Skrivan (Jan 30 2024 at 07:42):

I don't have much experience with apply_rules but fprop is way more specialized. In some sense, it can work only with one-argument functions and has specialized unification for them. Multi-argument functions are handled via currying.

Tomas Skrivan (Jan 30 2024 at 07:57):

If I'm not mistaken, apply_rules uses Lean's unification. fprop has custom unification to apply the composition rule.

Tomas Skrivan (Jan 30 2024 at 08:03):

Mario Carneiro said:

I don't really like the prop suffix, it's a bit ambiguous (proposition or property?) and also vague about what it does

It can stand for either. It proves propositions about functions or it proves function properties.

Yaël Dillies (Jan 30 2024 at 08:41):

I don't understand the funprop name at all

Kevin Buzzard (Jan 30 2024 at 09:13):

Whereas you completely understand the name omega

Ruben Van de Velde (Jan 30 2024 at 09:34):

So we just pick a Greek letter that isn't in use yet?

Henrik Böving (Jan 30 2024 at 09:37):

Ruben Van de Velde said:

So we just pick a Greek letter that isn't in use yet?

In that case I propose lambda

Henrik Böving (Jan 30 2024 at 09:37):

Note that omega is named the way it is because the decision procedure it uses is named that way (for reasons unknown to me) not because the author applied this heuristic.

Tomas Skrivan (Jan 30 2024 at 21:03):

Nice bike shedding, I will change it to fun_prop :) (secretly I want to call it duh because all the proofs are "duh, obviously")

I added minimal setup files for Continuous, Differentiable, ContDiff. I'm expecting that at some point theorems will be marked directly, but while I'm refining the tactic it is better to have it in a separate file.

What other function properties should I set up? Measurable, IsLinearMap, something else?

Eric Wieser (Jan 30 2024 at 21:12):

I think before going too far with IsLinearMap, we should consider changing LinearMap (and every other morphism) to take such a proof as a field

Eric Wieser (Jan 30 2024 at 21:12):

Maybe AEMeasurable and variants too?

Tomas Skrivan (Jan 30 2024 at 21:17):

Eric Wieser said:

I think before going too far with IsLinearMap, we should consider changing Linear map (and every other morphism) to take such a proof as a field

Are you talking about #2202?

Eric Wieser (Jan 30 2024 at 21:18):

#2202 is a superset of what I'm describing

Yaël Dillies (Jan 30 2024 at 22:08):

Tomas Skrivan said:

secretly I want to call it duh because all the proofs are "duh, obviously"

Maybe you should reclaim the name obviously, then!

Yury G. Kudryashov (Jan 30 2024 at 23:07):

Too many of our tactics can be named obviously...

Yury G. Kudryashov (Jan 30 2024 at 23:08):

Eric Wieser said:

#2202 is a superset of what I'm describing

Current discussion of #2202 (the "set" part, not the "function" part) happens in https://leanprover.zulipchat.com/#narrow/stream/287929-mathlib4/topic/RFC.3A.20bundled.20sets.20.236442

Mario Carneiro (Jan 30 2024 at 23:09):

maybe we need to follow the isabelle convention of tactics with violent-sounding names

Yury G. Kudryashov (Jan 30 2024 at 23:10):

Like what, for people who never used Isabelle?

Mario Carneiro (Jan 30 2024 at 23:10):

I'm thinking things like crush, blast, demolish etc

Yury G. Kudryashov (Jan 30 2024 at 23:11):

Then we should have an editor plugin that tries all these generically named tactics and suggests the right one.

Mario Carneiro (Jan 30 2024 at 23:12):

In this case I think we can do better than that though, this is not a kitchen sink tactic

Kim Morrison (Jan 30 2024 at 23:13):

fprop is not a terrible candidate for hint

Kim Morrison (Jan 30 2024 at 23:13):

(Public service announcement for those who haven't been using it: please do report your experiences with hint!)

Tomas Skrivan (Jan 31 2024 at 03:37):

Scott Morrison said:

fprop is not a terrible candidate for hint

That might be a good idea as it should fail immediately on non applicable goals.

Notification Bot (Feb 01 2024 at 02:56):

A message was moved from this topic to #mathlib4 > feedback on hint by Scott Morrison.

Tomas Skrivan (Feb 01 2024 at 15:49):

How do I make hint suggest fun_prop? I can easily write a function that looks at a goal and decides whether it should be solvable by fun_prop or not, without actually running the tactic.

Kim Morrison (Feb 03 2024 at 12:30):

Just register_hint fun_prop.

David Renshaw (Feb 04 2024 at 23:58):

fun_prop made the tactic execution time for this proof (the same one I linked to above) go from 2 seconds to 0.25 seconds.
:tada: :tada:

Shreyas Srinivas (Feb 05 2024 at 12:08):

What's the TL;DR of fprop?

Tomas Skrivan (Feb 05 2024 at 12:12):

It is merged into mathlib now, I announced it in #general. Is that enough of a tldr?

Shreyas Srinivas (Feb 05 2024 at 12:13):

Oh sorry. Didn't see that thread.

