Zulip Chat Archive

Stream: mathlib4

Topic: Using fun_prop for the sum of two continuous functions


Yongxi Lin (Aaron) (Feb 14 2026 at 11:19):

Why does the first one work but the second one does not?

import Mathlib

theorem add {f g : ℝ → ℝ} (hf : Continuous f) (hg : Continuous g) :
    Continuous (fun x => f x + g x) := by fun_prop

theorem add' {f g : ℝ → ℝ} (hf : Continuous f) (hg : Continuous g) :
    Continuous (f + g) := by fun_prop

David Loeffler (Feb 14 2026 at 11:42):

We have a convention that certain lemmas about properties of functions are supposed to be formulated twice, once in "eta-expanded" form and once not; see e.g. docs#Differentiable.add and docs#Differentiable.fun_add. (The latter is autogenerated from the former using the @[to_fun] attribute.)

It looks like this wasn't carried out for Continuous.add.
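
For illustration, the two statement shapes look roughly like this (a sketch using Continuous rather than Mathlib's exact Differentiable statements):

import Mathlib

-- Eta-expanded form: the function is written as an explicit lambda.
example {f g : ℝ → ℝ} (hf : Continuous f) (hg : Continuous g) :
    Continuous (fun x => f x + g x) := hf.add hg

-- Non-expanded form: the function is the pointwise sum `f + g` itself.
-- The two goals are only defeq after unfolding the `Pi` instance.
example {f g : ℝ → ℝ} (hf : Continuous f) (hg : Continuous g) :
    Continuous (f + g) := hf.add hg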

Yongxi Lin (Aaron) (Feb 14 2026 at 11:58):

I can make a PR to address this

Floris van Doorn (Feb 16 2026 at 10:59):

David Loeffler said:

We have a convention that certain lemmas about properties of functions are supposed to be formulated twice, once in "eta-expanded" form and once not; see e.g. docs#Differentiable.add and docs#Differentiable.fun_add. (The latter is autogenerated from the former using the @[to_fun] attribute.)

It looks like this wasn't carried out for Continuous.add.

I don't think we have that convention at all. We allow this in situations where it's really necessary, but there is no consensus that we want to duplicate all such lemmas.

I am actually surprised that fun_prop fails for the second one, I thought that it had logic to deal with this.

Snir Broshi (Feb 16 2026 at 12:37):

Floris van Doorn said:

I don't think we have that convention at all. We allow this in situations where it's really necessary, but there is no consensus that we want to duplicate all such lemmas.

#mathlib4 > Convention for `f * g` and / or `fun x ↦ f x * g x` @ 💬 caused this to be added to the naming convention, and the @[to_fun] attribute.
In the same thread, at #mathlib4 > Convention for `f * g` and / or `fun x ↦ f x * g x` @ 💬, it was said that the fun_ prefix shouldn't be added to names if the unexpanded form will never exist.
It was never said explicitly, but the conclusion of that thread does seem to imply we should use @[to_fun] wherever possible.

Snir Broshi (Feb 16 2026 at 12:37):

Yongxi Lin (Aaron) said:

I can make a PR to address this

To cross-link, this happened in #35306

Michael Rothgang (Feb 16 2026 at 22:34):

You have a fair point: the naming conventions do say

If there is no need to disambiguate because a lemma is given using only the expanded form, the prefix fun_ is not required.

That does not imply "we need to duplicate each lemma every time". (And I would disagree with that.)

Michael Rothgang (Feb 16 2026 at 22:37):

My favourite outcome would be that

  • we find out why fun_prop fails in the second example
  • we fix this underlying cause, and
  • we revert #35306 as it is no longer necessary.

Michael Rothgang (Feb 16 2026 at 22:37):

For the root cause: my understanding was that fun_prop uses the refined discrimination tree by @Jovan Gerbscheid, and that treats f * g and its eta-expanded version the same --- so fun_prop should treat them the same. Perhaps you can comment on that, Jovan and @Tomas Skrivan?

Jovan Gerbscheid (Feb 16 2026 at 22:42):

It was originally my idea to have the discrimination tree treat f * g the same as fun x => f x * g x. But typically for performance, we want to run unification in the reducible transparency, and unfortunately we need the reducible_and_instances transparency to unify these two forms. So this idea is not actually used now.
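
The transparency issue can be seen concretely: the equation below holds by rfl, but checking it requires unfolding the Pi addition instance, which reducible transparency alone does not do (a sketch):

import Mathlib

-- Holds definitionally, but only after unfolding the `Pi` addition
-- instance; reducible transparency cannot see through it.
example (f g : ℝ → ℝ) (x : ℝ) : (f + g) x = f x + g x := rfl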

Jovan Gerbscheid (Feb 16 2026 at 22:46):

I could imagine a world where pull fun _ => _ is part of the preprocessing that fun_prop does whenever the function in question is not a lambda function. This way, we only need the fun version of any lemma to be tagged with fun_prop.

Jireh Loreaux (Feb 16 2026 at 22:49):

But you would also have to do that preprocessing to local hypotheses, no?

Jovan Gerbscheid (Feb 16 2026 at 22:53):

If we do that, then fun_prop will also be able to solve this:

example (f : ℝ → ℝ) (h : Continuous (f + g)) : Continuous (fun x ↦ f x + g x) := by
  fun_prop

Jovan Gerbscheid (Feb 16 2026 at 23:28):

Hmm, in that case what should the syntax be for creating a fun_ lemma and tagging only that version with fun_prop? Should it be the to_fun_prop attribute? :sweat_smile:

Tomas Skrivan (Feb 17 2026 at 07:42):

Michael Rothgang said:

For the root cause: my understanding was that fun_prop uses the refined discrimination tree by Jovan Gerbscheid, and that treats f * g and its eta-expanded version the same --- so fun_prop should treat them the same. Perhaps you can comment on that, Jovan and Tomas Skrivan?

The issue is not with eta expansion but with the reduction (f + g) x ==> f x + g x. fun_prop internally changes the function to fun x => (f + g) x, but it won't do the reduction to fun x => f x + g x.

fun_prop keeps track of the indices of the function arguments it knows something about. The theorem docs#Continuous.fun_add is the compositional form for the first and second arguments of +/HAdd.hAdd, while the theorem docs#Continuous.add is the simple form for the third argument.

Compositional form for all three arguments would be

theorem add'' {f g : ℝ → ℝ → ℝ} {h : ℝ → ℝ} (hf : Continuous f) (hg : Continuous g) (hh : Continuous h) :
    Continuous (fun x => ((f x) + (g x)) (h x)) := by fun_prop

which is still not solvable purely by fun_prop and needs dsimp or something similar beforehand.
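
For example, one preprocessing step that performs the missing reduction is a simp rewrite with Pi.add_apply, the lemma (f + g) x = f x + g x (a sketch; the theorem name add''_via_simp is made up, and per the discussion above the reduced goal is within fun_prop's reach):

import Mathlib

theorem add''_via_simp {f g : ℝ → ℝ → ℝ} {h : ℝ → ℝ}
    (hf : Continuous f) (hg : Continuous g) (hh : Continuous h) :
    Continuous (fun x => ((f x) + (g x)) (h x)) := by
  -- `Pi.add_apply : (f + g) x = f x + g x` rewrites the goal to
  -- `Continuous fun x => f x (h x) + g x (h x)`.
  simp only [Pi.add_apply]
  fun_prop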

Jovan Gerbscheid (Feb 17 2026 at 08:26):

In this example too, using pull fun _ => _ does the trick:

module
import Mathlib

theorem add'' {f g : ℝ → ℝ → ℝ} {h : ℝ → ℝ} (hf : Continuous f) (hg : Continuous g) (hh : Continuous h) :
    Continuous (fun x => ((f x) + (g x)) (h x)) := by
  pull fun _ ↦ _ -- changes the goal to `Continuous fun x => f x (h x) + g x (h x)`
  fun_prop
  fun_prop

Tomas Skrivan (Feb 17 2026 at 08:29):

Yes, whatever is capable of doing the reduction (f + g) x = f x + g x

Tomas Skrivan (Feb 17 2026 at 10:46):

We could experiment with adding an option to preprocess the function with a custom conv tactic like pull or simp.


Last updated: Feb 28 2026 at 14:05 UTC