Zulip Chat Archive

Stream: new members

Topic: How to do differentiation on concrete multivariable function


ZhiKai Pong (Apr 08 2025 at 00:08):

Hi, I'm trying to learn how differentiation works in mathlib. I still don't really understand how fderiv works so I'm trying to construct some examples myself. The following is clearly wrong, and I would like to know whether stating something like this is possible and how to proceed.

∂/∂x (cos(x)sin(y)) = -sin(x)sin(y)

2 questions:

  1. How do I define a multivariable function? Do I need to specify I'm in ℝ²? (and how do I do that?)
  2. Do deriv theorems carry over to fderiv? What's the correct syntax?

any tips and explanations appreciated!

import Mathlib

example : deriv (fun x : ℝ => Real.cos x) = (fun x => -Real.sin x) := by
  simp

example : fderiv ℝ (fun x : ℝ => Real.cos x) = (fun x => -Real.sin x) := by
  sorry --statement is wrong, type mismatch

example : fderiv ℝ (fun x y : ℝ => (Real.cos x)*(Real.sin y)) = (fun x y => (-Real.sin x)*(Real.sin y)) := by
  sorry --Don't know how to define cos(x)sin(y)
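
A hedged sketch of one answer for later readers: fix y and differentiate in x with the one-variable deriv (assuming the Mathlib lemmas Real.hasDerivAt_cos, HasDerivAt.mul_const and HasDerivAt.deriv):

import Mathlib

-- Sketch: treat `y` as a fixed parameter and differentiate in `x` only.
example (x y : ℝ) :
    deriv (fun x : ℝ => Real.cos x * Real.sin y) x = -Real.sin x * Real.sin y := by
  have h : HasDerivAt (fun x : ℝ => Real.cos x * Real.sin y)
      (-Real.sin x * Real.sin y) x :=
    (Real.hasDerivAt_cos x).mul_const _
  exact h.deriv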

ZhiKai Pong (Apr 08 2025 at 13:17):

Oh, if it's a function to ℝ does that mean I don't actually have to use fderiv? Is this the correct syntax?

import Mathlib

noncomputable
def some_f : ℝ → ℝ → ℝ := fun x y => (Real.cos x)*(Real.sin y)

example : deriv (fun x' => some_f x' y) x = (-Real.sin x)*(Real.sin y) := by
  sorry

ZhiKai Pong (Apr 08 2025 at 13:43):

simp [some_f] closes the goal. Is this stating what I want it to say? simp seems too powerful here, and I would like to understand what's going on under the hood.

I think I'll still want to know how to write this with fderiv, since my current goal is to work towards something related to vector calculus in ℝ³.

Ruben Van de Velde (Apr 08 2025 at 15:09):

You can look at what simp? [some_f] brings up - that probably gives an indication both of what API we have and of whether it uses the steps you'd expect mathematically

ZhiKai Pong (Apr 08 2025 at 21:03):

import Mathlib

variable (f : ℝ → ℝ) (x : ℝ)

example : deriv f x = fderiv ℝ f x 1 := by rfl

example : deriv (fun x => Real.cos x) = (fun x => -Real.sin x) := by
  simp

example : fderiv ℝ (fun x => Real.cos x) 1 = (fun x => -Real.sin x) := by
  simp --doesn't work

Maybe this illustrates the problem more clearly: how should I state the same thing with fderiv? I guess I don't really understand the role of the 1 in fderiv ℝ f x 1

Aaron Liu (Apr 08 2025 at 21:10):

The fderiv ℝ (fun x => Real.cos x) x 1 means you are taking the derivative of the function Real.cos at the point x in the direction of 1

Aaron Liu (Apr 08 2025 at 21:13):

This is useful if you are taking the Jacobian in higher dimensions, because you can plug in a direction.
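
For instance (a hedged sketch, assuming the lemma fderiv_deriv and the generic map_smul): fderiv ℝ f x is a continuous linear map, so scaling the direction scales the result:

import Mathlib

-- Sketch: the derivative map is linear in the direction,
-- so direction `2` gives twice the value at direction `1`.
example (f : ℝ → ℝ) (x : ℝ) : fderiv ℝ f x 2 = 2 * deriv f x := by
  rw [← fderiv_deriv, ← smul_eq_mul, ← map_smul]
  norm_num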

ZhiKai Pong (Apr 08 2025 at 21:14):

Thanks, does that scale? If I put in a 2, does the result also get multiplied by a factor of 2?

ZhiKai Pong (Apr 08 2025 at 21:16):

I found that there is fderiv_cos but I don't really understand how to use it. The goal state says ⇑(fderiv ℝ (fun x ↦ Real.cos x) 1) = fun x ↦ -Real.sin x and I don't understand how the coercion works

Aaron Liu (Apr 08 2025 at 21:21):

You want

example (x : ℝ) : fderiv ℝ (fun x => Real.cos x) x 1 = -Real.sin x := by
  simp

ZhiKai Pong (Apr 08 2025 at 21:24):

thanks! why is the fun necessary on the left but not on the right?

Aaron Liu (Apr 08 2025 at 21:25):

It is in fact not necessary on the left :)

example (x : ℝ) : fderiv ℝ Real.cos x 1 = -Real.sin x := by
  simp
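
(The two spellings are interchangeable because eta-reduction is definitional in Lean 4; a minimal check of this assumption:)

example : (fun x => Real.cos x) = Real.cos := rfl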

ZhiKai Pong (Apr 08 2025 at 21:26):

Oh I get it now, the x just gets substituted into the fun, thanks a lot!

ZhiKai Pong (Apr 08 2025 at 22:27):

I've now moved on to trying to prove that the derivatives in ℝ³ commute along the standard basis. Using code I scraped together (tried my best to understand #new members > partial derivative of a vector field @ 💬 ) I came up with the following. I hope this is the correct statement; I'm left with IsSymmSndFDerivAt ℝ f x and 2 pairs of goals involving DifferentiableAt. I wonder whether those are provable from properties of EuclideanSpace or whether I would need more assumptions. Would like some pointers as to how to proceed.

import Mathlib.Analysis.Calculus.FDeriv.Symmetric
import Mathlib.Analysis.InnerProductSpace.PiL2

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)

macro "∂[" i:term "]" : term => `(fun f x => fderiv ℝ f x (EuclideanSpace.single $i (1:ℝ)))

open EuclideanSpace in
lemma deriv_commute (f : ℝ³ → ℝ³) (x : ℝ³) (v u : Fin 3) : ∂[u] (∂[v] f) x = ∂[v] (∂[u] f) x := by
  simp
  rw [fderiv_clm_apply, fderiv_clm_apply]
  simp
  rw [IsSymmSndFDerivAt.eq]
  sorry

ZhiKai Pong (Apr 09 2025 at 10:25):

import Mathlib.Analysis.Calculus.FDeriv.Symmetric
import Mathlib.Analysis.InnerProductSpace.PiL2

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)

macro "∂[" i:term "]" : term => `(fun f x => fderiv ℝ f x (EuclideanSpace.single $i (1:ℝ)))

open EuclideanSpace in
lemma deriv_commute (f : ℝ³ → ℝ³) (x : ℝ³) (v u : Fin 3) (hf : ContDiff ℝ ⊤ f) : ∂[u] (∂[v] f) x = ∂[v] (∂[u] f) x := by
  simp
  rw [fderiv_clm_apply, fderiv_clm_apply]
  simp
  rw [IsSymmSndFDerivAt.eq]
  apply ContDiffAt.isSymmSndFDerivAt_of_omega
  apply hf.contDiffAt
  repeat
  show DifferentiableAt ℝ (fderiv ℝ f) x
  sorry
  apply differentiableAt_const

I got this far; I think I need (hf : ContDiff ℝ ⊤ f) to specify that the function is C^∞. Any suggestions on how to finish the final sorry?

ZhiKai Pong (Apr 09 2025 at 13:27):

import Mathlib.Analysis.Calculus.FDeriv.Symmetric
import Mathlib.Analysis.InnerProductSpace.PiL2

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)

example (f : ℝ³ → ℝ³) (x : ℝ³) (hf : ContDiff ℝ ⊤ f) : DifferentiableAt ℝ (fderiv ℝ f) x := by
  rw [contDiff_infty_iff_fderiv] at hf --doesn't work

I think this is what I need, but I don't understand why this rewrite doesn't work

Aaron Liu (Apr 09 2025 at 14:31):

Actually, ContDiff ℝ ⊤ f means f is analytic

ZhiKai Pong (Apr 09 2025 at 15:01):

Aaron Liu said:

analytic

Sorry I don't quite follow, I'm aware I can do this

import Mathlib.Analysis.Calculus.FDeriv.Symmetric
import Mathlib.Analysis.InnerProductSpace.PiL2

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)

example (f : ℝ³ → ℝ³) (x : ℝ³) (hf : ContDiff ℝ ⊤ f) : DifferentiableAt ℝ (fderiv ℝ f) x := by
  apply AnalyticAt.differentiableAt
  apply ContDiffAt.analyticAt
  apply ContDiff.contDiffAt

and get the goal state as

f : ℝ³ → ℝ³
x : ℝ³
hf : ContDiff ℝ ⊤ f
⊢ ContDiff ℝ ⊤ (fderiv ℝ f)

which is actually how I found contDiff_infty_iff_fderiv, but I'm not sure how to apply that. Am I missing something or are you suggesting something else?

Aaron Liu (Apr 09 2025 at 15:07):

Read carefully: docs#ContDiff takes a WithTop ENat, which has two tops. The higher top ("omega") is for analytic functions, and the lower top ("infinity") is for smooth (infinitely differentiable) functions.

Aaron Liu (Apr 09 2025 at 15:17):

If you open scoped ContDiff it'll print these differently.
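
A minimal sketch of the difference (assuming the lemma name WithTop.coe_ne_top): the coercion of (⊤ : ℕ∞) into WithTop ℕ∞ is the smooth ∞, which is not the analytic top ω:

import Mathlib

open scoped ContDiff
-- Sketch: the two "tops" of `WithTop ℕ∞` are different elements;
-- with `open scoped ContDiff` they print as `∞` and `ω`.
example : ((⊤ : ℕ∞) : WithTop ℕ∞) ≠ (⊤ : WithTop ℕ∞) := WithTop.coe_ne_top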

ZhiKai Pong (Apr 09 2025 at 15:19):

I'm not from a maths background so please correct me if I'm wrong: analyticity is a stronger condition than infinite differentiability, so having (hf : ContDiff ℝ ⊤ f) should be strong enough. I want to show DifferentiableAt ℝ (fderiv ℝ f) x, which just means the second derivative exists, right? And contDiff_infty_iff_fderiv is applicable when f is analytic, which is satisfied by hf? What am I missing in my reasoning?

Aaron Liu (Apr 09 2025 at 15:27):

import Mathlib.Analysis.Calculus.FDeriv.Symmetric
import Mathlib.Analysis.InnerProductSpace.PiL2

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)

example (f : ℝ³ → ℝ³) (x : ℝ³) (hf : ContDiff ℝ ⊤ f) : DifferentiableAt ℝ (fderiv ℝ f) x := by
  have hf' : ContDiff ℝ (.some ⊤) f := hf.of_le le_top
  rw [contDiff_infty_iff_fderiv] at hf'
  apply hf'.right.contDiffAt.differentiableAt
  simp

Yakov Pechersky (Apr 09 2025 at 16:40):

More idiomatic would be have hf' : ContDiff ℝ (⊤ : ℕ∞) f := hf.of_le le_top, which matches the syntax of the contDiff_infty_iff_fderiv statement

ZhiKai Pong (Apr 11 2025 at 16:19):

Sorry if this looks a bit messy. Basically I'm trying to prove that partial derivatives distribute over add, but wrapped in some custom notation, and I would like to ask how to apply pd_add while keeping everything in EuclideanSpace. If pd_add is better defined in another form please let me know.

import Mathlib

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)
macro "∂[" i:term "]" : term => `(fun f x => fderiv ℝ f x (EuclideanSpace.single $i (1:ℝ)))

lemma pd_add [NormedAddCommGroup M] [NormedSpace ℝ M] (f1 f2 : EuclideanSpace ℝ (Fin n) → M) (hf1 : ContDiff ℝ ⊤ f1) (hf2 : ContDiff ℝ ⊤ f2) :
    ∂[u] (f1 + f2) = ∂[u] f1 + ∂[u] f2 := by
  sorry -- I can prove this

#check pd_add

noncomputable def grad (f : ℝ³ → ℝ) :
  ℝ³ → ℝ³ := fun x i =>
    ∂[i] f x

lemma grad_add (f1 f2 : ℝ³ → ℝ) : grad (f1 + f2) = grad f1 + grad f2 := by
  unfold grad
  ext x i
  simp
  sorry -- how to use pd_add?

(edit: added differentiability assumptions)

Sébastien Gouëzel (Apr 11 2025 at 16:22):

You can do

lemma grad_add (f1 f2 : ℝ³ → ℝ) : grad (f1 + f2) = grad f1 + grad f2 := by
  unfold grad
  ext x i
  rw [pd_add]
  rfl

But note that pd_add is not true as stated: you need differentiability assumptions.
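
For reference, a hedged sketch of pd_add with the missing assumptions filled in (Differentiable ℝ already suffices for a single derivative; DifferentiableAt.hasFDerivAt, HasFDerivAt.add and HasFDerivAt.fderiv are the assumed Mathlib API, and pd_add' is a hypothetical name):

import Mathlib

macro "∂[" i:term "]" : term => `(fun f x => fderiv ℝ f x (EuclideanSpace.single $i (1:ℝ)))

lemma pd_add' [NormedAddCommGroup M] [NormedSpace ℝ M]
    (f1 f2 : EuclideanSpace ℝ (Fin n) → M)
    (hf1 : Differentiable ℝ f1) (hf2 : Differentiable ℝ f2) :
    ∂[u] (f1 + f2) = ∂[u] f1 + ∂[u] f2 := by
  ext x
  -- `f1 + f2` is definitionally `fun y => f1 y + f2 y`, so `HasFDerivAt.add` applies
  have h : fderiv ℝ (f1 + f2) x = fderiv ℝ f1 x + fderiv ℝ f2 x :=
    ((hf1 x).hasFDerivAt.add (hf2 x).hasFDerivAt).fderiv
  simp [h]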

ZhiKai Pong (Apr 11 2025 at 16:33):

If I'm already in the state after simp is there a way to go back to the fun f => form?

ZhiKai Pong (Apr 11 2025 at 16:43):

Context: divergence and curl are defined through some convoluted notation, and after simp the goal is in a form where I can't apply pd_add directly, e.g.

import Mathlib

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)
macro "∂[" i:term "]" : term => `(fun f x => fderiv ℝ f x (EuclideanSpace.single $i (1:ℝ)))

lemma pd_add [NormedAddCommGroup M] [NormedSpace ℝ M] (f1 f2 : EuclideanSpace ℝ (Fin n) → M) (hf1 : ContDiff ℝ ⊤ f1) (hf2 : ContDiff ℝ ⊤ f2) :
    ∂[u] (f1 + f2) = ∂[u] f1 + ∂[u] f2 := by
  sorry -- I can prove this

noncomputable def div (f : ℝ³ → ℝ³) :
  ℝ³ → ℝ := fun x =>
  -- get i-th component of `f`
  let fi i x := inner (f x) (EuclideanSpace.basisFun (Fin 3) ℝ i)
  -- derivative of i-th component in j-th coordinate
  -- ∂fᵢ/∂xⱼ
  let df i x := ∂[i] (fi i) x
  ∑ i, df i x

lemma div_add (f1 f2 : ℝ³ → ℝ³) (hf1 : ContDiff ℝ ⊤ f1) (hf2 : ContDiff ℝ ⊤ f2) : div (f1 + f2) = div f1 + div f2 := by
  unfold div
  simp
  ext x
  unfold Finset.sum
  simp
  sorry -- rw [pd_add] doesn't work

Sébastien Gouëzel (Apr 11 2025 at 16:43):

You can do a change to switch to whatever form you prefer. Or modify the statement of pd_add to have it in applied form, to make sure it applies after the simp.

ZhiKai Pong (Apr 11 2025 at 16:49):

Sorry what does applied form mean?

Kevin Buzzard (Apr 11 2025 at 17:56):

It means "don't say two functions are equal, say that for all v they have equal values at v"

ZhiKai Pong (Apr 11 2025 at 19:11):

Kevin Buzzard said:

It means "don't say two functions are equal, say that for all v they have equal values at v"

thank you so much, this works now:

lemma pd_add [NormedAddCommGroup M] [NormedSpace ℝ M] (f1 f2 : EuclideanSpace ℝ (Fin n) → M) (hf1 : ContDiff ℝ ⊤ f1) (hf2 : ContDiff ℝ ⊤ f2) :
    ∀ v, ∂[u] (f1 + f2) v = ∂[u] f1 v + ∂[u] f2 v := by
  sorry -- I can prove this

#check pd_add

noncomputable def grad (f : ℝ³ → ℝ) :
  ℝ³ → ℝ³ := fun x i =>
    ∂[i] f x

lemma grad_add (f1 f2 : ℝ³ → ℝ) (hf1 : ContDiff ℝ ⊤ f1) (hf2 : ContDiff ℝ ⊤ f2) : grad (f1 + f2) = grad f1 + grad f2 := by
  unfold grad
  ext x i
  simp
  apply pd_add --works
  <;> assumption

ZhiKai Pong (Apr 11 2025 at 21:54):

Sorry, it's getting messier - I now have another layer of definition, cderiv, and I hope I can avoid unfolding down to the fderiv level. Because of the way div is defined, the differentiation occurs at each coordinate. I tried to define pd_coord_add by pattern matching against the goal, but the apply still doesn't work. I probably have to change the pd_coord_add definition somehow, but I still don't fully grasp how the fun notation works

import Mathlib

-- note: `fderiv` needs normed instances on `M`, not just `Module ℝ M`
noncomputable def cderiv [NormedAddCommGroup M] [NormedSpace ℝ M]
    (μ : Fin d) (f : EuclideanSpace ℝ (Fin d) → M) : EuclideanSpace ℝ (Fin d) → M :=
  (fun x => fderiv ℝ f x (EuclideanSpace.single μ (1:ℝ)))

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)
macro "∂[" i:term "]" : term => `(fun f => cderiv $i f)

lemma pd_add [NormedAddCommGroup M] [NormedSpace ℝ M] (f1 f2 : EuclideanSpace ℝ (Fin d) → M) (hf1 : ContDiff ℝ ⊤ f1) (hf2 : ContDiff ℝ ⊤ f2) :
    ∀ v, ∂[u] (f1 + f2) v = ∂[u] f1 v + ∂[u] f2 v := by
  sorry -- v ∈ EuclideanSpace ℝ (Fin d)

lemma pd_coord_add (f1 f2 : EuclideanSpace ℝ (Fin d) → EuclideanSpace ℝ (Fin d)) (hf1 : ContDiff ℝ ⊤ f1) (hf2 : ContDiff ℝ ⊤ f2) :
    ∀ v, (∂[u] (fun v => f1 v i + f2 v i) v) = (∂[u] (fun v => f1 v i) v) + (∂[u] (fun v => f2 v i) v) := by
  sorry -- I think I need this for explicit coordinates

#check pd_coord_add


noncomputable def div (f : ℝ³ → ℝ³) :
  ℝ³ → ℝ := fun x =>
  -- get i-th component of `f`
  let fi i x := inner (f x) (EuclideanSpace.basisFun (Fin 3) ℝ i)
  -- derivative of i-th component in j-th coordinate
  -- ∂fᵢ/∂xⱼ
  let df i x := ∂[i] (fi i) x
  ∑ i, df i x

lemma div_add (f1 f2 : ℝ³ → ℝ³) (hf1 : ContDiff ℝ ⊤ f1) (hf2 : ContDiff ℝ ⊤ f2) : div (f1 + f2) = div f1 + div f2 := by
  unfold div
  simp
  ext x
  unfold Finset.sum
  simp
  apply pd_coord_add --tried to define pd_coord_add by pattern matching but doesn't work

ZhiKai Pong (Apr 11 2025 at 21:56):

essentially I don't really know how to proceed when the fun is nested within something else

ZhiKai Pong (Apr 12 2025 at 11:50):

Summarizing my current understanding:

import Mathlib

-- note: `fderiv` needs normed instances on `M`, not just `Module ℝ M`
noncomputable def cderiv [NormedAddCommGroup M] [NormedSpace ℝ M]
    (μ : Fin d) (f : EuclideanSpace ℝ (Fin d) → M) : EuclideanSpace ℝ (Fin d) → M :=
  (fun x => fderiv ℝ f x (EuclideanSpace.single μ (1:ℝ)))

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)
macro "∂[" i:term "]" : term => `(fun f => cderiv $i f)

lemma pd_add [NormedAddCommGroup M] [NormedSpace ℝ M]
    (f1 f2 : EuclideanSpace ℝ (Fin n) → M) (hf1 : ContDiff ℝ ⊤ f1) (hf2 : ContDiff ℝ ⊤ f2) :
  ∂[u] (f1 + f2) = ∂[u] f1 + ∂[u] f2 := by
  sorry -- I can prove this

lemma pd_add_applied [NormedAddCommGroup M] [NormedSpace ℝ M]
    (f1 f2 : EuclideanSpace ℝ (Fin n) → M) (hf1 : ContDiff ℝ ⊤ f1) (hf2 : ContDiff ℝ ⊤ f2) :
  ∂[u] (f1 + f2) v = ∂[u] f1 v + ∂[u] f2 v := by
  sorry -- I can prove this

lemma pd_fun_add (f1 f2 : EuclideanSpace ℝ (Fin d) → EuclideanSpace ℝ (Fin d))
    (hf1 : ContDiff ℝ ⊤ f1) (hf2 : ContDiff ℝ ⊤ f2) :
  (∂[u] (fun x => (f1 + f2) x) v) = (∂[u] f1 v) + (∂[u] f2 v) := by
  sorry -- how do I state the fun x => version?

#check pd_add

noncomputable def grad (f : ℝ³ → ℝ) :
  ℝ³ → ℝ³ := fun x i =>
    ∂[i] f x

example (f1 f2 : ℝ³ → ℝ) (hf1 : ContDiff ℝ ⊤ f1) (hf2 : ContDiff ℝ ⊤ f2) :
    grad (f1 + f2) = grad f1 + grad f2 := by
  unfold grad
  ext x i
  rw [pd_add] -- rw works for fun =>
  rfl
  apply hf1
  apply hf2

example (f1 f2 : ℝ³ → ℝ) (hf1 : ContDiff ℝ ⊤ f1) (hf2 : ContDiff ℝ ⊤ f2) :
    grad (f1 + f2) = grad f1 + grad f2 := by
  unfold grad
  ext x i
  fin_cases i <;>
  . simp                 -- rw doesn't work after simp
    apply pd_add_applied -- apply can handle Fin indexing but pattern still has to be exact match
    apply hf1
    apply hf2

/- -------------------------------------------- -/

noncomputable def tmp (f : ℝ³ → ℝ³) :
    ℝ³ → ℝ³ := fun x =>
    -- get i-th component of `f`
    let fi i x := inner (f x) (EuclideanSpace.basisFun (Fin 3) ℝ i)
    -- derivative of i-th component in j-th coordinate
    -- ∂fᵢ/∂xⱼ
    let df i := ∂[i] (fi i) x
    fun i =>
      match i with
      | 0 => df 0 + df 1
      | 1 => df 1 + df 2
      | 2 => df 2 + df 0

example (f1 f2 : ℝ³ → ℝ³) (hf1 : ContDiff ℝ ⊤ f1) (hf2 : ContDiff ℝ ⊤ f2) :
    tmp (f1 + f2) = tmp f1 + tmp f2 := by
  unfold tmp
  ext x i
  fin_cases i
  simp                --necessary
  -- rw [pd_fun_add]  --doesn't work
  apply pd_fun_add    --apply has to be exact match, is there a partial apply or some alternative way of rewriting?

ZhiKai Pong (Apr 12 2025 at 11:59):

Am I correct that the applied form cannot be used for rewriting? Because of the way tmp is defined, I have to simp to simplify the definitions, and then I run into the issue that I can't rw the goal, which has the form

cderiv 0 (fun x => f1 x 0 + f2 x 0) x + ... = cderiv 0 (fun x => f1 x 0) x + cderiv 0 (fun x => f2 x 0) x + ...

I want to rw this using a helper lemma, but I'm not sure how to state that.

ZhiKai Pong (Apr 12 2025 at 14:23):

I think I figured it out: I couldn't get rid of the (fun f => ...) because of the way the macro is defined. I'll have to change the macro to

macro "∂[" i:term "]" : term => `(cderiv $i)

so that ∂[u] g elaborates directly to cderiv u g instead of the beta-redex (fun f => cderiv u f) g; then it should work. apply and rw can't "see through" the fun notation

ZhiKai Pong (Apr 12 2025 at 23:43):

back to differentiability arguments: I now have the following

import Mathlib

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)

example (f : ℝ³ → ℝ³) (hf : ContDiff ℝ ⊤ f) :
    DifferentiableAt ℝ (fun x => (fderiv ℝ f x) (EuclideanSpace.single v 1)) x := by
  have hf' : ContDiff ℝ (⊤ : ℕ∞) f := hf.of_le le_top
  rw [contDiff_infty_iff_fderiv] at hf'
  apply ContDiffAt.differentiableAt
  apply ContDiff.contDiffAt
  sorry --apply hf'.right doesn't work

ZhiKai Pong (Apr 12 2025 at 23:44):

I feel like I'm getting more and more confused by fun x => notation... Any hints as to how to proceed from here?

Aaron Liu (Apr 12 2025 at 23:45):

Maybe docs#contDiff_pi

ZhiKai Pong (Apr 13 2025 at 00:08):

Neither docs#contDiff_pi nor docs#contDiff_single works :(

ZhiKai Pong (Apr 13 2025 at 11:02):

import Mathlib

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)

example (f : ℝ³ → ℝ³) (hf : ContDiff ℝ ⊤ f) :
    DifferentiableAt ℝ (fun x => (fderiv ℝ f x) (EuclideanSpace.single v 1)) x := by
  have hf' : ContDiff ℝ (⊤ : ℕ∞) f := hf.of_le le_top
  rw [contDiff_infty_iff_fderiv] at hf'
  apply ContDiffAt.differentiableAt
  apply ContDiff.contDiffAt
  apply contDiff_euclidean.mpr
  sorry --apply contDiff_single somehow?

I think apply contDiff_euclidean.mpr is the right call, but I'm still unable to use contDiff_single

ZhiKai Pong (Apr 13 2025 at 21:57):

import Mathlib

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)

example (f : ℝ³ → ℝ³) (hf : ContDiff ℝ ⊤ f) :
    Differentiable ℝ fun x =>
      (fderiv ℝ (fun x => f x (u : Fin 3)) x) (EuclideanSpace.single (v : Fin 3) (1 : ℝ)) := by
  have hf' : ContDiff ℝ (⊤ : ℕ∞) f := hf.of_le le_top
  rw [contDiff_infty_iff_fderiv] at hf'
  sorry --some modified form of fderiv_pi?

After playing around some more with the definitions, I ended up with this expression, which I think just says ∂fᵤ/∂xᵥ is differentiable if f is infinitely differentiable. At this point I've tried enough things to say that I either have to understand Lean syntax better, so I can make appropriate modifications to existing Mathlib theorems, or I'm missing something obvious with the maths. I would appreciate some guidance as to whether this is just a syntax thing or something deeper.

Tomas Skrivan (Apr 13 2025 at 22:20):

I think you want to use docs#DifferentiableAt.clm_apply to move the evaluation of the linear map outside of DifferentiableAt. Unfortunately, I don't have the time to look into the details right now.
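
A hedged sketch of that suggestion (g stands in for fderiv ℝ f, and w for the fixed direction; both are hypothetical names):

import Mathlib

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)

-- Sketch: if `g` (think: `fderiv ℝ f`) is differentiable at `x`, then so is its
-- evaluation at the fixed direction `w`, by `DifferentiableAt.clm_apply`.
example (g : ℝ³ → (ℝ³ →L[ℝ] ℝ³)) (x w : ℝ³) (hg : DifferentiableAt ℝ g x) :
    DifferentiableAt ℝ (fun y => g y w) x :=
  hg.clm_apply (differentiableAt_const w)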

ZhiKai Pong (Apr 13 2025 at 23:50):

Tomas Skrivan said:

I think you want to use docs#DifferentiableAt.clm_apply to move the evaluation of the linear map outside of DifferentiableAt. Unfortunately, I don't have the time to look into the details right now.

thank you so much, I think I'll have to read up on the difference between comp and apply to really understand how this works.

ZhiKai Pong (Apr 13 2025 at 23:52):

The final goal
Differentiable ℝ (fderiv ℝ fun x ↦ f x w)

looks a lot like docs#fderiv_pi, except it would be the primed version, which doesn't exist yet. Not sure whether there's an easy way to switch between them; I'll try to understand the difference between the primed and non-primed versions of similar theorems

ZhiKai Pong (Apr 14 2025 at 14:17):

import Mathlib

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)

example (f : ℝ³ → ℝ³) (h : Differentiable ℝ (fderiv ℝ f)) :
    Differentiable ℝ (fderiv ℝ fun x => f x i) := by
  sorry

After removing all the clutter, what I need is essentially the above. I think what it's saying, and the way to do it on paper, is:

the fderiv of each component is just the projection composed with the fderiv; since both are differentiable, the fderiv of each component follows from the fderiv of f.

but I'm not sure how to state this in lean.

Tomas Skrivan (Apr 14 2025 at 17:21):

All there examples are solvable by fun_prop after adding these two theorems

theorem ContDiff.differentiable_fderiv (f : X → Y) (hf : ContDiff 𝕜 2 f) :
    Differentiable 𝕜 (fun x => fderiv 𝕜 f x) := ...

and

theorem EuclideanSpace.contDiff_apply {ι} [Fintype ι] (i : ι) :
    ContDiff ℝ n (fun f : EuclideanSpace 𝕜 ι => f i) := ...

Now you can solve your examples with fun_prop (disch:=decide). The (disch:=decide) is necessary for fun_prop to show that 2 ≤ ⊤. Also you can weaken your assumption from ContDiff ℝ ⊤ f to ContDiff ℝ 2 f.

Tomas Skrivan (Apr 14 2025 at 17:24):

Full code

code

What is left to show is that

fderiv 𝕜 f x = continuousMultilinearCurryFin1 𝕜 X Y (iteratedFDeriv 𝕜 1 f x)

I don't have experience working with multilinear maps, so I would appreciate it if someone helped out.

And

theorem EuclideanSpace.contDiff_apply {ι} [Fintype ι] (i : ι) :
    ContDiff ℝ n (fun f : EuclideanSpace 𝕜 ι => f i) := sorry

which is effectively contDiff_apply but under L2 norm rather than L∞ norm.

Tomas Skrivan (Apr 14 2025 at 17:25):

Yeah sorry, it looks like there is some API missing and doing what you are trying to do is messier than it should be.

Tomas Skrivan (Apr 14 2025 at 17:49):

I reduced ContDiff ℝ n (fun f : EuclideanSpace 𝕜 ι => f i) to showing that WithLp.linearEquiv 2 𝕜 (ι → 𝕜) is a bounded/continuous map, which I'm unable to find as a statement in Mathlib

import Mathlib

variable
  {𝕜} [RCLike 𝕜]

theorem euclideanSpace_contDiff_apply {ι} [Fintype ι] (i : ι) :
    ContDiff 𝕜 ⊤ (fun f : EuclideanSpace 𝕜 ι => f i) := by
  have h : (fun f : EuclideanSpace 𝕜 ι => f i)
           =
           (fun f : ι → 𝕜 => f i) ∘ (WithLp.linearEquiv 2 𝕜 (ι → 𝕜)).symm := by rfl
  rw[h]
  apply ContDiff.comp
  exact contDiff_apply 𝕜 𝕜 i
  apply IsBoundedLinearMap.contDiff
  constructor
  · apply LinearMap.isLinear
  · sorry

Aaron Liu (Apr 14 2025 at 18:03):

import Mathlib

theorem PiLp.contDiff_proj_apply {𝕜 : Type*} [NontriviallyNormedField 𝕜]
    {ι : Type*} [Fintype ι] {f : ι → Type*} [∀ i, NormedAddCommGroup (f i)]
    [∀ i, NormedSpace 𝕜 (f i)] {n : WithTop ENat} (p : ENNReal) [Fact (1 ≤ p)] (i : ι) :
    ContDiff 𝕜 n fun p : PiLp p f => p i :=
  (PiLp.proj p (𝕜 := 𝕜) f i).contDiff
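
Since EuclideanSpace 𝕜 ι is by definition PiLp 2 (fun _ => 𝕜), the EuclideanSpace statement should then just be the p = 2 case (a hedged sketch reusing PiLp.proj as above; euclideanSpace_contDiff_apply' is a hypothetical name):

import Mathlib

theorem euclideanSpace_contDiff_apply' {𝕜 : Type*} [RCLike 𝕜] {ι : Type*} [Fintype ι]
    {n : WithTop ENat} (i : ι) :
    ContDiff 𝕜 n fun f : EuclideanSpace 𝕜 ι => f i :=
  (PiLp.proj 2 (𝕜 := 𝕜) (fun _ : ι => 𝕜) i).contDiff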

ZhiKai Pong (Apr 14 2025 at 18:40):

Tomas Skrivan said:

I reduced ContDiff ℝ n (fun f : EuclideanSpace 𝕜 ι => f i) to showing that WithLp.linearEquiv 2 𝕜 (ι → 𝕜) is a bounded/continuous map which I'm unable to find as a statement in mathlib

fun_prop doesn't work for the second example after replacing euclideanSpace_contDiff_apply in the longer code above; the only difference I can tell is that ContDiff ℝ n is changed to ContDiff 𝕜 ⊤, and fun_prop can't synthesize that?

works after I change the assumption back to hf : ContDiff ℝ ⊤ f

ZhiKai Pong (Apr 14 2025 at 18:42):

many thanks to both, I'll try to chew through the above

ZhiKai Pong (Apr 14 2025 at 18:47):

full code

code

Tomas Skrivan (Apr 14 2025 at 19:08):

With the addition of Aaron's proof:

code

So the only missing piece is to show that fderiv 𝕜 f x = continuousMultilinearCurryFin1 𝕜 X Y (iteratedFDeriv 𝕜 1 f x)

edit: removed all sorries

ZhiKai Pong (Apr 14 2025 at 19:10):

I was working on this before you posted - wondering whether this leads anywhere or whether I'm completely off track:

attempt

Tomas Skrivan (Apr 14 2025 at 19:21):

It is essentially how the fun_prop proof works

example (f : ℝ³ → ℝ³) (hf : ContDiff ℝ 2 f) :
    DifferentiableAt ℝ (fderiv ℝ fun x => f x i) x := by
  have eq : (fderiv ℝ (fun x ↦ f x i))
      = (fun x => (ContinuousLinearMap.proj i).comp (fderiv ℝ f x)) := by
    ext x
    rw [aux]
    apply hf.differentiable
    decide
  rw[eq] --the x is "separable" on the LHS but not on the RHS
  apply DifferentiableAt.clm_comp --does this lead anywhere?
  · fun_prop
  · apply ContDiff.differentiable_fderiv
    apply hf

I had to change a few things

  • you need a stronger assumption on f, such as ContDiff ℝ 2 f
  • rw does not play well with binders (i.e. fun x => ...), so the statement of eq has to be fderiv ℝ (fun x ↦ f x i) = ... (i.e. without a leading fun x => ...)
  • include the theorem ContDiff.differentiable_fderiv I gave above

You can call fun_prop right after rw[eq]; with set_option trace.Meta.Tactic.fun_prop true you can actually see what fun_prop is doing:

    [Meta.Tactic.fun_prop] [✅️] applying: DifferentiableAt.clm_comp
...
        [Meta.Tactic.fun_prop] [✅️] applying: differentiableAt_const
...
        [Meta.Tactic.fun_prop] [✅️] applying: Differentiable.differentiableAt
...
            [Meta.Tactic.fun_prop] [✅️] applying: ContDiff.differentiable_fderiv
...
                [Meta.Tactic.fun_prop] [✅️] applying: hf : ContDiff ℝ 2 f

ZhiKai Pong (Apr 14 2025 at 19:25):

Amazing thanks!

ZhiKai Pong (Apr 14 2025 at 19:56):

I found this which imo simplifies a lot:

theorem ContDiff.differentiable_fderiv' (f : X → Y) (hf : ContDiff 𝕜 2 f) :
    Differentiable 𝕜 (fun x => fderiv 𝕜 f x) := by
  have hf' : ContDiff 𝕜 (1+1) f := hf
  rw [contDiff_succ_iff_fderiv] at hf'
  apply hf'.right.right.differentiable
  decide

(is there a better way to write the have statement? writing 2 = 1+1 is kind of stupid)

Kevin Buzzard (Apr 14 2025 at 19:57):

(Just think yourself lucky that we're not using Russell and Whitehead's foundations, or else it would take 300 pages to justify 2 = 1 + 1)

Aaron Liu (Apr 14 2025 at 19:57):

rw [show 2 = 1 + 1 by norm_num] at hf?

Ruben Van de Velde (Apr 14 2025 at 19:58):

You can use docs#one_add_one_eq_two
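
e.g. (a hedged sketch, assuming one_add_one_eq_two elaborates at WithTop ℕ∞):

have hf' : ContDiff 𝕜 (1 + 1) f := by rw [one_add_one_eq_two]; exact hf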

Aaron Liu (Apr 14 2025 at 20:00):

That's from nng

ZhiKai Pong (Apr 14 2025 at 20:00):

Aaron Liu said:

rw [show 2 = 1 + 1 by norm_num] at hf?

I actually have to write
rw [show (2 : WithTop ℕ∞) = 1 + 1 by norm_num] at hf

ZhiKai Pong (Apr 14 2025 at 20:01):

but that's just a minor inconvenience, I'm happy with what I've got, thanks all

Tomas Skrivan (Apr 14 2025 at 22:46):

I made PR #24056 that sets up ContDiff with fun_prop. You should be able to compose derivatives as much as you want now:

import Mathlib.Analysis.Calculus.ContDiff.Operations
import Mathlib.Analysis.InnerProductSpace.PiL2

example (f : ℝ → ℝ) (h : ContDiff ℝ ⊤ f) :
    Differentiable ℝ (fun a => fderiv ℝ (deriv^[3] (deriv (fun x : ℝ => f (x^3 + x)))) a (a*a)) := by
  fun_prop (disch:=decide)

I should also add

theorem EuclideanSpace.contDiff_apply {ι} [Fintype ι] (i : ι) :
    ContDiff ℝ n (fun f : EuclideanSpace 𝕜 ι => f i) :=

not sure where it should go and how to name it.

Tomas Skrivan (Apr 15 2025 at 17:39):

Ok, I think the PR #24056 is ready. I had to reformulate a few theorems so that they work better with fun_prop, and I would appreciate someone looking over it to check that I follow the correct naming convention. Most of the time I just added ' but in a few cases _one, _succ or _fun might be more appropriate.

On that PR you should now be able to do

import Mathlib.Analysis.Calculus.ContDiff.Operations
import Mathlib.Analysis.Calculus.ContDiff.WithLp
import Mathlib.Analysis.InnerProductSpace.PiL2

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)

example (f : ℝ³ → ℝ³) (hf : ContDiff ℝ ⊤ f) :
    Differentiable ℝ (fun x => (fderiv ℝ f x) (EuclideanSpace.single v 1)) := by
  fun_prop (disch:=decide)

example (f : ℝ³ → ℝ³) (hf : ContDiff ℝ 2 f) :
    Differentiable ℝ (fderiv ℝ fun x => f x i) := by
  fun_prop

example {u v : Fin 3} (f : ℝ³ → ℝ³) (hf : ContDiff ℝ ⊤ f) :
    Differentiable ℝ fun x =>
      (fderiv ℝ (fun x => f x u) x) (EuclideanSpace.single v (1 : ℝ)) := by
  fun_prop (disch:=decide)

@ZhiKai Pong Let me know if there are other proofs of differentiability/smoothness you struggle with. I think everything you do should be solvable by fun_prop (potentially with (disch:=aesop) or a similar tactic), but it might be the case that it is not set up correctly.

Alok Singh (Apr 16 2025 at 06:15):

Dumb q @Tomas Skrivan : have you tried grind as the discharger? (afk or I'd try)

Tomas Skrivan (Apr 16 2025 at 06:28):

I still do not know what kind of goals grind is supposed to solve. The subgoals of fun_prop are usually inequalities or non-equalities on reals, or set membership. Is grind good for that?

Tomas Skrivan (Apr 16 2025 at 06:30):

I would expect that grind, like many other tactics, would need some kind of setup, such as tagging appropriate theorems with an attribute. So far I haven't seen anything like that in Mathlib.

Kevin Buzzard (Apr 16 2025 at 07:26):

The tactic isn't even finished yet, so it's premature for mathlib to start adapting to it


Last updated: May 02 2025 at 03:31 UTC