Zulip Chat Archive
Stream: maths
Topic: Implicit function theorem in familiar form
Miyahara Kō (Jul 03 2024 at 15:01):
Hi.
As we all know, Mathlib does not contain the implicit function theorem in the familiar form.
According to Wikipedia, the implicit function theorem is written in the following form:
Let X, Y, Z be Banach spaces. Let the mapping f : X × Y → Z be continuously Fréchet differentiable. If (x₀, y₀) ∈ X × Y, f(x₀, y₀) = 0, and y ↦ Df(x₀, y₀)(0, y) is a Banach space isomorphism from Y onto Z, then there exist neighbourhoods U of x₀ and V of y₀ and a Fréchet differentiable function g : U → V such that f(x, g(x)) = 0 and f(x, y) = 0 if and only if y = g(x), for all (x, y) ∈ U × V.
So, in my spare time, I formalized it in the above form.
I do not intend to make this a PR, as it is tedious, but I hope it helps!
import Mathlib
open scoped Topology
variable {X : Type*} [NormedAddCommGroup X] [NormedSpace ℝ X] [CompleteSpace X]
variable {Y : Type*} [NormedAddCommGroup Y] [NormedSpace ℝ Y] [CompleteSpace Y]
variable {Z : Type*} [NormedAddCommGroup Z] [NormedSpace ℝ Z] [CompleteSpace Z]
example
{f : X × Y → Z} {x₀ : X} {y₀ : Y} {dfx : X →L[ℝ] Z} {dfy : Y ≃L[ℝ] Z}
(hdf : HasStrictFDerivAt f (dfx.coprod ↑dfy) (x₀, y₀)) (hf : f (x₀, y₀) = 0) :
∃ g : X → Y, (∃ dg : X →L[ℝ] Y, HasStrictFDerivAt g dg x₀) ∧
∀ᶠ p in 𝓝 (x₀, y₀), f p = 0 ↔ p.2 = g p.1 := by
let d : ImplicitFunctionData ℝ (X × Y) Z X :=
{ leftFun := f
rightFun := Prod.fst
pt := (x₀, y₀)
leftDeriv := dfx.coprod ↑dfy
left_has_deriv := hdf
rightDeriv := ContinuousLinearMap.fst ℝ X Y
right_has_deriv := hasStrictFDerivAt_fst
left_range := by
change LinearMap.range (↑(dfx.coprod ↑dfy : X × Y →L[ℝ] Z) : X × Y →ₗ[ℝ] Z) = ⊤
rw [ContinuousLinearMap.coe_coprod, LinearMap.range_coprod]
conv_lhs => arg 2; arg 1; change (↑(↑dfy : Y ≃ₗ[ℝ] Z) : Y →ₗ[ℝ] Z)
simp
right_range := Submodule.range_fst
isCompl_ker := by
change IsCompl
(LinearMap.ker (↑(dfx.coprod ↑dfy : X × Y →L[ℝ] Z) : X × Y →ₗ[ℝ] Z))
(LinearMap.ker (LinearMap.fst ℝ X Y))
rw [ContinuousLinearMap.coe_coprod]
conv => arg 1; arg 1; arg 2; change (↑(↑dfy : Y ≃ₗ[ℝ] Z) : Y →ₗ[ℝ] Z)
constructor
· rw [disjoint_iff]; ext ⟨x, y⟩
simp only [Submodule.mem_inf, LinearMap.mem_ker, LinearMap.coprod_apply,
ContinuousLinearMap.coe_coe, LinearEquiv.coe_coe, LinearMap.fst_apply,
and_comm (b := x = 0), Submodule.mem_bot, Prod.mk_eq_zero, and_congr_right_iff]
rintro rfl; simp
· rw [codisjoint_iff]; ext ⟨x, y⟩
simp only [Submodule.mem_sup, LinearMap.mem_ker, LinearMap.coprod_apply,
ContinuousLinearMap.coe_coe, LinearEquiv.coe_coe, LinearMap.fst_apply, Prod.exists,
exists_and_left, exists_eq_left, Prod.mk_add_mk, add_zero, Prod.mk.injEq,
Submodule.mem_top, iff_true]
use x, dfy.symm (dfx (-x))
refine ⟨?_, rfl, ?_⟩
· change dfx x + dfy (dfy.symm (dfx (-x))) = 0
simp
use dfy.symm (dfx x) + y
simp }
use fun x => (d.implicitFunction 0 x).2
constructor
· use -(↑dfy.symm ∘L dfx)
have hd :=
d.implicitFunction_hasStrictFDerivAt
((ContinuousLinearMap.id ℝ X).prod (-(↑dfy.symm ∘L dfx))) ?_ ?_
· replace hd := hasStrictFDerivAt_snd.comp x₀ hd
simp only [hf, ContinuousLinearMap.snd_comp_prod] at hd
exact hd
· simp
· ext x; simp
· have hd₁ := d.left_map_implicitFunction; simp only [ImplicitFunctionData.prodFun, hf] at hd₁
replace hd₁ := hd₁.curry_nhds.self_of_nhds.prod_inl_nhds y₀
have hd₂ := d.right_map_implicitFunction; simp only [ImplicitFunctionData.prodFun, hf] at hd₂
replace hd₂ := hd₂.curry_nhds.self_of_nhds.prod_inl_nhds y₀
filter_upwards [hd₁, hd₂, d.implicitFunction_apply_image] with ⟨x, y⟩ hxy₁ hxy₂ hxy₃
simp only at hxy₁ hxy₂ hxy₃ ⊢
constructor
· intro hxy₄
replace hxy₄ := congr_arg (fun z => (d.implicitFunction z x).2) hxy₄
beta_reduce at hxy₄; rwa [hxy₃] at hxy₄
· rintro rfl
convert hxy₁
exact hxy₂.symm
Kim Morrison (Jul 03 2024 at 21:17):
Why not PR it? Nothing wrong with tedious!
Ruben Van de Velde (Jul 03 2024 at 21:31):
Tedious proof or tedious PR process?
Miyahara Kō (Jul 04 2024 at 03:01):
PR process. I wrote a whack-a-mole proof, so it should be golfed, but I can't afford to do that because I'm in the middle of the project.
A. (Jul 04 2024 at 06:00):
Would you like me to help out by putting a PR together?
Miyahara Kō (Jul 04 2024 at 06:00):
@A. Yes, thank you very much!
Kim Morrison (Jul 04 2024 at 21:30):
Just a general remark --- mathematically contentful proofs shouldn't be "golfed", they should be made more readable and more structured! Golfing is only for proofs that no one will ever want to read (including someone in the far future needing to do maintenance for a breakage).
Richard Osborn (Jul 05 2024 at 14:09):
It really surprises me that this community created a positive connotation with the word "golf" when basically every other programming community uses it as a negative term.
Johan Commelin (Jul 05 2024 at 14:11):
Aren't there entire subreddits and stackexchanges dedicated to this positive activity :rofl: ?
Richard Osborn (Jul 05 2024 at 14:13):
There's also the International Obfuscated C Code Contest, but I don't think we should use it for inspiration :laughing:
Yury G. Kudryashov (Jul 08 2024 at 02:21):
When I was formalizing the implicit function theorem, I decided to prove a very general version and leave formalizing specialized versions for future, because there are tons of special cases we may need, and it's hard to guess which ones will be useful in practice.
Yury G. Kudryashov (Jul 08 2024 at 02:23):
The only question I have about "golfing" or not this proof is "are there parts of the proof that can be moved to reusable lemmas?"
A. (Jul 08 2024 at 19:53):
Thanks Yury. I do have a specific use case in mind and something like the above should work for it. (I want to talk about the evolution of the fixed point of a function defined on .)
I was thinking of
noncomputable def prodImplicitFunction {f : X × Y → Z} {x₀ : X} {y₀ : Y}
{dfx : X →L[𝕜] Z} {dfy : Y ≃L[𝕜] Z} (df₀ : HasStrictFDerivAt f (dfx.coprod dfy) (x₀, y₀)) :
X → Y := …
or possibly
noncomputable def prodImplicitFunction {f : X → Y → Z} {x₀ : X} {y₀ : Y}
{dfx : X →L[𝕜] Z} (dfx₀ : HasStrictFDerivAt (f · y₀) dfx x₀)
{dfy : Y ≃L[𝕜] Z} (dfy₀ : HasStrictFDerivAt (f x₀) (dfy : Y →L[𝕜] Z) y₀) :
X → Y := …
(or both) along with
example {f : X × Y → Z} {x₀ : X} {y₀ : Y}
{dfx : X →L[𝕜] Z} {dfy : Y ≃L[𝕜] Z} (df₀ : HasStrictFDerivAt f (dfx.coprod dfy) (x₀, y₀)) :
∀ᶠ (x, y) in 𝓝 (x₀, y₀), f (x, y) = f (x₀, y₀) ↔ y = df₀.prodImplicitFunction x := ...
example {f : X × Y → Z} {x₀ : X} {y₀ : Y}
{dfx : X →L[𝕜] Z} {dfy : Y ≃L[𝕜] Z} (df₀ : HasStrictFDerivAt f (dfx.coprod dfy) (x₀, y₀)) :
HasStrictFDerivAt df₀.prodImplicitFunction (-dfy.symm ∘L dfx) x₀ := ...
etc.
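To make the intended call sites concrete, here is a hypothetical usage sketch over ℝ for the unit circle. Note that `prodImplicitFunction` is the proposed definition above and does not exist in Mathlib; the proof is left as `sorry` since it would go through the two accompanying lemmas:
```lean
-- Hypothetical usage of the proposed `prodImplicitFunction` API (not in Mathlib):
-- solve x² + y² = 1 for y near (0, 1), where the partial derivative in y is the
-- isomorphism dy ↦ 2 * dy, and the partial derivative in x vanishes.
example (f : ℝ × ℝ → ℝ) (hf : f = fun p => p.1 ^ 2 + p.2 ^ 2 - 1)
    {dfy : ℝ ≃L[ℝ] ℝ} (df₀ : HasStrictFDerivAt f ((0 : ℝ →L[ℝ] ℝ).coprod ↑dfy) (0, 1)) :
    ∃ g : ℝ → ℝ, HasStrictFDerivAt g (0 : ℝ →L[ℝ] ℝ) 0 ∧
      ∀ᶠ p in 𝓝 ((0 : ℝ), (1 : ℝ)), f p = 0 ↔ p.2 = g p.1 :=
  -- would be ⟨df₀.prodImplicitFunction, …⟩ using the two proposed lemmas above;
  -- the derivative of g at 0 is -dfy.symm ∘L 0 = 0
  sorry
```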
A. (Jul 08 2024 at 19:54):
I also want something on the second derivative.
A. (Sep 12 2024 at 21:10):
Violeta Hernández (Sep 13 2024 at 00:53):
Kim Morrison said:
Just a general remark --- mathematically contentful proofs shouldn't be "golfed", they should be made more readable and more structured!
It really does say something about Mathlib (about all formal math in general?) that 99% of our theorems still classify as golf-worthy :stuck_out_tongue:
A. (Nov 27 2025 at 15:28):
@Winston Yin (尹維晨) I guess this stuff will be fresh in your mind, since your PR #30595, merged yesterday, was only written last month. Does it subsume mine?
A. (Nov 27 2025 at 15:32):
Oh mine has a different number since the migration-to-fork thing. It's now #26985. It finally got the thumbs-up literally hours before yours went through!
A. (Nov 27 2025 at 15:40):
There's also this dependent PR #30077 for the curried function.
Winston Yin (尹維晨) (Nov 27 2025 at 18:02):
Thanks for pinging me on this. I'm sorry that I was not aware of this thread before starting my work in July. Let me ping @Michael Rothgang and @Sébastien Gouëzel who reviewed my PR to see how we may proceed.
Winston Yin (尹維晨) (Nov 27 2025 at 18:16):
Having one's hard work subsumed by someone else's PR is a very disappointing experience, and I apologise for this. However, I believe there are parts that can be extracted from your PRs (e.g. curried version). My PR was meant to show C^n for the implicit function, so that's different from yours. I'll also mention my dependent PR #31988, which shows the local uniqueness of the implicit function. Does that overlap with anything in your PRs?
A. (Nov 27 2025 at 18:46):
Don't worry, of my many disappointments this is among the smallest! I've only brought it up here because I'm lazy. It's been such a long time since I wrote mine that it will be an effort for me to work out the relationships myself.
A. (Nov 27 2025 at 18:50):
No I didn't attempt anything on $C^n$. IIRC I had developed the idea that the best thing to do there would be to extend the Faa di Bruno stuff that exists elsewhere.
Winston Yin (尹維晨) (Nov 27 2025 at 18:52):
My C^n stuff is whatever works minimally, and some generalisation is likely possible. I'm also happy to schedule a video call where we disentangle all this. We may be able to reuse some of the type signatures of your bivariate implicit function in the curried form, reducing them to the IsContDiffImplicitAt in my PR.
Jireh Loreaux (Nov 27 2025 at 19:09):
@Winston Yin (尹維晨) and @A., yes, I still think the curried version in #26985 is worthwhile, unless I'm missing something obvious, but I'm sorry that I also only saw this thread for the first time today.
Jireh Loreaux (Dec 02 2025 at 19:44):
@Winston Yin (尹維晨) would you mind reviewing #26895 ? I would like to see it merged, and I had already delegated before this came up. Since you are so familiar with the material and its current state in Mathlib, I think you would be ideal to review it.
Winston Yin (尹維晨) (Dec 09 2025 at 11:46):
Yes, I'm happy to. I like that #26985 (btw @Jireh Loreaux you mislabelled the PR in your second message) is more general than my mathlib#IsContDiffImplicitAt.implicitFunctionData in that it doesn't assume ContDiff, and I also like its hypotheses {f₁ : E₁ →L[𝕜] F} {f₂ : E₂ ≃L[𝕜] F}. I would like to state the latter in terms of the former. @A. : Feel free to do this yourself before I attempt it myself.
A. (Dec 09 2025 at 20:58):
Great :grinning: In a week or two I should have the time to look through it myself
Winston Yin (尹維晨) (Jan 08 2026 at 17:51):
Hi @A., have you had a chance to look through this? I believe it would be good to make these suggested changes before too many things start to depend on IsContDiffImplicitAt.implicitFunctionData. Happy to work with you.
A. (Jan 08 2026 at 17:58):
Hi! I'll try to get to it tomorrow.
A. (Jan 08 2026 at 18:01):
Presumably IsContDiffImplicitAt.implicitFunctionData will continue to exist? I did think that your leftFun and rightFun should be swapped to match what is currently in implicitFunOfProdDomain.
A. (Jan 08 2026 at 18:03):
IIRC it has to be that ordering to make the derivative formula stuff work, e.g. hasStrictFDerivAt_implicitFunOfProdDomain.
Winston Yin (尹維晨) (Jan 08 2026 at 18:06):
In ImplicitContDiff.lean, I really only need contDiff_implicitFunction and contDiffAt_implicitFunction to remain. Feel free to eliminate IsContDiffImplicitAt and state the contDiffAt field as a hypothesis of these theorems.
Winston Yin (尹維晨) (Jan 08 2026 at 18:07):
IsContDiffImplicitAt.implicitFunctionData is in effect the same as your stuff but I prefer yours. Choose whichever ordering of left/right makes the most sense!
A. (Jan 10 2026 at 19:51):
It turns out you can define a chain of implicitFunctionDatas and implicitFunctions all the way back to implicitFunctionDataOfComplemented. What do you think? Don't look too closely at the proofs! (This is a long comment but I expect Zulip to fold it)
import Mathlib.Analysis.Calculus.Implicit
import Mathlib.Analysis.Calculus.InverseFunctionTheorem.ContDiff
noncomputable section Implicit -- file Analysis/Calculus/Implicit.lean
open scoped Topology
variable {𝕜 : Type*} [NontriviallyNormedField 𝕜]
namespace ImplicitFunctionData
variable {E F G H : Type*}
[NormedAddCommGroup E] [NormedSpace 𝕜 E] [CompleteSpace E]
[NormedAddCommGroup F] [NormedSpace 𝕜 F] [CompleteSpace F]
[NormedAddCommGroup G] [NormedSpace 𝕜 G] [CompleteSpace G]
[NormedAddCommGroup H] [NormedSpace 𝕜 H] [CompleteSpace H]
def implicitFunctionDataOfBijective (φ : ImplicitFunctionData 𝕜 E F G)
{h : G →L[𝕜] H} (kh : h.ker = ⊥) (rh : h.range = ⊤) : ImplicitFunctionData 𝕜 E F H :=
{ φ with
rightFun := h ∘ φ.rightFun
rightDeriv := h ∘L φ.rightDeriv
hasStrictFDerivAt_rightFun := h.hasStrictFDerivAt.comp φ.pt φ.hasStrictFDerivAt_rightFun
range_rightDeriv := by
simpa [rh] using LinearMap.range_comp_of_range_eq_top (h : G →ₗ[𝕜] H) φ.range_rightDeriv
isCompl_ker := by simpa [φ.rightDeriv.ker_comp_of_ker_eq_bot kh] using φ.isCompl_ker }
end ImplicitFunctionData
namespace HasStrictFDerivAt
variable {E₁ E₂ F : Type*}
[NormedAddCommGroup E₁] [NormedSpace 𝕜 E₁] [CompleteSpace E₁]
[NormedAddCommGroup E₂] [NormedSpace 𝕜 E₂] [CompleteSpace E₂]
[NormedAddCommGroup F] [NormedSpace 𝕜 F] [CompleteSpace F]
def implicitFunctionDataOfProdDomain₂ {u : E₁ × E₂} {f : E₁ × E₂ → F}
{f' : E₁ × E₂ →L[𝕜] F} (rf₂ : (f' ∘L .inr _ _ _).range = ⊤)
(kf₂ : (f' ∘L .inr _ _ _).ker.ClosedComplemented) (dfx : HasStrictFDerivAt f f' u) :
ImplicitFunctionData 𝕜 (E₁ × E₂) F (E₁ × (f' ∘L .inr _ _ _).ker) :=
let f₁ := f' ∘L .inl 𝕜 E₁ E₂
let f₂ := f' ∘L .inr 𝕜 E₁ E₂
let h := (ContinuousLinearMap.id 𝕜 E₁).prodMap kf₂.choose ∘L f'.ker.subtypeL
have kh : h.ker = ⊥ := by
rw [LinearMap.ker_eq_bot']
intro ⟨(v₁, v₂), hv⟩
suffices v₁ = 0 → kf₂.choose v₂ = 0 → (v₁, v₂) = 0 by simpa [h] using this
intro hv₁ hv₂
suffices f₂ v₂ = 0 by
have := (kf₂.choose_spec ⟨v₂, this⟩).symm
rw [hv₂, ← Submodule.coe_eq_zero] at this
simpa using ⟨hv₁, this⟩
simpa [hv₁] using hv
have ef' : f' = f₁.coprod f₂ := by aesop
have rf' : f'.range = ⊤ := by
rw [ef', ← top_le_iff, ← rf₂, ContinuousLinearMap.range_coprod]
exact le_sup_right
have rh : h.range = ⊤ := by
rw [LinearMap.range_eq_top] at ⊢ rf₂
intro (v₁, v₂)
let ⟨y, (hy : f₂ y = f₁ v₁)⟩ := rf₂ (f₁ v₁)
use ⟨(v₁, v₂ - (y - kf₂.choose y)), by simp [ef', hy]⟩
conv_lhs => arg 1; rw [ContinuousLinearMap.coe_comp]
rw [LinearMap.comp_apply]
conv_lhs => arg 2; change (v₁, v₂ - _)
rw [ContinuousLinearMap.coe_prodMap, LinearMap.prodMap_apply, map_sub]
conv_lhs => arg 2; rw [ContinuousLinearMap.coe_coe, kf₂.choose_spec v₂]
rw [map_sub, kf₂.choose_spec (kf₂.choose y)]
simp
have kf' : f'.ker.ClosedComplemented := by
let he : f'.ker ≃L[𝕜] E₁ × f₂.ker := ContinuousLinearEquiv.ofBijective _ kh rh
use he.symm ∘L .prodMap (.id _ _) kf₂.choose
exact ContinuousLinearEquiv.ofBijective_symm_apply_apply h kh rh
dfx.implicitFunctionDataOfComplemented f _ rf' kf' |>.implicitFunctionDataOfBijective kh rh
-- Alternatively
def implicitFunctionDataOfProdDomain₂' {x : E₁ × E₂} {f : E₁ × E₂ → F}
{f' : E₁ × E₂ →L[𝕜] F} (rf₂ : (f' ∘L .inr _ _ _).range = ⊤)
(kf₂ : (f' ∘L .inr _ _ _).ker.ClosedComplemented) (dfx : HasStrictFDerivAt f f' x) :
ImplicitFunctionData 𝕜 (E₁ × E₂) F (E₁ × (f' ∘L .inr _ _ _).ker) where
leftFun := f
rightFun := ContinuousLinearMap.prodMap (.id _ _) kf₂.choose
pt := x
leftDeriv := f'
hasStrictFDerivAt_leftFun := dfx
rightDeriv := ContinuousLinearMap.prodMap (.id _ _) kf₂.choose
hasStrictFDerivAt_rightFun := ContinuousLinearMap.hasStrictFDerivAt _
range_leftDeriv := by
rw [← top_le_iff, ← rf₂]
apply LinearMap.range_comp_le_range
range_rightDeriv := by
rw [ContinuousLinearMap.coe_prodMap, LinearMap.range_prodMap]
have : kf₂.choose.range = ⊤ := by
apply LinearMap.range_eq_of_proj
exact kf₂.choose_spec
rw [this]
simp
isCompl_ker := by
constructor
· rw [Submodule.disjoint_def]
intro (v₁, v₂) hl hr
rw [ContinuousLinearMap.coe_prodMap, LinearMap.ker_prodMap, Submodule.mem_prod] at hr
rw [Prod.mk_eq_zero]
have hv₁ : v₁ = 0 := by simpa using hr.1
use hv₁
have hv₂ : v₂ ∈ (f' ∘L .inr _ _ _).ker := by simpa [hv₁] using hl
have := kf₂.choose_spec ⟨v₂, hv₂⟩
have hr2 : kf₂.choose v₂ = 0 := by simpa using hr.2
rw [Subtype.coe_mk, hr2] at this
simpa using this.symm
· rw [Submodule.codisjoint_iff_exists_add_eq]
intro (v₁, v₂)
obtain ⟨z, hz⟩ : ∃ z : E₂, f' (v₁, z) = 0 := by
rw [Submodule.eq_top_iff'] at rf₂
refine rf₂ (-f' (v₁, 0)) |>.imp fun z hz ↦ ?_
rw [← (v₁, z).fst_add_snd, map_add]
simpa [eq_neg_iff_add_eq_zero, add_comm] using hz
rcases Submodule.codisjoint_iff_exists_add_eq.mp
(LinearMap.isCompl_of_proj kf₂.choose_spec).codisjoint (v₂ - z)
with ⟨w, t, hw, ht, hsub⟩
use (v₁, w + z)
refine ⟨(0, t), ?ker, by simpa using ht, ?add⟩
case ker =>
rwa [← zero_add v₁, ← Prod.mk_add_mk, LinearMap.mem_ker, map_add,
ContinuousLinearMap.coe_coe, hz, add_zero]
case add =>
rw [Prod.mk_add_mk, add_zero, add_right_comm w z t, hsub, sub_add_cancel]
def implicitFunctionOfProdDomain₂
{u : E₁ × E₂} {f : E₁ × E₂ → F} {f' : E₁ × E₂ →L[𝕜] F}
(rf₂ : (f' ∘L .inr _ _ _).range = ⊤) (kf₂ : (f' ∘L .inr _ _ _).ker.ClosedComplemented)
(dfx : HasStrictFDerivAt f f' u) : (E₁ × (f' ∘L .inr _ _ _).ker) → E₂ :=
fun x => ((dfx.implicitFunctionDataOfProdDomain₂ rf₂ kf₂).implicitFunction (f u) x).2
def implicitFunctionDataOfProdDomain₁ {u : E₁ × E₂} {f : E₁ × E₂ → F}
{f' : E₁ × E₂ →L[𝕜] F} (bf₂ : (⇑(f' ∘L .inr _ _ _)).Bijective)
(dfx : HasStrictFDerivAt f f' u) : ImplicitFunctionData 𝕜 (E₁ × E₂) F E₁ :=
have kf₂ : (f' ∘L .inr _ _ _).ker = ⊥ := LinearMap.ker_eq_bot.2 bf₂.injective
have kh : (ContinuousLinearMap.fst 𝕜 E₁ (f' ∘L .inr _ _ _).ker).ker = ⊥ := by
rw [LinearMap.ker_eq_bot']
intro (v₁, ⟨v₂, h₂⟩) h₁
rw [kf₂, Submodule.mem_bot] at h₂
simpa using ⟨h₁, h₂⟩
dfx.implicitFunctionDataOfProdDomain₂'
(LinearMap.range_eq_top.2 bf₂.surjective) (kf₂ ▸ Submodule.closedComplemented_bot)
|>.implicitFunctionDataOfBijective kh (LinearMap.range_eq_top.2 Prod.fst_surjective)
def implicitFunctionOfProdDomain₁ {u : E₁ × E₂} {f : E₁ × E₂ → F}
{f' : E₁ × E₂ →L[𝕜] F} (bf₂ : (⇑(f' ∘L .inr _ _ _)).Bijective)
(dfx : HasStrictFDerivAt f f' u) : E₁ → E₂ :=
fun x => ((dfx.implicitFunctionDataOfProdDomain₁ bf₂).implicitFunction (f u) x).2
def implicitFunctionOfProdDomain {u : E₁ × E₂} {f : E₁ × E₂ → F}
{f₁ : E₁ →L[𝕜] F} {f₂ : E₂ ≃L[𝕜] F} (dfx : HasStrictFDerivAt f (f₁.coprod f₂) u) : E₁ → E₂ :=
dfx.implicitFunctionOfProdDomain₁ (by simp [f₂.bijective])
theorem hasStrictFDerivAt_implicitFunctionOfProdDomain {u : E₁ × E₂} {f : E₁ × E₂ → F}
{f₁ : E₁ →L[𝕜] F} {f₂ : E₂ ≃L[𝕜] F} (dfx : HasStrictFDerivAt f (f₁.coprod f₂) u) :
HasStrictFDerivAt dfx.implicitFunctionOfProdDomain (-f₂.symm ∘L f₁) u.1 := by
set ψ' : E₁ →L[𝕜] E₂ := -f₂.symm ∘L f₁
apply HasStrictFDerivAt.snd (f₂' := (ContinuousLinearMap.id 𝕜 E₁).prod ψ')
apply (dfx.implicitFunctionDataOfProdDomain₁ (by simp [f₂.bijective])).implicitFunction_hasStrictFDerivAt
· apply ContinuousLinearMap.fst_comp_prod
· change f₁ + f₂ ∘L ψ' = 0
simp [ψ', ← ContinuousLinearMap.comp_assoc]
-- etcetera
end HasStrictFDerivAt
end Implicit
section ImplicitContDiff -- file Analysis/Calculus/ImplicitContDiff.lean
variable {𝕜 E₁ E₂ F : Type*} [RCLike 𝕜]
[NormedAddCommGroup E₁] [NormedSpace 𝕜 E₁] [CompleteSpace E₁] [NormedAddCommGroup E₂]
[NormedSpace 𝕜 E₂] [CompleteSpace E₂] [NormedAddCommGroup F] [NormedSpace 𝕜 F] [CompleteSpace F]
namespace ImplicitFunctionData
theorem contDiff_implicitFunction {φ : ImplicitFunctionData 𝕜 E₁ E₂ F} {n : WithTop ℕ∞}
(nz : n ≠ 0) (hl : ContDiffAt 𝕜 n φ.leftFun φ.pt) (hr : ContDiffAt 𝕜 n φ.rightFun φ.pt) :
ContDiffAt 𝕜 n φ.implicitFunction.uncurry (φ.prodFun φ.pt) := by
rw [implicitFunction, Function.uncurry_curry, toOpenPartialHomeomorph,
← HasStrictFDerivAt.localInverse_def]
exact (hl.prodMk hr).to_localInverse (φ.hasStrictFDerivAt |>.hasFDerivAt) nz
end ImplicitFunctionData
namespace ContDiffAt
noncomputable def implicitFunction {u : E₁ × E₂} {f : E₁ × E₂ → F} {n : WithTop ℕ∞}
(nz : n ≠ 0) (cf : ContDiffAt 𝕜 n f u) (bf₂ : (⇑(fderiv 𝕜 f u ∘L .inr 𝕜 E₁ E₂)).Bijective) :
E₁ → E₂ :=
(cf.hasStrictFDerivAt nz).implicitFunctionOfProdDomain₁ bf₂
theorem contDiffAt_implicitFunction {u : E₁ × E₂} {f : E₁ × E₂ → F} {n : WithTop ℕ∞}
(nz : n ≠ 0) (cf : ContDiffAt 𝕜 n f u) (bf₂ : (⇑(fderiv 𝕜 f u ∘L .inr 𝕜 E₁ E₂)).Bijective) :
ContDiffAt 𝕜 n (cf.implicitFunction nz bf₂) u.1 := by
have := HasStrictFDerivAt.implicitFunctionDataOfProdDomain₁ bf₂ (cf.hasStrictFDerivAt nz)
|>.contDiff_implicitFunction nz cf <| contDiffAt_fst
unfold implicitFunction HasStrictFDerivAt.implicitFunctionOfProdDomain₁
fun_prop
end ContDiffAt
end ImplicitContDiff
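As a sanity check on the intended shape of the API, here is an illustrative corollary. This is purely hypothetical: `implicitFunction` and `contDiffAt_implicitFunction` are the sketched definitions in the block above, not Mathlib declarations, and the variable context is that of the `ImplicitContDiff` section:
```lean
-- Illustrative corollary of the sketch above (not in Mathlib): a C^n function
-- whose partial derivative in the second variable is bijective admits a C^n
-- implicit function near u.
example {u : E₁ × E₂} {f : E₁ × E₂ → F} {n : WithTop ℕ∞} (nz : n ≠ 0)
    (cf : ContDiffAt 𝕜 n f u)
    (bf₂ : (⇑(fderiv 𝕜 f u ∘L .inr 𝕜 E₁ E₂)).Bijective) :
    ∃ g : E₁ → E₂, ContDiffAt 𝕜 n g u.1 :=
  ⟨cf.implicitFunction nz bf₂, cf.contDiffAt_implicitFunction nz bf₂⟩
```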
A. (Jan 10 2026 at 19:56):
The chain also includes a version desired in this thread. I see that @Scott Carnahan might also be working on it?
Winston Yin (尹維晨) (Jan 10 2026 at 20:01):
I think you should cross post in that thread as well to get more opinions, esp. from Yury.
A. (Jan 10 2026 at 20:18):
Ok. Does the ImplicitContDiff section accord with your expectations?
Winston Yin (尹維晨) (Jan 10 2026 at 21:35):
I think that's cleaner, yes!
The only thing I'm not sure about is the way bf₂ is spelt. It's good to use fderiv instead of HasFDerivAt, so one doesn't have to specify the form of the derivative f' before using this theorem, but if you are coercing the continuous linear map into a function anyway, why not just coerce fderiv 𝕜 f u directly and use function notation? I think it may make future proof obligations easier.
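Schematically, the two spellings in question, in the variable context of the code block above (the second is only the suggested alternative, not something in the sketch):
```lean
-- Assumes 𝕜, E₁, E₂, F as in the `ImplicitContDiff` section above.
variable {u : E₁ × E₂} {f : E₁ × E₂ → F}

-- Current spelling: compose with `.inr` and coerce the resulting
-- continuous linear map to a function.
variable (bf₂ : (⇑(fderiv 𝕜 f u ∘L .inr 𝕜 E₁ E₂)).Bijective)

-- Suggested spelling: apply `fderiv 𝕜 f u` directly in plain function
-- notation; extensionally the same map y ↦ fderiv 𝕜 f u (0, y).
variable (bf₂' : Function.Bijective fun y : E₂ => fderiv 𝕜 f u (0, y))
```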
Winston Yin (尹維晨) (Jan 10 2026 at 21:36):
Thanks for working on this.
Scott Carnahan (Jan 10 2026 at 23:01):
Sorry, my finger slipped on the “working on it” emoji. I am not actually working on it.
A. (Jan 14 2026 at 13:11):
Hi! I’ve checked and found that changes are incoming here and here. I think there’s very little point in my trying to rescue my PR; it’s bound to get knocked out for a third time and I won’t have learned anything. I agree the situation at the moment looks a little suboptimal but, as I can’t even remember why I wanted to work on this in the first place, I’m probably not the person to fix it.
Wrenna Robson (Jan 15 2026 at 13:46):
Kim Morrison said:
Just a general remark --- mathematically contentful proofs shouldn't be "golfed", they should be made more readable and more structured! Golfing is only for proofs that no one will ever want to read (including someone in the far future needing to do maintenance for a breakage).
You should tell AIs...
Wrenna Robson (Jan 15 2026 at 13:47):
Richard Osborn said:
It really surprises me that this community created a positive connotation with the word "golf" when basically every other programming community uses it as a negative term.
I do not think this is true.
Wrenna Robson (Jan 15 2026 at 13:47):
https://en.wikipedia.org/wiki/Code_golf
Ruben Van de Velde (Jan 15 2026 at 15:44):
I also don't think this community has given it an unconditional positive connotation
Ruben Van de Velde (Jan 15 2026 at 15:46):
I'd say it's more a strong commitment to DRY that leads to lemmas that on their own have very little content
Artie Khovanov (Jan 15 2026 at 18:04):
Golfing in the Lean sense frequently improves readability and maintainability. Long Lean proofs are often long because they have a brittle layout.
Winston Yin (尹維晨) (Jan 15 2026 at 18:05):
A. said:
Hi! I’ve checked and found that changes are incoming here and here. I think there’s very little point in my trying to rescue my PR; it’s bound to get knocked out for a third time and I won’t have learned anything. I agree the situation at the moment looks a little suboptimal but, as I can’t even remember why I wanted to work on this in the first place, I’m probably not the person to fix it.
I'm sorry that suddenly there's a lot of interest in the implicit function theorem, complicating your work. It's nice that this part of the library will get more robust support. For what it's worth, I think your code is good for Mathlib, and you've probably already learnt a lot.
Last updated: Feb 28 2026 at 14:05 UTC