Zulip Chat Archive

Stream: PhysLean

Topic: Vector calculus


ZhiKai Pong (Apr 05 2025 at 21:36):

although there is a gradient in mathlib, will it still be useful to state the usual 3D gradient? (I assume making it compatible with SpaceTime requires this anyway) so we can state things like

theorem curl_of_grad_eq_zero (x : SpaceTime)
    (A : SpaceTime → EuclideanSpace ℝ (Fin 3)) : ∇ × (∇ A) x = 0 := by
  sorry

and things like the Laplacian operator

ZhiKai Pong (Apr 05 2025 at 21:55):

/-- The gradient of a function `SpaceTime → EuclideanSpace ℝ (Fin 3)`. -/
def spaceGrad (f : SpaceTime → EuclideanSpace ℝ (Fin 3)) :
    SpaceTime → EuclideanSpace ℝ (Fin 3) := fun x j =>
  match j with
  | 0 => deriv 0 (fun y => f y 0) x
  | 1 => deriv 1 (fun y => f y 1) x
  | 2 => deriv 2 (fun y => f y 2) x

@[inherit_doc spaceGrad]
scoped[SpaceTime] notation "∇" => spaceGrad

An attempt following how spaceDiv and spaceCurl are defined.

ZhiKai Pong (Apr 10 2025 at 15:15):

I realized f should be from SpaceTime to ℝ (a scalar function)... I'll correct this in the next PR
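
Something like this shape, in plain Mathlib terms (a sketch on ℝ³ rather than SpaceTime; the name grad3 and the use of fderiv against EuclideanSpace.single are just for illustration):

import Mathlib

-- gradient of a scalar function: the i-th component is the derivative
-- of f in the i-th coordinate direction
noncomputable def grad3 (f : EuclideanSpace ℝ (Fin 3) → ℝ) :
    EuclideanSpace ℝ (Fin 3) → EuclideanSpace ℝ (Fin 3) :=
  fun x i => fderiv ℝ f x (EuclideanSpace.single i (1 : ℝ))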

ZhiKai Pong (Apr 14 2025 at 21:23):

I'm working on some vector calculus identities, should we assume ContDiff 𝕜 ⊤ f everywhere or keep all the assumptions as minimal as possible?

ZhiKai Pong (Apr 14 2025 at 21:23):

e.g. only assume ContDiff 𝕜 2 f for deriv_commute

ZhiKai Pong (Apr 14 2025 at 21:24):

Or maybe the identities should have minimal assumptions, and then we prove that the physical functions satisfy those?

Kevin Buzzard (Apr 14 2025 at 21:31):

The philosophy in mathlib is to stick to minimal assumptions as far as is reasonably possible, and in mathlib this has its advantages. At the very least it seems pointless putting more hypotheses in a theorem than the proof uses!

Joseph Tooby-Smith (Apr 15 2025 at 13:09):

I would like it if for div and curl we could have ∇ × f and ∇ ⬝ f instead of ∇× f and ∇⬝ f (note the different spacings). I couldn't find any easy way to do this with notation or macros.

Joseph Tooby-Smith (Apr 15 2025 at 13:14):

Actually never mind, something like:

macro (name := divNotation) "∇" "⬝" f:term:100  : term => `(div $f)

works. I was trying

macro (name := divNotation) "∇ ⬝" f:term:100  : term => `(div $f)

which doesn't work.
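
For reference, a self-contained toy version of the working spelling (with a stand-in div, so it runs outside PhysLean):

def div (f : ℕ → ℕ) : ℕ → ℕ := f  -- hypothetical placeholder for the real div

-- two separate atoms "∇" "⬝" parse; a single atom "∇ ⬝" does not
macro (name := divNotation) "∇" "⬝" f:term:100 : term => `(div $f)

#check ∇ ⬝ (fun n => n + 1)  -- parses with a space between `∇` and `⬝`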

ZhiKai Pong (Apr 16 2025 at 15:15):

trying to define the Laplacian, which one of ∇² and Δ is more commonly used? I'm thinking of using ∇² for the scalar and Δ for the vector Laplacian, not sure if that's a good idea.

Joseph Tooby-Smith (Apr 16 2025 at 15:26):

I don't really have an opinion here, so don't mind too much. Both seem equally as common.

But I will say (somewhat related), at some point it would be really nice to have the (scalar) Laplacian written in e.g. spherical coordinates and an easy way to go between cartesian and spherical coordinates on space. I think this will be very useful in situations with spherical symmetry.

Joseph Tooby-Smith (Apr 16 2025 at 15:28):

I think your suggestion for using ∇² for the scalar and Δ for the vector is good - though if possible I think it would be nicer to use the same notation for both

ZhiKai Pong (Apr 16 2025 at 15:51):

Joseph Tooby-Smith said:

I don't really have an opinion here, so don't mind too much. Both seem equally as common.

But I will say (somewhat related), at some point it would be really nice to have the (scalar) Laplacian written in e.g. spherical coordinates and an easy way to go between cartesian and spherical coordinates on space. I think this will be very useful in situations with spherical symmetry.

is spherical coordinates in mathlib? I found docs#polarCoord but don't seem to find spherical coordinates.

ZhiKai Pong (Apr 16 2025 at 16:01):

ok I think it's safe to say it's just not a thing in mathlib :sweat_smile: #maths > polar coordinates @ 💬 is pretty much the only discussion on this

ZhiKai Pong (Apr 16 2025 at 21:49):

Joseph Tooby-Smith said:

I think your suggestion for using ∇² for the scalar and Δ for the vector is good - though if possible I think it would be nicer to use the same notation for both

I don't think having the same notation works, lean throws an error

hmm nevermind it might work, let me do some more testing

Tomas Skrivan (Apr 16 2025 at 22:28):

There are multiple ways to have the same notation for different things. I think there are three main approaches:

  • overloaded notation
  • use notation for a specialized typeclass and provide instances for every use case. The canonical example is + and the associated type class HAdd
  • write custom elaborator, this is most flexible but requires meta programming

I think the typeclass approach should work in this case.
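
A toy version of the typeclass approach (all names hypothetical, mirroring how + dispatches through HAdd):

import Mathlib

class HLaplacian (α : Type) (β : outParam Type) where
  laplacian : α → β

prefix:100 "Δ" => HLaplacian.laplacian

-- scalar instance; a vector-Laplacian instance would sit alongside it
noncomputable instance : HLaplacian (ℝ → ℝ) (ℝ → ℝ) where
  laplacian f := deriv (deriv f)

#check fun f : ℝ → ℝ => Δ f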

Tomas Skrivan (Apr 16 2025 at 22:29):

Personally, I would reserve ∇² for the Hessian, i.e. the matrix of all second-order derivatives.

Tomas Skrivan (Apr 16 2025 at 22:30):

For Laplacian I would go with Δ

Joseph Tooby-Smith (Apr 18 2025 at 07:50):

One place it would be nice to implement these results is in
PhysLean.QuantumMechanics.OneDimension.GeneralPotential.Basic, and more generally in quantum mechanics.

I also think it may be worth making Space and Time non-reducible(?) to prevent casting from, say, ℝ to Time. One way of doing this is to make them structures, but I think there is probably a better way (I've discussed this with some people before but forgot what the conclusion was).

(Not expecting anyone to do any of this just getting it out there :) )

ZhiKai Pong (Apr 18 2025 at 15:19):

I think this is somewhat related, if I want to show Time.deriv and Space.deriv commute, is it sufficient to state that their respective underlying fderiv commute, or do we want a more "physical" approach? (I don't know what that would look like)

ZhiKai Pong (Apr 18 2025 at 15:21):

for example ∂ₜ (∇ × E) = ∇ × (∂ₜ E)

Joseph Tooby-Smith (Apr 18 2025 at 15:22):

I have no problem with you using the underlying fderiv :).

My comment about non-reduciablity is to stop you doing something like:

def oddDef (t : Time) : Space 1 := t

which would currently work because Lean tries to help by casting types - but in this case isn't helping.

Tomas Skrivan (Apr 18 2025 at 22:00):

Joseph Tooby-Smith said:

I also think it may be worth making Space, and Time non-reducible(?) to prevent casting from say to Time. One way of doing this is to make them structures, but I think there is probably a better way (I've discussed this with some people before but forgot what the conclusion was).

I understand that from a philosophical standpoint this is desirable but I would be very cautious about this. You might end up duplicating lots of API for no technical benefit, only a philosophical one.

Alex Meiburg (Apr 19 2025 at 16:45):

Hot take: the real issue here (philosophically - and this can be reflected in the Lean!) is that Space and Time aren't reals, they're AddTorsor ℝ.

That is, Time is some generic Type* equipped with an AddTorsor ℝ Time instance; Space k is some other generic Type* equipped with an AddTorsor ℝ^k (Space k) instance.

docs#AddTorsor

Alex Meiburg (Apr 19 2025 at 16:46):

This correctly reflects the fact that, for instance, there's no notion of adding a Time to a Time: I can't add last Wednesday to tomorrow to get a meaningful number. Same goes for locations in space

You can take the differences between times, though, and get a Real out, with p₁ -ᵥ p₂
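
In Lean terms (a small sketch; ℝ serves as a torsor over itself via the addGroupIsAddTorsor instance):

import Mathlib

example (p₁ p₂ : ℝ) : ℝ := p₁ -ᵥ p₂       -- the difference of two points is a vector
example (v p : ℝ) : ℝ := v +ᵥ p           -- a vector translates a point
example : AddTorsor ℝ ℝ := inferInstance  -- `addGroupIsAddTorsor`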

Alex Meiburg (Apr 19 2025 at 16:48):

And I contend that casting that Real across Space 1 and Time is actually a much smaller sin. Since it's only ever measuring distances (and not absolute positions), this is just converting distances to durations (at, presumably, the speed of light), which is a "cast" that physicists happily do all the time anyway.

Alex Meiburg (Apr 19 2025 at 16:53):

It's actually a failing that docs#deriv has the type signature

{𝕜 : Type u} [NontriviallyNormedField 𝕜] {F : Type v} [AddCommGroup F] [Module 𝕜 F] [TopologicalSpace F] (f : 𝕜 → F) (x : 𝕜) : F

when it could just as well have the type signature

{𝕜 : Type u} [NontriviallyNormedField 𝕜] {P : Type w} [AddTorsor 𝕜 P] {F : Type v} [AddCommGroup F] [Module 𝕜 F] [TopologicalSpace F] (f : P → F) (x : P) : F

which would correctly let us talk about derivatives on a torsor. Similarly, fderiv requires the domain to be an AddCommGroup E when it really should be an AddTorsor E P.

Then it would use the instance addGroupIsAddTorsor by default anyway, and basically work the same for "simple" cases.

Alex Meiburg (Apr 19 2025 at 16:59):

I guess I would be willing to try to make this change in upstream mathlib (assuming that mathlib folks are okay with it - will open thread in a sec). I recognize that doing calculus is kind of a bare minimum for a useful definition, so I wouldn't expect PhysLean to accept torsors as the basis for spacetime until you can do derivatives with them in a reasonable way... but I feel confident this is the "right" way to state (flat) spacetime!

Joseph Tooby-Smith (Apr 19 2025 at 17:07):

Hi Alex, I agree with pretty much all of this. This is somewhat related to my post about the definition of a second and how ℝ is equipped with a norm in a way that Time really shouldn't be.

I think (correct me if you disagree) that Time as a manifold should be taken as isomorphic to ℝ. (I know in some systems you take time to be isomorphic to the circle as a manifold, or a closed interval etc.) But I think taking it isomorphic to ℝ is most useful for most purposes.
So maybe defining Time as the type ℝ without any additional structure (so def not abbrev), then putting on it an instance of AddTorsor ℝ Time and any other bona-fide instances instead of just inheriting things from ℝ.

This will also make it much easier to make Time non-reducible which solves my problem.

Alex Meiburg (Apr 19 2025 at 17:17):

Yeah, I think that taking it as an AddTorsor should be logically equivalent to taking it as a simply connected manifold with zero curvature...

At least, it is a fact that AddTorsor G P always means that P is isomorphic to the group G. It just has no "canonical" origin, and so no canonical isomorphism. This is a special case of how a group is a torsor over itself, I guess.

Alex Meiburg (Apr 19 2025 at 17:23):

Other thread about mathlib

Joseph Tooby-Smith (Apr 19 2025 at 17:25):

This relies on you having the type P in the first place though, and I think it is better to have a specific type called Time, than having a variable P in all equations which can equivalently represent time (although the category theorist in me doesn't want to say this :().

In any case I think we are agreeing on the main points.
I will create an issue on PhysLean regarding this. To others: This shouldn't discourage anyone from working with Time and Space as they currently are - we can refactor later :).

Alex Meiburg (Apr 19 2025 at 17:26):

Ah yes, that's how I meant it too. :)

Tomas Skrivan (Apr 19 2025 at 21:39):

On a high level, I guess I disagree. I believe that simple things should be kept simple. A huge part of physics is classical, no relativity and no quantum physics, and adding all this complexity has very little benefit there.

I would also consider the cost this will incur on onboarding new people to the project. Someone wants to do a simple thing in classical mechanics and suddenly has to learn how manifolds are done in mathlib or what an AddTorsor is.

What I want to say is that unless you have a concrete project where this is really beneficial I would strongly recommend keeping simple things simple. I think I'm talking from experience, multiple times I tried to make something "future proof" only to realize that I didn't consider some really common case and had to refactor everything again. (To give you an example, for a very long time I tried making symbolic/automatic differentiation in SciLean also work for variational calculus, but it was adding so much complexity that I had to abandon it. I think I wasted lots of time and effort on it which could have been used elsewhere more efficiently.)

Joseph Tooby-Smith (Apr 20 2025 at 06:44):

I agree that it is wise to worry if there is going to be a cost to onboarding new people by doing this - and by making any such change in PhysLean.

I think this is something we can actually test. We have enough usages of Time and Space in classical mechanics, quantum mechanics, vector calculus etc. in PhysLean that I think we can test if this change is going to make a massive difference to its usage and the end user experience.

But the way I see this is that you will only notice this change in the definition of Time if you are doing something with Time that you shouldn't be.
For example, add two times, take the zero time, multiply two times etc.

The actual definition of Time will still be

def Time : Type := ℝ

we're just not going to inherit every instance on ℝ.
So unless you actually are doing something related to smoothness etc. you are not going to need to worry about the definition of a manifold etc.
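
In code the pattern would look something like this (a sketch; exactly which instances to register is the design question):

import Mathlib

def Time : Type := ℝ

-- selectively re-export only the instances we actually want:
instance : TopologicalSpace Time := inferInstanceAs (TopologicalSpace ℝ)
instance : AddTorsor ℝ Time := inferInstanceAs (AddTorsor ℝ ℝ)

-- but no `Add Time` or `Zero Time` instance, so `t₁ + t₂` no longer elaborates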

Kevin Buzzard (Apr 20 2025 at 22:03):

I would also consider the cost this will incur on onboarding new people to the project. Someone wants to do a simple thing in classical mechanics and suddenly has to learn how manifolds are done in mathlib or what an AddTorsor is.

For many years in the 2010s I was vehemently arguing that we should just get on and prove the fundamental theorem of calculus the way it's proved in a 1st year undergraduate course, so that 1st year undergraduates could get on formalizing analysis. There was a huge pushback from the analysts here; their argument was that the correct way to formalize integration was via some exotic integral taking values in a topological vector space, and that there was no point setting up the theory of 1-variable calculus, it should all be deduced from a multivariable theory; the argument was that multivariable calculus is not done by induction on number of variables, and there was simply no point doing the one-variable case when we knew we wanted the multivariable case. The idea was to deduce the FTC from some form of Stokes' theorem applied to an interval in the reals, with boundary the two endpoints. I argued and I argued (because I had a pile of 1st year undergraduates desperate to formalize analysis as they were being taught it) but I didn't get my way, and ultimately I think that the community made the right decision by ignoring me and doing things the way Bourbaki did them.

Tyler Josephson ⚛️ (Apr 21 2025 at 03:07):

Onboarding cost is very real. I’ve guided many scientists / engineers into Lean, and it remains a steep learning curve. But I find it interesting how it’s gotten easier over the years - better documentation helped, Mechanics of Proof helped, search engine tools like Moogle helped. When I started, it was Lean 3, and my UG ChemE student didn’t know what a ring was or anything about abstract algebra, and he had to learn it to make any headway. Now, students come in with exactly the same background, and they can make more headway because powerful tactics and other tools just help.

I’ve been curious about differential manifolds since watching a few lovely videos about thermodynamics and manifolds, but I don’t understand them, I haven’t had the time to learn that version of thermo. I intuit that the rigorous way to formalize classical thermo will be through manifolds, and that such derivations will be more elegant and more powerful than those in all the engineering textbooks. But, I’d struggle with even reading such proofs, in the end. I’d struggle to communicate such results with people in my field, and they would struggle to build on it.

Tyler Josephson ⚛️ (Apr 21 2025 at 03:24):

I wonder if PhysLean should be like Mathlib in this regard - make it as rigorous as possible, relying on present or future tactics / tools to make it accessible. Or is PhysLean just different? Kevin is glad Mathlib didn’t align with the math level of undergrads, and that’s wise because the people driving the field of mathematics forward have much more math expertise. But in science and engineering, “undergrad math” is not just a reasonable level of math for most learners, but many/most of the people driving the field forward don’t have much more math than the typical core taught to scientists and engineers. I’d like it if a grad student with that kind of background could derive a new theory in their area of research and also be able to formalize it, needing no more than perhaps a one-semester introduction to proofs in Lean.

Joseph Tooby-Smith (Apr 21 2025 at 05:22):

I agree with you Tyler. Especially at this stage when the community needs growing, anything that turns people off contributing is necessarily a bad thing.

I really hope that we can have the best of both worlds here. That we can have the rigorous definitions sitting behind, or parallel to, more conventional and easy-to-use approaches. Then someone whose work depends on the rigorous definitions can use them, whilst someone who wants to work with more down-to-earth definitions can do that.
I think this is really a case of making good API and documentation. Similar to what is currently happening around Space and Time.

To use the example of thermodynamics (which I don't know too much about - so apologies if I say something stupid). Ideally I would like to have in PhysLean both the conventional approach and the manifold approach. If the manifold approach can sit behind the conventional approach in such a way that someone using the conventional approach doesn't need to know it exists, then this is good. But I don't think we should sacrifice the usability of the conventional approach for the manifold approach - but I do think we should have the manifold approach also.

Joseph Tooby-Smith (Apr 21 2025 at 05:35):

I guess "doesn't need to know it exists" can mean a number of different things.

  • It could mean that the conventional approach is built completely separately from the manifold approach. And there may exist separate modules connecting the two.
  • It could mean that the basic definitions of the conventional approach come from the manifold approach, but enough API exists that this is hidden.

I think the exact choice of which we follow here will depend on the exact nature of the problem.

Joseph Tooby-Smith (Apr 21 2025 at 06:15):

Given we can't actually change the definition of Space and Time until
this is resolved, and that this doesn't seem to be possible anytime soon, I've made this PR improving the docs around Time and Space.

Basically it explains that we inherit instances on these types, and that this, for example, automatically gives us a metric etc., but also a few things we don't necessarily want.

ZhiKai Pong (Apr 21 2025 at 22:22):

Just writing this down somewhere to document what I've learnt the past couple days (mainly through here).

ZhiKai Pong said:

if I want to show Time.deriv and Space.deriv commute, is it sufficient to state that their respective underlying fderiv commute

Few days ago I wrote the message above as I set out to prove ∂ₜ (∇ × E) = ∇ × (∂ₜ E). It turns out this is much more complicated than what I previously conceived, so I thought it might be worth sharing. What I had in mind was pretty much just: after unfolding all definitions I should have a bunch of terms of the form
$\frac{\partial^2}{\partial x\,\partial t}=\frac{\partial^2}{\partial t\,\partial x}$
and then I just show they commute. I've already proved that differentiations along space coordinates commute ($\frac{\partial^2}{\partial x\,\partial y}=\frac{\partial^2}{\partial y\,\partial x}$), so surely this can't be that bad.

Now I'm realizing that Space and Time live in different domains, and I'm actually trying to do differentiation in ℝ × ℝ³. I think how I'll have to approach this is to turn Space and Time derivatives into SpaceTime derivatives where the domain is ℝ × ℝ³ via timeSlice, and in Lean terms this involves a whole lot of (un)currying.
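
A toy example of the (un)currying involved, where at least the round trip itself is definitional:

import Mathlib

-- currying after uncurrying is the identity, up to rfl
example (f : ℝ → EuclideanSpace ℝ (Fin 3) → ℝ) :
    Function.curry (Function.uncurry f) = f := rfl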

Many thanks to @Joseph Tooby-Smith and @Tomas Skrivan for their help and patience!

Tomas Skrivan (Apr 22 2025 at 02:13):

I got intrigued by how annoying this is, here is my sketch of proving

theorem fderiv_swap (f : X → Y → Z) (x dx : X) (y dy : Y) (hf : IsSymmSndFDerivAt 𝕜 (↿f) (x,y)) :
    fderiv 𝕜 (fun x' => fderiv 𝕜 (fun y' => f x' y') y dy) x dx
    =
    fderiv 𝕜 (fun y' => fderiv 𝕜 (fun x' => f x' y') x dx) y dy := by ...

Full code missing just the final differentiability obligations. With PR #24056 they might be solvable simply by fun_prop if you add the assumption ContDiff 𝕜 2 (↿f).

code

There are six versions of fderiv_uncurry :/

Tomas Skrivan (Apr 22 2025 at 02:19):

At the bottom of that code you can find a version of the theorem with dx and dy not applied.

theorem fderiv_swap' (f : X → Y → Z) (x : X) (y : Y) (hf : IsSymmSndFDerivAt 𝕜 (↿f) (x,y)) :
    fderiv 𝕜 (fun x' => fderiv 𝕜 (fun y' => f x' y') y) x
    =
    (fderiv 𝕜 (fun y' => fderiv 𝕜 (fun x' => f x' y') x) y).flip := ..

ZhiKai Pong (Apr 23 2025 at 16:55):

@Tomas Skrivan sorry for pinging, I have this as an intermediate step:

import Mathlib

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)

lemma fderiv_coord (f : ℝ → ℝ³ → ℝ³) (t dt : ℝ) (x : ℝ³) :
    (fderiv ℝ (fun t' => f t' x i) t) dt = (fderiv ℝ f t) dt x i := by
  sorry

codomain of the left fderiv is (fun x ↦ ℝ) i and the codomain of the right fderiv is ℝ³ → ℝ³. This manages to typecheck somehow, but how do I proceed to prove this?

ZhiKai Pong (Apr 23 2025 at 16:57):

(I think I'm getting the gist of what your previous message is doing, but it's getting quite complicated in my implementation due to all the coordinate functions, i.e. EuclideanSpace.single)

Tomas Skrivan (Apr 23 2025 at 17:13):

I'm afraid that with the current mathlib you can't really prove this, exactly because the codomain on the rhs is ℝ³ → ℝ³. You can state it only because fderiv was generalized to general topological vector spaces, but 99% of the theorems have not been generalized and are formulated using normed vector spaces, and ℝ³ → ℝ³ is not a normed vector space.

So my question is what is the original problem that got you to this lemma? Maybe you can reformulate the original problem differently.

I will check with @Yury G. Kudryashov what his thoughts on this are, as he is the one generalizing fderiv to topological vector spaces.

ZhiKai Pong (Apr 23 2025 at 17:39):

Tomas Skrivan said:

So my question is what is the original problem that got you to this lemma? Maybe you can't reformulate the original problem differently.

code

I think the problem stems from the original form of the statement itself?

In PhysLean the corresponding statement is

lemma time_deriv_curl (fₜ : Time → Space → EuclideanSpace ℝ (Fin 3)) (t : Time) (x : Space) :
    ∂ₜ (fun t => (∇ × fₜ t) x) t = (∇ × (∂ₜ fₜ t)) x := by

if that gives any insight.

Tomas Skrivan (Apr 23 2025 at 17:56):

Instead of curl (fderiv ℝ (fun t' => (fₜ t')) t 1) x I think you should have curl (fun x' => fderiv ℝ (fun t' => (fₜ t' x')) t 1) x.

The PhysLean statement has the same issue here (∇ × (∂ₜ fₜ t)) x as you are differentiating fₜ which has codomain Space → EuclideanSpace ℝ (Fin 3). I do not know how to reformulate it "correctly" with the PhysLean's fancy notation (correctly is relative here as we are fighting mathlib's deficiency).

You can of course write (∇ × (fun x' => ∂ₜ (fun t' => fₜ t' x') t)) x but it effectively loses the point of the nice notation.

In SciLean's derivative notation I would write ∇× (x':=x), ∂ (t':=t), fₜ t' x' except there is no ∇× operator defined right now. But there is notation like this for all kinds of derivatives ∂ ∇ ∂> <∂ (derivative, gradient, forward mode AD, reverse mode AD).

ZhiKai Pong (Apr 23 2025 at 18:19):

With this change
Tomas Skrivan said:

Instead of curl (fderiv ℝ (fun t' => (fₜ t')) t 1) x I think you should have curl (fun x' => fderiv ℝ (fun t' => (fₜ t' x')) t 1) x.

I'm still left with

(fderiv ℝ (fun t' => ft t' x' i) t) dt = (fderiv ℝ (fun t' => ft t' x) t) dt i

Maybe this is doable with fderiv_pi, let me try

ZhiKai Pong (Apr 23 2025 at 18:25):

ah ok this works! thanks
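
For the record, roughly the step fderiv_pi enables here, stated over the plain pi type Fin 3 → ℝ for simplicity (fderiv_pi is stated for pi types, and ContinuousLinearMap.pi_apply finishes it off):

import Mathlib

example (f : ℝ → (Fin 3 → ℝ)) (t dt : ℝ) (i : Fin 3)
    (hf : ∀ j, DifferentiableAt ℝ (fun t' => f t' j) t) :
    (fderiv ℝ (fun t' => f t' i) t) dt = (fderiv ℝ f t) dt i := by
  -- rewrite the derivative of the pi-valued function componentwise
  rw [fderiv_pi hf, ContinuousLinearMap.pi_apply]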

ZhiKai Pong (Apr 23 2025 at 18:53):

is there an idiomatic way to state the assumption that leads to DifferentiableAt ℝ (fun t' => f t' x i) t?

Edit: I think hf : Differentiable ℝ (↿f) works
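
For reference, ↿ is the Function.HasUncurry notation; on a two-argument function it is just uncurrying, definitionally:

import Mathlib

example (f : ℝ → ℝ → ℝ) (p : ℝ × ℝ) : (↿f) p = f p.1 p.2 := rfl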

ZhiKai Pong (Apr 23 2025 at 20:45):

@Tomas Skrivan I don't know how bad this is, can your PR (#24056) handle this?
I can imagine doing differentiability arguments on curried functions might be a terrible idea...

import Mathlib

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)

example (ft : ℝ → ℝ³ → ℝ³) (hf : ContDiff ℝ 2 ↿ft) :
    DifferentiableAt ℝ (fun t =>
    (fderiv ℝ (fun x => ft t x 2) x) (EuclideanSpace.single 1 1)) t := by
  apply Differentiable.clm_apply
  · sorry
  · fun_prop

Tomas Skrivan (Apr 23 2025 at 20:48):

I think fun_prop should be able to do this, it is definitely designed to handle assumptions on uncurried functions. Right now I'm not at my computer so I can't test it

ZhiKai Pong (Apr 23 2025 at 21:12):

I'm not sure what I should do here, should I just wait for your PR to go through or will that take a long time?
Or is it best to add the new lemmas manually for the time being and then refactor later once the PR goes live?

Anyway, this is no hurry at all, so please don't feel pressured if you're busy, you are already extremely helpful and I appreciate it a lot :)

Tomas Skrivan (Apr 24 2025 at 00:52):

No idea what is the best approach. You can create a PR that depends on that PR, but you won't have it in PhysLean unless you make a special branch that depends on that PR.

You (or I) could also harass someone to review it and merge it. Maybe I can make a topic in #mathlib4 about it to draw more attention. I don't know :man_shrugging:

Joseph Tooby-Smith (Apr 24 2025 at 06:42):

Sorry I've only managed to vaguely follow this conversation.

One option would be to, for the time being, make a definition of a Prop corresponding to the lemma you can't prove yet. Then

  1. carry around this Prop as an assumption in further results.
  2. Add that this Prop needs a proof, and that we are waiting on the relevant PR.

Joseph Tooby-Smith (Apr 24 2025 at 07:36):

Ok, this is not the nicest, and may not be what you actually want here, but this may hold over until we have this PR.

Code

PhysLean already has fderiv_uncurry in the TimeSlice file because I needed this there.

ZhiKai Pong (Apr 24 2025 at 18:55):

I realized that the fderiv_swap occurs at ℝ × ℝ³, but then what I actually need is ℝ × (fun x ↦ ℝ) i since I'm working with fin_cases...

what I mean by that is I ended up with this:

import Mathlib

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)

example (f : ℝ → ℝ³ → ℝ³) (x dx : ℝ³) (t dt : ℝ)
    (h : (fderiv ℝ (fun t' => (fderiv ℝ (fun x' => f t' x') x) dx) t) dt
    =
    (fderiv ℝ (fun x => (fderiv ℝ (fun x' => f x' x) t) dt) x) dx) :
    (fderiv ℝ (fun t' => (fderiv ℝ (fun x' => f t' x' i) x) dx) t) dt
    =
    (fderiv ℝ (fun x' => (fderiv ℝ (fun t' => f t' x' i) t) dt) x) dx := by
  sorry

I think I'll have to reformulate the fderiv_uncurry and fderiv_wrt_prod to handle the FinType explicitly? I'll have a go but let me know if there's a simpler way to do this directly on what's given above...

ZhiKai Pong (Apr 24 2025 at 19:10):

have hf' : IsSymmSndFDerivAt ℝ (↿f) (t,x) :=
    sorry

ok I think it's just that this^ statement has to be changed to be in terms of the FinType, I'm just not sure how to state this

full code

Joseph Tooby-Smith (Apr 24 2025 at 20:14):

Wouldn't fderiv_pi work here? (may have to flip the arguments of f)

Joseph Tooby-Smith (Apr 24 2025 at 20:38):

Is something like the following what you are looking for?

code

Joseph Tooby-Smith (Apr 24 2025 at 20:54):

I think all the remaining sorries (except the first, which is easy) here could be relatively straightforward applications of the same principle I used: define functions as compositions of functions which are known to be differentiable.

ZhiKai Pong (Apr 24 2025 at 21:13):

oh my, thanks a lot... I'll try to understand this and clean this up over the next few days.

Just curious, may I ask whether this is annoying and complicated for you as well, or is it more or less straightforward with experience?

Kevin Buzzard (Apr 24 2025 at 21:24):

For some of us, solving these complicated problems is part of the fun :-)

Joseph Tooby-Smith (Apr 24 2025 at 21:39):

Yeah I agree with Kevin - it's all part of the fun :)

I would say this is probably something that comes with time though, you get to know where to look, what sort of things are likely to work and what things are likely to be called etc. These all make some of the more annoying bits easier, and probably more enjoyable. Getting to know how to use loogle effectively helped me an awful lot.

Joseph Tooby-Smith (Apr 25 2025 at 07:02):

Here is a complete proof of fderiv_swap. It is very much a mess, but is sorry-free. It can very likely be broken down into a series of smaller lemmas.

The code below needs the preliminary material of the code in my last but one message (it goes over the character limit if I include it here).

Code

ZhiKai Pong (Apr 25 2025 at 13:41):

@Joseph Tooby-Smith

(fderiv 𝕜 (↿f) xy) dxy = (fderiv 𝕜 (Function.uncurry f) xy) dxy

this is resolvable by rfl but rw can't see through this. Which version do we want?

Joseph Tooby-Smith (Apr 25 2025 at 13:54):

IMO I think the former is better.

ZhiKai Pong (Apr 25 2025 at 16:00):

variable {k : Type} [NontriviallyNormedField k]
    {N W M : Type} [NormedAddCommGroup M] [NormedSpace k M]
    [NormedAddCommGroup N] [NormedSpace k N]
    [NormedAddCommGroup W] [NormedSpace k W]

In a declaration like this, is it more idiomatic to write {k : Type} or just {k}?

Joseph Tooby-Smith (Apr 25 2025 at 17:26):

I prefer the former i.e. {k : Type}. My brain has to do slightly less thinking to process it :).

ZhiKai Pong (Apr 25 2025 at 17:48):

@Tomas Skrivan may I ask why is docs#Differentiable.comp' in Mathlib.Tactic.FunProp.Differentiable but not in Mathlib.Analysis.Calculus.FDeriv.Comp like docs#Differentiable.comp? I tried to find the former in FDeriv.Comp and only realized they're in a different directory later on...

Tomas Skrivan (Apr 25 2025 at 17:50):

Ohh that is for historical reasons, that file should be removed and those theorems moved/removed.

The reason was that, while developing fun_prop, I didn't have to recompile the whole of mathlib when I made changes to the fun_prop tactic.

ZhiKai Pong (Apr 25 2025 at 17:53):

I was puzzled for a while why it runs fine on the web editor but not on my own setup until I realized I didn't have the right imports :melting_face:

ZhiKai Pong (Apr 26 2025 at 15:14):

@Tomas Skrivan does this actually require x and y to be real?

import Mathlib

variable {𝕜 : Type} [NontriviallyNormedField 𝕜]
    {X Y Z : Type} [NormedAddCommGroup X] [NormedSpace 𝕜 X]
    [NormedAddCommGroup Y] [NormedSpace 𝕜 Y]
    [NormedAddCommGroup Z] [NormedSpace 𝕜 Z]

example (x : X) (y : Y) : DifferentiableAt 𝕜 (fderiv 𝕜 fun y' => (x, y')) y := by
  fun_prop --doesn't work

example (x : ℝ) (y : ℝ) : DifferentiableAt ℝ (fderiv ℝ fun y' => (x, y')) y := by
  fun_prop --goes through analytic and requires CompleteSpace (X × Y)

ZhiKai Pong (Apr 26 2025 at 15:45):

import Mathlib

variable {𝕜 : Type} [NontriviallyNormedField 𝕜]
    {X Y Z : Type} [NormedAddCommGroup X] [NormedSpace 𝕜 X]
    [NormedAddCommGroup Y] [NormedSpace 𝕜 Y]
    [NormedAddCommGroup Z] [NormedSpace 𝕜 Z]

example (x : X) (y : Y) : DifferentiableAt 𝕜 (fderiv 𝕜 fun y' => (x, y')) y := by
  have hl (y' : Y) : (fderiv 𝕜 fun y' => (x, y')) y' = ContinuousLinearMap.inr 𝕜 X Y := by
    rw [(hasFDerivAt_prodMk_right x y').fderiv]
  conv_lhs =>
    enter [y]
    rw [hl]
  fun_prop

managed to prove it, probably just a matter of missing API then

Tomas Skrivan (Apr 26 2025 at 16:43):

Nice! I didn't test it but I would expect that this should be provable with #24056 by using just fun_prop

ZhiKai Pong (May 01 2025 at 19:11):

variable {𝕜 : Type} [NontriviallyNormedField 𝕜]
    {X Y Z : Type} [NormedAddCommGroup X] [NormedSpace 𝕜 X]
    [NormedAddCommGroup Y] [NormedSpace 𝕜 Y]
    [NormedAddCommGroup Z] [NormedSpace 𝕜 Z]

lemma ContDiff.uncurry (f : X → Y → Z) (x : X) (hf : ContDiff 𝕜 2 ↿f) :
    ContDiff 𝕜 2 (f x) := by
  sorry

this looks like it's straightforward but I can't figure out how to proceed, suggestions appreciated

Joseph Tooby-Smith (May 01 2025 at 19:17):

I would expect something like the following:
f x is the composite of ↿f and the inclusion Y → X × Y taking y to (x, y). Then something like a comp property of ContDiff.
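
Concretely, a sketch along those lines (same variable block as in the snippet above; the smoothness of the inclusion is discharged with fun_prop):

import Mathlib

variable {𝕜 : Type} [NontriviallyNormedField 𝕜]
    {X Y Z : Type} [NormedAddCommGroup X] [NormedSpace 𝕜 X]
    [NormedAddCommGroup Y] [NormedSpace 𝕜 Y]
    [NormedAddCommGroup Z] [NormedSpace 𝕜 Z]

example (f : X → Y → Z) (x : X) (hf : ContDiff 𝕜 2 ↿f) :
    ContDiff 𝕜 2 (f x) := by
  -- f x is definitionally ↿f composed with the inclusion y ↦ (x, y)
  have h : f x = ↿f ∘ (fun y => (x, y)) := rfl
  rw [h]
  exact hf.comp (by fun_prop)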

ZhiKai Pong (May 02 2025 at 15:49):

I guess this is a more general question rather than vector calculus specific but might as well ask here:

lemma curl_smul (f : Space → EuclideanSpace ℝ (Fin 3)) (k : ℝ)
    (hf : Differentiable ℝ f) :
    ∇ × (k • f) = k • ∇ × f

I have the above, if I have something of the form

(∇ × (k • f)) x = k • (fun x => (∇ × f) x) x

is it possible to use curl_smul or do I have to define something separately like the following?

lemma curl_smul_apply (f : Space → EuclideanSpace ℝ (Fin 3)) (k : ℝ)
    (hf : Differentiable ℝ f) (x : Space) :
    (∇ × (k • f)) x = k • (fun x => (∇ × f) x) x

Joseph Tooby-Smith (May 02 2025 at 16:01):

You should be able to use curl_smul to rewrite

(∇ × (k • f)) x = (k • ∇ × f) x

I think then this should be definitionally equal to k • (fun x => (∇ × f) x) x
So e.g.

rw [curl_smul, show (k • ∇ × f) x = k • (fun x => (∇ × f) x) x from rfl]

might work.

ZhiKai Pong (May 02 2025 at 16:25):

ah I think I figured it out, lean parses the right hand side as k • ((∇ × f) x), and I have to explicitly tell it to parse this as (k • (∇ × f)) x... thanks!
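
The bridging fact here is Pi.smul_apply; for plain function types the two readings agree definitionally:

import Mathlib

-- applying the scaled function vs scaling the applied value
example (k : ℝ) (g : ℝ → ℝ) (x : ℝ) : (k • g) x = k • g x := rfl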

Tomas Skrivan (May 02 2025 at 16:59):

@ZhiKai Pong this topic might be an interesting read; it discusses the convention fun x => k * f x vs k * f and similar. https://leanprover.zulipchat.com/#narrow/stream/287929-mathlib4/topic/Convention.20for.20.60f.20*.20g.60.20and.20.2F.20or.20.60fun.20x.20.E2.86.A6.20f.20x.20*.20g.20x.60

ZhiKai Pong (May 05 2025 at 15:01):

I'm looking at trying to state Stokes' theorem in this or some similar form (if I understand correctly, mathlib is still quite some way from having the differential forms version):
https://people.bath.ac.uk/masrs/ma20010/stokesproofs.pdf

there's not much in the integration chapter of #mil and I don't actually know much about measure theory, so I have no idea how difficult this is. I'd like to ask how feasible this would be, and also if anyone is more familiar with the integration-related part of mathlib.

I found docs#MeasureTheory.integral_divergence_of_hasFDerivWithinAt_off_countable which looks somewhat useful, but would appreciate pointers as to where I should start looking at.

Joseph Tooby-Smith (May 05 2025 at 15:15):

There is some integration stuff in this file:

https://github.com/HEPLean/PhysLean/blob/master/PhysLean/QuantumMechanics/OneDimension/HarmonicOscillator/Completeness.lean

which might be worth taking a look at.

Joseph Tooby-Smith (May 05 2025 at 18:36):

I'm not sure of the difference, but there are also Stokes-like theorems in this file

ZhiKai Pong (May 07 2025 at 13:09):

@Joseph Tooby-Smith I think I'm still not quite fully understanding how HasFDerivAt works.

looking at docs#fderiv_eq, it seems to me that the (f₀' x) term in (h' : ∀ x, HasFDerivAt f₀ (f₀' x) x) is just to make sure fderiv 𝕜 f₀ = f₀' type checks, but it should in fact be the same x? In this case, I'm confused by where the 1 in the second derivative (h'' : ∀ x, HasFDerivAt (fun x => f₀' x 1) (f₀'' x) x) comes from, or whether it should be some other expression.

Playing around with the code I have (h'' : ∀ x x'', HasFDerivAt (fun x' => f₀' x' x'') (f₀'' x) x) that kind of works, but I've completely lost track of the mathematical meaning.

ZhiKai Pong (May 07 2025 at 13:14):

currently I have this end goal:

import Mathlib

variable {d : ℕ} (f₀'' : ℝ → ℝ →L[ℝ] EuclideanSpace ℝ (Fin d))

example (i j : Fin d) (x s : EuclideanSpace ℝ (Fin d)) (c t : ℝ) :
    ∑ j, inner ((f₀'' (inner x s - c * t)) (inner (EuclideanSpace.single j (1:ℝ)) s))
    (EuclideanSpace.single i (1:ℝ)) = (f₀'' (inner x s - c * t)) 1 i := by
  sorry

but this doesn't really make sense to me. (Not really an mwe; still figuring out what conditions I'm missing to make this compile without errors.)

Joseph Tooby-Smith (May 07 2025 at 13:26):

Let me try and break down my understanding of things:

fderiv 𝕜 f has type ℝ → ℝ →L[ℝ] EuclideanSpace ℝ (Fin d). The first ℝ in this type is the position where we want the derivative. The second ℝ is the direction in which we want the derivative. For ℝ there is only really one direction we would want to take a derivative in, so we use 1 for this second ℝ.
Thus fderiv 𝕜 f x 1 is the derivative of f at the position x projected along 1 - this is the normal derivative of f. This mirrors the definition of docs#deriv.

Now turning to HasFDerivAt. The condition ∀ x, HasFDerivAt f (f' x) x says that f has a derivative equal to f' x at x. In other words, that fderiv 𝕜 f x = f' x, or
that fderiv 𝕜 f = f'.
Note that f' x does not yet specify the direction of that derivative, it is a function ℝ →L[ℝ] EuclideanSpace ℝ (Fin d). We only take the derivatives in the direction 1 so actually care about f' x 1. One could probably use docs#HasDerivAt here instead, and not carry around this direction.

Given this: (h'' : ∀ x, HasFDerivAt (fun x => f' x 1) (f'' x) x) says that the function (fun x => f' x 1) (that is the derivative of f in the direction 1) has derivative at x equal to f'' x, for all x. Again f'' x does not specify a direction so is a function ℝ →L[ℝ] EuclideanSpace ℝ (Fin d) and thus we will really get f'' x 1 appearing.
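
The deriv/fderiv relationship described above is definitional in Mathlib:

import Mathlib

example (f : ℝ → ℝ) (x : ℝ) : deriv f x = fderiv ℝ f x 1 := rfl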

Joseph Tooby-Smith (May 07 2025 at 13:32):

Concerning your mwe, I wouldn't expect something like f'' (inner x s - c * t) _ to appear where _ is equal to (inner (EuclideanSpace.single j (1:ℝ)) s); namely, the presence of s in this expression worries me.

ZhiKai Pong (May 21 2025 at 22:01):

import Mathlib

variable (f : EuclideanSpace ℝ (Fin 3) → EuclideanSpace ℝ (Fin 3))

example {s : EuclideanSpace ℝ (Fin 3)} (h : ∀ x, inner ℝ (fderiv ℝ f x s) s = 0)
    (hf : Differentiable ℝ f) :
    ∃ C, ∀ x, inner ℝ (f x - C) s = 0 := by
  have h' : ∀ x, fderiv ℝ (fun x => (inner ℝ (f x) s)) x s = 0 := by
    intro x
    rw [fderiv_inner_apply]
    simp [-PiLp.inner_apply]
    rw [h]
    repeat fun_prop
  apply is_const_of_fderiv_eq_zero at h' --I want this but only restricted in direction of s

I've been stuck at this for a while, couldn't find any equivalent of docs#is_const_of_fderiv_eq_zero for a specific direction. docs#lineDeriv looks like it might be useful but I couldn't find anything. Any suggestions on where to look and what might be useful? Not sure if I'm missing something very obvious from Mathlib or this is not as trivial as it looks

Joseph Tooby-Smith (May 22 2025 at 05:22):

I do not think this result is true.

To give an explicit example, and using $(x,y,z)$ instead of $x$ for a position in EuclideanSpace ℝ (Fin 3): let $f(x,y,z) = (y, x, 0)$ and let $s = e_y$. Then fderiv ℝ f (x, y, z) s is $e_x$. Thus h is satisfied. But the goal is not true, since the goal would imply that inner ℝ (f (x, y, z)) s = x is a constant, which it is not.

Joseph Tooby-Smith (May 22 2025 at 05:54):

Correct me if I'm wrong but I think the statement you are trying to prove is: If f is constant along the direction s then ∃ C, ∀ x, inner ℝ (f x - C) s = 0. The statement that f is constant along the direction s is equivalent to h : ∀ y, ∀ x, inner ℝ (fderiv ℝ f x y) s = 0. Then

import Mathlib

variable (f : EuclideanSpace ℝ (Fin 3) → EuclideanSpace ℝ (Fin 3))

example {s : EuclideanSpace ℝ (Fin 3)} (h : ∀ y, ∀ x, inner ℝ (fderiv ℝ f x y) s = 0)
    (hf : Differentiable ℝ f) :
    ∃ C, ∀ x, inner ℝ (f x - C) s = 0 := by
  have h' : ∀ x, fderiv ℝ (fun x => (inner ℝ (f x) s)) x = 0 := by
    intro x
    ext y
    rw [fderiv_inner_apply]
    simp [-PiLp.inner_apply]
    rw [h]
    repeat fun_prop
  apply is_const_of_fderiv_eq_zero at h'
  · use f 0
    intro x
    rw [inner_sub_left, h' 0 x]
    simp
  · sorry

where the goal at sorry is of the form Differentiable _ _.

In my example of f in my previous message, it does not satisfy my condition h but does satisfy yours. It is also not constant in the direction s.

ZhiKai Pong (May 22 2025 at 15:57):

Joseph Tooby-Smith said:

The statement that f is constant along the direction s is equivalent to h : ∀ y, ∀ x, inner ℝ (fderiv ℝ f x y) s = 0.

This is precisely what I was getting wrong, thanks!

ZhiKai Pong (May 22 2025 at 17:29):

import Mathlib

variable (f : ℝ → EuclideanSpace ℝ (Fin 3) → EuclideanSpace ℝ (Fin 3))

example {s : EuclideanSpace ℝ (Fin 3)} (h : ∀ t x, fderiv ℝ (fun t => f t x) t = 0)
    (hf : Differentiable ℝ f) :
    ∃ C, ∀ t x, f t x = C := by
  apply is_const_of_fderiv_eq_zero at h

If f takes two different variables how should I apply is_const_of_fderiv_eq_zero ?

ZhiKai Pong (May 22 2025 at 17:31):

Or is this not true and I have to state it as ∀ x, ∃ C, ∀ t, f t x = C? trying to wrap my head around the quantifiers

ZhiKai Pong (May 22 2025 at 17:35):

Previously I just left the quantifiers out and assumed it works, but Kevin convinced me that they mean different things and I'm now realizing that it is indeed causing me trouble

Joseph Tooby-Smith (May 22 2025 at 17:35):

The hypothesis ∀ t x, fderiv ℝ (fun t => f t x) t = 0 states that for a fixed x the function t => f t x is constant. For example $f(t, x) = x$ satisfies this but does not satisfy your goal.

I believe ∀ x, ∃ C, ∀ t, f t x = C is correct here.

Joseph Tooby-Smith (May 22 2025 at 17:40):

The result you want to show is something along the following lines? If f t x is a function whose derivative with respect to time is zero everywhere, then the function f t x is independent of time.

Tomas Skrivan (May 22 2025 at 17:41):

Only the differentiability assumption hf should be stated as ∀ x, Differentiable ℝ (f · x), or more likely that the uncurried f is differentiable, i.e. differentiable in t and x jointly.

ZhiKai Pong (May 22 2025 at 17:44):

Joseph Tooby-Smith said:

The result you want to show is something along the following lines? If f t x is a function whose derivative with respect to time is zero everywhere, then the function f t x is independent of time.

I think I want it to be constant with respect to both space and time, and now I think I have to differentiate with respect to both and combine the results

ZhiKai Pong (May 23 2025 at 13:22):

I think ultimately I want something like this:

code

where f is either ElectricField or MagneticField. What's the most idiomatic way to state that ElectricField or MagneticField is linear? Do I show it follows from Maxwell's equations? I'm not quite sure how.

Joseph Tooby-Smith (May 23 2025 at 13:26):

I don't see any reason why an ElectricField or MagneticField should be a linear function from space-time to the vector space of electric or magnetic fields.

ZhiKai Pong (May 23 2025 at 13:32):

Yea after typing that out I realized that's obviously false. I think saying ∃ C, ∀ t x, f t x = C in the context of EM waves is the correct statement, as in the constant field is independent of space and time. I'm trying to sketch out what's needed to reach this final statement, and whether what's shown in the above code is sufficient or the right approach.

Joseph Tooby-Smith (May 23 2025 at 13:34):

I agree with this, and I think your code snippet above should be true.

ZhiKai Pong (May 23 2025 at 13:34):

how do I close the final step then?

Joseph Tooby-Smith (May 23 2025 at 13:37):

import Mathlib

variable (f : ℝ → EuclideanSpace ℝ (Fin 3) → EuclideanSpace ℝ (Fin 3))

example (ht : ∀ t x, fderiv ℝ (fun t => f t x) t = 0)
    (hx : ∀ t x, fderiv ℝ (fun x => f t x) x = 0)
    (hf : Differentiable ℝ ↿f) :
    ∃ C, ∀ t x, f t x = C := by
  have hct : ∃ (c : EuclideanSpace ℝ (Fin 3) → EuclideanSpace ℝ (Fin 3)), ∀ t, f t = c := by
    use f 0
    intro t
    funext x
    have ht' := fun t => ht t x
    apply is_const_of_fderiv_eq_zero at ht'
    rw [ht' t 0]
    fun_prop
  obtain ⟨c, hc⟩ := hct
  use c 0
  intro t
  rw [hc t]
  have hc_deriv : ∀ x, fderiv ℝ (fun x => c x) x = 0 := by
    rw [← hc t]
    exact fun x => hx t x
  apply is_const_of_fderiv_eq_zero at hc_deriv
  · intro x
    exact hc_deriv x 0
  · rw [← hc 0]
    fun_prop

Joseph Tooby-Smith (May 23 2025 at 13:38):

(basically just a slight reworking of what you had)

ZhiKai Pong (May 23 2025 at 13:40):

ahhh... I was overthinking it too much, thanks!!

Joseph Tooby-Smith (May 23 2025 at 13:42):

No worries :slight_smile:

ZhiKai Pong (May 23 2025 at 14:28):

@Tomas Skrivan fun_prop solves the second one but not the first one, is this to be expected?

import Mathlib

variable {𝕜 : Type} [NontriviallyNormedField 𝕜]
    {X Y Z : Type} [NormedAddCommGroup X] [NormedSpace 𝕜 X]
    [NormedAddCommGroup Y] [NormedSpace 𝕜 Y]
    [NormedAddCommGroup Z] [NormedSpace 𝕜 Z]

set_option trace.Meta.Tactic.fun_prop true

lemma ContDiff.two_differentiable (f : X → Y → Z) (hf : ContDiff 𝕜 2 ↿f) :
    Differentiable 𝕜 (↿f) := by
  fun_prop --`fun_prop` was unable to prove `Differentiable 𝕜 ↿f`

lemma ContDiff.two_fderiv_differentiable (f : X → Y → Z) (hf : ContDiff 𝕜 2 ↿f) :
    Differentiable 𝕜 (fderiv 𝕜 (↿f)) := by
  fun_prop

Tomas Skrivan (May 23 2025 at 14:33):

There has to be a theorem missing, what is set_option trace.Meta.Tactic.fun_prop true saying?

ZhiKai Pong (May 23 2025 at 14:35):

should ContDiff.differentiable be tagged or is it intentionally left out?

ZhiKai Pong (May 23 2025 at 14:36):

Tomas Skrivan said:

There has to be a theorem missing, what is set_option trace.Meta.Tactic.fun_prop true saying?

`fun_prop` was unable to prove `Differentiable 𝕜 ↿f`

Issues:
  No theorems found for `f` in order to prove `Differentiable 𝕜 fun a => f a.1 a.2`
mathlib-demo.lean:12:2
[Meta.Tactic.fun_prop] [❌️] Differentiable 𝕜 ↿f 
  [] candidate local theorems for f #[]
  [] candidate transition theorems: [@IsBoundedBilinearMap.differentiable,
       Differentiable.restrictScalars,
       @IsBoundedLinearMap.differentiable,
       @differentiable_one]
  [] [❌️] applying: IsBoundedBilinearMap.differentiable 
  [] [❌️] applying: Differentiable.restrictScalars 
  [] [❌️] applying: IsBoundedLinearMap.differentiable 
  [] [❌️] applying: ContDiff.differentiable_one 

Tomas Skrivan (May 23 2025 at 14:37):

That is intentional, there is ContDiff.differentiable_one. fun_prop can't figure out the n in ContDiff.differentiable.

Tomas Skrivan (May 23 2025 at 14:39):

  [] [❌️] applying: ContDiff.differentiable_one 

This branch should solve the goal, can you expand it?

ZhiKai Pong (May 23 2025 at 14:39):

expanding error message for ContDiff.differentiable_one:

[] [❌️] applying: ContDiff.differentiable_one 
    [] [❌️] ContDiff 𝕜 1 fun a => f a.1 a.2 
      [] candidate local theorems for f #[hf : ContDiff 𝕜 2 ↿f]
      [] [❌️] applying: hf : ContDiff 𝕜 2 ↿f 
        [] failed to unify hf
            ContDiff 𝕜 2 ↿f
            with
            ContDiff 𝕜 1 fun a => f a.1 a.2
      [] maximum transition depth (1) reached
              if you want `fun_prop` to continue then increase the maximum depth with `fun_prop (config := {maxTransitionDepth := 2})`
    [] @differentiable_one, failed to discharge hypotheses
          ContDiff 𝕜 1 fun a => f a.1 a.2

Tomas Skrivan (May 23 2025 at 14:42):

Ahh, you need to increase maxTransitionDepth; there are two transitions: Differentiable -> ContDiff 1 -> ContDiff 2

Tomas Skrivan (May 23 2025 at 14:43):

fun_prop (maxTransitionDepth := 2) (disch:=aesop) should solve the goal

Tomas Skrivan (May 23 2025 at 14:44):

Or with whatever tactic is capable of showing 1 ≤ 2 as the discharger.

Tomas Skrivan (May 23 2025 at 14:45):

Does it work? (I'm on the phone so I can't test it myself)

ZhiKai Pong (May 23 2025 at 14:45):

yep it works
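
For the record, the full working form of the first lemma (same variable block as in the earlier snippet):

import Mathlib

variable {𝕜 : Type} [NontriviallyNormedField 𝕜]
    {X Y Z : Type} [NormedAddCommGroup X] [NormedSpace 𝕜 X]
    [NormedAddCommGroup Y] [NormedSpace 𝕜 Y]
    [NormedAddCommGroup Z] [NormedSpace 𝕜 Z]

lemma ContDiff.two_differentiable (f : X → Y → Z) (hf : ContDiff 𝕜 2 ↿f) :
    Differentiable 𝕜 ↿f := by
  -- the extra transition depth lets fun_prop chain
  -- Differentiable -> ContDiff 1 -> ContDiff 2, with aesop discharging 1 ≤ 2
  fun_prop (maxTransitionDepth := 2) (disch := aesop)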

Tomas Skrivan (May 23 2025 at 14:46):

Maybe the default value of 1 for maxTransitionDepth is too low. Too high a value would likely make fun_prop slow in failure cases.

ZhiKai Pong (May 23 2025 at 14:46):

but is this intended? the second one with fderiv surely looks more complicated

Tomas Skrivan (May 23 2025 at 14:49):

ZhiKai Pong said:

but is this intended? the second one with fderiv surely looks more complicated

Yeah, it is an unexpected consequence of how the tactic works. If you assumed that the function is three times differentiable then the second proof would fail too and would require the same config as the first proof.

Tomas Skrivan (May 23 2025 at 14:50):

Similarly, the first proof would work with just fun_prop if you assumed only ContDiff 1.

ZhiKai Pong (May 23 2025 at 14:51):

would this be fixed in the future or should I just carry around the (maxTransitionDepth := 2) (disch:=aesop) for now?

Tomas Skrivan (May 23 2025 at 14:51):

This all comes from the difficulty of dealing with ContDiff n where you can have different n flying around.

Tomas Skrivan (May 23 2025 at 14:56):

ZhiKai Pong said:

would this be fixed in the future or should I just carry around the (maxTransitionDepth := 2) (disch:=aesop) for now?

For now, use this.

A potential solution could be:

  • not to count ContDiff.of_le as a transition theorem
  • increase default maxTransitionDepth

Not sure what is a better choice.

Tomas Skrivan (May 23 2025 at 14:57):

Do you get a sensible error when you increase maxTransitionDepth but do not provide a discharger?

ZhiKai Pong (May 23 2025 at 14:58):

If I remove the disch:=aesop this is the error:

`fun_prop` was unable to prove `Differentiable 𝕜 ↿f`

Issues:
  No theorems found for `f` in order to prove `Differentiable 𝕜 fun a => f a.1 a.2`
mathlib-demo.lean:12:2
[Meta.Tactic.fun_prop] [❌️] Differentiable 𝕜 ↿f 
  [] candidate local theorems for f #[]
  [] candidate transition theorems: [@IsBoundedBilinearMap.differentiable,
       Differentiable.restrictScalars,
       @IsBoundedLinearMap.differentiable,
       @differentiable_one]
  [] [❌️] applying: IsBoundedBilinearMap.differentiable 
  [] [❌️] applying: Differentiable.restrictScalars 
  [] [❌️] applying: IsBoundedLinearMap.differentiable 
  [] [❌️] applying: ContDiff.differentiable_one 
    [] [❌️] ContDiff 𝕜 1 fun a => f a.1 a.2 
      [] candidate local theorems for f #[hf : ContDiff 𝕜 2 ↿f]
      [] [❌️] applying: hf : ContDiff 𝕜 2 ↿f 
        [] failed to unify hf
            ContDiff 𝕜 2 ↿f
            with
            ContDiff 𝕜 1 fun a => f a.1 a.2
      [] candidate transition theorems: [@of_le]
      [] [❌️] applying: ContDiff.of_le 
        [] [✅️] ContDiff 𝕜 2 fun a => f a.1 a.2 
        [] [❌️] discharging: 1  2
        [] @of_le, failed to discharge hypotheses
              1  2
    [] @differentiable_one, failed to discharge hypotheses
          ContDiff 𝕜 1 fun a => f a.1 a.2

Tomas Skrivan (May 23 2025 at 15:01):

Yeah, this is not good. The error "failed to discharge hypotheses, 1 ≤ 2" should show up under Issues at the top, but I don't know how to fix that without producing lots of noisy messages in other cases.

Yakov Pechersky (May 23 2025 at 15:02):

docs#ContDiff.of_le

Yakov Pechersky (May 23 2025 at 15:03):

Perhaps one can make a variant of that lemma where the proof obligation has an autoparam?

Tomas Skrivan (May 23 2025 at 15:06):

Yes, that would definitely be an option. fun_prop does not run autoParams, but I have such a feature working in another tactic so it should be easy to get working.

ZhiKai Pong (May 23 2025 at 15:22):

thank you Tomas for looking into this :)

Joseph Tooby-Smith (Jun 05 2025 at 15:23):

Just a quick comment: It is probably worth keeping an eye on #25427, and if it gets merged into Mathlib, how we can use it in PhysLean.

Joseph Tooby-Smith (Jul 21 2025 at 15:11):

I've added a new file (this PR) containing the div, grad and curl of distributions. These are much more nicely behaved than those of ordinary functions, and I suspect in the long run easier to work with. What needs to be done at some point is to create a good API around them, including reproducing the vector identities we have for ordinary functions.

ZhiKai Pong (Jul 27 2025 at 22:44):

I came across this reference randomly. At the moment it's probably a long way from being formalizable, but I feel like it's a good long-term goal to work towards and build API around. I hope one day we can build electrodynamics etc. on top of this framework in Lean :)

https://www.tandfonline.com/doi/epdf/10.1080/00029890.2002.11919870

Joseph Tooby-Smith (Aug 27 2025 at 06:38):

If anyone has some free time, I think the following lemma would be nice to prove and add to PhysLean:

import PhysLean.SpaceAndTime.Space.Basic
import PhysLean.SpaceAndTime.Space.VectorIdentities
import Mathlib.Analysis.InnerProductSpace.Calculus
import Mathlib.Analysis.Calculus.FDeriv.Symmetric
import Mathlib.Analysis.SpecialFunctions.Pow.Deriv
import Mathlib.Analysis.Calculus.Gradient.Basic

namespace Space

open InnerProductSpace

lemma grad_dot_unit_vector {d} (x : Space d) (f : Space d → ℝ) :
    ⟪∇ f x, ‖x‖⁻¹ • x⟫_ℝ = _root_.deriv (fun r => f (r • ‖x‖⁻¹ • x)) ‖x‖ := by
  sorry

end Space

Open in the online version of PhysLean here.

Joseph Tooby-Smith (Aug 27 2025 at 07:01):

Never mind - ended up being a simple application of the chain rule:

lemma grad_dot_unit_vector {d} (x : Space d) (f : Space d → ℝ) (hd : Differentiable ℝ f) :
    ⟪∇ f x, ‖x‖⁻¹ • x⟫_ℝ = _root_.deriv (fun r => f (r • ‖x‖⁻¹ • x)) ‖x‖ := by
  by_cases hx : x = 0
  · subst hx
    simp
  symm
  calc _
    _ = fderiv ℝ (f ∘ (fun r => r • ‖x‖⁻¹ • x)) ‖x‖ 1 := by rfl
    _ = (fderiv ℝ f (‖x‖ • ‖x‖⁻¹ • x)) (_root_.deriv (fun r => r • ‖x‖⁻¹ • x) ‖x‖) := by
      rw [fderiv_comp _ (by fun_prop) (by fun_prop)]
      simp
    _ = (fderiv ℝ f x) (_root_.deriv (fun r => r • ‖x‖⁻¹ • x) ‖x‖) := by
      have hx : ‖x‖ ≠ 0 := norm_ne_zero_iff.mpr hx
      field_simp [smul_smul]
  rw [grad_inner_eq f x (‖x‖⁻¹ • x)]
  congr
  rw [deriv_smul_const (by fun_prop)]
  simp

Joseph Tooby-Smith (Oct 31 2025 at 06:17):

FYI, I am planning a big refactor of the vector calculus stuff, primarily to move it into the SpaceAndTime directory, and to put all similar results next to each other. At the minute we have results about derivatives in a number of different modules, which is not ideal.

Joseph Tooby-Smith (Oct 31 2025 at 10:39):

This is done here PhysLean#796, the main things I have done are:

  1. Give deriv, curl, div, grad on Space their own directory. Similarly with Time and SpaceTime.
  2. Put the derivatives of functions and distributions in the same module.
  3. Clean up some of the notation.
  4. Move all the vector calculus results to the SpaceAndTime directory.

I've been using lake exe module_doc_lint to help enforce good module documentation, and have followed this in this PR.


Last updated: Dec 20 2025 at 21:32 UTC