Zulip Chat Archive

Stream: PhysLean

Topic: Maxwell's Equations


Joseph Tooby-Smith (Mar 20 2025 at 13:30):

I have just committed to PhysLean a PR containing Maxwell’s equations:

https://github.com/HEPLean/PhysLean/pull/410

Given the importance of Maxwell's equations to certain areas of physics, I thought it was worth making a thread about it here.

I also want to make something clear. It is relatively easy to just write down Maxwell's equations in Lean. However, what you really want is Maxwell's equations integrated and working nicely with a general API for space-time, special relativity, etc. This is what is contained in the above PR.

The upshot of this is that it becomes easy to talk about the field strength of an EM field (which is already in PhysLean), the vector potential, etc., and to use standard index notation to talk about and manipulate these constructions. It should allow them to be connected to QED once API around gauge field theories is formulated.

Tomas Skrivan (Mar 20 2025 at 17:09):

Very nice, the equations turned out really well.

What is your opinion on bundling all the data together in EMSystem? I understand that right now it makes everything a lot cleaner, but is it going to be flexible in applications? In my experience, every time I introduce a structure like this I have to rebuild an API equivalent to the one for Prod once I start working with it, e.g. that EMSystem.ρ is a linear map.

Joseph Tooby-Smith (Mar 20 2025 at 17:21):

Thanks! I'm hoping there is enough API around them and spacetime that they are also useable in practice.

Good point about bundling things together. I agree that in this case I may have gone overboard.
Maybe an option in between would be to move ρ and J out of EMSystem (and give them their own named types ChargeDensity and CurrentDensity) and just make EMSystem about the constants c and ε₀. I think this would also make more sense physically. But I'm open to other options.

Tomas Skrivan (Mar 20 2025 at 18:55):

Yeah, I would personally keep ρ and J as explicit arguments. Imagine you would like to do magnetohydrodynamics: then your unknowns are something like E, B, v (velocity), ρ (density), q (charge density), and J is a function of v, ρ and q.
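A minimal sketch of that dependence, using only Mathlib's EuclideanSpace (the name currentOfFlow is hypothetical, not PhysLean API):

import Mathlib.Analysis.InnerProductSpace.PiL2

/-- Hypothetical sketch: the convection current `J = q • v`, built from a charge
density `q` and a velocity field `v`, both functions of time and space. -/
noncomputable def currentOfFlow
    (q : ℝ → EuclideanSpace ℝ (Fin 3) → ℝ)
    (v : ℝ → EuclideanSpace ℝ (Fin 3) → EuclideanSpace ℝ (Fin 3)) :
    ℝ → EuclideanSpace ℝ (Fin 3) → EuclideanSpace ℝ (Fin 3) :=
  fun t x => q t x • v t x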

Alex Meiburg (Mar 22 2025 at 16:51):

Can I suggest that EMSystem gets a requirement that the charge and current density are _measurable_ functions? I think this will significantly improve downstream facts ... situations where they aren't measurable are kind of unphysical.

Alex Meiburg (Mar 22 2025 at 16:52):

Actually, if I reflect on it ... I think the charge density as just being a real-valued number isn't appropriate either. I definitely care about point charges! Shouldn't it (and the current) be a signed measure instead?

Alex Meiburg (Mar 22 2025 at 16:52):

(either way very cool and I'm happy to see it :smile: )

Joseph Tooby-Smith (Mar 24 2025 at 10:15):

Hi Alex!  Thanks for your comment.

I was following the approach that we should include properties like measurability as and when they are needed, rather than including them in the type. But I agree with you that to deal with point charges we will need signed measures.

Ultimately we will want to connect this with four-currents, and I think it is good if we make these objects as easy to work with as possible. So some thought and experimentation might be needed.

For now I've split the current density and charge density from EMSystem following @Tomas Skrivan's suggestion and added TODOs regarding signed measures.

Anyone should feel welcome to have an attempt at this.

Nadim Farhat (Apr 04 2025 at 21:16):

@Joseph Tooby-Smith do you know if there is PDE support to implement the wave equations in EM?

Tomas Skrivan (Apr 04 2025 at 21:33):

What exactly do you mean by "PDE support"? Something like finite differences to actually run a simulation, or do you want to just prove that such a PDE has a solution?

Nadim Farhat (Apr 04 2025 at 22:33):

@Tomas Skrivan simulation, I have found your GitHub, I will take a look at it

Tomas Skrivan (Apr 04 2025 at 22:34):

Here is a simulation of the scalar wave equation, starting from a Hamiltonian discretized in space.

Nadim Farhat (Apr 04 2025 at 22:35):

Thanks, you read my mind

Tomas Skrivan (Apr 04 2025 at 22:36):

And here I talk about it: youtube

Nadim Farhat (Apr 04 2025 at 22:40):

already on it :)

Joseph Tooby-Smith (Apr 05 2025 at 06:44):

Tomas Skrivan said:

What exactly do you mean by "PDE support"? Something like finite differences to actually run a simulation, or do you want to just prove that such a PDE has a solution?

Just to answer the other side of this. There is support to prove formally that such PDEs have solutions. In PhysLean this exists for ODEs in the form of the quantum harmonic oscillator. But I'm sure it won't be long before there are examples using PDEs. Not to say this is easy though.

Tomas Skrivan (Apr 05 2025 at 22:59):

@ZhiKai Pong's struggles prompted me to write down Maxwell's equations without using SpaceTime. All the machinery for tensors in PhysLean is really cool, but it still needs lots of work and API to be really usable. In the meantime, it might be a good idea to take a step back, write down Maxwell's equations in a simpler way using only Mathlib's EuclideanSpace, and develop vector calculus over plain ℝ³.

import Mathlib.Analysis.Calculus.FDeriv.Symmetric
import Mathlib.Analysis.InnerProductSpace.PiL2
import Mathlib.LinearAlgebra.Trace

noncomputable
def div {X} [NormedAddCommGroup X] [NormedSpace ℝ X]
    (f : X → X) (x : X) : ℝ :=
  LinearMap.trace ℝ X (fderiv ℝ f x)

local notation "ℝ³" => EuclideanSpace ℝ (Fin 3)

open EuclideanSpace InnerProductSpace in
noncomputable
def curl (f : ℝ³ → ℝ³)
    (x : ℝ³) : ℝ³ :=
  -- get i-th component of `f`
  let fi i x := ⟪f x, single i (1:ℝ)⟫_ℝ
  -- derivative of i-th component in j-th coordinate
  -- ∂fᵢ/∂xⱼ
  let df i j x := fderiv ℝ (fi i) x (single j (1:ℝ))
  fun i =>
    match i with
    | 0 => df 1 2 x - df 2 1 x
    | 1 => df 2 0 x - df 0 2 x
    | 2 => df 0 1 x - df 1 0 x

variable (μ₀ ε₀ : ℝ) (ρ : ℝ → ℝ³ → ℝ) (J : ℝ → ℝ³ → ℝ³)

/-- Gauss's law for the Electric field. -/
def GaussLawElectric (E : ℝ → ℝ³ → ℝ³) : Prop :=
  ∀ t x, ε₀ * (div (E t ·)) x = ρ t x

/-- Gauss's law for the Magnetic field. -/
def GaussLawMagnetic (B : ℝ → ℝ³ → ℝ³) : Prop :=
  ∀ t x, (div (B t ·)) x = 0

/-- Faraday's law. -/
def FaradayLaw (E B : ℝ → ℝ³ → ℝ³) : Prop :=
  ∀ t x, curl (E t ·) x = - deriv (B · x) t

/-- Ampère's law. -/
def AmpereLaw (E B : ℝ → ℝ³ → ℝ³) : Prop :=
  ∀ t x, curl (B t ·) x = μ₀ • (J t x + ε₀ • deriv (E · x) t)

Tomas Skrivan (Apr 05 2025 at 23:04):

Also, I think a better way to define curl would be to first have a function that extracts the antisymmetric part of a matrix and puts it in a vector (not sure what the best way to implement this is):

/--
Take anti-symmetric part of `A` and flatten it to a vector
-/
def Matrix.antisymmVec (A : Matrix (Fin n) (Fin n) ℝ) :
  EuclideanSpace ℝ (Fin (n*(n-1)/2)) := sorry

i.e. for a 3×3 matrix it returns (a₁₂ - a₂₁, a₂₀ - a₀₂, a₀₁ - a₁₀).
Then for a symmetric matrix we have that this function returns zero:

theorem Matrix.antisymmVec_of_symmetric (A : Matrix (Fin n) (Fin n) ℝ) :
  (∀ i j, A i j = A j i) → A.antisymmVec = 0 := sorry

Then curl is simply the antisymmetric part of the derivative:

open EuclideanSpace InnerProductSpace in
noncomputable
def curl' (f : ℝ³ → ℝ³) (x : ℝ³) :=
  let b := EuclideanSpace.basisFun (Fin 3) ℝ |>.toBasis
  (LinearMap.toMatrix b b (fderiv ℝ f x)).antisymmVec

With this definition it should be easy to prove that the curl of a gradient is zero, because you take the second derivative and then take the antisymmetric part.

Tomas Skrivan (Apr 05 2025 at 23:05):

And the divergence of a curl should just boil down to the statement that the trace of an antisymmetric matrix is zero.
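A hedged, self-contained sketch of that last fact in plain Mathlib terms (the theorem name is made up, and it is stated directly for Matrix rather than through antisymmVec):

import Mathlib.Data.Real.Basic
import Mathlib.LinearAlgebra.Matrix.Trace
import Mathlib.Tactic.Linarith

/-- The trace of an antisymmetric real matrix vanishes: each diagonal entry
satisfies `A i i = - A i i`, hence is zero. -/
theorem trace_eq_zero_of_antisymm {n : ℕ} (A : Matrix (Fin n) (Fin n) ℝ)
    (hA : ∀ i j, A i j = - A j i) : A.trace = 0 := by
  have h : ∀ i, A i i = 0 := fun i => by linarith [hA i i]
  simp [Matrix.trace, Matrix.diag, h]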

Joseph Tooby-Smith (Apr 06 2025 at 05:23):

I agree that the API around tensors needs improving, especially around derivatives of tensors. If you know what would make it easier in terms of derivatives @Tomas Skrivan, I would be glad to hear.

The reason SpaceTime is defined the way it is, is so that one can use index notation with it, it carries an action of the Lorentz group, etc.

Splitting space and time in this way is bad for relativity; however, it is already done in the form of the Maxwell equations we have. Thus, I think we should define a (linear?) equivalence

def timeSlice (M : Type) [...] [...] : (SpaceTime → M) ≃ (Time → Space → M) := ...

and prove some API around it, for example that it commutes with spatial div, curl and grad, and derivatives in general. Here Space is just EuclideanSpace and Time is just ℝ.

We can then have a result which shows that Maxwell's equations in PhysLean are equivalent through timeSlice to @Tomas Skrivan's version.
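A minimal sketch of the shape such an equivalence could take, with arbitrary types standing in for SpaceTime, Time and Space and a hypothetical splitting split : SpaceTime ≃ Time × Space; everything else is just currying:

import Mathlib.Logic.Equiv.Basic

/-- Hypothetical sketch: given a splitting of spacetime points into a time and a
space component, the time-slice map is function-space congruence followed by
currying. -/
def timeSliceSketch {SpaceTime Time Space : Type}
    (split : SpaceTime ≃ Time × Space) (M : Type) :
    (SpaceTime → M) ≃ (Time → Space → M) :=
  (Equiv.arrowCongr split (Equiv.refl M)).trans (Equiv.curry Time Space M)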

Tomas Skrivan (Apr 06 2025 at 05:52):

I'm actually wondering, shouldn't Maxwell's equations be formulated with the covariant derivative?

You use tensors for the domain of electromagnetic fields but shouldn't they be used also for the codomain?

Joseph Tooby-Smith (Apr 06 2025 at 06:00):

You can write them in terms of the field strength, for example the Bianchi identity. There is some API around the field strength and its relation to the electric and magnetic field. But not the Maxwell equations in terms of it yet.

Tomas Skrivan (Apr 06 2025 at 06:00):

Joseph Tooby-Smith said:

I agree that the API around tensors needs improving, especially around derivatives of tensors. If you know what would make it easier in terms of derivatives Tomas Skrivan, I would be glad to hear.

I think you want to minimize the number of core derivative operators; each one will require a substantial amount of API to be easy to use.

I would say that having spaceDiv and spaceCurl is not the best choice. spaceDiv should just be an abbreviation for the covariant derivative followed by contracting the corresponding dimensions, and should be defined in arbitrary dimension. Similarly, for spaceCurl I would define the Levi-Civita tensor and again appropriately contract it with the covariant derivative.
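A hedged sketch of the spaceCurl idea in the plain ℝ³ setting (the names leviCivita3 and curlViaEpsilon are hypothetical, and this uses only Mathlib rather than the PhysLean tensor API): contract a Levi-Civita symbol with the matrix of partial derivatives.

import Mathlib.Analysis.Calculus.FDeriv.Basic
import Mathlib.Analysis.InnerProductSpace.PiL2

/-- Hypothetical sketch: the rank-3 Levi-Civita symbol on `Fin 3`. -/
def leviCivita3 (i j k : Fin 3) : ℝ :=
  if (i, j, k) ∈ [((0 : Fin 3), (1 : Fin 3), (2 : Fin 3)), (1, 2, 0), (2, 0, 1)] then 1
  else if (i, j, k) ∈ [((0 : Fin 3), (2 : Fin 3), (1 : Fin 3)), (2, 1, 0), (1, 0, 2)] then -1
  else 0

open EuclideanSpace InnerProductSpace in
/-- Hypothetical sketch: curl as the contraction `(curl f) i = ∑ j, ∑ k, ε i j k * ∂ⱼfₖ`. -/
noncomputable def curlViaEpsilon
    (f : EuclideanSpace ℝ (Fin 3) → EuclideanSpace ℝ (Fin 3))
    (x : EuclideanSpace ℝ (Fin 3)) : EuclideanSpace ℝ (Fin 3) :=
  -- k-th component of `f`, extracted as in the `curl` definition above
  let fk k y := ⟪f y, single k (1 : ℝ)⟫_ℝ
  -- ∂fₖ/∂xⱼ at `x`
  let df j k := fderiv ℝ (fk k) x (single j (1 : ℝ))
  fun i => ∑ j, ∑ k, leviCivita3 i j k * df j k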

Joseph Tooby-Smith (Apr 06 2025 at 06:04):

@Tomas Skrivan agreed, this is very likely what we want to do. I want to implement Euclidean tensors properly at some point, which can hopefully feed into this.

Joseph Tooby-Smith (Apr 06 2025 at 06:06):

We also have to deal with relating derivatives on SpaceTime with derivatives on Space and Time, but this should be relatively easy.

Tomas Skrivan (Apr 06 2025 at 06:06):

Ha, I wrote it without looking at your definitions properly; the divergence is more or less as I would expect. I would just change the definition of curl to be in terms of the Levi-Civita tensor.

Tomas Skrivan (Apr 06 2025 at 06:10):

Yeah there is quite a tension between having the spacetime together and having space and time arguments.

I guess if you do relativity you really want the two together but for anything non-relativistic or even relativistic but in a concrete coordinate frame you probably want space and time separate.

I have started playing around with some metaprogramming that could turn a PDE into its finite-difference discretization, and there it would be much easier to have time and space as separate arguments.

Tomas Skrivan (Apr 06 2025 at 06:13):

Ohh one thing, I think you should not have SpaceTime.deriv but should have a special definition for whatever this is ((realLorentzTensor d).tensorBasis _ (fun x => Fin.cast (by simp) μ)) and just use fderiv

Tomas Skrivan (Apr 06 2025 at 06:13):

And probably keep SpaceTime.deriv as an abbreviation

Joseph Tooby-Smith (Apr 06 2025 at 06:17):

Ok, this is good to know! I'm hoping something like timeSlice above will allow us to go between these two descriptions. Note that timeSlice explicitly breaks special relativity (isn't equivariant under the Lorentz group etc), which is what we expect.

Joseph Tooby-Smith (Apr 06 2025 at 06:19):

Tomas Skrivan said:

Ohh one thing, I think you should not have SpaceTime.deriv but should have a special definition for whatever this is ((realLorentzTensor d).tensorBasis _ (fun x => Fin.cast (by simp) μ)) and just use fderiv

Ha, yeah, ((realLorentzTensor d).tensorBasis _ (fun x => Fin.cast (by simp) μ)) is just the unit vector in SpaceTime pointing in the μ direction. We can surely think of a good name for this :). I think keeping SpaceTime.deriv as an abbreviation is good - if only not to scare people with fderiv.
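A hedged sketch of that suggestion on a plain coordinate space (coordDeriv is a hypothetical name, not the PhysLean definition): the coordinate derivative in the μ direction is just fderiv evaluated on the corresponding unit vector.

import Mathlib.Analysis.Calculus.FDeriv.Basic
import Mathlib.Analysis.InnerProductSpace.PiL2

/-- Hypothetical sketch: the derivative of `f` at `x` in the `μ`-th coordinate
direction, as `fderiv` applied to the corresponding unit vector. -/
noncomputable def coordDeriv (μ : Fin 4)
    (f : EuclideanSpace ℝ (Fin 4) → ℝ) (x : EuclideanSpace ℝ (Fin 4)) : ℝ :=
  fderiv ℝ f x (EuclideanSpace.single μ 1)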

Tomas Skrivan (Apr 06 2025 at 06:20):

Do I remember correctly that in relativity you use Greek indices for indexing over spacetime and Latin indices to index only over space? Some kind of mechanism to easily switch between spacetime and space only in Lean would be nice.

Joseph Tooby-Smith (Apr 06 2025 at 06:22):

Yes this is true. I think this should be possible to do with the elaborator.

(I should note I am refactoring the Tensor part of PhysLean to make it a lot simpler, it won't change any of the discussion here regarding SpaceTime, it will just make it more adaptable and easy to use in general).

Tomas Skrivan (Apr 06 2025 at 06:24):

Is fderiv really scary? Is it because of the fancy name Fréchet derivative, which is traditionally connected with another scary term, Banach spaces? It is just a linear approximation of a function at a point, and given a direction it gives you the derivative in that direction.

Tomas Skrivan (Apr 06 2025 at 06:26):

Joseph Tooby-Smith said:

Yes this is true. I think this should be possible to do with the elaborator.

I'm not sure if it is a good idea to mimic exactly what is being done on paper. I'm just suggesting to have some kind of mechanism that achieves the same goal. It could be just a simple function with a short name and a well thought out API

Joseph Tooby-Smith (Apr 06 2025 at 06:30):

I guess "scary" is a matter of opinion. I don't find higher category theory scary - but I find measure theory etc. scary :). I think many people would be the opposite

To me it is scary in the sense that I have to take an extra thought, which is "is this derivative the same as the one I am used to?". The answer is yes, but it is something I have to think about. If an abbreviation helps make it explicit that fderiv is the thing you want, I think this is a good thing.

Joseph Tooby-Smith (Apr 06 2025 at 06:33):

This, I guess, is somewhat related to 'boolean blindness'. For example, I'd much rather have a type Space at which people go "ok, this is what I should use for space", rather than them having to realize that they need to use EuclideanSpace _ _.

Matteo Cipollina (Apr 06 2025 at 06:36):

Tomas Skrivan said:

Do I remember correctly that in relativity you use Greek indices for indexing over spacetime and Latin indices to index only over space? Some kind of mechanism to easily switch between spacetime and space only in Lean would be nice.

Perhaps this is not exactly what you meant, but in the context of trying to develop SR within PhysLean (https://github.com/HEPLean/PhysLean/pull/439), based on a suggestion from @Joseph Tooby-Smith I'm making good use of the below:

/-- Extract spatial components from a Lorentz vector,
    returning them as a vector in Euclidean space. -/
def spatialPart {d : ℕ} (v : Vector d) : EuclideanSpace ℝ (Fin d) :=
  fun i => v (Sum.inr i)

/-- Extract time component from a Lorentz vector -/
def timeComponent {d : ℕ} (v : Vector d) : ℝ :=
  v (Sum.inl 0)

Joseph Tooby-Smith (Apr 06 2025 at 06:39):

@Matteo Cipollina Ha, a feature of Zulip is that it assumes your PR is about Mathlib and links to the corresponding PR thereof. :)

Joseph Tooby-Smith (Apr 06 2025 at 06:48):

Let me summarize what I think the actionable points of this conversation are:

  • Define the epsilon tensor in general dimension.
  • Define spaceCurl in terms of this epsilon tensor.
  • Define a map timeSlice taking SpaceTime → M to Time → Space → M with corresponding API.
  • Define ((realLorentzTensor d).tensorBasis _ (fun x => Fin.cast (by simp) μ)) formally as a unit vector in SpaceTime pointing in the μ direction.
  • Change SpaceTime.deriv to an abbreviation of fderiv using the above spacetime derivative.
  • Come up with a good way to go from SpaceTime to Space derivatives etc. (e.g. Greek-indexed vs Latin-indexed derivatives).
  • Use the above to rewrite Maxwell's equations in a better way.

Sorry if I've missed or misrepresented something here.

Tomas Skrivan (Apr 06 2025 at 06:54):

Not sure if you want to define epsilon in an arbitrary dimension; its shape would be (n*(n-1)/2, n, n), which happens to be (3,3,3) for n=3. But I checked that 3*(3-1)/2 is defeq to 3, so it should not cause too much trouble.
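A quick sanity check of that defeq claim (plain Lean, no imports needed):

example : 3 * (3 - 1) / 2 = 3 := rfl
example : Fin (3 * (3 - 1) / 2) = Fin 3 := rfl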

Tomas Skrivan (Apr 06 2025 at 06:55):

An alternative name for timeSlice could be timeCurry or SpaceTime.curry.

Joseph Tooby-Smith (Apr 06 2025 at 06:56):

Tomas Skrivan said:

Not sure if you want to define epsilon in an arbitrary dimension; its shape would be (n*(n-1)/2, n, n), which happens to be (3,3,3) for n=3. But I checked that 3*(3-1)/2 is defeq to 3, so it should not cause too much trouble.

In 4d wouldn't it be (4, 4, 4, 4), and in 5d (5, 5, 5, 5, 5), i.e. here?

Tomas Skrivan (Apr 06 2025 at 06:58):

Ohh I see, I was thinking about it together with the derivative, as taking 1-forms to 2-forms.

Alex Meiburg (Apr 07 2025 at 16:32):

The (4,4,4,4) epsilon comes up in special relativity (and even more so in general relativity) plenty :)

Joseph Tooby-Smith (Apr 08 2025 at 14:50):

After a bit of thought I have changed my stance on things.

I now agree that we should redefine the Electric and Magnetic field to be:

abbrev ElectricField (d : ℕ := 3) := Time → Space d → EuclideanSpace ℝ (Fin d)

abbrev MagneticField (d : ℕ := 3) := Time → Space d → EuclideanSpace ℝ (Fin d)

instead of

abbrev ElectricField (d : ℕ := 3) := SpaceTime d → EuclideanSpace ℝ (Fin d)

abbrev MagneticField (d : ℕ := 3) := SpaceTime d → EuclideanSpace ℝ (Fin d)

However we should keep the field strength defined through:

SpaceTime d → ℝT[d, .up, .up]

Here is my reasoning:

  • In PhysLean SpaceTime d is defined to carry the action of the Lorentz group.
  • In PhysLean Space d is defined to carry the action of rotations. [It's actually not currently defined in this way, but this is easy to do.]
  • For the field strength it is the former which is important (naturally because it is a Lorentz tensor).
  • For the electric and magnetic field it is the latter. Notably rotations take the Electric field to the Electric field whilst boosts do not.

Joseph Tooby-Smith (Apr 08 2025 at 14:59):

A more basic thing I'm not sure about is where should we put the API for Space and Time. Naturally SpaceTime sits in the Relativity folder, but I don't think Space and Time should. Maybe ClassicalMechanics?

Tomas Skrivan (Apr 08 2025 at 15:00):

The transformation argument is neat! And I favor typing the fields like this.

Tomas Skrivan (Apr 08 2025 at 15:04):

Joseph Tooby-Smith said:

A more basic thing I'm not sure about is where should we put the API for Space and Time. Naturally SpaceTime sits in the Relativity folder, but I don't think Space and Time should. Maybe ClassicalMechanics?

I would expect some specialized math folder, but I don't have a strong opinion about this. Classical mechanics folder seems fine.

Joseph Tooby-Smith (Apr 09 2025 at 11:02):

I've added this PR which adds the Space and Time files. We do have a Mathematics folder, but this is more for stuff that really should be in Mathlib but is not yet, so I've added this to Classical Mechanics.

I think it would be nice to have an extensive API around both space and time, and spacetime, to make it as easy to use as possible. I think some of @Tomas Skrivan's stuff above can be added almost as-is.

(Tagging @Matteo Cipollina and @ZhiKai Pong as I think you both also have an interest in this)

ZhiKai Pong (Apr 11 2025 at 22:24):

Is it a good idea to try to show that Maxwell's equations are linear? What I'm thinking is that I can try to show div and grad are docs#LinearMap; hopefully it's not too complicated (I don't know what a ring homomorphism is, if that matters). Is it sufficient to just work with the Space definitions?

Joseph Tooby-Smith (Apr 12 2025 at 05:54):

I would be surprised if you could show div and grad are linear maps without some assumption on the differentiability of the functions involved. But showing that they, as well as the Maxwell equations, are linear under these conditions is a good idea!

If you just want to use Space you are going to have to change Maxwell's equations and the definition of the Electric and Magnetic field to Time → Space → .... This is something we want to do anyway, but adapting the FieldStrength code may be a bit tricky (but doable).

Concerning ring homomorphisms: div and grad are definitely not ring homomorphisms, so we don't need to worry about this.

Tomas Skrivan (Apr 12 2025 at 11:21):

What you want to show is that fderiv 𝕜 f is linear in f. However, this is not true for arbitrary f : X → Y; you need to restrict to differentiable f. The idiomatic way of doing this is to parametrize f by some arbitrary space W and assume that f w x is linear in w and differentiable in x.

import Mathlib.Analysis.Calculus.FDeriv.Basic
import Mathlib.Analysis.Calculus.FDeriv.Add
import Mathlib.Analysis.RCLike.Basic

variable {𝕜} [RCLike 𝕜]
  {W} [NormedAddCommGroup W] [NormedSpace 𝕜 W]
  {X} [NormedAddCommGroup X] [NormedSpace 𝕜 X]
  {Y} [NormedAddCommGroup Y] [NormedSpace 𝕜 Y]

theorem fderiv_linear_map (f : W → X → Y)
    (hf : ∀ w, Differentiable 𝕜 (f w))
    (hf' : IsLinearMap 𝕜 f) :
    IsLinearMap 𝕜 (fun w => fderiv 𝕜 (f w)) := by
  constructor
  · intros w w'; funext x
    rw[hf'.map_add]
    rw[fderiv_add' (hf w x) (hf w' x)]
    rfl
  · intros k w; funext x
    rw[hf'.map_smul]
    rw[fderiv_const_smul' (hf w x) k]
    rfl

Generalizing this to div and grad should be trivial.

Unfortunately, IsLinearMap is not set up with fun_prop; otherwise, using this theorem, you would be able to easily prove things like this:

example : IsLinearMap ℝ (fun c : ℝ => fderiv ℝ (fun x : ℝ => c • x)) := by fun_prop

(here I have a file that sets up IsLinearMap with fun_prop, however it is not in mathlib right now)

Tomas Skrivan (Apr 12 2025 at 11:31):

Joseph Tooby-Smith said:

Concerning ring homomorphisms: div and grad are definitely not ring homomorphisms, so we don't need to worry about this.

I think @ZhiKai Pong was talking about the ring homomorphism in the definition of LinearMap, which is about expressing linear maps between two modules over different rings. However, we exclusively work with vector spaces over the reals, so this ring homomorphism is the identity; nothing to be worried about.
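A small self-contained illustration of that point (the imports are just one possible choice): the →ₗ[ℝ] notation unfolds to LinearMap over the identity ring homomorphism on ℝ, so nothing beyond ordinary ℝ-linearity is involved.

import Mathlib.Data.Real.Basic
import Mathlib.LinearAlgebra.Trace

-- For real vector spaces the ring homomorphism in `LinearMap` is `RingHom.id ℝ`,
-- which is exactly what the `→ₗ[ℝ]` notation expands to.
example (E F : Type) [AddCommGroup E] [Module ℝ E] [AddCommGroup F] [Module ℝ F] :
    (E →ₗ[ℝ] F) = LinearMap (RingHom.id ℝ) E F := rfl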

Joseph Tooby-Smith (Apr 14 2025 at 11:32):

In this PR I've added timeSlice, and two semi-formal results (results currently without proofs) related to the derivatives before and after time-slicing.

I think this is a precursor to updating the definition of the Electric and Magnetic field to depend on Space and Time, due to FieldStrength.

Joseph Tooby-Smith (Apr 14 2025 at 15:04):

In relation to:

Which is related to @Alex Meiburg 's comment about point charges and signed measures.

IMHO, I think we should have two versions of Maxwell's equations, one for functions and one for distributions, and an API connecting them. The distribution version will require us (I believe) to define derivatives of distributions on Space.

If anyone wants to attempt this, this MSE post and related ones might serve as some inspiration.

Joseph Tooby-Smith (Apr 17 2025 at 14:51):

In this PR I converted Maxwell's equations to use Space and Time instead of SpaceTime.

To do this I've commented out some code related to derivatives of the field strength, which needs the following TODO items to be done before it can be reinstated:

This is related to #Is there code for X? > fderiv and curry

ZhiKai Pong (Apr 28 2025 at 15:24):

I want to try to look at how to set up actually using Maxwell's equations for some derivation. Does something like this look reasonable?

import PhysLean.ClassicalMechanics.VectorFields
import PhysLean.Electromagnetism.MaxwellEquations

namespace Electromagnetism

structure FreeSpace where
  𝓔 : EMSystem
  ρ : ChargeDensity
  J : CurrentDensity
  ChargeFree : ρ = 0
  CurrentFree : J = 0

variable (M : FreeSpace)

local notation "ε₀" => M.𝓔.ε₀
local notation "μ₀" => M.𝓔.μ₀

theorem GaussLawElectricFreeSpace (E : ElectricField) (h : GaussLawElectric M.𝓔 M.ρ E) :
    ε₀ * (∇ ⋅ E t) x = 0 := by
  unfold GaussLawElectric at h
  rw [h]
  rw [M.ChargeFree]
  rfl

ZhiKai Pong (Apr 28 2025 at 15:27):

Currently variable (𝓔 : EMSystem) (ρ : ChargeDensity) (J : CurrentDensity) are just free variables sitting in Electromagnetism.MaxwellEquations, and I'm wondering whether there is a better way to group them together, like what's shown here.

ZhiKai Pong (Apr 28 2025 at 15:28):

Thinking ahead, I'm concerned about how, if I have two media with different 𝓔, ρ, J, I would go about specifying that. Maybe FreeSpace here should just be Medium and the entire physical setup would be specified within it?

Joseph Tooby-Smith (Apr 28 2025 at 15:33):

At one point I had ChargeDensity and CurrentDensity within EMSystem but then I think it was decided (above maybe?) that they should be removed.

We could rename EMSystem as OpticalMedium, and then extend it
to ChargedMedium with the inclusion of ChargeDensity and CurrentDensity. Every OpticalMedium then defines a ChargedMedium with zero charge and current. We could call this the free ChargedMedium. Or something like this?

Joseph Tooby-Smith (Apr 28 2025 at 15:34):

Currently you have FreeSpace as a type, whilst really I think it should be a term of a type.

(My names above are random suggestions - there may be better terminology :))

Joseph Tooby-Smith (Apr 28 2025 at 15:35):

Something like

/-- An optical medium consists of the speed of light and the electric permittivity. -/
structure OpticalMedium where
  /-- The speed of light. -/
  c : ℝ
  /-- The permittivity. -/
  ε₀ : ℝ

/-- The charge density. -/
abbrev ChargeDensity := Time → Space → ℝ

/-- Current density. -/
abbrev CurrentDensity := Time → Space → EuclideanSpace ℝ (Fin 3)

structure ChargedMedium extends OpticalMedium where
  /-- The charge density. -/
  ρ : ChargeDensity
  /-- The current density. -/
  J : CurrentDensity

def ChargedMedium.free (O : OpticalMedium) : ChargedMedium where
  c := O.c
  ε₀ := O.ε₀
  ρ := fun _ _ => 0
  J := fun _ _ => 0

Joseph Tooby-Smith (Apr 28 2025 at 15:38):

(This would involve rewriting Maxwell's equations to take ChargedMedium as an input.)

Tomas Skrivan (Apr 28 2025 at 16:02):

In my opinion, I would avoid bundling things into a structure. Unless you have a clear hierarchy of structures, or a couple of different applications of such a structure in mind, I would avoid introducing it. Every time you introduce a structure you will be forced to provide an API similar to Prod's, which is lots of work. Ideally this would be automated, some version of deriving on steroids, but this does not exist.

In a way I agree that bundling constants into a structure like OpticalMedium is desirable, but I think that bundling fields into a structure like ChargedMedium will likely result in a refactor later on. There will be cases where the density or current is the unknown you want to solve for. In such a scenario ChargedMedium will be in the way. We discussed this somewhere before.

But hey, the options are:

  • bundle things into structure -> inflexible/lots of missing API for such structures
  • keep it unbundled -> now you have to deal with functions with ten or more arguments

Neither is ideal and will lead to problems. Potential solutions I see:

  • some metaprogramming/deriving that automatically generates all the API for a new structure. I think lots of leg work could be done through proxy_equiv% + some meta tools capable of pulling structures through equivalences
  • having autoParam for implicit arguments and writing a specialized tactic that fills in arguments from the local context. This way all the physical constants could be passed as implicit arguments.

ZhiKai Pong (Apr 28 2025 at 16:19):

Just testing the idea of keeping things unbundled:

import PhysLean.ClassicalMechanics.VectorFields
import PhysLean.Electromagnetism.MaxwellEquations

namespace Electromagnetism

def ChargeFree (ρ : ChargeDensity) := ρ = 0
def CurrentFree (J : CurrentDensity) := J = 0

variable (𝓔 : EMSystem) (ρ : ChargeDensity) (J : CurrentDensity)

local notation "ε₀" => 𝓔.ε₀
local notation "μ₀" => 𝓔.μ₀

theorem GaussLawElectricFreeSpace (E : ElectricField)
    (h : GaussLawElectric 𝓔 ρ E) (h2 : ChargeFree ρ) :
    ε₀ * (∇ ⋅ E t) x = 0 := by
  unfold GaussLawElectric at h
  rw [h]
  rw [h2]
  rfl

Is there a better way to declare and use these defs? Any non-trivial problem in physics probably has tens of constraints in addition to tens of variables, and I don't think having like ten or twenty hypotheses each specifying one constraint is a good idea...

ZhiKai Pong (Apr 28 2025 at 16:20):

especially in dealing with specific physical systems that require those constraints to be used over and over again.

Joseph Tooby-Smith (Apr 28 2025 at 16:26):

Actually, @ZhiKai Pong's example illustrates why I think bundling in this case is physically valid (I understand why computationally it is not desirable).

Most problems in EM are based on specifying both the charge density and the current density (or, in special relativity, the 4-current). If you don't bundle, then for each of these problems you have to define both a charge and a current density with two separate definitions, and keep track of all of these.

I'm open to either solution here. Bundling to me just seems a bit cleaner in this case. If someone can come up with a nice way of doing things without bundling I'd be open to that.

Joseph Tooby-Smith (Apr 28 2025 at 16:29):

For the problem of finding the current and charge density given the electric and magnetic field, I think this can be adequately defined by constraining the projection of ChargedMedium back to OpticalMedium, etc.

There may be examples where you are given a current density and not the charge density (say).... those would lead us to problems. But I'm not sure if such examples actually exist.

Joseph Tooby-Smith (Apr 28 2025 at 16:33):

One option for the unbundled approach would be to define, e.g.,

def IsFree (ρ : ChargeDensity) (J : CurrentDensity) : Prop := ...

def IsPointParticleAt  (ρ : ChargeDensity) (J : CurrentDensity) (r : Space) : Prop :=...

def IsHalfPlane (ρ : ChargeDensity) (J : CurrentDensity)  : Prop :=...

etc. This is similar to @ZhiKai Pong 's solution.

(sorry for the multiple messages, I'm thinking aloud - which is probably not the best idea).

Joseph Tooby-Smith (Apr 28 2025 at 16:39):

As an aside: For optics, should ε₀ and c also be made functions of Space? E.g. for boundaries between two materials?

ZhiKai Pong (Apr 28 2025 at 16:56):

My personal feeling is that "specifying both the charge density and the current density and finding solutions to E and B" is a large enough class of problems, and it is worth bundling variables and constraints to deal with it. If another class of problems requires us to view the problem from a different perspective, we probably have to reorganize the variables and constraints in a different way anyway, regardless of whether they're bundled or not, and this reorganization to me doesn't sound easier than writing API between different structures.

I might be completely wrong here, but I think I'll give the bundling approach a try, since it makes more sense to me conceptually to view any problem as a system with a given set of initial conditions. Maybe some level of compromise can be made here: Maxwell's equations are fundamental enough to keep things unbundled, and then when I start dealing with homogeneous media I start bundling things together.

ZhiKai Pong (Apr 28 2025 at 16:59):

ZhiKai Pong said:

I want to try to look at how to set up actually using Maxwell's equations for some derivation. Does something like this look reasonable?

import PhysLean.ClassicalMechanics.VectorFields
import PhysLean.Electromagnetism.MaxwellEquations

namespace Electromagnetism

structure FreeSpace where
  𝓔 : EMSystem
  ρ : ChargeDensity
  J : CurrentDensity
  ChargeFree : ρ = 0
  CurrentFree : J = 0

variable (M : FreeSpace)

local notation "ε₀" => M.𝓔.ε₀
local notation "μ₀" => M.𝓔.μ₀

theorem GaussLawElectricFreeSpace (E : ElectricField) (h : GaussLawElectric M.𝓔 M.ρ E) :
    ε₀ * (∇ ⋅ E t) x = 0 := by
  unfold GaussLawElectric at h
  rw [h]
  rw [M.ChargeFree]
  rfl

So something closer to my original idea here, but perhaps only in a separate file that all the free-space stuff follows, rather than at the Maxwell equations level.

ZhiKai Pong (Apr 28 2025 at 17:03):

Joseph Tooby-Smith said:

As an aside: For optics, should ε₀ and c also be made functions of Space? E.g. for boundaries between two materials?

This is actually an interesting question which I had thought about. I hope other optics people can correct me if I'm wrong, but the way I actually think about the speed of light is not as a function directly of space but as a function of ε, in the sense that it is much more common to specify the refractive index first, and then the speed of light arises from solving the wave equation. I think this relates to the discussion above, as I believe there isn't one single foundation for solving all problems, and the "bundling" of constraints is an indication of the approach used to solve the problem.

ZhiKai Pong (Apr 28 2025 at 17:10):

ok maybe this is actually more of a philosophical thing, to me the statement "speed of light is constant" is actually a result of "ε₀ and μ₀ are constant and c = 1/√(μ₀ε₀)" which probably sounds ridiculous from a physics standpoint and I should probably define this the other way round :sweat_smile:

Alex Meiburg (Apr 28 2025 at 17:10):

I mean, why should eps0 be a scalar? Birefringence (where it's a tensor) and dispersion (where it's a function of frequency) are both quite important too!

Joseph Tooby-Smith (Apr 28 2025 at 17:12):

ZhiKai Pong said:

My personal feeling is that "specifying both the charge density and the current density and finding solutions to E and B" is a large enough class of problems, and it is worth bundling variables and constraints to deal with it. If another class of problems requires us to view the problem from a different perspective, we probably have to reorganize the variables and constraints in a different way anyway, regardless of whether they're bundled or not, and this reorganization to me doesn't sound easier than writing API between different structures.

I might be completely wrong here, but I think I'll give the bundling approach a try, since it makes more sense to me conceptually to view any problem as a system with a given set of initial conditions. Maybe some level of compromise can be made here: Maxwell's equations are fundamental enough to keep things unbundled, and then when I start dealing with homogeneous media I start bundling things together.

This sounds like a valid plan to me. I would suggest something along these lines #PhysLean > Maxwell's Equations @ 💬 (click for direct link) first - but this is only a suggestion.

Joseph Tooby-Smith (Apr 28 2025 at 17:13):

ZhiKai Pong said:

ok maybe this is actually more of a philosophical thing, to me the statement "speed of light is constant" is actually a result of "ε₀ and μ₀ are constant and c = 1/√(μ₀ε₀)" which probably sounds ridiculous from a physics standpoint and I should probably define this the other way round :sweat_smile:

I agree, especially when one is not working in free space. Maybe we should redefine EMSystem to be in terms of ε₀ and μ₀.
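A minimal sketch of what that redefinition could look like (EMSystem' is a hypothetical name, not an actual change to PhysLean): store ε₀ and μ₀ and derive c.

import Mathlib.Analysis.SpecialFunctions.Sqrt

/-- Hypothetical sketch: an EM system given by its permittivity and permeability. -/
structure EMSystem' where
  /-- The vacuum permittivity. -/
  ε₀ : ℝ
  /-- The vacuum permeability. -/
  μ₀ : ℝ

/-- The speed of light derived from `ε₀` and `μ₀` via `c = 1/√(μ₀ ε₀)`. -/
noncomputable def EMSystem'.c (𝓔 : EMSystem') : ℝ :=
  1 / Real.sqrt (𝓔.μ₀ * 𝓔.ε₀)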

ZhiKai Pong (Apr 28 2025 at 17:17):

Alex Meiburg said:

I mean, why should eps0 be a scalar? Birefringence (where it's a tensor) and dispersion (where it's a function of frequency) are both quite important too!

I don't have experience working with type coercion, but my hope is that it is possible to redefine eps in terms of those later and then have the relevant API such that in homogeneous media and for monochromatic light everything derived will be left untouched.

Joseph Tooby-Smith (Apr 28 2025 at 17:21):

Maybe we can include a TODO item regarding this. But I agree that this should not hinder progress in the case of a homogeneous medium and monochromatic light etc. IMO having something is better than nothing, and we can refactor appropriately later.

ZhiKai Pong (Apr 28 2025 at 21:04):

I gave it some thought, and maybe it is best to keep both the bundled and unbundled versions (having definitions for both) and just show they are equivalent. This way we have the flexibility of using the unbundled version when we have to, and the ease of handling variables with the bundled version. I think this setup also allows us to change upstream definitions without affecting downstream ones too much, and we only have to confirm the equivalences still hold. I made a PR, please see if this makes sense.

Alex Meiburg (Apr 29 2025 at 02:37):

Alex Meiburg said:

I mean, why should eps0 be a scalar? Birefringence (where it's a tensor) and dispersion (where it's a function of frequency) are both quite important too!

I realize my intended message might not have gotten across - I was trying to make the point that there are almost always more ways to make things more general, piece by piece, and worrying about e.g. making them functions of Space is just one of many. And so I was trying to say that I think we shouldn't worry about it. :)

Joseph Tooby-Smith (Apr 29 2025 at 05:29):

ZhiKai Pong said:

I gave it some thought, and maybe it is best to keep both the bundled and unbundled versions (having definitions for both) and just show they are equivalent. This way we have the flexibility of using the unbundled version when we have to, and the ease of handling variables with the bundled version. I think this setup also allows us to change upstream definitions without affecting downstream ones too much, and we only have to confirm the equivalences still hold. I made a PR, please see if this makes sense.

Thanks for making the PR - I've put some comments there :).


Last updated: May 02 2025 at 03:31 UTC