Zulip Chat Archive

Stream: general

Topic: Help Standard way of handling derivative of vector function


Assar Lannerborn (Feb 02 2026 at 05:50):

Hi! I have been trying to figure out how I can prove that the derivative of a function (least squares) is what it is. Not being that knowledgeable about the ecosystem, and after wandering in circles for a day or two, I reluctantly admit defeat!

Here is roughly what I want to do:

import Mathlib.LinearAlgebra.Matrix.Determinant.Basic
import Mathlib.Data.Real.Basic
import Mathlib.Analysis.Calculus.FDeriv.Basic
import Mathlib.Analysis.InnerProductSpace.PiL2
import Mathlib.Tactic

open BigOperators
open Matrix

-- a dataset with N observations,
-- with p components per x observation and 1 component per y observation.
variable {N p : ℕ}
variable {X : Matrix (Fin N) (Fin (p+1)) ℝ}
variable {Y : Fin N → ℝ}

-- Matrix version
def RSS (β : Fin (p+1) → ℝ) : ℝ := (Y - (X *ᵥ β)) ⬝ᵥ (Y - (X *ᵥ β))

theorem rss_fderiv (β : Fin (p+1) → ℝ) :
  HasFDerivAt RSS (innerSL ℝ ((-2 : ℝ) • ((transpose X) *ᵥ (Y - X *ᵥ β)))) β := by
  -- Proof goes here
  sorry
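
For context, the calculus fact I am trying to formalize is the standard least-squares gradient (this is ordinary math, not Mathlib-specific):

```latex
% Write r = Y - X\beta. For a perturbation h:
%   RSS(\beta + h) = \|r - Xh\|^2
%                  = \|r\|^2 - 2\langle r, Xh\rangle + \|Xh\|^2
%                  = RSS(\beta) + \langle -2 X^\top r,\; h\rangle + o(\|h\|),
% so the Fréchet derivative at \beta is the linear map
%   h \mapsto \langle -2 X^\top (Y - X\beta),\; h\rangle,
% which is what the innerSL term in the theorem statement is meant to express.
```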

I found that innerSL, which supposedly generates the required linear map, requires an InnerProductSpace ℝ instance, and it turns out the Pi type isn't compatible since it already has a built-in distance metric, so I should maybe use EuclideanSpace ℝ (Fin N) instead for vectors. But that raises the question of how I handle Matrix, since it is implemented as a Pi type, and I don't even know whether dotProduct is defined for these EuclideanSpace vector types.
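
The kind of conversion I mean is something like this (my sketch; I am assuming EuclideanSpace.equiv is the right Mathlib name for the continuous linear equivalence, and I am not sure this compiles as-is):

```lean
import Mathlib.Analysis.InnerProductSpace.PiL2

-- EuclideanSpace ℝ (Fin N) has the same underlying functions as Fin N → ℝ,
-- but carries the ℓ² norm, so the InnerProductSpace instance is available on it.

-- from the ℓ² type back to the plain Pi type (for matrix/dot-product math):
example {N : ℕ} (v : EuclideanSpace ℝ (Fin N)) : Fin N → ℝ :=
  EuclideanSpace.equiv (Fin N) ℝ v

-- from the plain Pi type into the ℓ² type (for the derivative/inner-product side):
example {N : ℕ} (v : Fin N → ℝ) : EuclideanSpace ℝ (Fin N) :=
  (EuclideanSpace.equiv (Fin N) ℝ).symm v
```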

Gemini suggested that it is normal practice to use plain Pi types for the matrix math and EuclideanSpace for the derivative parts, converting between them, but the code it generated didn't work.

If someone could point me in the right direction, I would be very thankful :)


Last updated: Feb 28 2026 at 14:05 UTC