Zulip Chat Archive

Stream: general

Topic: SciLean: scientific computing with Lean


Tomas Skrivan (Jan 15 2022 at 17:10):

For quite some time I have been working on a library that allows writing scientific computing code in Lean. Because Lean can formalize mathematics, I think it is an ideal programming language for math-heavy code such as physics simulation or machine learning: either as much more powerful glue code, like Python, or by leveraging its metaprogramming capabilities, like Julia, or even by turning Lean into a computer algebra system, like Mathematica, since it already understands a lot of mathematics.

I have finally arrived at a point where it makes sense to share my code.

Currently, I have only one working example worth mentioning: a simulation of the wave equation (full code):

The wave equation can be defined through its Hamiltonian:

def H (m k : ℝ) (x p : ℝ^n) :=
  let Δx := (1 : ℝ)/(n : ℝ)
  (Δx/(2*m)) * ∥p∥² + (Δx * k/2) * (∑ i, ∥x[i] - x[i-1]∥²)

(it is already discretized in space)
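
Written out in math notation, the discretized Hamiltonian in the code above is

$$H(x, p) = \frac{\Delta x}{2m} \|p\|^2 + \frac{\Delta x \, k}{2} \sum_i \|x_i - x_{i-1}\|^2, \qquad \Delta x = \frac{1}{n}.$$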

The simulation of such a Hamiltonian system with Runge-Kutta 4 can be implemented with the following code:

def solver (m k : ℝ) (steps : Nat) : Impl (ode_solve (HamiltonianSystem (H n m k))) :=
by
  -- Unfold Hamiltonian definition and compute gradients
  simp[HamiltonianSystem, H]
  autograd
  autograd

  -- Apply RK4 method
  rw [ode_solve_fixed_dt runge_kutta4_step]
  lift_limit steps "Number of ODE solver steps."; admit; simp

  finish_impl

Roughly speaking:

  1. simp[HamiltonianSystem, H]; autograd; autograd expands definitions and performs symbolic differentiation
  2. rw [ode_solve_fixed_dt runge_kutta4_step] replaces the noncomputable function ode_solve with the Runge-Kutta 4 integration scheme (see the sketch after this list)
  3. lift_limit steps "Number of ODE solver steps."; admit; simp says that we are computing the specification only in an approximate sense and that we choose steps integration steps
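
For readers not familiar with the method, here is a minimal standalone sketch of what one classical Runge-Kutta 4 step computes for x' = f t x. This is only an illustration of the scheme in plain Lean with Float; it is not SciLean's actual runge_kutta4_step, and the name rk4Step is made up.

-- Illustrative only: one classical RK4 step for x' = f t x with step size h.
-- Not SciLean's `runge_kutta4_step`.
def rk4Step (f : Float → Float → Float) (t x h : Float) : Float :=
  let k1 := f t x
  let k2 := f (t + h/2) (x + h/2 * k1)
  let k3 := f (t + h/2) (x + h/2 * k2)
  let k4 := f (t + h) (x + h * k3)
  x + h/6 * (k1 + 2*k2 + 2*k3 + k4)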

This then produces this animation
wave.gif

See the readme file for a more detailed explanation. (It assumes you do not know Lean, so some of it might sound silly.)


The current state of the library is that I have a solid base which can automatically prove that a function is smooth, linear, or has an adjoint, and it can also automatically compute differentials, adjoints and gradients. To a very limited degree it can also automatically invert functions, but that is currently only used when computing adjoints of expressions involving sums, as you sometimes need to reindex sums.

There are tons of directions I would like to explore:

  1. Variational calculus
    The above example shows the wave equation already discretized in space. I want to start with the continuous version of the Hamiltonian and choose the space discretization in a similar fashion to how I picked the time discretization with RK4.
    I have some rudimentary results with the functional derivative. For this purpose I have smooth/linear lambda abstraction, i.e. you can write λ (x : X) (r : ℝ) ⊸ r*x to indicate that it is a linear function in every argument. Writing λ (r : ℝ) ⊸ r*r produces an error as the function is not linear. example code

  2. Numerical Algebra
    I have already shared some of the results here; it mainly provides a high-level interface for array/tensor-like objects.
    Together with @Arthur Paulino we are working on a library that would allow somewhat automatic generation of C code for unboxed arrays like FloatArray. Hopefully this will also extend to GPUs.
    Wrapping existing linear solvers is also highly desirable.

  3. Symbolic manipulation with polynomials
    I want to do some computations with polynomials, differential forms and tensor products.
    One big result I want is a computational isomorphism between polynomials in n+1 variables and polynomials in one variable with values in polynomials in n variables, i.e. 𝓟[ℝⁿ⁺¹, ℝ] ≅ 𝓟[ℝ, 𝓟[ℝⁿ, ℝ]]. This way, I can easily get the Horner form of a polynomial in several variables (crucial for fast evaluation) just from the Horner form of a polynomial in one variable (see the worked example after this list).
    Some incoherent experimentation in this direction is here

  4. (semi)Prismatic sets
    In physics simulation, meshes are everywhere, most commonly triangular or simplicial meshes. I want to have a unified treatment of these and want to include meshes with quads, hexes, prisms and pyramids. I call these prisms; they can be generated from a point by two operations: cone and prod. I have an inductive definition here; for example, a line segment is cone point and a square is the product of two segments, prod (cone point) (cone point). The inclusion map of one Prism into another is represented by the inductive type Prism.Face and composition is Face.ofFace. This forms a direct analog of the simplex category (containing only inclusions). So I would like to define (semi)prismatic sets as presheaves over the Prism category. This should be the core tool for dealing with meshes, on top of which I can start building finite element/difference/volume code.

  5. Machine learning
    I have defined the code for forward and reverse mode differential operators and their core simplification identities, forward and reverse. These identities express nothing more than functoriality of the (co)tangent map between manifolds. The task is to generalize the identities I have for the normal differential to these two differential operators. Then we can hopefully unleash it on my mockup of VGG and get back-propagation.
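
As a purely illustrative instance of the isomorphism mentioned in point 3 (my own example, not taken from the library): viewing a polynomial in x and y as a polynomial in x whose coefficients are polynomials in y immediately yields a nested Horner form, e.g.

$$1 + 2x + xy + 3x^2 y^2 = 1 + x\bigl((2 + y) + x \cdot 3y^2\bigr),$$

and the inner coefficients 2 + y and 3y² can themselves be put in Horner form in y. Applying the isomorphism recursively gives a Horner form in all n+1 variables.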


I'm happy to hear any comments or suggestions. And if anyone is interested in any of these topics, I'm happy for any helping hand :)

However, the thing I should currently focus on is probably optimizing the order/priority of all the simp lemmas and type class instances, as timeouts are sometimes really bad and I'm constantly running into infinite loops.

Yaël Dillies (Jan 15 2022 at 17:15):

Wow, that's great! This is part of my Cambridge computing project, so I'll refrain from looking at the code :smile:

Eric Wieser (Jan 15 2022 at 17:30):

Is this the first "real" project depending on Mathlib4?

Arthur Paulino (Jan 15 2022 at 17:46):

That's some decent amount of work :open_mouth:
The usability of a machine learning lib is highly dependent on the existence of model deployment tools or at least a nice interactive REPL environment, so I think it would be a lower priority target at the moment

Tomas Skrivan (Jan 15 2022 at 20:34):

Another big direction I want to take is to use diffeological spaces to work with manifolds. Currently, I'm working only with convenient vector spaces and smooth maps between them. Therefore, I can't, for example, say what a smooth map between spaces of all invertible functions is. At some point I want to formulate fluid simulation as a geodesic path on the space of all volume-preserving maps.

(I'm not actually fully formalizing all the math, not even providing full definitions; I'm just peeking into books and making sure it can be fully formalized at some point. Right now, I'm focusing on the computational aspect of it.)

I have a very limited understanding of diffeology, so having someone who understands it would be nice. My current idea is to maybe work only with diffeological spaces that are quotients of vector spaces, and a smooth map would be defined as a quotient of a smooth map. Not sure if this is completely valid, but it should preserve the computational aspect reasonably well. It also naturally reflects how we usually work computationally with manifolds. For example, a point on a sphere is modeled as (x,y,z) with the condition |(x,y,z)| = 1. From time to time you have to project the point back onto the sphere. This projection exactly defines the relation with which you take the quotient. Computationally, this is way, way better than covering the sphere with some maps and working with those.
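
A minimal sketch of that "project back onto the sphere" step, just to make the idea concrete (plain Lean with Float, not SciLean's quotient machinery; the name projectToSphere is made up):

-- Illustrative only: a point on the sphere is stored as an arbitrary (x, y, z)
-- and projected back onto the unit sphere by normalizing its length.
def projectToSphere (p : Float × Float × Float) : Float × Float × Float :=
  let (x, y, z) := p
  let r := Float.sqrt (x*x + y*y + z*z)
  (x / r, y / r, z / r)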

Also, I know there are some problems with tangent spaces of general diffeological spaces that I do not fully understand. This probably imposes some condition on the relation with which I would take the quotient. Maybe defining the relation based on a function satisfying the implicit function theorem?

Patrick Massot (Jan 15 2022 at 21:50):

I think it's really great that someone is trying to do numerical analysis in Lean! I had no idea that diffeological spaces could be useful in numerical maths. I only very vaguely know them as an esoteric corner of differential geometry, and I wasn't aware of any link with anything other than diffeological spaces themselves.

Tomas Skrivan (Jan 15 2022 at 22:06):

For example, this paper on differentiable programming, 𝜆ₛ: computable semantics for differentiable programming with higher-order functions and datatypes, uses diffeological spaces as a tool for proving soundness of its method.

I'm mostly following the first chapter of The Convenient Setting of Global Analysis. I especially like this part of the introduction:

An eminent mathematician once said that for infinite dimensional calculus each serious application needs its own foundation. By a serious application one obviously means some application of a hard inverse function theorem. These theorems can be proved, if by assuming enough a priori estimates one creates enough Banach space situation for some modified iteration procedure to converge. Many authors try to build their platonic idea of an a priori estimate into their differential calculus. We think that this makes the calculus inapplicable and hides the origin of the a priori estimates. We believe that the calculus itself should be as easy to use as possible, and that all further assumptions (which most often come from ellipticity of some nonlinear partial differential equation of geometric origin) should be treated separately, in a setting depending on the specific problem.

This totally aligns with my experience from my studies, mostly analysis and PDEs, and the total disconnect between how mathematicians and physicists treat similar problems.

All transformations in SciLean can be (and hopefully one day will be) formally justified using convenient calculus (or diffeological spaces). To actually prove any kind of convergence of the generated algorithms you then have to go and manually provide some a priori estimates, choose the correct Banach/Sobolev space, etc. However, this is not a task I'm particularly interested in, and it is super hard (yes, looking at you, Navier-Stokes).

Tomas Skrivan (Jan 15 2022 at 22:10):

My hope is to automatically prove consistency i.e. if the algorithm converges then it actually solves the specified problem.

However, the overriding principle is to minimize the human time it takes to go from a formal specification of the problem to actually executable code.

Patrick Massot (Jan 16 2022 at 11:03):

This is really surprising to me. It would be really super nice if this were actually useful. But I fear you'll have a very hard time finding experts here. Maybe you should email the authors of this paper and try to get them interested?

Tomas Skrivan (Jan 16 2022 at 11:47):

I will try that, but maybe I should read that paper properly first :)

Joe Hendrix (Jan 16 2022 at 16:54):

@Tomas Skrivan I'll mention that Ben Sherman (the first author of the paper) knows Lean -- he was an intern at Galois when we were working on formalizing a distributed protocol.

Tomas Skrivan (Jan 16 2022 at 16:55):

Cool! Thanks a lot.

Tomas Skrivan (Jan 19 2022 at 20:49):

ASCII art is cool but fully raytraced animations are cooler. I wrote a plugin for Houdini, a visual effects software similar to Blender, and now I can use Lean as a scripting language in it. Visualization of the wave simulation: wave.mp4

Tomas Skrivan (May 08 2023 at 22:30):

I have a cool new example to show.

The task is to shoot a target with a cannon, i.e. find the initial velocity for ballistic motion such that it hits the specified target.

I wrote a widget that aims at your mouse :)
Peek-2023-05-08-17-59.gif


Here is a description of how it is done:

Ballistic motion is a solution to these differential equations:

$$
\begin{align}
\dot x &= v \\
\dot v &= g - f(v)
\end{align}
$$

where $f$ is some function describing air drag. In Lean this system is defined with the function balisticMotion (with $f(v) = (5 + \|v\|)\,v$):

def balisticMotion (x v : ℝ×ℝ) := (v, g - (5 + ∥v∥) • v)

The function aimToTarget approximately finds the initial velocity needed to hit the target at time T. The specification of this function is:

approx aimToTarget (v₀ : ℝ×ℝ) (optimizationRate : ℝ) :=
  λ (T : ℝ) (target : ℝ×ℝ) =>
  let shoot := λ (t : ℝ) (v : ℝ×ℝ) =>
                 odeSolve (t₀ := 0) (x₀ := ((0:ℝ×ℝ),v)) (t := t)
                   (f := λ (t : ℝ) (x,v) => balisticMotion x v)
  (λ v => (shoot T v).1)⁻¹ target

Here we define a local function shoot that returns the position and velocity of a projectile at time t when shot with initial velocity v. So if we want to hit the target at time T, we are interested in inverting the function shoot, i.e. (λ v => (shoot T v).1)⁻¹ target.

The arguments v₀ and optimizationRate are later used as initial parameters for gradient descent.

This specification is then turned into executable code using tactics. The steps are roughly as follows:

  1. Reformulate the inverse as a minimization problem and apply gradient descent (see the sketch after this list)
  2. Run automatic differentiation - this will formulate the adjoint problem, because we are differentiating through the solution of an ODE
  3. Choose an ODE discretization for the forward and backward passes of the adjoint formulation
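
To spell out step 1 in textbook form (my paraphrase, not the exact term produced by the tactics): the inverse is replaced by a least-squares problem, which gradient descent then attacks starting from v₀ with step size optimizationRate = η,

$$v^\star \approx \operatorname{argmin}_v \tfrac{1}{2}\,\|(\mathrm{shoot}\ T\ v).1 - \mathrm{target}\|^2, \qquad v_{k+1} = v_k - \eta \,\nabla_v \tfrac{1}{2}\,\|(\mathrm{shoot}\ T\ v_k).1 - \mathrm{target}\|^2.$$

The "adjoint problem" in step 2 refers to the standard adjoint sensitivity setup for differentiating through an ODE solution: for $\dot x = f(t, x)$ and a terminal loss $L(x(T))$, one integrates $\dot\lambda = -\bigl(\partial f/\partial x\bigr)^{\top} \lambda$ backwards in time from $\lambda(T) = \partial L/\partial x(T)$, and the gradient with respect to the initial condition is $\lambda(0)$.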

Unfortunately, my automatic differentiation is still not working completely smoothly, so there is no neat tactic called autodiff, just a bunch of calls to fun_trans, which is a generic tactic for doing function transformations.

If you want to see more details, here is the code.

Jeremy Avigad (May 09 2023 at 13:10):

Does this mean that we can now apply for DARPA grants for widgets?


Last updated: Dec 20 2023 at 11:08 UTC