# Zulip Chat Archive

## Stream: maths

### Topic: interval arithmetic and Lean

#### Kevin Buzzard (Sep 25 2020 at 12:29):

Over the last couple of years several staff members from across the university have asked me questions about doing formally verified computations in Lean, and it's about time I learnt something about how this works. Today I was talking to Sheehan Olver, who does research into computational methods, and he was brandishing Julia code at me involving interval arithmetic and asking if it could be done in Lean. I mean yes, in theory -- but in practice? It seemed to me that he could do with an interval class consisting of two rational endpoints, with things like square root functions defined on this class, and also the ability to do e.g. computations with sparse 1000x1000 matrices over it (e.g. some stable variant of QR whose name I've forgotten, possibly multi-syllabic and beginning with H).

Would I be better off telling him to use Coq? How much work would it be to develop a computationally useful package in Lean 3 or Lean 4? Presumably these already exist in other systems?
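(A minimal sketch of what such an interval class might look like, in Lean 3 syntax since that is what mathlib used at the time; the names here are hypothetical, not mathlib's:)

```lean
import data.rat.order

-- Hypothetical sketch: an interval with rational endpoints.
structure rat_interval :=
(lo hi : ℚ)
(le : lo ≤ hi)

-- Interval addition: endpoints add, and monotonicity of addition
-- supplies the proof that the result is still a valid interval.
def rat_interval.add (a b : rat_interval) : rat_interval :=
⟨a.lo + b.lo, a.hi + b.hi, add_le_add a.le b.le⟩
```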

#### Patrick Massot (Sep 25 2020 at 12:44):

You should ask @Fabian Immler

#### Joseph Myers (Sep 25 2020 at 16:01):

You don't want to just use rational endpoints naively (you can do exact add/subtract/multiply/divide on intervals with rational endpoints, but the numerators and denominators will blow up in size very quickly); you need some kind of rounding. Floating-point endpoints (with an appropriate formal model of floating-point arithmetic) are better here than general rational ones.
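(To illustrate the blowup: exact rational multiplication multiplies denominators, so sizes compound with every operation, e.g.:)

```lean
-- Denominators multiply under exact rational arithmetic:
-- 3 * 7 * 11 * 13 = 3003, and it only gets worse from there.
#eval ((1 : ℚ)/3 * (1/7) * (1/11) * (1/13))  -- 1/3003
```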

#### Joseph Myers (Sep 25 2020 at 16:03):

mathlib's `data.fp.basic` is all `meta`, with no proofs that anything has the correct semantics, but I don't see any particular obstruction to having a formal model of floating point suitable both for proofs and, via some suitable variant of `norm_num`, for computations. (The mathlib model would seem to be that you don't care about the arithmetic working sensibly via kernel reduction, as long as you have tactics that can do computations with it.)
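(One shape such a proof-friendly model might take -- a hypothetical sketch, not what `data.fp.basic` actually defines:)

```lean
import data.rat.order

-- Hypothetical: a float with exact integer mantissa and exponent,
-- representing the value m * 2^e.
structure fp :=
(m : ℤ)  -- mantissa
(e : ℤ)  -- exponent

-- Its exact rational value, usable in proofs about float algorithms.
def fp.to_rat (x : fp) : ℚ := (x.m : ℚ) * 2 ^ x.e
```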

#### Reid Barton (Sep 25 2020 at 16:05):

You could also ask Johannes: https://home.in.tum.de/~hoelzl/documents/hoelzl09diplomathesis.pdf

#### Joseph Myers (Sep 25 2020 at 16:07):

(When I'm not doing olympiad stuff, much of the mathematically related work I do is on floating point, as part of maintaining glibc libm. At some point I need to implement all the new special functions from IEEE 754-2008 that have been added to the next revision of the C standard.)

#### Alex J. Best (Sep 25 2020 at 16:07):

Coq has http://arxiv.org/abs/1106.3448 for one.

#### Reid Barton (Sep 25 2020 at 16:15):

The main questions here are what kind of performance you need and in what sense you want the result to be verified

#### Joseph Myers (Sep 25 2020 at 16:15):

If you want a formal floating-point implementation that can be used to prove properties of floating-point algorithms, as well as in formally verified calculations for more mathematical results, that may impose further requirements on how the implementation is designed, but I think it should still be possible to cover both use cases in a single implementation. (For example, for proving properties of floating-point algorithms you might want exceptions, rounding modes, and some IEEE implementation choices to be handled in the implementation, and for it to cover decimal floating point as well as binary; it should probably represent things at IEEE level 3, "Representations of floating-point data", the lowest level of IEEE floating-point semantics at which almost all results are still fully specified by the IEEE standard.)

#### Joseph Myers (Sep 25 2020 at 16:19):

A simple mathematical use case might be proving some inequality you need as part of a larger proof (cf. the bounds on pi that are already in mathlib). A use case for proving properties of floating-point algorithms might be to prove that a given algorithm calculates some function on floating-point inputs and with floating-point results, say tan(x), with a given bound on the errors. (No doubt the people Kevin's talking to want more complicated results, but proving bounds on a special function, or error bounds on a floating-point algorithm for calculating one, should be easier to achieve.)
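(The pi bounds mentioned here, if I have the lemma names right, can be used directly, e.g.:)

```lean
import analysis.special_functions.trigonometric

-- mathlib already contains verified numeric bounds on π:
example : 3.141592 < real.pi := real.pi_gt_3141592
example : real.pi < 3.141593 := real.pi_lt_3141593
```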

#### Joseph Myers (Sep 25 2020 at 16:29):

In my PhD I found I needed to prove that a complicated two-variable function was positive, for one variable between 0 and 1 and the other between 0 and a number given as a root of a cubic equation, in order to prove a graph theory result. The very informal reason to believe the result was looking at graphs of that function for a few values of the second variable. The proof I came up with involved numerical bounds on the function and its first and second partial derivatives in 80 separate regions for the two variables (none of that formalised, or even using interval arithmetic for computing the individual values of the function and derivatives that were used to deduce bounds on those intervals), effectively justifying that the graph of the function looked the way it appeared to when plotted. I expect that kind of calculation, appearing as part of a larger proof, should be well within the scope of what can practically be formalised using a formal version of floating-point interval arithmetic.

#### Sheehan Olver (Sep 25 2020 at 18:22):

The idea was not to do the calculations in Lean but to translate interval arithmetic, as implemented in floating point by IntervalArithmetic.jl, into Lean statements. The easy way would be to translate each floating-point number into a rational. So I'm not sure the worry about large numerators/denominators applies: the rounding has already been taken care of, so the size is only whatever the worst case is for a floating-point number written as a rational.

Though I agree it's probably better to have a floating-point type.
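(A sketch of the translation being described, with hypothetical names: each Julia interval of binary64 floats becomes a Lean statement whose rational endpoints are read off exactly, so the denominators are bounded by the float format and never compound.)

```lean
import data.real.basic

-- Hypothetical target of a Julia → Lean translation: the claim that a
-- real quantity x lies between two float endpoints lo = m₁ * 2^e₁ and
-- hi = m₂ * 2^e₂, taken exactly as rationals. All rounding already
-- happened on the Julia side.
def in_interval (x : ℝ) (m₁ e₁ m₂ e₂ : ℤ) : Prop :=
(m₁ : ℝ) * 2 ^ e₁ ≤ x ∧ x ≤ (m₂ : ℝ) * 2 ^ e₂
```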

Last updated: May 19 2021 at 03:22 UTC