Zulip Chat Archive

Stream: general

Topic: Extraction replacing reals with float?


Kevin Sullivan (Jul 13 2021 at 21:30):

Dear Lean: Are there tools out there for extracting Lean code to (or compiling it to) an ordinary programming language, replacing reals with floats? Maybe in Lean 4, with compilation to C? Even better would be to have some theorems/computations to quantify error bounds. Another way to ask the question: "What's out there in this regard?" Thank you, as always. --Kevin

Mario Carneiro (Jul 13 2021 at 22:18):

Lean 4 compiles to C, but it doesn't replace reals with floats, it replaces Float with floats

Mario Carneiro (Jul 13 2021 at 22:19):

Replacing reals with float is just semantically wrong, so it is very difficult to argue that compilers that do this are doing anything that makes sense. This is available in many C compilers under a -ffast-math flag

Mario Carneiro (Jul 13 2021 at 22:22):

where fast is an abbreviation for Floats Allowing Sketchy Transformations

Jason Rute (Jul 13 2021 at 22:26):

Also, I assume the mathlib functions on the reals, like pi, sin, log, and abs, are not optimized or even computable.

Kevin Buzzard (Jul 13 2021 at 22:28):

Right now the definition of pi is "twice the smallest positive zero of cosine, which exists and is between 1 and 2 because of the intermediate value theorem", however we have managed to get some pretty good approximations to pi using trickery. I don't think anyone has attempted to make anything computable though.

Kevin Sullivan (Jul 14 2021 at 00:35):

Mario Carneiro said:

Replacing reals with float is just semantically wrong, so it is very difficult to argue that compilers that do this are doing anything that makes sense. This is available in many C compilers under a -ffast-math flag

I know there's not a semantic equivalence, but this kind of substitution seems practically unavoidable if one wants to compute in addition to reasoning. In our present application, we've formalized affine coordinate spaces over arbitrary fields, and that's all well and good. But we're particularly interested in Euclidean spaces for work in robotics, so the field is R. That makes the theory uncomputable, of course. But we ultimately want to compute as well as to reason, so we will consider implementing R as float and accept that there's a loss of assurance. There's work out there that seeks to quantify the loss due to such replacements. This is what every programmer does in practice: think real, use float. Decent programmers know the major pitfalls (e.g., never compare floats with ==). We'd use Q with big nats, but Q isn't closed under square root, so although the Euclidean inner product itself stays rational, we couldn't define the distances and angles derived from it. We're looking for best practices and practical compromises to get "some of both worlds." Ideas welcome!

Jason Rute (Jul 14 2021 at 02:00):

I do see your point. Namely, my experience with writing production scientific code with floats is not about implementing a complicated floating point function with strict error bounds, where I have to reason about floats carefully. Instead, it is about calculating an average of a list of numbers and not forgetting about the empty list, or not subtracting when I mean to add, or not taking the square root of a negative number. More mathematically, I sometimes want to show that I'm using a function associated with a probability distribution correctly. (Also, I have A LOT of issues with unit conversion and dimensionality. Scala's squants library is very helpful in that regard, but that isn't really about the reals per se. Similarly, just storing different kinds of values in their own types helps to avoid a lot of mistakes.)

Jason Rute (Jul 14 2021 at 02:00):

I do wonder, though, whether you have a specific practical use case for such a capability. For mathlib-style stuff, I doubt the extractable algorithms are efficient or practical. For real-world programming, since you are not looking for perfect guarantees, I wonder if just good type management would solve a lot of potential problems. (And if you are getting into the weeds of numerical stability, one probably needs to reason about floats.)

Jason Rute (Jul 14 2021 at 02:01):

Nonetheless, I imagine one could make a new type FloatReal. It is an inductive type defining terms using the floating point operations, but only those ideally compatible with the reals (so no infinities, NaNs, division by zero, or signed zeros). Edit: Actually, just skip to my next idea; I think this one isn't working. Then you can have well-defined but noncomputable FloatReal -> Real and Real -> FloatReal functions, as well as computable FloatReal -> Float and Float -> FloatReal functions. Then you can run a FloatReal to compute the result.

Jason Rute (Jul 14 2021 at 02:01):

Alternatively (and probably better), FloatReal could be a type class which is implemented for the types Float and Real. The type class only specifies how to implement the floating point operations (again, those compatible with the reals in theory). One can then build up a library of functions using those operations, using Real for verifying (idealized, analytical) properties of the functions, and Float for running the functions in practice.
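
A minimal Lean 4 sketch of what that could look like (FloatReal, its instances, and hypot below are all illustrative names, not an existing library):

```lean
import Mathlib

-- Hypothetical interface exposing only operations that make sense over
-- both Float and ℝ (no NaN tests, no signed-zero tricks, etc.).
class FloatReal (α : Type) where
  add   : α → α → α
  mul   : α → α → α
  sqrt  : α → α
  ofNat : Nat → α

-- Computable instance: used to actually run the code.
instance : FloatReal Float where
  add   := (· + ·)
  mul   := (· * ·)
  sqrt  := Float.sqrt
  ofNat := Float.ofNat

-- Noncomputable instance: used to state and prove idealized properties.
noncomputable instance : FloatReal ℝ where
  add   := (· + ·)
  mul   := (· * ·)
  sqrt  := Real.sqrt
  ofNat := Nat.cast

-- Library code is written once against the interface:
def hypot {α : Type} [FloatReal α] (x y : α) : α :=
  FloatReal.sqrt (FloatReal.add (FloatReal.mul x x) (FloatReal.mul y y))
```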

Yakov Pechersky (Jul 14 2021 at 02:07):

How do you deal with the nonassociativity of binops like addition?

Mario Carneiro (Jul 14 2021 at 02:11):

We'd use Q with big nats, but Q isn't closed under square root, so although the Euclidean inner product itself stays rational, we couldn't define the distances and angles derived from it.

You can always get around this the "rational trigonometry" way: only ever use squared distances and the squared cosine of the angle, and round at the end if you need actual angles/distances
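
A small computable sketch of that trick over ℚ (quadrance, dot, and sqCos are illustrative names; note that Lean's ℚ returns 0 on division by zero, so degenerate inputs would need a separate check):

```lean
import Mathlib

-- Quadrance: the squared distance between two rational points.
def quadrance (p q : ℚ × ℚ) : ℚ :=
  (p.1 - q.1) * (p.1 - q.1) + (p.2 - q.2) * (p.2 - q.2)

def dot (u v : ℚ × ℚ) : ℚ := u.1 * v.1 + u.2 * v.2

-- Squared cosine of the angle between u and v: stays inside ℚ, so the
-- whole computation is exact; round to a float only at the very end.
def sqCos (u v : ℚ × ℚ) : ℚ :=
  dot u v * dot u v / (dot u u * dot v v)
```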

Jason Rute (Jul 14 2021 at 02:12):

@Yakov Pechersky If you are asking about my FloatReal proposal, you don't. First, you eliminate all the craziest stuff from the floats by forcing everything to go through the FloatReal interface, and then you accept that you won't necessarily be proving anything about the Float implementation, just the Real implementation, and hope for the best. (Alternatively, you could also reason about the Float implementation, since it is transparently there.)

Yakov Pechersky (Jul 14 2021 at 02:14):

I see. Perhaps a FixedPoint type, at various word sizes, would be an additional type this interface could handle.

Mario Carneiro (Jul 14 2021 at 02:17):

One way to work rigorously with float/real replacement is via interval arithmetic. You replace each real with two floats, and each real op with a pair of float ops using opposite rounding modes; then you can define a representation relation between the float pairs you compute and the real numbers they represent
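
A Lean sketch of that representation relation, with exact rational endpoints standing in for floats with directed rounding (which Lean's Float API does not expose); Ival and its lemma are illustrative names:

```lean
import Mathlib

structure Ival where
  lo : ℚ
  hi : ℚ

def Ival.add (a b : Ival) : Ival :=
  ⟨a.lo + b.lo, a.hi + b.hi⟩

-- The representation relation: interval I represents the real number x.
def Ival.mem (I : Ival) (x : ℝ) : Prop :=
  (I.lo : ℝ) ≤ x ∧ x ≤ (I.hi : ℝ)

-- Soundness of addition, the kind of lemma one proves op by op:
theorem Ival.add_mem {a b : Ival} {x y : ℝ}
    (ha : a.mem x) (hb : b.mem y) : (a.add b).mem (x + y) := by
  show ((a.lo + b.lo : ℚ) : ℝ) ≤ x + y ∧ x + y ≤ ((a.hi + b.hi : ℚ) : ℝ)
  push_cast
  exact ⟨add_le_add ha.1 hb.1, add_le_add ha.2 hb.2⟩
```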

Jason Rute (Jul 14 2021 at 03:03):

Just so it is clear, I don't think interval arithmetic is a practical replacement for most applications of floating point numbers.

Jason Rute (Jul 14 2021 at 03:04):

Jeremy Avigad once asked some people who do practical scientific computing if they use theorems that say how long one has to wait for the result to converge. They said no, they just run it for a long time until it looks like it converges; they are just concerned that it converges in theory. Machine learning papers often contain proofs that their algorithms work in certain cases, and those proofs involve basic analysis and calculus. Most of these don't mention anything about floating point numbers. My own work involves a lot of numerical computation. I personally am not concerned about numerical stability, just that my code's logic and math are correct. I once saw a production financial software bug which moved money from one account to another and forgot to subtract it from the first account. That wasn't a floating point issue! (The pedantic person will of course point out that money should not be implemented with floats, but lots of scientific applications also involve conservation of X, where the sum total of a quantity X in the system is constant.)

Jason Rute (Jul 14 2021 at 03:07):

If Lean 4 takes off and there is a "verified" numerical library of analysis functions that can be run with floating point (Jeremy once mentioned this to me as a possible long-term vision for Lean), one would have to consider how verified it needs to be. I assume the user would want two things: (1) that the function is accurate (which might require reasoning about floats), and (2) that it is easy to reason about how the function would mathematically behave in their own code, even if that is only under an idealized model. Since floats are awfully behaved, I think the only way to make (2) not onerous is to use a model of floating point numbers like the reals. Otherwise, users will just not bother to prove that their code works as intended (which admittedly is currently the status quo and will likely stay that way for a long, long time).

Jason Rute (Jul 14 2021 at 03:08):

Sorry, I'm rambling. I guess my point is that I agree with the OP that this would be a nice feature to have in some language of the future, but I also agree it isn't easy to find the right solution.

Mario Carneiro (Jul 14 2021 at 03:19):

Jason Rute said:

Just so it is clear, I don't think interval arithmetic is a practical replacement for most applications of floating point numbers.

While I am sympathetic to this POV, I don't see any practical alternatives. For me "correctness" (in the provable-in-principle sense) is a basic requirement for any computer program, and I can't make any meaningful sense out of a program that is not correct (and not within a couple bugfixes of a program that is correct). Interval arithmetic is not the only way to relate floats to rational numbers, but it's a lot better behaved than actual IEEE floats and float ops.

Mario Carneiro (Jul 14 2021 at 03:20):

AFAIK most numerically stable algorithms perform just fine with interval arithmetic

Mario Carneiro (Jul 14 2021 at 03:24):

My preferred approach to verifying systems is to not make any changes to what the locals are doing and instead try to figure out why what they do makes sense. But when it comes to floats, a lot of the time the answer is "they don't actually know that it's correct; they are ballparking it, and there are literal counterexamples to the correctness of the method"

Jason Rute (Jul 14 2021 at 03:25):

Are the counterexamples in the logic/math, or the floating point numbers?

Mario Carneiro (Jul 14 2021 at 03:27):

The counterexamples in the logic/math are usually simple bugs that can be fixed, like a messed up sign. Verification is good at finding these, and they are recognizable as bugs even without verification. The floating point bugs are more endemic and generally require a change of strategy, which may not be acceptable to the application, which is why they are a bigger concern for me

Mario Carneiro (Jul 14 2021 at 03:30):

I generally do everything in my power to avoid using floats ever. Fixed point is a pretty good alternative (you represent numbers as integer multiples of some small epsilon); financial calculations are of course a good match for this
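
A tiny Lean illustration of the idea (FixedPt is a hypothetical name), including the kind of exact round-trip lemma that floats cannot offer:

```lean
import Mathlib

-- Fixed-point numbers: integer multiples of epsilon = 1/100.
structure FixedPt where
  val : ℤ   -- the number represented is val / 100
deriving Repr

instance : Add FixedPt := ⟨fun a b => ⟨a.val + b.val⟩⟩

-- The exact meaning of a fixed-point number as a rational.
def FixedPt.toRat (a : FixedPt) : ℚ := (a.val : ℚ) / 100

-- Addition commutes with the meaning function, with no rounding error:
theorem FixedPt.toRat_add (a b : FixedPt) :
    (a + b).toRat = a.toRat + b.toRat := by
  show ((a.val + b.val : ℤ) : ℚ) / 100 = (a.val : ℚ) / 100 + (b.val : ℚ) / 100
  push_cast
  ring
```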

Mario Carneiro (Jul 14 2021 at 03:31):

You can do rounding and approximation over Q as well, there is no need to use floats for this
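
For example, a one-definition Lean sketch (roundTo is an illustrative name; round : ℚ → ℤ is Mathlib's rounding function):

```lean
import Mathlib

-- Round q to the nearest multiple of 2^-n, staying inside ℚ, with the
-- guarantee |roundTo n q - q| ≤ 1 / 2^(n+1) by construction.
def roundTo (n : ℕ) (q : ℚ) : ℚ :=
  ((round (q * 2 ^ n) : ℤ) : ℚ) / 2 ^ n
```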

Mario Carneiro (Jul 14 2021 at 03:34):

In short, whenever you do a floating point calculation, you should be able to say what the result you get has to do with the number you were trying to calculate. Most of the time people can't even say what this relation is

Kevin Sullivan (Jul 14 2021 at 03:34):

Jason Rute said:

Are the counterexamples in the logic/math, or the floating point numbers?

There are many well known cases of major system failures due to misunderstanding or mistreatment of floating point numbers. One of the most famous was the failure of a Patriot Missile battery to intercept an incoming SCUD missile in the first Gulf War, leading to the deaths of 28 US Army personnel. Here's what Wikipedia has to say about that case:

On February 25, 1991, an Iraqi Scud hit the barracks in Dhahran, Saudi Arabia, killing 28 soldiers from the U.S. Army's 14th Quartermaster Detachment.[48]

A government investigation revealed that the failed intercept at Dhahran had been caused by a software error in the system's handling of timestamps.[49][50] The Patriot missile battery at Dhahran had been in operation for 100 hours, by which time the system's internal clock had drifted by one-third of a second. Due to the missile's speed this was equivalent to a miss distance of 600 meters.

The radar system had successfully detected the Scud and predicted where to look for it next. However, the timestamps of the two radar pulses being compared were converted to floating point differently: one correctly, the other introducing an error proportionate to the operation time so far (100 hours) caused by the truncation in a 24-bit fixed-point register. As a result, the difference between the pulses was wrong, so the system looked in the wrong part of the sky and found no target. With no target, the initial detection was assumed to be a spurious track and the missile was removed from the system.[51][52] No interception was attempted, and the Scud impacted on a makeshift barracks in an Al Khobar warehouse, killing 28 soldiers, the first Americans to be killed from the Scuds that Iraq had launched against Saudi Arabia and Israel.

Two weeks earlier, on February 11, 1991, the Israelis had identified the problem and informed the U.S. Army and the PATRIOT Project Office, the software manufacturer.[49] As a stopgap measure, the Israelis had recommended rebooting the system's computers regularly. The manufacturer supplied updated software to the Army on February 26.

There had previously been failures in the MIM-104 system at the Joint Defense Facility Nurrungar in Australia, which was charged with processing signals from satellite-based early launch detection systems.[53]

Yakov Pechersky (Jul 14 2021 at 03:35):

Integrators (Verlet, etc.), barostats, and thermostats require very particular architecting to reduce numerical instability if they operate on floating point

Yakov Pechersky (Jul 14 2021 at 03:36):

The missile example is more an error of mixing coercions. But had the missile flown long enough and used the wrong integrator, it also would have ended up in the "wrong" place.

Mario Carneiro (Jul 14 2021 at 03:37):

that example is also why you should never use floats for timekeeping

Jason Rute (Jul 14 2021 at 03:40):

@Kevin Sullivan I feel like you and I have switched sides on this debate. :smile:

Kevin Sullivan (Jul 14 2021 at 03:59):

Mario Carneiro said:

that example is also why you should never use floats for timekeeping

The problem remains: physics is generally written using structures over real and complex fields. Formalizing physics, e.g., classical dynamics, which is what we're about at the moment, is done (at least by us) in terms of these fields. For example, SUM_i (a_i * p_i) is a point in an affine space iff the p_i are points and the a_i sum to exactly 1. Implementing the a_i as floats can obviously screw things up here (e.g., 0.1 + ... + 0.1 ten times is not == 1.0 in IEEE 754 arithmetic). On the other hand, this mapping from R to float appears to be what almost everyone does, at least implicitly. For example, major libraries in the widely used Robot Operating System (ROS) framework represent points in Euclidean space as floating point 3-tuples.
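
A one-line Lean 4 check of that pitfall:

```lean
#eval (0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 : Float) == 1.0
-- false: the ten weights sum to 0.9999999999999999, not 1, so the
-- affine-combination condition fails under IEEE 754 double precision.
```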

We know that mindlessly replacing R with float has clear potential to "break stuff." Nevertheless, as a practical matter, there can be benefits in doing this to get a quick-and-dirty computable implementation, albeit one based on a semantics-breaking transform. Mario's right about the potential pitfalls in doing this. I'm looking for insights into practical tools, best practices for translating formalized, idealized maths into computable implementations, and any other insights anyone can offer in this space! Thank you all for the ongoing stream of comments and discussion.

Kevin Sullivan (Jul 14 2021 at 04:02):

Jason Rute said:

Kevin Sullivan I feel like you and I have switched sides on this debate. :smile:

Indeed! I'm still looking for quick-and-dirty, even if unsound, "extraction" tools/methods replacing R with float. At the same time, one can't deny the problems with floats.

Mario Carneiro (Jul 14 2021 at 04:03):

If you are okay with unsound mapping, it seems like you could just transcribe a definition from C/python/etc to lean or vice versa

Mario Carneiro (Jul 14 2021 at 04:04):

It's also possible to automate the translation of a limited fragment of the language if you are worried about typos

Mario Carneiro (Jul 14 2021 at 04:06):

But neither Lean 3 nor Lean 4 has "code extraction" / transpilation into a high-level language in the same way as Coq or Isabelle. Lean 4 compiles to C, but the result is really not intended for human consumption

Kevin Sullivan (Jul 14 2021 at 04:07):

Mario Carneiro said:

It's also possible to automate the translation of a limited fragment of the language if you are worried about typos

That would work. I think Coq extraction tools will already do this. What I guess I was vaguely hoping for was additional thoughts on how one might then reason about "how far off" the results could be from the "real" maths. I do appreciate the feedback and will look at some of the ideas you and others have suggested here.

Yakov Pechersky (Jul 14 2021 at 04:09):

Isn't this the whole field of numerical analysis?

Mario Carneiro (Jul 14 2021 at 04:09):

What I guess I was vaguely hoping for was additional thoughts on how one might then reason about "how far off" the results could be from the "real" maths.

If you use the float and real types in lean, you can at least precisely define what this means, and possibly prove things about it (which would probably lead you to an interval arithmetic implementation)

Mario Carneiro (Jul 14 2021 at 04:16):

What I would really like is a way to describe a real-number computation in abstract form (for example, we want to prove x + y + z <= w by evaluating approximations to each of these real numbers), together with a cost model for getting n-bit approximations to each number (which depends on the real numbers involved, i.e. we can say how hard it would be to evaluate pi to within 2^-n based on the algorithm's complexity). This can then be used to develop an evaluation plan (e.g. we want 5 bits of x and w and 6 bits of y and z, because the true gap between x+y+z and w is around 2^-4); then we approximate each number and verify the bounds in order to prove the goal.
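
One purely illustrative way to phrase the first ingredient in Lean: pair a noncomputable value with a computable approximation scheme and a convergence guarantee (ApproxReal is a hypothetical name; a cost model would be an extra field):

```lean
import Mathlib

-- An "abstract real computation": approx is what we can run, value is
-- what it means, and close ties the two together.
structure ApproxReal where
  value  : ℝ
  approx : ℕ → ℚ
  close  : ∀ n : ℕ, |value - (approx n : ℝ)| ≤ (1 / 2) ^ n
```

Proving x + y + z <= w would then amount to choosing n large enough that the computed approximations separate the two sides, which is exactly what the evaluation plan would decide.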


Last updated: Dec 20 2023 at 11:08 UTC