Zulip Chat Archive
Stream: new members
Topic: Float division in Lean?
Vaibhav Karve (Apr 09 2020 at 20:32):
How do I make Lean compute 5/2 = 2.5?
```lean
import data.rat

#eval 5 / 2        -- Lean says this is 2 because of integer division
#eval 5.0 / 2      -- still 2.
#eval (5 : ℚ) / 2  -- Lean says this is 5/2 because of rational division
```
Kevin Buzzard (Apr 09 2020 at 20:37):
What's your definition of 2.5? Lean's job is not really to compute.
Kevin Buzzard (Apr 09 2020 at 20:37):
What is the type of `2.5`?
Vaibhav Karve (Apr 09 2020 at 20:39):
"Floating point"? Whatever that means.
Vaibhav Karve (Apr 09 2020 at 20:40):
I was just expecting that there is an inbuilt floating-point type, just like `int` is the inbuilt integer type.
Kevin Buzzard (Apr 09 2020 at 20:40):
You understand that `int` is nothing like a C integer, right?
Alex J. Best (Apr 09 2020 at 20:41):
There are some things in `data.fp.basic`, but it's not used particularly much, so I doubt it is as advanced as you'd like!
Kevin Buzzard (Apr 09 2020 at 20:41):
Can you precisely define which type you are talking about? Because if not then your question is meaningless.
Rob Lewis (Apr 09 2020 at 20:42):
There's a builtin `native.float`, but it's a meta type implemented in C++. (This is likely not what you're looking for.)
Kevin Buzzard (Apr 09 2020 at 20:43):
My point is simply that there are many answers to the question "what does 2.5 mean" and if we don't know precisely what we're talking about then the question is ill-formed.
Vaibhav Karve (Apr 09 2020 at 20:43):
Kevin Buzzard said:
> You understand that `int` is nothing like a C integer, right?
No! So far I was under the impression that `int` is like a C (CS) integer and `nat` is more like Peano (math) numbers. So I was expecting a similar kind of analogue for floating point and `real`.
Rob Lewis (Apr 09 2020 at 20:43):
Lean knows exactly what 2.5 means. It means 5/2 :)
Alex J. Best (Apr 09 2020 at 20:44):
```lean
import data.fp.basic

def my_cfg : fp.float_cfg :=
{ prec := 3, emax := 4, prec_pos := by norm_num, prec_max := by norm_num }

#eval @fp.of_rat_dn my_cfg (5 / 2 : ℚ)  -- (#2 #0 #2147483646 #1)
```
Kevin Buzzard (Apr 09 2020 at 20:44):
`int` is just like `nat`: it's arbitrary precision and designed for proving, not computing.
Kevin Buzzard (Apr 09 2020 at 20:44):
```lean
import data.real.basic

noncomputable def x : ℝ := 2.5
```
There's a 2.5, but you're going to have real trouble computing with it ;-)
Kevin Buzzard (Apr 09 2020 at 20:46):
`nat` is Peano math naturals, i.e. naturals defined by axioms, `int` is integers defined by axioms, `rat` is rationals defined by axioms, `real` is reals defined by axioms. The last one involves a noncomputable construction so `#eval` and `#reduce` are unlikely to be able to do anything.
Vaibhav Karve (Apr 09 2020 at 20:46):
Okay. All these answers make sense to me now.
Having said that, should Lean have the ability to do floating point calculations for the sake of it? I can do it in Haskell and Python. So I want to do it in Lean too.
Kevin Buzzard (Apr 09 2020 at 20:47):
Lean is not designed for computing.
Vaibhav Karve (Apr 09 2020 at 20:47):
Then should we call it a programming language?
Rob Lewis (Apr 09 2020 at 20:47):
You can do all the fp calculations you want(*) with `native.float`.
Rob Lewis (Apr 09 2020 at 20:47):
You just can't prove anything about them.
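For instance, a minimal sketch of the computing side, assuming the numeral, division and `has_repr` instances that ship with `native.float` (exact printed output may differ):

```lean
-- Lean 3 sketch: native.float is a meta type wrapping C++ doubles,
-- so #eval can crunch numbers with it, but nothing here is a theorem.
#eval (5 / 2 : native.float)  -- roughly 2.5
```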
Kevin Buzzard (Apr 09 2020 at 20:47):
The reals are intrinsically noncomputable.
Filip Szczepański (Apr 09 2020 at 20:48):
You could say Lean is designed for computing proofs
Kevin Buzzard (Apr 09 2020 at 20:48):
you can prove lots of things with them but can't compute anything. You can do this though:
```lean
import data.real.basic

example : (2.5 : ℝ) = 5/2 := by norm_num
```
but I didn't compute anything there, I proved a theorem.
Alex J. Best (Apr 09 2020 at 20:48):
You _can_ program with Lean, but the types you use to do so are (often) different from those you prove things about.
Vaibhav Karve (Apr 09 2020 at 20:48):
Rob Lewis said:
> You can do all the fp calculations you want(*) with `native.float`.
Got it! I think that is what I wanted. I want the choice to compute. I understand proving things about floating points would be worthless :slight_smile:
Rob Lewis (Apr 09 2020 at 20:49):
It's certainly not worthless, just hard!
Kevin Buzzard (Apr 09 2020 at 20:49):
Stuff like associativity is true for `ℝ` but is false for `native.float` :-)
Rob Lewis (Apr 09 2020 at 20:50):
Sylvie Boldo and others have done a lot of work in Coq on this.
Kevin Buzzard (Apr 09 2020 at 20:51):
```python
buzzard@bob:~$ python
Python 2.7.17 (default, Nov 7 2019, 10:07:09) [GCC 7.4.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> N=(10.0)**20
>>> e=(10.0)**(-20)
>>> M=-N
>>> (M+N)+e==M+(N+e)
False
>>>
```
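For comparison, a sketch of the same failure inside Lean itself, using present-day Lean 4's `Float` (which also wraps IEEE doubles):

```lean
-- Float addition is IEEE double addition, so it is not associative:
def N : Float := 1e20
def e : Float := 1e-20

#eval ((-N + N) + e) == (-N + (N + e))  -- false: the left side is 1e-20, the right side is 0
```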
Filip Szczepański (Apr 09 2020 at 20:52):
I remember reading something about a property weaker than associativity that floating point does have
Kevin Buzzard (Apr 09 2020 at 20:52):
non-associativity
Gabriel Ebner (Apr 09 2020 at 20:53):
I think addition and multiplication are commutative, ignoring NaNs for a moment.
Kevin Buzzard (Apr 09 2020 at 20:53):
You can check this with dec_trivial
Kevin Buzzard (Apr 09 2020 at 20:55):
stackexchange seems to suggest that even `a + b == a + b` is not 100% guaranteed
Kevin Buzzard (Apr 09 2020 at 20:55):
Maybe there's some property weaker than equality that floating point does have
Filip Szczepański (Apr 09 2020 at 20:55):
`a + b == a + b` can be false if a or b are NaN
Kevin Buzzard (Apr 09 2020 at 20:56):
```python
>>> 100*(0.1+0.2)==100*0.1+100*0.2
False
```
rofl it's not even a `distrib`. How you guys can work with this object is beyond me.
Kevin Buzzard (Apr 09 2020 at 20:57):
```python
>>> 100*0.1+100*0.2
30.0
>>> 100*(0.1+0.2)
30.000000000000004
>>>
```
Filip Szczepański (Apr 09 2020 at 20:57):
It's good for things that don't need exact results. Definitely not a good fit for a theorem prover though
Marc Huisinga (Apr 09 2020 at 20:57):
lean4 has a float type: https://github.com/leanprover/lean4/blob/master/src/Init/Data/Float.lean
Filip Szczepański (Apr 09 2020 at 20:58):
I just wish I'd remember what the property/equality I saw before was
Rob Lewis (Apr 09 2020 at 20:58):
It's very interesting to verify error bounds in fp computations though.
Alex J. Best (Apr 09 2020 at 20:59):
Filip Szczepański said:
> I just wish I'd remember what the property/equality I saw before was
Hopefully it is https://en.wikipedia.org/wiki/Power_associativity at least
Filip Szczepański (Apr 09 2020 at 21:00):
I haven't checked, but I'd actually bet that floating point isn't power associative
Filip Szczepański (Apr 09 2020 at 21:02):
Yeah, it isn't.
```python
from math import pi
(pi*pi)*(pi*pi) == pi*(pi*(pi*pi))
```
This returns `False`.
Gabriel Ebner (Apr 09 2020 at 21:03):
Yeah, "in which order should I sum numbers" is like numerics 101.
Reid Barton (Apr 09 2020 at 21:14):
Add them forwards and then backwards, and average the results. I mean it was good enough for Gauss
Vaibhav Karve (Apr 09 2020 at 21:15):
Marc Huisinga said:
> lean4 has a float type: https://github.com/leanprover/lean4/blob/master/src/Init/Data/Float.lean
Looks interesting. But still looks like it is meant for proving theorems, not for pure number crunching.
Rob Lewis (Apr 09 2020 at 21:16):
Quite the opposite. I think that's roughly equivalent to `native.float` in Lean 3. It's just exposing C++ floats.
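So the original question has a one-liner answer there too; a sketch in present-day Lean 4 (at the time of this thread a `HasRepr`/`Repr` instance for `Float` still had to be added by hand, as comes up below):

```lean
-- Lean 4: Float is a C double, so this is ordinary floating-point division.
#eval (5.0 : Float) / 2  -- 2.500000
```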
Marc Huisinga (Apr 09 2020 at 21:26):
(i should probably read the topics i reply to so i don't constantly repeat things other people just said :sweat_smile:)
Kevin Buzzard (Apr 09 2020 at 22:08):
I guess all those Lean `constant`s in that code just mean "Lean doesn't actually know anything about this function", and then the `extern` tags must mean "...but if you want me to evaluate it in some way then I'll use this C++ function"
Kevin Buzzard (Apr 09 2020 at 22:10):
I know this is a very naive question, but can one guarantee that these external function calls will always return the same answer? They don't depend on some other parameters which might change depending on what you're doing?
Kevin Buzzard (Apr 09 2020 at 22:11):
Oh -- judging by the commit message these are C floats not C++ floats
Kevin Buzzard (Apr 09 2020 at 22:13):
So how will that code work? Will I be able to do `#eval Float.sin (3.141)`?
Marc Huisinga (Apr 09 2020 at 22:25):
there seems to be a small test file here:
https://github.com/leanprover/lean4/blob/master/tests/compiler/float.lean
Kevin Buzzard (Apr 09 2020 at 22:28):
That file was last edited with a commit which removed decidable equality. Could this be because if you use C < to decide < and C = to decide = then you can deduce a contradiction?
Marc Huisinga (Apr 09 2020 at 22:30):
my guess is that it's because float equality is really badly behaved. after all, nan is not equal to nan
Marc Huisinga (Apr 09 2020 at 22:35):
ah yes, the commit message confirms it: https://github.com/leanprover/lean4/commit/705530b62bc23b47c0ba9baa8e199ab376b8fd7b
Reid Barton (Apr 09 2020 at 22:38):
do we know what `beq` is supposed to mean?
Marc Huisinga (Apr 09 2020 at 22:41):
Reid Barton said:
> do we know what `beq` is supposed to mean?
Reid Barton (Apr 09 2020 at 22:47):
Ah okay, I guess `b` means `bool` here too then (like in `band` etc.)
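In present-day Lean 4 that boolean equality is the `==` notation on `Float`; a sketch of the NaN behaviour that motivated dropping decidable propositional equality:

```lean
-- == on Float is IEEE equality, under which NaN is not equal to itself.
def myNaN : Float := 0.0 / 0.0

#eval myNaN == myNaN           -- false
#eval (2.5 : Float) == 5 / 2   -- true
```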
Kevin Buzzard (Apr 09 2020 at 22:59):
Ha ha you computer scientists shouldn't be allowed to use the sacred terms + and = on your silly floats
Kevin Buzzard (Apr 09 2020 at 22:59):
You should choose your favourite emojis and use those instead to remind us all about what you're really doing with your code
Kevin Buzzard (Apr 09 2020 at 23:00):
You should switch to the p-adic numbers, the error bounds are far better behaved
Kevin Buzzard (Apr 09 2020 at 23:01):
That is one of the funniest commit messages I've ever seen
Kevin Buzzard (Apr 09 2020 at 23:12):
You should just let NaN be zero like in Lean. It works fine!
Filip Szczepański (Apr 09 2020 at 23:13):
0/0 = 0 in Lean?
Kevin Buzzard (Apr 09 2020 at 23:13):
Sure
Kevin Buzzard (Apr 09 2020 at 23:14):
Mathematicians took an oath never to divide by zero so the computer scientists just defined it to be something convenient
Kevin Buzzard (Apr 09 2020 at 23:14):
Actually for all I know 0/0=37
Kevin Buzzard (Apr 09 2020 at 23:14):
All I know is that it's a number
Kevin Buzzard (Apr 09 2020 at 23:15):
Because the type of `/` is `a -> a -> a`
Filip Szczepański (Apr 09 2020 at 23:15):
Maybe it should be `a -> a -> Maybe a`
Filip Szczepański (Apr 09 2020 at 23:16):
Or `a -> NonZero a -> a`
Kevin Buzzard (Apr 09 2020 at 23:16):
That would be a nightmare to work with
Kevin Buzzard (Apr 09 2020 at 23:16):
Both would
Kevin Buzzard (Apr 09 2020 at 23:16):
What we have is the best solution, convenient and simple
Kevin Buzzard (Apr 09 2020 at 23:17):
The nonzero assumptions are in the theorems not the definition
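A sketch of what that looks like in Lean 3 mathlib, where `div_zero` and `zero_div` record the convention and the interesting lemmas about division carry a `≠ 0` hypothesis:

```lean
import data.real.basic

-- Division is total: dividing by zero just returns zero.
example : (0 : ℝ) / 0 = 0 := zero_div 0
example (a : ℝ) : a / 0 = 0 := div_zero a
```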
Marc Huisinga (Apr 09 2020 at 23:17):
there's that famous javascript WAT talk (https://www.destroyallsoftware.com/talks/wat), i'm sure kevin would be able to hold a talk like this about theorem prover langs
Filip Szczepański (Apr 09 2020 at 23:22):
How does it avoid contradictions like 0=1?
Bryan Gin-ge Chen (Apr 09 2020 at 23:24):
What's an example derivation of `0 = 1` that you have in mind?
Bryan Gin-ge Chen (Apr 09 2020 at 23:30):
Note that you won't be able to prove anything like:
```lean
example (a b c : nat) : a / c = b / c → a = b := sorry
```
Filip Szczepański (Apr 09 2020 at 23:33):
Right, not with flooring division. I was thinking about rationals
Bryan Gin-ge Chen (Apr 09 2020 at 23:35):
The same statement with `rat` is also unprovable without a `c ≠ 0` hypothesis.
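A sketch of why: with `c = 0` both sides collapse to `0`, so `a / c = b / c` carries no information about `a` and `b` (here `a = 1`, `b = 2`):

```lean
import data.rat.basic

-- Both 1 / 0 and 2 / 0 are defined to be 0 in ℚ, so the hypothesis
-- a / c = b / c holds even though 1 ≠ 2.
example : (1 : ℚ) / 0 = 2 / 0 := by rw [div_zero, div_zero]
```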
Mario Carneiro (Apr 10 2020 at 00:18):
Kevin Buzzard said:
> Mathematicians took an oath never to divide by zero so the computer scientists just defined it to be something convenient
I think this is one instance where your conflation of "computer scientists" and "non-number theorists" might be misleading. The actual computer scientists, the ones who write programming languages and computer architectures, have almost universally decreed that `x/0 = exception`.
Mario Carneiro (Apr 10 2020 at 00:19):
however this plays absolute havoc in a theorem prover
Mario Carneiro (Apr 10 2020 at 00:21):
it's even a problem in compilers, where there is an abstract semantics of the language and `x/y` has to be considered a "side effecting" operation, which means that `x/y + z/w` might not have the same behavior as `z/w + x/y`; that is, they are not extensionally equal as programs
Reid Barton (Apr 10 2020 at 01:46):
Kevin Buzzard said:
> So how will that code work? Will I be able to do `#eval Float.sin (3.141)`?
I tried it but I got `result type does not have an instance of type class 'HasRepr' or 'Lean.HasEval'`
Reid Barton (Apr 10 2020 at 01:47):
```lean
instance : HasRepr Float := { repr := Float.toString }

#eval (Float.sin 3.14)  -- 0.001593
```
Mario Carneiro (Apr 10 2020 at 01:51):
What is `HasEval`?
Reid Barton (Apr 10 2020 at 02:27):
it looks like mainly a convenience thing
Reid Barton (Apr 10 2020 at 02:28):
https://github.com/leanprover/lean4/blob/master/src/Init/System/IO.lean#L319
Reid Barton (Apr 10 2020 at 02:33):
so that `#eval` more or less matches the behavior of the ghci repl for example, where you can give it either a plain expression or an IO action
Andrew Ashworth (Apr 15 2020 at 02:31):
Kevin Buzzard said:
> You should switch to the p-adic numbers, the error bounds are far better behaved
We've got a better representation, but IEEE floats are pretty entrenched. I'd bet on posits or some variation becoming the new standard, though. They're already being used in custom FPGA work.
Andrew Ashworth (Apr 15 2020 at 02:31):
https://www.johndcook.com/blog/2018/04/11/anatomy-of-a-posit-number/
Andrew Ashworth (Apr 15 2020 at 02:35):
well, standard.. maybe... in 30 years. The x86 and IEEE 754 mafia is strong
Andrew Ashworth (Apr 15 2020 at 02:39):
I guess the billions of dollars sunk in existing FPU designs might have something to do with it too.
Andrew Ashworth (Apr 15 2020 at 03:13):
One of the original scientists who specified floating point numbers has a hot take on posits. Spittin' fire! Would recommend reading if you're absolutely bored from staying indoors all day: https://people.eecs.berkeley.edu/~wkahan/EndErErs.pdf