Zulip Chat Archive

Stream: general

Topic: Opaque junk values for partial functions


Niels Voss (Jan 04 2023 at 17:46):

I was reading through this thread and was trying to think of ways to stop proofs that 1 - 2 = 3 - 6 (or something similar, I could see a variety of reasons that nat subtraction specifically shouldn't be changed). This idea probably sounded better in my head than it does on paper, but I was wondering if we could define a junk_value function that prevents proofs of equality, like this:

opaque junk_value (n m : ℕ) : ℕ := 0
def minus (n m : ℕ) : ℕ := if m ≤ n then n - m else junk_value n m

That way, 1 - 2 reduces to junk_value 1 2 and 3 - 5 reduces to junk_value 3 5, which are computationally equal, but as far as I know there's no way to prove them equal (except maybe with native_decide, whose inner workings I don't know).

A more generic junk_value function that works for all types could be:

universe u v
opaque junk_value {α : Type u} {β : Type v} [Inhabited α] (b : β) : α := Inhabited.default

where β would likely be a tuple containing the type of all arguments.

Niels Voss (Jan 04 2023 at 17:59):

One caveat is that for subtraction, it essentially has to be defined twice, once where 1 - 2 returns 0 and once where it returns junk_value 1 2. If you try defining it in one go, like

def sub : Nat -> Nat -> Nat
  | 0, m => junk_value 0 m
  | n, 0 => n
  | (n+1), (m+1) => sub n m

Then 1 - 5 and 2 - 6 both reduce to junk_value 0 4 and can be proven equal.

Johan Commelin (Jan 04 2023 at 18:01):

Is that a problem?

Johan Commelin (Jan 04 2023 at 18:02):

Because (1:ℤ) - (5:ℤ) = (2:ℤ) - (6:ℤ).

Johan Commelin (Jan 04 2023 at 18:03):

I think it's reasonable to occasionally have junk_value a b = junk_value c d if a + d = c + b.

Johan Commelin (Jan 04 2023 at 18:03):

But anyway, it turns out that for applications it can be very helpful to have well-chosen junk values.

Johan Commelin (Jan 04 2023 at 18:04):

It can occasionally be a footgun, which is probably what you are trying to avoid. But it also saves you from checking a tonne of annoying side conditions.

Niels Voss (Jan 04 2023 at 18:07):

For natural number subtraction, I do agree that opaque junk values are not the way to go because having 1 - 2 = 0 is actually sometimes a useful property. I was thinking that junk_value would be more helpful in situations where there's not really a good junk value to choose.

Johan Commelin (Jan 04 2023 at 18:08):

Why would it be helpful in such situations?

Johan Commelin (Jan 04 2023 at 18:08):

If there's not really a good junk value to choose, then the risk for footguns is also quite small, I think.

Johan Commelin (Jan 04 2023 at 18:08):

In which case it doesn't matter which junk value you choose.

Johan Commelin (Jan 04 2023 at 18:09):

Do you have a concrete example in mind?

Niels Voss (Jan 04 2023 at 18:13):

No, I don't really have a concrete example. I guess it would help in some obscure definitions where it isn't made clear that junk values exist. I remember seeing some definition of an inverse in a ring or monoid (docs#ring.inverse) where it would return the inverse if it existed and zero if it didn't, but the function was just called something.inverse I think. That's not really a good example either because you can just read the doc strings, but someone looking at a lemma might see something like inverse 3 = inverse 4 and be confused

Niels Voss (Jan 04 2023 at 18:14):

But generally, I agree that the risk is quite small

Yaël Dillies (Jan 04 2023 at 18:14):

docs#ring.inverse

Gabriel Ebner (Jan 04 2023 at 18:42):

A more generic junk_value function that works for all types could be:

universe u v
opaque junk_value {α : Type u} {β : Type v} [Inhabited α] (b : β) : α := Inhabited.default

where β would likely be a tuple containing the type of all arguments.

A generic junk value function is just as problematic as using 0, if you want to avoid "junk" theorems. Because now you can prove e.g. x / 0 = x % 0 by reflexivity. You need to use a different opaque function for every occurrence to get truly unspecified values.
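A minimal Lean 4 sketch of this point (div' and mod' are hypothetical stand-ins, not code from the thread): if both operations fall back to the same generic junk_value, their "undefined" results are literally the same term and hence provably equal.

opaque junk_value {α : Type} {β : Type} [Inhabited α] (b : β) : α := Inhabited.default

def div' (n m : Nat) : Nat := if m = 0 then junk_value (n, m) else n / m
def mod' (n m : Nat) : Nat := if m = 0 then junk_value (n, m) else n % m

-- both sides reduce to `junk_value (x, 0)`, so reflexivity closes the goal
example (x : Nat) : div' x 0 = mod' x 0 := rfl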

Gabriel Ebner (Jan 04 2023 at 18:46):

And even then they're merely unspecified, you can still prove all kinds of theorems about them. Like (a / b)^2 + 1 > 0.

Niels Voss (Jan 04 2023 at 18:56):

I didn't consider the x / 0 = x % 0 case that you mentioned. I guess a tactic could be used to generate the junk functions, or have junk_value take in a string containing the function name.
I don't really think there's a way to solve the second problem. At the very least, there's no way to take advantage of the specific undefined value, and any theorems you prove about it essentially have to hold for every nat. It is still a bit unmathematical, but less alarming than something like ∀ n : ℕ, ∃ k, n / k = k.

Junyan Xu (Jan 04 2023 at 19:51):

On the other hand, there are useful theorems for nat.sub that you can't prove if the value is arbitrary instead of 0, like docs#has_ordered_sub.
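A tiny illustration of the kind of lemma Junyan means (Lean 4 core names); it only holds because truncated subtraction bottoms out at 0 rather than at an arbitrary junk value.

-- relies on `n - m = 0` whenever `n ≤ m`
example (n : Nat) : n - (n + 1) = 0 := Nat.sub_eq_zero_of_le (Nat.le_add_right n 1)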

Winston Yin (尹維晨) (Jan 05 2023 at 07:23):

Idea: mathlib should compile successfully independently of the choice of junk value for any partial function in mathlib. Why not have some kind of marker on all definitions of partial functions? Then periodically somebody goes through all these markers and changes their junk values to other junk values to test if the rest of mathlib is truly independent of these choices.

Kevin Buzzard (Jan 05 2023 at 07:26):

I remember thinking about this a few years ago and then deciding that any change that made things more complicated was hard to justify in practice.

Moritz Doll (Jan 05 2023 at 07:27):

this is not true: we sometimes rely on junk values so that some simple theorems are true with fewer assumptions than in traditional maths. I don't have a good example at hand, though.

Winston Yin (尹維晨) (Jan 05 2023 at 07:28):

Is this something desirable if traditional mathematicians are the intended users?

Sebastien Gouezel (Jan 05 2023 at 07:30):

This is definitely desirable since it leaves fewer assumptions to check for the user of theorems, and therefore makes the library smoother to use.

Moritz Doll (Jan 05 2023 at 07:30):

it is very practical, and if it does not change the interesting mathematics, then I think nobody will object to that.

Sebastien Gouezel (Jan 05 2023 at 07:31):

A good example is the change of variables formula docs#measure_theory.integral_target_eq_integral_abs_det_fderiv_smul, where you don't need to check that the function is integrable because otherwise the same junk value 0 is used on both sides of the formula which is therefore still true.

Kevin Buzzard (Jan 05 2023 at 07:47):

The canonical example is (a+b)/c=a/c+b/c
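A hedged Lean 4 / Mathlib sketch of Kevin's example: no c ≠ 0 hypothesis is needed, because under the x / 0 = 0 convention both sides are 0 when c = 0.

import Mathlib

-- Mathlib states this as `add_div`; `ring` also closes it since ℚ is a field
example (a b c : ℚ) : (a + b) / c = a / c + b / c := by ring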

Ruben Van de Velde (Jan 05 2023 at 12:35):

Kevin has a nice blog post about this on Xena, btw. (If I was at a desk I'd look it up)

Kevin Buzzard (Jan 05 2023 at 12:42):

https://xenaproject.wordpress.com/2020/07/05/division-by-zero-in-type-theory-a-faq/

Winston Yin (尹維晨) (Jan 05 2023 at 23:24):

I somewhat agree with Sebastian Reichelt's comment on your post, Kevin, that the proof assistant should make the experience of using partial functions more convenient, independently of the question of the practical use of junk values. For example (maybe I'm totally misunderstanding how proof assistants work), when writing real.sqrt x, the proof assistant should let you keep on proving things without interruption, while silently introducing another goal x ≥ 0 to be completed later. If such goals are showing up repeatedly throughout the proof, then a list of local "facts" may be declared and proven at the beginning of your proof / block, which the compiler can automatically use to discharge such goals.

Mario Carneiro (Jan 05 2023 at 23:28):

that would be nice, but I think it is mostly wishful thinking to think that those goals would be silently created and/or discharged without interrupting the "flow"

Joseph Myers (Jan 06 2023 at 01:14):

A whole series of real functions happen to have junk values defined to be equal to the real part of the (principal value of the) corresponding complex function at that argument; that applies to sqrt, log, arcsin, arccos, arctan, rpow at least (though only rpow has that as the definition, and we don't have the complex inverse trigonometric functions defined at all). It seems those junk values often work well together to reduce the number of times hypotheses are needed that arguments are in range.

Niels Voss (Jan 06 2023 at 01:14):

Doesn't typing suffices h : 0 <= x basically just keep the 0 <= x goal out of the way and force you to prove it at the end instead of immediately? I know no one uses it that way but I don't really think that Winston Yin's suggestion is infeasible with our current tactics. Discharging these goals is probably the hard part. It could perhaps be some new syntax, like two underscores or something, that you use instead of a proof to defer the proof until the end.
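A small Lean 4 sketch of the suffices pattern Niels describes, assuming Mathlib's Real.sqrt and the lemma Real.mul_self_sqrt: the side condition is pushed to the end of the proof instead of being discharged where it first arises.

import Mathlib

example (x : ℝ) (hx : 0 ≤ x) : Real.sqrt x * Real.sqrt x = x := by
  suffices h : 0 ≤ x by exact Real.mul_self_sqrt h
  -- the deferred side condition is all that remains
  exact hx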

Winston Yin (尹維晨) (Jan 06 2023 at 05:29):

A whole series of real functions happen to have junk values defined to be equal to the real part of the ...

Then I am happy to stop calling them junk and call them unconventional instead. Seems to me they're no different than "0! = 1" or "heaviside(0) = 1/2" in spirit. From the standpoint of popularising formalisation towards mathematicians, this (philosophical) point about partial functions would have to be made clear from the beginning, probably somewhere on the path of de-conversion from set theory. Then I'll just accept that "any practical implementation of type theory really doesn't like partial functions".

Johan Commelin (Jan 06 2023 at 05:32):

@Niels Voss You would still be typing more lines of proof than you have to do now. Which I would rather not.

Niels Voss (Jan 06 2023 at 05:35):

I guess I agree with that. Also, having read all the replies to this thread, I agree now that having 1 - 2 be 0 is probably the best that we can do in practice, though I agree with Winston Yin that this should be documented somewhere.

Winston Yin (尹維晨) (Jan 06 2023 at 05:39):

My point is that it should not only be documented on nat.sub, but in any introduction to computer formalisation of maths. For me that was TPiL, but it was not clear to me then how much Lean doesn't like partial functions and subtypes.

Reid Barton (Jan 06 2023 at 08:11):

Johan Commelin said:

Niels Voss You would still be typing more lines of proof than you have to do now. Which I would rather not.

I think this is actually false

Johan Commelin (Jan 06 2023 at 08:12):

@Reid Barton Would you bundle the side conditions? Otherwise I don't see how you would save on lines.

Johan Commelin (Jan 06 2023 at 08:12):

Unless you have some really smart automation going on.

Reid Barton (Jan 06 2023 at 08:14):

You basically always need to know whatever facts guarantee that what you wrote down was "well-defined" (else, why not just write 37 instead?) and if you make those proofs arguments to the partial functions, then they are available for later automation

Johan Commelin (Jan 06 2023 at 08:18):

But that can also be done with our current strategy, right?

Johan Commelin (Jan 06 2023 at 08:18):

You add have aux1 : side condition to your context, and voila, it is available to automation.

Reid Barton (Jan 06 2023 at 08:19):

only sufficiently local automation

Reid Barton (Jan 06 2023 at 08:19):

Anyways, I'm fairly convinced that the mathlib wisdom on this is just wrong but I don't think it will change.

Johan Commelin (Jan 06 2023 at 08:20):

Are there examples of libraries (preferably with a DTT foundation) that take a different route?

Johan Commelin (Jan 06 2023 at 08:21):

Also, if you want to pass the side condition to the partial function (which isn't what Niels was suggesting in the post I replied to) then how would you combine that with readable notation?

Patrick Johnson (Jan 06 2023 at 08:34):

Gabriel Ebner said:

And even then they're merely unspecified, you can still prove all kinds of theorems about them. Like (a / b)^2 + 1 > 0.

That only applies to type theory. In set theory, a function is a set of ordered pairs. If called with an argument outside of the domain, the result can be any set (choice-based set parametrized with the function and the argument), which is not necessarily in the codomain. So, you couldn't prove (a / b)^2 + 1 > 0 if a or b cannot be proved to be real numbers for example.

Patrick Johnson (Jan 06 2023 at 08:36):

Reid Barton said:

You basically always need to know whatever facts guarantee that what you wrote down was "well-defined" (else, why not just write 37 instead?) and if you make those proofs arguments to the partial functions, then they are available for later automation

Making more functions take proofs as arguments will lead to rw throwing motive not type correct all the time.

Patrick Johnson (Jan 06 2023 at 08:43):

I think the idea for having an "unknown" junk value (or an implicit junk value as a corollary of using classical.epsilon) rather than an explicit default value is to prevent us from proving nonsense theorems (such as docs#nat.div_zero), simply because they are counter-intuitive to most mathematicians, and generally a bad advertisement for Lean. Instead, the right path would be to invent a better way of dealing with those special conditions.

Gabriel Ebner (Jan 06 2023 at 18:31):

That only applies to type theory.

Obviously we're only talking about Lean('s foundations) here. There are other foundations where partiality is even "better" handled, and you can't even prove a / b = a / b (which is a junk theorem provable in set theory).

Patrick Johnson (Jan 06 2023 at 23:55):

Do you have an example of such foundations? I'm interested to learn more about them.

Patrick Johnson (Jan 06 2023 at 23:56):

BTW, I think a / b = a / b would be the expected result no matter what a and b are. Most mathematicians would agree that 1 / 0 = 1 / 0, even though 1 / 0 may not be a real number, but it is definitely "something", and any object is equal to itself. (Unless equality itself is a partial relation?)

Tyler Josephson ⚛️ (Jan 07 2023 at 00:20):

Mario Carneiro said:

that would be nice, but I think it is mostly wishful thinking to think that those goals would be silently created and/or discharged without interrupting the "flow"

This reminds me of what I see WolframAlpha doing sometimes. For example, https://www.wolframalpha.com/input?i=a+x+%5E+2+%2B+b+x+%2B+c+%3D+0+solve+for+x. The answer depends on whether or not a and b are 0, and I didn’t specify. But I still get an answer — in fact, I get a few answers, along with the additional hypotheses needed to get to each one.

Trebor Huang (Jan 07 2023 at 05:02):

We have this thing called partial equivalence relations, and it is very useful in proving type theoretic theorems. Also, some constructive foundations use this (a type equipped with a PER) when you don't have good quotients and subtypes.

Yury G. Kudryashov (Jan 07 2023 at 08:28):

We have docs#divp, docs#has_deriv_at, and docs#nnreal.sqrt for people who want to avoid junk values.

Trebor Huang (Jan 17 2023 at 18:24):

After some research I have come to the conclusion: every proof assistant that uses classical logic and formalizes real numbers has x/0=0.

Trebor Huang (Jan 17 2023 at 18:25):

I am very happy to be proved wrong but I just haven't come across any. For constructive people, this is not an option because you can't decide whether a real number is zero, so they have to find another way.

Trebor Huang (Jan 17 2023 at 18:26):

Also, this is not restricted to type theory. Mizar does that too.

Trebor Huang (Jan 17 2023 at 18:28):

Metamath defines x/0 to be the empty set instead of 0, but that also counts as a junk value.

Damiano Testa (Jan 17 2023 at 18:28):

I would count any value for x/0 as a junk value...

Trebor Huang (Jan 17 2023 at 18:29):

No, if division returns Option Real and x/0 = None then probably not

Trebor Huang (Jan 17 2023 at 18:30):

Or if it is outright undefined, like if you have a proof obligation before you use division.

Junyan Xu (Jan 17 2023 at 18:30):

Can you add a real number to the empty set in Metamath? I think they use complex numbers?

Damiano Testa (Jan 17 2023 at 18:30):

In that case, before starting on formalizing math, I would have considered all values as junk values! :upside_down:

Trebor Huang (Jan 17 2023 at 18:31):

Yes, you can add anything. Nonsense additions return the empty set.

Junyan Xu (Jan 17 2023 at 18:31):

Yeah so maybe the lesson is that junk values occur much more often in set theory than in type theory.

Kyle Miller (Jan 17 2023 at 18:47):

There's a paper defining "meadows," which are rings with a total inverse-like function. Fields extend to meadows if you have x/0 = 0.

There's also something called a von Neumann regular ring -- I guess I brought these up before in this context. In a von Neumann regular ring, for every x there exists a y such that x = xyx, a sort of weak inverse. If you say you want an involutive function f : R → R that chooses a weak inverse for each element of R, then it must be the case that f(0) = 0.
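For the last claim, a one-line check under those assumptions (f involutive and x = x f(x) x for all x):

f(0) = f(0) · f(f(0)) · f(0) = f(0) · 0 · f(0) = 0.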

Johan Commelin (Jan 17 2023 at 18:49):

I think that from an algebraic point of view, there are plenty of good reasons for x/0 = 0. But topologically it is of course garbáge.

Patrick Johnson (Jan 17 2023 at 19:46):

Metamath defines x/0 to be the empty set instead of 0, but that also counts as a junk value.
Nonsense additions return the empty set.

In classical set theory (this also applies to Mizar), it would be much better if they defined addition and division of real numbers using Hilbert's epsilon, so that the result of x / 0 can be any set that could possibly exist, not necessarily a real number. Then create an appropriate API for division, and all lemmas that use a division a / b would have an assumption b ≠ 0. From an implementation point of view, this additional assumption can be proved automatically from the local context in most cases (just like Lean's type class resolution).

Patrick Johnson (Jan 17 2023 at 19:49):

Unfortunately, metamath lacks automation and mizar is no longer actively developed (and more importantly not open source). I'm wondering what could be the reason there are no good set-theoretic theorem provers out there.

Kevin Buzzard (Jan 17 2023 at 20:30):

Metamath must have something going for it, if Mario can prove the prime number theorem in it.

Damiano Testa (Jan 17 2023 at 20:40):

That thing is probably Mario...

Eric Wieser (Jan 18 2023 at 02:16):

Reid Barton said:

You basically always need to know whatever facts guarantee that what you wrote down was "well-defined" (else, why not just write 37 instead?) and if you make those proofs arguments to the partial functions, then they are available for later automation

A good example of where this approach is useful is docs#finset.cons / docs#finset.disj_union vs docs#finset.has_insert / docs#finset.has_union. When working with sums, the first pair has an obvious lemma with no side conditions.

Eric Wieser (Jan 18 2023 at 02:18):

One downside of this approach is that you often need two versions of every lemma, one with the free hypothesis variables on the RHS, and one with them on the LHS; docs#finset.singleton_disj_union doesn't automatically populate the side condition when rewriting backwards.

Yury G. Kudryashov (Jan 18 2023 at 02:21):

For division we have /ₚ. It requires the denominator to be a unit.

Mario Carneiro (Jan 18 2023 at 06:47):

Trebor Huang said:

After some research I have come to the conclusion: Every proof assistant that uses classical logic and formalizes real numbers have x/0=0.

Metamath does not define the division function at zero. The function properly has the type ℂ × (ℂ ∖ {0}) → ℂ. It is true that if you evaluate ( x / 0 ) you get the empty set but that's because this is what happens when you use the "function value" operator on a ZFC function out of domain, not because division was defined that way.

Mario Carneiro (Jan 18 2023 at 06:56):

Even more interestingly, there is a new metamath database iset.mm (for intuitionistic set theory) which has gotten a lot of recent work done on it thanks to Jim Kingdon, and while it tries to follow set.mm where possible you just flat out cannot prove that the "function value" operation makes any sense without an assumption that the input is in the function's domain. It gets as far as the real numbers, and division again has the same definition (well, I think you need the denominator to be apart from zero but w/e), but you definitely don't have x/0 = 0 there. (Although, you did head this off by qualifying "classical logic" so I suppose that doesn't count against your claim.)

Mario Carneiro (Jan 18 2023 at 07:12):

Patrick Johnson said:

Unfortunately, metamath lacks automation and mizar is no longer actively developed (and more importantly not open source). I'm wondering what could be the reason there are no good set-theoretic theorem provers out there.

Just in case it wasn't obvious, this is basically asking "why aren't there more theorem provers with 30+ person-years of effort put into them". That doesn't come cheap, and as a result you will see a variety of development-specific peculiarities in the ones that exist. Mizar is quite honestly the best contender in that space, it has had many years of effort put into it, but it started out in a completely different era and it has a lot of baggage from that time period. I will also of course shill my MM0 system as a sort of hybrid of metamath with automation, although (like metamath) it is not explicitly set theory based so much as FOL based.

Patrick Johnson (Jan 18 2023 at 07:37):

Mario Carneiro said:

Just in case it wasn't obvious, this is basically asking "why aren't there more theorem provers with 30+ person-years of effort put into them".

I'm not talking about the math library, I'm talking about design choices of the verifier: foundations, implementation decisions, term/tactic mode proofs, and similar concepts. Once conceptually designed, a single person can implement it in a very short period of time and start building a math library.

Patrick Johnson (Jan 18 2023 at 07:44):

Last year I started designing and implementing a purely set-theoretic theorem prover. Two of my friends are working with me and we plan to finish the verifier near the end of this year. Then we will build a simple math library and prove basic things from number theory. I'm pretty sure this won't take 30+ person-years of effort.

Mario Carneiro (Jan 18 2023 at 08:03):

sure, but will it be "good"? That's a really load-bearing word. It takes years of effort for a theorem prover to become "good"

Mario Carneiro (Jan 18 2023 at 08:04):

There are plenty of theorem provers out there with every possible design decision if you don't put that word in

Trebor Huang (Jan 18 2023 at 08:08):

Set theory based provers probably have different aesthetics, so what counts as a junk value would be different. Judging from a type-theoretic perspective, returning the empty set definitely counts as a junk value, but you could argue otherwise from the set-theoretic view.

Mario Carneiro (Jan 18 2023 at 08:11):

In any FOL system, function symbols are fundamentally required to denote something. It's no different from a type theory system in which there is only one or only a few types

Mario Carneiro (Jan 18 2023 at 08:11):

the best you can do is control what theorems are provable about that something

Mario Carneiro (Jan 18 2023 at 08:13):

Personally, I'm in the camp of embracing "junk values" and making them as useful as possible though, so I'm not really motivated to come up with even more tightly isolated junk values, because it never works and only causes pain in my experience

Mario Carneiro (Jan 18 2023 at 08:15):

So if I were making a theorem prover I would set x / 0 = 0 like lean does

Trebor Huang (Jan 18 2023 at 08:15):

In first order logic + set theory, the empty set looks very much like a global canonical junk, so I'm fine with that. Type theory doesn't give you that (at least not in Lean, I think some type theories not using the CH isomorphism introduce a global undefined value for every type).

Mario Carneiro (Jan 18 2023 at 08:15):

Inhabited basically gives you canonical junk

Patrick Johnson (Jan 18 2023 at 18:07):

Mario Carneiro said:

So if I were making a theorem prover I would set x / 0 = 0 like lean does

It highly depends on what the goal of a theorem prover is supposed to be. MM0 has an objective task it's trying to accomplish, so it makes sense to have x / 0 = 0 as a theorem if it helps reach the final task. However, if we want to write a general-purpose theorem prover to formalize actual mathematics, I see the undefinedness of x / 0 as a challenge for the theorem prover inventors, rather than a nuisance we want to avoid at all costs. Mathematicians can handle that on paper without any trouble, so a good theorem prover should have the ability to smoothly translate mathematician's intuition to a formal proof, not to artificially force mathematicians to change their intuition because of the implementational limitations.

Mario Carneiro (Jan 18 2023 at 18:42):

This is most likely an irreconcilable difference of opinion, but my angle as a logician and formalist is to find the optimal way to communicate mathematics to a computer, not just to do paper mathematics in the computer and faithfully represent all the designed-for-humans tricks used in paper presentations of mathematics. The latter is a goal for some people, and systems based on controlled natural language very clearly show it, but I am looking at the longer term, where we eventually realize we don't need to follow those old habits anymore. It is akin to the evolution of programming languages: many of the old programming languages were very CNL inspired, but modern programming languages have diverged somewhat from that into simpler grammars with a greater emphasis on symbols instead of words, making the most of a restricted lexicon to make things easy for both the human and the computer.

As it relates to undefinedness specifically, I find that the best way to render the mathematical practice of having "unmentionables" is to use garbage values and then just... not talk about them. Or use them to reduce hypotheses in theorems because more hypotheses = more work.

Trebor Huang (Jan 18 2023 at 19:34):

Patrick Johnson said:

A good theorem prover should have the ability to smoothly translate mathematician's intuition to a formal proof, not to artificially force mathematicians to change their intuition because of the implementational limitations.

Note that this might be alternatively phrased as

A good theorem prover should never produce new mathematical viewpoints or ideas, and it should have the ability to transfer mathematician's old ideas as faithfully as possible.

Jireh Loreaux (Jan 18 2023 at 19:47):

I don't think I agree with this rephrasing. But in any case, while there are cases where garbage values really are garbage, in many instances I think that appropriately chosen junk values can actually become "not junk". For instance, I consider the theorem

example {G : Type*} [group_with_zero G] (x : G) : x⁻¹⁻¹ = x := sorry

a feature, not a bug or a misrepresentation of the mathematical content. Sure, it conflicts with the way mathematical content is currently presented, but I have pretty much decided that I prefer it this way and I would hope eventually the mathematical community could accept the more useful (i.e., ones that make a bunch of theorems have weaker hypotheses) "junk values".

Oh, and if you don't want garbage values, there's always docs#pfun.

Kevin Buzzard (Jan 18 2023 at 20:14):

I would say that mathematicians do not handle division by zero on paper without any trouble -- they simply don't divide by zero.

Arien Malec (Jan 18 2023 at 21:28):

Physicists, on the other hand…

Patrick Johnson (Jan 18 2023 at 21:53):

I would say that mathematicians do not handle division by zero on paper without any trouble -- they simply don't divide by zero.

By "Mathematicians can handle that on paper" I mean handling of the additional assumptions of the form x ≠ 0

Patrick Johnson (Jan 18 2023 at 21:55):

That is, they don't need to interrupt the proof to show that the division in some particular expression is well-defined. It is either obvious from the context, or in case it's not, they give a small comment explaining why the denominator can't be zero there.

Patrick Johnson (Jan 18 2023 at 22:01):

If one day mathematicians realize they want division by zero to be 0 and change the standard convention, we should accept that. My point is not whether that definition would be useful in practice or not, but that a good theorem prover should let the user easily work with truly undefined results if the user wants them in definitions for whatever reason.

Kevin Buzzard (Jan 18 2023 at 22:04):

Yes, mathematicians are good at knowing that xy is obviously nonzero if x and y are nonzero, or that sqrt(x^2+1) is obviously non-zero etc (here x and y are reals). But this sounds like a hard problem for automation.

Eric Rodriguez (Jan 18 2023 at 22:10):

Patrick Johnson said:

If one day mathematicians realize they want division by zero to be 0 and change the standard convention, we should accept that. My point is not whether that definition would be useful in practice or not, but that a good theorem prover should let the user easily work with truly undefined results if the user wants them in definitions for whatever reason.

These people are welcome to make their division with pfun but I'm happy to stick with what's easiest. I don't particularly believe in building scaffolding so that people can fall off from higher up

Reid Barton (Jan 18 2023 at 22:15):

You can also use this as an argument against type systems.

Reid Barton (Jan 18 2023 at 22:26):

Kevin Buzzard said:

that xy is obviously nonzero if x and y are nonzero, or that sqrt(x^2+1) is obviously non-zero etc

And it's not just this but also: the cardinality of a nonempty (and also finite, obviously) set is obviously non-zero, and the set is also nonempty for some obvious reason...

Yury G. Kudryashov (Jan 19 2023 at 01:15):

If a mathematician wants to talk about division (as a partial function) in Lean, they can use docs#divp with units.mk0 here and there.

Niels Voss (Jan 19 2023 at 04:27):

Trebor Huang said:
Note that this might be alternatively phrased as

A good theorem prover should never produce new mathematical viewpoints or ideas, and it should have the ability to transfer mathematician's old ideas as faithfully as possible.

I don't really know if I agree with this rephrasing, because it is possible that at some point in the future, the problem of partial functions will be solved by an innovation in mathematics. I read somewhere that to formalize the many different types of limits in Lean we had to use filters, which in my opinion is an example of something difficult to formalize being formalized by coming up with new mathematics. You can call me overly optimistic, but in other words it's possible that at some point in the future we will encounter a way to express partial functions in a natural way consistent with pen-and-paper mathematics but friendly to formalization (kind of like how the Curry-Howard isomorphism is quite different from what mathematicians are used to but ultimately is consistent with mathematicians' basic rules). My point is not that this is particularly likely to happen, but rather that looking for ways to formalize difficult-to-formalize ideas in a natural way can actually be somewhat insightful, rather than just a matter of clinging to obsolete ideas from pen-and-paper mathematics.

However, as far as Lean is concerned, I think n / 0 = 0 is the best choice simply because it makes a lot of things easier to work with by reducing the number of hypotheses, and isn't as different from pen-and-paper mathematics as I had originally thought (and, as other people have mentioned, junk values can have a lot of desirable properties). The only place where I am a bit wary of this would be when programming, since I would normally expect division by zero to crash the program (which can potentially avoid many worse things such as data corruption or security vulnerabilities). Hopefully with the ability to do logical verification this won't be that big of a problem.

Trebor Huang (Jan 19 2023 at 04:40):

What is the essential difference between introducing filters (for basic analysis) and introducing x/0=0? One is more aesthetically grounded than the other? Apart from that these two are both forcing something mathematicians don't conventionally use, because they make formalization easier.

Trebor Huang (Jan 19 2023 at 04:41):

Also, in Lean we have that lst[n]! crashes the program, but is definitionally equal to some junk value. Do we do that for division? (Not for real number, for the more computable stuff)

Niels Voss (Jan 19 2023 at 04:48):

I can't really argue much about the filters because I actually don't know much about them, I just happened to briefly read about them somewhere. Division could be designed to crash when dividing by zero, but I think that actually might prevent you from being able to prove that your program doesn't crash. Right now, you pretty much know that if you don't use either panic or an operation ending with a ! then your program probably won't crash. Maybe a separate /! operator that crashes could be introduced.
Now that I think about it more, I actually don't know a good way to resolve the programming aspect of division by zero, aside from using divp, which as was discussed earlier, would be more difficult to use.

Niels Voss (Jan 19 2023 at 04:55):

Now that I think about it, something like divp (or a divp? that returns an option) might not actually be that bad of an idea while programming, because it helps prevent certain types of errors when implementing algorithms. n / 0 = 0 is still probably better for doing mathematics, though.

Patrick Johnson (Apr 13 2023 at 13:23):

Kevin Buzzard said in the blog post:

Mathematicians don’t divide by 0 and hence in practice they never notice the difference between real.div and mathematical division (for which 1/0 is undefined).

I've spent some time thinking about this. The purpose of an interactive theorem prover is not only to allow proving desirable results, but also to prevent proving undesirable results. Humans are prone to mistakes and the whole point of formalization is to be totally sure that our argument is correct. The quote from the blog post can be rephrased as: "Mathematicians never prove wrong theorems, so having axiom foo : false would not be a problem, because mathematicians don't use it." (supposing there is no print axioms of course). If a theorem depends on 1 / 0 = 0, it won't ever be considered a valid logical argument in any discussion.

Kevin Buzzard said:

My instinct is that we formalise mathematics in type theory precisely so we can avoid nonsense like this. In other words, these hacks solve a problem in set theory which doesn't need to be solved in type theory.

Type theory takes W-types for granted, but their existence had to be proved using set theory in order to convince mathematicians. Therefore, I won't call set-theoretic constructs nonsense. Once you create an API, it doesn't matter what underlying foundation you're using.

Eric Wieser (Apr 13 2023 at 13:29):

axiom foo : false is very different to lemma nat.one_div_zero : 1 / 0 = 0; the first makes your whole world fall apart, the second just means that / means something subtly different to what you expect it to mean

Eric Rodriguez (Apr 13 2023 at 13:38):

I guess the implied question is "is there some natural-looking theorem that is true only when 1/0 = 0 and is actually false for normal mathematicians" and I can't formulate an easy answer to that

Patrick Johnson (Apr 13 2023 at 13:39):

Eric Wieser said:

axiom foo : false is very different to lemma nat.one_div_zero : 1 / 0 = 0; the first makes your whole world fall apart, the second just means that / means something subtly different to what you expect it to mean

The example is just to show what proving an undesirable result means. The argument that mathematicians don't use 1 / 0, so we can define it to have any value, is a bad one, because if a theorem on paper (using actual division) is true only under the assumption 1 / 0 = 0, it won't be convincing to other mathematicians (even if it may be very hard to come up with such an example). If someone formalizes a big theorem in Lean, and the theorem statement uses division, I won't call that formalization convincing, because there is no way to check whether the proof relies on division by zero. Of course, it proved something, but possibly a subtly different statement to what mathematicians may think.

Riccardo Brasca (Apr 13 2023 at 13:45):

If your problem is just convincing mathematicians, I can guarantee that 1/0 = 0 is not a problem (I've explained this to several colleagues, and nobody is bothered, since they're only interested in division when the denominator is not zero). Of course one can cheat and use this to write statements that look like a big theorem but are not, but this is totally unavoidable I think. In any case one should check that division is defined as it should be, and similar things. You can have a look at Johan's talk about this (I don't have a link, but I am sure someone has one).

Riccardo Brasca (Apr 13 2023 at 13:47):

The point is that division is characterized by (a/b)*b=a if b ≠ 0. This is true for Lean's division and it means that our division is the "correct" one.

Kevin Buzzard (Apr 13 2023 at 13:48):

Mathematicians don't divide by zero but there's no getting away from the fact that theorem statements and definitions need to be checked manually. It's not just that I can divide by 0, I can write something which looks like the Riemann hypothesis but which isn't, either because I made a slip or because I did something malicious with notation or unicode or whatever. In the liquid tensor experiment Adam Topaz made a great effort to convince skeptics that the team had indeed proved what they'd claimed they'd proved.

Eric Wieser (Apr 13 2023 at 14:11):

Patrick Johnson said:

If someone formalizes a big theorem in Lean, and the theorem statement uses division, I won't call that formalization convincing, because there is no way to check whether the proof relies on division by zero.

Whether the proof relies on division by zero is irrelevant. All you care about is whether the statement divides by zero. Rationale: pretend the proof starts with "let us define a new operator lean_div which we shall write / that satisfies x / 0 = 0". Clearly the validity of a proof does not depend on which symbols we choose to notate it.

In general, "correctness" of definitions that don't appear in the statement are irrelevant to trusting the proof.

Kevin Buzzard (Apr 13 2023 at 14:28):

Patrick Johnson, your claim that if someone formalises a theorem whose statement mentions division then you won't believe it indicates the error in your understanding. If you can verify that the statement doesn't involve division by zero then it doesn't matter if the proof uses division by zero, because instead of dividing by zero I can just define a new function foo x y which is division if y isn't zero and is 0 if y=0, and of course this doesn't give you a contradiction. My point is that this has nothing to do with division -- for every statement which is claimed to be proved by a theorem prover, you cannot be convinced by it until you have verified yourself that all the definitions involved are what you think they are.

I will happily replace all occurrences of division in my statement by foo and openly claim that foo is division if the denominator isn't zero, and 0 otherwise, and this doesn't change anything. I can also define a function bar which takes two reals x and y and a proof that y isn't zero and returns x/y, and then claim a theorem involving bar with the additional hypothesis that all my denominators are nonzero, and from your objection this sounds like it will placate you, even though I'm just going to prove it by throwing away the hypotheses and using division anyway. I think you are failing to conflate the problem that lean division doesn't mean mathematical division with the problem that every single definition needs to be checked, whether it's division or not, if you want to be convinced of a lean proof.
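A hedged Lean 4 sketch of the foo Kevin describes (over ℚ rather than ℝ just to keep the if-then-else computable; foo is his name, the proof is only an illustration): it agrees with Lean's division everywhere, precisely because Lean already sets x / 0 = 0, so swapping one for the other in a statement changes nothing.

import Mathlib

def foo (x y : ℚ) : ℚ := if y = 0 then 0 else x / y

-- `foo` is provably equal to Lean's `/`
example (x y : ℚ) : foo x y = x / y := by
  by_cases h : y = 0 <;> simp [foo, h]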

Kevin Buzzard (Apr 13 2023 at 14:29):

(sorry, I'm restating things Eric already said, my internet is not ideal right now)

Eric Wieser (Apr 13 2023 at 14:32):

even though I'm just going to prove it by throwing away the hypotheses and using division anyway

Throwing it away can change the meaning of your statement if the hypothesis appears in an existential quantifier.

Patrick Johnson (Apr 13 2023 at 14:33):

Maybe I was not precise enough. What I meant by "the proof relies on division by zero" is that if the statement contains division by a bound variable, the proof would be "suspicious" if it relies on one_div_zero, because the statement may not hold for mathematical division (I'm just clearing up that sentence).

In general, even though divison by zero is probably the most notable example, my point is that changing standard definitions to make proofs easier, and then pretending they're "usual" definitions when proving claims about them may be considered a wrong approach.

Eric Wieser (Apr 13 2023 at 14:33):

Of course, if we're going to claim that division is confusing, 5 / 2 = 2 is surely far more upsetting to mathematicians than 5 / 0 = 0
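Both conventions side by side for natural-number division in Lean 4 (these are plain kernel computations):

example : (5 : Nat) / 2 = 2 := rfl
example : (5 : Nat) / 0 = 0 := rfl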

Eric Wieser (Apr 13 2023 at 14:36):

if the statement contains division by a bound variable

Here's an example of the type of statement that is cause for concern:

import data.rat.basic
import tactic.norm_num

-- this is probably rejected as false/meaningless by most mathematicians, but Lean says it is true
example : ∃ x : ℚ, 2 / x = 3 / x := ⟨0, by simp⟩

-- adding a `x ≠ 0` condition gives it the intended meaning
example : ¬ ∃ (x : ℚ) (hx : x ≠ 0), 2 / x = 3 / x :=
by { push_neg, intros x hx, norm_num [div_eq_div_iff, hx] }

-- or as Kevin suggests, where `/ₚ` consumes the proof `hx` itself
example : ¬ ∃ (x : ℚ) (hx : x ≠ 0), 2 /ₚ (units.mk0 x hx) = 3 /ₚ (units.mk0 x hx) :=
by { push_neg, intros x hx, norm_num [div_eq_div_iff, hx] }

But note that this is only a problem because these are bound by ∃. The outermost binders are always safe, as you can pretend there was an unused assumption that the divisor was non-zero

Patrick Massot (Apr 13 2023 at 14:38):

Patrick, weren't you meant to create your own proof assistant "fixing" all those "issues"? Why do you keep coming back again and again with the same complaints?

Martin Dvořák (Apr 13 2023 at 14:38):

My two cents:

I also used to struggle with understanding whether 1 / 0 = 0 could lead to unsound results. The blog post https://xenaproject.wordpress.com/2020/07/05/division-by-zero-in-type-theory-a-faq/ helped me a lot, but I had to sort it out in my head a bit further to really convince myself.

I agree that correctness of definitions (whatever it might possibly mean) is relevant only if it appears in a theorem statement or as an axiom that is reachable from the file. As for the axioms, it is easy; we don't add extra axioms. To make sure somebody really didn't add an inconsistent axiom, we can write #print axioms the_unbelievable_theorem and check that it contains only the standard axioms. So we are done with the axioms and we turn to the theorem statement. If someone adds an assumption (called a "hypothesis" in this community) inconsistent with 1 / 0 = 0 then the whole theorem is ruined, of course. However, I claim that such a mistake is easy to spot. How would such an assumption even look? The assumption cannot say "the value 1 / 0 is undefined" – this is not expressible in Lean. It would have to say something like 1 / 0 = 42 in order to allow us to prove false from it. However, such an assumption is immediately rejected by mathematicians, regardless of what they think about the division in Lean. Hence, probably the only assumption that might trick somebody into believing a deceitful theorem is ∀ x, 1 / x ≠ 0 or something like that. Again, why would the theorem statement even contain such a thing?
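To make the last point concrete: under Lean's convention such an assumption is already refutable, so any theorem assuming it is vacuous (a quick Lean 4 / Mathlib check; `simp` closes the side goal via the Mathlib lemma div_zero):

import Mathlib

example (h : ∀ x : ℚ, 1 / x ≠ 0) : False := h 0 (by simp)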

Eric Wieser (Apr 13 2023 at 14:41):

Martin, I'm curious why you think division by zero is a bigger trap than unexpected rounding

Martin Dvořák (Apr 13 2023 at 14:43):

Eric Wieser said:

Martin, I'm curious why you think division by zero is a bigger trap than unexpected rounding

Rounding down is familiar from "normal" programming languages. Having a valid output for dividing by zero is "unheard of".

Patrick Johnson (Apr 13 2023 at 14:44):

Patrick Massot said:

Patrick, weren't you meant to create your own proof assistant "fixing" all those "issues"? Why coming back again and again with always the same complaints?

They're not complaints. I'm currently busy with my final university semester (that's why the project is temporarily stalled), and just wanted to share my comment about the blog post. I'm happy that there are different theorem provers that handle those "issues" in different ways.

Johan Commelin (Apr 13 2023 at 14:53):

Martin Dvořák said:

Rounding down is familiar from "normal" programming languages. Having a valid output for dividing by zero is "unheard of".

Note that Eric was talking about mathematicians in his claim. Most of them don't know/care about programming languages or the attitudes that PLs take towards dividing by zero.

Johan Commelin (Apr 13 2023 at 14:55):

Eric Wieser said:

Throwing it away can change the meaning of your statement if the hypothesis appears in an existential quantifier.

Of course you can also use a universal quantifier: ∀ x, 2 / x ≠ 3 / x

Eric Wieser (Apr 13 2023 at 15:02):

What I meant to exclude is top-level universal quantifiers. If you forget to add x ≠ 0 there then your statement is just impossible to prove; If you forget in an existential, then your statement is true but looks like it should be false

Eric Wieser (Apr 13 2023 at 15:03):

I imagine there's some better terminology that captures precisely which quantifiers it matters in, but hopefully it should be obvious on any given example

Jireh Loreaux (Apr 13 2023 at 15:42):

I had what I thought was a decent reply to these sorts of questions in the condensed mathematics stream, but since probably most are not subscribed to that stream I'll link it here: https://leanprover.zulipchat.com/#narrow/stream/267928-condensed-mathematics/topic/Real.20measures/near/295058844

I guess I'll additionally argue the following. In general, mathematicians are not especially careful about how objects are defined (at least not at first blush), and yet they are comfortable switching between paradigms that arise often enough. For example, there are plenty of introductory algebra texts that say a ring has a 1, and a bunch more that don't have that requirement. Of course, mathematicians will be aware that rings in the former context satisfy the property that every proper ideal is contained in a maximal ideal, but this is false for rings in the latter context.

This also happens in higher level theory where things are much more subtle. For example, an absorbing extension of C⋆-algebras often means something different when the underlying C⋆-algebra is unital versus non-unital, but the non-unital definition also makes sense for unital algebras, it just doesn't give the right theorems; this caused at least one major error in the literature that had to be later corrected.

The point I'm trying to make is this: at least in Lean (or any formalization of mathematics) at least you can check (relatively easily?) that the definition corresponds with the one you intuited, or doesn't, as the case may be. Whereas in paper mathematics you have to check whether the author means absorbing in the unital sense or the non-unital sense, and then you have to trust that all the results they have based their work on also use the appropriate definition for the context. This is nigh on impossible to actually verify yourself.

Mario Carneiro (Apr 13 2023 at 15:47):

Eric Wieser said:

I imagine there's some better terminology that captures precisely which quantifiers it matters in, but hopefully it should be obvious on any given example

I think the right pattern is that you should require that the expression would still typecheck if it used a partial division operator. So ∀ x, 2 / x ≠ 3 / x is not good but ∀ x, x ≠ 0 -> 2 / x ≠ 3 / x is

Eric Wieser (Apr 13 2023 at 15:51):

My claim is that something like the true statement example : ∀ x, x / x * x = x is harmless intuitively even though it doesn't mention x ≠ 0.

Eric Wieser (Apr 13 2023 at 15:52):

Because when reading you can imagine that it did say x ≠ 0 and you just get a weaker statement than the one Lean had. The problems only arise when x ≠ 0 would make the statement stronger.

Jireh Loreaux (Apr 13 2023 at 15:54):

Right, I think the point you are making is this: (∀ x, Q x) → (∀ x, P x → Q x), whereas ∃ x, Q x does not imply ∃ x, P x ∧ Q x. Here P x is the predicate for the subtype (e.g., x ≠ 0) which, if present, would make Q x type check with the partial division, per Mario's comment.
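The forward implication here is a one-liner, while the existential direction has no analogous term because it genuinely needs a witness satisfying P (a quick Lean 4 sanity check):

example {α : Type} (P Q : α → Prop) (h : ∀ x, Q x) : ∀ x, P x → Q x :=
  fun x _ => h x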

Yaël Dillies (Apr 13 2023 at 17:01):

Jireh Loreaux said:

an absorbing extension of C⋆-algebras often means something different when the underlying C⋆-algebra is unital versus non-unital, but the non-unital definition also makes sense for unital algebras, it just doesn't give the right theorems

I encountered that in the Cambridge Part III Functional Analysis this term. I was absolutely confused how shamelessly we even made such a pair of overlapping definitions!

Jireh Loreaux (Apr 13 2023 at 17:17):

It actually makes a bit of sense if you think about it from a categorical perspective, but admittedly that's a bit hard to see when first encountering it. That is, (non-unital) absorbing extensions behave correctly in the category C⋆Alg, whereas (strongly unital) absorbing extensions behave correctly in the category C⋆Alg₁, for a suitable interpretation of "behave correctly". (C⋆Alg is the category of not necessarily unital C⋆-algebras with ⋆-homomorphisms and C⋆Alg₁ is the category of unital C⋆-algebras with unital ⋆-homomorphisms.)


Last updated: Dec 20 2023 at 11:08 UTC