Zulip Chat Archive

Stream: new members

Topic: How were the rules of the game decided?


Jay (Jun 27 2024 at 08:00):

Sorry to be a beginner querying the wisdom of the whole thing but for me it is difficult to get started when it seems like learning the arbitrary rules of another arbitrary game. I will try to explain:

I started a Lean 4 tutorial and it begins with introducing me to the successor function and I immediately hate it. Where does this function come from? On the face of it, it looks like somebody accepted a lot of people are taught how to count so they took that learning and named a function after it which goes to the next number from whichever number you are considering. It's introduced as if it's axiomatic somehow as I just have to accept it. What's worse is it then explained addition hasn't been defined and on my journey I then need to define addition in terms of the tools I have been given. So the first question is: Why? By what means am I to accept this?

To me, if you are going to just airdrop a successor function in then it seems no less sensible to start by airdropping addition in as a basic function and then stating the successor function is nothing more than a specific case of addition. In much the same way in geometry you might say a square is a special case of the more general rectangle or a circle is a special case of a more general ellipse.

So, if you are going to begin with a successor function why not begin with any other function? You can define the successor function as a specific case of the more general case of addition just as easily as you can define addition in terms of the successor function. So the successor function seems to me no less arbitrary a starting point than addition.

Why am I wrong?

Damiano Testa (Jun 27 2024 at 08:06):

First, I don't think that you can be wrong, since you expressed an opinion.

Second, you can choose to start where you want, but frequently in maths you try to start with the least possible amount of starting data and then try to build everything else from that.

Jay (Jun 27 2024 at 08:19):

Don't you think if that is the case then there has to be something fundamentally wrong with it? You are right, it is my opinion, and that opinion is that it needn't have been the way it is. It could have been some other way and it's plausible that within some other way is a more sensible approach to take. My point is that if I, just some random bozo, can begin with a valid criticism of how the rules of the game are defined then that undermines the whole thing.

For a mathematical basis shouldn't I expect there to be a far more austere and rigorous basis for the choice of given functions than "it's just somebody else's opinion"?

Eric Wieser (Jun 27 2024 at 08:25):

if you are going to just airdrop a successor function in

I think the key thing is that you're not "airdrop"ing a function, but a constructor; and the rule for constructors is that two constructor terms are only equal if they have identical arguments. So you can't take add as a primitive constructor, because add 1 2 and add 2 1 would not be equal

Damiano Testa (Jun 27 2024 at 08:26):

For me, as long as there is something that you cannot prove, whatever theory that you develop is meaningful. Whether it will be interesting or not is a question of personal taste.

Eric Wieser (Jun 27 2024 at 08:27):

You could play around with

inductive MyNotNat where
| zero  : MyNotNat
| one : MyNotNat
| add : MyNotNat -> MyNotNat -> MyNotNat

to try out what happens if you start with add instead of succ as your primitive
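
A rough sketch of the extra bookkeeping that entails (building on the MyNotNat above; the relation here is purely illustrative): since one and add zero one are now distinct values, which terms count as the same number has to be spelled out separately, e.g.

inductive SameNumber : MyNotNat → MyNotNat → Prop where
  | refl (a : MyNotNat) : SameNumber a a
  | zeroAdd (a : MyNotNat) : SameNumber (MyNotNat.add MyNotNat.zero a) a
  | addComm (a b : MyNotNat) : SameNumber (MyNotNat.add a b) (MyNotNat.add b a)

and then you would still have to take a quotient by that relation to get something that behaves like the naturals.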

Jay (Jun 27 2024 at 08:37):

OK Eric, but didn't you just invalidate what I wrote by "airdropping in" yet another arbitrary rule which we need not accept? We know that addition is commutative because we know that adding 3 to 2 isn't any different from adding 2 to 3 because you are adding 3 and 2 so the order you do it in makes no difference. Just as surely as if you were adding kilo weights into a bag you know that if you drop 5 kilo weights into an empty bag or you walk in 5 strides one metre in length it won't make any difference how you separate or group them. 5 kilo weights weigh 5 kilos whether you add 3 then 2, 4 then 1, or just drop all five in. The grouping doesn't matter. So we know without having to invent a rule outlawing it that 1 + 2 and 2 + 1 end up in the same place, don't we?

Jay (Jun 27 2024 at 08:39):

Say, as a programmer, you can define a function which adds 1 or you can define a function which adds anything from 0 to 99. One of those functions doesn't take precedence or priority or have some default special status over the other, does it?

Mauricio Collares (Jun 27 2024 at 08:41):

It seems like "the game" in your title refers to "mathematics", not "Lean". Yes, most mathematics axioms are, in a way, "airdropped in" to capture some intuitive property experts "know without having to invent a rule". But it turns our having a "minimal" set of axioms that still captures all the desirable properties is nice for various reasons (e.g., maybe one of the properties you "know without having to invent a rule" isn't actually necessary for proving a result you want, and therefore the same proof still goes through even in more general situations where the property doesn't hold -- for example in modular arithmetic, say).

Mauricio Collares (Jun 27 2024 at 08:45):

The succ example is heavily inspired by https://en.wikipedia.org/wiki/Peano_axioms, and the introduction talks a bit about how "deducing facts from a small set of axioms is a good idea" took people a long time to figure out.

Yakov Pechersky (Jun 27 2024 at 08:47):

I'm not sure that one can assert that we "know" that addition is commutative. Okay, let's say we do in this case. And even that multiplication is commutative. But what about multiplication of matrices? Where does "knowing" stop and proving begin?

Henrik Böving (Jun 27 2024 at 08:59):

Jay said:

Sorry to be a beginner querying the wisdom of the whole thing but for me it is difficult to get started when it seems like learning the arbitrary rules of another arbitrary game. I will try to explain:

I started a Lean 4 tutorial and it begins with introducing me to the successor function and I immediately hate it. Where does this function come from? On the face of it, it looks like somebody accepted a lot of people are taught how to count so they took that learning and named a function after it which goes to the next number from whichever number you are considering. It's introduced as if it's axiomatic somehow as I just have to accept it. What's worse is it then explained addition hasn't been defined and on my journey I then need to define addition in terms of the tools I have been given. So the first question is: Why? By what means am I to accept this?

To me, if you are going to just airdrop a successor function in then it seems no less sensible to start by airdropping addition in as a basic function and then stating the successor function is nothing more than a specific case of addition. In much the same way in geometry you might say a square is a special case of the more general rectangle or a circle is a special case of a more general ellipse.

So, if you are going to begin with a successor function why not begin with any other function? You can define the successor function as a specific case of the more general case of addition just as easily as you can define addition in terms of the successor function. So the successor function seems to me no less arbitrary a starting point than addition.

Why am I wrong?

Lean as a system allows you to define the mathematical objects you are working on in whichever way you like. If you want to go even more basic than natural numbers you could say that the rules for declaring and working with types are in fact the primitive axioms that Lean is based upon and the mathematicians then define things using those. However most people that use Lean are (rightfully) not interested in the inner workings of the type system that underlies Lean. They wish to build whichever mathematical objects they know on top of Lean, in the way they are used to from normal mathematics (+ things that might make them easier to work with in Lean but that's an advanced topic).

The definition of natural numbers that is being presented to you here is merely one that has been popular with mathematicians for a very long time (as already linked by Mauricio above) and thus the canonical representation of natural numbers in Lean.

Jay (Jun 27 2024 at 09:06):

Mauricio, yes, I might benefit from reading the Peano axioms and their explanation. I have no doubt my issue is coming at it from the perspective of a programmer. To me all these functions, whatever they are, make no sense before you have some generalised method by which you are able to define exactly what a function does. So, even a successor function calls for an applicable system which does what the successor function does in the same way to "add 1" you would need to describe the program which adds 1. You can't just have it out of the blue. Even adding 1 calls for a systematic process for adding 1. For this you need to define that systematic process and even the tools you would use to define it. So, a successor doesn't sound very fundamental to me because a lot has to happen before you do it. That's why it takes a while for children to learn how to count. We program such a successor function into children's minds which we then use for counting. That to me is why a successor function is no more axiomatic than addition. Simpler to define, yes, but not more fundamental. The fundament, for me, would be in defining the tools which enable the description of what the successor function does such that the function can be applied to any integer, just as we do when we count. Say.... you shift the rightmost symbol incrementally through an ordered list of symbols and after incrementing on the final symbol '9' you then increment the symbol to the left with the same rules and then reset the symbol to its right to zero then continue. Do you see what I mean? There is a level of abstraction beneath the successor function which we need to define it.... and that lower level can be just as well used to define addition or any other function. That's what the Lambda calculus does. So for me even to get a successor function you imply a whole box of machinery you need to use to define what it does. That is why for me the successor function cannot be foundational because you must have an even more foundational set of tools to describe the function.

Jay (Jun 27 2024 at 09:08):

Yakov, yes, I actually accept that and anticipated that criticism because it was obvious as I went along that in claiming that 3 + 2 = 2 + 3 I had to describe a proof (adding that number of kilo weights to an empty bag) to illustrate Why it is true.

Riccardo Brasca (Jun 27 2024 at 09:09):

Mathematically you have to start somewhere, with some set of axioms. Lean's axioms are based on type theory (plus other stuff), and in particular the successor function is essentially already "built-in in the system" (meaning that the existence of ℕ is automatic). This is not the only axiomatic system that can be used for mathematics: indeed mathematicians usually tend to think of set theory as the basic axiomatic system: here ℕ can be defined, and the successor function is also defined.

Jay (Jun 27 2024 at 09:10):

"The definition of natural numbers that is being presented to you here is merely one that has been popular with mathematicians for a very long time"

Sure ok, and does it allow me to open the box on the successor function to find out what's in it and how that function is defined?

Riccardo Brasca (Jun 27 2024 at 09:11):

The reason why Lean (and most of the proof assistants around) uses type theory is mainly because it is more convenient. But on the other hand we know that mathematically the two systems are equivalent, so we can do mathematics as we've always done.

Jay (Jun 27 2024 at 09:13):

"here  can be defined, and the successor function is also defined."

That's what I am getting at, or trying to. In Lean do I get access to how that successor function is defined and am I able to change it or define functions, like addition, too?

Henrik Böving (Jun 27 2024 at 09:14):

Jay said:

"The definition of natural numbers that is being presented to you here is merely one that has been popular with mathematicians for a very long time"

Sure ok, and does it allow me to open the box on the successor function to find out what's in it and how that function is defined?

The successor function is defined like this:

inductive Nat where
| zero : Nat
| succ (n : Nat) : Nat

If you want to know what this "means" (in the sense of how precisely inductive types internally work, obviously you can just accept this and keep going) you will have to learn the aforementioned underlying type theory of Lean as the notion of an inductive type is one of the core primitives of the type theory.
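
Once you have that, addition is not a further primitive; it can be defined by recursion on that shape (a standard definition, essentially what the tutorial is steering you towards; the name myAdd is only to avoid clashing with the built-in addition):

def myAdd : Nat → Nat → Nat
  | m, Nat.zero   => m
  | m, Nat.succ n => Nat.succ (myAdd m n)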

Riccardo Brasca (Jun 27 2024 at 09:17):

In practice what we are saying is: ℕ is a "set" that contains an element, that we call 0, and elements of the form succ 0, succ (succ 0) etc.

Riccardo Brasca (Jun 27 2024 at 09:17):

(and we know that they're all distinct, plus other things)

Henrik Böving (Jun 27 2024 at 09:17):

We have resources explaining the surface level behavior of inductive types in books like Theorem Proving in Lean 4 or Functional Programming in Lean 4. There is also a theory heavy explanation of them in #leantt but again this explanation is usually not of help to most people.

For a beginner the only recommendation I can make is to draw a border of abstraction at some point. Where you draw that border of abstraction is of course up to you but I would recommend to draw it at the surface level effects of the inductive command and not at its definition in terms of type theory.

Riccardo Brasca (Jun 27 2024 at 09:21):

Think like this: here are all the elements of ℕ (each line here is just a symbol, it doesn't mean anything):

  • 0
  • succ 0
  • succ (succ 0)
  • succ (succ (succ 0))
  • ...

The successor function just takes a line and gives the following one.

Riccardo Brasca (Jun 27 2024 at 09:23):

Then of course we need other axioms, for example to prove that the elements are all distinct. And Peano came up with a particularly nice and short list of axioms.

Riccardo Brasca (Jun 27 2024 at 09:28):

Concerning the fact that you want to know what succ does, I think it is because you have a programmer background. Mathematically a function doesn't do anything, it's not something we can evaluate in practice. The identity function (very easy to evaluate) is as good as the function that gives the n-th digit of Chaitin's constant (which cannot be computed).
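
To illustrate that point in Lean (an artificial example): the following function is perfectly well defined, yet it is marked noncomputable because there is in general no procedure for deciding an arbitrary proposition:

open Classical in
noncomputable def f (p : Prop) : Nat :=
  if p then 0 else 1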

Riccardo Brasca (Jun 27 2024 at 09:28):

Note that I am not saying the mathematical point of view is better, it is just different.

Jay (Jun 27 2024 at 09:32):

I think it is different and explains some of the problem. Especially when it comes to getting off the ground with a successor function. I want to know: how is it defined, how does it work, why did I get that and not something else more useful, and why, now I have it, am I being forced to use it?

Riccardo Brasca (Jun 27 2024 at 09:34):

The point is that you have to start somewhere, right?

Riccardo Brasca (Jun 27 2024 at 09:35):

I mean, you could ask "what is ℕ?"

Riccardo Brasca (Jun 27 2024 at 09:37):

At some point you have to stop and accept something as "primitive" (i.e. an axiom). Mathematicians like to choose as few axioms as possible, and Peano's axioms are very good at that. There is nothing intrinsically wrong in having axioms for addition, but the list would be longer (commutativity, associativity, 0 + n = n, probably some version of induction).

Giacomo Stevanato (Jun 27 2024 at 09:38):

Note that succ is not "just a function", it's a constructor. It doesn't compute anything, rather it constructs some data. You are defining how a natural number is "built up", not what operations you can do on one.

Jay (Jun 27 2024 at 09:38):

Yes, I could. Where I am starting is with a toolkit by which the /function/ of functions can be described. In computer science that would be something like the Lambda calculus in which functions, such as the successor function, can be described. So, for me, the successor function is too high a layer of abstraction to start on. By that point assumptions have been made and the mechanism which defines what it does needs to be defined.

Eduardo Ochs (Jun 27 2024 at 09:38):

What about 1/0 - 1/0? It is obvious that 1/0 is a number, and it is obvious that 1/0 - 1/0 should be 0, right? Or do things become less obvious when they become more complex? Is there a clear boundary between "less obvious but still right" and "less obvious and wrong"?

Jay (Jun 27 2024 at 09:40):

Giaco, but succ is an operation which you apply to natural numbers, isn't it? 2 = succ (succ 0)?

Riccardo Brasca (Jun 27 2024 at 09:41):

What is 2?

Giacomo Stevanato (Jun 27 2024 at 09:41):

You are starting in a blank space, 2 doesn't exist. Rather you can define it as an alias for succ (succ zero)
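
In Lean the numeral 2 and that constructor term really are the same value by definition (a quick check):

def two : Nat := Nat.succ (Nat.succ Nat.zero)

example : two = 2 := rfl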

Jay (Jun 27 2024 at 09:42):

Sure.... the digits have no meaning until you assign meaning to them. But, even without them having meaning you can still define addition in terms of an operation which acts upon a set of symbols, those symbols being the digits of the numeric base you are using. You can define the function on a set of symbols without having to assign meaning to those symbols first.

Jay (Jun 27 2024 at 09:43):

And yes, the digits don't exist until you create them too.

Jay (Jun 27 2024 at 09:45):

Sure Giacomo, but in order to have succ 0 you need a concept of what is its successor before you can have that function produce it, don't you?

Riccardo Brasca (Jun 27 2024 at 09:47):

Where would you start?

Giacomo Stevanato (Jun 27 2024 at 09:49):

succ doesn't produce the successor, you are defining the successor of a number with succ. succ 0 does not "return" some value, it is itself the value. You then define everything else in terms of succ and zero, not before.

Jay (Jun 27 2024 at 09:49):

Riccardo.... you know what? I don't know. I think that question is even more fundamental than mathematics. You can easily respond with "Well, wherever you start, why start with maths at all? Why not start with something else altogether?". Why invent anything at all, especially given that you probably can't without making an unnecessary assumption.

Jay (Jun 27 2024 at 09:51):

Giacomo, sure, but to do that you must have some concept of its successor, mustn't you?

Riccardo Brasca (Jun 27 2024 at 09:51):

The point is not really where to start, but the fact that you need a starting point. Otherwise it is like keeping asking "why? why? why?" forever. You are free to choose the rules you want before playing the game, but you need rules to play a game.

Henrik Böving (Jun 27 2024 at 09:52):

Jay said:

Yes, I could. Where I am starting is with a toolkit by which the /function/ of functions can be described. In computer science that would be something like the Lambda calculus in which functions, such as the successor function, can be described. So, for me, the successor function is too high a layer of abstraction to start on. By that point assumptions have been made and the mechanism which defines what it does needs to be defined.

If you want this read #leantt, but if you are not good with type theory yet this won't be helpful for you

Eduardo Ochs (Jun 27 2024 at 09:53):

For me the idea that "the elements of each inductive set are terms of a certain form" sounded not only weird but also very hard to define precisely...

Henrik Böving (Jun 27 2024 at 09:55):

It's not really; by its very notion an inductive set is the least set that contains all the elements defined by its inductive rules. If you now put all of these rules into Lean and say that they are of the type that you are defining you get quite literally the least type whose elements are constructed from the inductive rules.

Giacomo Stevanato (Jun 27 2024 at 09:56):

Jay said:

Giacomo, sure, but to do that you must have some concept of its successor, mustn't you?

Well sure in your mind you must have such a concept. But in this representation of natural numbers you are identifying a natural number as being either zero or the successor of another natural number.

Note that there could be other ways to identify natural numbers (for example zero/succ corresponds to a unary representation, but you can go binary/n-ary), though you probably don't want to use e.g. addition, because then you have multiple ways to represent the same concept (zero? Or add zero zero? They are definitionally different but for you they are probably the same number, right?) and that complicates things a lot.
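
A rough sketch of what a binary-style alternative could look like (purely illustrative, not how Lean's Nat is defined), which already shows a redundancy problem of its own, since bit0 zero would be another spelling of zero:

inductive Bin where
  | zero : Bin
  | bit0 : Bin → Bin   -- meant as 2 * n
  | bit1 : Bin → Bin   -- meant as 2 * n + 1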

Eduardo Ochs (Jun 27 2024 at 09:57):

@Jay, take a look at "A Modern Perspective on Type Theory - From its Origins until Today", by Fairouz Kamareddine...

Henrik Böving (Jun 27 2024 at 09:57):

Henrik Böving said:

Jay said:

Yes, I could. Where I am starting is with a toolkit by which the /function/ of functions can be described. In computer science that would be something like the Lambda calculus in which functions, such as the successor function, can be described. So, for me, the successor function is too high a layer of abstraction to start on. By that point assumptions have been made and the mechanism which defines what it does needs to be defined.

If you want this read #leantt, but if you are not good with type theory yet this won't be helpful for you

I can additionally offer you the practical computer science point of view, which is what these things actually look like in memory if you are working with them. But I don't think that is of much help to fix this confusion?

Giacomo Stevanato (Jun 27 2024 at 10:04):

Another alternative viewpoint that might seem less arbitrary: with zero/succ you are defining a number as how many times you have zero wrapped in a succ.

llllvvuu (Jun 27 2024 at 10:26):

Jay said:

Say, as a programmer, you can define a function which adds 1 or you can define a function which adds anything from 0 to 99. One of those functions doesn't take precedence or priority or have some default special status over the other, does it?

This isn't the right comparison.

Binary numbers:Unary numbers::Binary addition:Unary addition.

The representation of a number exists on a different and more fundamental level than operations on numbers.

You cannot define binary addition before defining binary numbers. And defining binary numbers does not give you addition for free, you have to make an ALU, which is quite involved. And the fact that addition in the ALU is commutative is not obvious at all, this requires formal verification.

Just as all datatypes are represented as lists of 0s and 1s in memory, all datatypes are represented as trees of constructors in Lean's type theory. succ is just the unary number version of a bit. It is not comparable to addition. Rather it is the subject of addition.

You can make the Natural Number Game using BinNat instead of Nat if it puts you more at ease. It is more efficient but harder to prove theorems about.

llllvvuu (Jun 27 2024 at 10:31):

FWIW axiomatization is a thing in Lean. This is achieved with typeclasses. For example Nat is a Semiring. Semirings are defined in terms of + and *. No succ, like you wanted. However, "semiring" does not define Nat (e.g. there are semirings other than Nat).
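
For example (assuming Mathlib, which provides the Semiring class and mul_zero), a fact can be stated once at the level of the semiring axioms and then specialised to Nat without ever mentioning succ:

import Mathlib

example {R : Type*} [Semiring R] (a b : R) : (a + b) * 0 = 0 := mul_zero _

example (n m : Nat) : (n + m) * 0 = 0 := mul_zero _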

Yakov Pechersky (Jun 27 2024 at 10:47):

On the topic of rule invention, axiomatization, and where natural numbers come from, if one defines a Semiring (a collection of axioms) and the axioms of a category (there are objects and morphisms between them), then one can show that the naturals are the initial object in that category.

Yakov Pechersky (Jun 27 2024 at 11:24):

I want to add one more complication. @Jay , you linked succ with "adding one". But why should we call that addition? I could just as well call + taking the minimum of two numbers, and the operation * that takes two arguments a and b such that a * zero = zero * a = a and a * succ b = succ (a * b). This is also a Semiring (if our numbers have positive infinity in them too)

Jay (Jun 27 2024 at 16:06):

Giacomo, yes, I see that zero / succ is how the tutorial begins. I can see some of the descriptions too are describing /how it is/ in the environment. I think what I'm looking for is a clearer sense of /why/ that is how it is. That's one thing.

On another track, I am not clear on what you gain from doing things as you describe in your example. Say, for example, /if/ you are defining 37 as 0, wrapped in nested successor functions, 37 times then don't you still need the concept of the number 37 beforehand to do it?

Jay (Jun 27 2024 at 16:38):

llllvvuu, I agree with you that you don't get addition for nothing. If anything, that is part of what I am driving at. When we look at it that way though my point is you don't get a successor function for free either. In fact, some microprocessors I have used have something like the successor function and addition in terms of Increment and Add commands. Neither of them come for free, both of them need to be defined in the silicon or the software, somehow. So, even to have a successor function on a CPU somewhere there must be something which describes its implementation. I tried to describe it in shorthand once here before. Let's say you are doing this on binary numbers. You begin with a pointer pointing to the rightmost bit of a binary word. You invert the bit. If that bit inverts to 0 then you move the pointer one binary digit to the left and then you repeat the same process with the next bit along. If a bit does not invert to zero then the incrementation is finished. Next time you want to increment a number you do that again with the number you want to increment. See what I mean? There is nothing "fundamental" about a successor function. It in itself is still something you need to define in terms of other operations which describe what it means and does.

Furthermore, as I also wrote earlier, because you need to do that then why start with such a limited function? You may as well start with addition too, which is exactly what most even simple microprocessors do. Slightly more complex ones have multiplication in the silicon, slightly more complex ones than that have division and more complex ones than that have floating point maths. Certainly incrementation is the simplest of those operations but I don't think that simplicity alone makes it axiomatic. And, if we are going to argue for increased simplicity as increasingly axiomatic then we would need to then accept there are simpler operations even than a successor function. In computing there are then logical operations like AND, OR and NOT which are simpler even than incrementation. So, if simplicity is what we are arguing for then a successor function is still not foundational because there are simpler operations than that.

My struggle is the same as when I began: Sure, in this environment perhaps you do begin with 0 and a successor function but the operation of the successor function still needs to be defined before it can make sense just like the same is true of addition. So, why not just begin with natural numbers and basic arithmetic including incrementation? Why give me a successor function and not addition given, as I said, to my mind incrementation is just a special unique case of addition anyway. And, also, surely one which only makes sense in terms of the naturals? You can't have a successor function for reals can you but you can still define arithmetical functions like addition for reals.

Again, it leaves me in the same place as when I started: I can see that I get 0 and the successor function but without understanding Why I do it seems like an arbitrary choice which could just as easily been something else.

And, for me as a programmer, that "something else" would be something like the Lambda calculus: not a set of axiomatic statements but rather an axiomatic set of rules or processes from which arbitrary functions can be built, which is exactly what the Lambda calculus attempts to be: A very, very simple set of operations intending to be a minimal set of operations from which such things as successor functions and addition can be built.

But, I am starting to turn in circles by offering various different but equally arbitrary ways to begin. The central point for me being "0 plus a successor function seems arbitrary. We could have started with something else. So, what can we do to find a starting point which is demonstrably Not arbitrary, or, what can we do to demonstrate what we get in this environment, with 0 + a successor function, is not arbitrary?"

I might add: Surely, 0 + succ is simpler than N + addition but... so what? It isn't as simple as the Boolean operations. It also has far less expressive power than addition.

So, if we can't convincingly defend 0 + succ, versus other possible starting points, against accusations of arbitrariness then why don't we start with N + addition, or the Boolean operations and some tools to apply those in building a successor function, instead?

I'm making this more complicated than it needs to be by defending my case.

The simplest way of putting it is: "Why? Why this and not something else?". What's a concrete defence for why we didn't do one of a multitude of other things we could have done instead? And, can we define some starting point for which we can also provide an objectively agreeable defence so all reasonable people can agree there was no better way to begin and avoid accusations of false presumption where something else could have been used as well or better instead?

Yakov Pechersky (Jun 27 2024 at 16:56):

The simplest explanation here is, all concrete types are made by using the inductive keyword. And succ is not just a function, it is a constructor. And by the axioms of this type theory, constructors are injective, so zero is never ever equal to anything succ x, by the axioms that "inductive" installs.
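
Both facts are directly usable; succ_ne_zero is a core lemma and the injectivity lemma is generated from the inductive declaration itself (a small illustration):

example (n : Nat) : Nat.succ n ≠ 0 :=
  Nat.succ_ne_zero n

example (m n : Nat) (h : Nat.succ m = Nat.succ n) : m = n :=
  Nat.succ.inj h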

Yakov Pechersky (Jun 27 2024 at 16:59):

There are various ways to define integers (Z) for example. One is to have an inductive that has a nonnegative constructor and a negative constructor, where the constructors take Nats. Or one can have an inductive that has a nullary constructor zero and a positive and negative constructors that take Nats. Or one can define integers as a quotient type of Nats under some relation.
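
Sketches of the first two options (constructor names here are just illustrative; the first is essentially how core Lean's Int is set up):

inductive Int1 where
  | ofNat   : Nat → Int1   -- 0, 1, 2, ...
  | negSucc : Nat → Int1   -- negSucc n stands for -(n + 1)

inductive Int2 where
  | zero : Int2
  | pos  : Nat → Int2      -- pos n stands for n + 1
  | neg  : Nat → Int2      -- neg n stands for -(n + 1)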

Yakov Pechersky (Jun 27 2024 at 17:00):

After making each of these constructions, one can show that there are isomorphisms between them that respect what "zero" means and what "one" means and also what "add" means on each type.

Jay (Jun 27 2024 at 17:00):

Ok but, without wanting to cause too much frustration: Is that not mere reiteration? You are doing a good job of explaining /what/ I get in this environment, but, my fundamental question is not "How does this Lean thing work?", it is "Why does it work this way?".

Yakov Pechersky (Jun 27 2024 at 17:01):

Lean works in this inductive way because it is an explicit choice of the underlying type theory. Because this type of Calculus of Constructions creates types that make it straightforward to write proofs about the constructed objects, and gives computational rules too (what's known as a recursor).
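
For instance, declaring Nat automatically gives its recursor Nat.rec, and ordinary functions can be written through it (an illustrative use; in practice you would just use pattern matching):

def double : Nat → Nat :=
  Nat.rec (motive := fun _ => Nat) 0 (fun _ ih => ih + 2)

-- #eval double 3   -- 6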

Yakov Pechersky (Jun 27 2024 at 17:03):

And on top of this, the proving methodology about these objects doesn't require a huge set of laws. Rather, the kernel, which is what checks that everything is rigorously defined, is small, that is, there is a small set of laws that are thought to not be inconsistent -- that they wouldn't accept something false. This is what #leantt is about

Jay (Jun 27 2024 at 17:03):

Ok cool. So..... is "this type of calculus of constructors" the same as, or part of "the underlying type theory"?

Yakov Pechersky (Jun 27 2024 at 17:04):

This is out of my depth, but I would say "part of" but foundationally.

Yakov Pechersky (Jun 27 2024 at 17:05):

But I want to note that this is separate from the choice of how to construct the naturals.

Yakov Pechersky (Jun 27 2024 at 17:06):

I would suggest that you read up on how the reals can be constructed. Not in Lean but in regular old math classes and departments. And what we care about in the reals _after_ we construct them.

Jay (Jun 27 2024 at 17:08):

Yep, that does sound like a pretty good start. Is "this type of calculus of constructors" a general topic I can learn as well as the example case you provide?

Yakov Pechersky (Jun 27 2024 at 17:12):

You could try reading about how it works in lean in #tpil

Yakov Pechersky (Jun 27 2024 at 17:12):

https://lean-lang.org/theorem_proving_in_lean4/dependent_type_theory.html and get to chapter 12

Henrik Böving (Jun 27 2024 at 17:18):

Jay said:

Yep, that does sound like a pretty good start. Is "this type of calculus of constructors" a general topic I can learn as well as the example case you provide?

As I've been telling you over multiple messages now, if you truly want to understand the internals of Lean's type theory you can read #leantt and related academic material, resources like #tpil or the functional programming book will teach you about what you can do with this system, not how it works underneath. And again I heavily recommend to instead focus on working with the system for a while before trying to understand complicated material like #leantt because it will be of highly limited use in your day to day activity.

Jireh Loreaux (Jun 27 2024 at 17:27):

Henrik, I think #tpil is a great place to start here. It may not answer all of Jay's questions, but I think it is a good place from which one could go to something like #leantt in the future for a deep understanding.

Henrik Böving (Jun 27 2024 at 17:31):

Yes, that's what I was trying to say with the second half of my paragraph.

Jay (Jun 27 2024 at 18:52):

Yes, thanks Henrik, and in fact I know you have been saying that and I haven't ignored what you have said, I have started to read that #leantt document and I have found it very interesting as it references other previous theorem proving systems I have encountered and it mentions the Lambda calculus which I have mentioned more than once.

I haven't looked at #tpil yet but I will do that next. Thank you.

Kyle Miller (Jun 27 2024 at 20:11):

This is a long thread and I might have missed someone else expressing this point, but it's worth noting that you don't have to accept the existence of a successor function. Everything you can do with it is a truth contingent on there being a successor function. The fact that no one's ever proved a contradiction using the successor function is evidence that there could be one, but just because people have built the airstrip and surrounding infrastructure doesn't mean that anything's been airdropped already :-)

Sometimes I like to think about axioms as being a library interface for all of mathematics, where if you have an "operating system" that provides these resources, you can run this mathematics program there.

(Regarding "why not start with addition instead of the successor function", that's something you can do, but it's technically more challenging to set up I think. The one that comes to mind is where you take all expressions involving 0, 1, and plus, then you have to come up with all the rules for what it means for two such expressions to be "the same", and then you have to somehow assert that they're "the same" by using a quotient construction. Your observation about successor as being an abstract version of counting is astute: we're saying "what if you could count forever? and what if you could make use of that count an reinterpret it?" The latter question is about being able to do recursive definitions.)

llllvvuu (Jun 27 2024 at 22:31):

Jay said:

llllvvuu, I agree with you that you don't get addition for nothing. If anything, that is part of what I am driving at. When we look at it that way though my point is you don't get a successor function for free either. In fact, some microprocessors I have used have something like the successor function and addition in terms of Increment and Add commands. Neither of them come for free, both of them need to be defined in the silicon or the software, somehow. So, even to have a successor function on a CPU somewhere there must be something which describes its implementation. I tried to describe it in shorthand once here before. Let's say you are doing this on binary numbers. You begin with a pointer pointing to the rightmost bit of a binary word. You invert the bit. If that bit inverts to 0 then you move the pointer one binary digit to the left and then you repeat the same process with the next bit along. If a bit does not invert to zero then the incrementation is finished. Next time you want to increment a number you do that again with the number you want to increment. See what I mean? There is nothing "fundamental" about a successor function. It in itself is still something you need to define in terms of other operations which describe what it means and does.

The issue with this comparison is that successor is not a function. Would you say the number 101010 needs to be "implemented"? In some sense, yes (you need to create a "bit" in hardware) but clearly not on the same level as 101010 + 101010 needs to be implemented.

Your comparison with flipping bits is not applicable because succ is not an algorithm that has an input and output. It is just the input itself (just like a bit is not a function). You presume that a unary number processor would have succ and add ops in its assembly, and that we are unfairly privileging the succ op. But this premise is flawed. There would be no succ op.

The "function" you're thinking of is the function "prepend a succ to this number". But that is not the same as "succ", just as "prepend a bit to this number" is not the same as "bit". What might be confusing is that both things are called succ, but it's important to understand this distinction.

Since you are familiar with the Lambda calculus, you must be aware of Church numerals, yes? This is the same thing.

llllvvuu (Jun 27 2024 at 23:02):

Here's a janky illustration in C:

#include <assert.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

typedef char* Nat;

#define zero (0)
#define succ (1)

void print(Nat x) {
  Nat i = x;
  while (*i == succ) {
    printf("succ ");
    i++;
  }
  assert(*i == zero);
  printf("zero\n");
}

// add x y: peel one succ off x and prepend it to y, until x runs out of succs.
Nat add(Nat x, Nat y) {
  if (*x == zero) {
    return y;
  } else {
    // room for one extra succ, y's succ bytes, and y's zero terminator
    Nat succ_y = (Nat) malloc(strlen(y) + 2);
    memcpy(succ_y + 1, y, strlen(y) + 1);
    *succ_y = succ;
    return add(x + 1, succ_y);
  }
}

int main() {
  char two[3] = {succ, succ, zero};
  print(two);
  char three[4] = {succ, succ, succ, zero};
  print(three);
  Nat five = add(two, three);
  print(five);
}

Note that there is no succ function. Rather, succ is a primitive value. But there needs to be an add function. How would it be done the other way around?

Jay (Jun 27 2024 at 23:13):

OK, so I can see how in your C code succ is assigned to a primitive type, a char, not a function. With succ and zero defined as char values you can make strings of them. So that then begs the question: "So, what is the meaning of 'succ 0'?". Doesn't 'succ 0' mean "the thing that comes after 0"?

llllvvuu (Jun 27 2024 at 23:20):

Yes, that's how we've chosen to represent it. We could have chosen to represent Nat in a number of other ways. For example, if we were to make it a linked list of uint64_t, then the "next" pointer would be the analogue of succ. In particular it could be read as succ 0x... n instead of succ n. Actually, Lean has a type that looks like this called docs#List (familiar from ML / Haskell) which is basically a linked list. Perhaps the List type may be less controversial to you, although it is essentially the same principle as the Nat type.

The reason we have chosen to represent Nat as a string of succ, rather than as, say, a string of uint64_ts, is that it's much easier to do proofs with. We don't prioritize computational efficiency here.

llllvvuu (Jun 27 2024 at 23:24):

Note that this is no more arbitrary than choosing to represent numbers as a sequence of 0s and 1s. In one case (binary), we chose the representation because it is easy to represent with circuits. In another case (decimal), we chose the representation because it is easy to represent with 10 fingers. In yet another case (unary), we chose the representation because it is easy for mathematical proofs.

Yakov Pechersky (Jun 27 2024 at 23:27):

What if I postulate a type of nonpositive integers? With two constructors, zero and pred. Then pred means the thing that comes before 0... But this type is totally isomorphic to Nat, I just changed the name of one of the constructors.

llllvvuu (Jun 27 2024 at 23:29):

Indeed. This is similar to how 0x41 could be interpreted as 'A', 65, or a number of other things.

Henrik Böving (Jun 27 2024 at 23:30):

That is indeed correct, the interpretation of the values of a type is up to the reader. You could also say that List Unit is isomorphic to Nat or that the type of flat semi lattices:

inductive FlatTopLattice (a : Type) where
| top
| data (x : a)

is isomorphic to Option and so on.
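
The List Unit / Nat correspondence, for instance, is just a pair of back-and-forth conversions (a quick sketch):

def listToNat : List Unit → Nat
  | []        => 0
  | _ :: rest => listToNat rest + 1

def natToList : Nat → List Unit
  | 0     => []
  | n + 1 => () :: natToList n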

Jay (Jun 27 2024 at 23:55):

Oh, I agree with the fact binary numbers are no less arbitrary. I don't think beginning with a numbering system of any kind is a good start. Any numbering system would also be arbitrary. Why a numbering system at all? What I do believe in though is beginning with a system which can be used to define such a thing and the operations which act upon it. That's why I have mentioned the Lambda Calculus because, from a computer science point of view, it's one of the purest systems we have.

With a computer though I see it not as an attempt to establish something fundamental about logic or reasoning. It isn't supposed to be an idealised existential environment in which each step from the first is validated. So with a computer it's far easier for me to accept the arbitrariness of some of its constructs. With a computer all that we need to expect is a toolkit of things which you can use to make enginery which provides some useful purpose. You can use a computer to build a theorem prover, for example. You use it to make a theorem prover because the theorem prover, you expect, is a far more exacting environment than that of a general-purpose computing system. You use the computer to construct algorithms and the theorem prover you use to either prove or expose the faults in an algorithm.

So, the toolkit of things I expect from a theorem prover I expect to be far more exacting and more robust to scrutiny than a computer language, for example C++, which I don't think would be regarded as elegant or beautiful or ideal by most people. It does offer a long-established hotch-potch of things which have proven to be useful though, but I digress. This isn't the topic.

Jay (Jun 28 2024 at 00:01):

I think I get what you are saying, Yakov. You have a symbolism but what you name it or use it to represent is itself arbitrary.

I do sometimes think that even binary is a gross abstraction of what actually happens at the electronics level. The closer we look at the electronics the less it looks like binary. If we were to look at a chip on the microscopic level and watch the behaviour of the electronics we'd realise even the 1s and 0s we talk about are notional too. They are merely labels we have given to a, hopefully, predictable behaviour which has been filtered through many separate stages to make it appear to be the case. So, even the binary digits we assign to those behavioural effects are notional in themselves. It isn't even really binary, it's some sort of physical property of the underlying materials to which we assign those labels.

llllvvuu (Jun 28 2024 at 00:07):

Is the Lambda Calculus not "arbitrary" and comparable to a numbering system? Is λ not "airdropped" to us? Alternatively, is there a better way to represent numbers in the lambda calculus than Church encoding? (which is a numbering system, and actually the same numbering system as the Peano one)

Jay (Jun 28 2024 at 00:36):

Oh, yes.... but that's what I said in my previous post...... what I am getting at is perhaps what I expect as a computer person.....

You have some such thing as a Turing-Complete system. That is non-arbitrary enough because it is defined as a system which can describe anything computable. What route you take to get there doesn't need to be ideal. It could result in the Lambda Calculus, Lisp, Haskell, C++ or any one of a load of other esoteric Turing-Complete systems. It's actually quite hard to Avoid Turing-Completeness. It isn't an especially wonderful quality. If you have some experience writing software you can define a functional computer programming language with whatever you know about programming. If such a language has loops and conditional branches it will almost certainly be Turing-complete as if by accident.

My point is That level, the level at which you are describing the system you use to describe the system, doesn't need to avoid arbitrariness so long as it achieves the end-result of Turing completeness and it's easy enough for a human to use. Hence, there are dozens of different Turing complete computer programming languages. Most of them are obviously not ideal and most contain fairly arbitrary choices and most of those choices are inherited by default from previous languages on which they are based.

You can then use that system to describe something like a Theorem prover but if you set out to do that then the system you describe using the general-purpose computing environment should be far more exacting.

I mean, that's just a repetition of what I said previously, and what I said previously is what you find in something like Lean:

The computing platform used to describe Lean is not pure or perfect. It isn't supposed to be. It's just supposed to be useful, easy enough to use, and general purpose.

There are examples of programming languages defined to meet the dual purpose of being general purpose but also of being robust enough to be usable in proofs.

https://cakeml.org/

The reason you define Lean and you don't use C++ instead is because the purpose of Lean, or a theorem prover, is far more to achieve robustness and avoid arbitrariness and arrive at something far more axiomatic and bullet-proof in nature. With the aim of being bullet-proof I think it should be fair we fire bullets at it and see what happens. Isn't that somewhat the purpose of theorem proving?

That's my thesis on the matter. Lean is a robust system built out of a toolkit which is less robust and that's why you built it: Because the system you built it on isn't robust enough. The demands you have from Lean should be far more robust than the demands you place on the underlying system it was built from because that is its purpose. Again, I am in danger of going round in circles: You take a general-purpose system of moderate robustness (the computer and its languages) and you use that toolset to define a far more restrictive system of far greater robustness, be it Lean, Isabelle, Coq or something else.

llllvvuu (Jun 28 2024 at 00:44):

What do you consider not robust about unary/Peano numbers / Church numerals? As far as I can tell you haven't actually managed to break it nor implement a robust alternative. Or are you aware of any theorem provers that define natural numbers not in terms of a numbering system? If so, how do you write down a number?

Jay (Jun 28 2024 at 00:58):

Well, again, my outlook on that is more "Why?". An interrogation rather than a statement about it. It isn't immediately obvious to me what you get from that. From my point of view you shouldn't need to begin with a numbering system of any kind for the reasons we already explored: It's going to be arbitrary. Merely the fact it is a numbering system is going to be arbitrary. Why a numbering system? Why not something else?

Rather, my view is the underlying system, be it the Lambda Calculus or anything else (C++ if you want) need not itself be axiomatic. All it needs to be is general purpose, such as C++ is. The Lambda calculus is too, but you wouldn't really want to try to write complex software with it although we have Haskell and that's probably the nearest we have got to somebody trying to do just that (with much success).

Then with that set of underlying tools you are able to construct such things as numbering systems or any more (or less) restrictive scheme of working. With a compiler language written in Assembly you get a less restrictive system from a more restrictive one. With Lean you derive a More restrictive scheme from a Less restrictive one.

The issue I have with Peano numbers is therefore again: Why? It isn't obvious to me what you get from that which stops you from just having the natural numbers in the first place.

I feel like I am talking to people fighting me on my suggestion the successor function is just taken from basic counting. It's just accepted as "yep, that's a basic thing. We should use it". Within Lean one of the first things I learned is the proof "one_eq_succ_zero". So, by extension, I suppose "ten_eq_succ_nine"? So I can as yet see no important difference between the successor function and counting. Doesn't it say in lean that they are proven to be equivalent? If you can have that then why not give yourself the set of natural numbers and addition instead? Somebody may have tried to explain this before but if they did I clearly didn't understand the explanation.

llllvvuu (Jun 28 2024 at 00:59):

Can you implement that "something else"?

"just having the natural numbers in the first place" how? Can you implement it?

Eric Wieser (Jun 28 2024 at 01:06):

It isn't obvious to me what you get from that which stops you from just having the natural numbers in the first place.

Wait till you find out that Lean doesn't have = in the first place, and instead builds it from first principles (src#Eq) in the same way as it builds Nat!
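
The core definition really is just another inductive declaration, essentially the following (shown under a fresh name so it doesn't clash with the built-in Eq):

universe u

inductive MyEq {α : Sort u} (a : α) : α → Prop where
  | refl : MyEq a a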

Jay (Jun 28 2024 at 01:08):

Can you implement that "something else"?

Well, by something else, I mean anything else. I mean instead of giving you natural numbers or Peano numbers or a successor function, what you get is a system in which you can define those things... or something else instead. Anything else. Not just one specific arbitrary thing.

"just having the natural numbers in the first place" how? Can you implement it?

I think so. In the way I explained: You begin instead with a system through which you can implement it. Such a system could, for the sake of a starting gambit, accept symbols and allow you to define operations which act upon those symbols. If anything that's what a basic Turing-Machine is. That way you can start by accepting the symbols typically used by numbering systems like binary or decimal are arbitrary. You can accept the operations you define which act upon those symbols, such as addition, are arbitrary. Or the choice of zero or a successor is arbitrary and you can also do none of those arbitrary things but something else (anything you like) with the underlying system instead.

That way you can then say "yes, we accept the successor function is arbitrary, but that's ok because it's useful and here's why", just as you can with any system described using the same tools. Surely, it's an easier thing to have a system and accept the arbitrary nature of any such system defined by it? Then you can have a theorem prover or algebra of ZFC made using the same underlying system.

Jay (Jun 28 2024 at 01:10):

Wait till you find out that Lean doesn't have = in the first place

The point is I can absolutely cope with that. That, again, is my point. The thing is your point seems only to underline my point. You don't get = and you need to define what it is from first principles? Great! I'm happy about it.

So then... why did you get the successor function? (and not =?)

So, I'm great with not getting = for free and my point is you shouldn't have the successor function for free either!

You shouldn't get anything for free. You should Have to justify it.

llllvvuu (Jun 28 2024 at 01:12):

Jay said:

Wait till you find out that Lean doesn't have = in the first place

The point is I can absolutely cope with that. That, again, is my point. The thing is your point seems only to underline my point. You don't get = but need to define what it means from first principles? Great!

So... why'd you get the successor function and not =?

So, I'm great with not getting = for free and my point is you shouldn't have the successor function for free either!

But Lean didn't give us succ either. We created docs#Nat the same way we created docs#Eq. It's not "free" and it doesn't come with the system.

Jay (Jun 28 2024 at 01:15):

Ok fine.... that's progress.

So.... we didn't get it for free.... so where did we get it from and why?

Eric Wieser (Jun 28 2024 at 01:15):

If you want to play hard-mode lean, write prelude at the top of your file

Eric Wieser (Jun 28 2024 at 01:15):

Then you get almost nothing; no succ, no Eq, no Nat, ...

Eric Wieser (Jun 28 2024 at 01:17):

The absence of prelude means import Init, and then you get everything defined in those imported files
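
A tiny illustration: in a file that starts with prelude nothing is imported, so even the naturals have to be declared by hand from the inductive primitive:

prelude

inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat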

Jay (Jun 28 2024 at 01:17):

Ok. I am very happy about that! But you do get something, right? So, what's the least you can have?

In the "You can't have any less stuff than this" what's in it?

What is the underlying thing from which you are building all these conceptual things like equality and a successor function?

Eric Wieser (Jun 28 2024 at 01:20):

You get the inductive keyword, which is part of the type theory

llllvvuu (Jun 28 2024 at 01:21):

You get a kind of souped-up lambda calculus. You can start here: https://www.youtube.com/playlist?list=PLNwzBl6BGLwOKBFVbvp-GFjAA_ESZ--q4

llllvvuu (Jun 28 2024 at 01:24):

Since you are interested in foundations, it sounds like you want to understand the Lean kernel. You can try also: https://ammkrn.github.io/type_checking_in_lean4/ as well as #leantt mentioned above. The former may be more accessible as it is more concrete and discusses code/implementation. You can also see docs#Lean.Expr. And look up "calculus of inductive constructions". If you are looking for "minimal" then this is it, and if it's not minimal enough for you, then that is the thing to critique, not the init/prelude (as you are free to not use those)

Jay (Jun 28 2024 at 01:33):

You get the inductive keyword, which is part of the type theory

I am at # induction in the tutorial and it isn't something immediately comprehensible to me. I have applied it but I don't understand how it transformed the goal into the two goals I now have. To try not to spread out on tangents here I have asked for an explanation in the main "new members" channel. Please feel free to help me there if you can.

Presumably the "inductive" keyword and induction are related?

Jay (Jun 28 2024 at 01:34):

You get a kind of souped-up lambda calculus. You can start here: https://www.youtube.com/playlist?list=PLNwzBl6BGLwOKBFVbvp-GFjAA_ESZ--q4

That video series looks superb in fact. I will be sure to watch it. Thank you!

Jay (Jun 28 2024 at 01:39):

Since you are interested in foundations, it sounds like you want to understand the Lean kernel. You can try also: https://ammkrn.github.io/type_checking_in_lean4/

Superb! Yes!

The kernel is an implementation of Lean's logic in software; a computer program with the minimum amount of machinery required to construct elements of Lean's logical language and check those elements for correctness.

This (before actually reading the content) sounds exactly like the missing piece of the puzzle I have been trying to describe! The strict basis-system which can be defined in a conventional programming language and on which you can in turn start to derive the various things like zero, equality, a successor function and all those things I think you shouldn't get for nothing. If that's what the kernel does then that is exactly what I am looking for! :)

Mario Carneiro (Jun 28 2024 at 01:42):

Jay said:

Oh, I agree with the fact binary numbers are no less arbitrary. I don't think beginning with a numbering system of any kind is a good start. Any numbering system would also be arbitrary. Why a numbering system at all? What I do believe in though is beginning with a system which can be used to define such a thing and the operations which act upon it. That's why I have mentioned the Lambda Calculus because, from a computer science point of view, it's one of the purest systems we have.

One thing you seem not to appreciate here is that this is exactly what lean already does. Lean doesn't start with a numbering system, it starts with a foundation not unlike pure lambda calculus. We then define the natural numbers as some inductive type generated by zero and succ. They are not god-given any more than any other definition is, and we can quite easily make different definitions with different constructors.
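
Concretely, the declaration in core Lean is essentially the one below, and nothing stops you from putting a differently-shaped type next to it (Pos here is just an invented example, not a library type):

namespace Hidden  -- a namespace so these don't clash with the real declarations

-- essentially how core Lean declares the natural numbers
inductive Nat where
  | zero : Nat
  | succ (n : Nat) : Nat

-- a differently-shaped "number type" is just as easy to declare:
-- positive binary numerals, generated by one, bit0 and bit1
inductive Pos where
  | one  : Pos
  | bit0 : Pos → Pos   -- 2 * n
  | bit1 : Pos → Pos   -- 2 * n + 1

end Hidden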

Jay (Jun 28 2024 at 01:45):

One thing you seem not to appreciate here is that this is exactly what lean already does

You are right. I did not appreciate that when I started and it's exactly why I started along this track!

I don't think I can be blamed entirely, as when I started the tutorial it jumps right in with the successor function. It said nothing about what you are saying here. It just thrust it upon me, which is why I reacted against it: it should not just be handed to me as if it fell out of the sky or is some sort of absolute basis. That's why I started this thread.

Jay (Jun 28 2024 at 01:47):

So... if I don't get a successor function for free, if succ is not a foundational component of Lean, and if succ is instead constructed from a more basic underlying system in which a successor function does not exist but from which one can be built, then I will be far happier using Lean.

Mario Carneiro (Jun 28 2024 at 01:48):

the key point to understand about inductive types is that they are basically just using expressions as a means of data representation. When you write succ (succ (succ 0)) that's not a sequence of function applications, it's the value itself. Every value of an inductive type has a unique normal form produced by combining the constructors in every possible way

Mario Carneiro (Jun 28 2024 at 01:49):

and it so happens that when you use this process with constructors zero : Nat and succ : Nat -> Nat you get a "counting system" that seems to look a lot like natural numbers

Mario Carneiro (Jun 28 2024 at 01:50):

If you do it with leaf : T and node : T -> T -> T you get unlabeled binary trees instead
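
Spelled out (a throwaway sketch, with the type named BinTree here so it doesn't collide with anything):

inductive BinTree where
  | leaf : BinTree
  | node : BinTree → BinTree → BinTree

open BinTree

-- different ways of combining the constructors give genuinely different values
#check node leaf (node leaf leaf)
#check node (node leaf leaf) leaf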

Jay (Jun 28 2024 at 01:50):

Yes, that, again, is something I have learned from this thread. I had previously thought it was a function in its own right, as it would be if you had to define what counting is.

So.... I have been listening to what people here have said and I have learned something... more than one thing.... from reading the responses.

Mario Carneiro (Jun 28 2024 at 01:50):

and lambda calculus is basically just playing around with trees of applications

Mario Carneiro (Jun 28 2024 at 01:52):

in some sense inductive types are more orthogonal than lambda calculus because the values don't just compute on their own. Encoding things in lambda calculus means you have to be careful to ensure that the data is not in a position where it will start reducing right away

llllvvuu (Jun 28 2024 at 01:52):

Jay said:

One thing you seem not to appreciate here is that this is exactly what lean already does

You are right. I did not appreciate that when I started and it's exactly why I started along this track!

I don't think I can be blamed entirely, as when I started the tutorial it jumps right in with the successor function. It said nothing about what you are saying here. It just thrust it upon me, which is why I reacted against it: it should not just be handed to me as if it fell out of the sky or is some sort of absolute basis. That's why I started this thread.

One thing to note is that the target audience of the Natural Number Game doesn't share your enthusiasm for type theory / foundations, and so that's why it doesn't start from CIC.

I'm not sure if there is any good introductory resource that takes that approach. There are some nice cubical Agda tutorials with code exercises which start from type theory fundamentals, but I don't know if there is any such thing gamified and/or for Lean.

Mario Carneiro (Jun 28 2024 at 01:52):

#tpil starts with type theory

Mario Carneiro (Jun 28 2024 at 01:53):

but it's not gamified

llllvvuu (Jun 28 2024 at 01:54):

Ah yes, I think that would be perfect for you @Jay

Jay (Jun 28 2024 at 01:57):

I am reading #tpil right now. It is a really interesting resource getting into the foundations right from the start. Thank you for sharing it with me and for your patience. You have all been very helpful!

Jay (Jun 28 2024 at 01:58):

(*so far)

I expect my curiosity and questions are far from at their end.

Arthur Paulino (Jun 28 2024 at 11:04):

Note that lambda calculus also has an "arbitrary" set of reduction rules and a few "arbitrary" encodings for numbers. Similarly, C also has a few "arbitrary" data representations for numbers. We create meaning for data we want to manipulate

Arthur Paulino (Jun 28 2024 at 11:20):

If you want something other than zero and succ, you can try representing numbers as a sequence of symbols sampled from a finite set of symbols. If your finite set has size two, that's morally binary representation. If your finite set has size 16, that's morally hexadecimal representation.

This is how you would count in binary:

inductive Symbol
  | A | B

inductive Num
  | nil
  | cons : Symbol → Num → Num

#check Num.nil
#check Num.cons Symbol.A Num.nil
#check Num.cons Symbol.B Num.nil
#check Num.cons Symbol.A (Num.cons Symbol.A Num.nil)
#check Num.cons Symbol.A (Num.cons Symbol.B Num.nil)
#check Num.cons Symbol.B (Num.cons Symbol.A Num.nil)
#check Num.cons Symbol.B (Num.cons Symbol.B Num.nil)

Then you can define your arithmetic operations on top of it. Beware of pain.
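
For a first taste of that pain, here is one possible increment function on this representation, reading the list as least-significant symbol first with A as 0 and B as 1 (this is just one way to do it, assuming the Symbol and Num declarations above):

-- increment with carry: A = 0, B = 1, least-significant symbol first
def Num.incr : Num → Num
  | Num.nil => Num.cons Symbol.B Num.nil                          -- 0 + 1 = 1
  | Num.cons Symbol.A rest => Num.cons Symbol.B rest              -- ...0 + 1 = ...1
  | Num.cons Symbol.B rest => Num.cons Symbol.A (Num.incr rest)   -- ...1 + 1 = ...0, carry

-- incr of "1" (cons B nil) is "10" (cons A (cons B nil))
#check Num.incr (Num.cons Symbol.B Num.nil)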

Arthur Paulino (Jun 28 2024 at 11:30):

And if you want to generalize your symbol domain, you can use Fin to represent your symbols. Fin 1 would give you unary representation. Fin 2 would give you binary representation...

If you're really motivated, it might be a good exercise to see where you can get. If you get to the point of proving commutativity of addition, compare your proof with the proof you'd be able to write using Lean's native Nat instead:

-- from Init.Data.Nat.Basic
@[simp] protected theorem zero_add : ∀ (n : Nat), 0 + n = n
  | 0   => rfl
  | n+1 => congrArg succ (Nat.zero_add n)

protected theorem add_comm : ∀ (n m : Nat), n + m = m + n
  | n, 0   => Eq.symm (Nat.zero_add n)
  | n, m+1 => by
    have : succ (n + m) = succ (m + n) := by apply congrArg; apply Nat.add_comm
    rw [succ_add m n]
    apply this

Arthur Paulino (Jun 28 2024 at 11:41):

Interestingly, note that if your symbols live in Fin 1, then you're basically back to Lean's Nat
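
In code, that generalization is something like the following sketch (NumF is just a name picked here):

-- digits drawn from Fin k: k = 1 is unary (morally Nat), k = 2 binary, k = 16 hexadecimal
inductive NumF (k : Nat) where
  | nil
  | cons : Fin k → NumF k → NumF k

#check NumF.cons (1 : Fin 2) NumF.nil   -- the binary numeral "1"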

Mark Fischer (Jul 02 2024 at 16:22):

When I define a new function, I need to define its behavior in the same breath:

def fnFoo : ℕ → ℕ := λn ↦ n + n^(n + 1)

So you can see clearly that fnFoo is created from some constituent parts — like addition and exponentiation.

When you see succ : ℕ → ℕ without any := after it, you might think "Oh, so Lean has some special knowledge of how to get the successor of a number." This is sort of true. It doesn't have any idea of numbers, successors or addition, but it does have the notion of a constructor (and importantly that any unique constructor of a type is not equal to any other). If you're thinking about Lean as a programming language, then you can think of Lean as somehow remembering which of the constructors (zero or succ) and what data (if any) was used to make a value of type Nat, so that it can be uniquely identified again later.

So succ 5 doesn't compute a six, it just creates a new Nat value uniquely identified by a 5 and we're the ones saying "Yeah, being uniquely identified by a 5 in this way makes you a 6."

It turns out that the way constructors in inductives work lets you create a representation of the natural numbers (or as @Arthur Paulino points out, any number of representations). So succ isn't really special or built-in or anything like that.
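
Two small checks that illustrate this, using the built-in Nat:

#check Nat.succ 5                 -- : Nat; this is already a value, nothing left to "compute"
example : Nat.succ 5 = 6 := rfl   -- the numeral 6 is just notation for that same value
example (n : Nat) : Nat.succ n ≠ 0 := Nat.succ_ne_zero n   -- distinct constructors never agree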

Jacob Weightman (Feb 02 2025 at 22:15):

Having a silly issue here. I have these definitions:

structure TagSystem (α : Type u) [Inhabited α] where
  v : Nat
  v_nonzero : v ≠ 0
  step : α → List α

namespace TagSystem

variable {α : Type u} [Inhabited α]
variable {T : TagSystem α}

def halted (A : List α) :=
  A.length < T.v

instance dec_halted (A : List α) : Decidable (T.halted A) :=
  Nat.decLt A.length T.v

lemma read_v_nonempty (T : TagSystem α) (A : List α) : T.v ≤ A.length → A ≠ [] := by ...

def next (A : List α) : List α :=
  match Nat.decLe T.v A.length with -- could use `T.halted A` here, but that only indirected the problem!
  | isTrue h =>
      have h' : A  [] := T.read_v_nonempty A h
      (A.drop T.v) ++ (T.step (A.head h'))
  | isFalse _ => A

And now I'm trying to prove this very simple theorem, but getting stuck:

theorem next_halted (A : List α) : T.halted A → T.next A = A := by
  unfold halted
  intro h
  simp [next]

This leaves me with the following proof context:

unsolved goals
α : Type u
inst : Inhabited α
T : TagSystem α
A : List α
h : A.length < T.v
⊢ (match T.v.decLe A.length with
    | isTrue h => List.drop T.v A ++ T.step (A.head ⋯)
    | isFalse h => A) =
  A

I would like to be able to use h to resolve the condition of the match statement to the isFalse branch, but I can't seem to figure it out. In general, I'm a bit confused about how to work with Decidable. Could someone point me in the right direction here?

Aaron Liu (Feb 02 2025 at 22:20):

Try using

def next (A : List α) : List α :=
  if h : T.v ≤ A.length then
    have h' : A ≠ [] := T.read_v_nonempty A h
    (A.drop T.v) ++ (T.step (A.head h'))
  else A

instead

Aaron Liu (Feb 02 2025 at 22:21):

Then you can use docs#dif_pos and docs#dif_neg to reason about it.
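
With that version, the original theorem might go through with something like the following (an untested sketch; Nat.not_le is used to turn h into the negation dif_neg expects):

theorem next_halted (A : List α) : T.halted A → T.next A = A := by
  unfold halted
  intro h
  -- h : A.length < T.v, so the condition T.v ≤ A.length is false
  simp only [next, dif_neg (Nat.not_le.mpr h)]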


Last updated: May 02 2025 at 03:31 UTC