Zulip Chat Archive

Stream: general

Topic: computability


Kevin Buzzard (Jun 14 2019 at 17:34):

It's about time I understood what Lean means by computability.

import tactic.interactive

-- lean forces me to write noncomputable
noncomputable def g {X Y : Type} (f : X → Y) (hf : function.bijective f) : Y → X :=
begin
  intro y,
  have h := hf.2 y,
  choose x hx using h,
  use x
end

#print g -- it's supposed to be noncomputable but it looks like a computer program to me

-- Here I don't need to write noncomputable
theorem g_property {X Y : Type} (f : X → Y) (hf : function.bijective f) (y : Y) :
  f (g f hf y) = y :=
begin
  exact classical.some_spec (hf.2 y)
end

#print axioms g_property -- classical.choice
-- I used the axiom of choice but I'm computable anyway.

-- and this is computable
def lem (P : Prop) : P ∨ ¬ P := classical.em P

#print axioms lem -- all the axioms

In the definition of g, I am forced to mark it as noncomputable, but what does that mean? I thought proofs were programs. Is my definition of g not a program? My proof of g_property uses the axiom of choice according to #print axioms -- and yet it's computable anyway (in the sense that I was not asked to mark it as noncomputable). And I was not asked to mark my definition of lem as computable, but that's a definition which apparently uses the axiom of choice too. What is going on? It all still looks totally random to me but I think I might understand things enough to actually begin to see what this noncomputable thing which I'm randomly asked to put in front of definitions and possibly theorems (does that happen?) actually means, if someone tries to explain it to me...

Kevin Buzzard (Jun 14 2019 at 17:38):

Lean’s standard library defines an additional axiom, propositional extensionality, and a quotient construction which in turn implies the principle of function extensionality. These extensions are used, for example, to develop theories of sets and finite sets. We will see below that using these theorems can block evaluation in Lean’s kernel, so that closed terms of type ℕ no longer evaluate to numerals. But Lean erases types and propositional information when compiling definitions to bytecode for its virtual machine evaluator, and since these axioms only add new propositions, they are compatible with that computational interpretation. Even computationally inclined users may wish to use the classical law of the excluded middle to reason about computation. This also blocks evaluation in the kernel, but it is compatible with compilation to bytecode.

This is from TPIL and it's one of very few parts of the book which I still do not understand. But I don't understand this paragraph at all. I don't know what bytecode is or the virtual machine. I need to be told something easier first.

Kevin Buzzard (Jun 14 2019 at 17:40):

What I do understand is that there are problems I know, such as Waring's problem, where there is a number which I regard as "well-defined", but for which no mathematician knows an algorithm for working it out. I can see that these aren't "computable". I guess I don't see why the other things are computable.

Kevin Buzzard (Jun 14 2019 at 17:41):

https://en.wikipedia.org/wiki/Waring%27s_problem

It's known that 4 ≤ G(3) ≤ 7.

Chris Hughes (Jun 14 2019 at 18:01):

It's okay to use choice to make a Sort u, or a proof. If you use it for anything else it's noncomputable.

Reid Barton (Jun 14 2019 at 18:01):

I think the rules are simply
1. Types and propositions and proofs are always computable (because they are not actually represented at all at runtime). That is, anything whose type is Sort u for some u. This also extends to Pi-types ending in Sort u.
2. Axioms are not computable (because they have no definition). Normally I think this only applies to choice because the other standard axioms are propositions.
3. Anything that uses something noncomputable is noncomputable, except that rule 1 always takes priority.
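
A minimal sketch of these rules in Lean 3 (the names are made up for illustration): producing data with classical.some forces the noncomputable marking, while a proof that mentions the same axiom needs no marking at all.

-- rules 2 and 3: the returned data comes from classical.some, so Lean
-- insists on the noncomputable marking
noncomputable def choose_big (h : ∃ n : ℕ, 5 < n) : ℕ :=
classical.some h

-- rule 1: this is a proof (its type lives in Prop), so even though it
-- mentions the same axiom it is never asked to be marked noncomputable
theorem choose_big_spec (h : ∃ n : ℕ, 5 < n) : 5 < choose_big h :=
classical.some_spec h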

Reid Barton (Jun 14 2019 at 18:04):

Maybe I've confused myself and rule 1 is really two separate rules (type erasure and proof irrelevance), not sure.

Kevin Buzzard (Jun 15 2019 at 08:13):

Ok so perhaps some examples of things which I might want to define are noncomputable nats and noncomputable functions f : X -> Y. So then my next, and I think my last, question is: what am I losing by having these things marked as noncomputable? Is the only difference that I am basically guaranteed not to be able to prove anything about these noncomputable gadgets using rfl? Is that it? I can just reason about these objects perfectly well using exactly the same sort of reasoning I would use in a maths paper. I would never want to use rfl on these objects anyway because I know they've been defined using some choice function.

With things like the perfectoid project I would work constructively by default, and would just write noncomputable in front of a definition every time Lean told me to. I could instead have just written noncomputable theory at the top of every file. What would I have lost if I'd done that? I am struggling to find any reason at all why I would ever care about computability. Let's say my goal was to define a perfectoid space and then state Scholze's tilting correspondence (this is feasible but would take a fair bit more work), and then let's say my next goal was to prove it (this would take an unfeasibly large amount of work unless I had a big team). Would I ever care about computability ever?

Mario Carneiro (Jun 15 2019 at 08:30):

Lean produces code for everything that's not marked noncomputable

Mario Carneiro (Jun 15 2019 at 08:30):

you can look at this code, although it's not very intelligible

Mario Carneiro (Jun 15 2019 at 08:32):

noncomputable theory has no downsides except that it makes computability less obvious (which itself may not be a downside depending on your POV). It actually means "please infer noncomputable for me"

Mario Carneiro (Jun 15 2019 at 08:33):

As I'm sure you've noticed, lean knows perfectly well which definitions are computable and which aren't, and without noncomputable theory it nags you until you put the markings it knows you should put. noncomputable theory just says to do this automatically without user interaction
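
For concreteness, a small made-up example of what that looks like at the top of a file:

noncomputable theory  -- "please infer noncomputable for me"

-- no noncomputable keyword needed on this definition any more;
-- Lean adds the marking itself because the body uses classical.some
def choose_big (h : ∃ n : ℕ, 5 < n) : ℕ :=
classical.some h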

Kevin Buzzard (Jun 15 2019 at 08:34):

I just don't get why computability is important to me if I just want to make mathematical objects and state and prove theorems about them.

Mario Carneiro (Jun 15 2019 at 08:34):

It's not, unless you are Andrej Bauer

Mario Carneiro (Jun 15 2019 at 08:35):

and we're already long past that point since choice appears as an axiom so early in the development that it's difficult to do without

Mario Carneiro (Jun 15 2019 at 08:37):

You can still prove stuff about noncomputable gadgets using rfl, as long as you are doing some definition unfolding and such
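
A tiny made-up example of the kind of rfl proof that still works: the definition is noncomputable, but the equation below is pure definitional unfolding.

noncomputable def c (h : ∃ n : ℕ, 0 < n) : ℕ :=
classical.some h + 0

-- x + 0 reduces to x definitionally, so this is just unfolding,
-- even though c itself is noncomputable
example (h : ∃ n : ℕ, 0 < n) : c h = classical.some h := rfl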

Mario Carneiro (Jun 15 2019 at 08:38):

you won't be able to do large scale rfl proofs like "compute the class number of this field", but I think you know when to use the word "compute" in this context

Mario Carneiro (Jun 15 2019 at 08:39):

I think "noncomputable by default, computable for computations" is a reasonable stance for doing pure maths in lean

Kevin Buzzard (Jun 15 2019 at 08:40):

I am not sure I know what "computations" mean. Kenny pointed out to me on Thursday at Xena that mathematicians and computer scientists use the term in two different ways.

Kevin Buzzard (Jun 15 2019 at 08:40):

I think we use it incorrectly, and I think that the fact that I've been exposed to decades of incorrect usage of the term has clouded my thinking over the issue.

Mario Carneiro (Jun 15 2019 at 08:42):

roughly speaking, if you would open up GAP or a CAS to do it

Kenny Lau (Jun 15 2019 at 08:42):

then you need a computable model of say Q-bar

Mario Carneiro (Jun 15 2019 at 08:43):

sure, that's what I mean by "computable for computations"

Mario Carneiro (Jun 15 2019 at 08:43):

unless you need that, computability does you no favors

Kevin Buzzard (Jun 15 2019 at 08:57):

The Curry-Howard correspondence says that "proofs are programs". Is there a program corresponding to the proof of the theorem that a product of non-empty sets is non-empty? What is a program? What language is this program supposed to be in?

Chris Hughes (Jun 15 2019 at 08:58):

Yes. It's the program that returns (x, y), where x is the witness to X being nonempty and y is the witness to Y being nonempty.
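
A sketch of that program in Lean 3, with the nonemptiness carried as data (inhabited) rather than as a Prop, so the witnesses are not erased:

-- the "program": given witnesses to X and Y being inhabited,
-- literally build the pair
def prod_inhabited {X Y : Type} : inhabited X → inhabited Y → inhabited (X × Y)
| ⟨x⟩ ⟨y⟩ := ⟨(x, y)⟩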

Kevin Buzzard (Jun 15 2019 at 08:58):

infinite product!

Kevin Buzzard (Jun 15 2019 at 08:58):

Then you want to give me a function.

Kevin Buzzard (Jun 15 2019 at 08:59):

Is there a program corresponding to the proof of the theorem that the Riemann Hypothesis is either true or false?

Chris Hughes (Jun 15 2019 at 08:59):

I was thinking about this the other day

Kevin Buzzard (Jun 15 2019 at 09:00):

Or maybe the Continuum Hypothesis would be a better example (something which is known not to be provable or disprovable from the axioms of maths)

Chris Hughes (Jun 15 2019 at 09:00):

I think you can do this computably

example {ι : Type*} {α : ι → Type*} [S : ∀ i, setoid (α i)]
  (f : ∀ i, quotient (S i)) : @quotient (Π i, α i) (by apply_instance)

Kevin Buzzard (Jun 15 2019 at 09:01):

If I have P : Prop and then I have some Lean file which makes hP : P, is that definitely a program?

Kevin Buzzard (Jun 15 2019 at 09:01):

Even if somewhere along the way, the construction of the term hP uses something noncomputable?

Chris Hughes (Jun 15 2019 at 09:02):

In fact you could define this computably in meta using quot.unquot I think, and you would still have a function (even though you used quot.unquot, which isn't a mathematical function).

Kevin Buzzard (Jun 15 2019 at 09:02):

There is something completely fundamental here which I don't understand. A .lean file looks to me like a program, because I can compile it using Lean on my computer. Does this have anything to do with computability though?

Chris Hughes (Jun 15 2019 at 09:02):

And this is basically choice if all the quotients are true.

Chris Hughes (Jun 15 2019 at 09:03):

I guess it depends what you mean by computable.

Kevin Buzzard (Jun 15 2019 at 09:03):

Lean seems to have a perfectly well-defined notion of what it means for something to be computable.

Chris Hughes (Jun 15 2019 at 09:04):

.lean files aren't in the domain of that particular computable predicate.

Kevin Buzzard (Jun 15 2019 at 09:05):

My understanding is that the initial interpretation of "proof = program" was that a proof was supposed to be interpreted as some sort of proof using only constructive logic or intuitionistic logic or whatever. As we all know, this is not what is going on in maths departments across the world -- they have a more powerful notion of proof. Does this just break Curry-Howard, or is there a rigorous but wider interpretation of "program" which I can use which fixes it?

Mario Carneiro (Jun 15 2019 at 09:06):

The quotient product example is one place where lean's syntactic notion of "computable" is overly restrictive

Kenny Lau (Jun 15 2019 at 09:06):

just use Haskell's setoid-without-quotient philosophy

Kenny Lau (Jun 15 2019 at 09:06):

then the quotient.choice is computable

Mario Carneiro (Jun 15 2019 at 09:06):

aka setoid hell

Kevin Buzzard (Jun 15 2019 at 09:06):

I learnt about that from Nicola Gambino last week.

Mario Carneiro (Jun 15 2019 at 09:07):

There are curry howard interpretations of LEM but they are a bit weird

Kevin Buzzard (Jun 15 2019 at 09:07):

So Curry-Howard is a lie?

Mario Carneiro (Jun 15 2019 at 09:08):

Constructive proofs are programs

Chris Hughes (Jun 15 2019 at 09:08):

What I don't quite get is if I have computable choice and computable univalence, shouldn't I be able to get computable em. Obviously I can't, but I don't see where the proof breaks down.

Kevin Buzzard (Jun 15 2019 at 09:09):

C-H is pithy, alliterative, beautiful-looking, but not actually true in maths departments because we have a different definition of proof and there is no corresponding definition of program.

Mario Carneiro (Jun 15 2019 at 09:09):

I have heard it argued that for most of history mathematics was constructive

Kevin Buzzard (Jun 15 2019 at 09:10):

I don't live in the past.

Mario Carneiro (Jun 15 2019 at 09:10):

it wasn't until dedekind and such that we started getting a more general notion of function that doesn't readily admit "execution"

Mario Carneiro (Jun 15 2019 at 09:11):

You don't need to convince me that you don't see the point in constructivity

Kevin Buzzard (Jun 15 2019 at 09:11):

Maybe we can say that the age of Curry-Howard is past. What I am confused about is the fact that I can prove Tychonoff's theorem (or some other theorem equivalent to the mathematician's axiom of choice) in Lean and make a text file called tykonov.lean which looks to a mathematician like a computer program.

Kevin Buzzard (Jun 15 2019 at 09:12):

Is it not a computer program?

Mario Carneiro (Jun 15 2019 at 09:12):

It is not

Kevin Buzzard (Jun 15 2019 at 09:12):

But I can compile it and it runs.

Kevin Buzzard (Jun 15 2019 at 09:12):

It produces no output.

Mario Carneiro (Jun 15 2019 at 09:12):

it is input to a computer program that analyzes the information in the file

Mario Carneiro (Jun 15 2019 at 09:13):

There are many computer programs in the world, not all of them are compilers

Kevin Buzzard (Jun 15 2019 at 09:13):

Aah, so Lean is the program, but my .lean file is not a program?

Kevin Buzzard (Jun 15 2019 at 09:13):

I thought Lean compiled my .lean file into an .olean file

Mario Carneiro (Jun 15 2019 at 09:13):

Just like when you write TeX code it's input to a program but it's not a program itself, it's just a data representation

Mario Carneiro (Jun 15 2019 at 09:14):

unless you write esoteric TeX macros which would be analogous to lean metaprograms which are actually programs

Mario Carneiro (Jun 15 2019 at 09:15):

But the key point is this: when you write a proof that is computable in lean, it is literally both a proof and a program

Mario Carneiro (Jun 15 2019 at 09:15):

you can run it with #eval

Mario Carneiro (Jun 15 2019 at 09:16):

and that's the main idea behind the computability interpretation of intuitionistic math
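
A minimal made-up illustration: because the statement below is phrased in Type (a subtype) rather than Prop, the witness survives erasure, and #eval really does run the term.

-- simultaneously a proof-like object and a program: the witness is data
def even_above_ten : {n : ℕ // 10 < n ∧ n % 2 = 0} :=
⟨12, dec_trivial⟩

#eval even_above_ten.val  -- 12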

Kevin Buzzard (Jun 15 2019 at 09:16):

macros: I did this in my thesis, which was written in plain TeX (which has no theorem numbering macros); I wrote the macros myself. I think my confusion is now resolved. I don't want to lose track of Chris' question above about computable em, I know he's been thinking a lot about univalence recently. @Gabriel Ebner do you know anything about

What I don't quite get is if I have computable choice and computable univalence, shouldn't I be able to get computable em. Obviously I can't, but I don't see where the proof breaks down.

Mario -- thanks as ever.

Kevin Buzzard (Jun 15 2019 at 09:17):

Wait -- I thought the whole problem with HoTT was that univalence wasn't computable? And I have this vague idea that cubical type theory was an attempt to fix this.

Mario Carneiro (Jun 15 2019 at 09:18):

that's right

Kevin Buzzard (Jun 15 2019 at 09:18):

So Chris it seems to me that you have to become an expert in cubical type theory now.

Mario Carneiro (Jun 15 2019 at 09:19):

univalence is not a nonconstructive principle, meaning that we have long believed that it has a constructive interpretation, but as defined in book HoTT it's an axiom, meaning that when you have to "call" it like a function you don't know what to do

Mario Carneiro (Jun 15 2019 at 09:20):

In cubical type theory it's a theorem, so the implementation of that function (in terms of new primitives not in HoTT) is now known

Mario Carneiro (Jun 15 2019 at 09:20):

Re: univalence and choice, I would try going through Diaconescu with a constructive mindset to see what happens

Kevin Buzzard (Jun 15 2019 at 09:21):

I have this uncomfortable feeling about this (univalence axiom). When I was moaning about having built a bunch of structure on a ring R and then wanting to move it onto an isomorphic ring S, people were saying "use HoTT". Now I wonder whether some of the terms I'd get out of this HoTT strategy would just be useless in practice because they have been created using an axiom. On the other hand I guess I make terms using the axiom of choice and seem to get away with it.

Mario Carneiro (Jun 15 2019 at 09:22):

It is an open question whether all the consequences of the computability of univalence are provable in book HoTT. AFAIK it's true

Mario Carneiro (Jun 15 2019 at 09:23):

So you wouldn't actually get stuck, but you would need to prove lemmas about how transport works on different structures

Mario Carneiro (Jun 15 2019 at 09:23):

(transport is the function a = b -> P a -> P b)
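
In Lean 3 terms transport is just eq.rec_on; a small sketch (the name is made up):

universes u v

-- transport: move a term of P a along an equality a = b
def transport {α : Sort u} (P : α → Sort v) {a b : α} (h : a = b) (pa : P a) : P b :=
eq.rec_on h pa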

Kevin Buzzard (Jun 15 2019 at 09:30):

But the key point is this: when you write a proof that is computable in lean, it is literally both a proof and a program

def X : 2 + 2 = 4 := rfl

#eval X -- #0

Kevin Buzzard (Jun 15 2019 at 09:41):

Wait, I am confused again. I thought every proof was computable in Lean, in the sense that you don't have to mark it noncomputable because it's a proof?

Alex J. Best (Jun 15 2019 at 09:41):

But the key point is this: when you write a proof that is computable in lean, it is literally both a proof and a program

I'm still amazed by how fast

import data.zmod.quadratic_reciprocity
#eval (zmodp.legendre_sym 1111 3119 (by sorry))

runs.

Mario Carneiro (Jun 15 2019 at 09:44):

The caveat is that lean doesn't run what we normally call proofs, i.e. elements of elements of Prop. These are erased in the VM, so they are trivially computable

Mario Carneiro (Jun 15 2019 at 09:45):

It is possible to build a theorem prover such that these proofs are executable, and actually lean will reduce proofs if you use #reduce, but there are some efficiency problems with doing this at large scale
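
A concrete sketch of that difference, assuming Lean 3: #eval runs the bytecode, where the proof has already been erased, while #reduce normalizes the proof term itself in the kernel.

theorem two_plus_two : 2 + 2 = 4 := rfl

#eval two_plus_two    -- the VM prints an erased placeholder, not a real value
#reduce two_plus_two  -- the kernel reduces the actual proof term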

Kevin Buzzard (Jun 15 2019 at 09:50):

What about my proof that the noncomputable inverse of a bijection was indeed an inverse?

Mario Carneiro (Jun 15 2019 at 10:14):

That compiles to a function if you didn't use any axioms

Mario Carneiro (Jun 15 2019 at 10:15):

Here's Diaconescu's proof reinterpreted with everything in Type:

open classical
variables (p : Type) {α : Type*} {β : Type*}

inductive eq' (a : α) : α → Type
| refl : eq' a
infix ` =' `:50 := eq'

def eq'.symm : ∀ {a b : α}, a =' b → b =' a
| _ _ (eq'.refl _) := (eq'.refl _)

def eq'.trans : ∀ {a b c : α}, a =' b → b =' c → a =' c
| _ _ _ (eq'.refl _) (eq'.refl _) := (eq'.refl _)

def eq'.congr_arg (f : α → β) : ∀ {a b : α}, a =' b → f a =' f b
| _ _ (eq'.refl a) := eq'.refl _

def eq'.cast : ∀ {a b : Type}, a =' b → a → b
| _ _ (eq'.refl a) := id

def U (x : Type) : Type := x =' unit ⊕ p
def V (x : Type) : Type := x =' empty ⊕ p

def exU : Σ x, U p x := ⟨unit, sum.inl (eq'.refl _)⟩
def exV : Σ x, V p x := ⟨empty, sum.inl (eq'.refl _)⟩

def u : Type := (exU p).fst
def v : Type := (exV p).fst

def u_def : U p (u p) := (exU _).snd
def v_def : V p (v p) := (exV _).snd

def not_uv_or_p : (u p =' v p → empty) ⊕ p :=
begin
  cases u_def p with hu hu,
  { cases v_def p with hv hv,
    { refine sum.inl (λ e, _),
      exact (hu.symm.trans (e.trans hv)).cast () },
    { exact sum.inr hv } },
  { exact sum.inr hu }
end

axiom funext' {β : α → Type*}
  {f₁ f₂ : Π x : α, β x} (h : ∀ x, f₁ x =' f₂ x) : f₁ =' f₂

noncomputable def p_implies_uv (hp : p) : u p =' v p :=
begin
  have : U p =' V p,
  { refine funext' (λ x, _),
    refine eq'.congr_arg (λ a, a ⊕ p) _,
    sorry /- provable by univalence, I'll ignore this -/ },
  suffices : ∀ (exU : Σ x, U p x) (exV : Σ x, V p x),
    exU.fst =' exV.fst,
  { apply this },
  refine @eq'.rec_on _ (U p) (λ W _,
    ∀ (exU : Σ x, U p x) (exV : Σ x, W x),
      exU.fst =' exV.fst) (V p) this _,
  assume (exU exV : Σ (x : Type), U p x),
  show exU.fst =' exV.fst,
  sorry -- We can't prove this. In the original proof
        -- exU and exV were proofs and we used proof irrel here,
        -- but now it's definitely false; if we reduce everything
        -- else (and don't revert exU and exV) we find that
        -- exU.fst is unit and exV.fst is empty, so this is false.
end

noncomputable def em : p ⊕ (p → empty) :=
sum.cases_on (not_uv_or_p p)
  (λ h, sum.inr (λ e, h (p_implies_uv p e)))
  sum.inl

Kevin Buzzard (Jun 15 2019 at 12:22):

So this discussion has revealed the fact that I don't really know what a "program" is. I want to define a program as a "computer file" (a lean file, an olean file, a range of memory addresses) which can be turned by a compiler into something which can be run by my CPU. Is that a reasonable definition? Is that Curry-Howard's definition?

I am also a bit confused about definitions. Are definitions also sometimes programs?

Here are two theorems and two definitions.

def D1 : ℕ := 2

theorem T1 : (2 : ℕ) + 2 = 4 := rfl

noncomputable def D2 {X Y : Type} {f : X → Y} (hf : function.surjective f) : Y → X :=
λ y, classical.some (hf y)

theorem T2 (X Y : Type) (f : X → Y) (hf : function.surjective f) (y : Y) : f (D2 hf y) = y :=
classical.some_spec (hf y)

Which of those things can be turned into "programs" in the sense of Curry and Howard?

Chris Hughes (Jun 15 2019 at 12:34):

I think I proved trunc (p ⊕ (p → empty)). This is good enough for decidability of all props? Is there any stuff here that doesn't work in HoTT?

import data.quot
open classical
variables (p : Type) {α : Type*} {β : Type*}

inductive eq' (a : α) : α → Type
| refl : eq' a
infix ` =' `:50 := eq'

def eq'.symm : ∀ {a b : α}, a =' b → b =' a
| _ _ (eq'.refl _) := (eq'.refl _)

def eq'.trans : ∀ {a b c : α}, a =' b → b =' c → a =' c
| _ _ _ (eq'.refl _) (eq'.refl _) := (eq'.refl _)

def eq'.congr_arg (f : α → β) : ∀ {a b : α}, a =' b → f a =' f b
| _ _ (eq'.refl a) := eq'.refl _

def eq'.cast : ∀ {a b : Type}, a =' b → a → b
| _ _ (eq'.refl a) := id

def U (x : Type) : Type := x =' unit ⊕ p
def V (x : Type) : Type := x =' empty ⊕ p

def exU : trunc (Σ x : Type, U p x) := trunc.mk ⟨unit, sum.inl (eq'.refl _)⟩
def exV : trunc (Σ x : Type, V p x) := trunc.mk ⟨empty, sum.inl (eq'.refl _)⟩

axiom choice' {ι : Type*} {α : ι → Type*} (f : ∀ i, trunc (α i)) : trunc (Π i, α i)

def not_uv_or_p : ∀ (u : Σ x : Type, U p x) (v : Σ x : Type, V p x), (trunc u.fst =' trunc v.fst → empty) ⊕ p :=
assume u v,
begin
  cases u.2 with hu hu,
  { cases v.2 with hv hv,
    { refine sum.inl (λ e, _),
      have : trunc unit =' trunc empty, from eq'.rec_on hu (eq'.rec_on hv e),
      have : trunc empty, from this.cast (trunc.mk ()),
      refine trunc.lift_on this id (assume a, by cases a) },
    { exact sum.inr hv } },
  { exact sum.inr hu }
end

axiom funext' {β : α → Type*}
  {f₁ f₂ : Π x : α, β x} (h : ∀ x, f₁ x =' f₂ x) : f₁ =' f₂

noncomputable def p_implies_uv (hp : p) (u : Σ x : Type, U p x) (v : Σ x : Type, V p x) : trunc u.fst =' trunc v.fst :=
begin
  have : U p =' V p,
  { refine funext' (λ x, _),
    refine eq'.congr_arg (λ a, a ⊕ p) _,
    sorry /- provable by univalence, I'll ignore this -/ },
  revert u v,
  refine eq'.rec_on this _,
  assume u v,
  sorry, /- Should be provable by univalence. Both types are inhabited -/
end

noncomputable def em : trunc (p ⊕ (p → empty)) :=
have fU : trunc (Π p : Type, Σ x : Type, U p x), from choice' exU,
have fV : trunc (Π p : Type, Σ x : Type, V p x), from choice' exV,
trunc.rec_on_subsingleton fU (assume fU, trunc.rec_on_subsingleton fV (assume fV,
  trunc.mk begin
    refine sum.rec_on (not_uv_or_p p (fU p) (fV p)) _ _,
    { assume h,
      exact sum.inr (assume hp : p, h (p_implies_uv p hp _ _)) },
    { exact sum.inl }
end))

Mario Carneiro (Jun 15 2019 at 12:37):

choice' is the actual axiom of choice

Chris Hughes (Jun 15 2019 at 12:40):

What's the weaker computable version?

Mario Carneiro (Jun 15 2019 at 12:40):

no trunc

Mario Carneiro (Jun 15 2019 at 12:40):

it's trivial

Mario Carneiro (Jun 15 2019 at 12:41):

People call it the axiom of choice sometimes but it's just moving sigmas around
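
A sketch of that trunc-free version (made-up name): it is an honest computable definition, just repackaging the data.

universes u v

-- no trunc anywhere: the "choice function" is simply read off the input
def sigma_ac {ι : Type u} {α : ι → Type v} (p : Π i, α i → Prop)
  (h : Π i, Σ' a : α i, p i a) : Σ' (f : Π i, α i), ∀ i, p i (f i) :=
⟨λ i, (h i).fst, λ i, (h i).snd⟩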

Mario Carneiro (Jun 15 2019 at 12:41):

The thing which is more like the real AC, which is equivalent in power to AC, puts a trunc on each side like you did

Chris Hughes (Jun 15 2019 at 12:42):

Without trunc it's the identity function. What do you mean exactly?

Mario Carneiro (Jun 15 2019 at 12:43):

Usually people state it as (∀ x, ∃ y, p x y) → ∃ f, ∀ x, p x (f x)
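
The same statement written out in Lean 3 and proved with classical.some / classical.some_spec (made-up name, just a sketch):

-- hypothesis and conclusion are bare existentials (Prop), so actually
-- producing the function f genuinely needs choice
theorem ac_exists {α β : Type} (p : α → β → Prop)
  (h : ∀ x, ∃ y, p x y) : ∃ f : α → β, ∀ x, p x (f x) :=
⟨λ x, classical.some (h x), λ x, classical.some_spec (h x)⟩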

Mario Carneiro (Jun 15 2019 at 12:43):

in your form it's particularly simple

Mario Carneiro (Jun 15 2019 at 12:44):

So this discussion has revealed the fact that I don't really know what a "program" is. I want to define a program as a "computer file" (a lean file, an olean file, a range of memory addresses) which can be turned by a compiler into something which can be run by my CPU. Is that a reasonable definition? Is that Curry-Howard's definition?

Yes, that's a reasonable definition. The Church-Turing thesis says that basically any reasonable definition of "program" lands on the exact same class of functions, which are the computable functions. A more traditional definition would use Turing machines or reduction of lambda terms

Chris Hughes (Jun 15 2019 at 12:46):

But my version is still computable

meta def choice2 {ι : Type*} {α : ι → Type*} (f : ∀ i, trunc (α i)) : trunc (Π i, α i) :=
trunc.mk (assume i, quot.unquot (f i))

Mario Carneiro (Jun 15 2019 at 12:46):

I am also a bit confused about definitions. Are definitions also sometimes programs?

When you write def in lean, it goes through that compiling bit and produces something that can be run by the CPU, i.e. a program

Mario Carneiro (Jun 15 2019 at 12:53):

the choice' isn't needed

noncomputable def em : trunc (p ⊕ (p → empty)) :=
trunc.rec_on_subsingleton (exU p) (assume fU,
  trunc.rec_on_subsingleton (exV p) (assume fV,
    trunc.mk begin
      refine sum.rec_on (not_uv_or_p p fU fV) _ _,
      { assume h,
        exact sum.inr (assume hp : p, h (p_implies_uv p hp _ _)) },
      { exact sum.inl }
end))

Mario Carneiro (Jun 15 2019 at 13:01):

The nice thing about the computability interpretation is that you can just "inline functions" to find out what happens and maybe end up proving something much stronger:

noncomputable def em : (p → empty) :=
begin
  let u : Σ x : Type, U p x := ⟨unit, sum.inl (eq'.refl _)⟩,
  let v : Σ x : Type, V p x := ⟨empty, sum.inl (eq'.refl _)⟩,
  have h : trunc u.fst =' trunc v.fst → empty,
  { intro e,
    have : trunc unit =' trunc empty, from e,
    have : trunc empty, from this.cast (trunc.mk ()),
    refine trunc.lift_on this id (assume a, by cases a) },
  exact assume hp : p, h (p_implies_uv p hp _ _),
end

Mario Carneiro (Jun 15 2019 at 13:16):

Okay, it's completely self contained now. Clearly something is odd

import data.quot
open classical
variables (p : Type) {α : Type*} {β : Type*}

inductive eq' (a : α) : α → Type
| refl : eq' a
infix ` =' `:50 := eq'

def eq'.symm : ∀ {a b : α}, a =' b → b =' a
| _ _ (eq'.refl _) := (eq'.refl _)

def eq'.trans : ∀ {a b c : α}, a =' b → b =' c → a =' c
| _ _ _ (eq'.refl _) (eq'.refl _) := (eq'.refl _)

def eq'.congr_arg (f : α → β) : ∀ {a b : α}, a =' b → f a =' f b
| _ _ (eq'.refl a) := eq'.refl _

def eq'.cast : ∀ {a b : Type}, a =' b → a → b
| _ _ (eq'.refl a) := id

axiom funext' {β : α → Type*}
  {f₁ f₂ : Π x : α, β x} (h : ∀ x, f₁ x =' f₂ x) : f₁ =' f₂

noncomputable def hm : empty :=
let U (x : Type) : Type := x =' unit in
let V (x : Type) : Type := x =' empty in
begin
  let u : Σ x : Type, U x := ⟨unit, eq'.refl _⟩,
  let v : Σ x : Type, V x := ⟨empty, eq'.refl _⟩,
  have : trunc u.fst =' trunc v.fst,
  { have : U =' V,
    { refine funext' (λ x, _),
      dsimp [U, V],
      sorry /- provable by univalence, I'll ignore this -/ },
    generalize : u = u', generalize : v = v',
    clear u v, revert u' v',
    refine eq'.rec_on this (λ (u v : Σ x, x =' unit), _),
    sorry, /- Should be provable by univalence. Both types are inhabited -/
  },
  have : trunc empty, from this.cast (trunc.mk ()),
  exact trunc.lift_on this id (assume a, by cases a)
end

Mario Carneiro (Jun 15 2019 at 13:24):

Aha, of course:

example (H : ∀ x, (x =' unit) =' (x =' empty)) : empty :=
((H unit).cast (eq'.refl _)).cast ()

Chris Hughes (Jun 15 2019 at 13:25):

So does that mean axiom univalence {α β : Sort*} : α ≃ β → α = β is inconsistent with Lean?

Chris Hughes (Jun 15 2019 at 13:25):

because funext' is provable in Lean

Mario Carneiro (Jun 15 2019 at 13:25):

the first sorry is false

Reid Barton (Jun 15 2019 at 13:25):

I don't understand how you intend to prove (x =' unit) =' (x =' empty)

Reid Barton (Jun 15 2019 at 13:25):

Yeah

Chris Hughes (Jun 15 2019 at 13:26):

I didn't check that one

Kevin Buzzard (Jun 15 2019 at 13:32):

I am also a bit confused about definitions. Are definitions also sometimes programs?

When you write def in lean, it goes through that compiling bit and produces something that can be run by the CPU, i.e. a program

So D1 and T1 (the definition and theorem which worked in vanilla Lean with no classical or anything) are turned into programs by the Lean executable, and D2 is probably not (because it's not computable), and I'm still a bit confused about T2.

Kevin Buzzard (Jun 15 2019 at 13:34):

Hey that univalence axiom looks really useful! Maybe someone should ask Leo to add it.

Mario Carneiro (Jun 15 2019 at 13:35):

Only D1 actually gets associated bytecode

Kevin Buzzard (Jun 15 2019 at 13:35):

I don't know what bytecode is. I want to know which ones become programs, or perhaps can become programs.

Mario Carneiro (Jun 15 2019 at 13:35):

that's CS speak for "the thing the computer runs"

Kevin Buzzard (Jun 15 2019 at 13:36):

TeX -- you can write a file, but it's not a program. Is bytecode always a program?

Mario Carneiro (Jun 15 2019 at 13:37):

bytecode is stuff that gets run by an interpreter. It's the equivalent of compilation for interpreted languages

Mario Carneiro (Jun 15 2019 at 13:37):

like python

Kevin Buzzard (Jun 15 2019 at 13:37):

@Chris Hughes I can make a term of type equiv nat int. If I can deduce nat = int and I have -1 : int, can I then make -1 : nat?

Mario Carneiro (Jun 15 2019 at 13:37):

no

Mario Carneiro (Jun 15 2019 at 13:37):

you get cast _ (-1) : nat

Kevin Buzzard (Jun 15 2019 at 13:38):

If I make two different equivs from nat to int, deduce nat = int twice, and use both deductions to move from int to nat, can I deduce cast _ (-1) : nat = cast _ (-1) : nat?

Kevin Buzzard (Jun 15 2019 at 13:38):

Is the _ a missing proof or missing data?

Mario Carneiro (Jun 15 2019 at 13:39):

missing proof, but proofs are data in HoTT

Mario Carneiro (Jun 15 2019 at 13:39):

in lean the proofs are defeq and so you would get in trouble at this point

Kevin Buzzard (Jun 15 2019 at 13:39):

So does that mean axiom univalence {α β : Sort*} : α ≃ β → α = β is inconsistent with Lean?

This was the question I was interested in. Is it inconsistent?

Mario Carneiro (Jun 15 2019 at 13:39):

no

Mario Carneiro (Jun 15 2019 at 13:40):

That's actually weak univalence; I think it's consistent

Kevin Buzzard (Jun 15 2019 at 13:42):

bytecode is stuff that gets run by an interpreter. It's the equivalent of compilation for interpreted languages

So the bytecode made with D1 is a program. And there's no bytecode made for T1 but if we weren't so proof-irrelevant then there could be a program made for T1. And there's no program for D2, and there's no program for T2 but that's for two reasons -- firstly because the VM forgets all proofs, and secondly because even if it didn't, it wouldn't be able to make the program because there's no program for D2. Have I finally got this straight?

Kevin Buzzard (Jun 15 2019 at 13:43):

In HoTT there could be a program for T1 but not for T2

Mario Carneiro (Jun 15 2019 at 13:43):

Yes. More precisely, it could try to make programs for D2 and T2, but somewhere in there there would be a call to the function choice()

Mario Carneiro (Jun 15 2019 at 13:43):

and that function has no definition

Mario Carneiro (Jun 15 2019 at 13:43):

like we have no clue how to write a function that works

Kevin Buzzard (Jun 15 2019 at 13:43):

nat.zero has no definition.

Mario Carneiro (Jun 15 2019 at 13:43):

it is defined to return 0

Mario Carneiro (Jun 15 2019 at 13:44):

in the bytecode

Mario Carneiro (Jun 15 2019 at 13:44):

we know how to implement inductive types, by using tagged unions

Kevin Buzzard (Jun 15 2019 at 13:44):

I think I would like to see some simple examples of what bytecode looks like. How do I do this? I've inspected it in the past with a text editor and that didn't end well.

Mario Carneiro (Jun 15 2019 at 13:45):

set_option trace.compiler.optimize_bytecode true

Mario Carneiro (Jun 15 2019 at 13:45):

don't ask why the name of the option is so not obvious

Kevin Buzzard (Jun 15 2019 at 13:46):

def D1 : ℕ := 2
/-
[compiler.optimize_bytecode]  D1 0
0: scnstr #2
1: ret
-/

and is there somewhere which gives me some idea of what these commands like scnstr mean?

Mario Carneiro (Jun 15 2019 at 13:48):

no, you have to guess but I can take a stab

Mario Carneiro (Jun 15 2019 at 13:49):

I think it is "simple constructor #2" which builds a simple atom (a nat) with value 2

Mario Carneiro (Jun 15 2019 at 13:49):

ret is return

Kevin Buzzard (Jun 15 2019 at 13:52):

I'm stuck on this one:

set_option trace.compiler.optimize_bytecode true

inductive A
| a : A
| b : A

open A

def D : list A := [a, b, a]
/-
[compiler.optimize_bytecode]  D 0
0: scnstr #0
1: scnstr #1
2: scnstr #0
3: scnstr #0
4: cnstr #1 2
5: cnstr #1 2
6: cnstr #1 2
7: ret
-/

Kevin Buzzard (Jun 15 2019 at 13:52):

Hey, wait a minute! This code has line numbers! It's like BBC Basic! It's like the old days! Hey this is a program, right?

Mario Carneiro (Jun 15 2019 at 13:57):

this is a very low level program indeed

Mario Carneiro (Jun 15 2019 at 13:58):

and it has line numbers because there is a goto

Kevin Buzzard (Jun 15 2019 at 14:01):

\me remembers the old days

Kevin Buzzard (Jun 15 2019 at 14:01):

darn it, I'm sure some variant of that works

Kenny Lau (Jun 15 2019 at 14:02):

set_option trace.compiler.optimize_bytecode true

inductive A
| a : A
| b : A

open A

def D1 : list A := []
/-
[compiler.optimize_bytecode]  D1 0
0: scnstr #0
1: ret
-/

def D2 : list A := [a]
/-
[compiler.optimize_bytecode]  D2 0
0: scnstr #0
1: scnstr #0
2: cnstr #1 2
3: ret
-/

def D3 : list A := [b, a]
/-
[compiler.optimize_bytecode]  D3 0
0: scnstr #1
1: scnstr #0
2: scnstr #0
3: cnstr #1 2
4: cnstr #1 2
5: ret
-/

def D4 : list A := [a, b, a]
/-
[compiler.optimize_bytecode]  D4 0
0: scnstr #0
1: scnstr #1
2: scnstr #0
3: scnstr #0
4: cnstr #1 2
5: cnstr #1 2
6: cnstr #1 2
7: ret
-/

Kevin Buzzard (Jun 15 2019 at 14:03):

That makes it much clearer.

Kevin Buzzard (Jun 15 2019 at 14:03):

oh wait actually it does make it clearer.

Kevin Buzzard (Jun 15 2019 at 14:04):

def D : list A := [b, b, a]
/-
[compiler.optimize_bytecode]  D 0
0: scnstr #1
1: scnstr #1
2: scnstr #0
3: scnstr #0
4: cnstr #1 2
5: cnstr #1 2
6: cnstr #1 2
7: ret
-/

The code is backwards or something.

Mario Carneiro (Jun 15 2019 at 14:04):

It works like a stack machine. Here's what the stack looks like, pushing items from the right:

   []
0: [#0]
1: [#0, #1]
2: [#0, #1, #0]
3: [#0, #1, #0, #0]
4: [#0, #1, #1 #0 #0]
5: [#0, #1 #1 (#1 #0 #0)]
6: [#1 #0 (#1 #1 (#1 #0 #0))]

the final line returns the value on the stack, which is #1 #0 (#1 #1 (#1 #0 #0))

Mario Carneiro (Jun 15 2019 at 14:05):

if you squint, you can read #0 and #1 as the constructors of both list and A, and see cons a (cons b (cons a nil))
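
Spelling that out (same inductive A as in Kevin's snippet above):

inductive A
| a : A
| b : A

open A

-- reading #0/#1 as the constructor indices of list (nil/cons) and of A (a/b),
-- the bytecode above builds exactly this term
def D' : list A := list.cons a (list.cons b (list.cons a list.nil))

example : D' = [a, b, a] := rfl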

Kevin Buzzard (Jun 15 2019 at 14:05):

aah of course I'd forgotten about list.nil

Kevin Buzzard (Jun 15 2019 at 14:09):

So cons and list and stuff are all being turned into bytecode. And the point is that choice cannot be turned into bytecode. So here is a proof

noncomputable def g {X Y : Type} {f : X → Y} (hf : function.surjective f) (y : Y) : X := classical.some (hf y)

def g_prop {X Y : Type} {f : X  Y} (hf : function.surjective f) (y : Y) := classical.some_spec (hf y)

theorem has_right_inverse_of_surjective (X Y : Type) (f : X → Y) (hf : function.surjective f) :
  ∃ g : Y → X, ∀ y : Y, f (g y) = y := ⟨g hf, g_prop hf⟩

which cannot be made into a program and which is hence a counterexample to a completely naive interpretation of the Curry-Howard correspondence that "proofs equal programs".

Kenny Lau (Jun 15 2019 at 14:12):

set_option trace.compiler.optimize_bytecode true

inductive A : Type | a | b (c : A)

def E : A := A.b (A.b A.a)
def F : ℕ := nat.succ (nat.succ nat.zero)
def G : ℕ := 2

/-
sandbox3.lean:5:0: information trace output
[compiler.optimize_bytecode]  E 0
0: scnstr #0
1: cnstr #1 1
2: cnstr #1 1
3: ret

sandbox3.lean:6:0: information trace output
[compiler.optimize_bytecode]  F 0
0: scnstr #1
1: cfun nat.succ
2: ret

sandbox3.lean:7:0: information trace output
[compiler.optimize_bytecode]  G 0
0: scnstr #2
1: ret
-/

Kenny Lau (Jun 15 2019 at 14:12):

why are F and G different?

Mario Carneiro (Jun 15 2019 at 14:13):

for some reason nat.succ was not optimized away in F

Kenny Lau (Jun 15 2019 at 14:13):

why are all of them different?

Mario Carneiro (Jun 15 2019 at 14:14):

Obviously the VM does not represent natural numbers as unary, so succ (succ zero) and 2 are not the same thing

Mario Carneiro (Jun 15 2019 at 14:14):

2 is stored as a 2 in memory

Kevin Buzzard (Jun 15 2019 at 14:14):

This trace.compiler.optimize_bytecode should win set_option option of the week.

Mario Carneiro (Jun 15 2019 at 14:14):

lol

Johan Commelin (Jun 15 2019 at 14:15):

\me remembers the old days

In the old days it was /me...

Mario Carneiro (Jun 15 2019 at 14:15):

/me thinks it still is

Kenny Lau (Jun 15 2019 at 14:15):

why are E and F different at all?

Kevin Buzzard (Jun 15 2019 at 14:16):

def N := 100000000
/-
[compiler.optimize_bytecode]  N 0
0: scnstr #100000000
1: ret
-/
def M := 10000000000
/-
[compiler.optimize_bytecode]  M 0
0: num 10000000000
1: ret
-/

Mario Carneiro (Jun 15 2019 at 14:16):

num is for really big numbers that don't fit in the size of a small nat

Mario Carneiro (Jun 15 2019 at 14:16):

which is something like 2^30

Johan Commelin (Jun 15 2019 at 14:16):

/me thinks @Mario Carneiro is from the old days...

Mario Carneiro (Jun 15 2019 at 14:17):

/me doesn't know how to use status messages properly

Johan Commelin (Jun 15 2019 at 14:18):

In the old days one used .plan

Kevin Buzzard (Jun 15 2019 at 14:18):

#help options

So much still to learn!

Kevin Buzzard (Jun 15 2019 at 14:18):

I remember disabling finger by creating a pipe called .plan somehow.

Kevin Buzzard (Jun 15 2019 at 14:19):

In the old days you could see if your friend had read your email to them by just fingering them.

Mario Carneiro (Jun 15 2019 at 14:19):

@Kenny Lau E and F are different because lean has special support for the nat inductive type

Kevin Buzzard (Jun 15 2019 at 14:20):

Can we see the state of the stack machine easily?

Mario Carneiro (Jun 15 2019 at 14:20):

One of the ideas in 3.5c is to make such overrides more obvious

Kevin Buzzard (Jun 15 2019 at 14:20):

[I mean, in the middle of some calculation]

Mario Carneiro (Jun 15 2019 at 14:21):

I think there is a debugger but I've never used it

Mario Carneiro (Jun 15 2019 at 14:21):

it's written in lean

Kevin Buzzard (Jun 15 2019 at 14:21):

I thought you wrote some code which somehow "read" olean files? Oh -- olean files are yet another thing somehow?

Kevin Buzzard (Jun 15 2019 at 14:21):

We're looking at the bytecode expressed in a far more sane way.

Mario Carneiro (Jun 15 2019 at 14:22):

this is all actually somewhat unrelated to olean files

Mario Carneiro (Jun 15 2019 at 14:23):

olean files are basically a compressed representation of everything that goes in a lean file and all the stuff that is generated as a result of running it

Mario Carneiro (Jun 15 2019 at 14:23):

the idea is that you can read an olean file and it will be just as good as reading the lean file but without having to run all the tactics

Olli (Jun 15 2019 at 14:24):

Wikipedia says that Lambda-mu calculus has classical proofs as programs, where AoC and double-negation elimination map to continuations

Mario Carneiro (Jun 15 2019 at 14:24):

right, that's the "a bit weird" part I mentioned. Double negation elimination has a computational implementation

Olli (Jun 15 2019 at 14:33):

I know continuations are useful for implementing control flow / concurrency, so I suppose if a constructive proof describes an algorithm, then a classical proof is one that contains a step that says "yes, let me get back to you on that", in which case you could think of sorry as a type of continuation

Mario Carneiro (Jun 15 2019 at 14:42):

I like to think of call/cc for LEM via oracles and time travel. You ask the oracle "Is RH true?" and it says NO, and gives you a token that says RH -> false. You celebrate the resolution of the long standing problem, until one day when someone finds that RH is actually true. But when you confront the oracle with your proof of falsity, it takes your proof of RH, destroys the universe and rewinds to the original question, and when asked "Is RH true?" it says YES and gives you a bloodstained proof of RH, and you celebrate the resolution of the long standing problem.

Kevin Buzzard (Jun 15 2019 at 15:30):

What about choice()?

Kevin Buzzard (Jun 15 2019 at 17:27):

https://xenaproject.wordpress.com/2019/06/15/proofs-are-not-programs/

Hopefully I haven't garbled too much of this conversation.

Johan Commelin (Jun 15 2019 at 19:06):

@Kevin Buzzard I think this is possible because of computable code:

@[simp] lemma witt_add_zero : witt_add p 0 = X (tt, 0) + X (ff, 0) :=
dec_trivial

@[simp] lemma witt_mul_zero : witt_mul p 0 = X (tt, 0) * X (ff, 0) :=
dec_trivial

Johan Commelin (Jun 15 2019 at 19:06):

Lean is computing the addition and multiplication on the zeroth truncation of Witt vectors

Kevin Buzzard (Jun 15 2019 at 19:10):

Would rfl work?

Johan Commelin (Jun 15 2019 at 19:12):

Let me try.

Johan Commelin (Jun 15 2019 at 19:12):

type mismatch, term
  rfl
has type
  ?m_2 = ?m_2
but is expected to have type
  witt_add p 0 = X (tt, 0) + X (ff, 0)

Johan Commelin (Jun 15 2019 at 19:13):

Nope (-;

Johan Commelin (Jun 15 2019 at 19:13):

And that's not surprising. Because it isn't even close to being the definition of witt_add.

Johan Commelin (Jun 15 2019 at 19:14):

@Mario Carneiro This is one reason (quite small, I agree) to keep polynomials computable.

Chris Hughes (Jul 13 2019 at 21:09):

I answered my own question about computable choice. The definition I suggested for computable choice is very much inconsistent. The definition was

meta def choice {α : Sort*} {β : α → Sort*} : (Π a, trunc (β a)) → trunc (Π a, β a) :=
λ f, trunc.mk (λ a, quot.unquot (f a))

I claimed that this was a function, but it isn't. Using choice on `(id : trunc bool → trunc bool)` gives nonsense

import data.quot logic.basic

meta def choice {α : Sort*} {β : α → Sort*} : (Π a, trunc (β a)) → trunc (Π a, β a) :=
λ f, trunc.mk (λ a, quot.unquot (f a))

def decidable_true (choice : Π {α : Sort*} {β : α → Sort*}
  (f : Π a, trunc (β a)), trunc (Π a, β a)) : decidable true :=
trunc.rec_on_subsingleton (choice (id : trunc bool → trunc bool))
  (λ f, decidable_of_iff (f (trunc.mk tt) = f (trunc.mk ff))
    (by rw [subsingleton.elim (trunc.mk tt) (trunc.mk ff)]; exact eq_self_iff_true _))

#eval decidable_true @choice --ff
