Zulip Chat Archive

Stream: general

Topic: POPLmark challenge


Tim Daly (Jan 29 2020 at 20:11):

(https://blog.sigplan.org/2020/01/29/mechanized-proofs-for-pl-past-present-and-future) The POPLmark challenge is about Mechanized Proofs for Programming Languages. Lean is not just about mechanizing formal mathematics. It is also about mechanizing computational mathematics (at least from my opinionated viewpoint). So besides pushing into the Math dept., we need to push into CS departments.

Tim Daly (Jan 29 2020 at 20:13):

CMU has Frank Pfenning, Bob Harper, and Karl Crary, all in CS and all doing related work, just not in Lean. What would it take to "Buzzard" the CS community?

Johan Commelin (Jan 29 2020 at 20:14):

It would take a buzzard...

Tim Daly (Jan 29 2020 at 20:22):

We have mathlib for the formal crowd. Perhaps we can import the Standard ML kinds of support into a proglib, which would contain definitions, axioms, and theorems related to code.

Tim Daly (Jan 29 2020 at 20:28):

From the talk... The paper abstract reads:
How close are we to a world where every paper on programming languages is accompanied by an electronic appendix with machine-checked proofs?
We propose an initial set of benchmarks for measuring progress in this area. Based on the metatheory of System $F_{<:}$, a typed lambda-calculus with second-order polymorphism, subtyping, and records, these benchmarks embody many aspects of programming languages that are challenging to formalize: variable binding at both the term and type levels, syntactic forms with variable numbers of components (including binders), and proofs demanding complex induction principles. We hope that these benchmarks will help clarify the current state of the art, provide a basis for comparing competing technologies, and motivate further research.
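
For a concrete feel of the "variable binding at the type level" that the abstract mentions, here is a hypothetical rendering of the F<: type language in Lean (my names; just a sketch):

```lean
-- hypothetical sketch: the type language of System F<: as a Lean inductive,
-- using de Bruijn indices for type variables
inductive fsub_ty : Type
| top   : fsub_ty                      -- ⊤, the maximal type
| tvar  : ℕ → fsub_ty                  -- type variable (de Bruijn index)
| arrow : fsub_ty → fsub_ty → fsub_ty  -- S → T
| all   : fsub_ty → fsub_ty → fsub_ty  -- ∀ α <: S, T (binds α in T)
```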

Tim Daly (Jan 29 2020 at 21:05):

One of the likely side-effects of a proglib (vs mathlib) would be developing program generation from Lean proofs. This isn't of much interest to the formal math approach but would be to the computational math crowd (aka me :-) )

Patrick Massot (Jan 29 2020 at 22:18):

CMU has Frank Pfenning, Bob Harper, and Karl Crary, all in CS and all doing related work, just not in Lean. What would it take to "Buzzard" the CS community?

I don't want to minimize Kevin's work, but it looks like this forum gives you a very distorted view of the reality of maths departments. To finite order, nothing changed. Any effect is beyond what Taylor expansions can see. The number of mathematicians using a proof assistant may have been multiplied by 20 in the last two years, but it is still very very very close to zero in proportion.

Tim Daly (Jan 29 2020 at 22:31):

I did a survey of the connection between ITP and Computer Algebra. I can only name one person (James Davenport) who appears in both bibliographies. So I know that this is at the "not even noise" level. But I have a "30 Year Horizon" view and I believe the connection must arrive in the long term. In particular, the subset of programming that involves computer algebra seems most likely to be automated since the specifications of the algorithms are already (reasonably well) known.

I'm spending the evening digging up information on Twelf and Standard ML, looking for something that could serve as a kernel of "proglib" in Lean. If Lean could produce ML programs from proofs, that would be a major advance for Lean.

The fact that nobody knows how to do this is what makes it research.

Simon Cruanes (Jan 29 2020 at 22:33):

By producing ML programs from proofs, are you referring to something similar to Coq's extraction to OCaml?

Simon Cruanes (Jan 29 2020 at 22:33):

(extracting programs from proof assistants seems relatively common to me; Isabelle/HOL even has components to extract imperative programs)

Tim Daly (Jan 29 2020 at 22:34):

Yes. Lean should be able to extract a program from a proof. I'd rather it could extract SPAD (Axiom's language) programs, since SPAD is dependently typed, but ML might be easier, as some of the other systems could provide validation.

Tim Daly (Jan 29 2020 at 22:37):

One interesting thought would be to try to build 'proglib' so that everything has a programming language representation. Sort of "designed to be executed". I'm not sure yet how this can be done, of course.

Reid Barton (Jan 29 2020 at 22:38):

Lean is already a programming language, so you can take the program extraction to be the identity.
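
For instance (Lean 3):

```lean
-- no extraction step needed: a Lean definition is already a program
def double (n : ℕ) : ℕ := n + n

#eval double 21  -- 42
```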

Tim Daly (Jan 29 2020 at 22:39):

I'm not sure what the execution semantics are for the proofs I've read.

Reid Barton (Jan 29 2020 at 22:40):

In Lean 4, this will sometimes even be a sensible thing to do. But a lot depends on the original proof. For example, if it is nonconstructive, then it has no computational meaning.
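
For example (Lean 3; the names are throwaway):

```lean
-- a constructive definition carries computational content:
def w : ℕ := 2 + 2

#eval w  -- 4

-- a classical witness does not; choice generates no executable code:
noncomputable def w' : ℕ := classical.some (⟨0, rfl⟩ : ∃ n : ℕ, n = n)

-- #eval w'  -- fails: no code is generated for 'w''
```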

Reid Barton (Jan 29 2020 at 22:40):

Because those proofs were not intended to be executed, most likely

Simon Cruanes (Jan 29 2020 at 22:41):

If you want to write verified programs, why consider them as proofs and not as normal values? This way you can write programs in a readable way, and prove properties about them separately (which is done in CFML, CompCert, etc. afaik)

Tim Daly (Jan 29 2020 at 22:42):

My thought (and what I'm using as a basis for reading) is that I'm looking for a translation from program -> Lean -> program that is 1-1 (or some near approximation). I've found some ML and Twelf papers, so I'm reading.

Tim Daly (Jan 29 2020 at 22:55):

@Simon Cruanes I have programs written in a reasonable way (e.g. GCD, Groebner, etc) and specifications for them. But there needs to be some deeper, automated connection from these programs to Lean and back so they can be trusted. Otherwise it feels like "hand waving" (at least to me).

Tim Daly (Jan 29 2020 at 23:59):

There is an interesting split I've come to recognize between the ITP approach and the Type-Theory approach to things, despite the fact that they both seem to use the same judgments. ITP systems tend toward "tactics" whereas the Type-Theory approach tends to use unification.

Simon Cruanes (Jan 30 2020 at 01:16):

In my (limited) view of the field, it seems like most big programs that are verified in ITPs are based on the separation of the program to prove from the properties of the program (even CompCert, and of course seL4, CakeML, etc.).
I'm not aware of any actual big program written as a proof following the CH correspondence. I'd love to be proven wrong though!

Tim Daly (Jan 30 2020 at 01:27):

Axiom is a collection of mathematical algorithms. Rather than prove a large program, my effort is to prove the individual algorithms. So, for instance, GCD has a specification and an implementation. Several interesting questions arise. Axiom actually has 22 different GCD algorithms (e.g. GCD for polynomials, Nats, etc.). How can these algorithms be proven correct? And how can they be proven correct in such a way that Lean can verify the proof, allowing Lean to use Axiom as an Oracle for a GCD computation?
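
To make the split concrete, here is a minimal Lean 3 sketch of an implementation and a separately stated specification (my names; the proof is omitted):

```lean
-- the program: Euclid's algorithm as ordinary Lean code
def my_gcd : ℕ → ℕ → ℕ
| 0       b := b
| (a + 1) b :=
  have b % (a + 1) < a + 1, from nat.mod_lt b (nat.succ_pos a),
  my_gcd (b % (a + 1)) (a + 1)

#eval my_gcd 12 18  -- 6

-- the specification, stated separately (proof omitted in this sketch)
theorem my_gcd_dvd (a b : ℕ) : my_gcd a b ∣ a ∧ my_gcd a b ∣ b :=
sorry
```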

Tim Daly (Jan 30 2020 at 01:32):

Nat in Axiom is called NonNegativeInteger (NNI). The definitions, axioms, and theorems available to the NNI Domain (an Axiom term for an implementation) are all inherited. The NNI Domain also includes a "carrier" (called a REP) which specifies how elements are implemented. So, given group theory axioms (e.g. associativity), a representation, a specification, and an algorithm... use Lean's group theory and Nat, as well as pre- and post-conditions and loop invariants to prove the GCD correct in Lean.
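
As a hypothetical sketch, an Axiom-style Domain with a carrier might be rendered as a Lean structure like this (the name Monoid' is invented to avoid clashing with mathlib):

```lean
-- bundling a carrier (the REP) with operations and their axioms
structure Monoid' :=
(REP       : Type)
(one       : REP)
(mul       : REP → REP → REP)
(mul_assoc : ∀ a b c : REP, mul (mul a b) c = mul a (mul b c))
(one_mul   : ∀ a : REP, mul one a = a)
(mul_one   : ∀ a : REP, mul a one = a)
```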

Simon Cruanes (Jan 30 2020 at 01:33):

So what language is the algorithm expressed in?

Tim Daly (Jan 30 2020 at 01:34):

SPAD, a dependently typed language built on Common Lisp. (See https://github.com/daly/PDFS/blob/master/bookvol0.pdf)

Tim Daly (Jan 30 2020 at 01:36):

In an ideal world a GCD algorithm would be transformed into a Lean proof object, proven, and then re-generated from the proof object. The proof then becomes a certificate you can hand to a proof-checker.

Simon Cruanes (Jan 30 2020 at 01:39):

Oh damn, I didn't realize it was that Axiom, the CAS… :o

Tim Daly (Jan 30 2020 at 01:40):

There are a lot of "Axiom" things around, including the new Axiom that is working on space ships. But, yes, this is the Axiom originally from IBM Research and I'm one of the original people to blame. :-)

Simon Cruanes (Jan 30 2020 at 01:43):

There are a lot of "Axiom" things around

well of course, that's why we have the Axiom of choice!

Tim Daly (Jan 30 2020 at 01:43):

2 points on that one. I am SO stealing that.

Tim Daly (Jan 30 2020 at 01:53):

Axiom's byline is "The 30 Year Horizon" but I think you just came up with a "near miss".

Simon Cruanes (Jan 30 2020 at 01:56):

you can display a catchphrase randomly sampled on the website :)

Anton Lorenzen (Jan 30 2020 at 12:30):

CMU has Frank Pfenning, Bob Harper, and Karl Crary, all in CS and all doing related work, just not in Lean. What would it take to "Buzzard" the CS community?

I think since Lean already has decent category theory support, it would make sense to formalize the categorical view of programming languages (at least Haskell folks seem to be interested in that). I have an 80%-finished experiment showing that System Fω types form a category on my hard drive, but to really draw attention it would probably need to formalize something like lenses, which requires both type theory work (like type inference, so one can avoid invoking the axioms everywhere) and more category theory than mathlib currently has (mostly profunctors and (co)ends)
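
As a taste of the starting point, mathlib already packages the category of types and functions; a minimal check (Lean 3):

```lean
import category_theory.types
open category_theory

-- mathlib's category of types: objects are Lean types, morphisms are functions
example : large_category Type := by apply_instance

-- composition of morphisms is ordinary function composition
example (X Y Z : Type) (f : X ⟶ Y) (g : Y ⟶ Z) : X ⟶ Z := f ≫ g
```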

Mario Carneiro (Jan 30 2020 at 12:45):

I'm not sure the categorical view espoused by Haskellites is actually coherent, compelling though it is. See http://math.andrej.com/2016/08/06/hask-is-not-a-category/

Anton Lorenzen (Jan 30 2020 at 12:59):

I haven't finished my project yet, so I can't be 100% sure that it works, but I would be surprised if it didn't, mostly because System Fω isn't Haskell and doesn't have laziness or infinite loops. I agree that most Haskellers use category theory a bit loosely, but then it actually makes some sense to assume that a Haskell program doesn't contain bottom, since if it did it wouldn't run/finish :)

Mario Carneiro (Jan 30 2020 at 13:05):

Haskell programs often contain bottom in places other than where the program actually goes. It seems difficult to me to deliver a denotational semantics for lazy Haskell programs that does not contain bottom

Mario Carneiro (Jan 30 2020 at 13:06):

For example you can define prime :: [Integer] and then show (take 10 prime) is just fine but show prime is not

Mario Carneiro (Jan 30 2020 at 13:07):

You can't cleanly separate the good programs from bad without solving the halting problem

Anton Lorenzen (Jan 30 2020 at 13:13):

Yeah, that's true. And even when you make extra sure to only write halting functions you still have stuff like 'head' in the Prelude :( But still, if the goal is to "buzzard" Haskellers, category theory could be a good choice.

Tim Daly (Jan 30 2020 at 21:42):

A reasonably sound start for a 'proglib' would be to look at The (Revised) Definition of Standard ML (http://sml-family.org/sml97-defn.pdf). Constructed properly, it should be possible to write in ML, grind it into a proof expression, and output ML. Since ML has a full formal definition, this should cover a reasonable range of "proofs with execution semantics".
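
As a hypothetical sketch (my names, and nowhere near the full Definition), a 'proglib' deep embedding of a tiny ML fragment might begin like this:

```lean
-- a deep embedding of a tiny ML fragment as Lean inductives
inductive ml_ty : Type
| nat   : ml_ty
| arrow : ml_ty → ml_ty → ml_ty

inductive ml_term : Type
| var : ℕ → ml_term                -- de Bruijn index
| lit : ℕ → ml_term
| lam : ml_ty → ml_term → ml_term  -- fn x : τ => e
| app : ml_term → ml_term → ml_term
```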

Tim Carstens (Jan 30 2020 at 22:14):

Heh I was doing CS (separation logic) in Lean 2 ... then Lean 3

Tim Carstens (Jan 30 2020 at 22:14):

Lean was awesome for it

Tim Daly (Jan 30 2020 at 22:15):

Is your source code available?

Tim Carstens (Jan 30 2020 at 22:16):

No; it was a commercial investment by a client

Tim Carstens (Jan 30 2020 at 22:16):

The AST for C++11 was epic

Tim Carstens (Jan 30 2020 at 22:16):

Lean 2 took 45 minutes to convince itself the mutual inductive was :+1: but everything else was instantaneous :joy:

Tim Carstens (Jan 30 2020 at 22:17):

My buddy got so tired of the checking times for the AST def that he wrote a DSL and a tool that converted the DSL to lean syntax - he called it VasoLean

Tim Carstens (Jan 30 2020 at 22:20):

http://adam.chlipala.net/frap/

Tim Daly (Jan 30 2020 at 22:21):

I'm attracted to ML as an initial basis for a 'proglib' for several reasons. (1) it has an ITP history, (2) it has a semantics definition, (3) it has been used as a programming language for other things, (4) it is sort-of implemented in other ITP systems and OCaml.

My problem with it is that while functions are first class, types are not. Axiom has first class dependent types so it isn't quite the full basis I need but it is pretty close and, honestly, will keep me busy for a long time anyway :-)

Tim Daly (Jan 30 2020 at 22:25):

I am currently (re)-reading the Definition of Standard ML with an eye toward figuring out how to write Lean code. Clearly this is going to take a while.

Tim Carstens (Jan 30 2020 at 22:49):

Imho, there’s two big schools of thought for applying mechanized logic to software engineering

Tim Carstens (Jan 30 2020 at 22:50):

Naive way: just write your code in Lean or Coq and extract to ocaml or Haskell or C or whatever

Tim Carstens (Jan 30 2020 at 22:50):

Imho that’s a weak idea, basically just saying “the problem with Ada was that it wasn’t pure functional” - just offering yet another programming language

Tim Carstens (Jan 30 2020 at 22:51):

Smart way: use your logic as a logic, build models about other languages, use those models to do proofs. Flexible, language agnostic, extensible, modular, just overall faaaaaar more powerful

Tim Carstens (Jan 30 2020 at 22:52):

Tl;dr: don’t compete with small fish like Dafny and SPARK Ada. Go for the jugular: let’s eradicate prose specs. Compete with PDF!!!

Tim Carstens (Jan 30 2020 at 22:53):

Offer Lean as a foundation for all comp sci

Tim Carstens (Jan 30 2020 at 22:54):

For this purpose, foundational axioms & other stuff that makes the math ppl hard to sway are more or less bike-shedding

Tim Daly (Jan 30 2020 at 23:26):

I'm only going after specification and proofs of mathematical algorithms. They already work but are not yet proven in Lean, so Lean cannot use them as Oracles. So 'proglib' would not be a naive approach, as the algorithms already exist. Nor would it be the Smart way, as the goal is not "all" programming languages, just one that is a respected implementation of computational mathematics. 'Proglib' would provide a way to communicate soundly and with execution semantics (lacking in a lot of proofs I've looked at so far).

Axiom is the result of an estimated 42 million dollars and 300+ person-years of work, some of it as PhD research. We don't have the funding or organization to repeat the effort from scratch. However, putting Axiom on a formal footing is possible, but it requires automation and intimate interaction between Lean and Axiom. It is not a question of (naively) accepting algorithms generated from Lean. It is a question of constructing a sound basis for cooperation.

Tim Carstens (Jan 30 2020 at 23:32):

Agree on that

Tim Carstens (Jan 30 2020 at 23:33):

I wasn’t replying directly to that idea, but rather, expressing my opinion on the big Q of what the right way is to do formal methods + comp sci in dependent type theory

Tim Carstens (Jan 30 2020 at 23:34):

I’m excited about the idea of having a popular, widely accepted meta logic for CS stuff - think it’ll be a huge boon for the field

Tim Carstens (Jan 30 2020 at 23:34):

There’s a battle for mindshare right now among the mechanized logics

Tim Carstens (Jan 30 2020 at 23:36):

It’d be a shame for pure math to converge on one system and for CS another - right now the CS ppl in USA kinda hate pure math but I think anyone who thinks the two fields are headed in separate directions is stuck in the past

Tim Carstens (Jan 30 2020 at 23:36):

Sadly I heard a lot of “lean for math, Coq for CS” talk at POPL

Tim Carstens (Jan 30 2020 at 23:37):

Disputes about foundations and stuff - legit I’m sure to the experts in logic, but I’ve done pure math in both envs and tbh I never noticed the difference

Tim Carstens (Jan 30 2020 at 23:38):

Maybe some stuff is super sensitive to those issues but I don’t personally know an example

Kevin Buzzard (Jan 30 2020 at 23:48):

Sadly I heard a lot of “lean for math, Coq for CS” talk at POPL

I am by no means an expert in this sort of thing, but I know for sure that Leo is targeting computer scientists with Lean 4.

Tim Carstens (Jan 31 2020 at 00:17):

That’s consistent with the conversations I’ve had with him in the past

Tim Carstens (Jan 31 2020 at 00:17):

And I am definitely a fan of that - I absolutely love Lean for a variety of reasons

Tim Carstens (Jan 31 2020 at 00:17):

Lots of tiny little decisions made right; adds up to a very nice experience imho

Tim Daly (Jan 31 2020 at 00:21):

The computer scientist camp (of which I'm one) tends to do Type Checking, Unification (à la Prolog), and forward/backward typing. There is the idea of "judgments-as-types", which I don't see in the ITP literature. Ideas are expressed as code.

Having spent the last few years wandering on the ITP side, I see mathematical notation. (I really wish @Jeremy Avigad's "natural language" idea would catch on, as it is much more readable.) I find the notation a really steep hill to climb (despite having a math degree) as it seems to be "write only": if you understand what the statement says, then you understand it. But until you do, you don't.
Plus there isn't nearly as much emphasis on Types, except the constant bike-shedding on Universes.

There is also a dependence on "funky names" in Lean. If you need more than one word for a function name, there should be an inheritance hierarchy, as usually found in programming. I've been paid to program in 60 languages and I'm still struggling with Lean syntax.

Tim Daly (Jan 31 2020 at 00:23):

My usual approach to a new programming language is to write a chess playing program (simple rules, simple alpha-beta tree search, simple I/O) but I'm not sure where to begin in Lean.

Tim Daly (Jan 31 2020 at 00:30):

I am looking forward to Lean 4 as I understand that the kernel will be small and the rest of the language is in Lean. That gives me the opportunity to re-implement the kernel in Lisp and have Lean "be native". That assumes, of course, that there is some natural-language documentation of the kernel. I've read the Lean 3 source code and have very little idea what is going on, what rules are implemented, or which data structures correspond to which concepts (e.g. what is the theorem data structure, the proof data structure, the judgment data structure, and all of the other things you'd expect to find in the kernel).
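
For concreteness, the kind of thing I would expect to need, sketched (hypothetically, and much simplified) in Lean itself:

```lean
-- the real Lean 3 kernel, in C++, also has metavariables, local constants,
-- lets, and macros; this is only the shape of the idea
inductive kexpr : Type
| var   : ℕ → kexpr                       -- bound variable (de Bruijn index)
| sort  : ℕ → kexpr                       -- universe, simplified to a ℕ level
| const : string → kexpr                  -- reference to a declaration
| app   : kexpr → kexpr → kexpr
| lam   : string → kexpr → kexpr → kexpr  -- binder name, type, body
| pi    : string → kexpr → kexpr → kexpr

-- a "theorem" is then a declaration: a name, a statement, and a proof term
structure kdecl :=
(name  : string)
(type  : kexpr)
(value : kexpr)
```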

Chris B (Jan 31 2020 at 01:41):

I am looking forward to Lean 4 as I understand that the kernel will be small and the rest of the language is in Lean.

The kernel is roughly the same size in 4 as it was in 3 (~5000 loc in .cpp files, ~2200 in .h files). I'll eventually re-do the documentation for the Rust one when the v4 branch is more stable. FWIW, reimplementing the kernel might end up being a long-cut if your goal is mostly to learn Lean as a programming language, though: the gap between what you see in your editor and the kernel representation is pretty big.

Tim Daly (Jan 31 2020 at 02:50):

well I obviously have a lot more to learn about Lean. That said, the idea that the Lean kernel could be implemented in Lisp means that Lean could share data structures directly with Axiom, removing the need for a communication layer.

Tim Daly (Jan 31 2020 at 02:55):

I obviously have a lot more to learn about Lean. The idea of having the Lean kernel in Lisp is that they can share data structures directly, which removes the need to communicate. Further, it means that Lean can share Axiom's type hierarchy and it can show up in output.

In the new version of Axiom (the Sane branch), I want to add "provisos" to output. That is, I want to know that the result as presented is valid only when certain conditions apply. For example, a result might only be valid if its argument is greater than zero, giving
foo(x) suchthat x>0
That requires a fair bit of machinery, some of which Lean can assist with.
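
A hypothetical Lean sketch of the proviso idea (my names):

```lean
-- a result packaged with the proviso under which it holds
structure certified (α : Type) (P : α → Prop) :=
(value : α)
(holds : P value)

-- e.g. "foo(x) suchthat x > 0" could come back as a `certified` value:
example : certified ℕ (λ x, x > 0) := ⟨1, dec_trivial⟩
```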

Chris B (Jan 31 2020 at 03:04):

well I obviously have a lot more to learn about Lean.

You and me both, brother. It would be interesting to see whether a Lisp implementation could take the title for smallest Lean type checker; Trepplein currently has the crown at ~2.1k.

Tim Daly (Jan 31 2020 at 03:08):

I'd be happy to have it work correctly.

Simon Cruanes (Jan 31 2020 at 06:50):

The rust one? Are you redoing a lean4 kernel in rust? ö

Marc Huisinga (Jan 31 2020 at 06:50):

Smart way: use your logic as a logic, build models about other languages, use those models to do proofs. Flexible, language agnostic, extensible, modular, just overall faaaaaar more powerful

my worry is that reasoning in the model and ensuring that the translation between the model and the programming language is correct would be significantly more difficult than just using lean itself.
i agree that there are many compelling reasons to do this, especially since programmers surely also care about ensuring non-functional properties like time complexity.

perhaps tactics can bridge the gap for reasoning in the model?

Mario Carneiro (Jan 31 2020 at 10:36):

@Simon Cruanes Chris wrote a typechecker for the lean 3 kernel in rust, and he's been updating it for the new lean 4 features

Mario Carneiro (Jan 31 2020 at 10:39):

@Marc Huisinga

my worry is that reasoning in the model and ensuring that the translation between the model and the programming language is correct would be significantly more difficult than just using lean itself.
i agree that there are many compelling reasons to do this, especially since programmers surely also care about ensuring non-functional properties like time complexity.

perhaps tactics can bridge the gap for reasoning in the model?

(I completely agree with Tim Carstens's point here.) You don't have any translation gap if the "model" is not a model so much as a specification for the language itself. Depending on how your formalization relates to the language itself, it might be authoritative as a spec, or it could be a transcription of some natural language spec (in which case effort is needed to ensure these cohere). But if the language itself when executed has a different behavior, the presence of the spec ensures that this can be categorized as a compilation bug and not a modeling bug

Bas Spitters (Jan 31 2020 at 10:45):

We've done quite a bit of work connecting with the computer algebraists and people in numerics in workshops like MAP and CICM, and in the ForMath project. It's a slow process in which Lean is just another step.
https://wiki.portal.chalmers.se/cse/pmwiki.php/ForMath/ForMath
https://www.cs.au.dk/~spitters/typesreal.html
https://www.cicm-conference.org/cicm.php
https://perso.crans.org/cohen/map2014/program/
https://www.newton.ac.uk/event/bpr

Marc Huisinga (Jan 31 2020 at 11:47):

You don't have any translation gap if the "model" is not a model so much as a specification for the language itself. Depending on how your formalization relates to the language itself, it might be authoritative as a spec, or it could be a transcription of some natural language spec (in which case effort is needed to ensure these cohere). But if the language itself when executed has a different behavior, the presence of the spec ensures that this can be categorized as a compilation bug and not a modeling bug

ah, i see what you mean!

but what about the effort of reasoning about the model/spec compared to plain lean? specifically, i imagine that one would work directly with some kind of formal semantics. my gut feeling tells me that dealing with denotational semantics should be easier than dealing with operational semantics in lean, but it also tells me that keeping track of stuff like time complexity is easier with operational semantics :)
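
for instance, an operational semantics indexed by a step count makes cost claims stateable as ordinary propositions (a hypothetical sketch):

```lean
-- a toy language and a big-step semantics that counts the additions performed
inductive aexp : Type
| lit : ℕ → aexp
| add : aexp → aexp → aexp

inductive evals : aexp → ℕ → ℕ → Prop
| lit (n : ℕ) : evals (aexp.lit n) n 0
| add {e₁ e₂ : aexp} {v₁ v₂ c₁ c₂ : ℕ} :
    evals e₁ v₁ c₁ → evals e₂ v₂ c₂ →
    evals (aexp.add e₁ e₂) (v₁ + v₂) (c₁ + c₂ + 1)
```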

i think it's also important that one would be able to translate ("some"?) theorems between different models to cut down on the duplication. this isn't just important for programming in different programming languages, but also for cs (for instance, both my algorithms and computational complexity theory lectures regularly switched or extended the machine model, carrying over theorems where sensible). this roughly sounds a bit like the transport stuff that has been discussed here before.

Tim Daly (Jan 31 2020 at 16:18):

@Bas Spitters re: The ForMath project whose definition is:
Concretely, the objective of this project is to develop libraries of formalised mathematics concerning algebra, linear algebra, real number computation, and algebraic topology. The libraries that we plan to develop in this proposal are especially chosen to have long-term applications in areas where software interacts with the physical world. The main originality of the work is to structure these libraries as a software development, relying on a basis that has already shown its power in the formal proof of the four-colour theorem, and to address topics that were mostly left untouched by previous research in formal proof or formal methods.

The proof of the four-colour theorem is very interesting, as are other examples of formal proof. Certainly these formal mathematics libraries are valuable. My goal differs from the ForMath project in that I emphasize executable semantics of proofs. This merges proofs and computer algebra into "computational mathematics". Rather than stand-alone libraries, we need trusted computations of "college mathematics".

Tim Daly (Jan 31 2020 at 16:23):

For the Small Types workshop, there appear to be two streams of interest,
exact real number computation
semantics for real computations (e.g. domain theory, formal topology)

The exact real number work is of deep interest and I will do some follow-up research on the papers.
The semantics of real computations sounds interesting but seems too far on the "theory" side, away from execution.

Tim Daly (Jan 31 2020 at 16:29):

The Digital Mathematics Library seems oriented toward document handling, which is interesting for a different reason.

Axiom uses Knuth's Literate Programming, so the actual source code is available in LaTeX documents which produce both PDFs and executable code. The hope is that, when distributing an electronic paper to a conference, one could download and execute "the paper" while the talk is in progress. That way the research can be "reproduced" and used immediately.

Tim Daly (Jan 31 2020 at 16:36):

The Mathematical Knowledge Management effort seems designed to send something like a polynomial out of one system, through an "island" of mathematics doing semantics translation, and on to a target system.

Unfortunately I don't see how this can work. Consider the case of sending an Axiom polynomial to the island and back to the original Axiom system. Axiom polynomials have many representations (sparse, dense, recursive, etc.). They live over many different fields. In other words, there are whole towers of code associated with the "surface version" of a polynomial.

Tim Daly (Jan 31 2020 at 16:40):

The MAP 2014 link has some interesting papers, such as Grenet's "Computing low-degree factors of lacunary polynomials: a Newton-Puiseux approach". This may be an implementable algorithm. I will follow up on these papers. But the question for this forum is whether Grenet's algorithm includes a proof.

Tim Daly (Jan 31 2020 at 16:42):

Big Proof, which "brings together mathematicians interested in employing proof technology in their research, logicians exploring pragmatic and foundational issues in the formalisation of mathematics, and computer scientists engaged in developing and applying proof technology", is wildly interesting to me.

Reid Barton (Jan 31 2020 at 17:06):

The next sentence from the ForMath project summary after the paragraph you quoted is "The main milestones of this work will concern formally proved algorithms for solving problems in real arithmetics and in algebraic topology." I haven't looked at the project myself, but it sounds like the goal is to produce verified algorithms that one would actually run.

Tim Carstens (Feb 01 2020 at 06:50):

@Marc Huisinga - I think both approaches have their place overall; they offer different benefits and I’m a big believer in picking the right tech for the project. This is why I like to call out the “big opportunity” that I see in the model approach.

Though that approach requires more work to construct, it supports more diverse technologies. Importantly, this capability is unique to mechanized logic: while there are other programming languages for formal verification (such as Dafny), Lean (and others) stand alone in being able to unite different technologies from different parts of the computing stack. The CompCert project, augmented with Andrew Appel’s Verifiable C, as well as the work SiFive is doing with Kami, are both good examples of this versatility paying off.

I agree, though, that sometimes it’s easiest to just write it in Lean (or Coq). And that’s ok too, as long as I’m not boo’d when I dream about a unified meta-logic for comp sci 🥰

Tim Carstens (Feb 01 2020 at 06:51):

(And indeed, nobody is booing me, which is nice!)

Tim Carstens (Feb 01 2020 at 06:57):

I will say, I do think Lean has made many decisions which are well-suited to programming, in a way that perhaps Coq has not. And I admire Lean’s patient approach to getting it right. Imho, that’s the right way to respond to the state of the art.

Tim Carstens (Feb 01 2020 at 06:58):

There’s a modernity to the tools, language, and design philosophy that I think is a real asset

Bas Spitters (Feb 01 2020 at 07:35):

There's a (hopefully temporary) problem with the MAP homepage, but it can still be found here:
https://web.archive.org/web/20181228171600/http://map.disi.unige.it/
We've been having meetings since Dagstuhl 2003.

Bas Spitters (Feb 01 2020 at 07:37):

Here's a concrete project you may be interested in:
https://github.com/CoqEAL/CoqEAL

Bas Spitters (Feb 11 2020 at 17:15):

The website of the MAP community is mostly working again. It brings together people from computer algebra, formalization in type theory and constructive algebra.
https://mapcommunity.github.io/index.html

Floris van Doorn (Feb 11 2020 at 17:19):

Mathematics = Algorithms + Proofs
MAP = Mathematics + Algorithms + Proofs

So MAP = 2 * Mathematics?


Last updated: Dec 20 2023 at 11:08 UTC