Zulip Chat Archive
Stream: general
Topic: Kind lang
Asei Inoue (Apr 03 2024 at 02:16):
Kind is a pure functional programming language and proof assistant. (Slightly similar to Lean 4.)
It claims to be even faster than Lean 4 due to parallelism.
https://github.com/HigherOrderCO/Kind
https://github.com/HigherOrderCO/Functional-Benchmarks
Asei Inoue (Apr 03 2024 at 02:18):
Can we take this as good news that there is still room for Lean 4 to be faster?
Kim Morrison (Apr 03 2024 at 02:26):
Last commit 7 months ago? Not sure it is an active project.
Asei Inoue (Apr 03 2024 at 02:27):
oh…
Kim Morrison (Apr 03 2024 at 02:27):
It would be interesting to see the Lean code for the slow benchmarks at https://github.com/HigherOrderCO/Functional-Benchmarks
Asei Inoue (Apr 03 2024 at 02:32):
What do you mean? Are they bad examples?
Kim Morrison (Apr 03 2024 at 02:35):
The cases where they claim Lean is slow --- it would be interesting to see the Lean code they wrote, and to understand if it could be done better, and if not, why it is slow.
Kim Morrison (Apr 03 2024 at 02:35):
But they don't seem to have links, and I didn't want to dig.
Kyle Miller (Apr 03 2024 at 02:40):
They're the Lean files in the Checker and Runtime folders. They've also got Base.lean, which seems to be some common code (but it's not imported. Looks to be concatenated instead.)
Asei Inoue (Apr 03 2024 at 02:42):
LeanFRO should say “Lean is perfect and the ultimate next-gen functional language!” to gather money..? :smirk:
Asei Inoue (Apr 03 2024 at 02:43):
(It's a joke.)
Notification Bot (Apr 03 2024 at 03:39):
Junyan Xu has marked this topic as resolved.
Notification Bot (Apr 03 2024 at 03:39):
Junyan Xu has marked this topic as unresolved.
Henrik Böving (Apr 03 2024 at 07:09):
Scott Morrison said:
The cases where they claim Lean is slow --- it would be interesting to see the Lean code they wrote, and to understand if it could be done better, and if not, why it is slow.
Most of what is slow in their benchmarks are kernel reduction things of large expression trees, the programming parts are all fast.
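For illustration, here is a minimal Lean 4 sketch (not one of their actual benchmarks) of the gap Henrik describes: `decide` leaves the whole evaluation to kernel reduction, while `native_decide` runs compiled code, so the difference isolates kernel-evaluator cost.

```lean
-- A hypothetical micro-benchmark in the spirit of the discussion:
-- both lines prove the same fact, but `decide` makes the kernel
-- reduce the whole expression tree, while `native_decide` evaluates
-- compiled code and only asks the kernel to trust the result.
example : (List.range 100).foldl (· + ·) 0 = 4950 := by decide
example : (List.range 100).foldl (· + ·) 0 = 4950 := by native_decide
```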
Henrik Böving (Apr 03 2024 at 07:12):
Asei Inoue said:
Can we take this as good news that there is still room for Lean 4 to be faster?
Also yes, there is always room for Lean to be faster. But I don't think it will be in the fashion that Kind and their other toy languages demonstrate.
Mario Carneiro (Apr 03 2024 at 07:20):
I've been planning a rewrite of the kernel evaluator for some time. Maybe this is a good source to look at for alternative implementations; the Coq kernel is my other major guide on how to do better. The kernel evaluator has a lot of clear room for improvement
Mario Carneiro (Apr 03 2024 at 07:21):
This is tied up with progress on lean4lean, because I absolutely don't want to accidentally introduce soundness issues while rewriting a fundamental, soundness-critical part of the system. I'm pretty sure this is the same reason it has remained relatively unoptimized until now.
Yuri (Apr 15 2024 at 15:15):
The secret sauce behind Kind is HVM, which is based on Interaction Nets, or more specifically, Symmetric Interaction Combinators:
Higher-order Virtual Machine (HVM) is a pure functional runtime that is lazy, non-garbage-collected and massively parallel. It is also beta-optimal, meaning that, for higher-order computations, it can, in some cases, be exponentially (in the asymptotical sense) faster than alternatives, including Haskell's GHC.
That is possible due to a new model of computation, the Interaction Net, which supersedes the Turing Machine and the Lambda Calculus. Previous implementations of this model have been inefficient in practice, however, a recent breakthrough has drastically improved its efficiency, resulting in the HVM. Despite being relatively new, it already beats mature compilers in some cases, and is being continuously improved.
The first work on Kind lang was based on HVM1, which has now been superseded by HVM2. HVM2 is a combination of two projects: hvm-core and hvm-lang. The current work focuses heavily on running computations on the GPU, reaching billions of reductions per second.
HVM-Core is a parallel evaluator for extended Symmetric Interaction Combinators.
We provide a raw syntax for specifying nets and a Rust implementation that achieves up to 10 billion rewrites per second on an Apple M3 Max CPU. HVM's optimal evaluation semantics and concurrent model of computation make it a great compile target for high-level languages seeking massive parallelism.
HVM-Lang is a lambda-calculus based language and serves as an Intermediate Representation for HVM-Core, offering a higher level syntax for writing programs based on the Interaction-Calculus.
Compilers that want to target the HVM should compile the source language to HVM-lang, which takes care of converting interaction calculus into the underlying interaction networks.
Note that HVM-lang is untyped and does not guarantee correctness or soundness. Compilers that don't want to implement the necessary check can instead transpile to the Kind language, which compiles to HVM-lang and implements type-level checking.
Programmers looking for an HVM-based programming language should also use Kind, which is designed to be user-interfacing.
This is quite interesting work, especially since Interaction Net (IN) computations are massively parallel. hvm-lang, which provides a lambda-calculus layer on top of this pure IN runtime, is able to reduce lambdas lazily and in parallel.
Kind is a dependently typed language (as far as I know, based on self types) that focuses on exploiting the massively parallel runtime for elaboration, etc.
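For a concrete sense of the "exponentially faster for higher-order computations" claim quoted above, the textbook example is Church-numeral arithmetic, where exponentiation is self-application; an optimal evaluator shares the duplicated copies of the base instead of re-reducing them. A minimal Lean 4 rendition (Lean evaluates it the ordinary way; HVM is where the sharing would pay off):

```lean
-- Church numerals: the standard case where β-optimal sharing can beat
-- conventional evaluation asymptotically.
def C (α : Type) := (α → α) → α → α

def two : (α : Type) → C α := fun _ f x => f (f x)

-- n^m is self-application: instantiate m at type α → α and feed it n.
def cexp (n m : (α : Type) → C α) : (α : Type) → C α :=
  fun α => m (α → α) (n α)

#eval cexp two two Nat Nat.succ 0  -- 2^2 = 4
```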
Jason Rute (Apr 15 2024 at 17:56):
Would Kind be good for writing an external proof checker then?
Mario Carneiro (Apr 15 2024 at 19:47):
There is a note that says it's not complete for lambda calculus
Mario Carneiro (Apr 15 2024 at 19:58):
https://github.com/HigherOrderCO/HVM/blob/master/guide/HOW.md is a good overview, although there are other linked papers with string diagrams instead of this rule-based presentation
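As a taste of the rule-based presentation Mario links, here is a sketch in Lean (deliberately ignoring the actual graph rewiring) of how symmetric interaction combinators dispatch on an active pair: equal labels annihilate, different labels commute, and erasers erase. The names are illustrative, not HVM's.

```lean
-- Illustrative only: the three rule schemas of symmetric interaction
-- combinators, keyed on the pair of nodes meeting at their principal
-- ports. Real implementations rewrite the surrounding graph; this
-- models only which rule fires.
inductive Agent where
  | era                 -- eraser (arity 0)
  | con (label : Nat)   -- binary combinator (arity 2)

inductive Rule where
  | annihilate  -- same label: connect the two nodes' wires pairwise
  | commute     -- different labels: each node is copied past the other
  | erase       -- the eraser propagates into the other node's ports

def dispatch : Agent → Agent → Rule
  | .era, _ => .erase
  | _, .era => .erase
  | .con i, .con j => if i == j then .annihilate else .commute
```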
Chris Bailey (Apr 15 2024 at 21:29):
Mario Carneiro said:
I've been planning a rewrite of the kernel evaluator for some time.
I've been working on this as well, hmu if you want to compare notes.
Mario Carneiro (Apr 15 2024 at 22:01):
Not much notes on my end, just collecting candidate implementations like Coq, GHC, HVM
Enrico Borba (Apr 16 2024 at 13:57):
Chris Bailey said:
Mario Carneiro said:
I've been planning a rewrite of the kernel evaluator for some time.
I've been working on this as well, hmu if you want to compare notes.
Where is the kernel evaluator located code-wise? And are there any docs on the specifics of the kernel?
Henrik Böving (Apr 16 2024 at 14:07):
it's in src/kernel
and that depends on how specific you want to get, for the general idea there is #leantt but the precise details are not documented anywhere to my knowledge
Mario Carneiro (Apr 16 2024 at 15:31):
There is also the book @Chris Bailey wrote, https://ammkrn.github.io/type_checking_in_lean4/
Sofia Rodrigues (Apr 17 2024 at 19:05):
Hi! I'm the one who tried to create this version of the compiler for the Kind language (the one that is not maintained anymore and that is going to be replaced with a new one :P created by Taelin). HVM1 had a lot of problems that it inherited from the Interaction Nets model, and if you take a look at the type checker, it is a simple CoC checker that does not implement any advanced techniques. I think it's fast because it does not implement anything that is useful for a theorem prover, and you can reach unsoundness easily.
I'm saying this about my version of the compiler; the one that Taelin is writing right now is based on Interaction Type Theory, which I don't know much about. I'm almost sure that Taelin is not going to implement a lot of the things that modern theorem provers have, in order to make sure the type checker stays fast.
The Lean 4 type checker looks really slow in comparison with other ones; I think Leo said that it uses simple substitution instead of HOAS, and HOAS-based type checkers are usually faster. When we benchmarked the runtime, Lean 4 performed really well and we were really impressed.
Mario Carneiro (Apr 17 2024 at 19:29):
yes, that's right: the type checker is just doing substitution (and more than that, it uses a locally nameless representation, so it performs substitution much more often than you might think)
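A toy sketch of the locally nameless discipline Mario refers to (this is not the kernel's actual code): bound variables are de Bruijn indices, and every time the checker goes under a binder it instantiates the body, i.e. pays for a full substitution traversal.

```lean
-- Toy locally nameless expressions: loose bound variables are indices,
-- free variables are names.
inductive Expr where
  | bvar (idx : Nat)
  | fvar (name : String)
  | lam  (body : Expr)
  | app  (fn arg : Expr)

-- Replace the loose bound variable at depth `d` with `v`. Entering a
-- binder means calling this on the whole body, which is why a
-- substitution-based checker substitutes more often than one may think.
def instantiate (e v : Expr) (d : Nat := 0) : Expr :=
  match e with
  | .bvar i  => if i = d then v else .bvar i
  | .fvar n  => .fvar n
  | .lam b   => .lam (instantiate b v (d + 1))
  | .app f a => .app (instantiate f v d) (instantiate a v d)
```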
Victor Maia (Apr 23 2024 at 14:57):
Hi there. Just to clear up some misconceptions:
Kind is a general-purpose programming language that compiles to HVM, a parallel runtime. It does feature dependent types, which let you prove theorems. Its type-level evaluator is very fast because, as noted, it is a thin CoC-based NbE checker on top of HVM, and thus inherits HVM's performance. That said, there is more to fast type checking than type-level evaluation. Kind has no plans to add advanced optimizations, and its speed is merely due to "raw horsepower".
While Kind features theorem proving, its main goal isn't to be a fully-featured proof assistant. There are no plans to support tactics, cubical types, and so on. It isn't a full replacement for Lean, Coq, and similar. That said, she is confused about some assertions. "Interaction Type Theory" is a research experiment that is unrelated to Kind2. Instead, Kind2 is based on, and inspired by, András Kovács's smalltt. It features inductive families via λ-encodings, and termination will be ensured via EAL-typeability (which also makes evaluation sound under the HVM runtime, removing the "problems" that Sofia mentions).
So, in short: Kind2 can be seen as a general-purpose functional programming language that targets the HVM and aims to be as fast as possible. It does feature theorem proving and will be consistent, but it isn't meant to be a full-featured proof assistant and isn't a substitute for Lean, Coq, and similar.
Hope that clears it up!
Victor Maia (Apr 23 2024 at 15:02):
Also, it isn't maintained because we're moving to Kind2, which is based on HVM2 (a simpler, faster, formally verified distillation of HVM1!). We had some misdirections along the way, which delayed things a little, but it will be there soon. These benchmarks will be updated eventually - but again, they're mostly about type-level evaluation, which is a small part of dependent type checking.
Asei Inoue (Apr 23 2024 at 15:04):
I think Lean is as fascinating as Kind as a general-purpose programming language.
What do you think are the drawbacks of Lean as a programming language?
Victor Maia (Apr 23 2024 at 15:10):
I don't think there are many! Lean is amazing and I'm highly impressed by it, especially by things like the interior mutation optimization. Its runtime performance surpassed my former expectations. I do think its type checker could be faster, but I'm sure that is being worked on. I also think Kind is cool in its own ways - for example, being able to run on GPUs is handy nowadays. Hopefully both can coexist!
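The optimization Victor is praising is Lean's reference-counting-based reuse: a purely functional update mutates in place when the value is uniquely referenced. A small illustration (the effect is automatic; nothing in the code opts into it):

```lean
-- Functional in appearance, but when `a` is the only reference to the
-- array, Lean's runtime updates it in place instead of copying.
def incrAll (xs : Array Nat) : Array Nat := Id.run do
  let mut a := xs
  for i in [0:a.size] do
    a := a.set! i (a[i]! + 1)
  return a

#eval incrAll #[1, 2, 3]  -- #[2, 3, 4]
```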
Arthur Paulino (Apr 23 2024 at 23:20):
Victor, do you think an inet-based kernel for Lean, for the purpose of speeding it up, is a realistic idea?
Asei Inoue (Apr 26 2024 at 14:41):
@Victor Maia
Thanks for the answer.
It is certainly nice to have a GPU available.
I hadn't thought about whether GPUs can be used from Lean.
Asei Inoue (Apr 26 2024 at 14:41):
How difficult is it to actually use GPUs in Lean?
Mario Carneiro (Apr 26 2024 at 15:14):
There is essentially no support. My guess is that you would need to link to OpenGL or something to use "compute shaders"
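To make Mario's suggestion concrete: Lean's C FFI would be the plausible route, but the binding below is entirely hypothetical; `lean_gpu_saxpy` is an imagined C function wrapping a compute-shader dispatch, not an existing library.

```lean
-- Hypothetical: `lean_gpu_saxpy` would be a hand-written C stub that
-- uploads the arrays, runs a compute shader computing a*x + y, and
-- reads the result back. No such binding exists today.
@[extern "lean_gpu_saxpy"]
opaque gpuSaxpy : (a : Float) → (x y : FloatArray) → FloatArray
```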
Alok Singh (Apr 26 2024 at 21:14):
@Mario Carneiro I want MLIR support someday. Closest I could find was https://github.com/opencompl/ssa. Combined with @Tomas Skrivan 's scilean, it would be a nice little world to do deep learning in.
Asei Inoue (Apr 27 2024 at 08:06):
Wouldn't GPU support reduce build times for e.g. mathlib, or make tactics more lightweight? I think GPU support will one day be needed if we are going to do numerical computations with Lean, as in SciLean.
Mario Carneiro (Apr 27 2024 at 18:41):
Using GPUs in theorem provers is quite difficult, because GPUs are not like CPUs: they work best when you have a lot of data to process in the same way, and it is difficult to get compilation, interpretation, and elaboration into that form.
Alok Singh (Apr 27 2024 at 18:45):
Mario Carneiro said:
Using GPUs in theorem provers is quite difficult, because GPUs are not like CPUs: they work best when you have a lot of data to process in the same way, and it is difficult to get compilation, interpretation, and elaboration into that form.
agree, GPUs are breadth-wise, and a lot of proving is variable-depth recursive structures. That's why the inet approach is interesting to me, for its auto-parallelization when it's possible to use big compute on code. Rewriting code to be parallel as an optimization is one thing, but this tries to improve the underlying execution model.
Chris Bailey (Apr 27 2024 at 21:57):
One of the authors of the book that describes the optimal reduction strategy used by OP published some reflections on optimal reduction relatively recently, including some thoughts about optimal reduction as it applies to theorem provers (https://arxiv.org/abs/1701.04240v1).
Chris Bailey (Apr 27 2024 at 22:02):
I've only skimmed some of the relevant literature, but my impression is that "parallelism" in this context seems to be about identifying related expressions and exploiting that to reuse the results of a reduction step, not necessarily about parallel computation in the sense of splitting a workload across hardware or threads?
Asei Inoue (Apr 28 2024 at 01:30):
Thank you for your kindness in teaching me despite my ignorance.
Asei Inoue (Apr 28 2024 at 01:30):
Is it difficult to implement parallel processing in Lean?
Kim Morrison (Apr 28 2024 at 07:16):
Yes. :-) It's being worked on nonetheless, at least in terms of parallel elaboration of theorems within a single file. That's completely unrelated to GPUs, though.
Asei Inoue (Apr 28 2024 at 07:18):
Thank you and I look forward to Lean's future development...!
Andrés Goens (Apr 28 2024 at 10:17):
Alok Singh said:
Mario Carneiro I want MLIR support someday. Closest I could find was https://github.com/opencompl/ssa. Combined with Tomas Skrivan 's scilean, it would be a nice little world to do deep learning in.
that repo is more about modeling MLIR('s semantics) in Lean, not so much about using it for compilation, although of course it could help in the future. I know that @Henrik Böving, @Tobias Grosser and @Siddharth Bhat are working on an LLVM backend for the compiler (in a similar spirit to https://ieeexplore.ieee.org/document/9741279), but I'm not sure about the current status of that one.
Victor Maia (Apr 29 2024 at 15:29):
in the context of optimal λ-calculus reduction, "parallelism" is often used to mean both:
- with optimal sharing, a single beta-reduction can "represent" many beta-reductions in "parallel";
- interaction nets, which are used to implement optimal λ-calculus reduction, are very amenable to parallel evaluation, because their reduction rules are fully local, require constant time/space, and are asynchronous and strongly confluent (which means that not just the "final result" but also the "work done" is the same regardless of the order of operations; in other words, you can reduce things in parallel without the risk of generating extra work).
regarding running Lean on GPUs: if Lean has a simple core and you manage to target the HVM (which shouldn't be harder than, say, targeting JavaScript), then that could be one way to run it on GPUs (:
it is true that GPUs are mostly about breadth rather than depth, so the kinds of programs that benefit are those that recurse in a branching fashion and then do a lot of work at the "tips". hopefully it isn't hard to design purely functional algorithms that behave this way; it often boils down to working more with trees and less with lists.
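A sketch of the shape Victor describes: a divide-and-conquer fold branches into independent halves (breadth a parallel runtime could exploit), whereas a left fold over a list is one long dependent chain.

```lean
-- Branch first, work at the tips: the two recursive calls are
-- independent, so a parallel evaluator could run them simultaneously.
-- (`partial` just sidesteps the termination proof for this sketch.)
partial def sumRange (lo hi : Nat) : Nat :=
  if hi ≤ lo then 0
  else if hi = lo + 1 then lo
  else
    let mid := (lo + hi) / 2
    sumRange lo mid + sumRange mid hi

#eval sumRange 0 100  -- 4950, same as (List.range 100).foldl (· + ·) 0
```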
Last updated: May 02 2025 at 03:31 UTC