Zulip Chat Archive

Stream: Machine Learning for Theorem Proving

Topic: Questions to the ML community


Shreyas Srinivas (Dec 29 2024 at 12:31):

Are there tools / ongoing ML research on :

  1. Decompiling Lean proof terms into succinct and correct tactic proofs. I recall that there is already some LLM work on decompiling other programming languages. The key point, of course, is for the proofs to be succinct and to use more powerful tactics like simp, linarith, or ring, rather than just outputting exact <term> (see the sketch after this list).
  2. Translating tactic level proofs from one ITP to another, say Coq to Lean or vice versa.
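
To make point 1 concrete, here is a toy Lean 4 sketch of the intended direction (my own illustration, not output from any existing tool; `omega` ships with core Lean):

```lean
-- A raw term proof: the kind of thing a bare `exact <term>` would record.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- The decompiler's target: the same fact via a succinct, powerful tactic.
example (a b : Nat) : a + b = b + a := by omega
```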

Jason Rute (Dec 30 2024 at 03:28):

I think both are good topics, but I know of very little work done on either. As for the first, I don't recall seeing anything, but I also don't see a lot of direct applications (except for one I'll mention below). Do you have something in mind?

Jason Rute (Dec 30 2024 at 03:28):

A related topic is just cleaning up proofs, as in ImProver, #general > ImProver discussion. (Actually, one should check whether ImProver does term-proof-to-tactic translation.) Also, it's been discussed that if RL agents like AlphaProof give really weird and non-idiomatic proofs, then maybe another AI agent needs to be used to clean them up.

Jason Rute (Dec 30 2024 at 03:28):

As for translation, I've seen it talked about occasionally: everything from Lean to Coq (or vice versa) to just Lean 3 to Lean 4 (when the port was happening). Somewhat recently, there was a thread about how GPT-4 is good at translating from Lean to Coq: #Machine Learning for Theorem Proving > ChatGPT can help translate between Coq and Lean. (I doubt it would be good in the other direction since GPT-4 seems bad at Lean syntax.) And another one about translating from Agda to TypeScript: #Machine Learning for Theorem Proving > Transpiling code. Also, here are some much older musings of mine on the subject, from well before ChatGPT came out: #Machine Learning for Theorem Proving > Translating between ITPs with machine learning

Jason Rute (Dec 30 2024 at 03:28):

Also @Cyril Cohen was (or still is?) hiring a postdoc to work on translation between Lean and Coq. I think the project is basically to automatically translate theorems from Mathlib to math-comp and/or in the other direction. #job postings > Postdoc: ML translation between formal math libraries

Jason Rute (Dec 30 2024 at 03:29):

One project I have thought about that would be useful and interesting is to combine old-school logical transpilation with ML translation. The "old-school" approach is to find some logical way to translate from one language to the other. If the source logic is weaker than the target (like Metamath to Lean, or HOL-Light to Coq), then it can be done fairly easily (as in @Mario Carneiro's translation from Metamath to Lean). There are also systems like Dedukti for this sort of work, including the recent translation from HOL-Light to Coq. But the challenge is in alignment: you don't want to use the source version of the natural numbers; you want to use the target version. One can of course align manually (as I think the above projects have done), but I think a very interesting project would be to do automatic alignment with AI. So (1) automatically find the matching definitions, (2) automatically prove they are isomorphic, (3) automatically translate the statement of one theorem to another along this equivalence, (4) automatically replace the messy theorem statement with a more idiomatic one, and (5) translate the proof to one which doesn't use the source definitions and lemmas at all. Each step is harder than the previous (more or less). Here is an old work on ML alignment of definitions (from before neural networks were common here).
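
As a minimal Lean 4 sketch of what steps (1) and (2) amount to, suppose a raw transpilation has dumped the source library's naturals into Lean as a hypothetical SrcNat type; the alignment an AI would have to find and prove looks like this:

```lean
-- Hypothetical stand-in for the source library's natural numbers,
-- as they might appear after a raw logical transpilation.
inductive SrcNat where
  | zero
  | succ (n : SrcNat)

-- Step (1): a candidate alignment between SrcNat and Lean's own Nat.
def toNat : SrcNat → Nat
  | .zero => 0
  | .succ n => toNat n + 1

def ofNat : Nat → SrcNat
  | 0 => .zero
  | n + 1 => .succ (ofNat n)

-- Step (2): prove the maps are mutually inverse, giving the isomorphism
-- along which theorem statements can then be transported.
theorem toNat_ofNat (n : Nat) : toNat (ofNat n) = n := by
  induction n with
  | zero => rfl
  | succ n ih => simp [ofNat, toNat, ih]

theorem ofNat_toNat (m : SrcNat) : ofNat (toNat m) = m := by
  induction m with
  | zero => rfl
  | succ n ih => simp [toNat, ofNat, ih]
```

Steps (3)-(5) are where it gets hard: transporting statements along the equivalence is mechanical, but producing idiomatic statements and proofs that no longer mention SrcNat at all is not.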

Jason Rute (Dec 30 2024 at 03:30):

(And of course natural-language-to-formal "auto-formalization" is a hot topic. In many ways formal-to-formal should be easier, so if we are making progress on auto-formalization, maybe we are making progress on formal-to-formal translation without even trying.)

Shreyas Srinivas (Dec 30 2024 at 13:38):

Jason Rute said:

I think both are good topics, but I know of very little work done on either. As for the first, I don't recall seeing anything, but I also don't see a lot of direct applications (except for one I'll mention below). Do you have something in mind?

The motivation is actually point 2. I wondered if we could use the following pipeline for translation: elaborate the Lean/Coq tactic proof to a term => translate the proof term more systematically to Coq/Lean => decompile to a succinct tactic proof.

The nice thing is that, of course, we don't have to test our decompiler against some artificial semantics: we know whether the decompiler accomplished its job by looking at the output of lake build.
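
For concreteness, step 1 of that pipeline is something Lean already exposes; the catch is what the resulting term looks like. A minimal check (hypothetical theorem name):

```lean
-- `omega` stands in for any automation tactic in the source proof.
theorem demo (n : Nat) : 2 * n = n + n := by omega

-- Step 1: recover the elaborated proof term. For automation tactics the
-- printed term is typically large and non-idiomatic, which is exactly why
-- the final decompilation step back to succinct tactics is needed.
#print demo
```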

Shreyas Srinivas (Dec 30 2024 at 13:40):

A second (far more ambitious) motivation: I was thinking about the whole interpretability area in ML (and discussing it with people who work on it), and I wondered if studying a good network performing these translations would help us study how neural nets deal with proofs and build smaller, more efficient networks more systematically.

Jason Rute (Dec 30 2024 at 13:58):

Shreyas Srinivas said:

Jason Rute said:

I think both are good topics, but I know of very little work done on either. As for the first, I don't recall seeing anything, but I also don't see a lot of direct applications (except for one I'll mention below). Do you have something in mind?

The motivation is actually point 2. I wondered if we could use the following pipeline for translation: elaborate the Lean/Coq tactic proof to a term => translate the proof term more systematically to Coq/Lean => decompile to a succinct tactic proof.

Interesting approach (and similar to what I was suggesting above). I guess you still have to deal with two issues in the second step: (1) definitions and lemmas would need to be aligned (and that alignment would be different for different Coq libraries) or left duplicated, and (2) the Lean and Coq foundations are just different enough that it might be a bit tricky to translate at the kernel level from one to the other, since you can't have a nice embedding of one logic in the other. (Wait, I think it is known that you can translate Lean 3's kernel into Coq.)

Shreyas Srinivas (Dec 30 2024 at 14:03):

To clarify, I think ML can be useful in step 2 as well. When I say "more systematically", I mean that the translations could be localised in some sense (one subterm replaced by another).

Shreyas Srinivas (Dec 30 2024 at 14:05):

I think in the Lean context, the more useful direction might be Isabelle/HOL to Lean, since they apparently have a lot of classical math we don't (this is definitely true for algorithms). In this case the second step might be much fuzzier.

Luigi Massacci (Dec 31 2024 at 08:27):

The “problem” with Isabelle is that they also have a lot more very high-powered tactics that are used quite frequently and currently have no equivalent in Lean. While the approach you describe might do away with that issue on the surface, I would hazard a guess that in the chain Isabelle tactic -> Isabelle proof term -> Lean proof term -> Lean tactics, when it comes to translating Sledgehammer proofs, the last arrow is going to be nearly as hard as just generating the whole proof from scratch. I would expect this to have better luck in Coq, where it’s much more common to have handcrafted proof terms, which are relatively simple, as opposed to the monstrosities generated by SMT solvers.

Maybe after Duper is up and running in a more stable form, direct tactic-to-tactic translation could become more doable, but of course then you get the annoyance that the simpler logic of Isabelle means rewriting is going to work in situations where it won't in Lean, etc.

Josh Clune (Dec 31 2024 at 12:41):

Luigi Massacci said:

Maybe after Duper is up and running in a more stable form, direct tactic-to-tactic translation could become more doable, but of course then you get the annoyance that the simpler logic of Isabelle means rewriting is going to work in situations where it won't in Lean, etc.

For what it's worth, I suspect that Duper is already in a state where it could reproduce the work done by most metis calls. Aesop might similarly be able to reproduce the work done by most Isabelle auto calls, and although there isn't a stable Lean equivalent for Isabelle's smt calls yet, I believe that lean-smt is working in that direction. So we might be closer to having equivalents to Isabelle's automation than it seems.

I do agree though that translating problems from Isabelle's logic to "equivalent" problems in Lean's logic might result in harder problems depending on how the translation is done.
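
As a rough illustration of the shape of that correspondence (assuming a project that depends on the Aesop library; this is not a claim about coverage):

```lean
import Aesop

-- Roughly where Isabelle's `auto` would land on the Lean side; a `metis`
-- call would analogously become a Duper invocation (not shown here).
example (P Q : Prop) (h : P ∧ Q) : Q ∧ P := by aesop
```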

Shreyas Srinivas (Dec 31 2024 at 12:42):

From the utility point of view, even a translator that works for a substantial fraction of Isabelle proofs would be a big step forward.

Shreyas Srinivas (Dec 31 2024 at 12:44):

That being said, I like this problem purely from the point of view of my second motivation, namely exploring interpretability and looking for composable patterns in the model.

Kevin Buzzard (Dec 31 2024 at 12:50):

From the point of view of mathematics, I think that translating stuff from Isabelle to Lean is a far less efficient approach than just redoing it in Lean anyway. This is what we did with the prime number theorem, for example, and it's what we're doing with finite-dimensionality of modular forms, and I don't really know of anything else in Isabelle AFP which we'd want in mathlib and which we don't have (although I'm open to being corrected here). But of course translating from Isabelle to Lean is independently an interesting question.

Shreyas Srinivas (Dec 31 2024 at 12:51):

I am looking at TCS stuff that I don't see any point in pushing to mathlib. But I recall someone mentioning that there is a bunch of stuff with asymptotics that math folks also care about.

Shreyas Srinivas (Dec 31 2024 at 12:53):

From Coq, there is a lot of stuff whose translation to Lean would be a huge amount of grunt work.

Shreyas Srinivas (Dec 31 2024 at 12:53):

This is mostly software verification stuff. Iris and EasyCrypt (SSProve, thanks Bas Spitters) are the biggest ones, but there are a number of smaller developments as well.

Jason Rute (Dec 31 2024 at 13:14):

@Daniel Windham might have interest in this discussion. He was motivated by translating between ITPs.

Kevin Buzzard (Dec 31 2024 at 13:18):

You're right about asymptotics, and again my instinct is that it would just be easier to translate them by hand given the volume of material here.

Kevin Buzzard (Dec 31 2024 at 13:22):

Autoformalization between two systems with such different type theories is still a pipe dream, and even if it becomes a reality, the translations would be very unlikely to be idiomatic and maintainable (in my opinion).

Kevin Buzzard (Dec 31 2024 at 13:23):

But I'm just saying that I don't see any argument for using these ideas to make mathlib better; I'm certainly not arguing against working on the idea in general. It sounds interesting and difficult (which is good).

Luigi Massacci (Dec 31 2024 at 14:14):

Kevin Buzzard said:

and I don't really know of anything else in Isabelle AFP which we'd want in mathlib and which we don't have

isn’t all the work of Eberl on complex path integrals in the AFP?

Shreyas Srinivas (Dec 31 2024 at 15:15):

Fundamentally, the problem of decompilation to tactics in Lean is a pattern-matching problem on expression trees. Given an index of lemmas, some meta-patterns describing the "shape" of terms generated by tactics, and the input term, the goal is to find subterms which can be successfully replaced by tactics + lemmas. I wonder if some form of recursive pattern matching guided by ML would be practical. I don't know how well neural networks fare at these tasks.
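
A minimal metaprogramming sketch of the very first ingredient, using only core Lean APIs (lemmasUsed is a hypothetical helper, not part of any existing decompiler): collect the constants a stored proof term mentions, which is the raw material any such pattern matcher, ML-guided or not, would work over.

```lean
import Lean
open Lean

-- Hypothetical first pass for a tactic decompiler: gather the lemmas a
-- stored proof term mentions, as input to later subterm pattern matching.
def lemmasUsed (env : Environment) (thm : Name) : Array Name :=
  match env.find? thm with
  | some info => info.value!.getUsedConstants
  | none => #[]
```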

Shreyas Srinivas (Dec 31 2024 at 16:58):

To add to the above, I wouldn't be surprised to learn that there is a deterministic and composable set of tactics (perhaps SSR?) for which this can be accomplished much more easily.

Kevin Buzzard (Dec 31 2024 at 17:29):

Luigi Massacci said:

Kevin Buzzard said:

and I don't really know of anything else in Isabelle AFP which we'd want in mathlib and which we don't have

isn’t all the work of Eberl on complex path integrals in the AFP?

I talked to Manuel about this quite a bit, and he explained to me a simpler approach than what he implemented (which I talked about with David Loeffler); probably that's what we want.

Xiyu Zhai (Jan 02 2025 at 03:41):

It's hard to see where this approach fits in a future where we can easily do any proof with a human plus a reasoning LLM plus some autoformalization. Instead of translating between low-level languages like Lean and Isabelle, maybe in the ideal world we should maintain a good informal proof and a stable, high-quality translator from informal proofs into an arbitrary formal language, which is my current project.

Frankly speaking, I believe the value of many things needs to be carefully reevaluated once we fully utilize the potential of LLMs. It's easy to underestimate that potential after seeing that they can't even reliably produce Lean 4 syntax instead of Lean 3. But there are many ways of composing an AI system, rather than using a single pass of a single LLM, that overcome these superficial problems without even collecting more Lean data.

Formalization, from my perspective, isn't a particularly intellectually challenging task. I fail to see why it would be a fundamental bottleneck for AI progress in the coming years.

Because the rate of AI progress is so fast, I'm trying to warn people against starting huge projects without evaluating their usefulness should AI make another leap. AI has this problem of research problems quickly fading away because a new genre of method makes them completely obsolete.

The elephant in the room is: what will the best formalization workflow look like after we figure out the right way to utilize LLMs? What is definitely not going to be disrupted by LLMs (assuming no significant additional resources are put into boosting them)?

Shreyas Srinivas (Jan 02 2025 at 11:41):

  1. It is an interesting challenge thanks to the second motivation I described above, and just an interesting problem in its own right. It is worthwhile looking for some combination of GOFAI and modern ML that might be cheaper, more understandable, etc.
  2. AI progress is not easily predictable. The next-gen LLM might indeed be slightly better at autoformalisation, but when applied in practice, LLMs have seemed uniformly disappointing so far (benchmarks not reflecting reality, and all that). In the equational theories project, for example, where there were a lot of initial expectations around modern AI, the most effective tools turned out to be old-fashioned theorem provers, by a wide margin.
  3. Even if 2 is not an issue, the newer models are behind increasingly expensive paywalls, which are only going to get higher when the VC-driven hype is over and the bills come due.

Bas Spitters (Jan 02 2025 at 12:12):

Shreyas Srinivas said:

This is mostly software verification stuff. Iris and easycrypt are the biggest ones, but there are a number of smaller developments as well

EasyCrypt is a standalone tool, so not in Coq. Do you mean https://github.com/SSProve/ssprove?

Eric Wieser (Jan 02 2025 at 12:13):

Xiyu Zhai said:

Formalization, from my perspective, isn't a particularly intellectually challenging task.

I think the idea that formalization is "just translation" is a common misconception; many formalizations in mathlib have required lots of careful thought, and I believe sometimes even the development of new mathematics.

Bas Spitters (Jan 02 2025 at 12:15):

Larry Paulson made a huge effort (semi-)automatically translating HOL Light's real analysis library to Isabelle. This may also be a source of inspiration.

Jason Rute (Jan 02 2025 at 13:09):

@Bas Spitters I didn't realize it was an automatic translation. I had the impression (wrongly, I guess) that he mostly did it manually. Are there details on how he did it? I assume that the close similarity of the systems helped a lot.

Jason Rute (Jan 02 2025 at 13:13):

Here is a post from today by @Philip Zucker on using ChatGPT to translate from Lean to some new theorem prover. https://www.philipzucker.com/cody_sheffer/ It is not clear from the blog if ChatGPT was helpful or not.

Bas Spitters (Jan 02 2025 at 13:36):

@Jason Rute yes, there was less automation than I remembered:
https://lawrencecpaulson.github.io/2022/09/14/Libraries.html
https://lawrencecpaulson.github.io/2023/07/12/Metric_spaces.html

Sometimes I was able to port proofs just by eyeballing the HOL Light versions, but often I had to run them in a HOL Light session, occasionally line by line. Sledgehammer did most of the work, and half the time I had no idea what the actual argument was. Well, I don’t really need to know.

However, there is:
https://members.loria.fr/STourret/papers/isabelle24translation.pdf

Philip Zucker (Jan 02 2025 at 13:40):

Nothing very exotic. It was about par for the course of my experience using LLMs. Sometimes it nailed it, but pretty often the output needed editing or just had to be tossed out. Going somewhat interactively, lemma by lemma, was much better than trying it all at once. This was my transcript: https://chatgpt.com/share/6776116b-c4d4-8008-b3d0-d73c401feb05 I'm interested if there is something better I should be doing. One small hope was that Knuckledragger being in Python removed a barrier to the ML ecosystem, and that Z3 could paper over low-level manipulations, allowing the LLM to take higher-level steps.

Xiyu Zhai (Jan 02 2025 at 13:46):

Eric Wieser said:

Xiyu Zhai said:

Formalization, from my perspective, isn't a particularly intellectually challenging task.

I think the idea that formalization is "just translation" is a common misconception; many formalizations in mathlib have required lots of careful thought, and I believe sometimes even the development of new mathematics.

I'm not an expert in formalizing advanced mathematics in any sense, so I'm just speaking from instinct, as a random person outside the typical formalization community who would like to use formalization to make things more rigorous. Feel free to correct me if I'm wrong; I prefer things to be open and corrected rather than closed and left unfixed. In my language, this counts as prover-specific fixes. Many have to do with a specific prover lacking certain features, or with requiring a very clean minimal set of axioms (like avoiding the axiom of choice). I expect the amount of such new mathematics is finite in a sense; otherwise, it raises the question of whether modern mathematics really has a solid foundation or is waiting for formalization to resolve many potential issues. As Mathlib4 is in its very early stages, I believe these fixes happen pretty often, but as things go on, I expect things to become more and more stable and standardized.

There are two kinds of formalization: formalization of a new branch of mathematics, where nothing similar has been formalized before, and formalization of a piece of mathematics for which no considerable deep thinking is required. The second kind is, I believe, mostly a chore and will be significantly accelerated by AI (not raw LLMs, though). If one thinks more about analysis and combinatorics (for finite objects) instead of algebra, the second kind is more prominent. I fully understand that the great thing about Lean is that it demonstrates the first kind can be solidly handled by provers like Lean.

Maybe there are other types of formalization, but I'll make it very clear in the future that I am referring only to the second kind when I say translation is simple. The first kind is, in my eyes, the typical kind of fundamentally nontrivial work, which we are very fortunate to be able to work on, as we are still very early in terms of autoformalization.

What would be more beneficial to the whole community is less focus on specific tooling and more focus on a far more automatic pipeline in general.

Basically, the difficulty of translating to formal languages is not something to boast about. The easier the translation is, the better the prover is as a tool for verifying correctness. The foundation should be expressive enough to easily express what we need to prove. To convince the broader community, the priority is not to show how "interesting" formalization is, but to show how "boring" and "trivial" it can be. The value of formal verification as a mathematical subject and its value as tooling can be contrary to each other. Mathematicians care about how painlessly it can help them check proofs. Engineers care about how quickly it can help iterate designs by reducing bugs. The majority of us outsiders mostly care about the functionality of Lean as a tool. Most computer science subjects are like this: operating systems and programming languages are designed with ease of use as the first priority, rather than being challenging to use. I believe that if we choose a fixed range of mathematical domains, the difficulty of formalization is finite and will eventually be entirely solved by AI.

In short, in my amateur opinion, the end goal of the game is not writing a huge library of formalized mathematics. The end goal is creating a truly automatic pipeline, with good principles, for formalizing mathematics, building upon previous experience in translating natural language to formal proofs.

Shreyas Srinivas (Jan 02 2025 at 15:31):

@Bas Spitters: yeah, SSProve, sorry; I mix up these crypto framework names sometimes.

Daniel Windham (Jan 09 2025 at 20:53):

Thanks @Jason Rute for the tag. I'm finally catching up after the holiday break. @Jason Gross may soon have a student working on Coq-to-Lean (or similar) proof translations with LLMs. He's interested in this in part as a research question: formal-to-formal proof translation could be a good stepping stone towards LLM-based proof repair and informal-to-formal proof assistance.

Bas Spitters (Jan 10 2025 at 08:25):

@Jason Gross Can you tell us more about your plans? I know that @Cyril Cohen is also interested in this topic.

Jason Gross (Jan 10 2025 at 18:10):

We're targeting (semi)automatic LLM translation of software verification from Coq to Lean as a stepping stone to autoformalization of software verification, which itself is a stepping stone to getting o3 (or its successors) to generate the English which is then autoformalized, with the ultimate target being making it massively easier to verify both existing and newly written software.

For this target, I think it doesn't much matter whether the translation is idiomatic, except insofar as being non-idiomatic makes it longer/harder to do the proofs and/or to reuse existing results.

Shreyas Srinivas (Jan 10 2025 at 18:12):

@Jason Gross : is this a direct translation from tactic script to tactic script? Or is it like the pipeline I described above?

Jason Rute (Jan 10 2025 at 18:57):

@Jason Gross you have o3 access? :smile: But I'm interested to see how the project goes.

Jason Gross (Jan 10 2025 at 18:58):

@Jason Rute I don't have o3 access (yet), but I imagine by the time we get formal => formal working, I might have o3 (or equivalent) access / more of the public will have it

Jason Gross (Jan 10 2025 at 19:01):

@Shreyas Srinivas Direct tactic script to tactic script; otherwise I don't think it's a very good stepping stone for doing autoformalization. I'm thinking of possibly using https://github.com/SkySkimmer/coq-lean-import as a way of automatically validating the translations (that is, Coq ={LLM}=> Lean ={coq-lean-import}=> Coq should give ~ a coherent isomorphism).

Jason Rute (Jan 10 2025 at 19:04):

@Jason Gross Maybe a more realistic question is about your budget. I think something like o3 with the hard thinking mode could do an amazing job, especially if used in an agent-like framework where it had access to Lean/Coq internals like goal states and error messages. But that could cost $1000+ per problem. There is always a cost/benefit trade-off with these AI systems. For example, a lot of agents have switched to using GPT-4o instead of GPT-4 for this very reason. The real challenge is to find the right tools for the job which balance cost and performance.

Jason Rute (Jan 10 2025 at 19:05):

But knowing the upper bound is also important, so expensive methods have their place.

Jason Rute (Jan 10 2025 at 19:06):

And costs will come down too.

Bas Spitters (Jan 10 2025 at 21:33):

Exciting, @Jason Gross! Yes, I was also thinking that coq-lean-import could play a role here. As you say, it's unlikely to give an equality, so HoTT or perhaps Trocq may be relevant here.

llllvvuu (Jan 13 2025 at 02:21):

re: o3, my experience with o1 (when I tested it on some Olympiad stuff) is that it seemed to be RL-ed so heavily towards “find the answer” that even when I asked it to prove things, it just output “find the answer”-type reasoning and not proofs. I suspect it doesn't really know how to prove things at all, because there's no verifier for informal-to-informal reasoning to train on.

Have other people found similar?

Jason Rute (Jan 13 2025 at 02:45):

@llllvvuu Given how o1 does on the Putnam, I kind of wondered if something like that could be going on. It is even possible that the reasoning traces are much more robust than the final “proof” would suggest. Like you said, with RL it is a lot easier to train it to get the answer right than to get an informal proof right (even if internally it is doing something closer to the full proof, since that would guarantee a correct final answer).

llllvvuu (Jan 13 2025 at 03:29):

It seems to me that the accuracy of reasoning traces in LLMs in general, even with process reward modeling, greatly lags the accuracy of final answers. Indeed, there is nothing requiring them to be accurate.

Jason Rute (Jan 13 2025 at 03:33):

But also there is nothing requiring them to be human understandable, so it is possible that they are more rigorous than they look on the surface. But this is just a hunch.

Ayush Agrawal (Jan 13 2025 at 03:47):

llllvvuu said:

re: o3, my experience with o1 (when I tested it on some Olympiad stuff) is that it seemed to be RL-ed so heavily towards “find the answer” that even when I asked it to prove things, it just output “find the answer”-type reasoning and not proofs. I suspect it doesn't really know how to prove things at all, because there's no verifier for informal-to-informal reasoning to train on.

This is interesting. It would be great if you could share the output/ChatGPT link here!

Ayush Agrawal (Jan 13 2025 at 03:55):

I think it would be great to have a channel where everyone can share their experience with the o1 series on mathematics (proving or solving), along with the ChatGPT outputs.

Alex Meiburg (Jan 13 2025 at 15:16):

Jason Rute said:

But also there is nothing requiring them to be human understandable, so it is possible that they are more rigorous than they look on the surface. But this is just a hunch.

There's been the idea, for a while, that somewhere in embedding space the model is thinking more intelligently than it knows how to put into words. (Put into tokens?) I didn't take this idea too seriously ~two years ago, but with RL-ed agents like o1/o3, I think it's quite plausible.

I think models like https://arxiv.org/abs/2412.17747 make this even more plausible. It's really encouraging the model to "think" in latent space across several non-human-readable tokens. It wasn't the first paper to try this, but it seems to be the first one to make it work really well.

Shreyas Srinivas (Jan 13 2025 at 15:21):

What does this even mean? How does one gather conclusive evidence that this is happening?

llllvvuu (Jan 13 2025 at 16:21):

Probing, I assume.

Alex Meiburg (Jan 13 2025 at 17:17):

What does this even mean?

Well, deep networks essentially are finding circuits that compute things. At first order, they're learning circuits to predict words; at second order, they're building "circuits" over spans of paragraphs to reach logical conclusions, just like humans do. (What is a proof but a circuit building up proof terms?) When it's just pretraining + fine-tuning, the second-order circuits are really just learned from imitating human arguments. If the model can accurately mimic human arguments token by token, then we hope this generally steers it towards mostly-right answers. The first-order circuits (how to predict a token from its preceding tokens) can operate however the network wants.

If you do reinforcement learning, though, then you're sort of letting the second-order circuit learn too. Instead of just imitating human arguments, it can synthesize its own arguments, possibly in a very different way, as long as it's producing correct answers. If you 'snap' each token produced to a text input before feeding it back in, then you're sort of anchoring this reasoning argument to text. Attention mechanisms still let it use intermediate-layer embeddings computed earlier, but this allows only "shallow" circuits in embedding space, and the deep second-order circuits are kept in human text.

But if you stop 'snapping' and let tokens just be arbitrary embedding vectors, then you can differentiate all the way back through the model and really learn the second-order circuits. This puts you back in the domain of LSTMs instead of transformers. It's more expensive to train, but the circuits are now of arbitrary depth and can support more complicated argument styles.

This is the idea, at least.

How does one gather conclusive evidence that this is happening?

I think that's a wide open question. You do have this embedding token stream, so most evidence has to be something about observing what's going on in that stream, or some interventional experiments.

Shreyas Srinivas (Jan 22 2025 at 12:02):

A relevant thread from the Coq Zulip : https://coq.zulipchat.com/#narrow/channel/237977-Coq-users/topic/Translation.20of.20HOL-Light.20base.20library.20to.20Coq/near/495238985

Shreyas Srinivas (Jan 22 2025 at 12:02):

It is a translation of HOL Light to Coq


Last updated: May 02 2025 at 03:31 UTC