Zulip Chat Archive

Stream: Machine Learning for Theorem Proving

Topic: VibeRegistry / slopnet


GasStationManager (Feb 18 2026 at 08:02):

Something I'm working on and would love the community's feedback:

VibeRegistry: Verified Registry of AI-Assisted Lean Proofs.
(As a nod to a certain popular thread on this zulip, y'all can call this 'slopnet' if you like...)

This is a registry of external github repos of AI-generated/AI-assisted libraries of Lean proofs, and:

  • does automated secure verification using SafeVerify and comparator, as part of its CI
  • lets human experts vet and sign off on the specifications, i.e. that the Lean theorem statements match the informal statements in the literature.

Currently it contains entries for two repos:

But certainly this is not limited to ML theory; other topics (e.g. Math, CS) are welcome! Suggested entries are appreciated, if you have / know of an autoformalized Lean library!

My goal for this repo is to explore a decentralized model of building libraries. Right now, if I want to use an autoformalized library, there are two potential pain points: whether its definitions and theorem statements are right, and whether its proofs are valid. This registry can hopefully serve as a trusted resource for addressing both concerns.
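To illustrate why these are two separate concerns: the Lean kernel checks that a proof is valid, but it cannot tell whether the theorem *statement* faithfully captures the informal claim, which is why human (or LLM) spec review is a distinct step. A minimal, hypothetical sketch (not taken from the registry):

```lean
-- Intended informal claim: "every natural number n satisfies n ≤ n + 1".
-- A faithful formalization, accepted by the kernel:
theorem le_succ_self (n : Nat) : n ≤ n + 1 :=
  Nat.le_succ n

-- A subtly wrong formalization: the extra hypothesis `h` restricts the
-- claim to n = 0, so the kernel happily accepts the proof even though
-- the statement no longer matches the informal one. Only statement
-- review catches this.
theorem le_succ_self_restricted (n : Nat) (h : n = 0) : n ≤ n + 1 := by
  subst h
  exact Nat.le_succ 0
```

Both theorems verify, but only the first should pass a spec sign-off; automated proof checking alone cannot distinguish them.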

Michael Rothgang (Feb 18 2026 at 08:41):

Something very similar was discussed at #general > sloplib

Ralf Stephan (Feb 18 2026 at 10:49):

What type of repos do you include/expect? Simple collections of (interdependent) Lean files? Blueprint repos? I'd guess it is more likely that a reviewer signs off on a blueprint repo?

Bas Spitters (Feb 20 2026 at 15:56):

@GasStationManager Did you consider using LLMs to review the statements? It's not perfect, but it can be a start.

GasStationManager (Feb 27 2026 at 18:03):

@Ralf Stephan both blueprint and non-blueprint repos are acceptable. I agree that a blueprint makes it easier for human experts to sign off.

@Bas Spitters interesting point. I assume the authors of repos will already have asked LLMs to review the statements, hopefully before proofs are attempted. There may be value in separate LLM reviews of statements, but this may be harder to get right: I have done brief experiments on related questions, and LLMs can hallucinate much more easily there. Still, a worthwhile direction to explore.


Last updated: Feb 28 2026 at 14:05 UTC