Zulip Chat Archive

Stream: Machine Learning for Theorem Proving

Topic: Discussion: How will autoproving impact math?


Franz Huschenbeth (Jan 14 2026 at 16:59):

I have recently been thinking about how math might change if we improve the autoproving capabilities of current AI systems. I have read Alex Kontorovich's paper The Shape of Math to Come and have laid down some of my own ideas and thoughts in a short blog post.

I would love to hear different thoughts, or for you to link any other resources on this particular topic that you have come across.

Notification Bot (Jan 14 2026 at 19:20):

This topic was moved here from #general > Discussion: How will autoproving impact math? by Patrick Massot.

Patrick Massot (Jan 14 2026 at 19:21):

@franzhusch I moved your message to a more appropriate channel where you will also find a lot of previous discussions of this topic.

Franz Huschenbeth (Jan 14 2026 at 19:30):

Patrick Massot said:

franzhusch I moved your message to a more appropriate channel where you will also find a lot of previous discussions of this topic.

Ah thanks a lot. I will look through the topics here to find some other thoughts and insights then.

Bbbbbbbbba (Jan 15 2026 at 09:23):

I don't think we are anywhere close to your definition of a Math Singularity. AI surpassed humans in Go a long time ago, and yet AI-vs-AI games (where AIs can have drastically different evaluations) indicate that we are nowhere near a Go Singularity.

And technically, a Go Singularity is a special case of a Math Singularity...

Franz Huschenbeth (Jan 15 2026 at 09:45):

I mainly introduced the concept to contemplate how the workflow of a mathematician might change by thinking about the extreme end of autoproving capabilities.

About the Go Singularity, I would argue that it was reached once our best humans were beaten, and certainly after the introduction of AlphaZero. I say that because the Singularity is from the perspective of humans: it is a point at which we can't tell the difference. It would not matter to the best human whether they played the very best AI system in Go or a previous iteration that was already past the Go Singularity. I don't mean Singularity in the sense of instantaneous progress.

That being said, I also think we are far from a Math Singularity, given the uncertainties around autoproving progress and the fact that math has deceptively simple statements which are very hard to prove.

Yan Yablonovskiy 🇺🇦 (Jan 15 2026 at 10:25):

This may be of some historical interest to you : https://en.wikipedia.org/wiki/QED_manifesto

Bbbbbbbbba (Jan 15 2026 at 11:03):

Franz Huschenbeth said:

I mainly introduced the concept to contemplate how the workflow of a mathematician might change by thinking about the extreme end of autoproving capabilities.

About the Go Singularity, I would argue that it was reached once our best humans were beaten, and certainly after the introduction of AlphaZero. I say that because the Singularity is from the perspective of humans: it is a point at which we can't tell the difference. It would not matter to the best human whether they played the very best AI system in Go or a previous iteration that was already past the Go Singularity. I don't mean Singularity in the sense of instantaneous progress.

That being said, I also think we are far from a Math Singularity, given the uncertainties around autoproving progress and the fact that math has deceptively simple statements which are very hard to prove.

As a matter of fact, I believe it does matter. In general, newer models can give top human players more handicap stones.

Franz Huschenbeth (Jan 15 2026 at 12:13):

Similarly, we can always state further formalized statements and questions of a harder nature. Interesting.

I do agree that the concept is more open-ended, more of a range than a single definite inflection point. It makes more sense to frame it around the speed at which progress can be made, which, as you point out, is not superb in the game of Go. It remains to be seen whether the increased interest and budget being poured into accelerating math AI systems is enough to overcome the kind of slowdown that Go AI systems have experienced.

Anh Nguyễn (Jan 15 2026 at 13:26):

Maybe (just my humble opinion) mathematics will come to a stage where we don’t have to worry about proofs and can just focus on higher ideas

Bryan Wang (Jan 15 2026 at 14:32):

Anh Nguyễn said:

Maybe (just my humble opinion) mathematics will come to a stage where we don’t have to worry about proofs and can just focus on higher ideas

In other words, physics :wink:

Alex Kontorovich (Jan 15 2026 at 15:29):

I looked at your post, and have a few comments.

  • Everyone is of course welcome to define things however they like, but just FYI my definition of "autoformalization" is not "the act of taking an informal statement and transferring it to Lean" (which I would call "auto-translation"), but rather taking the statement and its natural language proof and formalizing all of that in one fell swoop. (Or with periodic human intervention, which I call "quasi-autoformalization". This is much closer to the current workflow, and would be sufficient for my purposes.)
  • Also, regarding "A notable observation is that, autoformalization lacks inherent verification and cannot be fully automated": Szegedy's counterpoint is I think very interesting to ponder. The verification he proposes is the long-term stability and success of the theory, to formally prove everything we want to be true (and not more)!
  • The "Singularity" you point to and what it might mean is exactly the subject of Gowers's paper from the year 2000. (I always have to resist singing that phrase to the Conan O'Brien tune...) He discussed all of this in detail long ago, and it's worth returning there from time to time, to see how right he's been on so many other things he's said...

Franz Huschenbeth (Jan 15 2026 at 18:54):

@Alex Kontorovich I very much appreciate you taking the time and giving me comments on my post.

  • I have changed "autoformalization" to "autotranslation" in my post; I agree that your definitions are a better fit.
  • I do see and appreciate the iterative aspect of Szegedy's counterpoint, which speaks for the robustness of the mathematical knowledge corpus. But the value provided by a mathematician will lie in verifying these formalized statements, making sure that as few errors as possible are introduced, just as it currently is with respect to proofs. This is especially true if we think about frontier research being done in Lean, where the "translation" then happens mainly from mind to formalized statement. Part of my point here is that otherwise we could simply run everything through autotranslation systems, and the only value provided would be having thought of doing it for a specific problem, if it has not already been done. I think you have also talked about this point in terms of accountability, and the analogy between self-driving cars and mathematicians formalizing statements.
  • I will try to get my hands on a copy to read it in depth (I now realize I have not watched enough Conan O'Brien to know exactly what the tune is).

Last updated: Feb 28 2026 at 14:05 UTC