Zulip Chat Archive

Stream: Machine Learning for Theorem Proving

Topic: Worry: AI doing most of the coding but no real acceleration


Deleted User 968128 (Sep 26 2025 at 20:12):

AI is starting to take over a lot of coding and science and is being mentioned more and more in new 'discoveries'.

I have a very big concern that, between enfeeblement risk, the pain of disruption from true breakthroughs, and general human laziness, we won't see any increase in the pace of fundamentally valuable discovery - just more and more AI.

I believe this would be a nightmarish outcome, quickly leading to an almost slavish reliance on AI with little to no benefit for humanity.

Deleted User 968128 (Sep 26 2025 at 20:23):

I think the proper lens for looking at AI is slavery: consider the decline of slave societies over time as they got further and further away from labor.

Recently, while it's painful, I do applaud to a degree the desire to 'onshore' labor by a certain western leader. It's worth noting that it was a bipartisan effort applauded by his predecessor. The offshoring had hollowed out their ability to do important work. Warren Buffett wrote a very good essay on this a couple of decades ago - https://www.berkshirehathaway.com/letters/growing.pdf

Kevin Buzzard (Sep 27 2025 at 11:21):

It's not clear to me that this is related to theorem provers. I know you're new to the community here Tim but usually we stick to Lean on this forum (or AI related to Lean) rather than general speculation about AI.

Deleted User 968128 (Sep 27 2025 at 19:46):

Point taken, but I hope folks don't underestimate the dangers of enfeeblement risk caused by auto-formalization tools. As someone who's more familiar with how LLMs work across multiple domains and less familiar with this one, I'd have to say that theorem proving seems particularly vulnerable. I have more to say, but it doesn't seem to be a topic folks are interested in, so I'll leave it there.

Bryan Wang (Sep 27 2025 at 23:13):

There's a thread about "dangers of auto-formalization": https://leanprover.zulipchat.com/#narrow/channel/219941-Machine-Learning-for-Theorem-Proving/topic/Danger.20of.20Autoformalizers.20.28e.2Eg.2E.20Gauss.29/with/539900591

Deleted User 968128 (Sep 28 2025 at 04:35):

If someone moves/joins this thread into that one, I would have no complaints.

However, I hope people think about how proving theorems manually provides them with important insight that might start to fade as AI takes over. The density of enfeeblement risk in this domain is much greater than in others.

As an expert, it's not easy to see yourself losing those insights. As a novice, I can see how I might never gain them.

Bryan Wang (Sep 28 2025 at 11:13):

I feel like there might still be some complaints if this entire thread is moved there... I linked it just to say that there are people here (including myself) who might agree fully with your last two messages, and who are discussing this in the specific context of auto-formalization. Perhaps something missing from that discussion was how we can actually mitigate these risks in the world of formalization - e.g. by emphasizing code quality/organization/maintenance/'golfing' (things which are already being done in mathlib) when teaching or introducing Lean, rather than "just get the theorem proved and that's it"?

(deleted) (Sep 28 2025 at 15:47):

#Machine Learning for Theorem Proving > Theorem proving olympiad

Why don't we hold a contest to find out?

(deleted) (Sep 28 2025 at 15:48):

To find out whether AI really accelerates formalization or only serves as a crutch


Last updated: Dec 20 2025 at 21:32 UTC