Zulip Chat Archive
Stream: Machine Learning for Theorem Proving
Topic: DeepMind and Navier Stokes
Anh Nguyễn (Jun 26 2025 at 23:47):
Leni Aniva (Jun 26 2025 at 23:51):
I'll believe it when I see it. Why don't they tackle some less-researched problems first, like the invariant subspace problem, before diving into this one?
Anh Nguyễn (Jun 27 2025 at 00:49):
They said they have been working on that problem for 3 years now.
Justin Asher (Jun 27 2025 at 01:04):
Seems to indicate some specialized AI method, unless they pivoted to more general LLM usage.
Anh Nguyễn (Jun 27 2025 at 01:09):
In the article they mentioned the use of a neural theorem prover.
Thomas Zhu (Jun 27 2025 at 01:21):
No, the article does not mention neural theorem provers
Anh Nguyễn (Jun 27 2025 at 02:42):
Sorry, not in that article but in another article.
Justin Asher (Jun 27 2025 at 02:49):
Reference?
Eric Taucher (Jun 27 2025 at 07:08):
In searching for neural theorem provers I did not find it, but I did find:
Machine Learning in PDE: Discovering New, Unstable Solutions By Javier Gómez-Serrano (May 2025) (Video Lecture)
The lecture notes the use of
- AlphaEvolve
- Physics-Informed Neural Networks (PINN) (pdf) (see the sketch below)
Deep Learning Poised to ‘Blow Up’ Famed Fluid Equations (April 2022) (Quanta Magazine)
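For anyone new to PINNs, the core idea fits in a few lines of JAX: train a small network u(x, t) so the PDE residual vanishes at sampled collocation points. A minimal sketch, using 1D viscous Burgers as a stand-in equation; nothing here is from the lecture or the paper, and all names and sizes are made up:
```python
# Minimal PINN sketch: make the residual of 1D viscous Burgers,
#   u_t + u u_x - nu u_xx = 0,
# vanish at random collocation points. Toy illustration only.
import jax
import jax.numpy as jnp

def init_params(key, sizes=(2, 32, 32, 1)):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def u(params, x, t):
    h = jnp.array([x, t])
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]                       # scalar u(x, t)

def residual(params, x, t, nu=0.01):
    u_t  = jax.grad(u, argnums=2)(params, x, t)
    u_x  = jax.grad(u, argnums=1)(params, x, t)
    u_xx = jax.grad(jax.grad(u, argnums=1), argnums=1)(params, x, t)
    return u_t + u(params, x, t) * u_x - nu * u_xx

def loss(params, xs, ts):
    r = jax.vmap(residual, in_axes=(None, 0, 0))(params, xs, ts)
    return jnp.mean(r ** 2)                     # + initial/boundary-condition terms

key, kx, kt = jax.random.split(jax.random.PRNGKey(0), 3)
params = init_params(key)
xs = jax.random.uniform(kx, (256,), minval=-1.0, maxval=1.0)
ts = jax.random.uniform(kt, (256,), minval=0.0, maxval=1.0)
grads = jax.grad(loss)(params, xs, ts)          # feed to any gradient optimizer
```
The "physics-informed" part is just that the loss is the PDE residual itself, so the network is pulled toward an actual solution rather than fitting data.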
Eric Taucher (Jun 27 2025 at 09:08):
ArXiv list for Gómez-Serrano, J
Eric Taucher (Jun 27 2025 at 10:10):
Thomas Zhu said:
No, the article does not mention neural theorem provers
"Neural theorem provers" is a real term; I was not sure myself, so I checked.
Deciding What Game to Play, What Mathematics Problem to Solve By Katie Collins (May 2025) (Video Lecture)
The intro page shows the paper:
HyperTree Proof Search for Neural Theorem Proving (pdf)
That paper references a paper co-authored by @Jason Rute:
Proof Artifact Co-Training For Theorem Proving With Language Models (pdf)
We instrument Lean with a neural theorem prover driven by a Transformer language model and show that PACT improves theorem proving success rate on a held-out suite of test theorems from 32% to 48%.
Junyan Xu (Jun 27 2025 at 10:17):
Gowers also gave a talk at this conference (Mathematics for and by Large Language Models – 2025 Edition) at IHES.
Jason Rute (Jun 27 2025 at 11:21):
My impression from the first article is that the idea is to use something like AlphaEvolve to find initial conditions which empirically seem to blow up in simulations and then probably use those as data to understand the problem well enough to give a rigorous proof. They say that this mathematician did something like this (using his own custom problem-specific machine learning tool) for Euler’s equations, but only after someone else already showed they blow up.
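To make that pipeline concrete, here is a toy version of the search loop in plain Python/NumPy, with inviscid Burgers (which genuinely develops gradient singularities) standing in for Euler/Navier-Stokes and random mutation standing in for AlphaEvolve; every parameter below is made up for illustration:
```python
# Toy "search for blow-up" loop: evolve initial conditions to minimize the time
# at which max|u_x| exceeds a threshold. Inviscid Burgers stands in for the real
# equations, random mutation for AlphaEvolve; all parameters are illustrative.
import numpy as np

N, L, dt = 256, 2 * np.pi, 1e-3
x = np.linspace(0, L, N, endpoint=False)
dx = L / N

def time_to_blowup(coeffs, tmax=5.0, thresh=50.0):
    """Integrate u_t + u u_x = 0 with a crude one-sided scheme; return the
    first time max|u_x| exceeds thresh (np.inf if it never does)."""
    u = sum(c * np.sin((k + 1) * x) for k, c in enumerate(coeffs))
    for step in range(int(tmax / dt)):
        ux = (u - np.roll(u, 1)) / dx
        if np.max(np.abs(ux)) > thresh:
            return step * dt
        u = u - dt * u * ux
    return np.inf

rng = np.random.default_rng(0)
pop = [rng.normal(size=4) for _ in range(20)]   # genomes: low-mode Fourier coeffs
for gen in range(30):
    pop.sort(key=time_to_blowup)                # earlier blow-up = fitter
    survivors = pop[:5]
    pop = survivors + [c + 0.1 * rng.normal(size=4)
                       for c in survivors for _ in range(3)]
best = min(pop, key=time_to_blowup)
print("best coefficients:", best, "blow-up time:", time_to_blowup(best))
```
The second half of the pipeline, turning discovered candidates into a rigorous computer-assisted proof, is of course the hard part and looks nothing like this toy.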
Jason Rute (Jun 27 2025 at 11:33):
Also, if people aren't aware, Terence Tao (about a decade ago) proposed a different approach to showing NS blows up. The idea is to construct a self-replicating machine (a von Neumann machine) out of ideal fluid. That machine would make a smaller copy of itself and then transfer all its energy into that copy. Done recursively, this would lead to a singularity. While I don't know if anyone has followed up on that approach, if they did, Lean might be quite useful for bookkeeping all the components of this machine and keeping track of the error bounds.
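A back-of-envelope scaling calculation (my gloss, not Tao's actual argument) shows why the recursion forces a finite-time singularity:
```latex
% Each copy is smaller by a factor lambda < 1 and inherits all the kinetic
% energy E ~ U^2 L^3 of its parent (heuristic, constant density):
\[
  U_{k+1}^{2} (\lambda L_k)^{3} = U_k^{2} L_k^{3}
  \quad\Longrightarrow\quad
  U_{k+1} = \lambda^{-3/2}\, U_k .
\]
% Velocities grow geometrically while each stage's turnover time shrinks:
\[
  T_{k+1} \sim \frac{L_{k+1}}{U_{k+1}} = \lambda^{5/2}\, T_k ,
  \qquad
  \sum_{k \ge 0} T_k = \frac{T_0}{1 - \lambda^{5/2}} < \infty .
\]
```
So infinitely many stages finish in finite total time while the velocities diverge, i.e. a singularity; the entire difficulty is in building a fluid "machine" that actually realizes this, which is where the bookkeeping would explode.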
Eric Taucher (Jun 27 2025 at 13:16):
Jason Rute said:
Terence Tao (about a decade ago) proposed a different approach to showing NS blows up.
https://arxiv.org/pdf/1402.0290
https://www.quantamagazine.org/a-fluid-new-path-in-grand-math-challenge-20140224/
https://gilkalai.wordpress.com/2014/02/07/navier-stokes-fluid-computers/
Jason Rute (Jun 27 2025 at 13:18):
And https://m.youtube.com/watch?v=uGVMrFf_V0I
Junyan Xu (Sep 18 2025 at 22:05):
They just posted https://deepmind.google/discover/blog/discovering-new-solutions-to-century-old-problems-in-fluid-dynamics/
David Michael Roberts (Sep 18 2025 at 23:58):
Direct link to arXiv paper: https://arxiv.org/abs/2509.14185
I thought the formatting was weird when I saw this earlier. I noticed at least one author with a google.com email address, but it didn't click that it was DeepMind.
David Michael Roberts (Sep 19 2025 at 00:01):
There is certainly some language that feels very much "not mathematician-speak" in it, like "unstable singularities are exceptionally elusive; they require initial conditions tuned with infinite precision". Here's the abstract:
Whether singularities can form in fluids remains a foundational unanswered question in mathematics. This phenomenon occurs when solutions to governing equations, such as the 3D Euler equations, develop infinite gradients from smooth initial conditions. Historically, numerical approaches have primarily identified stable singularities. However, these are not expected to exist for key open problems, such as the boundary-free Euler and Navier-Stokes cases, where unstable singularities are hypothesized to play a crucial role. Here, we present the first systematic discovery of new families of unstable singularities. A stable singularity is a robust outcome, forming even if the initial state is slightly perturbed. In contrast, unstable singularities are exceptionally elusive; they require initial conditions tuned with infinite precision, being in a state of instability whereby infinitesimal perturbations immediately divert the solution from its blow-up trajectory. In particular, we present multiple new, unstable self-similar solutions for the incompressible porous media equation and the 3D Euler equation with boundary, revealing a simple empirical asymptotic formula relating the blow-up rate to the order of instability. Our approach combines curated machine learning architectures and training schemes with a high-precision Gauss-Newton optimizer, achieving accuracies that significantly surpass previous work across all discovered solutions. For specific solutions, we reach near double-float machine precision, attaining a level of accuracy constrained only by the round-off errors of the GPU hardware. This level of precision meets the requirements for rigorous mathematical validation via computer-assisted proofs. This work provides a new playbook for exploring the complex landscape of nonlinear partial differential equations (PDEs) and tackling long-standing challenges in mathematical physics.
The self-similarity of the solutions as mentioned in this makes me suspect something closer to what Tao was thinking of, as noted in an earlier comment in the thread.
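For context on the "self-similar" part: the self-similar ansatz collapses the time-dependent PDE into a time-independent profile system, which is what a network can then be trained to satisfy to machine precision. One common normalization, sketched here with a generic exponent alpha (the paper's conventions and its rate parameter may differ; I haven't checked):
```latex
% Self-similar ansatz: a fixed profile U, rescaled as t -> T.
\[
  u(x,t) = (T-t)^{-\alpha}\, U(y), \qquad
  y = \frac{x}{(T-t)^{1-\alpha}}, \qquad
  p(x,t) = (T-t)^{-2\alpha}\, P(y).
\]
% Substituting into incompressible Euler removes t entirely, leaving the
% stationary profile system that the networks/optimizer can target:
\[
  \alpha U + (1-\alpha)\,(y \cdot \nabla_y) U
  + (U \cdot \nabla_y) U + \nabla_y P = 0,
  \qquad \nabla_y \cdot U = 0,
\]
% with \|u(\cdot,t)\|_\infty \sim (T-t)^{-\alpha} \to \infty as t -> T.
```
Roughly, "unstable" then means the profile is a saddle of the rescaled dynamics, which would explain why generic forward simulations never land on it and why a high-precision solver for the profile equations is used instead.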
Jason Rute (Sep 19 2025 at 06:39):
That "exceptionally elusive" line made sense to me, especially in the context of numerical simulations and experimental mathematics.
Jason Rute (Sep 19 2025 at 06:40):
(Not that I’ve read the paper or know much about fluid differential equations.)
Alex Meiburg (Sep 19 2025 at 14:39):
The paper feels like it was written for people interested in the ML aspects of automated discovery and computer verification - which is entirely reasonable, coming from GDM. I hope (expect?) there will be a follow-up paper focusing on the solutions and the topology/configuration of the actual instabilities themselves.
Deleted User 968128 (Sep 24 2025 at 00:37):
Eric Taucher said:
The lecture notes the use of
- AlphaEvolve
- Physics-Informed Neural Networks (PINN) (pdf)
Deep Learning Poised to ‘Blow Up’ Famed Fluid Equations (April 2022) (Quanta Magazine)
The paper said: "We use small neural networks with thousands to tens of thousands of parameters."
"This (nearly) fully automated scheme, implemented using the open-source library kfac-jax [56], allows us to rapidly iterate on modeling choices without expensive hyperparameter sweeps. In Figure 4 we demonstrate the efficacy of this optimizer, which is able to discover solutions with maximum residuals of O(10^-8) within 50k iterations (≈ 3 A100 GPU hours), compared to standard optimization techniques."
This is something I haven't seen too often. Quite a step-change improvement.
They cite this for the Gauss-Newton choice: https://arxiv.org/pdf/2402.10680
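For anyone curious what that looks like mechanically, here is a damped Gauss-Newton step on stacked PDE residuals in JAX, a minimal sketch under my own assumptions; it is not the paper's code and not the kfac-jax API, and every name below is made up:
```python
# Damped Gauss-Newton on stacked residuals r(theta): illustrative sketch only.
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)    # float64: needed for ~1e-15 residuals

def gauss_newton_step(residual_fn, theta, damping=1e-12):
    """theta <- theta - (J^T J + mu I)^{-1} J^T r  (Levenberg-style damping)."""
    r = residual_fn(theta)                   # residuals at collocation points
    J = jax.jacfwd(residual_fn)(theta)       # shape (num_residuals, num_params)
    JTJ = J.T @ J + damping * jnp.eye(theta.size)
    return theta - jnp.linalg.solve(JTJ, J.T @ r)

# Toy usage: drive a linear, exactly solvable residual to machine epsilon.
A = jax.random.normal(jax.random.PRNGKey(0), (50, 10))
theta_true = jax.random.normal(jax.random.PRNGKey(1), (10,))
b = A @ theta_true
residual = lambda th: A @ th - b
theta = jnp.zeros(10)
for _ in range(3):
    theta = gauss_newton_step(residual, theta)
print(jnp.max(jnp.abs(residual(theta))))     # limited only by round-off error
```
For a linear residual this converges in essentially one step; the point in the nonlinear PDE setting is that near a good initialization, Gauss-Newton's fast local convergence is plausibly what lets the residual drop to round-off level, which first-order methods like Adam rarely achieve.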
Last updated: Dec 20 2025 at 21:32 UTC