Kevin Lacker (Oct 06 2020 at 15:52):
Hey, there's a couple papers on openreview for ICLR 2021 that seem relevant to the task of solving IMO problems in Lean that I thought people might be interested in.
Learning Axioms to Compute Verifiable Symbolic Expression Equivalence Proofs Using Graph-to-Sequence Networks - https://openreview.net/forum?id=PkqwRo2wjuW
This paper learns a map from a pair of trees of symbols to a sequence of rewrite rules that can transform one of the trees into the other. This seems promising, since it could be analogous to simplifying a Lean formula. Or you could use the model's probabilities for the first steps of the sequence as input to a nondeterministic tactic.
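To make the task concrete: the model is approximating a search for a rewrite sequence between two expression trees. A minimal sketch, with expressions as nested tuples and a brute-force BFS standing in for the learned model (the two rules and their names are illustrative, not from the paper):

```python
from collections import deque

# Toy rewrite system over expressions like ('+', 'x', 0).
# Illustrative rules; the paper learns to *predict* such sequences,
# here BFS plays the role of the oracle the model approximates.

def apply_add_zero(e):
    # x + 0  ->  x
    if isinstance(e, tuple) and e[0] == '+' and e[2] == 0:
        return e[1]
    return None

def apply_commute(e):
    # a + b  ->  b + a
    if isinstance(e, tuple) and e[0] == '+':
        return ('+', e[2], e[1])
    return None

RULES = {'add_zero': apply_add_zero, 'commute': apply_commute}

def rewrites(e):
    """Yield (rule_name, new_expr) for every single-step rewrite of e,
    applied at the root or inside any subterm."""
    for name, rule in RULES.items():
        out = rule(e)
        if out is not None:
            yield name, out
    if isinstance(e, tuple):
        for i, sub in enumerate(e[1:], start=1):
            for name, new_sub in rewrites(sub):
                yield name, e[:i] + (new_sub,) + e[i + 1:]

def find_proof(src, dst, max_steps=6):
    """Breadth-first search for a rewrite sequence turning src into dst."""
    queue = deque([(src, [])])
    seen = {src}
    while queue:
        expr, path = queue.popleft()
        if expr == dst:
            return path
        if len(path) >= max_steps:
            continue
        for name, nxt in rewrites(expr):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None

print(find_proof(('+', 0, 'x'), 'x'))  # -> ['commute', 'add_zero']
```

The appeal of the learned version is that the graph-to-sequence model replaces this blind search, and each predicted rewrite step can still be checked independently, so the output is verifiable.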
Do Transformers Understand Polynomial Simplification? - https://openreview.net/forum?id=yZkF6xqhfQ
The answer here seems to be roughly, "no". My reading is that the authors take an existing polynomial-simplification algorithm and use it to train a transformer to do the same thing, and it doesn't quite work. Not entirely surprising, since I believe even simpler tasks like long division cannot be learned reliably by a transformer. But this is useful for understanding the limitations of an AI component in a mathematical process.
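For a sense of what the training data for an experiment like this looks like: you need (unsimplified, simplified) polynomial pairs generated by a reference algorithm. A hedged sketch, with a tiny hand-rolled expander producing (factored string, expanded coefficients) pairs; the representation and sampling scheme are illustrative guesses, not the paper's actual setup:

```python
import random

# Generate (factored, expanded) polynomial pairs of the kind a
# seq2seq model would be trained to map between.

def mul_linear(coeffs, r):
    """Multiply a polynomial (coefficients, lowest degree first) by (x - r)."""
    out = [0] * (len(coeffs) + 1)
    for i, a in enumerate(coeffs):
        out[i + 1] += a      # a * x term
        out[i] -= r * a      # a * (-r) term
    return out

def expand_roots(roots):
    """Coefficients (lowest degree first) of prod(x - r) over the roots."""
    coeffs = [1]
    for r in roots:
        coeffs = mul_linear(coeffs, r)
    return coeffs

def sample_pair(rng, degree=3, lo=-5, hi=5):
    """Return (factored_str, expanded_coeffs) for a random polynomial."""
    roots = [rng.randint(lo, hi) for _ in range(degree)]
    factored = '*'.join(f'(x - ({r}))' for r in roots)
    return factored, expand_roots(roots)

rng = random.Random(0)
src, coeffs = sample_pair(rng)
print(src, '->', coeffs)
```

The interesting (and discouraging) part of the paper is that even with unlimited clean supervision like this, the transformer doesn't reliably reproduce the algorithm's behavior.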
Jason Rute (Oct 06 2020 at 22:32):
Thanks for the summaries! If you want to add more details about these papers or other ICLR papers, I'd love to see them and discuss them more. I often post summaries of papers on the #Machine Learning for Theorem Proving stream, which has readership both in and outside of the Lean community. (For example, it is where a lot of HOList questions get fielded.) It's a lot of work to read and understand a paper so I'm always happy if others want to contribute their insights.
Last updated: Aug 05 2021 at 03:12 UTC