Zulip Chat Archive

Stream: Machine Learning for Theorem Proving

Topic: CICM Machine Learning and Theorem Proving Session


Jason Rute (Jul 20 2020 at 13:02):

Next week Wednesday (July 29), there is a Machine Learning and Theorem Proving session of the Conference on Intelligent Computer Mathematics. The conference is now virtual (all times below are UTC+2). You must, however, register for free using a Google form. I'm going to take a half-day off of work to attend that session. The line-up of talks is pretty great:

Jason Rute (Jul 20 2020 at 13:02):

  • 15:00–16:00 CEST (UTC+2) @Christian Szegedy: (Invited Talk) A Promising Path Towards Autoformalization and General Artificial Intelligence
    • No abstract, but it looks to be quite provocative. I'd attend just to see this.

Jason Rute (Jul 20 2020 at 13:02):

  • 16:30–16:45 CEST (UTC+2) @Lasse, @Josef Urban, and Herman Geuvers: The Tactician: A Seamless, Interactive Tactic Learner and Prover for Coq
    • Abstract: We present Tactician, a tactic learner and prover for the Coq Proof Assistant. Tactician helps users make tactical proof decisions while they retain control over the general proof strategy. To this end, Tactician learns from previously written tactic scripts and gives users either suggestions about the next tactic to be executed or altogether takes over the burden of proof synthesis. Tactician’s goal is to provide users with a seamless, interactive, and intuitive experience together with robust and adaptive proof automation. In this paper, we give an overview of Tactician from the user’s point of view, regarding both day-to-day usage and issues of package dependency management while learning in the large. Finally, we give a peek into Tactician’s implementation as a Coq plugin and machine learning platform.
    • Proceedings paper: https://link.springer.com/chapter/10.1007/978-3-030-53518-6_17
    • Project website: https://coq-tactician.github.io
    • This seems to be the project I used to call TacticToe for Coq (discussed in this Zulip thread). I'm glad they gave it its own name! I've been talking privately with Lasse, and it looks like there are some good things coming out of this project. From the abstract, it seems that they have both improved the system and have been trying it out with users of Coq. I'd love to see what the feedback was! (A toy sketch of the tactic-suggestion idea follows below.)
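
To make the suggestion mechanism concrete: a learner like this maps features of the current proof state to tactics that worked on similar states in the past. Here is a minimal k-nearest-neighbor sketch of that idea. To be clear, this is my own toy illustration with made-up features and tactics, not Tactician's actual code or feature extraction.

```python
# Toy illustration (not Tactician's code) of learning tactic suggestions
# from past proof scripts via k-nearest neighbors. Proof states are reduced
# to bags of features (here: plain identifier strings); a real system
# extracts features from the actual Coq terms.
from collections import Counter

def jaccard(a: set, b: set) -> float:
    """Similarity between two feature sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def suggest_tactics(examples, goal_features, k=5):
    """Rank tactics by how often they were used on the k most
    similar previously seen proof states."""
    neighbors = sorted(examples,
                       key=lambda ex: jaccard(ex[0], goal_features),
                       reverse=True)[:k]
    votes = Counter(tactic for _, tactic in neighbors)
    return [t for t, _ in votes.most_common()]

# Hypothetical training data: (proof-state features, tactic applied there).
examples = [
    ({"nat", "le", "S"}, "induction n"),
    ({"nat", "plus", "eq"}, "ring"),
    ({"le", "S", "plus"}, "lia"),
    ({"list", "app", "eq"}, "induction l"),
]

print(suggest_tactics(examples, {"nat", "le", "plus"}))
```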

Jason Rute (Jul 20 2020 at 13:03):

  • 16:45–17:00 CEST (UTC+2) Yutaka Nagashima: Simple Dataset for Proof Method Recommendation in Isabelle/HOL (Dataset Description)
    • Abstract: Recently, a growing number of researchers have applied machine learning to assist users of interactive theorem provers. However, the expressive nature of underlying logics and esoteric structures of proof scripts impede machine learning practitioners from achieving a large scale success in this field. In this data showcase, we present a simplified dataset that represents the essence of tactic-based interactive theorem proving in Isabelle/HOL. Our simple data format allows machine learning practitioners to test their machine learning algorithms for proof tactic recommendation in Isabelle/HOL, even if they are unfamiliar with theorem proving.
    • Proceedings paper: https://link.springer.com/chapter/10.1007/978-3-030-53518-6_21
    • This project has been discussed before in this Zulip thread. (A toy example of consuming such a dataset follows below.)
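
The point of a dataset like this is that recommendation reduces to ordinary supervised learning: proof states become feature vectors and proof methods become labels, so any off-the-shelf classifier applies. Here is a trivial nearest-neighbor baseline, under an entirely hypothetical line format (method name followed by binary meta-features; see the paper for the real format):

```python
# Sketch of consuming a proof-method-recommendation dataset.
# The line format below is made up for illustration: each line holds the
# name of the proof method used, then binary meta-features of the state.
from collections import Counter

raw = """\
auto 1 0 1 0
induct 0 1 0 1
auto 1 0 0 0
simp 1 1 0 0
induct 0 1 1 1
"""

data = [(line.split()[0], tuple(map(int, line.split()[1:])))
        for line in raw.strip().splitlines()]

def recommend(features, k=3):
    """Rank methods by the k Hamming-nearest training examples."""
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    nearest = sorted(data, key=lambda ex: dist(ex[1], features))[:k]
    return Counter(method for method, _ in nearest).most_common()

print(recommend((0, 1, 0, 1)))  # -> induct ranked first
```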

Jason Rute (Jul 20 2020 at 13:03):

  • 17:00–17:15 CEST (UTC+2) Thibault Gauthier: Tree Neural Networks for HOL4
    • Abstract: We present an implementation of tree neural networks within the proof assistant HOL4. Their architecture makes them naturally suited for approximating functions whose domain is a set of formulas. We measure the performance of our implementation and compare it with other machine learning predictors on the tasks of evaluating arithmetical expressions and estimating the truth of propositional formulas.
    • Proceedings paper: https://link.springer.com/chapter/10.1007/978-3-030-53518-6_18 (a toy sketch of the tree-NN idea follows below)
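
The architecture described in the abstract is the classic recursive one: a formula is embedded bottom-up, with one learned layer per operator combining the embeddings of its subterms, and a head on the root embedding makes the prediction. A minimal untrained forward pass as a sketch (my illustration in Python/NumPy, not the HOL4 implementation):

```python
# Minimal forward pass of a tree neural network over propositional formulas.
# Weights are random here; in the paper they are trained end-to-end on
# tasks such as estimating the truth of a formula.
import numpy as np

DIM = 8
rng = np.random.default_rng(0)
ops = {op: rng.normal(size=(DIM, 2 * DIM)) for op in ("and", "or", "imp")}
leaves = {v: rng.normal(size=DIM) for v in ("p", "q", "r")}
head = rng.normal(size=DIM)  # scores the root embedding

def embed(tree):
    """tree is either a variable name or a triple (op, left, right)."""
    if isinstance(tree, str):
        return leaves[tree]
    op, left, right = tree
    x = np.concatenate([embed(left), embed(right)])
    return np.tanh(ops[op] @ x)  # one learned layer per operator

formula = ("imp", ("and", "p", "q"), "p")  # (p /\ q) -> p
print(float(head @ embed(formula)))  # trained, this would predict truth
```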

Jason Rute (Jul 20 2020 at 13:04):

  • 17:15–17:30 CEST (UTC+2) @Bartosz Piotrowski and @Josef Urban: Guiding Connection Tableau by Recurrent Neural Networks
    • Abstract: We present an experiment on applying a sequence-to-sequence recurrent neural system for learning clause choices in the extension steps in the connection tableau proof calculus. The neural system encodes a sequence of literals (or clauses) from a current branch from a partial proof tree to a hidden state (represented as a real vector); using it the system selects a clause for extending the proof tree. We describe the data used for the training, the training setup and the results. We compare the recurrent neural approach with another learning system -- gradient boosted trees. We present also an experiment in which the neural system does not select the next clause for the extension step, but tries to encode it from the available logical symbols, which is in a sense a conjecturing task.
    • Proceedings paper: https://link.springer.com/chapter/10.1007/978-3-030-53518-6_23 (a toy sketch of the clause-selection idea follows below)
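
The mechanism in the abstract is easy to sketch: an RNN folds the sequence of literals on the current branch into a hidden vector, and candidate clauses are scored against that vector to pick the next extension step. A toy untrained version follows; the literals, clause names, and plain-RNN cell are my own simplifications, while the paper uses a trained sequence-to-sequence system.

```python
# Toy sketch (not the authors' code): encode a tableau branch with an RNN,
# then score candidate clauses for the next extension step.
import numpy as np

DIM = 16
rng = np.random.default_rng(1)
vocab = {t: rng.normal(size=DIM) for t in ("P(x)", "~Q(x)", "Q(a)", "R(x,y)")}
W_h = rng.normal(size=(DIM, DIM)) * 0.1  # recurrent weights
W_x = rng.normal(size=(DIM, DIM)) * 0.1  # input weights
clause_emb = {c: rng.normal(size=DIM) for c in ("c1", "c2", "c3")}

def encode(branch):
    """Fold the branch's literal sequence into a hidden state."""
    h = np.zeros(DIM)
    for lit in branch:
        h = np.tanh(W_h @ h + W_x @ vocab[lit])
    return h

def select_clause(branch):
    h = encode(branch)
    scores = {c: float(e @ h) for c, e in clause_emb.items()}
    return max(scores, key=scores.get)

print(select_clause(["P(x)", "~Q(x)"]))  # highest-scoring candidate clause
```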

Jason Rute (Jul 20 2020 at 13:04):

  • 17:30–17:45 CEST (UTC+2) @Josef Urban: First Datasets and Experiments with Neural Conjecturing
    • Abstract: We describe several datasets and first experiments with creating conjectures by neural methods. The datasets are based on the Mizar Mathematical Library processed in several forms and the problems extracted from it by the MPTP system and proved by the E prover using the ENIGMA guidance. The conjecturing experiments use the Transformer architecture and in particular its GPT-2 implementation.
    • Proceedings paper: https://link.springer.com/chapter/10.1007/978-3-030-53518-6_24
    • See this Zulip thread for a lot of interesting discussion on this project and a similar one by the N2Formal team. (A sketch of the GPT-2 sampling step follows below.)
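
Mechanically, the generation side of this is just language-model sampling: fine-tune GPT-2 on Mizar-derived text, then prompt it and sample continuations as candidate conjectures. Here is a sketch of the sampling step using the Hugging Face transformers library and the stock pretrained gpt2 model; the prompt is a made-up Mizar-style fragment, and the real experiments of course fine-tune on the Mizar corpus first.

```python
# Sampling candidate "conjectures" from a GPT-2 language model.
# With the stock pretrained model this produces English-flavored noise;
# the experiments in the paper fine-tune on Mizar-derived corpora first.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "theorem Th1: for n being Nat holds"  # made-up Mizar-style prompt
inputs = tokenizer(prompt, return_tensors="pt")

out = model.generate(
    **inputs,
    max_length=40,
    do_sample=True,            # sample for variety rather than greedy decode
    top_k=50,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in out:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```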

Christian Szegedy (Jul 20 2020 at 18:45):

Jason Rute said:

  • 15:00–16:00 CEST (UTC+2) Christian Szegedy: (Invited Talk) A Promising Path Towards Autoformalization and General Artificial Intelligence
    • No abstract, but it looks to be quite provocative. I'd attend just to see this.

I just try to be a conservative realist, but many perceive my views as provocative. In the past year, my expectations have only been adjusted up, rather than down.

Jason Rute (Jul 21 2020 at 23:05):

I just realized that @Christian Szegedy's invited talk also has an (invited) paper here: https://link.springer.com/chapter/10.1007/978-3-030-53518-6_1
Actually, I now realize this paper comes from the preprint on the HOList website, on which we've already had a fruitful discussion (flamewar???) in this Zulip conversation. It is a very interesting read.

Jason Rute (Jul 21 2020 at 23:05):

For what it is worth, I also have high hopes for autoformalization and other applications of machine learning to the understanding of mathematics. So many things which have come out in the last year alone could be directly applied to the task (advances in language modeling, reinforcement learning, symbolic reasoning, and program synthesis to name a few). It's pretty exciting stuff!

Jason Rute (Jul 21 2020 at 23:05):

I still, however, find it difficult to believe that any computer agent in the next 10 years is going to be able to "understand" and formalize my math research. :) I guess we'll have to wait and see...


Last updated: Dec 20 2023 at 11:08 UTC