Topic: Hammer talk
Jason Rute (Jan 12 2020 at 14:54):
@Gabriel Ebner I am sorry I missed your hammer talk. I look forward to seeing the video later. In the meantime I read the slides and have some questions (which might have been answered in the Q&A). First, I want to say that this looks great!
- How useful would it be to improve the premise selection algorithm (using possibly a trained model)? Do you think that would be a benefit or is TF-IDF-based premise selection currently good enough?
- I didn't realize that Lean tactics could call C++ code. Is this hacked into the Lean source code? How easy would it be to swap out your tactic with another custom tactic using a mix of Lean and C++ code (or all C++ code)? (For example, any neural-network-based tactic would probably need to call TensorFlow or PyTorch.)
- Less extreme, how easy would it be to swap your premise selector with another?
- Finally, how easy would it be to extract the information used by the hammer? Specifically, there could be some nice datasets that could come out of this. The one I have in mind is (a) the statements of all the theorems you attempted to prove, (b) the available premises to use for that theorem, (c) the premises selected by TF-IDF cosine similarity, (d) the ones used by the SMT solver, and (e) the ones used by the other tested tactics (like `library_search`). (And if possible and easy, (f) the ones used in the original proof.)
Reid Barton (Jan 12 2020 at 15:16):
About your second question: I'm pretty sure the hammer tactic calls C++ code the same way the built-in tactics do. That is, Gabriel modified Lean itself to add some C++ code and export it to Lean via a meta constant.
Jesse Michael Han (Jan 12 2020 at 15:56):
for the last question, (iirc) the hammer currently translates theorems in mathlib to TPTP, and it would be easy to use his setup to extract a dataset (of just theorem statements) in that format
Gabriel Ebner (Jan 13 2020 at 02:40):
How useful would it be to improve the premise selection algorithm
There is lots of room for improvement here. Slide 28 shows the results we get if we instead select the lemmas contained in the proof term of the existing proof: that doubles the success rate on average.
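To make the baseline concrete, here is a minimal sketch of TF-IDF premise selection with cosine similarity, as discussed above. This is illustrative Python only; the hammer's actual selection is implemented in C++ (`feature_search.cpp`), and the feature extraction here (plain token lists) is a stand-in for whatever features the real implementation uses.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute a sparse TF-IDF vector (dict term -> weight) for each
    document, where a document is a list of feature tokens."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                      # document frequency
    idf = {t: math.log(n / df[t]) for t in df}   # inverse document frequency
    return [{t: tf * idf[t] for t, tf in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    """Cosine similarity of two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def select_premises(goal_features, lemma_features, k=10):
    """Return the indices of the k lemmas whose TF-IDF feature vectors
    are most similar to the goal's."""
    vecs = tfidf_vectors([goal_features] + lemma_features)
    goal_vec, lemma_vecs = vecs[0], vecs[1:]
    ranked = sorted(enumerate(lemma_vecs),
                    key=lambda iv: cosine(goal_vec, iv[1]),
                    reverse=True)
    return [i for i, _ in ranked[:k]]
```

A trained model would replace the `cosine` ranking with a learned scoring function; the surrounding plumbing could stay the same.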
Gabriel Ebner (Jan 13 2020 at 02:48):
Is this hacked into the Lean source code?
Yes. It's in this file in my branch: https://github.com/gebner/lean/blob/05a7e979a1d8f87a2c4c6266dec0efd9bb03afa9/src/library/feature_search.cpp
How easy would it be to swap out your tactic with another custom tactic using a mix of Lean and C++ code (or all C++ code)?
If you have C++ code, you'd need to modify the Lean source code to call it (but that's not too hard, see https://github.com/gebner/lean/blob/05a7e979a1d8f87a2c4c6266dec0efd9bb03afa9/src/library/feature_search.cpp#L242-L258 for the part that does this). Another option is to call an external program (and parse its stdout). In either case you'd need to modify the Lean part to call the new selection code (not too hard either, but I haven't made an API for plugging in other lemma selection algorithms).
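The external-program option mentioned above can be sketched as follows. This is an illustrative wrapper, not Gabriel's setup: the protocol (one lemma name per line in, one ranked name per line out) and the function name are assumptions, and in the usage example `sort` is just a stand-in for a real selector binary.

```python
import subprocess

def run_external_selector(cmd, lemma_names):
    """Feed candidate lemma names to an external selector process
    (one name per line on stdin) and parse its ranking from stdout.
    The line-oriented protocol here is a hypothetical convention."""
    proc = subprocess.run(cmd,
                          input="\n".join(lemma_names),
                          capture_output=True, text=True, check=True)
    return [line.strip() for line in proc.stdout.splitlines() if line.strip()]
```

For example, with `sort` standing in for the selector, `run_external_selector(["sort"], ["nat.add_comm", "list.map_map"])` returns the names in sorted order. The same shape works from Lean via its process-spawning facilities, with the ranking parsed back out of stdout.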
Simon Cruanes (Jan 13 2020 at 02:53):
@Gabriel Ebner have you considered, longer term, cooperating with prover authors so that they output more detailed proofs?
For example, as far as I can tell, E-HO could possibly be modified to output not just the DAG of clauses produced by superposition, but also the substitutions used at each step. That makes proof reconstruction much easier since steps are either instantiations or essentially ground inferences.
I imagine that it'd incur a slowdown but that's a tradeoff.
Gabriel Ebner (Jan 13 2020 at 02:56):
Finally, how easy would it be to extract the information used by the hammer?
For the evaluation in the slides I've used this wonderfully hacky piece of code: https://github.com/gebner/mathlib/blob/f4120984839677564fabf5b80f7203374dc832ca/src/tactic/hammer/do_eval.lean#L80 It prints lots of stuff to stderr in a very ad-hoc format. All of the ones you have in mind are somewhere in that file, e.g. `axs` on the line I linked to is the list of lemmas selected by the lemma selection, and `lems.map prod.fst` is the list of lemmas used by the external prover.
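A dataset row collecting the fields Jason lists could look like the following. This is purely a hypothetical schema for illustration; the field names are invented and do not correspond to the ad-hoc stderr format of `do_eval.lean`.

```python
from dataclasses import dataclass, field

@dataclass
class HammerEvalRecord:
    """One hypothetical row of the extracted dataset; names are
    illustrative, not the hammer's actual output format."""
    theorem_statement: str                 # (a) the goal that was attempted
    available_premises: list               # (b) all premises in scope
    selected_premises: list                # (c) chosen by TF-IDF cosine similarity
    prover_used_premises: list             # (d) used by the external prover
    other_tactic_premises: dict = field(default_factory=dict)   # (e) per tactic
    original_proof_premises: list = field(default_factory=list) # (f) from the proof term
```

Parsing the existing stderr output into such records would be the main extraction work.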
Gabriel Ebner (Jan 13 2020 at 03:20):
@Simon Cruanes In general, there are two things I'd like from provers: 1) a standardized proof output format, and 2) more detailed information for the inferences (in that order). IMHO the hard part about correctly parsing detailed proofs is that you'd need to write and test code for each prover, and then keep it up to date whenever a new version is released. (I haven't looked too much at the proof output of E-HO or zipperposition. It would be nice if you didn't have to do full HO unification to parse the proofs. If it's just FO unification then I'm not too concerned.) It's of course not impossible, but I've never been motivated enough to do it. (In GAPT we have such a parser for prover9, of course written by somebody else. I spent half a year fixing bugs there, and it still cannot parse the clausification part of the proof.)
Gabriel Ebner (Jan 13 2020 at 03:24):
Longer term it might make sense to consider parsing detailed proofs for the hammer, depending on how well reconstruction works. It adds quite a bit of complexity though, so I'm not too eager to implement it: 1) the parser is a lot of work, and 2) if I want to do transformations such as turning type families parameterized by values into simple types (i.e., turning `zmod n` into `Σ n, zmod n`), I'd need to construct actual Lean proofs of these preprocessing steps.
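The packaging step described above can be sketched in Lean 3; the definition names here are hypothetical, only the mathlib type family `zmod : ℕ → Type` is real.

```lean
import data.zmod.basic

-- Illustrative only: collapse the value-indexed family `zmod : ℕ → Type`
-- into a single simple type by pairing each element with its index.
def zmod_packed : Type := Σ n : ℕ, zmod n

-- An element of `zmod n` becomes a dependent pair in the packed type.
def pack_zmod {n : ℕ} (x : zmod n) : zmod_packed := ⟨n, x⟩
```

The cost Gabriel mentions is that every lemma about `zmod n` used by the prover must then be transported along `pack_zmod` by an actual Lean proof.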
Last updated: May 13 2021 at 06:15 UTC