Zulip Chat Archive
Stream: general
Topic: Verifying proof of a particular theorem
Kevin Shu (Jun 13 2020 at 17:18):
Is there a way to verify that a particular lemma has been proven in a particular lean file, in a way that is 'unhackable'? This would have applications, say, if a teacher wanted to have students submit some kind of homework in lean that would be verified automatically.
Patrick Massot (Jun 13 2020 at 17:22):
This has been discussed in the context of coding contests, but we don't have a bullet-proof workflow.
Patrick Massot (Jun 13 2020 at 17:22):
You can discuss this with @Donald Sebastian Leung. I don't remember what the other similar thing was.
Bhavik Mehta (Jun 13 2020 at 17:32):
The way used in Codewars is to have a separate file (which the student cannot edit) which contains the theorem statement, with the proof given by referencing the name of the student's proof; then use `#print axioms` on this.
Bhavik Mehta (Jun 13 2020 at 17:33):
This way you can verify that the student has proved what they were meant to prove, and the `#print axioms` list should only contain the valid axioms in Lean.
Bhavik Mehta (Jun 13 2020 at 17:33):
I don't know of a convincing hack for this
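A minimal sketch of such a check file in Lean 3, assuming the student's work lives in a file `submission.lean` and their proof is named `my_theorem` (the file name, declaration name and assigned statement here are illustrative only):

```lean
-- check.lean: controlled by the grader; the student only edits submission.lean
import .submission            -- hypothetical module name for the student's file

-- Restate the assigned statement verbatim and close it by naming the student's
-- declaration, so a submission cannot quietly prove a different statement.
theorem check : ∀ n : ℕ, n + 0 = n :=   -- example assignment
my_theorem

-- List every axiom the proof ultimately depends on; anything beyond Lean's
-- standard axioms (propext, classical.choice, quot.sound) is a red flag.
#print axioms check
```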
Donald Sebastian Leung (Jun 14 2020 at 02:45):
IIRC it was mentioned that the `#print axioms` method used by Codewars could still be subverted in principle by redefining the `#print axioms` notation. The other similar thing mentioned by @Patrick Massot is probably Proving for Fun, from which Codewars derived its testing setup (for Lean). The only difference in the setup between Codewars and Proving for Fun AFAIK is that for the latter, instead of using `#print axioms`, they inspect the output of running `leanchecker` on the check file, which closes off any known loopholes (there was a brief period where they discovered a loophole with `leanchecker` itself, but I believe it has been fixed since).
@Kevin Shu You may wish to refer to selected comments on Codewars/codewars-runner-cli#773 for an extended discussion on automating the process of checking Lean submissions and cheat detection / prevention techniques.
Jannis Limperg (Jun 14 2020 at 15:05):
Lean can do arbitrary I/O in tactics. It should be easy to use this to subvert any fully automated testing harness. The Proving for Fun method, for example, could be tricked by replacing the `leanchecker` executable. Further, this is a major security risk -- running arbitrary untrusted code -- which needs to be mitigated by proper sandboxing.
All this could be addressed by adding a flag to Lean which disables I/O in tactics (i.e. makes `tactic.unsafe_run_io` a no-op). This shouldn't break anything; neither the Lean standard library nor mathlib uses this function.
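As a concrete illustration of the tactic-level escape hatch described above, here is a minimal Lean 3 sketch; `tactic.unsafe_run_io` and `io.cmd` are the core primitives in question, and the `echo` command stands in for anything a hostile submission might run on the grading machine:

```lean
import system.io

-- `run_cmd` runs a `tactic unit` during elaboration, and `tactic.unsafe_run_io`
-- lifts an arbitrary `io` action into the tactic monad, so merely compiling a
-- submission can execute shell commands.
run_cmd do
  out ← tactic.unsafe_run_io (io.cmd { cmd := "echo", args := ["hello from inside a tactic"] }),
  tactic.trace out
```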
Sebastian Ullrich (Jun 14 2020 at 15:36):
Language-based sandboxing is hard to design and almost impossible to make 100% sound - just look at the mess in the JVM. If you want sandboxing, put the entire Lean process in a proper sandbox.
Reid Barton (Jun 14 2020 at 16:02):
At a minimum, you'd have to take away `unchecked_cast`, but then the hunt is on for exploitable interpreter bugs, and this is not a road worth going down...
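For reference, a hedged sketch of what `unchecked_cast` permits in Lean 3 meta code (the definition elaborates because no type check is performed; actually evaluating it is where interpreter behaviour becomes unpredictable):

```lean
-- `unchecked_cast` coerces between arbitrary types in meta code, with no proof obligation.
meta def pretend_string : string := unchecked_cast (42 : ℕ)

-- #eval pretend_string   -- evaluating this reinterprets a ℕ as a string: undefined behaviour
```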
Jannis Limperg (Jun 14 2020 at 17:16):
Yes, language sandboxes in general are hard and I wouldn't rely on a hypothetical "safe Lean" to run untrusted code without any additional measures. But for the purposes of this thread, I think (without knowing much at all about this part of Lean) you could just disable Lean's I/O functionality entirely, i.e. all IO-related primitives are replaced with runtime errors. That would at least make exploitation much less trivial. Call it defence in depth if you will.
Scott Morrison (Jun 14 2020 at 23:41):
I don't know anything about this stuff, but someone recently was talking about overriding existing overrides. Can we "shut down" things like `unchecked_cast` and `unsafe_run_io` from Lean?
Last updated: Dec 20 2023 at 11:08 UTC