Zulip Chat Archive
Stream: IMO-grand-challenge
Topic: Reading list/sample AI
Joe Hendrix (Oct 04 2019 at 00:11):
Hi, I was wondering: are there any plans to have a reading list or sample AI for the competition? I ask as somebody interested in following this and understanding what approaches work well, but without the capacity to build a team/compete in it.
Johan Commelin (Oct 04 2019 at 04:09):
Wait, I always interpreted "competition" as 1 AI team against all the human competitors. I'd rather join forces than have multiple AI teams competing against each other and not sharing ideas, etc.
Daniel Selsam (Oct 07 2019 at 13:28):
Hi, I was wondering: are there any plans to have a reading list or sample AI for the competition?
@Joe Hendrix A reading list would be difficult now, since there is no consensus on what existing paradigms might even help. Hopefully there will be enough experiments and incremental progress over time that we can build a useful reading list out of future papers.
Daniel Selsam (Oct 07 2019 at 13:40):
Wait, I always interpreted "competition" as 1 AI team against all the human competitors. I'd rather join forces than have multiple AI teams competing against each other and not sharing ideas, etc.
@Johan Commelin The first stage of the project---to establish that the problems and solutions can indeed be formalized, to build a dataset of problems for training and validation, and to establish the rules for what constitutes winning---is 100% a community effort. It might even be nice for us all to work towards a community paper introducing the grand challenge and the dataset of historical problems. On the other hand, it is too early to foresee how the AI research itself will play out. There may be many pairwise-unrelated approaches of differing scale taken by different groups of people at different times.
Joe Hendrix (Oct 07 2019 at 18:04):
@Daniel Selsam I think I was mostly looking for a reading list for motivation, i.e., what has motivated you or the other organizers to believe this is achievable? After looking at a few of the IMO problems, competing in the IMO seems much harder to me than the Aristo/Project Halo goals, and those projects weren't expecting formal proofs in the answers to questions.
Brando Miranda (Oct 23 2019 at 17:23):
How does one get involved with the community paper?
Daniel Selsam (Oct 23 2019 at 17:31):
How does one get involved with the community paper?
@Brando Miranda I will share a preliminary project proposal in the relatively near future. Note that we want to wait until Lean4 is more mature (e.g. has a tactic framework) before beginning the formalization in earnest.
Brando Miranda (Oct 23 2019 at 18:40):
How does one get involved with the community paper?
Brando Miranda I will share a preliminary project proposal in the relatively near future. Note that we want to wait until Lean4 is more mature (e.g. has a tactic framework) before beginning the formalization in earnest.
How do I make sure I get informed when such a decision/proposal is out? :)
Brando Miranda (Oct 23 2019 at 18:46):
How does one get involved with the community paper?
Brando Miranda I will share a preliminary project proposal in the relatively near future. Note that we want to wait until Lean4 is more mature (e.g. has a tactic framework) before beginning the formalization in earnest.
Is the IMO challenge focused only on making an environment + dataset in Lean for ML on IMO problems, or does it also provide a Lean environment for ML research/applications, etc.?
I am curious to learn to what extent Lean can be used for ML. Is such a thing possible, and how easy is it to do in Lean?
Brando Miranda (Oct 23 2019 at 19:05):
(Sorry if my msgs are in the wrong thread; still learning to use Zulip, won't happen again! :) )
Daniel Selsam (Oct 23 2019 at 22:48):
Is the IMO challenge focused only on making an environment + dataset in Lean for ML on IMO problems, or does it also provide a Lean environment for ML research/applications, etc.?
@Brando Miranda The goal is to spur fundamental progress in computer science, and to eventually build an AI that can win gold. Different people involved have different thoughts about the role that ML will play in this endeavor. I believe that ML may be able to help a lot. I plan to wrap existing frameworks in Lean and to deeply integrate ML into the Lean tactic language, but the exact interfaces and abstractions are yet to be determined.
Brando Miranda (Oct 23 2019 at 22:51):
I plan to wrap existing frameworks in Lean and to deeply integrate ML into the Lean tactic language,
Can you clarify what that means, even if only vaguely? Does that mean one can access and use PyTorch/TensorFlow from Lean itself, or access Lean from within Python (the universal ML language)?
Daniel Selsam (Oct 23 2019 at 23:14):
Does that mean one can access and use PyTorch/TensorFlow from Lean itself, or access Lean from within Python (the universal ML language)?
To clarify: I have no current plans to write a TF/PyTorch frontend in Lean. There are many options for how to connect Lean to a trained network at runtime. In prior work I used gRPC (e.g. in https://github.com/dselsam/neurocore-public), but it is also possible to wrap relevant parts of Lean in Python (https://github.com/dselsam/lean-python-bindings has bit-rotted, but something like it could be resurrected), or to wrap C++/Python in Lean (e.g. in https://github.com/dselsam/certigrad). For me the pressing conceptual challenge is how to let users write high-level tactics that induce intractable search spaces, and how to have the system train models to guide those tactics as conveniently and generically as possible.
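For illustration, the runtime split described here (prover process on one side, trained model on the other) amounts to a small request/response exchange over some agreed wire format. The sketch below is a toy Python version with a made-up JSON protocol and a stub in place of a real network; none of the names or formats are taken from neurocore or the Lean bindings:

```python
import json

def score_tactics(goal: str, tactics: list[str]) -> list[float]:
    # Stub standing in for a trained model; a real setup would put this
    # behind gRPC (as in neurocore) or Python bindings.  The heuristic
    # here (shorter tactic = higher score) is invented for illustration.
    return [1.0 / (1 + len(t)) for t in tactics]

def handle_request(raw: bytes) -> bytes:
    # A hypothetical wire format the prover and model process could agree on.
    req = json.loads(raw)
    scores = score_tactics(req["goal"], req["tactics"])
    return json.dumps({"scores": scores}).encode()

# The prover side: serialize the current goal and candidate tactics,
# send them across, and read back one score per candidate.
request = json.dumps({"goal": "a + b = b + a",
                      "tactics": ["ring", "simp", "rw [add_comm]"]}).encode()
response = json.loads(handle_request(request))
```

The interesting design question is exactly where this boundary sits and what crosses it, which is why the interfaces are "yet to be determined."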
Brando Miranda (Oct 23 2019 at 23:28):
For me the pressing conceptual challenge is how to let users write high-level tactics that induce intractable search spaces, and how to have the system train models to guide those tactics as conveniently and generically as possible.
Why is that the goal? Perhaps my intuition is different. Why would we want intractable tactic spaces? Why do we want the system to train conveniently or generically? Shouldn't the goal be tractable spaces that achieve what we want (i.e. solve the maths problems)? Or perhaps I misunderstood something.
Daniel Selsam (Oct 24 2019 at 02:38):
For me the pressing conceptual challenge is how to let users write high-level tactics that induce intractable search spaces, and how to have the system train models to guide those tactics as conveniently and generically as possible.
Why is that the goal? Perhaps my intuition is different. Why would we want intractable tactic spaces? Why do we want the system to train conveniently or generically? Shouldn't the goal be tractable spaces that achieve what we want (i.e. solve the maths problems)? Or perhaps I misunderstood something.
We want to be able to conveniently write the parts of the strategies/tactics/tricks/tools that we can figure out how to write, but I hypothesize that it will prove very difficult to write these things down in sufficiently precise detail to produce brute-force-able search spaces. So we want learning to carry the rest of the burden, by learning how to search these spaces efficiently.
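The division of labor described here (humans write the tactics, learning guides the search through the space they induce) can be illustrated with a toy best-first search, where swapping a learned scoring function in for a trivial one shrinks the number of states explored. Everything in this sketch (the domain, both score functions) is invented for illustration:

```python
import heapq

def best_first_search(start, goal, successors, score, max_expansions=10_000):
    # Expand states in order of `score` (lower = more promising); in the
    # real setting `score` would be a trained model ranking proof states.
    frontier = [(score(start), start)]
    seen = {start}
    expansions = 0
    while frontier and expansions < max_expansions:
        _, state = heapq.heappop(frontier)
        if state == goal:
            return expansions
        expansions += 1
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (score(nxt), nxt))
    return None  # gave up: the search drowned in the space

# Toy "search space": reach `goal` from 1 using the moves +1 and *2.
goal = 97
successors = lambda n: [n + 1, n * 2] if n <= goal else []

unguided = best_first_search(1, goal, successors, score=lambda n: 0)
guided = best_first_search(1, goal, successors, score=lambda n: abs(goal - n))
# `guided` reaches the goal in far fewer expansions than `unguided`.
```

The point is the interface, not the toy numbers: the tactic author supplies `successors`, and learning supplies `score`.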
Brando Miranda (Oct 24 2019 at 20:56):
For me the pressing conceptual challenge is how to let users write high-level tactics that induce intractable search spaces, and how to have the system train models to guide those tactics as conveniently and generically as possible.
Why is that the goal? Perhaps my intuition is different. Why would we want intractable tactic spaces? Why do we want the system to train conveniently or generically? Shouldn't the goal be tractable spaces that achieve what we want (i.e. solve the maths problems)? Or perhaps I misunderstood something.
We want to be able to conveniently write the parts of the strategies/tactics/tricks/tools that we can figure out how to write, but I hypothesize that it will prove very difficult to write these things down in sufficiently precise detail to produce brute-force-able search spaces. So we want learning to carry the rest of the burden, by learning how to search these spaces efficiently.
Ok I think I get it now. What you are saying is just to specify tactic spaces such that they are "interesting" spaces to search (not artificially restricted) and instead have the learning decide how to use/search these spaces.
My misunderstanding came from the word "intractable". If we can learn to search it, by definition it's not intractable in my mind. I think it's better to say "interesting" or "large" or something. But thanks for the clarification!
Caleb Helbling (Dec 18 2019 at 05:19):
I could be wrong, but is the goal of the IMO challenge to produce a human-readable list of tactics used to solve IMO problems? I thought that to complete a proof you merely need to construct a value of the required type.
Caleb Helbling (Dec 18 2019 at 05:20):
It would be easier to generate training data by randomly generating values, then get the type of the value and ask the AI to run in reverse
Caleb Helbling (Dec 18 2019 at 05:21):
Essentially you'd be randomly generating proofs and then asking what the proposition being proved is
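The "run in reverse" idea can be sketched as: build a random proof term bottom-up from introduction rules, then read off the proposition it proves. Below is a toy Python version over a tiny propositional fragment; the rule names mimic natural deduction and everything here is invented for illustration (it is not Lean):

```python
import random

# Introduction rules for a tiny fragment: axioms for atoms A0..A3,
# and-introduction, and left or-introduction.

def random_proof(depth, rng):
    # Returns (proof_term, proposition_it_proves).
    if depth == 0 or rng.random() < 0.3:
        a = f"A{rng.randrange(4)}"
        return f"ax_{a}", a
    if rng.random() < 0.5:
        t1, p1 = random_proof(depth - 1, rng)
        t2, p2 = random_proof(depth - 1, rng)
        return f"and_intro({t1}, {t2})", f"({p1} ∧ {p2})"
    t, p = random_proof(depth - 1, rng)
    b = f"A{rng.randrange(4)}"
    return f"or_intro_left({t})", f"({p} ∨ {b})"

term, prop = random_proof(4, random.Random(0))
# `prop` is a valid proposition and `term` proves it, but the pair is
# almost always a weird, uninteresting one.
```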
Caleb Helbling (Dec 18 2019 at 05:25):
I saw this paper today on Hacker News: deep learning for symbolic mathematics https://openreview.net/pdf?id=S1eZYeHFDS
Johan Commelin (Dec 18 2019 at 05:50):
@Caleb Helbling A randomly generated proof will be a proof of a very weird and uninteresting proposition. That's the big problem with maths
Johan Commelin (Dec 18 2019 at 05:50):
You have a crazy-crazy big search space. And almost everything is uninteresting.
Caleb Helbling (Dec 18 2019 at 06:34):
Yes, but maybe the AI would learn from the uninteresting cases to generalize to the more interesting cases?
Caleb Helbling (Dec 18 2019 at 06:35):
I also realized that type checking is undecidable without type annotations in dependently typed languages, which sort of defeats the idea of randomly generating values and getting their types
Mario Carneiro (Dec 18 2019 at 06:35):
you would learn uninteresting heuristics if you look only at uninteresting problems
Mario Carneiro (Dec 18 2019 at 06:35):
Type checking in dependently typed languages is often decidable
Mario Carneiro (Dec 18 2019 at 06:36):
in Lean's case it's not, but the issues almost never come up in practice
Caleb Helbling (Dec 18 2019 at 06:37):
Well, "interesting" is a human-defined label. Maybe there are proofs that are uninteresting to humans because they have no meaning, but are actually rather complicated.
Mario Carneiro (Dec 18 2019 at 06:37):
sure, there is an infinite family of such
Caleb Helbling (Dec 18 2019 at 06:39):
Also some interesting propositions might have simple proofs
Caleb Helbling (Dec 18 2019 at 06:40):
Would having a small core calculus help with keeping synthetically generated proofs interesting? It would help prune the branching factor at each step
Mario Carneiro (Dec 18 2019 at 06:47):
It helps, but not very much. The search space is still exponential
Mario Carneiro (Dec 18 2019 at 06:48):
And most proofs of interesting theorems are going to be at least 20 steps long, at which point you are already overwhelmed even with a very small branching factor
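The arithmetic behind this point: even a branching factor of 3 (illustrative, "very small") over a 20-step proof already gives billions of leaves.

```python
branching = 3    # a "very small" branching factor
depth = 20       # steps in a modest interesting proof
print(branching ** depth)  # 3486784401, about 3.5 billion
```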
Mario Carneiro (Dec 18 2019 at 06:50):
If you want to get past that you have to use higher level "steps", like tactics that already have a lot of interestingness heuristics built in
Jesse Michael Han (Dec 18 2019 at 07:14):
Would having a small core calculus help with keeping synthetically generated proofs interesting?
consider resolution for propositional logic---if anything, the proofs become more boring and most of the steps are uninteresting
Last updated: Dec 20 2023 at 11:08 UTC