Zulip Chat Archive
Stream: Machine Learning for Theorem Proving
Topic: Harmonic: IMO Livestream
Alex Meiburg (Jul 28 2025 at 22:21):
Harmonic is announcing their IMO results, incl. their Lean proofs: https://x.com/i/broadcasts/1BdGYqEkOYyGX
Alex Meiburg (Jul 28 2025 at 22:22):
and from the QR code: https://github.com/harmonic-ai/IMO2025/
Joseph Myers (Jul 28 2025 at 23:57):
Do I understand correctly that the problems were presented to the AI in easy mode, with the answer included? That's what the video seemed to be saying, showing a file including the answer (like the StatementOnly files in the repository) and saying that was what was given to the AI.
Joseph Myers (Jul 29 2025 at 00:04):
I'll also ask more or less the same questions I put to ByteDance: (a) Could you add the formal statement of P6 that the AI didn't solve to the repository? (b) What if anything did you do to attempt to coordinate with other AIs on having a single common version of the problems in Lean? (c) Was there a particular reason for preparing your own Lean statements rather than having the IMO provide them?
Joseph Myers (Jul 29 2025 at 00:09):
And since people have discussed testing formal-to-formal AIs on formal-conjectures, and both Terry Tao and I (and Simon Frieder) have commented on issues of disclosing test methodology as well as what better methodology for evaluating mathematical AIs in future might look like: can you commit that, if you do solve an unsolved problem with AI, you will also disclose, at the same time as the solution to that problem, details of all the other problems the AI attempted without solving?
Joseph Myers (Jul 29 2025 at 00:21):
Thank you also for waiting a reasonable time after the IMO for the announcement. As I remarked to the other coordinators, I'm not convinced that the AI companies rushing to announce understand that much of the IMO community is travelling all over the world immediately after the IMO, and are largely out of contact while travelling and so not in a good position to respond to media requests for comment at that time - an issue I predicted in this channel before the IMO.
Justin Asher (Jul 29 2025 at 01:00):
One thing I am curious about is that each automated Lean theorem prover seems to have super long solutions. Does anyone know how much more efficiently a good Lean proof writer could write these solutions if given the informal versions? Or are they fundamentally long solutions to write in Lean? Both ByteDance and Harmonic have this feature.
Last year, AlphaProof seemed to have much more efficient solutions. For reference, Harmonic's solution of P1 2025 is 96k characters, whereas DeepMind AlphaProof's informal proof of P1 2024 was only 6.4k characters. Harmonic's also have a decent number of "have" claims, like ByteDance and Morph. Am I missing something?
Andy Jiang (Jul 29 2025 at 01:03):
Joseph Myers said:
And since people have discussed testing formal-to-formal AIs on formal-conjectures, and both Terry Tao and I (and Simon Frieder) have commented on issues of disclosing test methodology as well as what better methodology for evaluating mathematical AIs in future might look like: can you commit that, if you do solve an unsolved problem with AI, you will also disclose, at the same time as the solution to that problem, details of all the other problems the AI attempted without solving?
Not that my opinion matters, but I find this a strange requirement to impose on the companies. I don't see much value in this except to lower the impressiveness of any achievement. I would consider it, in an anthropomorphized way, like asking a mathematician after a talk, "yes but what about all the questions you failed to solve?"
Joseph Myers (Jul 29 2025 at 01:14):
When the explicit goal of your company is mathematical superintelligence, identifying the rate of successes and failures at solving problems, so that human mathematicians can compare it to their own experience, is extremely relevant to evaluating whether such a goal has been achieved.
Joseph Myers (Jul 29 2025 at 01:19):
Or: being part of the scientific community means disclosing scientifically relevant information about your experimental results, in a way that avoids publication bias. There's a strong argument that it would be unethical to make claims about the abilities of an AI on unsolved problems without disclosing failures as well as successes. Note that the IMO announcement included failing to solve P6, not just succeeding on other problems.
Andy Jiang (Jul 29 2025 at 01:28):
Joseph Myers said:
When the explicit goal of your company is mathematical superintelligence, identifying the rate of successes and failures at solving problems, so that human mathematicians can compare it to their own experience, is extremely relevant to evaluating whether such a goal has been achieved.
This I generally agree with. Although it's unclear what it means to "compare with their own experience" on a list of unsolved questions as presumably the experience is "I can't solve any otherwise I would've shortened the list". I think I just assume by default that if they claim one of the formal problems, say of formal-conjectures, that they probably attempted all of them plus whatever else they could've formalized at least for a little while (to figure out what the system is making most progress on). So in that sense I guess there's a case to be made if they only tried it on like 3 problems, disclosing this would boost their claims.
Joseph Myers (Jul 29 2025 at 01:39):
Justin Asher said:
Last year, AlphaProof seemed to have much more efficient solutions. For reference, Harmonic's solution of P1 2025 is 96k characters, whereas DeepMind AlphaProof's informal proof of P1 2024 was only 6.4k characters. Harmonic's are also plagued by "have" claims like ByteDance and Morph. Am I missing something?
have claims may not be a bad thing to structure proofs. Although at least idiomatic mathlib style would prefer lots of lemmas each with a short proof in place of long proofs of single lemmas (with the consequence that internal comments within proofs are rarely needed because you can tell the key ideas by looking at what lemmas are used).
These long proofs feel a lot more readable than AlphaProof's solutions last year. Also, cleaning up and golfing a formal proof ought to be a lot easier for an AI to do than finding the proof in the first place.
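As a toy illustration of that style (hypothetical lemma, not taken from any of the IMO files): the work lives in a named lemma with a short proof, and the top-level proof just applies it.
import Mathlib

lemma toy_sum_sq_nonneg (a b : ℤ) : 0 ≤ a * a + b * b :=
  add_nonneg (mul_self_nonneg a) (mul_self_nonneg b)

-- The final proof is one line, and the key idea can be read off from the lemma it uses.
example (a b : ℤ) : 0 ≤ (a + b) * (a + b) + (a - b) * (a - b) :=
  toy_sum_sq_nonneg (a + b) (a - b)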
Joseph Myers (Jul 29 2025 at 01:44):
Andy Jiang said:
Joseph Myers said:
When the explicit goal of your company is mathematical superintelligence, identifying the rate of successes and failures at solving problems, so that human mathematicians can compare it to their own experience, is extremely relevant to evaluating whether such a goal has been achieved.
This I generally agree with. Although it's unclear what it means to "compare with their own experience" on a list of unsolved questions as presumably the experience is "I can't solve any otherwise I would've shortened the list". I think I just assume by default that if they claim one of the formal problems, say of formal-conjectures, that they probably attempted all of them plus whatever else they could've formalized at least for a little while (to figure out what the system is making most progress on). So in that sense I guess there's a case to be made if they only tried it on like 3 problems, disclosing this would boost their claims.
Well, it would be comparing with experience on different problems (the rate at which they, or their PhD students, had solved problems attempted, adjusted for perceived difficulty); not an exact science given different difficulty levels. And the choice of spending a lot of compute on a few problems or less compute on lots of problems is a basic part of the experimental design that ought to be disclosed.
Justin Asher (Jul 29 2025 at 02:04):
I agree that AlphaProof can be hard to read and debug.
If you look here, you can find some previous formalizations of IMO problems and their solutions in Lean. I ran a script and found that the longest IMO solution from there is 47k characters, so 96k is definitely more than what would be expected (P5 is also 98k).
And yes, there are not too many unnecessary have claims in Harmonic's solutions, albeit I have noticed an excess in other systems since they are trying to get compiler feedback and mimic the natural language proofs.
Andy Jiang (Jul 29 2025 at 02:11):
Justin Asher said:
I agree that AlphaProof can be hard to read and debug.
If you look here, you can find some previous formalizations of IMO problems and their solutions in Lean. I ran a script and found that the longest IMO solution from there is 47k characters, so 96k is definitely more than what would be expected (P5 is also 98k).
And yes, there are not too many unnecessary have claims in Harmonic's solutions, albeit I have noticed an excess in other systems since they are trying to get compiler feedback and mimic the natural language proofs.
Isn't there possible selection bias on what people formalized?
Justin Asher (Jul 29 2025 at 02:21):
Andy Jiang said:
Isn't there possible selection bias on what people formalized?
That's fair. I am still curious how much more optimally the proofs can be formalized for practical purposes. For instance, at one point, their system writes a single line that contains
exact ⟨ d1, ⟨ Nat.mem_properDivisors.mp ( Finset.mem_sort ( α := ℕ ) ( fun x1 x2 => x2 ≤ x1 ) |>.1 ( by rw [ x ] ; simp ( config := { decide := Bool.true } ) ) ) |>.1, Nat.mem_properDivisors.mp ( Finset.mem_sort ( α := ℕ ) ( fun x1 x2 => x2 ≤ x1 ) |>.1 ( by rw [ x ] ; simp ( config := { decide := Bool.true } ) ) ) |>.2 ⟩, d2, ⟨ Nat.mem_properDivisors.mp ( Finset.mem_sort ( α := ℕ ) ( fun x1 x2 => x2 ≤ x1 ) |>.1 ( by rw [ x ] ; simp ( config := { decide := Bool.true } ) ) ) |>.1, Nat.mem_properDivisors.mp ( Finset.mem_sort ( α := ℕ ) ( fun x1 x2 => x2 ≤ x1 ) |>.1 ( by rw [ x ] ; simp ( config := { decide := Bool.true } ) ) ) |>.2 ⟩, d3, ⟨ Nat.mem_properDivisors.mp ( Finset.mem_sort ( α := ℕ ) ( fun x1 x2 => x2 ≤ x1 ) |>.1 ( by rw [ x ] ; simp ( config := { decide := Bool.true } ) ) ) |>.1, Nat.mem_properDivisors.mp ( Finset.mem_sort ( α := ℕ ) ( fun x1 x2 => x2 ≤ x1 ) |>.1 ( by rw [ x ] ; simp ( config := { decide := Bool.true } ) ) ) |>.2 ⟩, by linarith, by linarith, rfl ⟩;
which seems atypical. Presumably the next generation of models will be better in this regard.
David Michael Roberts (Jul 29 2025 at 03:40):
These companies are selling a product, one which is (sometimes) billed by the volume of use. If they claim their product can solve X problems, then they had better say the success rate and under what conditions, otherwise they can say they can solve X, but it takes 100 attempts and a bazillion tokens on the biggest clusters, which their customers might like to know before buying the miracle machine.
Martin Dvořák (Jul 29 2025 at 06:37):
Are those exact? supposed to stay in the code, not get replaced by the proofs Lean found?
Martin Dvořák (Jul 29 2025 at 07:28):
Should we care that the axiom Lean.ofReduceBool is used by the proof of P3 and both proofs of P4?
Martin Dvořák (Jul 29 2025 at 07:36):
I don't know why the CI on GitHub fails, but everything builds fine on my computer.
Niels Voss (Jul 29 2025 at 07:49):
Can the use of Lean.ofReduceBool be eliminated, or does the proof depend on it in some critical way?
Niels Voss (Jul 29 2025 at 07:57):
Lean.ofReduceBool effectively adds an unbounded set of compiler extensions to the trusted codebase. My personal opinion (and feel free to disagree with me on this) is that in an adversarial context, Lean.ofReduceBool should be treated the same way as sorryAx.
Niels Voss (Jul 29 2025 at 08:01):
With that said, if all it takes is replacing native_decide with something like decide or decide +kernel to eliminate Lean.ofReduceBool, then I would consider the original LLM generated solution to be valid, since the manual editing at the end would have been negligible.
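As a self-contained toy sketch of that check (the goal and theorem names here are made up, not from the Harmonic repository), the difference is visible with #print axioms:
theorem toy_native : (2 ^ 10 : Nat) = 1024 := by native_decide
theorem toy_kernel : (2 ^ 10 : Nat) = 1024 := by decide

#print axioms toy_native  -- expected to report Lean.ofReduceBool (plus Lean.trustCompiler on recent toolchains)
#print axioms toy_kernel  -- expected to report no extra axioms: the kernel checked the computation itself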
Kim Morrison (Jul 29 2025 at 08:40):
I did (earlier this morning) review the uses of native_decide. They look correct to me, but I agree with everyone's desire to see this not being used in AI generated proofs.
Kim Morrison (Jul 29 2025 at 08:43):
A scary feature of two of their native_decide calls is that the corresponding #eval calls will generate panics, and they are relying on the default values. True, but ... not reassuring.
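A toy illustration of that pattern (not taken from the Harmonic files): out-of-bounds ! indexing panics when evaluated, but falls back to the default value, which a native_decide proof then happily relies on.
-- #eval ([1, 2, 3] : List Nat)[5]!   -- prints a PANIC message, then the default value 0
example : ([1, 2, 3] : List Nat)[5]! = 0 := by
  native_decide  -- succeeds, because the compiled accessor returns the default value out of bounds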
Joseph Myers (Jul 29 2025 at 10:13):
Martin Dvořák said:
Are those exact? supposed to stay in the code, not get replaced by the proofs Lean found?
Some of the AlphaProof solutions from last year used hint without replacing it by its output.
Martin Dvořák (Jul 29 2025 at 10:38):
I want to say that I am really happy that the solutions contain a complete theorem for every problem and that we can see how each statement was formalized for the AI to prove.
Notification Bot (Jul 29 2025 at 19:51):
26 messages were moved from this topic to #Machine Learning for Theorem Proving > Guarding against exploits in AI code by Jason Rute.
Tudor (Jul 29 2025 at 23:07):
Thanks for the interest in our livestream! I'm Tudor, CEO of Harmonic. Here are some thoughts on the points in the thread. If you're curious about how the system does on a problem of your choice, I would encourage just signing up and trying it! That will always be the best test, imo...
- The files we shared are just the lean proofs. In this version of Aristotle, a separate system proposed the answer, which is what shows up in the formal statement. The lean prover did not get any information about reasoning, etc, from this system; just the final answer. We don’t have more information to share on how this system works right now.
- On IMO formal statements: there was no official AI competition, so there were no official Lean formal statements. I think the IMO Committee is the right group to organize this next year if they think it’s interesting!
- native_decide: it's a very powerful tactic, so we're not too eager to kill its usage. In fact I think Harmonic may be one of the top bug reporters for issues with it to the FRO :grinning:. We rarely find correctness issues with it, and quickly report them when we do, as well as update Aristotle to avoid them. It is of course totally possible for us to turn off native_decide for any run of our system (with little reduction in performance) should we have an application in mind that requires a smaller trusted code base; we just don't find RL for reasoning to warrant it.
- Formal statement for P6: sure, here you go:
noncomputable def _ANSWER_ : ℕ := sorry
/-
Consider a 2025×2025 grid of unit squares. Matilda wishes to place on the grid some
rectangular tiles, possibly of different sizes, such that each side of every tile lies on a grid line and
every unit square is covered by at most one tile.
Determine the minimum number of tiles Matilda needs to place so that each row and each column
of the grid has exactly one unit square that is not covered by any tile.
-/
theorem problem_IMO_2025_P6
(GoodCover : Set (Finset (NonemptyInterval (Fin 2025 × Fin 2025))))
(hGoodCover : ∀ S, S ∈ GoodCover ↔
Set.PairwiseDisjoint S.toSet SetLike.coe ∧
(∀ i, ∃! j, (i, j) ∉ ⋃ t ∈ S, (t : Set (Fin 2025 × Fin 2025))) ∧
(∀ j, ∃! i, (i, j) ∉ ⋃ t ∈ S, (t : Set (Fin 2025 × Fin 2025)))) :
IsLeast (Finset.card '' GoodCover) _ANSWER_ := by
sorry
Joseph Myers (Jul 30 2025 at 01:58):
I think if you have a proof using ofReduceBool (via native_decide or otherwise), it thus needs manual review (as Kim did), and if a proof (as opposed to a statement) needs manual review - if the generated olean can't pass through lean4checker because it uses ofReduceBool - then its status is that of an informal solution to the problem, not a formally verified solution. Anything less than a properly typed term depending only on the three standard axioms might be an informal solution, but it is not a formally verified one. As noted in the thread split off from this one, native_decide is a reasonable tool at intermediate stages in writing a proof, just like sorry; it just shouldn't appear in the final claimed formal proof.
Joseph Myers (Jul 30 2025 at 02:03):
On answers: I suggest putting the same explanation in https://github.com/harmonic-ai/IMO2025/issues/1 where someone else has asked the same question (or in the repository README, of course).
Joseph Myers (Jul 30 2025 at 02:16):
On shared formal statements: it's true it was emphasized at the IMO that testing models on IMO problems was not a competition of the AIs (and human-oriented thresholds such as that for gold medals are rather arbitrary in scientific terms). What I'm interested in here is scientific evaluation of the capabilities of AIs. As indicated where I suggested various ways of doing more nuanced evaluations than "can AI solve the IMO problems?", one thing that would be relevant for such evaluations is a common Lean version of the problems. So I made clear on Zulip in advance of the IMO that a common Lean version would be desirable for more comparable data in such an evaluation, and provided my suggested conventions (thereby challenging anyone disagreeing about the best conventions for such statements to enter into public discussion of their own preferred conventions; of course even with common conventions, different people will still come up with different formal statements of the more complicated or combinatorial problems). I also believe the IMO was ready to provide Lean versions if any of the AIs wanted them: at least the Chief Coordinator was asking me about providing such versions at the start of the IMO, a few days before the conclusion was reached that all the AIs only wanted the problems in English.
Joseph Myers (Jul 30 2025 at 02:20):
(I should add that geometry solutions based on a semi-formal domain-specific language and depending on checking diagrams for configuration / angle orientation information, including AlphaGeometry's solution last year, also have the status of an informal solution, not a formal one, when you get precise about exactly what counts as a formally verified proof.)
Bhavik Mehta (Jul 30 2025 at 02:58):
Kim Morrison said:
A scary feature of two of their native_decide calls is that the corresponding #eval calls will generate panics, and they are relying on the default values. True, but ... not reassuring.
It also generates panics if you do extract goal on them: taking the first native_decide from their P4 file, and extracting goal gives a panic. It's not hard to reproduce, taking definitions directly from their repo: https://live.lean-lang.org/#codez=JYWwDg9gTgLgBAWQIYwBYBtgCMBQOAmApgGZwDKcAFAHJwBccgqIQCU9T9AvDnHOofOiRQA5oQDOMAApQIYQlAAiwAG7Ax0MZyrUUAOjAy5ilWo1xqzXethUA7XECmRHFvNuvIaInTZ8pautiANoADAC6AIRwANTuIuJShr4mAYEAjBHRsZ4JPsb+GoEATBF4RKQAStpsLGyAWIScbpQUtAy1rAD0Va2uBCRwAMYA+gDWVKMtTKwMjA08lDR6+MlmFlbQ8JT2gCZEzszMgcMAtOnheGiE0IQgcIQAHjBQSP0whPiDqVSoQwDMbEOjv2CcA4cG+rEoqEGUDYHwAPHBKsEpnAADX/UFwIEg350EFYACebgAdigVIRBkR+sAiEA
Bhavik Mehta (Jul 30 2025 at 03:00):
Another two curious points about this repo:
- The statements of the theorems don't match between the StatementOnly and solution Lean files.
- The HarmonicLean.lean file only imports one thing on top of mathlib, which means running lake build from the root doesn't actually build or check any of the solutions (and lake exe mk_all fails).
Joseph Myers (Jul 30 2025 at 03:12):
The panics are maybe because of the use of [i]! (unsafe list element extraction) in the statement. In my statement in IMOLean I used List.take and List.sum; there are of course other ways of doing things, if you want to express the sum of three elements explicitly by their indices, that don't depend on using unsafe definitions in the problem statement. I didn't put anything about avoiding such unsafe definitions in my suggested conventions because it never occurred to me that people would want to use such definitions in problem statements (as opposed to unverified programs) in the first place.
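For instance (a toy fragment, not the actual IMOLean statement), a "sum of the first three elements" can be written with List.take and List.sum and proved without any panicking accessor:
import Mathlib

example : ([10, 7, 5, 2, 1].take 3).sum = 22 := by decide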
Bhavik Mehta (Jul 30 2025 at 04:13):
Joseph Myers said:
The panics are maybe because of the use of [i]! (unsafe list element extraction) in the statement.
Yes, it's because of this, but it's also because it's combined with the use of native_decide which performs an evaluation. A proof of this avoiding ofReduceBool would probably go via docs#List.getElem!_eq_getElem?_getD, and not have any panics.
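As a toy sketch of that route (example values made up; assuming the lemma has the shape l[i]! = l[i]?.getD default):
import Mathlib

example : ([3, 1, 4, 1, 5] : List ℕ)[2]! = 4 := by
  rw [List.getElem!_eq_getElem?_getD]  -- replace the panicking accessor with getElem? plus getD
  decide  -- the rest is plain kernel evaluation, no ofReduceBool involved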
GasStationManager (Jul 30 2025 at 15:06):
Speaking as someone who is very interested in the formal verification aspects of Lean, as I am sure Harmonic is as well:
My ideal future is not one where everyone stops using native_decide; my ideal future is one where native_decide (or something equally powerful) becomes safe enough that we could trust AIs to use it without needing manual verification, at least for formal verification tasks. Trusting the compiler is something I could (potentially) live with, but the known exploits are what currently makes it necessary to manually check native_decide proofs.
Joseph Myers (Jul 30 2025 at 19:02):
I expect that any cases of native_decide likely to be of use in mathematical proofs (other than the limited cases of very large computations such as verifying a terabyte-sized SAT certificate) are also ones where a tactic should be able to follow an evaluation procedure that generates a properly typed term, not depending on ofReduceBool, that can be exported from Lean and checked with an external checker.
Joseph Myers (Jul 30 2025 at 19:15):
Andy Jiang said:
Justin Asher said:
I agree that AlphaProof can be hard to read and debug.
If you look here, you can find some previous formalizations of IMO problems and their solutions in Lean. I ran a script and found that the longest IMO solution from there is 47k characters, so 96k is definitely more than what would be expected (P5 is also 98k).
And yes, there are not too many unnecessary have claims in Harmonic's solutions, albeit I have noticed an excess in other systems since they are trying to get compiler feedback and mimic the natural language proofs.
Isn't there possible selection bias on what people formalized?
The two longest IMO solutions in the mathlib archive are mine to IMO 2024 P5 and P3. Since I set out to formalize all six problems from IMO 2024 (still setting up geometrical theory in preparation for doing P4, PR reviews welcome), there's fairly limited scope for selection bias in those particular problems, and I think the distribution in terms of formalization difficulty is probably fairly consistent for the past 20 years or so (maybe perturbed slightly by the adoption of the Smith problem selection protocol in 2013 which ensures that problems 1, 2, 4, 5 include one from each of the four topic areas).
What is biased there is that (a) those solutions have been golfed in the course of the mathlib review process and (b) since I was aiming for the mathlib archive all along, I was also putting any more generally useful lemmas in mathlib proper, whereas an AI solving a problem that shows up API gaps in mathlib will embed whatever proofs it needs directly in its solution (though hopefully AIs might be able to suggest lemmas for mathlib in future). (I fully expect that human formalizations of 2025 P4 would result in extra API for properDivisors being added to mathlib, for example. We've already had discussions on Zulip of how human formalizations of P1 that work with my geometrically oriented statement that actually uses lines in Euclidean space would result in API being added to support manipulating such objects expressed with concrete coordinates.)
Emily Riehl (Jul 31 2025 at 16:12):
Tudor said:
The files we shared are just the lean proofs. In this version of Aristotle, a separate system proposed the answer, which is what shows up in the formal statement. The lean prover did not get any information about reasoning, etc, from this system; just the final answer. We don’t have more information to share on how this system works right now.
Thanks @Tudor for weighing in here.
- Did the separate system generate the full "statement only" files, or just propose the numerical answer for humans to use in producing those files?
- How long did this whole process take: first for the separate system and then for Aristotle to generate the formal proofs?
Geoffrey Irving (Aug 02 2025 at 18:09):
This fast_decide tactic is useful for replacing native_decide when decide and rfl don't work:
https://github.com/girving/interval/blob/main/Interval/Tactic/Decide.lean
Jason Rute (Aug 02 2025 at 20:28):
@Geoffrey Irving Does it actually help in practice?
Geoffrey Irving (Aug 02 2025 at 20:29):
Yes, I very frequently find cases where it is required, though the https://github.com/girving/interval repo is very calculation heavy, which is why it comes up so much. I'm updating to recent mathlib now, and for some reason I need to use it a lot more than I used to.
Jason Rute (Aug 02 2025 at 20:31):
Have you tried Harmonic's proofs but with your tactic instead of native_decide? Also, if this works so well, why not add it to Lean or mathlib, or even replace decide with this faster code?
Jason Rute (Aug 02 2025 at 20:33):
(If I recall correctly, Numina also had a number of native_decide proofs for miniF2F that could be tried.)
Jason Rute (Aug 02 2025 at 20:35):
(And I think the equational theories project has a native_decide proof since decide was too slow, but I imagine they would have already tried something like this.)
Geoffrey Irving (Aug 02 2025 at 20:36):
Day job mostly blocks me from upstreaming stuff, alas (due to lack of time). And no, I haven't tried Harmonic's proofs.
Bhavik Mehta (Aug 03 2025 at 18:16):
How does fast_decide compare to decide +kernel?
Joachim Breitner (Aug 03 2025 at 18:58):
Geoffrey Irving said:
This fast_decide tactic is useful for replacing native_decide when decide and rfl don't work: https://github.com/girving/interval/blob/main/Interval/Tactic/Decide.lean
May be subsumed by decide +kernel these days.
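For reference, a minimal made-up example of that syntax (not related to the IMO files); the kernel itself performs the evaluation, so no ofReduceBool axiom is introduced:
-- Bounded quantification over Nat is decidable, and the kernel can evaluate the check directly.
example : ∀ n, n < 100 → n * (n + 1) % 2 = 0 := by decide +kernel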
Alex Meiburg (Aug 03 2025 at 20:58):
For anyone curious on the stats relevant here:
- There is one native_decide in P3, that checks that padicValNat 2 8 = 3. This doesn't work with any version of decide +kernel or fast_decide, because padicValNat is defined in terms of Nat.find, which is in terms of well-founded recursion, which the kernel cannot reduce.
- In our first proof of P4, we have four native_decide's. One is checking that 3 ≤ ({1, 2, 3} : Finset Nat).card. This is easily done by decide or even just rfl. The other three all involve Finset.sort, which ultimately is built on (Edit: corrected here) List.mergeSort, which uses well-founded recursion and doesn't kernel reduce. So none of these decide tactics work on that, again except for actual native_decide.
- In our second proof of P4, there are 11 native_decide's; 10 are for evaluating padicValNat, and one is for Nat.factorization (which is defined immediately in terms of padicValNat).
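To make the first bullet concrete, here is a standalone version of that P3 check (assuming only an import of Mathlib):
import Mathlib

-- padicValNat unfolds to Nat.find, i.e. well-founded recursion, so `decide` and `decide +kernel`
-- get stuck here, while the compiled evaluation behind `native_decide` has no such problem.
example : padicValNat 2 8 = 3 := by native_decide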
Aaron Liu (Aug 03 2025 at 21:04):
Alex Meiburg said:
- In our first proof of P4, we have four native_decide's. One is checking that 3 ≤ ({1, 2, 3} : Finset Nat).card. This is easily done by decide or even just rfl. The other three all involve Finset.sort, which ultimately is built on Quot.lift and doesn't reduce in the kernel. So none of these decide tactics work on that, again except for actual native_decide.
The problem is probably not Quot.lift, but docs#List.mergeSort being defined in terms of well-founded recursion. Quot.lift actually reduces fine most of the time.
Alex Meiburg (Aug 03 2025 at 21:10):
oops, you're right! Good catch haha. (and here I was thinking to myself, I thought Quot.lift was ok...? but I'd convinced myself wrong.)
Geoffrey Irving (Aug 05 2025 at 20:02):
Joachim Breitner said:
Geoffrey Irving said:
This fast_decide tactic is useful for replacing native_decide when decide and rfl don't work: https://github.com/girving/interval/blob/main/Interval/Tactic/Decide.lean
May be subsumed by decide +kernel these days.
Confirmed, all of my fast_decide uses can be replaced by decide +kernel. Neat!
Jared green (Aug 06 2025 at 19:02):
How long before Aristotle comes out of beta?
Tudor (Aug 07 2025 at 02:28):
Working on it!
Last updated: Dec 20 2025 at 21:32 UTC