Zulip Chat Archive
Stream: Equational
Topic: independent equations and invariants
Jared green (Oct 16 2024 at 16:38):
which of the independent pairs of equations (if not all of them) can be proven independent by an invariant that one equation preserves and the other changes?
Cody Roux (Oct 16 2024 at 16:51):
All of them, if "being true in a magma" counts as an "invariant". I would argue it kind of does, e.g. "number of x-es" is captured by the free commutative magma.
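Cody's "number of x-es" invariant can be made concrete: the multiset of variable occurrences in a term is unchanged by any rewrite using commutativity, so an equation whose two sides have different multisets cannot follow from it. A minimal sketch (term encoding and the particular equations are illustrative choices, not from the project's codebase):

```python
from collections import Counter

# Encode magma terms as: a string is a variable; a pair (l, r) is l * r.
def var_multiset(term):
    """Multiset of variable occurrences in a magma term."""
    if isinstance(term, str):
        return Counter([term])
    left, right = term
    return var_multiset(left) + var_multiset(right)

# Commutativity x*y = y*x preserves the multiset of variables...
comm_lhs, comm_rhs = ("x", "y"), ("y", "x")
assert var_multiset(comm_lhs) == var_multiset(comm_rhs)

# ...but an equation like x = x*x changes it ({x} vs {x, x}),
# so no chain of commutativity rewrites turns one side into the other.
idem_lhs, idem_rhs = "x", ("x", "x")
assert var_multiset(idem_lhs) != var_multiset(idem_rhs)
print("commutativity does not imply x = x*x")
```

This is exactly the "free commutative magma" picture: the multiset is the image of a term in that free object.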
Jared green (Oct 16 2024 at 19:06):
that's not the kind of invariant I mean. I mean some a priori metric with a linear order.
Terence Tao (Oct 16 2024 at 19:29):
As @Cody Roux pointed out, technically any anti-implication "EquationX does not imply EquationY" can be proven by evaluating both sides of EquationX and both sides of EquationY in the free magma modulo equation X; both sides will necessarily agree for the former and necessarily disagree for the latter, by the universal property. One can then impose some linear ordering on that free magma to make this an "a priori metric with a linear order", though it is admittedly an artificial one in most cases.
That said, some large files of automatically generated anti-implications using invariants were created in equational#389 (see also equational#526). One could possibly make some full runs here to see exactly how many of the ~14 million anti-implications can be picked up by "simple" invariants.
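The same counterexample recipe works with any finite model of EquationX in which EquationY fails, and small finite magmas can be found by brute force. A hedged sketch, using commutativity and associativity as stand-in examples for EquationX and EquationY (not one of the project's actual equation pairs):

```python
from itertools import product

def satisfies_comm(op, n):
    """Does the multiplication table op satisfy x*y = y*x on {0..n-1}?"""
    return all(op[a][b] == op[b][a] for a in range(n) for b in range(n))

def satisfies_assoc(op, n):
    """Does op satisfy (x*y)*z = x*(y*z) on {0..n-1}?"""
    return all(op[op[a][b]][c] == op[a][op[b][c]]
               for a in range(n) for b in range(n) for c in range(n))

n = 2
# Enumerate all n*n multiplication tables over {0..n-1}.
for cells in product(range(n), repeat=n * n):
    op = [list(cells[i * n:(i + 1) * n]) for i in range(n)]
    if satisfies_comm(op, n) and not satisfies_assoc(op, n):
        print("commutative but not associative:", op)
        break
```

Any table the search prints witnesses the anti-implication "commutativity does not imply associativity"; the 4694-equation project does this at scale with far larger searches.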
Jared green (Oct 16 2024 at 20:02):
ordering on terms, not elements.
Jared green (Oct 16 2024 at 20:03):
(they wouldn't be equations if the elements were different, right?)
Jose Brox (Oct 16 2024 at 20:40):
Terence Tao said:
As Cody Roux pointed out, technically any anti-implication "EquationX does not imply EquationY" can be proven by evaluating both sides of EquationX and both sides of EquationY in the free magma modulo equation X; both sides will necessarily agree for the former and necessarily disagree for the latter, by the universal property. One can then impose some linear ordering on that free magma to make this an "a priori metric with a linear order", though it is admittedly an artificial one in most cases.
That said, some large files of automatically generated anti-implications using invariants were created in equational#389 (see also equational#526). One could possibly make some full runs here to see exactly how many of the ~14 million anti-implications can be picked up by "simple" invariants.
In addition, if the machine learning models turn out to produce a high success ratio, then they probably will be applying some invariants (especially the neural network ones) which we could later try to extract (we will see, we are on it).
Michael Bucko (Oct 16 2024 at 21:22):
Jose Brox wrote:
In addition, if the machine learning models turn out to produce a high success ratio, then they probably will be applying some invariants (especially the neural network ones) which we could later try to extract (we will see, we are on it).
Two ideas.
- "Global e-graph-AI loop": Perhaps LLMs bootstrapped with human input / a dataset (we'd feed them implications and anti-implications for a given set of equations) and tools could iteratively generate e-graphs the way we train models (perhaps in parallel: think forking them and merging), and then we could use those e-graphs to further train the LLMs. E-graph (global) saturation would be a function of the LLM's ability to explore the problem space. At first, there'd be a few e-graphs for a couple of projects. Eventually, one would be able to merge them the same way we merge ML models.
- "E-graph diffuser": A diffusion model could potentially iteratively "dream" an e-graph (the same way it comes up with 3D games that people can play). That's a different approach to the problem, but very inspiring. Diffusion models keep evolving (different parametrization techniques, including the recent energy-based approaches). They've been successful in multiple fields and could be used to develop their own intuition in a slightly different way.
Last updated: May 02 2025 at 03:31 UTC