Zulip Chat Archive

Stream: new members

Topic: Robust Reverse Halting


John Doe (Dec 10 2025 at 15:56):

Here is a tiny Lean core capturing the dynamic idea I’m working with.

The setup is:

  • A temporal trace is Trace := ℕ → Prop.

  • Halts T means “there exists a time n with T n”.

  • A reverse–halting kit RHKit together with the hypothesis DetectsMonotone induces an operator Rev on traces.

  • The key lemma is that, under DetectsMonotone K, the stabilised predicate Rev0 K is forced to coincide with the plain halting predicate Halts on all traces.

  • A LocalReading maps each (Γ, φ) to a trace; Prov says the trace ever holds, and verdict K is Rev0 K on that trace.

  • So the “verdict” of any admissible K collapses extensionally to provability-as-halting.

Lean code (minimal core):

import Mathlib.Data.Set.Basic
import Mathlib.Data.Nat.Basic

universe u v

namespace LogicDissoc

/-- A temporal trace: at each time `n`, a proposition may hold. -/
abbrev Trace := ℕ → Prop

/-- `(up T) n` means “there exists `k ≤ n` with `T k`”. -/
def up (T : Trace) (n : ℕ) : Prop :=
  ∃ k ≤ n, T k

lemma up_mono (T : Trace) :
  ∀ {n m : ℕ}, n ≤ m → up T n → up T m := by
  intro n m hnm h
  rcases h with ⟨k, hk_le_n, hk_T⟩
  exact ⟨k, Nat.le_trans hk_le_n hnm, hk_T⟩

/-- Direct halting predicate on traces. -/
def Halts (T : Trace) : Prop :=
  ∃ n : ℕ, T n

lemma exists_up_iff (T : Trace) :
  (∃ n, up T n) ↔ Halts T := by
  constructor
  · rintro ⟨n, hn⟩
    rcases hn with ⟨k, _, hk⟩
    exact ⟨k, hk⟩
  · rintro ⟨k, hk⟩
    exact ⟨k, ⟨k, le_rfl, hk⟩⟩

/-- A reverse–halting kit. -/
structure RHKit where
  Proj : (ℕ → Prop) → Prop

/--
`DetectsMonotone K` says: on monotone families `X`,
`K.Proj X` coincides with `∃ n, X n`.
-/
structure DetectsMonotone (K : RHKit) : Prop where
  proj_of_mono :
    ∀ X : ℕ → Prop,
      (∀ {n m}, n ≤ m → X n → X m) →
      (K.Proj X ↔ ∃ n, X n)

/-- Revision operator obtained from a reverse–halting kit. -/
def Rev (K : RHKit) (T : Trace) : Prop :=
  K.Proj (fun n => up T n)

/-- Concrete alias. -/
def Rev0 (K : RHKit) : Trace → Prop :=
  Rev K

/-- Core lemma: any monotone–detecting kit collapses to plain halting. -/
lemma Rev0_iff_Halts (K : RHKit) (DK : DetectsMonotone K) (T : Trace) :
  Rev0 K T ↔ Halts T := by
  unfold Rev0 Rev Halts
  have hmono : ∀ {n m}, n ≤ m → up T n → up T m := up_mono T
  have hx : K.Proj (fun n => up T n) ↔ ∃ n, up T n :=
    (DetectsMonotone.proj_of_mono DK (fun n => up T n) (by
      intro n m hnm h; exact hmono hnm h))
  simpa [exists_up_iff T] using hx

/-! Local reading, provability and verdicts. -/

universe u' v'

/-- A local reading assigns a trace to each `(Γ, φ)`. -/
abbrev LocalReading (Context : Type v') (Sentence : Type u') :=
  Context → Sentence → Trace

/-- Provability as “halting of the local reading”. -/
def Prov {Context : Type v'} {Sentence : Type u'}
    (LR : LocalReading Context Sentence)
    (Γ : Context) (φ : Sentence) : Prop :=
  ∃ n : ℕ, LR Γ φ n

/-- Stabilised verdict via `Rev0`. -/
def verdict (K : RHKit)
    {Context : Type v'} {Sentence : Type u'}
    (LR : LocalReading Context Sentence)
    (Γ : Context) (φ : Sentence) : Prop :=
  Rev0 K (LR Γ φ)

/-- Verdict = provability for any admissible kit. -/
lemma verdict_iff_Prov
    (K : RHKit) (DK : DetectsMonotone K)
    {Context : Type v'} {Sentence : Type u'}
    (LR : LocalReading Context Sentence)
    (Γ : Context) (φ : Sentence) :
  verdict K LR Γ φ ↔ Prov LR Γ φ := by
  unfold verdict Prov
  simpa [Halts] using Rev0_iff_Halts K DK (LR Γ φ)

end LogicDissoc

If anyone is interested, I also have a full version where this is connected to a standard semantic closure CloE (Galois connection between models/theories) via a “dynamic bridge”:

φ ∈ CloE Γ if and only if the local reading trace LR Γ φ halts,

and then the robust Rev–verdict is forced to coincide with semantic consequence.
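
In the notation above, the bridge is a single hypothesis. Here is a minimal sketch in the same style as the core; the structure and lemma below are illustrative simplifications of the full version, not its actual statements:

structure DynamicBridge {Context : Type v'} {Sentence : Type u'}
    (CloE : Context → Set Sentence)
    (LR : LocalReading Context Sentence) : Prop where
  bridge : ∀ (Γ : Context) (φ : Sentence), φ ∈ CloE Γ ↔ Halts (LR Γ φ)

/-- Under the bridge, the robust verdict coincides with semantic consequence. -/
lemma verdict_iff_CloE (K : RHKit) (DK : DetectsMonotone K)
    {Context : Type v'} {Sentence : Type u'}
    {CloE : Context → Set Sentence} {LR : LocalReading Context Sentence}
    (B : DynamicBridge CloE LR) (Γ : Context) (φ : Sentence) :
  verdict K LR Γ φ ↔ φ ∈ CloE Γ := by
  unfold verdict
  rw [B.bridge Γ φ]
  exact Rev0_iff_Halts K DK (LR Γ φ)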

Thank you

John Doe (Dec 12 2025 at 23:10):

Link to the repo : https://github.com/JohnDoe-collab-stack/RevHalt

RevHalt

A Lean 4 framework proving that computational truth (halting) is:

  • Canonical — independent of how you observe it
  • Inaccessible — no sound formal system fully captures it
  • Complementary — any sound theory can be strictly extended toward it

Unlike classical presentations of Gödel's theorems, which work inside a specific theory, RevHalt provides the abstract framework and treats theories (PA, ZFC) as instances to be plugged in.


Foundational perspective

Standard presentations of Gödel's incompleteness theorems work within a base theory (typically PA or ZFC) and derive limitative results about that theory's expressive power.

This project inverts the perspective:

  • RevHalt provides the abstract syntactic framework, grounded in Turing-style computability (halting, diagonalization, definability).
  • Formal theories become semantic instances of this framework, obtained by supplying concrete interpretations of the abstract interface (provability, truth, arithmetization).

In this formulation, the "proof strength" of any particular theory is not a primitive notion but rather emerges as a local property: it measures how much of the externally-defined computational truth the theory's provability predicate can capture.

Original contributions

This project establishes three main results, each with distinct novelty:

T1 — Canonicity of computational truth

Classical result (Turing): The halting problem is undecidable.

T1 proves something different: computational truth is objective — independent of the observation mechanism.

The framework introduces RHKit, an abstract "observation mechanism" for traces. T1 proves:

  • T1_traces: Any valid Kit yields the same verdict as standard halting
  • T1_uniqueness: Two valid Kits are extensionally equivalent
  • T1_semantics: Under the DynamicBridge hypothesis, Rev captures model-theoretic consequence

This is not Turing's theorem. It is a canonicity result: all valid observers converge to the same truth.
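
For intuition, the simplest valid Kit is the one whose projection is existential truth itself. In the toy core from the first message (the names stdKit and stdKit_detects are mine, not the repo's):

/-- The “standard observer”: projection is plain existential truth. -/
def stdKit : RHKit := ⟨fun X => ∃ n, X n⟩

/-- It detects monotone families by definition, so it is a valid Kit. -/
lemma stdKit_detects : DetectsMonotone stdKit :=
  ⟨fun _X _ => Iff.rfl⟩

T1 then says that every other valid Kit agrees with stdKit on every trace.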

T2 — Abstract Turing-Gödel synthesis

Classical results: Turing (algorithmic undecidability) and Gödel I (true unprovable sentences) are typically presented separately.

T2 extracts their common abstract core via TuringGodelContext':

  • diagonal_program: the diagonal fixed-point axiom unifying both arguments
  • Result: no internal predicate can be simultaneously Total, Correct, and Complete

This is not a reformulation; it is an abstraction that reveals the structural unity of Turing's and Gödel's arguments.
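
The shape of the argument can be seen in a self-contained sketch (names are illustrative; the repo's TuringGodelContext' carries more structure, and here a Prop-valued Pred is total by typing, so the Total/Correct/Complete trilemma collapses to correctness plus completeness):

/-- Minimal diagonal context: an external predicate plus a fixed point. -/
structure DiagCtx where
  Code : Type
  RealHalts : Code → Prop
  Pred : Code → Prop
  diag : Code
  /-- The diagonal program halts iff the internal predicate denies it. -/
  diag_spec : RealHalts diag ↔ ¬ Pred diag

/-- No internal predicate can be both correct and complete for `RealHalts`. -/
lemma no_correct_complete (C : DiagCtx)
    (h : ∀ e, C.Pred e ↔ C.RealHalts e) : False := by
  have hiff : C.Pred C.diag ↔ ¬ C.Pred C.diag :=
    (h C.diag).trans C.diag_spec
  have hnp : ¬ C.Pred C.diag := fun hp => hiff.mp hp hp
  exact hnp (hiff.mpr hnp)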

T3 — Complementarity: the central concept

Classical incompleteness is limitative: it tells you what theories cannot do.

T3 introduces a new concept, complementarity, and proves it formally:

Rev is the complement of any sound formal system.

This means:

  • Formal systems are not "failures" for being incomplete — they are structurally partial
  • Rev is not merely "bigger" than PA/ZFC — it is what they lack
  • Truth and provability are not opposed — they are complementary

The theorem T3_strong proves that this complementarity is structured and rich. The space of true-but-unprovable statements is not an amorphous limiting void, but a partitioned landscape.

We prove the existence of infinitely many disjoint, compatible directions of extension. This means that completing a theory is not a single forced step, but a dynamical choice — a navigation through the geography of computational truth.
This concept has no classical analog. It transforms incompleteness from a limitative statement into a structural dynamical relationship.


Syntax–semantics correspondence

The framework establishes a precise correspondence:

| RevHalt (syntax) | Instance L (semantics) |
|---|---|
| RMHalts e — halting defined by the computability model | L.Truth (HaltEncode e) — truth as seen by the theory |
| M.PredDef — definability in the abstract model | L.Provable via arithmetization — derivability as seen by the theory |
| Diagonalization (diagonal_halting) | Arithmetization (repr_provable_not) |

The theorems then express structural gaps between syntactic truth and semantic observability:

| Theorem | Interpretation |
|---|---|
| T1 : ∀ T, Rev0_K K T ↔ Halts T | Canonicity: computational truth is objective, independent of the observation mechanism |
| T2 : ∃ p, Truth p ∧ ¬Provable p | Synthesis: no internal predicate captures external truth (Turing–Gödel core) |
| T2' : ∃ e, ¬Provable(H e) ∧ ¬Provable(¬H e) | Independence: some halting facts are invisible to the semantic observer |
| T3 : ∃ T₁ ⊃ ProvableSet, sound | Complementarity: Rev provides structured infinite extensions for any sound theory |

This reverses the direction of classical incompleteness proofs, which work inside a theory to reason about that same theory. Here, the proofs work in RevHalt, about any conforming semantic instance.


Structure

Core modules

| Module | Contents |
|---|---|
| RevHalt.lean | Base layer: Trace, Halts, RHKit, TuringGodelContext', T2_impossibility |
| RevHalt.Unified.Core | Abstract results: EnrichedContext, ProvableSet, true_but_unprovable_exists, independent_code_exists |
| RevHalt.Unified.RigorousModel | Computability model: RigorousModel, SoundLogicDef, Arithmetization |
| RevHalt.Unified.Bridge | Integration: SoundLogicEncoded, EnrichedContext_from_Encoded, RevHalt_Master_Complete |

Entry point

import RevHalt.Unified
open RevHalt_Unified

Interface (M / L / A / E)

The framework factors assumptions into four components:

M — Computability model (RigorousModel)

  • Code, Program : Code → ℕ → Option ℕ
  • PredCode, PredDef : PredCode → Code → Prop (definability, not decidability)
  • diagonal_halting (fixed-point over definable predicates)
  • no_complement_halts (non-halting is not definable)

L — Logic (SoundLogicDef PropT)

  • Provable, Truth : PropT → Prop
  • soundness : Provable p → Truth p
  • Not, FalseP, consistent, absurd, truth_not_iff

A — Arithmetization (Arithmetization M PropT L)

  • repr_provable_not : for any G : Code → PropT, the predicate Provable (Not (G e)) is definable in PredCode.

E — Encoding (in SoundLogicEncoded)

  • HaltEncode : Code → PropT
  • encode_correct : RMHalts e ↔ Truth (HaltEncode e)
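
Assembled from the field lists above, the L component has roughly the following shape. This is a reconstruction from the README's field names only; the repo's actual signatures may differ, and the signatures of consistent, absurd, and truth_not_iff below are guesses:

structure SoundLogicDef (PropT : Type) where
  Provable : PropT → Prop
  Truth : PropT → Prop
  Not : PropT → PropT
  FalseP : PropT
  soundness : ∀ p, Provable p → Truth p
  -- signatures below are plausible readings of the field names above:
  consistent : ¬ Provable FalseP
  absurd : ∀ p, Provable p → Provable (Not p) → Provable FalseP
  truth_not_iff : ∀ p, Truth (Not p) ↔ ¬ Truth p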

Main theorem

RevHalt_Master_Complete

For any semantic instance (M, K, L) satisfying the interface:

(1) ∀ e, RealHalts e ↔ Halts (compile e)           -- T1: canonicity
(2) ∃ p, Truth p ∧ ¬Provable p                     -- T2: synthesis
(3) ∃ e, ¬Provable (H e) ∧ ¬Provable (Not (H e))   -- T2': independence
(4) ∃ T₁ ⊃ ProvableSet, ∀ p ∈ T₁, Truth p          -- T3: complementarity

Demos

  • RevHalt_Demo_A : trivial model (empty provability)
  • RevHalt_Demo_C : non-trivial model (non-empty provability, structured predicates)
  • RevHalt/Demo/Template.lean : instantiation skeleton

Build

lake build
lake env lean RevHalt/Demo/All.lean

Design notes

  • PredDef is Prop-valued (definability), avoiding implicit decidability assumptions.
  • no_complement_halts blocks trivial instantiations where "not halts" would be definable.
  • Soundness (Provable → Truth) is an explicit hypothesis, not an ambient assumption.

Notification Bot (Dec 12 2025 at 23:25):

This topic was moved here from #lean4 > Robust Reverse Halting by Kevin Buzzard.

Kevin Buzzard (Dec 12 2025 at 23:27):

This has nothing to do with development of the core software so I've moved it to a more appropriate channel. Looks like slop to me but was reluctant to unilaterally junk because I don't know the area.

John Doe (Dec 12 2025 at 23:51):

Kevin Buzzard said:

This has nothing to do with development of the core software so I've moved it to a more appropriate channel. Looks like slop to me but was reluctant to unilaterally junk because I don't know the area.

Thanks for the triage. I understand this isn’t “core software” work, so moving it is fine.

On “slop”: the README is summarizing Lean-formalized results (not marketing text). The three main theorems are:

  • T1 (T1_traces): canonicity: for any valid RHKit, Rev0_K K T ↔ Halts T (all valid observers agree with standard halting).

  • T2 (T2_impossibility): abstract Turing–Gödel core: no internal predicate can be simultaneously total, correct, and complete with respect to RealHalts.

  • T3 (T3_strong): complementarity: for any sound theory, there are strict extensions (in structured, disjoint directions) toward computational truth.

Repo: <REPO_LINK>
Repro: lake update then lake build
Locations: T1_traces (RevHalt.lean), T2_impossibility (RevHalt.lean), T3_strong (RevHalt.lean).

thanks

Matt Diamond (Dec 13 2025 at 00:17):

Was an LLM involved with writing the code or the documentation? It would be good to disclose that if so

Thomas Murrills (Dec 13 2025 at 00:29):

And specifically, can you please elaborate on the degree to which you've used LLMs, and for what purposes? Using LLMs is fine in general, as long as it's clear exactly what they've been used for. :)

Also, for some context on Kevin's assessment here: we occasionally get people coming to the Lean Zulip out of the blue who post LLM-generated code and text, making fanciful claims. Some text in the messages you've sent (and in the repo) looks very LLMish on first glance; plus, the promise of a new framework without specific reference to existing research makes it look at least superficially similar to those cases. Hopefully you can understand the skeptical reaction given the experiences we've had here!

I've confirmed that your project does build, though, which is a great sign! :) (In many of the cases I've mentioned, this is not the case.)

John Doe (Dec 13 2025 at 02:01):

I’m anonymous because I want to be able to tell the truth without having to carry the personal weight of it.

I’ve always had difficulties expressing my ideas in my native language and with mathematical notation. For years I’ve been thinking about these topics in my head using a private internal "language". A few years ago, the arrival of LLMs gave me an interlocutor that could sometimes rephrase my ideas into ordinary words — at a time when most people around me just found me “weird”, so I couldn’t really discuss these ideas productively.

With an LLM, I was able to work by discussion: explore concepts, propose definitions, and iterate on many ideas. The hard part for me was determining whether my conclusions were actually true. That is how I discovered proof assistants.

Since then, I continued essentially the same way as before (conceptual work first), except that after the “paper”/discussion stage we would attempt formalization. In practice: the Lean code (and most prose) in this repository is LLM-generated; my contribution was specifying the conceptual/definitional constraints, pushing the system to respect them, and iterating by building, reading errors, revising statements/definitions, and repeating until the development compiled. Even after that, I repeatedly rebuilt the development from scratch to improve clarity and to be sure the formalization matched the concepts I had in mind.

A key part of my method is target selection. For me, the “targets” are structural constraints: constraints that are often implicit in standard notation, and which can look tautological or degenerate when stated explicitly. I focus on the ideas and the links between ideas, and I try to identify which additional constraint to target next in order to understand the overall constraint system. In practice, this is essentially a form of reverse engineering.

This was roughly six years of work from scratch using dictionaries, Wikipedia, and books, and then with ChatGPT; now I’m at the point where part of those ideas can be expressed and checked formally.

So the repository should be understood as LLM-generated Lean code and documentation, guided by my long-term conceptual and definitional constraints, rather than Lean code authored by me.

I understand the skeptical reaction given past experiences here and given my method. For me this is first a way to confront my ideas, and also a way to express myself. I’ll add an explicit “LLM disclosure” section to the repo (including which models were used and for what) and keep the README strictly checkable (theorem statements, assumptions, file locations, and build instructions).

T1 is a “for each locality / observer” convergence statement.

Formally: for any kit K satisfying DetectsMonotone K, and for any trace T,

Rev0_K K T ↔ Halts T.

Conceptually:

  • up is a closure/stabilization of a trace: it turns any trace into a monotone one (once a witness appears, it stays witnessed).

  • DetectsMonotone is a structural constraint on observers: on monotone processes, the observer’s projection must coincide with existential truth (∃ n, X n).

  • Therefore, after closing any trace with up, every admissible observer is forced to agree with standard halting. This is exactly why T1_uniqueness follows: any two valid kits are extensionally equivalent on all traces.
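
In the minimal core from my first message, that uniqueness step is a two-line corollary of the collapse lemma (the lemma name here is ad hoc):

/-- Any two admissible kits give extensionally equal verdicts. -/
lemma kits_agree (K₁ K₂ : RHKit)
    (D₁ : DetectsMonotone K₁) (D₂ : DetectsMonotone K₂) (T : Trace) :
  Rev0 K₁ T ↔ Rev0 K₂ T :=
  (Rev0_iff_Halts K₁ D₁ T).trans (Rev0_iff_Halts K₂ D₂ T).symm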

Repro: lake build (and see T1_traces / T1_uniqueness in RevHalt.lean).

Matt Diamond (Dec 13 2025 at 02:46):

So with T1, it sounds like you're trying to say something about predicates on infinite binary sequences (which you call Traces). You've got 2 predicates (Rev0_K and Halts) and you've proved that they're equivalent if DetectsMonotone holds. That makes sense, but I'm wondering about the significance here.

To review:

Rev0_K takes in a trace, converts it to Monotone by setting all indices after the initial 1 (if it exists) to 1, and calls the proj function of an RHKit object on this trace.

Halts is true if there's at least one 1 in the trace.

DetectsMonotone is a predicate that says that an RHKit's proj function can detect whether or not a trace halts (contains a 1) if the trace is Monotone.

Your theorem observes that Rev0_K (with some RHKit) and Halts are equivalent when DetectsMonotone is true. But we should expect that: DetectsMonotone means that RHKit can detect ones if the sequence is monotone, and we're monotonizing the sequence before we pass it in! As you observed in your lemma exists_up_iff, monotonizing a sequence preserves 1-existence, so it shouldn't be a surprise that a function which detects a 1 in Monotone sequences can always accomplish this when you pass it a monotonized sequence.

The thing is that your Monotone T requirement in DetectsMonotone could be replaced by any property which is preserved by an operation and the theorem would still hold. In a general sense, what the theorem is stating is: "if you have a function which detects a property P of an input when the input meets a condition C, and if you can transform the input to meet that condition in a P-preserving way, then you can compose your function with the modifier to detect P for all inputs." The problem (to me, anyway) is that it isn't clear what this has to do with locality or observers. The terminology feels like it expresses more than the definitions actually capture.
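
Stated abstractly (with made-up names), the observation is just:

/-- If `detect` is correct on inputs satisfying `C`, and `close` maps any
input into `C` while preserving `P`, then `detect ∘ close` detects `P`
everywhere. -/
lemma detect_comp_close {α : Type} {P C : α → Prop}
    (detect : α → Prop) (close : α → α)
    (hC : ∀ a, C (close a))
    (hP : ∀ a, P (close a) ↔ P a)
    (hdet : ∀ a, C a → (detect a ↔ P a)) :
  ∀ a, detect (close a) ↔ P a :=
  fun a => (hdet (close a) (hC a)).trans (hP a)

with P := Halts, C := monotone, close := up, and detect := K.Proj.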

Just my 2 cents, though.

Matt Diamond (Dec 13 2025 at 02:56):

Also not trying to discourage you, btw! It can make sense to try to add rigor to a philosophical system even if the math isn't groundbreaking. What matters is that it helps you (and hopefully others) understand your ideas.

constraints that are often implicit in standard notation, and which can look tautological or degenerate when stated explicitly

could you provide an example of this? not sure what you're referring to here

John Doe (Dec 13 2025 at 03:41):

My viewpoint here is dynamic-first: the primary objects are processes T : ℕ → Prop and stabilization/closure operators on them (not static predicates on finished objects).

I take your point: T1 is immediate once you see up and the assumption DetectsMonotone. That is intentional. T1 is a coherence/interface lemma: it pins down what any admissible observer can return after stabilization. (“Observer” here just means a choice of kit / coding convention for projecting a stabilized trace; I’m not assuming or claiming any finitary, prefix-local, or computability property.)

More broadly, this is part of an architectural approach: I’m designing a constraint system on dynamics so that the whole development is coherent and compositional, not a collection of unrelated lemmas. T1 is the coherence pin that makes later layers inherit observer-independence rather than re-proving it case-by-case.

The intended effect is a form of dynamic self-consistency: admissibility + stabilization enforce global coherence by construction.

More precisely: DetectsMonotone is the admissibility condition on kits (it specifies correctness on stabilized/monotone evolutions), and up is the stabilization operation. Then T1 says that for each admissible kit K (equipped with DK : DetectsMonotone K), the composed verdict Rev0_K K := K.Proj ∘ up is uniquely determined and coincides extensionally with Halts on all traces. So the significance is not computational hardness; it’s uniqueness-by-constraints / observer-independence: once you fix the admissibility criterion and the stabilization step, the space of possible observers collapses to a single extensional behavior. This is what makes downstream notions (e.g. bridges from traces to semantic consequence / provability instances) independent of the particular K, rather than artifacts of terminology.

Re “implicit constraints”: one example is precisely the move from “eventually true” to a stabilized monotone witness predicate. Informally people talk about “detecting eventuality”, but they often implicitly assume the monotonicity convention (equivalently, invariance under up).
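
To make that concrete in the toy core: the convention is exactly idempotence of up (the lemma name is ad hoc):

/-- Stabilizing twice adds nothing: the implicit monotonicity convention,
made explicit. -/
lemma up_idem (T : Trace) (n : ℕ) : up (up T) n ↔ up T n := by
  constructor
  · rintro ⟨k, hk, j, hj, hT⟩
    exact ⟨j, le_trans hj hk, hT⟩
  · intro h
    exact ⟨n, le_rfl, h⟩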

Matt Diamond (Dec 13 2025 at 06:52):

In RevHalt.lean you state:

T3 shows that Rev, as external truth, can always extend any sound theory

but you don't mention Rev at all after this line which is at the start of your statement of T2, so I'm not sure how you proved anything about it

John Doe (Dec 13 2025 at 10:13):

Matt Diamond said:

In RevHalt.lean you state:

T3 shows that Rev, as external truth, can always extend any sound theory

but you don't mention Rev at all after this line which is at the start of your statement of T2, so I'm not sure how you proved anything about it

Yes, that observation is correct.

RevHalt’s architecture is intentionally two-level:

  • Abstract level (RevHalt.lean): T2 and T3 are structural theorems proved for an arbitrary external predicate RealHalts inside a TuringGodelContext'. T3 is stated in terms of Truth / ProvableSet. Rev does not appear there — that’s by design.

  • Instantiated level (Unified/Bridge.lean): the TGContext_from_RM / EnrichedContext_from_Encoded construction instantiates RealHalts := Rev0_K K (rmCompile M e), and then encode_correct connects this RealHalts to Truth (HaltEncode e). At this level, the T3 statement (sound strict extension of ProvableSet) is obtained for that instantiated context, in the same chain as T1 + T2 (+ T2’).

So if the question is “where is the link between Rev and T3 proved?”, it’s in Unified/Bridge.lean (and the Unified/* modules), not in RevHalt.lean alone.

edit:

My goal is not to “redo Gödel inside ZFC”, but to exhibit a common reference frame.

The Unified/* folder makes this explicit by factoring the development through an M/L/A/E interface:

  • M = computation model (RigorousModel, RMHalts, rmCompile, diagonal_halting, no_complement_halts)

  • L = logic (SoundLogicDef: Provable, Truth, Not, FalseP, soundness/consistency/absurd, truth_not_iff)

  • A = arithmetization (Arithmetization.repr_provable_not)

  • E = halting encoding (HaltEncode with encode_correct : RMHalts M e ↔ Truth (HaltEncode e))

At the abstract level (RevHalt.lean / Unified/Core.lean), T2/T3 are proved as structural theorems for any context satisfying this interface (in particular, for an abstract external predicate RealHalts in TuringGodelContext', and for Truth/ProvableSet in EnrichedContext). They do not depend on ZFC specifically.

Then Unified/Bridge.lean is the instantiation glue: it sets
RealHalts := Rev0_K K (rmCompile M e) (via TGContext_from_RM), and uses T1 (T1_traces) plus rm_compile_halts_equiv and encode_correct to connect that RealHalts back to the semantic side Truth (HaltEncode e). This is where the slogan “Rev (via this instantiation of RealHalts) yields a sound strict extension” becomes literal, with T1 + T2 (+ T2’) + T3 proved in one chain (RevHalt_Master_Complete).

What this architecture is meant to exhibit is compositionality/portability: once a theory/model provides an instance of M/L/A/E, the whole T1 → T2 → T2’ → T3 chain transports without rewriting proofs. This turns the framework into a reference/mapping layer for comparing different theories and models in the same format.
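
Schematically, the glue is a chain of three equivalences. A hedged sketch, with the two bridge hypotheses spelled out as explicit arguments (the repo packages rm_compile_halts_equiv and encode_correct inside structures; the form below is my paraphrase against the toy core):

/-- Rev on the compiled trace matches semantic truth of the encoded
halting sentence, by chaining T1 with the two bridge hypotheses. -/
lemma rev_matches_truth {Code PropT : Type}
    (K : RHKit) (DK : DetectsMonotone K)
    (compile : Code → Trace) (RMHalts : Code → Prop)
    (Truth : PropT → Prop) (HaltEncode : Code → PropT)
    (hcompile : ∀ e, RMHalts e ↔ Halts (compile e))
    (hencode : ∀ e, RMHalts e ↔ Truth (HaltEncode e))
    (e : Code) :
  Rev0 K (compile e) ↔ Truth (HaltEncode e) :=
  (Rev0_iff_Halts K DK (compile e)).trans
    ((hcompile e).symm.trans (hencode e))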

Finally, there is the Demo/* folder with the instantiation examples (see the Demos section above).


Last updated: Dec 20 2025 at 21:32 UTC