New Foundations is consistent

6.2 Building the tower

Definition 6.4

For a proper type index \( \alpha \), the main motive at \( \alpha \) is the type \( \mathsf{Motive}_\alpha \) consisting of model data at \( \alpha \), a position function for \( \mathsf{Tang}_\alpha \), and typed near-litters at \( \alpha \), such that if \( (x, S) \) is an \( \alpha \)-tangle and \( y \) is an atom or near-litter that occurs in the range of \( S_A \), then \( \iota (y) {\lt} \iota (x, S) \).
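Rendered schematically in Lean, the main motive is a structure bundling these three pieces of data with the position condition. Every identifier below (\( \Lambda \), `ModelData`, `PositionFn`, …) is an illustrative placeholder for the blueprint's objects, not a name from the actual formalisation, and the precise statement of the position condition is compressed into an axiom.

```lean
universe u

-- Schematic only: all identifiers are placeholders, not the names
-- used in the actual formalisation.
axiom Λ : Type u
axiom ModelData : Λ → Type (u + 1)
axiom PositionFn : Λ → Type u        -- a position function for Tang_α
axiom TypedNearLitters : Λ → Type u
-- the condition: ι y < ι (x, S) whenever y is an atom or near-litter
-- occurring in the range of S_A for an α-tangle (x, S)
axiom PosCondition : ∀ α : Λ, ModelData α → PositionFn α → Prop

structure Motive (α : Λ) : Type (u + 1) where
  data  : ModelData α
  pos   : PositionFn α
  typed : TypedNearLitters α
  small : PosCondition α data pos
```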

We define the main hypothesis

\[ \mathsf{Hypothesis}: \prod _{\alpha : \lambda } \mathsf{Motive}_\alpha \to \left( \prod _{\beta {\lt} \alpha } \mathsf{Motive}_\beta \right) \to \mathsf{Type}_{u+1} \]

at \( \alpha \), given \( \mathsf{Motive}_\alpha \) and \( \prod _{\beta {\lt} \alpha } \mathsf{Motive}_\beta \), to be the type consisting of the following data.

  • For \( \gamma {\lt} \beta \leq \alpha \), there is a map \( \mathsf{AllPerm}_\beta \to \mathsf{AllPerm}_\gamma \) that commutes with the coercions from \( \mathsf{AllPerm}_{(-)} \) to \( \mathsf{StrPerm}_{(-)} \).

  • Let \( \beta , \gamma {\lt} \alpha \) be distinct with \( \gamma \) proper. Let \( t : \mathsf{Tang}_\beta \) and \( \rho : \mathsf{AllPerm}_\alpha \). Then

    \[ (\rho _\gamma )_\bot (f_{\beta ,\gamma }(t)) = f_{\beta ,\gamma }(\rho _\beta (t)) \]
  • Suppose that \( (\rho (\beta ))_{\beta {\lt} \alpha } \) is a collection of allowable permutations such that whenever \( \beta , \gamma {\lt} \alpha \) are distinct, \( \gamma \) is proper, and \( t : \mathsf{Tang}_\beta \), we have

    \[ (\rho (\gamma ))_\bot (f_{\beta ,\gamma }(t)) = f_{\beta ,\gamma }(\rho (\beta )(t)) \]

    Then there is an \( \alpha \)-allowable permutation \( \rho \) with \( \rho _\beta = \rho (\beta ) \) for each \( \beta {\lt} \alpha \).

  • For any \( \beta {\lt} \alpha \) and \( x : \mathsf{TSet}_\alpha \),

    \[ U_\alpha (x)(\beta ) \subseteq \operatorname {\mathsf{ran}}U_\beta \]
  • (extensionality) If \( \beta : \lambda \) is such that \( \beta {\lt} \alpha \), the map \( U_\alpha (-)(\beta ) : \mathsf{TSet}_\beta \to \operatorname {\mathsf{Set}}\mathsf{StrSet}_\beta \) is injective.

  • If \( \beta : \lambda \) is such that \( \beta {\lt} \alpha \), for every \( x : \mathsf{TSet}_\beta \) there is some \( y : \mathsf{TSet}_\alpha \) such that \( U_\alpha (y)(\beta ) = \{ x \} \), and we write \( \operatorname {\mathsf{singleton}}_\alpha (x) \) for this object \( y \).
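A schematic Lean rendering of this type packages the six bullet points above as the fields of a structure. Here `Cond i` compresses the precise statement of the \( i \)-th condition into a placeholder, and all identifiers are illustrative rather than those of the formalisation.

```lean
universe u

-- Schematic only: each field corresponds to one bullet point of the
-- definition; all identifiers are illustrative placeholders.
axiom Λ : Type u
axiom lt : Λ → Λ → Prop
axiom Motive : Λ → Type (u + 1)
axiom Cond :
  Nat → ∀ α : Λ, Motive α → (∀ β, lt β α → Motive β) → Type (u + 1)

structure Hypothesis (α : Λ) (M : Motive α)
    (Mlt : ∀ β, lt β α → Motive β) : Type (u + 1) where
  restrict   : Cond 0 α M Mlt  -- AllPerm_β → AllPerm_γ, commuting with coercions
  coherence  : Cond 1 α M Mlt  -- (ρ_γ)_⊥ (f_{β,γ} t) = f_{β,γ} (ρ_β t)
  glue       : Cond 2 α M Mlt  -- coherent families (ρ(β)) glue to an α-allowable ρ
  raise      : Cond 3 α M Mlt  -- U_α(x)(β) ⊆ ran U_β
  ext        : Cond 4 α M Mlt  -- U_α(-)(β) is injective (extensionality)
  singletons : Cond 5 α M Mlt  -- some y : TSet_α has U_α(y)(β) = {x}
```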

The inductive step for the main motive is the function

\begin{align*} & \operatorname {\mathsf{Step}}_M : \prod _{\alpha : \lambda } \prod _{M : \prod _{\beta {\lt} \alpha } \mathsf{Motive}_\beta } \\ & \quad \left( \prod _{\beta : \lambda } \prod _{h : \beta {\lt} \alpha } \mathsf{Hypothesis}\ \beta \ (M\ \beta \ h)\ (\gamma \ h' \mapsto M\ \gamma \ (\operatorname {\mathsf{trans}}(h',h))) \right) \to \mathsf{Motive}_\alpha \end{align*}

given as follows. Given the hypotheses \( \alpha , M, H \), we use definition 3.12 to construct model data at level \( \alpha \). We then combine this with \( H \) to create an instance of coherent data below level \( \alpha \).¹ By proposition 5.45, we conclude that \( \# \mathsf{Tang}_\alpha = \# \mu \), and so by proposition 3.15 there is a position function on \( \mathsf{Tang}_\alpha \) that respects the typed near-litters defined in definition 3.13. These data comprise the main motive at level \( \alpha \).
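This construction can be outlined in Lean as follows; the intermediate constructions of the paragraph above are compressed into axioms, and every identifier is an illustrative placeholder rather than a name from the formalisation.

```lean
universe u

-- Schematic outline of Step_M; all identifiers are placeholders.
axiom Λ : Type u
axiom lt : Λ → Λ → Prop
axiom lt_trans : ∀ {a b c : Λ}, lt a b → lt b c → lt a c
axiom Motive : Λ → Type (u + 1)
axiom Hypothesis :
  ∀ α : Λ, Motive α → (∀ β, lt β α → Motive β) → Type (u + 1)
axiom ModelData : Λ → Type (u + 1)
axiom CoherentBelow : Λ → Type (u + 1)

-- definition 3.12: model data at level α from the motives below α
axiom newModelData : ∀ α : Λ, (∀ β, lt β α → Motive β) → ModelData α

-- combining the motives below α with H gives coherent data below α
axiom coherentBelow :
  ∀ α (M : ∀ β, lt β α → Motive β),
    (∀ β h, Hypothesis β (M β h) (fun γ h' => M γ (lt_trans h' h))) →
    CoherentBelow α

-- propositions 5.45 and 3.15: #Tang_α = #μ, hence a position function
-- respecting the typed near-litters of definition 3.13
axiom motiveOfCoherent : ∀ α : Λ, ModelData α → CoherentBelow α → Motive α

noncomputable def stepM (α : Λ) (M : ∀ β, lt β α → Motive β)
    (H : ∀ β h, Hypothesis β (M β h) (fun γ h' => M γ (lt_trans h' h))) :
    Motive α :=
  motiveOfCoherent α (newModelData α M) (coherentBelow α M H)
```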

The inductive step for the main hypothesis is the function

\begin{align*} & \operatorname {\mathsf{Step}}_H : \prod _{\alpha : \lambda } \prod _{M : \prod _{\beta {\lt} \alpha } \mathsf{Motive}_\beta } \\ & \quad \prod _{H : \prod _{\beta : \lambda } \prod _{h : \beta {\lt} \alpha } \mathsf{Hypothesis}\ \beta \ (M\ \beta \ h)\ (\gamma \ h' \mapsto M\ \gamma \ (\operatorname {\mathsf{trans}}(h',h)))}\\ & \quad \mathsf{Hypothesis}\ \alpha \ (\operatorname {\mathsf{Step}}_M\ \alpha \ M\ H)\ M \end{align*}

given as follows. Given the hypotheses \( \alpha , M, H \), we use definition 6.6 to obtain model data at level \( \alpha \) and coherent data below level \( \alpha \). The remaining proof obligations are handled by ??.
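The dependent typing of this step can be made explicit in a schematic Lean signature: the hypothesis at \( \alpha \) is stated at the motive that \( \operatorname{\mathsf{Step}}_M \) has just built, relative to the motives \( M \) below \( \alpha \). All identifiers are placeholders, and the construction itself is compressed into an axiom.

```lean
universe u

-- Schematic only: the signature of Step_H with placeholder names.
axiom Λ : Type u
axiom lt : Λ → Λ → Prop
axiom lt_trans : ∀ {a b c : Λ}, lt a b → lt b c → lt a c
axiom Motive : Λ → Type (u + 1)
axiom Hypothesis :
  ∀ α : Λ, Motive α → (∀ β, lt β α → Motive β) → Type (u + 1)
axiom stepM :
  ∀ α (M : ∀ β, lt β α → Motive β),
    (∀ β h, Hypothesis β (M β h) (fun γ h' => M γ (lt_trans h' h))) →
    Motive α

-- the hypothesis at α is stated at the freshly built motive stepM α M H
axiom stepH :
  ∀ α (M : ∀ β, lt β α → Motive β)
    (H : ∀ β h, Hypothesis β (M β h) (fun γ h' => M γ (lt_trans h' h))),
    Hypothesis α (stepM α M H) M
```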

Theorem 6.8 (model construction)

There are noncomputable functions

\[ \operatorname {\mathsf{ComputeMotive}} : \prod _{\alpha : \lambda } \mathsf{Motive}_\alpha \]

and

\[ \operatorname {\mathsf{ComputeHypothesis}} : \prod _{\alpha : \lambda } \mathsf{Hypothesis}\ \alpha \ \operatorname {\mathsf{ComputeMotive}}_\alpha \ (\beta \ \_ \mapsto \operatorname {\mathsf{ComputeMotive}}_\beta ) \]

such that for each \( \alpha : \lambda \),

\begin{align*} \operatorname {\mathsf{ComputeMotive}}_\alpha = & \operatorname {\mathsf{Step}}_M\ \alpha \ (\beta \ \_ \mapsto \operatorname {\mathsf{ComputeMotive}}_\beta )\\ & (\operatorname {\mathsf{Step}}_H\ \alpha \ (\beta \ \_ \mapsto \operatorname {\mathsf{ComputeHypothesis}}_\beta )) \end{align*}
Proof

Direct from theorem 6.3.

We can then reconstruct, for each level \( \alpha \), the type of coherent data below \( \alpha \).
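The statement of theorem 6.8 can be rendered schematically in Lean: the two towers are produced simultaneously, and the recursion equation is their defining property. All identifiers below are placeholders, and the well-founded recursion principle of theorem 6.3 is compressed into axioms.

```lean
universe u

-- Schematic rendering of theorem 6.8; all names are placeholders.
axiom Λ : Type u
axiom lt : Λ → Λ → Prop
axiom lt_trans : ∀ {a b c : Λ}, lt a b → lt b c → lt a c
axiom Motive : Λ → Type (u + 1)
axiom Hypothesis :
  ∀ α : Λ, Motive α → (∀ β, lt β α → Motive β) → Type (u + 1)
axiom stepM :
  ∀ α (M : ∀ β, lt β α → Motive β),
    (∀ β h, Hypothesis β (M β h) (fun γ h' => M γ (lt_trans h' h))) →
    Motive α
axiom stepH :
  ∀ α (M : ∀ β, lt β α → Motive β)
    (H : ∀ β h, Hypothesis β (M β h) (fun γ h' => M γ (lt_trans h' h))),
    Hypothesis α (stepM α M H) M

-- the two noncomputable towers of theorem 6.8
axiom computeMotive : ∀ α : Λ, Motive α
axiom computeHypothesis :
  ∀ α : Λ, Hypothesis α (computeMotive α) (fun β _ => computeMotive β)

-- the fixed-point (recursion) equation satisfied at every level α
axiom computeMotive_eq :
  ∀ α : Λ,
    computeMotive α =
      stepM α (fun β _ => computeMotive β)
        (fun β _ => computeHypothesis β)
```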

  1. This requires using the definition of new allowable permutations. There may be some difficulties here in converting between the types of model data at level \( \alpha \) together with model data at all \( \beta {\lt} \alpha \), and model data at all levels \( \beta \leq \alpha \).