# 1.1 Introduction

In this chapter, we explain how to construct families of loops to feed into the corrugation process explained at the end of the introduction.

Throughout this document, $$E$$ and $$F$$ will denote finite-dimensional real vector spaces.

Definition 1.1
#

A loop is a map defined on the circle $$𝕊^1 = ℝ/ℤ$$ with values in a finite-dimensional vector space. It can also freely be seen as a $$1$$-periodic map defined on $$ℝ$$.

The average of a loop $$γ$$ is $$\barγ := \int _{𝕊^1} γ(s)\, ds$$.

The support of a family $$γ$$ of loops in $$F$$ parametrized by $$E$$ is the closure of the set of $$x$$ in $$E$$ such that $$γ_x$$ is not a constant loop.

All of this chapter is devoted to proving the following proposition.

Proposition 1.2
#

Let $$K$$ be a compact set in $$E$$. Let $$Ω$$ be an open set in $$E × F$$.

Let $$β$$ and $$g$$ be smooth maps from $$E$$ to $$F$$. Write $$Ω_x := \{ y ∈ F \mid (x, y) ∈ Ω\}$$, assume that $$β(x) ∈ Ω_x$$ for all $$x$$, and that $$g(x) = β(x)$$ near $$K$$.

If, for every $$x$$, $$g(x)$$ is in the convex hull of the connected component of $$Ω_x$$ containing $$β(x)$$, then there exists a smooth family of loops

$γ \! :E × [0, 1] × 𝕊^1 → F, (x, t, s) ↦ γ^t_x(s)$

such that, for all $$x ∈ E$$, all $$t ∈ [0, 1]$$ and all $$s ∈ 𝕊^1$$,

• $$γ^t_x(s) ∈ Ω_x$$

• $$γ^0_x(s) = γ^t_x(1) = β(x)$$

• $$\barγ^1_x = g(x)$$

• $$γ^t_x(s) = β(x)$$ if $$x$$ is near $$K$$.

Let us briefly sketch the geometric idea behind the above proposition, pretending there is only one point $$x$$ (which we drop from the notation) and focusing only on $$γ^1$$. By assumption, there is a finite collection of points $$p_i$$ in $$Ω$$ and weights $$λ_i ∈ [0, 1]$$ with sum $$1$$ such that $$g$$ is the barycenter $$\sum λ_i p_i$$. Since $$Ω$$ is open and connected, there is a smooth loop $$γ_0$$ which goes through each $$p_i$$. The claim is that $$g$$ is the average value of $$γ = γ_0 ∘ h$$ for some self-diffeomorphism $$h$$ of $$𝕊^1$$. The idea is to choose $$h$$ such that $$γ$$ rushes to $$p_1$$, stays there for a time roughly $$λ_1$$, rushes to $$p_2$$, etc. But, in order to achieve average exactly $$g$$, it seems like $$h$$ needs to be a discontinuous piecewise constant map. The assumption that $$Ω$$ is open means that its convex hull is open, which gives enough slack to get away with a smooth $$h$$.

In the previous proof sketch, there is a lot of freedom in constructing $$γ$$, which is problematic when trying to do it consistently when $$x$$ varies.

# 1.2 Surrounding points

This section collects elementary results about convex sets in finite-dimensional real vector spaces that will help to construct families of loops. In this section, $$E$$ is a real vector space with (finite) dimension $$d$$. The discussion will center around the following definition, which is tailored to our later needs.

Definition 1.3
#

A point $$x$$ in $$E$$ is surrounded by points $$p_0$$, …, $$p_d$$ if those points are affinely independent and there exist weights $$w_i ∈ (0, 1)$$ with sum $$1$$ such that $$x = \sum _i w_i p_i$$.

Note that, in the above definition, the number of points $$p_i$$ is fixed by the dimension $$d$$ of $$E$$, and that the weights $$w_i$$ are the barycentric coordinates of $$x$$ with respect to the affine basis $$p_0, \ldots , p_d$$.

The first important point about this definition is that surrounding is smoothly locally stable: if $$x$$ is surrounded by a collection of points $$p$$, then every point $$y$$ close to $$x$$ is surrounded by every collection of points $$q$$ close to $$p$$, and the relevant barycentric coordinates depend smoothly on $$y$$ and $$q$$. The precise statement follows.

Lemma 1.4
#

For every $$x$$ in $$E$$ and every collection of points $$p ∈ E^{d+1}$$ surrounding $$x$$, there is a function $$w \! :E × E^{d+1} → ℝ^{d+1}$$ such that, for every $$(y, q)$$ in a neighborhood of $$(x, p)$$,

• $$w$$ is smooth at $$(y, q)$$

• $$w_i(y, q) {\gt} 0$$ for all $$i$$

• $$\sum _{i=0}^d w_i(y, q) = 1$$

• $$y = \sum _{i=0}^d w_i(y, q)q_i$$.

Proof

Let:

\begin{align*} A = E \times \{ q \in E^{d+1} ~ |~ \mbox{$q$ is an affine basis for $E$} \} , \end{align*}

and define:

\begin{align*} w \! :A & \to ℝ^{d+1}\\ (y, q) & \mapsto \mbox{barycentric coordinates of $y$ with respect to $q$}. \end{align*}

If we fix an affine basis $$b$$ of $$E$$, we may express $$w$$ as a ratio of determinants in terms of coordinates relative to $$b$$. More precisely, by Cramer’s rule, if $$0 \le i \le d$$ and $$w_i$$ is the $$i^{\rm th}$$ component of $$w$$, then:

\begin{align*} w_i (y, q) = \det M_i (y, q) / \det N (q) \end{align*}

where $$N(q)$$ is the $$(d+1)\times (d+1)$$ matrix whose columns are the barycentric coordinates of the components of $$q$$ relative to $$b$$, and $$M_i (y, q)$$ is $$N(q)$$ except with column $$i$$ replaced by the barycentric coordinates of $$y$$ relative to $$b$$.

Since determinants are smooth functions and $$(y, q) \mapsto \det N(q)$$ is non-vanishing on $$A$$, $$w$$ is smooth on $$A$$.

Finally define:

\begin{align*} U = w^{-1}((0, \infty )^{d+1}), \end{align*}

and note that $$U$$ is open in $$A$$, since it is the preimage of an open set under the continuous map $$w$$. In fact since $$A$$ is open, $$U$$ is open as a subset of $$E \times E^{d+1}$$. Note that $$(x, p) \in U$$ since $$p$$ surrounds $$x$$.

We may extend $$w$$ to $$E \times E^{d+1}$$ by giving it any values at all outside $$A$$.
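As an illustration of the Cramer's-rule formula in the proof above, here is a small numerical sketch in the plane ($$d = 2$$). It is not part of the text: the function names are ours, and for simplicity it works with Cartesian homogeneous coordinates rather than coordinates relative to an abstract affine basis $$b$$, which amounts to the same linear system.

```python
# Sketch of Lemma 1.4 in the plane: barycentric coordinates computed as
# ratios of determinants, following the Cramer's-rule formula of the proof.

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def barycentric(y, q):
    """Weights w with sum(w) = 1 and y = sum_i w[i] * q[i].

    q is a list of three points of the plane (an affine basis); the
    columns of N(q) are the points in homogeneous coordinates (1, qx, qy).
    """
    n = [[1.0, 1.0, 1.0],
         [q[0][0], q[1][0], q[2][0]],
         [q[0][1], q[1][1], q[2][1]]]
    dn = det3(n)
    col_y = [1.0, y[0], y[1]]
    ws = []
    for i in range(3):
        m = [row[:] for row in n]           # copy N(q) ...
        for r in range(3):
            m[r][i] = col_y[r]              # ... and replace column i by y
        ws.append(det3(m) / dn)
    return ws

# The triangle (0,0), (1,0), (0,1) surrounds its centroid: all weights
# are strictly positive, and they are smooth in y and q since they are
# ratios of determinants with non-vanishing denominator.
w = barycentric((1 / 3, 1 / 3), [(0, 0), (1, 0), (0, 1)])
```

Perturbing the point or the triangle slightly keeps all three weights positive, which is exactly the local stability expressed by Lemma 1.4.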

We next need a criterion ensuring that a point $$x$$ is surrounded by a collection of points taken from a given subset $$P$$. The first temptation is to hope that $$x$$ being in the interior of the convex hull of $$P$$ is enough. But this is not true. For instance, the center of a square in the plane is in the interior of the convex hull of the set $$P$$ of vertices of the square, but it is not surrounded by any subset of the vertices. This counterexample also shows that the stability lemma above is slightly less trivial than it sounds.

The rest of this section is devoted to the following result, which shows that no such issue arises when $$P$$ is open.

Proposition 1.5
#

If a point $$x$$ of $$E$$ lies in the convex hull of an open set $$P$$, then it is surrounded by some collection of points belonging to $$P$$.

This proposition will be proven at the end of this section. We’ll first need the Carathéodory lemma:

Lemma 1.6 Carathéodory’s lemma
#

If a point $$x$$ of $$E$$ lies in the convex hull of a set $$P$$, then $$x$$ belongs to the convex hull of a finite set of affinely independent points of $$P$$.

Proof

By assumption, there is a finite set of points $$t_i$$ in $$P$$ and weights $$f_i$$ such that $$x = \sum f_i t_i$$, each $$f_i$$ is non-negative and $$\sum f_i = 1$$. Choose such a set of points of minimum cardinality. We argue by contradiction that such a set must be affinely independent.

Thus suppose that there is some vanishing combination $$\sum g_i t_i = 0$$ with $$\sum g_i = 0$$ and not all $$g_i$$ zero. Let $$S = \{ i \mid g_i {\gt} 0\}$$; it is nonempty since the $$g_i$$ sum to zero and do not all vanish. Let $$i_0$$ in $$S$$ be an index minimizing $$f_i/g_i$$. We shall obtain our contradiction by showing that $$x$$ belongs to the convex hull of the set $$\{ t_i \mid i ≠ i_0\}$$, which has cardinality strictly smaller than that of $$\{ t_i\}$$.

We thus define new weights $$k_i = f_i - g_i f_{i_0}/g_{i_0}$$. These weights sum to $$\sum f_i - (\sum g_i)f_{i_0}/g_{i_0} = 1$$ and $$k_{i_0} = 0$$. Each $$k_i$$ is non-negative: if $$i$$ is in $$S$$, this follows from the choice of $$i_0$$; if $$i$$ is not in $$S$$, then $$f_i$$, $$-g_i$$ and $$f_{i_0}/g_{i_0}$$ are all non-negative. It remains to compute

\begin{align*} \sum _{i ≠ i_0} k_i t_i & = \sum _i k_i t_i \\ & = \sum _i (f_i - g_i f_{i_0}/g_{i_0}) t_i \\ & = \sum _i f_i t_i - \left(\sum _i g_i t_i\right)f_{i_0}/g_{i_0} \\ & = x \end{align*}

where we use $$k_{i_0} = 0$$ in the first equality and $$\sum _i g_i t_i = 0$$ in the last one.
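The reduction step in the proof above is effectively an algorithm. The following sketch (illustrative only; the function name and the square example are our choices) runs one step on the center of the unit square, where the affine dependence $$p_0 - p_1 + p_2 - p_3 = 0$$ among the vertices is known in advance.

```python
# One reduction step of Carathéodory's lemma, in the notation of the proof.

def reduce_once(f, g, pts):
    """Given x = sum f[i]*pts[i] with f[i] >= 0 and sum f = 1, and an
    affine dependence sum g[i]*pts[i] = 0 with sum g = 0, return new
    non-negative weights k representing x with k[i0] = 0."""
    S = [i for i, gi in enumerate(g) if gi > 0]
    i0 = min(S, key=lambda i: f[i] / g[i])      # index minimizing f_i/g_i
    ratio = f[i0] / g[i0]
    return [fi - gi * ratio for fi, gi in zip(f, g)]

# The center of the unit square as the average of its four vertices.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
f = [0.25, 0.25, 0.25, 0.25]
g = [1.0, -1.0, 1.0, -1.0]   # p0 - p1 + p2 - p3 = 0, coefficients sum to 0
k = reduce_once(f, g, pts)
x = tuple(sum(k[i] * pts[i][c] for i in range(4)) for c in range(2))
```

Here the step recovers the center as the midpoint of a diagonal, the two-point representation mentioned in the square counterexample above.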

Lemma 1.7
#

Given an affine basis $$b$$ of $$E$$, the interior of the convex hull of $$b$$ is the set of points with strictly positive barycentric coordinates.

Proof

For each $$i$$, let:

$w_i \! :E \to ℝ$

be the $$i^{\rm th}$$ barycentric coordinate with respect to the basis $$b$$. Since $$E$$ is finite-dimensional, each $$w_i$$ is continuous, and it is an open map since it is a surjective affine map onto $$ℝ$$. For such a map, the operation of taking interior commutes with preimage, and so we have:

\begin{align*} \operatorname{IntConv}(b) & = \operatorname{Int}\left(\bigcap _i w_i^{-1}([0, \infty ))\right)\\ & = \bigcap _i \operatorname{Int}\left(w_i^{-1}([0, \infty ))\right)\\ & = \bigcap _i w_i^{-1}(\operatorname{Int}([0, \infty )))\\ & = \bigcap _i w_i^{-1}((0, \infty )) \end{align*}

as required.

Lemma 1.8
#

Given a point $$c$$ of $$E$$ and a real number $$t$$, let:

$h^c_t \! :E \to E$

be the homothety which dilates about $$c$$ by a scale of $$t$$.

If $$c$$ belongs to the interior of a convex subset $$C$$ of $$E$$ and $$t {\gt} 1$$, then

$C \subseteq \operatorname{Int}(h^c_t(C))$

Proof

Since $$h^c_t$$ is a homeomorphism with inverse $$h^c_{t^{-1}}$$, taking $$s = t^{-1}$$, the required result is equivalent to showing:

$h^c_s(C) \subseteq \operatorname{Int}(C)$

where $$s \in (0, 1)$$.

Let $$x$$ be a point of $$C$$; we must show there exists an open neighborhood $$U$$ of $$h^c_s(x)$$ contained in $$C$$. In fact we claim:

$U = h^x_{1-s}(\operatorname{Int}(C))$

is such a set. Indeed, $$U$$ is open since $$h^x_{1-s}$$ is a homeomorphism, and $$U$$ contains $$h^c_s(x)$$ because:

$h^c_s(x) = h^x_{1-s}(c) \in h^x_{1-s}(\operatorname{Int}(C))$

since $$c$$ belongs to $$\operatorname{Int}(C)$$. Finally:

$h^x_{1-s}(\operatorname{Int}(C)) \subseteq h^x_{1-s}(C) \subseteq C$

where the second inclusion follows since $$C$$ is convex and contains $$x$$.
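The identity $$h^c_s(x) = h^x_{1-s}(c)$$ used above is a one-line computation: both sides equal $$c + s(x - c)$$. A quick numerical check, with arbitrary hypothetical data:

```python
# Check of the homothety identity h^c_s(x) = h^x_{1-s}(c) from the proof
# of Lemma 1.8, for points in the plane.

def homothety(c, t, y):
    """Dilation about c by scale t: y -> c + t*(y - c)."""
    return tuple(ci + t * (yi - ci) for ci, yi in zip(c, y))

c, x, s = (0.2, -0.3), (1.5, 2.0), 0.4   # arbitrary sample data
lhs = homothety(c, s, x)                 # h^c_s(x)
rhs = homothety(x, 1 - s, c)             # h^x_{1-s}(c)
```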

We are now ready to come back to Proposition 1.5.

Proof of Proposition 1.5

It follows from Lemma 1.7 that we need only show that $$E$$ has an affine basis $$b$$ of points belonging to $$P$$ such that $$x$$ lies in the interior of the convex hull of $$b$$.

Carathéodory’s lemma 1.6 provides affinely independent points $$p_0, \dots , p_k$$ in $$P$$ such that $$x$$ belongs to their convex hull. Since $$P$$ is open, we may extend $$p_0, \dots , p_k$$ to an affine basis

$\hat b = \{ p_0, \ldots , p_d\} ,$

where all points still belong to $$P$$. Note that $$x$$ belongs to the convex hull of $$\hat b$$.

Now let $$c$$ be a point in the interior of the convex hull of $$\hat b$$ (e.g., the centroid) and for each $$\epsilon {\gt} 0$$, consider the homothety

$h_{1+\epsilon } \! :E \to E,$

which dilates about $$c$$ by a scale of $$1 + \epsilon$$.

Since $$\hat b$$ is finite and contained in $$P$$, and $$P$$ is open, there exists $$\epsilon {\gt} 0$$ such that

$h_{1+\epsilon } (\hat b) \subseteq P.$

We claim the required basis is:

$b = h_{1+\epsilon } (\hat b)$

for any such $$\epsilon$$. Indeed, applying Lemma 1.8 to $$\operatorname{Conv}(\hat b)$$ we see:

\begin{align*} x \in \operatorname{Conv}(\hat b) & \subseteq \operatorname{Int}(h_{1+\epsilon } (\operatorname{Conv}(\hat b)))\\ & = \operatorname{Int}(\operatorname{Conv}(h_{1+\epsilon } (\hat b))) \end{align*}

as required.
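The dilation trick can be replayed numerically in the plane. In the sketch below, the specific triangle, point, and $$\epsilon = 1/2$$ are hypothetical choices: a point on the boundary of the convex hull of an affine basis acquires strictly positive barycentric coordinates once the basis is dilated about the centroid.

```python
# Sketch of the proof of Proposition 1.5: dilating an affine basis about
# its centroid pushes a boundary point into the interior of the hull.

def bary2(y, q):
    """Barycentric coordinates of y w.r.t. a triangle q, via Cramer's rule."""
    (ax, ay), (bx, by), (cx, cy) = q
    d = (bx - ax) * (cy - ay) - (cx - ax) * (by - ay)
    w1 = ((y[0] - ax) * (cy - ay) - (cx - ax) * (y[1] - ay)) / d
    w2 = ((bx - ax) * (y[1] - ay) - (y[0] - ax) * (by - ay)) / d
    return (1 - w1 - w2, w1, w2)

def dilate(c, t, p):
    """Homothety about c by scale t."""
    return tuple(ci + t * (pi - ci) for ci, pi in zip(c, p))

bhat = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
x = (0.5, 0.0)                          # on the boundary of Conv(bhat)
c = (1 / 3, 1 / 3)                      # centroid of bhat
b = [dilate(c, 1.5, p) for p in bhat]   # dilated basis, epsilon = 1/2

w_before = bary2(x, bhat)               # one coordinate vanishes
w_after = bary2(x, b)                   # all coordinates strictly positive
```

By Lemma 1.7, strictly positive coordinates with respect to the dilated basis mean exactly that $$x$$ is surrounded by it.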

# 1.3 Constructing loops

## 1.3.1 Surrounding families

It will be convenient to introduce some more vocabulary.

Definition 1.9
#

We say a loop $$γ$$ surrounds a vector $$v$$ if $$v$$ is surrounded by a collection of points belonging to the image of $$γ$$. Also, we fix a base point $$0$$ in $$𝕊^1$$ and say a loop is based at some point $$b$$ if $$0$$ is sent to $$b$$.

The first main task in proving Proposition 1.2 is to construct suitable families of loops $$γ_x$$ surrounding $$g(x)$$, by assembling local families of loops. Those will then be reparametrized to get the correct average in the next section. In this section, we will work only with continuous loops. This will make constructions easier and we will smooth those loops in the end, taking advantage of the fact that $$Ω$$ and the surrounding condition are open.

Thanks to Carathéodory’s lemma, constructing one such loop with values in some open set $$O$$ is easy as soon as $$v$$ belongs to the convex hull of $$O$$.

Lemma 1.10
#

If a vector $$v$$ is in the convex hull of a connected open subset $$O$$ then, for every base point $$b ∈ O$$, there is a continuous family of loops $$γ \! :[0, 1] × 𝕊^1 → E, (t, s) ↦ γ^t(s)$$ such that, for all $$t$$ and $$s$$:

• $$γ^t$$ is based at $$b$$

• $$γ^0(s) = b$$

• $$γ^t(s) ∈ O$$

• $$γ^1$$ surrounds $$v$$

Proof

Since $$O$$ is open, Proposition 1.5 gives points $$p_i$$ in $$O$$ surrounding $$v$$. Since $$O$$ is open and connected, it is path-connected. Let $$λ \! :[0, 1] → O$$ be a continuous path starting at $$b$$ and going through the points $$p_i$$. We concatenate $$λ$$ and its reverse to get $$γ^1$$. This is a round-trip loop: it backtracks when it reaches $$λ(1)$$ at $$s = 1/2$$. We then define $$γ^t$$ as the round trip that stops at $$s = t/2$$, stays still until $$s = 1-t/2$$ and then backtracks.
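The round-trip construction can be sketched numerically as follows. The piecewise-linear path and the points are hypothetical choices made for the example; the genuine construction uses a continuous path in $$O$$.

```python
# Sketch of the round-trip family from the proof of Lemma 1.10:
# lam is a path with lam(0) = b going through p1 and p2.

def lerp(p, q, u):
    """Affine interpolation between points p and q."""
    return tuple(pi + u * (qi - pi) for pi, qi in zip(p, q))

b, p1, p2 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)

def lam(u):
    """Piecewise-linear path: lam(0) = b, lam(1/2) = p1, lam(1) = p2."""
    if u <= 0.5:
        return lerp(b, p1, 2 * u)
    return lerp(p1, p2, 2 * u - 1)

def gamma(t, s):
    """Round trip along lam that stops at s = t/2, stays still until
    s = 1 - t/2, and then backtracks."""
    if s <= t / 2:
        return lam(2 * s)
    if s >= 1 - t / 2:
        return lam(2 * (1 - s))
    return lam(t)

# gamma(0, .) is the constant loop at b, gamma(t, 0) = b for every t,
# and gamma(1, .) passes through p1 and p2.
```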

Definition 1.11
#

A continuous family of loops $$γ \! :E × [0, 1] × 𝕊^1 → F, (x, t, s) ↦ γ^t_x(s)$$ surrounds a map $$g \! :E → F$$ with base $$β \! :E → F$$ on $$U ⊂ E$$ in $$Ω ⊂ E × F$$ if, for every $$x$$ in $$U$$, every $$t ∈ [0, 1]$$ and every $$s ∈ 𝕊^1$$,

• $$γ^t_x$$ is based at $$β(x)$$

• $$γ^0_x(s) = β(x)$$

• $$γ^1_x$$ surrounds $$g(x)$$

• $$(x,γ^t_x(s)) ∈ Ω$$.

The space of such families will be denoted by $$\operatorname{\mathcal{L}}(g, β, U, Ω)$$.

Families of surrounding loops are easy to construct locally.

Lemma 1.12
#

Assume $$Ω$$ is open over some neighborhood of $$x_0$$. If $$g(x_0)$$ is in the convex hull of the connected component of $$Ω_{x_0}$$ containing $$β(x_0)$$, then there is a continuous family of loops defined near $$x_0$$, based at $$β$$, taking values in $$Ω$$ and surrounding $$g$$.

Proof

In this proof we don’t mention the $$t$$ parameter since it plays no role, but it is still there. Lemma 1.10 gives a loop $$γ$$ based at $$β(x_0)$$, taking values in $$Ω_{x_0}$$ and surrounding $$g(x_0)$$. We set $$γ_x(s) = β(x) + (γ(s) - β(x_0))$$. Each $$γ_x$$ takes values in $$Ω_x$$ for $$x$$ close enough to $$x_0$$ because $$Ω$$ is open over some neighborhood of $$x_0$$. Lemma 1.4 guarantees that these loops surround $$g(x)$$ for $$x$$ close enough to $$x_0$$.

The difficulty in constructing global families of surrounding loops is that there are plenty of surrounding loops and we need to choose them consistently. The key feature of the above definition is that the $$t$$ parameter not only allows us to cut out the corrugation process in the next chapter, but also brings a “satisfied or refund” guarantee, as explained in the next lemma.

Lemma 1.13
#

For every set $$U ⊂ E$$, $$\operatorname{\mathcal{L}}(g, β, U, Ω)$$ is “path connected”: for every $$γ_0$$ and $$γ_1$$ in $$\operatorname{\mathcal{L}}(g, β, U, Ω)$$, there is a continuous map $$δ \! :[0, 1] × E × [0, 1] × 𝕊^1 → F, (τ, x, t, s) ↦ δ^t_{τ, x}(s)$$ such that $$δ_0 = γ_0$$, $$δ_1 = γ_1$$, and each $$δ_τ$$ belongs to $$\operatorname{\mathcal{L}}(g, β, U, Ω)$$.

The construction below morally proves that each $$\operatorname{\mathcal{L}}(g, β, U, Ω)$$ is contractible, but we will not even specify a topology on those spaces. The definition of “path connected” in quotation marks is the above specific statement, and only this statement will be used.

Proof

Let $$ρ$$ be the piecewise affine map from $$ℝ$$ to $$ℝ$$ such that $$ρ(τ) = 1$$ if $$τ ≤ 1/2$$, $$ρ$$ is affine on $$[1/2, 1]$$, $$ρ(τ) = 0$$ if $$τ ≥ 1$$. We set

$δ_{τ, x}^t(s) = \begin{cases} γ_{0,x}^{ρ(τ)t}\left(\frac1{1 - τ} s\right) & \text{if } s ≤ 1 - τ \text{ and } τ {\lt} 1\\ γ_{1,x}^{ρ(1-τ)t}\left(\frac1τ \big(s - (1- τ)\big)\right) & \text{if } s ≥ 1 - τ \text{ and } τ {\gt} 0. \end{cases}$

It is clear that if $$s = 1 - τ$$ then both branches agree and are equal to $$β(x)$$. Hence $$δ$$ is continuous at every $$(τ, x, t, s)$$, except possibly when $$(τ,s)=(1,0)$$ or $$(τ,s)=(0,1)$$.

To prove continuity at $$(τ,s)=(1,0)$$, let $$K$$ be a compact neighborhood of $$x$$ in $$E$$. Then $$γ_0$$ is uniformly continuous on the compact set $$K × [0, 1] × 𝕊^1$$, so $$γ_{0,x'}^t$$ tends uniformly to the constant function $$s ↦ β(x)$$ as $$(x', t)$$ tends to $$(x, 0)$$. Hence $$γ_{0,x'}^{ρ(τ)t'}$$ tends uniformly to this constant function as $$(τ, x', t')$$ tends to $$(1, x, t)$$, and the other branch clearly also tends to $$β(x)$$, so $$δ$$ is continuous at $$(τ,s)=(1,0)$$. The continuity at $$(τ,s)=(0,1)$$ is entirely analogous.

The key observation motivating the above formula is that each $$δ_{τ, x}^1$$ surrounds $$g(x)$$: the image of $$δ_{τ, x}^1$$ contains the image of $$γ_{0,x}^1$$ when $$τ ≤ 1/2$$, and contains the image of $$γ_{1,x}^1$$ when $$τ ≥ 1/2$$. Hence $$δ_{τ, x}^1$$ always surrounds $$g(x)$$.
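The concatenation formula can be probed numerically. In the sketch below, the two families are hypothetical one-dimensional loops, constant in $$x$$; we check that the two branches agree at $$s = 1 - τ$$ and that, for $$τ ≤ 1/2$$, the image of $$δ_τ^1$$ still contains the image of $$γ_0^1$$.

```python
# Sketch of the concatenation formula from the proof of Lemma 1.13,
# for two scalar loop families based at 0 (constant in x).
import math

def rho(tau):
    """Piecewise-affine cutoff: 1 on (-inf, 1/2], affine on [1/2, 1], 0 after."""
    return min(1.0, max(0.0, 2.0 - 2.0 * tau))

def g0(t, s):
    """First family: gamma_0^t(s) = t*sin(2*pi*s)."""
    return t * math.sin(2 * math.pi * s)

def g1(t, s):
    """Second family: gamma_1^t(s) = 2t*sin(2*pi*s)."""
    return 2 * t * math.sin(2 * math.pi * s)

def delta(tau, t, s):
    """The concatenation delta_{tau}^t(s) of the proof."""
    if tau < 1 and s <= 1 - tau:
        return g0(rho(tau) * t, s / (1 - tau))
    return g1(rho(1 - tau) * t, (s - (1 - tau)) / tau)

# For tau = 1/4 <= 1/2 the first branch traverses all of gamma_0^1, so the
# maximum of delta(1/4, 1, .) equals the maximum of gamma_0^1, namely 1.
m = max(delta(0.25, 1.0, k / 1000) for k in range(1001))
```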

Corollary 1.14
#

Let $$U_0$$ and $$U_1$$ be open sets in $$E$$. Let $$K_0 ⊂ U_0$$ and $$K_1 ⊂ U_1$$ be compact subsets. For any $$γ_0 ∈ \operatorname{\mathcal{L}}(g, β, U_0, Ω)$$ and $$γ_1 ∈ \operatorname{\mathcal{L}}(g, β, U_1, Ω)$$, there exists a neighborhood $$U$$ of $$K_0 ∪ K_1$$ and there exists $$γ ∈ \operatorname{\mathcal{L}}(g, β, U, Ω)$$ which coincides with $$γ_0$$ near $$K_0\cup U_1^c$$.

Proof

Let $$C_0 = K_0\cup U_1^c$$ and $$C_1 = K_1 ∖ U_0$$. Since $$C_0$$ and $$C_1$$ are disjoint closed sets, there is some continuous cut-off $$ρ \! :E → [0, 1]$$ which vanishes on a neighborhood $$U_0'$$ of $$C_0$$ and equals one on a neighborhood $$U_1'$$ of $$C_1$$.

Lemma 1.13 gives a homotopy of loops $$γ_τ$$ from $$γ_0$$ to $$γ_1$$ on $$U_0 ∩ U_1$$; note that each $$γ_τ$$ is defined on all of $$E$$. On $$U_0' ∪ (U_0 ∩ U_1) ∪ U_1'$$, which is a neighborhood of $$K_0 ∪ K_1$$, we set

$γ_x = γ_{ρ(x), x}$

which has the required properties.

Lemma 1.15
#

In the setup of Proposition 1.2, assume we have a continuous family $$γ$$ of loops defined near $$K$$ which is based at $$β$$, surrounds $$g$$ and such that each $$γ_x^t$$ takes values in $$Ω_x$$. Then there is such a family which is defined on all of $$E$$ and agrees with $$γ$$ near $$K$$.

Proof

Lemma 1.12 proves the existence of local families of surrounding loops, and Corollary 1.14 allows us to patch two such families; hence Lemma B.9 proves global existence.

## 1.3.2 The reparametrization lemma

The second ingredient needed to prove Proposition 1.2 is a parametric reparametrization lemma. Gromov’s original proof of this lemma makes explicit use of a partition of unity. Motivated in particular by formalization purposes, we will first state more abstract versions whose statements do not involve any partition of unity but directly state a local-to-global property.

Lemma 1.16
#

Let $$E$$ and $$F$$ be real normed vector spaces. Assume that $$E$$ is finite-dimensional. Let $$P$$ be a predicate on $$E \times F$$ such that for all $$x$$ in $$E$$, $$\{ y ~ |~ P (x, y) \}$$ is convex. Let $$n$$ be a natural number or $$+\infty$$. Assume that every $$x$$ has a neighborhood $$U$$ on which there exists a $$C^n$$ function $$f$$ such that $$\forall x ∈ U, P(x, f(x))$$. Then there is a global $$C^n$$ function $$f$$ such that $$\forall x, P(x, f(x))$$.

Proof

The assumption gives us an open cover $$(U_i)_{i ∈ I}$$ of $$E$$ and functions $$f_i \! :E → F$$ that are $$C^n$$ on $$U_i$$ and such that $$P(x, f_i(x))$$ for all $$x$$ in $$U_i$$. Let $$ρ$$ be a smooth partition of unity subordinate to this cover. The function $$f = ∑ ρ_i f_i$$ is $$C^n$$ on $$E$$ and the convexity assumption on $$P$$ ensures it satisfies $$\forall x, P(x, f(x))$$. Indeed each value $$f(x)$$ is a convex combination of finitely many values $$f_i(x)$$ where each such $$i$$ satisfies $$x ∈ U_i$$.
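The convex-combination argument can be illustrated in dimension one. In the sketch below, the continuous (rather than smooth) partition of unity, the convex predicate $$P(x, y) ⇔ |y - \sin x| {\lt} 1$$, and the constant local solutions are all our choices; the point is that gluing local solutions by a partition of unity keeps the predicate, since each slice is convex.

```python
# Sketch of the gluing argument in Lemma 1.16, in dimension one.
# Convex predicate: P(x, y) iff |y - sin(x)| < 1; each slice is an
# open interval, hence convex.
import math

def rho(i, x):
    """Tent function centered at the integer i; the family (rho(i, .))
    is a partition of unity subordinate to the cover by (i-1, i+1)."""
    return max(0.0, 1.0 - abs(x - i))

def f(x):
    """Glued solution: convex combination of the local constant solutions
    f_i = sin(i), each of which satisfies P on U_i = (i-1, i+1)."""
    i0 = math.floor(x)
    return rho(i0, x) * math.sin(i0) + rho(i0 + 1, x) * math.sin(i0 + 1)

# Since each slice is convex, the glued f satisfies P everywhere.
ok = all(abs(f(0.01 * k) - math.sin(0.01 * k)) < 1.0
         for k in range(-500, 500))
```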

We will also need a version where $$F$$ is a space of smooth functions. Since there is no relevant norm to put on such a space, we cannot deduce this version from the above one.

Lemma 1.17
#

Let $$E₁$$, $$E₂$$ and $$F$$ be real vector spaces. Assume $$E₁$$ and $$E₂$$ are finite-dimensional. Let $$n$$ be a natural number or $$+\infty$$. Let $$P$$ be a property of pairs $$(x, f)$$ with $$x ∈ E₁$$ and $$f : E₂ → F$$. Assume that, for every $$x$$, the space of functions $$f$$ such that $$P(x, f)$$ holds is convex. Assume that for every $$x₀$$ in $$E₁$$ there is a neighborhood $$U$$ of $$x₀$$ and a function $$φ : E₁ × E₂ → F$$ which is $$C^n$$ on $$U × E₂$$ and such that $$P(x, φ(x, \cdot ))$$ holds for every $$x$$ in $$U$$. Then there is a global $$C^n$$ function $$φ \! :E₁ × E₂ → F$$ such that $$P(x, φ(x, \cdot ))$$ holds for every $$x$$.

Proof

This is completely analogous to the previous proof.

Lemma 1.18
#

Let $$γ \! :E × 𝕊^1 → F$$ be a smooth family of loops surrounding a map $$g$$. There is a smooth family $$φ \! :E × 𝕊^1 → 𝕊^1$$ such that each $$γ_x ∘ φ_x$$ has average $$g(x)$$ and $$φ_x(0) = 0$$.

Proof

Gromov’s main idea in order to prove this result is to translate the problem of constructing a family of circle maps $$φ$$ into the problem of constructing a family of smooth density functions $$f$$ on the circle. We introduce some vocabulary in order to describe this reduction. Let $$f \! :E × ℝ → ℝ$$ be a smooth positive function that is $$1$$-periodic in its second argument. We say that $$f$$ is a centering density for $$(γ, g)$$ at $$x$$ if $$f_x \! :ℝ → ℝ$$ has average value one when seen as a function on $$𝕊¹$$ and the average value of $$f_x γ_x$$ is $$g(x)$$. We claim that, in order to prove the lemma, it is sufficient to build such an $$f$$ which is centering at every $$x$$. Indeed, assume we have such an $$f$$. We then get a smooth family of $$ℤ$$-equivariant functions $$ψ \! :E × ℝ → ℝ$$ defined by $$ψ_x(t) = \int _0^tf_x(s)ds$$. Because $$ψ$$ is smooth and each $$ψ_x$$ is strictly monotone and $$ℤ$$-equivariant, one can check there is a smooth map $$φ : E × ℝ → ℝ$$ which is $$ℤ$$-equivariant and such that $$φ_x ∘ ψ_x = \operatorname{Id}$$ for each $$x$$. Seen as a family of functions from $$𝕊¹$$ to $$𝕊¹$$, those functions are suitable since, for every $$x$$, the change of variable formula gives:

$\int _{𝕊¹} γ_x ∘ φ_x(s)ds = \int _{𝕊¹} ψ_x'(s) γ_x ∘ φ_x(ψ_x(s))ds = \int _{𝕊¹} f_x(s) γ_x(s)ds = g(x).$
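This change-of-variable identity is easy to test numerically. The sketch below uses a hypothetical scalar loop $$γ(s) = \cos(2πs)$$ and density $$f(s) = 1 + \tfrac12\cos(2πs)$$ (positive, average one), computes $$φ = ψ^{-1}$$ by bisection, and compares the average of $$γ ∘ φ$$ with the average of $$fγ$$.

```python
# Numerical check of the identity: average of gamma(phi(s)) equals the
# average of f(s)*gamma(s), where phi is the inverse of psi(t) = int_0^t f.
import math

def gamma(s):
    return math.cos(2 * math.pi * s)

def f(s):
    return 1.0 + 0.5 * math.cos(2 * math.pi * s)

def psi(t):
    """psi(t) = integral of f over [0, t]; strictly increasing."""
    return t + 0.5 * math.sin(2 * math.pi * t) / (2 * math.pi)

def phi(s):
    """Inverse of psi on [0, 1], computed by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if psi(mid) < s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Midpoint-rule averages over one period.
n = 2000
avg_reparam = sum(gamma(phi((k + 0.5) / n)) for k in range(n)) / n
avg_density = sum(f((k + 0.5) / n) * gamma((k + 0.5) / n) for k in range(n)) / n
```

Both averages agree: the reparametrized loop $$γ ∘ φ$$ has the average prescribed by the density $$f$$, even though $$γ$$ itself has average zero.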

We now prove the existence of a function which is a centering density at every point $$x$$. For any given $$x$$, this constraint is clearly convex. Hence Lemma 1.17 ensures it is enough to prove the existence of functions that are centering densities in a neighborhood of any given point. So we fix some $$x$$ in $$E$$.

Since $$γ_x$$ surrounds $$g(x)$$, there are points $$s_1, …, s_{n+1}$$ in $$𝕊^1$$, where $$n$$ is the dimension of $$F$$, such that $$g(x)$$ is surrounded by the corresponding points $$γ_x(s_j)$$.

Let $$f_1, …, f_{n+1}$$ be smooth positive periodic functions from $$ℝ$$ to $$ℝ$$ with average value $$1$$ over a period and such that the corresponding measures on $$𝕊¹$$ are very close to the Dirac measures at the $$s_j$$, i.e. for any function $$h$$, the average value of $$f_jh$$ is almost $$h(s_j)$$. For $$x'$$ near $$x$$, we set $$p_j(x') = \int f_jγ_{x'}\, ds$$. Then $$p_j(x)$$ is almost $$γ_x(s_j)$$, so Lemma 1.4 ensures that the points $$p_j(x)$$ still surround $$g(x)$$: $$g(x) = \sum w_j p_j(x)$$ for some weights $$w_j$$ in the open interval $$(0, 1)$$.

If $$x'$$ is in a sufficiently small neighborhood of $$x$$, Lemma 1.4 gives smooth weight functions $$x' ↦ w_j(x')$$ such that $$g(x') = \sum w_j(x')p_j(x')$$. Hence we can set $$f_{x'}(s) = \sum _j w_j(x')f_j(s)$$: each $$f_{x'}$$ is positive, has average value one since $$\sum _j w_j(x') = 1$$, and satisfies $$\int f_{x'}γ_{x'}\, ds = \sum _j w_j(x')p_j(x') = g(x')$$, as required.

## 1.3.3 Proof of the loop construction proposition

We finally assemble the ingredients from the previous two sections.

Proof of Proposition 1.2

Let $$γ^*$$ be a family of loops surrounding the origin in the open unit ball $$B_F(0,1)$$ of $$F$$, constructed using Lemma 1.12. For $$x$$ in some neighborhood $$U^*$$ of $$K$$ where $$g = β$$, we set $$γ_x = g(x) + εγ^*$$, where $$ε {\gt} 0$$ is sufficiently small to ensure that $$B_{E\times F}((x,β(x)),2ε)\subseteq Ω$$ (recall that $$Ω$$ is open and $$K$$ is compact). Lemma 1.15 extends this family to a continuous family of surrounding loops $$γ_x$$ for all $$x$$ (this is not yet our final $$γ$$).

We then need to approximate this continuous family by a smooth one. Some care is needed to ensure that it stays based at $$β$$. We first reparametrize $$γ$$ on $$[0,1] \times 𝕊^1$$ so that it is constant in a neighborhood of $$C = \{ (t, s) \in [0,1] \times 𝕊^1 \mid t = 0 \text{ or } s = 0\}$$. Using Lemma 1.16, we can find a smooth function that has distance at most $$ε$$ from $$γ$$ and coincides with $$γ$$ on $$C$$ (using the fact that $$γ$$ is already smooth near $$C$$). Since all loops that are sufficiently close to $$γ$$ still surround $$g$$, we can also ensure that the new smoothed $$γ$$ is still surrounding.

Then Lemma 1.18 gives a family of circle diffeomorphisms $$h_x$$ such that $$γ^1_x ∘ h_x$$ has average $$g(x)$$.

Finally we choose a cut-off function $$χ$$ which vanishes near $$E ∖ U^*$$ and equals one near $$K$$. As our final family of loops, we choose $$χ(x)g(x) + (1-χ(x))(γ_x ∘ h_x)$$. This operation does not change the average values of these loops, because it rescales them around their average value, but it makes them constant near $$K$$. Also, these loops stay in $$Ω$$, thanks to our choice of $$ε$$.