A Simpler Proof that π₁(S¹) is Z

source link: https://homotopytypetheory.org/2012/06/07/a-simpler-proof-that-%CF%80%E2%82%81s%C2%B9-is-z/

Last year, Mike Shulman proved that π₁(S¹) is Z in Homotopy Type Theory. While trying to understand Mike’s proof, I came up with a simplification that shortens the proof a bunch (100 lines of Agda as compared to 380 lines of Coq).

My proof uses essentially the same ideas—e.g., the universal cover comes up in the same way, and the functions witnessing the isomorphism are the same. The main new idea is a simpler way of proving the hard direction of “and the composites are the identity”: that if you start with a path on the circle, encode it as its winding number, and then decode it back as a path, then you get the original path. If you already understand Mike’s proof, the diff is essentially this: you can replace steps 2-5 at the end by

  1. Define an “encoding” function of type
    forall (x : circle), (base ~~> x) -> circle_cover x
    as the transport along circle_cover of 0.
  2. Use J to prove that decoding (Mike’s step 1) after encoding is the identity (this is the hard composite to work with, because it starts and ends with a path). The trick here is to ask the question for paths (base ~~> x) for an arbitrary x, not just for loops base ~~> base, so that you can use J.

This is clearly similar to Mike’s proof (we could even ask if these two proofs are homotopic!): his proof uses contractibility of a path space, which is exactly J. But the amount of machinery about total spaces that you need here is much less.

A question, though: I came up with this proof basically on type-theoretic grounds (“there must be a simpler proof term of this type”). Is there anything to say about it in geometric or higher-categorical terms? In the semantics, is it the same proof as Mike’s? A known but different proof? Or has the type theory suggested a new way of proving this result?

I’ll explain the proof in detail below the fold.

Definitions

First, the definitions. I have defined the circle S¹ so that the elimination form computes definitionally on points (but not paths). This gives the following interface:

    S¹ : Set
    base : S¹
    loop : base ≃ base

    S¹-rec : {C : Set} 
           -> (a : C)
           -> (p : a ≃ a)
           -> S¹ -> C
    S¹-rec a p base = a      -- really an equation
    βloop/rec : {C : Set} 
           -> (a : C)
           -> (p : a ≃ a)
           -> resp (S¹-rec a p) loop ≃ p

    S¹-elim :  {C : S¹ -> Set} 
            -> (a : C base) 
               (p : subst C loop a ≃ a)
            -> (x : S¹) -> C x
    S¹-elim a p base = a
    βloop/elim : {C : S¹ -> Set} 
                 -> (a : C base) (p : subst C loop a ≃ a)
                 -> respd (S¹-elim{C} a p) loop ≃ p

Notation: I use Id M N and M ≃ N interchangeably for the identity type/propositional equality/equivalence/paths (apologies to people who have started using ≃ for weak equivalence/adjoint equivalence/etc.—I had started using it as an infix symbol for general propositional equality). See Paths.agda for definitions if you’re confused. The one thing I should comment on is that I’m using the “logical” terminology for

   subst : {A : Set} (p : A -> Set) {x y : A} 
         -> Id x y -> p x -> p y
   resp : {A C : Set} {M N : A} (f : A -> C) 
        -> Id M N -> Id (f M) (f N)
   respd : {A : Set} {C : A -> Set} {M N : A} 
           (f : (x : A) -> C x) -> (p : Id M N) 
         -> Id (subst C p (f M)) (f N)

so “subst” = “transport” and “resp” = “maponpaths”.

For integers:

  data Int : Set where
    Pos : Nat -> Int
    Zero : Int
    Neg : Nat -> Int

This definition is nice for what follows, but has the somewhat unfortunate consequence that 1 is Pos 0, -1 is Neg 0, etc.
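
For concreteness, here is a sketch of the successor and predecessor functions on this representation that the rest of the post relies on (the post doesn't show them, so the exact case splits below are an assumption); succ≃ in the next section is then univalence applied to the evident isomorphism between them:

  -- sketch (not from the post): successor and predecessor on Int
  succ : Int -> Int
  succ Zero        = Pos Z        -- 0 + 1 = 1
  succ (Pos n)     = Pos (S n)    -- (n+1) + 1
  succ (Neg Z)     = Zero         -- -1 + 1 = 0
  succ (Neg (S n)) = Neg n        -- -(n+2) + 1

  pred : Int -> Int
  pred Zero        = Neg Z        -- 0 - 1 = -1
  pred (Neg n)     = Neg (S n)    -- -(n+1) - 1
  pred (Pos Z)     = Zero         -- 1 - 1 = 0
  pred (Pos (S n)) = Pos n        -- (n+2) - 1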

The Universal Cover

To define the universal cover, we’ll need the successor isomorphism succ≃ : Int ≃ Int, which is defined by applying univalence to the successor function succ : Int -> Int, which maps x to x + 1. The universal cover of the circle maps the circle into the universe, taking the base point to Int and the loop to succ≃:

  C : S¹ -> Set
  C = S¹-rec Int succ≃

To introduce the proof notation, let’s derive a little fact that we’ll need later: transporting in the cover with the loop is the successor function:

  subst-C-loop : subst C loop ≃ succ
  subst-C-loop = 
    subst C loop                  ≃〈 subst-resp C loop 〉
    subst (λ x → x) (resp C loop) ≃〈 resp (subst (λ x → x)) (βloop/rec Int succ≃) 〉 
    subst (λ x → x) succ≃         ≃〈 subst-univ _ 〉 
    succ ∎

The notation here is an Agda trick for writing out an equational deduction in something of a two-column proof, except the justifications come in between the things they prove equal:

    x ≃〈 reason x equals y 〉
    y 

In this case, we:

  1. Reassociate subst C loop into a subst with the identity function and a resp C, so that we can
  2. Use the HIT β-reduction rule for resp C loop (recall that C is defined by an S¹-rec).
  3. Use the β-reduction rule for univalence, which says that subst (λ x → x) on an isomorphism given explicitly by the univalence axiom selects the forward direction of the isomorphism.
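
The lemmas appearing in these justifications come from Paths.agda; their exact statements aren't shown in the post, but from how they are used here and below their types should be roughly the following (reconstructed, so details such as implicit arguments may differ; subst-univ is omitted because its statement depends on how isomorphisms are packaged for univalence):

  -- reconstructed signatures (assumptions based on usage, not copied from Paths.agda)
  subst-resp : {A : Set} (C : A -> Set) {M N : A} (p : Id M N)
             -> subst C p ≃ subst (λ x → x) (resp C p)
  subst-∘ : {A : Set} (C : A -> Set) {M N P : A} (q : Id N P) (p : Id M N)
          -> subst C (q ∘ p) ≃ (λ c → subst C q (subst C p c))
  app≃ : {A : Set} {B : A -> Set} {f g : (x : A) -> B x}
       -> f ≃ g -> {x : A} -> f x ≃ g x
  λ≃ : {A : Set} {B : A -> Set} {f g : (x : A) -> B x}
     -> ((x : A) -> f x ≃ g x) -> f ≃ g
  subst-Id-post : {A : Set} {M N P : A} (p : Id N P) (q : Id M N)
                -> subst (Id M) p q ≃ (p ∘ q)
  subst-→ : {X : Set} (A B : X -> Set) {M N : X} (p : Id M N) (f : A M -> B M)
          -> subst (λ x → A x → B x) p f ≃ (λ y → subst B p (f (subst A (! p) y)))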

Functions back and forth

My goal is to prove that Ω₁(S¹) is homotopy equivalent to Z—this can be improved to an adjoint equivalence and therefore a path in the universe. Because Z is an h-set, this entails, as easy corollaries, that π₁(S¹) (with the quotienting) is equivalent to Z, and that the higher loop spaces (and therefore homotopy groups) are trivial. To prove that π₁(S¹) and Z are isomorphic groups, we should also check that this map is a group homomorphism—composition of paths gets mapped to addition—but I won’t do this here.

So the goal is to define functions

  base ≃ base  ->  Int
  Int  ->  base ≃ base

that compose to the identity.

To start, we can define loopⁿ by induction on n:

  loop^ : Int -> base ≃ base
  loop^ Zero = Refl
  loop^ (Pos Z) = loop
  loop^ (Pos (S n)) = loop ∘ loop^ (Pos n)
  loop^ (Neg Z) = ! loop
  loop^ (Neg (S n)) = ! loop ∘ loop^ (Neg n)

Thinking of the integers as codes (names) for paths, this “decodes” an integer as a path.

The other direction “encodes” a path as its winding number, and can be defined by transporting along the universal cover:

  encode' : base ≃ base -> Int
  encode' p = subst C p Zero

The intuition here is that the action on paths of C takes loop to the successor function, and, because it’s functorial, therefore takes loopⁿ to n successor functions composed together. Applying this composite to Zero therefore computes n, giving us the inverse of loop^.
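
As a tiny worked instance of this intuition (a sketch, not code from the post, using the succ sketched earlier), the winding number of loop ∘ loop comes out to 2:

  -- sketch: encode' (loop ∘ loop)
  subst C (loop ∘ loop) Zero
    ≃ subst C loop (subst C loop Zero)   -- functoriality (subst-∘)
    ≃ succ (succ Zero)                   -- subst-C-loop, twice
    ≃ Pos (S Z)                          -- i.e. 2, under the Pos/Neg convention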

Important Point 1: However, for what’s coming below, it’s useful to observe that encode can be given a more general type:

  encode : {a : S¹} ->  base ≃ a  ->  C a
  encode p = subst C p Zero

For any point a on the circle and path p from base to a, encode p is an element of the cover of a. There is no reason to restrict attention to loops at base. This generality is going to be important below.

The easy composite

It’s easy to see that if you start with a number, decode it as a path, and then re-encode it, you get back where you started: because you’re starting with a number, you can do induction! Here are the cases for 0 and positives (negatives are a symmetric 6 lines of code):

  encode-loop^ : (n : Int) -> encode (loop^ n) ≃ n
  encode-loop^ Zero = Refl
  encode-loop^ (Pos Z) = app≃ subst-C-loop
  encode-loop^ (Pos (S y)) = 
    subst C (loop ∘ loop^ (Pos y)) Zero         ≃〈 app≃ (subst-∘ C loop (loop^ (Pos y)) ) 〉
    subst C loop (subst C (loop^ (Pos y)) Zero) ≃〈 app≃ subst-C-loop 〉
    succ (subst C (loop^ (Pos y)) Zero)         ≃〈 resp succ (encode-loop^ (Pos y)) 〉 
    succ (Pos y) ∎

In the 0 case it’s true by computation. In the 1 case it’s the lemma we proved above. For other positives, you need to apply functoriality, use the lemma above for the first subst C loop, and then use the IH for the rest.
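
For completeness, here is roughly what the omitted negative cases might look like (a sketch mirroring the positive cases, assuming a lemma subst-C-!loop : subst C (! loop) ≃ pred, which the decode proof below also uses):

  -- sketch of the negative cases (not shown in the post)
  encode-loop^ (Neg Z) = app≃ subst-C-!loop
  encode-loop^ (Neg (S y)) = 
    subst C (! loop ∘ loop^ (Neg y)) Zero           ≃〈 app≃ (subst-∘ C (! loop) (loop^ (Neg y))) 〉
    subst C (! loop) (subst C (loop^ (Neg y)) Zero) ≃〈 app≃ subst-C-!loop 〉
    pred (subst C (loop^ (Neg y)) Zero)             ≃〈 resp pred (encode-loop^ (Neg y)) 〉 
    pred (Neg y) ∎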

The hard composite

The other direction is harder, because you start with a loop p:

  stuck : {p : base ≃ base} -> p ≃ loop^ (encode p)
  stuck = ?

and there is no obvious elimination rule for p : base ≃ base (this is where UIP/Streicher’s K would come in, in “normal” dependent type theory, but those are clearly not the right thing here). Indeed, this whole proof can be seen as constructing an induction principle for p : base ≃ base.

The trick is to generalize the problem to consider not just loops, but arbitrary paths base ≃ x, so that we can use J. To set this up, we first show that loop^ extends to a function

  decode : {a : S¹} -> C a -> base ≃ a
  decode {a} = 
    S¹-elim {\ x -> C x -> base ≃ x} 
            loop^ 
            (subst (\ x' →  C x' → base ≃ x') loop loop^                ≃〈 subst-→ C (Id base) loop loop^ 〉
             (\ y -> subst (Id base) loop (loop^ (subst C (! loop) y))) ≃〈 λ≃ (λ y → subst-Id-post loop (loop^ (subst C (! loop) y))) 〉
             (\ y -> loop ∘ (loop^ (subst C (! loop) y)))               ≃〈 λ≃ (λ y → resp (λ x' → loop ∘ loop^ x') (app≃ subst-C-!loop)) 〉
             (\ y -> loop ∘ (loop^ (pred y)))                           ≃〈 λ≃ shift 〉 
             (\ y → loop^ y) 
             ∎)
            a

The proof is an S¹-elim: When a is base, we use loop^. To show that this respects the loop, we apply the functorial action of (\ x' → C x' → base ≃ x'), then reduce subst C (! loop) to the predecessor function (analogous to subst C loop ≃ succ), and then apply shift:

  shift : (n : Int) -> (loop ∘ (loop^ (pred n))) ≃ loop^ n

The proof of shift is a simple induction on n (12 lines of code).
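
Unfolding pred in each case, that induction might look roughly like this (a sketch; the groupoid lemma names !-inv-r, ∘-unit-r, ∘-unit-l, and ∘-assoc are hypothetical stand-ins for whatever Paths.agda provides):

  -- sketch of shift; the groupoid lemma names are hypothetical
  shift Zero        = !-inv-r loop      -- loop ∘ ! loop ≃ Refl
  shift (Pos Z)     = ∘-unit-r loop     -- loop ∘ Refl ≃ loop
  shift (Pos (S n)) = Refl              -- loop ∘ loop^ (Pos n) is loop^ (Pos (S n)) by definition
  shift (Neg n)     = 
    loop ∘ (! loop ∘ loop^ (Neg n)) ≃〈 ∘-assoc loop (! loop) (loop^ (Neg n)) 〉
    (loop ∘ ! loop) ∘ loop^ (Neg n) ≃〈 resp (λ q → q ∘ loop^ (Neg n)) (!-inv-r loop) 〉
    Refl ∘ loop^ (Neg n)            ≃〈 ∘-unit-l (loop^ (Neg n)) 〉
    loop^ (Neg n) ∎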

Important Point 2: At this point, we’re basically done! The reason is that, once we have

  encode : {a : S¹} ->  base ≃ a  ->  C a
  decode : {a : S¹} -> C a -> base ≃ a

we can generalize the theorem to

  decode-encode : ∀ {a} -> (p : base ≃ a) -> decode (encode p) ≃ p

polymorphically in a. So unlike above, where p : base ≃ base, p has a free end-point. Thus, we can use J to contract it to reflexivity:

  decode-encode {a} p = 
    jay1 (λ a' (p' : base ≃ a') → decode (encode p') ≃ p') p Refl

Here jay1 is the Paulin-Mohring J rule: it suffices to consider the case where a' is base and p' is Refl, in which case decode (encode Refl) reduces to Refl, and we’re done.
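
For reference, the type of jay1 should be roughly the following based path induction principle (reconstructed from how it is used here, so the precise argument structure in the code may differ):

  -- reconstructed signature for Paulin-Mohring J (based path induction)
  jay1 : {A : Set} {a : A} (C : (a' : A) -> a ≃ a' -> Set)
       -> {a' : A} (p : a ≃ a')
       -> C a Refl
       -> C a' p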

Analysis

Let me try an explanation for what’s going on here: The way it’s scattered around the code is a little funny, but I think the ingredients of an induction on the length of the path are hiding in decode-encode. decode-encode is tantamount to saying that every loop p : Id base base arises as loop^ n for some n (in particular, encode p). In decode-encode, we check the base case, when p is reflexivity. To define decode, the main lemma is shift, which is essentially saying that if p can be expressed as loop^ (pred n) then loop ∘ p can be expressed as loop^ n. This feels like an inductive step, where we show that the property is closed under going around the loop once. Somehow the general facts about inverses mean you don’t need to consider ! loop explicitly. So: S¹-elim tells you that the only inductive case you need to consider is going around the loop; J tells you that the only base case you need to consider is reflexivity. Cover these two cases, and you cover all paths!

Finally, it’s important to note that Mike’s proof proves slightly more than just that Id base base is equivalent to Int. It also proves that Id base x is equivalent to C x for all x. I’ve done half of that above; the other half would be to show that encode-loop^ generalizes to

  encode-decode : ∀ {a} -> (n : C a) -> encode (decode n) ≃ n

I think you should be able to do this using an S¹-elim, but I haven’t tried.

Overall, I wanted to write up this simplification to see if anyone has thoughts about what’s going on here, and because it might make it easier to formalize other results in algebraic topology in HoTT.

Agda code is here.

