## Stream: maths

#### Johan Commelin (Nov 07 2018 at 14:49):

I feel like it is time we get adjoint functors. We now have map and comap for over-categories. They form an adjoint pair, and this would allow us to prove useful stuff. Has anyone given thought to implementing adjunctions in Lean? Are there any traps that should be avoided?

#### Kenny Lau (Nov 07 2018 at 15:14):

and then refactor galois connection :p
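
For context: a Galois connection is exactly an adjunction between preorder categories, with the hom-set bijection unfolded to an iff. The order-theoretic definition (as in mathlib's `order.galois_connection` at the time) is just:

```lean
-- A Galois connection between preorders is an adjunction between the
-- corresponding (thin) categories: `l a ≤ b` iff `a ≤ u b`.
variables {α β : Type*} [preorder α] [preorder β]

def galois_connection (l : α → β) (u : β → α) : Prop :=
∀ a b, l a ≤ b ↔ a ≤ u b
```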

#### Johan Commelin (Nov 07 2018 at 15:16):

Right. Are you interested in taking a look at the bottom of sheaf.lean on the sheaf branch? It is becoming a big mess. As soon as we want to apply category theory to concrete stuff, things become rather unpleasant...

#### Patrick Massot (Nov 07 2018 at 15:20):

I don't like reading "As soon as we want to apply category theory to concrete stuff, things become rather unpleasant..." :sad:

#### Johan Commelin (Nov 07 2018 at 15:21):

I agree... and it is probably just that I'm not skilled enough in Lean.

#### Johan Commelin (Nov 07 2018 at 15:46):

It feels like category_theory is a monad: You can bind yourself into it. But you really shouldn't try to crawl out of it.

#### Johan Commelin (Nov 07 2018 at 15:46):

I think it is hopeless that we will be able to rewrite along equivalences of categories, right?

#### Johan Commelin (Nov 07 2018 at 15:47):

If X is a topological space, and U is an open subset of X, then over U is canonically equivalent to opens U. Is there any hope at all that I can teach Lean how to use this fact?

#### Johan Commelin (Nov 07 2018 at 15:48):

(Without 100 lines of scaffolding to show that I can transfer everything I want along my canonical equivalence.)

#### Johan Commelin (Nov 07 2018 at 15:59):

In this case it is even an isomorphism of categories. I'm not sure if that helps.

#### Kevin Buzzard (Nov 07 2018 at 16:36):

> I think it is hopeless that we will be able to rewrite along equivalences of categories, right?

rofl how about we rewrite along isomorphisms of groups first!

:up:

#### Reid Barton (Nov 08 2018 at 19:28):

> I feel like it is time we get adjoint functors. We now have map and comap for over-categories. They form an adjoint pair, and this would allow us to prove useful stuff. Has anyone given thought to implementing adjunctions in Lean? Are there any traps that should be avoided?

I have defined adjunctions a couple of times, including at https://github.com/rwbarton/lean-homotopy-theory/blob/lean-3.4.1/src/category_theory/adjunctions.lean (see also https://github.com/rwbarton/lean-homotopy-theory/blob/lean-3.4.1/src/category_theory/preserves_colimits.lean). But I haven't needed to push the theory very far yet, so I'm not sure about traps.

#### Reid Barton (Nov 08 2018 at 19:30):

One thing which is a bit annoying is to state the naturality of the isomorphism Hom(FX, Y) = Hom(X, GY) for an adjunction F : C -> D, G : D -> C.
If C and D have different morphism universes then where is this isomorphism happening...?
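
One conceivable workaround (a sketch only, not what was actually adopted, and using later mathlib notation where the object universe is inferred): lift both hom types into the common universe with `ulift`, so the bijection lives in `Type (max v₁ v₂)`. The structure name here is hypothetical.

```lean
-- Sketch: state the adjunction bijection after lifting both hom types
-- to the common universe `max v₁ v₂` with `ulift`.
structure hom_equiv_core {C : Type u₁} [category.{v₁} C]
  {D : Type u₂} [category.{v₂} D] (F : C ⥤ D) (G : D ⥤ C) :=
(hom_equiv : Π (X : C) (Y : D),
  ulift.{v₂} (F.obj X ⟶ Y) ≃ ulift.{v₁} (X ⟶ G.obj Y))
```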

#### Johan Commelin (Nov 08 2018 at 21:00):

Probably in max v_1 v_2...

#### Johan Commelin (Nov 08 2018 at 21:01):

Maybe we can have an adjunctions branch where you push your stuff and we (Scott, you, me, maybe others) can play around with it till we think something is ready for merging.

#### Johan Commelin (Nov 08 2018 at 21:02):

For example, we could then merge that branch into the sheaf branch, and stress test it on that example.

#### Scott Morrison (Nov 08 2018 at 22:09):

When I've played with adjunctions I've settled on just putting everything in the same universe level.

#### Reid Barton (Nov 08 2018 at 22:42):

I'm not sure. It might be fine, especially if we have a good interface for lifting a category to a bigger universe.

#### Kevin Buzzard (Nov 09 2018 at 07:58):

I know that the philosophy is to be maximally universe polymorphic. But when I wrote schemes originally I spent almost my entire time working in one universe, just to see what happened, and I never ran into any problems (other than Mario telling me I should stop -- I mean I didn't run into any mathematical problems). Reading SGA the other day I see that Grothendieck also was happy with just one universe for a lot of the time. If being maximally universe polymorphic is causing problems then I might venture to suggest that being maximally universe polymorphic might simply not be that good an idea when working with categories.

#### Johan Commelin (Nov 09 2018 at 08:19):

I do think that a lot of our universe annotations could then go away. And the errors related to universe issues are also quite nasty and annoying. I agree with @Reid Barton that we would need a good way to lift a category to a higher universe, and I have no idea how hard this is.
@Scott Morrison What do you think? How much of the universe issues are maths-problems, and how much of it is just users fighting Lean?

#### Scott Morrison (Nov 10 2018 at 00:50):

I don't think we've had any universe problem in a long while in the category_theory development. I think the current setup, where you often have to say XXX.{u v}, so Lean knows which morphism universe level you intend, is mildly annoying. The current setup is a minimal envelope around supporting category.{v v} and category.{v+1 v}, which is all that ever turns up in practice. Any time more than one category is involved, and there is a potential problem with mismatching universe levels, my instinct is to put everything at the same universe level. (i.e., just like Grothendieck, we work in a single universe, called v, except that we also have u, which we think of as either being v or v+1 for small or large categories).
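
The two cases Scott describes later acquired names in mathlib; roughly:

```lean
-- Roughly how mathlib later abbreviated the two cases that turn up in
-- practice: morphisms in the same universe as objects (small), or one
-- universe below (large).
abbreviation small_category (C : Type u) : Type (u+1) := category.{u} C
abbreviation large_category (C : Type (u+1)) : Type (u+1) := category.{u} C
```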

#### Scott Morrison (Nov 10 2018 at 00:50):

If that ever causes problems, then we'll announce we've learnt something, and deal with it, but for now I see no need to deal with mixed-universe level adjunctions, etc.

#### Kevin Buzzard (Nov 10 2018 at 00:56):

I guess Grothendieck fixed one universe u, and then talked about u-categories and u-small categories, which are these two cases.

#### Scott Morrison (Nov 10 2018 at 01:17):

We would suffer quite a bit by specialising to only u-categories and u-small categories, because we'd have to duplicate lots of theorems. Having category.{u v} lets us state theorems in both cases uniformly with not-that-much suffering, and we just remember that all the other values of u are not particularly relevant.

#### Kevin Buzzard (Nov 10 2018 at 08:33):

Yes, I'm now starting to understand the philosophy much better

#### Reid Barton (Nov 12 2018 at 04:00):

Status update on adjunctions: I managed to get as far as proving that right adjoints preserve limits and that any functor C -> D with C small and D cocomplete induces an adjunction (like the geometric realization/Sing adjunction).
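
The first of these statements has roughly the following shape (a signature sketch only; the name and the `⊣` notation follow later mathlib, not necessarily Reid's branch):

```lean
-- Signature sketch: a right adjoint preserves limits, because maps into
-- `G.obj (limit K)` correspond through the adjunction to maps into the
-- limit in D, which are determined by their components.
def right_adjoint_preserves_limits {F : C ⥤ D} {G : D ⥤ C}
  (adj : F ⊣ G) : preserves_limits G :=
sorry
```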

#### Reid Barton (Nov 12 2018 at 04:00):

The code still needs some cleaning up, but I'll try to push it to community tomorrow

#### Johan Commelin (Nov 12 2018 at 06:53):

@Reid Barton That sounds fantastic! I'm looking forward to seeing the code.

Cool!

#### Reid Barton (Nov 12 2018 at 20:30):

I'm still not sure about the best way to deal with natural isomorphisms. Sometimes I want to compose natural isomorphisms together, in which case I want to view the isomorphism and its naturality as a single object, but often I also want to just work objectwise, and check naturality as needed later

#### Reid Barton (Nov 12 2018 at 20:32):

I'm looking at this proof on paper: "$\mathrm{Hom}(\tilde F (y c), d) = \mathrm{Hom}(y c, F^* d) = F^* d(c) = \mathrm{Hom}(F c, d)$ and so there is a natural isomorphism $F c \to \tilde F (y c)$"

#### Reid Barton (Nov 12 2018 at 20:32):

and trying to figure out how to explain it to Lean

#### Johan Commelin (Nov 12 2018 at 20:37):

apply yoneda_lemma; obviously ??? :lol:

#### Reid Barton (Nov 12 2018 at 21:59):

@Scott Morrison I left a bit of a mess in limits.lean regarding is_limit.equiv/is_limit.of_equiv and the colimit versions. I found it's often easier to just work with the equiv type, rather than iso and especially iso between natural transformations. In particular, is_limit.of_equiv is nontrivially (at least in Lean) harder to use than is_colimit.of_equiv--for is_limit.of_equiv you have to produce an inverse as a natural transformation, while the fact that it's natural is actually automatic.

#### Reid Barton (Nov 12 2018 at 21:59):

This should get sorted out somehow--maybe having both equiv and iso versions

#### Scott Morrison (Nov 12 2018 at 22:19):

@Reid Barton, isn't this what nat_iso.of_components is for?

#### Scott Morrison (Nov 12 2018 at 22:20):

You specify an iso in each component, and check naturality in just one direction.
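
For reference, `nat_iso.of_components` has roughly this signature (sketch; `F G : C ⥤ D` assumed): naturality is only demanded for the forward direction, and the inverse components are natural automatically.

```lean
def nat_iso.of_components (app : ∀ X : C, F.obj X ≅ G.obj X)
  (naturality : ∀ {X Y : C} (f : X ⟶ Y),
    F.map f ≫ (app Y).hom = (app X).hom ≫ G.map f) :
  F ≅ G :=
sorry -- naturality of the inverse follows from naturality of the forward map
```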

#### Scott Morrison (Nov 12 2018 at 22:20):

Perhaps there should be a companion that lets you check naturality in the other direction instead.

#### Reid Barton (Nov 12 2018 at 22:20):

oh, I didn't see that

#### Scott Morrison (Nov 12 2018 at 22:20):

Sorry, I really should write some docs. :-(

#### Reid Barton (Nov 12 2018 at 22:20):

but anyways, I shouldn't need to check naturality in either direction

Ah, why is that?

#### Reid Barton (Nov 12 2018 at 22:21):

Well, because... I actually want to use something like is_limit.of_extensions_iso

#### Reid Barton (Nov 12 2018 at 22:21):

but nat_iso.of_components doesn't produce an is_iso

#### Reid Barton (Nov 12 2018 at 22:22):

The point is just that the thing which is supposed to be is_iso is already known to be natural

ah!

#### Scott Morrison (Nov 12 2018 at 22:22):

okay, sorry, I missed that

#### Scott Morrison (Nov 12 2018 at 22:23):

so we need is_iso.of_nat_trans, which takes as input an F ⟹ G, and an is_iso for each component?
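
A sketch of that suggestion (the name here is hypothetical; something along these lines later appeared in mathlib as `nat_iso.is_iso_of_is_iso_app`):

```lean
-- Sketch: componentwise isomorphisms upgrade a natural transformation to an
-- isomorphism of functors; naturality of the inverse is again automatic.
def is_iso_of_components (α : F ⟶ G) [∀ X : C, is_iso (α.app X)] :
  is_iso α :=
sorry
```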

#### Scott Morrison (Nov 12 2018 at 22:25):

Regarding your

> I'm looking at this proof on paper: "$\mathrm{Hom}(\tilde F (y c), d) = \mathrm{Hom}(y c, F^* d) = F^* d(c) = \mathrm{Hom}(F c, d)$ and so there is a natural isomorphism $F c \to \tilde F (y c)$"

It seems we'd want to write:

```lean
apply nat_iso.of_components,
-- giving us two goals: an iso in each component, and naturality of the forward direction
{ intro X,
  apply yoneda.ext,
  <<<calc block goes here, doing the Hom set calculation>>> },
obviously -- to deal with naturality
```


#### Scott Morrison (Nov 12 2018 at 22:26):

I think a calc block should work fine with a string of isos.

#### Scott Morrison (Nov 12 2018 at 22:26):

Because composition of isos is marked with [trans].

#### Reid Barton (Nov 12 2018 at 22:26):

There are actually two naturalities(?) involved: in c and in d

#### Reid Barton (Nov 12 2018 at 22:27):

One of them gets consumed by yoneda.ext, the other to show the resulting transformation is natural in c

oh, of course

#### Scott Morrison (Nov 12 2018 at 22:28):

so after the calc block we'd have another obviously to discharge the naturality goal that yoneda.ext creates?

#### Scott Morrison (Nov 12 2018 at 22:28):

I guess yoneda.ext and nat_iso.of_components could both have obviously as an autoparam...

#### Reid Barton (Nov 12 2018 at 22:30):

Gotta run for a bit :running:

#### Scott Morrison (Nov 12 2018 at 22:30):

but that probably makes no sense; an auto_param in an open argument of an applied function isn't going to help, because it shouldn't run until later...

#### Scott Morrison (Nov 13 2018 at 04:11):

Yeah, I was wishing for globular yesterday as well. I ended up writing a page long rewrite proof, corresponding to a commutative diagram built out of two hexagons and two squares (but interminable rewriting along category.assoc to actually use it), corresponding to a string diagram in which you just had to pull some cups and caps past each other. (This was for composition of monoidal functors.)

#### Reid Barton (Nov 15 2018 at 02:23):

I can't believe this resulted in a statement I could actually prove

```lean
dsimp [canonical_diagram.cocone, canonical_diagram.to_original, canonical_diagram,
  canonical_diagram.colimit_cocone, id_iso_yoneda_extension_yoneda,
  equiv.refl, equiv.symm, equiv.trans, iso.hom_equiv_of_isos,
  equiv.subtype_equiv_subtype, equiv.subtype_equiv_of_subtype, equiv.Pi_congr_right,
  equiv.arrow_congr,
  is_colimit.equiv,
  restricted_yoneda, yoneda_extension, yoneda_extension_obj,
  restricted_yoneda_yoneda_iso_id,
  nat_iso.of_components, iso_of_equiv, yoneda_equiv],
```


#### Kevin Buzzard (Nov 15 2018 at 02:26):

I look at that and I can see why category theory has a reputation in some quarters of just being a bunch of trivialities...

#### Scott Morrison (Nov 15 2018 at 02:41):

Sounds like more rfl lemmas were needed.

#### Reid Barton (Nov 16 2018 at 23:22):

@Scott Morrison did you do a second force push to the adjunctions branch, or am I imagining things?

#### Scott Morrison (Nov 16 2018 at 23:22):

Oh, maybe I did. Sorry, did I mess things up? :-(

#### Reid Barton (Nov 16 2018 at 23:24):

No not really, but give me a heads up if you do a force push in the future, as it's easier to deal with if I know about it earlier

#### Reid Barton (Nov 16 2018 at 23:26):

I managed to prove that the category of colimit-preserving functors Set^C^op -> D is equivalent to the category of functors C -> D

#### Reid Barton (Nov 16 2018 at 23:26):

for D cocomplete of course

#### Reid Barton (Nov 16 2018 at 23:26):

though I'm not particularly happy with the proof yet

#### Reid Barton (Nov 16 2018 at 23:27):

It turns out there are a lot of statements to check there...

#### Reid Barton (Nov 16 2018 at 23:29):

I guess I also proved the "adjoint functor theorem" for such functors, along the way

#### Johan Commelin (Nov 26 2018 at 18:18):

@Reid Barton Shouldn't the left_triangle and right_triangle fields in adjunction get an obviously auto_param? (I can testify that obviously will prove them in the case of comap f and map f between presheaf X and presheaf Y.)

#### Johan Commelin (Nov 26 2018 at 18:25):

This is really slick:

```lean
def adj : adjunction (comap f) (map f) :=
{ unit   := unit f,
  counit := counit f,
  left_triangle  := by tidy,
  right_triangle := by tidy }

instance comap.preserves_colimits : preserves_colimits (comap f) :=
```


#### Reid Barton (Nov 26 2018 at 18:25):

I suppose, though I think that defining adjunctions in terms of the unit and counit was actually the wrong idea in the first place

#### Johan Commelin (Nov 26 2018 at 19:08):

@Reid Barton Does that mean that I shouldn't try to apply this on my sheaf branch?

#### Reid Barton (Nov 26 2018 at 20:13):

Using the adjunctions branch as-is should be fine for now--I don't expect you will need to make major changes later. And more users of the code is good for trying out different designs.

#### Johan Commelin (Jan 17 2019 at 12:40):

I've been experimenting a bit with adjunctions. Here is a little teaser:

```lean
def discrete : Type u ⥤ Top.{u} :=
{ obj := λ X, ⟨X, ⊤⟩,
  map := λ X Y f, ⟨f, continuous_top⟩ }

def trivial : Type u ⥤ Top.{u} :=
{ obj := λ X, ⟨X, ⊥⟩,
  map := λ X Y f, ⟨f, continuous_bot⟩ }

{ hom_equiv := λ X Y,
  { to_fun := λ f, f,
    inv_fun := λ f, ⟨f, continuous_top⟩,
    left_inv := by tidy,
    right_inv := by tidy },
  unit := { app := λ X, id },
  counit := { app := λ X, ⟨id, continuous_top⟩ } }

{ hom_equiv := λ X Y,
  { to_fun := λ f, ⟨f, continuous_bot⟩,
    inv_fun := λ f, f,
    left_inv := by tidy,
    right_inv := by tidy },
  unit := { app := λ X, ⟨id, continuous_bot⟩ },
  counit := { app := λ X, id } }
```


#### Johan Commelin (Jan 17 2019 at 12:40):

Code can be found on the adjunctions-2 branch.

#### Johan Commelin (Jan 17 2019 at 12:42):

• It is nice if we can have adjunctions between functors F : C => D and G : D => C where C and D don't need to live in the same universe.
• We should learn from the metric_space hierarchy, and add redundant data in our definitions, with conditions that they are compatible.

#### Johan Commelin (Jan 17 2019 at 12:44):

So now there are 3 ways to define an adjunction (with the help of obviously :smiley:).
1. Like I did above: you specify hom_equiv, unit and counit.
2. You only give hom_equiv.
3. You only give unit and counit.
Here are two helper structures:

```lean
structure adjunction.core_hom_equiv (F : C ⥤ D) (G : D ⥤ C) :=
(hom_equiv : Π (X Y), (F.obj X ⟶ Y) ≃ (X ⟶ G.obj Y))
(hom_equiv_naturality_left' : Π {X' X Y} (f : X' ⟶ X) (g : F.obj X ⟶ Y),
  (hom_equiv X' Y) (F.map f ≫ g) = f ≫ (hom_equiv X Y) g . obviously)
(hom_equiv_naturality_right' : Π {X Y Y'} (f : F.obj X ⟶ Y) (g : Y ⟶ Y'),
  (hom_equiv X Y') (f ≫ g) = (hom_equiv X Y) f ≫ G.map g . obviously)
```

and

```lean
structure adjunction.core_unit_counit (F : C ⥤ D) (G : D ⥤ C) :=
(unit : functor.id C ⟹ F.comp G)
(counit : G.comp F ⟹ functor.id D)
(left_triangle' : (whisker_right unit F).vcomp (whisker_left F counit) = nat_trans.id _ . obviously)
(right_triangle' : (whisker_left G unit).vcomp (whisker_right counit G) = nat_trans.id _ . obviously)
```


#### Johan Commelin (Jan 17 2019 at 12:45):

We then have

```lean
/--
adjunction F G represents the data of an adjunction between two functors
F : C ⥤ D and G : D ⥤ C. F is the left adjoint and G is the right adjoint.
-/
structure adjunction (F : C ⥤ D) (G : D ⥤ C) extends
  adjunction.core_hom_equiv F G, adjunction.core_unit_counit F G :=
(unit_hom_equiv : Π {X}, unit.app X = (hom_equiv _ _).to_fun (𝟙 (F.obj X)) . obviously)
(counit_hom_equiv : Π {Y}, counit.app Y = (hom_equiv _ _).inv_fun (𝟙 (G.obj Y)) . obviously)
```


#### Johan Commelin (Jan 17 2019 at 12:46):

And finally

```lean
def of_core_hom_equiv (adj : core_hom_equiv F G) : adjunction F G := -- see github for the code
def of_core_unit_counit (adj : core_unit_counit F G) : adjunction F G := -- see github for the code
```


#### Patrick Massot (Jan 17 2019 at 15:31):

It looks very nice, except that it could look much nicer. Did you miss https://github.com/leanprover/lean/commit/95fa4cfb0a8774570d67bb231c1ab088a94e12bb#diff-50f7eff1a2547545a820cbbeee3a0b6eL15 ?

#### Johan Commelin (Jan 17 2019 at 15:32):

Yes, I guess we should quickly make a PR that uses the correct functor symbol

#### Johan Commelin (Jan 17 2019 at 18:23):

There is one drawback that I currently see with my approach. If the categories C and D do live in the same universe, then we can extract an isomorphism between the two Hom-bifunctors from the adjunction. We can also define an adjunction given such a natural isomorphism i. But the extracted isomorphism will not be defeq to i.

#### Johan Commelin (Jan 17 2019 at 18:24):

I do not see how to fix this if we also want to keep the option of adjunctions for different universe levels.

#### Johan Commelin (Jan 22 2019 at 14:40):

Here is a slightly non-trivial example of a Lean checked adjunction:

```lean
section
open mv_polynomial
local attribute [instance, priority 0] subtype.fintype set_fintype classical.prop_decidable

noncomputable def polynomial : Type u ⥤ CommRing.{u} :=
{ obj := λ α, ⟨mv_polynomial α ℤ, by apply_instance⟩,
  map := λ α β f, ⟨eval₂ C (X ∘ f), by apply_instance⟩,
  map_id' := λ α, subtype.ext.mpr $ funext $ eval₂_eta,
  map_comp' := λ α β γ f g, subtype.ext.mpr $ funext $ λ p,
    by apply mv_polynomial.induction_on p; intros;
      simp only [*, eval₂_add, eval₂_mul, eval₂_C, eval₂_X, comp_val,
        eq_self_iff_true, function.comp_app, types_comp] at * }

@[simp] lemma polynomial_obj_α {α : Type u} :
  (polynomial.obj α).α = mv_polynomial α ℤ := rfl

@[simp] lemma polynomial_map_val {α β : Type u} {f : α → β} :
  (polynomial.map f).val = eval₂ C (X ∘ f) := rfl

noncomputable def adj : adjunction polynomial (forget : CommRing ⥤ Type u) :=
{ hom_equiv := λ α R,
  { to_fun := λ f, f ∘ X,
    inv_fun := λ f, ⟨eval₂ int.cast f, by apply_instance⟩,
    left_inv := λ f, subtype.ext.mpr $ funext $ λ p,
    begin
      have H0 := λ n, (congr (int.eq_cast' (f.val ∘ C)) (rfl : n = n)).symm,
      have H1 := λ p₁ p₂, (@is_ring_hom.map_add _ _ _ _ f.val f.2 p₁ p₂).symm,
      have H2 := λ p₁ p₂, (@is_ring_hom.map_mul _ _ _ _ f.val f.2 p₁ p₂).symm,
      apply mv_polynomial.induction_on p; intros;
        simp only [*, eval₂_add, eval₂_mul, eval₂_C, eval₂_X,
          eq_self_iff_true, function.comp_app, hom_coe_app] at *
    end,
    right_inv := by tidy },
  hom_equiv_naturality_left_symm' := λ X' X Y f g, subtype.ext.mpr $ funext $ λ p,
  begin
    apply mv_polynomial.induction_on p; intros;
      simp only [*, eval₂_mul, eval₂_add, eval₂_C, eval₂_X,
        comp_val, equiv.coe_fn_symm_mk, hom_coe_app, polynomial_map_val,
        eq_self_iff_true, function.comp_app, add_right_inj, types_comp] at *
  end }

end
```


#### Johan Commelin (Jan 22 2019 at 14:41):

For some reason this code is slower than I would have hoped. And as you can see there is quite a bit of mv_polynomial.induction_on p, so I think there are some lemmas that could be factored out...

#### Reid Barton (Jan 22 2019 at 14:41):

When I was working on adjunctions I gave up on precisely this because everything was so slow

#### Johan Commelin (Jan 22 2019 at 14:42):

Somehow that doesn't feel like a good sign.

#### Reid Barton (Jan 22 2019 at 14:43):

https://leanprover.zulipchat.com/#narrow/stream/113488-general/topic/my_polynomial.20performance/near/147887874

Aah, I see.

I just pushed.

#### Patrick Massot (Jan 22 2019 at 14:47):

Is this obviously being slow, or mv_polynomial?

#### Johan Commelin (Jan 22 2019 at 14:47):

```
rings.lean:115:18: information
parsing took 41.6ms
rings.lean:115:18: information
rings.lean:115:18: information
type checking of adj took 18.7ms
rings.lean:115:18: information
decl post-processing of adj took 17.5ms
```


#### Johan Commelin (Jan 22 2019 at 14:47):

No, I think it's mv_polynomial

#### Johan Commelin (Jan 22 2019 at 14:48):

Also:

```
rings.lean:100:18: information
parsing took 7.51ms
rings.lean:100:18: information
elaboration of polynomial took 4.11s
rings.lean:100:18: information
type checking of polynomial took 15.8ms
rings.lean:100:18: information
decl post-processing of polynomial took 15.3ms
```


#### Johan Commelin (Jan 22 2019 at 14:48):

And there is no obviously in

```lean
noncomputable def polynomial : Type u ⥤ CommRing.{u} :=
{ obj := λ α, ⟨mv_polynomial α ℤ, by apply_instance⟩,
  map := λ α β f, ⟨eval₂ C (X ∘ f), by apply_instance⟩,
  map_id' := λ α, subtype.ext.mpr $ funext $ eval₂_eta,
  map_comp' := λ α β γ f g, subtype.ext.mpr $ funext $ λ p,
    by apply mv_polynomial.induction_on p; intros;
      simp only [*, eval₂_add, eval₂_mul, eval₂_C, eval₂_X, comp_val,
        eq_self_iff_true, function.comp_app, types_comp] at * }
```


#### Johan Commelin (Jan 22 2019 at 14:49):

@Johannes Hölzl @Kenny Lau @Mario Carneiro Is there anything that can be done here? I guess lots of the stuff that we plan on doing will depend on mv_polynomial. All the number theory that Kevin and Sander are interested in will need it.

#### Patrick Massot (Jan 22 2019 at 14:49):

What happens if you sorry the last proof?

#### Johan Commelin (Jan 22 2019 at 14:50):

Then it drops from 4s to 1s.

#### Kenny Lau (Jan 22 2019 at 14:50):

Stuff involving mv_polynomial and polynomial has been known to be slow.

#### Kenny Lau (Jan 22 2019 at 14:50):

I don't think @Mario Carneiro knows how to fix it.

#### Johan Commelin (Jan 22 2019 at 14:51):

@Patrick Massot It spends 638ms in the second by apply_instance.

#### Johan Commelin (Jan 22 2019 at 14:53):

The last begin-end block in adj takes 10 seconds :scream:

#### Reid Barton (Jan 22 2019 at 14:55):

do you really need to simp at *?

#### Johan Commelin (Jan 22 2019 at 14:56):

Yes, some of the goals need that.

#### Reid Barton (Jan 22 2019 at 14:57):

I mean maybe you can do something more than simp but less than at *

#### Reid Barton (Jan 22 2019 at 14:58):

e.g. explicitly say where to simp

#### Johan Commelin (Jan 22 2019 at 14:59):

Aah, ok. But the context is very small, and I guess simp will fail quickly on atomic hypotheses...

#### Johan Commelin (Jan 22 2019 at 15:00):

Also, the places where I want to simplify are introduced by intros, and what gets intro'd depends on the goal.

#### Johan Commelin (Jan 22 2019 at 15:00):

The induction step generates 3 goals.

#### Johan Commelin (Jan 22 2019 at 15:00):

So if I want to be explicit in the simp-part, I need to tackle the 3 goals separately...
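
The explicit version Reid suggests would look something like the following sketch, using the lemma names already appearing above (whether it actually helps performance here is untested):

```lean
-- Sketch: handle the three goals of `induction_on` separately, so each
-- `simp only` can name its hypotheses instead of using `at *`.
apply mv_polynomial.induction_on p,
{ intro a, simp only [eval₂_C] },
{ intros p q hp hq, simp only [eval₂_add, hp, hq] },
{ intros p n hp, simp only [eval₂_mul, eval₂_X, hp] }
```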

#### Reid Barton (Jan 28 2019 at 12:15):

@Johan Commelin, maybe is_left_adjoint should be a class, and left_adjoint_preserves_colimits an instance?

#### Johan Commelin (Jan 28 2019 at 19:45):

Probably yes... I wasn't sure yet when I wrote that code (and I had just experienced that it isn't always a good idea to make things classes, viz. unique).

#### Johan Commelin (Jan 28 2019 at 19:45):

Feel free to upgrade it whenever you have code that comes close to it.

Last updated: May 06 2021 at 18:20 UTC