Zulip Chat Archive

Stream: mathlib4

Topic: Refactoring Localization


Andrew Yang (May 24 2024 at 08:08):

I plan to refactor docs#OreLocalization to allow localizations of X at S : Submonoid R for some MulAction R X, and replace docs#Localization and docs#LocalizedModule with it. Are there any suggestions or objections? If not, #13151 #13156 #13158 are the first sequence of PRs dealing with the LocalizedModule half of the refactor.

Kevin Buzzard (May 24 2024 at 08:17):

Am I right in thinking that right now if R is a commutative ring, S is a submonoid and M is an R-module then the only way to localise M at S is to tensor with Localization S? And your refactor will change this?

Andrew Yang (May 24 2024 at 08:18):

Right now we have docs#LocalizedModule. But LocalizedModule S R ($R[S^{-1}]$ as a module) and Localization S are currently not the same thing.
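For reference, a minimal sketch (assuming present-day Mathlib names) showing that these really are two distinct types today:

import Mathlib

variable (R : Type*) [CommRing R] (S : Submonoid R)

-- Two different types, even though both describe R localized at S:
#check Localization S      -- the localized ring
#check LocalizedModule S R -- the localization of R viewed as an R-module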

Kevin Buzzard (May 24 2024 at 09:06):

I don't have any objections to a refactor, but I'm interested in why you want to do it.

Andrew Yang (May 24 2024 at 09:17):

By unifying LocalizedModule and Localization, we can get results about submodules for free from what we already have about ideals. Plus we can (hopefully) get the $\mathcal{O}_X$-module $\tilde{M}$ for free with docs#AlgebraicGeometry.Spec.structureSheaf. We will also get localization of modules over non-commutative rings, which would also show "left Ore <-> strong rank condition" for domains (a todo somewhere in mathlib).

Eric Wieser (May 24 2024 at 09:20):

Andrew Yang said:

Right now we have docs#LocalizedModule. But LocalizedModule S R ($R[S^{-1}]$ as a module) and Localization S are currently not the same thing.

I think we can unify these without doing OreLocalization at the same time?

Eric Wieser (May 24 2024 at 09:21):

I did some work just before the port to make their relations defeq I think

Andrew Yang (May 24 2024 at 09:22):

Yeah, but I want to generalize some results on LocalizedModule to the non-commutative setting as well.

Patrick Massot (May 24 2024 at 13:42):

You should probably coordinate with @Hannah Fechtner who is also working on localization things.

Hannah Fechtner (May 24 2024 at 14:58):

thanks, Patrick! Andrew, I'll message you. I have some work that I'm almost ready to push which depends on the current OreLocalization file (namely, if you localize a presented monoid, the resulting group is isomorphic to the presented group on the same generators and relations)

Hannah Fechtner (May 24 2024 at 15:02):

I'm building a small API over OreLocalization for the simplified case in which R = S (aka the whole monoid satisfies the Ore conditions). I'm also a bit hesitant about switching right to left -- I'm working towards a proof of a solution to the word problem on braids, and I've already shown right common multiples through disgustingly long proofs by induction. I'll look at it; it's probably possible to re-do it for left common multiples, but I'd have to work out the math myself instead of following an already-written proof in a book.

Eric Wieser (May 24 2024 at 15:03):

This sounds like an argument for doing this refactor in two steps; merge Localization and LocalizedModule (which won't interfere with Hannah's work), then later merge the result with Ore localizations

Eric Wieser (May 24 2024 at 15:04):

(that probably results in diffs that are easier to review too)

Andrew Yang (May 24 2024 at 15:37):

Are right Ore localizations more common in the literature than left Ore?

Hannah Fechtner (May 24 2024 at 18:24):

I somewhat take that back - it would only be about 100 lines of code for me to switch from right to left, so I'm fine with that :)

I have seen right Ore much more frequently than left Ore, though both appear. I'm no expert, though!

Andrew Yang (May 24 2024 at 18:34):

One argument against doing the refactor in two steps is that it is twice the work. The fact that the proofs in the commutative case do not apply to the non-commutative case doesn't help either. Nonetheless, the first PR #13151 only touches Ore localization (flips left -> right & makes it work for group actions / modules) and it should be useful even if we do not want the refactor of Localization.

Andrew Yang (May 24 2024 at 18:38):

Let me also tag the original author @Jakob von Raumer. I remember there were some discussions on this generalization when Ore localization landed in mathlib, but I couldn't find them now.

Jakob von Raumer (May 27 2024 at 07:41):

Thanks for the ping, I'll try to find the old thread

Jakob von Raumer (May 27 2024 at 07:44):

Can't find any discussion re left vs. right; most of the design choices were discussed in the PR

Andrew Yang (Jun 05 2024 at 18:15):

A ping for someone to review #13151. Now that we have $\mathcal{O}_X$-modules in mathlib, I really want to make structureSheaf work for modules.

Andrew Yang (Jun 06 2024 at 11:15):

The next PR in line is #13559, which just splits up the big file and removes the comparison of OreLocalization with Localization, so that we can import it from MonoidLocalization.lean without new imports in the future.

Andrew Yang (Jun 06 2024 at 11:17):

Side question: Do we actually need the additive version of localization (docs#AddLocalization)? The only example I have is Int being the AddLocalization of Nat.

I can additivize OreLocalization as well but I don't think this is necessary if it will never be used.
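A hedged sketch of that one example (⊤ here is all of ℕ as an AddSubmonoid; the relation to ℤ is only asserted informally):

import Mathlib

-- Additively localizing ℕ at the whole monoid; the result should be ℤ up to isomorphism.
#check AddLocalization (⊤ : AddSubmonoid ℕ)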

Johan Commelin (Jun 06 2024 at 11:24):

I suppose we'll have more instances of AddLocalization in the future. Like $K_0$ of an abelian category.

Andrew Yang (Jun 06 2024 at 11:26):

Oh yeah they are additive. Will additivize OreLocalization then.

Andrew Yang (Jun 20 2024 at 18:47):

#13156 and #13943 are the refactoring PRs, changing the definitions of LocalizedModule and Localization respectively.

Andrew Yang (Jun 20 2024 at 20:20):

The latter PR speeds up slow files like Mathlib.AlgebraicGeometry.GammaSpecAdjunction by 50% but also slows down other slow files like Mathlib.RingTheory.ClassGroup by 50%, with an overall +5% build time. I'm not sure what to make of this result or how to fix the slowdowns.

Matthew Ballard (Jun 20 2024 at 22:16):

Quite a few irreducible_def’s were deleted. Are the new versions still irreducible?

Andrew Yang (Jun 21 2024 at 09:59):

Okay, I restored some of them, and the +5% build time has disappeared, but the slowdowns in those files still persist. I'm afraid I cannot make the remaining ones irreducible, as they might cause non-defeq diamonds.

Matthew Ballard (Jun 21 2024 at 17:01):

We now have unseal to open up the internals of an irreducible declaration, which helps with new diamonds at default/semireducible transparency

Matthew Ballard (Jun 21 2024 at 17:15):

Also, the +5% build lint is noise

Andrew Yang (Jun 21 2024 at 17:44):

I don't think unseal works for irreducible_def?

import Mathlib

irreducible_def x : Nat := 1

unseal x in
lemma foo : x = 1 := rfl -- fails

Matthew Ballard (Jun 21 2024 at 17:47):

I would write

import Mathlib

@[irreducible] def x : Nat := 1

unseal x in
lemma foo : x = 1 := rfl -- fails?

Matthew Ballard (Jun 21 2024 at 22:40):

This seems the problem. We used to have

/-- Exponentiation in a `Localization` is defined as `⟨a, b⟩ ^ n = ⟨a ^ n, b ^ n⟩`.

This is a separate `irreducible` def to ensure the elaborator doesn't waste its time
trying to unify some huge recursive definition with itself, but unfolded one step less.
-/
@[to_additive "Multiplication with a natural in an `AddLocalization` is defined as
`n • ⟨a, b⟩ = ⟨n • a, n • b⟩`.

This is a separate `irreducible` def to ensure the elaborator doesn't waste its time
trying to unify some huge recursive definition with itself, but unfolded one step less."]
protected irreducible_def npow : ℕ → Localization S → Localization S := (r S).commMonoid.npow

Now we have npow := npowRec (well, not explicitly, but from default insertion).

Other default fields also creep in.
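For context, a small sketch (assuming current Mathlib names): npowRec is the bare recursive default that gets inserted when the npow field is omitted, which is exactly what the old irreducible_def above was guarding against.

import Mathlib

-- The default that Lean inserts for an omitted `npow` field:
#check @npowRec  -- roughly: {M : Type*} → [One M] → [Mul M] → ℕ → M → M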

Andrew Yang (Jun 23 2024 at 21:00):

Thanks for the help! However, I (think I) made all the fields (including default ones) irreducible, and the performance is still the same. I've hand-checked some of the slowed files and their speed seemed okay, though. Not sure what to do now.

Andrew Yang (Jun 28 2024 at 02:01):

Mentioning this PR #13943 again as it is prone to rotting. Are there more improvements to be made?

Johan Commelin (Jun 28 2024 at 11:37):

The bench results posted in https://github.com/leanprover-community/mathlib4/pull/13943#issuecomment-2185261213 are a bit discouraging. E.g. a file like ClassGroup seems to have > 50% more instructions.

Johan Commelin (Jun 28 2024 at 11:38):

I'm not sure if this should be a showstopper, but I would like to hear from some benchmark experts (@Matthew Ballard?) whether this is ok

Damiano Testa (Jun 28 2024 at 12:07):

Regarding the !bench output, I converted Michael's "significant" script, and below is what it would output for the !bench run on the PR. Would it be good to automatically update a comment with this kind of summary for every issued !bench command?


File | Instructions | %

Increase
build +222.571⬝10⁹ (+0.18%)
Mathlib.RingTheory.IntegralRestrict +92.618⬝10⁹ (+57.69%)
Mathlib.RingTheory.DedekindDomain.Different +65.185⬝10⁹ (+45.48%)
Mathlib.RingTheory.DedekindDomain.Ideal +44.272⬝10⁹ (+34.26%)
Mathlib.RingTheory.ClassGroup +33.536⬝10⁹ (+50.14%)
Mathlib.Analysis.Fourier.FourierTransformDeriv +32.288⬝10⁹ (+8.27%)
Mathlib.LinearAlgebra.Dual +25.579⬝10⁹ (+6.90%)
Mathlib.AlgebraicGeometry.EllipticCurve.Group +25.159⬝10⁹ (+14.16%)
lint +18.896⬝10⁹ (+0.23%)
Mathlib.Analysis.SpecialFunctions.Pow.Deriv +18.852⬝10⁹ (+15.46%)
Mathlib.Analysis.InnerProductSpace.TwoDim +13.346⬝10⁹ (+10.44%)
Mathlib.RingTheory.WittVector.Isocrystal +13.279⬝10⁹ (+34.28%)
Mathlib.RingTheory.OreLocalization.Basic +11.506⬝10⁹ (+20.27%)
Mathlib.RingTheory.LocalProperties +11.445⬝10⁹ (+17.45%)
Mathlib.RingTheory.Kaehler.Basic +11.312⬝10⁹ (+3.44%)
Mathlib.RingTheory.OreLocalization.Ring +11.197⬝10⁹ (+42.30%)
Mathlib.LinearAlgebra.Matrix.SesquilinearForm +10.770⬝10⁹ (+10.22%)
Mathlib.RingTheory.TensorProduct.Basic +10.353⬝10⁹ (+3.69%)
Mathlib.Analysis.SpecialFunctions.Trigonometric.Deriv +10.80⬝10⁹ (+8.72%)
Mathlib.RingTheory.Ideal.Norm +7.860⬝10⁹ (+10.89%)
Mathlib.Algebra.Category.Ring.Instances +7.610⬝10⁹ (+86.78%)
Mathlib.LinearAlgebra.CliffordAlgebra.BaseChange +6.738⬝10⁹ (+8.38%)
Mathlib.RingTheory.GradedAlgebra.HomogeneousLocalization +6.340⬝10⁹ (+7.36%)
Mathlib.Topology.ContinuousFunction.StoneWeierstrass +6.174⬝10⁹ (+2.47%)
Mathlib.Analysis.InnerProductSpace.LaxMilgram +6.116⬝10⁹ (+23.21%)
Mathlib.RingTheory.DiscreteValuationRing.TFAE +6.114⬝10⁹ (+35.5%)
Mathlib.LinearAlgebra.QuadraticForm.TensorProduct +5.752⬝10⁹ (+6.73%)
Mathlib.Algebra.Lie.BaseChange +5.428⬝10⁹ (+7.11%)
Mathlib.RingTheory.Jacobson +5.289⬝10⁹ (+6.97%)
Mathlib.Algebra.Lie.TraceForm +5.239⬝10⁹ (+3.32%)
Mathlib.Analysis.Calculus.BumpFunction.FiniteDimension +5.107⬝10⁹ (+7.73%)
Mathlib.Analysis.InnerProductSpace.Basic +5.25⬝10⁹ (+1.90%)
Mathlib.FieldTheory.Minpoly.IsIntegrallyClosed +5.15⬝10⁹ (+18.91%)
Mathlib.LinearAlgebra.Matrix.BilinearForm +4.764⬝10⁹ (+6.11%)
Mathlib.RingTheory.WittVector.FrobeniusFractionField +4.709⬝10⁹ (+10.13%)
Mathlib.Analysis.Fourier.FourierTransform +4.310⬝10⁹ (+4.98%)
Mathlib.LinearAlgebra.Contraction +4.269⬝10⁹ (+6.33%)
Mathlib.FieldTheory.IsAlgClosed.Basic +4.120⬝10⁹ (+3.97%)
Mathlib.LinearAlgebra.BilinearForm.TensorProduct +4.99⬝10⁹ (+8.71%)
Mathlib.Analysis.InnerProductSpace.Rayleigh +4.58⬝10⁹ (+9.36%)
Mathlib.LinearAlgebra.Trace +4.32⬝10⁹ (+4.98%)
Mathlib.LinearAlgebra.QuadraticForm.Basic +3.995⬝10⁹ (+2.88%)
Mathlib.Analysis.Fourier.RiemannLebesgueLemma +3.954⬝10⁹ (+6.49%)
Mathlib.LinearAlgebra.Dimension.Localization +3.729⬝10⁹ (+11.95%)
Mathlib.LinearAlgebra.Orientation +3.708⬝10⁹ (+5.49%)
Mathlib.RingTheory.Ideal.Over +3.620⬝10⁹ (+7.46%)
Mathlib.LinearAlgebra.PerfectPairing +3.576⬝10⁹ (+9.81%)
Mathlib.RingTheory.DedekindDomain.PID +3.567⬝10⁹ (+16.56%)
Mathlib.Analysis.SpecialFunctions.PolarCoord +3.436⬝10⁹ (+10.78%)
Mathlib.Algebra.Module.Submodule.Localization +3.48⬝10⁹ (+7.97%)
Mathlib.RingTheory.IntegrallyClosed +2.938⬝10⁹ (+13.74%)
Mathlib.Geometry.Euclidean.Inversion.Calculus +2.710⬝10⁹ (+9.43%)
Mathlib.LinearAlgebra.CliffordAlgebra.Contraction +2.682⬝10⁹ (+2.40%)
Mathlib.Analysis.SpecialFunctions.ExpDeriv +2.603⬝10⁹ (+8.3%)
Mathlib.RingTheory.DedekindDomain.Dvr +2.592⬝10⁹ (+27.20%)
Mathlib.RingTheory.Localization.Integral +2.577⬝10⁹ (+5.86%)
Mathlib.RingTheory.Kaehler.Polynomial +2.559⬝10⁹ (+1.56%)
Mathlib.Analysis.SpecialFunctions.Log.Deriv +2.482⬝10⁹ (+5.77%)
Mathlib.Analysis.SpecialFunctions.Trigonometric.ArctanDeriv +2.481⬝10⁹ (+10.86%)
Mathlib.RingTheory.RingHomProperties +2.398⬝10⁹ (+11.9%)
Mathlib.Analysis.InnerProductSpace.Orientation +2.319⬝10⁹ (+4.48%)
Mathlib.LinearAlgebra.QuadraticForm.TensorProduct.Isometries +2.287⬝10⁹ (+2.80%)
Mathlib.LinearAlgebra.QuadraticForm.Dual +2.274⬝10⁹ (+7.74%)
Mathlib.RingTheory.IsTensorProduct +2.198⬝10⁹ (+1.83%)
Mathlib.Algebra.Module.LinearMap.Polynomial +2.194⬝10⁹ (+2.18%)
Mathlib.Analysis.NormedSpace.Dual +2.162⬝10⁹ (+5.2%)
Mathlib.LinearAlgebra.Coevaluation +2.70⬝10⁹ (+6.27%)
Mathlib.RingTheory.Localization.LocalizationLocalization +2.45⬝10⁹ (+8.96%)
Mathlib.RingTheory.Localization.FractionRing +1.962⬝10⁹ (+7.71%)
Mathlib.RingTheory.Localization.AtPrime +1.899⬝10⁹ (+13.46%)
Mathlib.RingTheory.Flat.Stability +1.759⬝10⁹ (+5.9%)
Mathlib.LinearAlgebra.RootSystem.Basic +1.754⬝10⁹ (+3.67%)
Mathlib.LinearAlgebra.Matrix.PosDef +1.702⬝10⁹ (+3.38%)
Mathlib.Topology.ContinuousFunction.NonUnitalFunctionalCalculus +1.650⬝10⁹ (+0.72%)
Mathlib.RingTheory.TensorProduct.MvPolynomial +1.588⬝10⁹ (+1.73%)
Mathlib.LinearAlgebra.QuadraticForm.Prod +1.526⬝10⁹ (+3.69%)
Mathlib.RingTheory.Localization.AsSubring +1.447⬝10⁹ (+9.7%)
Mathlib.FieldTheory.KummerExtension +1.409⬝10⁹ (+1.19%)
Mathlib.Analysis.SpecialFunctions.Arsinh +1.394⬝10⁹ (+7.38%)
Mathlib.Analysis.Convolution +1.333⬝10⁹ (+0.35%)
Mathlib.RingTheory.Polynomial.Cyclotomic.Roots +1.296⬝10⁹ (+6.27%)
Mathlib.RingTheory.Ideal.MinimalPrime +1.287⬝10⁹ (+8.75%)
Mathlib.MeasureTheory.Decomposition.SignedLebesgue +1.252⬝10⁹ (+3.13%)
Mathlib.LinearAlgebra.TensorProduct.Graded.Internal +1.217⬝10⁹ (+0.46%)
Mathlib.RingTheory.Localization.Away.AdjoinRoot +1.152⬝10⁹ (+8.75%)
Mathlib.RingTheory.DedekindDomain.IntegralClosure +1.146⬝10⁹ (+2.58%)
Mathlib.NumberTheory.Modular +1.111⬝10⁹ (+1.47%)
Mathlib.Algebra.GCDMonoid.IntegrallyClosed +1.85⬝10⁹ (+15.54%)
Decrease
Mathlib.AlgebraicGeometry.GammaSpecAdjunction -175.385⬝10⁹ (-46.17%)
Mathlib.AlgebraicGeometry.Scheme -96.446⬝10⁹ (-49.30%)
Mathlib.AlgebraicGeometry.AffineScheme -81.35⬝10⁹ (-34.50%)
Mathlib.AlgebraicGeometry.Spec -35.358⬝10⁹ (-28.77%)
Mathlib.RingTheory.Localization.Basic -19.575⬝10⁹ (-17.43%)
Mathlib.Topology.ContinuousFunction.UniqueCFC -10.113⬝10⁹ (-4.56%)
Mathlib.AlgebraicGeometry.ProjectiveSpectrum.Scheme -8.615⬝10⁹ (-3.15%)
Mathlib.MeasureTheory.Measure.FiniteMeasure -7.121⬝10⁹ (-8.56%)
Mathlib.GroupTheory.MonoidLocalization -6.634⬝10⁹ (-5.6%)
Mathlib.NumberTheory.RamificationInertia -5.762⬝10⁹ (-1.85%)
Mathlib.AlgebraicGeometry.ProjectiveSpectrum.StructureSheaf -4.454⬝10⁹ (-4.67%)
Mathlib.AlgebraicGeometry.FunctionField -4.361⬝10⁹ (-14.60%)
Mathlib.Algebra.Module.LocalizedModule -4.184⬝10⁹ (-2.2%)
Mathlib.AlgebraicGeometry.StructureSheaf -3.725⬝10⁹ (-2.30%)
Mathlib.AlgebraicGeometry.Properties -3.297⬝10⁹ (-6.0%)
Mathlib.AlgebraicGeometry.Morphisms.RingHomProperties -3.273⬝10⁹ (-0.89%)
Mathlib.Analysis.NormedSpace.Star.ContinuousFunctionalCalculus.Restrict -2.269⬝10⁹ (-1.66%)
Mathlib.RingTheory.WittVector.Identities -2.123⬝10⁹ (-12.35%)
Mathlib.AlgebraicGeometry.Morphisms.ClosedImmersion -2.108⬝10⁹ (-8.66%)

Matthew Ballard (Jun 28 2024 at 18:23):

It's not terrible. I would like to ID the underlying cause. I will take another look later today. Either way, we can proceed after that

Kevin Buzzard (Jun 29 2024 at 08:02):

Investigating this has been on my job list for two weeks but I've been the organiser of two workshops in those two weeks :-) I'll take a look now.

Kevin Buzzard (Jun 29 2024 at 08:22):

Master (note that this was so quick that we didn't even get a profiler time; I wish I could remember, or knew how to look up, the way to decrease the "don't report if less than this number" profiler time default, it's rubbish being old and forgetful (yay, hovering on trace.profiler told me!)):

  [Meta.synthInstance]  DivisionRing (FractionRing A) 
  [] new goal DivisionRing (FractionRing A) 
  []  apply @Field.toDivisionRing to DivisionRing (FractionRing A) 
  []  apply inst✝²⁵ to Field (FractionRing A) 
  []  apply inst✝²⁶ to Field (FractionRing A) 
  []  apply FractionRing.field to Field (FractionRing A) 
  []  apply inst✝³ to IsDomain A 
  []  apply inst✝⁵ to IsDomain A 

Branch:

  [Meta.synthInstance] [0.024656]  DivisionRing (FractionRing A) 
  [] new goal DivisionRing (FractionRing A) 
  []  apply @OreLocalization.instDivisionRingNonZeroDivisors to DivisionRing (FractionRing A) 
  []  apply @IsSimpleAddGroup.toNontrivial to Nontrivial A 
  []  apply @AddGroupWithOne.toAddGroup to AddGroup A 
  []  apply @Ring.toAddGroupWithOne to AddGroupWithOne A 
  []  apply @StrictOrderedRing.toRing to Ring A 
  []  apply @LinearOrderedRing.toStrictOrderedRing to StrictOrderedRing A 
  []  apply @LinearOrderedCommRing.toLinearOrderedRing to LinearOrderedRing A 
  []  apply @LinearOrderedField.toLinearOrderedCommRing to LinearOrderedCommRing A 
  []  apply @StrictOrderedCommRing.toStrictOrderedRing to StrictOrderedRing A 
  []  apply @LinearOrderedCommRing.toStrictOrderedCommRing to StrictOrderedCommRing A 
  []  apply @OrderedRing.toRing to Ring A 
  []  apply @OrderedCommRing.toOrderedRing to OrderedRing A 
  []  apply @StrictOrderedCommRing.toOrderedCommRing to OrderedCommRing A 
  []  apply @StrictOrderedRing.toOrderedRing to OrderedRing A 
  []  apply @DivisionRing.toRing to Ring A 
  []  apply @Field.toDivisionRing to DivisionRing A 
  []  apply inst✝²⁵ to Field A 
  []  apply inst✝²⁶ to Field A 
  []  apply @LinearOrderedField.toField to Field A 
  []  apply @CommRing.toRing to Ring A 
  []  apply inst✝¹⁴ to CommRing A 
  []  apply inst✝¹⁵ to CommRing A 
  []  apply inst✝²⁸ to CommRing A 
  []  apply inst✝²⁹ to CommRing A 
  [resume] propagating CommRing A to subgoal CommRing A of Ring A 
  [resume] propagating Ring A to subgoal Ring A of AddGroupWithOne A 
  [resume] propagating AddGroupWithOne A to subgoal AddGroupWithOne A of AddGroup A 
  [resume] propagating AddGroup A to subgoal AddGroup A of Nontrivial A 
  []  apply @IsSimpleGroup.toNontrivial to Nontrivial A 
  []  apply @CommGroup.toGroup to Group A 
  []  apply @OrderedCommGroup.toCommGroup to CommGroup A 
  []  apply @LinearOrderedCommGroup.toOrderedCommGroup to OrderedCommGroup A 
  []  apply @LocalRing.toNontrivial to Nontrivial A 
  []  apply @IdemSemiring.toSemiring to Semiring A 
  []  apply @KleeneAlgebra.toIdemSemiring to IdemSemiring A 
  []  apply @IdemCommSemiring.toIdemSemiring to IdemSemiring A 
  []  apply @StrictOrderedSemiring.toSemiring to Semiring A 
  []  apply @LinearOrderedSemiring.toStrictOrderedSemiring to StrictOrderedSemiring A 
  []  apply @LinearOrderedCommSemiring.toLinearOrderedSemiring to LinearOrderedSemiring A 
  []  apply @LinearOrderedSemifield.toLinearOrderedCommSemiring to LinearOrderedCommSemiring A 
  []  apply @CanonicallyLinearOrderedSemifield.toLinearOrderedSemifield to LinearOrderedSemifield A 
  []  apply @LinearOrderedField.toLinearOrderedSemifield to LinearOrderedSemifield A 
  []  apply @LinearOrderedCommRing.toLinearOrderedCommSemiring to LinearOrderedCommSemiring A 
  []  apply @LinearOrderedRing.toLinearOrderedSemiring to LinearOrderedSemiring A 
  []  apply @StrictOrderedCommSemiring.toStrictOrderedSemiring to StrictOrderedSemiring A 
  []  apply @LinearOrderedCommSemiring.toStrictOrderedCommSemiring to StrictOrderedCommSemiring A 
  []  apply @StrictOrderedCommRing.toStrictOrderedCommSemiring to StrictOrderedCommSemiring A 
  []  apply @StrictOrderedRing.toStrictOrderedSemiring to StrictOrderedSemiring A 
  []  apply @OrderedSemiring.toSemiring to Semiring A 
  [] 164 more entries... 

(note the "164 more entries")

Kevin Buzzard (Jun 29 2024 at 08:34):

So we now have an exciting new option for proving that something is a division ring, which seems to take us on lots of adventures.
Master:

  [] new goal DivisionRing (FractionRing A) 
  [instances] #[@Field.toDivisionRing]

Branch:

  [] new goal DivisionRing (FractionRing A) 
  [instances] #[@Field.toDivisionRing, @OreLocalization.instDivisionRingNonZeroDivisors]

Andrew Yang (Jun 29 2024 at 08:40):

Is this unavoidable if I want to make FractionRing non-commutative?

Andrew Yang (Jun 29 2024 at 08:40):

It seems like the main culprits are the Nontrivial instances.

Kevin Buzzard (Jun 29 2024 at 08:42):

I've (hopefully) tried the dumb thing of "make Lean try the field version first" in #14257, with this priority modification:

instance (priority := 900) : DivisionRing R[R⁰⁻¹] where
  mul_inv_cancel := OreLocalization.mul_inv_cancel
  inv_zero := OreLocalization.inv_zero
  nnqsmul := _
  qsmul := _

Benchmarking hopefully on the way (I don't know what it will compare my branch against, hopefully not your branch :-) )

Kevin Buzzard (Jun 29 2024 at 09:32):

Nope, that's not it

Kevin Buzzard (Jun 29 2024 at 11:00):

Lots of things are just twice as slow in some files. Here is a trace of the proof of ClassGroup.mk0_integralRep in RingTheory/ClassGroup.lean:

Branch:

  [] [0.072115] rw [← ClassGroup.mk_mk0 (FractionRing R), eq_comm, ClassGroup.mk_eq_mk] ▶
  [] [0.190052] have fd_ne_zero : (algebraMap R (FractionRing R)) I.1.den ≠ 0 := by
      exact IsFractionRing.to_map_ne_zero_of_mem_nonZeroDivisors (SetLike.coe_mem _) ▶
  [] [0.032983] refine ⟨Units.mk0 (algebraMap R _ I.1.den) fd_ne_zero, ?_⟩ ▶
  [] [0.010530] apply Units.ext ▶
  [] [0.341623] rw [mul_comm, val_mul, coe_toPrincipalIdeal, val_mk0] ▶
  [] [0.088342] exact FractionalIdeal.den_mul_self_eq_num' R⁰ (FractionRing R) I ▶

And here it is on master:

  [] [0.032609] rw [← ClassGroup.mk_mk0 (FractionRing R), eq_comm, ClassGroup.mk_eq_mk] ▶
  [] [0.064560] have fd_ne_zero : (algebraMap R (FractionRing R)) I.1.den ≠ 0 := by
      exact IsFractionRing.to_map_ne_zero_of_mem_nonZeroDivisors (SetLike.coe_mem _) ▶
  [] [0.011499] refine ⟨Units.mk0 (algebraMap R _ I.1.den) fd_ne_zero, ?_⟩ ▶
  [] [0.006982] apply Units.ext ▶
  [] [0.132865] rw [mul_comm, val_mul, coe_toPrincipalIdeal, val_mk0] ▶
  [] [0.036618] exact FractionalIdeal.den_mul_self_eq_num' R⁰ (FractionRing R) I ▶

Kevin Buzzard (Jun 29 2024 at 11:11):

Internally typeclass inference is just taking longer to do those silly "oh look those ring instances were defeq after all" checks which are the bane of all our lives. Here's a random snapshot.

Branch:

  [] [0.003429]  fun n x ↦ x ^ n =?= fun n x ↦ x ^ n 
  [] [0.001111]  FractionalIdeal R (FractionRing R) =?= FractionalIdeal R (FractionRing R) 
  [] [0.002302]  x ^ n =?= x ^ n 

Master:

  [] [0.001835]  fun n x ↦ x ^ n =?= fun n x ↦ x ^ n 
  [] [0.001718]  x ^ n =?= x ^ n 

(we're in the middle of a proof of Semiring.npow = Semiring.npow, which occurs both on master and on the branch). The branch takes twice as long as master to deal with this, partly because on the branch, defeq is distracted by

@FractionalIdeal R inst✝¹¹ R (FractionRing R) OreLocalization.instCommRing OreLocalization.instAlgebra : Type u_1

=?=

@FractionalIdeal R inst✝¹¹ R (FractionRing R) EuclideanDomain.toCommRing OreLocalization.instAlgebra : Type u_1

(this check doesn't occur at all on master, or maybe it does happen but it takes < 0.001 s which to be honest is probably what's happening), and this is presumably because Localization.instCommRing (the instance corresponding to OreLocalization.instCommRing on master) was somehow closer to EuclideanDomain.toCommRing than OreLocalization.instCommRing is.

Kevin Buzzard (Jun 29 2024 at 11:32):

Basically, it seems that if you move localization away from being a commutative theory, then Lean has to work harder to reconcile things with the general commutative theory (Euclidean domains, fields etc) in the commutative setting :-( On the other hand it's probably a good idea to do it, and it's not like the file is unusable by any stretch of the imagination -- the class group file is still very fast and responsive, it's just that it happens to be 50% slower than it was when we were pretending that noncommutative rings didn't exist.

Ruben Van de Velde (Jun 29 2024 at 11:38):

Kevin Buzzard said:

it's just that it happens to be 50% slower than it was when we were pretending that noncommutative rings didn't exist.

Isn't that also what happens with human students?

Kevin Buzzard (Jun 29 2024 at 11:38):

Maybe the slowdown problem is completely independent of the PR; the fact that I'm still looking at traces of proofs of Semiring.npow = Semiring.npow at all is perhaps an indication that somewhere we've got our preferred parents wrong, and this PR is just making an already-existing problem worse. This problem is pretty mild though; as I say, it's still very easy to use the files, things are taking 0.3 seconds instead of 0.2 seconds rather than 10 seconds instead of 5 seconds. I'm now with Matt: happy to merge. I suspect that this PR is not the problem, i.e. the solution is not to change this PR; the problem is elsewhere and independent.

My instinct is to see what Matt thinks, and if he's happy to merge it then let's go for it; if he doesn't say anything for a couple of days because he's currently got his hands ridiculously full, then we merge it anyway; but if he's not happy then hopefully he can give me more of an indication as to what I should be doing to help here. Next week is the first week in a month that I'm not organising a workshop, so perhaps I can get more than 0 things done.

Matthew Ballard (Jun 30 2024 at 12:10):

I am taking a look. Extending my deadline to this evening to finish

Matthew Ballard (Jul 01 2024 at 15:55):

A larger problem that affects us here: the simp keys for docs#map_zero are

#[DFunLike.coe, *, *, *, *, *, 0]

so Lean will try map_zero whenever it sees a coercion to a function through a FunLike instance applied to 0. However, the lemma only applies when we have a [ZeroHomClass F M N] instance, so Lean will try to synthesize one, dragging down performance, sometimes dramatically.
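For reference, a hedged sketch of the lemma in question (current Mathlib): its left-hand side is just ⇑f 0, so the ZeroHomClass hypothesis is invisible to the discrimination-tree keys and is only synthesized after a match.

import Mathlib

-- The LHS is `⇑f 0`; the `ZeroHomClass F M N` assumption does not appear in it.
#check @map_zero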

Matthew Ballard (Jul 01 2024 at 15:57):

This is a curse of unbundling things: the statement of the theorem is more "elementary" and the proof is not. The matching in the discrimination tree, which is the filtering safeguard, lets it through when the more elementary conditions match. Then Lean tries to find the non-elementary parts to make it work.

Matthew Ballard (Jul 01 2024 at 15:59):

This happens a lot in LinearAlgebra also

Matthew Ballard (Jul 01 2024 at 16:31):

Why is docs#FractionRing an abbrev?

Andrew Yang (Jul 01 2024 at 16:32):

Because all the instances on it come from instances on Localization (both in the PR and currently in mathlib)

Matthew Ballard (Jul 01 2024 at 17:39):

I've merged master to see if anything from 4.10 helps. After that I will delegate.

Kim Morrison (Jul 01 2024 at 23:44):

Matthew Ballard said:

A larger problem that affects us here: the simp keys for docs#map_zero are

#[DFunLike.coe, *, *, *, *, *, 0]

Yikes, this is really bad. We really need tooling to tell us when bad lemmas like this get marked as @[simp].

Johan Commelin (Jul 02 2024 at 06:21):

Matthew Ballard said:

This happens a lot in LinearAlgebra also

But exactly in LinearAlgebra, I would hope that the search is almost always successful?

The 0 at the end is quite crucial.

Can we have stats for how often the search fails? In other words, how often do we call simp on an expression containing an f 0 subexpression where f is not a 0-preserving map?

Andrew Yang (Jul 02 2024 at 06:25):

I have little idea what simp keys are, but in my experience I often have to replace map_zero in squeezed simp calls with something more specific like LinearMap.map_zero for a substantial performance gain. This is usually due to the ZeroHomClass type class synthesis: either simp finds a target that is not a zero-hom and tries to find an instance for it for 5 seconds, or such an instance really does exist but is slow to find.
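A hedged illustration of that workaround, using the specific lemma named above (LinearMap.map_zero); this is only a sketch, not the exact situation from the benchmark:

import Mathlib

example {R M N : Type*} [CommRing R] [AddCommGroup M] [AddCommGroup N]
    [Module R M] [Module R N] (f : M →ₗ[R] N) : f 0 = 0 := by
  -- `simp only [map_zero]` would also close this, but may trigger a
  -- `ZeroHomClass` instance search; the specific lemma avoids that.
  simp only [LinearMap.map_zero]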

Matthew Ballard (Jul 02 2024 at 11:17):

We could re-bundle FunLike but remove the projections to it as instances

Yaël Dillies (Jul 02 2024 at 13:10):

Maybe it's time to acknowledge that this is not the job of simp?

Yaël Dillies (Jul 02 2024 at 13:11):

Can we somehow get a simproc or a bespoke tactic to handle the map lemmas?

Johan Commelin (Jul 02 2024 at 13:19):

@Andrew Yang Thanks for sharing your experience.

@Matthew Ballard @Damiano Testa can we harvest expressions that are problematic?
I'm still surprised that there are many expressions of the form f 0 in mathlib where f is not zero-preserving, but f is some sort of bundled map.

This data might be very useful in coming up with solutions to the problem.

Damiano Testa (Jul 02 2024 at 13:22):

When you say "expressions" here you really mean Expressions? These would be in potentially intermediate goals/local hypotheses, right?

Kim Morrison (Jul 02 2024 at 13:22):

Hmmm, interesting idea to use a simproc. They are also indexed in a discrimination tree, but the advantage is that we get to pick the pattern, rather than just using the LHS of a lemma. It's not clear to me how to actually make use of that here, however.

Johan Commelin (Jul 02 2024 at 13:25):

Damiano Testa said:

When you say "expressions" here you really mean Expressions? These would be in potentially intermediate goals/local hypotheses, right?

Even just getting file + line number, to tell me, look at the goal state in such-and-such place.

Johan Commelin (Jul 02 2024 at 13:25):

I am mathematically curious why we are applying non-zero-preserving homs to 0.

Damiano Testa (Jul 02 2024 at 13:26):

Once you have that information, you might as well (pretty-)print what you found with position information.

Johan Commelin (Jul 02 2024 at 13:26):

If you have a vector x : Fin n -> Int or something like that, then the first component is x 0. But that is not a problem, because x is an honest function, so you don't get a DFunLike.coe.

Johan Commelin (Jul 02 2024 at 13:27):

I guess maybe f 0 appears sometimes when f is a Finsupp? But that already seems rare.
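A hedged illustration of the two cases above (using Fin (n + 1) and Finsupp purely as examples):

import Mathlib

variable (n : ℕ) (x : Fin (n + 1) → ℤ) (f : ℕ →₀ ℤ)

-- Plain function application: no `DFunLike.coe`, so `map_zero`'s keys never match.
#check x 0
-- A `Finsupp` is applied through `DFunLike.coe`, so `f 0` does match the keys,
-- even though `f` need not send `0` to `0`.
#check f 0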

Matthew Ballard (Jul 02 2024 at 13:27):

Off the top of my head there is an example in OreLocalization.Basic if you search for map_zero...

It tries to synth ZeroHomClass for the induced map of FractionRings instead of the map from A to FractionRing A

Matthew Ballard (Jul 02 2024 at 13:27):

I think it can be f blah blah blah blah ... blah 0 also?

Damiano Testa (Jul 02 2024 at 13:27):

This is slightly tricky; in particular, I am not sure how to make "f is bundled" precise. Should it be that the f you apply to 0 is a projection of something that also has some typeclass assumption?

Johan Commelin (Jul 02 2024 at 13:27):

Sure f can be a large expression.

Johan Commelin (Jul 02 2024 at 13:28):

It should be DFunLike.coe f' where f' is honestly bundled.

Matthew Ballard (Jul 02 2024 at 13:28):

We can put an add in there etc in between the f and the 0

Damiano Testa (Jul 02 2024 at 13:29):

Ah, if it is only DFunLike.coe that you want, then this should be much simpler.

Matthew Ballard (Jul 02 2024 at 13:29):

Matthew Ballard said:

I think docs#AddConstMapClass.map_const is the winner here. The keys are

#[DFunLike.coe, *, *, *, *, *, *]

Witness the horror :) This is just "is there a DFunLike.coe of something"
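For reference (a sketch assuming current Mathlib): the lemma is real, and as the quoted keys show, everything after DFunLike.coe is a wildcard, so it is tried on essentially any bundled-map application.

import Mathlib

#check @AddConstMapClass.map_const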

Damiano Testa (Jul 02 2024 at 13:30):

Alright, so maybe just getting every subexpression that matches this -- do you have easy access to those keys from... an expression?

Damiano Testa (Jul 02 2024 at 13:30):

(Sorry, I have never used discrimination trees...)

Matthew Ballard (Jul 02 2024 at 13:31):

Yes... I need to finish my Lean PR.

Damiano Testa (Jul 02 2024 at 13:32):

So, the plan is then to scour all the nodes in all infotrees that contain an Expr, scan these Exprs for something that matches whatever the "bad" key is and report.

Damiano Testa (Jul 02 2024 at 13:32):

This is going to be a slow process, but probably not too hard to write down.

Yaël Dillies (Jul 02 2024 at 13:33):

I think dropping the ...HomClass design is justified by the fact that (almost?) all hom class instances are purely type-based, as in they are of the form "forall types, forall instances on the types, HomClass (HomType types) moreTypes", where "instances on the types" is whatever is needed to make sense of HomType
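A hedged sketch of the "purely type-based" shape being described, with MonoidHom as the example: the hom class instance is recovered from the morphism type alone, with no hypotheses about particular maps.

import Mathlib

example (M N : Type*) [MulOneClass M] [MulOneClass N] :
    MonoidHomClass (M →* N) M N :=
  inferInstance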

Matthew Ballard (Jul 02 2024 at 13:35):

Do we have stats for most used simp lemmas?

Matthew Ballard (Jul 02 2024 at 13:36):

Most tried?

Damiano Testa (Jul 02 2024 at 13:36):

No, the weekly stats reports have not been merged, and I have not thought about more stats while waiting... :upside_down:

Yaël Dillies (Jul 02 2024 at 13:36):

In particular, we do not have instances of the form "If F is a MulHomClass of injective functions between two groups of the same size, then it is a MonoidHomClass".

Yaël Dillies (Jul 02 2024 at 13:38):

Namely, it is good enough to decide what things morphisms of your morphism type preserve solely from the morphism type. This is a rather limited subdomain of typeclass inference, so hopefully we can make it faster.

Damiano Testa (Jul 02 2024 at 13:42):

This discussion seems to be pointing towards the conclusion that typeclass inference is starting to fail at about the current size of mathlib, right?

Damiano Testa (Jul 02 2024 at 13:43):

I still wonder what the situation would be if Mathlib had started with Lean 4, rather than having been ported from Lean 3.

Matthew Ballard (Jul 02 2024 at 13:49):

From past changes to core, I would have guessed they have some machinery for identifying promiscuous simp lemmas

Yaël Dillies (Jul 02 2024 at 16:26):

Damiano Testa said:

This discussion seems to be pointing towards the conclusion that typeclass inference is starting to fail at about the current size of mathlib, right?

*in some cases


Last updated: May 02 2025 at 03:31 UTC