## Stream: maths

### Topic: Algebraic geometry course

#### Kevin Buzzard (Jan 13 2020 at 10:39):

I'm starting my algebraic geometry course today, M4P33. Here are my notes for lecture 1. Any comments welcome. The lecture is 4-5 in London, I'll use Lean on my laptop at some point and I'll leave Zulip on so that if anyone posts in this thread it'll flash up on the screen lol

#### Kevin Buzzard (Jan 13 2020 at 10:40):

Will tidy it up later but need to do 100 other things first

Good luck!

#### Johan Commelin (Jan 13 2020 at 18:31):

@Kevin Buzzard How did the first lecture go?

#### Kevin Buzzard (Jan 13 2020 at 20:20):

Well I started late because something else over-ran beforehand, but it seemed to go fine. We talked about the union of two algebraic sets being an algebraic set.
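For reference, the identity used here (standard material, not a quote from the lecture): for subsets $S,T$ of the polynomial ring,

$$V(S)\cup V(T) = V(\{fg \mid f\in S,\ g\in T\}).$$

One inclusion is immediate; for the other, if $x \notin V(S)\cup V(T)$ then there are $f\in S$ and $g\in T$ with $f(x)\neq 0$ and $g(x)\neq 0$, so $(fg)(x)\neq 0$ and $x$ is not in the right-hand side.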

#### Kevin Buzzard (Jan 14 2020 at 10:01):

I'm going to be formalising the proof of the Nullstellensatz in my course soon. Do we already have this in Lean?

#### Johan Commelin (Jan 14 2020 at 10:05):

I think @Lennard Henze was working on this

#### Johan Commelin (Jan 14 2020 at 10:05):

How are things going, Lennard?

#### Johan Commelin (Jan 14 2020 at 11:50):

@Kevin Buzzard Did you show those 4th year students Lean during the lecture? If so, how did they react?

#### Kevin Buzzard (Jan 14 2020 at 11:57):

It was a bit hectic :-) because I've got so much to do and had no real time to think about how to explain the Lean. It's in a better state now.

#### Agnishom Chattopadhyay (Jan 14 2020 at 14:03):

Naive Question: This does not begin with the definition of affine algebraic sets. If that was not taught in Lecture 1 of Algebraic Geometry, in which course was it taught?

#### Johan Commelin (Jan 14 2020 at 14:04):

Probably commutative algebra

#### Kevin Buzzard (Jan 14 2020 at 14:57):

We're following these notes by Martin Orr.

#### Kevin Buzzard (Jan 14 2020 at 14:58):

The definition is in there, except I have removed what I believe to be an unnecessary finiteness assumption.

#### Lennard Henze (Jan 14 2020 at 15:17):

Yes I was working on that, but I discontinued my work because I was overwhelmed by the task. My work so far can be found here https://github.com/Forsetes/mathlib/blob/hilberts_nullstellensatz/src/ring_theory/noether_normalization.lean

#### Kevin Buzzard (Jan 17 2020 at 10:39):

Lecture 3 of my course. Can anyone tell me how to get rid of those bloody Ps everywhere?

Use parameters

#### Mario Carneiro (Jan 17 2020 at 11:02):

Alternatively, bake the application of P into the notations for V and I as in Sebastian's comment

#### Kevin Buzzard (Jan 17 2020 at 11:10):

What automation does these proofs? Is there a good name for V and I? For all I know they are both comp.left_distrib

#### Kevin Buzzard (Jan 17 2020 at 11:11):

@Patrick Massot how close am I to getting lecture 3 into a beautiful html file?

#### Kevin Buzzard (Jan 17 2020 at 11:14):

@Mario Carneiro I thought notation didn't work with variables?

#### Mario Carneiro (Jan 17 2020 at 11:17):

I don't know any relevant automation for the last proof, but it is what I would call a one-liner

#### Mario Carneiro (Jan 17 2020 at 11:18):

You have a lot of ?? in the proof, did you want to ask a question about it?

#### Mario Carneiro (Jan 17 2020 at 11:19):

I don't often put free variables in notations because it's confusing, but I think Sebastian's idea will work

#### Kevin Buzzard (Jan 17 2020 at 11:25):

I cut and pasted one proof and then started switching V's to I's, and then it compiled before I had finished :-)

#### Kevin Buzzard (Jan 17 2020 at 11:26):

> I don't often put free variables in notations because it's confusing, but I think Sebastian's idea will work

What is Sebastian's idea?

#### Mario Carneiro (Jan 17 2020 at 11:27):

Hm, I had to mess with it a bit, but with this preamble you can replace each `𝕍 P` with `𝕍` and similarly for `𝕀`:

```lean
variable {R : Type u}
variable {W : Type v}
variable (P : R → W → Prop)
include P

def 𝕍_ (S : set R) : set W :=
{x : W | ∀ f ∈ S, P f x}
notation `𝕍`:max := 𝕍_ (by exact P)

def 𝕀_ (X : set W) : set R :=
{f : R | ∀ x ∈ X, P f x}
notation `𝕀`:max := 𝕀_ (by exact P)
```
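With that preamble in scope, statements no longer mention `P` explicitly; for instance, inclusion-reversal ought to be a one-liner along these lines (an untested sketch, and `𝕍_antimono` is a made-up name):

```lean
-- assumes the preamble above; 𝕍 reverses inclusions
lemma 𝕍_antimono (S T : set R) (h : S ⊆ T) : 𝕍 T ⊆ 𝕍 S :=
λ x hx f hf, hx f (h hf)
```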


#### Mario Carneiro (Jan 17 2020 at 11:30):

It also works, without the notation, if you just change all the variables to parameters

#### Mario Carneiro (Jan 17 2020 at 11:32):

Oh I see, your proof of 𝕀𝕍𝕀_eq_𝕀 goes through because lean noticed that if it substitutes X for S and vice versa then V becomes I and vice versa... so 𝕀𝕍𝕀_eq_𝕀 and 𝕍𝕀𝕍_eq_𝕍 are actually the same proof

#### Mario Carneiro (Jan 17 2020 at 11:32):

See?

```lean
lemma 𝕀𝕍𝕀_eq_𝕀 (V : set W) : (𝕀 (𝕍 (𝕀 V))) = 𝕀 V :=
𝕍𝕀𝕍_eq_𝕍 _ _
```


#### Mario Carneiro (Jan 17 2020 at 11:33):

If you use parameters, this trick doesn't work

#### Patrick Massot (Jan 17 2020 at 11:55):

> Patrick Massot how close am I to getting lecture 3 into a beautiful html file?

What do you want the html file to look like? Are you aiming for something as on the mathlib website or something as in my lecture notes?

#### Kevin Buzzard (Jan 17 2020 at 13:11):

Something I could slap up before the end of this lecture at 1:50pm

#### Kevin Buzzard (Jan 17 2020 at 13:11):

which I'm currently giving

#### Kevin Buzzard (Jan 17 2020 at 17:17):

> do we not have nilradical in Lean?

Nope!

#### Alex J. Best (Jan 17 2020 at 17:22):

Or nilpotents etc. I added some bits at https://github.com/leanprover-community/mathlib/blob/ece70146a142c8b04563c21edb34c87d192b172b/src/ring_theory/dedekind_finite.lean but it's almost nothing

#### Kevin Buzzard (Jan 17 2020 at 18:07):

Today my main achievement was that I managed to reproduce the proof which I knew as a kid of something which I knew couldn't possibly be the Nullstellensatz:

https://github.com/ImperialCollegeLondon/M4P33/blob/ac0a28f19936a7f861ff3c6c0dc57d9b6093e8e6/src/abstract_V_and_I.lean#L151

I managed to abstract out all of the algebraic geometry completely :D I only import data.set.basic (although there's a joke at the top of the file)

#### Kevin Buzzard (Jan 17 2020 at 18:44):

> If you use parameters, this trick doesn't work

Right, because P needs to be reversed.

#### Kevin Buzzard (Jan 17 2020 at 18:46):

```lean
def 𝕍_ (S : set R) : set W :=
{x : W | ∀ f ∈ S, P f x}
notation `𝕍`:max := 𝕍_ (by exact P)
```


This hack enables me to state theorems the way mathematicians would expect to see them, but when I'm working on proofs I still see the P in the tactic state. Is there any way to get rid of this?

#### Kevin Buzzard (Jan 17 2020 at 21:05):

Is there a one-line tactic which solves this sort of thing:

```lean
import data.set.lattice

open set

example (R : Type*) (A : Type*) (P : R → A → Prop)
  (ι : Type*) (S : ι → set R) (x : A) :
  (∀ (f : R), (f ∈ ⋃ (i : ι), S i) → P f x) ↔
    ∀ (i : ι) (f : R), f ∈ S i → P f x :=
begin
  split,
  intros,
  apply a,
  rw mem_Union,
  use i,
  assumption,
  intros h f hf,
  rw mem_Union at hf,
  cases hf with i hi,
  apply h i,
  assumption,
end
```


I really feel like I'm following my nose at all points and so am hoping this is just the sort of thing that CS people have tacticised

#### Chris Hughes (Jan 17 2020 at 21:08):

`simp, tauto`
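Once `set.mem_Union` is unfolded the statement is pure logic, so a term-mode sketch may also close it (lemma names as in mathlib at the time; untested):

```lean
example (R A : Type*) (P : R → A → Prop)
  (ι : Type*) (S : ι → set R) (x : A) :
  (∀ f : R, (f ∈ ⋃ i, S i) → P f x) ↔ ∀ (i : ι) (f : R), f ∈ S i → P f x :=
-- rewrite membership in the union to an existential, push it through the
-- implication, then swap the two universal quantifiers
by simp only [set.mem_Union, exists_imp_distrib]; exact forall_swap
```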

#### Kevin Buzzard (Jan 17 2020 at 21:47):

Do we have this:

```lean
/-- Over an infinite integral domain a polynomial f is zero iff it
evaluates to zero everywhere -/
lemma mv_polynomial.eval_eq_zero {k : Type*} [integral_domain k] [infinite k]
  {n : Type*} {f : mv_polynomial n k} :
  (∀ x, eval x f = 0) ↔ f = 0 :=
begin
  sorry
end
```


? I think you can prove it in the case when n is finite, by induction on the size of n, and then for general n you write the polynomial ring as a filtered colimit of finite n's.

I would like to prove this myself, if it's not there, but I suspect there might be some quite high-powered ways to do this. For example I don't know whether I'm supposed to be reducing to the case of n = fin m or n : fintype or what. The reduction needs the concept of the variables which appear in f, which might be some sort of monad bind thing?
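For reference, the standard induction step in the finite-variable case (textbook material, not from the thread): write $f \in k[x_1,\dots,x_n]$ as $f = \sum_j c_j\,x_n^j$ with coefficients $c_j \in k[x_1,\dots,x_{n-1}]$. If $f$ vanishes everywhere then for each fixed $a \in k^{n-1}$ the one-variable polynomial $\sum_j c_j(a)\,x_n^j$ vanishes on all of the infinite set $k$, so every $c_j(a)=0$; since $a$ was arbitrary, each $c_j$ vanishes everywhere, hence each $c_j=0$ by the inductive hypothesis, and so $f=0$.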

#### Kevin Buzzard (Jan 17 2020 at 21:48):

@Alexander Uhlmann isolated this property at Xena yesterday.

#### Jesse Michael Han (Jan 18 2020 at 06:33):

> `simp, tauto`

I was surprised `finish` couldn't do it alone, you have to spoon-feed it:

```lean
import data.set.lattice

open set

example (R : Type*) (A : Type*) (P : R → A → Prop)
  (ι : Type*) (S : ι → set R) (x : A) :
  (∀ (f : R), (f ∈ ⋃ (i : ι), S i) → P f x) ↔
    ∀ (i : ι) (f : R), f ∈ S i → P f x :=
by finish [iff_iff_implies_and_implies]
```


#### Kevin Buzzard (Jan 18 2020 at 11:52):

Urs Schreiber pointed me to this nLab page on Twitter. Does anyone have a clue what is going on here?

#### Johan Commelin (Jan 18 2020 at 14:24):

@Kevin Buzzard I've no idea what's going on there, but I do know that it is completely useless for our purposes. We're miles away from cohesive sheaf toposes.

#### Johan Commelin (Jan 18 2020 at 21:57):

@Kevin Buzzard I want to go to bed, so I'm just dumping what I hacked together:

```lean
import data.mv_polynomial

noncomputable theory
open_locale classical

open mv_polynomial polynomial

/-- Over an infinite integral domain a polynomial f is zero if it
evaluates to zero everywhere -/
lemma polynomial.eval_eq_zero
  {k : Type*} [integral_domain k] [infinite k] {f : polynomial k}
  (H : ∀ x, eval x f = 0) : f = 0 :=
begin
  rcases infinite.exists_not_mem_finset (roots f) with ⟨x, hx⟩,
  contrapose! hx with hf,
  rw mem_roots hf,
  apply H,
end

/-- Over an infinite integral domain a polynomial f is zero iff it
evaluates to zero everywhere -/
lemma polynomial.eval_eq_zero_iff
  {k : Type*} [integral_domain k] [infinite k] {f : polynomial k} :
  (∀ x, eval x f = 0) ↔ f = 0 :=
⟨polynomial.eval_eq_zero, by { rintro rfl x, exact rfl }⟩

def fin.equiv_empty : fin 0 ≃ empty :=
{ to_fun := λ i, false.elim $ nat.not_lt_zero _ i.2,
  inv_fun := empty.elim,
  left_inv := λ i, false.elim $ nat.not_lt_zero _ i.2,
  right_inv := λ x, x.elim }

def fin.equiv_pempty : fin 0 ≃ pempty :=
{ to_fun := λ i, false.elim $ nat.not_lt_zero _ i.2,
  inv_fun := pempty.elim,
  left_inv := λ i, false.elim $ nat.not_lt_zero _ i.2,
  right_inv := λ x, x.elim }

@[simp] lemma mv_polynomial.eval_rename
  {k : Type*} [comm_semiring k]
  {m : Type*} {n : Type*} (e : m ≃ n) (f : mv_polynomial m k) (x) :
  eval x (rename e f) = eval (x ∘ e) f :=
by apply f.induction_on; { intros, simp * }

lemma mv_polynomial.equiv_eval_eq_zero
  {k : Type*} [comm_semiring k]
  {m : Type*} {n : Type*} (e : m ≃ n)
  (H : (∀ f : mv_polynomial m k, (∀ x, eval x f = 0) → f = 0))
  (f : mv_polynomial n k) (hf : ∀ x, eval x f = 0) : f = 0 :=
begin
  let φ := ring_equiv_of_equiv k e,
  suffices h : φ.symm f = 0, { simpa using congr_arg φ h },
  apply H,
  intro x,
  show eval x (rename e.symm f) = 0,
  simp [hf],
end

universe variables u v

def equiv.option {X : Type u} {Y : Type v} (e : X ≃ Y) : option X ≃ option Y :=
{ to_fun := option.map e,
  inv_fun := option.map e.symm,
  left_inv := λ x, by { cases x, refl, simp },
  right_inv := λ x, by { cases x, refl, simp } }

def fin_succ_equiv_option (n : ℕ) : fin (n+1) ≃ option (fin n) :=
sorry

set_option pp.universes true

lemma fintype.induction {P : Π (X : Type*) [fintype X], Prop}
  (h0 : P pempty) (hs : ∀ (X : Type*) [fintype X], P X → P (option X))
  (he : ∀ {X Y : Type*} [fintype X] [fintype Y] (e : X ≃ Y), P X → P Y)
  (X : Type*) [fintype X] : P X :=
begin
  rcases fintype.exists_equiv_fin X with ⟨n, ⟨e⟩⟩,
  apply he (equiv.ulift.trans e.symm), clear e,
  induction n with n ih,
  { exact he (equiv.ulift.trans fin.equiv_pempty).symm h0 },
  { specialize hs _ ih,
    apply he _ hs,
    refine (equiv.ulift.trans _).symm,
    refine (fin_succ_equiv_option n).trans _,
    refine equiv.ulift.option.symm }
end

/-- Over an infinite integral domain a polynomial f is zero if it
evaluates to zero everywhere -/
lemma mv_polynomial.fin_eval_eq_zero
  {k : Type*} [int : integral_domain k] [inf : infinite k]
  {n : Type*} [fin : fintype n] :
  ∀ {f : mv_polynomial n k}, (∀ x, eval x f = 0) → f = 0 :=
begin
  unfreezeI,
  revert inf int k,
  revert fin n,
  refine fintype.induction _ _ _,
  { intros k int inf f H,
    have h : (pempty_ring_equiv k) f = 0 := H _,
    replace h := congr_arg (pempty_ring_equiv k).symm h,
    rw [ring_equiv.symm_apply_apply] at h,
    convert h, symmetry, exact ring_equiv.map_zero _ },
  { intros n _ ih k int inf f hf,
    let φ := option_equiv_right k n,
    suffices h : φ f = 0, { simpa using congr_arg φ.symm h },
    apply ih,
    { sorry },
    { resetI, sorry } },
  { intros m n _ _ e H k int inf f,
    apply mv_polynomial.equiv_eval_eq_zero e,
    apply @H }
end

/-- Over an infinite integral domain a polynomial f is zero iff it
evaluates to zero everywhere -/
lemma mv_polynomial.eval_eq_zero_iff
  {k : Type*} [integral_domain k] [infinite k]
  {n : Type*} {f : mv_polynomial n k} :
  (∀ x, eval x f = 0) ↔ f = 0 :=
begin
  sorry
end
```


#### Johan Commelin (Jan 18 2020 at 21:57):

Could probably use some cleaning...

Thanks!

#### Kevin Buzzard (Jan 19 2020 at 17:18):

I'm reading proofs of the Nullstellensatz. Here is something which comes up more than once. Say $k$ is a field (probably a commutative semiring would work fine) and $R$ is a finitely-generated $k$-algebra: this means that $R$ is a ring which is isomorphic to a quotient of a polynomial ring `mv_polynomial X k` with `X` a fintype -- alternatively it means that there's a ring map $k\to R$ and there's some `Y : finset R` such that the induced ring hom `mv_polynomial Y k → R` is surjective. Mathematicians would write $R=k[x_1,x_2,\ldots,x_n]$ and note that this is not a polynomial ring -- it's "the ring generated by $k$ and the set $\{x_1,x_2,\ldots,x_n\}$", so it's a quotient of a polynomial ring.
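A hedged guess at a Lean spelling of "finitely generated as a $k$-algebra", using `algebra.adjoin` from mathlib's `ring_theory/adjoin.lean` (the name `algebra.fg` is made up, and this is an untested sketch):

```lean
-- R is a finitely generated k-algebra iff some finite subset of R
-- generates it (together with k) as a subalgebra
def algebra.fg (k R : Type*) [comm_ring k] [comm_ring R] [algebra k R] : Prop :=
∃ s : finset R, algebra.adjoin k (↑s : set R) = ⊤
```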

#### Johan Commelin (Jan 25 2020 at 19:16):

Didn't Kenny already define free_comm_monoid?

#### Yury G. Kudryashov (Jan 25 2020 at 19:26):

@Johan Commelin git grep doesn't find it. Are you talking about some PR/WIP?

#### Chris Hughes (Jan 25 2020 at 19:28):

free_comm_monoid is multiset.

#### Chris Hughes (Jan 25 2020 at 19:28):

Kenny did define free_comm_group

Aah, right

#### Kevin Buzzard (Jan 25 2020 at 20:58):

I am so confused by all of this. I thought the other day @Yury G. Kudryashov was saying "use dfinsupp not finsupp" and Chris was agreeing, and now he's saying "use monoid-algebra" and Chris is agreeing.

What is going on here? Is the idea that people are suggesting that the definition of mv_polynomial be changed -- and are possibly suggesting more than one change? If this change is made, then someone has to rewrite the 1000+ line mv_polynomial.lean with the new definition, I guess. I think that this would be an interesting exercise for someone like me. However I cannot begin to do it until someone formalises the definitions because I cannot see the difference between any of the definitions that people are suggesting -- they're all the same mathematically.

#### Chris Hughes (Jan 25 2020 at 20:59):

monoid algebra should be defined to be dfinsupp I think.

#### Kevin Buzzard (Jan 25 2020 at 21:04):

So currently it starts like this:

```lean
import algebra.ring
import data.finsupp data.polynomial data.equiv.algebra

noncomputable theory
local attribute [instance, priority 100] classical.prop_decidable

open set function finsupp lattice

universes u v w x
variables {α : Type u} {β : Type v} {γ : Type w} {δ : Type x}

/-- Multivariate polynomial, where σ is the index set of the variables and
α is the coefficient ring -/
def mv_polynomial (σ : Type*) (α : Type*) [comm_semiring α] := (σ →₀ ℕ) →₀ α
```


How do you think it should start? Same imports?

#### Chris Hughes (Jan 25 2020 at 21:06):

I think change →₀ to Π₀ and ℕ to an arbitrary monoid.

#### Chris Hughes (Jan 25 2020 at 21:13):

I also think dfinsupp should just be called direct_sum, since there are no applications I can think of where we don't want it to be this.

#### Yury G. Kudryashov (Jan 25 2020 at 22:41):

For me monoid_algebra and free_comm_monoid represent the algebraic interface of what we need, while finsupp / dfinsupp are specific types that can be used to fulfill this interface.

Currently mv_polynomial uses σ →₀ ℕ as an implementation of free_comm_monoid σ, and M →₀ α as an implementation of monoid_algebra α M with M = σ →₀ ℕ. The definitions and theorems heavily rely on using these specific types. I think that we should define monoid_algebra separately from polynomials, prove the universal property etc., then have mv_polynomial α R := monoid_algebra R (multiplicative $ free_comm_monoid α), where free_comm_monoid α can be either α →₀ ℕ or multiset α.

#### Kevin Buzzard (Jan 25 2020 at 22:50):

Only the definitions and theorems in mv_polynomial.lean rely on these, right? Everything else should just use the 1000-line interface made there. So why does it matter? I'm not saying it doesn't, I'm just trying to understand this way of thinking.

#### Yury G. Kudryashov (Jan 25 2020 at 22:51):

We can implement monoid_algebra either on top of finsupp, or on top of dfinsupp — I don't care too much. We may also want to reimplement finsupp on top of dfinsupp to reduce code duplication.

#### Kevin Buzzard (Jan 25 2020 at 22:51):

So here is a case where you think the implementation details don't matter.

#### Yury G. Kudryashov (Jan 25 2020 at 22:52):

A few reasons:

1. Reduce code duplication between polynomial and mv_polynomial.
2. Make it easier to add other kinds of polynomial-like objects.

#### Kevin Buzzard (Jan 25 2020 at 22:53):

I see.

#### Kevin Buzzard (Jan 25 2020 at 22:53):

Yes, you can imagine that people will want variants at some point

#### Yury G. Kudryashov (Jan 25 2020 at 22:53):

I will want quasipolynomials.

#### Yury G. Kudryashov (Jan 25 2020 at 22:55):

E.g., to write solutions of $\dot x=Ax+e^{at}v$ where $a$ is an eigenvalue of $A$.

#### Yury G. Kudryashov (Jan 25 2020 at 22:55):

Or to deal with some local normal forms of differential equations.

#### Kevin Buzzard (Jan 26 2020 at 16:31):

OK so I know a direct proof of Zariski's Lemma which does not go via Noether normalisation. I am not sure of the best way to formalise it though.

Statement: say $k$ is a field, $\sigma$ is a finite set, and $J$ is a maximal ideal of the polynomial ring $R:=k[\sigma]$. Then the quotient field $K:=R/J$ is a finite extension of $k$ (we know the quotient is finitely-generated as a ring; the conclusion is that it is finitely-generated as a vector space).

The beginning of the proof looks like this. Induction on size of $\sigma$. If it's empty, no problem. For the inductive step, there are two cases. The easy case is when the image in $K$ of every element of $\sigma$ is algebraic over $k$, because then $K$ is finite by some tower law argument. In the other case, we choose some $i\in\sigma$ for which the corresponding element $x_i\in K$ is not algebraic over $k$ (i.e. the subring $k[x_i]$ of $K$ is isomorphic to a polynomial ring); we can then replace $k$ by the subfield $k(x_i)$ of $K$ (field of fractions computed within $K$) and now the induction hypothesis kicks in to give $K/k(x_i)$ finite and the proof goes on from there.

I am staring at this part of the proof and am not entirely sure of how to set things up so that they should in theory go smoothly. I need finitely-generated $k$-algebras, I need to set up the statement so that the induction procedure is not awful, I need to take a field of fractions within a bigger field, and so on. Does anyone dare to offer some suggestions as to how to formalise the relevant definitions? Once I am in a begin-end block I am a much happier person.

It is not relevant to this post, but the end of the proof is then this: each $x_j$ for $j\not=i$ is integral over $k(x_i)$ and hence over $k[x_i][1/D]$ for $D$ some non-zero polynomial which is a multiple of all the denominators that show up in all the min polys of all the $x_j$. So $K$ is a field integral over $k[x_i][1/D]$ and hence $k[x_i][1/D]$ is a field (this is Prop 5.7 in Atiyah--Macdonald), and it is slightly fiddly but not too hard to verify that this can't happen (one can check, for example, that $D$ must be in every maximal ideal of $k[x_i]$ and hence that $1+D$ is a constant, contradiction)
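A pseudo-Lean sketch of how the statement might be set up (the `finite_dimensional` spelling and the required instances on the quotient are assumptions, and this is a statement skeleton only, not expected to compile as-is):

```lean
-- Zariski's lemma, statement sketch: a maximal quotient of a polynomial
-- ring over a field in finitely many variables is module-finite over k
theorem zariski_lemma {k : Type*} [field k] {σ : Type*} [fintype σ]
  (J : ideal (mv_polynomial σ k)) (hJ : J.is_maximal) :
  finite_dimensional k J.quotient :=
sorry
```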

#### Johan Commelin (Jan 27 2020 at 09:11):

If we are going to refactor mv_polynomial – which sounds like a good idea – shall we also make polynomial R defeq to mv_polynomial unit R, and rebuild the current API around that?

#### Mario Carneiro (Jan 27 2020 at 09:12):

I'm... surprised that's not the case

#### Mario Carneiro (Jan 27 2020 at 09:13):

It seems like we are converging to a noncomputable representation anyway, so this seems like the best way to reduce proof work

#### Mario Carneiro (Jan 27 2020 at 09:13):

@Chris Hughes thoughts?

#### Johan Commelin (Jan 27 2020 at 09:14):

Could it even be an abbreviation?

#### Mario Carneiro (Jan 27 2020 at 09:15):

no, I don't think so. Given the issues we have had with typeclasses in this area I don't want to make anything harder than it needs to be

#### Mario Carneiro (Jan 27 2020 at 09:15):

I'm thinking just restate all the theorems and prove by direct reference to the multivariate version

#### Kevin Buzzard (Jan 27 2020 at 10:58):

> I'm... surprised that's not the case

mv_polynomial imports polynomial!

why?

#### Mario Carneiro (Jan 27 2020 at 10:59):

it's possible the import is just there for conjunction theorems, in which case either one can come first and the conjunction theorems can just go to whichever file comes second

#### Chris Hughes (Jan 27 2020 at 11:31):

> Chris Hughes thoughts?

I think making `polynomial` into `mv_polynomial unit` would be sensible. One part of the interface that's currently missing is to make a `polynomial` into an `mv_polynomial` by providing a name. I guess this is just a special case of the more general theorem that a map between monoids induces a map between monoid algebras.
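The "name the variable" map could plausibly be built from `mv_polynomial.rename` along the function `unit → σ` picking out the name (hypothetical wrapper, untested):

```lean
-- view `polynomial k` as `mv_polynomial unit k`, then name its
-- single variable `i : σ` by renaming along the constant function
def polynomial.to_mv {k : Type*} [comm_semiring k] {σ : Type*} (i : σ)
  (f : mv_polynomial unit k) : mv_polynomial σ k :=
rename (λ _, i) f
```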

#### Johan Commelin (Jan 27 2020 at 12:21):

I'm continuing this discussion in a new thread: https://leanprover.zulipchat.com/#narrow/stream/116395-maths/topic/Refactoring.20polynomials/near/186676397

Last updated: May 09 2021 at 09:11 UTC