Zulip Chat Archive

Stream: new members

Topic: Type of continuous functions on ℝ≥0 → ℝ


Lars Ericson (Dec 20 2020 at 21:34):

I would like to describe the type of continuous functions on ℝ≥0 → ℝ. I get this far

 #check @continuous nnreal ℝ nnreal.topological_space -- real.topological_space

but there is nnreal but not real topological space.

When a mathematician says "continuous function on reals", is there a predefined space for that in mathlib?

Mario Carneiro (Dec 20 2020 at 21:35):

there is certainly an instance for topological_space real. Try #check (by apply_instance : topological_space real)

Mario Carneiro (Dec 20 2020 at 21:36):

You should just be able to leave a _ and lean will fill in things like nnreal.topological_space
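(For illustration, a minimal sketch of the underscore trick; the import topology.instances.nnreal is an assumption, any file pulling in the nnreal and real topologies would do:)

import topology.instances.nnreal

-- the two `_` placeholders stand for the topological_space instances,
-- and Lean fills them in by instance search:
#check @continuous nnreal ℝ _ _   -- (nnreal → ℝ) → Prop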

Lars Ericson (Dec 20 2020 at 21:44):

Thank you @Mario Carneiro I can get to that with by apply_instance. With nnreal you can say nnreal.topological_space directly without the by apply_instance. It doesn't come up in the list of instances for docs#topological_space.

Mario Carneiro (Dec 20 2020 at 21:45):

In general, instances require a search procedure, they are not always single terms

Mario Carneiro (Dec 20 2020 at 21:45):

but lean looks them up automatically, you shouldn't supply them under normal circumstances

Mario Carneiro (Dec 20 2020 at 21:46):

the by apply_instance is only so that you can see what lean would look up in such a situation

Mario Carneiro (Dec 20 2020 at 21:47):

normally you just write continuous (f : nnreal -> real) and everything is taken care of automatically
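(A minimal sketch of that plain usage, under the same import assumption; `my_continuous` is just an illustrative name:)

import topology.instances.nnreal

-- no instance arguments are written; Lean finds the topologies on nnreal and ℝ by itself:
def my_continuous (f : nnreal → ℝ) : Prop := continuous f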

Lars Ericson (Dec 20 2020 at 21:55):

I am trying to translate this text "The space Ω of elementary events for Brownian motion is the set of all continuous real functions Ω={𝜔:ℝ≥0→ℝ}. We refer to the 𝜔(𝑡) as Brownian trajectories." This typechecks and seems like a good translation into Lean:

def elementary_event (f : nnreal → ℝ) := @continuous nnreal ℝ nnreal.topological_space _ f
def Brownian_event_space := set {f : nnreal → ℝ | elementary_event f}
variable Ω : Brownian_event_space

Mario Carneiro (Dec 20 2020 at 22:24):

I would suggest something more like:

@[derive topological_space]
def Brownian_event_space := {f : nnreal → ℝ // continuous f}
local notation `Ω` := Brownian_event_space

instance : measurable_space Ω := borel _
instance : borel_space Ω := ⟨rfl⟩

Mario Carneiro (Dec 20 2020 at 22:25):

That may not be the correct topology though

Mario Carneiro (Dec 20 2020 at 22:27):

Better, using the compact open topology:

import measure_theory.lebesgue_measure
import topology.compact_open

@[derive topological_space]
def Brownian_event_space := C(nnreal, ℝ)
local notation `Ω` := Brownian_event_space

noncomputable instance : measurable_space Ω := borel _
instance : borel_space Ω := ⟨rfl⟩

Mario Carneiro (Dec 20 2020 at 22:31):

Also the use of set, the {x | p x} notation, and variable Ω : Brownian_event_space are incorrect in your example. It's important to distinguish between types and sets here, and Ω is the Brownian event space, it is not a member of that space
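(A small sketch of the type/set distinction, not from the thread, again assuming the nnreal topology import:)

import topology.instances.nnreal

-- `{f | p f}` is a term of type `set (nnreal → ℝ)`, i.e. a set, not a type:
#check ({f : nnreal → ℝ | continuous f} : set (nnreal → ℝ))
-- the subtype `{f // p f}` is itself a type; its terms bundle a function with a proof:
#check ({f : nnreal → ℝ // continuous f} : Type)
example (ω : {f : nnreal → ℝ // continuous f}) : nnreal → ℝ := ω.1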

Lars Ericson (Dec 20 2020 at 23:04):

Thanks @Mario Carneiro . Now I am trying to build on that to define this:

"A cylinder set of Brownian trajectories for a finite set of times 0≤𝑡1<𝑡2<⋯<𝑡𝑛 and real intervals 𝐼𝑘=(𝑎𝑘,𝑏𝑘),1≤𝑘≤𝑛 is constructed as
𝐶(𝑡1,…,𝑡𝑛;𝐼1,…,𝐼𝑛)={𝜔∈Ω:∀1≤𝑘≤𝑛:𝜔(𝑡𝑘)∈𝐼𝑘}"

I can do this:

constant n : ℕ
variable t : (fin n) → nnreal
variable I : (fin n) → ℝ × ℝ
variable ω : C(nnreal, ℝ)
variable k : (fin n)
def interval (I : (fin n) → ℝ × ℝ) (k : (fin n)) := set.Ioo (I k).1 (I k).2

#check (ω (t k) ∈ (interval I k)) -- ⇑ω (t k) ∈ interval I k : Prop
#check {ω : C(nnreal, ℝ) | ∀ k : fin n, (ω (t k) ∈ (interval I k))} -- {ω : C(nnreal, ℝ) | ∀ (k : fin n), ⇑ω (t k) ∈ interval I k} : set C(nnreal, ℝ)

But if I replace variable ω : C(nnreal, ℝ) by variable ω : Brownian_event_space, then it doesn't check because it doesn't think ω is a function.

Lars Ericson (Dec 20 2020 at 23:10):

Also I'm having a problem defining the cylinder_set:

def cylinder_set (n : ℕ)
                 (t : fin n → nnreal)
                 (I : (fin n) → ℝ × ℝ) :=
  {ω : C(nnreal, ℝ) | ∀ k : fin n, (ω (t k) ∈ (interval I k))}

as there is a problem in letting n be a parameter, I get this error I don't know how to fix:

type mismatch at application
  interval I
term
  I
has type
  fin n → ℝ × ℝ
but is expected to have type
  fin _root_.n → ℝ × ℝ
types contain aliased name(s): n
remark: the tactic `dedup` can be used to rename aliases
brownian_mo

Mario Carneiro (Dec 20 2020 at 23:19):

But if I replace variable ω : C(nnreal, ℝ) by variable ω : Brownian_event_space, then it doesn't check because it doesn't think ω is a function.

@[derive [topological_space, has_coe_to_fun]]
def Brownian_event_space := C(nnreal, ℝ)

Mario Carneiro (Dec 20 2020 at 23:20):

I'm not sure what you mean by (constant n : N) but I think simply (n : N) in the arguments to cylinder_set does what you want

Lars Ericson (Dec 20 2020 at 23:21):

Even though I have def Brownian_event_space := C(nnreal, ℝ) in context, if I say

def cylinder_set (t : fin n → nnreal)
                 (I : (fin n) → ℝ × ℝ) :=
  {ω : Brownian_event_space | ∀ k : fin n, (ω (t k) ∈ (interval I k))}

then it crashes with

function expected at
  ω
term has type
  C(nnreal, ℝ)

It doesn't take the synonym.

Mario Carneiro (Dec 20 2020 at 23:22):

see above

Lars Ericson (Dec 20 2020 at 23:23):

Thanks, that works!

--import probability_space
import measure_theory.lebesgue_measure
import topology.compact_open

@[derive [topological_space, has_coe_to_fun]]
def Brownian_event_space := C(nnreal, ℝ)
local notation `Ω` := Brownian_event_space

noncomputable instance : measurable_space Ω := borel _
instance : borel_space Ω := ⟨rfl⟩

def interval (n : ℕ) (I : (fin n) → ℝ × ℝ) (k : (fin n)) := set.Ioo (I k).1 (I k).2

def cylinder_set (n : ℕ) (t : fin n → nnreal)
                 (I : (fin n) → ℝ × ℝ) :=
  {ω : Brownian_event_space | ∀ k : fin n, (ω (t k) ∈ (interval n I k))}

#check cylinder_set -- : Π (n : ℕ), (fin n → nnreal) → (fin n → ℝ × ℝ) → set Ω

Johan Commelin (Dec 21 2020 at 05:56):

Lars Ericson said:

I would like to describe the type of continuous functions on ℝ≥0 → ℝ. I get this far

 #check @continuous nnreal ℝ nnreal.topological_space -- real.topological_space

but there is nnreal but not real topological space.

When a mathematician says "continuous function on reals", is there a predefined space for that in mathlib?

@Lars Ericson you have asked many questions of this type (preorder on nat, etc, etc...). It would be very very helpful if you would take the time to read #tpil, and go through the other resources out there (Natural number game, or anything else listed on https://leanprover-community.github.io/learn.html)

This question suggests a fundamental lack of understanding of how Lean works, and of the content that is currently in mathlib. If you are stuck on issues like this, then it is pointless trying to formalise anything nontrivial, and it will be a waste of time. For you and for us.

Lars Ericson (Dec 21 2020 at 12:32):

@Johan Commelin I have been putting my questions in "new members" to flag several things:

  • I am new to mathematics
  • I am new to theorem proving
  • I am new to Lean
  • I have no training
  • I am naive
  • I am curious
  • I like this topic
  • I need help
  • I have lots of naive questions

I have also, within my limited abilities, done my best to

  • Play the natural number game
  • Do every example in #tpil
  • Read the Reference Manual, such as it exists (many blank chapters)
  • Use the online search function of the #docs
  • Read the source code

As a newbie in that sense I think I have gone above and beyond.

I ask you and the others you include in "us" to please ignore my questions and get on with your life and your interests. Please do not feel compelled to respond to or address any question that I ask. You or "us" or whoever is the sponsor for Zulip have created a public forum. I am a member of the general public, asking sincere questions about a topic I am working hard on. Please respect that.

Lars Ericson (Dec 21 2020 at 13:02):

Also I find it a bit ironic that you would proudly wind up an article on Lean by saying "part of the experiment was to see whether a team holding no degree in logic or computer science could formalize something substantial", and then dump on me in a public forum for trying to do exactly that. Which lack of experience do you find more meritorious for such an endeavor: a lack of experience in mathematics, or a lack of experience in logic and computer science? Or are you just boasting that mathematicians can do anything in logic and computer science, even without any specific training?

Kevin Buzzard (Dec 21 2020 at 13:32):

One key difference is that Johan knew the maths of perfectoid spaces well before he formalised a line of Lean code. You are trying to learn the maths and the Lean at the same time, and my experience with undergraduates indicates that this is a very inefficient way of doing things. I am quite excited to be spending next term teaching a Lean course to PhD students, where I will be covering mathematics which they know back to front (we will start with undergraduate stuff) and showing them how to do it in Lean. I think that this is the way to do things. You are trying to do it the other way around and this is like swimming against the tide.

Johan Commelin (Dec 21 2020 at 13:34):

@Lars Ericson My apologies if that came across as hurtful. It's totally fine to be new to Lean/maths/etc. It's totally fine to be naive/curious or to need help. Everybody asks naive questions. That's fine too.

What I'm trying to point out is that you seem to try to do 37 things at once. As the Dutch say, you might be putting "too much hay on your pitchfork". Hence my suggestion of focusing on the basics first, before diving into the formalisation of a piece of nontrivial maths.
All I'm saying is that if you do not know how to get the topology on the real numbers in Lean, then whatever you want to do in measure theory/probability theory in Lean is going to be very frustrating.

There is nothing wrong with not knowing how to get the topology on the reals in Lean. Most of us have gone through that phase. I'm just pointing out that you should focus on that level of questions first. In particular because we've answered very similar questions before, so the hope is that you learn from the techniques that we are teaching. But if those lessons get lost because you are trying to jump ahead to more difficult problems, then that is no good.

Johan Commelin (Dec 21 2020 at 13:35):

To add to what Kevin said: one of our hopes/dreams/goals is that interactive theorem provers will also be a valuable tool for learning new maths. At the moment, this is very rarely the case.

Julian Berman (Dec 21 2020 at 16:29):

@Lars Ericson FWIW I feel I'm in a similar boat (of "I wish I knew more math" and "I wish I knew more lean" and "I still feel like doing 30 things at once, so so long as it doesn't bother others too much so be it for now") -- just mentioning that I sympathize -- and if anything, folks here seem to be extremely patient with folks like us.

Lars Ericson (Dec 21 2020 at 16:59):

@Kevin Buzzard and @Johan Commelin and @Julian Berman thank you for your kind response. I am interested in learning how to efficiently discretize weak and strong approximations to solutions of stochastic differential equations. I have a book I bought in 1994 which has been sitting on my desk unread until about 3 years ago, when I decided to try to teach myself the subject. This resulted in a number of questions that I posted on Stack Exchange:

I found that there was a giant confusing stack of types to get right in the subject. This feeling is not unique. Tai-Danae Bradley describes a similar feeling in her blog:
"This website was originally created in 2015 as tool to help me transition from undergraduate to graduate level mathematics. Quite often, I'd find that the ideas of math are hidden behind a dense fog of formalities and technical jargon. Much of my transition process was learning how to fight through this fog in order to clearly see the ideas, concepts, and notions which lie beneath. Throughout this process I learned that writing —and drawing —helps immensely."

Instead of writing and drawing, I attempted, using sympy and newly added Python type facilities, to organize my reading on the topic. These questions were related to that effort:

I learned that the notion of types in more advanced mathematics is a shifting pile of sand, because there are 5 equivalent but differently founded definitions for every advanced concept. In Lean, what at first glance look like types are really different ways of structuring a formal proof, approaching the topic at hand through one or another equivalent definition. You can see this kind of divergence very clearly in this discussion of what is a Polish space.

One of the sub-goals of my book club effort is to understand how to formally describe concepts like Ito integral, Ito differential and similarly for Stratonovich. A sub-goal of that sub-goal is to formally describe the probability space of stochastic processes. The clearest description of that which I have found is this one by Zeev Schuss. If you go searching for definitions of "Brownian motion probability space", you will find 5 or 6 more or less specific constructions, but none as clear or relatively constructive as Schuss.

For underlying measure theory, I found these notes by Gordan Zitkovic to be most clear, especially regarding the cylinder sets used in the construction described by Schuss.

I mentioned in another thread Wilfrid Kendall's implementation of stochastic differential in Reduce, Mathematica, Axiom and maybe Maple. The Axiom implementation represents a more informed attempt to "get the types right" and provide a facility for symbolic computing with SDEs. Lean is another way to help organize thinking (get the types right or get the proof structure right) which has been, in spite and because of my ignorance in measure theory and topology, a helpful learning experience. Implementing a well-typed (but otherwise not "proven") symbolic computation facility is a different task than automatically verifying a proof about an idea in the same topic. It took a little bit of doing Lean to really understand that it was not a programming language and was not intended to be used that way. It is a system for organizing and checking proofs.

Part of my learn-through-programming approach comes from an early experience. In 1976 I enrolled in the MIT High School Studies Program (HSSP), a free after-school activity. I spent my time at the LOGO Lab, a project by "educationalists" to teach children mathematics by making them write computer graphics programs. This program was called turtle geometry. However it wasn't limited to math, they imprinted students with the idea that you could only know a topic by "operationalizing" it. This obviously goes against the grain of 99% of all graduate-level mathematics education (witness the large absence of core math on EdX and Coursera).

From LOGO my friends from HSSP and I moved onto the MIT AI Lab, in particular the Macsyma group. I learned Lisp and a bit of Macsyma. Later on, I worked for a year for Carl Engelman, who wrote Mathlab 68, the precursor to Macsyma. Using Macsyma for integration and differentiation of expressions and meeting these people also gave me the idea that learning by doing might be a good strategy.

As an undergrad one of my jobs was to help Dana Scott maintain the Franz Lisp implementation of Edinburgh LCF. I took a course from him on Domain Theory in which I learned absolutely nothing but got an A for attendance, as one of the 3 people signing up for the course. This was one of my first experiences in what it means not to have real training or maybe aptitude in topology. I got no organized feedback from anybody in college regarding what I didn't know that I didn't know, in particular no kind of systematic testing to assess my skill level and gaps in the relevant underlying mathematics. Still, CMU was happy to print me a BS in Math and send me off to Courant Institute, maybe the worst place in the world to be for someone ignorant about math.

At Courant I went from one thesis topic to another. The first was, for Chee Yap, to implement real algebraic number arithmetic in CAML (I met Gerard Huet at the 1987 Texas summer school on Functional Programming and through that ended up at INRIA for a year, where I did the CAML implementation), then Maple (I spent a month in Cantabria with Tomas Recio, where Maple was popular), then Mathematica (it's kind of nice), to support "topologically correct graphical editing". The grand goal of this effort is never to confuse left with right. It's hard. I had to implement things like Grobner bases and resultant calculations, without really knowing what I was doing. For this effort I earned some mockery from Bruno Buchberger when I gave a talk on the work in Linz. Eventually I moved onto a different topic. Chee forged on and successfully carried his Exact Geometric Computation program to fruition with many other people.

My next thesis topic was for Jack Schwartz. He wanted me to implement the idea of Correct Program Technology in SETL. Basically, from preconditions on programs written as set-theoretic expressions, derive algorithms in the SETL programming language. I worked on this for a while. There was only one buggy SETL compiler and I found that every decision procedure he wanted me to implement was of use for at most one variable due to hyper-exponential complexity. I listed a few of these papers in another thread. I told Jack I wanted to move to Mathematica and focus on possible heuristics and average case complexity. He wasn't interested in anything not done in SETL (his language) and was not particularly interested in average case complexity (maybe he guessed that quantum computers would come along and rescue his line of decision procedures). So I finished that up with Bud Mishra and Jim Cox as thesis advisors and promptly dropped out of academia.

Life is short and I had it on my bucket list to finally learn this particular book by Kloeden. I've gone at it a number of different ways. Lean has been an exciting and helpful entree into the guts of the math, which gets largely elided in somewhat "popular" presentations like the Kloeden book. I'm not good at doing proofs by hand. I can sometimes follow them if they are formally presented, as in Lean. So it's been working for me, even though it doesn't look like it to you, as you view the sequence of my questions. You may doubt that I'm getting anything out of this, but I really feel like I'm getting something. Every time I say "DONE" in a thread about a particular topic, I feel like I've moved along a little bit.

Mario Carneiro (Dec 21 2020 at 19:10):

@Lars Ericson Formalizing math is not the same as doing math in a mathematics department. There are specific tricks and techniques you have to learn, and just because it works well on paper doesn't mean it will translate well to the proof assistant. And this isn't even because the paper version was imprecise or suboptimal in some sense (although it may be); it is because mathematician authors are communicating to a different audience: human readers. Some things that are easy for a human are not easy for lean, and vice versa.

My advice is to put your grand formalization ideas on hold, and work through some basic material, and once you have done that, try to pick an area of mathlib that needs some filling out and contribute something. The first few things you contribute should be proofs; writing definitions is hard and unless you know what you are doing you will end up with definitions that are not suitable for mathlib. So far I have seen exceedingly little in the way of proofs in your posts; I've already stressed the importance of this for learning.

Once you have that out of the way, you will be in a position to ask much higher quality questions, and then perhaps we can get to laying the foundations for things like the Ito integral, stochastic differential equations and so on. Understand that most of this is miles out of the way at the moment though, and you first have to train before you can run a marathon.

Lars Ericson (Dec 21 2020 at 19:38):

PS On my StackExchange quest described above I left out two links on the concept of randomness of a sequence. To me a probability space is a way of counting a population. There is nothing random about it. We talk about random variables but they are deterministic functions of the sample space. For a sequence of choices to be random is a different concept. I had a few posts about that in StackExchange:

One possible criterion is compressibility. The StackExchange response was No to that. However, NIST likes compressibility as a test.

Mario Carneiro (Dec 21 2020 at 19:44):

See that sounds like a mathematics question to me, which should be figured out on math.SE or similar. Come here once it's "dead math" and you just want to formalize it

Mario Carneiro (Dec 21 2020 at 19:46):

If the mathematics itself is muddled, formalizing becomes much harder, and while it can be useful for crystallizing your thoughts on a problem, you probably need some proficiency with the proof assistant before that will be more than just pain on both fronts

Lars Ericson (Dec 22 2020 at 02:45):

Thank you @Mario Carneiro I will follow your advice to practice proofs. I will try only to ask proof questions while doing this and only if I really really have to. Here is my (grand) practice plan, in order, to proof check in Lean:

Lars Ericson (Dec 23 2020 at 05:48):

So far so good

import algebra.ring.basic
import tactic

-- Garrett Birkhoff and Saunders Mac Lane: A survey of modern algebra, 4th ed

universe u

theorem ch1_rule_1 (α : Type u) [comm_ring α ] (a b c : α) : (a+b)*c=a*c+b*c := add_mul a b c

theorem ch1_rule_2_plus (α : Type u) [comm_ring α ] (a : α) : (0 + a = a) := zero_add a

theorem ch1_rule_2_times (α : Type u) [comm_ring α ] (a : α) : (1 * a = a) := one_mul a

theorem ch1_rule_3 (α : Type u) [comm_ring α ] (z : α) : (∀ (a : α ), a + z = a) → z = 0 :=
begin
  intro h1,
  have h2 := h1 0,
  exact add_left_eq_self.mp (congr_arg (has_add.add z) (congr_arg (has_add.add z) (h1 z))),
end

theorem ch1_rule_4 (α : Type u) [comm_ring α ] (a b c : α) : a + b=a + c → b = c := (add_right_inj a).mp

theorem ch1_rule_5 (α : Type u) [comm_ring α ] (a x₁ x₂ : α) : (a + x₁ = 0 ∧ a + x₂ = 0) → x₁ = x₂ :=
begin
  intro h1,
  have h2 := h1.1,
  have h3 := h1.2,
  exact neg_unique h2 h3,
end

theorem ch1_rule_6 (α : Type u) [comm_ring α ] (a b x₁ x₂ : α) : (a + x₁ = b ∧ a + x₂ = b) → x₁ = x₂ :=
begin
  intro h1,
  have h2 := h1.1,
  have h3 := h1.2,
  rw ← h2 at h3,
  exact ch1_rule_4 α a x₁ x₂ (eq.symm h3),
end

theorem ch1_rule_7a (α : Type u) [comm_ring α ] (a : α) : a * 0 = 0 := mul_zero a
theorem ch1_rule_7b (α : Type u) [comm_ring α ] (a : α) : 0 * a = 0 := zero_mul a

Mario Carneiro (Dec 23 2020 at 06:09):

Since all of these are just copies of theorems from mathlib, you should take note of the way that your theorem differs, because those indicate probably-meaningful style differences. For example, you should use {} brackets around alpha (and the variables a b c when they can be inferred from hypotheses), you should not have a space before a close brace, and you should use hyp1 -> hyp2 -> conclusion instead of hyp1 /\ hyp2 -> conclusion

Lars Ericson (Dec 23 2020 at 15:26):

Revised:

import algebra.ring.basic
import tactic

-- Garrett Birkhoff and Saunders Mac Lane: A survey of modern algebra, 4th ed

universe u

variable {α : Type u}

theorem ch1_rule_1 [comm_ring α] {a b c : α} : (a+b)*c=a*c+b*c := add_mul a b c
theorem ch1_rule_2_plus [comm_ring α] {a : α} : (0 + a = a) := zero_add a
theorem ch1_rule_2_times [comm_ring α] {a : α} : (1 * a = a) := one_mul a

theorem ch1_rule_3 [comm_ring α] {z : α} : (∀ {a : α}, a + z = a) → z = 0 :=
begin
  intro h1,
  have h2 := h1 0,
  exact add_left_eq_self.mp (congr_arg (has_add.add z) (congr_arg (has_add.add z) (h1 z))),
end

theorem ch1_rule_4 [comm_ring α] {a b c : α} : a + b = a + c → b = c := (add_right_inj a).mp

theorem ch1_rule_5 [comm_ring α] {a x₁ x₂ : α} : a + x₁ = 0 ∧ a + x₂ = 0 → x₁ = x₂ :=
begin
  intro h1,
  have h2 := h1.1,
  have h3 := h1.2,
  exact neg_unique h2 h3,
end

theorem ch1_rule_6 [comm_ring α] {a b x₁ x₂ : α} : a + x₁ = b ∧ a + x₂ = b → x₁ = x₂ :=
begin
  intro h1,
  have h2 := h1.1,
  have h3 := h1.2,
  rw ← h2 at h3,
  exact ch1_rule_4 α a x₁ x₂ (eq.symm h3),
end

theorem ch1_rule_7a [comm_ring α] {a : α} : a * 0 = 0 := mul_zero a
theorem ch1_rule_7b [comm_ring α] {a : α} : 0 * a = 0 := zero_mul a

theorem ch1_rule_8 [comm_ring α] {u : α} : (∀ {a : α}, a*u = a) → u=1 :=
begin
  intro h1,
  suggest,
end

Yakov Pechersky (Dec 23 2020 at 15:53):

You can even change the variable line to:

variables {α : Type u} [comm_ring α] {a b c :  α}

to reduce repetition down the file (and use b, c for non-a variables).

Lars Ericson (Dec 23 2020 at 18:36):

Thanks @Yakov Pechersky. I wanted to put the [comm_ring α] in the variable line but I was having trouble getting the syntax right. Also I overused {a : α} in the foralls above in the proof statements. Using {} on a forall makes it harder to instantiate the forall with a specific case. Here is the revised proof, plus one more rule, and I also added a namespace:

import algebra.ring.basic
import tactic

-- Garrett Birkhoff and Saunders Mac Lane: A survey of modern algebra, 4th ed

namespace ch1

universe u

variables {α : Type u} [comm_ring α] {a b c z x₁ x₂ :  α}

theorem rule_1 : (a+b)*c=a*c+b*c := add_mul a b c
theorem rule_2_plus : (0 + a = a) := zero_add a
theorem rule_2_times : (1 * a = a) := one_mul a

theorem rule_3 : (∀ (a : α), a + z = a) → z = 0 :=
begin
  intro h1,
  have h2 := h1 0,
  exact add_left_eq_self.mp (congr_arg (has_add.add z) (congr_arg (has_add.add z) (h1 z))),
end

theorem rule_4 : a + b = a + c → b = c := (add_right_inj a).mp

theorem rule_5 : a + x₁ = 0 → a + x₂ = 0 → x₁ = x₂ :=
begin
  intro h1,
  intro h2,
  exact neg_unique h1 h2,
end

theorem rule_6 : a + x₁ = b → a + x₂ = b → x₁ = x₂ :=
begin
  intro h1,
  intro h2,
  rw ← h1 at h2,
  exact rule_4 (eq.symm h2),
end

theorem rule_7a : a * 0 = 0 := mul_zero a
theorem rule_7b  : 0 * a = 0 := zero_mul a

theorem rule_8 : (∀ (a : α), a*z = a) → z = 1 :=
begin
  intro h1,
  exact (eq_one_iff_eq_one_of_mul_eq_one (h1 1)).mp rfl,
end

end ch1

Last updated: Dec 20 2023 at 11:08 UTC