Adaptation notes are comments that indicate that a piece of code has been changed to accommodate a change in Lean core. They typically indicate that further action or maintenance will be required in the future.
1.15. All tactics
This list of tactics is automatically generated and contains all tactics known in Mathlib, regardless of which package (Lean/Std/Batteries/Mathlib/etc.) defines them.
If you see two tactics which are almost identical, consider adding tactic_alt TAC to the variant tactic to tell Lean to group them together.
#check
The #check t tactic elaborates the term t and then pretty prints it with its type as e : ty.
If t is an identifier, then it pretty prints a type declaration form
for the global constant t instead.
Use #check (t) to pretty print it as an elaborated expression.
Like the #check command, the #check tactic allows stuck typeclass instance problems.
These become metavariables in the output.
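For example, a minimal sketch of the tactic form (using the core lemma Nat.add_comm):
example : True := by
  #check Nat.add_comm    -- prints the declaration form of the global constant
  #check (Nat.add_comm)  -- prints the elaborated expression with its type
  trivial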
#count_heartbeats!
#count_heartbeats! in tac runs a tactic 10 times, counting the heartbeats used, and logs the range
and standard deviation. The tactic #count_heartbeats! n in tac runs it n times instead.
#find
#leansearch
Search LeanSearch from within Lean.
Queries should be a string that ends with a . or ?. This works as a command, as a term
and as a tactic as in the following examples. In tactic mode, only valid tactics are displayed.
#leansearch "If a natural number n is less than m, then the successor of n is less than the successor of m."
example := #leansearch "If a natural number n is less than m, then the successor of n is less than the successor of m."
example : 3 ≤ 5 := by
  #leansearch "If a natural number n is less than m, then the successor of n is less than the successor of m."
  sorry
You can modify the LeanSearch URL by setting the LEANSEARCHCLIENT_LEANSEARCH_API_URL environment variable.
#loogle
Search Loogle from within Lean. This can be used as a command, term or tactic as in the following examples. In the case of a tactic, only valid tactics are displayed.
#loogle List ?a → ?a
example := #loogle List ?a → ?a
example : 3 ≤ 5 := by
  #loogle List ?a → ?a
  sorry
Loogle Usage
Loogle finds definitions and lemmas in various ways:
By constant: 🔍 Real.sin finds all lemmas whose statement somehow mentions the sine function.
By lemma name substring: 🔍 "differ" finds all lemmas that have "differ" somewhere in their lemma name.
By subexpression: 🔍 _ * (_ ^ _) finds all lemmas whose statements somewhere include a product where the second argument is raised to some power.
The pattern can also be non-linear, as in 🔍 Real.sqrt ?a * Real.sqrt ?a
If the pattern has parameters, they are matched in any order. Both of these will find List.map: 🔍 (?a -> ?b) -> List ?a -> List ?b 🔍 List ?a -> (?a -> ?b) -> List ?b
By main conclusion: 🔍 |- tsum _ = _ * tsum _ finds all lemmas where the conclusion (the subexpression to the right of all → and ∀) has the given shape.
As before, if the pattern has parameters, they are matched against the hypotheses of the lemma in any order; for example, 🔍 |- _ < _ → tsum _ < tsum _ will find tsum_lt_tsum even though the hypothesis f i < g i is not the last.
If you pass more than one such search filter, separated by commas, Loogle will return lemmas which match all of them. The search 🔍 Real.sin, "two", tsum, _ * _, _ ^ _, |- _ < _ → _ would find all lemmas which mention the constants Real.sin and tsum, have "two" as a substring of the lemma name, include a product and a power somewhere in the type, and have a hypothesis of the form _ < _ (if there were any such lemmas). Metavariables (?a) are assigned independently in each filter.
You can modify the Loogle server URL by setting the LEANSEARCHCLIENT_LOOGLE_API_URL environment variable.
#loogle
#moogle
Search Moogle from within Lean.
Queries should be a string that ends with a . or ?. This works as a command, as a term
and as a tactic as in the following examples. In tactic mode, only valid tactics are displayed.
#moogle "If a natural number n is less than m, then the successor of n is less than the successor of m."
example := #moogle "If a natural number n is less than m, then the successor of n is less than the successor of m."
example : 3 ≤ 5 := by
  #moogle "If a natural number n is less than m, then the successor of n is less than the successor of m."
  sorry
You can modify the Moogle URL by setting the LEANSEARCHCLIENT_MOOGLE_API_URL environment variable.
#search
Search either Moogle or LeanSearch from within Lean, depending on the option leansearchclient.backend.
Queries should be a string that ends with a . or ?. This works as a command, as a term
and as a tactic as in the following examples. In tactic mode, only valid tactics are displayed.
#search "If a natural number n is less than m, then the successor of n is less than the successor of m." example := #search "If a natural number n is less than m, then the successor of n is less than the successor of m." example : 3 ≤ 5 := by #search "If a natural number n is less than m, then the successor of n is less than the successor of m." sorry In tactic mode, if the query string is not supplied, then [LeanStateSearch](https://premise-search.com) is queried based on the goal state.
#statesearch
Search LeanStateSearch from within Lean.
Your current main goal is sent as query. The revision to search can be set
using the statesearch.revision option. The number of results can be set
using the statesearch.queries option.
Hint: If you want to modify the query, you need to use the web interface.
set_option statesearch.queries 1
set_option statesearch.revision "v4.16.0"

example : 0 ≤ 1 := by
  #statesearch
  sorry
You can modify the LeanStateSearch URL by setting the LEANSEARCHCLIENT_LEANSTATESEARCH_API_URL environment variable.
(
(tacs) executes a list of tactics in sequence, without requiring that
the goal be closed at the end like · tacs. Like by itself, the tactics
can be either separated by newlines or ;.
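For example, a minimal sketch: the parenthesized block below leaves its second goal open, which would be an error with · tacs:
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  (constructor; exact hp)  -- leaves the goal for q open
  exact hq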
<;>
t <;> [t1; t2; ...; tn] focuses on the first goal and applies t, which should result in n
subgoals. It then applies each ti to the corresponding goal and collects the resulting
subgoals.
<;>
tac <;> tac' runs tac on the main goal and tac' on each produced goal,
concatenating all goals produced by tac'.
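For example (a minimal sketch):
example (p : Prop) (hp : p) : p ∧ p := by
  constructor <;> exact hp  -- exact hp runs on both goals produced by constructor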
Lean.Parser.Tactic.introMatch
The tactic
intro
| pat1 => tac1
| pat2 => tac2
is the same as:
intro x
match x with
| pat1 => tac1
| pat2 => tac2
That is, intro can be followed by match arms and it introduces the values while
doing a pattern match. This is equivalent to fun with match arms in term mode.
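For example, a minimal sketch that introduces and destructs a conjunction in one step:
example (p q : Prop) : p ∧ q → q := by
  intro
  | ⟨_, hq⟩ => exact hq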
Lean.Parser.Tactic.match
match performs case analysis on one or more expressions.
See Induction and Recursion.
The syntax for the match tactic is the same as term-mode match, except that
the match arms are tactics instead of expressions.
example (n : Nat) : n = n := by
  match n with
  | 0 => rfl
  | i + 1 => rfl
Lean.Parser.Tactic.open
open Foo in tacs (the tactic) acts like open Foo at command level,
but it opens a namespace only within the tactics tacs.
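For example, a minimal sketch (so that succ_ne_zero can be written without its Nat. prefix):
example (n : ℕ) : n + 1 ≠ 0 := by
  open Nat in exact succ_ne_zero n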
Lean.Parser.Tactic.set_option
set_option opt val in tacs (the tactic) acts like set_option opt val at the command level,
but it sets the option only within the tactics tacs.
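For example, a minimal sketch that raises the maxHeartbeats option for a single tactic only:
example (a b : ℕ) : a + b = b + a := by
  set_option maxHeartbeats 400000 in omega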
Lean.Parser.Tactic.tacDepIfThenElse
In tactic mode, if h : t then tac1 else tac2 can be used as alternative syntax for:
by_cases h : t
· tac1
· tac2
It performs case distinction on h : t or h : ¬t and tac1 and tac2 are the subproofs.
You can use ?_ or _ for either subproof to delay the goal to after the tactic, but
if a tactic sequence is provided for tac1 or tac2 then it will require the goal to be closed
by the end of the block.
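For example (a minimal sketch, using the core lemma Nat.pos_of_ne_zero):
example (n : ℕ) : n = 0 ∨ 0 < n := by
  if h : n = 0 then
    exact Or.inl h
  else
    exact Or.inr (Nat.pos_of_ne_zero h)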
Lean.Parser.Tactic.tacIfThenElse
In tactic mode, if t then tac1 else tac2 is alternative syntax for:
by_cases t
· tac1
· tac2
It performs case distinction on h† : t or h† : ¬t, where h† is an anonymous
hypothesis, and tac1 and tac2 are the subproofs. (It doesn't actually use
nondependent if, since this wouldn't add anything to the context and hence would be
useless for proving theorems. To actually insert an ite application use
refine if t then ?_ else ?_.)
_
_ in tactic position acts like the done tactic: it fails and gives the list
of goals if there are any. It is useful as a placeholder after starting a tactic block
such as by _ to make it syntactically correct and show the current goal.
abel
Tactic for evaluating equations in the language of additive, commutative monoids and groups.
abel and its variants work as both tactics and conv tactics.
- abel1 fails if the target is not an equality that is provable by the axioms of commutative monoids/groups.
- abel_nf rewrites all group expressions into a normal form.
  - In tactic mode, abel_nf at h can be used to rewrite in a hypothesis.
  - abel_nf (config := cfg) allows for additional configuration:
    - red: the reducibility setting (overridden by !)
    - zetaDelta: if true, local let variables can be unfolded (overridden by !)
    - recursive: if true, abel_nf will also recurse into atoms
- abel!, abel1!, abel_nf! will use a more aggressive reducibility setting to identify atoms.
For example:
example [AddCommMonoid α] (a b : α) : a + (b + a) = a + a + b := by abel
example [AddCommGroup α] (a : α) : (3 : ℤ) • a = a + (2 : ℤ) • a := by abel
Future work
- In mathlib 3, abel accepted additional optional arguments:
  syntax "abel" (&" raw" <|> &" term")? (location)? : tactic
  It is undecided whether these features should be restored eventually.
absurd
Given a proof h of p, absurd h changes the goal to ⊢ ¬ p.
If p is a negation ¬q then the goal is changed to ⊢ q instead.
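For example (a minimal sketch):
example (p q : Prop) (hp : p) (hnp : ¬ p) : q := by
  absurd hp
  -- ⊢ ¬p
  exact hnp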
ac_change
ac_change g using n is convert_to g using n followed by ac_rfl. It is useful for
rearranging/reassociating e.g. sums:
example (a b c d e f g N : ℕ) : (a + b) + (c + d) + (e + f) + g ≤ N := by
  ac_change a + d + e + f + c + g + b ≤ _
  -- ⊢ a + d + e + f + c + g + b ≤ N
ac_nf
ac_nf normalizes equalities up to application of an associative and commutative operator.
- ac_nf normalizes all hypotheses and the goal target of the goal.
- ac_nf at l normalizes at location(s) l, where l is either * or a list of hypotheses in the local context. In the latter case, a turnstile ⊢ or |- can also be used, to signify the target of the goal.
instance : Associative (α := Nat) (.+.) := ⟨Nat.add_assoc⟩
instance : Commutative (α := Nat) (.+.) := ⟨Nat.add_comm⟩

example (a b c d : Nat) : a + b + c + d = d + (b + c) + a := by
  ac_nf
  -- goal: a + (b + (c + d)) = a + (b + (c + d))
ac_rfl
ac_rfl proves equalities up to application of an associative and commutative operator.
instance : Associative (α := Nat) (.+.) := ⟨Nat.add_assoc⟩
instance : Commutative (α := Nat) (.+.) := ⟨Nat.add_comm⟩

example (a b c d : Nat) : a + b + c + d = d + (b + c) + a := by
  ac_rfl
aesop
aesop <clause>* tries to solve the current goal by applying a set of rules
registered with the @[aesop] attribute. See its
README for a tutorial and a
reference.
The variant aesop? prints the proof it found as a Try this suggestion.
Clauses can be used to customise the behaviour of an Aesop call. Available clauses are:
- (add <phase> <priority> <builder> <rule>) adds a rule. <phase> is unsafe, safe or norm. <priority> is a percentage for unsafe rules and an integer for safe and norm rules. <rule> is the name of a declaration or local hypothesis. <builder> is the rule builder used to turn <rule> into an Aesop rule. Example: (add unsafe 50% apply Or.inl).
- (erase <rule>) disables a globally registered Aesop rule. Example: (erase Aesop.BuiltinRules.assumption).
- (rule_sets := [<ruleset>,*]) enables or disables named sets of rules for this Aesop call. Example: (rule_sets := [-builtin, MyRuleSet]).
- (config { <opt> := <value> }) adjusts Aesop's search options. See Aesop.Options.
- (simp_config { <opt> := <value> }) adjusts options for Aesop's built-in simp rule. The given options are directly passed to simp. For example, (simp_config := { zeta := false }) makes Aesop use simp (config := { zeta := false }).
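For example, a minimal sketch of a goal Aesop closes with its built-in rules:
example (p q : Prop) (hp : p) : p ∨ q := by
  aesop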
aesop?
aesop <clause>* tries to solve the current goal by applying a set of rules
registered with the @[aesop] attribute. See its
README for a tutorial and a
reference.
The variant aesop? prints the proof it found as a Try this suggestion.
Clauses can be used to customise the behaviour of an Aesop call. Available clauses are:
- (add <phase> <priority> <builder> <rule>) adds a rule. <phase> is unsafe, safe or norm. <priority> is a percentage for unsafe rules and an integer for safe and norm rules. <rule> is the name of a declaration or local hypothesis. <builder> is the rule builder used to turn <rule> into an Aesop rule. Example: (add unsafe 50% apply Or.inl).
- (erase <rule>) disables a globally registered Aesop rule. Example: (erase Aesop.BuiltinRules.assumption).
- (rule_sets := [<ruleset>,*]) enables or disables named sets of rules for this Aesop call. Example: (rule_sets := [-builtin, MyRuleSet]).
- (config { <opt> := <value> }) adjusts Aesop's search options. See Aesop.Options.
- (simp_config { <opt> := <value> }) adjusts options for Aesop's built-in simp rule. The given options are directly passed to simp. For example, (simp_config := { zeta := false }) makes Aesop use simp (config := { zeta := false }).
aesop_cat
A thin wrapper for aesop which adds the CategoryTheory rule set and
allows aesop to look through semireducible definitions when calling intros.
This tactic fails when it is unable to solve the goal, making it suitable for
use in auto-params.
aesop_cat_nonterminal
A variant of aesop_cat which does not fail when it is unable to solve the
goal. Use this only for exploration! Nonterminal aesop is even worse than
nonterminal simp.
aesop_graph
A variant of the aesop tactic for use in the graph library. Changes relative
to standard aesop:
- We use the SimpleGraph rule set in addition to the default rule sets.
- We instruct Aesop's intro rule to unfold with default transparency.
- We instruct Aesop to fail if it can't fully solve the goal. This allows us to use aesop_graph for auto-params.
aesop_graph_nonterminal
A variant of aesop_graph which does not fail if it is unable to solve the goal.
Use this only for exploration! Nonterminal Aesop is even worse than nonterminal simp.
aesop_mat
The aesop_mat tactic attempts to prove a set is contained in the ground set of a matroid.
It uses a [Matroid] ruleset, and is allowed to fail.
aesop_unfold
aesop_unfold
algebraize
Tactic that, given RingHoms, adds the corresponding Algebra and (if possible)
IsScalarTower instances, as well as Algebra corresponding to RingHom properties available
as hypotheses.
Example: given f : A →+* B and g : B →+* C, and hf : f.FiniteType, algebraize [f, g] will
add the instances Algebra A B, Algebra B C, and Algebra.FiniteType A B.
See the algebraize tag for instructions on what properties can be added.
The tactic also comes with a configuration option properties. If set to true (default), the
tactic searches through the local context for RingHom properties that can be converted to
Algebra properties. The macro algebraize_only calls
algebraize -properties,
so in other words it only adds Algebra and IsScalarTower instances.
algebraize_only
Version of algebraize, which only adds Algebra instances and IsScalarTower instances,
but does not try to add any instances about any properties tagged with
@[algebraize], like for example Finite or IsIntegral.
all_goals
all_goals tac runs tac on each goal, concatenating the resulting goals.
If the tactic fails on any goal, the entire all_goals tactic fails.
See also any_goals tac.
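For example (a minimal sketch):
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  constructor
  all_goals assumption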
any_goals
any_goals tac applies the tactic tac to every goal,
concatenating the resulting goals for successful tactic applications.
If the tactic fails on all of the goals, the entire any_goals tactic fails.
This tactic is like all_goals try tac except that it fails if none of the applications of tac succeeds.
apply
apply t at i will use forward reasoning with t at the hypothesis i.
Explicitly, if t : α₁ → ⋯ → αᵢ → ⋯ → αₙ and i has type αᵢ, then this tactic will add
metavariables/goals for any terms of αⱼ for j = 1, …, i-1,
then replace the type of i with αᵢ₊₁ → ⋯ → αₙ by applying those metavariables and the
original i.
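For example (a minimal sketch):
example (p q : Prop) (f : p → q) (h : p) : q := by
  apply f at h
  -- h : q
  exact h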
apply
apply e tries to match the current goal against the conclusion of e's type.
If it succeeds, then the tactic returns as many subgoals as the number of premises that
have not been fixed by type inference or type class resolution.
Non-dependent premises are added before dependent ones.
The apply tactic uses higher-order pattern matching, type class resolution,
and first-order unification with dependent types.
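For example (a minimal sketch):
example (p q : Prop) (h : p → q) (hp : p) : q := by
  apply h
  exact hp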
apply
apply?
Searches environment for definitions or theorems that can refine the goal using apply
with conditions resolved when possible with solve_by_elim.
The optional using clause provides identifiers in the local context that must be
used when closing the goal.
apply_assumption
apply_assumption looks for an assumption of the form ... → ∀ _, ... → head
where head matches the current goal.
You can specify additional rules to apply using apply_assumption [...].
By default apply_assumption will also try rfl, trivial, congrFun, and congrArg.
If you don't want these, or don't want to use all hypotheses, use apply_assumption only [...].
You can use apply_assumption [-h] to omit a local hypothesis.
You can use apply_assumption using [a₁, ...] to use all lemmas which have been labelled
with the attributes aᵢ (these attributes must be created using register_label_attr).
apply_assumption will use consequences of local hypotheses obtained via symm.
If apply_assumption fails, it will call exfalso and try again.
Thus if there is an assumption of the form P → ¬ Q, the new tactic state
will have two goals, P and Q.
You can pass a further configuration via the syntax apply_rules (config := {...}) lemmas.
The options supported are the same as for solve_by_elim (and include all the options for apply).
apply_fun
Apply a function to an equality or inequality in either a local hypothesis or the goal.
- If we have h : a = b, then apply_fun f at h will replace this with h : f a = f b.
- If we have h : a ≤ b, then apply_fun f at h will replace this with h : f a ≤ f b, and create a subsidiary goal Monotone f. apply_fun will automatically attempt to discharge this subsidiary goal using mono, or an explicit solution can be provided with apply_fun f at h using P, where P : Monotone f.
- If we have h : a < b, then apply_fun f at h will replace this with h : f a < f b, and create a subsidiary goal StrictMono f and behaves as in the previous case.
- If we have h : a ≠ b, then apply_fun f at h will replace this with h : f a ≠ f b, and create a subsidiary goal Injective f and behaves as in the previous two cases.
- If the goal is a ≠ b, apply_fun f will replace this with f a ≠ f b.
- If the goal is a = b, apply_fun f will replace this with f a = f b, and create a subsidiary goal Injective f. apply_fun will automatically attempt to discharge this subsidiary goal using local hypotheses, or if f is actually an Equiv, or an explicit solution can be provided with apply_fun f using P, where P : Injective f.
- If the goal is a ≤ b (or similarly for a < b), and f is actually an OrderIso, apply_fun f will replace the goal with f a ≤ f b. If f is anything else (e.g. just a function, or an Equiv), apply_fun will fail.
Typical usage is:
open Function
example (X Y Z : Type) (f : X → Y) (g : Y → Z) (H : Injective <| g ∘ f) :
    Injective f := by
  intro x x' h
  apply_fun g at h
  exact H h
The function f is handled similarly to how it would be handled by refine in that f can contain
placeholders. Named placeholders (like ?a or ?_) will produce new goals.
apply_gmonoid_gnpowRec_succ_tac
A tactic for use as an optional value for GMonoid.gnpow_succ'.
apply_gmonoid_gnpowRec_zero_tac
A tactic for use as an optional value for GMonoid.gnpow_zero'.
apply_mod_cast
Normalize casts in the goal and the given expression, then apply the expression to the goal.
apply_rules
apply_rules [l₁, l₂, ...] tries to solve the main goal by iteratively
applying the list of lemmas [l₁, l₂, ...] or by applying a local hypothesis.
If apply generates new goals, apply_rules iteratively tries to solve those goals.
You can use apply_rules [-h] to omit a local hypothesis.
apply_rules will also use rfl, trivial, congrFun and congrArg.
These can be disabled, as can local hypotheses, by using apply_rules only [...].
You can use apply_rules using [a₁, ...] to use all lemmas which have been labelled
with the attributes aᵢ (these attributes must be created using register_label_attr).
You can pass a further configuration via the syntax apply_rules (config := {...}).
The options supported are the same as for solve_by_elim (and include all the options for apply).
apply_rules will try calling symm on hypotheses and exfalso on the goal as needed.
This can be disabled with apply_rules (config := {symm := false, exfalso := false}).
You can bound the iteration depth using the syntax apply_rules (config := {maxDepth := n}).
Unlike solve_by_elim, apply_rules does not perform backtracking, and greedily applies
a lemma from the list until it gets stuck.
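For example, a minimal sketch where the listed lemma is a local hypothesis:
example (p q r : Prop) (hp : p) (hq : q) (h : p → q → r) : r := by
  apply_rules [h]  -- applies h, then closes the subgoals p and q with hp and hq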
arith_mult
arith_mult solves goals of the form IsMultiplicative f for f : ArithmeticFunction R
by applying lemmas tagged with the user attribute arith_mult.
arith_mult?
arith_mult solves goals of the form IsMultiplicative f for f : ArithmeticFunction R
by applying lemmas tagged with the user attribute arith_mult, and prints out the generated
proof term.
array_get_dec
This tactic, added to the decreasing_trivial toolbox, proves that
sizeOf arr[i] < sizeOf arr, which is useful for well founded recursions
over a nested inductive like inductive T | mk : Array T → T.
array_mem_dec
This tactic, added to the decreasing_trivial toolbox, proves that sizeOf a < sizeOf arr
provided that a ∈ arr which is useful for well founded recursions over a nested inductive like
inductive T | mk : Array T → T.
as_aux_lemma
as_aux_lemma => tac does the same as tac, except that it wraps the resulting expression
into an auxiliary lemma. In some cases, this significantly reduces the size of expressions
because the proof term is not duplicated.
assumption
assumption tries to solve the main goal using a hypothesis of compatible type, or else fails.
Note also the ‹t› term notation, which is a shorthand for show t by assumption.
assumption_mod_cast
assumption_mod_cast is a variant of assumption that solves the goal
using a hypothesis. Unlike assumption, it first pre-processes the goal and
each hypothesis to move casts as far outwards as possible, so it can be used
in more situations.
Concretely, it runs norm_cast on the goal. For each local hypothesis h, it also
normalizes h with norm_cast and tries to use that to close the goal.
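For example (a minimal sketch, assuming Mathlib's cast lemmas):
example (a b : ℕ) (h : a ≤ b) : (a : ℤ) ≤ (b : ℤ) := by
  assumption_mod_cast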
bddDefault
Sets are automatically bounded or cobounded in complete lattices. To use the same statements
in complete and conditionally complete lattices but let automation fill automatically the
boundedness proofs in complete lattices, we use the tactic bddDefault in the statements,
in the form (hA : BddAbove A := by bddDefault).
beta_reduce
beta_reduce at loc completely beta reduces the given location.
This also exists as a conv-mode tactic.
This means that whenever there is an applied lambda expression such as
(fun x => f x) y then the argument is substituted into the lambda expression
yielding an expression such as f y.
bicategory
Use the coherence theorem for bicategories to solve equations in a bicategory, where the two sides only differ by replacing strings of bicategory structural morphisms (that is, associators, unitors, and identities) with different strings of structural morphisms with the same source and target.
That is, bicategory can handle goals of the form
a ≫ f ≫ b ≫ g ≫ c = a' ≫ f ≫ b' ≫ g ≫ c'
where a = a', b = b', and c = c' can be proved using bicategory_coherence.
bicategory_coherence
Coherence tactic for bicategories.
Use pure_coherence instead, which is a frontend to this one.
bicategory_coherence
Close the goal of the form η.hom = θ.hom, where η and θ are 2-isomorphisms made up only of
associators, unitors, and identities.
example {B : Type} [Bicategory B] {a : B} :
(λ_ (𝟙 a)).hom = (ρ_ (𝟙 a)).hom := by
bicategory_coherence
bitwise_assoc_tac
Proving associativity of bitwise operations in general essentially boils down to a huge case distinction, so it is shorter to use this tactic instead of proving it in the general case.
borelize
The behaviour of borelize α depends on the existing assumptions on α.
- if α is a topological space with instances [MeasurableSpace α] [BorelSpace α], then borelize α replaces the former instance by borel α;
- otherwise, borelize α adds instances borel α : MeasurableSpace α and ⟨rfl⟩ : BorelSpace α.
Finally, borelize α β γ runs borelize α; borelize β; borelize γ.
bound
bound tactic for proving inequalities via straightforward recursion on expression structure.
An example use case is
lemma le_sqr_add (c z : ℝ) (cz : ‖c‖ ≤ ‖z‖) (z3 : 3 ≤ ‖z‖) :
    2 * ‖z‖ ≤ ‖z^2 + c‖ := by
  calc ‖z^2 + c‖
    _ ≥ ‖z^2‖ - ‖c‖ := by bound
    _ ≥ ‖z^2‖ - ‖z‖ := by bound
    _ ≥ (‖z‖ - 1) * ‖z‖ := by
      -- a short algebraic step in the original (not recovered from this rendering)
      sorry
    _ ≥ 2 * ‖z‖ := by bound
bound is built on top of aesop, and uses
- Apply lemmas registered via the @[bound] attribute
- Forward lemmas registered via the @[bound_forward] attribute
- Local hypotheses from the context
- Optionally: additional hypotheses provided as bound [h₀, h₁] or similar. These are added to the context as if by have := hᵢ.
The functionality of bound overlaps with positivity and gcongr, but can jump back and forth
between 0 ≤ x and x ≤ y-type inequalities. For example, bound proves
0 ≤ c → b ≤ a → 0 ≤ a * c - b * c
by turning the goal into b * c ≤ a * c, then using mul_le_mul_of_nonneg_right. bound also
contains lemmas for goals of the form 1 ≤ x, 1 < x, x ≤ 1, x < 1. Conversely, gcongr can prove
inequalities for more types of relations, supports all positivity functionality, and is likely
faster since it is more specialized (not built atop aesop).
bv_check
This tactic works just like bv_decide but skips calling a SAT solver by using a proof that is
already stored on disk. It is called with the name of an LRAT file in the same directory as the
current Lean file:
bv_check "proof.lrat"bv_decide
Close fixed-width BitVec and Bool goals by obtaining a proof from an external SAT solver and
verifying it inside Lean. The solvable goals are currently limited to
- the Lean equivalent of QF_BV
- automatically splitting up structures that contain information about BitVec or Bool
example : ∀ (a b : BitVec 64), (a &&& b) + (a ^^^ b) = a ||| b := by
  intros
  bv_decide
If bv_decide encounters an unknown definition it will be treated like an unconstrained BitVec
variable. Sometimes this enables solving goals despite not understanding the definition because
the precise properties of the definition do not matter in the specific proof.
If bv_decide fails to close a goal it provides a counter-example, containing assignments for all
terms that were considered as variables.
In order to avoid calling a SAT solver every time, the proof can be cached with bv_decide?.
If solving your problem relies inherently on using associativity or commutativity, consider enabling
the bv.ac_nf option.
Note: bv_decide uses ofReduceBool and thus trusts the correctness of the code generator.
Note: include import Std.Tactic.BVDecide
bv_decide
Close fixed-width BitVec and Bool goals by obtaining a proof from an external SAT solver and
verifying it inside Lean. The solvable goals are currently limited to
- the Lean equivalent of QF_BV
- automatically splitting up structures that contain information about BitVec or Bool
example : ∀ (a b : BitVec 64), (a &&& b) + (a ^^^ b) = a ||| b := by
  intros
  bv_decide
If bv_decide encounters an unknown definition it will be treated like an unconstrained BitVec
variable. Sometimes this enables solving goals despite not understanding the definition because
the precise properties of the definition do not matter in the specific proof.
If bv_decide fails to close a goal it provides a counter-example, containing assignments for all
terms that were considered as variables.
In order to avoid calling a SAT solver every time, the proof can be cached with bv_decide?.
If solving your problem relies inherently on using associativity or commutativity, consider enabling
the bv.ac_nf option.
Note: bv_decide uses ofReduceBool and thus trusts the correctness of the code generator.
Note: include import Std.Tactic.BVDecide
bv_decide?
Suggest a proof script for a bv_decide tactic call. Useful for caching LRAT proofs.
Note: include import Std.Tactic.BVDecide
bv_decide?
Suggest a proof script for a bv_decide tactic call. Useful for caching LRAT proofs.
Note: include import Std.Tactic.BVDecide
bv_normalize
Run the normalization procedure of bv_decide only. Sometimes this is enough to solve basic
BitVec goals already.
Note: include import Std.Tactic.BVDecide
bv_normalize
Run the normalization procedure of bv_decide only. Sometimes this is enough to solve basic
BitVec goals already.
Note: include import Std.Tactic.BVDecide
bv_omega
by_cases
by_cases (h :)? p splits the main goal into two cases, assuming h : p in the first branch, and h : ¬ p in the second branch.
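For example (a minimal sketch):
example (p : Prop) : p ∨ ¬ p := by
  by_cases h : p
  · exact Or.inl h
  · exact Or.inr h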
by_contra
by_contra h proves ⊢ p by contradiction,
introducing a hypothesis h : ¬p and proving False.
- If p is a negation ¬q, h : q will be introduced instead of ¬¬q.
- If p is decidable, it uses Decidable.byContradiction instead of Classical.byContradiction.
- If h is omitted, the introduced variable _ : ¬p will be anonymous.
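For example (a minimal sketch):
example (p : Prop) (h : ¬ ¬ p) : p := by
  by_contra hp
  -- hp : ¬p ⊢ False
  exact h hp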
by_contra!
If the target of the main goal is a proposition p,
by_contra! reduces the goal to proving False using the additional hypothesis this : ¬ p.
by_contra! h can be used to name the hypothesis h : ¬ p.
The hypothesis ¬ p will be negation normalized using push_neg.
For instance, ¬ a < b will be changed to b ≤ a.
by_contra! h : q will normalize negations in ¬ p, normalize negations in q,
and then check that the two normalized forms are equal.
The resulting hypothesis is the pre-normalized form, q.
If the name h is not explicitly provided, then this will be used as name.
This tactic uses classical reasoning.
It is a variant on the tactic by_contra.
Examples:
example : 1 < 2 := by
  by_contra! h
  -- h : 2 ≤ 1 ⊢ False

example : 1 < 2 := by
  by_contra! h : ¬ 1 < 2
  -- h : ¬ 1 < 2 ⊢ False
calc
Step-wise reasoning over transitive relations.
calc a = b := pab
  b = c := pbc
  ...
  y = z := pyz
proves a = z from the given step-wise proofs. = can be replaced with any
relation implementing the typeclass Trans. Instead of repeating the right-
hand sides, subsequent left-hand sides can be replaced with _.
calc a = b := pab
  _ = c := pbc
  ...
  _ = z := pyz
It is also possible to write the first relation as <lhs>\n _ = <rhs> := <proof>. This is useful for aligning relation symbols, especially on longer identifiers:
calc abc
  _ = bce := pabce
  _ = cef := pbcef
  ...
  _ = xyz := pwxyz
calc works as a term, as a tactic or as a conv tactic.
See Theorem Proving in Lean 4 for more information.
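For example, a minimal sketch mixing = and ≤ steps (the required Trans instances are provided by core Lean):
example (a b c : ℕ) (h₁ : a = b) (h₂ : b ≤ c) : a ≤ c := by
  calc a = b := h₁
    _ ≤ c := h₂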
cancel_denoms
cancel_denoms
cancel_denoms attempts to remove numerals from the denominators of fractions.
It works on propositions that are field-valued inequalities.
variable [LinearOrderedField α] (a b c : α)

example (h : a / 5 + b / 4 < c) : 4*a + 5*b < 20*c := by
  cancel_denoms at h
  exact h

example (h : a > 0) : a / 5 > 0 := by
  cancel_denoms
  exact h
case
- case _ : t => tac finds the first goal that unifies with t and then solves it using tac or else fails. Like show, it changes the type of the goal to t. The _ can optionally be a case tag, in which case it only looks at goals whose tag would be considered by case (goals with an exact tag match, followed by goals with the tag as a suffix, followed by goals with the tag as a prefix).
- case _ n₁ ... nₘ : t => tac additionally names the m most recent hypotheses with inaccessible names to the given names. The names are renamed before matching against t. The _ can optionally be a case tag.
- case _ : t₁ | _ : t₂ | ... => tac is equivalent to (case _ : t₁ => tac); (case _ : t₂ => tac); ... but with all matching done on the original list of goals -- each goal is consumed as they are matched, so patterns may repeat or overlap.
- case _ : t will make the matched goal be the first goal. case _ : t₁ | _ : t₂ | ... makes the matched goals be the first goals in the given order.
- case _ : t := _ and case _ : t := ?m are the same as case _ : t but in the ?m case the goal tag is changed to m. In particular, the goal becomes metavariable ?m.
case
- case tag => tac focuses on the goal with case name tag and solves it using tac, or else fails.
- case tag x₁ ... xₙ => tac additionally renames the n most recent hypotheses with inaccessible names to the given names.
- case tag₁ | tag₂ => tac is equivalent to (case tag₁ => tac); (case tag₂ => tac).
case'
case' _ : t => tac is similar to the case _ : t => tac tactic,
but it does not ensure the goal has been solved after applying tac,
nor does it admit the goal if tac failed.
Recall that case closes the goal using sorry when tac fails,
and the tactic execution is not interrupted.
case'
cases
Assuming x is a variable in the local context with an inductive type,
cases x splits the main goal, producing one goal for each constructor of the
inductive type, in which the target is replaced by a general instance of that constructor.
If the type of an element in the local context depends on x,
that element is reverted and reintroduced afterward,
so that the case split affects that hypothesis as well.
cases detects unreachable cases and closes them automatically.
For example, given n : Nat and a goal with a hypothesis h : P n and target Q n,
cases n produces one goal with hypothesis h : P 0 and target Q 0,
and one goal with hypothesis h : P (Nat.succ a) and target Q (Nat.succ a).
Here the name a is chosen automatically and is not accessible.
You can use with to provide the variables names for each constructor.
- cases e, where e is an expression instead of a variable, generalizes e in the goal, and then cases on the resulting variable.
- Given as : List α, cases as with | nil => tac₁ | cons a as' => tac₂, uses tactic tac₁ for the nil case, and tac₂ for the cons case, and a and as' are used as names for the new variables introduced.
- cases h : e, where e is a variable or an expression, performs cases on e as above, but also adds a hypothesis h : e = ... to each hypothesis, where ... is the constructor instance for that particular case.
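For example, a minimal sketch of the with form:
example (n : ℕ) (h : n ≠ 0) : ∃ m, n = m + 1 := by
  cases n with
  | zero => exact absurd rfl h
  | succ m => exact ⟨m, rfl⟩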
cases'
The cases' tactic is similar to the cases tactic in Lean 4 core, but the syntax for giving
names is different:
example (h : p ∨ q) : q ∨ p := by
  cases h with
  | inl hp => exact Or.inr hp
  | inr hq => exact Or.inl hq

example (h : p ∨ q) : q ∨ p := by
  cases' h with hp hq
  · exact Or.inr hp
  · exact Or.inl hq

example (h : p ∨ q) : q ∨ p := by
  rcases h with hp | hq
  · exact Or.inr hp
  · exact Or.inl hq
Prefer cases or rcases when possible, because these tactics promote structured proofs.
cases_first_enat
Finds the first ENat in the context and applies the cases tactic to it.
Then simplifies expressions involving ⊤ using the enat_to_nat_top simp set.
cases_type
- cases_type I applies the cases tactic to a hypothesis h : (I ...)
- cases_type I_1 ... I_n applies the cases tactic to a hypothesis h : (I_1 ...) or ... or h : (I_n ...)
- cases_type* I is shorthand for · repeat cases_type I
- cases_type! I only applies cases if the number of resulting subgoals is <= 1.
Example: The following tactic destructs all conjunctions and disjunctions in the current goal.
cases_type* Or And
cases_type!
- cases_type I applies the cases tactic to a hypothesis h : (I ...)
- cases_type I_1 ... I_n applies the cases tactic to a hypothesis h : (I_1 ...) or ... or h : (I_n ...)
- cases_type* I is shorthand for · repeat cases_type I
- cases_type! I only applies cases if the number of resulting subgoals is <= 1.
Example: The following tactic destructs all conjunctions and disjunctions in the current goal.
cases_type* Or And
casesm
- casesm p applies the cases tactic to a hypothesis h : type if type matches the pattern p.
- casesm p_1, ..., p_n applies the cases tactic to a hypothesis h : type if type matches one of the given patterns.
- casesm* p is a more efficient and compact version of · repeat casesm p. It is more efficient because the pattern is compiled once.
- casesm! p only applies cases if the number of resulting subgoals is <= 1.
Example: The following tactic destructs all conjunctions and disjunctions in the current context.
casesm* _ ∨ _, _ ∧ _
casesm!
- casesm p applies the cases tactic to a hypothesis h : type if type matches the pattern p.
- casesm p_1, ..., p_n applies the cases tactic to a hypothesis h : type if type matches one of the given patterns.
- casesm* p is a more efficient and compact version of · repeat casesm p. It is more efficient because the pattern is compiled once.
- casesm! p only applies cases if the number of resulting subgoals is <= 1.
Example: The following tactic destructs all conjunctions and disjunctions in the current context.
casesm* _ ∨ _, _ ∧ _
cat_disch
A tactic for discharging easy category theory goals, widely used as an autoparameter.
Currently this defaults to the aesop_cat wrapper around aesop, but by setting
the option mathlib.tactic.category.grind to true, it will use the grind tactic instead.
cc
The congruence closure tactic cc tries to solve the goal by chaining
equalities from context and applying congruence (i.e. if a = b, then f a = f b).
It is a finishing tactic, i.e. it is meant to close
the current goal, not to make some inconclusive progress.
A mostly trivial example would be:
example (a b c : ℕ) (f : ℕ → ℕ) (h : a = b) (h' : b = c) : f a = f c := by
  cc
As an example requiring some thinking to do by hand, consider:
example (f : ℕ → ℕ) (x : ℕ)
    (H1 : f (f (f x)) = x) (H2 : f (f (f (f (f x)))) = x) :
    f x = x := by
  cc
cfc_cont_tac
A tactic used to automatically discharge goals relating to the continuous functional calculus, specifically concerning continuity of the functions involved.
cfc_tac
A tactic used to automatically discharge goals relating to the continuous functional calculus, specifically whether the element satisfies the predicate.
cfc_zero_tac
A tactic used to automatically discharge goals relating to the non-unital continuous functional
calculus, specifically concerning whether f 0 = 0.
change
- change tgt' will change the goal from tgt to tgt', assuming these are definitionally equal.
- change t' at h will change hypothesis h : t to have type t', assuming t and t' are definitionally equal.
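For example, a minimal sketch using a definitional equality (x + 0 reduces to x):
example (x : ℕ) (h : x + 0 = 5) : x = 5 := by
  change x = 5 at h
  exact h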
change
change?
change? term unifies term with the current goal, then suggests explicit change syntax
that uses the resulting unified term.
If term is not present, change? suggests the current goal itself. This is useful after tactics
which transform the goal while maintaining definitional equality, such as dsimp; those preceding
tactic calls can then be deleted.
example : (fun x : Nat => x) 0 = 1 := by change? 0 = _ -- `Try this: change 0 = 1`
check_compositions
For each composition f ≫ g in the goal,
which internally is represented as CategoryStruct.comp C inst X Y Z f g,
infer the types of f and g and check whether their sources and targets agree
with X, Y, and Z at "instances and reducible" transparency,
reporting any discrepancies.
An example:
example (j : J) :
colimit.ι ((F ⋙ G) ⋙ H) j ≫ (preservesColimitIso (G ⋙ H) F).inv =
H.map (G.map (colimit.ι F j)) := by
-- We know which lemma we want to use, and it's even a simp lemma, but `rw`
-- won't let us apply it
fail_if_success rw [ι_preservesColimitIso_inv]
fail_if_success rw [ι_preservesColimitIso_inv (G ⋙ H)]
fail_if_success simp only [ι_preservesColimitIso_inv]
-- This would work:
-- erw [ι_preservesColimitIso_inv (G ⋙ H)]
-- `check_compositions` checks if the two morphisms we're composing are
-- composed by abusing defeq, and indeed it tells us that we are abusing
-- definitional associativity of composition of functors here: it prints
-- the following.
-- info: In composition
-- colimit.ι ((F ⋙ G) ⋙ H) j ≫ (preservesColimitIso (G ⋙ H) F).inv
-- the source of
-- (preservesColimitIso (G ⋙ H) F).inv
-- is
-- colimit (F ⋙ G ⋙ H)
-- but should be
-- colimit ((F ⋙ G) ⋙ H)
check_compositions
-- In this case, we can "fix" this by reassociating in the goal, but
-- usually at this point the right thing to do is to back off and
-- check how we ended up with a bad goal in the first place.
dsimp only [Functor.assoc]
-- This would work now, but it is not needed, because simp works as well
-- rw [ι_preservesColimitIso_inv]
simp
choose
- choose a b h h' using hyp takes a hypothesis hyp of the form ∀ (x : X) (y : Y), ∃ (a : A) (b : B), P x y a b ∧ Q x y a b for some P Q : X → Y → A → B → Prop and outputs into context a function a : X → Y → A, b : X → Y → B and two assumptions: h : ∀ (x : X) (y : Y), P x y (a x y) (b x y) and h' : ∀ (x : X) (y : Y), Q x y (a x y) (b x y). It also works with dependent versions.
- choose! a b h h' using hyp does the same, except that it will remove dependency of the functions on propositional arguments if possible. For example if Y is a proposition and A and B are nonempty in the above example then we will instead get a : X → A, b : X → B, and the assumptions h : ∀ (x : X) (y : Y), P x y (a x) (b x) and h' : ∀ (x : X) (y : Y), Q x y (a x) (b x).
The using hyp part can be omitted,
which will effectively cause choose to start with an intro hyp.
Examples:
example (h : ∀ n m : ℕ, ∃ i j, m = n + i ∨ m + j = n) : True := by
  choose i j h using h
  guard_hyp i : ℕ → ℕ → ℕ
  guard_hyp j : ℕ → ℕ → ℕ
  guard_hyp h : ∀ (n m : ℕ), m = n + i n m ∨ m + j n m = n
  trivial

example (h : ∀ i : ℕ, i < 7 → ∃ j, i < j ∧ j < i + i) : True := by
  choose! f h h' using h
  guard_hyp f : ℕ → ℕ
  guard_hyp h : ∀ i < 7, i < f i
  guard_hyp h' : ∀ i < 7, f i < i + i
  trivial
choose!
- choose a b h h' using hyp takes a hypothesis hyp of the form ∀ (x : X) (y : Y), ∃ (a : A) (b : B), P x y a b ∧ Q x y a b for some P Q : X → Y → A → B → Prop and outputs into context a function a : X → Y → A, b : X → Y → B and two assumptions: h : ∀ (x : X) (y : Y), P x y (a x y) (b x y) and h' : ∀ (x : X) (y : Y), Q x y (a x y) (b x y). It also works with dependent versions.
- choose! a b h h' using hyp does the same, except that it will remove dependency of the functions on propositional arguments if possible. For example if Y is a proposition and A and B are nonempty in the above example then we will instead get a : X → A, b : X → B, and the assumptions h : ∀ (x : X) (y : Y), P x y (a x) (b x) and h' : ∀ (x : X) (y : Y), Q x y (a x) (b x).
The using hyp part can be omitted,
which will effectively cause choose to start with an intro hyp.
Examples:
example (h : ∀ n m : ℕ, ∃ i j, m = n + i ∨ m + j = n) : True := by
  choose i j h using h
  guard_hyp i : ℕ → ℕ → ℕ
  guard_hyp j : ℕ → ℕ → ℕ
  guard_hyp h : ∀ (n m : ℕ), m = n + i n m ∨ m + j n m = n
  trivial

example (h : ∀ i : ℕ, i < 7 → ∃ j, i < j ∧ j < i + i) : True := by
  choose! f h h' using h
  guard_hyp f : ℕ → ℕ
  guard_hyp h : ∀ i < 7, i < f i
  guard_hyp h' : ∀ i < 7, f i < i + i
  trivial
classical
classical tacs runs tacs in a scope where Classical.propDecidable is a low priority
local instance.
Note that classical is a scoping tactic: it adds the instance only within the
scope of the tactic.
clean_wf
This tactic is used internally by Lean before presenting the proof obligations from a well-founded
definition to the user via decreasing_by. It is not necessary to use this tactic manually.
clear
clear x... removes the given hypotheses, or fails if there are remaining
references to a hypothesis.
clear
Clears all hypotheses it can, except those provided after a minus sign. Example:
clear * - h₁ h₂
clear!
A variant of clear which clears not only the given hypotheses but also any other hypotheses
depending on them
clear_value
- clear_value x... clears the values of the given local definitions. A local definition x : α := v becomes a hypothesis x : α.
- clear_value (h : x = _) adds a hypothesis h : x = v before clearing the value of x. This is short for have h : x = v := rfl; clear_value x. Any value definitionally equal to v can be used in place of _.
- clear_value * clears values of all hypotheses that can be cleared. Fails if none can be cleared.
These syntaxes can be combined. For example, clear_value x y * ensures that x and y are cleared
while trying to clear all other local definitions,
and clear_value (hx : x = _) y * with hx does the same while first adding the hx : x = v hypothesis.
coherence
Use the coherence theorem for monoidal categories to solve equations in a monoidal category, where the two sides only differ by replacing strings of monoidal structural morphisms (that is, associators, unitors, and identities) with different strings of structural morphisms with the same source and target.
That is, coherence can handle goals of the form
a ≫ f ≫ b ≫ g ≫ c = a' ≫ f ≫ b' ≫ g ≫ c'
where a = a', b = b', and c = c' can be proved using pure_coherence.
(If you have very large equations on which coherence is unexpectedly failing,
you may need to increase the typeclass search depth,
using e.g. set_option synthInstance.maxSize 500.)
compareOfLessAndEq_rfl
This attempts to prove that a given instance of compare is equal to compareOfLessAndEq by
introducing the arguments and trying the following approaches in order:
- seeing if rfl works
- seeing if the compare at hand is nonetheless essentially compareOfLessAndEq, but, because of implicit arguments, requires us to unfold the defs and split the ifs in the definition of compareOfLessAndEq
- seeing if we can split by cases on the arguments, then see if the defs work themselves out (useful when compare is defined via a match statement, as it is for Bool)
compat_simp_arith_all
compute_degree
compute_degree is a tactic to solve goals of the form
- natDegree f = d,
- degree f = d,
- natDegree f ≤ d (or <),
- degree f ≤ d (or <),
- coeff f d = r, if d is the degree of f.
The tactic may leave goals of the form d' = d, d' ≤ d, d' < d, or r ≠ 0, where d' in ℕ
or WithBot ℕ is the tactic's guess of the degree, and r is the tactic's guess of the
leading coefficient of f.
compute_degree applies norm_num to the left-hand side of all side goals, trying to close them.
The variant compute_degree! first applies compute_degree.
Then it uses norm_num on all the remaining goals and tries assumption.
compute_degree!
compute_degree is a tactic to solve goals of the form
- natDegree f = d,
- degree f = d,
- natDegree f ≤ d (or <),
- degree f ≤ d (or <),
- coeff f d = r, if d is the degree of f.
The tactic may leave goals of the form d' = d, d' ≤ d, d' < d, or r ≠ 0, where d' in ℕ
or WithBot ℕ is the tactic's guess of the degree, and r is the tactic's guess of the
leading coefficient of f.
compute_degree applies norm_num to the left-hand side of all side goals, trying to close them.
The variant compute_degree! first applies compute_degree.
Then it uses norm_num on all the remaining goals and tries assumption.
congr
Apply congruence (recursively) to goals of the form ⊢ f as = f bs and ⊢ f as ≍ f bs.
The optional parameter is the depth of the recursive applications.
This is useful when congr is too aggressive in breaking down the goal.
For example, given ⊢ f (g (x + y)) = f (g (y + x)),
congr produces the goals ⊢ x = y and ⊢ y = x,
while congr 2 produces the intended ⊢ x + y = y + x.
congr
Apply congruence (recursively) to goals of the form ⊢ f as = f bs and ⊢ f as ≍ f bs.
The optional parameter is the depth of the recursive applications.
This is useful when congr is too aggressive in breaking down the goal.
For example, given ⊢ f (g (x + y)) = f (g (y + x)),
congr produces the goals ⊢ x = y and ⊢ y = x,
while congr 2 produces the intended ⊢ x + y = y + x.
congr
Apply congruence (recursively) to goals of the form ⊢ f as = f bs and ⊢ f as ≍ f bs.
- congr n controls the depth of the recursive applications. This is useful when congr is too aggressive in breaking down the goal. For example, given ⊢ f (g (x + y)) = f (g (y + x)), congr produces the goals ⊢ x = y and ⊢ y = x, while congr 2 produces the intended ⊢ x + y = y + x.
- If, at any point, a subgoal matches a hypothesis then the subgoal will be closed.
- You can use congr with p (: n)? to call ext p (: n)? to all subgoals generated by congr. For example, if the goal is ⊢ f '' s = g '' s then congr with x generates the goal x : α ⊢ f x = g x.
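For example, a minimal sketch of the depth-controlled example described above:
example (f g : ℕ → ℕ) (x y : ℕ) : f (g (x + y)) = f (g (y + x)) := by
  congr 2
  -- ⊢ x + y = y + x
  exact Nat.add_comm x y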
congr!
Equates pieces of the left-hand side of a goal to corresponding pieces of the right-hand side by
recursively applying congruence lemmas. For example, with ⊢ f as = g bs we could get
two goals ⊢ f = g and ⊢ as = bs.
Syntax:
congr! congr! n congr! with x y z congr! n with x y z
Here, n is a natural number and x, y, z are rintro patterns (like h, rfl, ⟨x, y⟩,
_, -, (h | h), etc.).
The congr! tactic is similar to congr but is more insistent in trying to equate left-hand sides
to right-hand sides of goals. Here is a list of things it can try:
- If R in ⊢ R x y is a reflexive relation, it will convert the goal to ⊢ x = y if possible. The list of reflexive relations is maintained using the @[refl] attribute. As a special case, ⊢ p ↔ q is converted to ⊢ p = q during congruence processing and then returned to ⊢ p ↔ q form at the end.
- If there is a user congruence lemma associated to the goal (for instance, a @[congr]-tagged lemma applying to ⊢ List.map f xs = List.map g ys), then it will use that.
- It uses a congruence lemma generator at least as capable as the one used by congr and simp. If there is a subexpression that can be rewritten by simp, then congr! should be able to generate an equality for it.
- It can do congruences of pi types using lemmas like implies_congr and pi_congr.
- Before applying congruences, it will run the intros tactic automatically. The introduced variables can be given names using a with clause. This helps when congruence lemmas provide additional assumptions in hypotheses.
- When there is an equality between functions, so long as at least one is obviously a lambda, we apply funext or Function.hfunext, which allows for congruence of lambda bodies.
- It can try to close goals using a few strategies, including checking definitional equality, trying to apply Subsingleton.elim or proof_irrel_heq, and using the assumption tactic.
The optional parameter is the depth of the recursive applications.
This is useful when congr! is too aggressive in breaking down the goal.
For example, given ⊢ f (g (x + y)) = f (g (y + x)),
congr! produces the goals ⊢ x = y and ⊢ y = x,
while congr! 2 produces the intended ⊢ x + y = y + x.
The congr! tactic also takes a configuration option, for example
congr! (transparency := .default) 2
This overrides the default, which is to apply congruence lemmas at reducible transparency.
The congr! tactic is aggressive with equating two sides of everything. There is a predefined
configuration that uses a different strategy:
Try
congr! (config := .unfoldSameFun)
This only allows congruences between functions applications of definitionally equal functions,
and it applies congruence lemmas at default transparency (rather than just reducible).
This is somewhat like congr.
See Congr!.Config for all options.
congrm
congrm e is a tactic for proving goals of the form lhs = rhs, lhs ↔ rhs, lhs ≍ rhs,
or R lhs rhs when R is a reflexive relation.
The expression e is a pattern containing placeholders ?_,
and this pattern is matched against lhs and rhs simultaneously.
These placeholders generate new goals that state that corresponding subexpressions
in lhs and rhs are equal.
If the placeholders have names, such as ?m, then the new goals are given tags with those names.
Examples:
example {a b c d : ℕ} :
    Nat.pred a.succ * (d + (c + a.pred)) = Nat.pred b.succ * (b + (c + d.pred)) := by
  congrm Nat.pred (Nat.succ ?h1) * (?h2 + ?h3)
  /- Goals left:
  case h1 ⊢ a = b
  case h2 ⊢ d = b
  case h3 ⊢ c + a.pred = c + d.pred
  -/
  sorry
  sorry
  sorry

example {a b : ℕ} (h : a = b) : (fun y : ℕ => ∀ z, a + a = z) = (fun x => ∀ z, b + a = z) := by
  congrm fun x => ∀ w, ?_ + a = w
  -- ⊢ a = b
  exact h
The congrm command is a convenient frontend to congr(...) congruence quotations.
If the goal is an equality, congrm e is equivalent to refine congr(e') where e' is
built from e by replacing each placeholder ?m by $(?m).
The pattern e is allowed to contain $(...) expressions to immediately substitute
equality proofs into the congruence, just like for congruence quotations.
congrm?
Display a widget panel allowing to generate a congrm call with holes specified by selecting
subexpressions in the goal.
constructor
If the main goal's target type is an inductive type, constructor solves it with
the first matching constructor, or else fails.
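For example (a minimal sketch; constructor picks Or.inl, the first constructor whose conclusion matches):
example (p q : Prop) (hp : p) : p ∨ q := by
  constructor
  exact hp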
constructorm
- constructorm p_1, ..., p_n applies the constructor tactic to the main goal if type matches one of the given patterns.
- constructorm* p is a more efficient and compact version of · repeat constructorm p. It is more efficient because the pattern is compiled once.
Example: The following tactic proves any theorem like True ∧ (True ∨ True) consisting of and/or/true:
constructorm* _ ∨ _, _ ∧ _, True
continuity
The tactic continuity solves goals of the form Continuous f by applying lemmas tagged with the
continuity user attribute.
fun_prop is a (usually more powerful) alternative to continuity.
continuity?
The tactic continuity solves goals of the form Continuous f by applying lemmas tagged with the
continuity user attribute.
contradiction
contradiction closes the main goal if its hypotheses are "trivially contradictory".
- Inductive type/family with no applicable constructors
  example (h : False) : p := by contradiction
- Injectivity of constructors
  example (h : none = some true) : p := by contradiction
- Decidable false proposition
  example (h : 2 + 2 = 3) : p := by contradiction
- Contradictory hypotheses
  example (h : p) (h' : ¬ p) : q := by contradiction
- Other simple contradictions such as
  example (x : Nat) (h : x ≠ x) : p := by contradiction
contrapose
Transforms the goal into its contrapositive.
- contrapose turns a goal P → Q into ¬ Q → ¬ P
- contrapose h first reverts the local assumption h, and then uses contrapose and intro h
- contrapose h with new_h uses the name new_h for the introduced hypothesis
contrapose!
Transforms the goal into its contrapositive and pushes negations inside P and Q.
Usage matches contrapose
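For example, a minimal sketch (push_neg normalizes the negations, so the hypothesis becomes n ≤ 0):
example (n : ℕ) (h : n ≠ 0) : 0 < n := by
  contrapose! h
  -- h : n ≤ 0 ⊢ n = 0
  omega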
conv
conv => ... allows the user to perform targeted rewriting on a goal or hypothesis,
by focusing on particular subexpressions.
See https://lean-lang.org/theorem_proving_in_lean4/conv.html for more details.
Basic forms:
- conv => cs will rewrite the goal with conv tactics cs.
- conv at h => cs will rewrite hypothesis h.
- conv in pat => cs will rewrite the first subexpression matching pat (see pattern).
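For example, a minimal sketch rewriting inside a hypothesis:
example (a b : ℕ) (h : a + 0 = b) : a = b := by
  conv at h => rw [Nat.add_zero]
  exact h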
conv?
Display a widget panel allowing to generate a conv call zooming to the subexpression selected
in the goal.
conv_lhs
conv_rhs
convert
The exact e and refine e tactics require a term e whose type is
definitionally equal to the goal. convert e is similar to refine e,
but the type of e is not required to exactly match the
goal. Instead, new goals are created for differences between the type
of e and the goal using the same strategies as the congr! tactic.
For example, in the proof state
n : ℕ, e : Prime (2 * n + 1) ⊢ Prime (n + n + 1)
the tactic convert e using 2 will change the goal to
⊢ n + n = 2 * n
In this example, the new goal can be solved using ring.
The using 2 indicates it should iterate the congruence algorithm up to two times,
where convert e would use an unrestricted number of iterations and lead to two
impossible goals: ⊢ HAdd.hAdd = HMul.hMul and ⊢ n = 2.
A variant configuration is convert (config := .unfoldSameFun) e, which only equates function
applications for the same function (while doing so at the higher default transparency).
This gives the same goal of ⊢ n + n = 2 * n without needing using 2.
The convert tactic applies congruence lemmas eagerly before reducing,
therefore it can fail in cases where exact succeeds:
def p (n : ℕ) := True
example (h : p 0) : p 1 := by exact h -- succeeds
example (h : p 0) : p 1 := by convert h -- fails, with leftover goal `1 = 0`
Limiting the depth of recursion can help with this. For example, convert h using 1 will work
in this case.
The syntax convert ← e will reverse the direction of the new goals
(producing ⊢ 2 * n = n + n in this example).
Internally, convert e works by creating a new goal asserting that
the goal equals the type of e, then simplifying it using
congr!. The syntax convert e using n can be used to control the
depth of matching (like congr! n). In the example, convert e using 1
would produce a new goal ⊢ n + n + 1 = 2 * n + 1.
Refer to the congr! tactic to understand the congruence operations. One of its many
features is that if x y : t and an instance Subsingleton t is in scope,
then any goals of the form x = y are solved automatically.
Like congr!, convert takes an optional with clause of rintro patterns,
for example convert e using n with x y z.
The convert tactic also takes a configuration option, for example
convert (config := {transparency := .default}) h
These are passed to congr!. See Congr!.Config for options.
convert_to
The convert_to tactic is for changing the type of the target or a local hypothesis,
but unlike the change tactic it will generate equality proof obligations using congr!
to resolve discrepancies.
- convert_to ty changes the target to ty
- convert_to ty using n uses congr! n instead of congr! 1
- convert_to ty at h changes the type of the local hypothesis h to ty. Any remaining congr! goals come first.
Operating on the target, the tactic convert_to ty using n
is the same as convert (?_ : ty) using n.
The difference is that convert_to takes a type but convert takes a proof term.
Except for it also being able to operate on local hypotheses,
the syntax for convert_to is the same as for convert, and it has variations such as
convert_to ← g and convert_to (config := {transparency := .default}) g.
Note that convert_to ty at h may leave a copy of h if a later local hypothesis or the target
depends on it, just like in rw or simp.
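A minimal sketch; both the congruence side goal and the new target are arithmetic here, so omega closes whatever remains regardless of goal order:
example (a b : ℕ) (h : a = b) : a + 0 = b := by
  convert_to a = b
  all_goals omega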
dbg_trace
decide
decide attempts to prove the main goal (with target type p) by synthesizing an instance of Decidable p
and then reducing that instance to evaluate the truth value of p.
If it reduces to isTrue h, then h is a proof of p that closes the goal.
The target is not allowed to contain local variables or metavariables.
If there are local variables, you can first try using the revert tactic with these local variables to move them into the target,
or you can use the +revert option, described below.
Options:
- decide +revert begins by reverting local variables that the target depends on, after cleaning up the local context of irrelevant variables. A variable is relevant if it appears in the target, if it appears in a relevant variable, or if it is a proposition that refers to a relevant variable.
- decide +kernel uses kernel for reduction instead of the elaborator. It has two key properties: (1) since it uses the kernel, it ignores transparency and can unfold everything, and (2) it reduces the Decidable instance only once instead of twice.
- decide +native uses the native code compiler (#eval) to evaluate the Decidable instance, admitting the result via the Lean.ofReduceBool axiom. This can be significantly more efficient than using reduction, but it is at the cost of increasing the size of the trusted code base. Namely, it depends on the correctness of the Lean compiler and all definitions with an @[implemented_by] attribute. Like with +kernel, the Decidable instance is evaluated only once.
Limitation: In the default mode or +kernel mode, since decide uses reduction to evaluate the term,
Decidable instances defined by well-founded recursion might not work because evaluating them requires reducing proofs.
Reduction can also get stuck on Decidable instances with Eq.rec terms.
These can appear in instances defined using tactics (such as rw and simp).
To avoid this, create such instances using definitions such as decidable_of_iff instead.
Examples
Proving inequalities:
example : 2 + 2 ≠ 5 := ⊢ 2 + 2 ≠ 5 All goals completed! 🐙
Trying to prove a false proposition:
example : 1 ≠ 1 := by decide /- tactic 'decide' proved that the proposition 1 ≠ 1 is false -/
Trying to prove a proposition whose Decidable instance fails to reduce
opaque unknownProp : Prop
open scoped Classical in
example : unknownProp := by decide
/- tactic 'decide' failed for proposition unknownProp since its 'Decidable' instance reduced to Classical.choice ⋯ rather than to the 'isTrue' constructor. -/
Properties and relations
For equality goals for types with decidable equality, usually rfl can be used in place of decide.
example : 1 + 1 = 2 := ⊢ 1 + 1 = 2 All goals completed! 🐙
example : 1 + 1 = 2 := ⊢ 1 + 1 = 2 All goals completed! 🐙
decreasing_tactic
decreasing_tactic is called by default on well-founded recursions in order
to synthesize a proof that recursive calls decrease along the selected
well founded relation. It can be locally overridden by using decreasing_by tac
on the recursive definition, and it can also be globally extended by adding
more definitions for decreasing_tactic (or decreasing_trivial,
which this tactic calls).
decreasing_trivial
Extensible helper tactic for decreasing_tactic. This handles the "base case"
reasoning after applying lexicographic order lemmas.
It can be extended by adding more macro definitions, e.g.
macro_rules | `(tactic| decreasing_trivial) => `(tactic| linarith)
decreasing_trivial_pre_omega
Variant of decreasing_trivial that does not use omega, intended to be used in core modules
before omega is available.
decreasing_with
Constructs a proof of decreasing along a well founded relation, by simplifying, then applying
lexicographic order lemmas and finally using ts to solve the base case. If it fails,
it prints a message to help the user diagnose an ill-founded recursive definition.
delta
delta id1 id2 ... delta-expands the definitions id1, id2, ....
This is a low-level tactic, it will expose how recursive definitions have been
compiled by Lean.
done
done succeeds iff there are no remaining goals.
dsimp
The dsimp tactic is the definitional simplifier. It is similar to simp but only
applies theorems that hold by reflexivity. Thus, the result is guaranteed to be
definitionally equal to the input.
dsimp!
dsimp! is shorthand for dsimp with autoUnfold := true.
This will rewrite with all equation lemmas, which can be used to
partially evaluate many definitions.
dsimp?
simp? takes the same arguments as simp, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := x:ℕ⊢ (if True then x + 2 else 3) = x + 2
All goals completed! 🐙 -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all and dsimp.
dsimp?!
simp? takes the same arguments as simp, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := x:ℕ⊢ (if True then x + 2 else 3) = x + 2
All goals completed! 🐙 -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all and dsimp.
eapply
eapply e is like apply e but it does not add subgoals for variables that appear
in the types of other goals. Note that this can lead to a failure where there are
no goals remaining but there are still metavariables in the term:
example (h : ∀ x : Nat, x = x → True) : True := by
  eapply h
  rfl
-- no goals
-- (kernel) declaration has metavariables '_example'
econstructor
econstructor is like constructor
(it calls apply using the first matching constructor of an inductive datatype)
except only non-dependent premises are added as new goals.
elementwise
elementwise!
enat_to_nat
enat_to_nat shifts all ENats in the context to Nat, rewriting propositions about them.
A typical use case is enat_to_nat; omega.
erw
erw?
erw? [r, ...] calls erw [r, ...] (at hypothesis h if written erw [r, ...] at h),
and then attempts to identify any subexpression which would block the use of rw instead.
It does so by identifying subexpressions which are defeq, but not at reducible transparency.
eta_expand
eta_expand at loc eta expands all sub-expressions at the given location.
It also beta reduces any applications of eta expanded terms, so it puts it
into an eta-expanded "normal form."
This also exists as a conv-mode tactic.
For example, if f takes two arguments, then f becomes fun x y => f x y
and f x becomes fun y => f x y.
This can be useful to turn, for example, a raw HAdd.hAdd into fun x y => x + y.
eta_reduce
eta_reduce at loc eta reduces all sub-expressions at the given location.
This also exists as a conv-mode tactic.
For example, fun x y => f x y becomes f after eta reduction.
eta_struct
eta_struct at loc transforms structure constructor applications such as S.mk x.1 ... x.n
(pretty printed as, for example, {a := x.a, b := x.b, ...}) into x.
This also exists as a conv-mode tactic.
The transformation is known as eta reduction for structures, and it yields definitionally equal expressions.
For example, given x : α × β, then (x.1, x.2) becomes x after this transformation.
exact
exact e closes the main goal if its target type matches that of e.
exact?
Searches environment for definitions or theorems that can solve the goal using exact
with conditions resolved by solve_by_elim.
The optional using clause provides identifiers in the local context that must be
used by exact? when closing the goal. This is most useful if there are multiple
ways to resolve the goal, and one wants to guide which lemma is used.
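For instance (the lemma reported depends on what the search finds; the comment is only indicative):
example (n : ℕ) : n ≤ n + 1 := by
  exact?  -- closes the goal and reports a `Try this: exact ...` suggestion, e.g. `Nat.le_succ n`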
exact_mod_cast
Normalize casts in the goal and the given expression, then close the goal with exact.
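For example, with the standard Nat-to-Int cast lemmas:
example (a b : ℕ) (h : (a : ℤ) ≤ (b : ℤ)) : a ≤ b := by
  exact_mod_cast h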
exacts
Like exact, but takes a list of terms and checks that all goals are discharged after the tactic.
exfalso
exfalso converts a goal ⊢ tgt into ⊢ False by applying False.elim.
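For example:
example {p : Prop} (h : 2 + 2 = 5) : p := by
  exfalso
  -- ⊢ False
  omega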
exists
exists e₁, e₂, ... is shorthand for refine ⟨e₁, e₂, ...⟩; try trivial.
It is useful for existential goals.
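For example:
example : ∃ n : ℕ, n = 42 := by
  exists 42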
existsi
existsi e₁, e₂, ⋯ applies the tactic refine ⟨e₁, e₂, ⋯, ?_⟩. Its purpose is to instantiate
existential quantifiers.
Examples:
example : ∃ x : Nat, x = x := ⊢ ∃ x, x = x
⊢ 42 = 42
All goals completed! 🐙
example : ∃ x : Nat, ∃ y : Nat, x = y := ⊢ ∃ x y, x = y
⊢ 42 = 42
All goals completed! 🐙
expose_names
expose_names renames all inaccessible variables with accessible names, making them available
for reference in generated tactics. However, this renaming introduces machine-generated names
that are not fully under user control. expose_names is primarily intended as a preamble for
auto-generated end-game tactic scripts. It is also useful as an alternative to
set_option tactic.hygienic false. If explicit control over renaming is needed in the
middle of a tactic script, consider using structured tactic scripts with
match .. with, induction .. with, or intro with explicit user-defined names,
as well as tactics such as next, case, and rename_i.
ext
Applies extensionality lemmas that are registered with the @[ext] attribute.
- ext pat* applies extensionality theorems as much as possible, using the patterns pat* to introduce the variables in extensionality theorems using rintro. For example, the patterns are used to name the variables introduced by lemmas such as funext.
- Without patterns, ext applies extensionality lemmas as much as possible but introduces anonymous hypotheses whenever needed.
- ext pat* : n applies ext theorems only up to depth n.
The ext1 pat* tactic is like ext pat* except that it only applies a single extensionality theorem.
Unused patterns will generate a warning. Patterns that don't match the variables will typically result in the introduction of anonymous hypotheses.
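For example, via the funext extensionality lemma:
example (f g : ℕ → ℕ) (h : ∀ n, f n = g n) : f = g := by
  ext n
  exact h n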
ext1
ext1 pat* is like ext pat* except that it only applies a single extensionality theorem rather
than recursively applying as many extensionality theorems as possible.
The pat* patterns are processed using the rintro tactic.
If no patterns are supplied, then variables are introduced anonymously using the intros tactic.
extract_goal
- extract_goal formats the current goal as a stand-alone theorem or definition after cleaning up the local context of irrelevant variables. A variable is relevant if (1) it occurs in the target type, (2) there is a relevant variable that depends on it, or (3) the type of the variable is a proposition that depends on a relevant variable. If the target is False, then for convenience extract_goal includes all variables.
- extract_goal * formats the current goal without cleaning up the local context.
- extract_goal a b c ... formats the current goal after removing everything that the given variables a, b, c, ... do not depend on.
- extract_goal ... using name uses the name name for the theorem or definition rather than the autogenerated name.
The tactic tries to produce an output that can be copy-pasted and just work, but its success depends on whether the expressions are amenable to being unambiguously pretty printed.
The tactic responds to pretty printing options.
For example, set_option pp.all true in extract_goal gives the pp.all form.
extract_lets
Extracts let and have expressions from within the target or a local hypothesis,
introducing new local definitions.
- extract_lets extracts all the lets from the target.
- extract_lets x y z extracts all the lets from the target and uses x, y, and z for the first names. Using _ for a name leaves it unnamed.
- extract_lets x y z at h operates on the local hypothesis h instead of the target.
For example, given a local hypothesis of the form h : let x := v; b x, then extract_lets z at h
introduces a new local definition z := v and changes h to be h : b z.
fail_if_no_progress
fail_if_no_progress tacs evaluates tacs, and fails if no progress is made on the main goal
or the local context at reducible transparency.
fail_if_success
fail_if_success t fails if the tactic t succeeds.
false_or_by_contra
Changes the goal to False, retaining as much information as possible:
- If the goal is False, do nothing.
- If the goal is an implication or a function type, introduce the argument and restart. (In particular, if the goal is x ≠ y, introduce x = y.)
- Otherwise, for a propositional goal P, replace it with ¬ ¬ P (attempting to find a Decidable instance, but otherwise falling back to working classically) and introduce ¬ P.
- For a non-propositional goal use False.elim.
fapply
fapply e is like apply e but it adds goals in the order they appear,
rather than putting the dependent goals first.
fconstructor
fconstructor is like constructor
(it calls apply using the first matching constructor of an inductive datatype)
except that it does not reorder goals.
field_simp
The goal of field_simp is to reduce an expression in a field to an expression of the form n / d
where neither n nor d contains any division symbol.
If the goal is an equality, this tactic will also clear the denominators, so that the proof
can normally be concluded by an application of ring.
For example,
example (a b c d x y : ℂ) (hx : x ≠ 0) (hy : y ≠ 0) :
a + b / x + c / x ^ 2 + d / x ^ 3 = a + x⁻¹ * (y * b / y + (d / x + c) / x) := a:ℂb:ℂc:ℂd:ℂx:ℂy:ℂhx:x ≠ 0hy:y ≠ 0⊢ a + b / x + c / x ^ 2 + d / x ^ 3 = a + x⁻¹ * (y * b / y + (d / x + c) / x)
a:ℂb:ℂc:ℂd:ℂx:ℂy:ℂhx:x ≠ 0hy:y ≠ 0⊢ x * (x * (a * x + b) + c) + d = a * x ^ 3 + (b * x ^ 2 + (d + x * c))
All goals completed! 🐙
Cancelling and combining denominators often requires "nonzeroness" side conditions. The field_simp
tactic attempts to discharge these, and will omit such steps if it cannot discharge the
corresponding side conditions. The discharger will try, among other things, positivity and
norm_num, and will also use any nonzeroness proofs included explicitly (e.g. field_simp [hx]).
If your expression is not completely reduced by field_simp, check the denominators of the
resulting expression and provide proofs that they are nonzero to enable further progress.
filter_upwards
filter_upwards [h₁, ⋯, hₙ] replaces a goal of the form s ∈ f and terms
h₁ : t₁ ∈ f, ⋯, hₙ : tₙ ∈ f with ∀ x, x ∈ t₁ → ⋯ → x ∈ tₙ → x ∈ s.
The list is an optional parameter, [] being its default value.
filter_upwards [h₁, ⋯, hₙ] with a₁ a₂ ⋯ aₖ is a short form for
{ filter_upwards [h₁, ⋯, hₙ], intro a₁ a₂ ⋯ aₖ }.
filter_upwards [h₁, ⋯, hₙ] using e is a short form for
{ filter_upwards [h1, ⋯, hn], exact e }.
Combining both shortcuts is done by writing filter_upwards [h₁, ⋯, hₙ] with a₁ a₂ ⋯ aₖ using e.
Note that in this case, the aᵢ terms can be used in e.
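A minimal sketch using Mathlib's Filter API:
example {α : Type*} {f : Filter α} {s t : Set α} (hs : s ∈ f) (hst : s ⊆ t) : t ∈ f := by
  filter_upwards [hs] with x hx using hst hx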
fin_cases
fin_cases h performs case analysis on a hypothesis of the form
h : A, where [Fintype A] is available, or
h : a ∈ A, where A : Finset X, A : Multiset X or A : List X.
As an example, in
example (f : ℕ → Prop) (p : Fin 3) (h0 : f 0) (h1 : f 1) (h2 : f 2) : f p.val := f:ℕ → Propp:Fin 3h0:f 0h1:f 1h2:f 2⊢ f ↑p
f:ℕ → Proph0:f 0h1:f 1h2:f 2⊢ f ↑((fun i => i) ⟨0, ⋯⟩)f:ℕ → Proph0:f 0h1:f 1h2:f 2⊢ f ↑((fun i => i) ⟨1, ⋯⟩)f:ℕ → Proph0:f 0h1:f 1h2:f 2⊢ f ↑((fun i => i) ⟨2, ⋯⟩); f:ℕ → Proph0:f 0h1:f 1h2:f 2⊢ f 0f:ℕ → Proph0:f 0h1:f 1h2:f 2⊢ f ↑((fun i => i) ⟨1, ⋯⟩)f:ℕ → Proph0:f 0h1:f 1h2:f 2⊢ f ↑((fun i => i) ⟨2, ⋯⟩)
all_goals All goals completed! 🐙
after fin_cases p; simp, there are three goals, f 0, f 1, and f 2.
fin_omega
Preprocessor for omega to handle inequalities in Fin.
Note that this involves a lot of case splitting, so may be slow.
find
finiteness
Tactic to solve goals of the form *** < ∞ and (equivalently) *** ≠ ∞ in the extended
nonnegative reals (ℝ≥0∞).
finiteness?
Tactic to solve goals of the form *** < ∞ and (equivalently) *** ≠ ∞ in the extended
nonnegative reals (ℝ≥0∞).
finiteness_nonterminal
Tactic to solve goals of the form *** < ∞ and (equivalently) *** ≠ ∞ in the extended
nonnegative reals (ℝ≥0∞).
focus
focus tac focuses on the main goal, suppressing all other goals, and runs tac on it.
Usually · tac, which enforces that the goal is closed by tac, should be preferred.
forward
forward?
fun_cases
The fun_cases tactic is a convenience wrapper of the cases tactic when using a functional
cases principle.
The tactic invocation
fun_cases f x ... y ...
is equivalent to
cases y, ... using f.fun_cases_unfolding x ...
where the arguments of f are used as arguments to f.fun_cases_unfolding or targets of the case
analysis, as appropriate.
The form
fun_cases f
(with no arguments to f) searches the goal for a unique eligible application of f, and uses
these arguments. An application of f is eligible if it is saturated and the arguments that will
become targets are free variables.
The form fun_cases f x y with | case1 => tac₁ | case2 x' ih => tac₂ works like with cases.
Under set_option tactic.fun_induction.unfolding true (the default), fun_induction uses the
f.fun_cases_unfolding theorem, which will try to automatically unfold the call to f in
the goal. With set_option tactic.fun_induction.unfolding false, it uses f.fun_cases instead.
fun_induction
The fun_induction tactic is a convenience wrapper around the induction tactic to use the
functional induction principle.
The tactic invocation
fun_induction f x₁ ... xₙ y₁ ... yₘ
where f is a function defined by non-mutual structural or well-founded recursion, is equivalent to
induction y₁, ... yₘ using f.induct_unfolding x₁ ... xₙ
where the arguments of f are used as arguments to f.induct_unfolding or targets of the
induction, as appropriate.
The form
fun_induction f
(with no arguments to f) searches the goal for a unique eligible application of f, and uses
these arguments. An application of f is eligible if it is saturated and the arguments that will
become targets are free variables.
The forms fun_induction f x y generalizing z₁ ... zₙ and
fun_induction f x y with | case1 => tac₁ | case2 x' ih => tac₂ work like with induction.
Under set_option tactic.fun_induction.unfolding true (the default), fun_induction uses the
f.induct_unfolding induction principle, which will try to automatically unfold the call to f in
the goal. With set_option tactic.fun_induction.unfolding false, it uses f.induct instead.
fun_prop
Tactic to prove function properties
funext
Apply function extensionality and introduce new hypotheses.
The tactic funext will keep applying the funext lemma until the goal target is not reducible to
|- ((fun x => ...) = (fun x => ...))
The variant funext h₁ ... hₙ applies funext n times, and uses the given identifiers to name the new hypotheses.
Patterns can be used like in the intro tactic. Example, given a goal
|- ((fun x : Nat × Bool => ...) = (fun x => ...))
funext (a, b) applies funext once and performs pattern matching on the newly introduced pair.
gcongr
The gcongr tactic applies "generalized congruence" rules, reducing a relational goal
between a LHS and RHS. For example,
example {a b x c d : ℝ} (h1 : a + 1 ≤ b + 1) (h2 : c + 2 ≤ d + 2) :
x ^ 2 * a + c ≤ x ^ 2 * b + d := a:ℝb:ℝx:ℝc:ℝd:ℝh1:a + 1 ≤ b + 1h2:c + 2 ≤ d + 2⊢ x ^ 2 * a + c ≤ x ^ 2 * b + d
a:ℝb:ℝx:ℝc:ℝd:ℝh1:a + 1 ≤ b + 1h2:c + 2 ≤ d + 2⊢ a ≤ ba:ℝb:ℝx:ℝc:ℝd:ℝh1:a + 1 ≤ b + 1h2:c + 2 ≤ d + 2⊢ c ≤ d
a:ℝb:ℝx:ℝc:ℝd:ℝh1:a + 1 ≤ b + 1h2:c + 2 ≤ d + 2⊢ a ≤ b All goals completed! 🐙
a:ℝb:ℝx:ℝc:ℝd:ℝh1:a + 1 ≤ b + 1h2:c + 2 ≤ d + 2⊢ c ≤ d All goals completed! 🐙
This example has the goal of proving the relation ≤ between a LHS and RHS both of the pattern
x ^ 2 * ?_ + ?_
(with inputs a, c on the left and b, d on the right); after the use of
gcongr, we have the simpler goals a ≤ b and c ≤ d.
A depth limit or a pattern can be provided explicitly; this is useful if a non-maximal match is desired:
example {a b c d x : ℝ} (h : a + c + 1 ≤ b + d + 1) :
x ^ 2 * (a + c) + 5 ≤ x ^ 2 * (b + d) + 5 := a:ℝb:ℝc:ℝd:ℝx:ℝh:a + c + 1 ≤ b + d + 1⊢ x ^ 2 * (a + c) + 5 ≤ x ^ 2 * (b + d) + 5
a:ℝb:ℝc:ℝd:ℝx:ℝh:a + c + 1 ≤ b + d + 1⊢ a + c ≤ b + d -- or `gcongr 2`
All goals completed! 🐙
The "generalized congruence" rules are the library lemmas which have been tagged with the
attribute @[gcongr]. For example, the first example constructs the proof term
add_le_add (mul_le_mul_of_nonneg_left ?_ (Even.pow_nonneg (even_two_mul 1) x)) ?_
using the generalized congruence lemmas add_le_add and mul_le_mul_of_nonneg_left.
The tactic attempts to discharge side goals to these "generalized congruence" lemmas (such as the
side goal 0 ≤ x ^ 2 in the above application of mul_le_mul_of_nonneg_left) using the tactic
gcongr_discharger, which wraps positivity but can also be extended. Side goals not discharged
in this way are left for the user.
gcongr will descend into binders (for example sums or suprema). To name the bound variables,
use with:
example {f g : ℕ → ℝ≥0∞} (h : ∀ n, f n ≤ g n) : ⨆ n, f n ≤ ⨆ n, g n := by
gcongr with i
exact h i
gcongr?
Display a widget panel allowing to generate a gcongr call with holes specified by selecting
subexpressions in the goal.
gcongr_discharger
generalize
- generalize ([h :] e = x),+ replaces all occurrences of e in the main goal with a fresh hypothesis x. If h is given, h : e = x is introduced as well.
- generalize e = x at h₁ ... hₙ also generalizes occurrences of e inside h₁, ..., hₙ.
- generalize e = x at * will generalize occurrences of e everywhere.
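For example:
example (a b : ℕ) : (a + b) * (a + b) = (a + b) ^ 2 := by
  generalize a + b = x
  -- ⊢ x * x = x ^ 2
  ring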
generalize_proofs
generalize_proofs ids* [at locs]? generalizes proofs in the current goal,
turning them into new local hypotheses.
- generalize_proofs generalizes proofs in the target.
- generalize_proofs at h₁ h₂ generalizes proofs in hypotheses h₁ and h₂.
- generalize_proofs at * generalizes proofs in the entire local context.
- generalize_proofs pf₁ pf₂ pf₃ uses names pf₁, pf₂, and pf₃ for the generalized proofs. These can be _ to not name proofs.
If a proof is already present in the local context, it will use that rather than create a new local hypothesis.
When doing generalize_proofs at h, if h is a let binding, its value is cleared,
and furthermore if h duplicates a preceding local hypothesis then it is eliminated.
The tactic is able to abstract proofs from under binders, creating universally quantified
proofs in the local context.
To disable this, use generalize_proofs -abstract.
The tactic is also set to recursively abstract proofs from the types of the generalized proofs.
This can be controlled with the maxDepth configuration option,
with generalize_proofs (config := { maxDepth := 0 }) turning this feature off.
For example:
def List.nthLe {α} (l : List α) (n : ℕ) (_h : n < l.length) : α := sorry
example : List.nthLe [1, 2] 1 (by simp) = 2 := by
-- ⊢ [1, 2].nthLe 1 ⋯ = 2
generalize_proofs h
-- h : 1 < [1, 2].length
-- ⊢ [1, 2].nthLe 1 h = 2
get_elem_tactic
get_elem_tactic is the tactic automatically called by the notation arr[i]
to prove any side conditions that arise when constructing the term
(e.g. the index is in bounds of the array). It just delegates to
get_elem_tactic_extensible and gives a diagnostic error message otherwise;
users are encouraged to extend get_elem_tactic_extensible instead of this tactic.
get_elem_tactic_extensible
get_elem_tactic_extensible is an extensible tactic automatically called
by the notation arr[i] to prove any side conditions that arise when
constructing the term (e.g. the index is in bounds of the array).
The default behavior is to try simp +arith and omega
(for doing linear arithmetic in the index).
(Note that the core tactic get_elem_tactic has already tried
done and assumption before the extensible tactic is called.)
get_elem_tactic_trivial
get_elem_tactic_trivial has been deprecated in favour of get_elem_tactic_extensible.
ghost_calc
ghost_calc is a tactic for proving identities between polynomial functions.
Typically, when faced with a goal like
∀ (x y : 𝕎 R), verschiebung (x * frobenius y) = verschiebung x * y
you can
- call ghost_calc
- do a small amount of manual work -- maybe nothing, maybe rintro, etc
- call ghost_simp
and this will close the goal.
ghost_calc cannot detect whether you are dealing with unary or binary polynomial functions.
You must give it arguments to determine this.
If you are proving a universally quantified goal like the above,
call ghost_calc _ _.
If the variables are introduced already, call ghost_calc x y.
In the unary case, use ghost_calc _ or ghost_calc x.
ghost_calc is a light wrapper around type class inference.
All it does is apply the appropriate extensionality lemma and try to infer the resulting goals.
This is subtle and Lean's elaborator doesn't like it because of the HO unification involved,
so it is easier (and prettier) to put it in a tactic script.
grewrite
grewrite [e] works just like rewrite [e], but e can be a relation other than = or ↔.
For example,
variable {a b c d n : ℤ}
example (h₁ : a < b) (h₂ : b ≤ c) : a + d ≤ c + d := a:ℤb:ℤc:ℤd:ℤn:ℤh₁:a < bh₂:b ≤ c⊢ a + d ≤ c + d
a:ℤb:ℤc:ℤd:ℤn:ℤh₁:a < bh₂:b ≤ c⊢ c + d ≤ c + d; All goals completed! 🐙
example (h : a ≡ b [ZMOD n]) : a ^ 2 ≡ b ^ 2 [ZMOD n] := a:ℤb:ℤc:ℤd:ℤn:ℤh:a ≡ b [ZMOD n]⊢ a ^ 2 ≡ b ^ 2 [ZMOD n]
a:ℤb:ℤc:ℤd:ℤn:ℤh:a ≡ b [ZMOD n]⊢ b ^ 2 ≡ b ^ 2 [ZMOD n]; All goals completed! 🐙
example (h₁ : a ∣ b) (h₂ : b ∣ a ^ 2 * c) : a ∣ b ^ 2 * c := a:ℤb:ℤc:ℤd:ℤn:ℤh₁:a ∣ bh₂:b ∣ a ^ 2 * c⊢ a ∣ b ^ 2 * c
a:ℤb:ℤc:ℤd:ℤn:ℤh₁:a ∣ bh₂:b ∣ b ^ 2 * c⊢ b ∣ b ^ 2 * c
All goals completed! 🐙
To be able to use grewrite, the relevant lemmas need to be tagged with @[gcongr].
To rewrite inside a transitive relation, you can also give it an IsTrans instance.
grind
grind is a tactic inspired by modern SMT solvers. Picture a virtual whiteboard:
every time grind discovers a new equality, inequality, or logical fact,
it writes it on the board, groups together terms known to be equal,
and lets each reasoning engine read from and contribute to the shared workspace.
These engines work together to handle equality reasoning, apply known theorems,
propagate new facts, perform case analysis, and run specialized solvers
for domains like linear arithmetic and commutative rings.
grind is not designed for goals whose search space explodes combinatorially,
think large pigeonhole instances, graph‑coloring reductions, high‑order N‑queens boards,
or a 200‑variable Sudoku encoded as Boolean constraints. Such encodings require
thousands (or millions) of case‑splits that overwhelm grind’s branching search.
For bit‑level or combinatorial problems, consider using bv_decide.
bv_decide calls a state‑of‑the‑art SAT solver (CaDiCaL) and then returns a
compact, machine‑checkable certificate.
Equality reasoning
grind uses congruence closure to track equalities between terms.
When two terms are known to be equal, congruence closure automatically deduces
equalities between more complex expressions built from them.
For example, if a = b, then congruence closure will also conclude that f a = f b
for any function f. This forms the foundation for efficient equality reasoning in grind.
Here is an example:
example (f : Nat → Nat) (h : a = b) : f (f b) = f (f a) := a:ℕb:ℕf:ℕ → ℕh:a = b⊢ f (f b) = f (f a)
All goals completed! 🐙
Applying theorems using E-matching
To apply existing theorems, grind uses a technique called E-matching,
which finds matches for known theorem patterns while taking equalities into account.
Combined with congruence closure, E-matching helps grind discover
non-obvious consequences of theorems and equalities automatically.
Consider the following functions and theorems:
def f (a : Nat) : Nat :=
a + 1
def g (a : Nat) : Nat :=
a - 1
@[grind =]
theorem gf (x : Nat) : g (f x) = x := x:ℕ⊢ g (f x) = x
All goals completed! 🐙
The theorem gf asserts that g (f x) = x for all natural numbers x.
The attribute [grind =] instructs grind to use the left-hand side of the equation,
g (f x), as a pattern for E-matching.
Suppose we now have a goal involving:
example {a b} (h : f b = a) : g a = b := by
grind
Although g a is not an instance of the pattern g (f x),
it becomes one modulo the equation f b = a. By substituting a
with f b in g a, we obtain the term g (f b),
which matches the pattern g (f x) with the assignment x := b.
Thus, the theorem gf is instantiated with x := b,
and the new equality g (f b) = b is asserted.
grind then uses congruence closure to derive the implied equality
g a = g (f b) and completes the proof.
The pattern used to instantiate theorems affects the effectiveness of grind.
For example, the pattern g (f x) is too restrictive in the following case:
the theorem gf will not be instantiated because the goal does not even
contain the function symbol g.
example (h₁ : f b = a) (h₂ : f c = a) : b = c := by grind
You can use the command grind_pattern to manually select a pattern for a given theorem.
In the following example, we instruct grind to use f x as the pattern,
allowing it to solve the goal automatically:
grind_pattern gf => f x
example {a b c} (h₁ : f b = a) (h₂ : f c = a) : b = c := by
grind
You can enable the option trace.grind.ematch.instance to make grind print a
trace message for each theorem instance it generates.
You can also specify a multi-pattern to control when grind should apply a theorem.
A multi-pattern requires that all specified patterns are matched in the current context
before the theorem is applied. This is useful for theorems such as transitivity rules,
where multiple premises must be simultaneously present for the rule to apply.
The following example demonstrates this feature using a transitivity axiom for a binary relation R:
opaque R : Int → Int → Prop
axiom Rtrans {x y z : Int} : R x y → R y z → R x z
grind_pattern Rtrans => R x y, R y z
example {a b c d} : R a b → R b c → R c d → R a d := a:ℤb:ℤc:ℤd:ℤ⊢ R a b → R b c → R c d → R a d
All goals completed! 🐙
By specifying the multi-pattern R x y, R y z, we instruct grind to
instantiate Rtrans only when both R x y and R y z are available in the context.
In the example, grind applies Rtrans to derive R a c from R a b and R b c,
and can then repeat the same reasoning to deduce R a d from R a c and R c d.
Instead of using grind_pattern to explicitly specify a pattern,
you can use the @[grind] attribute or one of its variants, which will use a heuristic to
generate a (multi-)pattern. The complete list is available in the reference manual. The main ones are:
- @[grind →] will select a multi-pattern from the hypotheses of the theorem (i.e. it will use the theorem for forwards reasoning). In more detail, it will traverse the hypotheses of the theorem from left-to-right, and each time it encounters a minimal indexable (i.e. has a constant as its head) subexpression which "covers" (i.e. fixes the value of) an argument which was not previously covered, it will add that subexpression as a pattern, until all arguments have been covered.
- @[grind ←] will select a multi-pattern from the conclusion of the theorem (i.e. it will use the theorem for backwards reasoning). This may fail if not all the arguments to the theorem appear in the conclusion.
- @[grind] will traverse the conclusion and then the hypotheses left-to-right, adding patterns as they increase the coverage, stopping when all arguments are covered.
- @[grind =] checks that the conclusion of the theorem is an equality, and then uses the left-hand-side of the equality as a pattern. This may fail if not all of the arguments appear in the left-hand-side.
Here is the previous example again but using the attribute [grind →]
opaque R : Int → Int → Prop
@[grind →] axiom Rtrans {x y z : Int} : R x y → R y z → R x z
example {a b c d} : R a b → R b c → R c d → R a d := a:ℤb:ℤc:ℤd:ℤ⊢ R a b → R b c → R c d → R a d
All goals completed! 🐙
To control theorem instantiation and avoid generating an unbounded number of instances,
grind uses a generation counter. Terms in the original goal are assigned generation zero.
When grind applies a theorem using terms of generation ≤ n, any new terms it creates
are assigned generation n + 1. This limits how far the tactic explores when applying
theorems and helps prevent an excessive number of instantiations.
Key options:
- grind (ematch := <num>) controls the number of E-matching rounds.
- grind [<name>, ...] instructs grind to use the declaration name during E-matching.
- grind only [<name>, ...] is like grind [<name>, ...] but does not use theorems tagged with @[grind].
- grind (gen := <num>) sets the maximum generation.
Linear integer arithmetic (cutsat)
grind can solve goals that reduce to linear integer arithmetic (LIA) using an
integrated decision procedure called cutsat. It understands
- equalities p = 0
- inequalities p ≤ 0
- disequalities p ≠ 0
- divisibility d ∣ p
The solver incrementally assigns integer values to variables; when a partial assignment violates a constraint it adds a new, implied constraint and retries. This model-based search is complete for LIA.
Key options:
- grind -cutsat disable the solver (useful for debugging)
- grind +qlia accept rational models (shrinks the search space but is incomplete for ℤ)
Examples:
example {x y : Int} : 2 * x + 4 * y ≠ 5 := x:ℤy:ℤ⊢ 2 * x + 4 * y ≠ 5
All goals completed! 🐙
-- Mixing equalities and inequalities.
example {x y : Int} :
2 * x + 3 * y = 0 → 1 ≤ x → y < 1 := x:ℤy:ℤ⊢ 2 * x + 3 * y = 0 → 1 ≤ x → y < 1
All goals completed! 🐙
-- Reasoning with divisibility.
example (a b : Int) :
2 ∣ a + 1 → 2 ∣ b + a → ¬ 2 ∣ b + 2 * a := a:ℤb:ℤ⊢ 2 ∣ a + 1 → 2 ∣ b + a → ¬2 ∣ b + 2 * a
All goals completed! 🐙
example (x y : Int) :
27 ≤ 11*x + 13*y →
11*x + 13*y ≤ 45 →
-10 ≤ 7*x - 9*y →
7*x - 9*y ≤ 4 → False := x:ℤy:ℤ⊢ 27 ≤ 11 * x + 13 * y → 11 * x + 13 * y ≤ 45 → -10 ≤ 7 * x - 9 * y → 7 * x - 9 * y ≤ 4 → False
All goals completed! 🐙
-- Types that implement the `ToInt` type-class.
example (a b c : UInt64)
: a ≤ 2 → b ≤ 3 → c - a - b = 0 → c ≤ 5 := a:UInt64b:UInt64c:UInt64⊢ a ≤ 2 → b ≤ 3 → c - a - b = 0 → c ≤ 5
All goals completed! 🐙
Algebraic solver (ring)
grind ships with an algebraic solver nick-named ring for goals that can
be phrased as polynomial equations (or disequations) over commutative rings,
semirings, or fields.
Works out of the box: all core numeric types and relevant Mathlib types already provide the required type-class instances, so the solver is ready to use in most developments.
What it can decide:
- equalities of the form p = q
- disequalities p ≠ q
- basic reasoning under field inverses (a / b := a * b⁻¹)
- goals that mix ring facts with other grind engines
Key options:
- grind -ring turn the solver off (useful when debugging)
- grind (ringSteps := n) cap the number of steps performed by this procedure.
Examples
open Lean Grind
example [CommRing α] (x : α) : (x + 1) * (x - 1) = x^2 - 1 := by
grind
-- Characteristic 256 means 16 * 16 = 0.
example [CommRing α] [IsCharP α 256] (x : α) :
(x + 16) * (x - 16) = x^2 := by
grind
-- Works on built-in rings such as `UInt8`.
example (x : UInt8) : (x + 16) * (x - 16) = x^2 := by
grind
example [CommRing α] (a b c : α) :
a + b + c = 3 →
a^2 + b^2 + c^2 = 5 →
a^3 + b^3 + c^3 = 7 →
a^4 + b^4 = 9 - c^4 := by
grind
example [Field α] [NoNatZeroDivisors α] (a : α) :
1 / a + 1 / (2 * a) = 3 / (2 * a) := by
grind
Other options
- grind (splits := <num>) caps the depth of the search tree. Once a branch performs num splits, grind stops splitting further in that branch.
- grind -splitIte disables case splitting on if-then-else expressions.
- grind -splitMatch disables case splitting on match expressions.
- grind +splitImp instructs grind to split on any hypothesis A → B whose antecedent A is propositional.
- grind -linarith disables the linear arithmetic solver for (ordered) modules and rings.
Additional Examples
example {a b} {as bs : List α} : (as ++ bs ++ [b]).getLastD a = b := α:Type u_1a:αb:αas:List αbs:List α⊢ (as ++ bs ++ [b]).getLastD a = b
All goals completed! 🐙
example (x : BitVec (w+1)) : (BitVec.cons x.msb (x.setWidth w)) = x := w:ℕx:BitVec (w + 1)⊢ BitVec.cons x.msb (BitVec.setWidth w x) = x
All goals completed! 🐙
example (as : Array α) (lo hi i j : Nat) :
lo ≤ i → i < j → j ≤ hi → j < as.size → min lo (as.size - 1) ≤ i := α:Type u_1as:Array αlo:ℕhi:ℕi:ℕj:ℕ⊢ lo ≤ i → i < j → j ≤ hi → j < as.size → min lo (as.size - 1) ≤ i
All goals completed! 🐙
grind?
group
Tactic for normalizing expressions in multiplicative groups, without assuming commutativity, using only the group axioms without any information about which group is manipulated.
(For additive commutative groups, use the abel tactic instead.)
Example:
example {G : Type} [Group G] (a b c d : G) (h : c = (a*b^2)*((b*b)⁻¹*a⁻¹)*d) : a*c*d⁻¹ = a := G:Typeinst✝:Group Ga:Gb:Gc:Gd:Gh:c = a * b ^ 2 * ((b * b)⁻¹ * a⁻¹) * d⊢ a * c * d⁻¹ = a
G:Typeinst✝:Group Ga:Gb:Gc:Gd:Gh:c = d⊢ a * c * d⁻¹ = a -- normalizes `h` which becomes `h : c = d`
G:Typeinst✝:Group Ga:Gb:Gc:Gd:Gh:c = d⊢ a * d * d⁻¹ = a -- the goal is now `a*d*d⁻¹ = a`
All goals completed! 🐙 -- which then normalized and closed
grw
grw [e] works just like rw [e], but e can be a relation other than = or ↔.
For example,
variable {a b c d n : ℤ}
example (h₁ : a < b) (h₂ : b ≤ c) : a + d ≤ c + d := a:ℤb:ℤc:ℤd:ℤn:ℤh₁:a < bh₂:b ≤ c⊢ a + d ≤ c + d
grw [h₁, h₂]
All goals completed! 🐙
example (h : a ≡ b [ZMOD n]) : a ^ 2 ≡ b ^ 2 [ZMOD n] := a:ℤb:ℤc:ℤd:ℤn:ℤh:a ≡ b [ZMOD n]⊢ a ^ 2 ≡ b ^ 2 [ZMOD n]
grw [h]
All goals completed! 🐙
example (h₁ : a ∣ b) (h₂ : b ∣ a ^ 2 * c) : a ∣ b ^ 2 * c := a:ℤb:ℤc:ℤd:ℤn:ℤh₁:a ∣ bh₂:b ∣ a ^ 2 * c⊢ a ∣ b ^ 2 * c
grw [h₁] at *
a:ℤb:ℤc:ℤd:ℤn:ℤh₁:a ∣ bh₂:b ∣ b ^ 2 * c⊢ b ∣ b ^ 2 * c
All goals completed! 🐙
To be able to use grw, the relevant lemmas need to be tagged with @[gcongr].
To rewrite inside a transitive relation, you can also give it an IsTrans instance.
guard_expr
Tactic to check equality of two expressions.
- guard_expr e = e' checks that e and e' are defeq at reducible transparency.
- guard_expr e =~ e' checks that e and e' are defeq at default transparency.
- guard_expr e =ₛ e' checks that e and e' are syntactically equal.
- guard_expr e =ₐ e' checks that e and e' are alpha-equivalent.
Both e and e' are elaborated then have their metavariables instantiated before the equality
check. Their types are unified (using isDefEqGuarded) before synthetic metavariables are
processed, which helps with default instance handling.
guard_hyp
Tactic to check that a named hypothesis has a given type and/or value.
- guard_hyp h : t checks the type up to reducible defeq,
- guard_hyp h :~ t checks the type up to default defeq,
- guard_hyp h :ₛ t checks the type up to syntactic equality,
- guard_hyp h :ₐ t checks the type up to alpha equality.
- guard_hyp h := v checks value up to reducible defeq,
- guard_hyp h :=~ v checks value up to default defeq,
- guard_hyp h :=ₛ v checks value up to syntactic equality,
- guard_hyp h :=ₐ v checks the value up to alpha equality.
The value v is elaborated using the type of h as the expected type.
guard_hyp_nums
guard_hyp_nums n succeeds if there are exactly n hypotheses and fails otherwise.
Note that, depending on what options are set, some hypotheses in the local context might not be printed in the goal view. This tactic computes the total number of hypotheses, not the number of visible hypotheses.
guard_target
Tactic to check that the target agrees with a given expression.
- guard_target = e checks that the target is defeq at reducible transparency to e.
- guard_target =~ e checks that the target is defeq at default transparency to e.
- guard_target =ₛ e checks that the target is syntactically equal to e.
- guard_target =ₐ e checks that the target is alpha-equivalent to e.
The term e is elaborated with the type of the goal as the expected type, which is mostly
useful within conv mode.
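These guards are mostly useful in tests; a small sketch:
example (n : ℕ) (h : n = 0) : n + 1 = 1 := by
  guard_hyp h : n = 0
  guard_target = (n + 1 = 1)
  omega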
have
have
The have tactic is for adding hypotheses to the local context of the main goal.
- have h : t := e adds the hypothesis h : t if e is a term of type t.
- have h := e uses the type of e for t.
- have : t := e and have := e use this for the name of the hypothesis.
- have pat := e for a pattern pat is equivalent to match e with | pat => _, where _ stands for the tactics that follow this one. It is convenient for types that have only one applicable constructor. For example, given h : p ∧ q ∧ r, have ⟨h₁, h₂, h₃⟩ := h produces the hypotheses h₁ : p, h₂ : q, and h₃ : r.
- The syntax have (eq := h) pat := e is equivalent to match h : e with | pat => _, which adds the equation h : e = pat to the local context.
The tactic supports all the same syntax variants and options as the have term.
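For example, the pattern form:
example (p q r : Prop) (h : p ∧ q ∧ r) : r := by
  have ⟨hp, hq, hr⟩ := h
  exact hr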
have!?
- have? using a, b, c tries to find a lemma which makes use of each of the local hypotheses a, b, c, and reports any results via trace messages.
- have? : h using a, b, c only returns lemmas whose type matches h (which may contain _).
- have?! using a, b, c will also call have to add results to the local goal state.
Note that have? (unlike apply?) does not inspect the goal at all,
only the types of the lemmas in the using clause.
have? should not be left in proofs; it is a search tool, like apply?.
Suggestions are printed as have := f a b c.
have?
- have? using a, b, c tries to find a lemma which makes use of each of the local hypotheses a, b, c, and reports any results via trace messages.
- have? : h using a, b, c only returns lemmas whose type matches h (which may contain _).
- have?! using a, b, c will also call have to add results to the local goal state.
Note that have? (unlike apply?) does not inspect the goal at all,
only the types of the lemmas in the using clause.
have? should not be left in proofs; it is a search tool, like apply?.
Suggestions are printed as have := f a b c.
have?!
- have? using a, b, c tries to find a lemma which makes use of each of the local hypotheses a, b, c, and reports any results via trace messages.
- have? : h using a, b, c only returns lemmas whose type matches h (which may contain _).
- have?! using a, b, c will also call have to add results to the local goal state.
Note that have? (unlike apply?) does not inspect the goal at all,
only the types of the lemmas in the using clause.
have? should not be left in proofs; it is a search tool, like apply?.
Suggestions are printed as have := f a b c.
hint
The hint tactic tries every tactic registered using register_hint tac,
and reports any that succeed.
induction
Assuming x is a variable in the local context with an inductive type,
induction x applies induction on x to the main goal,
producing one goal for each constructor of the inductive type,
in which the target is replaced by a general instance of that constructor
and an inductive hypothesis is added for each recursive argument to the constructor.
If the type of an element in the local context depends on x,
that element is reverted and reintroduced afterward,
so that the inductive hypothesis incorporates that hypothesis as well.
For example, given n : Nat and a goal with a hypothesis h : P n and target Q n,
induction n produces one goal with hypothesis h : P 0 and target Q 0,
and one goal with hypotheses h : P (Nat.succ a) and ih₁ : P a → Q a and target Q (Nat.succ a).
Here the names a and ih₁ are chosen automatically and are not accessible.
You can use with to provide the variables names for each constructor.
- induction e, where e is an expression instead of a variable, generalizes e in the goal, and then performs induction on the resulting variable.
- induction e using r allows the user to specify the principle of induction that should be used. Here r should be a term whose result type must be of the form C t, where C is a bound variable and t is a (possibly empty) sequence of bound variables.
- induction e generalizing z₁ ... zₙ, where z₁ ... zₙ are variables in the local context, generalizes over z₁ ... zₙ before applying the induction but then introduces them in each goal. In other words, the net effect is that each inductive hypothesis is generalized.
- Given x : Nat, induction x with | zero => tac₁ | succ x' ih => tac₂ uses tactic tac₁ for the zero case, and tac₂ for the succ case.
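For example, naming the cases with with:
example {α : Type} (l : List α) : l ++ [] = l := by
  induction l with
  | nil => rfl
  | cons x xs ih => rw [List.cons_append, ih]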
induction'
The induction' tactic is similar to the induction tactic in Lean 4 core,
but with slightly different syntax (such as, no requirement to name the constructors).
open Nat
example (n : ℕ) : 0 < factorial n := n:ℕ⊢ 0 < n !
⊢ 0 < 0!n:ℕih:0 < n !⊢ 0 < (n + 1)!
⊢ 0 < 0! ⊢ 0 < 1
All goals completed! 🐙
n:ℕih:0 < n !⊢ 0 < (n + 1)! n:ℕih:0 < n !⊢ 0 < (n + 1) * n !
All goals completed! 🐙
example (n : ℕ) : 0 < factorial n := n:ℕ⊢ 0 < n !
⊢ 0 < 0!n✝:ℕa✝:0 < n✝!⊢ 0 < (n✝ + 1)!
case zero ⊢ 0 < 0!
⊢ 0 < 1
All goals completed! 🐙
case succ n ih n:ℕih:0 < n !⊢ 0 < (n + 1)!
n:ℕih:0 < n !⊢ 0 < (n + 1) * n !
All goals completed! 🐙
infer_instance
infer_instance is an abbreviation for exact inferInstance.
It synthesizes a value of any target type by typeclass inference.
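For example:
example : Inhabited ℕ := by infer_instance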
inhabit
inhabit α tries to derive a Nonempty α instance and
then uses it to make an Inhabited α instance.
If the target is a Prop, this is done constructively. Otherwise, it uses Classical.choice.
init_ring
init_ring is an auxiliary tactic that discharges goals factoring init over ring operations.
injection
The injection tactic is based on the fact that constructors of inductive data
types are injections.
That means that if c is a constructor of an inductive datatype, and if (c t₁)
and (c t₂) are two terms that are equal then t₁ and t₂ are equal too.
If q is a proof of a statement of conclusion t₁ = t₂, then injection applies
injectivity to derive the equality of all arguments of t₁ and t₂ placed in
the same positions. For example, from (a::b) = (c::d) we derive a=c and b=d.
To use this tactic t₁ and t₂ should be constructor applications of the same constructor.
Given h : a::b = c::d, the tactic injection h adds two new hypotheses with types
a = c and b = d to the main goal.
The tactic injection h with h₁ h₂ uses the names h₁ and h₂ to name the new hypotheses.
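For example:
example (a c : ℕ) (b d : List ℕ) (h : a :: b = c :: d) : a = c := by
  injection h with h₁ h₂
  exact h₁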
injections
injections applies injection to all hypotheses recursively
(since injection can produce new hypotheses). Useful for destructing nested
constructor equalities like (a::b::c) = (d::e::f).
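A small sketch; omega closes the arithmetic goal left over once the nested equalities have been extracted (or substituted):
example (a b c d : ℕ) (h : [a, b] = [c, d]) : a = c := by
  injections
  omega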
interval_cases
interval_cases n searches for upper and lower bounds on a variable n,
and if bounds are found,
splits into separate cases for each possible value of n.
As an example, in
example (n : ℕ) (w₁ : n ≥ 3) (w₂ : n < 5) : n = 3 ∨ n = 4 := n:ℕw₁:n ≥ 3w₂:n < 5⊢ n = 3 ∨ n = 4
n:ℕw₁:3 ≥ 3w₂:3 < 5⊢ 3 = 3 ∨ 3 = 4n:ℕw₁:4 ≥ 3w₂:4 < 5⊢ 4 = 3 ∨ 4 = 4
all_goals All goals completed! 🐙
after interval_cases n, the goals are 3 = 3 ∨ 3 = 4 and 4 = 3 ∨ 4 = 4.
You can also explicitly specify a lower and upper bound to use,
as interval_cases using hl, hu.
The hypotheses should be in the form hl : a ≤ n and hu : n < b,
in which case interval_cases calls fin_cases on the resulting fact n ∈ Set.Ico a b.
You can specify a name h for the new hypothesis,
as interval_cases h : n or interval_cases h : n using hl, hu.
intro
Introduces one or more hypotheses, optionally naming and/or pattern-matching them.
For each hypothesis to be introduced, the remaining main goal's target type must
be a let or function type.
- intro by itself introduces one anonymous hypothesis, which can be accessed by e.g. assumption.
- intro x y introduces two hypotheses and names them. Individual hypotheses can be anonymized via _, or matched against a pattern: intro (a, b) -- ..., a : α, b : β ⊢ ...
- Alternatively, intro can be combined with pattern matching much like fun: intro | n + 1, 0 => tac | ...
intros
Introduces zero or more hypotheses, optionally naming them.
- intros is equivalent to repeatedly applying intro until the goal is not an obvious candidate for intro, which is to say that so long as the goal is a let or a pi type (e.g. an implication, function, or universal quantifier), the intros tactic will introduce an anonymous hypothesis. This tactic does not unfold definitions.
- intros x y ... is equivalent to intro x y ..., introducing hypotheses for each supplied argument and unfolding definitions as necessary. Each argument can be either an identifier or a _. An identifier indicates a name to use for the corresponding introduced hypothesis, and a _ indicates that the hypotheses should be introduced anonymously.
Examples
Basic properties:
def AllEven (f : Nat → Nat) := ∀ n, f n % 2 = 0
-- Introduces the two obvious hypotheses automatically
example : ∀ (f : Nat → Nat), AllEven f → AllEven (fun k => f (k + 1)) := ⊢ ∀ (f : ℕ → ℕ), AllEven f → AllEven fun k => f (k + 1)
f✝:ℕ → ℕa✝:AllEven f✝⊢ AllEven fun k => f✝ (k + 1)
/- Tactic state
f✝ : Nat → Nat
a✝ : AllEven f✝
⊢ AllEven fun k => f✝ (k + 1) -/
All goals completed! 🐙
-- Introduces exactly two hypotheses, naming only the first
example : ∀ (f : Nat → Nat), AllEven f → AllEven (fun k => f (k + 1)) := ⊢ ∀ (f : ℕ → ℕ), AllEven f → AllEven fun k => f (k + 1)
g:ℕ → ℕa✝:AllEven g⊢ AllEven fun k => g (k + 1)
/- Tactic state
g : Nat → Nat
a✝ : AllEven g
⊢ AllEven fun k => g (k + 1) -/
All goals completed! 🐙
-- Introduces exactly three hypotheses, which requires unfolding `AllEven`
example : ∀ (f : Nat → Nat), AllEven f → AllEven (fun k => f (k + 1)) := ⊢ ∀ (f : ℕ → ℕ), AllEven f → AllEven fun k => f (k + 1)
f:ℕ → ℕh:AllEven fn:ℕ⊢ (fun k => f (k + 1)) n % 2 = 0
/- Tactic state
f : Nat → Nat
h : AllEven f
n : Nat
⊢ (fun k => f (k + 1)) n % 2 = 0 -/
All goals completed! 🐙
Implications:
example (p q : Prop) : p → q → p := p:Propq:Prop⊢ p → q → p
p:Propq:Propa✝¹:pa✝:q⊢ p
/- Tactic state
a✝¹ : p
a✝ : q
⊢ p -/
All goals completed! 🐙
Let bindings:
example : let n := 1; let k := 2; n + k = 3 := ⊢ let n := 1;
let k := 2;
n + k = 3
n✝:ℕ := 1k✝:ℕ := 2⊢ n✝ + k✝ = 3
/- n✝ : Nat := 1
k✝ : Nat := 2
⊢ n✝ + k✝ = 3 -/
All goals completed! 🐙
introv
The tactic introv allows the user to automatically introduce the variables of a theorem and
explicitly name the non-dependent hypotheses.
Any dependent hypotheses are assigned their default names.
Examples:
example : ∀ a b : Nat, a = b → b = a := by introv h; exact h.symm
The state after introv h is
a b : ℕ, h : a = b ⊢ b = a
example : ∀ a b : Nat, a = b → ∀ c, b = c → a = c := by introv h₁ h₂; exact h₁.trans h₂
The state after introv h₁ h₂ is
a b : ℕ, h₁ : a = b, c : ℕ, h₂ : b = c ⊢ a = c
isBoundedDefault
Filters are automatically bounded or cobounded in complete lattices. To use the same statements
in complete and conditionally complete lattices but let automation fill automatically the
boundedness proofs in complete lattices, we use the tactic isBoundedDefault in the statements,
in the form (hf : f.IsBounded (≥) := by isBoundedDefault).
itauto
A decision procedure for intuitionistic propositional logic. Unlike finish and tauto! this
tactic never uses the law of excluded middle (without the ! option), and the proof search is
tailored for this use case. (itauto! will work as a classical SAT solver, but the algorithm is
not very good in this situation.)
example (p : Prop) : ¬ (p ↔ ¬ p) := p:Prop⊢ ¬(p ↔ ¬p) All goals completed! 🐙
itauto [a, b] will additionally attempt case analysis on a and b assuming that it can derive
Decidable a and Decidable b. itauto * will case on all decidable propositions that it can
find among the atomic propositions, and itauto! * will case on all propositional atoms.
Warning: This can blow up the proof search, so it should be used sparingly.
itauto!
A decision procedure for intuitionistic propositional logic. Unlike finish and tauto! this
tactic never uses the law of excluded middle (without the ! option), and the proof search is
tailored for this use case. (itauto! will work as a classical SAT solver, but the algorithm is
not very good in this situation.)
example (p : Prop) : ¬ (p ↔ ¬ p) := p:Prop⊢ ¬(p ↔ ¬p) All goals completed! 🐙
itauto [a, b] will additionally attempt case analysis on a and b assuming that it can derive
Decidable a and Decidable b. itauto * will case on all decidable propositions that it can
find among the atomic propositions, and itauto! * will case on all propositional atoms.
Warning: This can blow up the proof search, so it should be used sparingly.
iterate
iterate n tac runs tac exactly n times.
iterate tac runs tac repeatedly until failure.
iterate's argument is a tactic sequence,
so multiple tactics can be run using iterate n (tac₁; tac₂; ⋯) or
iterate n
tac₁
tac₂
⋯
left
Applies the first constructor when the goal is an inductive type with exactly two constructors, or fails otherwise.
example : True ∨ False := ⊢ True ∨ False
⊢ True
All goals completed! 🐙
let
The let tactic is for adding definitions to the local context of the main goal.
- let x : t := e adds the definition x : t := e if e is a term of type t.
- let x := e uses the type of e for t.
- let : t := e and let := e use this for the name of the hypothesis.
- let pat := e for a pattern pat is equivalent to match e with | pat => _, where _ stands for the tactics that follow this one. It is convenient for types that have only one applicable constructor. For example, given p : α × β × γ, let ⟨x, y, z⟩ := p produces the local variables x : α, y : β, and z : γ.
- The syntax let (eq := h) pat := e is equivalent to match h : e with | pat => _, which adds the equation h : e = pat to the local context.
The tactic supports all the same syntax variants and options as the let term.
let
let
let_to_have
Transforms let expressions into have expressions when possible.
- let_to_have transforms lets in the target.
- let_to_have at h transforms lets in the given local hypothesis.
lift
Lift an expression to another type.
- Usage: 'lift' expr 'to' expr ('using' expr)? ('with' id (id id?)?)?.
- If n : ℤ and hn : n ≥ 0 then the tactic lift n to ℕ using hn creates a new constant of type ℕ, also named n, and replaces all occurrences of the old variable (n : ℤ) with ↑n (where n is the new variable). It will remove n and hn from the context.
  - So for example the tactic lift n to ℕ using hn transforms the goal n : ℤ, hn : n ≥ 0, h : P n ⊢ n = 3 to n : ℕ, h : P ↑n ⊢ ↑n = 3 (here P is some term of type ℤ → Prop).
- The argument using hn is optional; the tactic lift n to ℕ does the same, but also creates a new subgoal that n ≥ 0 (where n is the old variable). This subgoal will be placed at the top of the goal list.
  - So for example the tactic lift n to ℕ transforms the goal n : ℤ, h : P n ⊢ n = 3 to two goals n : ℤ, h : P n ⊢ n ≥ 0 and n : ℕ, h : P ↑n ⊢ ↑n = 3.
- You can also use lift n to ℕ using e where e is any expression of type n ≥ 0.
- Use lift n to ℕ with k to specify the name of the new variable.
- Use lift n to ℕ with k hk to also specify the name of the equality ↑k = n. In this case, n will remain in the context. You can use rfl for the name of hk to substitute n away (i.e. the default behavior).
- You can also use lift e to ℕ with k hk where e is any expression of type ℤ. In this case, the hk will always stay in the context, but it will be used to rewrite e in all hypotheses and the target.
  - So for example the tactic lift n + 3 to ℕ using hn with k hk transforms the goal n : ℤ, hn : n + 3 ≥ 0, h : P (n + 3) ⊢ n + 3 = 2 * n to the goal n : ℤ, k : ℕ, hk : ↑k = n + 3, h : P ↑k ⊢ ↑k = 2 * n.
- The tactic lift n to ℕ using h will remove h from the context. If you want to keep it, specify it again as the third argument to with, like this: lift n to ℕ using h with n rfl h.
- More generally, this can lift an expression from α to β assuming that there is an instance of CanLift α β. In this case the proof obligation is specified by CanLift.prf.
- Given an instance CanLift β γ, it can also lift α → β to α → γ; more generally, given β : Π a : α, Type*, γ : Π a : α, Type*, and [Π a : α, CanLift (β a) (γ a)], it automatically generates an instance CanLift (Π a, β a) (Π a, γ a).
lift is in some sense dual to the zify tactic. lift (z : ℤ) to ℕ will change the type of an
integer z (in the supertype) to ℕ (the subtype), given a proof that z ≥ 0;
propositions concerning z will still be over ℤ. zify changes propositions about ℕ (the
subtype) to propositions about ℤ (the supertype), without changing the type of any variable.
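For example (an illustrative sketch, not from the original docstring):
example (n : ℤ) (hn : n ≥ 0) : 0 ≤ n ^ 2 := by
  lift n to ℕ using hn
  -- the goal is now `0 ≤ (↑n : ℤ) ^ 2` with `n : ℕ`
  positivity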
lift_lets
Lifts let and have expressions within a term as far out as possible.
It is like extract_lets +lift, but the top-level lets at the end of the procedure
are not extracted as local hypotheses.
- lift_lets lifts let expressions in the target.
- lift_lets at h lifts let expressions at the given local hypothesis.
For example,
example : (let x := 1; x) = 1 := by lift_lets -- ⊢ let x := 1; x = 1 ...
liftable_prefixes
Internal tactic used in coherence.
Rewrites an equation f = g as f₀ ≫ f₁ = g₀ ≫ g₁,
where f₀ and g₀ are maximal prefixes of f and g (possibly after reassociating)
which are "liftable" (i.e. expressible as compositions of unitors and associators).
linarith
linarith attempts to find a contradiction between hypotheses that are linear (in)equalities.
Equivalently, it can prove a linear inequality by assuming its negation and proving False.
In theory, linarith should prove any goal that is true in the theory of linear arithmetic over
the rationals. While there is some special handling for non-dense orders like Nat and Int,
this tactic is not complete for these theories and will not prove every true goal. It will solve
goals over arbitrary types that instantiate CommRing, LinearOrder and IsStrictOrderedRing.
An example:
example (x y z : ℚ) (h1 : 2*x < 3*y) (h2 : -4*x + 2*z < 0)
(h3 : 12*y - 4* z < 0) : False := x:ℚy:ℚz:ℚh1:2 * x < 3 * yh2:-4 * x + 2 * z < 0h3:12 * y - 4 * z < 0⊢ False
All goals completed! 🐙
linarith will use all appropriate hypotheses and the negation of the goal, if applicable.
Disequality hypotheses require case splitting and are not normally considered
(see the splitNe option below).
linarith [t1, t2, t3] will additionally use proof terms t1, t2, t3.
linarith only [h1, h2, h3, t1, t2, t3] will use only the goal (if relevant), local hypotheses
h1, h2, h3, and proofs t1, t2, t3. It will ignore the rest of the local context.
linarith! will use a stronger reducibility setting to try to identify atoms. For example,
example (x : ℚ) : id x ≥ x := by linarith
will fail, because linarith will not identify x and id x. linarith! will.
This can sometimes be expensive.
linarith (config := { .. }) takes a config object with five
optional arguments:
- discharger specifies a tactic to be used for reducing an algebraic equation in the proof stage. The default is ring. Other options include simp for basic problems.
- transparency controls how hard linarith will try to match atoms to each other. By default it will only unfold reducible definitions.
- If splitHypotheses is true, linarith will split conjunctions in the context into separate hypotheses.
- If splitNe is true, linarith will case split on disequality hypotheses. For a given x ≠ y hypothesis, linarith is run with both x < y and x > y, and so this runs linarith exponentially many times with respect to the number of disequality hypotheses. (false by default.)
- If exfalso is false, linarith will fail when the goal is neither an inequality nor False. (true by default.)
- restrict_type (not yet implemented in mathlib4) will only use hypotheses that are inequalities over tp. This is useful if you have e.g. both integer- and rational-valued inequalities in the local context, which can sometimes confuse the tactic.
A variant, nlinarith, does some basic preprocessing to handle some nonlinear goals.
The option set_option trace.linarith true will trace certain intermediate stages of the linarith
routine.
linarith!
linarith attempts to find a contradiction between hypotheses that are linear (in)equalities.
Equivalently, it can prove a linear inequality by assuming its negation and proving False.
In theory, linarith should prove any goal that is true in the theory of linear arithmetic over
the rationals. While there is some special handling for non-dense orders like Nat and Int,
this tactic is not complete for these theories and will not prove every true goal. It will solve
goals over arbitrary types that instantiate CommRing, LinearOrder and IsStrictOrderedRing.
An example:
example (x y z : ℚ) (h1 : 2*x < 3*y) (h2 : -4*x + 2*z < 0)
(h3 : 12*y - 4* z < 0) : False := x:ℚy:ℚz:ℚh1:2 * x < 3 * yh2:-4 * x + 2 * z < 0h3:12 * y - 4 * z < 0⊢ False
All goals completed! 🐙
linarith will use all appropriate hypotheses and the negation of the goal, if applicable.
Disequality hypotheses require case splitting and are not normally considered
(see the splitNe option below).
linarith [t1, t2, t3] will additionally use proof terms t1, t2, t3.
linarith only [h1, h2, h3, t1, t2, t3] will use only the goal (if relevant), local hypotheses
h1, h2, h3, and proofs t1, t2, t3. It will ignore the rest of the local context.
linarith! will use a stronger reducibility setting to try to identify atoms. For example,
example (x : ℚ) : id x ≥ x := by linarith
will fail, because linarith will not identify x and id x. linarith! will.
This can sometimes be expensive.
linarith (config := { .. }) takes a config object with five
optional arguments:
- discharger specifies a tactic to be used for reducing an algebraic equation in the proof stage. The default is ring. Other options include simp for basic problems.
- transparency controls how hard linarith will try to match atoms to each other. By default it will only unfold reducible definitions.
- If splitHypotheses is true, linarith will split conjunctions in the context into separate hypotheses.
- If splitNe is true, linarith will case split on disequality hypotheses. For a given x ≠ y hypothesis, linarith is run with both x < y and x > y, and so this runs linarith exponentially many times with respect to the number of disequality hypotheses. (false by default.)
- If exfalso is false, linarith will fail when the goal is neither an inequality nor False. (true by default.)
- restrict_type (not yet implemented in mathlib4) will only use hypotheses that are inequalities over tp. This is useful if you have e.g. both integer- and rational-valued inequalities in the local context, which can sometimes confuse the tactic.
A variant, nlinarith, does some basic preprocessing to handle some nonlinear goals.
The option set_option trace.linarith true will trace certain intermediate stages of the linarith
routine.
linear_combination
The linear_combination tactic attempts to prove an (in)equality goal by exhibiting it as a
specified linear combination of (in)equality hypotheses, or other (in)equality proof terms, modulo
(A) moving terms between the LHS and RHS of the (in)equalities, and (B) a normalization tactic
which by default is ring-normalization.
Example usage:
example {a b : ℚ} (h1 : a = 1) (h2 : b = 3) : (a + b) / 2 = 2 := by
linear_combination (h1 + h2) / 2
example {a b : ℚ} (h1 : a ≤ 1) (h2 : b ≤ 3) : (a + b) / 2 ≤ 2 := by
linear_combination (h1 + h2) / 2
example {a b : ℚ} : 2 * a * b ≤ a ^ 2 + b ^ 2 := by
linear_combination sq_nonneg (a - b)
example {x y z w : ℤ} (h₁ : x * z = y ^ 2) (h₂ : y * w = z ^ 2) :
z * (x * w - y * z) = 0 := by
linear_combination w * h₁ + y * h₂
example {x : ℚ} (h : x ≥ 5) : x ^ 2 > 2 * x + 11 := by
linear_combination (x + 3) * h
example {R : Type*} [CommRing R] {a b : R} (h : a = b) : a ^ 2 = b ^ 2 := by
linear_combination (a + b) * h
example {A : Type*} [AddCommGroup A]
{x y z : A} (h1 : x + y = 10 • z) (h2 : x - y = 6 • z) :
2 • x = 2 • (8 • z) := by
linear_combination (norm := abel) h1 + h2
example (x y : ℤ) (h1 : x * y + 2 * x = 1) (h2 : x = y) :
x * y = -2 * y + 1 := by
linear_combination (norm := ring_nf) -2 * h2
-- leaves goal `⊢ x * y + x * 2 - 1 = 0`
The input e in linear_combination e is a linear combination of proofs of (in)equalities,
given as a sum/difference of coefficients multiplied by expressions.
The coefficients may be arbitrary expressions (with nonnegativity constraints in the case of
inequalities).
The expressions can be arbitrary proof terms proving (in)equalities;
most commonly they are hypothesis names h1, h2, ....
The left and right sides of all the (in)equalities should have the same type α, and the
coefficients should also have type α. For full functionality α should be a commutative ring --
strictly speaking, a commutative semiring with "cancellative" addition (in the semiring case,
negation and subtraction will be handled "formally" as if operating in the enveloping ring). If a
nonstandard normalization is used (for example abel or skip), the tactic will work over types
α with less algebraic structure: for equalities, the minimum is instances of
[Add α] [IsRightCancelAdd α] together with instances of whatever operations are used in the tactic
call.
The variant linear_combination (norm := tac) e specifies explicitly the "normalization tactic"
tac to be run on the subgoal(s) after constructing the linear combination.
- The default normalization tactic is ring1 (for equalities) or Mathlib.Tactic.Ring.prove{LE,LT} (for inequalities). These are finishing tactics: they close the goal or fail.
- When working in algebraic categories other than commutative rings -- for example fields, abelian groups, modules -- it is sometimes useful to use normalization tactics adapted to those categories (field_simp, abel, module).
- To skip normalization entirely, use skip as the normalization tactic.
- The linear_combination tactic creates a linear combination by adding the provided (in)equalities together from left to right, so if tac is not invariant under commutation of additive expressions, then the order of the input hypotheses can matter.
The variant linear_combination (exp := n) e will take the goal to the nth power before
subtracting the combination e. In other words, if the goal is t1 = t2,
linear_combination (exp := n) e will change the goal to (t1 - t2)^n = 0 before proceeding as
above. This variant is implemented only for linear combinations of equalities (i.e., not for
inequalities).
linear_combination'
linear_combination' attempts to simplify the target by creating a linear combination
of a list of equalities and subtracting it from the target.
The tactic will create a linear
combination by adding the equalities together from left to right, so the order
of the input hypotheses does matter. If the norm field of the
tactic is set to skip, then the tactic will simply set the user up to
prove their target using the linear combination instead of normalizing the subtraction.
Note: There is also a similar tactic linear_combination (no prime); this version is
provided for backward compatibility. Compared to this tactic, linear_combination:
- drops the ← syntax for reversing an equation, instead offering this operation using the - syntax
- does not support multiplication of two hypotheses (h1 * h2), division by a hypothesis (3 / h), or inversion of a hypothesis (h⁻¹)
- produces noisy output when the user adds or subtracts a constant to a hypothesis (h + 3)
Note: The left and right sides of all the equalities should have the same
type, and the coefficients should also have this type. There must be
instances of Mul and AddGroup for this type.
- The input e in linear_combination' e is a linear combination of proofs of equalities, given as a sum/difference of coefficients multiplied by expressions. The coefficients may be arbitrary expressions. The expressions can be arbitrary proof terms proving equalities. Most commonly they are hypothesis names h1, h2, ....
- linear_combination' (norm := tac) e runs the "normalization tactic" tac on the subgoal(s) after constructing the linear combination.
- linear_combination' (exp := n) e will take the goal to the nth power before subtracting the combination e. In other words, if the goal is t1 = t2, linear_combination' (exp := n) e will change the goal to (t1 - t2)^n = 0 before proceeding as above. This feature is not supported for linear_combination2.
- linear_combination2 e is the same as linear_combination' e but it produces two subgoals instead of one: rather than proving that (a - b) - (a' - b') = 0 where a' = b' is the linear combination from e and a = b is the goal, it instead attempts to prove a = a' and b = b'. Because it does not use subtraction, this form is applicable also to semirings.
  - Note that a goal which is provable by linear_combination' e may not be provable by linear_combination2 e; in general you may need to add a coefficient to e to make both sides match, as in linear_combination2 e + c.
- You can also reverse equalities using ← h, so for example if h₁ : a = b then 2 * (← h) is a proof of 2 * b = 2 * a.
Example Usage:
example (x y : ℤ) (h1 : x*y + 2*x = 1) (h2 : x = y) : x*y = -2*y + 1 := by
linear_combination' 1*h1 - 2*h2
example (x y : ℤ) (h1 : x*y + 2*x = 1) (h2 : x = y) : x*y = -2*y + 1 := by
linear_combination' h1 - 2*h2
example (x y : ℤ) (h1 : x*y + 2*x = 1) (h2 : x = y) : x*y = -2*y + 1 := by
linear_combination' (norm := ring_nf) -2*h2
/- Goal: x * y + x * 2 - 1 = 0 -/
example (x y z : ℝ) (ha : x + 2*y - z = 4) (hb : 2*x + y + z = -2)
(hc : x + 2*y + z = 2) :
-3*x - 3*y - 4*z = 2 := by
linear_combination' ha - hb - 2*hc
example (x y : ℚ) (h1 : x + y = 3) (h2 : 3*x = 7) :
x*x*y + y*x*y + 6*x = 3*x*y + 14 := by
linear_combination' x*y*h1 + 2*h2
example (x y : ℤ) (h1 : x = -3) (h2 : y = 10) : 2*x = -6 := by
linear_combination' (norm := skip) 2*h1
simp
axiom qc : ℚ
axiom hqc : qc = 2*qc
example (a b : ℚ) (h : ∀ p q : ℚ, p = q) : 3*a + qc = 3*b + 2*qc := by
linear_combination' 3 * h a b + hqc
linear_combination2
linear_combination' attempts to simplify the target by creating a linear combination
of a list of equalities and subtracting it from the target.
The tactic will create a linear
combination by adding the equalities together from left to right, so the order
of the input hypotheses does matter. If the norm field of the
tactic is set to skip, then the tactic will simply set the user up to
prove their target using the linear combination instead of normalizing the subtraction.
Note: There is also a similar tactic linear_combination (no prime); this version is
provided for backward compatibility. Compared to this tactic, linear_combination:
- drops the ← syntax for reversing an equation, instead offering this operation using the - syntax
- does not support multiplication of two hypotheses (h1 * h2), division by a hypothesis (3 / h), or inversion of a hypothesis (h⁻¹)
- produces noisy output when the user adds or subtracts a constant to a hypothesis (h + 3)
Note: The left and right sides of all the equalities should have the same
type, and the coefficients should also have this type. There must be
instances of Mul and AddGroup for this type.
- The input e in linear_combination' e is a linear combination of proofs of equalities, given as a sum/difference of coefficients multiplied by expressions. The coefficients may be arbitrary expressions. The expressions can be arbitrary proof terms proving equalities. Most commonly they are hypothesis names h1, h2, ....
- linear_combination' (norm := tac) e runs the "normalization tactic" tac on the subgoal(s) after constructing the linear combination.
- linear_combination' (exp := n) e will take the goal to the nth power before subtracting the combination e. In other words, if the goal is t1 = t2, linear_combination' (exp := n) e will change the goal to (t1 - t2)^n = 0 before proceeding as above. This feature is not supported for linear_combination2.
- linear_combination2 e is the same as linear_combination' e but it produces two subgoals instead of one: rather than proving that (a - b) - (a' - b') = 0 where a' = b' is the linear combination from e and a = b is the goal, it instead attempts to prove a = a' and b = b'. Because it does not use subtraction, this form is applicable also to semirings.
  - Note that a goal which is provable by linear_combination' e may not be provable by linear_combination2 e; in general you may need to add a coefficient to e to make both sides match, as in linear_combination2 e + c.
- You can also reverse equalities using ← h, so for example if h₁ : a = b then 2 * (← h) is a proof of 2 * b = 2 * a.
Example Usage:
example (x y : ℤ) (h1 : x*y + 2*x = 1) (h2 : x = y) : x*y = -2*y + 1 := by
linear_combination' 1*h1 - 2*h2
example (x y : ℤ) (h1 : x*y + 2*x = 1) (h2 : x = y) : x*y = -2*y + 1 := by
linear_combination' h1 - 2*h2
example (x y : ℤ) (h1 : x*y + 2*x = 1) (h2 : x = y) : x*y = -2*y + 1 := by
linear_combination' (norm := ring_nf) -2*h2
/- Goal: x * y + x * 2 - 1 = 0 -/
example (x y z : ℝ) (ha : x + 2*y - z = 4) (hb : 2*x + y + z = -2)
(hc : x + 2*y + z = 2) :
-3*x - 3*y - 4*z = 2 := by
linear_combination' ha - hb - 2*hc
example (x y : ℚ) (h1 : x + y = 3) (h2 : 3*x = 7) :
x*x*y + y*x*y + 6*x = 3*x*y + 14 := by
linear_combination' x*y*h1 + 2*h2
example (x y : ℤ) (h1 : x = -3) (h2 : y = 10) : 2*x = -6 := by
linear_combination' (norm := skip) 2*h1
simp
axiom qc : ℚ
axiom hqc : qc = 2*qc
example (a b : ℚ) (h : ∀ p q : ℚ, p = q) : 3*a + qc = 3*b + 2*qc := by
linear_combination' 3 * h a b + hqc
map_tacs
Assuming there are n goals, map_tacs [t1; t2; ...; tn] applies each ti to the respective
goal and leaves the resulting subgoals.
massumption
massumption is like assumption, but operating on a stateful Std.Do.SPred goal.
example (P Q : SPred σs) : Q ⊢ₛ P → Q := by mintro _ _ massumption
massumption
massumption is like assumption, but operating on a stateful Std.Do.SPred goal.
example (P Q : SPred σs) : Q ⊢ₛ P → Q := by mintro _ _ massumption
match
match_scalars
Given a goal which is an equality in a type M (with M an AddCommMonoid), parse the LHS and
RHS of the goal as linear combinations of M-atoms over some semiring R, and reduce the goal to
the respective equalities of the R-coefficients of each atom.
For example, this produces the goal ⊢ a * 1 + b * 1 = (b + a) * 1:
example [AddCommMonoid M] [Semiring R] [Module R M] (a b : R) (x : M) :
a • x + b • x = (b + a) • x := by
match_scalars
This produces the two goals ⊢ a * (a * 1) + b * (b * 1) = 1 (from the x atom) and
⊢ a * -(b * 1) + b * (a * 1) = 0 (from the y atom):
example [AddCommGroup M] [Ring R] [Module R M] (a b : R) (x y : M) :
a • (a • x - b • y) + (b • a • y + b • b • x) = x := by
match_scalars
This produces the goal ⊢ -2 * (a * 1) = a * (-2 * 1):
example [AddCommGroup M] [Ring R] [Module R M] (a : R) (x : M) :
-(2:R) • a • x = a • (-2:ℤ) • x := by
match_scalars
The scalar type for the goals produced by the match_scalars tactic is the largest scalar type
encountered; for example, if ℕ, ℚ and a characteristic-zero field K all occur as scalars, then
the goals produced are equalities in K. A variant of push_cast is used internally in
match_scalars to interpret scalars from the other types in this largest type.
If the set of scalar types encountered is not totally ordered (in the sense that for all rings R,
S encountered, it holds that either Algebra R S or Algebra S R), then the match_scalars
tactic fails.
match_target
mcases
Like rcases, but operating on stateful Std.Do.SPred goals.
Example: Given a goal h : (P ∧ (Q ∨ R) ∧ (Q → R)) ⊢ₛ R,
mcases h with ⟨-, ⟨hq | hr⟩, hqr⟩ will yield two goals:
(hq : Q, hqr : Q → R) ⊢ₛ R and (hr : R) ⊢ₛ R.
That is, mcases h with pat has the following semantics, based on pat:
- pat = □h' renames h to h' in the stateful context, regardless of whether h is pure
- pat = ⌜h'⌝ introduces h' : φ to the pure local context if h : ⌜φ⌝ (c.f. Lean.Elab.Tactic.Do.ProofMode.IsPure)
- pat = h' is like pat = ⌜h'⌝ if h is pure (c.f. Lean.Elab.Tactic.Do.ProofMode.IsPure), otherwise it is like pat = □h'.
- pat = _ renames h to an inaccessible name
- pat = - discards h
- ⟨pat₁, pat₂⟩ matches on conjunctions and existential quantifiers and recurses via pat₁ and pat₂.
- ⟨pat₁ | pat₂⟩ matches on disjunctions, matching the left alternative via pat₁ and the right alternative via pat₂.
mcases
Like rcases, but operating on stateful Std.Do.SPred goals.
Example: Given a goal h : (P ∧ (Q ∨ R) ∧ (Q → R)) ⊢ₛ R,
mcases h with ⟨-, ⟨hq | hr⟩, hqr⟩ will yield two goals:
(hq : Q, hqr : Q → R) ⊢ₛ R and (hr : R) ⊢ₛ R.
That is, mcases h with pat has the following semantics, based on pat:
- pat = □h' renames h to h' in the stateful context, regardless of whether h is pure
- pat = ⌜h'⌝ introduces h' : φ to the pure local context if h : ⌜φ⌝ (c.f. Lean.Elab.Tactic.Do.ProofMode.IsPure)
- pat = h' is like pat = ⌜h'⌝ if h is pure (c.f. Lean.Elab.Tactic.Do.ProofMode.IsPure), otherwise it is like pat = □h'.
- pat = _ renames h to an inaccessible name
- pat = - discards h
- ⟨pat₁, pat₂⟩ matches on conjunctions and existential quantifiers and recurses via pat₁ and pat₂.
- ⟨pat₁ | pat₂⟩ matches on disjunctions, matching the left alternative via pat₁ and the right alternative via pat₂.
mclear
mclear
mconstructor
mconstructor is like constructor, but operating on a stateful Std.Do.SPred goal.
example (Q : SPred σs) : Q ⊢ₛ Q ∧ Q := by mintro HQ mconstructor <;> mexact HQ
mconstructor
mconstructor is like constructor, but operating on a stateful Std.Do.SPred goal.
example (Q : SPred σs) : Q ⊢ₛ Q ∧ Q := by mintro HQ mconstructor <;> mexact HQ
measurability
The tactic measurability solves goals of the form Measurable f, AEMeasurable f,
StronglyMeasurable f, AEStronglyMeasurable f μ, or MeasurableSet s by applying lemmas tagged
with the measurability user attribute.
fun_prop is a (usually more powerful) alternative to measurability
if your goal does not involve MeasurableSet.
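A typical goal it can close (illustrative, not from the original docstring; it assumes the standard Borel σ-algebra on ℝ):
example {α : Type*} [MeasurableSpace α] {f g : α → ℝ}
    (hf : Measurable f) (hg : Measurable g) : Measurable (fun x => f x + g x) := by
  measurability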
measurability!
measurability!?
measurability?
The tactic measurability? solves goals of the form Measurable f, AEMeasurable f,
StronglyMeasurable f, AEStronglyMeasurable f μ, or MeasurableSet s by applying lemmas tagged
with the measurability user attribute, and suggests a faster proof script that can be substituted
for the tactic call in case of success.
mem_tac
mem_tac_aux
mexact
mexact is like exact, but operating on a stateful Std.Do.SPred goal.
example (Q : SPred σs) : Q ⊢ₛ Q := by mstart mintro HQ mexact HQ
mexact
mexact is like exact, but operating on a stateful Std.Do.SPred goal.
example (Q : SPred σs) : Q ⊢ₛ Q := by mstart mintro HQ mexact HQ
mexfalso
mexfalso is like exfalso, but operating on a stateful Std.Do.SPred goal.
example (P : SPred σs) : ⌜False⌝ ⊢ₛ P := by mintro HP mexfalso mexact HP
mexfalso
mexfalso is like exfalso, but operating on a stateful Std.Do.SPred goal.
example (P : SPred σs) : ⌜False⌝ ⊢ₛ P := by mintro HP mexfalso mexact HP
mexists
mexists
mfld_set_tac
A very basic tactic to show that sets showing up in manifolds coincide or are included in one another.
mframe
mframe infers which hypotheses from the stateful context can be moved into the pure context.
This is useful because pure hypotheses "survive" the next application of modus ponens
(Std.Do.SPred.mp) and transitivity (Std.Do.SPred.entails.trans).
It is used as part of the mspec tactic.
example (P Q : SPred σs) : ⊢ₛ ⌜p⌝ ∧ Q ∧ ⌜q⌝ ∧ ⌜r⌝ ∧ P ∧ ⌜s⌝ ∧ ⌜t⌝ → Q := by mintro _ mframe /- `h : p ∧ q ∧ r ∧ s ∧ t` in the pure context -/ mcases h with hP mexact h
mframe
mframe infers which hypotheses from the stateful context can be moved into the pure context.
This is useful because pure hypotheses "survive" the next application of modus ponens
(Std.Do.SPred.mp) and transitivity (Std.Do.SPred.entails.trans).
It is used as part of the mspec tactic.
example (P Q : SPred σs) : ⊢ₛ ⌜p⌝ ∧ Q ∧ ⌜q⌝ ∧ ⌜r⌝ ∧ P ∧ ⌜s⌝ ∧ ⌜t⌝ → Q := by mintro _ mframe /- `h : p ∧ q ∧ r ∧ s ∧ t` in the pure context -/ mcases h with hP mexact h
mhave
mhave
mintro
Like intro, but introducing stateful hypotheses into the stateful context of the Std.Do.SPred
proof mode.
That is, given a stateful goal (hᵢ : Hᵢ)* ⊢ₛ P → T, mintro h transforms
into (hᵢ : Hᵢ)*, (h : P) ⊢ₛ T.
Furthermore, mintro ∀s is like intro s, but preserves the stateful goal.
That is, mintro ∀s brings the topmost state variable s:σ in scope and transforms
(hᵢ : Hᵢ)* ⊢ₛ T (where the entailment is in Std.Do.SPred (σ::σs)) into
(hᵢ : Hᵢ s)* ⊢ₛ T s (where the entailment is in Std.Do.SPred σs).
Beyond that, mintro supports the full syntax of mcases patterns
(mintro pat = (mintro h; mcases h with pat)), and can perform multiple
introductions in sequence.
mintro
Like intro, but introducing stateful hypotheses into the stateful context of the Std.Do.SPred
proof mode.
That is, given a stateful goal (hᵢ : Hᵢ)* ⊢ₛ P → T, mintro h transforms
into (hᵢ : Hᵢ)*, (h : P) ⊢ₛ T.
Furthermore, mintro ∀s is like intro s, but preserves the stateful goal.
That is, mintro ∀s brings the topmost state variable s:σ in scope and transforms
(hᵢ : Hᵢ)* ⊢ₛ T (where the entailment is in Std.Do.SPred (σ::σs)) into
(hᵢ : Hᵢ s)* ⊢ₛ T s (where the entailment is in Std.Do.SPred σs).
Beyond that, mintro supports the full syntax of mcases patterns
(mintro pat = (mintro h; mcases h with pat)), and can perform multiple
introductions in sequence.
mleave
Leaves the stateful proof mode of Std.Do.SPred, tries to eta-expand through all definitions
related to the logic of the Std.Do.SPred and gently simplifies the resulting pure Lean
proposition. This is often the right thing to do after mvcgen in order for automation to prove
the goal.
mleave
Leaves the stateful proof mode of Std.Do.SPred, tries to eta-expand through all definitions
related to the logic of the Std.Do.SPred and gently simplifies the resulting pure Lean
proposition. This is often the right thing to do after mvcgen in order for automation to prove
the goal.
mleft
mleft is like left, but operating on a stateful Std.Do.SPred goal.
example (P Q : SPred σs) : P ⊢ₛ P ∨ Q := by mintro HP mleft mexact HP
mleft
mleft is like left, but operating on a stateful Std.Do.SPred goal.
example (P Q : SPred σs) : P ⊢ₛ P ∨ Q := by mintro HP mleft mexact HP
mod_cases
- The tactic mod_cases h : e % 3 will perform a case disjunction on e. If e : ℤ, then it will yield subgoals containing the assumptions h : e ≡ 0 [ZMOD 3], h : e ≡ 1 [ZMOD 3], h : e ≡ 2 [ZMOD 3] respectively. If e : ℕ instead, then it works similarly, except with [MOD 3] instead of [ZMOD 3].
- In general, mod_cases h : e % n works when n is a positive numeral and e is an expression of type ℕ or ℤ.
- If h is omitted as in mod_cases e % n, it will be default-named H.
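A small sketch (not from the original docstring), relying on the default hypothesis name H:
example (n : ℤ) : n ≡ 0 [ZMOD 3] ∨ n ≡ 1 [ZMOD 3] ∨ n ≡ 2 [ZMOD 3] := by
  mod_cases n % 3
  · exact .inl H
  · exact .inr (.inl H)
  · exact .inr (.inr H)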
module
Given a goal which is an equality in a type M (with M an AddCommMonoid), parse the LHS and
RHS of the goal as linear combinations of M-atoms over some commutative semiring R, and prove
the goal by checking that the LHS- and RHS-coefficients of each atom are the same up to
ring-normalization in R.
(If the proofs of coefficient-wise equality will require more reasoning than just
ring-normalization, use the tactic match_scalars instead, and then prove coefficient-wise equality
by hand.)
Example uses of the module tactic:
example [AddCommMonoid M] [CommSemiring R] [Module R M] (a b : R) (x : M) :
a • x + b • x = (b + a) • x := M:Type u_1R:Type u_2inst✝²:AddCommMonoid Minst✝¹:CommSemiring Rinst✝:Module R Ma:Rb:Rx:M⊢ a • x + b • x = (b + a) • x
All goals completed! 🐙
example [AddCommMonoid M] [Field K] [CharZero K] [Module K M] (x : M) :
(2:K)⁻¹ • x + (3:K)⁻¹ • x + (6:K)⁻¹ • x = x := M:Type u_1K:Type u_2inst✝³:AddCommMonoid Minst✝²:Field Kinst✝¹:CharZero Kinst✝:Module K Mx:M⊢ 2⁻¹ • x + 3⁻¹ • x + 6⁻¹ • x = x
All goals completed! 🐙
example [AddCommGroup M] [CommRing R] [Module R M] (a : R) (v w : M) :
(1 + a ^ 2) • (v + w) - a • (a • v - w) = v + (1 + a + a ^ 2) • w := M:Type u_1R:Type u_2inst✝²:AddCommGroup Minst✝¹:CommRing Rinst✝:Module R Ma:Rv:Mw:M⊢ (1 + a ^ 2) • (v + w) - a • (a • v - w) = v + (1 + a + a ^ 2) • w
All goals completed! 🐙
example [AddCommGroup M] [CommRing R] [Module R M] (a b μ ν : R) (x y : M) :
(μ - ν) • a • x = (a • μ • x + b • ν • y) - ν • (a • x + b • y) := M:Type u_1R:Type u_2inst✝²:AddCommGroup Minst✝¹:CommRing Rinst✝:Module R Ma:Rb:Rμ:Rν:Rx:My:M⊢ (μ - ν) • a • x = a • μ • x + b • ν • y - ν • (a • x + b • y)
All goals completed! 🐙
monicity
monicity tries to solve a goal of the form Monic f.
It converts the goal into a goal of the form natDegree f ≤ n and one of the form f.coeff n = 1
and calls compute_degree on those two goals.
The variant monicity! starts like monicity, but calls compute_degree! on the two side-goals.
monicity!
monicity tries to solve a goal of the form Monic f.
It converts the goal into a goal of the form natDegree f ≤ n and one of the form f.coeff n = 1
and calls compute_degree on those two goals.
The variant monicity! starts like monicity, but calls compute_degree! on the two side-goals.
mono
mono applies monotonicity rules and local hypotheses repetitively. For example,
example (x y z k : ℤ)
(h : 3 ≤ (4 : ℤ))
(h' : z ≤ y) :
(k + 3 + x) - y ≤ (k + 4 + x) - z := x:ℤy:ℤz:ℤk:ℤh:3 ≤ 4h':z ≤ y⊢ k + 3 + x - y ≤ k + 4 + x - z
All goals completed! 🐙
monoidal
Use the coherence theorem for monoidal categories to solve equations in a monoidal category, where the two sides only differ by replacing strings of monoidal structural morphisms (that is, associators, unitors, and identities) with different strings of structural morphisms with the same source and target.
That is, monoidal can handle goals of the form
a ≫ f ≫ b ≫ g ≫ c = a' ≫ f ≫ b' ≫ g ≫ c'
where a = a', b = b', and c = c' can be proved using monoidal_coherence.
monoidal_coherence
Close the goal of the form η = θ, where η and θ are 2-isomorphisms made up only of
associators, unitors, and identities.
example {C : Type} [Category C] [MonoidalCategory C] :
(λ_ (𝟙_ C)).hom = (ρ_ (𝟙_ C)).hom := by
monoidal_coherence
monoidal_coherence
Coherence tactic for monoidal categories.
Use pure_coherence instead, which is a frontend to this one.
move_add
The tactic move_add rearranges summands of expressions.
Calling move_add [a, ← b, ...] matches a, b,... with summands in the main goal.
It then moves a to the far right and b to the far left of each addition in which they appear.
The side to which the summands are moved is determined by the presence or absence of the arrow ←.
The inputs a, b,... can be any terms, also with underscores.
The tactic uses the first "new" summand that unifies with each one of the given inputs.
There is a multiplicative variant, called move_mul.
There is also a general tactic for a "binary associative commutative operation": move_oper.
In this case the syntax requires providing first a term whose head symbol is the operation.
E.g. move_oper HAdd.hAdd [...] is the same as move_add, while move_oper Max.max [...]
rearranges maxs.
move_mul
The tactic move_add rearranges summands of expressions.
Calling move_add [a, ← b, ...] matches a, b,... with summands in the main goal.
It then moves a to the far right and b to the far left of each addition in which they appear.
The side to which the summands are moved is determined by the presence or absence of the arrow ←.
The inputs a, b,... can be any terms, also with underscores.
The tactic uses the first "new" summand that unifies with each one of the given inputs.
There is a multiplicative variant, called move_mul.
There is also a general tactic for a "binary associative commutative operation": move_oper.
In this case the syntax requires providing first a term whose head symbol is the operation.
E.g. move_oper HAdd.hAdd [...] is the same as move_add, while move_oper Max.max [...]
rearranges maxs.
move_oper
The tactic move_add rearranges summands of expressions.
Calling move_add [a, ← b, ...] matches a, b,... with summands in the main goal.
It then moves a to the far right and b to the far left of each addition in which they appear.
The side to which the summands are moved is determined by the presence or absence of the arrow ←.
The inputs a, b,... can be any terms, also with underscores.
The tactic uses the first "new" summand that unifies with each one of the given inputs.
There is a multiplicative variant, called move_mul.
There is also a general tactic for a "binary associative commutative operation": move_oper.
In this case the syntax requires providing first a term whose head symbol is the operation.
E.g. move_oper HAdd.hAdd [...] is the same as move_add, while move_oper Max.max [...]
rearranges maxs.
mpure
mpure moves a pure hypothesis from the stateful context into the pure context.
example (Q : SPred σs) (ψ : φ → ⊢ₛ Q): ⌜φ⌝ ⊢ₛ Q := by mintro Hφ mpure Hφ mexact (ψ Hφ)
mpure
mpure moves a pure hypothesis from the stateful context into the pure context.
example (Q : SPred σs) (ψ : φ → ⊢ₛ Q): ⌜φ⌝ ⊢ₛ Q := by mintro Hφ mpure Hφ mexact (ψ Hφ)
mpure_intro
mpure_intro operates on a stateful Std.Do.SPred goal of the form P ⊢ₛ ⌜φ⌝.
It leaves the stateful proof mode (thereby discarding P), leaving the regular goal φ.
theorem simple : ⊢ₛ (⌜True⌝ : SPred σs) := by mpure_intro exact True.intro
mpure_intro
mpure_intro
mpure_intro operates on a stateful Std.Do.SPred goal of the form P ⊢ₛ ⌜φ⌝.
It leaves the stateful proof mode (thereby discarding P), leaving the regular goal φ.
theorem simple : ⊢ₛ (⌜True⌝ : SPred σs) := by mpure_intro exact True.intro
mrefine
Like refine, but operating on stateful Std.Do.SPred goals.
example (P Q R : SPred σs) : (P ∧ Q ∧ R) ⊢ₛ P ∧ R := by mintro ⟨HP, HQ, HR⟩ mrefine ⟨HP, HR⟩
example (ψ : Nat → SPred σs) : ψ 42 ⊢ₛ ∃ x, ψ x := by mintro H mrefine ⟨⌜42⌝, H⟩
mrefine
Like refine, but operating on stateful Std.Do.SPred goals.
example (P Q R : SPred σs) : (P ∧ Q ∧ R) ⊢ₛ P ∧ R := by mintro ⟨HP, HQ, HR⟩ mrefine ⟨HP, HR⟩
example (ψ : Nat → SPred σs) : ψ 42 ⊢ₛ ∃ x, ψ x := by mintro H mrefine ⟨⌜42⌝, H⟩
mrename_i
mrename_i
mreplace
mreplace
mrevert
mrevert is like revert, but operating on a stateful Std.Do.SPred goal.
example (P Q R : SPred σs) : P ∧ Q ∧ R ⊢ₛ P → R := by mintro ⟨HP, HQ, HR⟩ mrevert HR mrevert HP mintro HP' mintro HR' mexact HR'
mrevert
mrevert is like revert, but operating on a stateful Std.Do.SPred goal.
example (P Q R : SPred σs) : P ∧ Q ∧ R ⊢ₛ P → R := by mintro ⟨HP, HQ, HR⟩ mrevert HR mrevert HP mintro HP' mintro HR' mexact HR'
mright
mright is like right, but operating on a stateful Std.Do.SPred goal.
example (P Q : SPred σs) : P ⊢ₛ Q ∨ P := by mintro HP mright mexact HP
mright
mright is like right, but operating on a stateful Std.Do.SPred goal.
example (P Q : SPred σs) : P ⊢ₛ Q ∨ P := by mintro HP mright mexact HP
mspec
mspec is an apply-like tactic that applies a Hoare triple specification to the target of the
stateful goal.
Given a stateful goal H ⊢ₛ wp⟦prog⟧ Q', mspec foo_spec will instantiate
foo_spec : ... → ⦃P⦄ foo ⦃Q⦄, match foo against prog and produce subgoals for
the verification conditions ?pre : H ⊢ₛ P and ?post : Q ⊢ₚ Q'.
- If prog = x >>= f, then mspec Specs.bind is tried first so that foo is matched against x instead. Tactic mspec_no_bind does not attempt to do this decomposition.
- If ?pre or ?post follow by .rfl, then they are discharged automatically.
- ?post is automatically simplified into constituent ⊢ₛ entailments on success and failure continuations.
- ?pre and ?post.* goals introduce their stateful hypothesis under an inaccessible name. You can give it a name with the mrename_i tactic.
- Any uninstantiated MVar arising from instantiation of foo_spec becomes a new subgoal.
- If the target of the stateful goal looks like fun s => _ then mspec will first mintro ∀s.
- If P has schematic variables that can be instantiated by doing mintro ∀s, for example foo_spec : ∀(n:Nat), ⦃fun s => ⌜n = s⌝⦄ foo ⦃Q⦄, then mspec will do mintro ∀s first to instantiate n = s.
- Right before applying the spec, the mframe tactic is used, which has the following effect: Any hypothesis Hᵢ in the goal h₁:H₁, h₂:H₂, ..., hₙ:Hₙ ⊢ₛ T that is pure (i.e., equivalent to some ⌜φᵢ⌝) will be moved into the pure context as hᵢ:φᵢ.
Additionally, mspec can be used without arguments or with a term argument:
- mspec without argument will try and look up a spec for x registered with @[spec].
- mspec (foo_spec blah ?bleh) will elaborate its argument as a term with expected type ⦃?P⦄ x ⦃?Q⦄ and introduce ?bleh as a subgoal. This is useful to pass an invariant to e.g., Specs.forIn_list and leave the inductive step as a hole.
mspec
mspec is an apply-like tactic that applies a Hoare triple specification to the target of the
stateful goal.
Given a stateful goal H ⊢ₛ wp⟦prog⟧ Q', mspec foo_spec will instantiate
foo_spec : ... → ⦃P⦄ foo ⦃Q⦄, match foo against prog and produce subgoals for
the verification conditions ?pre : H ⊢ₛ P and ?post : Q ⊢ₚ Q'.
- If prog = x >>= f, then mspec Specs.bind is tried first so that foo is matched against x instead. Tactic mspec_no_bind does not attempt to do this decomposition.
- If ?pre or ?post follow by .rfl, then they are discharged automatically.
- ?post is automatically simplified into constituent ⊢ₛ entailments on success and failure continuations.
- ?pre and ?post.* goals introduce their stateful hypothesis under an inaccessible name. You can give it a name with the mrename_i tactic.
- Any uninstantiated MVar arising from instantiation of foo_spec becomes a new subgoal.
- If the target of the stateful goal looks like fun s => _ then mspec will first mintro ∀s.
- If P has schematic variables that can be instantiated by doing mintro ∀s, for example foo_spec : ∀(n:Nat), ⦃fun s => ⌜n = s⌝⦄ foo ⦃Q⦄, then mspec will do mintro ∀s first to instantiate n = s.
- Right before applying the spec, the mframe tactic is used, which has the following effect: Any hypothesis Hᵢ in the goal h₁:H₁, h₂:H₂, ..., hₙ:Hₙ ⊢ₛ T that is pure (i.e., equivalent to some ⌜φᵢ⌝) will be moved into the pure context as hᵢ:φᵢ.
Additionally, mspec can be used without arguments or with a term argument:
- mspec without argument will try and look up a spec for x registered with @[spec].
- mspec (foo_spec blah ?bleh) will elaborate its argument as a term with expected type ⦃?P⦄ x ⦃?Q⦄ and introduce ?bleh as a subgoal. This is useful to pass an invariant to e.g., Specs.forIn_list and leave the inductive step as a hole.
mspec_no_bind
mspec_no_simp $spec first tries to decompose Bind.binds before applying $spec.
This variant of mspec_no_simp does not; mspec_no_bind $spec is defined as
try with_reducible mspec_no_bind Std.Do.Spec.bind
mspec_no_bind $spec
mspec_no_simp
mspecialize
mspecialize is like specialize, but operating on a stateful Std.Do.SPred goal.
It specializes a hypothesis from the stateful context with hypotheses from either the pure
or stateful context or pure terms.
example (P Q : SPred σs) : P ⊢ₛ (P → Q) → Q := by mintro HP HPQ mspecialize HPQ HP mexact HPQ
example (y : Nat) (P Q : SPred σs) (Ψ : Nat → SPred σs) (hP : ⊢ₛ P) : ⊢ₛ Q → (∀ x, P → Q → Ψ x) → Ψ (y + 1) := by mintro HQ HΨ mspecialize HΨ (y + 1) hP HQ mexact HΨ
mspecialize
mspecialize is like specialize, but operating on a stateful Std.Do.SPred goal.
It specializes a hypothesis from the stateful context with hypotheses from either the pure
or stateful context or pure terms.
example (P Q : SPred σs) : P ⊢ₛ (P → Q) → Q := by mintro HP HPQ mspecialize HPQ HP mexact HPQ
example (y : Nat) (P Q : SPred σs) (Ψ : Nat → SPred σs) (hP : ⊢ₛ P) : ⊢ₛ Q → (∀ x, P → Q → Ψ x) → Ψ (y + 1) := by mintro HQ HΨ mspecialize HΨ (y + 1) hP HQ mexact HΨ
mspecialize_pure
mspecialize_pure is like mspecialize, but it specializes a hypothesis from the
pure context with hypotheses from either the pure or stateful context or pure terms.
example (y : Nat) (P Q : SPred σs) (Ψ : Nat → SPred σs) (hP : ⊢ₛ P) (hΨ : ∀ x, ⊢ₛ P → Q → Ψ x) : ⊢ₛ Q → Ψ (y + 1) := by mintro HQ mspecialize_pure (hΨ (y + 1)) hP HQ => HΨ mexact HΨ
mspecialize_pure
mspecialize_pure is like mspecialize, but it specializes a hypothesis from the
pure context with hypotheses from either the pure or stateful context or pure terms.
example (y : Nat) (P Q : SPred σs) (Ψ : Nat → SPred σs) (hP : ⊢ₛ P) (hΨ : ∀ x, ⊢ₛ P → Q → Ψ x) : ⊢ₛ Q → Ψ (y + 1) := by mintro HQ mspecialize_pure (hΨ (y + 1)) hP HQ => HΨ mexact HΨ
mstart
mstart
mstop
Stops the stateful proof mode of Std.Do.SPred.
This will simply forget all the names given to stateful hypotheses and pretty-print
a bit differently.
mstop
Stops the stateful proof mode of Std.Do.SPred.
This will simply forget all the names given to stateful hypotheses and pretty-print
a bit differently.
mvcgen
mvcgen will break down a Hoare triple proof goal like ⦃P⦄ prog ⦃Q⦄ into verification conditions,
provided that all functions used in prog have specifications registered with @[spec].
A verification condition is an entailment in the stateful logic of Std.Do.SPred
in which the original program prog no longer occurs.
Verification conditions are introduced by the mspec tactic; see the mspec tactic for what they
look like.
When there's no applicable mspec spec, mvcgen will try and rewrite an application
prog = f a b c with the simp set registered via @[spec].
When used like mvcgen +noLetElim [foo_spec, bar_def, instBEqFloat], mvcgen will additionally
- add a Hoare triple specification foo_spec : ... → ⦃P⦄ foo ... ⦃Q⦄ to the spec set for a function foo occurring in prog,
- unfold a definition def bar_def ... := ... in prog,
- unfold any method of the instBEqFloat : BEq Float instance in prog.
- it will no longer substitute away let-expressions that occur at most once in P, Q or prog.
Furthermore, mvcgen tries to close trivial verification conditions by SPred.entails.rfl or
the tactic sequence try (mpure_intro; trivial). The variant mvcgen_no_trivial does not do this.
For debugging purposes there is also mvcgen_step 42 which will do at most 42 VC generation
steps. This is useful for bisecting issues with the generated VCs.
mvcgen
mvcgen will break down a Hoare triple proof goal like ⦃P⦄ prog ⦃Q⦄ into verification conditions,
provided that all functions used in prog have specifications registered with @[spec].
A verification condition is an entailment in the stateful logic of Std.Do.SPred
in which the original program prog no longer occurs.
Verification conditions are introduced by the mspec tactic; see the mspec tactic for what they
look like.
When there's no applicable mspec spec, mvcgen will try and rewrite an application
prog = f a b c with the simp set registered via @[spec].
When used like mvcgen +noLetElim [foo_spec, bar_def, instBEqFloat], mvcgen will additionally
- add a Hoare triple specification foo_spec : ... → ⦃P⦄ foo ... ⦃Q⦄ to the spec set for a function foo occurring in prog,
- unfold a definition def bar_def ... := ... in prog,
- unfold any method of the instBEqFloat : BEq Float instance in prog.
- it will no longer substitute away let-expressions that occur at most once in P, Q or prog.
Furthermore, mvcgen tries to close trivial verification conditions by SPred.entails.rfl or
the tactic sequence try (mpure_intro; trivial). The variant mvcgen_no_trivial does not do this.
For debugging purposes there is also mvcgen_step 42 which will do at most 42 VC generation
steps. This is useful for bisecting issues with the generated VCs.
mvcgen_no_trivial
Like mvcgen, but does not attempt to prove trivial VCs via mpure_intro; trivial.
mvcgen_step
Like mvcgen_no_trivial, but mvcgen_step 42 will only do 42 steps of the VC generation procedure.
This is helpful for bisecting bugs in mvcgen and tracing its execution.
mvcgen_trivial
mvcgen_trivial is the tactic automatically called by mvcgen to discharge VCs.
It tries to discharge the VC by applying (try mpure_intro); trivial and otherwise delegates to
mvcgen_trivial_extensible.
Users are encouraged to extend mvcgen_trivial_extensible instead of this tactic in order not to
override the default (try mpure_intro); trivial behavior.
native_decide
native_decide is a synonym for decide +native.
It will attempt to prove a goal of type p by synthesizing an instance
of Decidable p and then evaluating it to isTrue ... Unlike decide, this
uses #eval to evaluate the decidability instance.
This should be used with care because it adds the entire lean compiler to the trusted
part, and the axiom Lean.ofReduceBool will show up in #print axioms for theorems using
this method or anything that transitively depends on them. Nevertheless, because it is
compiled, this can be significantly more efficient than using decide, and for very
large computations this is one way to run external programs and trust the result.
example : (List.range 1000).length = 1000 := ⊢ (List.range 1000).length = 1000 All goals completed! 🐙
next
next => tac focuses on the next goal and solves it using tac, or else fails.
next x₁ ... xₙ => tac additionally renames the n most recent hypotheses with
inaccessible names to the given names.
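For example (illustrative, not from the original docstring):
example : 1 + 1 = 2 ∧ 2 + 2 = 4 := by
  constructor
  next => rfl
  next => rfl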
nlinarith
An extension of linarith with some preprocessing to allow it to solve some nonlinear arithmetic
problems. (Based on Coq's nra tactic.) See linarith for the available syntax of options,
which are inherited by nlinarith; that is, nlinarith! and nlinarith only [h1, h2] all work as
in linarith. The preprocessing is as follows:
- For every subterm a ^ 2 or a * a in a hypothesis or the goal, the assumption 0 ≤ a ^ 2 or 0 ≤ a * a is added to the context.
- For every pair of hypotheses a1 R1 b1, a2 R2 b2 in the context, R1, R2 ∈ {<, ≤, =}, the assumption 0 R' (b1 - a1) * (b2 - a2) is added to the context (non-recursively), where R' ∈ {<, ≤, =} is the appropriate comparison derived from R1, R2.
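For instance (an illustrative example, not from the original docstring), a classic nonlinear goal closed with an explicit square hint:
example (a b : ℝ) : 2 * a * b ≤ a ^ 2 + b ^ 2 := by
  nlinarith [sq_nonneg (a - b)]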
nlinarith!
An extension of linarith with some preprocessing to allow it to solve some nonlinear arithmetic
problems. (Based on Coq's nra tactic.) See linarith for the available syntax of options,
which are inherited by nlinarith; that is, nlinarith! and nlinarith only [h1, h2] all work as
in linarith. The preprocessing is as follows:
- For every subterm a ^ 2 or a * a in a hypothesis or the goal, the assumption 0 ≤ a ^ 2 or 0 ≤ a * a is added to the context.
- For every pair of hypotheses a1 R1 b1, a2 R2 b2 in the context, R1, R2 ∈ {<, ≤, =}, the assumption 0 R' (b1 - a1) * (b2 - a2) is added to the context (non-recursively), where R' ∈ {<, ≤, =} is the appropriate comparison derived from R1, R2.
nofun
The tactic nofun is shorthand for exact nofun: it introduces the assumptions, then performs an
empty pattern match, closing the goal if the introduced pattern is impossible.
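For instance (an illustrative example, not from the original docstring):
example : ∀ n : ℕ, n + 1 = 0 → False := by nofun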
noncomm_ring
A tactic for simplifying identities in not-necessarily-commutative rings.
An example:
example {R : Type*} [Ring R] (a b c : R) : a * (b + c + c - b) = 2 * a * c := R:Type u_1inst✝:Ring Ra:Rb:Rc:R⊢ a * (b + c + c - b) = 2 * a * c
All goals completed! 🐙
You can use noncomm_ring [h] to also simplify using h.
nontriviality
Attempts to generate a Nontrivial α hypothesis.
The tactic first checks to see that there is not already a Nontrivial α instance
before trying to synthesize one using other techniques.
If the goal is an (in)equality, the type α is inferred from the goal.
Otherwise, the type needs to be specified in the tactic invocation, as nontriviality α.
The nontriviality tactic will first look for strict inequalities amongst the hypotheses,
and use these to derive the Nontrivial instance directly.
Otherwise, it will perform a case split on Subsingleton α ∨ Nontrivial α, and attempt to discharge
the Subsingleton goal using simp [h₁, h₂, ..., hₙ, nontriviality], where [h₁, h₂, ..., hₙ] is
a list of additional simp lemmas that can be passed to nontriviality using the syntax
nontriviality α using h₁, h₂, ..., hₙ.
example {R : Type} [OrderedRing R] {a : R} (h : 0 < a) : 0 < a := by
nontriviality -- There is now a `Nontrivial R` hypothesis available.
assumption
example {R : Type} [CommRing R] {r s : R} : r * s = s * r := R:Typeinst✝:CommRing Rr:Rs:R⊢ r * s = s * r
R:Typeinst✝:CommRing Rr:Rs:Ra✝:Nontrivial R⊢ r * s = s * r -- There is now a `Nontrivial R` hypothesis available.
All goals completed! 🐙
example {R : Type} [OrderedRing R] {a : R} (h : 0 < a) : (2 : ℕ) ∣ 4 := by
nontriviality R -- there is now a `Nontrivial R` hypothesis available.
dec_trivial
def myeq {α : Type} (a b : α) : Prop := a = b
example {α : Type} (a b : α) (h : a = b) : myeq a b := by
success_if_fail nontriviality α -- Fails
nontriviality α using myeq -- There is now a `Nontrivial α` hypothesis available
assumption
norm_cast
The norm_cast family of tactics is used to normalize certain coercions (casts) in expressions.
- norm_cast normalizes casts in the target.
- norm_cast at h normalizes casts in hypothesis h.
The tactic is basically a version of simp with a specific set of lemmas to move casts
upwards in the expression.
Therefore even in situations where non-terminal simp calls are discouraged (because of fragility),
norm_cast is considered to be safe.
It also has special handling of numerals.
For instance, given an assumption
a b : ℤ h : ↑a + ↑b < (10 : ℚ)
writing norm_cast at h will turn h into
h : a + b < 10
There are also variants of basic tactics that use norm_cast to normalize expressions during
their operation, to make them more flexible about the expressions they accept
(we say that it is a tactic modulo the effects of norm_cast):
- exact_mod_cast for exact and apply_mod_cast for apply. Writing exact_mod_cast h and apply_mod_cast h will normalize casts in the goal and h before using exact h or apply h.
- rw_mod_cast for rw. It applies norm_cast between rewrites.
- assumption_mod_cast for assumption. This is effectively norm_cast at *; assumption, but more efficient. It normalizes casts in the goal and, for every hypothesis h in the context, it will try to normalize casts in h and use exact h.
See also push_cast, which moves casts inwards rather than lifting them outwards.
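For example (illustrative, not from the original docstring):
example (a b : ℤ) (h : (a : ℚ) + b < 10) : a + b < 10 := by
  exact_mod_cast h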
norm_num
Normalize numerical expressions. Supports the operations + - * / ⁻¹ ^ and %
over numerical types such as ℕ, ℤ, ℚ, ℝ, ℂ and some general algebraic types,
and can prove goals of the form A = B, A ≠ B, A < B and A ≤ B, where A and B are
numerical expressions. It also has a relatively simple primality prover.
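For example (illustrative, not from the original docstring):
example : (3 : ℚ) ^ 3 - 2 = 25 := by norm_num
example : Nat.Prime 101 := by norm_num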
nth_rewrite
nth_rewrite is a variant of rewrite that only changes the n₁, ..., nₖᵗʰ occurrence of
the expression to be rewritten. nth_rewrite n₁ ... nₖ [eq₁, eq₂,..., eqₘ] will rewrite the
n₁, ..., nₖᵗʰ occurrence of each of the m equalities eqᵢ in that order. Occurrences are
counted beginning with 1 in order of precedence.
For example,
example (h : a = 1) : a + a + a + a + a = 5 := by nth_rewrite 2 3 [h] /- a: ℕ h: a = 1 ⊢ a + 1 + 1 + a + a = 5 -/
Notice that the second and third occurrences of a from the left have been rewritten by
nth_rewrite.
To understand the importance of order of precedence, consider the example below
example (a b c : Nat) : (a + b) + c = (b + a) + c := by nth_rewrite 2 [Nat.add_comm] -- ⊢ (b + a) + c = (b + a) + c
Here, although the occurrence parameter is 2, (a + b) is rewritten to (b + a). This happens
because in order of precedence, the first occurrence of _ + _ is the one that adds a + b to c.
The occurrence in a + b counts as the second occurrence.
If a term t is introduced by rewriting with eqᵢ, then this instance of t will be counted as an
occurrence of t for all subsequent rewrites of t with eqⱼ for j > i. This behaviour is
illustrated by the example below
example (h : a = a + b) : a + a + a + a + a = 0 := by nth_rewrite 3 [h, h] /- a b: ℕ h: a = a + b ⊢ a + a + (a + b + b) + a + a = 0 -/
Here, the first nth_rewrite with h introduces an additional occurrence of a in the goal.
That is, the goal state after the first rewrite looks like below
/- a b: ℕ h: a = a + b ⊢ a + a + (a + b) + a + a = 0 -/
This new instance of a also turns out to be the third occurrence of a. Therefore,
the next nth_rewrite with h rewrites this a.
nth_rw
nth_rw is a variant of rw that only changes the n₁, ..., nₖᵗʰ occurrence of the expression
to be rewritten. Like rw, and unlike nth_rewrite, it will try to close the goal by trying rfl
afterwards. nth_rw n₁ ... nₖ [eq₁, eq₂,..., eqₘ] will rewrite the n₁, ..., nₖᵗʰ occurrence of
each of the m equalities eqᵢ in that order. Occurrences are counted beginning with 1 in
order of precedence. For example,
example (h : a = 1) : a + a + a + a + a = 5 := by nth_rw 2 3 [h] /- a: ℕ h: a = 1 ⊢ a + 1 + 1 + a + a = 5 -/
Notice that the second and third occurrences of a from the left have been rewritten by
nth_rw.
To understand the importance of order of precedence, consider the example below
example (a b c : Nat) : (a + b) + c = (b + a) + c := by nth_rewrite 2 [Nat.add_comm] -- ⊢ (b + a) + c = (b + a) + c
Here, although the occurrence parameter is 2, (a + b) is rewritten to (b + a). This happens
because in order of precedence, the first occurrence of _ + _ is the one that adds a + b to c.
The occurrence in a + b counts as the second occurrence.
If a term t is introduced by rewriting with eqᵢ, then this instance of t will be counted as an
occurrence of t for all subsequent rewrites of t with eqⱼ for j > i. This behaviour is
illustrated by the example below
example (h : a = a + b) : a + a + a + a + a = 0 := by nth_rw 3 [h, h] /- a b: ℕ h: a = a + b ⊢ a + a + (a + b + b) + a + a = 0 -/
Here, the first nth_rw with h introduces an additional occurrence of a in the goal. That is,
the goal state after the first rewrite looks like below
/- a b: ℕ h: a = a + b ⊢ a + a + (a + b) + a + a = 0 -/
This new instance of a also turns out to be the third occurrence of a. Therefore,
the next nth_rw with h rewrites this a.
Further, nth_rw will close the remaining goal with rfl if possible.
observe
observe hp : p asserts the proposition p, and tries to prove it using exact?.
If no proof is found, the tactic fails.
In other words, this tactic is equivalent to have hp : p := by exact?.
If hp is omitted, then the placeholder this is used.
The variant observe? hp : p will emit a trace message of the form have hp : p := proof_term.
This may be particularly useful to speed up proofs.
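For example (an illustrative sketch, not from the original docstring; it relies on exact? finding a suitable lemma):
example (a b : ℕ) (h : a < b) : a ≤ b := by
  observe h' : a ≤ b
  exact h'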
observe?
observe hp : p asserts the proposition p, and tries to prove it using exact?.
If no proof is found, the tactic fails.
In other words, this tactic is equivalent to have hp : p := by exact?.
If hp is omitted, then the placeholder this is used.
The variant observe? hp : p will emit a trace message of the form have hp : p := proof_term.
This may be particularly useful to speed up proofs.
observe?
observe hp : p asserts the proposition p, and tries to prove it using exact?.
If no proof is found, the tactic fails.
In other words, this tactic is equivalent to have hp : p := by exact?.
If hp is omitted, then the placeholder this is used.
The variant observe? hp : p will emit a trace message of the form have hp : p := proof_term.
This may be particularly useful to speed up proofs.
obtain
The obtain tactic is a combination of have and rcases. See rcases for
a description of supported patterns.
obtain ⟨patt⟩ : type := proof
is equivalent to
have h : type := proof rcases h with ⟨patt⟩
If ⟨patt⟩ is omitted, rcases will try to infer the pattern.
If type is omitted, := proof is required.
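For example (illustrative, not from the original docstring):
example (h : ∃ n : ℕ, 2 < n) : True := by
  obtain ⟨n, hn⟩ := h
  trivial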
omega
The omega tactic, for resolving integer and natural linear arithmetic problems.
It is not yet a full decision procedure (no "dark" or "grey" shadows), but should be effective on many problems.
We handle hypotheses of the form x = y, x < y, x ≤ y, and k ∣ x for x y in Nat or Int
(and k a literal), along with negations of these statements.
We decompose the sides of the inequalities as linear combinations of atoms.
If we encounter x / k or x % k for literal integers k we introduce new auxiliary variables
and the relevant inequalities.
On the first pass, we do not perform case splits on natural subtraction.
If omega fails, we recursively perform a case split on
a natural subtraction appearing in a hypothesis, and try again.
The options
omega +splitDisjunctions +splitNatSub +splitNatAbs +splitMinMax
can be used to:
- splitDisjunctions: split any disjunctions found in the context, if the problem is not otherwise solvable.
- splitNatSub: for each appearance of ((a - b : Nat) : Int), split on a ≤ b if necessary.
- splitNatAbs: for each appearance of Int.natAbs a, split on 0 ≤ a if necessary.
- splitMinMax: for each occurrence of min a b, split on min a b = a ∨ min a b = b.
Currently, all of these are on by default.
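For example (illustrative, not from the original docstring), omega handles truncated natural subtraction directly:
example (x y : ℕ) (h : x < y) : x - y = 0 := by omega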
on_goal
on_goal n => tacSeq creates a block scope for the n-th goal and tries the sequence
of tactics tacSeq on it.
on_goal -n => tacSeq does the same, but the n-th goal is chosen by counting from the
bottom.
The goal is not required to be solved and any resulting subgoals are inserted back into the list of goals, replacing the chosen goal.
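For illustration (hypothesis names are arbitrary), a minimal use might be:
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  constructor
  on_goal 2 => exact hq  -- work on the second goal without reordering the goal list
  exact hp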
order
A finishing tactic for solving goals in arbitrary Preorder, PartialOrder,
or LinearOrder. Supports ⊤, ⊥, and lattice operations.
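A sketch of the intended use (illustrative; the hypotheses form a cycle that forces the equality):
example {α : Type} [PartialOrder α] {a b c : α}
    (h₁ : a ≤ b) (h₂ : b ≤ c) (h₃ : c ≤ a) : a = b := by
  order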
peel
Peels matching quantifiers off of a given term and the goal and introduces the relevant variables.
- peel e peels all quantifiers (at reducible transparency), using this for the name of the peeled hypothesis.
- peel e with h is peel e but names the peeled hypothesis h. If h is _ then uses this for the name of the peeled hypothesis.
- peel n e peels n quantifiers (at default transparency).
- peel n e with x y z ... h peels n quantifiers, names the peeled hypothesis h, and uses x, y, z, and so on to name the introduced variables; these names may be _. If h is _ then uses this for the name of the peeled hypothesis. The length of the list of variables does not need to equal n.
- peel e with x₁ ... xₙ h is peel n e with x₁ ... xₙ h.
There are also variants that apply to an iff in the goal:
- peel n peels n quantifiers in an iff.
- peel with x₁ ... xₙ peels n quantifiers in an iff and names them.
Given p q : ℕ → Prop, h : ∀ x, p x, and a goal ⊢ : ∀ x, q x, the tactic peel h with x h'
will introduce x : ℕ, h' : p x into the context and the new goal will be ⊢ q x. This works
with ∃, as well as ∀ᶠ and ∃ᶠ, and it can even be applied to a sequence of quantifiers. Note
that this is a logically weaker setup, so using this tactic is not always feasible.
For a more complex example, given a hypothesis and a goal:
h : ∀ ε > (0 : ℝ), ∃ N : ℕ, ∀ n ≥ N, 1 / (n + 1 : ℝ) < ε ⊢ ∀ ε > (0 : ℝ), ∃ N : ℕ, ∀ n ≥ N, 1 / (n + 1 : ℝ) ≤ ε
(which differ only in </≤), applying peel h with ε hε N n hn h_peel will yield a tactic state:
h : ∀ ε > (0 : ℝ), ∃ N : ℕ, ∀ n ≥ N, 1 / (n + 1 : ℝ) < ε ε : ℝ hε : 0 < ε N n : ℕ hn : N ≤ n h_peel : 1 / (n + 1 : ℝ) < ε ⊢ 1 / (n + 1 : ℝ) ≤ ε
and the goal can be closed with exact h_peel.le.
Note that in this example, h and the goal are logically equivalent statements, but peel
cannot be immediately applied to show that the goal implies h.
In addition, peel supports goals of the form (∀ x, p x) ↔ ∀ x, q x, or likewise for any
other quantifier. In this case, there is no hypothesis or term to supply, but otherwise the syntax
is the same. So for such goals, the syntax is peel 1 or peel with x, and after which the
resulting goal is p x ↔ q x. The congr! tactic can also be applied to goals of this form using
congr! 1 with x. While congr! applies congruence lemmas in general, peel can be relied upon
to only apply to outermost quantifiers.
Finally, the user may supply a term e via ... using e in order to close the goal
immediately. In particular, peel h using e is equivalent to peel h; exact e. The using syntax
may be paired with any of the other features of peel.
This tactic works by repeatedly applying lemmas such as forall_imp, Exists.imp,
Filter.Eventually.mp, Filter.Frequently.mp, and Filter.Eventually.of_forall.
pgame_wf_tac
Discharges proof obligations of the form ⊢ Subsequent .. arising in termination proofs
of definitions using well-founded recursion on PGame.
pi_lower_bound
Create a proof of a < π for a fixed rational number a, given a witness, which is a
sequence of rational numbers √2 < r 1 < r 2 < ... < r n < 2 satisfying the property that
√(2 + r i) ≤ r(i+1), where r 0 = 0 and √(2 - r n) ≥ a/2^(n+1).
pi_upper_bound
Create a proof of π < a for a fixed rational number a, given a witness, which is a
sequence of rational numbers √2 < r 1 < r 2 < ... < r n < 2 satisfying the property that
√(2 + r i) ≥ r(i+1), where r 0 = 0 and √(2 - r n) ≤ (a - 1/4^n) / 2^(n+1).
pick_goal
pick_goal n will move the n-th goal to the front.
pick_goal -n will move the n-th goal (counting from the bottom) to the front.
See also Tactic.rotate_goals, which moves goals from the front to the back and vice-versa.
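For illustration (hypothesis names are arbitrary), a minimal use might be:
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  constructor
  pick_goal 2   -- the goal for q is now the first goal
  · exact hq
  · exact hp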
plausible
plausible considers a proof goal and tries to generate examples
that would contradict the statement.
Let's consider the following proof goal.
xs : List Nat, h : ∃ (x : Nat) (H : x ∈ xs), x < 3 ⊢ ∀ (y : Nat), y ∈ xs → y < 5
The local constants will be reverted and an instance will be found for
Testable (∀ (xs : List Nat), (∃ x ∈ xs, x < 3) → (∀ y ∈ xs, y < 5)).
The Testable instance is supported by an instance of Sampleable (List Nat),
Decidable (x < 3) and Decidable (y < 5).
Examples will be created in ascending order of size (more or less)
The first counter-examples found will be printed and will result in an error:
=================== Found problems! xs := [1, 28] x := 1 y := 28 -------------------
If plausible successfully tests 100 examples, it acts like
admit. If it gives up or finds a counter-example, it reports an error.
For more information on writing your own Sampleable and Testable
instances, see Testing.Plausible.Testable.
Optional arguments given with plausible (config : { ... })
- numInst (default 100): number of examples to test properties with
- maxSize (default 100): final size argument
Options:
- set_option trace.plausible.decoration true: print the proposition with quantifier annotations
- set_option trace.plausible.discarded true: print the examples discarded because they do not satisfy assumptions
- set_option trace.plausible.shrink.steps true: trace the shrinking of counter-example
- set_option trace.plausible.shrink.candidates true: print the lists of candidates considered when shrinking each variable
- set_option trace.plausible.instance true: print the instances of Testable being used to test the proposition
- set_option trace.plausible.success true: print the tested samples that satisfy a property
pnat_to_nat
pnat_to_nat shifts all PNats in the context to Nat, rewriting propositions about them.
A typical use case is pnat_to_nat; omega.
polyrith
Attempts to prove polynomial equality goals through polynomial arithmetic
on the hypotheses (and additional proof terms if the user specifies them).
It proves the goal by generating an appropriate call to the tactic
linear_combination. If this call succeeds, the call to linear_combination
is suggested to the user.
- polyrith will use all relevant hypotheses in the local context.
- polyrith [t1, t2, t3] will add proof terms t1, t2, t3 to the local context.
- polyrith only [h1, h2, h3, t1, t2, t3] will use only local hypotheses h1, h2, h3, and proofs t1, t2, t3. It will ignore the rest of the local context.
Notes:
- This tactic only works with a working internet connection, since it calls Sage using the SageCell web API at https://sagecell.sagemath.org/. Many thanks to the Sage team and organization for allowing this use.
- This tactic assumes that the user has curl available on path.
Examples:
example (x y : ℚ) (h1 : x*y + 2*x = 1) (h2 : x = y) :
x*y = -2*y + 1 := by
polyrith
-- Try this: linear_combination h1 - 2 * h2
example (x y z w : ℚ) (hzw : z = w) : x*z + 2*y*z = x*w + 2*y*w := by
polyrith
-- Try this: linear_combination (2 * y + x) * hzw
axiom scary : ∀ a b : ℚ, a + b = 0
example (a b c d : ℚ) (h : a + b = 0) (h2: b + c = 0) : a + b + c + d = 0 := by
polyrith only [scary c d, h]
-- Try this: linear_combination scary c d + h
positivity
Tactic solving goals of the form 0 ≤ x, 0 < x and x ≠ 0. The tactic works recursively
according to the syntax of the expression x, if the atoms composing the expression all have
numeric lower bounds which can be proved positive/nonnegative/nonzero by norm_num. This tactic
either closes the goal or fails.
Examples:
example {a : ℤ} (ha : 3 < a) : 0 ≤ a ^ 3 + a := a:ℤha:3 < a⊢ 0 ≤ a ^ 3 + a All goals completed! 🐙
example {a : ℤ} (ha : 1 < a) : 0 < |(3:ℤ) + a| := a:ℤha:1 < a⊢ 0 < |3 + a| All goals completed! 🐙
example {b : ℤ} : 0 ≤ max (-3) (b ^ 2) := b:ℤ⊢ 0 ≤ max (-3) (b ^ 2) All goals completed! 🐙
pure_coherence
pure_coherence uses the coherence theorem for monoidal categories to prove the goal.
It can prove any equality made up only of associators, unitors, and identities.
example {C : Type} [Category C] [MonoidalCategory C] :
(λ_ (𝟙_ C)).hom = (ρ_ (𝟙_ C)).hom := by
pure_coherence
Users will typically just use the coherence tactic,
which can also cope with identities of the form
a ≫ f ≫ b ≫ g ≫ c = a' ≫ f ≫ b' ≫ g ≫ c'
where a = a', b = b', and c = c' can be proved using pure_coherence
push_cast
push_cast rewrites the goal to move certain coercions (casts) inward, toward the leaf nodes.
This uses norm_cast lemmas in the forward direction.
For example, ↑(a + b) will be written to ↑a + ↑b.
- push_cast moves casts inward in the goal.
- push_cast at h moves casts inward in the hypothesis h. It can be used with extra simp lemmas with, for example, push_cast [Int.add_zero].
Example:
example (a b : Nat)
(h1 : ((a + b : Nat) : Int) = 10)
(h2 : ((a + b + 0 : Nat) : Int) = 10) :
((a + b : Nat) : Int) = 10 := a:ℕb:ℕh1:↑(a + b) = 10h2:↑(a + b + 0) = 10⊢ ↑(a + b) = 10
/-
h1 : ↑(a + b) = 10
h2 : ↑(a + b + 0) = 10
⊢ ↑(a + b) = 10
-/
a:ℕb:ℕh1:↑(a + b) = 10h2:↑(a + b + 0) = 10⊢ ↑a + ↑b = 10
/- Now
⊢ ↑a + ↑b = 10
-/
a:ℕb:ℕh2:↑(a + b + 0) = 10h1:↑a + ↑b = 10⊢ ↑a + ↑b = 10
a:ℕb:ℕh1:↑a + ↑b = 10h2:↑a + ↑b = 10⊢ ↑a + ↑b = 10
/- Now
h1 h2 : ↑a + ↑b = 10
-/
All goals completed! 🐙
See also norm_cast.
push_neg
Push negations into the conclusion of a hypothesis.
For instance, a hypothesis h : ¬ ∀ x, ∃ y, x ≤ y will be transformed by push_neg at h into
h : ∃ x, ∀ y, y < x. Variable names are conserved.
This tactic pushes negations inside expressions. For instance, given a hypothesis
h : ¬ ∀ ε > 0, ∃ δ > 0, ∀ x, |x - x₀| ≤ δ → |f x - y₀| ≤ ε)
writing push_neg at h will turn h into
h : ∃ ε, ε > 0 ∧ ∀ δ, δ > 0 → (∃ x, |x - x₀| ≤ δ ∧ ε < |f x - y₀|),
(The pretty printer does not use the abbreviations ∀ δ > 0 and ∃ ε > 0 but this issue
has nothing to do with push_neg).
Note that names are conserved by this tactic, contrary to what would happen with simp
using the relevant lemmas. One can also use this tactic at the goal using push_neg,
at every hypothesis and the goal using push_neg at * or at selected hypotheses and the goal
using say push_neg at h h' ⊢ as usual.
This tactic has two modes: in standard mode, it transforms ¬(p ∧ q) into p → ¬q, whereas in
distrib mode it produces ¬p ∨ ¬q. To use distrib mode, use set_option push_neg.use_distrib true.
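For illustration (names are arbitrary), a minimal use might be:
example (f : ℕ → ℕ) (h : ¬ ∀ n, ∃ m, f m = n) : ∃ n, ∀ m, f m ≠ n := by
  push_neg at h   -- the negation is pushed through ∀, ∃ and =
  exact h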
qify
The qify tactic is used to shift propositions from ℕ or ℤ to ℚ.
This is often useful since ℚ has well-behaved division.
example (a b c x y z : ℕ) (h : ¬ x*y*z < 0) : c < a + 3*b := a:ℕb:ℕc:ℕx:ℕy:ℕz:ℕh:¬x * y * z < 0⊢ c < a + 3 * b
a:ℕb:ℕc:ℕx:ℕy:ℕz:ℕh:¬x * y * z < 0⊢ ↑c < ↑a + 3 * ↑b
a:ℕb:ℕc:ℕx:ℕy:ℕz:ℕh:¬↑x * ↑y * ↑z < 0⊢ ↑c < ↑a + 3 * ↑b
/-
h : ¬↑x * ↑y * ↑z < 0
⊢ ↑c < ↑a + 3 * ↑b
-/
All goals completed! 🐙
qify can be given extra lemmas to use in simplification. This is especially useful in the
presence of nat subtraction: passing ≤ arguments will allow push_cast to do more work.
example (a b c : ℤ) (h : a / b = c) (hab : b ∣ a) (hb : b ≠ 0) : a = c * b := a:ℤb:ℤc:ℤh:a / b = chab:b ∣ ahb:b ≠ 0⊢ a = c * b
a:ℤb:ℤc:ℤhab:b ∣ ah:↑a / ↑b = ↑chb:↑b ≠ 0⊢ ↑a = ↑c * ↑b
All goals completed! 🐙
qify makes use of the @[zify_simps] and @[qify_simps] attributes to move propositions,
and the push_cast tactic to simplify the ℚ-valued expressions.
rcases
rcases is a tactic that will perform cases recursively, according to a pattern. It is used to
destructure hypotheses or expressions composed of inductive types like h1 : a ∧ b ∧ c ∨ d or
h2 : ∃ x y, trans_rel R x y. Usual usage might be rcases h1 with ⟨ha, hb, hc⟩ | hd or
rcases h2 with ⟨x, y, _ | ⟨z, hxz, hzy⟩⟩ for these examples.
Each element of an rcases pattern is matched against a particular local hypothesis (most of which
are generated during the execution of rcases and represent individual elements destructured from
the input expression). An rcases pattern has the following grammar:
- A name like x, which names the active hypothesis as x.
- A blank _, which does nothing (letting the automatic naming system used by cases name the hypothesis).
- A hyphen -, which clears the active hypothesis and any dependents.
- The keyword rfl, which expects the hypothesis to be h : a = b, and calls subst on the hypothesis (which has the effect of replacing b with a everywhere or vice versa).
- A type ascription p : ty, which sets the type of the hypothesis to ty and then matches it against p. (Of course, ty must unify with the actual type of h for this to work.)
- A tuple pattern ⟨p1, p2, p3⟩, which matches a constructor with many arguments, or a series of nested conjunctions or existentials. For example if the active hypothesis is a ∧ b ∧ c, then the conjunction will be destructured, and p1 will be matched against a, p2 against b and so on.
- A @ before a tuple pattern as in @⟨p1, p2, p3⟩ will bind all arguments in the constructor, while leaving the @ off will only use the patterns on the explicit arguments.
- An alternation pattern p1 | p2 | p3, which matches an inductive type with multiple constructors, or a nested disjunction like a ∨ b ∨ c.
A pattern like ⟨a, b, c⟩ | ⟨d, e⟩ will do a split over the inductive datatype,
naming the first three parameters of the first constructor as a,b,c and the
first two of the second constructor d,e. If the list is not as long as the
number of arguments to the constructor or the number of constructors, the
remaining variables will be automatically named. If there are nested brackets
such as ⟨⟨a⟩, b | c⟩ | d then these will cause more case splits as necessary.
If there are too many arguments, such as ⟨a, b, c⟩ for splitting on
∃ x, ∃ y, p x, then it will be treated as ⟨a, ⟨b, c⟩⟩, splitting the last
parameter as necessary.
rcases also has special support for quotient types: quotient induction into Prop works like
matching on the constructor quot.mk.
rcases h : e with PAT will do the same as rcases e with PAT with the exception that an
assumption h : e = PAT will be added to the context.
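For illustration (names are arbitrary), a minimal use might be:
example (p q r : Prop) (h : p ∧ (q ∨ r)) : (p ∧ q) ∨ (p ∧ r) := by
  rcases h with ⟨hp, hq | hr⟩   -- destructure the conjunction and split the disjunction
  · exact Or.inl ⟨hp, hq⟩
  · exact Or.inr ⟨hp, hr⟩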
rcongr
Repeatedly apply congr and ext, using the given patterns as arguments for ext.
There are two ways this tactic stops:
- congr fails (makes no progress), after having already applied ext.
- congr canceled out the last usage of ext. In this case, the state is reverted to before the congr was applied.
For example, when the goal is
⊢ (fun x => f x + 3) '' s = (fun x => g x + 3) '' s
then rcongr x produces the goal
x : α ⊢ f x = g x
This gives the same result as congr; ext x; congr.
In contrast, congr would produce
⊢ (fun x => f x + 3) = (fun x => g x + 3)
and congr with x (or congr; ext x) would produce
x : α ⊢ f x + 3 = g x + 3
recover
Modifier recover for a tactic (sequence) to debug cases where goals are closed incorrectly.
The tactic recover tacs for a tactic (sequence) tacs applies the tactics and then adds goals
that are not closed, starting from the original goal.
reduce
reduce at loc completely reduces the given location.
This also exists as a conv-mode tactic.
This does the same transformation as the #reduce command.
reduce_mod_char
The tactic reduce_mod_char looks for numeric expressions in characteristic p
and reduces these to lie between 0 and p.
For example:
example : (5 : ZMod 4) = 1 := by reduce_mod_char
example : (X ^ 2 - 3 * X + 4 : (ZMod 4)[X]) = X ^ 2 + X := by reduce_mod_char
It also handles negation, turning it into multiplication by p - 1,
and similarly subtraction.
This tactic uses the type of the subexpression to figure out if it is indeed of positive
characteristic, for improved performance compared to trying to synthesise a CharP instance.
The variant reduce_mod_char! also tries to use CharP R n hypotheses in the context.
(Limitations of the typeclass system mean the tactic can't search for a CharP R n instance if
n is not yet known; use have : CharP R n := inferInstance; reduce_mod_char! as a workaround.)
reduce_mod_char!
The tactic reduce_mod_char looks for numeric expressions in characteristic p
and reduces these to lie between 0 and p.
For example:
example : (5 : ZMod 4) = 1 := by reduce_mod_char
example : (X ^ 2 - 3 * X + 4 : (ZMod 4)[X]) = X ^ 2 + X := by reduce_mod_char
It also handles negation, turning it into multiplication by p - 1,
and similarly subtraction.
This tactic uses the type of the subexpression to figure out if it is indeed of positive
characteristic, for improved performance compared to trying to synthesise a CharP instance.
The variant reduce_mod_char! also tries to use CharP R n hypotheses in the context.
(Limitations of the typeclass system mean the tactic can't search for a CharP R n instance if
n is not yet known; use have : CharP R n := inferInstance; reduce_mod_char! as a workaround.)
refine
refine e behaves like exact e, except that named (?x) or unnamed (?_)
holes in e that are not solved by unification with the main goal's target type
are converted into new goals, using the hole's name, if any, as the goal case name.
refine'
refine' e behaves like refine e, except that unsolved placeholders (_)
and implicit parameters are also converted into new goals.
refine_lift
Auxiliary macro for lifting have/suffices/let/...
It makes sure the "continuation" ?_ is the main goal after refining.
refold_let
refold_let x y z at loc looks for the bodies of local definitions x, y, and z at the given
location and replaces them with x, y, or z. This is the inverse of "zeta reduction."
This also exists as a conv-mode tactic.
rel
The rel tactic applies "generalized congruence" rules to solve a relational goal by
"substitution". For example,
example {a b x c d : ℝ} (h1 : a ≤ b) (h2 : c ≤ d) :
x ^ 2 * a + c ≤ x ^ 2 * b + d := a:ℝb:ℝx:ℝc:ℝd:ℝh1:a ≤ bh2:c ≤ d⊢ x ^ 2 * a + c ≤ x ^ 2 * b + d
All goals completed! 🐙
In this example we "substitute" the hypotheses a ≤ b and c ≤ d into the LHS x ^ 2 * a + c of
the goal and obtain the RHS x ^ 2 * b + d, thus proving the goal.
The "generalized congruence" rules used are the library lemmas which have been tagged with the
attribute @[gcongr]. For example, the first example constructs the proof term
add_le_add (mul_le_mul_of_nonneg_left h1 (pow_bit0_nonneg x 1)) h2
using the generalized congruence lemmas add_le_add and mul_le_mul_of_nonneg_left. If there are
no applicable generalized congruence lemmas, the tactic fails.
The tactic attempts to discharge side goals to these "generalized congruence" lemmas (such as the
side goal 0 ≤ x ^ 2 in the above application of mul_le_mul_of_nonneg_left) using the tactic
gcongr_discharger, which wraps positivity but can also be extended. If the side goals cannot
be discharged in this way, the tactic fails.
rename
rename t => x renames the most recent hypothesis whose type matches t
(which may contain placeholders) to x, or fails if no such hypothesis could be found.
rename'
rename_bvar
- rename_bvar old → new renames all bound variables named old to new in the target.
- rename_bvar old → new at h does the same in hypothesis h.
example (P : ℕ → ℕ → Prop) (h : ∀ n, ∃ m, P n m) : ∀ l, ∃ m, P l m := P:ℕ → ℕ → Proph:∀ (n : ℕ), ∃ m, P n m⊢ ∀ (l : ℕ), ∃ m, P l m
P:ℕ → ℕ → Proph:∀ (q : ℕ), ∃ m, P q m⊢ ∀ (l : ℕ), ∃ m, P l m -- h is now ∀ (q : ℕ), ∃ (m : ℕ), P q m,
P:ℕ → ℕ → Proph:∀ (q : ℕ), ∃ m, P q m⊢ ∀ (l : ℕ), ∃ n, P l n -- target is now ∀ (l : ℕ), ∃ (n : ℕ), P l n,
All goals completed! 🐙 -- Lean does not care about those bound variable names
Note: name clashes are resolved automatically.
repeat
repeat tac repeatedly applies tac so long as it succeeds.
The tactic tac may be a tactic sequence, and if tac fails at any point in its execution,
repeat will revert any partial changes that tac made to the tactic state.
The tactic tac should eventually fail, otherwise repeat tac will run indefinitely.
See also:
- try tac is like repeat tac but will apply tac at most once.
- repeat' tac recursively applies tac to each goal.
- first | tac1 | tac2 implements the backtracking used by repeat
repeat'
repeat' tac recursively applies tac on all of the goals so long as it succeeds.
That is to say, if tac produces multiple subgoals, then repeat' tac is applied to each of them.
See also:
- repeat tac simply repeatedly applies tac.
- repeat1' tac is repeat' tac but requires that tac succeed for some goal at least once.
repeat1
repeat1 tac applies tac to the main goal at least once. If the application succeeds,
the tactic is applied recursively to the generated subgoals until it eventually fails.
repeat1'
repeat1' tac recursively applies tac to all of the goals so long as it succeeds,
but repeat1' tac fails if tac succeeds on none of the initial goals.
See also:
- repeat tac simply applies tac repeatedly.
- repeat' tac is like repeat1' tac but it does not require that tac succeed at least once.
replace
Acts like have, but removes a hypothesis with the same name as
this one if possible. For example, if the state is:
f : α → β h : α ⊢ goal
Then after replace h : β the state will be:
case h f : α → β h : α ⊢ β f : α → β h : β ⊢ goal
whereas have h : β would result in:
case h f : α → β h : α ⊢ β f : α → β h✝ : α h : β ⊢ goal
replace
Acts like have, but removes a hypothesis with the same name as
this one if possible. For example, if the state is:
f : α → β h : α ⊢ goal
Then after replace h := f h the state will be:
f : α → β h : β ⊢ goal
whereas have h := f h would result in:
f : α → β h✝ : α h : β ⊢ goal
This can be used to simulate the specialize and apply at tactics of Coq.
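For illustration (names are arbitrary), a minimal use might be:
example (f : ℕ → ℕ) (h : ∀ n, f n = n) : f 2 = 2 := by
  replace h := h 2   -- h : f 2 = 2; the old universally quantified h is gone
  exact h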
revert
revert x... is the inverse of intro x...: it moves the given hypotheses
into the main goal's target type.
rewrite
rewrite [e] applies identity e as a rewrite rule to the target of the main goal.
If e is preceded by left arrow (← or <-), the rewrite is applied in the reverse direction.
If e is a defined constant, then the equational theorems associated with e are used.
This provides a convenient way to unfold e.
- rewrite [e₁, ..., eₙ] applies the given rules sequentially.
- rewrite [e] at l rewrites e at location(s) l, where l is either * or a list of hypotheses in the local context. In the latter case, a turnstile ⊢ or |- can also be used, to signify the target of the goal.
Using rw (occs := .pos L) [e],
where L : List Nat, you can control which "occurrences" are rewritten.
(This option applies to each rule, so usually this will only be used with a single rule.)
Occurrences count from 1.
At each allowed occurrence, arguments of the rewrite rule e may be instantiated,
restricting which later rewrites can be found.
(Disallowed occurrences do not result in instantiation.)
(occs := .neg L) allows skipping specified occurrences.
rewrite!
rewrite! is like rewrite,
but can also insert casts to adjust types that depend on the LHS of a rewrite.
It is available as an ordinary tactic and a conv tactic.
The sort of casts that are inserted is controlled by the castMode configuration option.
By default, only proof terms are casted;
by proof irrelevance, this adds no observable complexity.
With rewrite! +letAbs (castMode := .all), casts are inserted whenever necessary.
This means that the 'motive is not type correct' error never occurs,
at the expense of creating potentially complicated terms.
rfl
This tactic applies to a goal whose target has the form x ~ x,
where ~ is equality, heterogeneous equality or any relation that
has a reflexivity lemma tagged with the attribute @[refl].
rfl'
rfl' is similar to rfl, but disables smart unfolding and unfolds all kinds of definitions,
theorems included (relevant for declarations defined by well-founded recursion).
rfl_cat
rfl_cat is a macro for intros; rfl which is attempted in aesop_cat before
doing the more expensive aesop tactic.
This gives a speedup because simp (called by aesop) can be very slow.
https://github.com/leanprover-community/mathlib4/pull/25475 contains measurements from June 2025.
Implementation notes:
- refine id ?_: In some cases it is important that the type of the proof matches the expected type exactly. e.g. if the goal is 2 = 1 + 1, the rfl tactic will give a proof of type 2 = 2. Starting a proof with refine id ?_ is a trick to make sure that the proof has exactly the expected type, in this case 2 = 1 + 1. See also https://leanprover.zulipchat.com/#narrow/channel/270676-lean4/topic/changing.20a.20proof.20can.20break.20a.20later.20proof
- apply_rfl: rfl is a macro that attempts both eq_refl and apply_rfl. Since apply_rfl subsumes eq_refl, we can use apply_rfl instead. This fails twice as fast as rfl.
rify
The rify tactic is used to shift propositions from ℕ, ℤ or ℚ to ℝ.
Although less useful than its cousins zify and qify, it can be useful when your
goal or context already involves real numbers.
In the example below, assumption hn is about natural numbers, hk is about integers
and involves casting a natural number to ℤ, and the conclusion is about real numbers.
The proof uses rify to lift both assumptions to ℝ before calling linarith.
example {n : ℕ} {k : ℤ} (hn : 8 ≤ n) (hk : 2 * k ≤ n + 2) :
(0 : ℝ) < n - k - 1 := n:ℕk:ℤhn:8 ≤ nhk:2 * k ≤ ↑n + 2⊢ 0 < ↑n - ↑k - 1
n:ℕk:ℤhn:8 ≤ ↑nhk:2 * ↑k ≤ ↑n + 2⊢ 0 < ↑n - ↑k - 1 /- Now have hn : 8 ≤ (n : ℝ) hk : 2 * (k : ℝ) ≤ (n : ℝ) + 2 -/
All goals completed! 🐙
rify makes use of the @[zify_simps], @[qify_simps] and @[rify_simps] attributes to move
propositions, and the push_cast tactic to simplify the ℝ-valued expressions.
rify can be given extra lemmas to use in simplification. This is especially useful in the
presence of nat subtraction: passing ≤ arguments will allow push_cast to do more work.
example (a b c : ℕ) (h : a - b < c) (hab : b ≤ a) : a < b + c := a:ℕb:ℕc:ℕh:a - b < chab:b ≤ a⊢ a < b + c
a:ℕb:ℕc:ℕhab:b ≤ ah:↑a - ↑b < ↑c⊢ ↑a < ↑b + ↑c
All goals completed! 🐙
Note that zify or qify would work just as well in the above example (and zify is the natural
choice since it is enough to get rid of the pathological ℕ subtraction).
right
Applies the second constructor when the goal is an inductive type with exactly two constructors, or fails otherwise.
example {p q : Prop} (h : q) : p ∨ q := p:Propq:Proph:q⊢ p ∨ q
p:Propq:Proph:q⊢ q
All goals completed! 🐙
ring
Tactic for evaluating expressions in commutative (semi)rings, allowing for variables in the
exponent. If the goal is not appropriate for ring (e.g. not an equality) ring_nf will be
suggested.
- ring! will use a more aggressive reducibility setting to determine equality of atoms.
- ring1 fails if the target is not an equality.
For example:
example (n : ℕ) (m : ℤ) : 2^(n+1) * m = 2 * 2^n * m := n:ℕm:ℤ⊢ 2 ^ (n + 1) * m = 2 * 2 ^ n * m All goals completed! 🐙
example (a b : ℤ) (n : ℕ) : (a + b)^(n + 2) = (a^2 + b^2 + a * b + b * a) * (a + b)^n := a:ℤb:ℤn:ℕ⊢ (a + b) ^ (n + 2) = (a ^ 2 + b ^ 2 + a * b + b * a) * (a + b) ^ n All goals completed! 🐙
example (x y : ℕ) : x + id y = y + id x := x:ℕy:ℕ⊢ x + id y = y + id x All goals completed! 🐙
example (x : ℕ) (h : x * 2 > 5): x + x > 5 := x:ℕh:x * 2 > 5⊢ x + x > 5 x:ℕh:x * 2 > 5⊢ x * 2 > 5; All goals completed! 🐙 -- suggests ring_nf
ring!
Tactic for evaluating expressions in commutative (semi)rings, allowing for variables in the
exponent. If the goal is not appropriate for ring (e.g. not an equality) ring_nf will be
suggested.
- ring! will use a more aggressive reducibility setting to determine equality of atoms.
- ring1 fails if the target is not an equality.
For example:
example (n : ℕ) (m : ℤ) : 2^(n+1) * m = 2 * 2^n * m := n:ℕm:ℤ⊢ 2 ^ (n + 1) * m = 2 * 2 ^ n * m All goals completed! 🐙
example (a b : ℤ) (n : ℕ) : (a + b)^(n + 2) = (a^2 + b^2 + a * b + b * a) * (a + b)^n := a:ℤb:ℤn:ℕ⊢ (a + b) ^ (n + 2) = (a ^ 2 + b ^ 2 + a * b + b * a) * (a + b) ^ n All goals completed! 🐙
example (x y : ℕ) : x + id y = y + id x := x:ℕy:ℕ⊢ x + id y = y + id x All goals completed! 🐙
example (x : ℕ) (h : x * 2 > 5): x + x > 5 := x:ℕh:x * 2 > 5⊢ x + x > 5 x:ℕh:x * 2 > 5⊢ x * 2 > 5; All goals completed! 🐙 -- suggests ring_nf
ring1
Tactic for solving equations of commutative (semi)rings, allowing variables in the exponent.
- This version of ring fails if the target is not an equality.
- The variant ring1! will use a more aggressive reducibility setting to determine equality of atoms.
ring1!
Tactic for solving equations of commutative (semi)rings, allowing variables in the exponent.
- This version of ring fails if the target is not an equality.
- The variant ring1! will use a more aggressive reducibility setting to determine equality of atoms.
ring1_nf
ring1_nf!
ring_nf
Simplification tactic for expressions in the language of commutative (semi)rings, which rewrites all ring expressions into a normal form.
- ring_nf! will use a more aggressive reducibility setting to identify atoms.
- ring_nf (config := cfg) allows for additional configuration:
  - red: the reducibility setting (overridden by !)
  - zetaDelta: if true, local let variables can be unfolded (overridden by !)
  - recursive: if true, ring_nf will also recurse into atoms
- ring_nf works as both a tactic and a conv tactic. In tactic mode, ring_nf at h can be used to rewrite in a hypothesis.
This can be used non-terminally to normalize ring expressions in the goal such as
⊢ P (x + x + x) ~> ⊢ P (x * 3), as well as being able to prove some equations that
ring cannot because they involve ring reasoning inside a subterm, such as
sin (x + y) + sin (y + x) = 2 * sin (x + y).
ring_nf!
Simplification tactic for expressions in the language of commutative (semi)rings, which rewrites all ring expressions into a normal form.
- ring_nf! will use a more aggressive reducibility setting to identify atoms.
- ring_nf (config := cfg) allows for additional configuration:
  - red: the reducibility setting (overridden by !)
  - zetaDelta: if true, local let variables can be unfolded (overridden by !)
  - recursive: if true, ring_nf will also recurse into atoms
- ring_nf works as both a tactic and a conv tactic. In tactic mode, ring_nf at h can be used to rewrite in a hypothesis.
This can be used non-terminally to normalize ring expressions in the goal such as
⊢ P (x + x + x) ~> ⊢ P (x * 3), as well as being able to prove some equations that
ring cannot because they involve ring reasoning inside a subterm, such as
sin (x + y) + sin (y + x) = 2 * sin (x + y).
rintro
The rintro tactic is a combination of the intros tactic with rcases to
allow for destructuring patterns while introducing variables. See rcases for
a description of supported patterns. For example, rintro (a | ⟨b, c⟩) ⟨d, e⟩
will introduce two variables, and then do case splits on both of them producing
two subgoals, one with variables a d e and the other with b c d e.
rintro, unlike rcases, also supports the form (x y : ty) for introducing
and type-ascripting multiple variables at once, similar to binders.
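For illustration (names are arbitrary), a minimal use might be:
example (p q r : Prop) : (p ∨ q) → (p → r) → (q → r) → r := by
  rintro (hp | hq) hpr hqr   -- intro plus a case split on the disjunction
  · exact hpr hp
  · exact hqr hq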
rotate_left
rotate_left n rotates goals to the left by n. That is, rotate_left 1
takes the main goal and puts it to the back of the subgoal list.
If n is omitted, it defaults to 1.
rotate_right
Rotate the goals to the right by n. That is, take the goal at the back
and push it to the front n times. If n is omitted, it defaults to 1.
rsuffices
The rsuffices tactic is an alternative version of suffices, that allows the usage
of any syntax that would be valid in an obtain block. This tactic just calls obtain
on the expression, and then rotate_left.
rw
rw!
rw?
rw??
rw?? is an interactive tactic that suggests rewrites for any expression selected by the user.
To use it, shift-click an expression in the goal or a hypothesis that you want to rewrite.
Clicking on one of the rewrite suggestions will paste the relevant rewrite tactic into the editor.
The rewrite suggestions are grouped and sorted by the pattern that the rewrite lemmas match with. Rewrites that don't change the goal and rewrites that create the same goal as another rewrite are filtered out, as well as rewrites that have new metavariables in the replacement expression. To see all suggestions, click on the filter button (▼) in the top right.
rw_mod_cast
Rewrites with the given rules, normalizing casts prior to each step.
saturate
saturate?
says
If you write X says, where X is a tactic that produces a "Try this: Y" message,
then you will get a message "Try this: X says Y".
Once you've clicked to replace X says with X says Y,
afterwards X says Y will only run Y.
The typical usage case is:
simp? [X] says simp only [X, Y, Z]
If you use set_option says.verify true (set automatically during CI) then X says Y
runs X and verifies that it still prints "Try this: Y".
set
set!
show
show t finds the first goal whose target unifies with t. It makes that the main goal,
performs the unification, and replaces the target with the unified version of t.
show_term
show_term tac runs tac, then prints the generated term in the form
"exact X Y Z" or "refine X ?_ Z" (prefixed by expose_names if necessary)
if there are remaining subgoals.
(For some tactics, the printed term will not be human readable.)
simp
The simp tactic uses lemmas and hypotheses to simplify the main goal target or
non-dependent hypotheses. It has many variants:
- simp simplifies the main goal target using lemmas tagged with the attribute [simp].
- simp [h₁, h₂, ..., hₙ] simplifies the main goal target using the lemmas tagged with the attribute [simp] and the given hᵢ's, where the hᵢ's are expressions.
  - If an hᵢ is a defined constant f, then f is unfolded. If f has equational lemmas associated with it (and is not a projection or a reducible definition), these are used to rewrite with f.
- simp [*] simplifies the main goal target using the lemmas tagged with the attribute [simp] and all hypotheses.
- simp only [h₁, h₂, ..., hₙ] is like simp [h₁, h₂, ..., hₙ] but does not use [simp] lemmas.
- simp [-id₁, ..., -idₙ] simplifies the main goal target using the lemmas tagged with the attribute [simp], but removes the ones named idᵢ.
- simp at h₁ h₂ ... hₙ simplifies the hypotheses h₁ : T₁ ... hₙ : Tₙ. If the target or another hypothesis depends on hᵢ, a new simplified hypothesis hᵢ is introduced, but the old one remains in the local context.
- simp at * simplifies all the hypotheses and the target.
- simp [*] at * simplifies target and all (propositional) hypotheses using the other hypotheses.
simp!
simp! is shorthand for simp with autoUnfold := true.
This will rewrite with all equation lemmas, which can be used to
partially evaluate many definitions.
simp?
simp? takes the same arguments as simp, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := x:ℕ⊢ (if True then x + 2 else 3) = x + 2
All goals completed! 🐙 -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all and dsimp.
simp?!
simp? takes the same arguments as simp, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := x:ℕ⊢ (if True then x + 2 else 3) = x + 2
All goals completed! 🐙 -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all and dsimp.
simp_all
simp_all is a stronger version of simp [*] at * where the hypotheses and target
are simplified multiple times until no simplification is applicable.
Only non-dependent propositional hypotheses are considered.
simp_all!
simp_all! is shorthand for simp_all with autoUnfold := true.
This will rewrite with all equation lemmas, which can be used to
partially evaluate many definitions.
simp_all?
simp? takes the same arguments as simp, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := x:ℕ⊢ (if True then x + 2 else 3) = x + 2
All goals completed! 🐙 -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all and dsimp.
simp_all?!
simp? takes the same arguments as simp, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := x:ℕ⊢ (if True then x + 2 else 3) = x + 2
All goals completed! 🐙 -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all and dsimp.
simp_all_arith
simp_all_arith has been deprecated. It was a shorthand for simp_all +arith +decide.
Note that +decide is not needed for reducing arithmetic terms since simprocs have been added to Lean.
simp_all_arith!
simp_all_arith! has been deprecated. It was a shorthand for simp_all! +arith +decide.
Note that +decide is not needed for reducing arithmetic terms since simprocs have been added to Lean.
simp_arith
simp_arith has been deprecated. It was a shorthand for simp +arith +decide.
Note that +decide is not needed for reducing arithmetic terms since simprocs have been added to Lean.
simp_arith!
simp_arith! has been deprecated. It was a shorthand for simp! +arith +decide.
Note that +decide is not needed for reducing arithmetic terms since simprocs have been added to Lean.
simp_intro
The simp_intro tactic is a combination of simp and intro: it will simplify the types of
variables as it introduces them and uses the new variables to simplify later arguments
and the goal.
- simp_intro x y z introduces variables named x y z
- simp_intro x y z .. introduces variables named x y z and then keeps introducing _ binders
- simp_intro (config := cfg) (discharger := tac) x y .. only [h₁, h₂]: simp_intro takes the same options as simp (see simp)
example : x + 0 = y → x = z := x:ℕy:ℕz:ℕ⊢ x + 0 = y → x = z
x:ℕy:ℕz:ℕh:x = y⊢ y = z
-- h: x = y ⊢ y = z
All goals completed! 🐙
simp_rw
simp_rw functions as a mix of simp and rw. Like rw, it applies each
rewrite rule in the given order, but like simp it repeatedly applies these
rules and also under binders like ∀ x, ..., ∃ x, ... and fun x ↦....
Usage:
- simp_rw [lemma_1, ..., lemma_n] will rewrite the goal by applying the lemmas in that order. A lemma preceded by ← is applied in the reverse direction.
- simp_rw [lemma_1, ..., lemma_n] at h₁ ... hₙ will rewrite the given hypotheses.
- simp_rw [...] at * rewrites in the whole context: all hypotheses and the goal.
Lemmas passed to simp_rw must be expressions that are valid arguments to simp.
For example, neither simp nor rw can solve the following, but simp_rw can:
example {a : ℕ}
(h1 : ∀ a b : ℕ, a - 1 ≤ b ↔ a ≤ b + 1)
(h2 : ∀ a b : ℕ, a ≤ b ↔ ∀ c, c < a → c < b) :
(∀ b, a - 1 ≤ b) = ∀ b c : ℕ, c < a → c < b + 1 := a:ℕh1:∀ (a b : ℕ), a - 1 ≤ b ↔ a ≤ b + 1h2:∀ (a b : ℕ), a ≤ b ↔ ∀ c < a, c < b⊢ (∀ (b : ℕ), a - 1 ≤ b) = ∀ (b c : ℕ), c < a → c < b + 1
All goals completed! 🐙
simp_wf
Unfold definitions commonly used in well founded relation definitions.
Since Lean 4.12, Lean unfolds these definitions automatically before presenting the goal to the
user, and this tactic should no longer be necessary. Calls to simp_wf can be removed or replaced
by plain calls to simp.
simpa
This is a "finishing" tactic modification of simp. It has two forms.
- simpa [rules, ⋯] using e will simplify the goal and the type of e using rules, then try to close the goal using e.
Simplifying the type of e makes it more likely to match the goal
(which has also been simplified). This construction also tends to be
more robust under changes to the simp lemma set.
- simpa [rules, ⋯] will simplify the goal and the type of a hypothesis this if present in the context, then try to close the goal using the assumption tactic.
simpa!
This is a "finishing" tactic modification of simp. It has two forms.
- simpa [rules, ⋯] using e will simplify the goal and the type of e using rules, then try to close the goal using e.
Simplifying the type of e makes it more likely to match the goal
(which has also been simplified). This construction also tends to be
more robust under changes to the simp lemma set.
- simpa [rules, ⋯] will simplify the goal and the type of a hypothesis this if present in the context, then try to close the goal using the assumption tactic.
simpa?
This is a "finishing" tactic modification of simp. It has two forms.
- simpa [rules, ⋯] using e will simplify the goal and the type of e using rules, then try to close the goal using e.
Simplifying the type of e makes it more likely to match the goal
(which has also been simplified). This construction also tends to be
more robust under changes to the simp lemma set.
- simpa [rules, ⋯] will simplify the goal and the type of a hypothesis this if present in the context, then try to close the goal using the assumption tactic.
simpa?!
This is a "finishing" tactic modification of simp. It has two forms.
- simpa [rules, ⋯] using e will simplify the goal and the type of e using rules, then try to close the goal using e.
Simplifying the type of e makes it more likely to match the goal
(which has also been simplified). This construction also tends to be
more robust under changes to the simp lemma set.
- simpa [rules, ⋯] will simplify the goal and the type of a hypothesis this if present in the context, then try to close the goal using the assumption tactic.
sizeOf_list_dec
This tactic, added to the decreasing_trivial toolbox, proves that
sizeOf a < sizeOf as when a ∈ as, which is useful for well founded recursions
over a nested inductive like inductive T | mk : List T → T.
sleep
The tactic sleep ms sleeps for ms milliseconds and does nothing.
It is used for debugging purposes only.
slice_lhs
slice_lhs a b => tac zooms to the left-hand side, uses associativity for categorical
composition as needed, zooms in on the a-th through b-th morphisms, and invokes tac.
slice_rhs
slice_rhs a b => tac zooms to the right-hand side, uses associativity for categorical
composition as needed, zooms in on the a-th through b-th morphisms, and invokes tac.
solve_by_elim
solve_by_elim calls apply on the main goal to find an assumption whose head matches
and then repeatedly calls apply on the generated subgoals until no subgoals remain,
performing at most maxDepth (defaults to 6) recursive steps.
solve_by_elim discharges the current goal or fails.
solve_by_elim performs backtracking if subgoals can not be solved.
By default, the assumptions passed to apply are the local context, rfl, trivial,
congrFun and congrArg.
The assumptions can be modified with similar syntax as for simp:
- solve_by_elim [h₁, h₂, ..., hᵣ] also applies the given expressions.
- solve_by_elim only [h₁, h₂, ..., hᵣ] does not include the local context, rfl, trivial, congrFun, or congrArg unless they are explicitly included.
- solve_by_elim [-h₁, ... -hₙ] removes the given local hypotheses.
- solve_by_elim using [a₁, ...] uses all lemmas which have been labelled with the attributes aᵢ (these attributes must be created using register_label_attr).
solve_by_elim* tries to solve all goals together, using backtracking if a solution for one goal
makes other goals impossible.
(Adding or removing local hypotheses may not be well-behaved when starting with multiple goals.)
Optional arguments passed via a configuration argument as solve_by_elim (config := { ... })
- maxDepth: number of attempts at discharging generated subgoals
- symm: adds all hypotheses derived by symm (defaults to true).
- exfalso: allow calling exfalso and trying again if solve_by_elim fails (defaults to true).
- transparency: change the transparency mode when calling apply. Defaults to .default, but it is often useful to change to .reducible, so semireducible definitions will not be unfolded when trying to apply a lemma.
See also the doc-comment for Lean.Meta.Tactic.Backtrack.BacktrackConfig for the options
proc, suspend, and discharge which allow further customization of solve_by_elim.
Both apply_assumption and apply_rules are implemented via these hooks.
sorry
The sorry tactic is a temporary placeholder for an incomplete tactic proof,
closing the main goal using exact sorry.
This is intended for stubbing-out incomplete parts of a proof while still having a syntactically correct proof skeleton.
Lean will give a warning whenever a proof uses sorry, so you aren't likely to miss it,
but you can double check if a theorem depends on sorry by looking for sorryAx in the output
of the #print axioms my_thm command, the axiom used by the implementation of sorry.
specialize
The tactic specialize h a₁ ... aₙ works on local hypothesis h.
The premises of this hypothesis, either universal quantifications or
non-dependent implications, are instantiated by concrete terms coming
from arguments a₁ ... aₙ.
The tactic adds a new hypothesis with the same name h := h a₁ ... aₙ
and tries to clear the previous one.
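For illustration (names are arbitrary), a minimal use might be:
example (f : ℕ → ℕ) (h : ∀ n, 0 < n → f n = n) : f 3 = 3 := by
  specialize h 3 (by omega)   -- h : f 3 = 3
  exact h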
specialize_all
specialize_all x runs specialize h x for all hypotheses h where this tactic succeeds.
split
The split tactic is useful for breaking nested if-then-else and match expressions into separate cases.
For a match expression with n cases, the split tactic generates at most n subgoals.
For example, given n : Nat, and a target if n = 0 then Q else R, split will generate
one goal with hypothesis n = 0 and target Q, and a second goal with hypothesis
¬n = 0 and target R. Note that the introduced hypothesis is unnamed, and is commonly
renamed using the case or next tactics.
- split will split the goal (target).
- split at h will split the hypothesis h.
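For illustration (illustrative only), a minimal use might be:
example (n : ℕ) : 0 < (if n = 0 then 1 else 2) := by
  split
  · omega   -- branch with (unnamed) hypothesis n = 0; goal is 0 < 1
  · omega   -- branch with hypothesis ¬n = 0; goal is 0 < 2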
split_ifs
Splits all if-then-else-expressions into multiple goals.
Given a goal of the form g (if p then x else y), split_ifs will produce
two goals: p ⊢ g x and ¬p ⊢ g y.
If there are multiple ite-expressions, then split_ifs will split them all,
starting with a top-most one whose condition does not contain another
ite-expression.
split_ifs at * splits all ite-expressions in all hypotheses as well as the goal.
split_ifs with h₁ h₂ h₃ overrides the default names for the hypotheses.
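For illustration (names are arbitrary), a minimal use might be:
example (p : Prop) [Decidable p] (a b : ℕ) (ha : 0 < a) (hb : 0 < b) :
    0 < (if p then a else b) := by
  split_ifs with hp
  · exact ha   -- here hp : p
  · exact hb   -- here hp : ¬p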
squeeze_scope
The squeeze_scope tactic allows aggregating multiple calls to simp coming from the same syntax
but in different branches of execution, such as in cases x <;> simp.
The reported simp call covers all simp lemmas used by this syntax.
@[simp] def bar (z : Nat) := 1 + z
@[simp] def baz (z : Nat) := 1 + z
@[simp] def foo : Nat → Nat → Nat
| 0, z => bar z
| _+1, z => baz z
example : foo x y = 1 + y := x:ℕy:ℕ⊢ foo x y = 1 + y
y:ℕ⊢ foo 0 y = 1 + yy:ℕn✝:ℕ⊢ foo (n✝ + 1) y = 1 + y y:ℕ⊢ foo 0 y = 1 + yy:ℕn✝:ℕ⊢ foo (n✝ + 1) y = 1 + y All goals completed! 🐙 -- two printouts:
-- "Try this: simp only [foo, bar]"
-- "Try this: simp only [foo, baz]"
example : foo x y = 1 + y := x:ℕy:ℕ⊢ foo x y = 1 + y
squeeze_scope
y:ℕ⊢ foo 0 y = 1 + yy:ℕn✝:ℕ⊢ foo (n✝ + 1) y = 1 + y y:ℕ⊢ foo 0 y = 1 + yy:ℕn✝:ℕ⊢ foo (n✝ + 1) y = 1 + y simp -- only one printout: "Try this: simp only [foo, baz, bar]"
stop
stop is a helper tactic for "discarding" the rest of a proof:
it is defined as repeat sorry.
It is useful when working in the middle of a complex proof,
and less messy than commenting the remainder of the proof.
subsingleton
The subsingleton tactic tries to prove a goal of the form x = y or x ≍ y
using the fact that the types involved are subsingletons
(a type with exactly zero or one terms).
To a first approximation, it does apply Subsingleton.elim.
As a nicety, subsingleton first runs the intros tactic.
- If the goal is an equality, it either closes the goal or fails.
- subsingleton [inst1, inst2, ...] can be used to add additional Subsingleton instances to the local context. This can be more flexible than have := inst1; have := inst2; ...; subsingleton since the tactic does not require that all placeholders be solved for.
Techniques the subsingleton tactic can apply:
- proof irrelevance
- heterogeneous proof irrelevance (via proof_irrel_heq)
- using Subsingleton (via Subsingleton.elim)
- proving BEq instances are equal if they are both lawful (via lawful_beq_subsingleton)
Properties
The tactic is careful not to accidentally specialize Sort _ to Prop,
avoiding the following surprising behavior of apply Subsingleton.elim:
example (α : Sort _) (x y : α) : x = y := by apply Subsingleton.elim
The reason this example goes through is that
it applies the ∀ (p : Prop), Subsingleton p instance,
specializing the universe level metavariable in Sort _ to 0.
subst
subst x... substitutes each hypothesis x with a definition found in the local context,
then eliminates the hypothesis.
- If x is a local definition, then its definition is used.
- Otherwise, if there is a hypothesis of the form x = e or e = x, then e is used for the definition of x.
If h : a = b, then subst h may be used if either a or b unfolds to a local hypothesis.
This is similar to the cases h tactic.
See also: subst_vars for substituting all local hypotheses that have a defining equation.
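For illustration (names are arbitrary), a minimal use might be:
example (a b : ℕ) (h : a = b) (hb : b = 3) : a = 3 := by
  subst h   -- eliminates one of the variables using h
  exact hb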
subst_eqs
subst_eq repeatedly substitutes according to the equality proof hypotheses in the context,
replacing the left side of the equality with the right, until no more progress can be made.
success_if_fail_with_msg
success_if_fail_with_msg msg tacs runs tacs and succeeds only if they fail with the message
msg.
msg can be any term that evaluates to an explicit String.
suffices
suffices
Given a main goal ctx ⊢ t, suffices h : t' from e replaces the main goal with ctx ⊢ t',
e must have type t in the context ctx, h : t'.
The variant suffices h : t' by tac is a shorthand for suffices h : t' from by tac.
If h : is omitted, the name this is used.
suggest_premises
#suggest_premises will suggest premises for the current goal, using the currently registered premise selector.
The suggestions are printed in the order of their confidence, from highest to lowest.
swap_var
swap_var swap_rule₁, swap_rule₂, ⋯ applies swap_rule₁ then swap_rule₂ then ⋯.
A swap_rule is of the form x y or x ↔ y, and "applying it" means swapping the variable name
x by y and vice-versa on all hypotheses and the goal.
example {P Q : Prop} (q : P) (p : Q) : P ∧ Q := P:PropQ:Propq:Pp:Q⊢ P ∧ Q
P:PropQ:Propp:Pq:Q⊢ P ∧ Q
All goals completed! 🐙
symm
- symm applies to a goal whose target has the form t ~ u where ~ is a symmetric relation, that is, a relation which has a symmetry lemma tagged with the attribute [symm]. It replaces the target with u ~ t.
- symm at h will rewrite a hypothesis h : t ~ u to h : u ~ t.
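For illustration (names are arbitrary), a minimal use might be:
example (a b : ℕ) (h : a = b) : b = a := by
  symm   -- the goal becomes a = b
  exact h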
symm_saturate
For every hypothesis h : a ~ b where a @[symm] lemma is available,
add a hypothesis h_symm : b ~ a.
tauto
tauto breaks down assumptions of the form _ ∧ _, _ ∨ _, _ ↔ _ and ∃ _, _
and splits a goal of the form _ ∧ _, _ ↔ _ or ∃ _, _ until it can be discharged
using reflexivity or solve_by_elim.
This is a finishing tactic: it either closes the goal or raises an error.
The Lean 3 version of this tactic by default attempted to avoid classical reasoning
where possible. This Lean 4 version makes no such attempt. The itauto tactic
is designed for that purpose.
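For illustration, a minimal goal of the kind tauto closes:
example (p q : Prop) (h : p ∧ q) : q ∧ p := by tauto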
tauto_set
tauto_set attempts to prove tautologies involving hypotheses and goals of the form X ⊆ Y
or X = Y, where X, Y are expressions built using ∪, ∩, \, and ᶜ from finitely many
variables of type Set α. It also unfolds expressions of the form Disjoint A B and
symmDiff A B.
Examples:
example {α} (A B C D : Set α) (h1 : A ⊆ B) (h2 : C ⊆ D) : C \ B ⊆ D \ A := α:Type u_1A:Set αB:Set αC:Set αD:Set αh1:A ⊆ Bh2:C ⊆ D⊢ C \ B ⊆ D \ A
All goals completed! 🐙
example {α} (A B C : Set α) (h1 : A ⊆ B ∪ C) : (A ∩ B) ∪ (A ∩ C) = A := α:Type u_1A:Set αB:Set αC:Set αh1:A ⊆ B ∪ C⊢ A ∩ B ∪ A ∩ C = A
All goals completed! 🐙
tfae_finish
tfae_finish is used to close goals of the form TFAE [P₁, P₂, ...] once a sufficient collection
of hypotheses of the form Pᵢ → Pⱼ or Pᵢ ↔ Pⱼ have been introduced to the local context.
tfae_have can be used to conveniently introduce these hypotheses; see tfae_have.
Example:
example : TFAE [P, Q, R] := by tfae_have 1 → 2 := sorry /- proof of P → Q -/ tfae_have 2 → 1 := sorry /- proof of Q → P -/ tfae_have 2 ↔ 3 := sorry /- proof of Q ↔ R -/ tfae_finish
tfae_have
"Goal-style" tfae_have syntax is deprecated. Now, tfae_have ... should be followedby := ...; see below for the new behavior. This warning can be turned off with set_option Mathlib.Tactic.TFAE.useDeprecated true.
tfae_have introduces hypotheses for proving goals of the form TFAE [P₁, P₂, ...]. Specifically,
tfae_have i <arrow> j := ... introduces a hypothesis of type Pᵢ <arrow> Pⱼ to the local
context, where <arrow> can be →, ←, or ↔. Note that i and j are natural number indices
(beginning at 1) used to specify the propositions P₁, P₂, ... that appear in the goal.
example (h : P → R) : TFAE [P, Q, R] := by tfae_have 1 → 3 := h ...
The resulting context now includes tfae_1_to_3 : P → R.
Once sufficient hypotheses have been introduced by tfae_have, tfae_finish can be used to close
the goal. For example,
example : TFAE [P, Q, R] := by tfae_have 1 → 2 := sorry /- proof of P → Q -/ tfae_have 2 → 1 := sorry /- proof of Q → P -/ tfae_have 2 ↔ 3 := sorry /- proof of Q ↔ R -/ tfae_finish
All features of have are supported by tfae_have, including naming, matching,
destructuring, and goal creation. These are demonstrated below.
example : TFAE [P, Q] := by -- assert `tfae_1_to_2 : P → Q`: tfae_have 1 → 2 := sorry -- assert `hpq : P → Q`: tfae_have hpq : 1 → 2 := sorry -- match on `p : P` and prove `Q` via `f p`: tfae_have 1 → 2 | p => f p -- assert `pq : P → Q`, `qp : Q → P`: tfae_have ⟨pq, qp⟩ : 1 ↔ 2 := sorry -- assert `h : P → Q`; `?a` is a new goal: tfae_have h : 1 → 2 := f ?a ...
tfae_have
tfae_have introduces hypotheses for proving goals of the form TFAE [P₁, P₂, ...]. Specifically,
tfae_have i <arrow> j := ... introduces a hypothesis of type Pᵢ <arrow> Pⱼ to the local
context, where <arrow> can be →, ←, or ↔. Note that i and j are natural number indices
(beginning at 1) used to specify the propositions P₁, P₂, ... that appear in the goal.
example (h : P → R) : TFAE [P, Q, R] := by tfae_have 1 → 3 := h ...
The resulting context now includes tfae_1_to_3 : P → R.
Once sufficient hypotheses have been introduced by tfae_have, tfae_finish can be used to close
the goal. For example,
example : TFAE [P, Q, R] := by tfae_have 1 → 2 := sorry /- proof of P → Q -/ tfae_have 2 → 1 := sorry /- proof of Q → P -/ tfae_have 2 ↔ 3 := sorry /- proof of Q ↔ R -/ tfae_finish
All features of have are supported by tfae_have, including naming, matching,
destructuring, and goal creation. These are demonstrated below.
example : TFAE [P, Q] := by -- assert `tfae_1_to_2 : P → Q`: tfae_have 1 → 2 := sorry -- assert `hpq : P → Q`: tfae_have hpq : 1 → 2 := sorry -- match on `p : P` and prove `Q` via `f p`: tfae_have 1 → 2 | p => f p -- assert `pq : P → Q`, `qp : Q → P`: tfae_have ⟨pq, qp⟩ : 1 ↔ 2 := sorry -- assert `h : P → Q`; `?a` is a new goal: tfae_have h : 1 → 2 := f ?a ...
toFinite_tac
A tactic (for use in default params) that applies Set.toFinite to synthesize a Set.Finite
term.
to_encard_tac
A tactic useful for transferring proofs for encard to their corresponding card statements
trans
trans applies to a goal whose target has the form t ~ u where ~ is a transitive relation,
that is, a relation which has a transitivity lemma tagged with the attribute [trans].
- trans s replaces the goal with the two subgoals t ~ s and s ~ u.
- If s is omitted, then a metavariable is used instead.
Additionally, trans also applies to a goal whose target has the form t → u,
in which case it replaces the goal with t → s and s → u.
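For illustration (names are arbitrary), a minimal use might be:
example (a b c : ℕ) (h₁ : a ≤ b) (h₂ : b ≤ c) : a ≤ c := by
  trans b       -- the two new goals are a ≤ b and b ≤ c
  · exact h₁
  · exact h₂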
trivial
trivial tries different simple tactics (e.g., rfl, contradiction, ...)
to close the current goal.
You can use the command macro_rules to extend the set of tactics used. Example:
macro_rules | `(tactic| trivial) => `(tactic| simp)
try?
unfold
- unfold id unfolds all occurrences of definition id in the target.
- unfold id1 id2 ... is equivalent to unfold id1; unfold id2; ....
- unfold id at h unfolds at the hypothesis h.
Definitions can be either global or local definitions.
For non-recursive global definitions, this tactic is identical to delta.
For recursive global definitions, it uses the "unfolding lemma" id.eq_def,
which is generated for each recursive definition, to unfold according to the recursive definition given by the user.
Only one level of unfolding is performed, in contrast to simp only [id], which unfolds definition id recursively.
unfold?
Replace the selected expression with a definitional unfolding.
- After each unfolding, we apply whnfCore to simplify the expression.
- Explicit natural number expressions are evaluated.
- Unfolds of class projections of instances marked with @[default_instance] are not shown. This is relevant for notational type classes like +: we don't want to suggest Add.add a b as an unfolding of a + b. Similarly for OfNat n : Nat which unfolds into n : Nat.
To use unfold?, shift-click an expression in the tactic state.
This gives a list of rewrite suggestions for the selected expression.
Click on a suggestion to replace unfold? by a tactic that performs this rewrite.
unfold_projs
unfold_projs at loc unfolds projections of class instances at the given location.
This also exists as a conv-mode tactic.
unhygienic
unhygienic tacs runs tacs with name hygiene disabled.
This means that tactics that would normally create inaccessible names will instead
make regular variables. Warning: Tactics may change their variable naming
strategies at any time, so code that depends on autogenerated names is brittle.
Users should try not to use unhygienic if possible.
example : ∀ x : Nat, x = x := ⊢ ∀ (x : ℕ), x = x unhygienic
x : ℕ ⊢ x = x -- x would normally be intro'd as inaccessible
All goals completed! 🐙 -- refer to x
uniqueDiffWithinAt_Ici_Iic_univ
An auxiliary tactic closing goals UniqueDiffWithinAt ℝ s a where
s ∈ {Iic a, Ici a, univ}.
unreachable!
This tactic causes a panic when run (at compile time).
(This is distinct from exact unreachable!, which inserts code which will panic at run time.)
It is intended for tests to assert that a tactic will never be executed, which is otherwise an
unusual thing to do (and the unreachableTactic linter will give a warning if you do).
The unreachableTactic linter has a special exception for uses of unreachable!.
example : True := ⊢ True All goals completed! 🐙 <;> unreachable!
use
use e₁, e₂, ⋯ is similar to exists, but unlike exists it is equivalent to applying the tactic
refine ⟨e₁, e₂, ⋯, ?_, ⋯, ?_⟩ with any number of placeholders (rather than just one) and
then trying to close goals associated to the placeholders with a configurable discharger (rather
than just try trivial).
Examples:
example : ∃ x : Nat, x = x := ⊢ ∃ x, x = x All goals completed! 🐙
example : ∃ x : Nat, ∃ y : Nat, x = y := ⊢ ∃ x y, x = y All goals completed! 🐙
example : ∃ x : String × String, x.1 = x.2 := ⊢ ∃ x, x.1 = x.2 All goals completed! 🐙
use! e₁, e₂, ⋯ is similar but it applies constructors everywhere rather than just for
goals that correspond to the last argument of a constructor. This gives the effect that
nested constructors are being flattened out, with the supplied values being used along the
leaves and nodes of the tree of constructors.
With use! one can feed in each 42 one at a time:
example : ∃ p : Nat × Nat, p.1 = p.2 := by use! 42, 42
example : ∃ p : Nat × Nat, p.1 = p.2 := by use! (42, 42)
The second line makes use of the fact that use! tries refining with the argument before
applying a constructor. Also note that use/use! by default uses a tactic
called use_discharger to discharge goals, so use! 42 will close the goal in this example since
use_discharger applies rfl, which as a consequence solves for the other Nat metavariable.
These tactics take an optional discharger to handle remaining explicit Prop constructor arguments.
By default it is use (discharger := try with_reducible use_discharger) e₁, e₂, ⋯.
To turn off the discharger and keep all goals, use (discharger := skip).
To allow "heavy refls", use (discharger := try use_discharger).
use!
use e₁, e₂, ⋯ is similar to exists, but unlike exists it is equivalent to applying the tactic
refine ⟨e₁, e₂, ⋯, ?_, ⋯, ?_⟩ with any number of placeholders (rather than just one) and
then trying to close goals associated to the placeholders with a configurable discharger (rather
than just try trivial).
Examples:
example : ∃ x : Nat, x = x := ⊢ ∃ x, x = x All goals completed! 🐙
example : ∃ x : Nat, ∃ y : Nat, x = y := ⊢ ∃ x y, x = y All goals completed! 🐙
example : ∃ x : String × String, x.1 = x.2 := ⊢ ∃ x, x.1 = x.2 All goals completed! 🐙
use! e₁, e₂, ⋯ is similar but it applies constructors everywhere rather than just for
goals that correspond to the last argument of a constructor. This gives the effect that
nested constructors are being flattened out, with the supplied values being used along the
leaves and nodes of the tree of constructors.
With use! one can feed in each 42 one at a time:
example : ∃ p : Nat × Nat, p.1 = p.2 := by use! 42, 42
example : ∃ p : Nat × Nat, p.1 = p.2 := by use! (42, 42)
The second line makes use of the fact that use! tries refining with the argument before
applying a constructor. Also note that use/use! by default uses a tactic
called use_discharger to discharge goals, so use! 42 will close the goal in this example since
use_discharger applies rfl, which as a consequence solves for the other Nat metavariable.
These tactics take an optional discharger to handle remaining explicit Prop constructor arguments.
By default it is use (discharger := try with_reducible use_discharger) e₁, e₂, ⋯.
To turn off the discharger and keep all goals, use (discharger := skip).
To allow "heavy refls", use (discharger := try use_discharger).
use_discharger
Default discharger to try to use for the use and use! tactics.
This is similar to the trivial tactic but doesn't do things like contradiction or decide.
use_finite_instance
wait_for_unblock_async
Spawns a logSnapshotTask that waits for unblock to be called, which is expected to happen in a
subsequent document version that does not invalidate this tactic. Complains if the cancellation token
was set before unblocking, i.e. if the tactic was invalidated after all.
whnf
whnf at loc puts the given location into weak-head normal form.
This also exists as a conv-mode tactic.
Weak-head normal form means that the outermost part of the expression has been fully reduced; the expression may still contain subexpressions which have not been reduced.
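For example, a minimal sketch (only the head of the hypothesis is reduced; the exact reduced form may vary, but it stays definitionally equal):
example (h : id (1 < 2)) : 1 < 2 := by
  whnf at h     -- the outermost `id` (and possibly further head applications) is unfolded
  exact h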
with_panel_widgets
Display the selected panel widgets in the nested tactic script. For example,
assuming we have written a GeometryDisplay component,
by with_panel_widgets [GeometryDisplay]
  simp
  rfl
will show the geometry display alongside the usual tactic state throughout the proof.
with_reducible
with_reducible tacs executes tacs using the reducible transparency setting.
In this setting only definitions tagged as [reducible] are unfolded.
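For example, a minimal sketch (reducibleDouble is a hypothetical definition):
@[reducible] def reducibleDouble (n : ℕ) : ℕ := n + n

example : reducibleDouble 2 = 2 + 2 := by
  with_reducible rfl   -- succeeds because `reducibleDouble` is tagged `@[reducible]`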
with_reducible_and_instances
with_reducible_and_instances tacs executes tacs using the .instances transparency setting.
In this setting only definitions tagged as [reducible] or type class instances are unfolded.
with_unfolding_all
with_unfolding_all tacs executes tacs using the .all transparency setting.
In this setting all definitions that are not opaque are unfolded.
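For example, a minimal sketch (hiddenThree is a hypothetical definition; a plain rfl at default transparency would not see through the @[irreducible] marking):
@[irreducible] def hiddenThree : ℕ := 3

example : hiddenThree = 3 := by
  with_unfolding_all rfl   -- `hiddenThree` unfolds at `.all` transparency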
wlog
wlog h : P will add an assumption h : P to the main goal, and add a side goal that requires
showing that the case h : ¬ P can be reduced to the case where P holds (typically by symmetry).
The side goal will be at the top of the stack. In this side goal, there will be two additional assumptions:
- h : ¬ P: the assumption that P does not hold
- this: which is the statement that in the old context P suffices to prove the goal. By default, the name this is used, but the idiom with H can be added to specify the name: wlog h : P with H.
Typically, it is useful to use the variant wlog h : P generalizing x y,
to revert certain parts of the context before creating the new goal.
In this way, the wlog-claim this can be applied to x and y in different orders
(exploiting symmetry, which is the typical use case).
By default, the entire context is reverted.
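For example, a minimal sketch exploiting the symmetry of multiplication (assuming Mathlib is imported):
example (x y : ℕ) : x * y = y * x := by
  wlog h : x ≤ y generalizing x y with H
  · -- side goal: here `h : ¬ x ≤ y` and `H : ∀ x y, x ≤ y → x * y = y * x`
    exact (H y x (not_le.mp h).le).symm
  · -- main goal, now with the extra hypothesis `h : x ≤ y`
    exact Nat.mul_comm x y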
zify
The zify tactic is used to shift propositions from Nat to Int.
This is often useful since Int has well-behaved subtraction.
example (a b c x y z : Nat) (h : ¬ x*y*z < 0) : c < a + 3*b := by
  zify
  zify at h
  /- h : ¬↑x * ↑y * ↑z < 0
     ⊢ ↑c < ↑a + 3 * ↑b -/
zify can be given extra lemmas to use in simplification. This is especially useful in the
presence of nat subtraction: passing ≤ arguments will allow push_cast to do more work.
example (a b c : Nat) (h : a - b < c) (hab : b ≤ a) : False := by
  zify [hab] at h
  /- h : ↑a - ↑b < ↑c -/
zify makes use of the @[zify_simps] attribute to move propositions,
and the push_cast tactic to simplify the Int-valued expressions.
zify is in some sense dual to the lift tactic.
lift (z : Int) to Nat will change the type of an
integer z (in the supertype) to Nat (the subtype), given a proof that z ≥ 0;
propositions concerning z will still be over Int.
zify changes propositions about Nat (the subtype) to propositions about Int (the supertype),
without changing the type of any variable.
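A complete example in this style, as a minimal sketch: after zify uses the ≤ hypothesis to push the cast through the subtraction, omega can finish over ℤ.
example (a b c : ℕ) (hab : b ≤ a) (h : a - b = c) : a = c + b := by
  zify [hab] at h   -- `h : ↑a - ↑b = ↑c` over `ℤ`
  omega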