Adaptation notes are comments that are used to indicate that a piece of code has been changed to accommodate a change in Lean core. They typically require further action/maintenance to be taken in the future.
1.15. All tactics
This list of tactics is automatically generated and contains all tactics known in Mathlib, regardless of which package (Lean/Std/Batteries/Mathlib/etc.) defines them.
If you see two tactics which are almost identical, consider adding tactic_alt TAC
to the variant version of a tactic to tell Lean to group them together.
#check
The #check t tactic elaborates the term t and then pretty prints it with its type as e : ty.
If t is an identifier, then it pretty prints a type declaration form for the global constant t instead.
Use #check (t) to pretty print it as an elaborated expression.
Like the #check command, the #check tactic allows stuck typeclass instance problems. These become metavariables in the output.
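For illustration, a use inside a tactic block might look like this (Nat.add_zero is just a stock core lemma used for the sketch):
example (n : Nat) : n + 0 = n := by
  #check n + 0        -- n + 0 : Nat
  #check Nat.add_zero -- prints the declaration of Nat.add_zero with its type
  exact Nat.add_zero n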
#count_heartbeats!
#count_heartbeats! in tac runs a tactic 10 times, counting the heartbeats used, and logs the range and standard deviation. The tactic #count_heartbeats! n in tac runs it n times instead.
#find
#leansearch
Search LeanSearch from within Lean. Queries should be a string that ends with a . or ?. This works as a command, as a term, and as a tactic, as in the following examples. In tactic mode, only valid tactics are displayed.
#leansearch "If a natural number n is less than m, then the successor of n is less than the successor of m."
example := #leansearch "If a natural number n is less than m, then the successor of n is less than the successor of m."
example : 3 ≤ 5 := by
  #leansearch "If a natural number n is less than m, then the successor of n is less than the successor of m."
  sorry
#loogle
Search Loogle from within Lean. This can be used as a command, term or tactic as in the following examples. In the case of a tactic, only valid tactics are displayed.
#loogle List ?a → ?a
example := #loogle List ?a → ?a
Loogle Usage
Loogle finds definitions and lemmas in various ways:
By constant: 🔍 Real.sin finds all lemmas whose statement somehow mentions the sine function.
By lemma name substring: 🔍 "differ" finds all lemmas that have "differ" somewhere in their lemma name.
By subexpression: 🔍 _ * (_ ^ _) finds all lemmas whose statements somewhere include a product where the second argument is raised to some power.
The pattern can also be non-linear, as in 🔍 Real.sqrt ?a * Real.sqrt ?a
If the pattern has parameters, they are matched in any order. Both of these will find List.map: 🔍 (?a -> ?b) -> List ?a -> List ?b 🔍 List ?a -> (?a -> ?b) -> List ?b
By main conclusion: 🔍 |- tsum _ = _ * tsum _ finds all lemmas where the conclusion (the subexpression to the right of all → and ∀) has the given shape.
As before, if the pattern has parameters, they are matched against the hypotheses of the lemma in any order; for example, 🔍 |- _ < _ → tsum _ < tsum _ will find tsum_lt_tsum even though the hypothesis f i < g i is not the last.
If you pass more than one such search filter, separated by commas, Loogle will return lemmas which match all of them. The search 🔍 Real.sin, "two", tsum, _ * _, _ ^ _, |- _ < _ → _ would find all lemmas which mention the constants Real.sin and tsum, have "two" as a substring of the lemma name, include a product and a power somewhere in the type, and have a hypothesis of the form _ < _ (if there were any such lemmas). Metavariables (?a) are assigned independently in each filter.
#loogle
#moogle
Search Moogle from within Lean. Queries should be a string that ends with a . or ?. This works as a command, as a term, and as a tactic, as in the following examples. In tactic mode, only valid tactics are displayed.
#moogle "If a natural number n is less than m, then the successor of n is less than the successor of m."
example := #moogle "If a natural number n is less than m, then the successor of n is less than the successor of m."
example : 3 ≤ 5 := by
  #moogle "If a natural number n is less than m, then the successor of n is less than the successor of m."
  sorry
(
(tacs) executes a list of tactics in sequence, without requiring that the goal be closed at the end like · tacs. Like by itself, the tactics can be either separated by newlines or ;.
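As an illustration (a minimal sketch, not from the original docstring), the parenthesized block below leaves its second goal open and the next tactic closes it:
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  (constructor; exact hp)
  exact hq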
<;>
t <;> [t1; t2; ...; tn] focuses on the first goal and applies t, which should result in n subgoals. It then applies each ti to the corresponding goal and collects the resulting subgoals.
<;>
tac <;> tac' runs tac on the main goal and tac' on each produced goal, concatenating all goals produced by tac'.
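A minimal sketch:
example (p : Prop) (hp : p) : p ∧ p := by
  constructor <;> exact hp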
Lean.Parser.Tactic.introMatch
The tactic
intro
| pat1 => tac1
| pat2 => tac2
is the same as:
intro x
match x with
| pat1 => tac1
| pat2 => tac2
That is, intro can be followed by match arms and it introduces the values while doing a pattern match. This is equivalent to fun with match arms in term mode.
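A small sketch of the syntax:
example (n : Nat) : n = 0 ∨ 0 < n → True := by
  intro
  | Or.inl _ => trivial
  | Or.inr _ => trivial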
Lean.Parser.Tactic.match
match performs case analysis on one or more expressions. See Induction and Recursion.
The syntax for the match tactic is the same as term-mode match, except that the match arms are tactics instead of expressions.
example (n : Nat) : n = n := by
  match n with
  | 0 => rfl
  | i+1 => rfl
Lean.Parser.Tactic.open
open Foo in tacs (the tactic) acts like open Foo at command level, but it opens a namespace only within the tactics tacs.
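For example, a minimal sketch (assuming the core lemma Nat.succ_le_succ):
example (n m : Nat) (h : n ≤ m) : n.succ ≤ m.succ := by
  open Nat in exact succ_le_succ h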
Lean.Parser.Tactic.set_option
set_option opt val in tacs (the tactic) acts like set_option opt val at the command level, but it sets the option only within the tactics tacs.
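For example, one might locally raise the heartbeat limit for a single step (illustrative only):
example (a b : Nat) : a + b = b + a := by
  set_option maxHeartbeats 400000 in exact Nat.add_comm a b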
Lean.Parser.Tactic.tacDepIfThenElse
In tactic mode, if h : t then tac1 else tac2 can be used as alternative syntax for:
by_cases h : t
· tac1
· tac2
It performs case distinction on h : t or h : ¬t, and tac1 and tac2 are the subproofs.
You can use ?_ or _ for either subproof to delay the goal to after the tactic, but if a tactic sequence is provided for tac1 or tac2 then it will require the goal to be closed by the end of the block.
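A minimal sketch (assuming Nat.pos_of_ne_zero, as available in Mathlib):
example (n : Nat) : n = 0 ∨ 0 < n := by
  if h : n = 0 then
    exact Or.inl h
  else
    exact Or.inr (Nat.pos_of_ne_zero h)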
Lean.Parser.Tactic.tacIfThenElse
In tactic mode, if t then tac1 else tac2 is alternative syntax for:
by_cases t
· tac1
· tac2
It performs case distinction on h† : t or h† : ¬t, where h† is an anonymous hypothesis, and tac1 and tac2 are the subproofs. (It doesn't actually use nondependent if, since this wouldn't add anything to the context and hence would be useless for proving theorems. To actually insert an ite application use refine if t then ?_ else ?_.)
_
_ in tactic position acts like the done tactic: it fails and gives the list of goals if there are any. It is useful as a placeholder after starting a tactic block such as by _ to make it syntactically correct and show the current goal.
abel
Tactic for evaluating expressions in abelian groups.
- abel! will use a more aggressive reducibility setting to determine equality of atoms.
- abel1 fails if the target is not an equality.
For example:
example [AddCommMonoid α] (a b : α) : a + (b + a) = a + a + b := by abel
example [AddCommGroup α] (a : α) : (3 : ℤ) • a = a + (2 : ℤ) • a := by abel
abel!
Unsupported legacy syntax from mathlib3, which allowed passing additional terms to abel!.
abel!
Tactic for evaluating expressions in abelian groups.
- abel! will use a more aggressive reducibility setting to determine equality of atoms.
- abel1 fails if the target is not an equality.
For example:
example [AddCommMonoid α] (a b : α) : a + (b + a) = a + a + b := by abel
example [AddCommGroup α] (a : α) : (3 : ℤ) • a = a + (2 : ℤ) • a := by abel
abel1
Tactic for solving equations in the language of additive, commutative monoids and groups. This version of abel fails if the target is not an equality that is provable by the axioms of commutative monoids/groups.
abel1! will use a more aggressive reducibility setting to identify atoms. This can prove goals that abel cannot, but is more expensive.
abel1!
Tactic for solving equations in the language of additive, commutative monoids and groups. This version of abel fails if the target is not an equality that is provable by the axioms of commutative monoids/groups.
abel1! will use a more aggressive reducibility setting to identify atoms. This can prove goals that abel cannot, but is more expensive.
abel_nf
Simplification tactic for expressions in the language of abelian groups, which rewrites all group expressions into a normal form.
- abel_nf! will use a more aggressive reducibility setting to identify atoms.
- abel_nf (config := cfg) allows for additional configuration:
  - red: the reducibility setting (overridden by !)
  - recursive: if true, abel_nf will also recurse into atoms
- abel_nf works as both a tactic and a conv tactic. In tactic mode, abel_nf at h can be used to rewrite in a hypothesis.
abel_nf!
Simplification tactic for expressions in the language of abelian groups, which rewrites all group expressions into a normal form.
- abel_nf! will use a more aggressive reducibility setting to identify atoms.
- abel_nf (config := cfg) allows for additional configuration:
  - red: the reducibility setting (overridden by !)
  - recursive: if true, abel_nf will also recurse into atoms
- abel_nf works as both a tactic and a conv tactic. In tactic mode, abel_nf at h can be used to rewrite in a hypothesis.
absurd
Given a proof h of p, absurd h changes the goal to ⊢ ¬ p. If p is a negation ¬q then the goal is changed to ⊢ q instead.
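A minimal sketch:
example (p q : Prop) (hp : p) (hnp : ¬p) : q := by
  absurd hp
  exact hnp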
ac_change
ac_change g using n is convert_to g using n followed by ac_rfl. It is useful for rearranging/reassociating e.g. sums:
example (a b c d e f g N : ℕ) : (a + b) + (c + d) + (e + f) + g ≤ N := by
  ac_change a + d + e + f + c + g + b ≤ _
  -- ⊢ a + d + e + f + c + g + b ≤ N
ac_nf
ac_nf normalizes equalities up to application of an associative and commutative operator.
- ac_nf normalizes all hypotheses and the target of the goal.
- ac_nf at l normalizes at location(s) l, where l is either * or a list of hypotheses in the local context. In the latter case, a turnstile ⊢ or |- can also be used, to signify the target of the goal.
instance : Associative (α := Nat) (.+.) := ⟨Nat.add_assoc⟩
instance : Commutative (α := Nat) (.+.) := ⟨Nat.add_comm⟩

example (a b c d : Nat) : a + b + c + d = d + (b + c) + a := by
  ac_nf
  -- goal: a + (b + (c + d)) = a + (b + (c + d))
ac_rfl
ac_rfl proves equalities up to application of an associative and commutative operator.
instance : Associative (α := Nat) (.+.) := ⟨Nat.add_assoc⟩
instance : Commutative (α := Nat) (.+.) := ⟨Nat.add_comm⟩

example (a b c d : Nat) : a + b + c + d = d + (b + c) + a := by ac_rfl
aesop
aesop <clause>* tries to solve the current goal by applying a set of rules registered with the @[aesop] attribute. See its README for a tutorial and a reference.
The variant aesop? prints the proof it found as a Try this suggestion.
Clauses can be used to customise the behaviour of an Aesop call. Available clauses are:
- (add <phase> <priority> <builder> <rule>) adds a rule. <phase> is unsafe, safe or norm. <priority> is a percentage for unsafe rules and an integer for safe and norm rules. <rule> is the name of a declaration or local hypothesis. <builder> is the rule builder used to turn <rule> into an Aesop rule. Example: (add unsafe 50% apply Or.inl).
- (erase <rule>) disables a globally registered Aesop rule. Example: (erase Aesop.BuiltinRules.assumption).
- (rule_sets := [<ruleset>,*]) enables or disables named sets of rules for this Aesop call. Example: (rule_sets := [-builtin, MyRuleSet]).
- (config { <opt> := <value> }) adjusts Aesop's search options. See Aesop.Options.
- (simp_config { <opt> := <value> }) adjusts options for Aesop's built-in simp rule. The given options are directly passed to simp. For example, (simp_config := { zeta := false }) makes Aesop use simp (config := { zeta := false }).
aesop?
aesop <clause>* tries to solve the current goal by applying a set of rules registered with the @[aesop] attribute. See its README for a tutorial and a reference.
The variant aesop? prints the proof it found as a Try this suggestion.
Clauses can be used to customise the behaviour of an Aesop call. Available clauses are:
- (add <phase> <priority> <builder> <rule>) adds a rule. <phase> is unsafe, safe or norm. <priority> is a percentage for unsafe rules and an integer for safe and norm rules. <rule> is the name of a declaration or local hypothesis. <builder> is the rule builder used to turn <rule> into an Aesop rule. Example: (add unsafe 50% apply Or.inl).
- (erase <rule>) disables a globally registered Aesop rule. Example: (erase Aesop.BuiltinRules.assumption).
- (rule_sets := [<ruleset>,*]) enables or disables named sets of rules for this Aesop call. Example: (rule_sets := [-builtin, MyRuleSet]).
- (config { <opt> := <value> }) adjusts Aesop's search options. See Aesop.Options.
- (simp_config { <opt> := <value> }) adjusts options for Aesop's built-in simp rule. The given options are directly passed to simp. For example, (simp_config := { zeta := false }) makes Aesop use simp (config := { zeta := false }).
aesop_cat
A thin wrapper for aesop which adds the CategoryTheory rule set and allows aesop to look through semireducible definitions when calling intros. This tactic fails when it is unable to solve the goal, making it suitable for use in auto-params.
aesop_cat_nonterminal
A variant of aesop_cat which does not fail when it is unable to solve the goal. Use this only for exploration! Nonterminal aesop is even worse than nonterminal simp.
aesop_graph
A variant of the aesop tactic for use in the graph library. Changes relative to standard aesop:
- We use the SimpleGraph rule set in addition to the default rule sets.
- We instruct Aesop's intro rule to unfold with default transparency.
- We instruct Aesop to fail if it can't fully solve the goal. This allows us to use aesop_graph for auto-params.
aesop_graph_nonterminal
A variant of aesop_graph which does not fail if it is unable to solve the goal. Use this only for exploration! Nonterminal Aesop is even worse than nonterminal simp.
aesop_mat
The aesop_mat tactic attempts to prove a set is contained in the ground set of a matroid. It uses a [Matroid] ruleset, and is allowed to fail.
aesop_unfold
aesop_unfold
algebraize
Tactic that, given RingHoms, adds the corresponding Algebra and (if possible) IsScalarTower instances, as well as Algebra instances corresponding to RingHom properties available as hypotheses.
Example: given f : A →+* B and g : B →+* C, and hf : f.FiniteType, algebraize [f, g] will add the instances Algebra A B, Algebra B C, and Algebra.FiniteType A B.
See the algebraize tag for instructions on what properties can be added.
The tactic also comes with a configuration option properties. If set to true (default), the tactic searches through the local context for RingHom properties that can be converted to Algebra properties. The macro algebraize_only calls algebraize (config := {properties := false}), so in other words it only adds Algebra and IsScalarTower instances.
algebraize_only
Version of algebraize which only adds Algebra instances and IsScalarTower instances, but does not try to add any instances about any properties tagged with @[algebraize], like for example Finite or IsIntegral.
all_goals
all_goals tac runs tac on each goal, concatenating the resulting goals. If the tactic fails on any goal, the entire all_goals tactic fails.
See also any_goals tac.
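A minimal sketch:
example (p : Prop) (hp : p) : p ∧ p := by
  constructor
  all_goals exact hp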
any_goals
any_goals tac applies the tactic tac to every goal, concatenating the resulting goals for successful tactic applications. If the tactic fails on all of the goals, the entire any_goals tactic fails.
This tactic is like all_goals try tac except that it fails if none of the applications of tac succeeds.
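A minimal sketch: here exact hp succeeds only on the first of the two goals, and the remaining goal is closed afterwards.
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  constructor
  any_goals exact hp
  exact hq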
apply
apply
apply t at i will use forward reasoning with t at the hypothesis i. Explicitly, if t : α₁ → ⋯ → αᵢ → ⋯ → αₙ and i has type αᵢ, then this tactic will add metavariables/goals for any terms of αⱼ for j = 1, …, i-1, then replace the type of i with αᵢ₊₁ → ⋯ → αₙ by applying those metavariables and the original i.
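A minimal sketch:
example (p q : Prop) (f : p → q) (hp : p) : q := by
  apply f at hp
  exact hp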
apply
apply e tries to match the current goal against the conclusion of e's type. If it succeeds, then the tactic returns as many subgoals as the number of premises that have not been fixed by type inference or type class resolution. Non-dependent premises are added before dependent ones.
The apply tactic uses higher-order pattern matching, type class resolution, and first-order unification with dependent types.
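A minimal sketch:
example (p q : Prop) (h : p → q) (hp : p) : q := by
  apply h
  exact hp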
apply?
Searches the environment for definitions or theorems that can refine the goal using apply, with conditions resolved when possible with solve_by_elim.
The optional using clause provides identifiers in the local context that must be used when closing the goal.
apply_assumption
apply_assumption looks for an assumption of the form ... → ∀ _, ... → head where head matches the current goal.
You can specify additional rules to apply using apply_assumption [...].
By default apply_assumption will also try rfl, trivial, congrFun, and congrArg. If you don't want these, or don't want to use all hypotheses, use apply_assumption only [...].
You can use apply_assumption [-h] to omit a local hypothesis.
You can use apply_assumption using [a₁, ...] to use all lemmas which have been labelled with the attributes aᵢ (these attributes must be created using register_label_attr).
apply_assumption will use consequences of local hypotheses obtained via symm.
If apply_assumption fails, it will call exfalso and try again. Thus if there is an assumption of the form P → ¬ Q, the new tactic state will have two goals, P and Q.
You can pass a further configuration via the syntax apply_rules (config := {...}) lemmas. The options supported are the same as for solve_by_elim (and include all the options for apply).
apply_fun
Apply a function to an equality or inequality in either a local hypothesis or the goal.
- If we have h : a = b, then apply_fun f at h will replace this with h : f a = f b.
- If we have h : a ≤ b, then apply_fun f at h will replace this with h : f a ≤ f b, and create a subsidiary goal Monotone f. apply_fun will automatically attempt to discharge this subsidiary goal using mono, or an explicit solution can be provided with apply_fun f at h using P, where P : Monotone f.
- If we have h : a < b, then apply_fun f at h will replace this with h : f a < f b, and create a subsidiary goal StrictMono f and behaves as in the previous case.
- If we have h : a ≠ b, then apply_fun f at h will replace this with h : f a ≠ f b, and create a subsidiary goal Injective f and behaves as in the previous two cases.
- If the goal is a ≠ b, apply_fun f will replace this with f a ≠ f b.
- If the goal is a = b, apply_fun f will replace this with f a = f b, and create a subsidiary goal Injective f. apply_fun will automatically attempt to discharge this subsidiary goal using local hypotheses, or if f is actually an Equiv, or an explicit solution can be provided with apply_fun f using P, where P : Injective f.
- If the goal is a ≤ b (or similarly for a < b), and f is actually an OrderIso, apply_fun f will replace the goal with f a ≤ f b. If f is anything else (e.g. just a function, or an Equiv), apply_fun will fail.
Typical usage is:
open Function
example (X Y Z : Type) (f : X → Y) (g : Y → Z) (H : Injective <| g ∘ f) :
    Injective f := by
  intro x x' h
  apply_fun g at h
  exact H h
The function f is handled similarly to how it would be handled by refine in that f can contain placeholders. Named placeholders (like ?a or ?_) will produce new goals.
apply_gmonoid_gnpowRec_succ_tac
A tactic for use as an optional value for GMonoid.gnpow_succ'.
apply_gmonoid_gnpowRec_zero_tac
A tactic for use as an optional value for GMonoid.gnpow_zero'.
apply_mod_cast
Normalize casts in the goal and the given expression, then apply the expression to the goal.
apply_rules
apply_rules [l₁, l₂, ...] tries to solve the main goal by iteratively applying the list of lemmas [l₁, l₂, ...] or by applying a local hypothesis. If apply generates new goals, apply_rules iteratively tries to solve those goals.
You can use apply_rules [-h] to omit a local hypothesis.
apply_rules will also use rfl, trivial, congrFun and congrArg. These can be disabled, as can local hypotheses, by using apply_rules only [...].
You can use apply_rules using [a₁, ...] to use all lemmas which have been labelled with the attributes aᵢ (these attributes must be created using register_label_attr).
You can pass a further configuration via the syntax apply_rules (config := {...}). The options supported are the same as for solve_by_elim (and include all the options for apply).
apply_rules will try calling symm on hypotheses and exfalso on the goal as needed. This can be disabled with apply_rules (config := {symm := false, exfalso := false}).
You can bound the iteration depth using the syntax apply_rules (config := {maxDepth := n}).
Unlike solve_by_elim, apply_rules does not perform backtracking, and greedily applies a lemma from the list until it gets stuck.
arith_mult
arith_mult solves goals of the form IsMultiplicative f for f : ArithmeticFunction R by applying lemmas tagged with the user attribute arith_mult.
arith_mult?
arith_mult solves goals of the form IsMultiplicative f for f : ArithmeticFunction R by applying lemmas tagged with the user attribute arith_mult, and prints out the generated proof term.
array_get_dec
This tactic, added to the decreasing_trivial toolbox, proves that sizeOf arr[i] < sizeOf arr, which is useful for well founded recursions over a nested inductive like inductive T | mk : Array T → T.
array_mem_dec
This tactic, added to the decreasing_trivial toolbox, proves that sizeOf a < sizeOf arr provided that a ∈ arr, which is useful for well founded recursions over a nested inductive like inductive T | mk : Array T → T.
assumption
assumption tries to solve the main goal using a hypothesis of compatible type, or else fails. Note also the ‹t› term notation, which is a shorthand for show t by assumption.
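A minimal sketch:
example (p q : Prop) (hp : p) (hq : q) : q := by
  assumption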
assumption_mod_cast
assumption_mod_cast is a variant of assumption that solves the goal using a hypothesis. Unlike assumption, it first pre-processes the goal and each hypothesis to move casts as far outwards as possible, so it can be used in more situations.
Concretely, it runs norm_cast on the goal. For each local hypothesis h, it also normalizes h with norm_cast and tries to use that to close the goal.
bddDefault
Sets are automatically bounded or cobounded in complete lattices. To use the same statements in complete and conditionally complete lattices but let automation fill automatically the boundedness proofs in complete lattices, we use the tactic bddDefault in the statements, in the form (hA : BddAbove A := by bddDefault).
beta_reduce
beta_reduce at loc completely beta reduces the given location. This also exists as a conv-mode tactic.
This means that whenever there is an applied lambda expression such as (fun x => f x) y then the argument is substituted into the lambda expression yielding an expression such as f y.
bicategory
Use the coherence theorem for bicategories to solve equations in a bicategory, where the two sides only differ by replacing strings of bicategory structural morphisms (that is, associators, unitors, and identities) with different strings of structural morphisms with the same source and target.
That is, bicategory can handle goals of the form
a ≫ f ≫ b ≫ g ≫ c = a' ≫ f ≫ b' ≫ g ≫ c'
where a = a', b = b', and c = c' can be proved using bicategory_coherence.
bicategory_coherence
Coherence tactic for bicategories.
Use pure_coherence
instead, which is a frontend to this one.
bicategory_coherence
Close the goal of the form η = θ, where η and θ are 2-isomorphisms made up only of associators, unitors, and identities.
example {B : Type} [Bicategory B] {a : B} : (λ_ (𝟙 a)).hom = (ρ_ (𝟙 a)).hom := by bicategory_coherence
bitwise_assoc_tac
Proving associativity of bitwise operations in general essentially boils down to a huge case distinction, so it is shorter to use this tactic instead of proving it in the general case.
borelize
The behaviour of borelize α depends on the existing assumptions on α.
- if α is a topological space with instances [MeasurableSpace α] [BorelSpace α], then borelize α replaces the former instance by borel α;
- otherwise, borelize α adds instances borel α : MeasurableSpace α and ⟨rfl⟩ : BorelSpace α.
Finally, borelize α β γ runs borelize α; borelize β; borelize γ.
bound
bound tactic for proving inequalities via straightforward recursion on expression structure.
An example use case is
-- Calc example: A weak lower bound for `z ↦ z^2 + c`
lemma le_sqr_add {c z : ℂ} (cz : abs c ≤ abs z) (z3 : 3 ≤ abs z) :
    2 * abs z ≤ abs (z^2 + c) := by
  calc abs (z^2 + c)
    _ ≥ abs (z^2) - abs c := by bound
    _ ≥ abs (z^2) - abs z := by bound
    _ ≥ (abs z - 1) * abs z := by
      rw [mul_comm, mul_sub_one, ← pow_two, ← abs.map_pow]
    _ ≥ 2 * abs z := by bound
bound is built on top of aesop, and uses
- Apply lemmas registered via the @[bound] attribute
- Forward lemmas registered via the @[bound_forward] attribute
- Local hypotheses from the context
- Optionally: additional hypotheses provided as bound [h₀, h₁] or similar. These are added to the context as if by have := hᵢ.
The functionality of bound
overlaps with positivity
and gcongr
, but can jump back and forth
between 0 ≤ x
and x ≤ y
-type inequalities. For example, bound
proves
0 ≤ c → b ≤ a → 0 ≤ a * c - b * c
by turning the goal into b * c ≤ a * c
, then using mul_le_mul_of_nonneg_right
. bound
also
contains lemmas for goals of the form 1 ≤ x, 1 < x, x ≤ 1, x < 1
. Conversely, gcongr
can prove
inequalities for more types of relations, supports all positivity
functionality, and is likely
faster since it is more specialized (not built atop aesop
).
bv_check
This tactic works just like bv_decide but skips calling a SAT solver by using a proof that is already stored on disk. It is called with the name of an LRAT file in the same directory as the current Lean file:
bv_check "proof.lrat"
bv_decide
Close fixed-width BitVec and Bool goals by obtaining a proof from an external SAT solver and verifying it inside Lean. The solvable goals are currently limited to
- the Lean equivalent of QF_BV
- automatically splitting up structures that contain information about BitVec or Bool
example : ∀ (a b : BitVec 64), (a &&& b) + (a ^^^ b) = a ||| b := by
  intros
  bv_decide
If bv_decide encounters an unknown definition it will be treated like an unconstrained BitVec variable. Sometimes this enables solving goals despite not understanding the definition because the precise properties of the definition do not matter in the specific proof.
If bv_decide fails to close a goal it provides a counter-example, containing assignments for all terms that were considered as variables.
In order to avoid calling a SAT solver every time, the proof can be cached with bv_decide?.
If solving your problem relies inherently on using associativity or commutativity, consider enabling the bv.ac_nf option.
Note: bv_decide uses ofReduceBool and thus trusts the correctness of the code generator.
bv_decide?
Suggest a proof script for a bv_decide tactic call. Useful for caching LRAT proofs.
bv_normalize
Run the normalization procedure of bv_decide only. Sometimes this is enough to solve basic BitVec goals already.
bv_omega
by_cases
by_cases (h :)? p splits the main goal into two cases, assuming h : p in the first branch, and h : ¬ p in the second branch.
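A minimal sketch:
example (p : Prop) : p ∨ ¬p := by
  by_cases h : p
  · exact Or.inl h
  · exact Or.inr h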
by_contra
by_contra h proves ⊢ p by contradiction, introducing a hypothesis h : ¬p and proving False.
- If p is a negation ¬q, h : q will be introduced instead of ¬¬q.
- If p is decidable, it uses Decidable.byContradiction instead of Classical.byContradiction.
- If h is omitted, the introduced variable _ : ¬p will be anonymous.
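A minimal sketch:
example (p : Prop) (h : ¬¬p) : p := by
  by_contra hp
  exact h hp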
by_contra!
If the target of the main goal is a proposition p, by_contra! reduces the goal to proving False using the additional hypothesis this : ¬ p. by_contra! h can be used to name the hypothesis h : ¬ p. The hypothesis ¬ p will be negation normalized using push_neg. For instance, ¬ a < b will be changed to b ≤ a.
by_contra! h : q will normalize negations in ¬ p, normalize negations in q, and then check that the two normalized forms are equal. The resulting hypothesis is the pre-normalized form, q.
If the name h is not explicitly provided, then this will be used as name.
This tactic uses classical reasoning. It is a variant on the tactic by_contra.
Examples:
example : 1 < 2 := by
  by_contra! h
  -- h : 2 ≤ 1 ⊢ False

example : 1 < 2 := by
  by_contra! h : ¬ 1 < 2
  -- h : ¬ 1 < 2 ⊢ False
calc
Step-wise reasoning over transitive relations.
calc a = b := pab
     b = c := pbc
     ...
     y = z := pyz
proves a = z from the given step-wise proofs. = can be replaced with any relation implementing the typeclass Trans. Instead of repeating the right-hand sides, subsequent left-hand sides can be replaced with _.
calc a = b := pab
     _ = c := pbc
     ...
     _ = z := pyz
It is also possible to write the first relation as <lhs>\n _ = <rhs> := <proof>. This is useful for aligning relation symbols, especially on longer identifiers:
calc abc
    _ = bce := pabce
    _ = cef := pbcef
    ...
    _ = xyz := pwxyz
calc works as a term, as a tactic or as a conv tactic.
See Theorem Proving in Lean 4 for more information.
cancel_denoms
cancel_denoms attempts to remove numerals from the denominators of fractions. It works on propositions that are field-valued inequalities.
variable [LinearOrderedField α] (a b c : α)

example (h : a / 5 + b / 4 < c) : 4*a + 5*b < 20*c := by
  cancel_denoms at h
  exact h

example (h : a > 0) : a / 5 > 0 := by
  cancel_denoms
  exact h
cancel_denoms
case
- case tag => tac focuses on the goal with case name tag and solves it using tac, or else fails.
- case tag x₁ ... xₙ => tac additionally renames the n most recent hypotheses with inaccessible names to the given names.
- case tag₁ | tag₂ => tac is equivalent to (case tag₁ => tac); (case tag₂ => tac).
case
- case _ : t => tac finds the first goal that unifies with t and then solves it using tac or else fails. Like show, it changes the type of the goal to t. The _ can optionally be a case tag, in which case it only looks at goals whose tag would be considered by case (goals with an exact tag match, followed by goals with the tag as a suffix, followed by goals with the tag as a prefix).
- case _ n₁ ... nₘ : t => tac additionally names the m most recent hypotheses with inaccessible names to the given names. The names are renamed before matching against t. The _ can optionally be a case tag.
- case _ : t₁ | _ : t₂ | ... => tac is equivalent to (case _ : t₁ => tac); (case _ : t₂ => tac); ... but with all matching done on the original list of goals -- each goal is consumed as they are matched, so patterns may repeat or overlap.
- case _ : t will make the matched goal be the first goal. case _ : t₁ | _ : t₂ | ... makes the matched goals be the first goals in the given order.
- case _ : t := _ and case _ : t := ?m are the same as case _ : t but in the ?m case the goal tag is changed to m. In particular, the goal becomes metavariable ?m.
case'
case'
case' _ : t => tac is similar to the case _ : t => tac tactic, but it does not ensure the goal has been solved after applying tac, nor does it admit the goal if tac failed. Recall that case closes the goal using sorry when tac fails, and the tactic execution is not interrupted.
cases
Assuming x is a variable in the local context with an inductive type, cases x splits the main goal, producing one goal for each constructor of the inductive type, in which the target is replaced by a general instance of that constructor. If the type of an element in the local context depends on x, that element is reverted and reintroduced afterward, so that the case split affects that hypothesis as well. cases detects unreachable cases and closes them automatically.
For example, given n : Nat and a goal with a hypothesis h : P n and target Q n, cases n produces one goal with hypothesis h : P 0 and target Q 0, and one goal with hypothesis h : P (Nat.succ a) and target Q (Nat.succ a). Here the name a is chosen automatically and is not accessible. You can use with to provide the variables names for each constructor.
- cases e, where e is an expression instead of a variable, generalizes e in the goal, and then cases on the resulting variable.
- Given as : List α, cases as with | nil => tac₁ | cons a as' => tac₂ uses tactic tac₁ for the nil case, and tac₂ for the cons case, and a and as' are used as names for the new variables introduced.
- cases h : e, where e is a variable or an expression, performs cases on e as above, but also adds a hypothesis h : e = ... to each goal, where ... is the constructor instance for that particular case.
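A minimal sketch of the with form (assuming the standard lemma Nat.succ_pos):
example (n : Nat) (h : n ≠ 0) : 0 < n := by
  cases n with
  | zero => exact absurd rfl h
  | succ m => exact Nat.succ_pos m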
cases'
The cases'
tactic is similar to the cases
tactic in Lean 4 core, but the syntax for giving
names is different:
example (h : p ∨ q) : q ∨ p := by
  cases h with
  | inl hp => exact Or.inr hp
  | inr hq => exact Or.inl hq

example (h : p ∨ q) : q ∨ p := by
  cases' h with hp hq
  · exact Or.inr hp
  · exact Or.inl hq

example (h : p ∨ q) : q ∨ p := by
  rcases h with hp | hq
  · exact Or.inr hp
  · exact Or.inl hq
Prefer cases or rcases when possible, because these tactics promote structured proofs.
cases_type
- cases_type I applies the cases tactic to a hypothesis h : (I ...)
- cases_type I_1 ... I_n applies the cases tactic to a hypothesis h : (I_1 ...) or ... or h : (I_n ...)
- cases_type* I is shorthand for · repeat cases_type I
- cases_type! I only applies cases if the number of resulting subgoals is <= 1.
Example: The following tactic destructs all conjunctions and disjunctions in the current goal.
cases_type* Or And
cases_type!
- cases_type I applies the cases tactic to a hypothesis h : (I ...)
- cases_type I_1 ... I_n applies the cases tactic to a hypothesis h : (I_1 ...) or ... or h : (I_n ...)
- cases_type* I is shorthand for · repeat cases_type I
- cases_type! I only applies cases if the number of resulting subgoals is <= 1.
Example: The following tactic destructs all conjunctions and disjunctions in the current goal.
cases_type* Or And
casesm
- casesm p applies the cases tactic to a hypothesis h : type if type matches the pattern p.
- casesm p_1, ..., p_n applies the cases tactic to a hypothesis h : type if type matches one of the given patterns.
- casesm* p is a more efficient and compact version of · repeat casesm p. It is more efficient because the pattern is compiled once.
- casesm! p only applies cases if the number of resulting subgoals is <= 1.
Example: The following tactic destructs all conjunctions and disjunctions in the current context.
casesm* _ ∨ _, _ ∧ _
casesm!
- casesm p applies the cases tactic to a hypothesis h : type if type matches the pattern p.
- casesm p_1, ..., p_n applies the cases tactic to a hypothesis h : type if type matches one of the given patterns.
- casesm* p is a more efficient and compact version of · repeat casesm p. It is more efficient because the pattern is compiled once.
- casesm! p only applies cases if the number of resulting subgoals is <= 1.
Example: The following tactic destructs all conjunctions and disjunctions in the current context.
casesm* _ ∨ _, _ ∧ _
cc
The congruence closure tactic cc tries to solve the goal by chaining equalities from context and applying congruence (i.e. if a = b, then f a = f b). It is a finishing tactic, i.e. it is meant to close the current goal, not to make some inconclusive progress.
A mostly trivial example would be:
example (a b c : ℕ) (f : ℕ → ℕ) (h : a = b) (h' : b = c) : f a = f c := by
  cc
As an example requiring some thinking to do by hand, consider:
example (f : ℕ → ℕ) (x : ℕ)
    (H1 : f (f (f x)) = x) (H2 : f (f (f (f (f x)))) = x) :
    f x = x := by
  cc
cfc_cont_tac
A tactic used to automatically discharge goals relating to the continuous functional calculus, specifically concerning continuity of the functions involved.
cfc_tac
A tactic used to automatically discharge goals relating to the continuous functional calculus, specifically whether the element satisfies the predicate.
cfc_zero_tac
A tactic used to automatically discharge goals relating to the non-unital continuous functional calculus, specifically concerning whether f 0 = 0.
change
change
- change tgt' will change the goal from tgt to tgt', assuming these are definitionally equal.
- change t' at h will change hypothesis h : t to have type t', assuming t and t' are definitionally equal.
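A minimal sketch (n + 0 reduces to n by definition):
example (n : Nat) : n + 0 = n := by
  change n = n
  rfl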
change?
change? term unifies term with the current goal, then suggests explicit change syntax that uses the resulting unified term.
If term is not present, change? suggests the current goal itself. This is useful after tactics which transform the goal while maintaining definitional equality, such as dsimp; those preceding tactic calls can then be deleted.
example : (fun x : Nat => x) 0 = 1 := by
  change? 0 = _  -- `Try this: change 0 = 1`
check_compositions
For each composition f ≫ g in the goal, which internally is represented as CategoryStruct.comp C inst X Y Z f g, infer the types of f and g and check whether their sources and targets agree with X, Y, and Z at "instances and reducible" transparency, reporting any discrepancies.
An example:
example (j : J) : colimit.ι ((F ⋙ G) ⋙ H) j ≫ (preservesColimitIso (G ⋙ H) F).inv = H.map (G.map (colimit.ι F j)) := by -- We know which lemma we want to use, and it's even a simp lemma, but `rw` -- won't let us apply it fail_if_success rw [ι_preservesColimitIso_inv] fail_if_success rw [ι_preservesColimitIso_inv (G ⋙ H)] fail_if_success simp only [ι_preservesColimitIso_inv] -- This would work: -- erw [ι_preservesColimitIso_inv (G ⋙ H)] -- `check_compositions` checks if the two morphisms we're composing are -- composed by abusing defeq, and indeed it tells us that we are abusing -- definitional associativity of composition of functors here: it prints -- the following. -- info: In composition -- colimit.ι ((F ⋙ G) ⋙ H) j ≫ (preservesColimitIso (G ⋙ H) F).inv -- the source of -- (preservesColimitIso (G ⋙ H) F).inv -- is -- colimit (F ⋙ G ⋙ H) -- but should be -- colimit ((F ⋙ G) ⋙ H) check_compositions -- In this case, we can "fix" this by reassociating in the goal, but -- usually at this point the right thing to do is to back off and -- check how we ended up with a bad goal in the first place. dsimp only [Functor.assoc] -- This would work now, but it is not needed, because simp works as well -- rw [ι_preservesColimitIso_inv] simp
checkpoint
checkpoint tac acts the same as tac, but it caches the input and output of tac, and if the file is re-elaborated and the input matches, the tactic is not re-run and its effects are reapplied to the state. This is useful for improving responsiveness when working on a long tactic proof, by wrapping expensive tactics with checkpoint.
See the save tactic, which may be more convenient to use.
(TODO: do this automatically and transparently so that users don't have to use this combinator explicitly.)
choose
- choose a b h h' using hyp takes a hypothesis hyp of the form ∀ (x : X) (y : Y), ∃ (a : A) (b : B), P x y a b ∧ Q x y a b for some P Q : X → Y → A → B → Prop and outputs into context a function a : X → Y → A, b : X → Y → B and two assumptions: h : ∀ (x : X) (y : Y), P x y (a x y) (b x y) and h' : ∀ (x : X) (y : Y), Q x y (a x y) (b x y). It also works with dependent versions.
- choose! a b h h' using hyp does the same, except that it will remove dependency of the functions on propositional arguments if possible. For example if Y is a proposition and A and B are nonempty in the above example then we will instead get a : X → A, b : X → B, and the assumptions h : ∀ (x : X) (y : Y), P x y (a x) (b x) and h' : ∀ (x : X) (y : Y), Q x y (a x) (b x).
The using hyp part can be omitted, which will effectively cause choose to start with an intro hyp.
Examples:
example (h : ∀ n m : ℕ, ∃ i j, m = n + i ∨ m + j = n) : True := by
  choose i j h using h
  guard_hyp i : ℕ → ℕ → ℕ
  guard_hyp j : ℕ → ℕ → ℕ
  guard_hyp h : ∀ (n m : ℕ), m = n + i n m ∨ m + j n m = n
  trivial

example (h : ∀ i : ℕ, i < 7 → ∃ j, i < j ∧ j < i+i) : True := by
  choose! f h h' using h
  guard_hyp f : ℕ → ℕ
  guard_hyp h : ∀ i < 7, i < f i
  guard_hyp h' : ∀ i < 7, f i < i + i
  trivial
choose!
- choose a b h h' using hyp takes a hypothesis hyp of the form ∀ (x : X) (y : Y), ∃ (a : A) (b : B), P x y a b ∧ Q x y a b for some P Q : X → Y → A → B → Prop and outputs into context a function a : X → Y → A, b : X → Y → B and two assumptions: h : ∀ (x : X) (y : Y), P x y (a x y) (b x y) and h' : ∀ (x : X) (y : Y), Q x y (a x y) (b x y). It also works with dependent versions.
- choose! a b h h' using hyp does the same, except that it will remove dependency of the functions on propositional arguments if possible. For example if Y is a proposition and A and B are nonempty in the above example then we will instead get a : X → A, b : X → B, and the assumptions h : ∀ (x : X) (y : Y), P x y (a x) (b x) and h' : ∀ (x : X) (y : Y), Q x y (a x) (b x).
The using hyp part can be omitted, which will effectively cause choose to start with an intro hyp.
Examples:
example (h : ∀ n m : ℕ, ∃ i j, m = n + i ∨ m + j = n) : True := by
  choose i j h using h
  guard_hyp i : ℕ → ℕ → ℕ
  guard_hyp j : ℕ → ℕ → ℕ
  guard_hyp h : ∀ (n m : ℕ), m = n + i n m ∨ m + j n m = n
  trivial

example (h : ∀ i : ℕ, i < 7 → ∃ j, i < j ∧ j < i+i) : True := by
  choose! f h h' using h
  guard_hyp f : ℕ → ℕ
  guard_hyp h : ∀ i < 7, i < f i
  guard_hyp h' : ∀ i < 7, f i < i + i
  trivial
classical
classical tacs runs tacs in a scope where Classical.propDecidable is a low priority local instance.
Note that classical is a scoping tactic: it adds the instance only within the scope of the tactic.
clean_wf
This tactic is used internally by Lean before presenting the proof obligations from a well-founded definition to the user via decreasing_by. It is not necessary to use this tactic manually.
clear
Clears all hypotheses it can, except those provided after a minus sign. Example:
clear * - h₁ h₂
clear
clear x... removes the given hypotheses, or fails if there are remaining references to a hypothesis.
clear!
A variant of clear which clears not only the given hypotheses but also any other hypotheses depending on them.
clear_value
clear_value n₁ n₂ ... clears the bodies of the local definitions n₁, n₂ ..., changing them into regular hypotheses. A hypothesis n : α := t is changed to n : α.
The order of n₁ n₂ ... does not matter, and values will be cleared in reverse order of where they appear in the context.
coherence
Use the coherence theorem for monoidal categories to solve equations in a monoidal category, where the two sides only differ by replacing strings of monoidal structural morphisms (that is, associators, unitors, and identities) with different strings of structural morphisms with the same source and target.
That is, coherence can handle goals of the form
a ≫ f ≫ b ≫ g ≫ c = a' ≫ f ≫ b' ≫ g ≫ c'
where a = a', b = b', and c = c' can be proved using pure_coherence.
(If you have very large equations on which coherence
is unexpectedly failing,
you may need to increase the typeclass search depth,
using e.g. set_option synthInstance.maxSize 500
.)
compareOfLessAndEq_rfl
This attempts to prove that a given instance of compare
is equal to compareOfLessAndEq
by
introducing the arguments and trying the following approaches in order:
-
seeing if
rfl
works -
seeing if the
compare
at hand is nonetheless essentiallycompareOfLessAndEq
, but, because of implicit arguments, requires us to unfold the defs and split theif
s in the definition ofcompareOfLessAndEq
-
seeing if we can split by cases on the arguments, then see if the defs work themselves out (useful when
compare
is defined via amatch
statement, as it is forBool
)
compute_degree
compute_degree
is a tactic to solve goals of the form
-
natDegree f = d
, -
degree f = d
, -
natDegree f ≤ d
, -
degree f ≤ d
, -
coeff f d = r
, ifd
is the degree off
.
The tactic may leave goals of the form d' = d
, d' ≤ d
, or r ≠ 0
, where d'
in ℕ
or
WithBot ℕ
is the tactic's guess of the degree, and r
is the coefficient's guess of the
leading coefficient of f
.
compute_degree
applies norm_num
to the left-hand side of all side goals, trying to close them.
The variant compute_degree!
first applies compute_degree
.
Then it uses norm_num
on all the remaining goals and tries assumption
.
compute_degree!
compute_degree
is a tactic to solve goals of the form
-
natDegree f = d
, -
degree f = d
, -
natDegree f ≤ d
, -
degree f ≤ d
, -
coeff f d = r
, ifd
is the degree off
.
The tactic may leave goals of the form d' = d
, d' ≤ d
, or r ≠ 0
, where d'
in ℕ
or
WithBot ℕ
is the tactic's guess of the degree, and r
is the coefficient's guess of the
leading coefficient of f
.
compute_degree
applies norm_num
to the left-hand side of all side goals, trying to close them.
The variant compute_degree!
first applies compute_degree
.
Then it uses norm_num
on all the remaining goals and tries assumption
.
congr
Apply congruence (recursively) to goals of the form ⊢ f as = f bs
and ⊢ HEq (f as) (f bs)
.
The optional parameter is the depth of the recursive applications.
This is useful when congr
is too aggressive in breaking down the goal.
For example, given ⊢ f (g (x + y)) = f (g (y + x))
,
congr
produces the goals ⊢ x = y
and ⊢ y = x
,
while congr 2
produces the intended ⊢ x + y = y + x
.
congr
Apply congruence (recursively) to goals of the form ⊢ f as = f bs
and ⊢ HEq (f as) (f bs)
.
The optional parameter is the depth of the recursive applications.
This is useful when congr
is too aggressive in breaking down the goal.
For example, given ⊢ f (g (x + y)) = f (g (y + x))
,
congr
produces the goals ⊢ x = y
and ⊢ y = x
,
while congr 2
produces the intended ⊢ x + y = y + x
.
congr
Apply congruence (recursively) to goals of the form ⊢ f as = f bs
and ⊢ HEq (f as) (f bs)
.
-
congr n
controls the depth of the recursive applications. This is useful whencongr
is too aggressive in breaking down the goal. For example, given⊢ f (g (x + y)) = f (g (y + x))
,congr
produces the goals⊢ x = y
and⊢ y = x
, whilecongr 2
produces the intended⊢ x + y = y + x
. -
If, at any point, a subgoal matches a hypothesis then the subgoal will be closed.
-
You can use
congr with p (: n)?
to callext p (: n)?
to all subgoals generated bycongr
. For example, if the goal is⊢ f '' s = g '' s
thencongr with x
generates the goalx : α ⊢ f x = g x
.
congr!
Equates pieces of the left-hand side of a goal to corresponding pieces of the right-hand side by
recursively applying congruence lemmas. For example, with ⊢ f as = g bs
we could get
two goals ⊢ f = g
and ⊢ as = bs
.
Syntax:
congr! congr! n congr! with x y z congr! n with x y z
Here, n
is a natural number and x
, y
, z
are rintro
patterns (like h
, rfl
, ⟨x, y⟩
,
_
, -
, (h | h)
, etc.).
The congr!
tactic is similar to congr
but is more insistent in trying to equate left-hand sides
to right-hand sides of goals. Here is a list of things it can try:
-
If
R
in⊢ R x y
is a reflexive relation, it will convert the goal to⊢ x = y
if possible. The list of reflexive relations is maintained using the@[refl]
attribute. As a special case,⊢ p ↔ q
is converted to⊢ p = q
during congruence processing and then returned to⊢ p ↔ q
form at the end. -
If there is a user congruence lemma associated to the goal (for instance, a
@[congr]
-tagged lemma applying to⊢ List.map f xs = List.map g ys
), then it will use that. -
It uses a congruence lemma generator at least as capable as the one used by
congr
andsimp
. If there is a subexpression that can be rewritten bysimp
, thencongr!
should be able to generate an equality for it. -
It can do congruences of pi types using lemmas like
implies_congr
andpi_congr
. -
Before applying congruences, it will run the
intros
tactic automatically. The introduced variables can be given names using awith
clause. This helps when congruence lemmas provide additional assumptions in hypotheses. -
When there is an equality between functions, so long as at least one is obviously a lambda, we apply
funext
orFunction.hfunext
, which allows for congruence of lambda bodies. -
It can try to close goals using a few strategies, including checking definitional equality, trying to apply
Subsingleton.elim
orproof_irrel_heq
, and using theassumption
tactic.
The optional parameter is the depth of the recursive applications.
This is useful when congr!
is too aggressive in breaking down the goal.
For example, given ⊢ f (g (x + y)) = f (g (y + x))
,
congr!
produces the goals ⊢ x = y
and ⊢ y = x
,
while congr! 2
produces the intended ⊢ x + y = y + x
.
The congr!
tactic also takes a configuration option, for example
congr! (config := {transparency := .default}) 2
This overrides the default, which is to apply congruence lemmas at reducible transparency.
The congr!
tactic is aggressive with equating two sides of everything. There is a predefined
configuration that uses a different strategy:
Try
congr! (config := .unfoldSameFun)
This only allows congruences between functions applications of definitionally equal functions,
and it applies congruence lemmas at default transparency (rather than just reducible).
This is somewhat like congr
.
See Congr!.Config
for all options.
congrm
congrm e
is a tactic for proving goals of the form lhs = rhs
, lhs ↔ rhs
, HEq lhs rhs
,
or R lhs rhs
when R
is a reflexive relation.
The expression e
is a pattern containing placeholders ?_
,
and this pattern is matched against lhs
and rhs
simultaneously.
These placeholders generate new goals that state that corresponding subexpressions
in lhs
and rhs
are equal.
If the placeholders have names, such as ?m
, then the new goals are given tags with those names.
Examples:
example {a b c d : ℕ} :
    Nat.pred a.succ * (d + (c + a.pred)) = Nat.pred b.succ * (b + (c + d.pred)) := by
  congrm Nat.pred (Nat.succ ?h1) * (?h2 + ?h3)
  /- Goals left:
  case h1 ⊢ a = b
  case h2 ⊢ d = b
  case h3 ⊢ c + a.pred = c + d.pred
  -/
  sorry
  sorry
  sorry

example {a b : ℕ} (h : a = b) : (fun y : ℕ => ∀ z, a + a = z) = (fun x => ∀ z, b + a = z) := by
  congrm fun x => ∀ w, ?_ + a = w
  -- ⊢ a = b
  exact h
The congrm
command is a convenient frontend to congr(...)
congruence quotations.
If the goal is an equality, congrm e
is equivalent to refine congr(e')
where e'
is
built from e
by replacing each placeholder ?m
by $(?m)
.
The pattern e
is allowed to contain $(...)
expressions to immediately substitute
equality proofs into the congruence, just like for congruence quotations.
congrm?
Display a widget panel allowing to generate a congrm
call with holes specified by selecting
subexpressions in the goal.
constructor
If the main goal's target type is an inductive type, constructor solves it with the first matching constructor, or else fails.
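A minimal sketch:
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  constructor
  · exact hp
  · exact hq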
constructorm
-
constructorm p_1, ..., p_n
applies theconstructor
tactic to the main goal iftype
matches one of the given patterns. -
constructorm* p
is a more efficient and compact version of· repeat constructorm p
. It is more efficient because the pattern is compiled once.
Example: The following tactic proves any theorem like True ∧ (True ∨ True)
consisting of
and/or/true:
constructorm* _ ∨ _, _ ∧ _, True
continuity
The tactic continuity
solves goals of the form Continuous f
by applying lemmas tagged with the
continuity
user attribute.
continuity?
The tactic continuity
solves goals of the form Continuous f
by applying lemmas tagged with the
continuity
user attribute.
contradiction
contradiction
closes the main goal if its hypotheses are "trivially contradictory".
- Inductive type/family with no applicable constructors
  example (h : False) : p := by contradiction
- Injectivity of constructors
  example (h : none = some true) : p := by contradiction
- Decidable false proposition
  example (h : 2 + 2 = 3) : p := by contradiction
- Contradictory hypotheses
  example (h : p) (h' : ¬ p) : q := by contradiction
- Other simple contradictions such as
  example (x : Nat) (h : x ≠ x) : p := by contradiction
contrapose
Transforms the goal into its contrapositive.
- contrapose turns a goal P → Q into ¬ Q → ¬ P
- contrapose h first reverts the local assumption h, and then uses contrapose and intro h
- contrapose h with new_h uses the name new_h for the introduced hypothesis
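A minimal sketch:
example (p q : Prop) (h : ¬q → ¬p) : p → q := by
  contrapose
  exact h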
contrapose!
Transforms the goal into its contrapositive and pushes negations inside P and Q. Usage matches contrapose.
conv
conv => ...
allows the user to perform targeted rewriting on a goal or hypothesis,
by focusing on particular subexpressions.
See https://lean-lang.org/theorem_proving_in_lean4/conv.html for more details.
Basic forms:
-
conv => cs
will rewrite the goal with conv tacticscs
. -
conv at h => cs
will rewrite hypothesish
. -
conv in pat => cs
will rewrite the first subexpression matchingpat
(seepattern
).
conv?
Display a widget panel allowing to generate a conv
call zooming to the subexpression selected
in the goal.
conv_lhs
conv_rhs
convert
The exact e
and refine e
tactics require a term e
whose type is
definitionally equal to the goal. convert e
is similar to refine e
,
but the type of e
is not required to exactly match the
goal. Instead, new goals are created for differences between the type
of e
and the goal using the same strategies as the congr!
tactic.
For example, in the proof state
n : ℕ, e : Prime (2 * n + 1) ⊢ Prime (n + n + 1)
the tactic convert e using 2
will change the goal to
⊢ n + n = 2 * n
In this example, the new goal can be solved using ring
.
The using 2
indicates it should iterate the congruence algorithm up to two times,
where convert e
would use an unrestricted number of iterations and lead to two
impossible goals: ⊢ HAdd.hAdd = HMul.hMul
and ⊢ n = 2
.
A variant configuration is convert (config := .unfoldSameFun) e
, which only equates function
applications for the same function (while doing so at the higher default
transparency).
This gives the same goal of ⊢ n + n = 2 * n
without needing using 2
.
The convert
tactic applies congruence lemmas eagerly before reducing,
therefore it can fail in cases where exact
succeeds:
def p (n : ℕ) := True example (h : p 0) : p 1 := by exact h -- succeeds example (h : p 0) : p 1 := by convert h -- fails, with leftover goal `1 = 0`
Limiting the depth of recursion can help with this. For example, convert h using 1
will work
in this case.
The syntax convert ← e
will reverse the direction of the new goals
(producing ⊢ 2 * n = n + n
in this example).
Internally, convert e
works by creating a new goal asserting that
the goal equals the type of e
, then simplifying it using
congr!
. The syntax convert e using n
can be used to control the
depth of matching (like congr! n
). In the example, convert e using 1
would produce a new goal ⊢ n + n + 1 = 2 * n + 1
.
Refer to the congr!
tactic to understand the congruence operations. One of its many
features is that if x y : t
and an instance Subsingleton t
is in scope,
then any goals of the form x = y
are solved automatically.
Like congr!
, convert
takes an optional with
clause of rintro
patterns,
for example convert e using n with x y z
.
The convert
tactic also takes a configuration option, for example
convert (config := {transparency := .default}) h
These are passed to congr!
. See Congr!.Config
for options.
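The worked example above, written out as a runnable sketch (assuming Mathlib is imported; Nat.Prime stands in for the Prime in the displayed proof state):
example (n : ℕ) (e : Nat.Prime (2 * n + 1)) : Nat.Prime (n + n + 1) := by
  convert e using 2
  -- ⊢ n + n = 2 * n
  ring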
convert_to
The convert_to
tactic is for changing the type of the target or a local hypothesis,
but unlike the change
tactic it will generate equality proof obligations using congr!
to resolve discrepancies.
-
convert_to ty
changes the target toty
-
convert_to ty using n
usescongr! n
instead ofcongr! 1
-
convert_to ty at h
changes the type of the local hypothesish
toty
. Any remainingcongr!
goals come first.
Operating on the target, the tactic convert_to ty using n
is the same as convert (?_ : ty) using n
.
The difference is that convert_to
takes a type but convert
takes a proof term.
Except for it also being able to operate on local hypotheses,
the syntax for convert_to
is the same as for convert
, and it has variations such as
convert_to ← g
and convert_to (config := {transparency := .default}) g
.
Note that convert_to ty at h
may leave a copy of h
if a later local hypotheses or the target
depends on it, just like in rw
or simp
.
count_heartbeats
count_heartbeats
is deprecated in favour of #count_heartbeats
since "2025-01-12"
dbg_trace
decide
decide
attempts to prove the main goal (with target type p
) by synthesizing an instance of Decidable p
and then reducing that instance to evaluate the truth value of p
.
If it reduces to isTrue h
, then h
is a proof of p
that closes the goal.
The target is not allowed to contain local variables or metavariables.
If there are local variables, you can first try using the revert
tactic with these local variables to move them into the target,
or you can use the +revert
option, described below.
Options:
-
decide +revert
begins by reverting local variables that the target depends on, after cleaning up the local context of irrelevant variables. A variable is relevant if it appears in the target, if it appears in a relevant variable, or if it is a proposition that refers to a relevant variable. -
decide +kernel
uses kernel for reduction instead of the elaborator. It has two key properties: (1) since it uses the kernel, it ignores transparency and can unfold everything, and (2) it reduces theDecidable
instance only once instead of twice. -
decide +native
uses the native code compiler (#eval
) to evaluate theDecidable
instance, admitting the result via theLean.ofReduceBool
axiom. This can be significantly more efficient than using reduction, but it is at the cost of increasing the size of the trusted code base. Namely, it depends on the correctness of the Lean compiler and all definitions with an@[implemented_by]
attribute. Like with+kernel
, theDecidable
instance is evaluated only once.
Limitation: In the default mode or +kernel
mode, since decide
uses reduction to evaluate the term,
Decidable
instances defined by well-founded recursion might not work because evaluating them requires reducing proofs.
Reduction can also get stuck on Decidable
instances with Eq.rec
terms.
These can appear in instances defined using tactics (such as rw
and simp
).
To avoid this, create such instances using definitions such as decidable_of_iff
instead.
Examples
Proving inequalities:
example : 2 + 2 ≠ 5 := ⊢ 2 + 2 ≠ 5 All goals completed! 🐙
Trying to prove a false proposition:
example : 1 ≠ 1 := by decide /- tactic 'decide' proved that the proposition 1 ≠ 1 is false -/
Trying to prove a proposition whose Decidable
instance fails to reduce
opaque unknownProp : Prop
open scoped Classical in
example : unknownProp := by decide
/- tactic 'decide' failed for proposition unknownProp since its 'Decidable' instance
   reduced to Classical.choice ⋯ rather than to the 'isTrue' constructor. -/
Properties and relations
For equality goals for types with decidable equality, usually rfl
can be used in place of decide
.
example : 1 + 1 = 2 := ⊢ 1 + 1 = 2 All goals completed! 🐙
example : 1 + 1 = 2 := ⊢ 1 + 1 = 2 All goals completed! 🐙
decreasing_tactic
decreasing_tactic
is called by default on well-founded recursions in order
to synthesize a proof that recursive calls decrease along the selected
well founded relation. It can be locally overridden by using decreasing_by tac
on the recursive definition, and it can also be globally extended by adding
more definitions for decreasing_tactic
(or decreasing_trivial
,
which this tactic calls).
decreasing_trivial
Extensible helper tactic for decreasing_tactic
. This handles the "base case"
reasoning after applying lexicographic order lemmas.
It can be extended by adding more macro definitions, e.g.
macro_rules | `(tactic| decreasing_trivial) => `(tactic| linarith)
decreasing_trivial_pre_omega
Variant of decreasing_trivial
that does not use omega
, intended to be used in core modules
before omega
is available.
decreasing_with
Constructs a proof of decreasing along a well founded relation, by simplifying, then applying
lexicographic order lemmas and finally using ts
to solve the base case. If it fails,
it prints a message to help the user diagnose an ill-founded recursive definition.
delta
delta id1 id2 ...
delta-expands the definitions id1
, id2
, ....
This is a low-level tactic; it will expose how recursive definitions have been
compiled by Lean.
done
done
succeeds iff there are no remaining goals.
dsimp
The dsimp
tactic is the definitional simplifier. It is similar to simp
but only
applies theorems that hold by reflexivity. Thus, the result is guaranteed to be
definitionally equal to the input.
dsimp!
dsimp!
is shorthand for dsimp
with autoUnfold := true
.
This will rewrite with all equation lemmas, which can be used to
partially evaluate many definitions.
dsimp?
simp?
takes the same arguments as simp
, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := x:ℕ⊢ (if True then x + 2 else 3) = x + 2
All goals completed! 🐙 -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all
and dsimp
.
dsimp?!
simp?
takes the same arguments as simp
, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := x:ℕ⊢ (if True then x + 2 else 3) = x + 2
All goals completed! 🐙 -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all
and dsimp
.
eapply
eapply e
is like apply e
but it does not add subgoals for variables that appear
in the types of other goals. Note that this can lead to a failure where there are
no goals remaining but there are still metavariables in the term:
example (h : ∀ x : Nat, x = x → True) : True := by
  eapply h
  rfl
-- no goals
-- (kernel) declaration has metavariables '_example'
econstructor
econstructor
is like constructor
(it calls apply
using the first matching constructor of an inductive datatype)
except only non-dependent premises are added as new goals.
elementwise
elementwise!
erw
erw?
eta_expand
eta_expand at loc
eta expands all sub-expressions at the given location.
It also beta reduces any applications of eta expanded terms, so it puts it
into an eta-expanded "normal form."
This also exists as a conv
-mode tactic.
For example, if f
takes two arguments, then f
becomes fun x y => f x y
and f x
becomes fun y => f x y
.
This can be useful to turn, for example, a raw HAdd.hAdd
into fun x y => x + y
.
eta_reduce
eta_reduce at loc
eta reduces all sub-expressions at the given location.
This also exists as a conv
-mode tactic.
For example, fun x y => f x y
becomes f
after eta reduction.
eta_struct
eta_struct at loc
transforms structure constructor applications such as S.mk x.1 ... x.n
(pretty printed as, for example, {a := x.a, b := x.b, ...}
) into x
.
This also exists as a conv
-mode tactic.
The transformation is known as eta reduction for structures, and it yields definitionally equal expressions.
For example, given x : α × β
, then (x.1, x.2)
becomes x
after this transformation.
exact
exact e
closes the main goal if its target type matches that of e
.
exact?
Searches environment for definitions or theorems that can solve the goal using exact
with conditions resolved by solve_by_elim
.
The optional using
clause provides identifiers in the local context that must be
used by exact?
when closing the goal. This is most useful if there are multiple
ways to resolve the goal, and one wants to guide which lemma is used.
exact_mod_cast
Normalize casts in the goal and the given expression, then close the goal with exact
.
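For example, a minimal sketch (assuming Mathlib is imported):
example (a b : ℕ) (h : (a : ℤ) ≤ (b : ℤ)) : a ≤ b := by
  exact_mod_cast h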
exacts
Like exact
, but takes a list of terms and checks that all goals are discharged after the tactic.
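A small sketch of a typical use:
example : 1 = 1 ∧ 2 = 2 := by
  constructor
  exacts [rfl, rfl]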
exfalso
exfalso
converts a goal ⊢ tgt
into ⊢ False
by applying False.elim
.
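A minimal sketch:
example (h : False) : 2 + 2 = 5 := by
  exfalso
  exact h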
exists
exists e₁, e₂, ...
is shorthand for refine ⟨e₁, e₂, ...⟩; try trivial
.
It is useful for existential goals.
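A minimal sketch:
example : ∃ n : Nat, n + 1 = 2 := by
  exists 1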
existsi
existsi e₁, e₂, ⋯
applies the tactic refine ⟨e₁, e₂, ⋯, ?_⟩
. Its purpose is to instantiate
existential quantifiers.
Examples:
example : ∃ x : Nat, x = x := ⊢ ∃ x, x = x
⊢ 42 = 42
All goals completed! 🐙
example : ∃ x : Nat, ∃ y : Nat, x = y := ⊢ ∃ x y, x = y
⊢ 42 = 42
All goals completed! 🐙
ext
Applies extensionality lemmas that are registered with the @[ext]
attribute.
-
ext pat*
applies extensionality theorems as much as possible, using the patternspat*
to introduce the variables in extensionality theorems usingrintro
. For example, the patterns are used to name the variables introduced by lemmas such asfunext
. -
Without patterns,
ext
applies extensionality lemmas as much as possible but introduces anonymous hypotheses whenever needed. -
ext pat* : n
applies ext theorems only up to depthn
.
The ext1 pat*
tactic is like ext pat*
except that it only applies a single extensionality theorem.
Unused patterns will generate a warning. Patterns that don't match the variables will typically result in the introduction of anonymous hypotheses.
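For example, a minimal sketch using the funext extensionality lemma (assuming Mathlib is imported):
example (f g : ℕ → ℕ) (h : ∀ n, f n = g n) : f = g := by
  ext n
  exact h n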
ext1
ext1 pat*
is like ext pat*
except that it only applies a single extensionality theorem rather
than recursively applying as many extensionality theorems as possible.
The pat*
patterns are processed using the rintro
tactic.
If no patterns are supplied, then variables are introduced anonymously using the intros
tactic.
extract_goal
-
extract_goal
formats the current goal as a stand-alone theorem or definition after cleaning up the local context of irrelevant variables. A variable is relevant if (1) it occurs in the target type, (2) there is a relevant variable that depends on it, or (3) the type of the variable is a proposition that depends on a relevant variable. If the target is
False
, then for convenienceextract_goal
includes all variables. -
extract_goal *
formats the current goal without cleaning up the local context. -
extract_goal a b c ...
formats the current goal after removing everything that the given variablesa
,b
,c
, ... do not depend on. -
extract_goal ... using name
uses the namename
for the theorem or definition rather than the autogenerated name.
The tactic tries to produce an output that can be copy-pasted and just work, but its success depends on whether the expressions are amenable to being unambiguously pretty printed.
The tactic responds to pretty printing options.
For example, set_option pp.all true in extract_goal
gives the pp.all
form.
extract_lets
The extract_lets at h
tactic takes a local hypothesis of the form h : let x := v; b
and introduces a new local definition x := v
while changing h
to be h : b
. It can be thought
of as being a cases
tactic for let
expressions. It can also be thought of as being like
intros at h
for let
expressions.
For example, if h : let x := 1; x = x
, then extract_lets x at h
introduces x : Nat := 1
and
changes h
to h : x = x
.
Just like intros
, the extract_lets
tactic either takes a list of names, in which case
that specifies the number of let
bindings that must be extracted, or it takes no names, in which
case all the let
bindings are extracted.
The tactic extract_lets
(without at
) or extract_lets at h ⊢
acts as a weaker
form of intros
on the goal that only introduces obvious let
s.
fail_if_no_progress
fail_if_no_progress tacs
evaluates tacs
, and fails if no progress is made on the main goal
or the local context at reducible transparency.
fail_if_success
fail_if_success t
fails if the tactic t
succeeds.
false_or_by_contra
Changes the goal to False
, retaining as much information as possible:
-
If the goal is
False
, do nothing. -
If the goal is an implication or a function type, introduce the argument and restart. (In particular, if the goal is
x ≠ y
, introducex = y
.) -
Otherwise, for a propositional goal
P
, replace it with¬ ¬ P
(attempting to find aDecidable
instance, but otherwise falling back to working classically) and introduce¬ P
. -
For a non-propositional goal use
False.elim
.
fapply
fapply e
is like apply e
but it adds goals in the order they appear,
rather than putting the dependent goals first.
fconstructor
fconstructor
is like constructor
(it calls apply
using the first matching constructor of an inductive datatype)
except that it does not reorder goals.
field_simp
The goal of field_simp
is to reduce an expression in a field to an expression of the form n / d
where neither n
nor d
contains any division symbol, just using the simplifier (with a carefully
crafted simpset named field_simps
) to reduce the number of division symbols whenever possible by
iterating the following steps:
-
write an inverse as a division
-
in any product, move the division to the right
-
if there are several divisions in a product, group them together at the end and write them as a single division
-
reduce a sum to a common denominator
If the goal is an equality, this simpset will also clear the denominators, so that the proof
can normally be concluded by an application of ring
.
field_simp [hx, hy]
is a short form for
simp (disch := field_simp_discharge) [-one_div, -one_divp, -mul_eq_zero, hx, hy, field_simps]
Note that this naive algorithm will not try to detect common factors in denominators to reduce the
complexity of the resulting expression. Instead, it relies on the ability of ring
to handle
complicated expressions in the next step.
As always with the simplifier, reduction steps will only be applied if the preconditions of the lemmas can be checked. This means that proofs that denominators are nonzero should be included. The fact that a product is nonzero when all factors are, and that a power of a nonzero number is nonzero, are included in the simpset, but more complicated assertions (especially dealing with sums) should be given explicitly. If your expression is not completely reduced by the simplifier invocation, check the denominators of the resulting expression and provide proofs that they are nonzero to enable further progress.
To check that denominators are nonzero, field_simp
will look for facts in the context, and
will try to apply norm_num
to close numerical goals.
The invocation of field_simp
removes the lemma one_div
from the simpset, as this lemma
works against the algorithm explained above. It also removes
mul_eq_zero : x * y = 0 ↔ x = 0 ∨ y = 0
, as norm_num
can not work on disjunctions to
close goals of the form 24 ≠ 0
, and replaces it with mul_ne_zero : x ≠ 0 → y ≠ 0 → x * y ≠ 0
creating two goals instead of a disjunction.
For example,
example (a b c d x y : ℂ) (hx : x ≠ 0) (hy : y ≠ 0) :
a + b / x + c / x^2 + d / x^3 = a + x⁻¹ * (y * b / y + (d / x + c) / x) := a:ℂb:ℂc:ℂd:ℂx:ℂy:ℂhx:x ≠ 0hy:y ≠ 0⊢ a + b / x + c / x ^ 2 + d / x ^ 3 = a + x⁻¹ * (y * b / y + (d / x + c) / x)
a:ℂb:ℂc:ℂd:ℂx:ℂy:ℂhx:x ≠ 0hy:y ≠ 0⊢ (((a * x + b) * x ^ 2 + c * x) * x ^ 3 + d * (x * x ^ 2)) * (x * (x * x)) =
(a * (x * (x * x)) + (b * (x * x) + (d + c * x))) * (x * x ^ 2 * x ^ 3)
All goals completed! 🐙
Moreover, the field_simp
tactic can also take care of inverses of units in
a general (commutative) monoid/ring and partial division /ₚ
, see Algebra.Group.Units
for the definition. Analogue to the case above, the lemma one_divp
is removed from the simpset
as this works against the algorithm. If you have objects with an IsUnit x
instance like
(x : R) (hx : IsUnit x)
, you should lift them with
lift x to Rˣ using id hx; rw [IsUnit.unit_of_val_units] clear hx
before using field_simp
.
See also the cancel_denoms
tactic, which tries to do a similar simplification for expressions
that have numerals in denominators.
The tactics are not related: cancel_denoms
will only handle numeric denominators, and will try to
entirely remove (numeric) division from the expression by multiplying by a factor.
filter_upwards
filter_upwards [h₁, ⋯, hₙ]
replaces a goal of the form s ∈ f
and terms
h₁ : t₁ ∈ f, ⋯, hₙ : tₙ ∈ f
with ∀ x, x ∈ t₁ → ⋯ → x ∈ tₙ → x ∈ s
.
The list is an optional parameter, []
being its default value.
filter_upwards [h₁, ⋯, hₙ] with a₁ a₂ ⋯ aₖ
is a short form for
{ filter_upwards [h₁, ⋯, hₙ], intros a₁ a₂ ⋯ aₖ }
.
filter_upwards [h₁, ⋯, hₙ] using e
is a short form for
{ filter_upwards [h1, ⋯, hn], exact e }
.
Combining both shortcuts is done by writing filter_upwards [h₁, ⋯, hₙ] with a₁ a₂ ⋯ aₖ using e
.
Note that in this case, the aᵢ
terms can be used in e
.
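For illustration, a minimal sketch (assuming Mathlib is imported; the hypothesis names are illustrative):
example {α : Type*} {f : Filter α} {s t : Set α} (hs : s ∈ f) (h : ∀ x ∈ s, x ∈ t) : t ∈ f := by
  filter_upwards [hs] with x hx
  exact h x hx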
fin_cases
fin_cases h
performs case analysis on a hypothesis of the form
h : A
, where [Fintype A]
is available, or
h : a ∈ A
, where A : Finset X
, A : Multiset X
or A : List X
.
As an example, in
example (f : ℕ → Prop) (p : Fin 3) (h0 : f 0) (h1 : f 1) (h2 : f 2) : f p.val := f:ℕ → Propp:Fin 3h0:f 0h1:f 1h2:f 2⊢ f ↑p
f:ℕ → Proph0:f 0h1:f 1h2:f 2⊢ f ↑((fun i => i) ⟨0, ⋯⟩)f:ℕ → Proph0:f 0h1:f 1h2:f 2⊢ f ↑((fun i => i) ⟨1, ⋯⟩)f:ℕ → Proph0:f 0h1:f 1h2:f 2⊢ f ↑((fun i => i) ⟨2, ⋯⟩); f:ℕ → Proph0:f 0h1:f 1h2:f 2⊢ f 0f:ℕ → Proph0:f 0h1:f 1h2:f 2⊢ f ↑((fun i => i) ⟨1, ⋯⟩)f:ℕ → Proph0:f 0h1:f 1h2:f 2⊢ f ↑((fun i => i) ⟨2, ⋯⟩)
all_goals All goals completed! 🐙
after fin_cases p; simp
, there are three goals, f 0
, f 1
, and f 2
.
fin_omega
Preprocessor for omega
to handle inequalities in Fin
.
Note that this involves a lot of case splitting, so may be slow.
find
finiteness
Tactic to solve goals of the form *** < ∞
and (equivalently) *** ≠ ∞
in the extended
nonnegative reals (ℝ≥0∞
).
finiteness?
Tactic to solve goals of the form *** < ∞
and (equivalently) *** ≠ ∞
in the extended
nonnegative reals (ℝ≥0∞
).
finiteness_nonterminal
Tactic to solve goals of the form *** < ∞
and (equivalently) *** ≠ ∞
in the extended
nonnegative reals (ℝ≥0∞
).
focus
focus tac
focuses on the main goal, suppressing all other goals, and runs tac
on it.
Usually · tac
, which enforces that the goal is closed by tac
, should be preferred.
forward
forward?
fun_prop
Tactic to prove function properties
funext
Apply function extensionality and introduce new hypotheses.
The tactic funext
will keep applying the funext
lemma until the goal target is not reducible to
|- ((fun x => ...) = (fun x => ...))
The variant funext h₁ ... hₙ
applies funext
n
times, and uses the given identifiers to name the new hypotheses.
Patterns can be used like in the intro
tactic. For example, given a goal
|- ((fun x : Nat × Bool => ...) = (fun x => ...))
funext (a, b)
applies funext
once and performs pattern matching on the newly introduced pair.
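A minimal sketch:
example : (fun n : Nat => n + 0) = (fun n => n) := by
  funext n
  rfl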
gcongr
The gcongr
tactic applies "generalized congruence" rules, reducing a relational goal
between a LHS and RHS matching the same pattern to relational subgoals between the differing
inputs to the pattern. For example,
example {a b x c d : ℝ} (h1 : a + 1 ≤ b + 1) (h2 : c + 2 ≤ d + 2) :
x ^ 2 * a + c ≤ x ^ 2 * b + d := a:ℝb:ℝx:ℝc:ℝd:ℝh1:a + 1 ≤ b + 1h2:c + 2 ≤ d + 2⊢ x ^ 2 * a + c ≤ x ^ 2 * b + d
a:ℝb:ℝx:ℝc:ℝd:ℝh1:a + 1 ≤ b + 1h2:c + 2 ≤ d + 2⊢ a ≤ ba:ℝb:ℝx:ℝc:ℝd:ℝh1:a + 1 ≤ b + 1h2:c + 2 ≤ d + 2⊢ c ≤ d
a:ℝb:ℝx:ℝc:ℝd:ℝh1:a + 1 ≤ b + 1h2:c + 2 ≤ d + 2⊢ a ≤ b All goals completed! 🐙
a:ℝb:ℝx:ℝc:ℝd:ℝh1:a + 1 ≤ b + 1h2:c + 2 ≤ d + 2⊢ c ≤ d All goals completed! 🐙
This example has the goal of proving the relation ≤
between a LHS and RHS both of the pattern
x ^ 2 * ?_ + ?_
(with inputs a
, c
on the left and b
, d
on the right); after the use of
gcongr
, we have the simpler goals a ≤ b
and c ≤ d
.
A pattern can be provided explicitly; this is useful if a non-maximal match is desired:
example {a b c d x : ℝ} (h : a + c + 1 ≤ b + d + 1) :
x ^ 2 * (a + c) + 5 ≤ x ^ 2 * (b + d) + 5 := a:ℝb:ℝc:ℝd:ℝx:ℝh:a + c + 1 ≤ b + d + 1⊢ x ^ 2 * (a + c) + 5 ≤ x ^ 2 * (b + d) + 5
a:ℝb:ℝc:ℝd:ℝx:ℝh:a + c + 1 ≤ b + d + 1⊢ a + c ≤ b + d
All goals completed! 🐙
The "generalized congruence" rules used are the library lemmas which have been tagged with the
attribute @[gcongr]
. For example, the first example constructs the proof term
add_le_add (mul_le_mul_of_nonneg_left _ (pow_bit0_nonneg x 1)) _
using the generalized congruence lemmas add_le_add
and mul_le_mul_of_nonneg_left
.
The tactic attempts to discharge side goals to these "generalized congruence" lemmas (such as the
side goal 0 ≤ x ^ 2
in the above application of mul_le_mul_of_nonneg_left
) using the tactic
gcongr_discharger
, which wraps positivity
but can also be extended. Side goals not discharged
in this way are left for the user.
gcongr?
Display a widget panel allowing to generate a gcongr
call with holes specified by selecting
subexpressions in the goal.
gcongr_discharger
generalize
-
generalize ([h :] e = x),+
replaces all occurrences of each e
in the main goal with a fresh variable x
. If h
is given,h : e = x
is introduced as well. -
generalize e = x at h₁ ... hₙ
also generalizes occurrences ofe
insideh₁
, ...,hₙ
. -
generalize e = x at *
will generalize occurrences ofe
everywhere.
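For example, a minimal sketch:
example (x : ℕ) : x + 1 > x := by
  generalize h : x + 1 = y
  -- h : x + 1 = y ⊢ y > x
  omega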
generalize_proofs
generalize_proofs ids* [at locs]?
generalizes proofs in the current goal,
turning them into new local hypotheses.
-
generalize_proofs
generalizes proofs in the target. -
generalize_proofs at h₁ h₂
generalizes proofs in hypothesesh₁
andh₂
. -
generalize_proofs at *
generalizes proofs in the entire local context. -
generalize_proofs pf₁ pf₂ pf₃
uses namespf₁
,pf₂
, andpf₃
for the generalized proofs. These can be_
to not name proofs.
If a proof is already present in the local context, it will use that rather than create a new local hypothesis.
When doing generalize_proofs at h
, if h
is a let binding, its value is cleared,
and furthermore if h
duplicates a preceding local hypothesis then it is eliminated.
The tactic is able to abstract proofs from under binders, creating universally quantified
proofs in the local context.
To disable this, use generalize_proofs (config := { abstract := false })
.
The tactic is also set to recursively abstract proofs from the types of the generalized proofs.
This can be controlled with the maxDepth
configuration option,
with generalize_proofs (config := { maxDepth := 0 })
turning this feature off.
For example:
example : List.nthLe [1, 2] 1 (by simp) = 2 := by
  -- ⊢ [1, 2].nthLe 1 ⋯ = 2
  generalize_proofs h
  -- h : 1 < [1, 2].length
  -- ⊢ [1, 2].nthLe 1 h = 2
get_elem_tactic
get_elem_tactic
is the tactic automatically called by the notation arr[i]
to prove any side conditions that arise when constructing the term
(e.g. the index is in bounds of the array). It just delegates to
get_elem_tactic_trivial
and gives a diagnostic error message otherwise;
users are encouraged to extend get_elem_tactic_trivial
instead of this tactic.
get_elem_tactic_trivial
get_elem_tactic_trivial
is an extensible tactic automatically called
by the notation arr[i]
to prove any side conditions that arise when
constructing the term (e.g. the index is in bounds of the array).
The default behavior is to just try trivial
(which handles the case
where i < arr.size
is in the context) and simp_arith
and omega
(for doing linear arithmetic in the index).
ghost_calc
ghost_calc
is a tactic for proving identities between polynomial functions.
Typically, when faced with a goal like
∀ (x y : 𝕎 R), verschiebung (x * frobenius y) = verschiebung x * y
you can
-
call
ghost_calc
-
do a small amount of manual work -- maybe nothing, maybe
rintro
, etc -
call
ghost_simp
and this will close the goal.
ghost_calc
cannot detect whether you are dealing with unary or binary polynomial functions.
You must give it arguments to determine this.
If you are proving a universally quantified goal like the above,
call ghost_calc _ _
.
If the variables are introduced already, call ghost_calc x y
.
In the unary case, use ghost_calc _
or ghost_calc x
.
ghost_calc
is a light wrapper around type class inference.
All it does is apply the appropriate extensionality lemma and try to infer the resulting goals.
This is subtle and Lean's elaborator doesn't like it because of the HO unification involved,
so it is easier (and prettier) to put it in a tactic script.
grind
grind?
group
Tactic for normalizing expressions in multiplicative groups, without assuming commutativity, using only the group axioms without any information about which group is manipulated.
(For additive commutative groups, use the abel
tactic instead.)
Example:
example {G : Type} [Group G] (a b c d : G) (h : c = (a*b^2)*((b*b)⁻¹*a⁻¹)*d) : a*c*d⁻¹ = a := G:Typeinst✝:Group Ga:Gb:Gc:Gd:Gh:c = a * b ^ 2 * ((b * b)⁻¹ * a⁻¹) * d⊢ a * c * d⁻¹ = a
G:Typeinst✝:Group Ga:Gb:Gc:Gd:Gh:c = d⊢ a * c * d⁻¹ = a -- normalizes `h` which becomes `h : c = d`
G:Typeinst✝:Group Ga:Gb:Gc:Gd:Gh:c = d⊢ a * d * d⁻¹ = a -- the goal is now `a*d*d⁻¹ = a`
All goals completed! 🐙 -- which then normalized and closed
guard_expr
Tactic to check equality of two expressions.
-
guard_expr e = e'
checks thate
ande'
are defeq at reducible transparency. -
guard_expr e =~ e'
checks thate
ande'
are defeq at default transparency. -
guard_expr e =ₛ e'
checks thate
ande'
are syntactically equal. -
guard_expr e =ₐ e'
checks thate
ande'
are alpha-equivalent.
Both e
and e'
are elaborated then have their metavariables instantiated before the equality
check. Their types are unified (using isDefEqGuarded
) before synthetic metavariables are
processed, which helps with default instance handling.
guard_hyp
Tactic to check that a named hypothesis has a given type and/or value.
-
guard_hyp h : t
checks the type up to reducible defeq, -
guard_hyp h :~ t
checks the type up to default defeq, -
guard_hyp h :ₛ t
checks the type up to syntactic equality, -
guard_hyp h :ₐ t
checks the type up to alpha equality. -
guard_hyp h := v
checks value up to reducible defeq, -
guard_hyp h :=~ v
checks value up to default defeq, -
guard_hyp h :=ₛ v
checks value up to syntactic equality, -
guard_hyp h :=ₐ v
checks the value up to alpha equality.
The value v
is elaborated using the type of h
as the expected type.
guard_hyp_nums
guard_hyp_nums n
succeeds if there are exactly n
hypotheses and fails otherwise.
Note that, depending on what options are set, some hypotheses in the local context might not be printed in the goal view. This tactic computes the total number of hypotheses, not the number of visible hypotheses.
guard_target
Tactic to check that the target agrees with a given expression.
-
guard_target = e
checks that the target is defeq at reducible transparency toe
. -
guard_target =~ e
checks that the target is defeq at default transparency toe
. -
guard_target =ₛ e
checks that the target is syntactically equal toe
. -
guard_target =ₐ e
checks that the target is alpha-equivalent toe
.
The term e
is elaborated with the type of the goal as the expected type, which is mostly
useful within conv
mode.
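For illustration, a minimal sketch combining guard_hyp and guard_target; both succeed silently when the checks pass (assuming Mathlib is imported):
example (n : ℕ) (h : n = 1) : n + 1 ≤ 2 := by
  guard_hyp h : n = 1
  guard_target = n + 1 ≤ 2
  omega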
have
have
The have
tactic is for adding hypotheses to the local context of the main goal.
-
have h : t := e
adds the hypothesish : t
ife
is a term of typet
. -
have h := e
uses the type ofe
fort
. -
have : t := e
andhave := e
usethis
for the name of the hypothesis. -
have pat := e
for a patternpat
is equivalent tomatch e with | pat => _
, where_
stands for the tactics that follow this one. It is convenient for types that have only one applicable constructor. For example, givenh : p ∧ q ∧ r
,have ⟨h₁, h₂, h₃⟩ := h
produces the hypothesesh₁ : p
,h₂ : q
, andh₃ : r
.
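A minimal sketch:
example (p q : Prop) (hp : p) (hq : q) : q ∧ p := by
  have h : q ∧ p := ⟨hq, hp⟩
  exact h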
have!?
-
have? using a, b, c
tries to find a lemma which makes use of each of the local hypothesesa, b, c
, and reports any results via trace messages. -
have? : h using a, b, c
only returns lemmas whose type matchesh
(which may contain_
). -
have?! using a, b, c
will also callhave
to add results to the local goal state.
Note that have?
(unlike apply?
) does not inspect the goal at all,
only the types of the lemmas in the using
clause.
have?
should not be left in proofs; it is a search tool, like apply?
.
Suggestions are printed as have := f a b c
.
have?
-
have? using a, b, c
tries to find a lemma which makes use of each of the local hypothesesa, b, c
, and reports any results via trace messages. -
have? : h using a, b, c
only returns lemmas whose type matchesh
(which may contain_
). -
have?! using a, b, c
will also callhave
to add results to the local goal state.
Note that have?
(unlike apply?
) does not inspect the goal at all,
only the types of the lemmas in the using
clause.
have?
should not be left in proofs; it is a search tool, like apply?
.
Suggestions are printed as have := f a b c
.
have?!
-
have? using a, b, c
tries to find a lemma which makes use of each of the local hypothesesa, b, c
, and reports any results via trace messages. -
have? : h using a, b, c
only returns lemmas whose type matchesh
(which may contain_
). -
have?! using a, b, c
will also callhave
to add results to the local goal state.
Note that have?
(unlike apply?
) does not inspect the goal at all,
only the types of the lemmas in the using
clause.
have?
should not be left in proofs; it is a search tool, like apply?
.
Suggestions are printed as have := f a b c
.
hint
The hint
tactic tries every tactic registered using register_hint tac
,
and reports any that succeed.
induction
Assuming x
is a variable in the local context with an inductive type,
induction x
applies induction on x
to the main goal,
producing one goal for each constructor of the inductive type,
in which the target is replaced by a general instance of that constructor
and an inductive hypothesis is added for each recursive argument to the constructor.
If the type of an element in the local context depends on x
,
that element is reverted and reintroduced afterward,
so that the inductive hypothesis incorporates that hypothesis as well.
For example, given n : Nat
and a goal with a hypothesis h : P n
and target Q n
,
induction n
produces one goal with hypothesis h : P 0
and target Q 0
,
and one goal with hypotheses h : P (Nat.succ a)
and ih₁ : P a → Q a
and target Q (Nat.succ a)
.
Here the names a
and ih₁
are chosen automatically and are not accessible.
You can use with
to provide the variables names for each constructor.
-
induction e
, wheree
is an expression instead of a variable, generalizese
in the goal, and then performs induction on the resulting variable. -
induction e using r
allows the user to specify the principle of induction that should be used. Herer
should be a term whose result type must be of the formC t
, whereC
is a bound variable andt
is a (possibly empty) sequence of bound variables -
induction e generalizing z₁ ... zₙ
, wherez₁ ... zₙ
are variables in the local context, generalizes overz₁ ... zₙ
before applying the induction but then introduces them in each goal. In other words, the net effect is that each inductive hypothesis is generalized. -
Given
x : Nat
,induction x with | zero => tac₁ | succ x' ih => tac₂
uses tactictac₁
for thezero
case, andtac₂
for thesucc
case.
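For example, a minimal sketch using the with form:
example (l : List ℕ) : l ++ [] = l := by
  induction l with
  | nil => rfl
  | cons x xs ih => rw [List.cons_append, ih]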
induction'
The induction'
tactic is similar to the induction
tactic in Lean 4 core,
but with slightly different syntax (such as, no requirement to name the constructors).
open Nat
example (n : ℕ) : 0 < factorial n := n:ℕ⊢ 0 < n !
⊢ 0 < 0!n:ℕih:0 < n !⊢ 0 < (n + 1)!
⊢ 0 < 0! ⊢ 0 < 1
All goals completed! 🐙
n:ℕih:0 < n !⊢ 0 < (n + 1)! n:ℕih:0 < n !⊢ 0 < (n + 1) * n !
All goals completed! 🐙
example (n : ℕ) : 0 < factorial n := n:ℕ⊢ 0 < n !
⊢ 0 < 0!n✝:ℕa✝:0 < n✝!⊢ 0 < (n✝ + 1)!
case zero ⊢ 0 < 0!
⊢ 0 < 1
All goals completed! 🐙
case succ n ih n:ℕih:0 < n !⊢ 0 < (n + 1)!
n:ℕih:0 < n !⊢ 0 < (n + 1) * n !
All goals completed! 🐙
infer_instance
infer_instance
is an abbreviation for exact inferInstance
.
It synthesizes a value of any target type by typeclass inference.
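A minimal sketch:
example : Inhabited ℕ := by
  infer_instance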
inhabit
inhabit α
tries to derive a Nonempty α
instance and
then uses it to make an Inhabited α
instance.
If the target is a Prop
, this is done constructively. Otherwise, it uses Classical.choice
.
init_ring
init_ring
is an auxiliary tactic that discharges goals factoring init
over ring operations.
injection
The injection
tactic is based on the fact that constructors of inductive data
types are injections.
That means that if c
is a constructor of an inductive datatype, and if (c t₁)
and (c t₂)
are two terms that are equal then t₁
and t₂
are equal too.
If q
is a proof of a statement of conclusion t₁ = t₂
, then injection applies
injectivity to derive the equality of all arguments of t₁
and t₂
placed in
the same positions. For example, from (a::b) = (c::d)
we derive a=c
and b=d
.
To use this tactic t₁
and t₂
should be constructor applications of the same constructor.
Given h : a::b = c::d
, the tactic injection h
adds two new hypothesis with types
a = c
and b = d
to the main goal.
The tactic injection h with h₁ h₂
uses the names h₁
and h₂
to name the new hypotheses.
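A minimal sketch on list constructors:
example (a b c d : ℕ) (h : [a, b] = [c, d]) : a = c := by
  injection h with h₁ h₂
  exact h₁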
injections
injections
applies injection
to all hypotheses recursively
(since injection
can produce new hypotheses). Useful for destructing nested
constructor equalities like (a::b::c) = (d::e::f)
.
interval_cases
interval_cases n
searches for upper and lower bounds on a variable n
,
and if bounds are found,
splits into separate cases for each possible value of n
.
As an example, in
example (n : ℕ) (w₁ : n ≥ 3) (w₂ : n < 5) : n = 3 ∨ n = 4 := n:ℕw₁:n ≥ 3w₂:n < 5⊢ n = 3 ∨ n = 4
n:ℕw₁:3 ≥ 3w₂:3 < 5⊢ 3 = 3 ∨ 3 = 4n:ℕw₁:4 ≥ 3w₂:4 < 5⊢ 4 = 3 ∨ 4 = 4
all_goals All goals completed! 🐙
after interval_cases n
, the goals are 3 = 3 ∨ 3 = 4
and 4 = 3 ∨ 4 = 4
.
You can also explicitly specify a lower and upper bound to use,
as interval_cases using hl, hu
.
The hypotheses should be in the form hl : a ≤ n
and hu : n < b
,
in which case interval_cases
calls fin_cases
on the resulting fact n ∈ Set.Ico a b
.
You can specify a name h
for the new hypothesis,
as interval_cases h : n
or interval_cases h : n using hl, hu
.
intro
Introduces one or more hypotheses, optionally naming and/or pattern-matching them.
For each hypothesis to be introduced, the remaining main goal's target type must
be a let
or function type.
-
intro
by itself introduces one anonymous hypothesis, which can be accessed by e.g.assumption
. -
intro x y
introduces two hypotheses and names them. Individual hypotheses can be anonymized via_
, or matched against a pattern:intro (a, b) -- ..., a : α, b : β ⊢ ...
-
Alternatively,
intro
can be combined with pattern matching much likefun
:intro | n + 1, 0 => tac | ...
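For example, a minimal sketch using a pattern:
example (p q : Prop) : p ∧ q → p := by
  intro ⟨hp, _⟩
  exact hp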
intros
Introduces zero or more hypotheses, optionally naming them.
-
intros
is equivalent to repeatedly applyingintro
until the goal is not an obvious candidate forintro
, which is to say that so long as the goal is alet
or a pi type (e.g. an implication, function, or universal quantifier), theintros
tactic will introduce an anonymous hypothesis. This tactic does not unfold definitions. -
intros x y ...
is equivalent tointro x y ...
, introducing hypotheses for each supplied argument and unfolding definitions as necessary. Each argument can be either an identifier or a_
. An identifier indicates a name to use for the corresponding introduced hypothesis, and a_
indicates that the hypotheses should be introduced anonymously.
Examples
Basic properties:
def AllEven (f : Nat → Nat) := ∀ n, f n % 2 = 0
-- Introduces the two obvious hypotheses automatically
example : ∀ (f : Nat → Nat), AllEven f → AllEven (fun k => f (k + 1)) := ⊢ ∀ (f : ℕ → ℕ), AllEven f → AllEven fun k => f (k + 1)
f✝:ℕ → ℕa✝:AllEven f✝⊢ AllEven fun k => f✝ (k + 1)
/- Tactic state
f✝ : Nat → Nat
a✝ : AllEven f✝
⊢ AllEven fun k => f✝ (k + 1) -/
All goals completed! 🐙
-- Introduces exactly two hypotheses, naming only the first
example : ∀ (f : Nat → Nat), AllEven f → AllEven (fun k => f (k + 1)) := ⊢ ∀ (f : ℕ → ℕ), AllEven f → AllEven fun k => f (k + 1)
g:ℕ → ℕa✝:AllEven g⊢ AllEven fun k => g (k + 1)
/- Tactic state
g : Nat → Nat
a✝ : AllEven g
⊢ AllEven fun k => g (k + 1) -/
All goals completed! 🐙
-- Introduces exactly three hypotheses, which requires unfolding `AllEven`
example : ∀ (f : Nat → Nat), AllEven f → AllEven (fun k => f (k + 1)) := ⊢ ∀ (f : ℕ → ℕ), AllEven f → AllEven fun k => f (k + 1)
f:ℕ → ℕh:AllEven fn:ℕ⊢ (fun k => f (k + 1)) n % 2 = 0
/- Tactic state
f : Nat → Nat
h : AllEven f
n : Nat
⊢ (fun k => f (k + 1)) n % 2 = 0 -/
All goals completed! 🐙
Implications:
example (p q : Prop) : p → q → p := p:Propq:Prop⊢ p → q → p
p:Propq:Propa✝¹:pa✝:q⊢ p
/- Tactic state
a✝¹ : p
a✝ : q
⊢ p -/
All goals completed! 🐙
Let bindings:
example : let n := 1; let k := 2; n + k = 3 := ⊢ let n := 1;
let k := 2;
n + k = 3
n✝:ℕ := 1k✝:ℕ := 2⊢ n✝ + k✝ = 3
/- n✝ : Nat := 1
k✝ : Nat := 2
⊢ n✝ + k✝ = 3 -/
All goals completed! 🐙
introv
The tactic introv
allows the user to automatically introduce the variables of a theorem and
explicitly name the non-dependent hypotheses.
Any dependent hypotheses are assigned their default names.
Examples:
example : ∀ a b : Nat, a = b → b = a := by introv h; exact h.symm
The state after introv h
is
a b : ℕ, h : a = b ⊢ b = a
example : ∀ a b : Nat, a = b → ∀ c, b = c → a = c := by introv h₁ h₂; exact h₁.trans h₂
The state after introv h₁ h₂
is
a b : ℕ, h₁ : a = b, c : ℕ, h₂ : b = c ⊢ a = c
isBoundedDefault
Filters are automatically bounded or cobounded in complete lattices. To use the same statements
in complete and conditionally complete lattices but let automation fill automatically the
boundedness proofs in complete lattices, we use the tactic isBoundedDefault
in the statements,
in the form (hf : f.IsBounded (≥) := by isBoundedDefault)
.
itauto
A decision procedure for intuitionistic propositional logic. Unlike finish
and tauto!
this
tactic never uses the law of excluded middle (without the !
option), and the proof search is
tailored for this use case. (itauto!
will work as a classical SAT solver, but the algorithm is
not very good in this situation.)
example (p : Prop) : ¬ (p ↔ ¬ p) := p:Prop⊢ ¬(p ↔ ¬p) All goals completed! 🐙
itauto [a, b]
will additionally attempt case analysis on a
and b
assuming that it can derive
Decidable a
and Decidable b
. itauto *
will case on all decidable propositions that it can
find among the atomic propositions, and itauto! *
will case on all propositional atoms.
Warning: This can blow up the proof search, so it should be used sparingly.
itauto!
A decision procedure for intuitionistic propositional logic. Unlike finish
and tauto!
this
tactic never uses the law of excluded middle (without the !
option), and the proof search is
tailored for this use case. (itauto!
will work as a classical SAT solver, but the algorithm is
not very good in this situation.)
example (p : Prop) : ¬ (p ↔ ¬ p) := p:Prop⊢ ¬(p ↔ ¬p) All goals completed! 🐙
itauto [a, b]
will additionally attempt case analysis on a
and b
assuming that it can derive
Decidable a
and Decidable b
. itauto *
will case on all decidable propositions that it can
find among the atomic propositions, and itauto! *
will case on all propositional atoms.
Warning: This can blow up the proof search, so it should be used sparingly.
iterate
iterate n tac
runs tac
exactly n
times.
iterate tac
runs tac
repeatedly until failure.
iterate
's argument is a tactic sequence,
so multiple tactics can be run using iterate n (tac₁; tac₂; ⋯)
or
iterate n
tac₁
tac₂
⋯
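For example, a minimal sketch:
example (p : Prop) (hp : p) : p ∧ p ∧ p := by
  refine ⟨?_, ?_, ?_⟩
  iterate 3 exact hp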
left
Applies the first constructor when the goal is an inductive type with exactly two constructors, or fails otherwise.
example : True ∨ False := ⊢ True ∨ False
⊢ True
All goals completed! 🐙
let
The let
tactic is for adding definitions to the local context of the main goal.
-
let x : t := e
adds the definitionx : t := e
ife
is a term of typet
. -
let x := e
uses the type ofe
fort
. -
let : t := e
andlet := e
usethis
for the name of the hypothesis. -
let pat := e
for a patternpat
is equivalent tomatch e with | pat => _
, where_
stands for the tactics that follow this one. It is convenient for types that have only one applicable constructor. For example, givenp : α × β × γ
,let ⟨x, y, z⟩ := p
produces the local variablesx : α
,y : β
, andz : γ
.
let
let
lift
Lift an expression to another type.
-
Usage:
'lift' expr 'to' expr ('using' expr)? ('with' id (id id?)?)?
. -
If
n : ℤ
andhn : n ≥ 0
then the tacticlift n to ℕ using hn
creates a new constant of typeℕ
, also namedn
and replaces all occurrences of the old variable(n : ℤ)
with↑n
(wheren
in the new variable). It will removen
andhn
from the context.-
So for example the tactic
lift n to ℕ using hn
transforms the goaln : ℤ, hn : n ≥ 0, h : P n ⊢ n = 3
ton : ℕ, h : P ↑n ⊢ ↑n = 3
(hereP
is some term of typeℤ → Prop
).
-
-
The argument
using hn
is optional, the tacticlift n to ℕ
does the same, but also creates a new subgoal thatn ≥ 0
(wheren
is the old variable). This subgoal will be placed at the top of the goal list.-
So for example the tactic
lift n to ℕ
transforms the goaln : ℤ, h : P n ⊢ n = 3
to two goalsn : ℤ, h : P n ⊢ n ≥ 0
andn : ℕ, h : P ↑n ⊢ ↑n = 3
.
-
-
You can also use
lift n to ℕ using e
wheree
is any expression of typen ≥ 0
. -
Use
lift n to ℕ with k
to specify the name of the new variable. -
Use
lift n to ℕ with k hk
to also specify the name of the equality↑k = n
. In this case,n
will remain in the context. You can userfl
for the name ofhk
to substituten
away (i.e. the default behavior). -
You can also use
lift e to ℕ with k hk
wheree
is any expression of typeℤ
. In this case, thehk
will always stay in the context, but it will be used to rewritee
in all hypotheses and the target.-
So for example the tactic
lift n + 3 to ℕ using hn with k hk
transforms the goaln : ℤ, hn : n + 3 ≥ 0, h : P (n + 3) ⊢ n + 3 = 2 * n
to the goaln : ℤ, k : ℕ, hk : ↑k = n + 3, h : P ↑k ⊢ ↑k = 2 * n
.
-
-
The tactic
lift n to ℕ using h
will removeh
from the context. If you want to keep it, specify it again as the third argument towith
, like this:lift n to ℕ using h with n rfl h
. -
More generally, this can lift an expression from
α
toβ
assuming that there is an instance ofCanLift α β
. In this case the proof obligation is specified byCanLift.prf
. -
Given an instance
CanLift β γ
, it can also liftα → β
toα → γ
; more generally, givenβ : Π a : α, Type*
,γ : Π a : α, Type*
, and[Π a : α, CanLift (β a) (γ a)]
, it automatically generates an instanceCanLift (Π a, β a) (Π a, γ a)
.
lift
is in some sense dual to the zify
tactic. lift (z : ℤ) to ℕ
will change the type of an
integer z
(in the supertype) to ℕ
(the subtype), given a proof that z ≥ 0
;
propositions concerning z
will still be over ℤ
. zify
changes propositions about ℕ
(the
subtype) to propositions about ℤ
(the supertype), without changing the type of any variable.
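For example, a minimal sketch (assuming Mathlib is imported):
example (n : ℤ) (hn : n ≥ 0) (h : n + n = 6) : n = 3 := by
  lift n to ℕ using hn
  -- n : ℕ, h : ↑n + ↑n = 6 ⊢ ↑n = 3
  omega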
lift_lets
Lift all the let
bindings in the type of an expression as far out as possible.
When applied to the main goal, this gives one the ability to intro
embedded let
expressions.
For example,
example : (let x := 1; x) = 1 := ⊢ (let x := 1;
x) =
1
⊢ let x := 1;
x = 1
-- ⊢ let x := 1; x = 1
x:ℕ := 1⊢ x = 1
All goals completed! 🐙
During the lifting process, let bindings are merged if they have the same type and value.
liftable_prefixes
Internal tactic used in coherence
.
Rewrites an equation f = g
as f₀ ≫ f₁ = g₀ ≫ g₁
,
where f₀
and g₀
are maximal prefixes of f
and g
(possibly after reassociating)
which are "liftable" (i.e. expressible as compositions of unitors and associators).
linarith
linarith
attempts to find a contradiction between hypotheses that are linear (in)equalities.
Equivalently, it can prove a linear inequality by assuming its negation and proving False
.
In theory, linarith
should prove any goal that is true in the theory of linear arithmetic over
the rationals. While there is some special handling for non-dense orders like Nat
and Int
,
this tactic is not complete for these theories and will not prove every true goal. It will solve
goals over arbitrary types that instantiate LinearOrderedCommRing
.
An example:
example (x y z : ℚ) (h1 : 2*x < 3*y) (h2 : -4*x + 2*z < 0)
(h3 : 12*y - 4* z < 0) : False := x:ℚy:ℚz:ℚh1:2 * x < 3 * yh2:-4 * x + 2 * z < 0h3:12 * y - 4 * z < 0⊢ False
All goals completed! 🐙
linarith
will use all appropriate hypotheses and the negation of the goal, if applicable.
Disequality hypotheses require case splitting and are not normally considered
(see the splitNe
option below).
linarith [t1, t2, t3]
will additionally use proof terms t1, t2, t3
.
linarith only [h1, h2, h3, t1, t2, t3]
will use only the goal (if relevant), local hypotheses
h1
, h2
, h3
, and proofs t1
, t2
, t3
. It will ignore the rest of the local context.
linarith!
will use a stronger reducibility setting to try to identify atoms. For example,
example (x : ℚ) : id x ≥ x := by linarith
will fail, because linarith
will not identify x
and id x
. linarith!
will.
This can sometimes be expensive.
linarith (config := { .. })
takes a config object with five
optional arguments:
-
discharger
specifies a tactic to be used for reducing an algebraic equation in the proof stage. The default isring
. Other options includesimp
for basic problems. -
transparency
controls how hardlinarith
will try to match atoms to each other. By default it will only unfoldreducible
definitions. -
If
splitHypotheses
is true,linarith
will split conjunctions in the context into separate hypotheses. -
If
splitNe
istrue
,linarith
will case split on disequality hypotheses. For a givenx ≠ y
hypothesis,linarith
is run with bothx < y
andx > y
, and so this runs linarith exponentially many times with respect to the number of disequality hypotheses. (false
by default.) -
If
exfalso
isfalse
,linarith
will fail when the goal is neither an inequality norFalse
. (true
by default.) -
restrict_type
(not yet implemented in mathlib4) will only use hypotheses that are inequalities overtp
. This is useful if you have e.g. both integer and rational valued inequalities in the local context, which can sometimes confuse the tactic.
A variant, nlinarith
, does some basic preprocessing to handle some nonlinear goals.
The option set_option trace.linarith true
will trace certain intermediate stages of the linarith
routine.
linarith!
linarith
attempts to find a contradiction between hypotheses that are linear (in)equalities.
Equivalently, it can prove a linear inequality by assuming its negation and proving False
.
In theory, linarith
should prove any goal that is true in the theory of linear arithmetic over
the rationals. While there is some special handling for non-dense orders like Nat
and Int
,
this tactic is not complete for these theories and will not prove every true goal. It will solve
goals over arbitrary types that instantiate LinearOrderedCommRing
.
An example:
example (x y z : ℚ) (h1 : 2*x < 3*y) (h2 : -4*x + 2*z < 0)
(h3 : 12*y - 4* z < 0) : False := x:ℚy:ℚz:ℚh1:2 * x < 3 * yh2:-4 * x + 2 * z < 0h3:12 * y - 4 * z < 0⊢ False
All goals completed! 🐙
linarith
will use all appropriate hypotheses and the negation of the goal, if applicable.
Disequality hypotheses require case splitting and are not normally considered
(see the splitNe
option below).
linarith [t1, t2, t3]
will additionally use proof terms t1, t2, t3
.
linarith only [h1, h2, h3, t1, t2, t3]
will use only the goal (if relevant), local hypotheses
h1
, h2
, h3
, and proofs t1
, t2
, t3
. It will ignore the rest of the local context.
linarith!
will use a stronger reducibility setting to try to identify atoms. For example,
example (x : ℚ) : id x ≥ x := by linarith
will fail, because linarith
will not identify x
and id x
. linarith!
will.
This can sometimes be expensive.
linarith (config := { .. })
takes a config object with five
optional arguments:
-
discharger
specifies a tactic to be used for reducing an algebraic equation in the proof stage. The default isring
. Other options includesimp
for basic problems. -
transparency
controls how hardlinarith
will try to match atoms to each other. By default it will only unfoldreducible
definitions. -
If
splitHypotheses
is true,linarith
will split conjunctions in the context into separate hypotheses. -
If
splitNe
istrue
,linarith
will case split on disequality hypotheses. For a givenx ≠ y
hypothesis,linarith
is run with bothx < y
andx > y
, and so this runs linarith exponentially many times with respect to the number of disequality hypotheses. (false
by default.) -
If
exfalso
isfalse
,linarith
will fail when the goal is neither an inequality norFalse
. (true
by default.) -
restrict_type
(not yet implemented in mathlib4) will only use hypotheses that are inequalities overtp
. This is useful if you have e.g. both integer and rational valued inequalities in the local context, which can sometimes confuse the tactic.
A variant, nlinarith
, does some basic preprocessing to handle some nonlinear goals.
The option set_option trace.linarith true
will trace certain intermediate stages of the linarith
routine.
linear_combination
The linear_combination
tactic attempts to prove an (in)equality goal by exhibiting it as a
specified linear combination of (in)equality hypotheses, or other (in)equality proof terms, modulo
(A) moving terms between the LHS and RHS of the (in)equalities, and (B) a normalization tactic
which by default is ring-normalization.
Example usage:
example {a b : ℚ} (h1 : a = 1) (h2 : b = 3) : (a + b) / 2 = 2 := by
  linear_combination (h1 + h2) / 2
example {a b : ℚ} (h1 : a ≤ 1) (h2 : b ≤ 3) : (a + b) / 2 ≤ 2 := by
  linear_combination (h1 + h2) / 2
example {a b : ℚ} : 2 * a * b ≤ a ^ 2 + b ^ 2 := by
  linear_combination sq_nonneg (a - b)
example {x y z w : ℤ} (h₁ : x * z = y ^ 2) (h₂ : y * w = z ^ 2) :
    z * (x * w - y * z) = 0 := by
  linear_combination w * h₁ + y * h₂
example {x : ℚ} (h : x ≥ 5) : x ^ 2 > 2 * x + 11 := by
  linear_combination (x + 3) * h
example {R : Type*} [CommRing R] {a b : R} (h : a = b) : a ^ 2 = b ^ 2 := by
  linear_combination (a + b) * h
example {A : Type*} [AddCommGroup A] {x y z : A} (h1 : x + y = 10 • z) (h2 : x - y = 6 • z) :
    2 • x = 2 • (8 • z) := by
  linear_combination (norm := abel) h1 + h2
example (x y : ℤ) (h1 : x * y + 2 * x = 1) (h2 : x = y) : x * y = -2 * y + 1 := by
  linear_combination (norm := ring_nf) -2 * h2
  -- leaves goal `⊢ x * y + x * 2 - 1 = 0`
The input e
in linear_combination e
is a linear combination of proofs of (in)equalities,
given as a sum/difference of coefficients multiplied by expressions.
The coefficients may be arbitrary expressions (with nonnegativity constraints in the case of
inequalities).
The expressions can be arbitrary proof terms proving (in)equalities;
most commonly they are hypothesis names h1
, h2
, ....
The left and right sides of all the (in)equalities should have the same type α
, and the
coefficients should also have type α
. For full functionality α
should be a commutative ring --
strictly speaking, a commutative semiring with "cancellative" addition (in the semiring case,
negation and subtraction will be handled "formally" as if operating in the enveloping ring). If a
nonstandard normalization is used (for example abel
or skip
), the tactic will work over types
α
with less algebraic structure: for equalities, the minimum is instances of
[Add α] [IsRightCancelAdd α]
together with instances of whatever operations are used in the tactic
call.
The variant linear_combination (norm := tac) e
specifies explicitly the "normalization tactic"
tac
to be run on the subgoal(s) after constructing the linear combination.
-
The default normalization tactic is
ring1
(for equalities) orMathlib.Tactic.Ring.prove{LE,LT}
(for inequalities). These are finishing tactics: they close the goal or fail. -
When working in algebraic categories other than commutative rings -- for example fields, abelian groups, modules -- it is sometimes useful to use normalization tactics adapted to those categories (
field_simp
,abel
,module
). -
To skip normalization entirely, use
skip
as the normalization tactic. -
The
linear_combination
tactic creates a linear combination by adding the provided (in)equalities together from left to right, so iftac
is not invariant under commutation of additive expressions, then the order of the input hypotheses can matter.
The variant linear_combination (exp := n) e
will take the goal to the n
th power before
subtracting the combination e
. In other words, if the goal is t1 = t2
,
linear_combination (exp := n) e
will change the goal to (t1 - t2)^n = 0
before proceeding as
above. This variant is implemented only for linear combinations of equalities (i.e., not for
inequalities).
linear_combination'
linear_combination'
attempts to simplify the target by creating a linear combination
of a list of equalities and subtracting it from the target.
The tactic will create a linear
combination by adding the equalities together from left to right, so the order
of the input hypotheses does matter. If the norm
field of the
tactic is set to skip
, then the tactic will simply set the user up to
prove their target using the linear combination instead of normalizing the subtraction.
Note: There is also a similar tactic linear_combination
(no prime); this version is
provided for backward compatibility. Compared to this tactic, linear_combination
:
-
drops the
←
syntax for reversing an equation, instead offering this operation using the-
syntax -
does not support multiplication of two hypotheses (
h1 * h2
), division by a hypothesis (3 / h
), or inversion of a hypothesis (h⁻¹
) -
produces noisy output when the user adds or subtracts a constant to a hypothesis (
h + 3
)
Note: The left and right sides of all the equalities should have the same
type, and the coefficients should also have this type. There must be
instances of Mul
and AddGroup
for this type.
-
The input
e
inlinear_combination' e
is a linear combination of proofs of equalities, given as a sum/difference of coefficients multiplied by expressions. The coefficients may be arbitrary expressions. The expressions can be arbitrary proof terms proving equalities. Most commonly they are hypothesis namesh1, h2, ...
. -
linear_combination' (norm := tac) e
runs the "normalization tactic"tac
on the subgoal(s) after constructing the linear combination. -
linear_combination' (exp := n) e
will take the goal to then
th power before subtracting the combinatione
. In other words, if the goal ist1 = t2
,linear_combination' (exp := n) e
will change the goal to(t1 - t2)^n = 0
before proceeding as above. This feature is not supported forlinear_combination2
. -
linear_combination2 e
is the same aslinear_combination' e
but it produces two subgoals instead of one: rather than proving that(a - b) - (a' - b') = 0
wherea' = b'
is the linear combination frome
anda = b
is the goal, it instead attempts to provea = a'
andb = b'
. Because it does not use subtraction, this form is applicable also to semirings.-
Note that a goal which is provable by
linear_combination' e
may not be provable bylinear_combination2 e
; in general you may need to add a coefficient toe
to make both sides match, as inlinear_combination2 e + c
. -
You can also reverse equalities using
← h
, so for example ifh₁ : a = b
then2 * (← h)
is a proof of2 * b = 2 * a
.
-
Example Usage:
example (x y : ℤ) (h1 : x*y + 2*x = 1) (h2 : x = y) : x*y = -2*y + 1 := by
  linear_combination' 1*h1 - 2*h2
example (x y : ℤ) (h1 : x*y + 2*x = 1) (h2 : x = y) : x*y = -2*y + 1 := by
  linear_combination' h1 - 2*h2
example (x y : ℤ) (h1 : x*y + 2*x = 1) (h2 : x = y) : x*y = -2*y + 1 := by
  linear_combination' (norm := ring_nf) -2*h2
  /- Goal: x * y + x * 2 - 1 = 0 -/
example (x y z : ℝ) (ha : x + 2*y - z = 4) (hb : 2*x + y + z = -2) (hc : x + 2*y + z = 2) :
    -3*x - 3*y - 4*z = 2 := by
  linear_combination' ha - hb - 2*hc
example (x y : ℚ) (h1 : x + y = 3) (h2 : 3*x = 7) : x*x*y + y*x*y + 6*x = 3*x*y + 14 := by
  linear_combination' x*y*h1 + 2*h2
example (x y : ℤ) (h1 : x = -3) (h2 : y = 10) : 2*x = -6 := by
  linear_combination' (norm := skip) 2*h1
  simp
axiom qc : ℚ
axiom hqc : qc = 2*qc
example (a b : ℚ) (h : ∀ p q : ℚ, p = q) : 3*a + qc = 3*b + 2*qc := by
  linear_combination' 3 * h a b + hqc
linear_combination2
linear_combination'
attempts to simplify the target by creating a linear combination
of a list of equalities and subtracting it from the target.
The tactic will create a linear
combination by adding the equalities together from left to right, so the order
of the input hypotheses does matter. If the norm
field of the
tactic is set to skip
, then the tactic will simply set the user up to
prove their target using the linear combination instead of normalizing the subtraction.
Note: There is also a similar tactic linear_combination (no prime); this version is
provided for backward compatibility. Compared to this tactic, linear_combination:
- drops the ← syntax for reversing an equation, instead offering this operation using the - syntax
- does not support multiplication of two hypotheses (h1 * h2), division by a hypothesis (3 / h), or inversion of a hypothesis (h⁻¹)
- produces noisy output when the user adds or subtracts a constant to a hypothesis (h + 3)
Note: The left and right sides of all the equalities should have the same
type, and the coefficients should also have this type. There must be
instances of Mul
and AddGroup
for this type.
- The input e in linear_combination' e is a linear combination of proofs of equalities, given as a sum/difference of coefficients multiplied by expressions. The coefficients may be arbitrary expressions. The expressions can be arbitrary proof terms proving equalities. Most commonly they are hypothesis names h1, h2, ....
- linear_combination' (norm := tac) e runs the "normalization tactic" tac on the subgoal(s) after constructing the linear combination.
- linear_combination' (exp := n) e will take the goal to the nth power before subtracting the combination e. In other words, if the goal is t1 = t2, linear_combination' (exp := n) e will change the goal to (t1 - t2)^n = 0 before proceeding as above. This feature is not supported for linear_combination2.
- linear_combination2 e is the same as linear_combination' e but it produces two subgoals instead of one: rather than proving that (a - b) - (a' - b') = 0 where a' = b' is the linear combination from e and a = b is the goal, it instead attempts to prove a = a' and b = b'. Because it does not use subtraction, this form is applicable also to semirings.
  - Note that a goal which is provable by linear_combination' e may not be provable by linear_combination2 e; in general you may need to add a coefficient to e to make both sides match, as in linear_combination2 e + c.
  - You can also reverse equalities using ← h, so for example if h₁ : a = b then 2 * (← h) is a proof of 2 * b = 2 * a.
Example Usage:
example (x y : ℤ) (h1 : x*y + 2*x = 1) (h2 : x = y) : x*y = -2*y + 1 := by
  linear_combination' 1*h1 - 2*h2
example (x y : ℤ) (h1 : x*y + 2*x = 1) (h2 : x = y) : x*y = -2*y + 1 := by
  linear_combination' h1 - 2*h2
example (x y : ℤ) (h1 : x*y + 2*x = 1) (h2 : x = y) : x*y = -2*y + 1 := by
  linear_combination' (norm := ring_nf) -2*h2
  /- Goal: x * y + x * 2 - 1 = 0 -/
example (x y z : ℝ) (ha : x + 2*y - z = 4) (hb : 2*x + y + z = -2) (hc : x + 2*y + z = 2) :
    -3*x - 3*y - 4*z = 2 := by
  linear_combination' ha - hb - 2*hc
example (x y : ℚ) (h1 : x + y = 3) (h2 : 3*x = 7) : x*x*y + y*x*y + 6*x = 3*x*y + 14 := by
  linear_combination' x*y*h1 + 2*h2
example (x y : ℤ) (h1 : x = -3) (h2 : y = 10) : 2*x = -6 := by
  linear_combination' (norm := skip) 2*h1
  simp
axiom qc : ℚ
axiom hqc : qc = 2*qc
example (a b : ℚ) (h : ∀ p q : ℚ, p = q) : 3*a + qc = 3*b + 2*qc := by
  linear_combination' 3 * h a b + hqc
map_tacs
Assuming there are n
goals, map_tacs [t1; t2; ...; tn]
applies each ti
to the respective
goal and leaves the resulting subgoals.
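For illustration, a minimal sketch (not from the official docstring; assumes the usual Mathlib/Batteries imports), following the semicolon-separated syntax described above:
example : 1 = 1 ∧ 2 = 2 := by
  constructor
  -- two goals: `1 = 1` and `2 = 2`; each tactic is applied to the corresponding goal
  map_tacs [rfl; rfl]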
match
match_scalars
Given a goal which is an equality in a type M
(with M
an AddCommMonoid
), parse the LHS and
RHS of the goal as linear combinations of M
-atoms over some semiring R
, and reduce the goal to
the respective equalities of the R
-coefficients of each atom.
For example, this produces the goal ⊢ a * 1 + b * 1 = (b + a) * 1
:
example [AddCommMonoid M] [Semiring R] [Module R M] (a b : R) (x : M) : a • x + b • x = (b + a) • x := by match_scalars
This produces the two goals ⊢ a * (a * 1) + b * (b * 1) = 1
(from the x
atom) and
⊢ a * -(b * 1) + b * (a * 1) = 0
(from the y
atom):
example [AddCommGroup M] [Ring R] [Module R M] (a b : R) (x y : M) : a • (a • x - b • y) + (b • a • y + b • b • x) = x := by match_scalars
This produces the goal ⊢ -2 * (a * 1) = a * (-2 * 1)
:
example [AddCommGroup M] [Ring R] [Module R M] (a : R) (x : M) : -(2:R) • a • x = a • (-2:ℤ) • x := by match_scalars
The scalar type for the goals produced by the match_scalars
tactic is the largest scalar type
encountered; for example, if ℕ
, ℚ
and a characteristic-zero field K
all occur as scalars, then
the goals produced are equalities in K
. A variant of push_cast
is used internally in
match_scalars
to interpret scalars from the other types in this largest type.
If the set of scalar types encountered is not totally ordered (in the sense that for all rings R
,
S
encountered, it holds that either Algebra R S
or Algebra S R
), then the match_scalars
tactic fails.
match_target
measurability
The tactic measurability
solves goals of the form Measurable f
, AEMeasurable f
,
StronglyMeasurable f
, AEStronglyMeasurable f μ
, or MeasurableSet s
by applying lemmas tagged
with the measurability
user attribute.
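For illustration, an assumed minimal example (not from the docstring; assumes a full Mathlib import). The union of two measurable sets should be closed by lemmas carrying the measurability attribute:
example {α : Type*} [MeasurableSpace α] {s t : Set α}
    (hs : MeasurableSet s) (ht : MeasurableSet t) : MeasurableSet (s ∪ t) := by
  measurability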
measurability!
measurability!?
measurability?
The tactic measurability?
solves goals of the form Measurable f
, AEMeasurable f
,
StronglyMeasurable f
, AEStronglyMeasurable f μ
, or MeasurableSet s
by applying lemmas tagged
with the measurability
user attribute, and suggests a faster proof script that can be substituted
for the tactic call in case of success.
mem_tac
mem_tac_aux
mfld_set_tac
A very basic tactic to show that sets showing up in manifolds coincide or are included in one another.
mod_cases
- The tactic mod_cases h : e % 3 will perform a case disjunction on e. If e : ℤ, then it will yield subgoals containing the assumptions h : e ≡ 0 [ZMOD 3], h : e ≡ 1 [ZMOD 3], h : e ≡ 2 [ZMOD 3] respectively. If e : ℕ instead, then it works similarly, except with [MOD 3] instead of [ZMOD 3].
- In general, mod_cases h : e % n works when n is a positive numeral and e is an expression of type ℕ or ℤ.
- If h is omitted as in mod_cases e % n, it will be default-named H.
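A small illustrative sketch (not from the docstring; assumes Mathlib). Here h : n ≡ 0 [MOD 2] unfolds definitionally to n % 2 = 0 % 2, so it can be used directly:
example (n : ℕ) : n % 2 = 0 ∨ n % 2 = 1 := by
  mod_cases h : n % 2
  · exact Or.inl h  -- case `h : n ≡ 0 [MOD 2]`
  · exact Or.inr h  -- case `h : n ≡ 1 [MOD 2]`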
module
Given a goal which is an equality in a type M
(with M
an AddCommMonoid
), parse the LHS and
RHS of the goal as linear combinations of M
-atoms over some commutative semiring R
, and prove
the goal by checking that the LHS- and RHS-coefficients of each atom are the same up to
ring-normalization in R
.
(If the proofs of coefficient-wise equality will require more reasoning than just
ring-normalization, use the tactic match_scalars
instead, and then prove coefficient-wise equality
by hand.)
Example uses of the module
tactic:
example [AddCommMonoid M] [CommSemiring R] [Module R M] (a b : R) (x : M) :
a • x + b • x = (b + a) • x := M:Type u_1R:Type u_2inst✝²:AddCommMonoid Minst✝¹:CommSemiring Rinst✝:Module R Ma:Rb:Rx:M⊢ a • x + b • x = (b + a) • x
All goals completed! 🐙
example [AddCommMonoid M] [Field K] [CharZero K] [Module K M] (x : M) :
(2:K)⁻¹ • x + (3:K)⁻¹ • x + (6:K)⁻¹ • x = x := M:Type u_1K:Type u_2inst✝³:AddCommMonoid Minst✝²:Field Kinst✝¹:CharZero Kinst✝:Module K Mx:M⊢ 2⁻¹ • x + 3⁻¹ • x + 6⁻¹ • x = x
All goals completed! 🐙
example [AddCommGroup M] [CommRing R] [Module R M] (a : R) (v w : M) :
(1 + a ^ 2) • (v + w) - a • (a • v - w) = v + (1 + a + a ^ 2) • w := M:Type u_1R:Type u_2inst✝²:AddCommGroup Minst✝¹:CommRing Rinst✝:Module R Ma:Rv:Mw:M⊢ (1 + a ^ 2) • (v + w) - a • (a • v - w) = v + (1 + a + a ^ 2) • w
All goals completed! 🐙
example [AddCommGroup M] [CommRing R] [Module R M] (a b μ ν : R) (x y : M) :
(μ - ν) • a • x = (a • μ • x + b • ν • y) - ν • (a • x + b • y) := M:Type u_1R:Type u_2inst✝²:AddCommGroup Minst✝¹:CommRing Rinst✝:Module R Ma:Rb:Rμ:Rν:Rx:My:M⊢ (μ - ν) • a • x = a • μ • x + b • ν • y - ν • (a • x + b • y)
All goals completed! 🐙
monicity
monicity
tries to solve a goal of the form Monic f
.
It converts the goal into a goal of the form natDegree f ≤ n
and one of the form f.coeff n = 1
and calls compute_degree
on those two goals.
The variant monicity!
starts like monicity
, but calls compute_degree!
on the two side-goals.
monicity!
monicity
tries to solve a goal of the form Monic f
.
It converts the goal into a goal of the form natDegree f ≤ n
and one of the form f.coeff n = 1
and calls compute_degree
on those two goals.
The variant monicity!
starts like monicity
, but calls compute_degree!
on the two side-goals.
mono
mono
applies monotonicity rules and local hypotheses repetitively. For example,
example (x y z k : ℤ)
(h : 3 ≤ (4 : ℤ))
(h' : z ≤ y) :
(k + 3 + x) - y ≤ (k + 4 + x) - z := x:ℤy:ℤz:ℤk:ℤh:3 ≤ 4h':z ≤ y⊢ k + 3 + x - y ≤ k + 4 + x - z
All goals completed! 🐙
monoidal
Use the coherence theorem for monoidal categories to solve equations in a monoidal category, where the two sides only differ by replacing strings of monoidal structural morphisms (that is, associators, unitors, and identities) with different strings of structural morphisms with the same source and target.
That is, monoidal
can handle goals of the form
a ≫ f ≫ b ≫ g ≫ c = a' ≫ f ≫ b' ≫ g ≫ c'
where a = a'
, b = b'
, and c = c'
can be proved using monoidal_coherence
.
monoidal_coherence
Coherence tactic for monoidal categories.
Use pure_coherence
instead, which is a frontend to this one.
monoidal_coherence
Close the goal of the form η = θ
, where η
and θ
are 2-isomorphisms made up only of
associators, unitors, and identities.
example {C : Type} [Category C] [MonoidalCategory C] : (λ_ (𝟙_ C)).hom = (ρ_ (𝟙_ C)).hom := by monoidal_coherence
move_add
The tactic move_add
rearranges summands of expressions.
Calling move_add [a, ← b, ...]
matches a, b,...
with summands in the main goal.
It then moves a
to the far right and b
to the far left of each addition in which they appear.
The side to which the summands are moved is determined by the presence or absence of the arrow ←
.
The inputs a, b,...
can be any terms, also with underscores.
The tactic uses the first "new" summand that unifies with each one of the given inputs.
There is a multiplicative variant, called move_mul
.
There is also a general tactic for a "binary associative commutative operation": move_oper
.
In this case the syntax requires providing first a term whose head symbol is the operation.
E.g. move_oper HAdd.hAdd [...]
is the same as move_add
, while move_oper Max.max [...]
rearranges max
s.
move_mul
The tactic move_add
rearranges summands of expressions.
Calling move_add [a, ← b, ...]
matches a, b,...
with summands in the main goal.
It then moves a
to the far right and b
to the far left of each addition in which they appear.
The side to which the summands are moved is determined by the presence or absence of the arrow ←
.
The inputs a, b,...
can be any terms, also with underscores.
The tactic uses the first "new" summand that unifies with each one of the given inputs.
There is a multiplicative variant, called move_mul
.
There is also a general tactic for a "binary associative commutative operation": move_oper
.
In this case the syntax requires providing first a term whose head symbol is the operation.
E.g. move_oper HAdd.hAdd [...]
is the same as move_add
, while move_oper Max.max [...]
rearranges max
s.
move_oper
The tactic move_add
rearranges summands of expressions.
Calling move_add [a, ← b, ...]
matches a, b,...
with summands in the main goal.
It then moves a
to the far right and b
to the far left of each addition in which they appear.
The side to which the summands are moved is determined by the presence or absence of the arrow ←
.
The inputs a, b,...
can be any terms, also with underscores.
The tactic uses the first "new" summand that unifies with each one of the given inputs.
There is a multiplicative variant, called move_mul
.
There is also a general tactic for a "binary associative commutative operation": move_oper
.
In this case the syntax requires providing first a term whose head symbol is the operation.
E.g. move_oper HAdd.hAdd [...]
is the same as move_add
, while move_oper Max.max [...]
rearranges max
s.
native_decide
native_decide
is a synonym for decide +native
.
It will attempt to prove a goal of type p
by synthesizing an instance
of Decidable p
and then evaluating it to isTrue ..
. Unlike decide
, this
uses #eval
to evaluate the decidability instance.
This should be used with care because it adds the entire lean compiler to the trusted
part, and the axiom Lean.ofReduceBool
will show up in #print axioms
for theorems using
this method or anything that transitively depends on them. Nevertheless, because it is
compiled, this can be significantly more efficient than using decide
, and for very
large computations this is one way to run external programs and trust the result.
example : (List.range 1000).length = 1000 := ⊢ (List.range 1000).length = 1000 All goals completed! 🐙
next
next => tac
focuses on the next goal and solves it using tac
, or else fails.
next x₁ ... xₙ => tac
additionally renames the n
most recent hypotheses with
inaccessible names to the given names.
nlinarith
An extension of linarith
with some preprocessing to allow it to solve some nonlinear arithmetic
problems. (Based on Coq's nra
tactic.) See linarith
for the available syntax of options,
which are inherited by nlinarith
; that is, nlinarith!
and nlinarith only [h1, h2]
all work as
in linarith
. The preprocessing is as follows:
- For every subterm a ^ 2 or a * a in a hypothesis or the goal, the assumption 0 ≤ a ^ 2 or 0 ≤ a * a is added to the context.
- For every pair of hypotheses a1 R1 b1, a2 R2 b2 in the context, with R1, R2 ∈ {<, ≤, =}, the assumption 0 R' (b1 - a1) * (b2 - a2) is added to the context (non-recursively), where R' ∈ {<, ≤, =} is the appropriate comparison derived from R1, R2.
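For illustration, an assumed example (in the spirit of the Mathlib test suite): the hint terms supply nonnegativity facts that the preprocessing then combines with products of the hypotheses.
example (x y : ℚ) (h : x + y ≥ 2) : x ^ 2 + y ^ 2 ≥ 2 := by
  nlinarith [sq_nonneg (x - y), sq_nonneg (x + y)]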
nlinarith!
An extension of linarith
with some preprocessing to allow it to solve some nonlinear arithmetic
problems. (Based on Coq's nra
tactic.) See linarith
for the available syntax of options,
which are inherited by nlinarith
; that is, nlinarith!
and nlinarith only [h1, h2]
all work as
in linarith
. The preprocessing is as follows:
- For every subterm a ^ 2 or a * a in a hypothesis or the goal, the assumption 0 ≤ a ^ 2 or 0 ≤ a * a is added to the context.
- For every pair of hypotheses a1 R1 b1, a2 R2 b2 in the context, with R1, R2 ∈ {<, ≤, =}, the assumption 0 R' (b1 - a1) * (b2 - a2) is added to the context (non-recursively), where R' ∈ {<, ≤, =} is the appropriate comparison derived from R1, R2.
nofun
The tactic nofun
is shorthand for exact nofun
: it introduces the assumptions, then performs an
empty pattern match, closing the goal if the introduced pattern is impossible.
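For example (an assumed minimal illustration), the introduced hypothesis below has type False, which admits an empty pattern match:
example : False → 1 = 2 := by nofun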
noncomm_ring
A tactic for simplifying identities in not-necessarily-commutative rings.
An example:
example {R : Type*} [Ring R] (a b c : R) : a * (b + c + c - b) = 2 * a * c := R:Type u_1inst✝:Ring Ra:Rb:Rc:R⊢ a * (b + c + c - b) = 2 * a * c
All goals completed! 🐙
You can use noncomm_ring [h]
to also simplify using h
.
nontriviality
Attempts to generate a Nontrivial α
hypothesis.
The tactic first checks to see that there is not already a Nontrivial α
instance
before trying to synthesize one using other techniques.
If the goal is an (in)equality, the type α
is inferred from the goal.
Otherwise, the type needs to be specified in the tactic invocation, as nontriviality α
.
The nontriviality
tactic will first look for strict inequalities amongst the hypotheses,
and use these to derive the Nontrivial
instance directly.
Otherwise, it will perform a case split on Subsingleton α ∨ Nontrivial α
, and attempt to discharge
the Subsingleton
goal using simp [h₁, h₂, ..., hₙ, nontriviality]
, where [h₁, h₂, ..., hₙ]
is
a list of additional simp
lemmas that can be passed to nontriviality
using the syntax
nontriviality α using h₁, h₂, ..., hₙ
.
example {R : Type} [OrderedRing R] {a : R} (h : 0 < a) : 0 < a := R:Typeinst✝:OrderedRing Ra:Rh:0 < a⊢ 0 < a
R:Typeinst✝¹:OrderedRing Ra:Rh:0 < ainst✝:Nontrivial R⊢ 0 < a -- There is now a `Nontrivial R` hypothesis available.
All goals completed! 🐙
example {R : Type} [CommRing R] {r s : R} : r * s = s * r := R:Typeinst✝:CommRing Rr:Rs:R⊢ r * s = s * r
R:Typeinst✝:CommRing Rr:Rs:Ra✝:Nontrivial R⊢ r * s = s * r -- There is now a `Nontrivial R` hypothesis available.
All goals completed! 🐙
example {R : Type} [OrderedRing R] {a : R} (h : 0 < a) : (2 : ℕ) ∣ 4 := by
  nontriviality R -- there is now a `Nontrivial R` hypothesis available.
  decide

def myeq {α : Type} (a b : α) : Prop := a = b

example {α : Type} (a b : α) (h : a = b) : myeq a b := by
  success_if_fail nontriviality α -- Fails
  nontriviality α using myeq -- There is now a `Nontrivial α` hypothesis available
  assumption
norm_cast
The norm_cast
family of tactics is used to normalize certain coercions (casts) in expressions.
-
norm_cast
normalizes casts in the target. -
norm_cast at h
normalizes casts in hypothesish
.
The tactic is basically a version of simp
with a specific set of lemmas to move casts
upwards in the expression.
Therefore even in situations where non-terminal simp
calls are discouraged (because of fragility),
norm_cast
is considered to be safe.
It also has special handling of numerals.
For instance, given an assumption
a b : ℤ
h : ↑a + ↑b < (10 : ℚ)
writing norm_cast at h
will turn h
into
h : a + b < 10
There are also variants of basic tactics that use norm_cast
to normalize expressions during
their operation, to make them more flexible about the expressions they accept
(we say that it is a tactic modulo the effects of norm_cast
):
- exact_mod_cast for exact and apply_mod_cast for apply. Writing exact_mod_cast h and apply_mod_cast h will normalize casts in the goal and h before using exact h or apply h.
- rw_mod_cast for rw. It applies norm_cast between rewrites.
- assumption_mod_cast for assumption. This is effectively norm_cast at *; assumption, but more efficient. It normalizes casts in the goal and, for every hypothesis h in the context, it will try to normalize casts in h and use exact h.
See also push_cast
, which moves casts inwards rather than lifting them outwards.
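For illustration of the *_mod_cast variants, an assumed minimal example: the goal lives in ℚ while the hypothesis lives in ℤ, and exact_mod_cast bridges the casts.
example (a b : ℤ) (h : a + b < 10) : (a : ℚ) + b < 10 := by
  exact_mod_cast h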
norm_num
Normalize numerical expressions. Supports the operations +
-
*
/
⁻¹
^
and %
over numerical types such as ℕ
, ℤ
, ℚ
, ℝ
, ℂ
and some general algebraic types,
and can prove goals of the form A = B
, A ≠ B
, A < B
and A ≤ B
, where A
and B
are
numerical expressions. It also has a relatively simple primality prover.
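For illustration, two assumed examples (the primality one relies on the norm_num extension shipped with Mathlib):
example : (3 : ℚ) ^ 3 - 2 ≤ 25 := by norm_num
example : Nat.Prime 101 := by norm_num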
nth_rewrite
nth_rewrite
is a variant of rewrite
that only changes the n₁, ..., nₖ
ᵗʰ occurrence of
the expression to be rewritten. nth_rewrite n₁ ... nₖ [eq₁, eq₂,..., eqₘ]
will rewrite the
n₁, ..., nₖ
ᵗʰ occurrence of each of the m
equalities eqᵢ
in that order. Occurrences are
counted beginning with 1
in order of precedence.
For example,
example (h : a = 1) : a + a + a + a + a = 5 := by nth_rewrite 2 3 [h] /- a: ℕ h: a = 1 ⊢ a + 1 + 1 + a + a = 5 -/
Notice that the second occurrence of a
from the left has been rewritten by nth_rewrite
.
To understand the importance of order of precedence, consider the example below
example (a b c : Nat) : (a + b) + c = (b + a) + c := by nth_rewrite 2 [Nat.add_comm] -- ⊢ (b + a) + c = (b + a) + c
Here, although the occurrence parameter is 2
, (a + b)
is rewritten to (b + a)
. This happens
because in order of precedence, the first occurrence of _ + _
is the one that adds a + b
to c
.
The occurrence in a + b
counts as the second occurrence.
If a term t
is introduced by rewriting with eqᵢ
, then this instance of t
will be counted as an
occurrence of t
for all subsequent rewrites of t
with eqⱼ
for j > i
. This behaviour is
illustrated by the example below
example (h : a = a + b) : a + a + a + a + a = 0 := by nth_rewrite 3 [h, h] /- a b: ℕ h: a = a + b ⊢ a + a + (a + b + b) + a + a = 0 -/
Here, the first nth_rewrite
with h
introduces an additional occurrence of a
in the goal.
That is, the goal state after the first rewrite looks like below
/- a b: ℕ h: a = a + b ⊢ a + a + (a + b) + a + a = 0 -/
This new instance of a
also turns out to be the third occurrence of a
. Therefore,
the next nth_rewrite
with h
rewrites this a
.
nth_rw
nth_rw
is a variant of rw
that only changes the n₁, ..., nₖ
ᵗʰ occurrence of the expression
to be rewritten. Like rw
, and unlike nth_rewrite
, it will try to close the goal by trying rfl
afterwards. nth_rw n₁ ... nₖ [eq₁, eq₂,..., eqₘ]
will rewrite the n₁, ..., nₖ
ᵗʰ occurrence of
each of the m
equalities eqᵢ
in that order. Occurrences are counted beginning with 1
in
order of precedence. For example,
example (h : a = 1) : a + a + a + a + a = 5 := by nth_rw 2 3 [h] /- a: ℕ h: a = 1 ⊢ a + 1 + 1 + a + a = 5 -/
Notice that the second occurrence of a
from the left has been rewritten by nth_rewrite
.
To understand the importance of order of precedence, consider the example below
example (a b c : Nat) : (a + b) + c = (b + a) + c := by nth_rewrite 2 [Nat.add_comm] -- ⊢ (b + a) + c = (b + a) + c
Here, although the occurrence parameter is 2
, (a + b)
is rewritten to (b + a)
. This happens
because in order of precedence, the first occurrence of _ + _
is the one that adds a + b
to c
.
The occurrence in a + b
counts as the second occurrence.
If a term t
is introduced by rewriting with eqᵢ
, then this instance of t
will be counted as an
occurrence of t
for all subsequent rewrites of t
with eqⱼ
for j > i
. This behaviour is
illustrated by the example below
example (h : a = a + b) : a + a + a + a + a = 0 := by nth_rw 3 [h, h] /- a b: ℕ h: a = a + b ⊢ a + a + (a + b + b) + a + a = 0 -/
Here, the first nth_rw
with h
introduces an additional occurrence of a
in the goal. That is,
the goal state after the first rewrite looks like below
/- a b: ℕ h: a = a + b ⊢ a + a + (a + b) + a + a = 0 -/
This new instance of a
also turns out to be the third occurrence of a
. Therefore,
the next nth_rw
with h
rewrites this a
.
Further, nth_rw
will close the remaining goal with rfl
if possible.
observe
observe hp : p
asserts the proposition p
, and tries to prove it using exact?
.
If no proof is found, the tactic fails.
In other words, this tactic is equivalent to have hp : p := by exact?
.
If hp
is omitted, then the placeholder this
is used.
The variant observe? hp : p
will emit a trace message of the form have hp : p := proof_term
.
This may be particularly useful to speed up proofs.
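A small illustrative sketch (assuming Mathlib, so that exact? can find the relevant lemma):
example (n : ℕ) : n < n + 1 := by
  observe h : n < n + 1
  exact h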
observe?
observe hp : p
asserts the proposition p
, and tries to prove it using exact?
.
If no proof is found, the tactic fails.
In other words, this tactic is equivalent to have hp : p := by exact?
.
If hp
is omitted, then the placeholder this
is used.
The variant observe? hp : p
will emit a trace message of the form have hp : p := proof_term
.
This may be particularly useful to speed up proofs.
observe?
observe hp : p
asserts the proposition p
, and tries to prove it using exact?
.
If no proof is found, the tactic fails.
In other words, this tactic is equivalent to have hp : p := by exact?
.
If hp
is omitted, then the placeholder this
is used.
The variant observe? hp : p
will emit a trace message of the form have hp : p := proof_term
.
This may be particularly useful to speed up proofs.
obtain
The obtain
tactic is a combination of have
and rcases
. See rcases
for
a description of supported patterns.
obtain ⟨patt⟩ : type := proof
is equivalent to
have h : type := proof rcases h with ⟨patt⟩
If ⟨patt⟩
is omitted, rcases
will try to infer the pattern.
If type
is omitted, := proof
is required.
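For illustration, an assumed minimal example destructuring an existential:
example (h : ∃ n : ℕ, 0 < n) : ∃ m : ℕ, 0 ≤ m := by
  obtain ⟨n, hn⟩ := h
  exact ⟨n, Nat.zero_le n⟩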
omega
The omega
tactic, for resolving integer and natural linear arithmetic problems.
It is not yet a full decision procedure (no "dark" or "grey" shadows), but should be effective on many problems.
We handle hypotheses of the form x = y
, x < y
, x ≤ y
, and k ∣ x
for x y
in Nat
or Int
(and k
a literal), along with negations of these statements.
We decompose the sides of the inequalities as linear combinations of atoms.
If we encounter x / k
or x % k
for literal integers k
we introduce new auxiliary variables
and the relevant inequalities.
On the first pass, we do not perform case splits on natural subtraction.
If omega
fails, we recursively perform a case split on
a natural subtraction appearing in a hypothesis, and try again.
The options
omega +splitDisjunctions +splitNatSub +splitNatAbs +splitMinMax
can be used to:
- splitDisjunctions: split any disjunctions found in the context, if the problem is not otherwise solvable.
- splitNatSub: for each appearance of ((a - b : Nat) : Int), split on a ≤ b if necessary.
- splitNatAbs: for each appearance of Int.natAbs a, split on 0 ≤ a if necessary.
- splitMinMax: for each occurrence of min a b, split on min a b = a ∨ min a b = b.
Currently, all of these are on by default.
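For illustration, an assumed minimal example mixing an inequality with natural subtraction:
example (x y : ℕ) (h : x < y) : x + 1 ≤ y ∧ 1 ≤ y - x := by omega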
on_goal
on_goal n => tacSeq
creates a block scope for the n
-th goal and tries the sequence
of tactics tacSeq
on it.
on_goal -n => tacSeq
does the same, but the n
-th goal is chosen by counting from the
bottom.
The goal is not required to be solved and any resulting subgoals are inserted back into the list of goals, replacing the chosen goal.
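For illustration, an assumed minimal example working on the second goal first:
example : 1 = 1 ∧ 2 = 2 := by
  constructor
  on_goal 2 => rfl
  rfl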
peel
Peels matching quantifiers off of a given term and the goal and introduces the relevant variables.
- peel e peels all quantifiers (at reducible transparency), using this for the name of the peeled hypothesis.
- peel e with h is peel e but names the peeled hypothesis h. If h is _ then uses this for the name of the peeled hypothesis.
- peel n e peels n quantifiers (at default transparency).
- peel n e with x y z ... h peels n quantifiers, names the peeled hypothesis h, and uses x, y, z, and so on to name the introduced variables; these names may be _. If h is _ then uses this for the name of the peeled hypothesis. The length of the list of variables does not need to equal n.
- peel e with x₁ ... xₙ h is peel n e with x₁ ... xₙ h.
There are also variants that apply to an iff in the goal:
- peel n peels n quantifiers in an iff.
- peel with x₁ ... xₙ peels n quantifiers in an iff and names them.
Given p q : ℕ → Prop
, h : ∀ x, p x
, and a goal ⊢ : ∀ x, q x
, the tactic peel h with x h'
will introduce x : ℕ
, h' : p x
into the context and the new goal will be ⊢ q x
. This works
with ∃
, as well as ∀ᶠ
and ∃ᶠ
, and it can even be applied to a sequence of quantifiers. Note
that this is a logically weaker setup, so using this tactic is not always feasible.
For a more complex example, given a hypothesis and a goal:
h : ∀ ε > (0 : ℝ), ∃ N : ℕ, ∀ n ≥ N, 1 / (n + 1 : ℝ) < ε
⊢ ∀ ε > (0 : ℝ), ∃ N : ℕ, ∀ n ≥ N, 1 / (n + 1 : ℝ) ≤ ε
(which differ only in <
/≤
), applying peel h with ε hε N n hn h_peel
will yield a tactic state:
h : ∀ ε > (0 : ℝ), ∃ N : ℕ, ∀ n ≥ N, 1 / (n + 1 : ℝ) < ε
ε : ℝ
hε : 0 < ε
N n : ℕ
hn : N ≤ n
h_peel : 1 / (n + 1 : ℝ) < ε
⊢ 1 / (n + 1 : ℝ) ≤ ε
and the goal can be closed with exact h_peel.le
.
Note that in this example, h
and the goal are logically equivalent statements, but peel
cannot be immediately applied to show that the goal implies h
.
In addition, peel
supports goals of the form (∀ x, p x) ↔ ∀ x, q x
, or likewise for any
other quantifier. In this case, there is no hypothesis or term to supply, but otherwise the syntax
is the same. So for such goals, the syntax is peel 1
or peel with x
, and after which the
resulting goal is p x ↔ q x
. The congr!
tactic can also be applied to goals of this form using
congr! 1 with x
. While congr!
applies congruence lemmas in general, peel
can be relied upon
to only apply to outermost quantifiers.
Finally, the user may supply a term e
via ... using e
in order to close the goal
immediately. In particular, peel h using e
is equivalent to peel h; exact e
. The using
syntax
may be paired with any of the other features of peel
.
This tactic works by repeatedly applying lemmas such as forall_imp
, Exists.imp
,
Filter.Eventually.mp
, Filter.Frequently.mp
, and Filter.Eventually.of_forall
.
pgame_wf_tac
Discharges proof obligations of the form ⊢ Subsequent ..
arising in termination proofs
of definitions using well-founded recursion on PGame
.
pi_lower_bound
Create a proof of a < π
for a fixed rational number a
, given a witness, which is a
sequence of rational numbers √2 < r 1 < r 2 < ... < r n < 2
satisfying the property that
√(2 + r i) ≤ r(i+1)
, where r 0 = 0
and √(2 - r n) ≥ a/2^(n+1)
.
pi_upper_bound
Create a proof of π < a
for a fixed rational number a
, given a witness, which is a
sequence of rational numbers √2 < r 1 < r 2 < ... < r n < 2
satisfying the property that
√(2 + r i) ≥ r(i+1)
, where r 0 = 0
and √(2 - r n) ≤ (a - 1/4^n) / 2^(n+1)
.
pick_goal
pick_goal n
will move the n
-th goal to the front.
pick_goal -n
will move the n
-th goal (counting from the bottom) to the front.
See also Tactic.rotate_goals
, which moves goals from the front to the back and vice-versa.
plausible
plausible
considers a proof goal and tries to generate examples
that would contradict the statement.
Let's consider the following proof goal.
xs : List Nat, h : ∃ (x : Nat) (H : x ∈ xs), x < 3 ⊢ ∀ (y : Nat), y ∈ xs → y < 5
The local constants will be reverted and an instance will be found for
Testable (∀ (xs : List Nat), (∃ x ∈ xs, x < 3) → (∀ y ∈ xs, y < 5))
.
The Testable
instance is supported by an instance of Sampleable (List Nat)
,
Decidable (x < 3)
and Decidable (y < 5)
.
Examples will be created in ascending order of size (more or less)
The first counter-examples found will be printed and will result in an error:
===================
Found problems!
xs := [1, 28]
x := 1
y := 28
-------------------
If plausible
successfully tests 100 examples, it acts like
admit. If it gives up or finds a counter-example, it reports an error.
For more information on writing your own Sampleable
and Testable
instances, see Testing.Plausible.Testable
.
Optional arguments given with plausible (config : { ... })
- numInst (default 100): number of examples to test properties with
- maxSize (default 100): final size argument
Options:
- set_option trace.plausible.decoration true: print the proposition with quantifier annotations
- set_option trace.plausible.discarded true: print the examples discarded because they do not satisfy assumptions
- set_option trace.plausible.shrink.steps true: trace the shrinking of counter-example
- set_option trace.plausible.shrink.candidates true: print the lists of candidates considered when shrinking each variable
- set_option trace.plausible.instance true: print the instances of testable being used to test the proposition
- set_option trace.plausible.success true: print the tested samples that satisfy a property
polyrith
Attempts to prove polynomial equality goals through polynomial arithmetic
on the hypotheses (and additional proof terms if the user specifies them).
It proves the goal by generating an appropriate call to the tactic
linear_combination
. If this call succeeds, the call to linear_combination
is suggested to the user.
- polyrith will use all relevant hypotheses in the local context.
- polyrith [t1, t2, t3] will add proof terms t1, t2, t3 to the local context.
- polyrith only [h1, h2, h3, t1, t2, t3] will use only local hypotheses h1, h2, h3, and proofs t1, t2, t3. It will ignore the rest of the local context.
Notes:
-
This tactic only works with a working internet connection, since it calls Sage using the SageCell web API at https://sagecell.sagemath.org/. Many thanks to the Sage team and organization for allowing this use.
-
This tactic assumes that the user has
curl
available on path.
Examples:
example (x y : ℚ) (h1 : x*y + 2*x = 1) (h2 : x = y) :
x*y = -2*y + 1 := x:ℚy:ℚh1:x * y + 2 * x = 1h2:x = y⊢ x * y = -2 * y + 1
All goals completed! 🐙
-- Try this: linear_combination h1 - 2 * h2
example (x y z w : ℚ) (hzw : z = w) : x*z + 2*y*z = x*w + 2*y*w := x:ℚy:ℚz:ℚw:ℚhzw:z = w⊢ x * z + 2 * y * z = x * w + 2 * y * w
All goals completed! 🐙
-- Try this: linear_combination (2 * y + x) * hzw
axiom scary : ∀ a b : ℚ, a + b = 0
example (a b c d : ℚ) (h : a + b = 0) (h2: b + c = 0) : a + b + c + d = 0 := a:ℚb:ℚc:ℚd:ℚh:a + b = 0h2:b + c = 0⊢ a + b + c + d = 0
All goals completed! 🐙
-- Try this: linear_combination scary c d + h
positivity
Tactic solving goals of the form 0 ≤ x
, 0 < x
and x ≠ 0
. The tactic works recursively
according to the syntax of the expression x
, if the atoms composing the expression all have
numeric lower bounds which can be proved positive/nonnegative/nonzero by norm_num
. This tactic
either closes the goal or fails.
Examples:
example {a : ℤ} (ha : 3 < a) : 0 ≤ a ^ 3 + a := a:ℤha:3 < a⊢ 0 ≤ a ^ 3 + a All goals completed! 🐙
example {a : ℤ} (ha : 1 < a) : 0 < |(3:ℤ) + a| := a:ℤha:1 < a⊢ 0 < |3 + a| All goals completed! 🐙
example {b : ℤ} : 0 ≤ max (-3) (b ^ 2) := b:ℤ⊢ 0 ≤ -3 ⊔ b ^ 2 All goals completed! 🐙
pure_coherence
pure_coherence
uses the coherence theorem for monoidal categories to prove the goal.
It can prove any equality made up only of associators, unitors, and identities.
example {C : Type} [Category C] [MonoidalCategory C] : (λ_ (𝟙_ C)).hom = (ρ_ (𝟙_ C)).hom := by pure_coherence
Users will typically just use the coherence
tactic,
which can also cope with identities of the form
a ≫ f ≫ b ≫ g ≫ c = a' ≫ f ≫ b' ≫ g ≫ c'
where a = a'
, b = b'
, and c = c'
can be proved using pure_coherence
push_cast
push_cast
rewrites the goal to move certain coercions (casts) inward, toward the leaf nodes.
This uses norm_cast
lemmas in the forward direction.
For example, ↑(a + b)
will be written to ↑a + ↑b
.
- push_cast moves casts inward in the goal.
- push_cast at h moves casts inward in the hypothesis h. It can be used with extra simp lemmas with, for example, push_cast [Int.add_zero].
Example:
example (a b : Nat)
(h1 : ((a + b : Nat) : Int) = 10)
(h2 : ((a + b + 0 : Nat) : Int) = 10) :
((a + b : Nat) : Int) = 10 := a:ℕb:ℕh1:↑(a + b) = 10h2:↑(a + b + 0) = 10⊢ ↑(a + b) = 10
/-
h1 : ↑(a + b) = 10
h2 : ↑(a + b + 0) = 10
⊢ ↑(a + b) = 10
-/
a:ℕb:ℕh1:↑(a + b) = 10h2:↑(a + b + 0) = 10⊢ ↑a + ↑b = 10
/- Now
⊢ ↑a + ↑b = 10
-/
a:ℕb:ℕh2:↑(a + b + 0) = 10h1:↑a + ↑b = 10⊢ ↑a + ↑b = 10
a:ℕb:ℕh1:↑a + ↑b = 10h2:↑a + ↑b = 10⊢ ↑a + ↑b = 10
/- Now
h1 h2 : ↑a + ↑b = 10
-/
All goals completed! 🐙
See also norm_cast
.
push_neg
Push negations into the conclusion of a hypothesis.
For instance, a hypothesis h : ¬ ∀ x, ∃ y, x ≤ y
will be transformed by push_neg at h
into
h : ∃ x, ∀ y, y < x
. Variable names are conserved.
This tactic pushes negations inside expressions. For instance, given a hypothesis
h : ¬ ∀ ε > 0, ∃ δ > 0, ∀ x, |x - x₀| ≤ δ → |f x - y₀| ≤ ε)
writing push_neg at h
will turn h
into
h : ∃ ε, ε > 0 ∧ ∀ δ, δ > 0 → (∃ x, |x - x₀| ≤ δ ∧ ε < |f x - y₀|),
(The pretty printer does not use the abbreviations ∀ δ > 0
and ∃ ε > 0
but this issue
has nothing to do with push_neg
).
Note that names are conserved by this tactic, contrary to what would happen with simp
using the relevant lemmas. One can also use this tactic at the goal using push_neg
,
at every hypothesis and the goal using push_neg at *
or at selected hypotheses and the goal
using say push_neg at h h' ⊢
as usual.
This tactic has two modes: in standard mode, it transforms ¬(p ∧ q)
into p → ¬q
, whereas in
distrib mode it produces ¬p ∨ ¬q
. To use distrib mode, use set_option push_neg.use_distrib true
.
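For illustration, an assumed minimal example of the at-hypothesis form:
example (f : ℕ → ℕ) (h : ¬ ∀ n, ∃ m, f m = n) : ∃ n, ∀ m, f m ≠ n := by
  push_neg at h
  exact h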
qify
The qify
tactic is used to shift propositions from ℕ
or ℤ
to ℚ
.
This is often useful since ℚ
has well-behaved division.
example (a b c x y z : ℕ) (h : ¬ x*y*z < 0) : c < a + 3*b := a:ℕb:ℕc:ℕx:ℕy:ℕz:ℕh:¬x * y * z < 0⊢ c < a + 3 * b
a:ℕb:ℕc:ℕx:ℕy:ℕz:ℕh:¬x * y * z < 0⊢ ↑c < ↑a + 3 * ↑b
a:ℕb:ℕc:ℕx:ℕy:ℕz:ℕh:¬↑x * ↑y * ↑z < 0⊢ ↑c < ↑a + 3 * ↑b
/-
h : ¬↑x * ↑y * ↑z < 0
⊢ ↑c < ↑a + 3 * ↑b
-/
All goals completed! 🐙
qify
can be given extra lemmas to use in simplification. This is especially useful in the
presence of nat subtraction: passing ≤
arguments will allow push_cast
to do more work.
example (a b c : ℤ) (h : a / b = c) (hab : b ∣ a) (hb : b ≠ 0) : a = c * b := a:ℤb:ℤc:ℤh:a / b = chab:b ∣ ahb:b ≠ 0⊢ a = c * b
a:ℤb:ℤc:ℤhab:b ∣ ah:↑a / ↑b = ↑chb:↑b ≠ 0⊢ ↑a = ↑c * ↑b
All goals completed! 🐙
qify
makes use of the @[zify_simps]
and @[qify_simps]
attributes to move propositions,
and the push_cast
tactic to simplify the ℚ
-valued expressions.
rcases
rcases
is a tactic that will perform cases
recursively, according to a pattern. It is used to
destructure hypotheses or expressions composed of inductive types like h1 : a ∧ b ∧ c ∨ d
or
h2 : ∃ x y, trans_rel R x y
. Usual usage might be rcases h1 with ⟨ha, hb, hc⟩ | hd
or
rcases h2 with ⟨x, y, _ | ⟨z, hxz, hzy⟩⟩
for these examples.
Each element of an rcases
pattern is matched against a particular local hypothesis (most of which
are generated during the execution of rcases
and represent individual elements destructured from
the input expression). An rcases
pattern has the following grammar:
- A name like x, which names the active hypothesis as x.
- A blank _, which does nothing (letting the automatic naming system used by cases name the hypothesis).
- A hyphen -, which clears the active hypothesis and any dependents.
- The keyword rfl, which expects the hypothesis to be h : a = b, and calls subst on the hypothesis (which has the effect of replacing b with a everywhere or vice versa).
- A type ascription p : ty, which sets the type of the hypothesis to ty and then matches it against p. (Of course, ty must unify with the actual type of h for this to work.)
- A tuple pattern ⟨p1, p2, p3⟩, which matches a constructor with many arguments, or a series of nested conjunctions or existentials. For example if the active hypothesis is a ∧ b ∧ c, then the conjunction will be destructured, and p1 will be matched against a, p2 against b and so on.
- A @ before a tuple pattern as in @⟨p1, p2, p3⟩ will bind all arguments in the constructor, while leaving the @ off will only use the patterns on the explicit arguments.
- An alternation pattern p1 | p2 | p3, which matches an inductive type with multiple constructors, or a nested disjunction like a ∨ b ∨ c.
A pattern like ⟨a, b, c⟩ | ⟨d, e⟩
will do a split over the inductive datatype,
naming the first three parameters of the first constructor as a,b,c
and the
first two of the second constructor d,e
. If the list is not as long as the
number of arguments to the constructor or the number of constructors, the
remaining variables will be automatically named. If there are nested brackets
such as ⟨⟨a⟩, b | c⟩ | d
then these will cause more case splits as necessary.
If there are too many arguments, such as ⟨a, b, c⟩
for splitting on
∃ x, ∃ y, p x
, then it will be treated as ⟨a, ⟨b, c⟩⟩
, splitting the last
parameter as necessary.
rcases
also has special support for quotient types: quotient induction into Prop works like
matching on the constructor quot.mk
.
rcases h : e with PAT
will do the same as rcases e with PAT
with the exception that an
assumption h : e = PAT
will be added to the context.
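For illustration, an assumed minimal example combining a tuple pattern with an alternation pattern:
example (p q r : Prop) (h : p ∧ (q ∨ r)) : (p ∧ q) ∨ (p ∧ r) := by
  rcases h with ⟨hp, hq | hr⟩
  · exact Or.inl ⟨hp, hq⟩
  · exact Or.inr ⟨hp, hr⟩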
rcongr
Repeatedly apply congr
and ext
, using the given patterns as arguments for ext
.
There are two ways this tactic stops:
- congr fails (makes no progress), after having already applied ext.
- congr canceled out the last usage of ext. In this case, the state is reverted to before the congr was applied.
For example, when the goal is
⊢ (fun x => f x + 3) '' s = (fun x => g x + 3) '' s
then rcongr x
produces the goal
x : α ⊢ f x = g x
This gives the same result as congr; ext x; congr
.
In contrast, congr
would produce
⊢ (fun x => f x + 3) = (fun x => g x + 3)
and congr with x
(or congr; ext x
) would produce
x : α ⊢ f x + 3 = g x + 3
recover
Modifier recover
for a tactic (sequence) to debug cases where goals are closed incorrectly.
The tactic recover tacs
for a tactic (sequence) tacs
applies the tactics and then adds goals
that are not closed, starting from the original goal.
reduce
reduce at loc
completely reduces the given location.
This also exists as a conv
-mode tactic.
This does the same transformation as the #reduce
command.
reduce_mod_char
The tactic reduce_mod_char
looks for numeric expressions in characteristic p
and reduces these to lie between 0
and p
.
For example:
example : (5 : ZMod 4) = 1 := by reduce_mod_char
example : (X ^ 2 - 3 * X + 4 : (ZMod 4)[X]) = X ^ 2 + X := by reduce_mod_char
It also handles negation, turning it into multiplication by p - 1
,
and similarly subtraction.
This tactic uses the type of the subexpression to figure out if it is indeed of positive
characteristic, for improved performance compared to trying to synthesise a CharP
instance.
The variant reduce_mod_char!
also tries to use CharP R n
hypotheses in the context.
(Limitations of the typeclass system mean the tactic can't search for a CharP R n
instance if
n
is not yet known; use have : CharP R n := inferInstance; reduce_mod_char!
as a workaround.)
reduce_mod_char!
The tactic reduce_mod_char
looks for numeric expressions in characteristic p
and reduces these to lie between 0
and p
.
For example:
example : (5 : ZMod 4) = 1 := by reduce_mod_char
example : (X ^ 2 - 3 * X + 4 : (ZMod 4)[X]) = X ^ 2 + X := by reduce_mod_char
It also handles negation, turning it into multiplication by p - 1
,
and similarly subtraction.
This tactic uses the type of the subexpression to figure out if it is indeed of positive
characteristic, for improved performance compared to trying to synthesise a CharP
instance.
The variant reduce_mod_char!
also tries to use CharP R n
hypotheses in the context.
(Limitations of the typeclass system mean the tactic can't search for a CharP R n
instance if
n
is not yet known; use have : CharP R n := inferInstance; reduce_mod_char!
as a workaround.)
refine
refine e
behaves like exact e
, except that named (?x
) or unnamed (?_
)
holes in e
that are not solved by unification with the main goal's target type
are converted into new goals, using the hole's name, if any, as the goal case name.
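For illustration, an assumed minimal example where ?_ leaves one new goal:
example (n : ℕ) : ∃ m, n < m := by
  refine ⟨n + 1, ?_⟩
  exact Nat.lt_succ_self n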
refine'
refine' e
behaves like refine e
, except that unsolved placeholders (_
)
and implicit parameters are also converted into new goals.
refine_lift
Auxiliary macro for lifting have/suffices/let/...
It makes sure the "continuation" ?_
is the main goal after refining.
refold_let
refold_let x y z at loc
looks for the bodies of local definitions x
, y
, and z
at the given
location and replaces them with x
, y
, or z
. This is the inverse of "zeta reduction."
This also exists as a conv
-mode tactic.
rel
The rel
tactic applies "generalized congruence" rules to solve a relational goal by
"substitution". For example,
example {a b x c d : ℝ} (h1 : a ≤ b) (h2 : c ≤ d) :
x ^ 2 * a + c ≤ x ^ 2 * b + d := a:ℝb:ℝx:ℝc:ℝd:ℝh1:a ≤ bh2:c ≤ d⊢ x ^ 2 * a + c ≤ x ^ 2 * b + d
All goals completed! 🐙
In this example we "substitute" the hypotheses a ≤ b
and c ≤ d
into the LHS x ^ 2 * a + c
of
the goal and obtain the RHS x ^ 2 * b + d
, thus proving the goal.
The "generalized congruence" rules used are the library lemmas which have been tagged with the
attribute @[gcongr]
. For example, the first example constructs the proof term
add_le_add (mul_le_mul_of_nonneg_left h1 (pow_bit0_nonneg x 1)) h2
using the generalized congruence lemmas add_le_add
and mul_le_mul_of_nonneg_left
. If there are
no applicable generalized congruence lemmas, the tactic fails.
The tactic attempts to discharge side goals to these "generalized congruence" lemmas (such as the
side goal 0 ≤ x ^ 2
in the above application of mul_le_mul_of_nonneg_left
) using the tactic
gcongr_discharger
, which wraps positivity
but can also be extended. If the side goals cannot
be discharged in this way, the tactic fails.
rename
rename t => x
renames the most recent hypothesis whose type matches t
(which may contain placeholders) to x
, or fails if no such hypothesis could be found.
rename'
rename_bvar
- rename_bvar old → new renames all bound variables named old to new in the target.
- rename_bvar old → new at h does the same in hypothesis h.
example (P : ℕ → ℕ → Prop) (h : ∀ n, ∃ m, P n m) : ∀ l, ∃ m, P l m := P:ℕ → ℕ → Proph:∀ (q : ℕ), ∃ m, P q m⊢ ∀ (l : ℕ), ∃ n, P l n
P:ℕ → ℕ → Proph:∀ (q : ℕ), ∃ m, P q m⊢ ∀ (l : ℕ), ∃ n, P l n -- h is now ∀ (q : ℕ), ∃ (m : ℕ), P q m,
P:ℕ → ℕ → Proph:∀ (q : ℕ), ∃ m, P q m⊢ ∀ (l : ℕ), ∃ n, P l n -- target is now ∀ (l : ℕ), ∃ (n : ℕ), P k n,
All goals completed! 🐙 -- Lean does not care about those bound variable names
Note: name clashes are resolved automatically.
repeat
repeat tac
repeatedly applies tac
so long as it succeeds.
The tactic tac
may be a tactic sequence, and if tac
fails at any point in its execution,
repeat
will revert any partial changes that tac
made to the tactic state.
The tactic tac
should eventually fail, otherwise repeat tac
will run indefinitely.
See also:
- try tac is like repeat tac but will apply tac at most once.
- repeat' tac recursively applies tac to each goal.
- first | tac1 | tac2 implements the backtracking used by repeat
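For illustration, an assumed small example: constructor is applied until it fails, splitting the conjunctions and closing each resulting equality.
example : 1 = 1 ∧ 2 = 2 ∧ 3 = 3 := by
  repeat constructor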
repeat'
repeat' tac
recursively applies tac
on all of the goals so long as it succeeds.
That is to say, if tac
produces multiple subgoals, then repeat' tac
is applied to each of them.
See also:
- repeat tac simply repeatedly applies tac.
- repeat1' tac is repeat' tac but requires that tac succeed for some goal at least once.
repeat1
repeat1 tac
applies tac
to main goal at least once. If the application succeeds,
the tactic is applied recursively to the generated subgoals until it eventually fails.
repeat1'
repeat1' tac
recursively applies to tac
on all of the goals so long as it succeeds,
but repeat1' tac
fails if tac
succeeds on none of the initial goals.
See also:
- repeat tac simply applies tac repeatedly.
- repeat' tac is like repeat1' tac but it does not require that tac succeed at least once.
replace
Acts like have
, but removes a hypothesis with the same name as
this one if possible. For example, if the state is:
f : α → β
h : α
⊢ goal
Then after replace h : β
the state will be:
case h
f : α → β
h : α
⊢ β

f : α → β
h : β
⊢ goal
whereas have h : β
would result in:
case h
f : α → β
h : α
⊢ β

f : α → β
h✝ : α
h : β
⊢ goal
replace
Acts like have
, but removes a hypothesis with the same name as
this one if possible. For example, if the state is:
f : α → β
h : α
⊢ goal
Then after replace h := f h
the state will be:
f : α → β
h : β
⊢ goal
whereas have h := f h
would result in:
f : α → β
h† : α
h : β
⊢ goal
This can be used to simulate the specialize
and apply at
tactics of Coq.
revert
revert x...
is the inverse of intro x...
: it moves the given hypotheses
into the main goal's target type.
rewrite
rewrite [e]
applies identity e
as a rewrite rule to the target of the main goal.
If e
is preceded by left arrow (←
or <-
), the rewrite is applied in the reverse direction.
If e
is a defined constant, then the equational theorems associated with e
are used.
This provides a convenient way to unfold e
.
- rewrite [e₁, ..., eₙ] applies the given rules sequentially.
- rewrite [e] at l rewrites e at location(s) l, where l is either * or a list of hypotheses in the local context. In the latter case, a turnstile ⊢ or |- can also be used, to signify the target of the goal.
Using rw (occs := .pos L) [e]
,
where L : List Nat
, you can control which "occurrences" are rewritten.
(This option applies to each rule, so usually this will only be used with a single rule.)
Occurrences count from 1
.
At each allowed occurrence, arguments of the rewrite rule e
may be instantiated,
restricting which later rewrites can be found.
(Disallowed occurrences do not result in instantiation.)
(occs := .neg L)
allows skipping specified occurrences.
rfl
This tactic applies to a goal whose target has the form x ~ x
,
where ~
is equality, heterogeneous equality or any relation that
has a reflexivity lemma tagged with the attribute @[refl].
rfl'
rfl'
is similar to rfl
, but disables smart unfolding and unfolds all kinds of definitions,
theorems included (relevant for declarations defined by well-founded recursion).
rify
The rify
tactic is used to shift propositions from ℕ
, ℤ
or ℚ
to ℝ
.
Although less useful than its cousins zify
and qify
, it can be useful when your
goal or context already involves real numbers.
In the example below, assumption hn
is about natural numbers, hk
is about integers
and involves casting a natural number to ℤ
, and the conclusion is about real numbers.
The proof uses rify
to lift both assumptions to ℝ
before calling linarith
.
example {n : ℕ} {k : ℤ} (hn : 8 ≤ n) (hk : 2 * k ≤ n + 2) :
(0 : ℝ) < n - k - 1 := n:ℕk:ℤhn:8 ≤ nhk:2 * k ≤ ↑n + 2⊢ 0 < ↑n - ↑k - 1
n:ℕk:ℤhn:8 ≤ ↑nhk:2 * ↑k ≤ ↑n + 2⊢ 0 < ↑n - ↑k - 1 /- Now have hn : 8 ≤ (n : ℝ) hk : 2 * (k : ℝ) ≤ (n : ℝ) + 2-/
All goals completed! 🐙
rify
makes use of the @[zify_simps]
, @[qify_simps]
and @[rify_simps]
attributes to move
propositions, and the push_cast
tactic to simplify the ℝ
-valued expressions.
rify
can be given extra lemmas to use in simplification. This is especially useful in the
presence of nat subtraction: passing ≤
arguments will allow push_cast
to do more work.
example (a b c : ℕ) (h : a - b < c) (hab : b ≤ a) : a < b + c := a:ℕb:ℕc:ℕh:a - b < chab:b ≤ a⊢ a < b + c
a:ℕb:ℕc:ℕhab:b ≤ ah:↑a - ↑b < ↑c⊢ ↑a < ↑b + ↑c
All goals completed! 🐙
Note that zify
or qify
would work just as well in the above example (and zify
is the natural
choice since it is enough to get rid of the pathological ℕ
subtraction).
right
Applies the second constructor when the goal is an inductive type with exactly two constructors, or fails otherwise.
example {p q : Prop} (h : q) : p ∨ q := p:Propq:Proph:q⊢ p ∨ q
p:Propq:Proph:q⊢ q
All goals completed! 🐙
ring
Tactic for evaluating expressions in commutative (semi)rings, allowing for variables in the
exponent. If the goal is not appropriate for ring
(e.g. not an equality) ring_nf
will be
suggested.
-
ring!
will use a more aggressive reducibility setting to determine equality of atoms. -
ring1
fails if the target is not an equality.
For example:
example (n : ℕ) (m : ℤ) : 2^(n+1) * m = 2 * 2^n * m := n:ℕm:ℤ⊢ 2 ^ (n + 1) * m = 2 * 2 ^ n * m All goals completed! 🐙
example (a b : ℤ) (n : ℕ) : (a + b)^(n + 2) = (a^2 + b^2 + a * b + b * a) * (a + b)^n := a:ℤb:ℤn:ℕ⊢ (a + b) ^ (n + 2) = (a ^ 2 + b ^ 2 + a * b + b * a) * (a + b) ^ n All goals completed! 🐙
example (x y : ℕ) : x + id y = y + id x := x:ℕy:ℕ⊢ x + id y = y + id x All goals completed! 🐙
example (x : ℕ) (h : x * 2 > 5): x + x > 5 := x:ℕh:x * 2 > 5⊢ x + x > 5 x:ℕh:x * 2 > 5⊢ x * 2 > 5; All goals completed! 🐙 -- suggests ring_nf
ring!
Tactic for evaluating expressions in commutative (semi)rings, allowing for variables in the
exponent. If the goal is not appropriate for ring
(e.g. not an equality) ring_nf
will be
suggested.
-
ring!
will use a more aggressive reducibility setting to determine equality of atoms. -
ring1
fails if the target is not an equality.
For example:
example (n : ℕ) (m : ℤ) : 2^(n+1) * m = 2 * 2^n * m := n:ℕm:ℤ⊢ 2 ^ (n + 1) * m = 2 * 2 ^ n * m All goals completed! 🐙
example (a b : ℤ) (n : ℕ) : (a + b)^(n + 2) = (a^2 + b^2 + a * b + b * a) * (a + b)^n := a:ℤb:ℤn:ℕ⊢ (a + b) ^ (n + 2) = (a ^ 2 + b ^ 2 + a * b + b * a) * (a + b) ^ n All goals completed! 🐙
example (x y : ℕ) : x + id y = y + id x := x:ℕy:ℕ⊢ x + id y = y + id x All goals completed! 🐙
example (x : ℕ) (h : x * 2 > 5): x + x > 5 := x:ℕh:x * 2 > 5⊢ x + x > 5 x:ℕh:x * 2 > 5⊢ x * 2 > 5; All goals completed! 🐙 -- suggests ring_nf
ring1
Tactic for solving equations of commutative (semi)rings, allowing variables in the exponent.
-
This version of
ring
fails if the target is not an equality. -
The variant
ring1!
will use a more aggressive reducibility setting to determine equality of atoms.
ring1!
Tactic for solving equations of commutative (semi)rings, allowing variables in the exponent.
-
This version of
ring
fails if the target is not an equality. -
The variant
ring1!
will use a more aggressive reducibility setting to determine equality of atoms.
ring1_nf
ring1_nf!
ring_nf
Simplification tactic for expressions in the language of commutative (semi)rings, which rewrites all ring expressions into a normal form.
- ring_nf! will use a more aggressive reducibility setting to identify atoms.
- ring_nf (config := cfg) allows for additional configuration:
  - red: the reducibility setting (overridden by !)
  - recursive: if true, ring_nf will also recurse into atoms
- ring_nf works as both a tactic and a conv tactic. In tactic mode, ring_nf at h can be used to rewrite in a hypothesis.
This can be used non-terminally to normalize ring expressions in the goal such as
⊢ P (x + x + x)
~> ⊢ P (x * 3)
, as well as being able to prove some equations that
ring
cannot because they involve ring reasoning inside a subterm, such as
sin (x + y) + sin (y + x) = 2 * sin (x + y)
.
ring_nf!
Simplification tactic for expressions in the language of commutative (semi)rings, which rewrites all ring expressions into a normal form.
- ring_nf! will use a more aggressive reducibility setting to identify atoms.
- ring_nf (config := cfg) allows for additional configuration:
  - red: the reducibility setting (overridden by !)
  - recursive: if true, ring_nf will also recurse into atoms
- ring_nf works as both a tactic and a conv tactic. In tactic mode, ring_nf at h can be used to rewrite in a hypothesis.
This can be used non-terminally to normalize ring expressions in the goal such as
⊢ P (x + x + x)
~> ⊢ P (x * 3)
, as well as being able to prove some equations that
ring
cannot because they involve ring reasoning inside a subterm, such as
sin (x + y) + sin (y + x) = 2 * sin (x + y)
.
rintro
The rintro
tactic is a combination of the intros
tactic with rcases
to
allow for destructuring patterns while introducing variables. See rcases
for
a description of supported patterns. For example, rintro (a | ⟨b, c⟩) ⟨d, e⟩
will introduce two variables, and then do case splits on both of them producing
two subgoals, one with variables a d e
and the other with b c d e
.
rintro
, unlike rcases
, also supports the form (x y : ty)
for introducing
and type-ascripting multiple variables at once, similar to binders.
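For illustration, an assumed minimal example: the alternation produces two subgoals, both closed by the same tactic via <;>.
example (p q : Prop) : p ∧ (q ∨ p) → p := by
  rintro ⟨hp, hq | hp'⟩ <;> exact hp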
rotate_left
rotate_left n
rotates goals to the left by n
. That is, rotate_left 1
takes the main goal and puts it to the back of the subgoal list.
If n
is omitted, it defaults to 1
.
rotate_right
Rotate the goals to the right by n
. That is, take the goal at the back
and push it to the front n
times. If n
is omitted, it defaults to 1
.
rsuffices
The rsuffices
tactic is an alternative version of suffices
, that allows the usage
of any syntax that would be valid in an obtain
block. This tactic just calls obtain
on the expression, and then rotate_left
.
rw
rw?
rw_mod_cast
Rewrites with the given rules, normalizing casts prior to each step.
rw_search
rw_search
attempts to solve an equality goal
by repeatedly rewriting using lemmas from the library.
If no solution is found,
the best sequence of rewrites found before maxHeartbeats
elapses is returned.
The search is a best-first search, minimising the Levenshtein edit distance between
the pretty-printed expressions on either side of the equality.
(The strings are tokenized at spaces,
separating delimiters (
, )
, [
, ]
, and ,
into their own tokens.)
You can use rw_search [-my_lemma, -my_theorem]
to prevent rw_search
from using the named theorems.
saturate
saturate?
save
save
is defined to be the same as skip
, but the elaborator has
special handling for occurrences of save
in tactic scripts and will transform
by tac1; save; tac2
to by (checkpoint tac1); tac2
, meaning that the effect of tac1
will be cached and replayed. This is useful for improving responsiveness
when working on a long tactic proof, by using save
after expensive tactics.
(TODO: do this automatically and transparently so that users don't have to use this combinator explicitly.)
says
If you write X says
, where X
is a tactic that produces a "Try this: Y" message,
then you will get a message "Try this: X says Y".
Once you've clicked to replace X says
with X says Y
,
afterwards X says Y
will only run Y
.
The typical usage case is:
simp? [X] says simp only [X, Y, Z]
If you use set_option says.verify true
(set automatically during CI) then X says Y
runs X
and verifies that it still prints "Try this: Y".
set
set!
show
show t
finds the first goal whose target unifies with t
. It makes that the main goal,
performs the unification, and replaces the target with the unified version of t
.
show_term
show_term tac
runs tac
, then prints the generated term in the form
"exact X Y Z" or "refine X ?_ Z" if there are remaining subgoals.
(For some tactics, the printed term will not be human readable.)
simp
The simp
tactic uses lemmas and hypotheses to simplify the main goal target or
non-dependent hypotheses. It has many variants:
- simp simplifies the main goal target using lemmas tagged with the attribute [simp].
- simp [h₁, h₂, ..., hₙ] simplifies the main goal target using the lemmas tagged with the attribute [simp] and the given hᵢ's, where the hᵢ's are expressions.
  - If an hᵢ is a defined constant f, then f is unfolded. If f has equational lemmas associated with it (and is not a projection or a reducible definition), these are used to rewrite with f.
- simp [*] simplifies the main goal target using the lemmas tagged with the attribute [simp] and all hypotheses.
- simp only [h₁, h₂, ..., hₙ] is like simp [h₁, h₂, ..., hₙ] but does not use [simp] lemmas.
- simp [-id₁, ..., -idₙ] simplifies the main goal target using the lemmas tagged with the attribute [simp], but removes the ones named idᵢ.
- simp at h₁ h₂ ... hₙ simplifies the hypotheses h₁ : T₁ ... hₙ : Tₙ. If the target or another hypothesis depends on hᵢ, a new simplified hypothesis hᵢ is introduced, but the old one remains in the local context.
- simp at * simplifies all the hypotheses and the target.
- simp [*] at * simplifies target and all (propositional) hypotheses using the other hypotheses.
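For illustration, a minimal sketch combining a [simp] lemma with a local hypothesis:
example (n : Nat) (h : n = 0) : n + 0 = 0 := by
  simp [h]   -- n + 0 is simplified to n by a [simp] lemma, then rewritten to 0 using h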
simp!
simp!
is shorthand for simp
with autoUnfold := true
.
This will rewrite with all equation lemmas, which can be used to
partially evaluate many definitions.
simp?
simp?
takes the same arguments as simp
, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := x:ℕ⊢ (if True then x + 2 else 3) = x + 2
All goals completed! 🐙 -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all
and dsimp
.
simp?!
simp?
takes the same arguments as simp
, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := x:ℕ⊢ (if True then x + 2 else 3) = x + 2
All goals completed! 🐙 -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all
and dsimp
.
simp_all
simp_all
is a stronger version of simp [*] at *
where the hypotheses and target
are simplified multiple times until no simplification is applicable.
Only non-dependent propositional hypotheses are considered.
simp_all!
simp_all!
is shorthand for simp_all
with autoUnfold := true
.
This will rewrite with all equation lemmas, which can be used to
partially evaluate many definitions.
simp_all?
simp?
takes the same arguments as simp
, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := x:ℕ⊢ (if True then x + 2 else 3) = x + 2
All goals completed! 🐙 -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all
and dsimp
.
simp_all?!
simp?
takes the same arguments as simp
, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := x:ℕ⊢ (if True then x + 2 else 3) = x + 2
All goals completed! 🐙 -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all
and dsimp
.
simp_arith
simp_arith
is shorthand for simp
with arith := true
and decide := true
.
This enables the use of normalization by linear arithmetic.
simp_arith!
simp_arith!
is shorthand for simp_arith
with autoUnfold := true
.
This will rewrite with all equation lemmas, which can be used to
partially evaluate many definitions.
simp_intro
The simp_intro
tactic is a combination of simp
and intro
: it will simplify the types of
variables as it introduces them and uses the new variables to simplify later arguments
and the goal.
- simp_intro x y z introduces variables named x y z
- simp_intro x y z .. introduces variables named x y z and then keeps introducing _ binders
- simp_intro (config := cfg) (discharger := tac) x y .. only [h₁, h₂]: simp_intro takes the same options as simp (see simp)
example : x + 0 = y → x = z := x:ℕy:ℕz:ℕ⊢ x + 0 = y → x = z
x:ℕy:ℕz:ℕh:x = y⊢ y = z
-- h: x = y ⊢ y = z
All goals completed! 🐙
simp_rw
simp_rw
functions as a mix of simp
and rw
. Like rw
, it applies each
rewrite rule in the given order, but like simp
it repeatedly applies these
rules and also under binders like ∀ x, ...
, ∃ x, ...
and fun x ↦...
.
Usage:
- simp_rw [lemma_1, ..., lemma_n] will rewrite the goal by applying the lemmas in that order. A lemma preceded by ← is applied in the reverse direction.
- simp_rw [lemma_1, ..., lemma_n] at h₁ ... hₙ will rewrite the given hypotheses.
- simp_rw [...] at * rewrites in the whole context: all hypotheses and the goal.
Lemmas passed to simp_rw
must be expressions that are valid arguments to simp
.
For example, neither simp
nor rw
can solve the following, but simp_rw
can:
example {a : ℕ}
(h1 : ∀ a b : ℕ, a - 1 ≤ b ↔ a ≤ b + 1)
(h2 : ∀ a b : ℕ, a ≤ b ↔ ∀ c, c < a → c < b) :
(∀ b, a - 1 ≤ b) = ∀ b c : ℕ, c < a → c < b + 1 := a:ℕh1:∀ (a b : ℕ), a - 1 ≤ b ↔ a ≤ b + 1h2:∀ (a b : ℕ), a ≤ b ↔ ∀ c < a, c < b⊢ (∀ (b : ℕ), a - 1 ≤ b) = ∀ (b c : ℕ), c < a → c < b + 1
simp_rw [h1, All goals completed! 🐙]
simp_wf
Unfold definitions commonly used in well founded relation definitions.
Since Lean 4.12, Lean unfolds these definitions automatically before presenting the goal to the
user, and this tactic should no longer be necessary. Calls to simp_wf
can be removed or replaced
by plain calls to simp
.
simpa
This is a "finishing" tactic modification of simp
. It has two forms.
- simpa [rules, ⋯] using e will simplify the goal and the type of e using rules, then try to close the goal using e.
  Simplifying the type of e makes it more likely to match the goal (which has also been simplified). This construction also tends to be more robust under changes to the simp lemma set.
- simpa [rules, ⋯] will simplify the goal and the type of a hypothesis this if present in the context, then try to close the goal using the assumption tactic.
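For illustration, a minimal sketch of the using form (assuming the relevant Mathlib imports):
example (n m : Nat) (h : 0 + n = m) : n = m := by
  simpa using h   -- simplifying the type of h gives n = m, which closes the goal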
simpa!
This is a "finishing" tactic modification of simp
. It has two forms.
- simpa [rules, ⋯] using e will simplify the goal and the type of e using rules, then try to close the goal using e.
  Simplifying the type of e makes it more likely to match the goal (which has also been simplified). This construction also tends to be more robust under changes to the simp lemma set.
- simpa [rules, ⋯] will simplify the goal and the type of a hypothesis this if present in the context, then try to close the goal using the assumption tactic.
simpa?
This is a "finishing" tactic modification of simp
. It has two forms.
- simpa [rules, ⋯] using e will simplify the goal and the type of e using rules, then try to close the goal using e.
  Simplifying the type of e makes it more likely to match the goal (which has also been simplified). This construction also tends to be more robust under changes to the simp lemma set.
- simpa [rules, ⋯] will simplify the goal and the type of a hypothesis this if present in the context, then try to close the goal using the assumption tactic.
simpa?!
This is a "finishing" tactic modification of simp
. It has two forms.
- simpa [rules, ⋯] using e will simplify the goal and the type of e using rules, then try to close the goal using e.
  Simplifying the type of e makes it more likely to match the goal (which has also been simplified). This construction also tends to be more robust under changes to the simp lemma set.
- simpa [rules, ⋯] will simplify the goal and the type of a hypothesis this if present in the context, then try to close the goal using the assumption tactic.
sizeOf_list_dec
This tactic, added to the decreasing_trivial
toolbox, proves that
sizeOf a < sizeOf as
when a ∈ as
, which is useful for well founded recursions
over a nested inductive like inductive T | mk : List T → T
.
sleep
The tactic sleep ms
sleeps for ms
milliseconds and does nothing.
It is used for debugging purposes only.
slice_lhs
slice_lhs a b => tac
zooms to the left hand side, uses associativity for categorical
composition as needed, zooms in on the a
-th through b
-th morphisms, and invokes tac
.
slice_rhs
slice_rhs a b => tac
zooms to the right hand side, uses associativity for categorical
composition as needed, zooms in on the a
-th through b
-th morphisms, and invokes tac
.
solve_by_elim
solve_by_elim
calls apply
on the main goal to find an assumption whose head matches
and then repeatedly calls apply
on the generated subgoals until no subgoals remain,
performing at most maxDepth
(defaults to 6) recursive steps.
solve_by_elim
discharges the current goal or fails.
solve_by_elim
performs backtracking if subgoals can not be solved.
By default, the assumptions passed to apply
are the local context, rfl
, trivial
,
congrFun
and congrArg
.
The assumptions can be modified with similar syntax as for simp
:
- solve_by_elim [h₁, h₂, ..., hᵣ] also applies the given expressions.
- solve_by_elim only [h₁, h₂, ..., hᵣ] does not include the local context, rfl, trivial, congrFun, or congrArg unless they are explicitly included.
- solve_by_elim [-h₁, ... -hₙ] removes the given local hypotheses.
- solve_by_elim using [a₁, ...] uses all lemmas which have been labelled with the attributes aᵢ (these attributes must be created using register_label_attr).
solve_by_elim*
tries to solve all goals together, using backtracking if a solution for one goal
makes other goals impossible.
(Adding or removing local hypotheses may not be well-behaved when starting with multiple goals.)
Optional arguments passed via a configuration argument as solve_by_elim (config := { ... })
- maxDepth: number of attempts at discharging generated subgoals
- symm: adds all hypotheses derived by symm (defaults to true).
- exfalso: allow calling exfalso and trying again if solve_by_elim fails (defaults to true).
- transparency: change the transparency mode when calling apply. Defaults to .default, but it is often useful to change to .reducible, so semireducible definitions will not be unfolded when trying to apply a lemma.
See also the doc-comment for Lean.Meta.Tactic.Backtrack.BacktrackConfig
for the options
proc
, suspend
, and discharge
which allow further customization of solve_by_elim
.
Both apply_assumption
and apply_rules
are implemented via these hooks.
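For illustration, a minimal sketch where the goal follows by repeatedly applying local hypotheses (assuming the relevant Mathlib imports):
example (p q r : Prop) (hp : p) (hpq : p → q) (hqr : q → r) : r := by
  solve_by_elim   -- applies hqr, then hpq, then hp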
sorry
The sorry
tactic is a temporary placeholder for an incomplete tactic proof,
closing the main goal using exact sorry
.
This is intended for stubbing-out incomplete parts of a proof while still having a syntactically correct proof skeleton.
Lean will give a warning whenever a proof uses sorry
, so you aren't likely to miss it,
but you can double check if a theorem depends on sorry
by looking for sorryAx
in the output
of the #print axioms my_thm
command, the axiom used by the implementation of sorry
.
specialize
The tactic specialize h a₁ ... aₙ
works on local hypothesis h
.
The premises of this hypothesis, either universal quantifications or
non-dependent implications, are instantiated by concrete terms coming
from arguments a₁
... aₙ
.
The tactic adds a new hypothesis with the same name h := h a₁ ... aₙ
and tries to clear the previous one.
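For illustration, a minimal sketch:
example (p : Nat → Prop) (h : ∀ n, p n) : p 5 := by
  specialize h 5   -- h : p 5
  exact h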
specialize_all
specialize_all x
runs specialize h x
for all hypotheses h
where this tactic succeeds.
split
The split
tactic is useful for breaking nested if-then-else and match
expressions into separate cases.
For a match
expression with n
cases, the split
tactic generates at most n
subgoals.
For example, given n : Nat
, and a target if n = 0 then Q else R
, split
will generate
one goal with hypothesis n = 0
and target Q
, and a second goal with hypothesis
¬n = 0
and target R
. Note that the introduced hypothesis is unnamed, and is commonly renamed using the case or next tactics.
- split will split the goal (target).
- split at h will split the hypothesis h.
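For illustration, a minimal sketch of the if-then-else case described above:
example (n : Nat) : (if n = 0 then 0 else n) ≤ n := by
  split
  · exact Nat.zero_le n   -- branch with hypothesis n = 0, target 0 ≤ n
  · exact Nat.le_refl n   -- branch with hypothesis ¬n = 0, target n ≤ n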
split_ifs
Splits all if-then-else-expressions into multiple goals.
Given a goal of the form g (if p then x else y)
, split_ifs
will produce
two goals: p ⊢ g x
and ¬p ⊢ g y
.
If there are multiple ite-expressions, then split_ifs
will split them all,
starting with a top-most one whose condition does not contain another
ite-expression.
split_ifs at *
splits all ite-expressions in all hypotheses as well as the goal.
split_ifs with h₁ h₂ h₃
overrides the default names for the hypotheses.
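For illustration, a minimal sketch (assuming the relevant Mathlib imports):
example (p : Prop) [Decidable p] : (if p then 1 else 2) > 0 := by
  split_ifs
  · decide   -- branch where p holds, target 1 > 0
  · decide   -- branch where ¬p holds, target 2 > 0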
squeeze_scope
The squeeze_scope
tactic allows aggregating multiple calls to simp
coming from the same syntax
but in different branches of execution, such as in cases x <;> simp
.
The reported simp
call covers all simp lemmas used by this syntax.
@[simp] def bar (z : Nat) := 1 + z
@[simp] def baz (z : Nat) := 1 + z
@[simp] def foo : Nat → Nat → Nat
| 0, z => bar z
| _+1, z => baz z
example : foo x y = 1 + y := x:ℕy:ℕ⊢ foo x y = 1 + y
y:ℕ⊢ foo 0 y = 1 + yy:ℕn✝:ℕ⊢ foo (n✝ + 1) y = 1 + y y:ℕ⊢ foo 0 y = 1 + yy:ℕn✝:ℕ⊢ foo (n✝ + 1) y = 1 + y All goals completed! 🐙 -- two printouts:
-- "Try this: simp only [foo, bar]"
-- "Try this: simp only [foo, baz]"
example : foo x y = 1 + y := x:ℕy:ℕ⊢ foo x y = 1 + y
squeeze_scope
y:ℕ⊢ foo 0 y = 1 + yy:ℕn✝:ℕ⊢ foo (n✝ + 1) y = 1 + y y:ℕ⊢ foo 0 y = 1 + yy:ℕn✝:ℕ⊢ foo (n✝ + 1) y = 1 + y simp -- only one printout: "Try this: simp only [foo, baz, bar]"
stop
stop
is a helper tactic for "discarding" the rest of a proof:
it is defined as repeat sorry
.
It is useful when working in the middle of a complex proof,
and is less messy than commenting out the remainder of the proof.
subsingleton
The subsingleton
tactic tries to prove a goal of the form x = y
or HEq x y
using the fact that the types involved are subsingletons
(a type with exactly zero or one terms).
To a first approximation, it does apply Subsingleton.elim
.
As a nicety, subsingleton
first runs the intros
tactic.
- If the goal is an equality, it either closes the goal or fails.
- subsingleton [inst1, inst2, ...] can be used to add additional Subsingleton instances to the local context. This can be more flexible than have := inst1; have := inst2; ...; subsingleton since the tactic does not require that all placeholders be solved for.
Techniques the subsingleton
tactic can apply:
- proof irrelevance
- heterogeneous proof irrelevance (via proof_irrel_heq)
- using Subsingleton (via Subsingleton.elim)
- proving BEq instances are equal if they are both lawful (via lawful_beq_subsingleton)
Properties
The tactic is careful not to accidentally specialize Sort _
to Prop
,
avoiding the following surprising behavior of apply Subsingleton.elim
:
example (α : Sort _) (x y : α) : x = y := by apply Subsingleton.elim
The reason this example
goes through is that
it applies the ∀ (p : Prop), Subsingleton p
instance,
specializing the universe level metavariable in Sort _
to 0
.
subst
subst x...
substitutes each x
with e
in the goal if there is a hypothesis
of type x = e
or e = x
.
If x
is itself a hypothesis of type y = e
or e = y
, y
is substituted instead.
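For illustration, a minimal sketch:
example (x y : Nat) (h : x = y) (h' : x + 1 = 5) : y + 1 = 5 := by
  subst h   -- the equation h is used to identify x and y everywhere
  exact h'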
subst_eqs
subst_eqs
repeatedly substitutes according to the equality proof hypotheses in the context,
replacing the left side of the equality with the right, until no more progress can be made.
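For illustration, a minimal sketch:
example (x y z : Nat) (h₁ : x = y) (h₂ : y = z) : x = z := by
  subst_eqs   -- substitutes both equations, leaving a reflexive goal
  rfl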
success_if_fail_with_msg
success_if_fail_with_msg msg tacs
runs tacs
and succeeds only if they fail with the message
msg
.
msg
can be any term that evaluates to an explicit String
.
suffices
suffices
Given a main goal ctx ⊢ t
, suffices h : t' from e
replaces the main goal with ctx ⊢ t'
, where e
must have type t
in the context ctx, h : t'
.
The variant suffices h : t' by tac
is a shorthand for suffices h : t' from by tac
.
If h :
is omitted, the name this
is used.
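For illustration, a minimal sketch of the from form:
example (p q : Prop) (hpq : p → q) (hp : p) : q := by
  suffices h : p from hpq h   -- the main goal becomes p
  exact hp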
swap_var
swap_var swap_rule₁, swap_rule₂, ⋯
applies swap_rule₁
then swap_rule₂
then ⋯
.
A swap_rule is of the form x y
or x ↔ y
, and "applying it" means swapping the variable name
x
by y
and vice-versa on all hypotheses and the goal.
example {P Q : Prop} (q : P) (p : Q) : P ∧ Q := P:PropQ:Propp:Pq:Q⊢ P ∧ Q
P:PropQ:Propp:Pq:Q⊢ P ∧ Q
All goals completed! 🐙
symm
- symm applies to a goal whose target has the form t ~ u where ~ is a symmetric relation, that is, a relation which has a symmetry lemma tagged with the attribute [symm]. It replaces the target with u ~ t.
- symm at h will rewrite a hypothesis h : t ~ u to h : u ~ t.
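For illustration, a minimal sketch (Eq is a symmetric relation):
example (a b : Nat) (h : a = b) : b = a := by
  symm   -- the target becomes a = b
  exact h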
symm_saturate
For every hypothesis h : a ~ b
where a @[symm]
lemma is available,
add a hypothesis h_symm : b ~ a
.
tauto
tauto
breaks down assumptions of the form _ ∧ _
, _ ∨ _
, _ ↔ _
and ∃ _, _
and splits a goal of the form _ ∧ _
, _ ↔ _
or ∃ _, _
until it can be discharged
using reflexivity
or solve_by_elim
.
This is a finishing tactic: it either closes the goal or raises an error.
The Lean 3 version of this tactic by default attempted to avoid classical reasoning
where possible. This Lean 4 version makes no such attempt. The itauto
tactic
is designed for that purpose.
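For illustration, a minimal sketch (assuming the relevant Mathlib imports):
example (p q : Prop) (h : p ∧ q) : q ∨ p := by
  tauto   -- breaks down the conjunction and picks a disjunct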
tauto_set
tauto_set
attempts to prove tautologies involving hypotheses and goals of the form X ⊆ Y
or X = Y
, where X
, Y
are expressions built using ∪, ∩, \, and ᶜ from finitely many
variables of type Set α
. It also unfolds expressions of the form Disjoint A B
and
symmDiff A B
.
Examples:
example {α} (A B C D : Set α) (h1 : A ⊆ B) (h2 : C ⊆ D) : C \ B ⊆ D \ A := α:Type u_1A:Set αB:Set αC:Set αD:Set αh1:A ⊆ Bh2:C ⊆ D⊢ C \ B ⊆ D \ A
All goals completed! 🐙
example {α} (A B C : Set α) (h1 : A ⊆ B ∪ C) : (A ∩ B) ∪ (A ∩ C) = A := α:Type u_1A:Set αB:Set αC:Set αh1:A ⊆ B ∪ C⊢ A ∩ B ∪ A ∩ C = A
All goals completed! 🐙
tfae_finish
tfae_finish
is used to close goals of the form TFAE [P₁, P₂, ...]
once a sufficient collection
of hypotheses of the form Pᵢ → Pⱼ
or Pᵢ ↔ Pⱼ
have been introduced to the local context.
tfae_have
can be used to conveniently introduce these hypotheses; see tfae_have
.
Example:
example : TFAE [P, Q, R] := by tfae_have 1 → 2 := sorry /- proof of P → Q -/ tfae_have 2 → 1 := sorry /- proof of Q → P -/ tfae_have 2 ↔ 3 := sorry /- proof of Q ↔ R -/ tfae_finish
tfae_have
tfae_have
introduces hypotheses for proving goals of the form TFAE [P₁, P₂, ...]
. Specifically,
tfae_have i <arrow> j := ...
introduces a hypothesis of type Pᵢ <arrow> Pⱼ
to the local
context, where <arrow>
can be →
, ←
, or ↔
. Note that i
and j
are natural number indices
(beginning at 1) used to specify the propositions P₁, P₂, ...
that appear in the goal.
example (h : P → R) : TFAE [P, Q, R] := by tfae_have 1 → 3 := h ...
The resulting context now includes tfae_1_to_3 : P → R
.
Once sufficient hypotheses have been introduced by tfae_have
, tfae_finish
can be used to close
the goal. For example,
example : TFAE [P, Q, R] := by tfae_have 1 → 2 := sorry /- proof of P → Q -/ tfae_have 2 → 1 := sorry /- proof of Q → P -/ tfae_have 2 ↔ 3 := sorry /- proof of Q ↔ R -/ tfae_finish
All features of have
are supported by tfae_have
, including naming, matching,
destructuring, and goal creation. These are demonstrated below.
example : TFAE [P, Q] := by -- assert `tfae_1_to_2 : P → Q`: tfae_have 1 → 2 := sorry -- assert `hpq : P → Q`: tfae_have hpq : 1 → 2 := sorry -- match on `p : P` and prove `Q` via `f p`: tfae_have 1 → 2 | p => f p -- assert `pq : P → Q`, `qp : Q → P`: tfae_have ⟨pq, qp⟩ : 1 ↔ 2 := sorry -- assert `h : P → Q`; `?a` is a new goal: tfae_have h : 1 → 2 := f ?a ...
tfae_have
"Goal-style" tfae_have
syntax is deprecated. Now, tfae_have ...
should be followed by := ...
; see below for the new behavior. This warning can be turned off with set_option Mathlib.Tactic.TFAE.useDeprecated true
.
tfae_have
introduces hypotheses for proving goals of the form TFAE [P₁, P₂, ...]
. Specifically,
tfae_have i <arrow> j := ...
introduces a hypothesis of type Pᵢ <arrow> Pⱼ
to the local
context, where <arrow>
can be →
, ←
, or ↔
. Note that i
and j
are natural number indices
(beginning at 1) used to specify the propositions P₁, P₂, ...
that appear in the goal.
example (h : P → R) : TFAE [P, Q, R] := by tfae_have 1 → 3 := h ...
The resulting context now includes tfae_1_to_3 : P → R
.
Once sufficient hypotheses have been introduced by tfae_have
, tfae_finish
can be used to close
the goal. For example,
example : TFAE [P, Q, R] := by tfae_have 1 → 2 := sorry /- proof of P → Q -/ tfae_have 2 → 1 := sorry /- proof of Q → P -/ tfae_have 2 ↔ 3 := sorry /- proof of Q ↔ R -/ tfae_finish
All features of have
are supported by tfae_have
, including naming, matching,
destructuring, and goal creation. These are demonstrated below.
example : TFAE [P, Q] := by -- assert `tfae_1_to_2 : P → Q`: tfae_have 1 → 2 := sorry -- assert `hpq : P → Q`: tfae_have hpq : 1 → 2 := sorry -- match on `p : P` and prove `Q` via `f p`: tfae_have 1 → 2 | p => f p -- assert `pq : P → Q`, `qp : Q → P`: tfae_have ⟨pq, qp⟩ : 1 ↔ 2 := sorry -- assert `h : P → Q`; `?a` is a new goal: tfae_have h : 1 → 2 := f ?a ...
toFinite_tac
A tactic (for use in default params) that applies Set.toFinite
to synthesize a Set.Finite
term.
to_encard_tac
A tactic useful for transferring proofs for encard
to their corresponding card
statements.
trans
trans
applies to a goal whose target has the form t ~ u
where ~
is a transitive relation,
that is, a relation which has a transitivity lemma tagged with the attribute [trans].
- trans s replaces the goal with the two subgoals t ~ s and s ~ u.
- If s is omitted, then a metavariable is used instead.
Additionally, trans
also applies to a goal whose target has the form t → u
,
in which case it replaces the goal with t → s
and s → u
.
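For illustration, a minimal sketch using the transitive relation ≤ on Nat (assuming the relevant Mathlib imports):
example (a b c : Nat) (h₁ : a ≤ b) (h₂ : b ≤ c) : a ≤ c := by
  trans b
  · exact h₁   -- a ≤ b
  · exact h₂   -- b ≤ c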
trivial
trivial
tries different simple tactics (e.g., rfl
, contradiction
, ...)
to close the current goal.
You can use the command macro_rules
to extend the set of tactics used. Example:
macro_rules | `(tactic| trivial) => `(tactic| simp)
try?
unfold
- unfold id unfolds all occurrences of the definition id in the target.
- unfold id1 id2 ... is equivalent to unfold id1; unfold id2; ....
- unfold id at h unfolds at the hypothesis h.
Definitions can be either global or local definitions.
For non-recursive global definitions, this tactic is identical to delta
.
For recursive global definitions, it uses the "unfolding lemma" id.eq_def
,
which is generated for each recursive definition, to unfold according to the recursive definition given by the user.
Only one level of unfolding is performed, in contrast to simp only [id]
, which unfolds definition id
recursively.
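For illustration, a minimal sketch with an illustrative non-recursive definition double:
def double (n : Nat) : Nat := n + n

example : double 2 = 4 := by
  unfold double   -- the target becomes 2 + 2 = 4
  rfl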
unfold?
Replace the selected expression with a definitional unfolding.
- After each unfolding, we apply whnfCore to simplify the expression.
- Explicit natural number expressions are evaluated.
- Unfolds of class projections of instances marked with @[default_instance] are not shown. This is relevant for notational type classes like +: we don't want to suggest Add.add a b as an unfolding of a + b. Similarly for OfNat n : Nat which unfolds into n : Nat.
To use unfold?
, shift-click an expression in the tactic state.
This gives a list of rewrite suggestions for the selected expression.
Click on a suggestion to replace unfold?
by a tactic that performs this rewrite.
unfold_let
This tactic is subsumed by the unfold
tactic.
unfold_let x y z at loc
unfolds the local definitions x
, y
, and z
at the given
location, which is known as "zeta reduction."
This also exists as a conv
-mode tactic.
If no local definitions are given, then all local definitions are unfolded.
This variant also exists as the conv
-mode tactic zeta
.
unfold_projs
unfold_projs at loc
unfolds projections of class instances at the given location.
This also exists as a conv
-mode tactic.
unhygienic
unhygienic tacs
runs tacs
with name hygiene disabled.
This means that tactics that would normally create inaccessible names will instead
make regular variables. Warning: Tactics may change their variable naming
strategies at any time, so code that depends on autogenerated names is brittle.
Users should try not to use unhygienic
if possible.
example : ∀ x : Nat, x = x := ⊢ ∀ (x : ℕ), x = x unhygienic
x:ℕ⊢ x = x -- x would normally be intro'd as inaccessible
All goals completed! 🐙 -- refer to x
uniqueDiffWithinAt_Ici_Iic_univ
An auxiliary tactic closing goals UniqueDiffWithinAt ℝ s a
where
s ∈ {Iic a, Ici a, univ}
.
unreachable!
This tactic causes a panic when run (at compile time).
(This is distinct from exact unreachable!
, which inserts code which will panic at run time.)
It is intended for tests to assert that a tactic will never be executed, which is otherwise an
unusual thing to do (and the unreachableTactic
linter will give a warning if you do).
The unreachableTactic
linter has a special exception for uses of unreachable!
.
example : True := ⊢ True All goals completed! 🐙 <;> unreachable!
use
use e₁, e₂, ⋯
is similar to exists
, but unlike exists
it is equivalent to applying the tactic
refine ⟨e₁, e₂, ⋯, ?_, ⋯, ?_⟩
with any number of placeholders (rather than just one) and
then trying to close goals associated to the placeholders with a configurable discharger (rather
than just try trivial
).
Examples:
example : ∃ x : Nat, x = x := ⊢ ∃ x, x = x All goals completed! 🐙
example : ∃ x : Nat, ∃ y : Nat, x = y := ⊢ ∃ x y, x = y All goals completed! 🐙
example : ∃ x : String × String, x.1 = x.2 := ⊢ ∃ x, x.1 = x.2 All goals completed! 🐙
use! e₁, e₂, ⋯
is similar but it applies constructors everywhere rather than just for
goals that correspond to the last argument of a constructor. This gives the effect that
nested constructors are being flattened out, with the supplied values being used along the
leaves and nodes of the tree of constructors.
With use!
one can feed in each 42
one at a time:
example : ∃ p : Nat × Nat, p.1 = p.2 := by use! 42, 42 example : ∃ p : Nat × Nat, p.1 = p.2 := by use! (42, 42)
The second line makes use of the fact that use!
tries refining with the argument before
applying a constructor. Also note that use
/use!
by default uses a tactic
called use_discharger
to discharge goals, so use! 42
will close the goal in this example since
use_discharger
applies rfl
, which as a consequence solves for the other Nat
metavariable.
These tactics take an optional discharger to handle remaining explicit Prop
constructor arguments.
By default it is use (discharger := try with_reducible use_discharger) e₁, e₂, ⋯
.
To turn off the discharger and keep all goals, use (discharger := skip)
.
To allow "heavy refls", use (discharger := try use_discharger)
.
use!
use e₁, e₂, ⋯
is similar to exists
, but unlike exists
it is equivalent to applying the tactic
refine ⟨e₁, e₂, ⋯, ?_, ⋯, ?_⟩
with any number of placeholders (rather than just one) and
then trying to close goals associated to the placeholders with a configurable discharger (rather
than just try trivial
).
Examples:
example : ∃ x : Nat, x = x := ⊢ ∃ x, x = x All goals completed! 🐙
example : ∃ x : Nat, ∃ y : Nat, x = y := ⊢ ∃ x y, x = y All goals completed! 🐙
example : ∃ x : String × String, x.1 = x.2 := ⊢ ∃ x, x.1 = x.2 All goals completed! 🐙
use! e₁, e₂, ⋯
is similar but it applies constructors everywhere rather than just for
goals that correspond to the last argument of a constructor. This gives the effect that
nested constructors are being flattened out, with the supplied values being used along the
leaves and nodes of the tree of constructors.
With use!
one can feed in each 42
one at a time:
example : ∃ p : Nat × Nat, p.1 = p.2 := by use! 42, 42 example : ∃ p : Nat × Nat, p.1 = p.2 := by use! (42, 42)
The second line makes use of the fact that use!
tries refining with the argument before
applying a constructor. Also note that use
/use!
by default uses a tactic
called use_discharger
to discharge goals, so use! 42
will close the goal in this example since
use_discharger
applies rfl
, which as a consequence solves for the other Nat
metavariable.
These tactics take an optional discharger to handle remaining explicit Prop
constructor arguments.
By default it is use (discharger := try with_reducible use_discharger) e₁, e₂, ⋯
.
To turn off the discharger and keep all goals, use (discharger := skip)
.
To allow "heavy refls", use (discharger := try use_discharger)
.
use_discharger
Default discharger to try to use for the use
and use!
tactics.
This is similar to the trivial
tactic but doesn't do things like contradiction
or decide
.
use_finite_instance
whnf
whnf at loc
puts the given location into weak-head normal form.
This also exists as a conv
-mode tactic.
Weak-head normal form is when the outermost expression has been fully reduced; the expression may still contain subexpressions which have not been reduced.
with_panel_widgets
Display the selected panel widgets in the nested tactic script. For example,
assuming we have written a GeometryDisplay
component,
by with_panel_widgets [GeometryDisplay]
simp
rfl
will show the geometry display alongside the usual tactic state throughout the proof.
with_reducible
with_reducible tacs
executes tacs
using the reducible transparency setting.
In this setting only definitions tagged as [reducible]
are unfolded.
with_reducible_and_instances
with_reducible_and_instances tacs
executes tacs
using the .instances
transparency setting.
In this setting only definitions tagged as [reducible]
or type class instances are unfolded.
with_unfolding_all
with_unfolding_all tacs
executes tacs
using the .all
transparency setting.
In this setting all definitions that are not opaque are unfolded.
wlog
wlog h : P
will add an assumption h : P
to the main goal, and add a side goal that requires
showing that the case h : ¬ P
can be reduced to the case where P
holds (typically by symmetry).
The side goal will be at the top of the stack. In this side goal, there will be two additional assumptions:
- h : ¬ P: the assumption that P does not hold
- this: which is the statement that in the old context P suffices to prove the goal. By default, the name this is used, but the idiom with H can be added to specify the name: wlog h : P with H.
Typically, it is useful to use the variant wlog h : P generalizing x y
,
to revert certain parts of the context before creating the new goal.
In this way, the wlog-claim this
can be applied to x
and y
in different orders
(exploiting symmetry, which is the typical use case).
By default, the entire context is reverted.
zify
The zify
tactic is used to shift propositions from Nat
to Int
.
This is often useful since Int
has well-behaved subtraction.
example (a b c x y z : Nat) (h : ¬ x*y*z < 0) : c < a + 3*b := by zify zify at h /- h : ¬↑x * ↑y * ↑z < 0 ⊢ ↑c < ↑a + 3 * ↑b -/
zify
can be given extra lemmas to use in simplification. This is especially useful in the
presence of nat subtraction: passing ≤
arguments will allow push_cast
to do more work.
example (a b c : Nat) (h : a - b < c) (hab : b ≤ a) : false := by zify [hab] at h /- h : ↑a - ↑b < ↑c -/
zify
makes use of the @[zify_simps]
attribute to move propositions,
and the push_cast
tactic to simplify the Int
-valued expressions.
zify
is in some sense dual to the lift
tactic.
lift (z : Int) to Nat
will change the type of an
integer z
(in the supertype) to Nat
(the subtype), given a proof that z ≥ 0
;
propositions concerning z
will still be over Int
.
zify
changes propositions about Nat
(the subtype) to propositions about Int
(the supertype),
without changing the type of any variable.