Zulip Chat Archive
Stream: Is there code for X?
Topic: undergraduate calculus
Kevin Buzzard (Apr 13 2020 at 13:36):
Can we do things like this now?
import analysis.complex.exponential
open real

example : deriv (λ x, cos (sin x) * exp x) = λ x, (cos(sin(x))-sin(sin(x))*cos(x))*exp(x) :=
sorry
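For reference, the pen-and-paper computation the statement encodes (chain rule inside the product rule):

d/dx [cos(sin x) · exp x] = (-sin(sin x) · cos x) · exp x + cos(sin x) · exp x
                          = (cos(sin x) - sin(sin x) · cos x) · exp x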
Kenny Lau (Apr 13 2020 at 13:43):
import analysis.complex.exponential
open complex

example (x) : deriv (λ x, cos (sin x) * exp x) x = (cos(sin(x))-sin(sin(x))*cos(x))*exp(x) :=
by rw [deriv_mul (differentiable_at.comp _ (differentiable_cos _) (differentiable_sin _))
      (differentiable_exp _),
    deriv.comp _ (differentiable_cos _) (differentiable_sin _),
    deriv_cos, deriv_sin, deriv_exp, smul_eq_mul];
  ring
Kenny Lau (Apr 13 2020 at 13:43):
I cheated rather (un)subtly though
Patrick Massot (Apr 13 2020 at 13:51):
Kevin's question was clearly under-specified.
Patrick Massot (Apr 13 2020 at 13:51):
So I think it's fair.
Patrick Massot (Apr 13 2020 at 13:55):
deriv.comp being expressed in terms of smul is bugging me. I guess this comes from our general base field bias, but it's still bugging me. @Sebastien Gouezel ?
Kevin Buzzard (Apr 13 2020 at 14:02):
So you changed the question from real to complex? Can we not do real yet?
Johan Commelin (Apr 13 2020 at 14:02):
example : deriv (λ x, cos (sin x) * exp x) = λ x, (cos(sin(x))-sin(sin(x))*cos(x))*exp(x) :=
begin
  funext x,
  rw [deriv_mul _ _, deriv_exp, ← add_mul, add_comm],
  { congr' 2,
    show deriv (cos ∘ sin) x = _,
    rw [deriv.comp, deriv_sin, deriv_cos, smul_eq_mul],
    { ring },
    { apply differentiable_cos },
    { apply differentiable_sin }, },
  { apply differentiable_cos.comp differentiable_sin },
  { apply differentiable_exp }
end
Johan Commelin (Apr 13 2020 at 14:02):
That's for real
Patrick Massot (Apr 13 2020 at 14:03):
Johan, your version is complex as well
Kevin Buzzard (Apr 13 2020 at 14:03):
It works for real
Patrick Massot (Apr 13 2020 at 14:03):
Sure
Patrick Massot (Apr 13 2020 at 14:04):
You can put (x : real) in the statement.
Kevin Buzzard (Apr 13 2020 at 14:05):
gaagh, if I had found deriv.comp I probably wouldn't have even asked this. I thought I checked? Maybe I tried deriv_comp?
Kevin Buzzard (Apr 13 2020 at 14:05):
I found deriv_cos, maybe that was what threw me off.
Johan Commelin (Apr 13 2020 at 14:05):
Patrick Massot said:
Johan, your version is complex as well
Why is my version complex? After the funext I have an x : real in my goal state.
Kevin Buzzard (Apr 13 2020 at 14:06):
if you change open real to open complex it still works
Patrick Massot (Apr 13 2020 at 14:06):
The most readable version so far is:
example (x) : deriv (λ x, cos (sin x) * exp x) x = (cos(sin(x))-sin(sin(x))*cos(x))*exp(x) :=
begin
  rw [deriv_mul, deriv.comp, deriv_cos, deriv_sin, deriv_exp, smul_eq_mul],
  ring,
  apply differentiable_cos,
  apply differentiable_sin,
  apply differentiable_cos.comp differentiable_sin,
  apply differentiable_exp,
end
Johan Commelin (Apr 13 2020 at 14:07):
Anyway, the interesting question is... how do we make this by simp; ring
Patrick Massot (Apr 13 2020 at 14:07):
Which clearly shows we need a tactic taking care of the four apply at the end.
Johan Commelin (Apr 13 2020 at 14:07):
Or by power_through_analysis
Kevin Buzzard (Apr 13 2020 at 14:07):
I was wondering how readable we could get it. I was thinking of undergraduates.
Kevin Buzzard (Apr 13 2020 at 14:07):
I am pretty sure that a UG at my university solves this question using an algorithm
Kevin Buzzard (Apr 13 2020 at 14:08):
They could even differentiate log(x)+log(-x) using the same algorithm.
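(For reference, that algorithm mechanically gives d/dx [log(x) + log(-x)] = 1/x + (-1)/(-x) = 2/x, even though no real x lies in the domain of both terms.)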
Patrick Massot (Apr 13 2020 at 14:23):
This is all very weird. Does anyone know why
import analysis.complex.exponential
open complex

example (x : ℂ) : differentiable_at ℂ sin x := differentiable_cos x

gives a time-out? I would expect it to fail very quickly. Note that the sin vs cos mismatch is intentional: I want a type error, not a timeout.
Johan Commelin (Apr 13 2020 at 14:23):
Had the same problem!
Kevin Buzzard (Apr 13 2020 at 14:35):
import analysis.complex.exponential
open complex

set_option trace.type_context.is_def_eq true
set_option trace.type_context.is_def_eq_detail true

example (x : ℂ) : differentiable_at ℂ sin x := differentiable_cos x
Kevin Buzzard (Apr 13 2020 at 14:36):
https://gist.github.com/kbuzzard/b995aa029459a043b807695ea6d84ae7
Kevin Buzzard (Apr 13 2020 at 14:37):
It is plausible that sin = cos
Kevin Buzzard (Apr 13 2020 at 14:38):
It takes a long time to fail, and it seems to ask this question many many times
[type_context.is_def_eq_detail] [18]: sin x' =?= cos x'
[type_context.is_def_eq_detail] [19]: (exp (-x' * I) - exp (x' * I)) * I / 2 =?= (exp (x' * I) + exp (-x' * I)) / 2
[type_context.is_def_eq_detail] [20]: algebra.div ((exp (-x' * I) - exp (x' * I)) * I) 2 =?= algebra.div (exp (x' * I) + exp (-x' * I)) 2
[type_context.is_def_eq_detail] unfold left&right: algebra.div
[type_context.is_def_eq_detail] [21]: (exp (-x' * I) - exp (x' * I)) * I * 2⁻¹ =?= (exp (x' * I) + exp (-x' * I)) * 2⁻¹
[type_context.is_def_eq_detail] [22]: mul_zero_class.mul ((exp (-x' * I) - exp (x' * I)) * I) 2⁻¹ =?= mul_zero_class.mul (exp (x' * I) + exp (-x' * I)) 2⁻¹
[type_context.is_def_eq_detail] [23]: semiring.mul ((exp (-x' * I) - exp (x' * I)) * I) 2⁻¹ =?= semiring.mul (exp (x' * I) + exp (-x' * I)) 2⁻¹
[type_context.is_def_eq_detail] [24]: ring.mul ((exp (-x' * I) - exp (x' * I)) * I) 2⁻¹ =?= ring.mul (exp (x' * I) + exp (-x' * I)) 2⁻¹
[type_context.is_def_eq_detail] [25]: division_ring.mul ((exp (-x' * I) - exp (x' * I)) * I) 2⁻¹ =?= division_ring.mul (exp (x' * I) + exp (-x' * I)) 2⁻¹
[type_context.is_def_eq_detail] [26]: field.mul ((exp (-x' * I) - exp (x' * I)) * I) 2⁻¹ =?= field.mul (exp (x' * I) + exp (-x' * I)) 2⁻¹
[type_context.is_def_eq_detail] [27]: comm_ring.mul ((exp (-x' * I) - exp (x' * I)) * I) 2⁻¹ =?= comm_ring.mul (exp (x' * I) + exp (-x' * I)) 2⁻¹
[type_context.is_def_eq_detail] [28]: (exp (-x' * I) - exp (x' * I)) * I * 2⁻¹ =?= (exp (x' * I) + exp (-x' * I)) * 2⁻¹
[type_context.is_def_eq_detail] [29]: (λ (z w : ℂ), {re := z.re * w.re - z.im * w.im, im := z.re * w.im + z.im * w.re}) ((exp (-x' * I) - exp (x' * I)) * I) 2⁻¹ =?= (λ (z w : ℂ), {re := z.re * w.re - z.im * w.im, im := z.re * w.im + z.im * w.re}) (exp (x' * I) + exp (-x' * I)) 2⁻¹
[type_context.is_def_eq_detail] after whnf_core: {re := ((exp (-x' * I) - exp (x' * I)) * I).re * 2⁻¹.re - ((exp (-x' * I) - exp (x' * I)) * I).im * 2⁻¹.im, im := ((exp (-x' * I) - exp (x' * I)) * I).re * 2⁻¹.im + ((exp (-x' * I) - exp (x' * I)) * I).im * 2⁻¹.re} =?= {re := (exp (x' * I) + exp (-x' * I)).re * 2⁻¹.re - (exp (x' * I) + exp (-x' * I)).im * 2⁻¹.im, im := (exp (x' * I) + exp (-x' * I)).re * 2⁻¹.im + (exp (x' * I) + exp (-x' * I)).im * 2⁻¹.re}
[type_context.is_def_eq_detail] [30]: ((exp (-x' * I) - exp (x' * I)) * I).re * 2⁻¹.re - ((exp (-x' * I) - exp (x' * I)) * I).im * 2⁻¹.im =?= (exp (x' * I) + exp (-x' * I)).re * 2⁻¹.re - (exp (x' * I) + exp (-x' * I)).im * 2⁻¹.im
[type_context.is_def_eq_detail] [31]: algebra.sub (((exp (-x' * I) - exp (x' * I)) * I).re * 2⁻¹.re) (((exp (-x' * I) - exp (x' * I)) * I).im * 2⁻¹.im) =?= algebra.sub ((exp (x' * I) + exp (-x' * I)).re * 2⁻¹.re) ((exp (x' * I) + exp (-x' * I)).im * 2⁻¹.im)
[type_context.is_def_eq_detail] unfold left&right: algebra.sub
[type_context.is_def_eq_detail] [32]: ((exp (-x' * I) - exp (x' * I)) * I).re * 2⁻¹.re + -(((exp (-x' * I) - exp (x' * I)) * I).im * 2⁻¹.im) =?= (exp (x' * I) + exp (-x' * I)).re * 2⁻¹.re + -((exp (x' * I) + exp (-x' * I)).im * 2⁻¹.im)
[type_context.is_def_eq_detail] [33]: add_semigroup.add (((exp (-x' * I) - exp (x' * I)) * I).re * 2⁻¹.re) (-(((exp (-x' * I) - exp (x' * I)) * I).im * 2⁻¹.im)) =?= add_semigroup.add ((exp (x' * I) + exp (-x' * I)).re * 2⁻¹.re) (-((exp (x' * I) + exp (-x' * I)).im * 2⁻¹.im))
[type_context.is_def_eq_detail] [34]: add_monoid.add (((exp (-x' * I) - exp (x' * I)) * I).re * 2⁻¹.re) (-(((exp (-x' * I) - exp (x' * I)) * I).im * 2⁻¹.im)) =?= add_monoid.add ((exp (x' * I) + exp (-x' * I)).re * 2⁻¹.re) (-((exp (x' * I) + exp (-x' * I)).im * 2⁻¹.im))
[type_context.is_def_eq_detail] [35]: add_group.add (((exp (-x' * I) - exp (x' * I)) * I).re * 2⁻¹.re) (-(((exp (-x' * I) - exp (x' * I)) * I).im * 2⁻¹.im)) =?= add_group.add ((exp (x' * I) + exp (-x' * I)).re * 2⁻¹.re) (-((exp (x' * I) + exp (-x' * I)).im * 2⁻¹.im))
[type_context.is_def_eq_detail] [36]: add_comm_group.add (((exp (-x' * I) - exp (x' * I)) * I).re * 2⁻¹.re) (-(((exp (-x' * I) - exp (x' * I)) * I).im * 2⁻¹.im)) =?= add_comm_group.add ((exp (x' * I) + exp (-x' * I)).re * 2⁻¹.re) (-((exp (x' * I) + exp (-x' * I)).im * 2⁻¹.im))
[type_context.is_def_eq_detail] [37]: ring.add (((exp (-x' * I) - exp (x' * I)) * I).re * 2⁻¹.re) (-(((exp (-x' * I) - exp (x' * I)) * I).im * 2⁻¹.im)) =?= ring.add ((exp (x' * I) + exp (-x' * I)).re * 2⁻¹.re) (-((exp (x' * I) + exp (-x' * I)).im * 2⁻¹.im))
[type_context.is_def_eq_detail] [38]: comm_ring.add (((exp (-x' * I) - exp (x' * I)) * I).re * 2⁻¹.re) (-(((exp (-x' * I) - exp (x' * I)) * I).im * 2⁻¹.im)) =?= comm_ring.add ((exp (x' * I) + exp (-x' * I)).re * 2⁻¹.re) (-((exp (x' * I) + exp (-x' * I)).im * 2⁻¹.im))
[type_context.is_def_eq_detail] [39]: comm_ring.add (((exp (-x' * I) - exp (x' * I)) * I).re * 2⁻¹.re) (-(((exp (-x' * I) - exp (x' * I)) * I).im * 2⁻¹.im)) =?= comm_ring.add ((exp (x' * I) + exp (-x' * I)).re * 2⁻¹.re) (-((exp (x' * I) + exp (-x' * I)).im * 2⁻¹.im))
[type_context.is_def_eq_detail] [40]: ((exp (-x' * I) - exp (x' * I)) * I).re * 2⁻¹.re =?= (exp (x' * I) + exp (-x' * I)).re * 2⁻¹.re
[type_context.is_def_eq_detail] [41]: no_zero_divisors.mul ((exp (-x' * I) - exp (x' * I)) * I).re 2⁻¹.re =?= no_zero_divisors.mul (exp (x' * I) + exp (-x' * I)).re 2⁻¹.re
[type_context.is_def_eq_detail] [42]: domain.mul ((exp (-x' * I) - exp (x' * I)) * I).re 2⁻¹.re =?= domain.mul (exp (x' * I) + exp (-x' * I)).re 2⁻¹.re
[type_context.is_def_eq_detail] [43]: linear_ordered_ring.mul ((exp (-x' * I) - exp (x' * I)) * I).re 2⁻¹.re =?= linear_ordered_ring.mul (exp (x' * I) + exp (-x' * I)).re 2⁻¹.re
[type_context.is_def_eq_detail] [44]: linear_ordered_comm_ring.mul ((exp (-x' * I) - exp (x' * I)) * I).re 2⁻¹.re =?= linear_ordered_comm_ring.mul (exp (x' * I) + exp (-x' * I)).re 2⁻¹.re
[type_context.is_def_eq_detail] [45]: comm_ring.mul ((exp (-x' * I) - exp (x' * I)) * I).re 2⁻¹.re =?= comm_ring.mul (exp (x' * I) + exp (-x' * I)).re 2⁻¹.re
[type_context.is_def_eq_detail] [46]: comm_ring.mul ((exp (-x' * I) - exp (x' * I)) * I).re 2⁻¹.re =?= comm_ring.mul (exp (x' * I) + exp (-x' * I)).re 2⁻¹.re
[type_context.is_def_eq_detail] [47]: ((exp (-x' * I) - exp (x' * I)) * I).re =?= (exp (x' * I) + exp (-x' * I)).re
[type_context.is_def_eq_detail] [48]: (exp (-x' * I) - exp (x' * I)).re * I.re - (exp (-x' * I) - exp (x' * I)).im * I.im =?= (exp (x' * I)).re + (exp (-x' * I)).re
[type_context.is_def_eq_detail] [49]: algebra.sub ((exp (-x' * I) - exp (x' * I)).re * I.re) ((exp (-x' * I) - exp (x' * I)).im * I.im) =?= distrib.add (exp (x' * I)).re (exp (-x' * I)).re
[type_context.is_def_eq_detail] unfold left: algebra.sub
[type_context.is_def_eq_detail] [50]: (exp (-x' * I) - exp (x' * I)).re * I.re + -((exp (-x' * I) - exp (x' * I)).im * I.im) =?= distrib.add (exp (x' * I)).re (exp (-x' * I)).re
[type_context.is_def_eq_detail] [51]: add_semigroup.add ((exp (-x' * I) - exp (x' * I)).re * I.re) (-((exp (-x' * I) - exp (x' * I)).im * I.im)) =?= ring.add (exp (x' * I)).re (exp (-x' * I)).re
[type_context.is_def_eq_detail] [52]: add_monoid.add ((exp (-x' * I) - exp (x' * I)).re * I.re) (-((exp (-x' * I) - exp (x' * I)).im * I.im)) =?= comm_ring.add (exp (x' * I)).re (exp (-x' * I)).re
[type_context.is_def_eq_detail] [53]: add_group.add ((exp (-x' * I) - exp (x' * I)).re * I.re) (-((exp (-x' * I) - exp (x' * I)).im * I.im)) =?= comm_ring.add (exp (x' * I)).re (exp (-x' * I)).re
[type_context.is_def_eq_detail] [54]: add_comm_group.add ((exp (-x' * I) - exp (x' * I)).re * I.re) (-((exp (-x' * I) - exp (x' * I)).im * I.im)) =?= comm_ring.add (exp (x' * I)).re (exp (-x' * I)).re
[type_context.is_def_eq_detail] [55]: ring.add ((exp (-x' * I) - exp (x' * I)).re * I.re) (-((exp (-x' * I) - exp (x' * I)).im * I.im)) =?= comm_ring.add (exp (x' * I)).re (exp (-x' * I)).re
[type_context.is_def_eq_detail] [56]: comm_ring.add ((exp (-x' * I) - exp (x' * I)).re * I.re) (-((exp (-x' * I) - exp (x' * I)).im * I.im)) =?= comm_ring.add (exp (x' * I)).re (exp (-x' * I)).re
[type_context.is_def_eq_detail] [57]: comm_ring.add ((exp (-x' * I) - exp (x' * I)).re * I.re) (-((exp (-x' * I) - exp (x' * I)).im * I.im)) =?= comm_ring.add (exp (x' * I)).re (exp (-x' * I)).re
[type_context.is_def_eq_detail] [58]: (exp (-x' * I) - exp (x' * I)).re * I.re =?= (exp (x' * I)).re
[type_context.is_def_eq_detail] [59]: no_zero_divisors.mul (exp (-x' * I) - exp (x' * I)).re I.re =?= (exp (x' * I)).re
[type_context.is_def_eq_detail] [60]: domain.mul (exp (-x' * I) - exp (x' * I)).re I.re =?= (exp (x' * I)).re
[type_context.is_def_eq_detail] [61]: linear_ordered_ring.mul (exp (-x' * I) - exp (x' * I)).re I.re =?= (exp (x' * I)).re
[type_context.is_def_eq_detail] [62]: linear_ordered_comm_ring.mul (exp (-x' * I) - exp (x' * I)).re I.re =?= (exp (x' * I)).re
[type_context.is_def_eq_detail] [63]: comm_ring.mul (exp (-x' * I) - exp (x' * I)).re I.re =?= (exp (x' * I)).re
[type_context.is_def_eq_detail] [64]: comm_ring.mul (exp (-x' * I) - exp (x' * I)).re I.re =?= (exp (x' * I)).re
[type_context.is_def_eq_detail] [65]: comm_ring.mul =?= re
Johan Commelin (Apr 13 2020 at 14:40):
attribute [irreducible] sin cos doesn't seem to help.
Kevin Buzzard (Apr 13 2020 at 14:42):
Lean gets into a loop trying to prove sin x' =?= cos x'. If you search for this string in the gist you can see it first occurs at depth [8], then [9], then [10], etc.
Johan Commelin (Apr 13 2020 at 14:42):
Yes, but shouldn't it help to make them irreducible?
Kevin Buzzard (Apr 13 2020 at 14:44):
The poison, or part of it, is this line:
[type_context.is_def_eq_detail] [5]: asymptotics.is_o (λ (x' : ?m_1), sin x' - sin x - ⇑f' (x' - x)) (λ (x' : ?m_1), x' - x) (nhds x) =?= asymptotics.is_o (λ (x' : ℂ), cos x' - cos ?m_8 - ⇑f' (x' - ?m_8)) (λ (x' : ℂ), x' - ?m_8) (nhds ?m_8)
Kevin Buzzard (Apr 13 2020 at 14:44):
Lean never recovers from this, it seems.
Kevin Buzzard (Apr 13 2020 at 14:45):
I should remark that because this stream was somehow added in a way that most people didn't notice, I can see right now that e.g. Reid and Rob are not subscribed to it.
Kevin Buzzard (Apr 13 2020 at 14:45):
or Gabriel
Johan Commelin (Apr 13 2020 at 14:45):
@Reid Barton @Rob Lewis @Gabriel Ebner are you interested in this stream?
Kevin Buzzard (Apr 13 2020 at 14:46):
I'll post in #general
Reid Barton (Apr 13 2020 at 14:46):
I think when it was created I was a little confused about why, but given that people are now using it, sure.
Patrick Massot (Apr 13 2020 at 14:51):
It's also true that this stream is very easy to derail. For instance, Kevin's question was legitimate in this thread, but then I should have switched when I hit this issue.
Sebastien Gouezel (Apr 13 2020 at 16:14):
Here is my version:
example (x : ℝ) : deriv (λ x, cos (sin x) * exp x) x = (cos(sin(x))-sin(sin(x))*cos(x))*exp(x) :=
begin
  apply has_deriv_at.deriv,
  convert ((has_deriv_at_cos _).comp _ (has_deriv_at_sin x)).mul (has_deriv_at_exp x),
  rw smul_eq_mul,
  ring
end

The smul_eq_mul shouldn't be here; I'll work on removing it.
Sebastien Gouezel (Apr 13 2020 at 19:00):
This becomes

example (x : ℝ) : deriv (λ x, cos (sin x) * exp x) x = (cos(sin(x))-sin(sin(x))*cos(x))*exp(x) :=
begin
  convert (((has_deriv_at_cos _).comp _ (has_deriv_at_sin x)).mul (has_deriv_at_exp x)).deriv,
  ring
end

with my next PR. The best thing is when you don't want to compute the derivative by hand: you write

example (x : ℝ) : deriv (λ x, cos (sin x) * exp x) x = 0 :=
begin
  convert (((has_deriv_at_cos _).comp _ (has_deriv_at_sin x)).mul (has_deriv_at_exp x)).deriv,
  ring,
end

which of course doesn't work, but the goal after the last ring is
0 = -(sin (sin x) * cos x * exp x) + (cos ∘ sin) x * exp x,
giving you, on the right, a formula for the derivative.
Sebastien Gouezel (Apr 13 2020 at 19:08):
See #2410
Johan Commelin (Apr 13 2020 at 19:12):
If there were a chain rule emoji, I would use it now (-;
Scott Morrison (Apr 14 2020 at 00:45):
Patrick Massot said:
Which clearly shows we need a tactic taking care of the four apply at the end.
The right tactic to build on is solve_by_elim. In general, if a sequence of apply Xi and exact Xj calls succeeds, then solve_by_elim [X1, X2, ..., Xn] should also succeed.
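As a sketch of what that could look like here (assuming the goals left after Patrick's rw and ring are exactly the four differentiability facts, and setting aside the apply/unification issues discussed below), the four applys would collapse into one call:

example (x) : deriv (λ x, cos (sin x) * exp x) x = (cos(sin(x))-sin(sin(x))*cos(x))*exp(x) :=
begin
  rw [deriv_mul, deriv.comp, deriv_cos, deriv_sin, deriv_exp, smul_eq_mul],
  ring,
  -- discharge the four differentiability side goals in one go
  all_goals { solve_by_elim [differentiable_cos, differentiable_sin,
    differentiable_cos.comp differentiable_sin, differentiable_exp] },
end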
Scott Morrison (Apr 14 2020 at 00:46):
You can also specify an attribute, so tagging differentiable_cos, differentiable_sin, differentiable_cos.comp, differentiable_exp with [differentiable] and writing solve_by_elim with differentiable should succeed.
Scott Morrison (Apr 14 2020 at 00:46):
We can also provide ways to automatically collect families of lemmas, I think!
Scott Morrison (Apr 14 2020 at 00:49):
I think it would be great to see how far continuity and differentiability problems can be solved like this.
Mario Carneiro (Apr 14 2020 at 00:50):
Does the apply bug cause any problems here? Does solve_by_elim use apply or a homegrown version?
Scott Morrison (Apr 14 2020 at 02:43):
Can you remind me what "the apply bug" is?
Scott Morrison (Apr 14 2020 at 02:43):
We recently fixed a bug in solve_by_elim because of stuck metavariables.
Reid Barton (Apr 14 2020 at 02:44):
Something about apply being confused about the number of _s to insert.
Scott Morrison (Apr 14 2020 at 02:44):
Which I think would have prevented this example from working, because the second application of differentiable_sin would have been blocked by having different values of the hidden implicit metavariables.
Scott Morrison (Apr 14 2020 at 02:44):
solve_by_elim uses the built-in apply.
Scott Morrison (Apr 14 2020 at 02:44):
So it would be great to know of an example where it fails because of whatever this issue is!
Reid Barton (Apr 14 2020 at 02:45):
i.e. apply f x is really refine f x _ _ ... _, and if you get the number of _s wrong then it probably won't work (since the type will be a function and not a Prop/whatever, or vice versa).
Reid Barton (Apr 14 2020 at 02:45):
try applying any lemma for continuity that doesn't take exactly one argument, IIRC
Patrick Massot (Apr 14 2020 at 07:55):
Any such solution will hit https://leanprover.zulipchat.com/#narrow/stream/113488-general/topic/sin.20.3D.20cos.20timeout with or without the apply bug. This is what started this sin vs cos discussion.
Sebastien Gouezel (Apr 14 2020 at 11:47):
See #2416 making is_o irreducible. I think it should be possible to get formulas for the derivative just by tagging some lemmas as simp lemmas; I will try this now.
Yury G. Kudryashov (Apr 14 2020 at 17:41):
Will it help to make only is_O_with irreducible?
Yury G. Kudryashov (Apr 14 2020 at 17:46):
I also want to have an easy way to "apply" is_o to h : 0 < c. If it isn't possible with irreducible is_o (didn't test), could you please add something like is_o.def : ∀ {{c}}, 0 < c → ∃ C, ... and is_o.def' : ∀ {{c}}, 0 < c → is_O_with c f g l?
Yury G. Kudryashov (Apr 14 2020 at 17:46):
BTW, why do you make only two of the three irreducible?
Sebastien Gouezel (Apr 14 2020 at 20:26):
example (x : ℝ) : deriv (λ x, cos (sin x) * exp x) x = (cos(sin(x))-sin(sin(x))*cos(x))*exp(x) :=
by { simp, ring }

See #2419.
Patrick Massot (Apr 14 2020 at 20:27):
Doesn't it mean too many simp lemmas? Do we need a simp set attribute here?
Sebastien Gouezel (Apr 14 2020 at 20:33):
Yes, that's one of the options I considered. Since all the lemmas I tag start with differentiable or differentiable_at or deriv, they should be very easy to discard in other situations, so I don't expect any serious performance hit, but I might be wrong on this and I would be happy to hear more informed opinions.
Sebastien Gouezel (Apr 14 2020 at 20:35):
In fact, I don't think it is important that Lean is able to directly compute the derivative of (λ x, cos (sin x) * exp x) (we never do this in real maths), except for one reason: to show undergrads that Lean can do nice things. And for that goal the feature should be as simple as possible, and I think it is hard to make something simpler than just a simp call.
Johan Commelin (Apr 14 2020 at 20:37):
simp with deriv is slightly more complicated, but not too bad, if this turns out to be a performance hit.
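A sketch of what such a simp set could look like (the attribute name deriv_simps, the mk_simp_attribute incantation, and the choice of tagged lemmas are illustrative assumptions, not taken from #2419):

import analysis.complex.exponential
open real

-- `deriv_simps` is a made-up name; an attribute literally called `deriv` would clash
-- with the declaration `deriv` itself
mk_simp_attribute deriv_simps "simp lemmas for computing derivatives"

attribute [deriv_simps] deriv_sin deriv_cos deriv_exp

example (x : ℝ) : deriv sin x = cos x :=
by simp with deriv_simps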
Sebastien Gouezel (Apr 14 2020 at 20:43):
Yes, if there is a performance hit I will definitely go for a simpset attribute. A dedicated tactic would be even more efficient: here, to check differentiability of exp (exp (exp (... x))), simp will go through the whole expression (length n), reach the last exp, and then ask if the rest of the expression (with n-1 functions) is differentiable. Then it will again go through the complete subexpression (length n-1) and so on, giving in the end something quadratic while this should be linear. I don't really mind, since this should never be applied in real, complicated examples.
Yury G. Kudryashov (Apr 14 2020 at 20:51):
A nice project for someone (not me) would be to formalize the following Arnold's problem:
The legend (as told by his nephew) says that he came up with a similar limit while commuting to MSU, computed it, then understood how to prove the answer with no computation, then asked it on the oral exam for PhD students (though he understood that the problem is too hard for students, so he asked professors).
Reid Barton (Apr 14 2020 at 20:51):
```latex
Johan Commelin (Apr 14 2020 at 20:51):
```latex x + y ```
Johan Commelin (Apr 14 2020 at 20:52):
Stupid mathjax doesn't know \arcsin?
Kevin Buzzard (Apr 14 2020 at 21:43):
Of course I asked this question precisely for undergraduate-related reasons. I don't care if simp does it, but perhaps one of the CS guys should tell us their opinion.
Kevin Buzzard (Apr 14 2020 at 21:44):
I don't think it's unreasonable to have different tactics doing different jobs; hardly anyone at Xena knows what simp does, they just try it on anything.
Kevin Buzzard (Apr 14 2020 at 21:44):
I guess Chris knows:-)
Gabriel Ebner (Apr 15 2020 at 08:29):
Sebastien Gouezel said:
[..] to check differentiability of exp (exp (exp (... x))), simp will go through the whole expression (length n), reach the last exp, and then ask if the rest of the expression (with n-1 functions) is differentiable. Then it will again go through the complete subexpression (length n-1) and so on, giving in the end something quadratic while this should be linear.
Did you actually have this issue? The simplifier has a cache, so this should have linear runtime.
Mario Carneiro (Apr 15 2020 at 08:31):
Isn't the output quadratic anyway? (The dedup term is linear but I doubt lean can keep that up.)
Yury G. Kudryashov (Apr 15 2020 at 11:13):
As far as I understand, the only way to avoid quadratic output with the current setup is to use has_fderiv_at
Yury G. Kudryashov (Apr 15 2020 at 11:14):
Though of course I may be wrong
Sebastien Gouezel (Apr 15 2020 at 12:18):
In fact I don't really care about the quadratic behavior, because I think we will just use this on toy examples. And if simp indeed has a cache (I didn't know this), then the quadratic behavior might not even be there.