
Mathlib.Probability.Variance

Variance of random variables #

We define the variance of a real-valued random variable as Var[X] = 𝔼[(X - 𝔼[X])^2] (in the ProbabilityTheory locale).
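
As a quick orientation, here is a minimal usage sketch. It relies only on the declarations documented on this page; the names Ω, μ and X are illustrative placeholders.

import Mathlib.Probability.Variance

open MeasureTheory ProbabilityTheory

-- An arbitrary measurable space with a measure `μ` and a real-valued random variable `X`
-- (illustrative names, not part of this file).
variable {Ω : Type*} [MeasurableSpace Ω] (μ : Measure Ω) (X : Ω → ℝ)

-- `evariance` takes values in ℝ≥0∞ (so it is always defined);
-- `variance` is its real-valued counterpart.
example : ENNReal := evariance X μ
example : ℝ := variance X μ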

Main definitions #

• ProbabilityTheory.evariance: the ℝ≥0∞-valued variance of a real-valued random variable, defined as a Lebesgue integral.
• ProbabilityTheory.variance: the ℝ-valued variance, obtained from evariance by applying ENNReal.toReal.

Main results #

• ProbabilityTheory.meas_ge_le_variance_div_sq: Chebyshev's inequality, bounding the probability of a deviation from the expectation in terms of the variance.
• ProbabilityTheory.IndepFun.variance_add and ProbabilityTheory.IndepFun.variance_sum: the variance of a sum of (pairwise) independent random variables is the sum of the variances.
• ProbabilityTheory.variance_le_sub_mul_sub and ProbabilityTheory.variance_le_sq_of_bounded: the Bhatia-Davis and Popoviciu inequalities for random variables bounded in an interval.

def ProbabilityTheory.evariance {Ω : Type u_1} :
{x : MeasurableSpace Ω} → (Ω → ℝ) → MeasureTheory.Measure Ω → ENNReal

The ℝ≥0∞-valued variance of a real-valued random variable defined as the Lebesgue integral of (X - 𝔼[X])^2.

Equations
Instances For
    def ProbabilityTheory.variance {Ω : Type u_1} :
    {x : MeasurableSpace Ω} → (Ω → ℝ) → MeasureTheory.Measure Ω → ℝ

    The ℝ-valued variance of a real-valued random variable, defined by applying ENNReal.toReal to evariance.

    Equations
    Instances For
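
The docstring above says variance is obtained from evariance by ENNReal.toReal. The following is a minimal sketch of that relationship, assuming (as the definition displayed here indicates) that the unfolding is definitional; the names Ω, μ, X are illustrative.

import Mathlib.Probability.Variance

open MeasureTheory ProbabilityTheory

variable {Ω : Type*} [MeasurableSpace Ω] (μ : Measure Ω) (X : Ω → ℝ)

-- The real-valued variance is the `toReal` of the ℝ≥0∞-valued one.
-- (Assumes this is a definitional unfolding, as the docstring above suggests;
-- otherwise one would rewrite with the defining equation instead of using `rfl`.)
example : variance X μ = (evariance X μ).toReal := rfl
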
      theorem ProbabilityTheory.evariance_eq_lintegral_ofReal {Ω : Type u_1} {m : MeasurableSpace Ω} (X : Ω → ℝ) (μ : MeasureTheory.Measure Ω) :
      ProbabilityTheory.evariance X μ = ∫⁻ (ω : Ω), ENNReal.ofReal ((X ω - ∫ (x : Ω), X x ∂μ) ^ 2) ∂μ
      theorem MeasureTheory.Memℒp.variance_eq_of_integral_eq_zero {Ω : Type u_1} {m : MeasurableSpace Ω} {X : Ω → ℝ} {μ : MeasureTheory.Measure Ω} (hX : MeasureTheory.Memℒp X 2 μ) (hXint : ∫ (x : Ω), X x ∂μ = 0) :
      ProbabilityTheory.variance X μ = ∫ (x : Ω), (X ^ 2) x ∂μ
      theorem MeasureTheory.Memℒp.variance_eq {Ω : Type u_1} {m : MeasurableSpace Ω} {X : Ω → ℝ} {μ : MeasureTheory.Measure Ω} [MeasureTheory.IsFiniteMeasure μ] (hX : MeasureTheory.Memℒp X 2 μ) :
      ProbabilityTheory.variance X μ = ∫ (x : Ω), ((X - fun (x : Ω) => ∫ (x : Ω), X x ∂μ) ^ 2) x ∂μ
      theorem ProbabilityTheory.evariance_eq_zero_iff {Ω : Type u_1} {m : MeasurableSpace Ω} {X : Ω → ℝ} {μ : MeasureTheory.Measure Ω} (hX : AEMeasurable X μ) :
      ProbabilityTheory.evariance X μ = 0 ↔ X =ᵐ[μ] fun (x : Ω) => ∫ (x : Ω), X x ∂μ
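
As a small example of how evariance_eq_zero_iff is used (taking its signature exactly as displayed above; Ω, μ, X are illustrative names), the reverse direction shows that a random variable which is almost everywhere equal to its mean has zero ℝ≥0∞-valued variance:

import Mathlib.Probability.Variance

open MeasureTheory ProbabilityTheory

variable {Ω : Type*} [MeasurableSpace Ω] {μ : Measure Ω} {X : Ω → ℝ}

-- A random variable that is a.e. equal to its mean has zero evariance.
example (hX : AEMeasurable X μ) (h : X =ᵐ[μ] (fun _ => ∫ x, X x ∂μ)) :
    evariance X μ = 0 :=
  (evariance_eq_zero_iff hX).mpr h
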
      theorem ProbabilityTheory.evariance_mul {Ω : Type u_1} {m : MeasurableSpace Ω} (c : ℝ) (X : Ω → ℝ) (μ : MeasureTheory.Measure Ω) :
      ProbabilityTheory.evariance (fun (ω : Ω) => c * X ω) μ = ENNReal.ofReal (c ^ 2) * ProbabilityTheory.evariance X μ
      Equations
      • One or more equations did not get rendered due to their size.
      Instances For
        theorem ProbabilityTheory.variance_mul {Ω : Type u_1} {m : MeasurableSpace Ω} (c : ℝ) (X : Ω → ℝ) (μ : MeasureTheory.Measure Ω) :
        ProbabilityTheory.variance (fun (ω : Ω) => c * X ω) μ = c ^ 2 * ProbabilityTheory.variance X μ
        Equations
        • One or more equations did not get rendered due to their size.
        Instances For
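
As an illustration of the scaling lemma variance_mul just above (signature taken as displayed; Ω, μ, X, c are illustrative names):

import Mathlib.Probability.Variance

open MeasureTheory ProbabilityTheory

variable {Ω : Type*} [MeasurableSpace Ω] (μ : Measure Ω) (X : Ω → ℝ)

-- Rescaling by a constant `c` multiplies the variance by `c ^ 2`.
example (c : ℝ) :
    variance (fun ω => c * X ω) μ = c ^ 2 * variance X μ :=
  variance_mul c X μ
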
          theorem ProbabilityTheory.variance_def' {Ω : Type u_1} {m : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} [MeasureTheory.IsProbabilityMeasure μ] {X : Ω → ℝ} (hX : MeasureTheory.Memℒp X 2 μ) :
          ProbabilityTheory.variance X μ = ∫ (x : Ω), (X ^ 2) x ∂μ - (∫ (x : Ω), X x ∂μ) ^ 2
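
For orientation, the identity in variance_def' is the usual expansion of the square; it is valid here because Memℒp X 2 μ (together with the finiteness of the probability measure μ) gives integrability of X and X ^ 2, and 𝔼[1] = 1 since μ is a probability measure:

𝔼[(X - 𝔼[X])^2] = 𝔼[X^2] - 2 𝔼[X] * 𝔼[X] + 𝔼[X]^2 * 𝔼[1] = 𝔼[X^2] - 𝔼[X]^2
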
          theorem ProbabilityTheory.evariance_def' {Ω : Type u_1} {m : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} [MeasureTheory.IsProbabilityMeasure μ] {X : Ω → ℝ} (hX : MeasureTheory.AEStronglyMeasurable X μ) :
          ProbabilityTheory.evariance X μ = ∫⁻ (ω : Ω), ↑‖X ω‖₊ ^ 2 ∂μ - ENNReal.ofReal ((∫ (x : Ω), X x ∂μ) ^ 2)
          theorem ProbabilityTheory.meas_ge_le_evariance_div_sq {Ω : Type u_1} {m : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} {X : Ω → ℝ} (hX : MeasureTheory.AEStronglyMeasurable X μ) {c : NNReal} (hc : c ≠ 0) :
          μ {ω : Ω | ↑c ≤ |X ω - ∫ (x : Ω), X x ∂μ|} ≤ ProbabilityTheory.evariance X μ / ↑c ^ 2

          Chebyshev's inequality for ℝ≥0∞-valued variance.

          theorem ProbabilityTheory.meas_ge_le_variance_div_sq {Ω : Type u_1} {m : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} [MeasureTheory.IsFiniteMeasure μ] {X : Ω → ℝ} (hX : MeasureTheory.Memℒp X 2 μ) {c : ℝ} (hc : 0 < c) :
          μ {ω : Ω | c ≤ |X ω - ∫ (x : Ω), X x ∂μ|} ≤ ENNReal.ofReal (ProbabilityTheory.variance X μ / c ^ 2)

          Chebyshev's inequality: one can control the deviation probability of a real random variable from its expectation in terms of the variance.
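
To make the statement concrete, here is a sketch of how the real-valued Chebyshev bound above is applied; it assumes the signature exactly as displayed (in particular the spelling Memℒp), and Ω, μ, X, c are illustrative names:

import Mathlib.Probability.Variance

open MeasureTheory ProbabilityTheory

variable {Ω : Type*} [MeasurableSpace Ω] {μ : Measure Ω} {X : Ω → ℝ}

-- The measure of the set where `X` deviates from its mean by at least `c > 0`
-- is bounded by `variance X μ / c ^ 2`, coerced into ℝ≥0∞.
example [IsFiniteMeasure μ] (hX : Memℒp X 2 μ) {c : ℝ} (hc : 0 < c) :
    μ {ω | c ≤ |X ω - ∫ x, X x ∂μ|} ≤ ENNReal.ofReal (variance X μ / c ^ 2) :=
  meas_ge_le_variance_div_sq hX hc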

          theorem ProbabilityTheory.IndepFun.variance_add {Ω : Type u_1} {m : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} [MeasureTheory.IsProbabilityMeasure μ] {X Y : Ω → ℝ} (hX : MeasureTheory.Memℒp X 2 μ) (hY : MeasureTheory.Memℒp Y 2 μ) (h : ProbabilityTheory.IndepFun X Y μ) :
          ProbabilityTheory.variance (X + Y) μ = ProbabilityTheory.variance X μ + ProbabilityTheory.variance Y μ

          The variance of the sum of two independent random variables is the sum of the variances.

          theorem ProbabilityTheory.IndepFun.variance_sum {Ω : Type u_1} {m : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} [MeasureTheory.IsProbabilityMeasure μ] {ι : Type u_2} {X : ι → Ω → ℝ} {s : Finset ι} (hs : ∀ i ∈ s, MeasureTheory.Memℒp (X i) 2 μ) (h : (↑s).Pairwise fun (i j : ι) => ProbabilityTheory.IndepFun (X i) (X j) μ) :
          ProbabilityTheory.variance (∑ i ∈ s, X i) μ = ∑ i ∈ s, ProbabilityTheory.variance (X i) μ

          The variance of a finite sum of pairwise independent random variables is the sum of the variances.
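
A sketch of how the finite-sum statement above is invoked, taking its signature exactly as displayed; the index type ι, the family Y and the Finset s are illustrative names:

import Mathlib.Probability.Variance

open MeasureTheory ProbabilityTheory

variable {Ω : Type*} [MeasurableSpace Ω] {μ : Measure Ω}

-- Pairwise independence over a `Finset` is enough for the variances to add up.
example {ι : Type*} {Y : ι → Ω → ℝ} {s : Finset ι} [IsProbabilityMeasure μ]
    (hs : ∀ i ∈ s, Memℒp (Y i) 2 μ)
    (h : (↑s : Set ι).Pairwise fun i j => IndepFun (Y i) (Y j) μ) :
    variance (∑ i ∈ s, Y i) μ = ∑ i ∈ s, variance (Y i) μ :=
  IndepFun.variance_sum hs h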

          theorem ProbabilityTheory.variance_le_sub_mul_sub {Ω : Type u_1} {m : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} [MeasureTheory.IsProbabilityMeasure μ] {a : ℝ} {b : ℝ} {X : Ω → ℝ} (h : ∀ᵐ (ω : Ω) ∂μ, X ω ∈ Set.Icc a b) (hX : AEMeasurable X μ) :
          ProbabilityTheory.variance X μ ≤ (b - ∫ (x : Ω), X x ∂μ) * (∫ (x : Ω), X x ∂μ - a)

          The Bhatia-Davis inequality on variance:

          The variance of a random variable X satisfying a ≤ X ≤ b almost everywhere is at most (b - 𝔼 X) * (𝔼 X - a).

          theorem ProbabilityTheory.variance_le_sq_of_bounded {Ω : Type u_1} {m : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} [MeasureTheory.IsProbabilityMeasure μ] {a : ℝ} {b : ℝ} {X : Ω → ℝ} (h : ∀ᵐ (ω : Ω) ∂μ, X ω ∈ Set.Icc a b) (hX : AEMeasurable X μ) :
          ProbabilityTheory.variance X μ ≤ ((b - a) / 2) ^ 2

          Popoviciu's inequality on variance:

          The variance of a random variable X satisfying a ≤ X ≤ b almost everywhere is at most ((b - a) / 2) ^ 2.
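
Finally, a sketch applying the Popoviciu bound as stated above (hypotheses are taken verbatim from the displayed signature; Ω, μ, X, a, b are illustrative names):

import Mathlib.Probability.Variance

open MeasureTheory ProbabilityTheory

variable {Ω : Type*} [MeasurableSpace Ω] {μ : Measure Ω} {X : Ω → ℝ}

-- A random variable confined a.e. to `Set.Icc a b` has variance at most `((b - a) / 2) ^ 2`.
example [IsProbabilityMeasure μ] {a b : ℝ}
    (h : ∀ᵐ ω ∂μ, X ω ∈ Set.Icc a b) (hX : AEMeasurable X μ) :
    variance X μ ≤ ((b - a) / 2) ^ 2 :=
  variance_le_sq_of_bounded h hX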