LeanAide GPT-4 documentation

Module: Mathlib.Probability.Variance


ProbabilityTheory.meas_ge_le_variance_div_sq

theorem ProbabilityTheory.meas_ge_le_variance_div_sq : ∀ {Ω : Type u_1} [inst : MeasureTheory.MeasureSpace Ω] [inst_1 : MeasureTheory.IsFiniteMeasure MeasureTheory.volume] {X : Ω → ℝ}, MeasureTheory.Memℒp X 2 MeasureTheory.volume → ∀ {c : ℝ}, 0 < c → ↑↑MeasureTheory.volume {ω | c ≤ |X ω - ∫ (a : Ω), X a|} ≤ ENNReal.ofReal (ProbabilityTheory.variance X MeasureTheory.volume / c ^ 2)

**Chebyshev's Inequality**: This theorem states that for any real-valued random variable `X` on a measure space `Ω` whose canonical measure is finite, if `X` belongs to L^2(Ω) (i.e., `X` is almost everywhere strongly measurable and its second moment is finite), then for any real number `c > 0`, the measure of the set of outcomes `ω` where the absolute deviation of `X` from its expected value is at least `c` is bounded above by the variance of `X` divided by `c` squared. In simpler terms, it gives an upper bound, in terms of the variance and the threshold, on the probability that the absolute deviation from the mean exceeds a given threshold.

More concisely: For any real-valued random variable `X` with finite second moment on a finite measure space, the measure of the set of outcomes where `|X - E[X]| ≥ c` (for `c > 0`) is bounded above by `Var(X) / c²`, where `E[X]` is the expected value of `X` and `Var(X)` is its variance.
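The theorem can be invoked directly once its hypotheses are in hand. The following is a hedged sketch, not checked against a particular Mathlib version: the `open` lines and exact spellings (e.g. `Memℒp`, argument order) may need adjustment.

```lean
open MeasureTheory ProbabilityTheory

-- Sketch: restate Chebyshev's inequality for an L² random variable
-- and discharge it by applying `meas_ge_le_variance_div_sq`.
example {Ω : Type*} [MeasureSpace Ω]
    [IsFiniteMeasure (volume : Measure Ω)] {X : Ω → ℝ}
    (hX : Memℒp X 2 volume) {c : ℝ} (hc : 0 < c) :
    volume {ω | c ≤ |X ω - ∫ a, X a|} ≤
      ENNReal.ofReal (variance X volume / c ^ 2) :=
  meas_ge_le_variance_div_sq hX hc
```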

ProbabilityTheory.IndepFun.variance_sum

theorem ProbabilityTheory.IndepFun.variance_sum : ∀ {Ω : Type u_1} [inst : MeasureTheory.MeasureSpace Ω] [inst_1 : MeasureTheory.IsProbabilityMeasure MeasureTheory.volume] {ι : Type u_2} {X : ι → Ω → ℝ} {s : Finset ι}, (∀ i ∈ s, MeasureTheory.Memℒp (X i) 2 MeasureTheory.volume) → ((↑s).Pairwise fun i j => ProbabilityTheory.IndepFun (X i) (X j) MeasureTheory.volume) → ProbabilityTheory.variance (s.sum fun i => X i) MeasureTheory.volume = s.sum fun i => ProbabilityTheory.variance (X i) MeasureTheory.volume

The theorem `ProbabilityTheory.IndepFun.variance_sum` states that for any finite set of pairwise independent random variables, the variance of the sum of these random variables is equal to the sum of their individual variances. This holds under the conditions that each random variable in the set is square-integrable (i.e., belongs to the L^2 space) and that the family is pairwise independent. In the context of the theorem, a random variable is a real-valued function on a probability space. The theorem is a formal statement of the principle that the variance of a sum of independent variables is the sum of their variances.

More concisely: For any finite set of pairwise independent, square-integrable random variables, the variance of their sum equals the sum of their individual variances.
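As an illustrative application, the theorem can be applied to a family indexed by `ℕ` over a `Finset`. This is a sketch under the assumption that the statement's spellings (`Memℒp`, `Set.Pairwise`, big-operator notation) match the Mathlib version in use.

```lean
open MeasureTheory ProbabilityTheory

-- Sketch: the variance of a finite sum of pairwise independent
-- L² random variables is the sum of their variances.
example {Ω : Type*} [MeasureSpace Ω]
    [IsProbabilityMeasure (volume : Measure Ω)]
    {X : ℕ → Ω → ℝ} {s : Finset ℕ}
    (hX : ∀ i ∈ s, Memℒp (X i) 2 volume)
    (hindep : (s : Set ℕ).Pairwise fun i j => IndepFun (X i) (X j) volume) :
    variance (∑ i ∈ s, X i) volume = ∑ i ∈ s, variance (X i) volume :=
  IndepFun.variance_sum hX hindep
```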

ProbabilityTheory.IndepFun.variance_add

theorem ProbabilityTheory.IndepFun.variance_add : ∀ {Ω : Type u_1} [inst : MeasureTheory.MeasureSpace Ω] [inst_1 : MeasureTheory.IsProbabilityMeasure MeasureTheory.volume] {X Y : Ω → ℝ}, MeasureTheory.Memℒp X 2 MeasureTheory.volume → MeasureTheory.Memℒp Y 2 MeasureTheory.volume → ProbabilityTheory.IndepFun X Y MeasureTheory.volume → ProbabilityTheory.variance (X + Y) MeasureTheory.volume = ProbabilityTheory.variance X MeasureTheory.volume + ProbabilityTheory.variance Y MeasureTheory.volume

The theorem `ProbabilityTheory.IndepFun.variance_add` states that for any two real-valued random variables `X` and `Y` on a measure space `Ω` whose canonical measure `MeasureTheory.volume` is a probability measure, if `X` and `Y` both belong to `L^2` (i.e., their second moments are finite) and are independent, then the variance of `X + Y` equals the variance of `X` plus the variance of `Y`. This theorem is a formalization of the standard fact that the variance of the sum of two independent random variables is the sum of their variances.

More concisely: For independent random variables `X` and `Y` in `L^2` of a probability space `Ω`, the variance of `X + Y` equals the sum of the variances of `X` and `Y`.
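A direct application looks as follows. This is a hedged sketch: the lemma is applied verbatim, but exact names and implicit-argument handling may differ across Mathlib versions.

```lean
open MeasureTheory ProbabilityTheory

-- Sketch: for two independent L² random variables,
-- Var(X + Y) = Var(X) + Var(Y), by `IndepFun.variance_add`.
example {Ω : Type*} [MeasureSpace Ω]
    [IsProbabilityMeasure (volume : Measure Ω)] {X Y : Ω → ℝ}
    (hX : Memℒp X 2 volume) (hY : Memℒp Y 2 volume)
    (hXY : IndepFun X Y volume) :
    variance (X + Y) volume = variance X volume + variance Y volume :=
  hXY.variance_add hX hY
```

Note the dot notation `hXY.variance_add`: since `hXY : IndepFun X Y volume`, Lean resolves this to `ProbabilityTheory.IndepFun.variance_add`.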

ProbabilityTheory.meas_ge_le_evariance_div_sq

theorem ProbabilityTheory.meas_ge_le_evariance_div_sq : ∀ {Ω : Type u_1} [inst : MeasureTheory.MeasureSpace Ω] {X : Ω → ℝ}, MeasureTheory.AEStronglyMeasurable X MeasureTheory.volume → ∀ {c : NNReal}, c ≠ 0 → ↑↑MeasureTheory.volume {ω | ↑c ≤ |X ω - ∫ (a : Ω), X a|} ≤ ProbabilityTheory.evariance X MeasureTheory.volume / ↑c ^ 2

The theorem states Chebyshev's inequality in terms of the extended variance `evariance`, which takes values in `[0, ∞]`. Given a real-valued random variable `X` on a measure space `Ω` that is almost everywhere strongly measurable with respect to the canonical measure (`MeasureTheory.volume`), the measure of the set of outcomes `ω` for which the absolute deviation of `X` from its mean is at least a nonzero constant `c` is at most the extended variance of `X` divided by the square of `c`. Because the extended variance may be infinite, no integrability assumption on `X` beyond almost everywhere strong measurability is needed. This provides a bound on the probability that `X` deviates from its expected value by a given amount.

More concisely: For a real-valued, almost everywhere strongly measurable random variable `X` on a measure space, the measure of outcomes `ω` with `|X(ω) - E[X]| ≥ c` is bounded above by `evariance(X) / c²`, where `c` is a nonzero nonnegative constant and `evariance(X)` may be infinite.
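An application of this version only requires measurability. The sketch below assumes the stated signature; in particular, the coercions of `c : ℝ≥0` into `ℝ` (in the set) and into `ℝ≥0∞` (in the bound) are written explicitly and may need tweaking for a given Mathlib version.

```lean
open MeasureTheory ProbabilityTheory

-- Sketch: Chebyshev's inequality with the extended (possibly
-- infinite) variance; no L² hypothesis is required.
example {Ω : Type*} [MeasureSpace Ω] {X : Ω → ℝ}
    (hX : AEStronglyMeasurable X volume) {c : NNReal} (hc : c ≠ 0) :
    volume {ω | (c : ℝ) ≤ |X ω - ∫ a, X a|} ≤
      evariance X volume / (c : ENNReal) ^ 2 :=
  meas_ge_le_evariance_div_sq hX hc
```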