LeanAide GPT-4 documentation

Module: Mathlib.Probability.Distributions.Uniform


MeasureTheory.pdf.IsUniform.integral_eq

theorem MeasureTheory.pdf.IsUniform.integral_eq : ∀ {Ω : Type u_2} {x : MeasurableSpace Ω} {ℙ : MeasureTheory.Measure Ω} {X : Ω → ℝ} {s : Set ℝ}, MeasureTheory.pdf.IsUniform X s ℙ MeasureTheory.volume → ∫ (x : Ω), X x ∂ℙ = (↑↑MeasureTheory.volume s)⁻¹.toReal * ∫ (x : ℝ) in s, x

The theorem `MeasureTheory.pdf.IsUniform.integral_eq` states that for any measurable space `Ω` with a measure `ℙ`, and any real-valued random variable `X` that is uniformly distributed over a set `s` (with respect to the Lebesgue measure), the expectation of `X` equals the real-valued reciprocal of the Lebesgue measure of `s` multiplied by the integral of `x` over `s`. In other words, the average value of the random variable `X` is the average of all values in `s`, each weighted equally due to the uniform distribution.

More concisely: If a real-valued random variable is uniformly distributed over a set, then its expectation equals the integral of the identity function over the set divided by the Lebesgue measure of the set.
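The statement can be instantiated for a concrete set. The following is a sketch (names and notation assume a recent Mathlib; the choice of `Set.Icc 0 1` is illustrative): applying the theorem to a variable uniform on `[0, 1]` rewrites its expectation as the normalized integral of `x` over that interval.

```lean
import Mathlib.Probability.Distributions.Uniform

open MeasureTheory

-- Sketch: if X is uniform on [0, 1], its expectation is the reciprocal of
-- volume [0, 1] (which is 1) times ∫ x in [0, 1], x. The conclusion follows
-- directly by applying `integral_eq` to the uniformity hypothesis.
example {Ω : Type*} [MeasurableSpace Ω] {ℙ : Measure Ω} {X : Ω → ℝ}
    (hX : pdf.IsUniform X (Set.Icc 0 1) ℙ volume) :
    ∫ ω, X ω ∂ℙ =
      ((volume (Set.Icc (0 : ℝ) 1))⁻¹).toReal * ∫ x in Set.Icc (0 : ℝ) 1, x :=
  hX.integral_eq
```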

MeasureTheory.pdf.IsUniform.pdf_eq

theorem MeasureTheory.pdf.IsUniform.pdf_eq : ∀ {E : Type u_1} [inst : MeasurableSpace E] {μ : MeasureTheory.Measure E} {Ω : Type u_2} {x : MeasurableSpace Ω} {ℙ : MeasureTheory.Measure Ω} {X : Ω → E} {s : Set E}, MeasurableSet s → MeasureTheory.pdf.IsUniform X s ℙ μ → μ.ae.EventuallyEq (MeasureTheory.pdf X ℙ μ) (s.indicator ((↑↑μ s)⁻¹ • 1))

The theorem `MeasureTheory.pdf.IsUniform.pdf_eq` states that for any random variable `X` from a measure space `(Ω, ℙ)` (with `ℙ` a probability measure) to a measurable space `E` equipped with a measure `μ`, and any measurable set `s` in `E`, if `X` is uniformly distributed on `s` then the probability density function (pdf) of `X` is equal `μ`-almost everywhere to the indicator function of `s` scaled by the reciprocal of the measure of `s`. That is, for `μ`-almost every point of `E`, the density of `X` is `(μ s)⁻¹` on `s` and `0` outside it, which is exactly the density of a uniform distribution on `s`.

More concisely: For any measurable set `s` in a measurable space `E` carrying a measure `μ`, and any random variable `X` from a measure space `(Ω, ℙ)` to `E` that is uniformly distributed on `s`, the probability density function of `X` equals, `μ`-almost everywhere, the indicator function of `s` multiplied by the reciprocal of `μ s`.
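As a sketch (assuming recent Mathlib names and the `=ᵐ[μ]` notation for `Measure.ae`-eventual equality), the theorem is applied by supplying the measurability of `s` and the uniformity hypothesis:

```lean
import Mathlib.Probability.Distributions.Uniform

open MeasureTheory

-- Sketch: the pdf of a variable uniform on a measurable set s is, μ-a.e.,
-- the indicator of s scaled by (μ s)⁻¹. This is `pdf_eq` applied to the
-- measurability and uniformity hypotheses.
example {E : Type*} [MeasurableSpace E] {μ : Measure E}
    {Ω : Type*} [MeasurableSpace Ω] {ℙ : Measure Ω} {X : Ω → E} {s : Set E}
    (hs : MeasurableSet s) (hX : pdf.IsUniform X s ℙ μ) :
    pdf X ℙ μ =ᵐ[μ] s.indicator ((μ s)⁻¹ • 1) :=
  hX.pdf_eq hs
```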