MeasureTheory.pdf.indepFun_iff_pdf_prod_eq_pdf_mul_pdf
theorem MeasureTheory.pdf.indepFun_iff_pdf_prod_eq_pdf_mul_pdf :
∀ {Ω : Type u_1} {E : Type u_2} [inst : MeasurableSpace E] {m : MeasurableSpace Ω} {ℙ : MeasureTheory.Measure Ω}
{μ : MeasureTheory.Measure E} {F : Type u_3} [inst_1 : MeasurableSpace F] {ν : MeasureTheory.Measure F}
{X : Ω → E} {Y : Ω → F} [inst_2 : MeasureTheory.IsFiniteMeasure ℙ] [inst_3 : MeasureTheory.SigmaFinite μ]
[inst_4 : MeasureTheory.SigmaFinite ν] [inst_5 : MeasureTheory.HasPDF (fun ω => (X ω, Y ω)) ℙ (μ.prod ν)],
ProbabilityTheory.IndepFun X Y ℙ ↔
(μ.prod ν).ae.EventuallyEq (MeasureTheory.pdf (fun ω => (X ω, Y ω)) ℙ (μ.prod ν)) fun z =>
MeasureTheory.pdf X ℙ μ z.1 * MeasureTheory.pdf Y ℙ ν z.2
This theorem states that for random variables `X : Ω → E` and `Y : Ω → F`, with σ-finite reference measures `μ` on `E` and `ν` on `F`, a finite measure `ℙ` on `Ω`, and the assumption that the pair `fun ω => (X ω, Y ω)` has a probability density function with respect to `μ.prod ν`, the variables `X` and `Y` are independent (i.e., `ProbabilityTheory.IndepFun X Y ℙ`) if and only if the joint probability density function (pdf) of `(X, Y)` agrees `(μ.prod ν)`-almost everywhere with the product of the marginal pdfs: for almost every `z`, `MeasureTheory.pdf (fun ω => (X ω, Y ω)) ℙ (μ.prod ν) z = MeasureTheory.pdf X ℙ μ z.1 * MeasureTheory.pdf Y ℙ ν z.2`.
More concisely: Two random variables X and Y are independent with respect to a probability measure ℙ if and only if their joint probability density function equals the product of their marginal density functions almost everywhere.
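In classical notation (an informal rendering of the Lean statement above), writing $f_{(X,Y)}$ for the joint density with respect to $\mu \otimes \nu$ and $f_X$, $f_Y$ for the marginal densities, this is the familiar criterion:

$$X,\, Y \text{ independent} \quad\Longleftrightarrow\quad f_{(X,Y)}(x,y) = f_X(x)\, f_Y(y) \quad \text{for } (\mu \otimes \nu)\text{-a.e. } (x,y).$$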
|
MeasureTheory.measurable_pdf
theorem MeasureTheory.measurable_pdf :
∀ {Ω : Type u_1} {E : Type u_2} [inst : MeasurableSpace E] {m : MeasurableSpace Ω} (X : Ω → E)
(ℙ : MeasureTheory.Measure Ω) (μ : autoParam (MeasureTheory.Measure E) _auto✝),
Measurable (MeasureTheory.pdf X ℙ μ)
The theorem `MeasureTheory.measurable_pdf` states that for all types `Ω` and `E` equipped with `MeasurableSpace` instances, any mapping `X` from `Ω` to `E`, any measure `ℙ` on `Ω`, and any (optional) measure `μ` on `E`, the probability density function (pdf) of `X`, which is the Radon-Nikodym derivative of the push-forward measure of `ℙ` along `X` with respect to `μ`, is a measurable function. In other words, the preimage under `MeasureTheory.pdf X ℙ μ` of every measurable subset of `ℝ≥0∞` is a measurable subset of `E`.
More concisely: The pdf of `X`, i.e. the Radon-Nikodym derivative of the push-forward measure `Measure.map X ℙ` with respect to `μ`, is always a measurable function.
|
MeasureTheory.pdf.integral_pdf_smul
theorem MeasureTheory.pdf.integral_pdf_smul :
∀ {Ω : Type u_1} {E : Type u_2} [inst : MeasurableSpace E] {m : MeasurableSpace Ω} {ℙ : MeasureTheory.Measure Ω}
{μ : MeasureTheory.Measure E} {F : Type u_3} [inst_1 : NormedAddCommGroup F] [inst_2 : NormedSpace ℝ F]
[inst_3 : MeasureTheory.IsFiniteMeasure ℙ] {X : Ω → E} [inst_4 : MeasureTheory.HasPDF X ℙ μ] {f : E → F},
MeasureTheory.AEStronglyMeasurable f μ →
∫ (x : E), (MeasureTheory.pdf X ℙ μ x).toReal • f x ∂μ = ∫ (x : Ω), f (X x) ∂ℙ
**The Law of the Unconscious Statistician**: For any random variable `X` that has a pdf with respect to a finite measure `ℙ` and a measure `μ` on the codomain of `X`, and any function `f` that is almost everywhere strongly measurable with respect to `μ`, the expectation of `f ∘ X` (i.e., the function `f` composed with the random variable `X`) with respect to `ℙ` is equal to the integral, with respect to `μ`, of `f` weighted by the pdf of `X`. Here, the pdf of `X` is the Radon-Nikodym derivative of the push-forward measure of `ℙ` along `X` with respect to `μ`.
More concisely: The expectation of `f ∘ X` with respect to the measure `ℙ` equals the integral of `f` weighted by the probability density function of `X`, taken with respect to the measure `μ`.
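In classical notation, writing $f_X$ for the density of $X$ with respect to $\mu$ (and reading the product as scalar multiplication when $f$ is vector-valued), this is the change-of-variables identity:

$$\int_E f_X(x)\, f(x)\, d\mu(x) \;=\; \int_\Omega f(X(\omega))\, d\mathbb{P}(\omega).$$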
|
MeasureTheory.HasPDF.absolutelyContinuous
theorem MeasureTheory.HasPDF.absolutelyContinuous :
∀ {Ω : Type u_1} {E : Type u_2} [inst : MeasurableSpace E] {x : MeasurableSpace Ω} {X : Ω → E}
{ℙ : MeasureTheory.Measure Ω} {μ : MeasureTheory.Measure E} [hX : MeasureTheory.HasPDF X ℙ μ],
(MeasureTheory.Measure.map X ℙ).AbsolutelyContinuous μ
The theorem states that for any measurable spaces `Ω` and `E`, any random variable `X : Ω → E`, and any measures `ℙ` on `Ω` and `μ` on `E`, if `X` has a probability density function with respect to `ℙ` and `μ`, then the pushforward measure of `ℙ` under `X` is absolutely continuous with respect to `μ`. In other words, whenever a measurable set in `E` has `μ`-measure zero, its measure under the pushforward `Measure.map X ℙ` is also zero. This relates the distribution of a random variable to the reference measure on its codomain.
More concisely: If a random variable `X : Ω → E` has a density function with respect to measures `ℙ` on `Ω` and `μ` on `E`, then the pushforward measure `Measure.map X ℙ` is absolutely continuous with respect to `μ`.
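Unfolded, the conclusion $X_*\mathbb{P} \ll \mu$ (with $X_*\mathbb{P}$ denoting `Measure.map X ℙ`) says:

$$\mu(A) = 0 \;\Longrightarrow\; (X_*\mathbb{P})(A) = 0 \quad \text{for every measurable } A \subseteq E.$$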
|
MeasureTheory.HasPDF.quasiMeasurePreserving_of_measurable
theorem MeasureTheory.HasPDF.quasiMeasurePreserving_of_measurable :
∀ {Ω : Type u_1} {E : Type u_2} [inst : MeasurableSpace E] {x : MeasurableSpace Ω} (X : Ω → E)
(ℙ : MeasureTheory.Measure Ω) (μ : MeasureTheory.Measure E) [inst_1 : MeasureTheory.HasPDF X ℙ μ],
Measurable X → MeasureTheory.Measure.QuasiMeasurePreserving X ℙ μ
The theorem `MeasureTheory.HasPDF.quasiMeasurePreserving_of_measurable` states that for any random variable `X` mapping a measurable space `Ω` to a measurable space `E`, and measures `ℙ` on `Ω` and `μ` on `E`, if `X` has a probability density function with respect to `ℙ` and `μ`, and `X` is a measurable function, then `X` is quasi-measure preserving from `ℙ` to `μ`. This means that `X` is measurable and the pushforward measure `Measure.map X ℙ` is absolutely continuous with respect to `μ`: every `μ`-null set in `E` has a `ℙ`-null preimage under `X`.
More concisely: If a measurable function `X` from a measure space `(Ω, ℙ)` to a measure space `(E, μ)` has a probability density function with respect to `ℙ` and `μ`, then `X` is quasi-measure preserving: the preimage under `X` of every `μ`-null set in `E` is `ℙ`-null.
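Spelled out, `MeasureTheory.Measure.QuasiMeasurePreserving X ℙ μ` packages exactly two facts:

$$X \text{ is measurable} \qquad\text{and}\qquad X_*\mathbb{P} \ll \mu, \text{ i.e. } \mu(A) = 0 \Longrightarrow \mathbb{P}\big(X^{-1}(A)\big) = 0 \text{ for every measurable } A.$$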
|
MeasureTheory.pdf.integral_mul_eq_integral
theorem MeasureTheory.pdf.integral_mul_eq_integral :
∀ {Ω : Type u_1} {m : MeasurableSpace Ω} {ℙ : MeasureTheory.Measure Ω} [inst : MeasureTheory.IsFiniteMeasure ℙ]
{X : Ω → ℝ} [inst : MeasureTheory.HasPDF X ℙ MeasureTheory.volume],
∫ (x : ℝ), x * (MeasureTheory.pdf X ℙ MeasureTheory.volume x).toReal = ∫ (x : Ω), X x ∂ℙ
The theorem states that for a real-valued random variable `X` on a finite measure space `(Ω, ℙ)` with a probability density function `f` with respect to the Lebesgue measure, the expectation of `X` is equal to the integral over all real numbers `x` of the product `x * f(x)`, taken with respect to the Lebesgue measure.
More concisely: The expectation of a real-valued random variable with a density is the integral, with respect to the Lebesgue measure, of the identity function weighted by that density.
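In classical notation, with $f_X$ the density of $X$ with respect to the Lebesgue measure:

$$\mathbb{E}[X] \;=\; \int_\Omega X \, d\mathbb{P} \;=\; \int_{\mathbb{R}} x\, f_X(x)\, dx.$$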
|
MeasureTheory.pdf.quasiMeasurePreserving_hasPDF
theorem MeasureTheory.pdf.quasiMeasurePreserving_hasPDF :
∀ {Ω : Type u_1} {E : Type u_2} [inst : MeasurableSpace E] {m : MeasurableSpace Ω} {ℙ : MeasureTheory.Measure Ω}
{μ : MeasureTheory.Measure E} {F : Type u_3} [inst_1 : MeasurableSpace F] {ν : MeasureTheory.Measure F}
{X : Ω → E} [inst_2 : MeasureTheory.HasPDF X ℙ μ],
AEMeasurable X ℙ →
∀ {g : E → F},
MeasureTheory.Measure.QuasiMeasurePreserving g μ ν →
(MeasureTheory.Measure.map g (MeasureTheory.Measure.map X ℙ)).HaveLebesgueDecomposition ν →
MeasureTheory.HasPDF (g ∘ X) ℙ ν
The theorem `MeasureTheory.pdf.quasiMeasurePreserving_hasPDF` states that if a random variable `X` has a probability density function (`HasPDF`) with respect to `ℙ` and `μ`, and it is transformed under a quasi-measure-preserving map `g` from `μ` to `ν`, then the transformed variable `g ∘ X` also has a probability density function with respect to `ℙ` and `ν`. This is under the additional conditions that `X` is almost everywhere measurable with respect to `ℙ` and that the pushforward `Measure.map g (Measure.map X ℙ)` has a Lebesgue decomposition with respect to `ν`. The theorem is particularly useful when working with a probability measure and a real-valued random variable.
More concisely: If `X` is an almost everywhere measurable random variable with a probability density function with respect to `ℙ` and `μ`, and `g` is a quasi-measure-preserving map from `μ` to `ν`, then `g ∘ X` has a probability density function with respect to `ℙ` and `ν` (given the Lebesgue-decomposition hypothesis).
|
MeasureTheory.HasPDF.aemeasurable
theorem MeasureTheory.HasPDF.aemeasurable :
∀ {Ω : Type u_1} {E : Type u_2} [inst : MeasurableSpace E] {x : MeasurableSpace Ω} (X : Ω → E)
(ℙ : MeasureTheory.Measure Ω) (μ : MeasureTheory.Measure E) [hX : MeasureTheory.HasPDF X ℙ μ], AEMeasurable X ℙ
This theorem states that for any random variable `X` taking values in a measurable space `E` and defined on a sample space `Ω`, if `X` has a probability density function with respect to a measure `ℙ` on `Ω` and a measure `μ` on `E`, then `X` is "almost everywhere measurable". In other words, there exists a measurable function that is equal to `X` almost everywhere with respect to the measure `ℙ`. The "almost everywhere" condition allows the equality to fail on a set of measure zero.
More concisely: If a random variable `X` on a sample space `Ω` with values in a measurable space `E` has a probability density function with respect to measures `ℙ` on `Ω` and `μ` on `E`, then `X` is equal almost everywhere to a measurable function.
|
Real.hasPDF_iff_of_aemeasurable
theorem Real.hasPDF_iff_of_aemeasurable :
∀ {Ω : Type u_1} {m : MeasurableSpace Ω} {ℙ : MeasureTheory.Measure Ω} [inst : MeasureTheory.IsFiniteMeasure ℙ]
{X : Ω → ℝ},
AEMeasurable X ℙ →
(MeasureTheory.HasPDF X ℙ MeasureTheory.volume ↔
(MeasureTheory.Measure.map X ℙ).AbsolutelyContinuous MeasureTheory.volume)
A real-valued random variable `X` has a probability density function (PDF) with respect to the Lebesgue measure `λ` (notated as `HasPDF X ℙ λ`) if and only if the push-forward measure of `ℙ` along `X` is absolutely continuous with respect to `λ`. This holds under the conditions that `X` is almost everywhere measurable with respect to `ℙ`, meaning `X` coincides almost everywhere with a measurable function, and that `ℙ` is a finite measure. The push-forward measure of `ℙ` along `X` assigns to a measurable set `A ⊆ ℝ` the value `ℙ(X⁻¹(A))`. Absolute continuity here means that if a set has Lebesgue measure zero, then its measure under the push-forward is also zero.
More concisely: A real-valued random variable `X` has a probability density function (PDF) with respect to Lebesgue measure `λ` if and only if the push-forward measure of probability measure `ℙ` along `X` is absolutely continuous with respect to `λ`, given that `X` is almost everywhere measurable with respect to `ℙ` and `ℙ` is a finite measure.
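Informally, for an a.e.-measurable real-valued $X$ under a finite measure $\mathbb{P}$:

$$X \text{ has a PDF w.r.t. } \lambda \quad\Longleftrightarrow\quad \big(\lambda(A) = 0 \Longrightarrow \mathbb{P}(X^{-1}(A)) = 0 \text{ for every Borel } A \subseteq \mathbb{R}\big).$$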
|
MeasureTheory.pdf.lintegral_pdf_mul
theorem MeasureTheory.pdf.lintegral_pdf_mul :
∀ {Ω : Type u_1} {E : Type u_2} [inst : MeasurableSpace E] {m : MeasurableSpace Ω} {ℙ : MeasureTheory.Measure Ω}
{μ : MeasureTheory.Measure E} {X : Ω → E} [inst_1 : MeasureTheory.HasPDF X ℙ μ] {f : E → ENNReal},
AEMeasurable f μ → ∫⁻ (x : E), MeasureTheory.pdf X ℙ μ x * f x ∂μ = ∫⁻ (x : Ω), f (X x) ∂ℙ
This theorem is a version of the "Law of the Unconscious Statistician" for `ℝ≥0∞`-valued functions. It states that for any measurable spaces `Ω` and `E`, any measure `ℙ` on `Ω`, any measure `μ` on `E`, any random variable `X` from `Ω` to `E` that has a probability density function (PDF) with respect to `ℙ` and `μ`, and any function `f` from `E` to the extended nonnegative real numbers that is almost everywhere measurable with respect to `μ`, the Lebesgue integral of the product of the PDF of `X` and `f` with respect to `μ` equals the Lebesgue integral of `f` composed with `X` with respect to `ℙ`. In symbols, `∫⁻ x, MeasureTheory.pdf X ℙ μ x * f x ∂μ = ∫⁻ ω, f (X ω) ∂ℙ`. This allows one to compute the expected value of a function of a random variable directly from its PDF.
More concisely: For any random variable `X : Ω → E` with a PDF with respect to `ℙ` and `μ`, and any almost everywhere measurable function `f : E → ℝ≥0∞`, `∫⁻ x, pdf X ℙ μ x * f x ∂μ = ∫⁻ ω, f (X ω) ∂ℙ`.
|
MeasureTheory.map_eq_withDensity_pdf
theorem MeasureTheory.map_eq_withDensity_pdf :
∀ {Ω : Type u_1} {E : Type u_2} [inst : MeasurableSpace E] {m : MeasurableSpace Ω} (X : Ω → E)
(ℙ : MeasureTheory.Measure Ω) (μ : autoParam (MeasureTheory.Measure E) _auto✝) [hX : MeasureTheory.HasPDF X ℙ μ],
MeasureTheory.Measure.map X ℙ = μ.withDensity (MeasureTheory.pdf X ℙ μ)
The theorem `MeasureTheory.map_eq_withDensity_pdf` states that for any measurable spaces `Ω` and `E`, any random variable `X : Ω → E`, any measure `ℙ` on `Ω`, and any (optional) measure `μ` on `E`, if `X` has a Probability Density Function (PDF) with respect to measures `ℙ` and `μ`, then the push-forward measure of `ℙ` along `X` is equal to the measure `μ` weighted by the PDF of `X`.
More concisely: If a random variable `X : Ω → E` has a PDF with respect to measures `ℙ` on `Ω` and `μ` on `E`, then `Measure.map X ℙ = μ.withDensity (pdf X ℙ μ)`, i.e. the push-forward of `ℙ` along `X` is the measure with density `pdf X ℙ μ` relative to `μ`.
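Evaluated on a measurable set $A \subseteq E$, this equality reads:

$$(X_*\mathbb{P})(A) \;=\; \int_A \mathrm{pdf}\,X\,\mathbb{P}\,\mu \; d\mu.$$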
|
MeasureTheory.pdf_def
theorem MeasureTheory.pdf_def :
∀ {Ω : Type u_1} {E : Type u_2} [inst : MeasurableSpace E] {x : MeasurableSpace Ω} {ℙ : MeasureTheory.Measure Ω}
{μ : MeasureTheory.Measure E} {X : Ω → E}, MeasureTheory.pdf X ℙ μ = (MeasureTheory.Measure.map X ℙ).rnDeriv μ
The theorem `MeasureTheory.pdf_def` describes the relationship between the probability density function (pdf) and the Radon-Nikodym derivative of the push-forward measure. Specifically, for any measurable spaces `Ω` and `E`, a measure `ℙ` on `Ω`, a measure `μ` on `E`, and a random variable `X` from `Ω` to `E`, the pdf of `X` with respect to `ℙ` and `μ` is equal to the Radon-Nikodym derivative of the push-forward measure of `ℙ` along `X` with respect to `μ`. In other words, this is the definition of `MeasureTheory.pdf`, unfolded.
More concisely: The pdf of a random variable with respect to two measures is equal to the Radon-Nikodym derivative of the push-forward measure of the first measure along the random variable with respect to the second measure.
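As a formula, with $X_*\mathbb{P}$ denoting the push-forward `Measure.map X ℙ`:

$$\mathrm{pdf}\,X\,\mathbb{P}\,\mu \;=\; \frac{d(X_*\mathbb{P})}{d\mu}.$$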
|
MeasureTheory.hasPDF_of_map_eq_withDensity
theorem MeasureTheory.hasPDF_of_map_eq_withDensity :
∀ {Ω : Type u_1} {E : Type u_2} [inst : MeasurableSpace E] {x : MeasurableSpace Ω} {X : Ω → E}
{ℙ : MeasureTheory.Measure Ω} {μ : MeasureTheory.Measure E},
AEMeasurable X ℙ →
∀ (f : E → ENNReal),
AEMeasurable f μ → MeasureTheory.Measure.map X ℙ = μ.withDensity f → MeasureTheory.HasPDF X ℙ μ
This theorem states that, for any two types `Ω` and `E` that are measurable spaces, any map `X` from `Ω` to `E`, and any measures `ℙ` on `Ω` and `μ` on `E`, if `X` is almost everywhere measurable with respect to `ℙ`, and there is a function `f` from `E` to the extended nonnegative real numbers that is almost everywhere measurable with respect to `μ` such that the pushforward of `ℙ` by `X` equals the measure `μ.withDensity f`, then `X` has a probability density function with respect to `ℙ` and `μ`.
More concisely: If `X : Ω → E` is almost everywhere measurable with respect to `ℙ` and `Measure.map X ℙ = μ.withDensity f` for some `μ`-almost everywhere measurable `f : E → ℝ≥0∞`, then `X` has a probability density function with respect to `ℙ` and `μ`.
|