ProbabilityTheory.IndepFun.integrable_left_of_integrable_mul
theorem ProbabilityTheory.IndepFun.integrable_left_of_integrable_mul :
∀ {Ω : Type u_1} {mΩ : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} {β : Type u_2} [inst : MeasurableSpace β]
{X Y : Ω → β} [inst_1 : NormedDivisionRing β] [inst_2 : BorelSpace β],
ProbabilityTheory.IndepFun X Y μ →
MeasureTheory.Integrable (X * Y) μ →
MeasureTheory.AEStronglyMeasurable X μ →
MeasureTheory.AEStronglyMeasurable Y μ → ¬μ.ae.EventuallyEq Y 0 → MeasureTheory.Integrable X μ
The theorem `ProbabilityTheory.IndepFun.integrable_left_of_integrable_mul` states that, given two independent random variables `X` and `Y` defined on a measurable space `Ω` (with measurable space structure `mΩ`) equipped with a measure `μ`, and taking values in a normed division ring `β` that is also a Borel space, if the product `X * Y` is integrable and `Y` is not almost everywhere equal to zero, then `X` is also integrable. Furthermore, both `X` and `Y` are required to be almost everywhere strongly measurable, which means they are almost everywhere equal to the limit of a sequence of simple functions.
More concisely: If `X` and `Y` are independent, almost everywhere strongly measurable random variables whose product `X * Y` is integrable, and `Y` is not almost everywhere equal to zero, then `X` is integrable.
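A minimal usage sketch (untested; assumes Mathlib is imported and the hypothesis names are hypothetical):

```lean
open MeasureTheory ProbabilityTheory in
example {Ω : Type*} {mΩ : MeasurableSpace Ω} {μ : Measure Ω} {β : Type*}
    [MeasurableSpace β] [NormedDivisionRing β] [BorelSpace β] {X Y : Ω → β}
    (hXY : IndepFun X Y μ) (hmul : Integrable (X * Y) μ)
    (hX : AEStronglyMeasurable X μ) (hY : AEStronglyMeasurable Y μ)
    (hY0 : ¬ Y =ᵐ[μ] 0) :
    Integrable X μ :=
  -- the theorem applies as a projection-notation lemma on the independence hypothesis
  hXY.integrable_left_of_integrable_mul hmul hX hY hY0
```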
|
ProbabilityTheory.IndepFun.integral_mul
theorem ProbabilityTheory.IndepFun.integral_mul :
∀ {Ω : Type u_1} {mΩ : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} {X Y : Ω → ℝ},
ProbabilityTheory.IndepFun X Y μ →
MeasureTheory.AEStronglyMeasurable X μ →
MeasureTheory.AEStronglyMeasurable Y μ →
MeasureTheory.integral μ (X * Y) = MeasureTheory.integral μ X * MeasureTheory.integral μ Y
This theorem, named `ProbabilityTheory.IndepFun.integral_mul`, asserts the following statement in probability theory: Given a measurable space `Ω`, a measure `μ` on this space, and two real-valued random variables `X` and `Y` defined on this space, if `X` and `Y` are independent with respect to the measure `μ`, and both `X` and `Y` are "almost everywhere strongly measurable" (meaning that each random variable is almost everywhere equal to the limit of a sequence of simple functions), then the Bochner integral of the product of `X` and `Y` (which can be viewed as the expected value of the product of `X` and `Y`) is equal to the product of the Bochner integrals of `X` and `Y` separately (which are the expected values of `X` and `Y`, respectively). This result is a formal version of the property that the expected value of the product of two independent random variables is the product of their expected values.
More concisely: If two almost everywhere strongly measurable, real-valued random variables `X` and `Y` defined on a measure space `(Ω, μ)` are independent, then their Bochner integrals (expected values) satisfy `∫ (X * Y) dμ = (∫ X dμ) * (∫ Y dμ)`, where `*` inside the integral denotes pointwise multiplication of functions.
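A minimal usage sketch (untested; assumes Mathlib is imported and the hypothesis names are hypothetical):

```lean
open MeasureTheory ProbabilityTheory in
example {Ω : Type*} {mΩ : MeasurableSpace Ω} {μ : Measure Ω} {X Y : Ω → ℝ}
    (hXY : IndepFun X Y μ) (hX : AEStronglyMeasurable X μ)
    (hY : AEStronglyMeasurable Y μ) :
    ∫ ω, (X * Y) ω ∂μ = (∫ ω, X ω ∂μ) * ∫ ω, Y ω ∂μ :=
  -- E[X * Y] = E[X] * E[Y] for independent real-valued random variables
  hXY.integral_mul hX hY
```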
|
ProbabilityTheory.IndepFun.integrable_mul
theorem ProbabilityTheory.IndepFun.integrable_mul :
∀ {Ω : Type u_1} {mΩ : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} {β : Type u_2} [inst : MeasurableSpace β]
{X Y : Ω → β} [inst_1 : NormedDivisionRing β] [inst_2 : BorelSpace β],
ProbabilityTheory.IndepFun X Y μ →
MeasureTheory.Integrable X μ → MeasureTheory.Integrable Y μ → MeasureTheory.Integrable (X * Y) μ
This theorem states that given two independent, integrable random variables (represented as functions `X` and `Y` from a measurable space `Ω` to a normed division ring `β`), their product is also integrable.
In more detail, for any types `Ω` and `β` with `Ω` being a measurable space and `β` being a Borel space and a normed division ring, and any measure `μ` on `Ω`, if `X` and `Y` are independent functions (in the sense of probability theory) from `Ω` to `β`, and both `X` and `Y` are integrable (in the sense that they are measurable and the integral of their norms is finite), then the function that maps each point of `Ω` to the product of `X` and `Y` at that point is also integrable.
More concisely: Given two independent, integrable functions `X` and `Y` from a measure space `(Ω, μ)` to a normed division ring `β`, their product `X * Y` is also integrable.
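A minimal usage sketch (untested; assumes Mathlib is imported and the hypothesis names are hypothetical):

```lean
open MeasureTheory ProbabilityTheory in
example {Ω : Type*} {mΩ : MeasurableSpace Ω} {μ : Measure Ω} {β : Type*}
    [MeasurableSpace β] [NormedDivisionRing β] [BorelSpace β] {X Y : Ω → β}
    (hXY : IndepFun X Y μ) (hX : Integrable X μ) (hY : Integrable Y μ) :
    Integrable (X * Y) μ :=
  -- integrability of the product follows from independence plus integrability of each factor
  hXY.integrable_mul hX hY
```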
|
ProbabilityTheory.lintegral_mul_eq_lintegral_mul_lintegral_of_indepFun'
theorem ProbabilityTheory.lintegral_mul_eq_lintegral_mul_lintegral_of_indepFun' :
∀ {Ω : Type u_1} {mΩ : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} {f g : Ω → ENNReal},
AEMeasurable f μ →
AEMeasurable g μ →
ProbabilityTheory.IndepFun f g μ → ∫⁻ (ω : Ω), (f * g) ω ∂μ = (∫⁻ (ω : Ω), f ω ∂μ) * ∫⁻ (ω : Ω), g ω ∂μ
This theorem, named `ProbabilityTheory.lintegral_mul_eq_lintegral_mul_lintegral_of_indepFun'`, states that given a measurable space `Ω` with a measure `μ` and two functions `f` and `g` from `Ω` to the extended nonnegative real numbers (denoted `[0, ∞]`), if `f` and `g` are almost everywhere measurable and independent, then the lower Lebesgue integral of their product equals the product of their lower Lebesgue integrals: `∫⁻ (f * g) ∂μ = (∫⁻ f ∂μ) * ∫⁻ g ∂μ`. This result generalizes the theorem `lintegral_mul_eq_lintegral_mul_lintegral_of_indepFun`, which requires `f` and `g` to be measurable rather than merely almost everywhere measurable.
More concisely: If `f` and `g` are almost everywhere measurable, independent `[0, ∞]`-valued functions, then `∫⁻ (f * g) ∂μ = (∫⁻ f ∂μ) * ∫⁻ g ∂μ`.
|
ProbabilityTheory.IndepFun.integral_mul_of_nonneg
theorem ProbabilityTheory.IndepFun.integral_mul_of_nonneg :
∀ {Ω : Type u_1} {mΩ : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} {X Y : Ω → ℝ},
ProbabilityTheory.IndepFun X Y μ →
0 ≤ X →
0 ≤ Y →
AEMeasurable X μ →
AEMeasurable Y μ →
MeasureTheory.integral μ (X * Y) = MeasureTheory.integral μ X * MeasureTheory.integral μ Y
The theorem `ProbabilityTheory.IndepFun.integral_mul_of_nonneg` states that for any measure space `Ω`, any measure `μ` on `Ω`, and any two random variables `X` and `Y` on `Ω` valued in real numbers, if `X` and `Y` are independent (in the sense of probability theory), and if both `X` and `Y` are nonnegative and almost everywhere measurable, then the Bochner integral of the product of `X` and `Y` with respect to `μ` is equal to the product of the Bochner integral of `X` with respect to `μ` and the Bochner integral of `Y` with respect to `μ`. This is essentially a statement about the behavior of the integral of the product of two independent nonnegative random variables.
More concisely: If `X` and `Y` are independent, nonnegative, almost everywhere measurable real-valued random variables on a measure space `(Ω, μ)`, then `∫ (X * Y) dμ = (∫ X dμ) * (∫ Y dμ)`.
|
ProbabilityTheory.lintegral_mul_eq_lintegral_mul_lintegral_of_indepFun
theorem ProbabilityTheory.lintegral_mul_eq_lintegral_mul_lintegral_of_indepFun :
∀ {Ω : Type u_1} {mΩ : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} {f g : Ω → ENNReal},
Measurable f →
Measurable g →
ProbabilityTheory.IndepFun f g μ → ∫⁻ (ω : Ω), (f * g) ω ∂μ = (∫⁻ (ω : Ω), f ω ∂μ) * ∫⁻ (ω : Ω), g ω ∂μ
This theorem, in the field of probability theory, states that if `f` and `g` are independent random variables with values in the extended nonnegative real numbers (usually denoted as [0, ∞]), then the expected value of the product of `f` and `g` is equal to the product of the expected values of `f` and `g`. The expectation is calculated with respect to a measure `μ` on a measurable space `Ω`. The variables `f` and `g` are functions from `Ω` to [0, ∞] and are assumed to be measurable, meaning the preimage of every measurable set under these functions is also measurable.
More concisely: If `f` and `g` are independent, measurable, extended nonnegative real-valued random variables, then `E[fg] = E[f] * E[g]`.
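A minimal usage sketch (untested; assumes Mathlib is imported and the hypothesis names are hypothetical):

```lean
open MeasureTheory ProbabilityTheory in
example {Ω : Type*} {mΩ : MeasurableSpace Ω} {μ : Measure Ω} {f g : Ω → ENNReal}
    (hf : Measurable f) (hg : Measurable g) (hfg : IndepFun f g μ) :
    ∫⁻ ω, (f * g) ω ∂μ = (∫⁻ ω, f ω ∂μ) * ∫⁻ ω, g ω ∂μ :=
  -- lower Lebesgue integral of a product of independent [0, ∞]-valued functions
  lintegral_mul_eq_lintegral_mul_lintegral_of_indepFun hf hg hfg
```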
|
ProbabilityTheory.lintegral_mul_indicator_eq_lintegral_mul_lintegral_indicator
theorem ProbabilityTheory.lintegral_mul_indicator_eq_lintegral_mul_lintegral_indicator :
∀ {Ω : Type u_1} {f : Ω → ENNReal} {Mf mΩ : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω},
Mf ≤ mΩ →
∀ (c : ENNReal) {T : Set Ω},
MeasurableSet T →
ProbabilityTheory.IndepSets {s | MeasurableSet s} {T} μ →
Measurable f →
∫⁻ (ω : Ω), f ω * T.indicator (fun x => c) ω ∂μ =
(∫⁻ (ω : Ω), f ω ∂μ) * ∫⁻ (ω : Ω), T.indicator (fun x => c) ω ∂μ
This theorem states that for a random variable `f` with values in the extended nonnegative real numbers `[0, ∞]`, and an event `T` in a measurable space `Ω`, if the σ-algebra `Mf` with respect to which `f` is measurable is independent of `T`, then the expectation of the product of `f` and the indicator function of `T` with constant value `c` is equal to the product of the expectation of `f` and the expectation of the indicator function. This is formalized as: `∫⁻ (ω : Ω), f ω * T.indicator (fun x => c) ω ∂μ = (∫⁻ (ω : Ω), f ω ∂μ) * ∫⁻ (ω : Ω), T.indicator (fun x => c) ω ∂μ`. This lemma is a building block for computing the Lebesgue integral of a product of two independent functions, handling the case where one factor is a constant-valued indicator.
More concisely: If a `[0, ∞]`-valued random variable `f` is measurable with respect to a σ-algebra independent of the event `T`, then the expectation of the product of `f` and the constant-`c` indicator of `T` equals the product of their expectations.
|
ProbabilityTheory.indepFun_iff_integral_comp_mul
theorem ProbabilityTheory.indepFun_iff_integral_comp_mul :
∀ {Ω : Type u_1} {mΩ : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} [inst : MeasureTheory.IsFiniteMeasure μ]
{β : Type u_2} {β' : Type u_3} {mβ : MeasurableSpace β} {mβ' : MeasurableSpace β'} {f : Ω → β} {g : Ω → β'}
{hfm : Measurable f} {hgm : Measurable g},
ProbabilityTheory.IndepFun f g μ ↔
∀ {φ : β → ℝ} {ψ : β' → ℝ},
Measurable φ →
Measurable ψ →
MeasureTheory.Integrable (φ ∘ f) μ →
MeasureTheory.Integrable (ψ ∘ g) μ →
MeasureTheory.integral μ (φ ∘ f * ψ ∘ g) =
MeasureTheory.integral μ (φ ∘ f) * MeasureTheory.integral μ (ψ ∘ g)
The theorem `ProbabilityTheory.indepFun_iff_integral_comp_mul` states that for any probability space `Ω` with a measure `μ`, functions `f` and `g` mapping from `Ω` to arbitrary measurable spaces `β` and `β'` respectively, are independent if and only if for all measurable functions `φ` and `ψ` mapping from `β` and `β'` to real numbers `ℝ` and satisfying the appropriate integrability conditions, the expectation of the product of `φ ∘ f` and `ψ ∘ g` (i.e., `E[(φ ∘ f) * (ψ ∘ g)]`) equals the product of the expectations of `φ ∘ f` and `ψ ∘ g` (i.e., `E[φ ∘ f] * E[ψ ∘ g]`). Here, the expectations are calculated as the Bochner integrals over the probability space. This characterizes the independence of `f` and `g` in terms of the integral of the composition of these functions with `φ` and `ψ`.
More concisely: Functions `f` and `g` on a finite measure space are independent if and only if, for all measurable real-valued functions `φ` and `ψ` with `φ ∘ f` and `ψ ∘ g` integrable, the expectation of `(φ ∘ f) * (ψ ∘ g)` equals the product of the individual expectations.
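A minimal usage sketch of the forward direction (untested; assumes Mathlib is imported, and note that the measurability hypotheses `hfm` and `hgm` are implicit arguments of the theorem, so they are supplied here by name):

```lean
open MeasureTheory ProbabilityTheory in
example {Ω : Type*} {mΩ : MeasurableSpace Ω} {μ : Measure Ω} [IsFiniteMeasure μ]
    {β β' : Type*} {mβ : MeasurableSpace β} {mβ' : MeasurableSpace β'}
    {f : Ω → β} {g : Ω → β'} (hfm : Measurable f) (hgm : Measurable g)
    (h : IndepFun f g μ) {φ : β → ℝ} {ψ : β' → ℝ}
    (hφ : Measurable φ) (hψ : Measurable ψ)
    (hφf : Integrable (φ ∘ f) μ) (hψg : Integrable (ψ ∘ g) μ) :
    ∫ ω, (φ ∘ f * ψ ∘ g) ω ∂μ = (∫ ω, (φ ∘ f) ω ∂μ) * ∫ ω, (ψ ∘ g) ω ∂μ :=
  -- extract the "expectation of products factor" consequence from independence
  (indepFun_iff_integral_comp_mul (hfm := hfm) (hgm := hgm)).mp h hφ hψ hφf hψg
```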
|
ProbabilityTheory.lintegral_mul_eq_lintegral_mul_lintegral_of_independent_measurableSpace
theorem ProbabilityTheory.lintegral_mul_eq_lintegral_mul_lintegral_of_independent_measurableSpace :
∀ {Ω : Type u_1} {f g : Ω → ENNReal} {Mf Mg mΩ : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω},
Mf ≤ mΩ →
Mg ≤ mΩ →
ProbabilityTheory.Indep Mf Mg μ →
Measurable f → Measurable g → ∫⁻ (ω : Ω), f ω * g ω ∂μ = (∫⁻ (ω : Ω), f ω ∂μ) * ∫⁻ (ω : Ω), g ω ∂μ
This theorem states that if `f` and `g` are independent random variables taking values in the extended nonnegative real numbers (denoted as `ℝ≥0∞`), then the expected value of the product of `f` and `g` is equal to the product of their individual expected values. This version assumes that the sub-σ-algebras `Mf` and `Mg` (with respect to which `f` and `g` are measurable) are independent with respect to the measure `μ`; this is the σ-algebra approach to independence. A version stated directly in terms of independence of the random variables is `lintegral_mul_eq_lintegral_mul_lintegral_of_indepFun`.
More concisely: If `f` and `g` are `[0, ∞]`-valued functions measurable with respect to independent sub-σ-algebras `Mf` and `Mg`, then `∫⁻ (f * g) ∂μ = (∫⁻ f ∂μ) * ∫⁻ g ∂μ`.
|
ProbabilityTheory.IndepFun.integral_mul_of_integrable
theorem ProbabilityTheory.IndepFun.integral_mul_of_integrable :
∀ {Ω : Type u_1} {mΩ : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} {X Y : Ω → ℝ},
ProbabilityTheory.IndepFun X Y μ →
MeasureTheory.Integrable X μ →
MeasureTheory.Integrable Y μ →
MeasureTheory.integral μ (X * Y) = MeasureTheory.integral μ X * MeasureTheory.integral μ Y
The theorem `ProbabilityTheory.IndepFun.integral_mul_of_integrable` states that for any measure space `Ω` with a measure `μ`, and any two independent, integrable real-valued functions `X` and `Y` (which can be thought of as random variables), the integral of the product of `X` and `Y` with respect to `μ` is equal to the product of the integrals of `X` and `Y` with respect to `μ`. This is a fundamental result in probability theory often referred to as the independence of expectation values, and the proof involves decomposing the functions into their positive and negative parts.
More concisely: For independent, integrable real-valued functions X and Y on a measure space (Ω, μ), the integral of their product X * Y equals the product of their integrals: ∫(X * Y) dμ = ∫X dμ * ∫Y dμ.
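A minimal usage sketch (untested; assumes Mathlib is imported and the hypothesis names are hypothetical):

```lean
open MeasureTheory ProbabilityTheory in
example {Ω : Type*} {mΩ : MeasurableSpace Ω} {μ : Measure Ω} {X Y : Ω → ℝ}
    (hXY : IndepFun X Y μ) (hX : Integrable X μ) (hY : Integrable Y μ) :
    ∫ ω, (X * Y) ω ∂μ = (∫ ω, X ω ∂μ) * ∫ ω, Y ω ∂μ :=
  -- variant of `integral_mul` taking integrability instead of a.e. strong measurability
  hXY.integral_mul_of_integrable hX hY
```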
|
ProbabilityTheory.IndepFun.integrable_right_of_integrable_mul
theorem ProbabilityTheory.IndepFun.integrable_right_of_integrable_mul :
∀ {Ω : Type u_1} {mΩ : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} {β : Type u_2} [inst : MeasurableSpace β]
{X Y : Ω → β} [inst_1 : NormedDivisionRing β] [inst_2 : BorelSpace β],
ProbabilityTheory.IndepFun X Y μ →
MeasureTheory.Integrable (X * Y) μ →
MeasureTheory.AEStronglyMeasurable X μ →
MeasureTheory.AEStronglyMeasurable Y μ → ¬μ.ae.EventuallyEq X 0 → MeasureTheory.Integrable Y μ
The theorem states that if we have two independent random variables, say X and Y, taking values in a normed division ring and defined on a measurable space with a given measure, and the product of these two random variables is integrable, and X is not almost everywhere equal to zero, then Y is also integrable. Here, a random variable being integrable means that it is almost everywhere strongly measurable and the integral of its norm (over the entire space) with respect to the measure is finite. A random variable being "almost everywhere strongly measurable" means it is almost everywhere equal to the limit of a sequence of simple functions. And two random variables are independent if the sigma-algebras generated by them are independent.
More concisely: If two independent random variables X and Y, defined on a measurable space with a given measure, have an integrable product and X is not almost everywhere equal to zero, then Y is integrable.
|