Mathlib.Probability.Independence.Basic._auxLemma.1
theorem Mathlib.Probability.Independence.Basic._auxLemma.1 :
∀ {α : Type u} {a : α} {p : α → Prop}, (∀ᶠ (x : α) in pure a, p x) = p a
This theorem from the Mathlib.Probability.Independence.Basic library in Lean 4 states that for any type `α`, element `a : α`, and predicate `p : α → Prop`, the proposition `∀ᶠ x in pure a, p x` (that `p` holds eventually with respect to the pure filter at `a`) is equal to the proposition `p a`. Intuitively, the pure filter at `a` behaves like the singleton set `{a}`, so asking that a property hold for all elements of that filter is the same as asking that it hold at `a` itself. It appears to be an automatically generated auxiliary lemma (note the `_auxLemma` name) used as a rewrite rule in the development of probability and independence.
More concisely: For any type `α`, element `a` of `α`, and property `p`: the property `p` holds for all elements in the singleton set `{a}` if and only if `p` holds for `a`.
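As an illustration, the iff form of this rewrite is available in Mathlib as `Filter.eventually_pure`; the following is a minimal sketch with a concrete type and an arbitrary predicate:

import Mathlib.Probability.Independence.Basic

-- Sketch: `∀ᶠ x in pure a, p x` says nothing more than `p a`.
-- `Filter.eventually_pure` is the iff form of the rewrite above.
example (p : ℕ → Prop) (h : p 3) : ∀ᶠ x in pure 3, p x :=
  Filter.eventually_pure.mpr h

example (p : ℕ → Prop) (h : ∀ᶠ x in pure 3, p x) : p 3 :=
  Filter.eventually_pure.mp h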
|
ProbabilityTheory.iIndepFun.indepFun_finset
theorem ProbabilityTheory.iIndepFun.indepFun_finset :
∀ {Ω : Type u_1} {ι : Type u_2} {mΩ : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} {β : ι → Type u_10}
{m : (i : ι) → MeasurableSpace (β i)} {f : (i : ι) → Ω → β i} (S T : Finset ι),
Disjoint S T →
ProbabilityTheory.iIndepFun m f μ →
(∀ (i : ι), Measurable (f i)) → ProbabilityTheory.IndepFun (fun a i => f (↑i) a) (fun a i => f (↑i) a) μ
This theorem in probability theory states that if we have a family `f` of mutually independent random variables (expressed by `iIndepFun m f μ`), each `f i` is measurable, and `S` and `T` are two disjoint finite index sets, then the random vector formed by the `f i` for `i ∈ S` is independent of the random vector formed by the `f i` for `i ∈ T`. Here `f` assigns to each index `i : ι` a random variable `f i : Ω → β i` on the measurable space `Ω`. In the printed conclusion the two tuple-valued functions look identical, but in the first one `i` ranges over the subtype attached to `S` and in the second over the subtype attached to `T`; the coercion `↑i` maps each such `i` back into `ι`.
More concisely: If `f` is a family of mutually independent, measurable random variables indexed by `ι`, and `S` and `T` are disjoint finite subsets of `ι`, then the sub-vectors `(f i)` for `i ∈ S` and `(f i)` for `i ∈ T` are independent.
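A minimal usage sketch, assuming the signature displayed above; the hypothesis names are illustrative, `m` is declared instance-implicit here so the product σ-algebras on the two tuples can be inferred, and the exact argument conventions may differ between Mathlib versions:

import Mathlib.Probability.Independence.Basic

open ProbabilityTheory MeasureTheory

-- Sketch: grouping an independent family along disjoint finite index sets
-- S and T yields two independent random vectors.
example {Ω ι : Type*} [MeasurableSpace Ω] {μ : Measure Ω}
    {β : ι → Type*} [m : ∀ i, MeasurableSpace (β i)] {f : ∀ i, Ω → β i}
    (S T : Finset ι) (hST : Disjoint S T)
    (hf : iIndepFun m f μ) (hmeas : ∀ i, Measurable (f i)) :
    IndepFun (fun a (i : S) => f i a) (fun a (i : T) => f i a) μ :=
  hf.indepFun_finset S T hST hmeas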
|
ProbabilityTheory.iIndepFun.isProbabilityMeasure
theorem ProbabilityTheory.iIndepFun.isProbabilityMeasure :
∀ {Ω : Type u_1} {ι : Type u_2} {mΩ : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} {β : ι → Type u_10}
{m : (i : ι) → MeasurableSpace (β i)} {f : (i : ι) → Ω → β i},
ProbabilityTheory.iIndepFun m f μ → MeasureTheory.IsProbabilityMeasure μ
This theorem states that for any type `Ω`, index type `ι`, measurable space `mΩ` on `Ω`, measure `μ` on `Ω`, and family `f` assigning to each index `i` a function `f i : Ω → β i`, if the family `f` is mutually independent with respect to `μ` (`iIndepFun m f μ`), then `μ` is a probability measure. In terms of probability theory, this means that whenever a collection of random variables is independent in the sense of `iIndepFun`, the measure used to model the probability necessarily has total mass equal to 1; the notion of mutual independence used here can only hold for a probability measure.
More concisely: If a family of random variables is mutually independent with respect to a measure `μ`, then `μ` is a probability measure.
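A minimal usage sketch against the displayed signature (hypothesis names illustrative):

import Mathlib.Probability.Independence.Basic

open ProbabilityTheory MeasureTheory

-- Sketch: mutual independence of a family of random variables already
-- forces the underlying measure to be a probability measure.
example {Ω ι : Type*} [MeasurableSpace Ω] {μ : Measure Ω}
    {β : ι → Type*} {m : ∀ i, MeasurableSpace (β i)} {f : ∀ i, Ω → β i}
    (hf : iIndepFun m f μ) : IsProbabilityMeasure μ :=
  hf.isProbabilityMeasure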
|
ProbabilityTheory.IndepFun.comp
theorem ProbabilityTheory.IndepFun.comp :
∀ {Ω : Type u_1} {β : Type u_6} {β' : Type u_7} {γ : Type u_8} {γ' : Type u_9} {mΩ : MeasurableSpace Ω}
{μ : MeasureTheory.Measure Ω} {f : Ω → β} {g : Ω → β'} {_mβ : MeasurableSpace β} {_mβ' : MeasurableSpace β'}
{_mγ : MeasurableSpace γ} {_mγ' : MeasurableSpace γ'} {φ : β → γ} {ψ : β' → γ'},
ProbabilityTheory.IndepFun f g μ → Measurable φ → Measurable ψ → ProbabilityTheory.IndepFun (φ ∘ f) (ψ ∘ g) μ
The theorem `ProbabilityTheory.IndepFun.comp` states that for all types `Ω`, `β`, `β'`, `γ`, `γ'` equipped with the measurable-space structures `mΩ`, `_mβ`, `_mβ'`, `_mγ`, `_mγ'`, and a measure `μ` on `Ω`, if two functions `f : Ω → β` and `g : Ω → β'` are independent with respect to `μ`, and `φ : β → γ` and `ψ : β' → γ'` are measurable, then the compositions `φ ∘ f` and `ψ ∘ g` are also independent with respect to `μ`. In other words, if `f` and `g` are independent, then any measurable functions of `f` and `g` are also independent.
More concisely: If `f` and `g` are independent with respect to `μ`, then for any measurable `φ` and `ψ`, the compositions `φ ∘ f` and `ψ ∘ g` are independent with respect to `μ`.
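A minimal usage sketch, assuming the displayed signature (all names illustrative):

import Mathlib.Probability.Independence.Basic

open ProbabilityTheory MeasureTheory

-- Sketch: post-composing independent random variables with measurable maps
-- preserves independence.
example {Ω β β' γ γ' : Type*} [MeasurableSpace Ω] {μ : Measure Ω}
    [MeasurableSpace β] [MeasurableSpace β'] [MeasurableSpace γ] [MeasurableSpace γ']
    {f : Ω → β} {g : Ω → β'} {φ : β → γ} {ψ : β' → γ'}
    (hfg : IndepFun f g μ) (hφ : Measurable φ) (hψ : Measurable ψ) :
    IndepFun (φ ∘ f) (ψ ∘ g) μ :=
  hfg.comp hφ hψ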
|
ProbabilityTheory.iIndepSet.iIndep_comap_mem
theorem ProbabilityTheory.iIndepSet.iIndep_comap_mem :
∀ {Ω : Type u_1} {ι : Type u_2} {x : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω} {f : ι → Set Ω},
ProbabilityTheory.iIndepSet f μ →
ProbabilityTheory.iIndep (fun i => MeasurableSpace.comap (fun x => x ∈ f i) ⊤) μ
This theorem, described as the reverse direction of `ProbabilityTheory.iIndep_comap_mem_iff`, states that for any types `Ω` and `ι`, a measurable space on `Ω`, a measure `μ` on `Ω`, and a family of sets `f : ι → Set Ω`, if the sets `f i` are independent (`iIndepSet f μ`), then the σ-algebras obtained by pulling back the top measurable-space structure on `Prop` along the membership maps `fun x => x ∈ f i` are independent as well. Each comap `MeasurableSpace.comap (fun x => x ∈ f i) ⊤` is the σ-algebra determined by the single set `f i` (its measurable sets are `∅`, `f i`, its complement, and the whole space), so the theorem says that independence of a family of sets carries over to independence of the σ-algebras those sets generate.
More concisely: If a family of sets is independent under `μ`, then the σ-algebras generated by those sets, expressed as comaps of the membership maps, are independent under `μ`.
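A minimal usage sketch against the displayed signature (hypothesis name illustrative):

import Mathlib.Probability.Independence.Basic

open ProbabilityTheory MeasureTheory

-- Sketch: independence of a family of sets gives independence of the
-- σ-algebras pulled back along the membership maps fun x => x ∈ f i.
example {Ω ι : Type*} [MeasurableSpace Ω] {μ : Measure Ω} {f : ι → Set Ω}
    (hf : iIndepSet f μ) :
    iIndep (fun i => MeasurableSpace.comap (fun x => x ∈ f i) ⊤) μ :=
  hf.iIndep_comap_mem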
|
ProbabilityTheory.iIndepSets.iIndep
theorem ProbabilityTheory.iIndepSets.iIndep :
∀ {Ω : Type u_1} {ι : Type u_2} {m0 : MeasurableSpace Ω} {μ : MeasureTheory.Measure Ω}
[inst : MeasureTheory.IsProbabilityMeasure μ] (m : ι → MeasurableSpace Ω),
(∀ (i : ι), m i ≤ m0) →
∀ (π : ι → Set (Set Ω)),
(∀ (n : ι), IsPiSystem (π n)) →
(∀ (i : ι), m i = MeasurableSpace.generateFrom (π i)) →
ProbabilityTheory.iIndepSets π μ → ProbabilityTheory.iIndep m μ
This theorem, named `iIndep`, from probability theory, states the following. Take types `Ω` and `ι`, a measurable space `m0` on `Ω`, a probability measure `μ` on `Ω`, and a family `m` of measurable-space structures (σ-algebras) on `Ω` indexed by `ι` such that `m i ≤ m0` for every `i`. Suppose each `m i` is generated by a family of sets `π i` that forms a π-system. If the families `π` are independent under the measure `μ` (`iIndepSets π μ`), then the measurable spaces `m` are independent under `μ`.
In simpler terms, this theorem states that if we have a collection of measurable spaces generated by independent π-systems (collections of sets closed under intersections of pairs whose intersection is nonempty), then the measurable spaces themselves are independent. This theorem is instrumental in the development of independent events and random variables in probability theory.
More concisely: Given measurable spaces `m i` on `Ω` generated by π-systems `π i` that are independent under a measure `μ`, the measurable spaces `m i` are independent under `μ`.
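A minimal usage sketch, applying the theorem with the hypotheses exactly as displayed (hypothesis names illustrative):

import Mathlib.Probability.Independence.Basic

open ProbabilityTheory MeasureTheory

-- Sketch: independence of generating π-systems upgrades to independence of
-- the σ-algebras they generate, provided μ is a probability measure.
example {Ω ι : Type*} [m0 : MeasurableSpace Ω] {μ : Measure Ω}
    [IsProbabilityMeasure μ] (m : ι → MeasurableSpace Ω)
    (h_le : ∀ i, m i ≤ m0) (π : ι → Set (Set Ω)) (h_pi : ∀ n, IsPiSystem (π n))
    (h_gen : ∀ i, m i = MeasurableSpace.generateFrom (π i))
    (h_ind : iIndepSets π μ) : iIndep m μ :=
  iIndepSets.iIndep m h_le π h_pi h_gen h_ind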
|