LeanAide GPT-4 documentation

Module: Mathlib.Data.Matrix.Block


Matrix.blockDiagonal'_apply

theorem Matrix.blockDiagonal'_apply : ∀ {o : Type u_4} {m' : o → Type u_7} {n' : o → Type u_8} {α : Type u_12} [inst : DecidableEq o] [inst_1 : Zero α] (M : (i : o) → Matrix (m' i) (n' i) α) (ik : (i : o) × m' i) (jk : (i : o) × n' i), Matrix.blockDiagonal' M ik jk = if h : ik.fst = jk.fst then M ik.fst ik.snd (cast ⋯ jk.snd) else 0

The theorem `Matrix.blockDiagonal'_apply` states the following: Given an indexed family of matrices `M : (i : o) → Matrix (m' i) (n' i) α`, where `o` is a type with decidable equality, `α` is a type with a zero element, and `m'` and `n'` are families of types indexed by `o`, and given index-paired values `ik : (i : o) × m' i` and `jk : (i : o) × n' i`, the matrix entry at `ik, jk` of the block diagonal matrix formed from `M`, namely `Matrix.blockDiagonal' M ik jk`, is determined as follows: if `ik.fst = jk.fst` (i.e., `ik` and `jk` belong to the same block), the entry is `M ik.fst ik.snd (cast ⋯ jk.snd)`, that is, the corresponding entry of the block `M ik.fst` with `jk.snd` transported along the proof of `ik.fst = jk.fst`; otherwise it is zero.

More concisely: The theorem `Matrix.blockDiagonal'_apply` asserts that the entry of the block diagonal matrix formed from an indexed family of matrices `M` at index-paired values `ik` and `jk` is the corresponding entry of the block `M ik.fst` if `ik.fst = jk.fst`, and zero otherwise.
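
As a minimal usage sketch (assuming `Mathlib.Data.Matrix.Block` is imported), the off-diagonal case can be read off directly from the lemma:

import Mathlib.Data.Matrix.Block

-- Entries of `blockDiagonal' M` outside the diagonal blocks are zero.
example {o : Type*} {m' n' : o → Type*} {α : Type*} [DecidableEq o] [Zero α]
    (M : (i : o) → Matrix (m' i) (n' i) α) (ik : (i : o) × m' i) (jk : (i : o) × n' i)
    (h : ik.fst ≠ jk.fst) : Matrix.blockDiagonal' M ik jk = 0 := by
  rw [Matrix.blockDiagonal'_apply, dif_neg h]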

Matrix.ext_iff_blocks

theorem Matrix.ext_iff_blocks : ∀ {l : Type u_1} {m : Type u_2} {n : Type u_3} {o : Type u_4} {α : Type u_12} {A B : Matrix (n ⊕ o) (l ⊕ m) α}, A = B ↔ A.toBlocks₁₁ = B.toBlocks₁₁ ∧ A.toBlocks₁₂ = B.toBlocks₁₂ ∧ A.toBlocks₂₁ = B.toBlocks₂₁ ∧ A.toBlocks₂₂ = B.toBlocks₂₂

The theorem `Matrix.ext_iff_blocks` states that for all types `l`, `m`, `n`, `o`, and `α`, and for all block matrices `A` and `B` with entries of type `α` and rows indexed by the disjoint union of `n` and `o`, and columns indexed by the disjoint union of `l` and `m`, `A` equals `B` if and only if all four corresponding blocks of `A` and `B` are equal. These four blocks are the upper-left (`A.toBlocks₁₁ = B.toBlocks₁₁`), the upper-right (`A.toBlocks₁₂ = B.toBlocks₁₂`), the lower-left (`A.toBlocks₂₁ = B.toBlocks₂₁`), and the lower-right (`A.toBlocks₂₂ = B.toBlocks₂₂`) blocks of the matrices `A` and `B`.

More concisely: For matrices `A` and `B` with entries in `α`, rows indexed by `n ⊕ o`, and columns indexed by `l ⊕ m`, `A = B` if and only if their upper-left (₁₁), upper-right (₁₂), lower-left (₂₁), and lower-right (₂₂) blocks are equal.
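
As a sketch (with the same import as above), the reverse direction of the equivalence turns blockwise equalities into equality of the full matrices:

-- Block matrices agreeing on all four blocks are equal.
example {l m n o α : Type*} {A B : Matrix (n ⊕ o) (l ⊕ m) α}
    (h₁₁ : A.toBlocks₁₁ = B.toBlocks₁₁) (h₁₂ : A.toBlocks₁₂ = B.toBlocks₁₂)
    (h₂₁ : A.toBlocks₂₁ = B.toBlocks₂₁) (h₂₂ : A.toBlocks₂₂ = B.toBlocks₂₂) :
    A = B :=
  Matrix.ext_iff_blocks.mpr ⟨h₁₁, h₁₂, h₂₁, h₂₂⟩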

Matrix.fromBlocks_multiply

theorem Matrix.fromBlocks_multiply : ∀ {l : Type u_1} {m : Type u_2} {n : Type u_3} {o : Type u_4} {p : Type u_5} {q : Type u_6} {α : Type u_12} [inst : Fintype l] [inst_1 : Fintype m] [inst_2 : NonUnitalNonAssocSemiring α] (A : Matrix n l α) (B : Matrix n m α) (C : Matrix o l α) (D : Matrix o m α) (A' : Matrix l p α) (B' : Matrix l q α) (C' : Matrix m p α) (D' : Matrix m q α), A.fromBlocks B C D * A'.fromBlocks B' C' D' = (A * A' + B * C').fromBlocks (A * B' + B * D') (C * A' + D * C') (C * B' + D * D')

The theorem `Matrix.fromBlocks_multiply` states that for any types `l`, `m`, `n`, `o`, `p`, `q`, and `α`, where `α` carries the structure of a non-unital non-associative semiring and `l` and `m` are finite types, given matrices `A`, `B`, `C`, `D` of type `Matrix n l α`, `Matrix n m α`, `Matrix o l α`, `Matrix o m α` respectively, and matrices `A'`, `B'`, `C'`, `D'` of type `Matrix l p α`, `Matrix l q α`, `Matrix m p α`, `Matrix m q α` respectively, the product of the block matrix formed by `A`, `B`, `C`, `D` and the block matrix formed by `A'`, `B'`, `C'`, `D'` is equal to the block matrix formed by the matrices `(A * A' + B * C')`, `(A * B' + B * D')`, `(C * A' + D * C')`, `(C * B' + D * D')`. In other words, this theorem provides the usual formula for multiplying block matrices.

More concisely: For matrices `A`, `B`, `C`, `D`, `A'`, `B'`, `C'`, `D'` of compatible sizes over a non-unital non-associative semiring `α`, the block matrix product `(A | B ; C | D) * (A' | B' ; C' | D')` equals `(A * A' + B * C' | A * B' + B * D' ; C * A' + D * C' | C * B' + D * D')`.
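
As a hedged illustration (a sketch under the same import, not part of the Mathlib source), the formula specializes to blockwise multiplication when the off-diagonal blocks are zero:

-- Block matrices with zero off-diagonal blocks multiply blockwise.
example {l m n o p q α : Type*} [Fintype l] [Fintype m] [NonUnitalNonAssocSemiring α]
    (A : Matrix n l α) (D : Matrix o m α) (A' : Matrix l p α) (D' : Matrix m q α) :
    A.fromBlocks 0 0 D * A'.fromBlocks 0 0 D' = (A * A').fromBlocks 0 0 (D * D') := by
  -- `fromBlocks_multiply` rewrites the product; the zero blocks then cancel.
  simp [Matrix.fromBlocks_multiply]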

Matrix.fromBlocks_toBlocks

theorem Matrix.fromBlocks_toBlocks : ∀ {l : Type u_1} {m : Type u_2} {n : Type u_3} {o : Type u_4} {α : Type u_12} (M : Matrix (n ⊕ o) (l ⊕ m) α), M.toBlocks₁₁.fromBlocks M.toBlocks₁₂ M.toBlocks₂₁ M.toBlocks₂₂ = M

This theorem states that for any matrix `M` with rows indexed by the sum type `n ⊕ o`, columns indexed by `l ⊕ m`, and entries in `α`, assembling the four submatrices extracted by `Matrix.toBlocks₁₁`, `Matrix.toBlocks₁₂`, `Matrix.toBlocks₂₁`, and `Matrix.toBlocks₂₂` back together with `Matrix.fromBlocks` yields `M` itself. In essence, this theorem says that if we divide a matrix into four submatrices and then recombine those submatrices in the same configuration, we get back the original matrix.

More concisely: For any matrix `M` with rows indexed by `n ⊕ o`, columns indexed by `l ⊕ m`, and entries in `α`, the matrix `M.toBlocks₁₁.fromBlocks M.toBlocks₁₂ M.toBlocks₂₁ M.toBlocks₂₂` is equal to `M`.
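
A small usage sketch (same import as above): the round trip reduces equality of two matrices to equality of their reassembled blocks.

-- If the reassembled blocks of `M` and `N` agree, so do `M` and `N`.
example {l m n o α : Type*} (M N : Matrix (n ⊕ o) (l ⊕ m) α)
    (h : M.toBlocks₁₁.fromBlocks M.toBlocks₁₂ M.toBlocks₂₁ M.toBlocks₂₂ =
        N.toBlocks₁₁.fromBlocks N.toBlocks₁₂ N.toBlocks₂₁ N.toBlocks₂₂) :
    M = N := by
  simpa only [Matrix.fromBlocks_toBlocks] using h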

Matrix.fromBlocks_transpose

theorem Matrix.fromBlocks_transpose : ∀ {l : Type u_1} {m : Type u_2} {n : Type u_3} {o : Type u_4} {α : Type u_12} (A : Matrix n l α) (B : Matrix n m α) (C : Matrix o l α) (D : Matrix o m α), (A.fromBlocks B C D).transpose = A.transpose.fromBlocks C.transpose B.transpose D.transpose

The theorem `Matrix.fromBlocks_transpose` states that for any four matrices `A`, `B`, `C`, `D`, where `A` has rows indexed by `n` and columns indexed by `l`, `B` has rows indexed by `n` and columns indexed by `m`, `C` has rows indexed by `o` and columns indexed by `l`, and `D` has rows indexed by `o` and columns indexed by `m`, the transpose of the block matrix assembled from `A`, `B`, `C`, `D` is the block matrix assembled from `A.transpose`, `C.transpose`, `B.transpose`, `D.transpose`, in that order. In other words, transposing a block matrix transposes each of the four blocks and also swaps the positions of the two off-diagonal blocks `B` and `C`, just as one would expect from transposing a 2×2 arrangement of blocks.

More concisely: The transpose of the block matrix built from `A` of size `n × l`, `B` of size `n × m`, `C` of size `o × l`, and `D` of size `o × m` is the block matrix built from the transposes `A.transpose`, `C.transpose`, `B.transpose`, `D.transpose`, i.e., each block is transposed and the off-diagonal blocks trade places.
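
A direct usage sketch of the lemma (same import as above), with the explicit arguments in the order given in the statement:

-- Transposing a block matrix transposes the blocks and swaps `B` and `C`.
example {l m n o α : Type*} (A : Matrix n l α) (B : Matrix n m α)
    (C : Matrix o l α) (D : Matrix o m α) :
    (A.fromBlocks B C D).transpose = A.transpose.fromBlocks C.transpose B.transpose D.transpose :=
  Matrix.fromBlocks_transpose A B C D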

Matrix.fromBlocks_one

theorem Matrix.fromBlocks_one : ∀ {l : Type u_1} {m : Type u_2} {α : Type u_12} [inst : DecidableEq l] [inst_1 : DecidableEq m] [inst_2 : Zero α] [inst_3 : One α], Matrix.fromBlocks 1 0 0 1 = 1

This theorem states that for any types `l` and `m` with decidable equality and any type `α` that has a zero and a one, the block matrix whose diagonal blocks are the identity matrices on `l` and `m` and whose off-diagonal blocks are zero matrices is itself the identity matrix. In other words, the identity matrix on the sum type `l ⊕ m` decomposes as the block diagonal arrangement of the identity matrices on `l` and on `m`.

More concisely: For types `l` and `m` with decidable equality and any type `α` with zero and one, `Matrix.fromBlocks 1 0 0 1 = 1` as matrices indexed by `l ⊕ m`.
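
A minimal sketch of the statement (same import as above), with an explicit type ascription since the index types are otherwise left implicit:

-- The block matrix with identity diagonal blocks and zero off-diagonal blocks is the identity.
example {l m α : Type*} [DecidableEq l] [DecidableEq m] [Zero α] [One α] :
    (Matrix.fromBlocks 1 0 0 1 : Matrix (l ⊕ m) (l ⊕ m) α) = 1 :=
  Matrix.fromBlocks_one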

Matrix.fromBlocks_submatrix_sum_swap_sum_swap

theorem Matrix.fromBlocks_submatrix_sum_swap_sum_swap : ∀ {l : Type u_14} {m : Type u_15} {n : Type u_16} {o : Type u_17} {α : Type u_18} (A : Matrix n l α) (B : Matrix n m α) (C : Matrix o l α) (D : Matrix o m α), (A.fromBlocks B C D).submatrix Sum.swap Sum.swap = D.fromBlocks C B A

The theorem `Matrix.fromBlocks_submatrix_sum_swap_sum_swap` states that for all types `l`, `m`, `n`, `o`, and `α`, and for any matrices `A` of size `n × l`, `B` of size `n × m`, `C` of size `o × l`, and `D` of size `o × m` with entries in `α`, reindexing the block matrix built from `A`, `B`, `C`, `D` along `Sum.swap` on both rows and columns (via `Matrix.submatrix`) yields the block matrix built from `D`, `C`, `B`, `A`. In essence, swapping the two summands of both index types exchanges the diagonal blocks `A` and `D` as well as the off-diagonal blocks `B` and `C`.

More concisely: For any matrices `A` of size `n × l`, `B` of size `n × m`, `C` of size `o × l`, and `D` of size `o × m` with entries in `α`, `(A.fromBlocks B C D).submatrix Sum.swap Sum.swap = D.fromBlocks C B A`.
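
A direct usage sketch of the lemma (same import as above):

-- Reindexing along `Sum.swap` on rows and columns reverses the block layout.
example {l m n o α : Type*} (A : Matrix n l α) (B : Matrix n m α)
    (C : Matrix o l α) (D : Matrix o m α) :
    (A.fromBlocks B C D).submatrix Sum.swap Sum.swap = D.fromBlocks C B A :=
  Matrix.fromBlocks_submatrix_sum_swap_sum_swap A B C D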

Matrix.fromBlocks_smul

theorem Matrix.fromBlocks_smul : ∀ {l : Type u_1} {m : Type u_2} {n : Type u_3} {o : Type u_4} {R : Type u_10} {α : Type u_12} [inst : SMul R α] (x : R) (A : Matrix n l α) (B : Matrix n m α) (C : Matrix o l α) (D : Matrix o m α), x • A.fromBlocks B C D = (x • A).fromBlocks (x • B) (x • C) (x • D)

This theorem states that for any type `R` with a scalar multiplication action on a type `α`, and any four matrices `A` of type `Matrix n l α`, `B` of type `Matrix n m α`, `C` of type `Matrix o l α`, and `D` of type `Matrix o m α`, scalar multiplication distributes over the formation of a block matrix. That is, scaling the block matrix built from `A`, `B`, `C`, and `D` by a scalar `x` is the same as scaling each of the four blocks by `x` and then forming the block matrix.

More concisely: For any scalar `x : R` and matrices `A` of size `n × l`, `B` of size `n × m`, `C` of size `o × l`, and `D` of size `o × m` over a type `α` with a scalar multiplication by `R`, `x • A.fromBlocks B C D = (x • A).fromBlocks (x • B) (x • C) (x • D)`.
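
A minimal sketch applying the lemma term-style (same import as above):

-- Scaling a block matrix is the same as scaling each block.
example {l m n o R α : Type*} [SMul R α] (x : R) (A : Matrix n l α) (B : Matrix n m α)
    (C : Matrix o l α) (D : Matrix o m α) :
    x • A.fromBlocks B C D = (x • A).fromBlocks (x • B) (x • C) (x • D) :=
  Matrix.fromBlocks_smul x A B C D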

Matrix.blockDiagonal_apply

theorem Matrix.blockDiagonal_apply : ∀ {m : Type u_2} {n : Type u_3} {o : Type u_4} {α : Type u_12} [inst : DecidableEq o] [inst_1 : Zero α] (M : o → Matrix m n α) (ik : m × o) (jk : n × o), Matrix.blockDiagonal M ik jk = if ik.2 = jk.2 then M ik.2 ik.1 jk.1 else 0

This theorem, `Matrix.blockDiagonal_apply`, asserts that for any types `m`, `n`, `o`, and `α`, where `α` has a zero element and `o` has decidable equality, and for any function `M` mapping elements of `o` to matrices indexed by `m` and `n` with entries in `α`, the following holds: given a pair `ik` of an element of `m` and an element of `o`, and a pair `jk` of an element of `n` and an element of `o`, the value at the `(ik, jk)` position of the block diagonal matrix formed from `M` is the `(ik.1, jk.1)` entry of the matrix `M ik.2` if `ik.2` equals `jk.2`, and zero otherwise. In other words, the block diagonal matrix places each matrix `M k` as a diagonal block and fills the positions outside these blocks with zeros.

More concisely: Given a function `M` mapping elements of a type `o` with decidable equality to matrices indexed by `m` and `n` with entries in a type `α` with a zero element, an entry of the block diagonal matrix formed from `M` equals the corresponding entry of the block `M k` when the row and column indices lie in the same block `k`, and is zero otherwise.
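
As a hedged sketch (same import as above), the on-diagonal case follows from the lemma by simplification:

-- Entries whose row and column indices lie in the same block `k` come from `M k`.
example {m n o α : Type*} [DecidableEq o] [Zero α] (M : o → Matrix m n α)
    (i : m) (j : n) (k : o) :
    Matrix.blockDiagonal M (i, k) (j, k) = M k i j := by
  simp [Matrix.blockDiagonal_apply]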