Matrix.ext
theorem Matrix.ext :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} {M N : Matrix m n α}, (∀ (i : m) (j : n), M i j = N i j) → M = N
The theorem `Matrix.ext` states that two matrices `M` and `N` of the same type `Matrix m n α` (matrices with entries in `α`, rows indexed by `m`, and columns indexed by `n`) are equal whenever all of their corresponding entries agree: if `M i j = N i j` for every row index `i` and column index `j`, then `M = N`.
More concisely: If two matrices of the same type with matching entries are given, then they are equal.
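A minimal usage sketch in Lean (assuming a Mathlib version where the lemma is spelled `Matrix.ext` as above; the `Fin 2`/`ℕ` instantiation is illustrative only):

```lean
import Mathlib

-- Two matrices are equal once all corresponding entries agree.
example (M N : Matrix (Fin 2) (Fin 2) ℕ)
    (h : ∀ i j, M i j = N i j) : M = N :=
  Matrix.ext h
```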
|
Matrix.vecMul_vecMul
theorem Matrix.vecMul_vecMul :
∀ {m : Type u_2} {n : Type u_3} {o : Type u_4} {α : Type v} [inst : NonUnitalSemiring α] [inst_1 : Fintype n]
[inst_2 : Fintype m] (v : m → α) (M : Matrix m n α) (N : Matrix n o α),
Matrix.vecMul (Matrix.vecMul v M) N = Matrix.vecMul v (M * N)
This theorem states that for any types `m`, `n`, `o` and `α` with `α` being a non-unital semiring and `m` and `n` being finite types, and for any vector `v` of type `m → α` and any two matrices `M` and `N` of types `Matrix m n α` and `Matrix n o α`, respectively, the vector-matrix product of the vector-matrix product of `v` and `M` with `N` is equal to the vector-matrix product of `v` with the matrix product of `M` and `N`. In mathematical notation, this can be written as `(vM)N = v(MN)`, where `vM` denotes the vector-matrix product of `v` and `M`, `MN` denotes the matrix product of `M` and `N`, and `(vM)N` and `v(MN)` denote the vector-matrix products.
More concisely: For finite types `m`, `n` and a non-unital semiring `α`, multiplying the row vector `v : m → α` first by `M : Matrix m n α` and then by `N : Matrix n o α` gives the same result as multiplying `v` by the product `M * N`. That is, `(vM)N = v(MN)`.
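A usage sketch (assuming the `Matrix.vecMul` spelling above; the variable context is illustrative):

```lean
import Mathlib

variable {m n o α : Type*} [NonUnitalSemiring α] [Fintype m] [Fintype n]

-- Reassociating a row-vector-matrix-matrix product: (vM)N = v(MN).
example (v : m → α) (M : Matrix m n α) (N : Matrix n o α) :
    Matrix.vecMul (Matrix.vecMul v M) N = Matrix.vecMul v (M * N) :=
  Matrix.vecMul_vecMul v M N
```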
|
Matrix.dotProduct_smul
theorem Matrix.dotProduct_smul :
∀ {m : Type u_2} {R : Type u_7} {α : Type v} [inst : Fintype m] [inst_1 : Monoid R] [inst_2 : Mul α]
[inst_3 : AddCommMonoid α] [inst_4 : DistribMulAction R α] [inst_5 : SMulCommClass R α α] (x : R) (v w : m → α),
Matrix.dotProduct v (x • w) = x • Matrix.dotProduct v w
The theorem `Matrix.dotProduct_smul` states that for any scalar `x` and any two vectors `v` and `w`, the dot product of `v` and the scalar multiplication of `x` with `w` is equal to the scalar multiplication of `x` with the dot product of `v` and `w`. This property holds for all types `m` corresponding to the vector dimensions, `R` corresponding to the scalar type, and `α` corresponding to the type of the vector entries, as long as `m` is finite, `R` is a monoid (a set equipped with an associative binary operation and an identity element), `α` is a commutative additive monoid (an additive monoid where the operation is commutative) and a multiplication operation is defined on `α`, `R` acts distributively on `α`, and the scalar multiplication of `R` and `α` is commutative. This is a generalization of the standard property of dot product in real vector spaces.
More concisely: For any scalar `x` and vectors `v` and `w` over a finite index type, `Matrix.dotProduct v (x • w) = x • Matrix.dotProduct v w`.
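A sketch instantiating the lemma at `ℝ` (the `Fin 3` index is illustrative):

```lean
import Mathlib

-- Pulling the scalar `x` out of the second factor of a dot product.
example (x : ℝ) (v w : Fin 3 → ℝ) :
    Matrix.dotProduct v (x • w) = x • Matrix.dotProduct v w :=
  Matrix.dotProduct_smul x v w
```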
|
Matrix.neg_dotProduct
theorem Matrix.neg_dotProduct :
∀ {m : Type u_2} {α : Type v} [inst : Fintype m] [inst_1 : NonUnitalNonAssocRing α] (v w : m → α),
Matrix.dotProduct (-v) w = -Matrix.dotProduct v w
The theorem `Matrix.neg_dotProduct` states that for any types `m` and `α`, where `m` is a finite type and `α` is a non-unital, non-associative ring, given any two functions `v` and `w` from `m` to `α`, the dot product of the negation of `v` with `w` is equal to the negation of the dot product of `v` and `w`. This means that negation can be pulled out of the first argument of the dot product.
More concisely: For any finite type `m` and non-unital, non-associative ring `α`, the dot product of `-v` with `w` equals the negation of the dot product of `v` and `w`: `Matrix.dotProduct (-v) w = -Matrix.dotProduct v w`.
|
Matrix.diagonal_apply_ne'
theorem Matrix.diagonal_apply_ne' :
∀ {n : Type u_3} {α : Type v} [inst : DecidableEq n] [inst_1 : Zero α] (d : n → α) {i j : n},
j ≠ i → Matrix.diagonal d i j = 0
The theorem `Matrix.diagonal_apply_ne'` states that for any types `n` and `α`, given that `n` has decidable equality and `α` has an additive identity (zero), for any function `d : n → α` and any elements `i` and `j` of type `n` where `j` is not equal to `i`, the `ij`-th element of the square matrix obtained by applying the `Matrix.diagonal` function to `d` is zero. In essence, this theorem is saying that all off-diagonal elements of a diagonal matrix (constructed via the given function `d`) are zero.
More concisely: For any type `n` with decidable equality and type `α` with an additive identity, the `(i, j)` entry of the diagonal matrix `Matrix.diagonal d` built from a function `d : n → α` is zero whenever `j ≠ i`.
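A concrete sketch (assuming the spelling above; `by decide` discharges the `1 ≠ 0` side goal on `Fin 3`):

```lean
import Mathlib

-- The (0, 1) entry of a 3 × 3 diagonal matrix vanishes.
example (d : Fin 3 → ℤ) : Matrix.diagonal d 0 1 = 0 :=
  Matrix.diagonal_apply_ne' d (by decide)
```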
|
Matrix.vecMul_one
theorem Matrix.vecMul_one :
∀ {m : Type u_2} {α : Type v} [inst : NonAssocSemiring α] [inst_1 : Fintype m] [inst_2 : DecidableEq m] (v : m → α),
Matrix.vecMul v 1 = v
The theorem `Matrix.vecMul_one` asserts that for any type `m`, and a type `α` that is a non-associative semiring, given the finite nature of `m` and the existence of a decidable equality on `m`, the result of multiplying a vector `v` (which is a function from `m` to `α`) by the identity matrix equals the original vector `v`. In other words, it states that the multiplication of any vector by the identity matrix doesn't alter the vector, which corresponds to a common property in linear algebra.
More concisely: For any finite type `m` and non-associative semiring type `α`, the multiplication of a vector `v` by the identity matrix is equal to `v` itself.
|
Matrix.mul_diagonal
theorem Matrix.mul_diagonal :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} [inst : NonUnitalNonAssocSemiring α] [inst_1 : Fintype n]
[inst_2 : DecidableEq n] (d : n → α) (M : Matrix m n α) (i : m) (j : n),
(M * Matrix.diagonal d) i j = M i j * d j
The theorem `Matrix.mul_diagonal` states that for any types `m`, `n`, and `α` where `α` is a non-unital, non-associative semiring, and where `n` has a finite number of elements and decidable equality, given a function `d : n → α` that determines the elements on the diagonal of a matrix, a matrix `M : Matrix m n α`, and indices `i : m` and `j : n`, the result of matrix multiplication of `M` with the diagonal matrix determined by `d` at the indices `i, j` is equal to the product of the element at `i, j` in `M` and the `j`-th element of `d`. In other words, it states that multiplying a matrix by a diagonal matrix on the right corresponds to element-wise multiplication of the matrix by the diagonal entries.
More concisely: For a non-unital, non-associative semiring `α`, a finite type `n` with decidable equality, a function `d : n → α`, and a matrix `M : Matrix m n α`, the `(i, j)` entry of `M * Matrix.diagonal d` is `M i j * d j`; that is, right multiplication by a diagonal matrix scales column `j` of `M` by `d j`.
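A small instantiation at `ℚ` (illustrative dimensions):

```lean
import Mathlib

-- Right multiplication by a diagonal matrix scales column `j` by `d j`.
example (d : Fin 2 → ℚ) (M : Matrix (Fin 2) (Fin 2) ℚ) (i j : Fin 2) :
    (M * Matrix.diagonal d) i j = M i j * d j :=
  Matrix.mul_diagonal d M i j
```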
|
Matrix.diagonal_transpose
theorem Matrix.diagonal_transpose :
∀ {n : Type u_3} {α : Type v} [inst : DecidableEq n] [inst_1 : Zero α] (v : n → α),
(Matrix.diagonal v).transpose = Matrix.diagonal v
The theorem `Matrix.diagonal_transpose` states that for any type `n` with decidable equality and any type `α` that has a zero element, given a function `v` from `n` to `α`, the transpose of the diagonal matrix generated by `v` is equal to the original diagonal matrix. In simpler terms, this theorem states that transposing a diagonal matrix does not change it.
More concisely: For any type `n` with decidable equality and type `α` with a zero element, the transpose of the diagonal matrix formed from a function `v : n → α` is equal to the original diagonal matrix.
|
Matrix.transpose_mul
theorem Matrix.transpose_mul :
∀ {l : Type u_1} {m : Type u_2} {n : Type u_3} {α : Type v} [inst : AddCommMonoid α] [inst_1 : CommSemigroup α]
[inst_2 : Fintype n] (M : Matrix m n α) (N : Matrix n l α), (M * N).transpose = N.transpose * M.transpose
The theorem `Matrix.transpose_mul` states that for any two matrices `M` and `N` with entries in a type `α` that forms a commutative semigroup under multiplication and a commutative monoid under addition, the transpose of the product of `M` and `N` is equal to the product of the transpose of `N` and the transpose of `M`. Here `M` is a matrix where rows are indexed by `m` and columns are indexed by `n`, and `N` is a matrix where rows are indexed by `n` and columns are indexed by `l`. The theorem assumes that the type `n` is finite, i.e., there are only finitely many elements of type `n`. This theorem essentially describes the property of transposition with respect to matrix multiplication, which is a well-known result in linear algebra.
More concisely: For matrices `M` and `N` whose entries form a commutative semigroup under multiplication and a commutative monoid under addition, with a finite inner index type, the transpose of their product is the product of their transposes in reverse order: `(M · N)^T = N^T · M^T`.
|
Matrix.one_mulVec
theorem Matrix.one_mulVec :
∀ {m : Type u_2} {α : Type v} [inst : NonAssocSemiring α] [inst_1 : Fintype m] [inst_2 : DecidableEq m] (v : m → α),
Matrix.mulVec 1 v = v
This theorem states that for any type `m` and any type `α` that is a non-associative semiring, given a finite type `m` and the assertion that equality for `m` is decidable, multiplying the identity matrix (represented by `1`) with a vector `v` (which is a function from `m` to `α`) results in the vector `v` itself. In other words, in this mathematical setting, the identity matrix acts as the multiplicative identity for vector multiplication.
More concisely: For any finite type `m` with decidable equality and any non-associative semiring `α`, multiplying the identity matrix by a vector `v : m → α` yields `v` itself.
|
Matrix.mul_neg
theorem Matrix.mul_neg :
∀ {m : Type u_2} {n : Type u_3} {o : Type u_4} {α : Type v} [inst : NonUnitalNonAssocRing α] [inst_1 : Fintype n]
(M : Matrix m n α) (N : Matrix n o α), M * -N = -(M * N)
The theorem `Matrix.mul_neg` states that for all types `m`, `n`, `o`, and `α`, where `α` is a non-unital non-associative ring, and `n` is a finite type, if `M` is a matrix with entries in `α` and rows indexed by `m` and columns indexed by `n`, and `N` is a matrix with entries in `α` and rows indexed by `n` and columns indexed by `o`, then the product of `M` and the additive inverse of `N` (`-N`) is equal to the additive inverse of the product of `M` and `N` (`-(M * N)`). This is a property of matrix multiplication over rings, similar to the distributivity of multiplication over subtraction in real numbers.
More concisely: For matrices $M$ with entries in a non-unital non-associative ring $\alpha$, indexed by types $m$ and $n$, and $N$ with entries in $\alpha$ indexed by $n$ and $o$, $M \cdot (-N) = -(M \cdot N)$.
|
Matrix.scalar_commute_iff
theorem Matrix.scalar_commute_iff :
∀ {n : Type u_3} {α : Type v} [inst : Semiring α] [inst_1 : DecidableEq n] [inst_2 : Fintype n] {r : α}
{M : Matrix n n α}, Commute ((Matrix.scalar n) r) M ↔ r • M = MulOpposite.op r • M
This theorem, `Matrix.scalar_commute_iff`, states that for any semiring `α`, any finite type `n` with decidable equality, any element `r` of `α`, and any `n × n` matrix `M` with entries in `α`, the scalar matrix `Matrix.scalar n r` commutes with `M` if and only if `r • M = MulOpposite.op r • M`, i.e., scaling every entry of `M` by `r` on the left gives the same result as scaling every entry by `r` on the right (the action of `MulOpposite.op r`).
More concisely: For a semiring `α`, a finite type `n` with decidable equality, `r : α`, and `M : Matrix n n α`, the scalar matrix of `r` commutes with `M` if and only if the left and right scalar actions of `r` on `M` agree.
|
Matrix.mul_add
theorem Matrix.mul_add :
∀ {m : Type u_2} {n : Type u_3} {o : Type u_4} {α : Type v} [inst : NonUnitalNonAssocSemiring α]
[inst_1 : Fintype n] (L : Matrix m n α) (M N : Matrix n o α), L * (M + N) = L * M + L * N
This theorem, `Matrix.mul_add`, states that for any types `m`, `n`, `o`, and any type `α` that forms a NonUnitalNonAssocSemiring, and given that `n` is a Fintype, if `L` is a matrix with entries of type `α` and dimensions indexed by `m` and `n`, and `M` and `N` are matrices also with entries of type `α` and dimensions indexed by `n` and `o`, then the result of matrix multiplication `L` with the sum of `M` and `N` (`L * (M + N)`) is equal to the sum of the result of `L` multiplied by `M` and `L` multiplied by `N` (`L * M + L * N`). That is, it verifies the distributivity of matrix multiplication over matrix addition.
More concisely: For matrices `L` of dimensions `m x n`, `M` and `N` of dimensions `n x o`, and type `α` forming a NonUnitalNonAssocSemiring, the distributivity of matrix multiplication over matrix addition holds: `L * (M + N) = L * M + L * N`.
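A sketch of the distributivity law at concrete (illustrative) dimensions:

```lean
import Mathlib

-- Left distributivity of matrix multiplication over matrix addition.
example (L : Matrix (Fin 2) (Fin 3) ℕ) (M N : Matrix (Fin 3) (Fin 2) ℕ) :
    L * (M + N) = L * M + L * N :=
  Matrix.mul_add L M N
```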
|
Matrix.diagonal_map
theorem Matrix.diagonal_map :
∀ {n : Type u_3} {α : Type v} {β : Type w} [inst : DecidableEq n] [inst_1 : Zero α] [inst_2 : Zero β] {f : α → β},
f 0 = 0 → ∀ {d : n → α}, (Matrix.diagonal d).map f = Matrix.diagonal fun m => f (d m)
The theorem `Matrix.diagonal_map` states that for any types `n`, `α`, and `β`, given that `n` has a decidable equality, `α` and `β` have a zero element, and a function `f` from `α` to `β` such that `f` of zero equals zero, for any function `d : n → α`, mapping `f` over the diagonal matrix of `d` results in the diagonal matrix of `f` applied to `d`. In other words, applying a function to every element of a diagonal matrix while maintaining the diagonal structure is equivalent to first applying the function to the diagonal entries and then forming a diagonal matrix.
More concisely: For any type `n` with decidable equality, types `α` and `β` with zero elements, and a function `f : α → β` with `f 0 = 0`, the application of `f` to the diagonal matrix of a function `d : n → α` is equal to the diagonal matrix of `f` applied to `d`.
|
Matrix.dotProduct_neg
theorem Matrix.dotProduct_neg :
∀ {m : Type u_2} {α : Type v} [inst : Fintype m] [inst_1 : NonUnitalNonAssocRing α] (v w : m → α),
Matrix.dotProduct v (-w) = -Matrix.dotProduct v w
The theorem `Matrix.dotProduct_neg` states that for any two mathematical objects `v` and `w` that are functions from an arbitrary type `m` to another type `α`, where `m` is a finite type and `α` is a non-unital non-associative ring, the dot product of `v` and the additive inverse of `w` (denoted as `-w`) equals to the additive inverse of the dot product of `v` and `w`. In other words, the dot product operation distributes over negation. In mathematical notation, it can be written as $\langle v, -w \rangle = - \langle v, w \rangle$, where $\langle \cdot, \cdot \rangle$ denotes the dot product.
More concisely: The dot product of a function from a finite type to a non-unital non-associative ring and the additive inverse of another function equals the additive inverse of their dot product. In other words, $\langle v, -w \rangle = - \langle v, w \rangle$.
|
Matrix.submatrix_id_id
theorem Matrix.submatrix_id_id :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} (A : Matrix m n α), A.submatrix id id = A
This theorem states that for any matrix `A` with arbitrary types for its rows (`m`), columns (`n`), and entries (`α`), applying the `Matrix.submatrix` function to `A` with the identity function (`id`) for both the row and column selectors results in the same matrix `A`. In other words, using the identity function as selectors when creating a submatrix doesn't change the original matrix.
More concisely: For any matrix `A` of type `m × n` over type `α`, `Matrix.submatrix A id id = A`.
|
Matrix.one_eq_pi_single
theorem Matrix.one_eq_pi_single :
∀ {n : Type u_3} {α : Type v} [inst : DecidableEq n] [inst_1 : Zero α] [inst_2 : One α] {i j : n},
1 i j = Pi.single i 1 j
This theorem states that for any types `n` and `α`, if `n` has decidable equality, and `α` has both zero and one elements (typically representing a numeric type like an integer or real number), then for any `i` and `j` of type `n`, the identity matrix's (`1`) element at position `(i, j)` is equal to the function `Pi.single i 1 j`.
In other words, it asserts that each element of the identity matrix is the same as the output of the function `Pi.single`, which returns `1` when its first and last arguments (`i` and `j` in this case) are equal and `0` otherwise, i.e., it's `1` on the diagonal (`i = j`) and `0` elsewhere.
More concisely: For any type `n` with decidable equality and any type `α` with zero and one elements, each entry `1 i j` of the identity matrix equals `Pi.single i 1 j`; in other words, row `i` of the identity matrix is the function `Pi.single i 1`.
|
Matrix.comp_equiv_dotProduct_comp_equiv
theorem Matrix.comp_equiv_dotProduct_comp_equiv :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} [inst : Fintype m] [inst_1 : Fintype n]
[inst_2 : NonUnitalNonAssocSemiring α] (x y : n → α) (e : m ≃ n),
Matrix.dotProduct (x ∘ ⇑e) (y ∘ ⇑e) = Matrix.dotProduct x y
The theorem 'Matrix.comp_equiv_dotProduct_comp_equiv' states that for all types `m`, `n` and `α`, where `m` and `n` are of 'Fintype' (finite types) and `α` is a 'NonUnitalNonAssocSemiring', and for any two functions `x` and `y` from `n` to `α`, and for any equivalence `e` between `m` and `n`, the dot product of the functions `x` and `y` after applying the equivalence `e` (i.e., `x ∘ ⇑e` and `y ∘ ⇑e`) is the same as the dot product of the functions `x` and `y` themselves. In other words, permuting vectors on both sides of a dot product operation has no effect on the result.
More concisely: For any finite types m and n and a NonUnitalNonAssocSemiring α, the dot product of functions x and y from n to α, when composed with an equivalence e between m and n, equals the dot product of x and y themselves.
|
Matrix.mulVec_diagonal
theorem Matrix.mulVec_diagonal :
∀ {m : Type u_2} {α : Type v} [inst : NonUnitalNonAssocSemiring α] [inst_1 : Fintype m] [inst_2 : DecidableEq m]
(v w : m → α) (x : m), (Matrix.diagonal v).mulVec w x = v x * w x
This theorem states that for any types `m` and `α`, where `α` is a non-unital non-associative semiring, `m` is a finite type, and `m` has decidable equality, if `v` and `w` are functions from `m` to `α` and `x` is an element of type `m`, then the result of multiplying the vector `w` by the diagonal matrix defined by `v` at index `x` is equal to the product of `v x` and `w x`.
In other words, in the context of matrix and vector multiplication, this theorem asserts that the multiplication of a vector `w` by a diagonal matrix (which only has non-zero elements along its main diagonal, as defined by function `v`) performs an element-wise multiplication of the vector `w` with the diagonal of the matrix.
More concisely: For a non-unital non-associative semiring `α`, a finite type `m` with decidable equality, functions `v w : m → α`, and an index `x : m`, the `x`-th entry of `(Matrix.diagonal v).mulVec w` is `v x * w x`.
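A sketch at `ℝ` (the `Fin 3` index is illustrative):

```lean
import Mathlib

-- Multiplying a vector by a diagonal matrix acts entrywise.
example (v w : Fin 3 → ℝ) (x : Fin 3) :
    (Matrix.diagonal v).mulVec w x = v x * w x :=
  Matrix.mulVec_diagonal v w x
```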
|
Matrix.transpose_map
theorem Matrix.transpose_map :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} {β : Type w} {f : α → β} {M : Matrix m n α},
M.transpose.map f = (M.map f).transpose
The theorem `Matrix.transpose_map` states that for any matrix `M` with entries of type `α`, indexed by types `m` and `n`, and any function `f` mapping from `α` to `β`, the result of first transposing the matrix `M` and then applying the function `f` to each entry (i.e., `Matrix.map (Matrix.transpose M) f`) is the same as the result of first applying the function `f` to each entry of `M` and then transposing the result (i.e., `Matrix.transpose (Matrix.map M f)`). In mathematical terms, this theorem is asserting the commutativity of the operations of matrix transposition and applying a function to each entry of a matrix.
More concisely: For any matrix `M` with entries of type `α` and any function `f` from `α` to `β`, the transposition of `Matrix.map M f` is equal to `Matrix.map (Matrix.transpose M) f`.
|
Matrix.transpose_zero
theorem Matrix.transpose_zero : ∀ {m : Type u_2} {n : Type u_3} {α : Type v} [inst : Zero α], Matrix.transpose 0 = 0
This theorem states that the transpose of a zero matrix, for any given types `m`, `n`, and `α`, where `α` is a type with a zero element, is also a zero matrix. In other words, transposing a matrix that consists solely of zero entries results in another matrix that also consists solely of zero entries. This is true regardless of the dimensions or types of the original zero matrix.
More concisely: For any types `m`, `n`, and `α` with a zero element, the transpose of a zero `m x n` matrix over `α` is also a zero `n x m` matrix over `α`.
|
Matrix.transpose_transpose
theorem Matrix.transpose_transpose :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} (M : Matrix m n α), M.transpose.transpose = M
This theorem states that for any matrix `M` with entries in a type `α` and whose rows are indexed by a type `m` and columns are indexed by a type `n`, the transpose of the transpose of `M` is equal to `M` itself. This is a fundamental property of matrices, identical to the mathematical statement that the double transpose of a matrix returns the original matrix. In mathematical terms, if $M$ is any matrix, then $(M^T)^T = M$, where $T$ denotes the transpose operation.
More concisely: The transpose of a matrix's transpose is equal to the original matrix.
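A sketch showing the lemma applies to rectangular matrices as well:

```lean
import Mathlib

-- Transposing twice returns the original (not necessarily square) matrix.
example (M : Matrix (Fin 2) (Fin 3) ℤ) : M.transpose.transpose = M :=
  Matrix.transpose_transpose M
```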
|
Matrix.conjTranspose_mul
theorem Matrix.conjTranspose_mul :
∀ {l : Type u_1} {m : Type u_2} {n : Type u_3} {α : Type v} [inst : Fintype n] [inst_1 : NonUnitalSemiring α]
[inst_2 : StarRing α] (M : Matrix m n α) (N : Matrix n l α),
(M * N).conjTranspose = N.conjTranspose * M.conjTranspose
The theorem `Matrix.conjTranspose_mul` states that for any two matrices `M` and `N` with elements in a non-unital semiring `α` that also forms a star-ring, the conjugate transpose of the product of `M` and `N` is equal to the product of the conjugate transpose of `N` and the conjugate transpose of `M`. In other words, if `M` is a matrix from `m` to `n` and `N` is a matrix from `n` to `l`, then the rule `(M * N)† = N† * M†` holds true. Here `†` denotes the conjugate transpose of a matrix and `*` denotes matrix multiplication.
More concisely: For matrices M from m to n and N from n to l over a non-unital semiring α that forms a star-ring, the conjugate transpose of their product equals the product of their conjugate transposes: (M * N)† = N† * M† .
|
Matrix.diagonal_apply_ne
theorem Matrix.diagonal_apply_ne :
∀ {n : Type u_3} {α : Type v} [inst : DecidableEq n] [inst_1 : Zero α] (d : n → α) {i j : n},
i ≠ j → Matrix.diagonal d i j = 0
This theorem states that for any types `n` and `α`, where `n` has a decidable equality and `α` has an element identified as zero, and any function `d` mapping `n` to `α`, and any elements `i` and `j` of `n` such that `i` is not equal to `j`, the value at the `i,j` position in the matrix created by the `Matrix.diagonal` function using `d` is zero. In simple terms, in a square matrix created by the function `Matrix.diagonal`, all the off-diagonal elements are zero.
More concisely: For any type `n` with decidable equality and any type `α` with an identified zero element, if we define an `n × n` matrix `M` using `Matrix.diagonal` with a function mapping `n` to `α`, then all off-diagonal entries of `M` equal zero.
|
Matrix.transpose_pow
theorem Matrix.transpose_pow :
∀ {m : Type u_2} {α : Type v} [inst : CommSemiring α] [inst_1 : Fintype m] [inst_2 : DecidableEq m]
(M : Matrix m m α) (k : ℕ), (M ^ k).transpose = M.transpose ^ k
The theorem `Matrix.transpose_pow` states that for any type `m` and any commutative semiring `α`, if `m` is a finite type with decidable equality, then for any square matrix `M : Matrix m m α` and any natural number `k`, the transpose of `M ^ k` is equal to the `k`-th power of the transpose of `M`. In simpler terms, raising a matrix to a power and then taking its transpose is the same as first taking the transpose and then raising the result to the power.
More concisely: For a finite index type `m` with decidable equality and a commutative semiring `α`, `(M ^ k)ᵀ = (Mᵀ) ^ k` for every `M : Matrix m m α` and `k : ℕ`.
|
Matrix.transpose_apply
theorem Matrix.transpose_apply :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} (M : Matrix m n α) (i : n) (j : m), M.transpose i j = M j i
This theorem states that for any type `m` and `n` and an arbitrary type `α`, given a matrix `M` with entries of type `α` and indices of types `m` (for rows) and `n` (for columns), and arbitrary indices `i` of type `n` and `j` of type `m`, the entry at index `(i, j)` in the transpose of `M` is equal to the entry at index `(j, i)` in `M`. In other words, it is stating the basic property of matrix transposition: flipping over the entries along the main diagonal.
More concisely: For any matrix `M`, the entry at `(i, j)` in the transpose of `M` equals the entry at `(j, i)` in `M`.
|
Matrix.dotProduct_assoc
theorem Matrix.dotProduct_assoc :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} [inst : Fintype m] [inst_1 : Fintype n] [inst_2 : NonUnitalSemiring α]
(u : m → α) (w : n → α) (v : Matrix m n α),
Matrix.dotProduct (fun j => Matrix.dotProduct u fun i => v i j) w =
Matrix.dotProduct u fun i => Matrix.dotProduct (v i) w
The theorem `Matrix.dotProduct_assoc` states that for any types `m`, `n`, and `α` (where `α` is a non-unital semiring and `m` and `n` are finite), and any functions `u : m → α`, `w : n → α`, and `v : Matrix m n α`, the associative law of the dot product of matrices holds. In other words, you can change the order of operations in a nested dot product without changing the result. The mathematical notation would look like this:
Given vectors `u ∈ α^m`, `w ∈ α^n` and matrix `v ∈ α^(m×n)`, the dot product of `u` with the matrix-vector product `v·w` (where each entry of `v·w` is the dot product of a row of `v` with `w`) is equal to the dot product of `w` with the matrix-vector product `u·v` (where each entry of `u·v` is the dot product of `u` with a column of `v`).
In symbols:
`u · (v·w) = (u·v) · w`
More concisely: For a non-unital semiring `α`, finite types `m` and `n`, vectors `u : m → α` and `w : n → α`, and a matrix `v : Matrix m n α`, the dot product of the row vector `u·v` with `w` equals the dot product of `u` with the column vector `v·w`. In symbols, `(u·v) · w = u · (v·w)`.
|
Matrix.zero_dotProduct
theorem Matrix.zero_dotProduct :
∀ {m : Type u_2} {α : Type v} [inst : Fintype m] [inst_1 : NonUnitalNonAssocSemiring α] (v : m → α),
Matrix.dotProduct 0 v = 0
The theorem `Matrix.zero_dotProduct` states that for any type `m` of finite size and any non-unital, non-associative semiring `α`, the dot product of the zero vector and any vector `v` (which is a function from `m` to `α`) is always the zero element of the semiring `α`. This is equivalent to saying that the sum of entrywise products of the zero vector and any other vector is always zero.
More concisely: For any finite size type m and non-unital, non-associative semiring α, the dot product of the zero vector and any vector in the m-dimensional space over α is the semiring's zero element.
|
Matrix.star_mul
theorem Matrix.star_mul :
∀ {n : Type u_3} {α : Type v} [inst : Fintype n] [inst_1 : NonUnitalSemiring α] [inst_2 : StarRing α]
(M N : Matrix n n α), star (M * N) = star N * star M
The theorem `Matrix.star_mul` states that for all types `n` and `α`, given the conditions that `n` is a finite type, `α` is a non-unital semiring and a star ring, and `M` and `N` are matrices with entries in `α` indexed by `n`, the star of the product of `M` and `N` equals the product of the star of `N` and the star of `M`. Essentially, this theorem is expressing the property of *-algebra where the * operation reverses the order of multiplication, also known as the "involution" property.
More concisely: For square matrices `M` and `N` over a non-unital semiring with a star-ring structure, indexed by a finite type `n`, `star (M * N) = star N * star M`.
|
Matrix.mul_smul
theorem Matrix.mul_smul :
∀ {l : Type u_1} {m : Type u_2} {n : Type u_3} {R : Type u_7} {α : Type v} [inst : AddCommMonoid α] [inst_1 : Mul α]
[inst_2 : Fintype n] [inst_3 : Monoid R] [inst_4 : DistribMulAction R α] [inst_5 : SMulCommClass R α α]
(M : Matrix m n α) (a : R) (N : Matrix n l α), M * a • N = a • (M * N)
The theorem `Matrix.mul_smul` states that for any type `l`, `m`, `n`, `R`, and `α`, given the conditions that `α` forms an additive commutative monoid and can be multiplied, `n` is a finite type, `R` forms a monoid, `α` is a distributive multiplication action of `R`, and `R` and `α` form a scalar multiplication commutative class, then for any matrix `M` with indices from `m` to `n` and entries in `α`, any element `a` from `R`, and any matrix `N` with indices from `n` to `l` and entries in `α`, the multiplication of `M` and the scalar multiplication of `a` and `N` equals the scalar multiplication of `a` and the multiplication of `M` and `N`. In mathematical notation, this says $M \cdot (a \cdot N) = a \cdot (M \cdot N)$.
More concisely: For matrices `M : Matrix m n α` and `N : Matrix n l α` over an additive commutative monoid `α` with multiplication and a compatible scalar action by a monoid `R`, and any `a : R`, we have `M * (a • N) = a • (M * N)`.
|
Matrix.transpose_one
theorem Matrix.transpose_one :
∀ {n : Type u_3} {α : Type v} [inst : DecidableEq n] [inst_1 : Zero α] [inst_2 : One α], Matrix.transpose 1 = 1
This theorem states that for any type `n` with decidable equality and any type `α` that has a zero and a one (that is, elements identified as "zero" and "one"), the transpose of the identity matrix (the matrix with ones on the diagonal and zeros elsewhere) is equal to the identity matrix itself. In mathematical terms, the transpose of the identity matrix (of arbitrary size determined by `n`) is still the identity matrix.
More concisely: For any type `n` with decidable equality and any type `α` with a zero and a one, the transpose of the identity matrix of size `n` over `α` is the identity matrix itself.
|
Matrix.conjTranspose_eq_transpose_of_trivial
theorem Matrix.conjTranspose_eq_transpose_of_trivial :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} [inst : Star α] [inst_1 : TrivialStar α] (A : Matrix m n α),
A.conjTranspose = A.transpose
The theorem `Matrix.conjTranspose_eq_transpose_of_trivial` states that for any type of matrix `A` with dimensions indexed by `m` and `n` and entries in `α`, if the star operation on `α` is trivial, meaning that the star operation maps each element to itself (`star x = x`), then the conjugate transpose of `A` is the same as the transpose of `A`. The star operation is typically used to denote the complex conjugate in the field of complex numbers, but in this case, it's trivial, which fits the context of real numbers where the conjugate of a real number is the number itself.
More concisely: If the star operation on `α` is the identity function (as it is for the real numbers), then the conjugate transpose of any matrix `A : Matrix m n α` equals its transpose.
|
Matrix.one_apply
theorem Matrix.one_apply :
∀ {n : Type u_3} {α : Type v} [inst : DecidableEq n] [inst_1 : Zero α] [inst_2 : One α] {i j : n},
1 i j = if i = j then 1 else 0
The theorem `Matrix.one_apply` states that for any types `n` and `α`, where `n` has decidable equality, `α` has a zero element and a one element, and for any elements `i` and `j` of type `n`, the application of the identity matrix (represented as `1`) at indices `i` and `j` is `1` if `i` and `j` are equal, otherwise it is `0`. Essentially, this theorem formalizes the standard property of identity matrices in mathematics.
More concisely: For any identity matrix of size `n` over type `α` with decidable equality, the `(i, j)`-entry is `1` if `i = j`, and `0` otherwise.
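A sketch with a concrete identity matrix (illustrative size and entry type):

```lean
import Mathlib

-- Entries of the identity matrix: 1 on the diagonal, 0 elsewhere.
example (i j : Fin 4) :
    (1 : Matrix (Fin 4) (Fin 4) ℤ) i j = if i = j then 1 else 0 :=
  Matrix.one_apply
```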
|
Matrix.transpose_smul
theorem Matrix.transpose_smul :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} {R : Type u_10} [inst : SMul R α] (c : R) (M : Matrix m n α),
(c • M).transpose = c • M.transpose
The theorem `Matrix.transpose_smul` states that for all types `m` and `n` indexing the rows and columns of a matrix, a type `α` of matrix entries, and a type `R` of scalars acting via `SMul`, for any scalar `c : R` and matrix `M : Matrix m n α`, the transpose of the scalar multiple `c • M` is equal to `c •` the transpose of `M`. In mathematical terms, $(cM)^T = cM^T$, where $c$ is a scalar and $M^T$ denotes the transpose of $M$.
More concisely: The transpose of a matrix multiplied by a scalar is equal to the scalar multiplied by the transpose of the matrix.
|
Matrix.map_smul'
theorem Matrix.map_smul' :
∀ {n : Type u_3} {α : Type v} {β : Type w} [inst : Mul α] [inst_1 : Mul β] (f : α → β) (r : α) (A : Matrix n n α),
(∀ (a₁ a₂ : α), f (a₁ * a₂) = f a₁ * f a₂) → (r • A).map f = f r • A.map f
The theorem `Matrix.map_smul'` states that for any types `n`, `α`, and `β`, where `α` and `β` are types with multiplication, and any function `f` from `α` to `β`, scalar `r : α`, and `n × n` matrix `A` with entries of type `α`, if `f` preserves multiplication (i.e., for any `a₁ a₂ : α`, `f (a₁ * a₂) = f a₁ * f a₂`), then scaling `A` by `r` and then mapping the entries of the result by `f` yields the same result as first mapping the entries of `A` by `f` and then scaling the result by `f r`. In symbols, `(r • A).map f = f r • A.map f`.
More concisely: For any multiplication-preserving function `f : α → β`, scalar `r : α`, and matrix `A : Matrix n n α`, we have `(r • A).map f = f r • A.map f`.
|
Matrix.diag_diagonal
theorem Matrix.diag_diagonal :
∀ {n : Type u_3} {α : Type v} [inst : DecidableEq n] [inst_1 : Zero α] (a : n → α), (Matrix.diagonal a).diag = a
The theorem `Matrix.diag_diagonal` states that for any type `n` with decidable equality and any type `α` with a zero element, if you have a function `a` from `n` to `α`, then the diagonal of the square matrix obtained by calling the function `Matrix.diagonal` with `a` as argument is equal to `a`. In other words, this is saying that if you create a square matrix where the diagonal elements are defined by some function `a` and the off-diagonal elements are zero, then retrieving the diagonal of this matrix gives you back the original function `a`.
More concisely: For any type `n` with decidable equality and any type `α` with a zero element, the diagonal of the square matrix obtained by applying the function `Matrix.diagonal` to a function `a` from `n` to `α` is equal to `a` itself.
|
Matrix.dotProduct_comm
theorem Matrix.dotProduct_comm :
∀ {m : Type u_2} {α : Type v} [inst : Fintype m] [inst_1 : AddCommMonoid α] [inst_2 : CommSemigroup α]
(v w : m → α), Matrix.dotProduct v w = Matrix.dotProduct w v
The theorem `Matrix.dotProduct_comm` states that for any types `m` and `α`, where `m` has a finite number of instances and `α` is a commutative semigroup under multiplication and has a commutative monoid structure under addition, the dot product of two functions `v` and `w` (from `m` to `α`) is commutative. In other words, the dot product of `v` and `w` is equal to the dot product of `w` and `v`. This mirrors the concept in mathematics that the dot product of two vectors is commutative, i.e., the order in which the vectors are multiplied does not affect the result.
More concisely: For any finite type `m` and commutative semigroup `α` under multiplication with a commutative monoid structure under addition, the dot product of two functions from `m` to `α` is commutative.
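A sketch at `ℕ`, which is commutative under both operations:

```lean
import Mathlib

-- The dot product over ℕ is symmetric in its two arguments.
example (v w : Fin 3 → ℕ) :
    Matrix.dotProduct v w = Matrix.dotProduct w v :=
  Matrix.dotProduct_comm v w
```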
|
Matrix.dotProduct_mulVec
theorem Matrix.dotProduct_mulVec :
∀ {m : Type u_2} {n : Type u_3} {R : Type u_7} [inst : Fintype n] [inst_1 : Fintype m]
[inst_2 : NonUnitalSemiring R] (v : m → R) (A : Matrix m n R) (w : n → R),
Matrix.dotProduct v (A.mulVec w) = Matrix.dotProduct (Matrix.vecMul v A) w
This theorem states that for any vector `v` of type `m → R`, any matrix `A` of type `Matrix m n R`, and any vector `w` of type `n → R` (where `R` is a non-unital semiring, `m` and `n` are finite types), the dot product of `v` with the result of multiplying the matrix `A` by the vector `w` (i.e., `Matrix.dotProduct v (Matrix.mulVec A w)`) is equal to the dot product of the result of multiplying the vector `v` by the matrix `A` (i.e., `Matrix.vecMul v A`) with the vector `w` (i.e., `Matrix.dotProduct (Matrix.vecMul v A) w`). In other words, it asserts that the dot product operation is associative with respect to the matrix-vector multiplication operation.
More concisely: For any vector `v` of type `m -> R` and matrix `A` of type `Matrix m n R`, and vector `w` of type `n -> R`, the dot product of `v` with the matrix-vector product `A * w` is equal to the dot product of the vector-matrix product `v * A` with `w`.
|
Matrix.ext_iff
theorem Matrix.ext_iff :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} {M N : Matrix m n α}, (∀ (i : m) (j : n), M i j = N i j) ↔ M = N
The theorem `Matrix.ext_iff` states that for any two matrices `M` and `N` with entries in a type `α`, row indices in a type `m`, and column indices in a type `n`, the matrices `M` and `N` are equal if and only if they have the same entries at each corresponding pair of row and column indices `(i, j)`. In other words, `M` and `N` are equal if they agree on all of their entries.
More concisely: For matrices M and N of size m x n over type α, M = N if and only if M i j = N i j for all i and j.
|
Matrix.conjTranspose_one
theorem Matrix.conjTranspose_one :
∀ {n : Type u_3} {α : Type v} [inst : DecidableEq n] [inst_1 : Semiring α] [inst_2 : StarRing α],
Matrix.conjTranspose 1 = 1
This theorem states that for any type `n` and any type `α` which has decidable equality, is a semiring and a star-ring, the conjugate transpose of the identity matrix is equal to the identity matrix. This can be interpreted in mathematical terms as, given a complex matrix `A`, if `A` is an identity matrix, then the conjugate transpose of `A` (also known as Hermitian transpose or adjoint), denoted as `A*`, is also an identity matrix. This property holds for all square matrices of any size `n`.
More concisely: Over a semiring `α` with a star-ring structure and an index type `n` with decidable equality, the conjugate transpose of the identity matrix is the identity matrix.
|
Matrix.smul_dotProduct
theorem Matrix.smul_dotProduct :
∀ {m : Type u_2} {R : Type u_7} {α : Type v} [inst : Fintype m] [inst_1 : Monoid R] [inst_2 : Mul α]
[inst_3 : AddCommMonoid α] [inst_4 : DistribMulAction R α] [inst_5 : IsScalarTower R α α] (x : R) (v w : m → α),
Matrix.dotProduct (x • v) w = x • Matrix.dotProduct v w
The theorem `Matrix.smul_dotProduct` states that for any type `m`, any Monoid `R`, any types `α`, where `α` has a multiplication operation, an addition operation and `α` is a commutative monoid, and `R` acts on `α` in a distributive manner, and also multiplication in `α` is compatible with the action of `R` on `α`, then for any element `x` of `R` and any two functions `v` and `w` from `m` to `α`, the dot product of `x` scaled `v` and `w` is equal to `x` scaled by the dot product of `v` and `w`. In other words, the scalar multiplication can be pulled out of the dot product, which is a standard property of dot products in linear algebra. In mathematical notation, for vectors `v` and `w`, and scalar `x`, this property could be written as `(x*v) . w = x * (v . w)`.
More concisely: For a monoid `R` acting distributively on `α` with `IsScalarTower R α α`, the dot product of `x • v` with `w` equals `x •` the dot product of `v` with `w`: `Matrix.dotProduct (x • v) w = x • Matrix.dotProduct v w`.
|
Matrix.zero_mul
theorem Matrix.zero_mul :
∀ {l : Type u_1} {m : Type u_2} {n : Type u_3} {α : Type v} [inst : NonUnitalNonAssocSemiring α]
[inst_1 : Fintype m] (M : Matrix m n α), 0 * M = 0
The theorem `Matrix.zero_mul` states that for any type `α` that is a NonUnitalNonAssocSemiring, and for any types `l`, `m`, `n`, if `M` is a matrix of type `α` with rows indexed by `m` and columns indexed by `n`, then the product of `0` and `M` is `0`. Here, `Fintype m` denotes that `m` is a finite type. Essentially, this theorem is saying that for any matrix `M`, its multiplication by zero results in the zero matrix, similar to the mathematical principle that any number multiplied by zero equals zero.
More concisely: For a non-unital, non-associative semiring `α` and a finite type `m`, multiplying the zero matrix by any matrix `M : Matrix m n α` yields the zero matrix: `0 * M = 0`.
|
Matrix.smul_eq_diagonal_mul
theorem Matrix.smul_eq_diagonal_mul :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} [inst : NonUnitalNonAssocSemiring α] [inst_1 : Fintype m]
[inst_2 : DecidableEq m] (M : Matrix m n α) (a : α), a • M = (Matrix.diagonal fun x => a) * M
This theorem states that for any type `m` (indexing the rows) and `n` (indexing the columns), and any type `α` (the type of the matrix entries) where `α` is a non-unital and non-associative semiring, and where `m` is a finite type with decidable equality, given any matrix `M` of type `Matrix m n α` and any scalar 'a' of type `α`, scaling the matrix `M` by `a` (denoted as `a • M`) is equivalent to multiplying a diagonal matrix (where all the diagonal entries are `a`) with the matrix `M`. The diagonal matrix is defined by the function `(Matrix.diagonal fun x => a)`.
More concisely: For a finite type `m` with decidable equality, a non-unital, non-associative semiring `α`, and a matrix `M : Matrix m n α`, scaling `M` by `a` equals left multiplication by the diagonal matrix whose diagonal entries all equal `a`.
|
Matrix.conjTranspose_conjTranspose
theorem Matrix.conjTranspose_conjTranspose :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} [inst : InvolutiveStar α] (M : Matrix m n α),
M.conjTranspose.conjTranspose = M
The theorem `Matrix.conjTranspose_conjTranspose` states that for any matrix `M` with indices `m` and `n` and entries in a type `α` that has an involutive star operation, the conjugate transpose of the conjugate transpose of `M` is equal to `M` itself. In mathematical terms, given a matrix `M` and denoting the conjugate transpose operation as `*`, the theorem asserts that `(M*)* = M`. This is analogous to the property of the conjugate transpose operation in complex matrix algebra, where taking the conjugate transpose twice returns the original matrix.
More concisely: For any matrix `M` (not necessarily square) with entries in a type carrying an involutive star operation, the conjugate transpose of its conjugate transpose is the original matrix: `(M*)* = M`.
|
Matrix.vecMul_transpose
theorem Matrix.vecMul_transpose :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} [inst : NonUnitalCommSemiring α] [inst_1 : Fintype n]
(A : Matrix m n α) (x : n → α), Matrix.vecMul x A.transpose = A.mulVec x
The theorem `Matrix.vecMul_transpose` states that for any matrix `A` of type `α`, indexed by `m` for rows and `n` for columns, and a given vector `x` also of type `α`, the vector-matrix multiplication of `x` with the transpose of `A` is the same as the matrix-vector multiplication of `A` with `x`. Here, `α` is any type for which a non-unital commutative semiring structure exists, and `n` is assumed to be a finite type. In mathematical terms, if `A` is a matrix and `x` is a vector, then the operation `x * Transpose(A)` is equivalent to `A * x`.
More concisely: For a matrix `A : Matrix m n α` over a non-unital commutative semiring and a vector `x : n → α`, the vector-matrix product of `x` with the transpose of `A` equals the matrix-vector product of `A` with `x`.
|
Matrix.diagonal_zero
theorem Matrix.diagonal_zero :
∀ {n : Type u_3} {α : Type v} [inst : DecidableEq n] [inst_1 : Zero α], (Matrix.diagonal fun x => 0) = 0
The theorem `Matrix.diagonal_zero` states that for any type `n` and any type `α` which has an instance of `DecidableEq` for `n` and `Zero` for `α`, the diagonal matrix constructed by mapping all elements of `n` to zero is equal to the zero matrix. In other words, if we create a square matrix such that the diagonal elements are all zeros (and off-diagonal elements are also zeros because of the `Matrix.diagonal` definition), then this matrix is precisely the zero matrix.
More concisely: For any index type `n` with decidable equality and any type `α` with a zero element, the diagonal matrix whose diagonal entries are all zero is the zero matrix.
|
Matrix.submatrix_apply
theorem Matrix.submatrix_apply :
∀ {l : Type u_1} {m : Type u_2} {n : Type u_3} {o : Type u_4} {α : Type v} (A : Matrix m n α) (r_reindex : l → m)
(c_reindex : o → n) (i : l) (j : o), A.submatrix r_reindex c_reindex i j = A (r_reindex i) (c_reindex j)
The theorem `Matrix.submatrix_apply` states that for any types `l`, `m`, `n`, `o`, and `α`, and for any matrix `A` with entries in `α` indexed by `m` for rows and `n` for columns, along with any reindexing functions `r_reindex` and `c_reindex` from `l` to `m` and `o` to `n` respectively, and for any indices `i` of type `l` and `j` of type `o`, the entry of the submatrix of `A` created by `r_reindex` and `c_reindex` at position `(i,j)` is equal to the entry of `A` at the position `(r_reindex i, c_reindex j)`. In other words, the theorem defines how to get entries from a submatrix of a given matrix using reindexing functions.
More concisely: For any matrix `A` with entries in type `α`, and given reindexing functions `r_reindex : l → m` and `c_reindex : o → n`, the entry of the submatrix at position `(i, j)` is equal to the entry of `A` at position `(r_reindex i, c_reindex j)`.
|
Matrix.mulVec_transpose
theorem Matrix.mulVec_transpose :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} [inst : NonUnitalCommSemiring α] [inst_1 : Fintype m]
(A : Matrix m n α) (x : m → α), A.transpose.mulVec x = Matrix.vecMul x A
This theorem, `Matrix.mulVec_transpose`, states that for any type `m`, `n`, and `α`, where `α` has a structure of a `NonUnitalCommSemiring` and `m` is a `Fintype`, given a matrix `A` of type `Matrix m n α` and a vector `x` of type `m → α`, the multiplication of the transpose of `A` with vector `x` using the `Matrix.mulVec` function is the same as the vector multiplication of `x` with `A` using the `Matrix.vecMul` function. In mathematical terms, this means that for a given matrix `A` and a vector `x`, we have $(A^T)x = xA$, where $A^T$ denotes the transpose of `A`.
More concisely: For any matrix `A` of type `Matrix m n α` and vector `x` of type `m → α`, where `α` is a `NonUnitalCommSemiring` and `m` is a `Fintype`, the transpose of `A` multiplied by vector `x` is equal to vector `x` multiplied by `A`, i.e., $(A^T)x = xA$.
|
Matrix.conjTranspose_smul
theorem Matrix.conjTranspose_smul :
∀ {m : Type u_2} {n : Type u_3} {R : Type u_7} {α : Type v} [inst : Star R] [inst_1 : Star α] [inst_2 : SMul R α]
[inst_3 : StarModule R α] (c : R) (M : Matrix m n α), (c • M).conjTranspose = star c • M.conjTranspose
The theorem `Matrix.conjTranspose_smul` states that for any matrix whose entries are elements of a star-module, the conjugate transpose of the scalar multiplication of a scalar and this matrix is equal to the scalar multiplication of the star of the scalar and the conjugate transpose of the matrix. This is expressed in terms of scalar multiplication (`•`) and the star operation (`star`). In other words, given a scalar `c` and a matrix `M` with entries in a star-module, the conjugate transpose of `c` times `M` is the same as the star of `c` times the conjugate transpose of `M`. Note that `StarModule` is a strong assumption, and there are other versions of this theorem that apply when this condition isn't met.
More concisely: For any scalar `c` and matrix `M` with entries in a star-module, the conjugate transpose of `c` times `M` equals the star of `c` times the conjugate transpose of `M`.
|
Matrix.one_mul
theorem Matrix.one_mul :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} [inst : NonAssocSemiring α] [inst_1 : Fintype m]
[inst_2 : DecidableEq m] (M : Matrix m n α), 1 * M = M
This theorem, `Matrix.one_mul`, states that for any types `m`, `n`, and `α` where `α` forms a nonassociative semiring, `m` has a finite number of distinct values and equality between elements of `m` is decidable, if `M` is a matrix with rows indexed by `m` and columns indexed by `n` and entries in `α`, then multiplying `M` by the multiplicative identity `1` results in `M` itself. In other words, for any such matrix `M`, `1 * M = M`.
More concisely: For a nonassociative semiring `α` and a finite row index type `m` with decidable equality, the identity matrix is a left identity for matrix multiplication: `1 * M = M` for every `M : Matrix m n α`.
|
Matrix.mul_assoc
theorem Matrix.mul_assoc :
∀ {l : Type u_1} {m : Type u_2} {n : Type u_3} {o : Type u_4} {α : Type v} [inst : NonUnitalSemiring α]
[inst_1 : Fintype m] [inst_2 : Fintype n] (L : Matrix l m α) (M : Matrix m n α) (N : Matrix n o α),
L * M * N = L * (M * N)
This theorem, `Matrix.mul_assoc`, states that for any four types `l`, `m`, `n`, and `o`, and any type `α` that is a non-unital semiring, given three matrices `L`, `M`, and `N`, with `L` being a matrix from `l` to `m` with entries in `α`, `M` being a matrix from `m` to `n` with entries in `α`, and `N` being a matrix from `n` to `o` with entries in `α`, the matrix multiplication operation is associative. In other words, multiplying `L`, `M`, and `N` in the order `(L * M) * N` is the same as multiplying them in the order `L * (M * N)`. This property holds true regardless of the actual entries of the matrices.
More concisely: For any non-unital semiring type α and matrices L from l to m, M from m to n, and N from n to o with entries in α, the associativity of matrix multiplication holds: (L * M) * N = L * (M * N).
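A sketch re-parenthesizing a triple product at concrete (illustrative) dimensions:

```lean
import Mathlib

-- Associativity of matrix multiplication for compatible rectangular shapes.
example (L : Matrix (Fin 2) (Fin 3) ℤ) (M : Matrix (Fin 3) (Fin 4) ℤ)
    (N : Matrix (Fin 4) (Fin 5) ℤ) : L * M * N = L * (M * N) :=
  Matrix.mul_assoc L M N
```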
|
Matrix.dotProduct_add
theorem Matrix.dotProduct_add :
∀ {m : Type u_2} {α : Type v} [inst : Fintype m] [inst_1 : NonUnitalNonAssocSemiring α] (u v w : m → α),
Matrix.dotProduct u (v + w) = Matrix.dotProduct u v + Matrix.dotProduct u w
The theorem `Matrix.dotProduct_add` states that for any vectors `u`, `v`, and `w` of elements from a type `m` mapping to a type `α`, where `α` forms a non-unital non-associative semiring, the dot product of `u` with the vector sum of `v` and `w` equals the sum of the dot product of `u` and `v` and the dot product of `u` and `w`. In mathematical terms, this is expressing the distributive property of the dot product over vector addition: `u ⋅ (v + w) = u ⋅ v + u ⋅ w`.
More concisely: For any vectors `u`, `v`, and `w` in a type `m` mapping to a non-unital non-associative semiring `α`, the dot product of `u` with the vector sum of `v` and `w` is equal to the sum of the dot products of `u` with `v` and `w`: `u ⋅ (v + w) = u ⋅ v + u ⋅ w`.
|
Matrix.diag_smul
theorem Matrix.diag_smul :
∀ {n : Type u_3} {R : Type u_7} {α : Type v} [inst : SMul R α] (r : R) (A : Matrix n n α),
(r • A).diag = r • A.diag
The theorem `Matrix.diag_smul` states that for any type `n`, any type `R`, any type `α`, and any instance of scalar multiplication (`SMul`) of `R` and `α`, given a scalar `r` of type `R` and a square matrix `A` of type `Matrix n n α`, the diagonal of the scalar multiplication of `r` and `A`, i.e., `r • A`, is equal to the scalar multiplication of `r` and the diagonal of `A`. In mathematical terms, this theorem asserts that if you scale a matrix by a scalar `r`, the diagonal of the resulting matrix is the same as if you just scaled the original diagonal by `r`. In other words, scalar multiplication commutes with taking the diagonal of a square matrix.
More concisely: For any square matrix A and scalar r, the diagonal of the matrix r * A is equal to the scalar multiplication of r and the diagonal of A.
|
Matrix.mul_apply_eq_vecMul
theorem Matrix.mul_apply_eq_vecMul :
∀ {m : Type u_2} {n : Type u_3} {o : Type u_4} {α : Type v} [inst : NonUnitalNonAssocSemiring α]
[inst_1 : Fintype n] (A : Matrix m n α) (B : Matrix n o α) (i : m), (A * B) i = Matrix.vecMul (A i) B
This theorem states that for any types `m`, `n`, `o`, and `α`, where `α` is a type with a structure of a non-unital, non-associative semiring, and `n` is a finite type, given matrices `A` (with row indexes of type `m` and column indexes of type `n`) and `B` (with row indexes of type `n` and column indexes of type `o`), for any row index `i` of type `m`, the `i`th row of the product matrix `A * B` is identical to the result of vector multiplication of the `i`th row of `A` with matrix `B`. In mathematical terms, this corresponds to the fact that matrix multiplication can be carried out row by row.
More concisely: For any non-unital, non-associative semiring type `α`, finite type `n`, matrices `A` of size `m x n` and `B` of size `n x o`, the `i`th row of the product `A * B` is equal to the vector-matrix multiplication of the `i`th row of `A` with `B`.
|
Matrix.neg_mul
theorem Matrix.neg_mul :
∀ {m : Type u_2} {n : Type u_3} {o : Type u_4} {α : Type v} [inst : NonUnitalNonAssocRing α] [inst_1 : Fintype n]
(M : Matrix m n α) (N : Matrix n o α), -M * N = -(M * N)
The theorem `Matrix.neg_mul` states that for any three types `m`, `n`, and `o`, and a type `α` forming a non-unital non-associative ring, and for any finite type `n`, if we have two matrices `M` and `N` such that `M` is a matrix with entries of type `α`, rows indexed by `m`, and columns indexed by `n`, and `N` is a matrix with entries of type `α`, rows indexed by `n`, and columns indexed by `o`, the multiplication of the negation of `M` with `N` equals the negation of the product of `M` and `N`. This theorem essentially states a property of matrix multiplication similar to the distributive law of scalar multiplication in linear algebra.
More concisely: For matrices $M$ and $N$ with compatible dimensions over a non-unital non-associative ring $\alpha$, $-M \cdot N = -(M \cdot N)$.
|
Matrix.submatrix_mul_equiv
theorem Matrix.submatrix_mul_equiv :
∀ {l : Type u_1} {m : Type u_2} {n : Type u_3} {o : Type u_4} {α : Type v} [inst : Fintype n] [inst_1 : Fintype o]
[inst_2 : AddCommMonoid α] [inst_3 : Mul α] {p : Type u_10} {q : Type u_11} (M : Matrix m n α) (N : Matrix n p α)
(e₁ : l → m) (e₂ : o ≃ n) (e₃ : q → p), M.submatrix e₁ ⇑e₂ * N.submatrix (⇑e₂) e₃ = (M * N).submatrix e₁ e₃
The theorem `Matrix.submatrix_mul_equiv` states that for any types `l`, `m`, `n`, `o`, `α`, `p`, `q`, with `n` and `o` finite; an additive commutative monoid `α` equipped with a multiplication; matrices `M : Matrix m n α` and `N : Matrix n p α`; a map `e₁ : l → m`, an equivalence `e₂ : o ≃ n`, and a map `e₃ : q → p`; the product of the submatrix of `M` under `e₁, e₂` with the submatrix of `N` under `e₂, e₃` equals the submatrix of `M * N` under `e₁, e₃`. In other words, taking submatrices commutes with matrix multiplication, provided the shared inner index is reindexed by an equivalence.
More concisely: For `M : Matrix m n α`, `N : Matrix n p α`, maps `e₁ : l → m` and `e₃ : q → p`, and an equivalence `e₂ : o ≃ n`, we have `M.submatrix e₁ e₂ * N.submatrix e₂ e₃ = (M * N).submatrix e₁ e₃`.
|
Matrix.dotProduct_comp_equiv_symm
theorem Matrix.dotProduct_comp_equiv_symm :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} [inst : Fintype m] [inst_1 : Fintype n]
[inst_2 : NonUnitalNonAssocSemiring α] (u : m → α) (x : n → α) (e : n ≃ m),
Matrix.dotProduct u (x ∘ ⇑e.symm) = Matrix.dotProduct (u ∘ ⇑e) x
The theorem `Matrix.dotProduct_comp_equiv_symm` states that for any vectors `u` and `x` of types `m` and `n` respectively, in a non-unital, non-associative semiring `α`, and a given equivalence `e` between `n` and `m`, permuting the vector `x` (via `e.symm`) on the right side of a dot product is the same as permuting the vector `u` (via `e`) on the left side of the dot product. In more mathematical terms, it states that $u \cdot (x \circ e^{-1}) = (u \circ e) \cdot x$, where $\cdot$ denotes the dot product.
More concisely: For vectors `u : m → α` and `x : n → α` over a non-unital, non-associative semiring `α`, and an equivalence `e : n ≃ m`, the dot product of `u` with `x ∘ e.symm` equals the dot product of `u ∘ e` with `x`.
|
Matrix.diagonal_mul
theorem Matrix.diagonal_mul :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} [inst : NonUnitalNonAssocSemiring α] [inst_1 : Fintype m]
[inst_2 : DecidableEq m] (d : m → α) (M : Matrix m n α) (i : m) (j : n),
(Matrix.diagonal d * M) i j = d i * M i j
This theorem states that for any types `m`, `n`, and `α`, where `α` is a non-unital non-associative semiring and `m` is a fintype with decidable equality, given a function `d : m → α`, a matrix `M : Matrix m n α`, and indices `i : m` and `j : n`, the `(i, j)` entry of the product of the diagonal matrix built from `d` with `M` equals `d i` times the `(i, j)` entry of `M`. In mathematical terms, $(D \cdot M)_{ij} = d_i \cdot M_{ij}$, where $D$ is the diagonal matrix built from `d`; that is, left multiplication by a diagonal matrix scales each row of `M` by the corresponding diagonal entry.
More concisely: For a non-unital non-associative semiring `α`, a fintype `m` with decidable equality, and `d : m → α`, we have `(Matrix.diagonal d * M) i j = d i * M i j`.
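An illustrative specialization (the index types and `ℤ` are our own choices):
-- left multiplication by `diagonal d` scales row `i` by `d i`
example (d : Fin 2 → ℤ) (M : Matrix (Fin 2) (Fin 3) ℤ) (i : Fin 2) (j : Fin 3) :
    (Matrix.diagonal d * M) i j = d i * M i j :=
  Matrix.diagonal_mul d M i j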
|
Matrix.diagonal_one
theorem Matrix.diagonal_one :
∀ {n : Type u_3} {α : Type v} [inst : DecidableEq n] [inst_1 : Zero α] [inst_2 : One α],
(Matrix.diagonal fun x => 1) = 1
The theorem `Matrix.diagonal_one` asserts that for any type `n` with decidable equality and any type `α` that has both zero and one elements, the square matrix produced by `Matrix.diagonal` from the constant function one (i.e., `Matrix.diagonal fun x => 1`) is equal to the identity matrix `1`. Essentially, this theorem states that a diagonal matrix whose diagonal entries are all one is the identity matrix.
More concisely: For any type `n` with decidable equality and any type `α` having zero and one elements, the diagonal matrix with all diagonal entries equal to one is the identity matrix.
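A minimal sketch, assuming `Fin 3` and `ℤ` purely for concreteness:
-- the diagonal matrix of the constant function one is the identity
example : (Matrix.diagonal fun _ : Fin 3 => (1 : ℤ)) = 1 :=
  Matrix.diagonal_one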
|
Matrix.add_mul
theorem Matrix.add_mul :
∀ {l : Type u_1} {m : Type u_2} {n : Type u_3} {α : Type v} [inst : NonUnitalNonAssocSemiring α]
[inst_1 : Fintype m] (L M : Matrix l m α) (N : Matrix m n α), (L + M) * N = L * N + M * N
This theorem, `Matrix.add_mul`, states that for any types `l`, `m`, `n` and `α` where `α` is a non-unital non-associative semiring and `m` is finite, given three matrices `L`, `M`, and `N` with entries in `α` where `L` and `M` are matrices with rows indexed by `l` and columns indexed by `m`, and `N` is a matrix with rows indexed by `m` and columns indexed by `n`, the multiplication of the sum of `L` and `M` with `N` is equal to the sum of the result of `L` multiplied by `N` and `M` multiplied by `N`. In more mathematical terms, it's saying that $(L + M) \cdot N = L \cdot N + M \cdot N$. This is a property of matrix multiplication mirroring the distributive law of multiplication over addition.
More concisely: For matrices `L M : Matrix l m α` and `N : Matrix m n α` over a non-unital non-associative semiring `α` with `m` finite, $(L + M) \cdot N = L \cdot N + M \cdot N$.
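For instance (a sketch with hypothetical shapes over `ℤ`):
-- right distributivity of matrix multiplication over addition
example (L M : Matrix (Fin 2) (Fin 3) ℤ) (N : Matrix (Fin 3) (Fin 2) ℤ) :
    (L + M) * N = L * N + M * N :=
  Matrix.add_mul L M N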
|
Matrix.diagonal_apply_eq
theorem Matrix.diagonal_apply_eq :
∀ {n : Type u_3} {α : Type v} [inst : DecidableEq n] [inst_1 : Zero α] (d : n → α) (i : n),
Matrix.diagonal d i i = d i
The theorem `Matrix.diagonal_apply_eq` states that for any type `n` with decidable equality, any type `α` that has a zero element, any function `d` mapping from `n` to `α`, and any element `i` of type `n`, the diagonal element `(i, i)` of the square matrix formed by the `Matrix.diagonal` function applied to `d` is equal to the value of the function `d` at `i`. In other words, in any diagonal matrix constructed by the `Matrix.diagonal` function, the diagonal elements are exactly the values given by the input function at corresponding positions.
More concisely: For any function `d` mapping from type `n` to type `α` with zero element, the diagonal element `(i, i)` of the matrix `Matrix.diagonal d` equals `d i` for all `i` in `n`.
|
Matrix.smul_mul
theorem Matrix.smul_mul :
∀ {l : Type u_1} {m : Type u_2} {n : Type u_3} {R : Type u_7} {α : Type v} [inst : AddCommMonoid α] [inst_1 : Mul α]
[inst_2 : Fintype n] [inst_3 : Monoid R] [inst_4 : DistribMulAction R α] [inst_5 : IsScalarTower R α α] (a : R)
(M : Matrix m n α) (N : Matrix n l α), a • M * N = a • (M * N)
The theorem `Matrix.smul_mul` states that for types `l`, `m`, `n`, `R`, `α`, where `α` is an additive commutative monoid with a multiplication, `n` is a finite type, `R` is a monoid acting distributively on `α`, and the action is compatible with multiplication on `α` (`IsScalarTower R α α`), scalar multiplication is compatible with matrix multiplication. That is, for any scalar `a : R` and matrices `M` and `N` whose inner index types match, scaling `M` by `a` and then multiplying by `N` equals multiplying `M` by `N` and then scaling the result by `a`. Symbolically, `a • M * N = a • (M * N)`.
More concisely: For a monoid `R` acting distributively on `α` with `IsScalarTower R α α`, and matrices `M` and `N` of compatible dimensions over `α`, `a • M * N = a • (M * N)` for every `a : R`.
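A hedged sketch taking `R = α = ℤ`, so the required `DistribMulAction` and `IsScalarTower` instances hold automatically:
-- scalars move freely past the left factor
example (a : ℤ) (M : Matrix (Fin 2) (Fin 3) ℤ) (N : Matrix (Fin 3) (Fin 2) ℤ) :
    a • M * N = a • (M * N) :=
  Matrix.smul_mul a M N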
|
Matrix.mul_zero
theorem Matrix.mul_zero :
∀ {m : Type u_2} {n : Type u_3} {o : Type u_4} {α : Type v} [inst : NonUnitalNonAssocSemiring α]
[inst_1 : Fintype n] (M : Matrix m n α), M * 0 = 0
The theorem `Matrix.mul_zero` states that for any matrix `M` with entries in a type α, where α is a non-unital non-associative semiring, and the rows of `M` are indexed by type m and the columns are indexed by type n, the multiplication of `M` and the zero matrix always equals the zero matrix. Here, `M * 0 = 0` essentially says that when you multiply any matrix by the zero matrix, you get back the zero matrix, which mirrors the principle in mathematics that any number multiplied by zero equals zero. The `Fintype n` constraint expresses that the type n, which indexes the columns of the matrix, is finite.
More concisely: For any matrix `M` over a non-unital non-associative semiring with finitely many columns, `M * 0 = 0`.
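A minimal check under illustrative assumptions (`Fin` shapes, `ℤ` entries):
-- any matrix times the zero matrix is the zero matrix
example (M : Matrix (Fin 2) (Fin 3) ℤ) :
    M * (0 : Matrix (Fin 3) (Fin 2) ℤ) = 0 :=
  Matrix.mul_zero M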
|
Matrix.map_op_smul'
theorem Matrix.map_op_smul' :
∀ {n : Type u_3} {α : Type v} {β : Type w} [inst : Mul α] [inst_1 : Mul β] (f : α → β) (r : α) (A : Matrix n n α),
(∀ (a₁ a₂ : α), f (a₁ * a₂) = f a₁ * f a₂) → (MulOpposite.op r • A).map f = MulOpposite.op (f r) • A.map f
The theorem `Matrix.map_op_smul'` states that for any types `n`, `α`, and `β`, where `α` and `β` are both equipped with a multiplication operation, given a function `f` from `α` to `β` and a square matrix `A` with entries in `α`, if `f` preserves multiplication, i.e., `f (a₁ * a₂) = f a₁ * f a₂` for all `a₁, a₂ : α`, then scaling `A` by the opposite of a scalar `r` and then mapping every entry by `f` is the same as mapping every entry of `A` by `f` and then scaling by the opposite of `f r`. In symbols, `(MulOpposite.op r • A).map f = MulOpposite.op (f r) • A.map f`, where `MulOpposite.op r` is the element `r` viewed in the multiplicative opposite (so its scalar action multiplies entries on the right), `A.map f` applies `f` to each entry of `A`, and `•` denotes scalar multiplication.
More concisely: If `f` preserves multiplication, then `(MulOpposite.op r • A).map f = MulOpposite.op (f r) • A.map f`.
|
Matrix.mul_one
theorem Matrix.mul_one :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} [inst : NonAssocSemiring α] [inst_1 : Fintype n]
[inst_2 : DecidableEq n] (M : Matrix m n α), M * 1 = M
The theorem `Matrix.mul_one` asserts that for any types `m` and `n` and any non-associative semiring `α`, provided `n` is a finite type with decidable equality, multiplying any matrix `M` (rows indexed by `m`, columns indexed by `n`, entries in `α`) by the identity matrix gives back the original matrix `M`. In mathematical terms, for any matrix `M`, we have `M * 1 = M`, where `1` is the identity matrix.
More concisely: For any matrix `M : Matrix m n α` over a non-associative semiring `α`, with `n` finite and having decidable equality, `M * 1 = M`.
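As a sketch (shapes and entry type are our own instantiation):
-- right multiplication by the identity leaves a matrix unchanged
example (M : Matrix (Fin 2) (Fin 3) ℤ) : M * 1 = M :=
  Matrix.mul_one M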
|
Matrix.comp_equiv_symm_dotProduct
theorem Matrix.comp_equiv_symm_dotProduct :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} [inst : Fintype m] [inst_1 : Fintype n]
[inst_2 : NonUnitalNonAssocSemiring α] (u : m → α) (x : n → α) (e : m ≃ n),
Matrix.dotProduct (u ∘ ⇑e.symm) x = Matrix.dotProduct u (x ∘ ⇑e)
This theorem states that for finite types `m` and `n`, a non-unital non-associative semiring `α`, vectors `u : m → α` and `x : n → α`, and an equivalence `e : m ≃ n`, permuting `u` by `e.symm` on the left of a dot product is the same as permuting `x` by `e` on the right. Here the dot product of two vectors is the sum of the products of their corresponding entries, and permutation is expressed by composing with the bijection `e` or its inverse `e.symm`. In symbols, $(u \circ e^{-1}) \cdot x = u \cdot (x \circ e)$.
More concisely: For a non-unital non-associative semiring `α`, vectors `u : m → α` and `x : n → α`, and an equivalence `e : m ≃ n`, `Matrix.dotProduct (u ∘ ⇑e.symm) x = Matrix.dotProduct u (x ∘ ⇑e)`.
|
Matrix.submatrix_diagonal
theorem Matrix.submatrix_diagonal :
∀ {l : Type u_1} {m : Type u_2} {α : Type v} [inst : Zero α] [inst_1 : DecidableEq m] [inst_2 : DecidableEq l]
(d : m → α) (e : l → m), Function.Injective e → (Matrix.diagonal d).submatrix e e = Matrix.diagonal (d ∘ e)
The theorem states that for any types `l`, `m`, and `α`, where `α` has a zero element and `l` and `m` have decidable equality, given a diagonal matrix represented by a map `d : m → α` and a reindexing map `e : l → m`, if `e` is injective, then the submatrix of the original matrix created using `e` as the row and column selector is also a diagonal matrix. This resultant diagonal matrix is defined by the function composition `(d ∘ e)`. In other words, reindexing a diagonal matrix by an injective function still results in a diagonal matrix.
More concisely: If `e : l → m` is injective, then the submatrix of `Matrix.diagonal d` using `e` as both the row and column selector is the diagonal matrix `Matrix.diagonal (d ∘ e)`.
|
Matrix.add_dotProduct
theorem Matrix.add_dotProduct :
∀ {m : Type u_2} {α : Type v} [inst : Fintype m] [inst_1 : NonUnitalNonAssocSemiring α] (u v w : m → α),
Matrix.dotProduct (u + v) w = Matrix.dotProduct u w + Matrix.dotProduct v w
The theorem `Matrix.add_dotProduct` states that for any finite type `m` and any type `α` that forms a non-unital and non-associative semiring, if `u`, `v`, and `w` are functions from `m` to `α`, then the dot product of the sum of `u` and `v` with `w` is equal to the sum of the dot product of `u` with `w` and the dot product of `v` with `w`. In mathematical terms, it states that the dot product operation respects the distributive law: $(u + v) \cdot w = u \cdot w + v \cdot w$.
More concisely: The dot product distributes over addition in its left argument: $(u + v) \cdot w = u \cdot w + v \cdot w$.
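A small illustrative instance, assuming `Fin 3`-indexed integer vectors:
-- the dot product distributes over addition on the left
example (u v w : Fin 3 → ℤ) :
    Matrix.dotProduct (u + v) w = Matrix.dotProduct u w + Matrix.dotProduct v w :=
  Matrix.add_dotProduct u v w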
|
Mathlib.Data.Matrix.Basic._auxLemma.1
theorem Mathlib.Data.Matrix.Basic._auxLemma.1 :
∀ {n : Type u_3} {α : Type v} [inst : DecidableEq n] [inst_1 : Zero α] {d₁ d₂ : n → α},
(Matrix.diagonal d₁ = Matrix.diagonal d₂) = ∀ (i : n), d₁ i = d₂ i
This theorem states that for any type `n`, and any type `α` with a zero element and a decidable equality operation, and for any two functions `d₁` and `d₂` from `n` to `α`, the equality `Matrix.diagonal d₁ = Matrix.diagonal d₂` holds if and only if, for all `i` in `n`, `d₁ i = d₂ i`. In other words, two diagonal matrices are equal if and only if their corresponding diagonal elements are equal for all indices.
More concisely: Two diagonal matrices `Matrix.diagonal d₁` and `Matrix.diagonal d₂` are equal if and only if `d₁ i = d₂ i` for every index `i`.
|
RingHom.mapMatrix_apply
theorem RingHom.mapMatrix_apply :
∀ {m : Type u_2} {α : Type v} {β : Type w} [inst : Fintype m] [inst_1 : DecidableEq m] [inst_2 : NonAssocSemiring α]
[inst_3 : NonAssocSemiring β] (f : α →+* β) (M : Matrix m m α), f.mapMatrix M = M.map ⇑f
The theorem `RingHom.mapMatrix_apply` states that for any finite type `m` with decidable equality, and any non-associative semirings `α` and `β`, if `f` is a ring homomorphism from `α` to `β` and `M` is a square matrix indexed by `m` with entries in `α`, then applying `f.mapMatrix` to `M` is the same as mapping `f` over each entry of `M` using `Matrix.map`. Here, `⇑f` is the coercion of the ring homomorphism `f` to a function.
More concisely: For any square matrix M with entries from a non-associative semiring α, and for any ring homomorphism f from α to another semiring β, applying the function RingHom.mapMatrix on f and M is equivalent to mapping f to each entry of M using Matrix.map.
|
Matrix.of_apply
theorem Matrix.of_apply :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} (f : m → n → α) (i : m) (j : n), Matrix.of f i j = f i j
This theorem states that for any types `m`, `n`, and `α`, and for any function `f` from pairs of `m` and `n` to `α`, and any elements `i` of type `m` and `j` of type `n`, the result of applying the function `Matrix.of` to `f` and then indexing into the resulting matrix at `i` and `j` is equal to the result of directly applying `f` to `i` and `j`. In other words, creating a matrix from a function and then retrieving an element from the matrix is equivalent to directly applying the function to the indices of that element.
More concisely: For any types `m`, `n`, and `α`, and for any function `f` from pairs of `m` and `n` to `α`, the application of `f` to `(i, j)` is equal to the `(i, j)`-th entry of the matrix `Matrix.of f`.
|
Matrix.map_add
theorem Matrix.map_add :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} {β : Type w} [inst : Add α] [inst_1 : Add β] (f : α → β),
(∀ (a₁ a₂ : α), f (a₁ + a₂) = f a₁ + f a₂) → ∀ (M N : Matrix m n α), (M + N).map f = M.map f + N.map f
This theorem, named `Matrix.map_add`, states that for any types `m`, `n`, `α`, and `β` where `α` and `β` are types with addition defined on them, given a function `f` from `α` to `β` that preserves addition, i.e., `f (a₁ + a₂) = f a₁ + f a₂` for all `a₁, a₂` in `α`, then applying `f` to the sum of two matrices `M` and `N` with entries in `α` is the same as summing the results of applying `f` to each of the matrices `M` and `N` separately. In other words, this theorem asserts that mapping a function over the addition of two matrices is equivalent to adding the results of mapping the function over each matrix separately, assuming the function preserves addition.
More concisely: Given matrices M and N with additive type entries α and a function f from α to another additive type β preserving addition, the application of f to the sum of M and N is equal to the sum of the applications of f to M and N.
|
Matrix.transpose_submatrix
theorem Matrix.transpose_submatrix :
∀ {l : Type u_1} {m : Type u_2} {n : Type u_3} {o : Type u_4} {α : Type v} (A : Matrix m n α) (r_reindex : l → m)
(c_reindex : o → n), (A.submatrix r_reindex c_reindex).transpose = A.transpose.submatrix c_reindex r_reindex
This theorem states that for any Matrix `A` with entries of type `α` and row indices of type `m` and column indices `n`, and any reindexing functions `r_reindex` and `c_reindex` that reindex the rows and columns respectively, the transpose of the submatrix of `A` obtained by applying `r_reindex` and `c_reindex` is equal to the submatrix of the transpose of `A` obtained by applying `c_reindex` and `r_reindex`. In other words, transposing a submatrix is equivalent to taking a submatrix of the original matrix's transpose, provided the indexing functions are swapped.
More concisely: For any matrix `A` of type `α m n`, and reindexing functions `r_reindex` and `c_reindex`, the submatrix of `A.transpose` obtained by applying `c_reindex` and `r_reindex` is equal to the transpose of the submatrix of `A` obtained by applying `r_reindex` and `c_reindex`.
|
Matrix.transpose_neg
theorem Matrix.transpose_neg :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} [inst : Neg α] (M : Matrix m n α), (-M).transpose = -M.transpose := by
sorry
The theorem `Matrix.transpose_neg` states that for any type `m` and `n` and a type `α` which has a negation operation, given a matrix `M` with entries in `α` indexed by `m` and `n`, the transpose of the negation of `M` equals the negation of the transpose of `M`. In mathematical terms, for all matrices `M`, we have that `(-M)^T = -M^T`. Here, `T` denotes the transpose of a matrix, and `-` denotes the negation of each entry of the matrix.
More concisely: For any matrix `M` with entries in a type `α` having negation operation, the transpose of the negated matrix `-M` equals the negation of the transpose `-M^T`.
|
Matrix.submatrix_one_equiv
theorem Matrix.submatrix_one_equiv :
∀ {l : Type u_1} {m : Type u_2} {α : Type v} [inst : Zero α] [inst_1 : One α] [inst_2 : DecidableEq m]
[inst_3 : DecidableEq l] (e : l ≃ m), Matrix.submatrix 1 ⇑e ⇑e = 1
This theorem asserts that for any given types `l`, `m`, and `α`, where `α` is a type with zero and one elements and `l` and `m` have decidable equality, if we have a bijection `e` from `l` to `m`, then the submatrix of the identity matrix obtained by reindexing both rows and columns along `e` is again the identity matrix. In other words, the identity matrix is preserved when its rows and columns are reindexed by the same bijection.
More concisely: For any types `l` and `m` with decidable equality and a bijection `e : l ≃ m` between index types over a type with zero and one elements, reindexing the identity matrix by `e` on both rows and columns yields the identity matrix.
|
Matrix.one_apply_ne
theorem Matrix.one_apply_ne :
∀ {n : Type u_3} {α : Type v} [inst : DecidableEq n] [inst_1 : Zero α] [inst_2 : One α] {i j : n},
i ≠ j → 1 i j = 0
This theorem states that for any types `n` and `α`, given that `n` has decidable equality, `α` has a zero element, and `α` has a one element, if `i` and `j` are elements of `n` and `i` doesn't equal `j`, then the `i,j`-th entry of the identity matrix (which is represented by `1` in Lean) is `0`. In simpler terms, this is asserting that all non-diagonal entries of an identity matrix are zero.
More concisely: For any type `n` with decidable equality and types `α` with zero and one elements, the `(i, j)`-entry of the identity matrix of size `n` is `0` when `i ≠ j`.
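A hedged sketch over hypothetical instantiations (`Fin 3`, `ℤ`):
-- off-diagonal entries of the identity matrix vanish
example (i j : Fin 3) (h : i ≠ j) :
    (1 : Matrix (Fin 3) (Fin 3) ℤ) i j = 0 :=
  Matrix.one_apply_ne h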
|
Matrix.conjTranspose_apply
theorem Matrix.conjTranspose_apply :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} [inst : Star α] (M : Matrix m n α) (i : m) (j : n),
M.conjTranspose j i = star (M i j)
The theorem `Matrix.conjTranspose_apply` establishes the relationship between a matrix and its conjugate transpose in the context of the Lean theorem prover. Given a matrix `M` with entries in a `Star`-structured type `α`, and indices `i` from type `m` and `j` from type `n`, this theorem states that the entry at position `(j, i)` in the conjugate transposed matrix `M.conjTranspose` is the complex conjugate (indicated by `star`) of the entry at position `(i, j)` in the original matrix `M`. This is analogous to how the transpose of a matrix swaps the roles of its rows and columns, but with the additional step of complex conjugation.
More concisely: The entry at position `(j, i)` in the conjugate transpose of a matrix `M` over a `Star` type equals `star` applied to the entry at position `(i, j)` of `M`; over the complex numbers, `star` is complex conjugation.
|
Matrix.submatrix_submatrix
theorem Matrix.submatrix_submatrix :
∀ {l : Type u_1} {m : Type u_2} {n : Type u_3} {o : Type u_4} {α : Type v} {l₂ : Type u_10} {o₂ : Type u_11}
(A : Matrix m n α) (r₁ : l → m) (c₁ : o → n) (r₂ : l₂ → l) (c₂ : o₂ → o),
(A.submatrix r₁ c₁).submatrix r₂ c₂ = A.submatrix (r₁ ∘ r₂) (c₁ ∘ c₂)
The theorem `Matrix.submatrix_submatrix` states that for any matrix `A` with entries in `α` and indices in `m` and `n`, given mappings `r₁` from `l` to `m`, `c₁` from `o` to `n`, `r₂` from `l₂` to `l`, and `c₂` from `o₂` to `o`, the submatrix of `A` obtained first by using `r₁` and `c₁` to select rows and columns respectively, and then by using `r₂` and `c₂` on the resulting submatrix, is the same as the submatrix of `A` obtained by using the compositions `r₁ ∘ r₂` and `c₁ ∘ c₂` to select rows and columns respectively. In other words, the order of taking submatrices doesn't matter, you can take a submatrix of a submatrix or you can adjust your row and column selection functions and take the submatrix all at once.
More concisely: Given matrices `A` with entries in `α`, and submatrix indices defined by mappings `r₁`, `c₁`, `r₂`, and `c₂`, the submatrices obtained by first selecting rows with `r₁` and `c₁` and then with `r₂` and `c₂`, and the submatrices obtained directly with `r₁ ∘ r₂` and `c₁ ∘ c₂`, are equal.
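An illustrative specialization (all index types and `ℤ` chosen for concreteness):
-- nested submatrices collapse into a single reindexing by composition
example (A : Matrix (Fin 4) (Fin 5) ℤ) (r₁ : Fin 3 → Fin 4) (c₁ : Fin 3 → Fin 5)
    (r₂ : Fin 2 → Fin 3) (c₂ : Fin 2 → Fin 3) :
    (A.submatrix r₁ c₁).submatrix r₂ c₂ = A.submatrix (r₁ ∘ r₂) (c₁ ∘ c₂) :=
  Matrix.submatrix_submatrix A r₁ c₁ r₂ c₂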
|
Matrix.mulVec_mulVec
theorem Matrix.mulVec_mulVec :
∀ {m : Type u_2} {n : Type u_3} {o : Type u_4} {α : Type v} [inst : NonUnitalSemiring α] [inst_1 : Fintype n]
[inst_2 : Fintype o] (v : o → α) (M : Matrix m n α) (N : Matrix n o α),
M.mulVec (N.mulVec v) = (M * N).mulVec v
The theorem `Matrix.mulVec_mulVec` states that for any non-unital semiring `α` and any finite types `m`, `n`, and `o`, and given any vector `v` of type `o → α`, and any two matrices `M`, `N` of types `Matrix m n α` and `Matrix n o α` respectively, the result of multiplying matrix `M` with the result of multiplying matrix `N` with vector `v` (i.e., `Matrix.mulVec M (Matrix.mulVec N v)`) is equal to the result of multiplying the product of matrices `M` and `N` with vector `v` (i.e., `Matrix.mulVec (M * N) v`). In other words, the operation of multiplying a matrix with a vector is associative with respect to matrix multiplication.
More concisely: For a non-unital semiring `α` and finite types `n` and `o`, matrix-vector multiplication is compatible with matrix multiplication: `M.mulVec (N.mulVec v) = (M * N).mulVec v` for `M : Matrix m n α`, `N : Matrix n o α`, and `v : o → α`.
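A minimal sketch under assumed shapes over `ℤ`:
-- applying `M` after `N` agrees with applying `M * N`
example (v : Fin 2 → ℤ) (M : Matrix (Fin 2) (Fin 3) ℤ) (N : Matrix (Fin 3) (Fin 2) ℤ) :
    M.mulVec (N.mulVec v) = (M * N).mulVec v :=
  Matrix.mulVec_mulVec v M N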
|
Matrix.map_one
theorem Matrix.map_one :
∀ {n : Type u_3} {α : Type v} {β : Type w} [inst : DecidableEq n] [inst_1 : Zero α] [inst_2 : One α]
[inst_3 : Zero β] [inst_4 : One β] (f : α → β), f 0 = 0 → f 1 = 1 → Matrix.map 1 f = 1
The theorem `Matrix.map_one` states that for any types `n`, `α`, and `β`, where `n` has decidable equality and `α` and `β` both have zero and one elements, given a function `f` from `α` to `β` such that `f` maps 0 to 0 and 1 to 1, if we apply the function `f` to each entry of the identity matrix (denoted by 1), we will get the identity matrix back. This theorem essentially asserts that the identity matrix preserves its identity property under such a mapping function.
More concisely: For any type `n` with decidable equality, and types `α` and `β` having zero and one elements, if `f : α → β` satisfies `f 0 = 0` and `f 1 = 1`, then `(1 : Matrix n n α).map f = 1`.
|
Matrix.one_apply_eq
theorem Matrix.one_apply_eq :
∀ {n : Type u_3} {α : Type v} [inst : DecidableEq n] [inst_1 : Zero α] [inst_2 : One α] (i : n), 1 i i = 1 := by
sorry
The theorem `Matrix.one_apply_eq` states that for any type `n` with decidable equality and any type `α` with zero and one elements, for every `i` of type `n`, the identity matrix (represented as `1`) has entry `1` at position `(i, i)`. In other words, every diagonal entry of the identity matrix is `1`, while the remaining entries are `0`.
More concisely: For the identity matrix over a type `α` with zero and one, `(1 : Matrix n n α) i i = 1` for every `i : n`.
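For concreteness (a sketch; `Fin 3` and `ℤ` are illustrative):
-- every diagonal entry of the identity matrix is one
example (i : Fin 3) : (1 : Matrix (Fin 3) (Fin 3) ℤ) i i = 1 :=
  Matrix.one_apply_eq i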
|
Matrix.dotProduct_zero
theorem Matrix.dotProduct_zero :
∀ {m : Type u_2} {α : Type v} [inst : Fintype m] [inst_1 : NonUnitalNonAssocSemiring α] (v : m → α),
Matrix.dotProduct v 0 = 0
The theorem `Matrix.dotProduct_zero` states that for any set `m` and any type `α` that forms a non-unital, non-associative semiring, the dot product of any vector `v` (represented as a function from `m` to `α`) with the zero vector (represented as the function constantly returning zero) is always the zero element of `α`. This mirrors the property in vector algebra that the dot product of any vector with the zero vector is zero.
More concisely: For any vector `v : m → α` over a non-unital, non-associative semiring `α` with `m` finite, `Matrix.dotProduct v 0 = 0`.
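A hedged one-line check with assumed instantiations:
-- dotting any vector with the zero vector gives zero
example (v : Fin 3 → ℤ) : Matrix.dotProduct v 0 = 0 :=
  Matrix.dotProduct_zero v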
|
Matrix.mulVec_single
theorem Matrix.mulVec_single :
∀ {m : Type u_2} {n : Type u_3} {R : Type u_7} [inst : Fintype n] [inst_1 : DecidableEq n]
[inst_2 : NonUnitalNonAssocSemiring R] (M : Matrix m n R) (j : n) (x : R),
M.mulVec (Pi.single j x) = fun i => M i j * x
The theorem `Matrix.mulVec_single` states that for any types `m`, `n`, and `R`, where `n` is a finite type, `R` is a non-unital non-associative semiring, and there is a decidable equality on `n`, given any matrix `M` with rows indexed by `m`, columns indexed by `n`, and entries from `R`, any element `j` from `n`, and any element `x` from `R`, the result of multiplying the matrix `M` with the vector that has `x` at the `j`-th position and `0` elsewhere (`Pi.single j x`) is a function that, for each `i` in `m`, gives the product of the `i`-th row, `j`-th column element of `M` and `x`. In mathematical terms, this theorem states that for a matrix $M$ and a vector $v$ where only the $j$-th component is non-zero, the multiplication $Mv$ results in a vector where the $i$-th component is the $i$-th row, $j$-th column element of $M$ multiplied by the $j$-th component of $v$.
More concisely: For a matrix `M` over a non-unital non-associative semiring `R`, with columns indexed by a finite type `n` having decidable equality, multiplying `M` by the vector `Pi.single j x` (which is `x` at index `j` and `0` elsewhere) yields the vector whose `i`-th component is `M i j * x`.
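An illustrative sketch (the shapes and `ℤ` are our own choices):
-- multiplying by a one-hot vector extracts a scaled column
example (M : Matrix (Fin 2) (Fin 3) ℤ) (j : Fin 3) (x : ℤ) :
    M.mulVec (Pi.single j x) = fun i => M i j * x :=
  Matrix.mulVec_single M j x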
|
Matrix.map_apply
theorem Matrix.map_apply :
∀ {m : Type u_2} {n : Type u_3} {α : Type v} {β : Type w} {M : Matrix m n α} {f : α → β} {i : m} {j : n},
M.map f i j = f (M i j)
This theorem states that for any types `m`, `n`, `α`, and `β`, any matrix `M` of type `Matrix m n α`, any function `f` from `α` to `β`, and any indices `i` of type `m` and `j` of type `n`, applying the `Matrix.map` function on `M` with `f` and then retrieving the entry at the ith row and jth column (represented as `Matrix.map M f i j`), is the same as applying `f` directly on the entry of `M` at the ith row and jth column (represented as `f (M i j)`). In other words, `Matrix.map` applies the function `f` to each entry of the matrix `M` in a way that preserves the position of each entry.
More concisely: For any matrix `M : Matrix m n α`, function `f : α → β`, and indices `i` and `j`, accessing the `(i, j)` entry of `M.map f` (that is, `M.map f i j`) is equivalent to applying `f` directly to the `(i, j)` entry of `M` (that is, `f (M i j)`).
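A minimal sketch, with `Fin 2`, `ℤ`, and a generic `f` assumed for illustration:
-- entrywise mapping commutes with entry lookup
example (M : Matrix (Fin 2) (Fin 2) ℤ) (f : ℤ → ℤ) (i j : Fin 2) :
    M.map f i j = f (M i j) :=
  Matrix.map_apply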
|