

In mathematics, an anti-diagonal matrix is a matrix where all the entries are zero except those on the diagonal going from the lower left corner to the upper right corner (↗), known as the antidiagonal. More precisely, an n-by-n matrix A is an anti-diagonal matrix if the (i, j) element is zero for all i, j ∈ {1, …, n} with i + j ≠ n + 1. An example of an anti-diagonal matrix is

$$\begin{pmatrix} 0 & 0 & 1 \\ 0 & 2 & 0 \\ 3 & 0 & 0 \end{pmatrix}$$

All anti-diagonal matrices are also persymmetric. The product of two anti-diagonal matrices is a diagonal matrix.
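To see this concretely, here is a minimal NumPy sketch (the entries are illustrative choices, not from the text):

```python
import numpy as np

# np.fliplr reverses the columns, turning a diagonal matrix into an anti-diagonal one.
A = np.fliplr(np.diag([1, 2, 3]))
B = np.fliplr(np.diag([4, 5, 6]))

product = A @ B
# All off-diagonal entries of the product vanish, so the product is diagonal.
assert np.count_nonzero(product - np.diag(np.diagonal(product))) == 0
print(product)  # diag(6, 10, 12)
```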

Augmented matrix

In linear algebra, the augmented matrix of a matrix is obtained by appending the columns of another matrix to it. Given the matrices A and B, where:

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \qquad B = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix},$$

the augmented matrix (A|B) is written as:

$$(A|B) = \left(\begin{array}{cc|c} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \end{array}\right)$$

This is useful when solving systems of linear equations; the augmented matrix may also be used to find the inverse of a matrix by combining it with the identity matrix.
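As a small illustration (a NumPy sketch with a hypothetical 2×2 system, not from the original article):

```python
import numpy as np

# Hypothetical system: x + 3y = 5, 2x + y = 4.
A = np.array([[1.0, 3.0],
              [2.0, 1.0]])
b = np.array([[5.0],
              [4.0]])

augmented = np.hstack([A, b])      # the augmented matrix (A|b)
print(augmented)

# Appending the identity instead gives (A|I); row-reducing it yields A's inverse.
augmented_identity = np.hstack([A, np.eye(2)])
```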

Band matrix

In mathematics, particularly matrix theory, a band matrix is a sparse matrix whose non-zero entries are confined to a diagonal band, comprising the main diagonal and zero or more diagonals on either side. Formally, an n×n matrix A = (ai,j) is a band matrix if all matrix elements are zero outside a diagonally bordered band whose range is determined by constants k1 and k2:

$$a_{i,j} = 0 \quad \text{if } j < i - k_1 \ \text{or} \ j > i + k_2; \qquad k_1, k_2 \ge 0.$$

The quantities k1 and k2 are the left and right half-bandwidth, respectively. The bandwidth of the matrix is k1 + k2 + 1 (in other words, the smallest number of adjacent diagonals to which the non-zero elements are confined). A band matrix with k1 = k2 = 0 is a diagonal matrix; a band matrix with k1 = k2 = 1 is a tridiagonal matrix; when k1 = k2 = 2 one has a pentadiagonal matrix, and so on. If one puts k1 = 0, k2 = n − 1, one obtains the definition of an upper triangular matrix; similarly, for k1 = n − 1, k2 = 0 one obtains a lower triangular matrix.
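The half-bandwidths can be measured directly; the following is a minimal sketch (the helper `half_bandwidths` is ours, not a library function):

```python
import numpy as np

def half_bandwidths(A):
    """Return (k1, k2), the left and right half-bandwidths of a square matrix."""
    rows, cols = np.nonzero(A)
    k1 = int(np.max(rows - cols, initial=0))  # furthest nonzero below the main diagonal
    k2 = int(np.max(cols - rows, initial=0))  # furthest nonzero above the main diagonal
    return k1, k2

# A tridiagonal example: k1 = k2 = 1, so the bandwidth is k1 + k2 + 1 = 3.
T = np.array([[1, 4, 0, 0],
              [3, 4, 1, 0],
              [0, 2, 3, 4],
              [0, 0, 1, 3]])
print(half_bandwidths(T))  # (1, 1)
```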

Conjugate transpose

"Adjoint matrix" redirects here. An adjugate matrix is sometimes called a "classical adjoint matrix". In mathematics, the conjugate transpose, Hermitian transpose, or adjoint matrix of an m-byn matrix A with complex entries is the n-by-m matrix A obtained from A by taking thetranspose and then taking the complex conjugate of each entry (i.e. negating their imaginary parts but not their real parts). The conjugate transpose is formally defined by

*

where the subscripts denote the i,j-th entry, for 1 ≤ i ≤ n and 1 ≤ j ≤ m, and the overbar denotes a scalar complex conjugate. (The complex conjugate of a reals, isa

+ bi, where a and b are

− bi.)

This definition can also be written as

where

denotes the transpose and

denotes the matrix with complex

conjugated entries. Other names for the conjugate transpose of a matrix are Hermitian conjugate, or transjugate. The conjugate transpose of a matrix A can be denoted by any of these symbols:

or

, commonly used in linear algebra

(sometimes pronounced "A dagger"), universally used in quantum mechanics , although this symbol is more commonly used for the Moore-Penrose pseudoinverse In some contexts, denotes the matrix with complex conjugated entries, and thus or .

the conjugate transpose is denoted by

Example

If

$$A = \begin{pmatrix} 1 & -2 - i \\ 1 + i & i \end{pmatrix}$$

then

$$A^* = \begin{pmatrix} 1 & 1 - i \\ -2 + i & -i \end{pmatrix}$$
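The same example checks out in NumPy (a minimal sketch; `.conj().T` composes the two steps, which may be taken in either order):

```python
import numpy as np

A = np.array([[1, -2 - 1j],
              [1 + 1j, 1j]])

# Conjugate transpose: transpose, then conjugate each entry.
A_star = A.conj().T
print(A_star)  # [[1, 1-1j], [-2+1j, -1j]]
```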

Diagonal matrix

In linear algebra, a diagonal matrix is a square matrix in which the entries outside the main diagonal (↘) are all zero. The diagonal entries themselves may or may not be zero. Thus, the matrix D = (di,j) with n columns and n rows is diagonal if:

$$d_{i,j} = 0 \ \text{whenever} \ i \ne j.$$

For example, the following matrix is diagonal:

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & -2 \end{pmatrix}$$

The term diagonal matrix may sometimes refer to a rectangular diagonal matrix, which is an m-by-n matrix with only the entries of the form di,i possibly non-zero; for example,

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \end{pmatrix}, \quad \text{or} \quad \begin{pmatrix} 1 & 0 \\ 0 & 4 \\ 0 & 0 \end{pmatrix}$$
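A short NumPy sketch of both cases (the entries are the illustrative ones used above):

```python
import numpy as np

D = np.diag([1, 4, -2])   # builds the 3x3 diagonal matrix shown above
print(np.diagonal(D))     # extracts the main diagonal: [ 1  4 -2]

# A rectangular diagonal matrix: only entries of the form d[i, i] may be nonzero.
R = np.zeros((2, 3))
R[[0, 1], [0, 1]] = [1, 4]
print(R)
```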

Hermitian matrix

A Hermitian matrix (or self-adjoint matrix) is a square matrix with complex entries which is equal to its own conjugate transpose – that is, the element in the ith row and jth column is equal to the complex conjugate of the element in the jth row and ith column, for all indices i and j:

$$a_{i,j} = \overline{a_{j,i}}$$

If the conjugate transpose of a matrix A is denoted by A*, then the Hermitian property can be written concisely as

$$A = A^*$$

Hermitian matrices can be understood as the complex extension of a real symmetric matrix.

Examples

For example,

$$\begin{pmatrix} 2 & 2 + i \\ 2 - i & 3 \end{pmatrix}$$

is a Hermitian matrix.
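A quick NumPy check of the defining property (a minimal sketch using the example above):

```python
import numpy as np

A = np.array([[2, 2 + 1j],
              [2 - 1j, 3]])

# Hermitian means A equals its own conjugate transpose.
print(np.allclose(A, A.conj().T))           # True

# A consequence: the diagonal entries of a Hermitian matrix are real.
print(np.allclose(np.diagonal(A).imag, 0))  # True
```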

Identity matrix

In linear algebra, the identity matrix or unit matrix of size n is the n-by-n square matrix with ones on the main diagonal and zeros elsewhere. It is denoted by In, or simply by I if the size is immaterial or can be trivially determined by the context. (In some fields, such as quantum mechanics, the identity matrix is denoted by a boldface one, 1; otherwise it is identical to I.)

Some mathematics books use U and E to represent the identity matrix (meaning "unit matrix" and "elementary matrix", respectively, or from the German "Einheitsmatrix"), although I is considered more universal.[1]

An important property of matrix multiplication involving the identity matrix is that, for any m-by-n matrix A,

$$I_m A = A I_n = A.$$

Invertible matrix

In linear algebra, an n-by-n (square) matrix A is called invertible or nonsingular or nondegenerate if there exists an n-by-n matrix B such that

$$AB = BA = I_n,$$

where In denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A and is called the inverse of A, denoted by A^−1. It follows from the theory of matrices that if AB = I for square matrices A and B, then also BA = I.[1]

Non-square matrices (m-by-n matrices for which m ≠ n) do not have an inverse. However, in some cases such a matrix may have a left inverse or right inverse. If A is m-by-n and the rank of A is equal to n, then A has a left inverse: an n-by-m matrix B such that BA = I. If A has rank m, then it has a right inverse: an n-by-m matrix B such that AB = I. While the most common case is that of matrices over the real or complex numbers, all these definitions can be given for matrices over any commutative ring. A square matrix that is not invertible is called singular or degenerate. A square matrix is singular if and only if its determinant is 0. Singular matrices are rare in the sense that if you pick a random square matrix, it will almost surely not be singular. Matrix inversion is the process of finding the matrix B that satisfies the prior equation for a given invertible matrix A.
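A minimal NumPy sketch of both situations (the matrices are illustrative choices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [7.0, 4.0]])        # det(A) = 1, so A is invertible

B = np.linalg.inv(A)
print(np.allclose(A @ B, np.eye(2)) and np.allclose(B @ A, np.eye(2)))  # True

# A singular matrix (determinant 0) has no inverse; NumPy raises LinAlgError.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError as err:
    print("singular:", err)
```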

Involutory matrix

In mathematics, an involutory matrix is a matrix that is its own inverse. That is, the matrix A is an involution if and only if A² = I. One of the three classes of elementary matrix is involutory, namely the row-interchange elementary matrix. A special case of another class of elementary matrix, that which represents multiplication of a row or column by −1, is also involutory; it is in fact a trivial example of a signature matrix, all of which are involutory. Involutory matrices are all square roots of the identity matrix. This is simply a consequence of the fact that any nonsingular matrix multiplied by its inverse is the identity. If A is an n × n matrix, then A is involutory if and only if ½(A + I) is idempotent. An involutory matrix which is also symmetric is an orthogonal matrix, and thus represents an isometry (a linear transformation which preserves Euclidean distance). A reflection matrix is an example of an involutory matrix.
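Both the definition and the idempotence criterion are easy to verify numerically (a minimal sketch; the row-interchange matrix below is an illustrative choice):

```python
import numpy as np

# A row-interchange elementary matrix is involutory: swapping rows twice undoes the swap.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(np.allclose(P @ P, np.eye(2)))  # True: P is its own inverse

# The stated criterion: A is involutory if and only if (A + I)/2 is idempotent.
half = (P + np.eye(2)) / 2
print(np.allclose(half @ half, half))  # True
```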

Irregular matrix

An irregular matrix, or ragged matrix, can be described as a matrix that has a different number of elements in each row. Ragged matrices are not used in linear algebra, since standard matrix transformations cannot be performed on them, but they are useful as arrays in computing. Irregular matrices are typically stored using Iliffe vectors. For example, the following is an irregular matrix:

[ 1 2 3 ]
[ 4 5 ]
[ 6 ]
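In Python, a ragged matrix is naturally represented as an Iliffe-style vector of rows, i.e. a list whose elements are lists of differing lengths (a minimal sketch):

```python
# Each row is an independent list, so rows may have different lengths.
ragged = [
    [1, 2, 3],
    [4, 5],
    [6],
]

# Standard matrix operations do not apply, but elementwise access still works.
for i, row in enumerate(ragged):
    print(f"row {i} has {len(row)} elements: {row}")
```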

List of matrices

(Figure: organization of a matrix.)

This page lists some important classes of matrices used in mathematics, science and engineering. A matrix (plural matrices, or less commonly matrixes) is a rectangular array of numbers called entries. Matrices have a long history of both study and application, leading to diverse ways of classifying matrices. A first group is matrices satisfying concrete conditions on the entries, including constant matrices. An important example is the identity matrix I, which has ones on the main diagonal and zeros elsewhere.

Further ways of classifying matrices are according to their eigenvalues or by imposing conditions on the product of the matrix with other matrices. Finally, many domains, both in mathematics and other sciences including physics and chemistry have particular matrices that are applied chiefly in these areas.

Matrices with explicitly constrained entries

The following lists matrices whose entries are subject to certain conditions. Many of them apply to square matrices only, that is, matrices with the same number of columns and rows. The main diagonal of a square matrix is the diagonal joining the upper left corner and the lower right one, or equivalently the entries ai,i. The other diagonal is called the anti-diagonal (or counter-diagonal).

- (0,1)-matrix: A matrix with all elements either 0 or 1. Synonym for binary matrix and logical matrix.
- Alternant matrix: A matrix in which successive columns have a particular function applied to their entries.
- Anti-diagonal matrix: A square matrix with all entries off the anti-diagonal equal to zero.
- Anti-Hermitian matrix: Synonym for skew-Hermitian matrix.
- Anti-symmetric matrix: Synonym for skew-symmetric matrix.
- Arrowhead matrix: A square matrix containing zeros in all entries except for the first row, first column, and main diagonal.
- Band matrix: A square matrix whose non-zero entries are confined to a diagonal band.
- Bidiagonal matrix: A matrix with elements only on the main diagonal and either the superdiagonal or subdiagonal. Sometimes defined differently; see article.
- Binary matrix: A matrix whose entries are all either 0 or 1. Synonym for (0,1)-matrix and logical matrix.[1]
- Bisymmetric matrix: A square matrix that is symmetric with respect to its main diagonal and its main cross-diagonal.
- Block-diagonal matrix: A block matrix with entries only on the diagonal.
- Block matrix: A matrix partitioned in sub-matrices called blocks.
- Block tridiagonal matrix: A block matrix which is essentially a tridiagonal matrix but with submatrices in place of scalar elements.
- Cauchy matrix: A matrix whose elements are of the form 1/(xi + yj) for (xi), (yj) injective sequences (i.e., taking every value only once).
- Centrosymmetric matrix: A matrix symmetric about its center; i.e., aij = an−i+1,n−j+1.
- Conference matrix: A square matrix with zero diagonal and +1 and −1 off the diagonal, such that C^T C is a multiple of the identity matrix.
- Complex Hadamard matrix: A matrix with all rows and columns mutually orthogonal, whose entries are unimodular.
- Copositive matrix: A square matrix A with real coefficients, such that f(x) = x^T A x is nonnegative for every nonnegative vector x.
- Diagonally dominant matrix: A matrix whose entries satisfy |aii| > Σj≠i |aij| for each row i.
- Diagonal matrix: A square matrix with all entries off the main diagonal equal to zero.
- Elementary matrix: A square matrix derived by applying an elementary row operation to the identity matrix.
- Equivalent matrix: A matrix that can be derived from another matrix through a sequence of elementary row or column operations.
- Frobenius matrix: A square matrix in the form of an identity matrix but with arbitrary entries in one column below the main diagonal.
- Generalized permutation matrix: A square matrix with precisely one nonzero element in each row and column.
- Hadamard matrix: A square matrix with entries +1, −1 whose rows are mutually orthogonal.
- Hankel matrix: A matrix with constant skew-diagonals; also an upside-down Toeplitz matrix. A square Hankel matrix is symmetric.
- Hermitian matrix: A square matrix which is equal to its conjugate transpose, A = A*.
- Hessenberg matrix: An "almost" triangular matrix; for example, an upper Hessenberg matrix has zero entries below the first subdiagonal.
- Hollow matrix: A square matrix whose main diagonal comprises only zero elements.
- Integer matrix: A matrix whose entries are all integers.
- Logical matrix: A matrix with all entries either 0 or 1. Synonym for (0,1)-matrix or binary matrix; can be used to represent a k-adic relation.
- Metzler matrix: A matrix whose off-diagonal entries are non-negative.
- Monomial matrix: A square matrix with exactly one nonzero entry in each row and column. Synonym for generalized permutation matrix.
- Moore matrix: A matrix in which each row consists of 1, a, a^q, a^(q²), etc., and each row uses a different variable.
- Nonnegative matrix: A matrix with all nonnegative entries.
- Partitioned matrix: A matrix partitioned into sub-matrices, or equivalently, a matrix whose entries are themselves matrices rather than scalars. Synonym for block matrix.
- Pentadiagonal matrix: A matrix with the only nonzero entries on the main diagonal and the two diagonals just above and below the main one.
- Permutation matrix: A matrix representation of a permutation: a square matrix with exactly one 1 in each row and column, and all other elements 0.
- Persymmetric matrix: A matrix that is symmetric about its northeast-southwest diagonal, i.e., aij = an−j+1,n−i+1.
- Polynomial matrix: A matrix whose entries are polynomials.
- Positive matrix: A matrix with all positive entries.
- Sign matrix: A matrix whose entries are either +1, 0, or −1.
- Signature matrix: A diagonal matrix where the diagonal elements are either +1 or −1.
- Skew-Hermitian matrix: A square matrix which is equal to the negative of its conjugate transpose, A* = −A.
- Skew-symmetric matrix: A matrix which is equal to the negative of its transpose, A^T = −A.
- Skyline matrix: A rearrangement of the entries of a banded matrix which requires less space.
- Sparse matrix: A matrix with relatively few non-zero elements. Sparse matrix algorithms can tackle huge sparse matrices that are utterly impractical for dense matrix algorithms.
- Sylvester matrix: A square matrix whose entries come from the coefficients of two polynomials. The Sylvester matrix is nonsingular if and only if the two polynomials are coprime to each other.
- Symmetric matrix: A square matrix which is equal to its transpose, A = A^T (aij = aji).
- Toeplitz matrix: A matrix with constant diagonals.
- Triangular matrix: A matrix with all entries above the main diagonal equal to zero (lower triangular) or with all entries below the main diagonal equal to zero (upper triangular).
- Tridiagonal matrix: A matrix with the only nonzero entries on the main diagonal and the diagonals just above and below the main one.
- Unitary matrix: A square matrix whose inverse is equal to its conjugate transpose, A^−1 = A*.
- Vandermonde matrix: A matrix in which each row consists of 1, a, a², a³, etc., and each row uses a different variable.
- Walsh matrix: A square matrix, with dimensions a power of 2, the entries of which are +1 or −1.
- Z-matrix: A matrix with all off-diagonal entries less than zero.

Constant matrices

The list below comprises matrices whose elements are constant for any given dimension (size) of matrix. The matrix entries will be denoted aij. The list below uses the Kronecker symbol δij for two integers i and j, which is 1 if i = j and 0 otherwise.

- Exchange matrix: aij = δn+1−i,j. A binary matrix with ones on the antidiagonal and zeroes everywhere else. A permutation matrix.
- Hilbert matrix: aij = (i + j − 1)^−1. A Hankel matrix.
- Identity matrix: aij = δij. A square diagonal matrix with all entries on the main diagonal equal to 1 and the rest 0.
- Lehmer matrix: aij = min(i, j) ÷ max(i, j). A positive symmetric matrix.
- Matrix of ones: aij = 1. A matrix with all entries equal to one.
- Pascal matrix: A matrix containing the entries of Pascal's triangle.
- Pauli matrices: A set of three 2 × 2 complex Hermitian and unitary matrices. When combined with the I2 identity matrix, they form an orthogonal basis for the 2 × 2 complex Hermitian matrices.
- Redheffer matrix: aij = 1 if i divides j or if j = 1; otherwise, aij = 0. A (0, 1)-matrix.
- Shift matrix: aij = δi+1,j or aij = δi−1,j. A matrix with ones on the superdiagonal or subdiagonal and zeroes elsewhere. Multiplication by it shifts matrix elements by one position.
- Zero matrix: aij = 0. A matrix with all entries equal to zero.

Matrices with conditions on eigenvalues or eigenvectors

- Companion matrix: A matrix whose eigenvalues are equal to the roots of the polynomial.
- Defective matrix: A square matrix that does not have a complete basis of eigenvectors, and is thus not diagonalisable.
- Diagonalizable matrix: A square matrix similar to a diagonal matrix. It has an eigenbasis, that is, a complete set of linearly independent eigenvectors.
- Hurwitz matrix: A matrix whose eigenvalues have strictly negative real part. A stable system of differential equations may be represented by a Hurwitz matrix.
- Positive-definite matrix: A Hermitian matrix with every eigenvalue positive.
- Stability matrix: Synonym for Hurwitz matrix.
- Stieltjes matrix: A real symmetric positive definite matrix with nonpositive off-diagonal entries. Special case of an M-matrix.
Matrices satisfying conditions on products or inverses

A number of matrix-related notions are about properties of products or inverses of the given matrix. The matrix product of an m-by-n matrix A and an n-by-k matrix B is the m-by-k matrix C given by

$$c_{i,j} = \sum_{r=1}^{n} a_{i,r} b_{r,j}.$$

This matrix product is denoted AB. Unlike the product of numbers, matrix products are not commutative, that is to say, AB need not be equal to BA. A number of notions are concerned with the failure of this commutativity. An inverse of a square matrix A is a matrix B (necessarily of the same dimension as A) such that AB = I; equivalently, BA = I. An inverse need not exist. If it exists, B is uniquely determined, and is also called the inverse of A, denoted A^−1.
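Non-commutativity is easy to exhibit (a minimal NumPy sketch with illustrative matrices):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# Matrix products generally depend on the order of the factors.
print(A @ B)  # [[2 1], [4 3]]
print(B @ A)  # [[3 4], [1 2]]
```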

- Congruent matrix: Two matrices A and B are congruent if there exists an invertible matrix P such that P^T A P = B. Compare with similar matrices.
- Idempotent matrix: A matrix that has the property A² = AA = A.
- Invertible matrix: A square matrix having a multiplicative inverse, that is, a matrix B such that AB = BA = I. Invertible matrices form the general linear group.
- Involutory matrix: A square matrix which is its own inverse, i.e., AA = I. Signature matrices have this property.
- Nilpotent matrix: A square matrix satisfying A^q = 0 for some positive integer q. Equivalently, the only eigenvalue of A is 0.
- Normal matrix: A square matrix that commutes with its conjugate transpose: AA* = A*A. They are the matrices to which the spectral theorem applies.
- Orthogonal matrix: A matrix whose inverse is equal to its transpose, A^−1 = A^T. They form the orthogonal group.
- Orthonormal matrix: A matrix whose columns are orthonormal vectors.
- Similar matrix: Two matrices A and B are similar if there exists an invertible matrix P such that P^−1 A P = B. Compare with congruent matrices.
- Singular matrix: A square matrix that is not invertible.
- Unimodular matrix: An invertible matrix with entries in the integers (integer matrix). Necessarily the determinant is +1 or −1.
- Unipotent matrix: A square matrix with all eigenvalues equal to 1. Equivalently, A − I is nilpotent. See also unipotent group.
- Totally unimodular matrix: A matrix for which every non-singular square submatrix is unimodular. This has some implications in the linear programming relaxation of an integer program.
- Weighing matrix: A square matrix whose entries are in {0, 1, −1}, such that AA^T = wI for some positive integer w.

Matrices with specific applications

- Adjugate matrix: The transpose of the matrix of cofactors (signed minors) of a given square matrix. Used in calculating inverse matrices via Laplace's formula.
- Alternating sign matrix: A square matrix with entries 0, 1 and −1 such that the sum of each row and column is 1 and the nonzero entries in each row and column alternate in sign. Used in Dodgson condensation to calculate determinants.
- Augmented matrix: A matrix whose rows are concatenations of the rows of two smaller matrices. Used in calculating inverse matrices.
- Bézout matrix: A square matrix which may be used as a tool for the efficient location of polynomial zeros. Used in control theory and for stable polynomials.
- Carleman matrix: A matrix that converts composition of functions to multiplication of matrices.
- Cartan matrix: A matrix representing either a non-semisimple finite-dimensional algebra or a Lie algebra (the two meanings are distinct).
- Circulant matrix: A matrix where each row is a circular shift of its predecessor. Used in systems of linear equations and the discrete Fourier transform.
- Cofactor matrix: A matrix containing the cofactors, i.e., signed minors, of a given matrix.
- Commutation matrix: A matrix used for transforming the vectorized form of a matrix into the vectorized form of its transpose.
- Coxeter matrix: A matrix related to Coxeter groups, which describe symmetries in a structure or system.
- Distance matrix: A square matrix containing the distances, taken pairwise, of a set of points. Used in computer vision and network analysis. See also Euclidean distance matrix.
- Duplication matrix: A linear transformation matrix used for transforming half-vectorizations of matrices into vectorizations.
- Elimination matrix: A linear transformation matrix used for transforming vectorizations of matrices into half-vectorizations.
- Euclidean distance matrix: A matrix that describes the pairwise distances between points in Euclidean space. See also distance matrix.
- Fundamental matrix (linear differential equation): A matrix containing the fundamental solutions of a linear ordinary differential equation.
- Generator matrix: A matrix whose rows generate all elements of a linear code. Used in coding theory.
- Gramian matrix: A matrix containing the pairwise inner products of given vectors in an inner product space. Used to test linear independence of vectors, including ones in function spaces. Gramian matrices are real symmetric.
- Hessian matrix: A square matrix of second partial derivatives of a scalar-valued function. Used for detecting local minima and maxima of scalar-valued functions in several variables, and for blob detection (computer vision).
- Householder matrix: A transformation matrix widely used in matrix algorithms. Used in QR decomposition.
- Jacobian matrix: A matrix of first-order partial derivatives of a vector-valued function. Used in the implicit function theorem and for smooth morphisms (algebraic geometry).
- Payoff matrix: A matrix in game theory and economics that represents the payoffs in a normal form game where players move simultaneously.
- Pick matrix: A matrix that occurs in the study of analytical interpolation problems.
- Random matrix: A matrix whose entries consist of random numbers from some specified random distribution.
- Rotation matrix: A matrix representing a rotational geometric transformation. Related to the special orthogonal group and Euler angles.
- Seifert matrix: A matrix in knot theory, primarily for the algebraic analysis of topological properties of knots and links. Used for the Alexander polynomial.
- Shear matrix: An elementary matrix whose corresponding geometric transformation is a shear transformation.
- Similarity matrix: A matrix of scores which express the similarity between two data points. Used in sequence alignment.
- Substitution matrix: A matrix of scores describing the rate at which one character in a sequence changes to another over time. Used in sequence alignment.
- Symplectic matrix: A square matrix preserving a standard skew-symmetric form. Related to the symplectic group and symplectic manifolds.
- Totally positive matrix: A matrix with the determinants of all its square submatrices positive. Used in generating the reference points of Bézier curves in computer graphics.
- Transformation matrix: A matrix representing a linear transformation, often from one co-ordinate space to another to facilitate a geometric transform or projection.

Related notions:

- Derogatory matrix: a square n×n matrix whose minimal polynomial is of order less than n.
- Moment matrix: a symmetric matrix whose elements are the products of common row/column index dependent monomials.
- X-Y-Z matrix: a generalisation of the (rectangular) matrix to a cuboidal form (a 3-dimensional array of entries).

Main diagonal

In linear algebra, the main diagonal (sometimes leading diagonal or primary diagonal) of a matrix A is the collection of cells Ai,j where i is equal to j. The main diagonal of a square matrix is the diagonal which runs from the top left corner to the bottom right corner. For example, the following matrix has 1s down its main diagonal:

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

A square matrix like the above in which the entries outside the main diagonal are all zero is called a diagonal matrix. The sum of the entries on the main diagonal of a matrix is known as the trace of that matrix.

The main diagonal of a rectangular matrix is the diagonal which runs from the top left corner and steps down and right, until the right edge is reached.

The other diagonal is called antidiagonal, counterdiagonal, secondary diagonal, or minor diagonal.

Modal matrix

In linear algebra, the modal matrix is used in the diagonalization process involving eigenvalues and eigenvectors. Assume a linear system of the following form:

$$\dot{X} = AX + BU,$$

where X is n×1, A is n×n, and B is n×1. X typically represents the state vector, and U the system input. Specifically, the modal matrix M is the n×n matrix formed with the eigenvectors of A as the columns of M. It is utilized in

$$D = M^{-1} A M,$$

where D is an n×n diagonal matrix with the eigenvalues of A on the main diagonal of D and zeros elsewhere. (Note that the eigenvalues should appear left→right, top→bottom in the same order as their corresponding eigenvectors are arranged left→right in M.) This process is also known as the similarity transform.
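A minimal NumPy sketch of this construction (the matrix A is an illustrative choice; `np.linalg.eig` returns eigenvectors as columns, so its output is directly a modal matrix):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of M are eigenvectors of A, in the same order as the eigenvalues.
eigenvalues, M = np.linalg.eig(A)

D = np.linalg.inv(M) @ A @ M
print(np.allclose(D, np.diag(eigenvalues)))  # True: D is diagonal
```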

Nilpotent matrix

In linear algebra, a nilpotent matrix is a square matrix N such that

$$N^k = 0$$

for some positive integer k. The smallest such k is sometimes called the degree of N. More generally, a nilpotent transformation is a linear transformation L of a vector space such that Lk = 0 for some positive integer k. Both of these concepts are special cases of a more general concept of nilpotence that applies to elements of rings.

Examples

The matrix

$$M = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$

is nilpotent, since M² = 0. More generally, any triangular matrix with 0's along the main diagonal is nilpotent. For example, the matrix

$$B = \begin{pmatrix} 0 & 1 & 2 \\ 0 & 0 & 3 \\ 0 & 0 & 0 \end{pmatrix}$$

is nilpotent, with

$$B^2 = \begin{pmatrix} 0 & 0 & 3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad B^3 = 0.$$

Though the examples above have a large number of zero entries, a typical nilpotent matrix does not. For example, the matrices

$$\begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix}, \qquad \begin{pmatrix} 2 & 4 \\ -1 & -2 \end{pmatrix}$$

both square to zero, though neither matrix has zero entries.
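A minimal NumPy check (the helper and the example matrix are illustrative; it uses the fact that an n × n matrix is nilpotent exactly when its n-th power vanishes):

```python
import numpy as np

def is_nilpotent(N):
    """An n x n matrix is nilpotent iff N**n = 0 (the degree never exceeds n)."""
    n = N.shape[0]
    return np.allclose(np.linalg.matrix_power(N, n), 0)

# A nilpotent matrix with no zero entries at all.
M = np.array([[1.0, 1.0],
              [-1.0, -1.0]])
print(is_nilpotent(M), np.allclose(M @ M, 0))  # True True
```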

Normal matrix

A complex square matrix A is a normal matrix if A*A = AA*, where A* is the conjugate transpose of A. That is, a matrix is normal if it commutes with its conjugate transpose. If A is a real matrix, then A* = A^T; it is normal if A^T A = A A^T. Normality is a convenient test for diagonalizability: every normal matrix can be converted to a diagonal matrix by a unitary transform, and every matrix which can be made diagonal by a unitary transform is also normal, but finding the desired transform requires much more work than simply testing to see whether the matrix is normal.

Orthogonal matrix

In linear algebra, an orthogonal matrix is a square matrix with real entries whose columns (or rows) are orthogonal unit vectors (i.e., orthonormal). Because the columns are unit vectors in addition to being orthogonal, some people use the term orthonormal to describe such matrices. Equivalently, a matrix Q is orthogonal if its transpose is equal to its inverse:

$$Q^T = Q^{-1},$$

or alternatively,

$$Q^T Q = Q Q^T = I.$$

As a linear transformation, an orthogonal matrix preserves the dot product of vectors, and therefore acts as an isometry of Euclidean space, such as a rotation or reflection. In other words, it is a unitary transformation.
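A rotation matrix makes a convenient test case (a minimal NumPy sketch; the angle is arbitrary):

```python
import numpy as np

theta = np.pi / 6
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(Q.T @ Q, np.eye(2)))       # Q^T Q = I, i.e. Q^T = Q^-1

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
print(np.isclose((Q @ x) @ (Q @ y), x @ y))  # the dot product is preserved
```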

Pentadiagonal matrix

In linear algebra, a pentadiagonal matrix is a matrix that is nearly diagonal; to be exact, it is a matrix in which the only nonzero entries are on the main diagonal and the first two diagonals above and below it. That is, it is a matrix A = (ai,j) with ai,j = 0 whenever |i − j| > 2.

It follows that a pentadiagonal matrix has at most 5n − 6 nonzero entries, where n is the size of the matrix. Hence, pentadiagonal matrices are sparse. This makes them useful in numerical analysis.

Polynomial matrix

A polynomial matrix or matrix polynomial is a matrix whose elements are univariate or multivariate polynomials. A univariate polynomial matrix P of degree p is defined as:

$$P = \sum_{i=0}^{p} A(i)\, x^i = A(0) + A(1)\, x + A(2)\, x^2 + \cdots + A(p)\, x^p,$$

where A(i) denotes a matrix of constant coefficients, and A(p) is non-zero. Thus a polynomial matrix is the matrix-equivalent of a polynomial, with each element of the matrix satisfying the definition of a polynomial of degree p.

An example of a 3×3 polynomial matrix of degree 2:

$$P = \begin{pmatrix} 1 & x^2 & x \\ 0 & 2x & 2 \\ 3x + 2 & x^2 - 1 & 0 \end{pmatrix}$$

We can express this by saying that for a ring R, the rings Mn(R[X]) and (Mn(R))[X] are isomorphic.
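One convenient computational representation is the list of coefficient matrices A(0), …, A(p); the following is a minimal NumPy sketch (the coefficient matrices and the helper `eval_poly_matrix` are illustrative, not a library API):

```python
import numpy as np

# P(x) = A0 + A1*x + A2*x^2, stored as its coefficient matrices.
A0 = np.array([[1, 0], [0, 1]])
A1 = np.array([[0, 2], [1, 0]])
A2 = np.array([[1, 1], [0, 3]])   # nonzero, so deg P = 2
coeffs = [A0, A1, A2]

def eval_poly_matrix(coeffs, x):
    """Evaluate a polynomial matrix at the scalar x."""
    return sum(A * x**i for i, A in enumerate(coeffs))

print(eval_poly_matrix(coeffs, 2.0))
```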

Rank (linear algebra)

The column rank of a matrix A is the maximal number of linearly independent columns of A. Likewise, the row rank is the maximal number of linearly independent rows of A. Since the column rank and the row rank are always equal, they are simply called the rank of A. More abstractly, it is the dimension of the image of A. For the proofs, see, e.g., Murase (1960)[1], Andrea & Wong (1960)[2], Williams & Cater (1968)[3], Mackiw (1995)[4]. It is commonly denoted by either rk(A) or rank A. The rank of an m × n matrix is at most min(m, n). A matrix that has a rank as large as possible is said to have full rank; otherwise, the matrix is rank deficient.

Equivalence of definitions

A common approach is to reduce the matrix to a simpler form, generally row-echelon form, by row operations: row operations do not change the row space (hence do not change the row rank) and, being invertible, map the column space to an isomorphic space (hence do not change the column rank). Once in row-echelon form, the row rank and column rank are clearly the same, and both equal the number of pivots and the number of non-zero rows.

Equivalence of the determinantal definition (the rank of the largest non-vanishing minor) is generally proved by a separate argument. It is a generalization of the statement that if the span of n vectors has dimension p, then p of those vectors span the space: one can choose a spanning set that is a subset of the vectors. For determinantal rank, the statement is that if the row rank (column rank) of a matrix is p, then one can choose a p × p submatrix that is invertible: a subset of the rows and a subset of the columns simultaneously define an invertible submatrix. It can be alternatively stated as: if the span of n vectors has dimension p, then p of these vectors span the space and there is a set of p coordinates on which they are linearly independent.

A non-vanishing p-minor (p × p submatrix with non-vanishing determinant) shows that the rows and columns of that submatrix are linearly independent, and thus those rows and columns of the full matrix are linearly independent (in the full matrix), so the row and column rank are at least as large as the determinantal rank; however, the converse is less straightforward.

Applications

One useful application of calculating the rank of a matrix is the computation of the number of solutions of a system of linear equations. The system is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters, where k is the difference between the number of variables and the rank. This theorem is due to Rouché and Capelli. In control theory, the rank of a matrix can be used to determine whether a linear system is controllable or observable.
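The Rouché-Capelli test is direct to carry out (a minimal NumPy sketch with an illustrative rank-deficient system):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # rank 1: the rows are linearly dependent
b = np.array([[3.0],
              [6.0]])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))  # rank of the augmented matrix

# Consistent iff the two ranks agree; here the general solution has
# (number of variables) - rank_A = 2 - 1 = 1 free parameter.
print(rank_A, rank_Ab, "consistent" if rank_A == rank_Ab else "inconsistent")
```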

Row vector

In linear algebra, a row vector or row matrix is a 1 × n matrix, that is, a matrix consisting of a single row:[1]

$$x = \begin{pmatrix} x_1 & x_2 & \cdots & x_n \end{pmatrix}$$

The transpose of a row vector is a column vector:

$$x^T = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$$

The set of all row vectors forms a vector space which is the dual space to the set of all column vectors.

Skew-Hermitian matrix

In linear algebra, a square matrix with complex entries is said to be skew-Hermitian or antihermitian if its conjugate transpose is equal to its negative.[1] That is, the matrix A is skew-Hermitian if it satisfies the relation

$$A^* = -A,$$

where A^* denotes the conjugate transpose of a matrix. In component form, this means that

$$a_{i,j} = -\overline{a_{j,i}}$$

for all i and j, where ai,j is the i,j-th entry of A, and the overline denotes complex conjugation. Skew-Hermitian matrices can be understood as the complex versions of real skew-symmetric matrices, or as the matrix analogue of the purely imaginary numbers.[2] The concept can be generalized to include linear transformations of any complex vector space with a sesquilinear norm.

Skew-symmetric matrix

In linear algebra, a skew-symmetric (or antisymmetric or antimetric[1]) matrix is a square matrix A whose transpose is also its negative; that is, it satisfies the equation:

$$A^T = -A,$$

or in component form, a_{i,j} = −a_{j,i} for all i and j. For example, the following matrix is skew-symmetric:

$$\begin{pmatrix} 0 & 2 & -1 \\ -2 & 0 & -4 \\ 1 & 4 & 0 \end{pmatrix}$$

Compare this with a symmetric matrix, whose transpose is the same as the matrix:

$$A^T = A,$$

or with an orthogonal matrix, the transpose of which is equal to its inverse:

$$A^T = A^{-1}.$$

Symmetric matrix

In linear algebra, a symmetric matrix is a square matrix A that is equal to its transpose:

$$A = A^T.$$

The entries of a symmetric matrix are symmetric with respect to the main diagonal (top left to bottom right). So if the entries are written as A = (aij), then

$$a_{i,j} = a_{j,i}$$

for all indices i and j. The following 3×3 matrix is symmetric:

$$\begin{pmatrix} 1 & 7 & 3 \\ 7 & 4 & -5 \\ 3 & -5 & 6 \end{pmatrix}$$

A matrix is called skew-symmetric or antisymmetric if its transpose is the same as its negative. The following 3×3 matrix is skew-symmetric:

$$\begin{pmatrix} 0 & -3 & 4 \\ 3 & 0 & -2 \\ -4 & 2 & 0 \end{pmatrix}$$

The following matrix is neither symmetric nor skew-symmetric:

$$\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$$

Every diagonal matrix is symmetric, since all off-diagonal entries are zero. Similarly, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative.

In linear algebra, a real symmetric matrix represents a self-adjoint operator over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is generally assumed that a symmetric matrix refers to one which has real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them.
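One such accommodation in NumPy is `eigh`, the eigensolver specialized to symmetric/Hermitian matrices (a minimal sketch using the symmetric example above):

```python
import numpy as np

A = np.array([[1.0, 7.0, 3.0],
              [7.0, 4.0, -5.0],
              [3.0, -5.0, 6.0]])

print(np.allclose(A, A.T))   # True: A is symmetric

# eigh exploits symmetry and guarantees real eigenvalues.
eigenvalues, eigenvectors = np.linalg.eigh(A)
print(eigenvalues)
```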

Transpose

This article is about the matrix transpose operator. For other uses, see Transposition.

In linear algebra, the transpose of a matrix A is another matrix A^T (also written A′, A^tr or ^tA) created by any one of the following equivalent actions:

- write the rows of A as the columns of A^T
- write the columns of A as the rows of A^T
- reflect A by its main diagonal (which starts from the top left) to obtain A^T

Formally, the (i, j) element of A^T is the (j, i) element of A:

$$[A^T]_{i,j} = [A]_{j,i}$$

If A is an m × n matrix, then A^T is an n × m matrix. The transpose of a scalar is the same scalar.

Triangular matrix

In the mathematical discipline of linear algebra, a triangular matrix is a special kind of square matrix where the entries either below or above the main diagonal are zero. Because matrix equations with triangular matrices are easier to solve, they are very important in numerical analysis. The LU decomposition gives an algorithm to decompose any invertible matrix A into a normed lower triangular matrix L and an upper triangular matrix U.

A matrix of the form

$$L = \begin{pmatrix} l_{1,1} & 0 & \cdots & 0 \\ l_{2,1} & l_{2,2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n,1} & l_{n,2} & \cdots & l_{n,n} \end{pmatrix}$$

is called a lower triangular matrix or left triangular matrix, and analogously a matrix of the form

$$U = \begin{pmatrix} u_{1,1} & u_{1,2} & \cdots & u_{1,n} \\ 0 & u_{2,2} & \cdots & u_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & u_{n,n} \end{pmatrix}$$

is called an upper triangular matrix or right triangular matrix.

The standard operations on triangular matrices conveniently preserve the triangular form: the sum and product of two upper triangular matrices are again upper triangular. The inverse of an upper triangular matrix is also upper triangular, and of course we can multiply an upper triangular matrix by a constant and it will still be upper triangular. This means that the upper triangular matrices form a subalgebra of the ring of square matrices of any given size. The analogous result holds for lower triangular matrices. Note, however, that the product of a lower triangular matrix with an upper triangular matrix does not preserve triangularity.
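The ease of solving triangular systems comes from back substitution; the sketch below is a hand-rolled illustration (the function `back_substitute` is ours, written for clarity rather than performance):

```python
import numpy as np

def back_substitute(U, b):
    """Solve Ux = b for an upper triangular U with nonzero diagonal."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # work upward from the last row
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, -1.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 4.0]])
b = np.array([3.0, 11.0, 8.0])
print(back_substitute(U, b))  # matches np.linalg.solve(U, b)
```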

Tridiagonal matrix

In linear algebra, a tridiagonal matrix is a matrix that is "almost" a diagonal matrix. To be exact: a tridiagonal matrix has nonzero elements only in the main diagonal, the first diagonal below it, and the first diagonal above it. For example, the following matrix is tridiagonal:

$$\begin{pmatrix} 1 & 4 & 0 & 0 \\ 3 & 4 & 1 & 0 \\ 0 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 \end{pmatrix}$$

A determinant formed from a tridiagonal matrix is known as a continuant.[1]

Unitary matrix

In mathematics, a unitary matrix is an n by n complex matrix U satisfying the condition

$$U^* U = U U^* = I_n,$$

where I_n is the identity matrix in n dimensions and U^* is the conjugate transpose (also called the Hermitian adjoint) of U. Note this condition says that a matrix U is unitary if and only if it has an inverse which is equal to its conjugate transpose:

$$U^{-1} = U^*.$$

A unitary matrix in which all entries are real is an orthogonal matrix. Just as an orthogonal matrix G preserves the (real) inner product of two real vectors,

$$\langle Gx, Gy \rangle = \langle x, y \rangle,$$

so also a unitary matrix U satisfies

$$\langle Ux, Uy \rangle = \langle x, y \rangle$$

for all complex vectors x and y, where ⟨·,·⟩ now stands for the standard inner product on C^n.

If U is an n by n matrix, then the following are all equivalent conditions:

1. U is unitary
2. U* is unitary
3. the columns of U form an orthonormal basis of C^n with respect to this inner product
4. the rows of U form an orthonormal basis of C^n with respect to this inner product
5. U is an isometry with respect to the norm from this inner product
6. U is a normal matrix with eigenvalues lying on the unit circle
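A minimal NumPy check of conditions 1 and 6 (the matrix here, a normalized 2×2 Fourier-type matrix, is an illustrative choice):

```python
import numpy as np

U = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

print(np.allclose(U.conj().T @ U, np.eye(2)))          # U*U = I, so U is unitary
print(np.allclose(np.abs(np.linalg.eigvals(U)), 1.0))  # eigenvalues lie on the unit circle
```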

X–Y–Z matrix

An X–Y–Z matrix is a generalization of the concept of matrix to three dimensions. An X–Y–Z matrix A will thus have components Ai,j,k where

$$1 \le i \le M, \quad 1 \le j \le N, \quad 1 \le k \le P$$

for some positive integers M,N,P. Such matrices are helpful for example when considering grids in three dimensions, as in computer simulations of three-dimensional problems.

Zero matrix

In mathematics, particularly linear algebra, a zero matrix is a matrix with all its entries being zero. Some examples of zero matrices are

$$0_{1,1} = \begin{pmatrix} 0 \end{pmatrix}, \quad 0_{2,2} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, \quad 0_{2,3} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$

The set of m×n matrices with entries in a ring K forms a ring, denoted K_{m,n}. The zero matrix in K_{m,n} is the matrix with all entries equal to 0_K, where 0_K is the additive identity in K. The zero matrix is the additive identity in K_{m,n}: that is, for all A in K_{m,n} it satisfies

$$0 + A = A + 0 = A.$$

There is exactly one zero matrix of any given size m×n having entries in a given ring, so when the context is clear one often refers to the zero matrix. In general the zero element of a ring is unique and typically denoted as 0 without any subscript indicating the parent ring. Hence the examples above represent zero matrices over any ring. The zero matrix represents the linear transformation sending all vectors to the zero vector.
