Matrix multiplication

In mathematics, matrix multiplication is a binary operation that takes a pair of matrices, and produces another

matrix. Numbers such as the real or complex numbers can be multiplied according to elementary arithmetic. On

the other hand, matrices are arrays of numbers, so there is no unique way to define "the" multiplication of

matrices. As such, in general the term "matrix multiplication" refers to a number of different ways to multiply

matrices. The key features of any matrix multiplication include: the number of rows and columns the original

matrices have (called the "size", "order" or "dimension"), and specifying how the entries of the matrices generate

the new matrix.

Like vectors, matrices of any size can be multiplied by scalars, which amounts to multiplying every entry of the

matrix by the same number. Similar to the entrywise definition of adding or subtracting matrices, multiplication

of two matrices of the same size can be defined by multiplying the corresponding entries, and this is known as the

Hadamard product.

One can form many other definitions. However, the most useful definition can be motivated by linear equations

and linear transformations on vectors, which have numerous applications in applied mathematics, physics, and

engineering. This definition is often called the matrix product.[1][2] Although this definition is not commutative, it still retains the associative property and is distributive over entrywise addition of matrices, and leads to the

definitions of: an identity matrix (analogous to multiplying real numbers by the number 1), an inverse matrix

(analogous to the multiplicative inverse of a number), powers and nth roots of a square matrix, consequently the

matrix exponential can be defined by a power series of a square matrix, and so on. A consequence of this matrix

product is determinant multiplicativity. This matrix product is an important operation in matrix groups, and the

theory of group representations and irreps.

This article will use the following notational conventions: matrices are represented by capital letters in bold,

vectors in lowercase bold, and entries of vectors and matrices are italic (since they are scalars). Index notation is often the clearest way to express definitions, and is used as standard in the literature.

Contents

1 Scalar multiplication

1.1 General definition

1.2 Examples

2 Matrix product (two matrices)

2.1 General definition

2.2 Illustration

2.3 Examples

3 Properties of matrix multiplication

3.1 General

3.1.1 All matrices

3.1.2 Square matrices only

3.2 Linear transformations

4 Matrix product (any number)

4.1 Chain multiplication

4.1.1 General definition

4.2 Powers of matrices

4.3 Powers of diagonal matrices

5 The inner and outer products

5.1 Examples

6 Algorithms for efficient matrix multiplication

6.1 Communication-avoiding and distributed algorithms

7 Other forms of multiplication

7.1 Hadamard product

7.2 Frobenius product


7.3 Kronecker product

8 See also

9 Notes

10 References

11 External links

Scalar multiplication

Main article: Scalar multiplication

The simplest form of multiplication associated with matrices is scalar multiplication.

General definition

Left scalar multiplication

The left multiplication of a matrix A with a scalar λ gives another matrix λA of the same size as A. The entries of λA are given by

(λA)_ij = λ(A)_ij.

Right scalar multiplication

Similarly, the right multiplication of a matrix A with a scalar λ is defined to be

(Aλ)_ij = (A)_ij λ.

When the underlying ring is commutative, for example, the real or complex number field, these two

multiplications are the same, and are simply called scalar multiplication. However, for matrices over a ring that is not commutative, such as the quaternions, they may not be equal.

Examples

For a real scalar, say λ = 2, the products λA and Aλ agree and simply double every entry of A. For quaternion scalars and matrices, however, left and right scalar multiplication can give different results, because quaternion multiplication is not commutative.
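As a small numeric sketch (in Python with NumPy, not part of the original article), left multiplication by a real scalar simply scales every entry of the matrix:

```python
import numpy as np

# Left scalar multiplication: every entry of A is multiplied by the scalar.
lam = 2.0
A = np.array([[1.0, 8.0, -3.0],
              [4.0, -2.0, 5.0]])

lamA = lam * A          # elementwise scaling, same shape as A
print(lamA)             # [[ 2. 16. -6.]
                        #  [ 8. -4. 10.]]
```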

Matrix product (two matrices)


[Figure: arithmetic process of multiplying numbers (solid lines) in row i of matrix A and column j of matrix B, then adding the terms (dashed lines) to obtain entry ij in the final matrix.]

Assume two matrices are to be multiplied (the generalization to any number is discussed below). If A is an n×m matrix and B is an m×p matrix, the result AB of their multiplication is an n×p matrix, defined only if the number of columns m in A is equal to the number of rows m in B.

General definition

When multiplying matrices, the elements of the rows in the first

matrix are multiplied with corresponding columns in the second

matrix (depicted in the figure above). One may compute each entry in

the third matrix one at a time.

For two matrices A and B, where A is n×m with entries a_ij and B is m×p with entries b_ij (necessarily the number of columns in A equals the number of rows in B, in this case m), the matrix product AB is denoted without symbol (no multiplication signs or dots) and is the n×p matrix[3][4] with entries defined by

(AB)_ij = Σ_k a_ik b_kj, summing over k = 1, ..., m.

Treating the rows and columns in each matrix as row and column vectors respectively, this entry is also their vector dot product:

(AB)_ij = a_i · b_j,

where a_i is the ith row of A and b_j is the jth column of B.

(See below for further details). Usually the entries are numbers or expressions, but can even be matrices

themselves (see block matrix). The matrix product can still be calculated exactly the same way.
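The following sketch (my own illustration, assuming Python with NumPy is available) implements the entrywise definition above with a triple loop and compares it against NumPy's built-in product:

```python
import numpy as np

def matmul(A, B):
    """Naive matrix product: (AB)[i][j] = sum_k A[i][k] * B[k][j]."""
    n, m = len(A), len(A[0])
    m2, p = len(B), len(B[0])
    assert m == m2, "columns of A must equal rows of B"
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2, 3],
     [4, 5, 6]]
B = [[7, 8],
     [9, 10],
     [11, 12]]
print(matmul(A, B))                  # [[58.0, 64.0], [139.0, 154.0]]
print(np.array(A) @ np.array(B))     # same values
```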


Illustration

The figure to the right illustrates diagrammatically the product of two matrices A and B, showing how each intersection in the product matrix corresponds to a row of A and a column of B. The values at the intersections marked with circles are the dot products of the corresponding rows of A and columns of B.

Examples

Square matrix and column vector

If A is a 2×2 square matrix and B is a 2×1 column vector, their matrix product AB is another 2×1 column vector, yet BA is not defined.

The product of a square matrix multiplied by a column matrix arises naturally in linear algebra, for solving linear equations and representing linear transformations. By choosing the entries a, b, c, d of A appropriately, A can represent a variety of transformations of a geometric shape, such as rotations, scalings, reflections, and shears.

Square matrices

If A and B are two square matrices of the same size (say 2×2), their matrix products AB and BA are both defined. In this case, the entries show that AB and BA are not equal in general.

Multiplying square matrices which represent linear transformations corresponds to the composite transformation

(see below for details). Also, similarity transformations involving similar matrices are matrix products of square

matrices, which requires the notion of an inverse matrix (see also below).
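A small numeric check (illustrative only, using NumPy) of the point that square matrices generally do not commute:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

print(A @ B)   # [[2 1]
               #  [4 3]]
print(B @ A)   # [[3 4]
               #  [1 2]]
# Multiplying by B on the right swaps the columns of A,
# while multiplying by B on the left swaps the rows of A; the results differ.
```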


Properties of matrix multiplication

General

Analogous to numbers (elements of a field), matrices satisfy the following general properties, although there is one

subtlety, due to the nature of matrix multiplication.

All matrices

1. Not commutative: In general AB ≠ BA, because AB and BA may not be simultaneously defined, and even if they are they may still not be equal. This is contrary to ordinary multiplication of numbers. To specify the ordering of matrix multiplication in words, "pre-multiply (or left multiply) A by B" means BA, while "post-multiply (or right multiply) A by C" means AC. As long as the entries of the matrix come from a ring that has an identity, and n > 1, there is a pair of n×n noncommuting matrices over the ring. A notable exception is that the identity matrix (or any scalar multiple of it) commutes with every square matrix.

2. Associative: (AB)C = A(BC).

3. Distributive over matrix addition: A(B + C) = AB + AC and (A + B)C = AC + BC.

4. Scalar multiplication is compatible with matrix multiplication: λ(AB) = (λA)B and (AB)λ = A(Bλ), where λ is a scalar. If the entries of the matrix are real or complex numbers (or from any other commutative ring), then all four quantities are equal. More generally, all four are equal if λ belongs to the center of the ring of entries of the matrix, because in this case λX = Xλ for all matrices X.

5. Transpose: (AB)^T = B^T A^T, where ^T denotes the transpose, the interchange of row i with column i in a matrix. This identity holds for any matrices over a commutative ring, but not for all rings in general. Note that A and B are reversed.

6. Hermitian conjugate: If A and B have complex entries, then (AB)^† = B^† A^†, where ^† denotes the Hermitian conjugate of a matrix (complex conjugate and transpose).

7. Traces: The trace of a product AB is independent of the order of A and B: tr(AB) = tr(BA).
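A quick numeric spot-check (an illustrative sketch, not from the article) of the associativity, distributivity, transpose and trace properties listed above:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 4))
D = rng.standard_normal((3, 2))

print(np.allclose((A @ B) @ C, A @ (B @ C)))        # associative: (AB)C = A(BC)
print(np.allclose(A @ (B + D), A @ B + A @ D))      # distributive: A(B + D) = AB + AD
print(np.allclose((A @ B).T, B.T @ A.T))            # transpose: (AB)^T = B^T A^T
print(np.isclose(np.trace(A @ B), np.trace(B @ A))) # trace: tr(AB) = tr(BA)
```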

Square matrices only

Main article: square matrix

1. Identity element: If A is a square matrix, then AI = IA = A, where I is the identity matrix of the same order.

2. Inverse matrix: If A is a square matrix, there may be an inverse matrix A^−1 of A such that AA^−1 = A^−1A = I. If this property holds then A is an invertible matrix; if not, A is a singular matrix. Moreover, (AB)^−1 = B^−1A^−1.

3. Determinants: The determinant of a product AB is the product of the determinants of the square matrices A and B (the determinant is not defined when the underlying ring is not commutative): det(AB) = det(A)det(B). Since det(A) and det(B) are just numbers and so commute, det(AB) = det(A)det(B) = det(B)det(A) = det(BA), even when AB ≠ BA.

Linear transformations


Main article: Linear transformations

Matrices offer a concise way of representing linear transformations between vector spaces, and matrix

multiplication corresponds to the composition of linear transformations. The matrix product of two matrices can

be defined when their entries belong to the same ring, and hence can be added and multiplied.

Let U, V, and W be vector spaces over the same field with given bases, let S: V → W and T: U → V be linear transformations, and let ST: U → W be their composition.

Suppose that A, B, and C are the matrices representing the transformations S, T, and ST with respect to the given

bases.

Then AB = C, that is, the matrix of the composition (or the product) of linear transformations is the product of their

matrices with respect to the given bases.
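To make the correspondence concrete, here is a small sketch (my own illustration, with S taken to be a rotation and T a scaling, which are not the article's examples): applying T and then S to a vector agrees with multiplying by the single matrix AB.

```python
import numpy as np

theta = np.pi / 2
A = np.array([[np.cos(theta), -np.sin(theta)],   # S: rotate the plane by 90 degrees
              [np.sin(theta),  np.cos(theta)]])
B = np.array([[2.0, 0.0],                        # T: scale x by 2, y by 3
              [0.0, 3.0]])

x = np.array([1.0, 1.0])
print(A @ (B @ x))        # apply T, then S
print((A @ B) @ x)        # multiply by the composed matrix AB: same vector
```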

Matrix product (any number)

Chain multiplication

Main article: Matrix chain multiplication

Matrix multiplication can be extended to the case of more than two matrices, provided that for each sequential

pair, their dimensions match.

General definition

The product of N matrices A_1, A_2, ..., A_N with sizes n_0×n_1, n_1×n_2, ..., n_(N−1)×n_N is the n_0×n_N matrix

A_1 A_2 ⋯ A_N.

The same properties will hold, as long as the ordering of matrices is not changed. The number of possible ways of grouping n matrices for multiplication is equal to the (n − 1)th Catalan number.

For example, if A, B, C, and D are respectively m×p, p×q, q×r, and r×n matrices, then there are 5 ways of grouping them without changing their order, and the product ABCD is an m×n matrix.
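Which grouping is cheapest depends on the dimensions. The sketch below (illustrative, with made-up dimensions) counts the scalar multiplications used by two of the five groupings of ABCD, using the standard cost p·q·r for multiplying a p×q matrix by a q×r matrix:

```python
def mult_cost(p, q, r):
    """Scalar multiplications needed to multiply a p x q matrix by a q x r matrix."""
    return p * q * r

# Hypothetical dimensions: A is 10x100, B is 100x5, C is 5x50, D is 50x1.
m, p, q, r, n = 10, 100, 5, 50, 1

# Grouping ((AB)C)D: cost of AB, then (AB)C, then ((AB)C)D.
cost1 = mult_cost(m, p, q) + mult_cost(m, q, r) + mult_cost(m, r, n)
# Grouping A(B(CD)): cost of CD, then B(CD), then A(B(CD)).
cost2 = mult_cost(q, r, n) + mult_cost(p, q, n) + mult_cost(m, p, n)

print(cost1)   # 8000
print(cost2)   # 1750 -- far cheaper for these dimensions
```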

Powers of matrices

Square matrices can be multiplied by themselves repeatedly in the same way as ordinary numbers, because they

always have the same number of rows and columns. This repeated multiplication can be described as a power of

the matrix, a special case of the ordinary matrix product. On the contrary, rectangular matrices do not have the

same number of rows and columns so they can never be raised to a power. An n×n matrix A raised to a positive integer k is defined as

A^k = A A ⋯ A (k factors),

and the following identities hold, where λ is a scalar:

Zero power: A^0 = I, where I is the identity matrix. This is parallel to the zeroth power of any number, which equals unity.

Scalar multiplication: (λA)^k = λ^k A^k.

Determinant: det(A^k) = (det A)^k.

The naive way to compute matrix powers is to multiply the matrix A into the result k times, starting from the identity matrix, just as in the scalar case. This can be improved using exponentiation by squaring, a method commonly used for scalars. For diagonalizable matrices, an even better method is to use the eigenvalue decomposition of A. Another method, based on the Cayley-Hamilton theorem, finds an identity using the matrices' characteristic polynomial, producing a more effective equation for A^k in which a scalar is raised to the required power, rather than an entire matrix.
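A minimal sketch of exponentiation by squaring for matrices (my own illustration, not the article's code); it uses O(log k) matrix products instead of k − 1:

```python
import numpy as np

def matrix_power(A, k):
    """Raise a square matrix to a non-negative integer power by repeated squaring."""
    result = np.eye(A.shape[0])        # A^0 = I
    base = A.copy()
    while k > 0:
        if k & 1:                      # multiply in the current square when the bit is set
            result = result @ base
        base = base @ base
        k >>= 1
    return result

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])             # Fibonacci matrix
print(matrix_power(A, 10))             # top-left entry is F(11) = 89
print(np.linalg.matrix_power(A, 10))   # NumPy's built-in, for comparison
```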

Powers of diagonal matrices

A special case is the power of a diagonal matrix A.

Since the product of diagonal matrices amounts to simply multiplying corresponding diagonal elements together, the kth power of a diagonal matrix A has each diagonal entry raised to the power k. Explicitly, if A = diag(a_1, a_2, ..., a_n), then A^k = diag(a_1^k, a_2^k, ..., a_n^k),

meaning it is easy to raise a diagonal matrix to a power. When raising an arbitrary matrix (not necessarily a

diagonal matrix) to a power, it is often helpful to exploit this property by diagonalizing the matrix first.
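A short sketch (illustrative only) of both facts: powers of a diagonal matrix just raise the diagonal entries, and a diagonalizable matrix can be powered through its eigendecomposition A = PDP^−1, so A^k = PD^kP^−1.

```python
import numpy as np

# Power of a diagonal matrix: raise each diagonal entry.
D = np.diag([2.0, 3.0, 5.0])
print(np.diag(np.diag(D) ** 4))        # diag(16, 81, 625)

# Power of a diagonalizable matrix via its eigendecomposition A = P D P^-1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, P = np.linalg.eig(A)                # eigenvalues w, eigenvectors as columns of P
A5 = P @ np.diag(w ** 5) @ np.linalg.inv(P)
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))   # True
```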

The inner and outer products

Given two column vectors a and b, the Euclidean inner product and outer product are the simplest special cases of the matrix product, obtained by transposing the column vectors into row vectors.[5]

The inner product is a column vector multiplied on the left by a row vector:

a · b = a^T b = Σ_i a_i b_i,

a single number (a 1×1 matrix).

The outer product is a row vector multiplied on the left by a column vector:

a b^T, where (a b^T)_ij = a_i b_j.
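A small sketch (illustrative only) of both special cases in NumPy, treating a and b as 3×1 column vectors:

```python
import numpy as np

a = np.array([[1.0], [2.0], [3.0]])    # 3x1 column vectors
b = np.array([[4.0], [5.0], [6.0]])

inner = a.T @ b                        # 1x1 matrix: a^T b = 1*4 + 2*5 + 3*6 = 32
outer = a @ b.T                        # 3x3 matrix with entries a_i * b_j
print(inner)
print(outer)
print(np.dot(a.ravel(), b.ravel()))    # same inner product as a plain scalar, 32.0
```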


Matrix product (in terms of inner product)

Suppose that the first n×m matrix A is decomposed into its row vectors a_i, and the second m×p matrix B into its column vectors b_i:[1]

A = [a_1; a_2; ...; a_n],   B = [b_1 b_2 ... b_p],

where a_i is the ith row of A and b_j is the jth column of B. The entries given in the introduction are then the dot products

(AB)_ij = a_i · b_j.

It is also possible to express a matrix product in terms of concatenations of products of matrices and row or column vectors:

AB = [Ab_1 Ab_2 ... Ab_p] = [a_1 B; a_2 B; ...; a_n B].

These decompositions are particularly useful for matrices that are envisioned as concatenations of particular

types of row vectors or column vectors, e.g. orthogonal matrices (whose rows and columns are unit vectors

orthogonal to each other) and Markov matrices (whose rows or columns sum to 1).

Matrix product (in terms of outer product)

An alternative method results when the decomposition is done the other way around, i.e. the first matrix A is decomposed into column vectors a_i and the second matrix B into row vectors b_i:

AB = a_1 b_1 + a_2 b_2 + ... + a_m b_m = Σ_i a_i b_i,

where this time a_i is the ith column of A, b_i is the ith row of B, and each term a_i b_i is an outer product (an n×p matrix).



This method emphasizes the effect of individual column/row pairs on the result, which is a useful point of view

with e.g. covariance matrices, where each such pair corresponds to the effect of a single sample point.

Examples

Suppose A and B are two matrices whose product AB is to be formed. Using the inner product approach, each entry of AB is computed as the dot product of a row of A with a column of B, while the outer product approach builds the same AB as a sum of outer products of the columns of A with the rows of B.
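A small numeric sketch (my own, standing in for the article's worked example) that forms AB by both approaches and checks that they agree:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])          # 3x2
B = np.array([[7.0, 8.0, 9.0],
              [10.0, 11.0, 12.0]])  # 2x3

# Inner product approach: entry (i, j) is row i of A dotted with column j of B.
inner = np.array([[A[i, :] @ B[:, j] for j in range(B.shape[1])]
                  for i in range(A.shape[0])])

# Outer product approach: sum of (column k of A) times (row k of B).
outer = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))

print(np.allclose(inner, outer))    # True
print(np.allclose(inner, A @ B))    # True
```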

Algorithms for efficient matrix multiplication

[Unsolved problem in computer science: What is the fastest algorithm for matrix multiplication?]

The running time of square matrix multiplication, if carried out naively, is O(n^3). The running time for multiplying rectangular matrices (one m×p matrix with one p×n matrix) is O(mnp); however, more efficient algorithms exist, such as Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication". It is based on a way of multiplying two 2×2 matrices which requires only 7 multiplications (instead of the usual 8), at the expense of several additional addition and subtraction operations. Applying this recursively gives an algorithm with a multiplicative cost of O(n^(log_2 7)) ≈ O(n^2.807). Strassen's algorithm is more complex, and the numerical stability is reduced compared to the naive algorithm.[6] Nevertheless, it appears in several libraries, such as BLAS, where it is significantly more efficient for matrices with dimensions n > 100,[7] and is very useful for large matrices over exact domains such as finite fields, where numerical stability is not an issue.
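Below is a compact recursive sketch of Strassen's 7-multiplication scheme, under the simplifying assumption that the matrices are square with a power-of-two size; the cutoff parameter and the fallback to the ordinary product are my own choices, not part of the published algorithm.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply two n x n matrices (n a power of two) using Strassen's recursion."""
    n = A.shape[0]
    if n <= cutoff:                     # fall back to the ordinary product on small blocks
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # The seven block products of Strassen's scheme.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)

    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

X = np.random.default_rng(2).standard_normal((128, 128))
Y = np.random.default_rng(3).standard_normal((128, 128))
print(np.allclose(strassen(X, Y), X @ Y))    # True up to floating-point error
```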

The current algorithm with the lowest known exponent k is a generalization of the Coppersmith-Winograd algorithm that has an asymptotic complexity of O(n^2.3727), thanks to Vassilevska Williams.[8] This algorithm, and the Coppersmith-Winograd algorithm on which it is based, are similar to Strassen's algorithm: a way is devised for multiplying two k×k matrices with fewer than k^3 multiplications, and this technique is applied recursively. However, the constant coefficient hidden by the Big O notation is so large that these algorithms are only worthwhile for matrices that are too large to handle on present-day computers.[9]


[Figure: Block matrix multiplication. In the 2D algorithm, each processor is responsible for one submatrix of C. In the 3D algorithm, every pair of submatrices from A and B that is multiplied is assigned to one processor.]

Since any algorithm for multiplying two n×n matrices has to process all 2n^2 entries, there is an asymptotic lower bound of Ω(n^2) operations. Raz (2002) proves a lower bound of Ω(n^2 log n) for bounded-coefficient arithmetic circuits over the real or complex numbers.

Cohn et al. (2003, 2005) put methods such as the Strassen and Coppersmith-Winograd algorithms in an entirely different group-theoretic context, by utilising triples of subsets of finite groups which satisfy a disjointness property called the triple product property (TPP). They show that if families of wreath products of Abelian groups with symmetric groups realise families of subset triples with a simultaneous version of the TPP, then there are matrix multiplication algorithms with essentially quadratic complexity. Most researchers believe that this is indeed the case.[10] However, Alon, Shpilka and Umans have recently shown that some of these conjectures implying fast matrix multiplication are incompatible with another plausible conjecture, the sunflower conjecture.[11]

Because of the nature of matrix operations and the layout of matrices in memory, it is typically possible to obtain substantial performance gains through the use of parallelization and vectorization. As a result, some algorithms with lower time complexity on paper may have indirect time-complexity costs on real machines.

Freivalds' algorithm is a simple Monte Carlo algorithm that, given matrices A, B and C, verifies in Θ(n^2) time whether AB = C.
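A sketch of Freivalds' check (illustrative, not the article's code): multiply both sides by a random 0/1 vector, which costs only three matrix-vector products per trial and detects AB ≠ C with probability at least 1/2 per trial.

```python
import numpy as np

def freivalds(A, B, C, trials=20):
    """Probabilistically verify that A @ B == C using random 0/1 vectors."""
    n = C.shape[1]
    rng = np.random.default_rng()
    for _ in range(trials):
        r = rng.integers(0, 2, size=n)           # random vector of 0s and 1s
        if not np.allclose(A @ (B @ r), C @ r):  # three O(n^2) matrix-vector products
            return False                          # definitely A @ B != C
    return True                                   # very likely A @ B == C

A = np.random.default_rng(4).standard_normal((100, 100))
B = np.random.default_rng(5).standard_normal((100, 100))
C = A @ B
print(freivalds(A, B, C))                         # True
C[0, 0] += 1.0
print(freivalds(A, B, C))                         # almost certainly False
```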

Communication-avoiding and distributed algorithms

On modern architectures with hierarchical memory, the cost of loading and

storing input matrix elements tends to dominate the cost of arithmetic. On a

single machine this is the amount of data transferred between RAM and cache,

while on a distributed memory multi-node machine it is the amount transferred

between nodes; in either case it is called the communication bandwidth. The

naive algorithm using three nested loops uses Ω(n^3) communication bandwidth.

Cannon's algorithm, also known as the 2D algorithm, partitions each input matrix into a block matrix whose elements are submatrices of size √(M/3) by √(M/3), where M is the size of fast memory.[12] The naive algorithm is then used over the block matrices, computing products of submatrices entirely in fast memory. This reduces the communication bandwidth to O(n^3/√M), which is asymptotically optimal (for algorithms performing Ω(n^3) computation).[13][14]
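A serial sketch of the blocking idea behind such algorithms (an illustration only, not Cannon's actual distributed implementation): partition the matrices into b×b blocks and accumulate products of whole blocks, so each small block is reused while it sits in fast memory.

```python
import numpy as np

def blocked_matmul(A, B, b=32):
    """Multiply square matrices by accumulating products of b x b submatrices."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, b):
        for j in range(0, n, b):
            for k in range(0, n, b):
                # One block product, computed entirely on small submatrices.
                C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
    return C

X = np.random.default_rng(6).standard_normal((96, 96))
Y = np.random.default_rng(7).standard_normal((96, 96))
print(np.allclose(blocked_matmul(X, Y), X @ Y))    # True
```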

In a distributed setting with p processors arranged in a √p by √p 2D mesh, one submatrix of the result can be assigned to each processor, and the product can be computed with each processor transmitting O(n^2/√p) words, which is asymptotically optimal assuming that each node stores the minimum O(n^2/p) elements.[14]

This can be improved by the 3D algorithm, which arranges the processors in a 3D cube mesh, assigning every product of two input submatrices to a single processor. The result submatrices are then generated by performing a reduction over each row.[15] This algorithm transmits O(n^2/p^(2/3)) words per processor, which is asymptotically optimal.[14] However, this requires replicating each input matrix element p^(1/3) times, and so requires a factor of p^(1/3) more memory than is needed to store the inputs. This algorithm can be combined with Strassen to further reduce runtime.[15] "2.5D" algorithms provide a continuous tradeoff between memory usage and communication bandwidth.[16]

Other forms of multiplication

There are other ways to multiply two matrices, in fact simpler than the definition above.


Hadamard product

Main article: Hadamard product (matrices)

For two matrices of the same dimensions, there is the Hadamard product, also known as the element-wise product, pointwise product, entrywise product and the Schur product.[17] For two matrices A and B of the same dimensions, the Hadamard product A ∘ B is a matrix of the same dimensions which has elements

(A ∘ B)_ij = (A)_ij (B)_ij.

Due to its entrywise nature, this operation is identical to multiplying many ordinary numbers (mn of them) all at once; hence the Hadamard product is commutative, associative and distributive over entrywise addition, and is a principal submatrix of the Kronecker product. It appears in lossy compression algorithms such as JPEG.
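In NumPy, the `*` operator on arrays is exactly this entrywise product (a brief sketch, not from the article):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[10, 20],
              [30, 40]])

print(A * B)        # Hadamard (entrywise) product: [[10, 40], [90, 160]]
print(A @ B)        # ordinary matrix product, for contrast: [[70, 100], [150, 220]]
```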

Frobenius product

The Frobenius inner product, sometimes denoted A : B, is the component-wise inner product of two matrices as

though they are vectors. It is also the sum of the entries of the Hadamard product. Explicitly,

A : B = Σ_{i,j} A_ij B_ij = vec(A)^T vec(B) = tr(A^T B) = tr(A B^T),

where "tr" denotes the trace of a matrix and vec denotes vectorization. This inner product induces the Frobenius

norm.
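The equivalent expressions above can be checked numerically; the sketch below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))

s1 = np.sum(A * B)                       # sum of entries of the Hadamard product
s2 = np.trace(A.T @ B)                   # tr(A^T B)
s3 = A.ravel() @ B.ravel()               # vec(A)^T vec(B)
print(np.allclose([s1, s2], s3))         # True
print(np.isclose(np.sqrt(np.sum(A * A)), np.linalg.norm(A, 'fro')))  # Frobenius norm
```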

Kronecker product

Main article: Kronecker product

For two matrices A and B of any dimensions m×n and p×q respectively (no constraints relating the dimensions of the two matrices), the Kronecker product denoted A ⊗ B is a matrix with dimensions mp×nq, which has elements[citation needed]

(A ⊗ B)_rs = a_{⌊(r−1)/p⌋+1, ⌊(s−1)/q⌋+1} b_{((r−1) mod p)+1, ((s−1) mod q)+1},

where ⌊ ⌋ represents the floor function. Explicitly, A ⊗ B is the block matrix whose (i, j) block is a_ij B:

A ⊗ B = [ a_11 B  ...  a_1n B ;  ...  ;  a_m1 B  ...  a_mn B ].

This is the application of the more general tensor product applied to matrices.
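NumPy exposes this operation as `np.kron`; the sketch below (illustrative only) also rebuilds it from the block description, where block (i, j) is a_ij·B:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

K = np.kron(A, B)                        # 4x4 result for 2x2 inputs
blocks = np.block([[A[i, j] * B for j in range(A.shape[1])]
                   for i in range(A.shape[0])])
print(np.array_equal(K, blocks))         # True
print(K.shape)                           # (4, 4) = (mp, nq)
```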

See also


Basic Linear Algebra Subprograms
Composition of relations
Coppersmith-Winograd algorithm
Cracovian
Logical matrix
Matrix addition
Matrix exponentiation
Matrix inversion
Strassen algorithm

Notes

1. ^ a b Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC Publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1, ISBN (VHC Inc.) 0-89573-752-3.
2. ^ McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B. Parker, 1994, ISBN 0-07-051400-3.
3. ^ Linear Algebra (4th Edition), S. Lipschutz, M. Lipson, Schaum's Outlines, McGraw Hill (USA), 2009, ISBN 978-0-07-154352-1.
4. ^ Mathematical Methods for Physics and Engineering, K.F. Riley, M.P. Hobson, S.J. Bence, Cambridge University Press, 2010, ISBN 978-0-521-86153-3.
5. ^ Mathematical Methods for Physics and Engineering, K.F. Riley, M.P. Hobson, S.J. Bence, Cambridge University Press, 2010, ISBN 978-0-521-86153-3.
6. ^ Miller, Webb (1975), "Computational complexity and numerical stability" (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.148.9947&rep=rep1&type=pdf), SIAM News 4: 97-107.
7. ^ Press 2007, p. 108.
8. ^ Virginia Vassilevska Williams, "Multiplying matrices faster than Coppersmith-Winograd" (http://www.cs.stanford.edu/~virgi/matrixmult-f.pdf). The original algorithm was presented by Don Coppersmith and Shmuel Winograd in 1990; it has an asymptotic complexity of O(n^2.376).
9. ^ Robinson, Sara (2005), "Toward an Optimal Algorithm for Matrix Multiplication" (http://www.siam.org/pdf/news/174.pdf), SIAM News 38 (9).
10. ^ Robinson, 2005.
11. ^ Alon, Shpilka, Umans, On Sunflowers and Matrix Multiplication (http://eccc.hpi-web.de/report/2011/067/).
12. ^ Lynn Elliot Cannon, A cellular computer to implement the Kalman Filter Algorithm (http://portal.acm.org/citation.cfm?coll=GUIDE&dl=GUIDE&id=905686), Technical report, Ph.D. Thesis, Montana State University, 14 July 1969.
13. ^ Hong, J.W.; H.T. Kung (1981). "I/O complexity: The red-blue pebble game". STOC '81: Proceedings of the thirteenth annual ACM symposium on Theory of computing: 326-333.
14. ^ a b c Irony, Dror; Sivan Toledo, Alexander Tiskin (September 2004). "Communication lower bounds for distributed-memory matrix multiplication". J. Parallel Distrib. Comput. 64 (9): 1017-1026. doi:10.1016/j.jpdc.2004.03.021.
15. ^ a b Agarwal, R.C.; S. M. Balle, F. G. Gustavson, M. Joshi, P. Palkar (September 1995). "A three-dimensional approach to parallel matrix multiplication". IBM J. Res. Dev. 39 (5): 575-582. doi:10.1147/rd.395.0575.
16. ^ Solomonik, Edgar; James Demmel (2011). "Communication-optimal parallel 2.5D matrix multiplication and LU factorization algorithms". Proceedings of the 17th international conference on Parallel processing, Part II: 90-109.
17. ^ (Horn & Johnson 1985, Ch. 5).

References

Henry Cohn, Robert Kleinberg, Balazs Szegedy, and Chris Umans. Group-theoretic Algorithms for Matrix Multiplication. arXiv:math.GR/0511460. Proceedings of the 46th Annual Symposium on Foundations of Computer Science, 23-25 October 2005, Pittsburgh, PA, IEEE Computer Society, pp. 379-388.
Henry Cohn, Chris Umans. A Group-theoretic Approach to Fast Matrix Multiplication. arXiv:math.GR/0307321. Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, 11-14 October 2003, Cambridge, MA, IEEE Computer Society, pp. 438-449.
Coppersmith, D., Winograd, S., Matrix multiplication via arithmetic progressions, J. Symbolic Comput. 9, pp. 251-280, 1990.
Horn, Roger A.; Johnson, Charles R. (1985), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6.
Horn, Roger A.; Johnson, Charles R. (1991), Topics in Matrix Analysis, Cambridge University Press, ISBN 978-0-521-46713-1.
Knuth, D.E., The Art of Computer Programming Volume 2: Seminumerical Algorithms. Addison-Wesley Professional; 3rd edition (November 14, 1997). ISBN 978-0-201-89684-8. p. 501.
Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (2007), Numerical Recipes: The Art of Scientific Computing (3rd ed.), Cambridge University Press, ISBN 978-0-521-88068-8.
Ran Raz. On the complexity of matrix product. In Proceedings of the thirty-fourth annual ACM symposium on Theory of computing. ACM Press, 2002. doi:10.1145/509907.509932.
Robinson, Sara, Toward an Optimal Algorithm for Matrix Multiplication, SIAM News 38 (9), November 2005. PDF (http://www.siam.org/pdf/news/174.pdf)
Strassen, Volker, Gaussian Elimination is not Optimal, Numer. Math. 13, pp. 354-356, 1969.
Styan, George P. H. (1973), "Hadamard Products and Multivariate Statistical Analysis", Linear Algebra and its Applications 6: 217-240, doi:10.1016/0024-3795(73)90023-2.
Vassilevska Williams, Virginia, Multiplying matrices faster than Coppersmith-Winograd, Manuscript, May 2012. PDF (http://www.cs.stanford.edu/~virgi/matrixmult-f.pdf)

External links

How to Multiply Matrices (http://www.mathsisfun.com/algebra/matrix-multiplying.html)
The Simultaneous Triple Product Property and Group-theoretic Results for the Exponent of Matrix Multiplication (http://arxiv.org/abs/cs/0703145)
WIMS Online Matrix Multiplier (http://wims.unice.fr/~wims/en_tool~linear~matmult.html)
Matrix Multiplication Problems (http://ceee.rice.edu/Books/LA/mult/mult4.html#TOP)
Block Matrix Multiplication Problems (http://www.gordon-taft.net/MatrixMultiplication.html)
Wijesuriya, Viraj B., Daniweb: Sample Code for Matrix Multiplication using MPI Parallel Programming Approach (http://www.daniweb.com/forums/post1428830.html#post1428830), retrieved 2010-12-29
Linear algebra: matrix operations (http://www.umat.feec.vutbr.cz/~novakm/algebra_matic/en) Multiply or add matrices of a type and with coefficients you choose and see how the result was computed.
Visual Matrix Multiplication (http://www.wefoundland.com/project/Visual_Matrix_Multiplication) An interactive app for learning matrix multiplication.
Matrix Multiplication in Java, Dr. P. Viry (http://www.ateji.com/px/whitepapers/Ateji%20PX%20MatMult%20Whitepaper%20v1.2.pdf?phpMyAdmin=95wsvAC1wsqrAq3j,M3duZU3UJ7)

