Case Study 2
GROUP -
1. MUSKAN SINGH (028)
2. NAMAN KAUSHIK (029)
3. NISHAN
SUBMITTED TO: MS. ANCHAL
OCTOBER 23, 2019
Contents
CAYLEY HAMILTON THEOREM
    Applications
COSETS
    Applications
LAGRANGE'S THEOREM
    Applications
CAYLEY HAMILTON THEOREM
In linear algebra, the Cayley–Hamilton theorem
(named after the mathematicians Arthur Cayley and
William Rowan Hamilton) states that every square
matrix over a commutative ring (such as the field of real
or complex numbers) satisfies its own characteristic equation.
The theorem was first proved in 1853 in terms of inverses of linear functions of quaternions, a
non-commutative ring, by Hamilton. This corresponds to the special case of certain 4 × 4 real or
2 × 2 complex matrices. The theorem holds for general quaternionic matrices. Cayley in 1858
stated it for 3 × 3 and smaller matrices, but only published a proof for the 2 × 2 case. The general
case was first proved by Frobenius in 1878.
Examples
1×1 matrices
For a 1×1 matrix A = (a1,1), the characteristic polynomial is given by p(λ) = λ − a1,1, and so
p(A) = (a1,1) − a1,1 = (0) is immediate.
2×2 matrices
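For a 2×2 matrix A = [[a, b], [c, d]], the characteristic polynomial is p(λ) = λ^2 − (a + d)λ + (ad − bc), and the theorem asserts that A^2 − (a + d)A + (ad − bc)I_2 is the zero matrix. A quick numerical check (a sketch using NumPy; the matrix is arbitrary):

```python
import numpy as np

# Verify the Cayley-Hamilton theorem for a 2x2 matrix:
# p(A) = A^2 - (trace A) * A + (det A) * I should be the zero matrix.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
trace, det = np.trace(A), np.linalg.det(A)
p_of_A = A @ A - trace * A + det * np.eye(2)
print(np.allclose(p_of_A, np.zeros((2, 2))))  # True
```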
Applications
Determinant and inverse matrix
For a general n×n invertible matrix A, i.e., one with nonzero determinant, A^−1 can thus be
written as an (n − 1)-th order polynomial expression in A. Writing the characteristic polynomial
as p(λ) = λ^n + c_(n−1)λ^(n−1) + … + c_1λ + c_0, with c_0 = (−1)^n det(A), the Cayley–Hamilton
theorem amounts to the identity A^n + c_(n−1)A^(n−1) + … + c_1A + c_0·I_n = 0, so that
A^−1 = −(1/c_0)(A^(n−1) + c_(n−1)A^(n−2) + … + c_1·I_n).
The coefficients c_i are given by the elementary symmetric polynomials of the eigenvalues of A.
Using Newton's identities, the elementary symmetric polynomials can in turn be expressed in
terms of the power sum symmetric polynomials of the eigenvalues, i.e., in terms of the traces
tr(A), tr(A^2), …, tr(A^n).
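The inverse formula above can be sketched numerically as follows (assuming NumPy; np.poly returns the characteristic-polynomial coefficients of a square matrix, highest degree first):

```python
import numpy as np

def inverse_via_cayley_hamilton(A):
    """Invert A using only its characteristic polynomial (Cayley-Hamilton)."""
    n = A.shape[0]
    c = np.poly(A)  # c[0] = 1, ..., c[n] = (-1)^n det(A)
    # A^{-1} = -(1/c[n]) * (A^{n-1} + c[1] A^{n-2} + ... + c[n-1] I),
    # evaluated here by Horner's scheme on the matrix A.
    acc = np.zeros_like(A, dtype=float)
    for k in range(n):
        acc = acc @ A + c[k] * np.eye(n)
    return -acc / c[n]

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(np.allclose(inverse_via_cayley_hamilton(A), np.linalg.inv(A)))  # True
```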
n-th power of a matrix
The Cayley–Hamilton theorem always provides a relationship between the powers of A (though not
always the simplest one), which allows one to simplify expressions involving such powers, and to
evaluate them without having to compute the power A^n or any higher powers of A.
For a 2×2 matrix, for example, the theorem gives A^2 = (tr A)·A − (det A)·I_2, so every power of A
can be written as a sum of just two terms, αA + βI_2. In fact, a matrix power of any order k can be
written as a matrix polynomial of degree at most n − 1, where n is the size of the square matrix.
This is an instance where the Cayley–Hamilton theorem can be used to express a matrix function,
which we will discuss below systematically.
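One way to sketch this reduction (assuming NumPy; np.polydiv divides x^k by the characteristic polynomial, leaving a remainder of degree below n, which is then evaluated at A):

```python
import numpy as np

def matrix_power_via_ch(A, k):
    """Compute A^k by reducing x^k modulo the characteristic polynomial of A."""
    n = A.shape[0]
    p = np.poly(A)                      # characteristic polynomial, highest degree first
    xk = np.zeros(k + 1)
    xk[0] = 1.0                         # coefficients of x^k
    _, r = np.polydiv(xk, p)            # remainder has degree at most n - 1
    out = np.zeros_like(A, dtype=float) # Horner evaluation of the remainder at A
    for coef in r:
        out = out @ A + coef * np.eye(n)
    return out

A = np.array([[1.0, 1.0], [0.0, 1.0]])
print(np.allclose(matrix_power_via_ch(A, 5), np.linalg.matrix_power(A, 5)))  # True
```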
Matrix functions
The Cayley–Hamilton theorem likewise allows a function f(A) of an n×n matrix A to be expressed
as a polynomial p(A) of degree at most n − 1, with p(x) = c_0 + c_1·x + … + c_(n−1)·x^(n−1);
the coefficients c_i are determined by the n linear equations f(λ_i) = p(λ_i) at the eigenvalues λ_i
of A. When the eigenvalues are repeated, that is λ_i = λ_j for some i ≠ j, two or more of these
equations are identical, and hence the linear equations cannot be solved uniquely. For such cases,
for an eigenvalue λ with multiplicity m, the equations f(λ) = p(λ) are differentiated up to m − 1
times, giving the extra m − 1 linearly independent conditions f^(k)(λ) = p^(k)(λ), k = 1, …, m − 1,
which, when combined with the others, yield the required n equations to solve for the c_i.
Finding a polynomial that passes through the points (λi, f (λi)) is essentially an interpolation problem, and
can be solved using Lagrange or Newton interpolation techniques, leading to Sylvester's formula.
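For distinct eigenvalues, the interpolation step can be sketched as follows (assuming NumPy; matfunc_distinct is a hypothetical helper name, and the Vandermonde system encodes the equations f(λ_i) = p(λ_i)):

```python
import numpy as np

def matfunc_distinct(A, f):
    """Evaluate f(A) as a polynomial of degree at most n-1 in A.
    Assumes the eigenvalues of A are distinct."""
    n = A.shape[0]
    lam = np.linalg.eigvals(A)
    V = np.vander(lam, n, increasing=True)  # row i: [1, lam_i, lam_i^2, ...]
    c = np.linalg.solve(V, f(lam))          # solve f(lam_i) = sum_k c_k lam_i^k
    out = np.zeros_like(A, dtype=complex)
    P = np.eye(n, dtype=complex)
    for ck in c:                            # accumulate c_0 I + c_1 A + c_2 A^2 + ...
        out = out + ck * P
        P = P @ A
    return out

A = np.diag([1.0, 2.0])
print(np.allclose(matfunc_distinct(A, np.exp), np.diag([np.e, np.e ** 2])))  # True
```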
More recently, expressions have appeared for other groups, like the Lorentz group SO(3, 1), O(4, 2) and
SU(2, 2) as well as GL(n, R). The group O(4, 2) is the conformal group of spacetime, SU(2, 2) its simply
connected cover (to be precise, the simply connected cover of the connected component SO+(4, 2) of
O(4, 2)). The expressions obtained apply to the standard representation of these groups. They require
knowledge of (some of) the eigenvalues of the matrix to exponentiate. For SU(2) (and hence for SO(3)),
closed expressions have recently been obtained for all irreducible representations, i.e. of any spin.
COSETS
In mathematics, if G is a group, H is a subgroup of G, and g is an element of G, then
gH = {gh : h ∈ H} is the left coset of H in G with respect to g, and Hg = {hg : h ∈ H}
is the corresponding right coset.
Cosets are a basic tool in the study of groups; for example they play a central role in
Lagrange's theorem that states that for any finite group G, the number of elements of every
subgroup H of G divides the number of elements of G.
The element g belongs to the coset gH. If x belongs to gH then xH=gH. Thus every
element of G belongs to exactly one left coset of the subgroup H. Elements g and x
belong to the same left coset of H if and only if g^−1·x belongs to H. Similar statements
apply to right cosets.
Integers :
Let G be the additive group of the integers, Z = ({..., −2, −1, 0, 1, 2, ...}, +) and H the
subgroup (mZ, +) = ({..., −2m, −m, 0, m, 2m, ...}, +) where m is a positive integer. Then the
cosets of H in G are the m sets mZ, mZ + 1, ..., mZ + (m − 1), where mZ + a = {..., −2m + a,
−m + a, a, m + a, 2m + a, ...}. There are no more than m cosets, because mZ + m = m(Z + 1) =
mZ. The coset (mZ + a, +) is the congruence class of a modulo m.
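A small sketch of this partition in plain Python (coset is a hypothetical helper that lists the elements of mZ + a inside a finite window of integers):

```python
def coset(m, a, lo=-10, hi=10):
    """Elements of the coset mZ + a that lie in the window [lo, hi]."""
    return {x for x in range(lo, hi + 1) if (x - a) % m == 0}

# The cosets of 3Z partition the integers: every x falls in exactly one of them,
# and mZ + m wraps back around to mZ itself.
print(sorted(coset(3, 1)))         # [-8, -5, -2, 1, 4, 7, 10]
print(coset(3, 3) == coset(3, 0))  # True
```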
Vectors :
Another example of a coset comes from the theory of vector spaces. The elements
(vectors) of a vector space form an abelian group under vector addition. It is not hard to
show that subspaces of a vector space are subgroups of this group. For a vector space V, a
subspace W, and a fixed vector a in V, the sets a + W = {a + w : w ∈ W} are called affine
subspaces, and are cosets
(both left and right, since the group is abelian). In terms of geometric vectors, these
affine subspaces are all the "lines" or "planes" parallel to the subspace, which is a line or
plane going through the origin.
Applications :
Cosets of Q in R are used in the construction of Vitali sets, a type of non-measurable set.
Cosets are important in computational group theory. For example, Thistlethwaite's algorithm for
solving Rubik's Cube relies heavily on cosets.
Coset leaders are used in decoding received data in linear error-correcting codes.
In coding theory, a linear code is an error-correcting code for which any linear
combination of codewords is also a code word. Linear codes are traditionally
partitioned into block codes and convolutional codes, although turbo codes
can be seen as a hybrid of these two types. Linear codes allow for more
efficient encoding and decoding algorithms than other codes (cf. syndrome
decoding).
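The coset-leader idea can be sketched for the [7,4] Hamming code (a sketch assuming NumPy and the parity-check matrix below; each syndrome labels one coset of the code, and the minimum-weight member of that coset is its coset leader):

```python
import itertools
import numpy as np

# Parity-check matrix of the [7,4] Hamming code (columns are 1..7 in binary).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(word):
    return tuple(H @ word % 2)

# Build the coset-leader table: for each syndrome (i.e., each coset of the code),
# keep the first error pattern found when searching by increasing Hamming weight.
leaders = {}
for w in range(8):
    for ones in itertools.combinations(range(7), w):
        e = np.zeros(7, dtype=int)
        e[list(ones)] = 1
        leaders.setdefault(syndrome(e), e)

def decode(received):
    """Subtract the coset leader of received's coset to recover a nearest codeword."""
    return (received - leaders[syndrome(received)]) % 2

r = np.zeros(7, dtype=int)
r[2] = 1                  # all-zero codeword with one bit flipped
print(decode(r))          # [0 0 0 0 0 0 0]
```

Because every nonzero column of H is distinct, each single-bit error lands in its own coset and is corrected exactly.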
LAGRANGE'S THEOREM
Lagrange's theorem, in the mathematics of group theory, states that for any finite group G,
the order (number of elements) of every subgroup H of G divides the order of G. The
theorem is named after Joseph-Louis Lagrange.
This can be shown using the concept of left cosets of H in G. The left cosets are the
equivalence classes of a certain equivalence relation on G and therefore form a partition of
G. Specifically, x and y in G are related if and only if there exists h in H such that x = yh. If
we can show that all cosets of H have the same number of elements, then each coset of
H has precisely |H| elements. We are then done since the order of H times the number
of cosets is equal to the number of elements in G, thereby proving that the order of H
divides the order of G.
To show any two left cosets have the same cardinality, it suffices to demonstrate a
bijection between them. Suppose aH and bH are two left cosets of H. Then define a map
f : aH → bH by setting f(x) = ba^−1·x. This map is bijective because it has an inverse given
by f^−1(y) = ab^−1·y.
This proof also shows that the quotient of the orders |G| / |H| is equal to the index [G : H] (the
number of left cosets of H in G). If we allow G and H to be infinite, the statement can be written
as |G| = [G : H] · |H|, interpreted as an equation of cardinal numbers.
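A quick finite check of |G| = [G : H] · |H| in plain Python (using G = Z_12 under addition mod 12 and the subgroup H = {0, 4, 8} as a small worked example):

```python
# G = Z_12 under addition mod 12; H = {0, 4, 8} is a subgroup of order 3.
G = set(range(12))
H = {0, 4, 8}

# Each left coset g + H; distinct cosets partition G.
cosets = {frozenset((g + h) % 12 for h in H) for g in G}

print(len(cosets))                     # 4 cosets, so [G : H] = 4
print(len(G) == len(cosets) * len(H))  # True: 12 = 4 * 3
```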
Applications
A consequence of the theorem is that the order of any element a of a finite group (i.e. the
smallest positive integer k with a^k = e, where e is the identity element of the
group) divides the order of that group, since the order of a is equal to the order of the
cyclic subgroup generated by a. If the group has n elements, it follows that a^n = e for
every element a.
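This can be checked exhaustively for a small group, say Z_12 under addition (a plain-Python sketch; order is a hypothetical helper computing the order of an element):

```python
def order(a, n):
    """Order of the element a in the additive group Z_n."""
    k, x = 1, a % n
    while x != 0:
        x = (x + a) % n
        k += 1
    return k

# Every element's order divides the group order, and adding any a to itself
# n = 12 times gives the identity 0 (the additive form of a^n = e).
print(all(12 % order(a, 12) == 0 for a in range(12)))  # True
print(all((a * 12) % 12 == 0 for a in range(12)))      # True
```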
This can be used to prove Fermat's little theorem and its generalization, Euler's theorem.
These special cases were known long before the general theorem was proved.
Fermat's little theorem states that if p is a prime number, then for any integer
a, the number a^p − a is an integer multiple of p. In the notation of modular
arithmetic, this is expressed as:
a^p ≡ a (mod p)
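A brute-force check for one small prime (plain Python, using the built-in three-argument pow for modular exponentiation):

```python
p = 7  # a small prime chosen for the check
# a^p ≡ a (mod p) for every integer a, positive or negative
print(all(pow(a, p, p) == a % p for a in range(-20, 21)))  # True
```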
The theorem also shows that any group of prime order is cyclic and simple. This in turn
can be used to prove Wilson's theorem, that if p is prime then p is a factor of
(p − 1)! + 1, i.e.
(p − 1)! ≡ −1 (mod p)
Lagrange's theorem can also be used to show that there are infinitely many primes: if
there were a largest prime p, then any prime divisor q of the Mersenne number 2^p − 1
would be such that the order of 2 in the multiplicative group (Z/qZ)* divides the order
of that group, which is q − 1. Since this order is exactly p (because 2^p ≡ 1 (mod q) and
p is prime), p divides q − 1. Hence p < q, contradicting the assumption that p is the
largest prime.