## Note

Other website/blog owners: please do not copy or republish this material without legal permission of the publishers.

## Disclaimer

EasyEngineering is not the original publisher of this book/material. This e-book/material has been collected from other sources on the net.


## Manual for K-Notes

Why K-Notes?

Towards the end of preparation, a student no longer has the time to revise all the chapters from his/her class notes or standard textbooks. This is why K-Notes are specifically intended for quick revision and should not be considered comprehensive study material.

## What are K-Notes?

A notebook of 40 pages or less for each subject, containing all concepts covered in the GATE curriculum in a concise manner to aid a student in the final stages of his/her preparation. It is highly useful both for students and for working professionals who are preparing for GATE, as it comes in handy while travelling long distances.

When do I start using K-Notes?

It is highly recommended to use K-Notes in the last 2 months before the GATE exam (November end onwards).

How do I use K-Notes?

Once you finish the entire K-Notes for a particular subject, you should practice the respective Subject Test / Mixed Question Bag containing questions from all the chapters to make the best use of it.

LINEAR ALGEBRA
MATRICES
A matrix is a rectangular array of numbers (or functions) enclosed in brackets. These numbers (or functions) are called the entries or elements of the matrix.

Example: $\begin{bmatrix} 2 & 0.4 & 8 \\ 5 & -32 & 0 \end{bmatrix}$, order = 2 × 3; 2 = no. of rows, 3 = no. of columns

## Special Type of Matrices

1. Square Matrix
An m × n matrix is called a square matrix if m = n, i.e., no. of rows = no. of columns.
The elements $a_{ij}$ with i = j, i.e., $a_{11}, a_{22}, \ldots$, are called diagonal elements.

Example: $\begin{bmatrix} 1 & 2 \\ 4 & 5 \end{bmatrix}$
2. Diagonal Matrix
A square matrix in which all non-diagonal elements are zero and diagonal elements may or
may not be zero.
1 0 
Example:  
0 5 
gin
Properties ee
a. diag[x, y, z] + diag[p, q, r] = diag[x + p, y + q, z + r]
b. diag[x, y, z] × diag[p, q, r] = diag[xp, yq, zr]
c. $\left(\text{diag}[x, y, z]\right)^{-1} = \text{diag}\left[\dfrac{1}{x}, \dfrac{1}{y}, \dfrac{1}{z}\right]$
d. $\left(\text{diag}[x, y, z]\right)^{T} = \text{diag}[x, y, z]$
e. $\left(\text{diag}[x, y, z]\right)^{n} = \text{diag}[x^n, y^n, z^n]$
f. Eigenvalues of diag[x, y, z] = x, y & z
g. Determinant of diag[x, y, z] = xyz

3. Scalar Matrix
A diagonal matrix in which all diagonal elements are equal.

4. Identity Matrix
A diagonal matrix whose diagonal elements are all 1. Denoted by I.

Properties
a. AI = IA = A
b. $I^n = I$
c. $I^{-1} = I$
d. det(I) = 1

5. Null matrix
An m × n matrix whose elements are all zero. Denoted by O.

Properties:
a. A + O = O + A = A
b. A + (−A) = O

6. Upper Triangular Matrix
A square matrix in which all elements below the main diagonal are zero.
3 4 5 
Example: 0 6 7 
0 0 9 
ee rin
7. Lower Triangular Matrix
A square matrix in which all elements above the main diagonal are zero.

Example: $\begin{bmatrix} 3 & 0 & 0 \\ 4 & 6 & 0 \\ 5 & 7 & 9 \end{bmatrix}$

8. Idempotent Matrix
A matrix is called idempotent if $A^2 = A$.

Example: $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$

9. Involutory Matrix
A matrix is called involutory if $A^2 = I$.

Matrix Equality
Two matrices $A_{m \times n}$ and $B_{p \times q}$ are equal if

m = p and n = q, i.e., both have the same size, and
$a_{ij} = b_{ij}$ for all values of i & j.

Addition of Matrices
For addition to be performed, the size of both matrices should be the same.
If [C] = [A] + [B], then $c_{ij} = a_{ij} + b_{ij}$,
i.e., elements in the same position in the two matrices are added.

Subtraction of Matrices
[C] = [A] – [B]
= [A] + [–B]
The difference is obtained by subtracting each element of B from the corresponding element of A.

Hence here also, the matrices should be of the same size.

Scalar Multiplication
The product of an m × n matrix $A = [a_{jk}]$ and a scalar c, written cA, is the m × n matrix $cA = [ca_{jk}]$ obtained by multiplying each entry in A by c.
Multiplication of two matrices
Let $A_{m \times n}$ and $B_{p \times q}$ be two matrices and C = AB; for the multiplication to be defined, n = p.

$C_{ik} = \sum_{j=1}^{n} a_{ij} b_{jk}$

Properties
 If AB exists, then BA does not necessarily exist.
Example: $A_{3 \times 4}$, $B_{4 \times 5}$; AB exists but BA does not, since 5 ≠ 3.
So matrix multiplication is not commutative: in general AB ≠ BA.
 Matrix multiplication is associative: A(BC) = (AB)C.
 Matrix multiplication is distributive with respect to matrix addition:
A(B + C) = AB + AC
 If AB = AC ⇒ B = C (if A is non-singular)
BA = CA ⇒ B = C (if A is non-singular)
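These properties are easy to check numerically; a minimal sketch using numpy, with the matrices chosen purely for illustration:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 1]])
C = np.array([[2, 0], [1, 3]])

# not commutative: AB != BA in general
print(np.array_equal(A @ B, B @ A))              # False for this pair

# associative: A(BC) = (AB)C
print(np.array_equal(A @ (B @ C), (A @ B) @ C))  # True

# distributive over addition: A(B + C) = AB + AC
print(np.array_equal(A @ (B + C), A @ B + A @ C))  # True
```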

Transpose of a matrix
If we interchange the rows and columns of a matrix, we obtain its transpose.

e.g., $A = \begin{bmatrix} 1 & 3 \\ 2 & 4 \\ 6 & 5 \end{bmatrix}$; $A^T = \begin{bmatrix} 1 & 2 & 6 \\ 3 & 4 & 5 \end{bmatrix}$

Conjugate of a matrix
The matrix $\bar{A}$ obtained by replacing each element of a matrix by its complex conjugate.

Properties
a. $\overline{\left(\bar{A}\right)} = A$
b. $\overline{A + B} = \bar{A} + \bar{B}$
c. $\overline{kA} = \bar{k}\,\bar{A}$
d. $\overline{AB} = \bar{A}\,\bar{B}$
Transposed conjugate of a matrix
The transpose of the conjugate of a matrix is called the transposed conjugate. It is represented by $A^\theta$.

Properties
a. $\left(A^\theta\right)^\theta = A$
b. $(A + B)^\theta = A^\theta + B^\theta$
c. $(kA)^\theta = \bar{k}\,A^\theta$
d. $(AB)^\theta = B^\theta A^\theta$

Trace of matrix
Trace of a matrix is sum of all diagonal elements of the matrix.

## Classification of real Matrix

a. Symmetric matrix: $A^T = A$
b. Skew-symmetric matrix: $A^T = -A$
c. Orthogonal matrix: $A^T = A^{-1}$; $AA^T = I$

Note:
a. If A & B are symmetric, then (A + B) & (A − B) are also symmetric.
b. For any matrix A, $AA^T$ is always symmetric.
c. For any matrix A, $\dfrac{A + A^T}{2}$ is symmetric & $\dfrac{A - A^T}{2}$ is skew-symmetric.
d. For orthogonal matrices, $|A| = \pm 1$

## Classification of complex Matrices

a. Hermitian matrix: $A = A^\theta$
b. Skew-Hermitian matrix: $A^\theta = -A$
c. Unitary matrix: $A^\theta = A^{-1}$; $AA^\theta = I$
Determinants
Determinants are only defined for square matrices.
For a 2 × 2 matrix:

$\Delta = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}$

## Minors & co-factors

If $\Delta = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$

Minor of element $a_{21}$: $M_{21} = \begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix}$

Cofactor of an element $a_{ij}$: $(-1)^{i+j} M_{ij}$

To obtain the cofactor matrix, we replace each element by its cofactor.

 Determinant
Suppose we need to calculate a 3 × 3 determinant:

$\Delta = \sum_{j=1}^{3} a_{1j}\,\mathrm{cof}(a_{1j}) = \sum_{j=1}^{3} a_{2j}\,\mathrm{cof}(a_{2j}) = \sum_{j=1}^{3} a_{3j}\,\mathrm{cof}(a_{3j})$

We can calculate the determinant by expanding along any row of the matrix.
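The cofactor expansion translates directly into a short recursive routine. This is a teaching sketch (cofactor expansion is O(n!), far slower than Gaussian elimination), and the function name is illustrative:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        # cofactor sign is (-1)^(0+j)
        total += (-1) ** j * M[0][j] * det(minor)
    return total

print(det([[2, 0.4, 8], [5, -32, 0], [1, 0, 1]]))  # 190.0
```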

Properties
 The value of the determinant is invariant under transposition, i.e., $|A^T| = |A|$.
 If any row or column is completely zero, then $|A| = 0$.
 If two rows or columns are interchanged, the value of the determinant is multiplied by −1.
 If one row or column of a matrix is multiplied by k, the determinant also becomes k times.
 If A is a matrix of order n × n, then $|kA| = k^n|A|$.
 The value of the determinant is invariant under row or column transformations of the form $R_i \to R_i + kR_j$.
 $|AB| = |A| \cdot |B|$
 $|A^n| = |A|^n$
 $|A^{-1}| = \dfrac{1}{|A|}$

## Adjoint of a Square Matrix

$\text{Adj}(A) = \left[\mathrm{cof}(A)\right]^T$

Inverse of a matrix
The inverse only exists for square matrices with a non-zero determinant.

$A^{-1} = \dfrac{\text{Adj}(A)}{|A|}$
Properties

a. $AA^{-1} = A^{-1}A = I$
b. $(AB)^{-1} = B^{-1}A^{-1}$
c. $(ABC)^{-1} = C^{-1}B^{-1}A^{-1}$
d. $\left(A^T\right)^{-1} = \left(A^{-1}\right)^T$
e. The inverse of a 2 × 2 matrix should be remembered:

$\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \dfrac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$

I. Divide by the determinant.
II. Interchange the diagonal elements.
III. Take the negative of the off-diagonal elements.
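The three-step recipe can be written out directly; a small sketch (the function name is illustrative):

```python
def inverse2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the remembered formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    # interchange diagonal, negate off-diagonal, divide by determinant
    return [[d / det, -b / det], [-c / det, a / det]]

print(inverse2x2(1, 2, 3, 4))  # [[-2.0, 1.0], [1.5, -0.5]]
```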

rin
Rank of a Matrix
a. Rank is defined for all matrices, not necessarily a square matrix. g.n
b. If A is a matrix of order m × n,
then Rank (A) ≤ min (m, n)
c. A number r is said to be rank of matrix A, if and only if
et
 There is at least one square sub-matrix of A of order ‘r’ whose determinant is
non-zero.
 If there is a sub-matrix of order (r + 1), then determinant of such sub-matrix
should be 0.

## Linearly Independent and Dependent

Let X1 and X2 be non-zero vectors.
 If X1 = kX2 (or X2 = kX1) for some scalar k, then X1, X2 are said to be L.D. (linearly dependent) vectors.
 If X1 ≠ kX2 (and X2 ≠ kX1) for every scalar k, then X1, X2 are said to be L.I. (linearly independent) vectors.

Note
Let X1, X2, …, Xn be n vectors of matrix A.
 If rank(A) = no. of vectors, then the vectors X1, X2, …, Xn are L.I.
 If rank(A) < no. of vectors, then the vectors X1, X2, …, Xn are L.D.
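The rank test for linear independence can be sketched with numpy's `matrix_rank` (the vectors here are illustrative):

```python
import numpy as np

X1, X2, X3 = [1, 2, 3], [2, 4, 6], [1, 0, 1]

# stack the vectors as rows and compare rank with the number of vectors
A = np.array([X1, X2])            # X2 = 2*X1, so rank < 2 -> dependent
print(np.linalg.matrix_rank(A))   # 1

B = np.array([X1, X3])            # no scalar k relates them -> independent
print(np.linalg.matrix_rank(B))   # 2
```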

System of Linear Equations
There are two types of linear equations.

Homogeneous equations
$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = 0$
$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = 0$
⋮
$a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = 0$

This is a system of m homogeneous equations in n variables.

Let $A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}_{m \times n}$; $x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}_{n \times 1}$; $0 = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}_{m \times 1}$

This system can be represented as AX = 0.

Important Facts
 Inconsistent system
Not possible for a homogeneous system, as the trivial solution $(x_1, x_2, \ldots, x_n)^T = (0, 0, \ldots, 0)^T$ always exists.

 Consistent unique solution
If rank(A) = r and r = n, then |A| ≠ 0 and A is non-singular, so only the trivial solution exists.

 Consistent infinite solutions
If r < n, the no. of independent equations < no. of variables, so the values of (n − r) variables can be assumed in order to compute the remaining r variables.

 Non-Homogeneous Equations
$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1$
$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2$
⋮
$a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m$

This is a system of m non-homogeneous equations in n variables.

$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}_{m \times n}$; $X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$; $B = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}_{m \times 1}$

Augmented matrix: $[A \mid B] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{bmatrix}$
Conditions
 Inconsistency
If r(A) ≠ r(A | B), the system is inconsistent.

 Consistent unique solution
If r(A) = r(A | B) = n, we have a consistent unique solution.

 Consistent infinite solutions
If r(A) = r(A | B) = r and r < n, we have infinitely many solutions.

The solution of a system of equations can be obtained by using the Gauss elimination method. (Not required for GATE)

Note
 Let $A_{n \times n}$ and rank(A) = r; then the no. of L.I. solutions of Ax = 0 is n − r.

## Eigenvalues & Eigenvectors

If A is an n × n square matrix, then the equation
Ax = λx
is called the eigenvalue problem, where λ is called an eigenvalue of A and x is called an eigenvector of A.

Characteristic polynomial: $|A - \lambda I| = \begin{vmatrix} a_{11} - \lambda & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} - \lambda & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} - \lambda \end{vmatrix}$

Characteristic equation: $|A - \lambda I| = 0$

The roots of the characteristic equation are called the characteristic roots or eigenvalues.
To find an eigenvector, we need to solve
$(A - \lambda I)x = 0$
This is a system of homogeneous linear equations.
We substitute each value of λ one by one & calculate the eigenvector corresponding to each eigenvalue.
Important Facts
a. If x is an eigenvector of A corresponding to λ, then kx is also an eigenvector, where k is a non-zero constant.
b. If an n × n matrix has n distinct eigenvalues, we have n linearly independent eigenvectors.
c. Eigenvalues of a Hermitian/symmetric matrix are real.
d. Eigenvalues of a skew-Hermitian/skew-symmetric matrix are purely imaginary or zero.
e. Eigenvalues of a unitary or orthogonal matrix are such that |λ| = 1.
f. If $\lambda_1, \lambda_2, \ldots, \lambda_n$ are eigenvalues of A, then $k\lambda_1, k\lambda_2, \ldots, k\lambda_n$ are eigenvalues of kA.
g. Eigenvalues of $A^{-1}$ are the reciprocals of the eigenvalues of A.
h. If $\lambda_1, \lambda_2, \ldots, \lambda_n$ are eigenvalues of A, then $\dfrac{|A|}{\lambda_1}, \dfrac{|A|}{\lambda_2}, \ldots, \dfrac{|A|}{\lambda_n}$ are eigenvalues of Adj(A).
i. Sum of eigenvalues = Trace(A)
j. Product of eigenvalues = |A|
k. In a triangular or diagonal matrix, the eigenvalues are the diagonal elements.
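Several of these facts can be verified numerically for a concrete matrix; a sketch using numpy (the matrix is illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric, so eigenvalues are real
vals = np.linalg.eigvals(A)

# i. sum of eigenvalues = trace(A)
print(np.isclose(vals.sum(), np.trace(A)))            # True
# j. product of eigenvalues = |A|
print(np.isclose(vals.prod(), np.linalg.det(A)))      # True
# g. eigenvalues of A^-1 are the reciprocals of those of A
inv_vals = np.linalg.eigvals(np.linalg.inv(A))
print(np.allclose(np.sort(inv_vals), np.sort(1 / vals)))  # True
```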

## Cayley–Hamilton Theorem

Every matrix satisfies its own characteristic equation.
e.g., if the characteristic equation is
$C_1\lambda^n + C_2\lambda^{n-1} + \cdots + C_n = 0$
then
$C_1A^n + C_2A^{n-1} + \cdots + C_nI = O$
where I is the identity matrix and O is the null matrix.
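For a 2 × 2 matrix the characteristic equation is λ² − tr(A)λ + |A| = 0, so the theorem can be verified directly; a numpy sketch with an illustrative matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
tr = np.trace(A)
det = np.linalg.det(A)

# Cayley-Hamilton for 2x2: A^2 - tr(A)*A + det(A)*I = O
residual = A @ A - tr * A + det * np.eye(2)
print(np.allclose(residual, np.zeros((2, 2))))  # True
```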

CALCULUS

## Important Series Expansion

a. $(1 + x)^n = \sum_{r=0}^{n} {}^{n}C_r\,x^r$
b. $(1 - x)^{-1} = 1 + x + x^2 + \cdots$
c. $a^x = 1 + x\log a + \dfrac{(x\log a)^2}{2!} + \dfrac{(x\log a)^3}{3!} + \cdots$
d. $\sin x = x - \dfrac{x^3}{3!} + \dfrac{x^5}{5!} - \cdots$
e. $\cos x = 1 - \dfrac{x^2}{2!} + \dfrac{x^4}{4!} - \cdots$
f. $\tan x = x + \dfrac{x^3}{3} + \dfrac{2x^5}{15} + \cdots$
g. $\log(1 + x) = x - \dfrac{x^2}{2} + \dfrac{x^3}{3} - \cdots$, |x| < 1

Important Limits
a. $\lim_{x \to 0} \dfrac{\sin x}{x} = 1$
b. $\lim_{x \to 0} \dfrac{\tan x}{x} = 1$
c. $\lim_{x \to 0} (1 + nx)^{1/x} = e^n$
d. $\lim_{x \to 0} \cos x = 1$
e. $\lim_{x \to 0} (1 + x)^{1/x} = e$
f. $\lim_{x \to \infty} \left(1 + \dfrac{1}{x}\right)^x = e$

L'Hôpital's Rule
If f(x) and g(x) are two functions such that
$\lim_{x \to a} f(x) = 0$ and $\lim_{x \to a} g(x) = 0$,
then
$\lim_{x \to a} \dfrac{f(x)}{g(x)} = \lim_{x \to a} \dfrac{f'(x)}{g'(x)}$
If f'(x) and g'(x) are also zero as x → a, then we can take successive derivatives until this condition is violated.

For continuity, $\lim_{x \to a} f(x) = f(a)$.

For differentiability, $\lim_{h \to 0} \dfrac{f(x_0 + h) - f(x_0)}{h}$ exists and is equal to $f'(x_0)$.

If a function is differentiable at some point then it is continuous at that point, but the converse may not be true.

## Mean Value Theorems

 Rolle's Theorem
If f(x) is continuous in the closed interval a ≤ x ≤ b, f'(x) exists at every point of the open interval a < x < b, and f(a) = f(b), then there exists a point c with a < c < b such that f'(c) = 0.

 Lagrange's Mean Value Theorem
If f(x) is continuous in the closed interval a ≤ x ≤ b and differentiable in the open interval (a, b), i.e., a < x < b, then there exists a point c such that

$f'(c) = \dfrac{f(b) - f(a)}{b - a}$

Differentiation
Properties: (f + g)' = f' + g' ; (f − g)' = f' − g' ; (fg)' = f'g + fg'

Important derivatives
a. $x^n \to nx^{n-1}$
b. $\ln x \to \dfrac{1}{x}$
c. $\log_a x \to (\log_a e)\dfrac{1}{x}$
d. $e^x \to e^x$
e. $a^x \to a^x \log_e a$
f. sin x → cos x
g. cos x → −sin x
h. tan x → sec² x
i. sec x → sec x tan x
j. cosec x → −cosec x cot x
k. cot x → −cosec² x
l. sinh x → cosh x
m. cosh x → sinh x
n. $\sin^{-1} x \to \dfrac{1}{\sqrt{1 - x^2}}$
o. $\cos^{-1} x \to \dfrac{-1}{\sqrt{1 - x^2}}$
p. $\tan^{-1} x \to \dfrac{1}{1 + x^2}$
q. $\mathrm{cosec}^{-1} x \to \dfrac{-1}{x\sqrt{x^2 - 1}}$
r. $\sec^{-1} x \to \dfrac{1}{x\sqrt{x^2 - 1}}$
s. $\cot^{-1} x \to \dfrac{-1}{1 + x^2}$
Increasing & Decreasing Functions
 $f'(x) \ge 0\ \forall x \in (a, b)$: f is increasing on [a, b]
 $f'(x) > 0\ \forall x \in (a, b)$: f is strictly increasing on [a, b]
 $f'(x) \le 0\ \forall x \in (a, b)$: f is decreasing on [a, b]
 $f'(x) < 0\ \forall x \in (a, b)$: f is strictly decreasing on [a, b]

## Maxima & Minima

Local maxima or minima
There is a maximum of f(x) at x = a if f’(a) = 0 and f”(a) is negative.
There is a minimum of f (x) at x = a, if f’(a) = 0 and f” (a) is positive.
To locate a maximum or minimum, we find the point a such that f'(a) = 0 and then decide whether it is a maximum or a minimum by judging the sign of f''(a).

## Global maxima & minima

We first find the local maxima & minima and then calculate the value of f at the boundary points of the given interval, e.g., for [a, b] we find f(a) & f(b) and compare them with the values of the local maxima & minima. The absolute maxima & minima can then be decided.

Taylor & Maclaurin series
 Taylor series

$f(a + h) = f(a) + hf'(a) + \dfrac{h^2}{2!}f''(a) + \cdots$

 Maclaurin series

$f(x) = f(0) + xf'(0) + \dfrac{x^2}{2!}f''(0) + \cdots$
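A Maclaurin partial sum is easy to evaluate numerically; a sketch for sin x (the number of terms is an arbitrary choice):

```python
import math

def sin_maclaurin(x, terms=6):
    """Partial sum of x - x^3/3! + x^5/5! - ... (terms is an arbitrary cut-off)."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

x = 0.5
print(sin_maclaurin(x), math.sin(x))  # the two values agree to ~1e-14
```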

## Partial Derivative

If the derivative of a function of several independent variables is found with respect to any one of them, keeping the others constant, it is said to be a partial derivative.
Homogeneous Function
$a_0 x^n + a_1 x^{n-1}y + a_2 x^{n-2}y^2 + \cdots + a_n y^n$ is a homogeneous function of x & y, of degree n:

$= x^n\left[a_0 + a_1\left(\dfrac{y}{x}\right) + a_2\left(\dfrac{y}{x}\right)^2 + \cdots + a_n\left(\dfrac{y}{x}\right)^n\right]$

Euler's Theorem
If u is a homogeneous function of x & y of degree n, then
$x\dfrac{\partial u}{\partial x} + y\dfrac{\partial u}{\partial y} = nu$

## Maxima & minima of multi-variable functions

Let $r = \dfrac{\partial^2 f}{\partial x^2}$, $s = \dfrac{\partial^2 f}{\partial x \partial y}$, $t = \dfrac{\partial^2 f}{\partial y^2}$, all evaluated at (x, y) = (a, b).

 Maxima: rt > s², r < 0
 Minima: rt > s², r > 0
 Saddle point: rt < s²

Integration
Indefinite integrals are just the opposite of derivatives, and hence the important derivatives must always be remembered.

## Properties of definite integrals

a. $\int_a^b f(x)\,dx = \int_a^b f(t)\,dt$
b. $\int_a^b f(x)\,dx = -\int_b^a f(x)\,dx$
c. $\int_a^b f(x)\,dx = \int_a^c f(x)\,dx + \int_c^b f(x)\,dx$
d. $\int_a^b f(x)\,dx = \int_a^b f(a + b - x)\,dx$
e. $\dfrac{d}{dt}\int_{\alpha(t)}^{\beta(t)} f(x)\,dx = f\left(\beta(t)\right)\beta'(t) - f\left(\alpha(t)\right)\alpha'(t)$
Vectors
Let $a = (a_1, a_2, a_3)$ and $b = (b_1, b_2, b_3)$ be two vectors.

 Addition
$a + b = (a_1 + b_1,\ a_2 + b_2,\ a_3 + b_3)$

 Scalar Multiplication
$ca = (ca_1,\ ca_2,\ ca_3)$

 Unit vector
$\hat{a} = \dfrac{a}{|a|}$

 Dot Product
$a \cdot b = |a||b|\cos\gamma$, where γ is the angle between a & b.
$a \cdot b = a_1b_1 + a_2b_2 + a_3b_3$

Properties
a. $|a \cdot b| \le |a||b|$ (Schwarz inequality)
b. $|a + b| \le |a| + |b|$ (Triangle inequality)
c. $|a + b|^2 + |a - b|^2 = 2\left(|a|^2 + |b|^2\right)$ (Parallelogram equality)

## Vector cross product

$v = a \times b$; $|a \times b| = |a||b|\sin\gamma$

$a \times b = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}$ where $a = (a_1, a_2, a_3)$, $b = (b_1, b_2, b_3)$

 Properties:
a. $a \times b = -(b \times a)$
b. $a \times (b + c) = a \times b + a \times c$
 Scalar Triple Product
$(a, b, c) = a \cdot (b \times c)$

 Vector Triple Product
$a \times (b \times c) = (a \cdot c)\,b - (a \cdot b)\,c$

 Gradient of scalar field
Gradient of a scalar function f(x, y, z):
$\text{grad}\,f = \dfrac{\partial f}{\partial x}\hat{i} + \dfrac{\partial f}{\partial y}\hat{j} + \dfrac{\partial f}{\partial z}\hat{k}$

 Directional derivative
Derivative of a scalar function in the direction of b:
$D_b f = \dfrac{b}{|b|} \cdot \text{grad}\,f$

 Divergence of vector field
$\nabla \cdot v = \dfrac{\partial v_1}{\partial x} + \dfrac{\partial v_2}{\partial y} + \dfrac{\partial v_3}{\partial z}$, where $v = (v_1, v_2, v_3)$

 Curl of vector field
$\text{curl}\,v = \nabla \times v = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z} \\ v_1 & v_2 & v_3 \end{vmatrix}$, $v = (v_1, v_2, v_3)$

Some identities
a. div grad f = $\nabla^2 f = \dfrac{\partial^2 f}{\partial x^2} + \dfrac{\partial^2 f}{\partial y^2} + \dfrac{\partial^2 f}{\partial z^2}$
b. curl grad f = $\nabla \times (\nabla f) = 0$
c. div curl f = $\nabla \cdot (\nabla \times f) = 0$
d. curl curl f = grad div f − $\nabla^2 f$
Line Integral

$\int_C F(r) \cdot dr = \int_a^b F(r(t)) \cdot \dfrac{dr}{dt}\,dt$

Here curve C is parameterized in terms of t; i.e., as t goes from a to b, curve C is traced.

$\int_C F(r) \cdot dr = \int_a^b \left(F_1 x' + F_2 y' + F_3 z'\right)dt$, where $F = (F_1, F_2, F_3)$

Green's Theorem
$\iint_R \left(\dfrac{\partial F_2}{\partial x} - \dfrac{\partial F_1}{\partial y}\right)dx\,dy = \oint_C \left(F_1\,dx + F_2\,dy\right)$

## Gauss Divergence Theorem

$\iiint_T \text{div}\,F\,dV = \iint_S F \cdot \hat{n}\,dA$

where $\hat{n}$ is the outer unit normal vector of S and T is the volume enclosed by S.

Stokes' Theorem
$\iint_S \text{curl}\,F \cdot \hat{n}\,dA = \oint_C F \cdot r'(s)\,ds$

where $\hat{n}$ is the unit normal vector of S and C is the curve which encloses the plane surface S.


DIFFERENTIAL EQUATIONS

The order of a differential equation is the order of the highest derivative appearing in it.
The degree of a differential equation is the degree of the highest-order derivative occurring in it, after the equation is expressed in a form free from radicals & fractions.

 Variable separable
Collect all functions of x & dx on one side and all functions of y & dy on the other side, like f(x) dx = g(y) dy.

Solution: $\int f(x)\,dx = \int g(y)\,dy + c$

 Exact differential equation
An equation of the form
M(x, y) dx + N(x, y) dy = 0
is exact if
$\dfrac{\partial M}{\partial y} = \dfrac{\partial N}{\partial x}$; only then can this method be applied.

The solution is
$\int_{y\ \text{const}} M\,dx + \int (\text{terms of } N \text{ not containing } x)\,dy = c$
 Integrating factors
An equation of the form
P(x, y) dx + Q(x, y) dy = 0
can be reduced to exact form by multiplying both sides by an integrating factor (IF).

If $\dfrac{1}{Q}\left(\dfrac{\partial P}{\partial y} - \dfrac{\partial Q}{\partial x}\right)$ is a function of x alone, say
$R(x) = \dfrac{1}{Q}\left(\dfrac{\partial P}{\partial y} - \dfrac{\partial Q}{\partial x}\right)$,
then the integrating factor is
$IF = \exp\left(\int R(x)\,dx\right)$

Otherwise, if $\dfrac{1}{P}\left(\dfrac{\partial Q}{\partial x} - \dfrac{\partial P}{\partial y}\right)$ is a function of y alone, say
$S(y) = \dfrac{1}{P}\left(\dfrac{\partial Q}{\partial x} - \dfrac{\partial P}{\partial y}\right)$,
then the integrating factor is $IF = \exp\left(\int S(y)\,dy\right)$
 Linear Differential Equations
An equation is linear if it can be written as
$y' + P(x)y = r(x)$
If r(x) = 0, the equation is homogeneous; else (r(x) ≠ 0) it is non-homogeneous.

$y(x) = ce^{-\int P(x)\,dx}$ is the solution of the homogeneous form.

For the non-homogeneous form, let $h = \int P(x)\,dx$; then
$y(x) = e^{-h}\left(\int e^h r\,dx + c\right)$

 Bernoulli's equation
The equation
$\dfrac{dy}{dx} + Py = Qy^n$
where P & Q are functions of x.
Divide both sides of the equation by $y^n$ & put $y^{1-n} = z$:
$\dfrac{dz}{dx} + P(1 - n)z = Q(1 - n)$
This is a linear equation & can be solved easily.
g.n
 Clairaut's equation
An equation of the form y = Px + f(P), where $P = \dfrac{dy}{dx}$, is known as Clairaut's equation.
The solution of this equation is
y = cx + f(c), where c = constant.

## Constant coefficient differential equations

$\dfrac{d^n y}{dx^n} + k_1\dfrac{d^{n-1} y}{dx^{n-1}} + \cdots + k_n y = X$

where X is a function of x only.

a. If $y_1, y_2, \ldots, y_n$ are n independent solutions of the homogeneous equation, then $c_1y_1 + c_2y_2 + \cdots + c_ny_n$ is its complete solution, where $c_1, c_2, \ldots, c_n$ are arbitrary constants.
b. The procedure for finding the solution of an nth-order differential equation involves computing the complementary function (C.F.) and the particular integral (P.I.).
c. The complementary function is the solution of
$\dfrac{d^n y}{dx^n} + k_1\dfrac{d^{n-1} y}{dx^{n-1}} + \cdots + k_n y = 0$
d. The particular integral is a particular solution of
$\dfrac{d^n y}{dx^n} + k_1\dfrac{d^{n-1} y}{dx^{n-1}} + \cdots + k_n y = X$
e. y = C.F. + P.I. is the complete solution.
Finding the complementary function
 Method of differential operator
Replace $\dfrac{d}{dx}$ by D, so that $\dfrac{dy}{dx} = Dy$; similarly replace $\dfrac{d^n}{dx^n}$ by $D^n$, so that $\dfrac{d^n y}{dx^n} = D^n y$.

$\dfrac{d^n y}{dx^n} + k_1\dfrac{d^{n-1} y}{dx^{n-1}} + \cdots + k_n y = 0$ becomes
$\left(D^n + k_1 D^{n-1} + \cdots + k_n\right)y = 0$

Let $m_1, m_2, \ldots, m_n$ be the roots of
$D^n + k_1 D^{n-1} + \cdots + k_n = 0$ .............(i)

## Case I: All roots are real & distinct

$(D - m_1)(D - m_2)\cdots(D - m_n) = 0$ is equivalent to (i)
$y = c_1 e^{m_1 x} + c_2 e^{m_2 x} + \cdots + c_n e^{m_n x}$
is the solution of the differential equation.

Case II: If two roots are real & equal, i.e., $m_1 = m_2 = m$:
$y = (c_1 + c_2 x)e^{mx} + c_3 e^{m_3 x} + \cdots + c_n e^{m_n x}$

Case III: If two roots are complex conjugates, $m_1 = \alpha + j\beta$, $m_2 = \alpha - j\beta$:
$y = e^{\alpha x}\left(c_1'\cos\beta x + c_2'\sin\beta x\right) + \cdots + c_n e^{m_n x}$

## Finding the particular integral

Suppose the differential equation is
$\dfrac{d^n y}{dx^n} + k_1\dfrac{d^{n-1} y}{dx^{n-1}} + \cdots + k_n y = X$

Particular integral:
$PI = y_1\int\dfrac{W_1(x)}{W(x)}\,dx + y_2\int\dfrac{W_2(x)}{W(x)}\,dx + \cdots + y_n\int\dfrac{W_n(x)}{W(x)}\,dx$

where $y_1, y_2, \ldots, y_n$ are solutions of the homogeneous form of the differential equation, and

$W(x) = \begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & & & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{vmatrix}$

$W_i(x)$ is obtained from W(x) by replacing the ith column by all zeroes and a last 1, i.e., by $(0, 0, \ldots, 0, 1)^T$.

Euler–Cauchy Equation
An equation of the form
$x^n\dfrac{d^n y}{dx^n} + k_1 x^{n-1}\dfrac{d^{n-1} y}{dx^{n-1}} + \cdots + k_n y = 0$
is called an Euler–Cauchy equation.

Substitute $y = x^m$; the equation becomes
$\left[m(m-1)\cdots(m-n+1) + k_1\,m(m-1)\cdots(m-n+2) + \cdots + k_n\right]x^m = 0$

The roots of this equation give the solution:

Case I: All roots are real & distinct
$y = c_1 x^{m_1} + c_2 x^{m_2} + \cdots + c_n x^{m_n}$

Case II: Two roots are real & equal, $m_1 = m_2 = m$:
$y = (c_1 + c_2\ln x)\,x^m + c_3 x^{m_3} + \cdots + c_n x^{m_n}$

Case III: Two roots are complex conjugates of each other, $m_1 = \alpha + j\beta$, $m_2 = \alpha - j\beta$:
$y = x^\alpha\left[A\cos(\beta\ln x) + B\sin(\beta\ln x)\right] + c_3 x^{m_3} + \cdots + c_n x^{m_n}$

COMPLEX FUNCTIONS

f  z   ez  e
 x iy 

##  Logarithmic function of complex variable

If ew  z ; then w is logarithmic function of z

ww
log z = w + 2inπ
This logarithm of complex number has infinite numbers of values.
The general value of logarithm is denoted by Log z & the principal value is log z & is

w.E
found from general value by taking n = 0.

 Analytic function
A function f(z) which is single-valued and possesses a unique derivative with respect to z at all points of a region R is called an analytic function.

If u & v are real, single-valued functions of x & y such that $\dfrac{\partial u}{\partial x}, \dfrac{\partial u}{\partial y}, \dfrac{\partial v}{\partial x}, \dfrac{\partial v}{\partial y}$ are continuous throughout a region R, then the Cauchy–Riemann equations
$\dfrac{\partial u}{\partial x} = \dfrac{\partial v}{\partial y}$; $\dfrac{\partial v}{\partial x} = -\dfrac{\partial u}{\partial y}$
are necessary & sufficient conditions for f(z) = u + iv to be analytic in R.

 Line integral of a complex function
$\int_C f(z)\,dz = \int_a^b f(z(t))\,z'(t)\,dt$
where C is a smooth curve represented by z = z(t), with a ≤ t ≤ b.
 Cauchy's Theorem
If f(z) is an analytic function and f'(z) is continuous at each point within and on a closed curve C, then
$\oint_C f(z)\,dz = 0$

 Cauchy's Integral Formula
If f(z) is analytic within & on a closed curve C, and a is any point within C:
$f(a) = \dfrac{1}{2\pi i}\oint_C \dfrac{f(z)}{z - a}\,dz$

$f^{(n)}(a) = \dfrac{n!}{2\pi i}\oint_C \dfrac{f(z)}{(z - a)^{n+1}}\,dz$

## Singularities of an Analytic Function

 Isolated singularity
$f(z) = \sum_{n=-\infty}^{\infty} a_n (z - a)^n$; $a_n = \dfrac{1}{2\pi i}\oint \dfrac{f(t)}{(t - a)^{n+1}}\,dt$
$z = z_0$ is an isolated singularity if there is no other singularity of f(z) in the neighborhood of $z = z_0$.

 Removable singularity
If all the negative powers of (z − a) are zero in the expansion of f(z):
$f(z) = \sum_{n=0}^{\infty} a_n (z - a)^n$
The singularity at z = a can be removed by defining f(z) at z = a such that f(z) is analytic at z = a.

 Poles
If all negative powers of (z − a) after the nth are missing, then z = a is a pole of order n.

 Essential singularity
If the number of negative powers of (z − a) is infinite, then z = a is an essential singularity & cannot be removed.
RESIDUES
If z = a is an isolated singularity of f(z):
$f(z) = a_0 + a_1(z - a) + a_2(z - a)^2 + \cdots + a_{-1}(z - a)^{-1} + a_{-2}(z - a)^{-2} + \cdots$
Then the residue of f(z) at z = a is $a_{-1}$.

Residue Theorem

If f(z) has a pole of order n at z = a:
$\text{Res}\,f(a) = \dfrac{1}{(n-1)!}\left[\dfrac{d^{n-1}}{dz^{n-1}}\left\{(z - a)^n f(z)\right\}\right]_{z=a}$

## Evaluation of Real Integrals

$I = \int_0^{2\pi} F(\cos\theta, \sin\theta)\,d\theta$

$\cos\theta = \dfrac{e^{i\theta} + e^{-i\theta}}{2}$; $\sin\theta = \dfrac{e^{i\theta} - e^{-i\theta}}{2i}$

Assume $z = e^{i\theta}$; then
$\cos\theta = \dfrac{1}{2}\left(z + \dfrac{1}{z}\right)$; $\sin\theta = \dfrac{1}{2i}\left(z - \dfrac{1}{z}\right)$

$I = \oint_C f(z)\,\dfrac{dz}{iz} = 2\pi i\sum_{k=1}^{n}\text{Res}\left[f(z_k)\right]$

The residue is calculated for the function $\dfrac{f(z)}{iz}$, and only at the poles that lie inside the unit circle C.

$\int_{-\infty}^{\infty} f(x)\,dx = 2\pi i\sum\text{Res}\,f(z)$

where the residue is calculated at the poles in the upper half plane, and the poles of f(z) are found by substituting z in place of x in f(x).

## PROBABILITY AND STATISTICS

Types of events
 Complementary events
$E^c = S - E$
The complement of an event E is the set of all outcomes not in E.

 Mutually Exclusive Events
Two events E & F are mutually exclusive iff P(E ∩ F) = 0.

 Collectively Exhaustive Events
Two events E & F are collectively exhaustive iff (E ∪ F) = S, where S is the sample space.

 Independent events
If E & F are two independent events:
P(E ∩ F) = P(E) × P(F)

## De Morgan's Law

$\left(\bigcup_{i=1}^{n} E_i\right)^C = \bigcap_{i=1}^{n} E_i^C$

$\left(\bigcap_{i=1}^{n} E_i\right)^C = \bigcup_{i=1}^{n} E_i^C$
Axioms of Probability
$E_1, E_2, \ldots, E_n$ are possible events & S is the sample space.

a. 0 ≤ P(E) ≤ 1

b. P(S) = 1

c. $P\left(\bigcup_{i=1}^{n} E_i\right) = \sum_{i=1}^{n} P(E_i)$ for mutually exclusive events

## Some important rules of probability

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
P(A ∩ B) = P(A) × P(B | A) = P(B) × P(A | B)
P(A | B) is the conditional probability of A given B.
If A & B are independent events:
P(A ∩ B) = P(A) × P(B)
P(A | B) = P(A)
P(B | A) = P(B)

Total Probability Theorem
If A & B partition the sample space, then for any event E:
P(E) = P(A ∩ E) + P(B ∩ E)
= P(A) × P(E | A) + P(B) × P(E | B)

Bayes' Theorem
$P(A \mid E) = \dfrac{P(A \cap E)}{P(A \cap E) + P(B \cap E)} = \dfrac{P(A)\,P(E \mid A)}{P(A)\,P(E \mid A) + P(B)\,P(E \mid B)}$
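A quick numerical illustration of the two theorems, using hypothetical numbers (two machines A and B producing a defective part E):

```python
# Hypothetical numbers for illustration only.
P_A, P_B = 0.6, 0.4        # P(A), P(B): A and B partition the sample space
P_E_given_A = 0.02         # P(E | A): defect rate of machine A
P_E_given_B = 0.05         # P(E | B): defect rate of machine B

# total probability: P(E) = P(A)P(E|A) + P(B)P(E|B)
P_E = P_A * P_E_given_A + P_B * P_E_given_B

# Bayes' theorem: P(A | E) = P(A)P(E|A) / P(E)
P_A_given_E = P_A * P_E_given_A / P_E
print(P_E, P_A_given_E)  # 0.032 and 0.375
```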

Statistics
 Arithmetic Mean of Raw Data
$\bar{x} = \dfrac{\sum x}{n}$
$\bar{x}$ = arithmetic mean; x = value of observation; n = number of observations

 Arithmetic Mean of grouped data
$\bar{x} = \dfrac{\sum fx}{\sum f}$; f = frequency of each observation

 Median of Raw data
Arrange all the observations in ascending order:
$x_1 \le x_2 \le \cdots \le x_n$
If n is odd, median = $\left(\dfrac{n+1}{2}\right)$th value.
If n is even, median = average of the $\left(\dfrac{n}{2}\right)$th and $\left(\dfrac{n}{2} + 1\right)$th values.

 Mode of Raw data
The most frequently occurring observation in the data.

 Standard Deviation of Raw Data
$\sigma = \dfrac{\sqrt{n\sum x_i^2 - \left(\sum x_i\right)^2}}{n}$
n = number of observations; variance = $\sigma^2$

 Standard deviation of grouped data
$\sigma = \dfrac{\sqrt{N\sum f_i x_i^2 - \left(\sum f_i x_i\right)^2}}{N}$
$f_i$ = frequency of each observation; N = total number of observations; variance = $\sigma^2$

 Coefficient of variation: $CV = \dfrac{\sigma}{\bar{x}}$
 Properties of discrete distributions
a. $\sum P(x) = 1$
b. $E[X] = \sum x\,P(x)$
c. $V(x) = E[x^2] - \left(E[x]\right)^2$

 Properties of continuous distributions
 $\int_{-\infty}^{\infty} f(x)\,dx = 1$
 $F(x) = \int_{-\infty}^{x} f(t)\,dt$ = cumulative distribution
 $E[x] = \int_{-\infty}^{\infty} x f(x)\,dx$ = expected value of x
 $V(x) = E[x^2] - \left(E[x]\right)^2$ = variance of x

 Properties of Expectation & Variance
E(ax + b) = aE(x) + b
$V(ax + b) = a^2V(x)$
$E(ax_1 + bx_2) = aE(x_1) + bE(x_2)$
$V(ax_1 + bx_2) = a^2V(x_1) + b^2V(x_2)$ (for independent $x_1, x_2$)
cov(x, y) = E(xy) − E(x)E(y)

Binomial Distribution
no. of trials = n
probability of success = P
probability of failure = (1 − P)

$P(X = x) = {}^{n}C_x\,P^x(1 - P)^{n-x}$

Mean = E[X] = nP
Variance = V[X] = nP(1 − P)

Poisson Distribution
A random variable X, having possible values 0, 1, 2, 3, …, is a Poisson variable if
$P(X = x) = \dfrac{e^{-\lambda}\lambda^x}{x!}$
Mean = E(x) = λ
Variance = V(x) = λ
Continuous Distributions

Uniform Distribution
$f(x) = \begin{cases} \dfrac{1}{b-a} & \text{if } a \le x \le b \\ 0 & \text{otherwise} \end{cases}$
Mean = E(x) = $\dfrac{b + a}{2}$
Variance = V(x) = $\dfrac{(b - a)^2}{12}$

Exponential Distribution
$f(x) = \begin{cases} \lambda e^{-\lambda x} & \text{if } x \ge 0 \\ 0 & \text{if } x < 0 \end{cases}$
Mean = E(x) = $\dfrac{1}{\lambda}$
Variance = V(x) = $\dfrac{1}{\lambda^2}$

Normal Distribution
$f(x) = \dfrac{1}{\sigma\sqrt{2\pi}}\exp\left[-\dfrac{(x - \mu)^2}{2\sigma^2}\right]$, −∞ < x < ∞
Mean = E(x) = μ
Variance = V(x) = $\sigma^2$

Coefficient of correlation

$\rho = \dfrac{\text{cov}(x, y)}{\sqrt{\text{var}(x)\,\text{var}(y)}}$

x & y are linearly related if ρ = ±1
x & y are uncorrelated if ρ = 0

Regression lines
$(x - \bar{x}) = b_{xy}(y - \bar{y})$
$(y - \bar{y}) = b_{yx}(x - \bar{x})$
where $\bar{x}$ & $\bar{y}$ are the mean values of x & y respectively.

$b_{xy} = \dfrac{\text{cov}(x, y)}{\text{var}(y)}$; $b_{yx} = \dfrac{\text{cov}(x, y)}{\text{var}(x)}$

$\rho = \pm\sqrt{b_{xy}\,b_{yx}}$

NUMERICAL METHODS

## Numerical solution of algebraic equations

 Descartes' Rule of Signs:
An equation f(x) = 0 cannot have more positive roots than the number of sign changes in f(x), and cannot have more negative roots than the number of sign changes in f(−x).

 Bisection Method
If a function f(x) is continuous between a & b, and f(a) & f(b) are of opposite sign, then there exists at least one root of f(x) between a & b.

Since the root lies between a & b, we assume the root $x_0 = \dfrac{a + b}{2}$.

If $f(x_0) = 0$, $x_0$ is the root.
Else, if $f(x_0)$ has the same sign as f(a), then the root lies between $x_0$ & b and we assume $x_1 = \dfrac{x_0 + b}{2}$, and follow the same procedure; otherwise, if $f(x_0)$ has the same sign as f(b), then the root lies between a & $x_0$ and we assume $x_1 = \dfrac{a + x_0}{2}$.

We keep on doing this until $|f(x_n)| \le \epsilon$, i.e., $f(x_n)$ is close to zero.
No. of steps required to achieve an accuracy ε:

$n \ge \dfrac{\log_e\left(\dfrac{b - a}{\epsilon}\right)}{\log_e 2}$
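The procedure above can be sketched directly in code (the tolerance and test function are illustrative):

```python
def bisection(f, a, b, eps=1e-8):
    """Halve [a, b] until |f(midpoint)| <= eps; requires f(a)*f(b) < 0."""
    assert f(a) * f(b) < 0, "f(a) and f(b) must have opposite signs"
    while True:
        x0 = (a + b) / 2
        if abs(f(x0)) <= eps:
            return x0
        if f(x0) * f(a) > 0:    # same sign as f(a): root lies between x0 and b
            a = x0
        else:                   # same sign as f(b): root lies between a and x0
            b = x0

root = bisection(lambda x: x ** 2 - 2, 1, 2)
print(root)  # ~1.4142135 (the square root of 2)
```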

 Regula-Falsi Method
This method is similar to the bisection method: we assume two values $x_0$ & $x_1$ such that $f(x_0)f(x_1) < 0$.

$x_2 = \dfrac{f(x_1)x_0 - f(x_0)x_1}{f(x_1) - f(x_0)}$

If $f(x_2) = 0$, then $x_2$ is the root; stop the process.

If $f(x_0)f(x_2) < 0$, the root lies between $x_0$ & $x_2$:
$x_3 = \dfrac{f(x_2)x_0 - f(x_0)x_2}{f(x_2) - f(x_0)}$

Otherwise the root lies between $x_2$ & $x_1$:
$x_3 = \dfrac{f(x_1)x_2 - f(x_2)x_1}{f(x_1) - f(x_2)}$

Secant Method
In the secant method we remove the condition $f(x_0)f(x_1) < 0$; the method therefore gives no guarantee that a root exists in the given interval, so it is called an unreliable method.

$x_2 = \dfrac{f(x_1)x_0 - f(x_0)x_1}{f(x_1) - f(x_0)}$

and to compute $x_3$, each index is advanced by one:

$x_3 = \dfrac{f(x_2)x_1 - f(x_1)x_2}{f(x_2) - f(x_1)}$
 Newton-Raphson Method
$x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)}$

Note: Since the Newton-Raphson iteration has quadratic convergence, f''(x) must exist for this formula to apply.

Order of convergence
 Bisection = linear
 Regula-Falsi = linear
 Secant = superlinear
 Newton-Raphson = quadratic
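The iteration can be sketched in a few lines (the stopping rule and starting point are illustrative choices):

```python
def newton_raphson(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:    # illustrative stopping rule
            return x_new
        x = x_new
    return x

# root of f(x) = x^2 - 2, starting from x0 = 1.5
root = newton_raphson(lambda x: x ** 2 - 2, lambda x: 2 * x, 1.5)
print(root)  # ~1.4142135623730951
```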

 Numerical Integration

Trapezoidal Rule
$\int_a^b f(x)\,dx$ can be calculated as follows.
Divide the interval (a, b) into n sub-intervals such that the width of each interval is
$h = \dfrac{b - a}{n}$
We then have (n + 1) points at the edges of the intervals, $(x_0, x_1, x_2, \ldots, x_n)$, with
$y_0 = f(x_0),\ y_1 = f(x_1),\ \ldots,\ y_n = f(x_n)$

$\int_a^b f(x)\,dx \approx \dfrac{h}{2}\left[y_0 + 2\left(y_1 + y_2 + \cdots + y_{n-1}\right) + y_n\right]$

Simpson's 1/3rd Rule
Here the number of intervals should be even.

$\int_a^b f(x)\,dx \approx \dfrac{h}{3}\left[y_0 + 4\left(y_1 + y_3 + y_5 + \cdots + y_{n-1}\right) + 2\left(y_2 + y_4 + \cdots + y_{n-2}\right) + y_n\right]$

Simpson's 3/8th Rule
Here the number of intervals should be a multiple of 3.

$\int_a^b f(x)\,dx \approx \dfrac{3h}{8}\left[y_0 + 3\left(y_1 + y_2 + y_4 + y_5 + \cdots + y_{n-1}\right) + 2\left(y_3 + y_6 + y_9 + \cdots + y_{n-3}\right) + y_n\right]$

Truncation error
Trapezoidal Rule: $T_{bound} = \dfrac{(b-a)}{12}h^2\max\left|f''(\xi)\right|$ and order of error = 2

Simpson's 1/3 Rule: $T_{bound} = \dfrac{(b-a)}{180}h^4\max\left|f^{(iv)}(\xi)\right|$ and order of error = 4

Simpson's 3/8 Rule: $T_{bound} = \dfrac{(b-a)}{80}h^4\max\left|f^{(iv)}(\xi)\right|$ and order of error = 4

where $x_0 \le \xi \le x_n$.

Note: If the truncation error involves the nth-order derivative, then the rule gives exact results when integrating polynomials up to degree (n − 1).

Numerical solution of Differential equations

## Euler's Method

$\dfrac{dy}{dx} = f(x, y)$

To solve a differential equation by a numerical method, we define a step size h.
We can calculate the value of y at $x_0 + h,\ x_0 + 2h,\ \ldots,\ x_0 + nh$, and not at any intermediate points.

$y_{i+1} = y_i + h\,f(x_i, y_i)$
$y_i = y(x_i)$; $y_{i+1} = y(x_{i+1})$; $x_{i+1} = x_i + h$

Modified Euler's Method (Heun's method)
$y_1 = y_0 + \dfrac{h}{2}\left[f(x_0, y_0) + f\left(x_0 + h,\ y_0 + h\,f(x_0, y_0)\right)\right]$

## Runge–Kutta Method (fourth order)

$y_1 = y_0 + k$
$k = \dfrac{1}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)$
$k_1 = h\,f(x_0, y_0)$
$k_2 = h\,f\left(x_0 + \dfrac{h}{2},\ y_0 + \dfrac{k_1}{2}\right)$
$k_3 = h\,f\left(x_0 + \dfrac{h}{2},\ y_0 + \dfrac{k_2}{2}\right)$
$k_4 = h\,f\left(x_0 + h,\ y_0 + k_3\right)$
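Both methods can be sketched and compared on dy/dx = y, y(0) = 1, whose exact solution is eˣ (the step size and interval are illustrative):

```python
import math

def euler_step(f, x, y, h):
    return y + h * f(x, y)

def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: y          # dy/dx = y with y(0) = 1; exact solution y = e^x
h, steps = 0.1, 10
ye = yr = 1.0
for i in range(steps):
    ye = euler_step(f, i * h, ye, h)
    yr = rk4_step(f, i * h, yr, h)

print(ye, yr, math.exp(1.0))  # RK4 lands far closer to e ~ 2.71828 than Euler
```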
