
1. Matrices
04 September 2021 09:21

Equality of Matrices

Two matrices A = [aij] and B = [bij] are equal, written A = B, if and only if they have the same size and the corresponding entries are equal, that is, a11 = b11, a12 = b12, and so on. Matrices that are not equal are called different. Thus, matrices of different sizes are always different.

Addition of Matrices

The addition of two matrices A = [aij] and B = [bij] of the same size is written as A + B and has the entries aij + bij obtained by adding the corresponding
entries of A and B. Matrices of different sizes cannot be added.

Scalar Multiplication (Multiplication by a number)

The product of any matrix A = [aij] and any scalar c (number c) is written cA = [caij] and is the matrix obtained by multiplying each entry of A by c.

Here (-1)A is simply written -A and is called the negative of A. Similarly, (-k)A is written -kA. Also, A + (-B) is written A - B and is called the difference of A and B (which must have the same size!).

 Rules of Matrix Addition

(a) A + B = B + A
(b) (A + B) + C = A + (B + C)
(c) A + 0 = A
(d) A + (-A) = 0

Here 0 denotes the zero matrix (of size m X n), that is, the matrix with all entries zero. If m = 1 or n = 1, this is a vector, called a zero vector.

Similarly, for any scalars c and k,

(a) c(A + B) = cA + cB
(b) (c + k)A = cA + kA
(c) c(kA) = (ck)A
(d) 1A = A

Multiplication of 2 matrices

The product C = AB (in this order) of an m X n matrix A = [aij] times an r X p matrix B = [bij] is defined if and only if r = n and is then the m X p matrix C = [cij] with entries cij = ai1b1j + ai2b2j + … + ainbnj.

The condition r = n means that the second factor, B, must have as many rows as the first factor, A, has columns, namely n. As a diagram of sizes, matrix multiplication is possible when [m X n][n X p] = [m X p].
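The size rule can be checked in code; a small NumPy sketch (the entries are arbitrary examples):

```python
import numpy as np

# A is 2x3 and B is 3x2, so the inner sizes match and AB is 2x2.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])
C = A @ B          # entries c_ij = sum over k of a_ik * b_kj
print(C.shape)     # (2, 2)
print((B @ A).shape)  # (3, 3): BA has a different size, so AB != BA
```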

 Rules of Matrix Multiplication

(kA)B = k(AB) = A(kB), A(BC) = (AB)C, (A + B)C = AC + BC, C(A + B) = CA + CB

Matrix Multiplication Is Not Commutative, AB ≠ BA

Transposition of Matrix

The transpose of an m X n matrix A = [aij] is the n X m matrix AT that has the first row of A as its first column, the second row of A as its second column,
and so on.

 Rules of Transposition

(AT)T = A, (A + B)T = AT + BT, (cA)T = cAT, (AB)T = BT AT

Special Matrices

 Singular Matrix: This matrix has a determinant = 0.

 Symmetric Matrices: Those matrices whose transpose equals the matrix itself, i.e. AT = A.

 Skew-symmetric Matrices: Those matrices whose transpose equals the negative of the matrix, i.e. AT = -A.

e.g.

Note: Any real square matrix A can be expressed as the sum of a symmetric matrix R and a skew-symmetric matrix S, where

R = 0.5(A + AT) and S = 0.5(A - AT)
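A quick numerical check of this decomposition (the matrix entries are arbitrary examples):

```python
import numpy as np

# Split an arbitrary real square matrix into its symmetric part R
# and its skew-symmetric part S; they must sum back to A.
A = np.array([[1.0, 7.0, 3.0],
              [2.0, 5.0, 8.0],
              [6.0, 4.0, 9.0]])
R = 0.5 * (A + A.T)   # symmetric:       R.T == R
S = 0.5 * (A - A.T)   # skew-symmetric:  S.T == -S
assert np.allclose(R + S, A)
```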

 Triangular Matrices

• Upper Triangular Matrix: They have nonzero entries only on and above the main diagonal.
e.g.

 The matrix is then said to have been reduced to echelon form.

• Lower Triangular Matrix: They have nonzero entries only on and below the main diagonal.
e.g.

Note: A matrix A can be expressed as the product of a lower triangular matrix (L) and an upper triangular matrix (U). For example,
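A minimal LU sketch (Doolittle's scheme without pivoting, assuming nonzero pivots; this is not the notes' own example, which did not survive extraction):

```python
import numpy as np

def lu(A):
    # Doolittle LU: L has unit diagonal, U is upper triangular.
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # elimination multiplier
            U[i, :] -= L[i, k] * U[k, :]  # zero out below the pivot
    return L, U

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
L, U = lu(A)
print(L)   # lower triangular, unit diagonal
print(U)   # upper triangular
```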

 Diagonal Matrices: These are square matrices that can have non - zero entries only on the main diagonal. Any entry above or below the main
diagonal must be zero.
e.g.

Note: If all the diagonal elements become 1, then the diagonal matrix will be known as the identity matrix.

 Orthogonal Matrices: Those matrices whose transpose equals the inverse of the matrix, i.e. AT = A-1, so their determinant will always be ±1 only.

System of Linear Equations

It can be represented as:

If all the entries on the RHS become zero, then the system of linear equations is known as a homogeneous system of linear equations. Otherwise, it is known as a non-homogeneous system of linear equations.

In the matrix form, they can be expressed as:

Ax = b
where,

and,

Assume that A is not a zero matrix and note that x has n components whereas b has m components. The matrix

is called the augmented matrix Ã. The dashed line is used to indicate that the last column is not a part of the matrix A; rather, it comes from the vector b.

Solution of the System of Linear Equations by Gauss Elimination


Consider the following set of linear equations:


x1 - x2 + x3 = 0
-x1 + x2 - x3 = 0
10x2 + 25x3 = 90
20x1 + 10x2 = 80

Step 1: Eliminate x1

Call the first row of A the pivot row and the first equation the pivot equation. Call the coefficient 1 of its x1 - term the pivot in this step as shown below:

Use this equation to eliminate (get rid of) x1 in the other equations. For this, do:

• Add 1 times the pivot equation to the second equation.


• Add - 20 times the pivot equation to the fourth equation.

So, we will get the new form of the augmented matrix as:

Step 2: Eliminate x2

The first equation remains as it is. We want the new second equation to serve as the next pivot equation, but it has no x2 term (in fact, it is 0 = 0). So we exchange rows: move the second row below the third and fourth rows, i.e. put it last, as shown below:

And then:

• Add - 3 times the pivot equation to the third equation as shown below:

Step 3: Back substitution, i.e. find the unknowns in this order: x3, x2 and then x1.

Working backward from the last to the first equation of this “triangular” system, we can now readily find x3, then x2 and then x1 as:

-95x3 = -190
10x2 + 25x3 = 90
x1 - x2 + x3 = 0
So, we will get x3 = 2, x2 = 4 and x1 = 2
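The result can be cross-checked numerically; a sketch solving the three independent equations of the example (the second equation, being the negative of the first, is dropped):

```python
import numpy as np

# Coefficient matrix and RHS of the worked example.
A = np.array([[ 1.0, -1.0,  1.0],
              [ 0.0, 10.0, 25.0],
              [20.0, 10.0,  0.0]])
b = np.array([0.0, 90.0, 80.0])
x = np.linalg.solve(A, b)
print(x)   # expect x1 = 2, x2 = 4, x3 = 2
```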

Linear Independence & Dependence of Vectors

Consider the following set of vectors (as the rows of a matrix A):

A = [  0    2    2
      42   24   54
     -21    0  -15 ]

Replacing R3 with R3 - 6R1 + 0.5R2 (since R3 = 6R1 - 0.5R2), we get

A = [  0    2    2
      42   24   54
       0    0    0 ]

If we find the determinant of this matrix, we will get det A = 0. So, the given vectors are linearly dependent.

But in the following set of vectors:

B = [ -1    1   -1
       0   10   25
      20   10    0 ]

Here, det B ≠ 0. So, the given vectors are linearly independent.

Rank of Matrix

The rank of a matrix A is the maximum number of linearly independent row vectors of A (equivalently, the number of non-zero rows after reduction to echelon form). It is denoted by rank (A).

For example, the above matrix A has rank 2 and the matrix B has rank 3.
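The claimed ranks can be verified numerically (matrix entries as recovered from the examples above):

```python
import numpy as np

A = np.array([[  0,  2,   2],
              [ 42, 24,  54],
              [-21,  0, -15]])
B = np.array([[-1,  1, -1],
              [ 0, 10, 25],
              [20, 10,  0]])
print(np.linalg.matrix_rank(A))  # 2: one row is a combination of the others
print(np.linalg.matrix_rank(B))  # 3: all rows independent
```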

Solution of linear equations

Consider the following augmented matrix Ã:

And, the matrix A:

Both of these matrices have rank 3. Hence, if:

• Rank A = Rank Ã = number of unknowns, we will get a unique solution (consistent equations).

• Rank A ≠ Rank Ã, we will get no solution (inconsistent equations).

• Rank A = Rank Ã < number of unknowns, we will get an infinite number of solutions (consistent equations).

Homogenous Linear Equations

A homogeneous system of linear equations Ax = 0 always has the trivial solution x = 0. Non-trivial solutions exist if and only if rank A < number of unknowns.

Inverse of Matrix

The inverse of matrix A is denoted by A-1 such that


AA-1 = I
where, I is the n X n unit matrix.

If A has an inverse, then A is called a non - singular matrix. If A has no inverse, then A is called a singular matrix.

 If A has an inverse, the inverse is unique.

Existence of Inverse of A

The inverse of an n X n matrix A exists if and only if rank A = n, i.e. det A ≠ 0.

Finding the Inverse of a Matrix by Gauss–Jordan Elimination

Matrix inverse by determinants

The inverse of a non-singular n X n matrix A = [aij] is given by:

A-1 = (1/det A) [Cjk]T

where Cjk is the cofactor of ajk in det A.

Eigen values & Eigen vectors

A matrix eigen value considers the vector equation:


Ax = λx

Here, A is the given square matrix, x is an unknown vector and λ is an unknown scalar. Our aim is to find the values of λ which give x ≠ 0. The λ's satisfying the equation are known as the eigen values and the non-zero x's satisfying the equation are known as the eigen vectors.

e.g. Find the eigen values and eigen vectors for the matrix

A = [ 5   4
      1   2 ]

Characteristic equation: det(A - λI) = (5 - λ)(2 - λ) - 4 = λ² - 7λ + 6 = 0. So, λ = 1, 6

Now, corresponding to λ = 1, we get

[ 4   4 ] [ x ]   [ 0 ]
[ 1   1 ] [ y ] = [ 0 ]

Here, we get only one independent equation, x + y = 0 which gives x = - y

Hence,

And, corresponding to λ = 6, we get

Again, we get only one independent equation, x - 4y = 0 which gives x = 4y


Hence,
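The whole example can be reproduced numerically; a short NumPy sketch:

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eig(A)
print(np.sort(vals.real))   # the eigen values 1 and 6
# each eigenvector satisfies A v = lambda v
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```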

Properties of Eigen Values

 Any square matrix A and its transpose AT have same eigen values.

 In a triangular matrix, the principal diagonal gives the eigen values of the matrix.

 The sum of eigen values is equal to the sum of the elements of the principal diagonal (trace) of the matrix.

 The product of eigen values of the matrix A is equal to the determinant of A.

 If λ is the eigen value of an orthogonal matrix then 1/λ is also the eigen value of the matrix.

 The eigen values of a symmetric matrix are real.

 The eigen values of a skew - symmetric matrix are either zero or purely imaginary.

 The eigen values of an orthogonal matrix are real or complex conjugates in pairs and have absolute value 1.

Matrix diagonalization
Let A be a 3 X 3 square matrix whose eigen values are λ1, λ2 and λ3 and X1 = (x1, y1, z1)T, X2 = (x2, y2, z2)T, X3 = (x3, y3, z3)T be the corresponding eigen vectors.

Denoting the square matrix [X1 X2 X3] by X, we have

AX = A[X1 X2 X3] = [AX1 AX2 AX3] = [λ1X1 λ2X2 λ3X3] = XD

where D = diag(λ1, λ2, λ3) is the diagonal matrix.
So, D = X-1AX and also Dm = X-1AmX where, m = 2, 3, 4…
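A numerical check of D = X-1AX, reusing the 2 X 2 eigen value example above:

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [1.0, 2.0]])
vals, X = np.linalg.eig(A)      # columns of X are the eigenvectors
D = np.linalg.inv(X) @ A @ X    # similarity transform diagonalizes A
assert np.allclose(D, np.diag(vals))
# D^m = X^{-1} A^m X, shown here for m = 3
m = 3
assert np.allclose(np.linalg.inv(X) @ np.linalg.matrix_power(A, m) @ X,
                   np.diag(vals) ** m)
```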

2. Complex Numbers
05 September 2021 13:03

A complex number z is an ordered pair (x, y) of real numbers x and y, written as:

z = (x, y)
where, x is called the real part and y the imaginary part of z, written
x = Re (z) and y = Im (z)
In practice, complex number z = (x, y) is written as:
z = x + iy

Representation of Complex Numbers

We choose two perpendicular coordinate axes, the horizontal x-axis, called the real axis, and the vertical y-axis, called the imaginary axis. On both
axes we choose the same unit of length.

Complex Conjugates

The complex conjugate z̄ of a complex number z = x + iy is defined by

z̄ = x - iy
It is obtained geometrically by reflecting the point z in the real axis.

For example,

• Properties of Complex Conjugates


Polar form of complex numbers

Like the Cartesian representation, complex numbers can also be represented in polar coordinates as:

x = r cos θ and y = r sin θ

So, z = x + iy = r(cos θ + i sin θ)


r is called the absolute value or modulus of z and is denoted by |z|. Hence, |z| = r = √(x² + y²) = √(z z̄), and θ is called the argument of z, denoted by arg z. Thus θ = arg z and tan θ = y/x.

 Here, as in calculus, all angles are measured in radians and positive in the counter - clockwise sense.
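Python's cmath module can illustrate the polar form; z = 1 + i is an arbitrary example:

```python
import cmath

z = 1 + 1j
r, theta = abs(z), cmath.phase(z)   # modulus and argument (radians)
assert abs(r - 2 ** 0.5) < 1e-12          # r = sqrt(x^2 + y^2)
assert abs(theta - cmath.pi / 4) < 1e-12  # tan(theta) = y / x
# reconstruct z from its polar form: z = r (cos t + i sin t) = r e^{it}
assert abs(r * cmath.exp(1j * theta) - z) < 1e-12
```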

[Figures: polar form of a complex number in the complex plane; distance between 2 points in the complex plane]

Triangle Inequality

|z1 + z2| ≤ |z1| + |z2|

In general, |z1 + z2 + … + zn| ≤ |z1| + |z2| + … + |zn|

 Multiplication & Division in Polar form

• Multiplication

Let z1 = r1(cos θ1 + i sin θ1) and z2 = r2(cos θ2 + i sin θ2)

So, z1z2 = r1r2[(cos θ1 cos θ2 - sin θ1 sin θ2) + i(sin θ1 cos θ2 + cos θ1 sin θ2)]

or, z1z2 = r1r2[cos(θ1 + θ2) + i sin(θ1 + θ2)]

Taking absolute values on both sides, we get |z1z2| = |z1||z2|

Taking the argument of both sides, we get arg(z1z2) = arg z1 + arg z2

• Division

z1/z2 = (r1/r2)[cos(θ1 - θ2) + i sin(θ1 - θ2)]

and, |z1/z2| = |z1|/|z2|

or, arg(z1/z2) = arg z1 - arg z2

De Moivre's Theorem

zn = rn(cos nθ + i sin nθ), where z = r(cos θ + i sin θ) and n is an integer.

Euler's formula

eiy = cos y + i sin y

And, |eiy| = |cos y + i sin y| = √(cos²y + sin²y) = 1. But, eiy ≠ 0

In polar form,

z = r eiθ

Complex Function

It can be represented:
w = f (z) = u(x, y) + iv (x, y)
Limits and Continuity

A function f(z) is said to have the limit l as z approaches a point z0, written as:

lim (z→z0) f(z) = l

if f is defined in a neighbourhood of z0 (except perhaps at z0 itself) and if the values of f are “close” to l for all z “close” to z0; in precise terms, if for every positive real ε we can find a positive real δ such that for all z ≠ z0 in the disc |z - z0| < δ we have |f(z) - l| < ε. Geometrically, for every z ≠ z0 in that δ-disc, the value of f lies in the disc of radius ε about l.

A function f(z) is said to be continuous at z = z0 if f(z0) is defined and

Note that by definition of a limit this implies that f (z) is defined in some neighbourhood of z0. f (z) is said to be continuous in a domain if it is continuous at
each point of this domain.

Derivative

The derivative of a complex function f at a point z0 is written f'(z0) and is defined by:

f'(z0) = lim (z→z0) [f(z) - f(z0)] / (z - z0)

provided this limit exists. Then f is said to be differentiable at z0. If we write ∆z = z - z0, we have z = z0 + ∆z and the above equation becomes:

f'(z0) = lim (∆z→0) [f(z0 + ∆z) - f(z0)] / ∆z

Analytic Functions

A function is said to be analytic in a domain D if f(z) is defined and differentiable at all points of D. The function f(z) is said to be analytic at a point z = z0
in D if f(z) is analytic in a neighbourhood of z0.

Cauchy–Riemann Equations

A function w = f(z) = u(x, y) + iv(x, y) is analytic in a domain D if and only if the first partial derivatives of u and v satisfy the two Cauchy–Riemann equations everywhere in D:

∂u/∂x = ∂v/∂y and ∂u/∂y = -∂v/∂x

In polar coordinates,

∂u/∂r = (1/r) ∂v/∂θ and ∂v/∂r = -(1/r) ∂u/∂θ
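A quick numerical check of the Cauchy–Riemann equations for the analytic function f(z) = z², whose real and imaginary parts are u = x² - y² and v = 2xy:

```python
# Central finite differences approximate the four partial derivatives
# at an arbitrary test point; the equations u_x = v_y and u_y = -v_x hold.
u = lambda x, y: x * x - y * y
v = lambda x, y: 2 * x * y
h, x0, y0 = 1e-6, 1.3, -0.7
ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
vx = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)
vy = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)
assert abs(ux - vy) < 1e-5 and abs(uy + vx) < 1e-5
```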

Complex Logarithm

ln z = ln(x + iy)

Putting z = reiθ and writing w = u + iv for a value of ln z (so that ew = z), we get

eu = r and v = θ

Hence, ln z = ln r + iθ = ln |z| + i arg z

The value of ln z corresponding to the principal value Arg z is denoted by Ln z and is called the principal value of ln z. Thus,

Ln z = ln |z| + i Arg z

Important:

 ln i = iπ/2

 ii = e-π/2

Complex Integration

• Line integral in the complex plane

Here an indefinite integral is a function whose derivative equals a given analytic function in a region. By inverting known differentiation formulas we may
find many types of indefinite integrals.

Complex definite integrals are called (complex) line integrals. They are written as:

Properties of Line Integral

1. Linearity: Integration is a linear operation, that is, we can integrate sums term by term and can take out constant factors from under the integral
sign. This means that if the integrals of f1 and f2 over a path C exist, so does the integral of k1 f1 + k2 f2 over the same path.

2. Limit Reversal: While integrating over the same path from z0 to Z and from Z to z0, put a minus sign as shown:

3. Partitioning of Path:


• Cauchy Integral Theorem

It states that if f(z) is analytic in a simply connected domain D, then for every closed path C in D:

∮C f(z) dz = 0

 A simple closed path is sometimes called a contour and an integral over such a path a contour integral.

For entire functions (analytic for all z), the integral around any closed path is zero.

For singular points lying outside the contour (so that f(z) is analytic inside and on C), the integral around the contour is again zero.

Cauchy Integral Formula

If f(z) is analytic within and on a closed curve C and if a is a point lying in it, then

f(a) = (1/2πi) ∮C f(z)/(z - a) dz

Derivatives of an Analytic Function

If f(z) is analytic in a domain D, then it has derivatives of all orders in D, which are then also analytic functions in D. The values of these derivatives at a
point z0 in D are given by the formulas:

In general,

f(n)(z0) = (n!/2πi) ∮C f(z)/(z - z0)n+1 dz

where, n = 1, 2, 3,…

Here C is any closed path in D which encloses z0 and whose full interior belongs to D and we integrate counter - clockwise around C.

Laurent Series

Let f(z) be analytic in a domain containing 2 concentric circles C1 and C2 with centre z0 and the annulus between them.

Then, f(z) can be represented by the Laurent series as:

consisting of non-negative and negative powers. The coefficients of this Laurent series are given by the integrals:

taken counter-clockwise around any simple closed path C that lies in the annulus and encircles the inner circle as shown below:

Singularity

A singular point of an analytic function f(z) is a point z0 at which f(z) ceases to be analytic, and a zero is a point z0 at which f(z) = 0.

1. Isolated singularity: If z = a is a singularity of f(z) such that f(z) is analytic at each point in some neighbourhood of a except at a itself, then z = a is called an isolated singularity.

2. Removable singularity: A function has a removable singularity at z = z0 if it is not analytic at z0 but can be made analytic there by assigning a suitable value f(z0). Such singularities are of no interest since they can be removed as just indicated. Example: (sin z)/z becomes analytic at z = 0 if we define f(0) = 1.

3. Essential singularity: If the number of negative powers of (z - a) is infinite, then z = a is called an essential singularity. In this case lim (z→a) f(z) does not exist.

Residue Integration

Let f(z) be analytic inside a simple closed path C and on C, except for finitely many singularities z1, z2, z3, …, zk inside C. Then, the integral of f(z) taken counter-clockwise around C equals 2πi times the sum of the residues of f(z) at z1, z2, z3, …, zk:

where the residue at a simple pole z = a is

Res (z=a) f(z) = lim (z→a) (z - a) f(z)

3. Numerical Methods
06 September 2021 12:20

Solution of Linear Equations

• Gauss Elimination

Consider the following set of linear equations:


x1 - x2 + x3 = 0
-x1 + x2 - x3 = 0
10x2 + 25x3 = 90
20x1 + 10x2 = 80

Step 1: Eliminate x1

Call the first row of A the pivot row and the first equation the pivot equation. Call the coefficient 1 of its x1 - term the pivot in this step as shown
below:

Use this equation to eliminate (get rid of) x1 in the other equations. For this, do:

• Add 1 times the pivot equation to the second equation.


• Add - 20 times the pivot equation to the fourth equation.

So, we will get the new form of the augmented matrix as:

Step 2: Eliminate x2

The first equation remains as it is. We want the new second equation to serve as the next pivot equation, but it has no x2 term (in fact, it is 0 = 0). So we exchange rows: move the second row below the third and fourth rows, i.e. put it last, as shown below:

And then:

• Add - 3 times the pivot equation to the third equation as shown below:

Step 3: Back substitution, i.e. find the unknowns in this order: x3, x2 and then x1.

Working backward from the last to the first equation of this “triangular” system, we can now readily find x3, then x2 and then x1 as:

-95x3 = -190
10x2 + 25x3 = 90
x1 - x2 + x3 = 0
So, we will get x3 = 2, x2 = 4 and x1 = 2

• Gauss - Jordan Method (Inverse of Matrix)

Here, for the equation Ax = b

The inverse of matrix A is denoted by A-1 such that


AA-1 = I
where, I is the n X n unit matrix.

Consider the following example,

• Gauss - Seidel Iteration Method

Consider the following set of linear equations

or

So, now we will use this equation for iteration. To start with, take x1(0) = 100; x2(0) = 100; x3(0) = 100 and x4(0) = 100 and compute the new values using
the above equations as:

Again, use these values to compute new values of x1, x2, x3 and x4 as:

On repeating the above steps a few more times, we will finally get the following set of values:

As we can see, these values converge to the final values of the unknowns: x1 = x2 = 87.5 and x3 = x4 = 62.5.
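Since the system in the notes did not survive extraction, here is a generic Gauss–Seidel sketch on a small diagonally dominant system (matrix and starting values are illustrative):

```python
import numpy as np

# Solve 4x + y = 9 and 2x + 5y = 12 by Gauss-Seidel sweeps.
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([9.0, 12.0])
x = np.array([100.0, 100.0])   # deliberately poor starting guess
for _ in range(50):
    for i in range(len(b)):
        # use the newest values already computed in this sweep
        s = A[i] @ x - A[i, i] * x[i]
        x[i] = (b[i] - s) / A[i, i]
print(x)   # converges to the exact solution
```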

• Gauss - Jacobi Iteration Method

Its procedure is similar to that of the Gauss–Seidel method, but it uses only the old values (from the previous iteration) for computation. For example:

Consider the following equations:

10x + y - z = 11.19
x + 10 y + z = 28.08
y - x + 10z = 35.61

Solution of Non-Linear Equations by Iterative Methods

• Fixed Point Iteration

To solve an algebraic equation f(x) = 0 by this method, we first convert f (x) = 0 into the form

x = g (x)

Then we choose a point x0 and find x1 = g(x0), x2 = g(x1) and so on till the values of x converge to a fixed point and in general

xn + 1 = g (xn)

 Convergence of fixed point iteration

Let x = s be a solution of x = g(x) and suppose that g has a continuous derivative in some interval J containing s. Then, if |g'(x)| ≤ k < 1 in J, the iteration process defined above converges for any x0 in J.
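A minimal sketch: solving x = cos x by fixed-point iteration (the example equation is chosen so that |g'(x)| = |sin x| < 1 near the root):

```python
import math

g = math.cos       # f(x) = x - cos x = 0 rewritten as x = g(x)
x = 1.0            # starting point x0
for _ in range(100):
    x = g(x)       # x_{n+1} = g(x_n)
print(x)           # converges to the fixed point ~0.739
```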

• Newton - Raphson method

In the equation f(x) = 0, f is assumed to have a continuous derivative f'. The underlying idea is that we approximate the graph of f by suitable tangents. Using an approximate value x0 obtained from the graph of f, we let x1 be the point of intersection of the x-axis and the tangent to the curve of f at x0, as shown below:

Then, in general,

xn+1 = xn - f(xn)/f'(xn)
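A sketch of the iteration for the example f(x) = x² - 2 (root √2):

```python
# Newton-Raphson for f(x) = x^2 - 2, with f'(x) = 2x.
f = lambda x: x * x - 2
df = lambda x: 2 * x
x = 1.0
for _ in range(8):
    x = x - f(x) / df(x)   # x_{n+1} = x_n - f(x_n)/f'(x_n)
print(x)                   # converges quadratically to 1.41421356...
```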

 Convergence of Newton - Raphson method

• Secant Method

The Newton - Raphson method is very powerful but has the disadvantage that the derivative may sometimes be a far more difficult expression than f
itself and its evaluation therefore computationally expensive. This situation suggests the idea of replacing the derivative with the difference quotient as:

Then we have,

xn+1 = xn - f(xn)(xn - xn-1) / (f(xn) - f(xn-1))

Geometrically, we intersect the x-axis at xn + 1 with the secant of f(x) passing through Pn - 1 and Pn in the below figure. We need two starting values x0
and x1. Evaluation of derivatives is now avoided.
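The same example solved with the secant method (derivative-free, two starting values):

```python
# Secant method for f(x) = x^2 - 2: the derivative is replaced
# by the difference quotient through the last two iterates.
f = lambda x: x * x - 2
x0, x1 = 1.0, 2.0
for _ in range(20):
    f0, f1 = f(x0), f(x1)
    if f1 == f0:           # flat secant at convergence: stop
        break
    x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
print(x1)
```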

• Method of false position (Regula - Falsi method)

Assuming f to be continuous, we find the x-intercept c0 of the line passing through (a0, f(a0)) and (b0, f(b0)). If f(c0) = 0, we have converged to the solution.
If f(a0)f(c0) < 0 as shown below, we set a1 = a0, b1 = c0 and repeat to get c1, etc. If f(a0)f(c0) > 0, then f(b0)f(c0) < 0 and we set a1 = c0, b1 = b0, etc.

The value of c0 will be given by:

c0 = [a0 f(b0) - b0 f(a0)] / [f(b0) - f(a0)]

• Bisection method

This simple but slowly convergent method for finding a solution of f(x) = 0 with continuous f is based on the intermediate value theorem, which states that if a continuous function f has opposite signs at the endpoints of an interval [a, b], that is, either f(a) < 0, f(b) > 0 or f(a) > 0, f(b) < 0, then f must be 0 somewhere in [a, b]. The solution is found by repeated bisection of the interval, in each iteration picking the half that satisfies the sign condition.

For f(x) = 0 and x ∈ [a, b], let f(a) > 0 and f(b) < 0. Then f(a)f(b) < 0, which means that a root lies between a & b, with midpoint c = (a + b)/2.

 If f(c) < 0, then next iteration will be d = (a + c)/2.

 If f(c) > 0, then next iteration will be d = (b + c)/2.
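A bisection sketch for the same example f(x) = x² - 2 on [1, 2]:

```python
# f changes sign on [1, 2] (f(1) = -1, f(2) = 2), so a root lies inside.
# Each step halves the interval and keeps the half with the sign change.
f = lambda x: x * x - 2
a, b = 1.0, 2.0
for _ in range(50):
    c = (a + b) / 2
    if f(a) * f(c) < 0:   # sign change in [a, c]
        b = c
    else:                 # sign change in [c, b]
        a = c
print(c)                  # ~1.41421356, i.e. sqrt(2)
```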

Numerical Integration

• Trapezoidal Rule

 Here, step size h = (b - a)/n and n = number of intervals.

 Gives the exact value of the integral when f(x) is a linear function.

• Simpson's 1/3 Rule

 Here, 'n' must be divisible by 2.

• Simpson's 3/8 Rule

 Here, 'n' must be divisible by 3.
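Both rules as code, tested on the arbitrary example ∫ x³ dx from 0 to 1 = 0.25 (Simpson's 1/3 rule is exact for cubics):

```python
def trapezoid(f, a, b, n):
    # composite trapezoidal rule with n intervals
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

def simpson13(f, a, b, n):
    # composite Simpson's 1/3 rule; n must be divisible by 2
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even nodes
    return s * h / 3

f = lambda x: x ** 3
print(trapezoid(f, 0, 1, 4))   # 0.265625: crude for a cubic
print(simpson13(f, 0, 1, 4))   # 0.25: exact for cubics
```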

Numerical Solution of ODEs

• Euler's method

 Explicit / Forward
Consider the equation dy/dx = f(x, y) given that y(x0) = y0. Its curve of solution through P(x0, y0) is shown dotted below. Now, we have to find the ordinate of any other point Q on this curve.

Let us divide LM into n subintervals each of width h at L1, L2, L3, …, so that h is quite small. In the small interval LL1, we approximate the curve by the tangent at P. If the ordinate through L1 meets the tangent in P1(x0 + h, y1), then

y1 = y0 + h f(x0, y0)

 Backward / Implicit

It gives a lower error than the explicit method but is more complicated and slower, as it uses the new value of the slope: y1 = y0 + h f(x1, y1).

• Runge–Kutta Methods

For the ODE dy/dx = f(x, y) with y(x0) = y0, there are 2 types of Runge–Kutta methods:

 Second order (RK-2)

k1 = h f(x0, y0) and k2 = h f(x0 + h, y0 + k1)

where, y1 = y0 + (1/2)(k1 + k2)

 Fourth order (RK-4)

k1 = h f(x0, y0), k2 = h f(x0 + 0.5h, y0 + 0.5k1), k3 = h f(x0 + 0.5h, y0 + 0.5k2) and k4 = h f(x0 + h, y0 + k3)

where, y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)

Note: If the differential equation is a function of x only, then the expression for y1 reduces to a form similar to Simpson's 1/3 rule.
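A sketch comparing forward Euler with RK-4 on the arbitrary example dy/dx = y, y(0) = 1 (exact solution y = e^x):

```python
import math

def euler(f, x, y, h, steps):
    # forward Euler: follow the tangent at the left end of each step
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

def rk4(f, x, y, h, steps):
    # classical fourth-order Runge-Kutta
    for _ in range(steps):
        k1 = h * f(x, y)
        k2 = h * f(x + 0.5 * h, y + 0.5 * k1)
        k3 = h * f(x + 0.5 * h, y + 0.5 * k2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

f = lambda x, y: y
print(euler(f, 0, 1, 0.1, 10))  # ~2.5937, visibly below e
print(rk4(f, 0, 1, 0.1, 10))    # ~2.71828, very close to e
```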

4. Statistics & Probability
07 September 2021 10:36

The “probability” of an event A in an experiment is supposed to measure how frequently A occurs if we make many trials. If we flip a coin, then heads H and tails T will appear about equally often; we say that H and T are “equally likely.”

If the sample space S of an experiment consists of finitely many outcomes (points) that are equally likely, then the probability of an event A is:

P(A) = (number of points in A) / (number of points in S)

• The probability of a sure event is 1.


• The probability of an impossible event is 0.

Union, Intersection and Complementary events

• The union of A and B (A ∪ B) consists of all points in A OR B or both.

• The intersection of A and B (A ∩ B) consists of all points that are in both A AND B.

If A and B have nothing in common, then A ∩ B = ∅. Such events are known as mutually exclusive events.

Addition of mutually exclusive events: These events cannot occur simultaneously. For mutually exclusive events A1, A2, …, Am in a sample space S:

P(A1 ∪ A2 ∪ … ∪ Am) = P(A1) + P(A2) + … + P(Am)

Addition of arbitrary events: For A and B in a sample space:

P(A ∪ B) = P(A) + P(B) - P(A ∩ B)

Conditional Probability: Independent Events

Often it is required to find the probability of an event B under the condition that an event A occurs. This probability is called the conditional probability of B given A and is denoted by P(B|A). In this case A serves as a new (reduced) sample space, and that probability is the fraction of P(A) which corresponds to A ∩ B. Thus

P(B|A) = P(A ∩ B) / P(A)

Similarly, the conditional probability of A given B is

P(A|B) = P(A ∩ B) / P(B)

Multiplication Rule:

P(A ∩ B) = P(A) P(B|A) = P(B) P(A|B)

If the events A and B are independent, then P(A ∩ B) = P(A) P(B).

Permutation & Combination

Combination

• Selection
• nCr = n! / (r! (n - r)!)
• Example: Consider 3 persons A, B and C. We can select them in the following ways:
  1. No one is selected: 3C0 = 1
  2. One of them is selected: 3C1 = 3
  3. Two of them are selected: 3C2 = 3
  4. All of them are selected: 3C3 = 1

Permutation

• Selection followed by arrangement
• nPr = n! / (n - r)! = r! X nCr, where r can vary from 0 to n.
• Example: Consider 3 persons A, B and C. We can arrange them in the following ways:
  1. No one is selected and arranged: 3P0 = 1
  2. One of them is selected and arranged: 3P1 = 3
  3. Two of them are selected and arranged: 3P2 = 6
  4. All of them are selected and arranged: 3P3 = 6

 Here, by default we have assumed that each object is different and that the arrangement is linear.
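Python's math module provides both counts directly; the 3-person example:

```python
import math

n = 3
# selections (combinations) and arrangements (permutations) for r = 0..3
print([math.comb(n, r) for r in range(4)])  # [1, 3, 3, 1]
print([math.perm(n, r) for r in range(4)])  # [1, 3, 6, 6]
# identity nPr = r! * nCr
assert all(math.perm(n, r) == math.factorial(r) * math.comb(n, r)
           for r in range(4))
```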

Random Variables: Probability Distribution

A probability distribution or, briefly, a distribution, shows the probabilities of events in an experiment. The quantity that we observe in an experiment will
be denoted by X and called a random variable.

A random variable X is a function defined on the sample space S of an experiment. Its values are real numbers. For every number a, the probability P(X = a) with which X assumes a is defined. Similarly, for any interval I the probability P(X ∈ I) with which X assumes any value in I is defined.

If we count (cars on a road, defective screws in a production, tosses until a die shows the first Six), we have a discrete random variable and
distribution. If we measure (electric voltage, rainfall, hardness of steel), we have a continuous random variable and distribution. Precise definitions
follow. In both cases the distribution of X is determined by the distribution function:

F(x) = P(X ≤ x); this is the probability that in a trial, X will assume any value not exceeding x.

• Discrete Random Variables and Distributions

These variables appear in experiments in which we count (defectives in a production, customers standing in a line, etc.). The discrete distribution of X is
determined by the probability function of X, defined by:

From this we get the values of the distribution function F(x) by taking sums,

where for any given x we sum all the probabilities pj for which xj ≤ x. This is a step function with upward jumps of size pj at the possible values xj of X, constant in between.

Two useful formulas for discrete distributions are readily obtained as follows:

• Continuous Random Variables and Distributions

They appear in experiments in which we measure (lengths of screws, voltage in a power line, etc.). The continuous distribution of X is determined by
the probability density function of X, defined by:

or

and,

 Mean & Variance of a distribution

• Mean/Expectation (µ)

• Variance (σ2)

 σ is known as the standard deviation.

Uniform Distribution

f(x) = 1/(b - a), where a < x < b

• Mean: µ = (a + b)/2

• Variance: σ2 = (b - a)2/12

• Standard deviation of mean of f (x)

Binomial Distribution

The binomial distribution occurs in games of chance (rolling a die, see below, etc.), quality inspection (e.g., counting of the number of defectives), opinion
polls (counting number of employees favouring certain schedule changes, etc.), medicine (e.g., recording the number of patients who recovered on a new
medication), and so on.

We are interested in the number of times an event A occurs in n independent trials. In each trial the event A has the same probability P(A) = p. Then in a
trial, A will not occur with probability q = 1 - p. In n trials, the random variable that interests us is


X = number of times the event A occurs in 'n' trials

X can assume the values 0, 1, 2, …, n, and we want to determine the corresponding probabilities. Now X = x means that A occurs in x trials and does not occur in n - x trials. This may look as follows.

So, the probability function of the binomial distribution is

P(X = x) = nCx px qn-x, where x = 0, 1, 2, …, n

 Mean: µ = np

 Variance: σ2 = npq

 Standard deviation: σ = √(npq)
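A quick check of the binomial formulas (n = 10 and p = 0.3 are arbitrary example values):

```python
import math

n, p = 10, 0.3
q = 1 - p
# P(X = x) = C(n, x) p^x q^(n - x) for x = 0..n
pmf = [math.comb(n, x) * p ** x * q ** (n - x) for x in range(n + 1)]
assert abs(sum(pmf) - 1) < 1e-12          # probabilities sum to 1
mean = sum(x * P for x, P in enumerate(pmf))
var = sum((x - mean) ** 2 * P for x, P in enumerate(pmf))
print(mean, var)   # mean = np, variance = npq
```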

• Sampling with replacement

This means that we draw things from a given set one by one, and after each trial we replace the thing drawn (put it back to the given set and mix) before
we draw the next thing. This guarantees independence of trials and leads to the binomial distribution. Indeed, if a box contains N things, for example,
screws, M of which are defective, the probability of drawing a defective screw in a trial is p = M/N. Hence the probability of drawing a non-defective screw is q = 1 - p = 1 - M/N, and the number X of defectives drawn in n trials has the binomial probability:

P(X = x) = nCx (M/N)x (1 - M/N)n-x

• Sampling without replacement (Hypergeometric Distribution)

This means that we return no screw to the box. Then we no longer have independence of trials. In such a case, the probability of drawing x defective screws in n trials is

P(X = x) = [MCx X (N-M)C(n-x)] / NCn

 Mean: µ = nM/N

 Variance: σ2 = nM(N - M)(N - n) / [N2(N - 1)]

Poisson Distribution

It is a discrete distribution with infinitely many possible values and probability function:

P(X = x) = (µx / x!) e-µ, x = 0, 1, 2, …

with mean = variance (µ = σ2).

Normal/Gauss distribution

Its density function is given as:

f(x) = [1 / (σ√(2π))] exp[-(x - µ)2 / (2σ2)]

where exp is the exponential function with base e = 2.718…, and µ, σ are the mean and standard deviation respectively.
It has points of inflection at x = µ ± σ.

It has the distribution function as:

At µ = 0 and σ = 1, F(x) reduces to the standard normal distribution function Φ(z).

Exponential Distribution

A continuous random variable X assuming non-negative values is said to have an exponential distribution with parameter λ > 0 if its probability density function is given by

f(x) = λ e-λx for x ≥ 0 (and f(x) = 0 otherwise)

Linear Regression Analysis

In this analysis the dependence of Y on x is a dependence of the mean µ(x) of Y on x, so that µ(x) is a function in the ordinary sense. The curve of µ(x) is called the regression curve of Y on x.

Let us consider a simple case of a straight regression line with equation,

Then we may want to graph the sample values as n points in the XY-plane, fit a straight line through them, and use it for estimating µ(x) at values of x that
interest us, so that we know what values of Y we can expect for those x.

The straight line should be fitted through the given points so that the sum of the squares of the distances of those points from the straight line is
minimum, where the distance is measured in the vertical direction (the y-direction).

From the given sample, we shall now determine a straight line by least squares. We write the line as:

y = k0 + k1x

and call it the sample regression line because it will be the counterpart of the regression line µ = k0 + k1x.

Now, a sample point (xj, yj) has the vertical distance (distance measured in the y-direction) from the above line given by:

|yj - (k0 + k1xj)|

Hence the sum of the squares of these distances is

q = Σ (yj - k0 - k1xj)2

Now, we will find k0 and k1 so that q is minimum. From calculus we know that a necessary condition for this is:

∂q/∂k0 = 0 and ∂q/∂k1 = 0

On solving these 2 equations, we get the values of k0 and k1 and using them, we can find the equation for our straight line.
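A sketch of the resulting closed-form least-squares solution on a small invented sample (points lying near y = 1 + 2x); the formulas below are one standard way of solving the two normal equations:

```python
# Fit y = k0 + k1*x by least squares.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.1, 6.9]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
# slope and intercept from the normal equations
k1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
k0 = (sy - k1 * sx) / n
print(k0, k1)   # close to the underlying intercept 1 and slope 2
```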

5. Calculus
08 September 2021 13:36

Limits, Continuity and Differentiability

• Limit

lim (x→a-) f(x) is the expected value of f at x = a given the values of f near x to the left of a. This value is known as the left hand limit of f at a.

lim (x→a+) f(x) is the expected value of f at x = a given the values of f near x to the right of a. This value is known as the right hand limit of f at a.

If the right and left hand limits coincide, we call the common value the limit of f(x) at x = a, denoted by lim (x→a) f(x).

Standard Results of limit

L'Hospital's Rule

It is a general method for evaluating indeterminate forms like 0/0 and ∞/∞. It states that if lim(x→a) f(x)/g(x) reduces to 0/0 or ∞/∞, then differentiate the
numerator and the denominator until this form is eliminated.
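A quick numerical illustration (Python): lim(x→0) sin x / x has the 0/0 form, and L'Hospital's rule gives cos x / 1 → 1, which the numbers confirm:

```python
import math

# sin(x)/x is 0/0 at x = 0; L'Hospital gives cos(x)/1, which tends to 1.
for x in [0.1, 0.01, 0.001]:
    print(x, math.sin(x) / x)  # ratios approach 1

ratio = math.sin(1e-6) / 1e-6
print(ratio)  # ≈ 1.0
```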

• Continuity

The word continuous means without any break or gap. A function is continuous when its graph is a single unbroken curve.

A function f(x) is continuous at x = a if the following three conditions are satisfied :

 f(a) is defined.

 The Left Hand Limit = Right Hand Limit (LHL = RHL), i.e. lim(x→a) f(x) exists.

 lim(x→a) f(x) = f(a).

If any of these three is not satisfied, the function is said to be discontinuous.

• Differentiability

The function f(x) is differentiable at a point P if there exists a unique tangent at P, i.e. the curve does not have a corner point at P; the function is
not differentiable at points where it has jumps or sharp edges.

Equivalently, f is differentiable at P if the left hand and right hand limits are equal and the left and right hand derivatives are equal.

For example, |x| is not differentiable at x = 0 because it has a sharp edge at x = 0.
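This can be checked numerically with one-sided difference quotients (a Python sketch; the helper names are my own):

```python
def right_derivative(f, a, h=1e-6):
    return (f(a + h) - f(a)) / h

def left_derivative(f, a, h=1e-6):
    return (f(a) - f(a - h)) / h

# |x| has different one-sided derivatives at 0, so it is not differentiable there.
rd = right_derivative(abs, 0.0)  # → 1.0
ld = left_derivative(abs, 0.0)   # → -1.0
print(ld, rd)
```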

First derivative of f(x) at x = a: f′(a) = lim(h→0) [f(a + h) − f(a)]/h

Mean Value Theorems

Lagrange's Mean Value Theorem

If f(x) is a real-valued function such that

• f(x) is continuous in the closed interval [a, b].

• f(x) is differentiable in the open interval (a, b).

• f(a) ≠ f(b).

Then there exists at least one value c ∈ (a, b) such that f′(c) = [f(b) − f(a)]/(b − a).

Rolle's Theorem (Special case of Lagrange's Mean Value Theorem)

If f(x) is a real-valued function such that

• f(x) is continuous in the closed interval [a, b].

• f(x) is differentiable in the open interval (a, b).

• f(a) = f(b).

Then there exists at least one value c ∈ (a, b) such that f′(c) = 0.
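A small numerical illustration of Rolle's theorem (Python sketch; the example f(x) = x(x − 1) on [0, 1], where c = 1/2, is my own choice):

```python
def f(x):  # f(0) = f(1) = 0, continuous and differentiable everywhere
    return x * (x - 1)

def fprime(x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Bisection on f' over (0, 1): f'(0) = -1 < 0 and f'(1) = 1 > 0,
# so some c in between has f'(c) = 0, as Rolle's theorem promises.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if fprime(mid) < 0:
        lo = mid
    else:
        hi = mid
c = 0.5 * (lo + hi)
print(round(c, 6))  # → 0.5
```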

Series Expansion of Functions

Taylor Series

If f(x) is infinitely differentiable at the point x = a then it can be expanded as an infinite series as follows:

f(x) = f(a) + f′(a)(x − a) + f″(a)(x − a)²/2! + f‴(a)(x − a)³/3! + …

At a = 0, the Taylor series reduces to the Maclaurin series.


Series expansions of some important functions around x = 0 (Maclaurin series)
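A sketch of how the Maclaurin partial sums converge (Python, using e^x = Σ xⁿ/n! as the example):

```python
import math

def maclaurin_exp(x, terms):
    """Partial sum of the Maclaurin series e^x = sum of x^n / n!."""
    return sum(x ** n / math.factorial(n) for n in range(terms))

approx = maclaurin_exp(1.0, 12)
print(approx, math.e)  # 12 terms already agree with e to ~8 decimal places
```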

Definite Integrals

Properties of definite integrals


Improper Integrals

• Gamma Function

Standard Results

• Beta function

Standard Results

Partial Derivatives

Let z = f(x, y) be a real function of x and y. If we keep y constant and vary x alone, then z is a function of x only. The derivative of z w.r.t. x is then
known as the partial derivative of z w.r.t. x and is denoted by ∂f/∂x, ∂z/∂x or Dx f.

The same procedure is followed by keeping x constant and varying y.

Thus,

Also

Sometimes, the following notations can also be used:

Total Derivative/ Change of Variable

If u = f(x, y) where x = φ(t) and y = ψ(t), then we can express u as the function of t alone by putting the value of x and y in u = f(x, y). Thus, we can find the
ordinary derivative du/dt which is known as the total derivative of u.

Now, to find the actual value of du/dt without substituting the values of x and y in f(x, y), we establish the following Chain rule:

If we put t = x, we get

• If f(x, y) = c is an implicit relation between x and y which defines y as a differentiable function of x then,

or

Jacobians

If u and v are functions of 2 independent variables x and y, then the determinant

is called the Jacobian of u and v w.r.t. x and y and is written as:

Similarly, the Jacobian of u, v and w w.r.t. x, y and z is:

Properties of Jacobians

• If J = ∂(u, v)/∂(x, y) and J' = ∂(x, y)/∂(u, v), then JJ' = 1.
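The property JJ' = 1 can be checked numerically (Python sketch with the illustrative pair u = x + y, v = x − y; `jacobian2` is my own central-difference helper):

```python
def jacobian2(f, p, h=1e-6):
    """Determinant of the 2x2 Jacobian of f: (x, y) -> (u, v) at point p,
    estimated by central differences."""
    x, y = p
    du_dx = (f(x + h, y)[0] - f(x - h, y)[0]) / (2 * h)
    du_dy = (f(x, y + h)[0] - f(x, y - h)[0]) / (2 * h)
    dv_dx = (f(x + h, y)[1] - f(x - h, y)[1]) / (2 * h)
    dv_dy = (f(x, y + h)[1] - f(x, y - h)[1]) / (2 * h)
    return du_dx * dv_dy - du_dy * dv_dx

fwd = lambda x, y: (x + y, x - y)              # (u, v) in terms of (x, y); J = -2
inv = lambda u, v: ((u + v) / 2, (u - v) / 2)  # inverse map; J' = -1/2

J = jacobian2(fwd, (1.3, 0.7))
Jp = jacobian2(inv, fwd(1.3, 0.7))
print(J * Jp)  # ≈ 1.0
```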

• Chain rule for Jacobians

• For implicit functions also,

• If u1, u2 and u3 are functions of x1, x2 and x3, then a functional relationship of the form f(u1, u2, u3) = 0 exists between them if and only if ∂(u1, u2, u3)/∂(x1, x2, x3) = 0.

Maxima & Minima

• For functions with ONE independent variable

• For functions with TWO independent variables

Vectors

A quantity which is completely specified by its magnitude as well as direction is known as a vector.

A vector of unit magnitude is called a unit vector. This idea is often used to concisely represent the direction of any vector. It is denoted by Â = A/|A|.

A vector of zero magnitude and no direction is known as the null or zero vector.

Two vectors are equal if they have the same magnitude and the same direction (i.e. are parallel to each other).

Vector addition: Vectors are added according to the triangle law of addition as shown below:

Let A and B be represented by the vectors OP and PQ respectively; then OQ = C is called the sum or resultant of A and B as shown below:

C = A + B

Vector subtraction: The subtraction of a vector B from A can be shown as the addition of - B to A and we can write,

A - B = A + (-B)

Multiplication of vectors by a scalar: The product mA of a vector A and a scalar m is a vector whose magnitude is |m| times that of A and whose direction is the same
as or opposite to that of A according as m is positive or negative.

Product of 2 vectors:

• Scalar/Dot Product: A · B = |A||B| cos θ   (î · î = ĵ · ĵ = k̂ · k̂ = 1)

 Orthogonal vectors have zero dot product.

• Vector/Cross Product: A × B = |A||B| sin θ n̂ = det | î ĵ k̂ ; ax ay az ; bx by bz |, where n̂ is the unit vector normal to the plane of A and B.

 Collinear vectors have zero cross product.

Vector Differentiation

Vector Differential Operator: It can be defined as ∇ = î·(∂/∂x) + ĵ·(∂/∂y) + k̂·(∂/∂z).

 Gradient (of a scalar field f): grad f = ∇f = î·(∂f/∂x) + ĵ·(∂f/∂y) + k̂·(∂f/∂z)

• Operation on Vectors - Divergence (dot product): If F = f î + φ ĵ + ψ k̂ then div F = ∇ · F = ∂f/∂x + ∂φ/∂y + ∂ψ/∂z.

If ∇ · F = 0, then the flow of fluid is incompressible.

• Operation on Vectors - Curl (cross product): If F = f î + φ ĵ + ψ k̂ then curl F = ∇ × F = det | î ĵ k̂ ; ∂/∂x ∂/∂y ∂/∂z ; f φ ψ |.

If ∇ × F = 0, then the flow of fluid is irrotational.

Properties of curl & divergence

If F = f î + φ ĵ + ψ k̂ then,

 curl(grad F) = 0

 div(curl F) = 0

 div(grad F) = ∇²F

 curl(curl F) = ∇ × (∇ × F) = ∇(∇ · F) − ∇²F

 grad(div F) = ∇(∇ · F) = ∇ × (∇ × F) + ∇²F

For a scalar field F (with r a scalar in the first identity and a vector in the next two):

 grad(Fr) = ∇(Fr) = F∇r + r∇F

 curl(Fr) = ∇ × (Fr) = ∇F × r + F(∇ × r)

 div(Fr) = ∇ · (Fr) = ∇F · r + F(∇ · r)

Vector Integration

• Line Integral

If C is a closed path then

• Surface Integral

Since,

And

Change of variable (Jacobian)

that is, the integrand is expressed in terms of u and v, and dxdy is replaced by dudv times the absolute value of the Jacobian.

Here, we assume that

Conversion of double integral to line integral (Green's Theorem)

Stokes' Theorem

where n̂ is the unit normal to the surface S, pointing outward (perpendicular to the surface).

• Volume Integral

Conversion of Volume integral to Surface integral (Gauss Divergence Theorem)

6. Differential Equations
09 September 2021 13:22

An ordinary differential equation (ODE) is an equation that contains one or several derivatives of an unknown function w.r.t. ONE independent
variable.

Order of differential equations: It is the order of the highest derivative appearing in the differential equation.

Degree of differential equations: It is the power of the highest-order derivative appearing in the differential equation after all radicals and
fractions have been removed from the derivative terms.

e.g.

Separable ODE

Generic form

On integrating both sides with respect to x, we get

On the left we can switch to y as the variable of integration. By calculus, y′ dx = dy, so that

Reduction to Variable Separable form (Homogeneous ODE)

Consider the following ODE,

Putting y = ux and differentiating, we get

On comparing these two equations, we get


Now, on applying the variable separable method, we get

Solution of Linear Differential Equations

A differential equation is said to be linear if the dependent variable and its derivatives occur ONLY in the FIRST degree and are not multiplied together.

Generic form

where P and Q are functions of x.

To solve this LDE, multiply both sides by e^(∫P dx), which is known as the integrating factor (I.F.), to get the solution in the form:

y·(I.F.) = ∫ Q·(I.F.) dx + C
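A quick check of the integrating-factor method (Python sketch; the example dy/dx + y = x with y(0) = 0 is my own choice): the closed form it produces agrees with a crude Euler march of the same equation.

```python
import math

# For dy/dx + y = x (P = 1, Q = x), the I.F. is e^x and the solution is
# y = x - 1 + C*e^(-x); the initial condition y(0) = 0 gives C = 1.
def exact(x):
    return x - 1 + math.exp(-x)

# Independent check: Euler's method on y' = x - y from y(0) = 0.
x, y, h = 0.0, 0.0, 1e-4
for _ in range(10000):
    y += h * (x - y)
    x += h

print(y, exact(1.0))  # both ≈ 0.3679
```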

Exact Differential Equations

A differential equation of the form M(x, y) dx + N(x, y) dy = 0 is said to be exact if ∂M/∂y = ∂N/∂x.

Their solution is of the form ∫ M dx (treating y as constant) + ∫ (terms of N not containing x) dy = C.

Second Order Linear ODE

Generic form

If it cannot be written in this form, it is non-linear.

Homogeneous Second Order Linear ODE

Homogeneous Second Order Linear ODE with constant coefficients

Consider the following differential equation

First it is written in terms of D by replacing the differential term with D; then its auxiliary/characteristic equation is obtained by putting D = m, as
shown below:

m² + am + b = 0

where m corresponds to y′ and m² to y″.

On solving this quadratic equation, we will get the values of m and depending on that, we get the solution of our differential equation as:

• If values of 'm' are real and distinct (say m1, m2)

The solution of the differential equation will be of form:


y = C1 e^(m1 x) + C2 e^(m2 x)

• If values of 'm' are real and equal (m1 = m2 = m)

The solution of the differential equation will be of form:


y = (C1 + C2 x) e^(mx)

• If values of 'm' are complex conjugates (A ± iB)

The solution of the differential equation will be of form:


y = (C1 sin Bx + C2 cos Bx) e^(Ax)
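The three cases above can be mechanised in a few lines (Python sketch; the string output format is my own):

```python
import cmath

def general_solution(a, b):
    """Classify the roots of m^2 + a*m + b = 0 and return the form of y."""
    disc = a * a - 4 * b
    m1 = (-a + cmath.sqrt(disc)) / 2
    m2 = (-a - cmath.sqrt(disc)) / 2
    if disc > 0:   # real and distinct roots
        return f"y = C1*e^({m1.real:g}x) + C2*e^({m2.real:g}x)"
    if disc == 0:  # real and equal roots
        return f"y = (C1 + C2*x)*e^({m1.real:g}x)"
    A, B = m1.real, abs(m1.imag)  # complex conjugate roots A ± iB
    return f"y = (C1*sin({B:g}x) + C2*cos({B:g}x))*e^({A:g}x)"

print(general_solution(-3, 2))  # m = 2, 1 (distinct real)
print(general_solution(2, 1))   # m = -1 repeated
print(general_solution(0, 4))   # m = ±2i
```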

How to solve these equations ?

Any differential equation has the following form of the solution:

y = Complementary Function (C.F.) + Particular Integral (P.I.)

Step-1: Find the Complementary Function (C.F.) of the given differential equation by the methods shown above.

Step-2: After finding the C.F. , check the RHS of the differential equation. If it is 0 then, y = C.F. will be the final solution of the given differential equation. If
not, then find the P.I. by following methods:

• Case-1: If RHS = e^(ax), then P.I. = e^(ax)/F(D); put D = a so that P.I. = e^(ax)/F(a). If F(a) = 0, then find F′(D) w.r.t. D and multiply the numerator by x. If this
also fails, find F″(D) and multiply the numerator by x².

• Case-2: If RHS = sin ax or cos ax, then P.I. = sin ax/F(D²); put D² = −a² and rationalise the denominator. If the denominator vanishes (F(−a²) = 0), then find
F′(D) w.r.t. D and multiply the numerator by x.

• Case-3: If RHS = x^m where m is a positive integer, then P.I. = x^m/F(D) = [F(D)]^(−1) x^m; expand [F(D)]^(−1) using the binomial expansion (terms up to D^m
suffice) and let the resulting operator act on x^m.

• Case-4: If RHS = V·e^(ax) where V is a function of x, then P.I. = V·e^(ax)/F(D) = e^(ax)·[V/F(D + a)], and then evaluate V/F(D + a) as in the cases above.
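A numerical sanity check of Case-1 (Python sketch; the example y″ − 3y′ + 2y = e^(3x), where F(3) = 2 gives P.I. = e^(3x)/2, is my own choice):

```python
import math

# For y'' - 3y' + 2y = e^(3x): F(D) = D^2 - 3D + 2, so F(3) = 2 and
# the particular integral is y_p = e^(3x)/2.
def yp(x):
    return math.exp(3 * x) / 2

def lhs(x, h=1e-5):
    """Apply the operator D^2 - 3D + 2 to y_p using central differences."""
    d1 = (yp(x + h) - yp(x - h)) / (2 * h)
    d2 = (yp(x + h) - 2 * yp(x) + yp(x - h)) / (h * h)
    return d2 - 3 * d1 + 2 * yp(x)

x = 0.4
print(lhs(x), math.exp(3 * x))  # the two values agree: y_p reproduces the RHS
```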

Cauchy - Euler Linear Differential Equation

This equation can be reduced to a linear differential equation with constant coefficients by putting x = e^t, so that D = d/dt and:

• x·(dy/dx) = Dy

• x²·(d²y/dx²) = D(D − 1)y

• x³·(d³y/dx³) = D(D − 1)(D − 2)y

and so on.
Then, solve by using the above cases for f(x).

Laplace Transformation

For any function f(t) defined for t ≥ 0, it can be defined as: L{f(t)} = F(s) = ∫₀^∞ e^(−st)·f(t) dt

Linearity of Laplace Transform

First Shift Theorem

or

Laplace Transform of Derivatives


In general,

Using these formulas for Laplace of derivatives, we can solve the differential equations by putting their values in the equation and applying the suitable
boundary conditions.

Laplace Transform of Integrals

Heaviside/ Unit Step Function

The unit step function u(t − a) is 0 for t < a and 1 for t ≥ a; u(t) is the special case a = 0.

Second Shift Theorem

or

Laplace of Integrals

Multiplication by t^n

Division by t

Laplace of Periodic Functions

f is periodic with period p, i.e. f(t + p) = f(t).

Important results


Partial Differential Equations

• 1 - D Wave Equation: ∂²u/∂t² = c²·(∂²u/∂x²)

Here, c² = T/m = constant


Solve this equation under the boundary conditions u(0, t) = u(L, t) = 0 and the initial conditions u(x, 0) = f(x) and ∂u/∂t = 0 at t = 0.

The general solution will be:

where,

e.g.


• 1 - D Heat Equation: ∂u/∂t = c²·(∂²u/∂x²)

Here, c² = Thermal Diffusivity = constant


Solve this equation under boundary conditions u(0, t) = u(L, t) = 0 and u(x, 0) = f(x) at t = 0.

The general solution will be:

where,

e.g.
