Detailed Exam Analysis: Chapter-Wise and Topic-Wise
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means, electronic, mechanical, photocopying, recording or scanning without the
written permission of the publisher.
Limits of Liability: While the publisher and the author have used their best efforts in preparing this book, Wiley and the author make no representations or warranties with respect to the accuracy or completeness of the contents of this book, and specifically disclaim any implied warranties of merchantability or fitness for any particular purpose. There are no warranties which extend beyond the descriptions contained in this paragraph. No warranty may be created or extended by sales representatives or written sales materials.
Disclaimer: The contents of this book have been checked for accuracy. Since deviations cannot be precluded entirely, neither Wiley nor the author can guarantee full agreement. As the book is intended for educational purposes, neither Wiley nor the author shall be responsible for any errors, omissions or damages arising out of the use of the information contained in the book. This publication is designed to provide accurate and authoritative information with regard to the subject matter covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services.
Edition: 2020
ISBN: 978-81-265-5869-8
ISBN: 978-81-265-8958-6 (ebk)
www.wileyindia.com
NOTE TO THE ASPIRANTS
The examination centres are spread in different cities across India, as well as in six cities outside India. The examination is purely a Computer Based Test.
7-Day free subscription for topic-wise GATE tests. Instant correction report with remedial action.
Validity: The GATE score is valid for THREE YEARS from the date of announcement of the results.
Chapter-wise marks distribution, GATE 2010 to GATE 2020 (each cell shows the number of 1-Mark / 2-Marks questions):

| S.No. | Chapter Name | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 | 2019 | 2020 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Engineering Mathematics | 5/5 | 3/5 | 4/5 | 3/3 | 5/3 | 4/2 | 5/4 | 4/2 | 5/4 | 4/3 | 5/5 |
| 2 | Electrical Circuits | 2/2 | 2/2 | 4/6 | 2/4 | 4/3 | 3/4 | 4/4 | 3/4 | 3/3 | 5/3 | 1/2 |
| 3 | Signals and Systems | 3/3 | 6/0 | 3/3 | 7/2 | 4/2 | 2/4 | 2/2 | 5/3 | 3/4 | 2/4 | 4/2 |
| 4 | Control Systems | 3/4 | 2/5 | 1/6 | 2/4 | 2/3 | 1/3 | 4/4 | 1/4 | 3/3 | 2/5 | 3/5 |
| 5 | Analog Electronics | 1/4 | 1/5 | 1/2 | 4/7 | 3/4 | 1/4 | 4/4 | 4/2 | 2/4 | 3/3 | 3/3 |
| 6 | Digital Electronics | 2/3 | 4/5 | 3/1 | 1/2 | 1/6 | 3/4 | 2/4 | 2/3 | 3/4 | 4/3 | 4/2 |
| 7 | Measurements | 2/3 | 0/2 | 3/1 | 0/2 | 0/1 | 3/2 | 2/4 | 3/4 | 2/4 | 1/2 | 2/5 |
| 8 | Sensors and Industrial Instrumentation | 4/6 | 6/5 | 3/3 | 4/5 | 4/5 | 5/3 | 1/2 | 2/4 | 2/1 | 2/2 | 3/4 |
| 9 | Communication and Optical Instrumentation | 3/2 | 1/1 | 3/3 | 3/1 | 1/2 | 3/7 | 2/3 | 1/4 | 2/3 | 2/5 | 0/2 |
CONTENTS
Chapter 5: Analog Electronics — 201
  Important Formulas — 201
  Questions — 210
  Answers with Explanation — 232
Appendix: Solved GATE (IN) 2020 — 453
CHAPTER ANALYSIS
Topic-wise number of questions, GATE 2010 to GATE 2019:
- Linear Algebra: 2, 1, 1, 3, 2, 1, 1, 4, 2, 1
- Calculus: 3, 2, 2, 2, 2, 5, 2, 3
- Differential Equation: 1, 2, 2, 3, 1, 1
- Analysis of Complex Variables: 2, 1, 2, 1, 1, 1, 1, 1
- Probability and Statistics: 2, 1, 2, 2, 2, 2, 3, 2
- Numerical Methods: 1, 1, 1
IMPORTANT FORMULAS
Linear Algebra

1. Types of Matrices
(a) Row matrix: A matrix having only one row is called a row matrix or a row vector. Therefore, for a row matrix, m = 1.
(b) Column matrix: A matrix having only one column is called a column matrix or a column vector. Therefore, for a column matrix, n = 1.
(c) Square matrix: A matrix in which the number of rows is equal to the number of columns, say n, is called a square matrix of order n.
(d) Diagonal matrix: A square matrix is called a diagonal matrix if all the elements except those in the leading diagonal are zero, that is, a_ij = 0 for all i ≠ j.
(e) Scalar matrix: A matrix A = [a_ij]_{n×n} is called a scalar matrix if (i) a_ij = 0 for all i ≠ j, and (ii) a_ii = c for all i, where c ≠ 0.
(f) Identity or unit matrix: A square matrix A = [a_ij]_{n×n} is called an identity or unit matrix if (i) a_ij = 0 for all i ≠ j, and (ii) a_ii = 1 for all i.
(g) Null matrix: A matrix whose elements are all zero is called a null matrix or a zero matrix.
(h) Upper triangular matrix: A square matrix A = [a_ij] is called an upper triangular matrix if a_ij = 0 for i > j.
(i) Lower triangular matrix: A square matrix A = [a_ij] is called a lower triangular matrix if a_ij = 0 for i < j.

2. Types of a Square Matrix
(a) Nilpotent matrix: A square matrix A is called a nilpotent matrix if there exists a positive integer n such that A^n = 0. If n is the least positive integer such that A^n = 0, then n is called the index of the nilpotent matrix A.
(b) Symmetric matrix: It is a square matrix in which a_ij = a_ji for all i and j. A symmetric matrix is necessarily square. If A is symmetric, then A^T = A.
(c) Skew-symmetric matrix: It is a square matrix in which a_ij = −a_ji for all i and j. In a skew-symmetric matrix, all elements along the diagonal are zero.
(d) Hermitian matrix: It is a square matrix A in which the (i, j)th element is equal to the complex conjugate of the (j, i)th element, i.e. a_ij = a*_ji for all i and j (* denotes complex conjugation).
(e) Skew-Hermitian matrix: It is a square matrix A = [a_ij] in which a_ij = −a*_ji for all i and j.
(f) Orthogonal matrix: A square matrix A is called an orthogonal matrix if AA^T = A^T A = I.

3. Equality of Matrices
Two matrices A = [a_ij]_{m×n} and B = [b_ij]_{x×y} are equal if
(a) m = x, that is, the number of rows in A equals the number of rows in B;
(b) n = y, that is, the number of columns in A equals the number of columns in B; and
(c) a_ij = b_ij for i = 1, 2, 3, …, m and j = 1, 2, 3, …, n.

4. Some of the important properties of matrix addition are:
(a) Commutativity: If A and B are two m × n matrices, then A + B = B + A, that is, matrix addition is commutative.
(b) Associativity: If A, B and C are three matrices of the same order, then (A + B) + C = A + (B + C), that is, matrix addition is associative.
(c) Existence of identity: The null matrix is the identity element for matrix addition. Thus, A + O = A = O + A.
(d) Existence of inverse: For every matrix A = [a_ij]_{m×n}, there exists a matrix [−a_ij]_{m×n}, denoted by −A, such that A + (−A) = O = (−A) + A.
(e) Cancellation laws: If A, B and C are matrices of the same order, then A + B = A + C ⇒ B = C and B + A = C + A ⇒ B = C.

5. Some important properties of matrix multiplication are:
(a) Matrix multiplication is not commutative.
(b) Matrix multiplication is associative, that is, (AB)C = A(BC).
(c) Matrix multiplication is distributive over matrix addition, that is, A(B + C) = AB + AC.
(d) If A is an m × n matrix, then I_m A = A = A I_n.
(e) The product of two matrices can be the null matrix while neither of them is the null matrix.

6. Some of the important properties of scalar multiplication are:
(a) k(A + B) = kA + kB
(b) (k + l)A = kA + lA
(c) (kl)A = k(lA) = l(kA)
(d) (−k)A = −(kA) = k(−A)
(e) 1 · A = A
(f) (−1) · A = −A
Here, A and B are two matrices of the same order, and k and l are scalars.
If A is a matrix and A² = A, then A is called an idempotent matrix. If A is a matrix and satisfies A² = I, then A is called an involutory matrix.

7. Some of the important properties of the transpose of a matrix are:
(a) For any matrix A, (A^T)^T = A.
(b) For any two matrices A and B of the same order, (A + B)^T = A^T + B^T.
(c) If A is a matrix and k is a scalar, then (kA)^T = k(A^T).
(d) If A and B are two matrices such that AB is defined, then (AB)^T = B^T A^T.

8. Some of the important properties of the inverse of a matrix are:
(a) A^(−1) exists only when A is non-singular, that is, |A| ≠ 0.
(b) The inverse of a matrix is unique.
(c) Reversal law: If A and B are invertible matrices of the same order, then (AB)^(−1) = B^(−1) A^(−1).
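The properties above can be checked mechanically. A minimal sketch (not from the book; matrix names are illustrative) that verifies the reversal law for transposes, commutativity of addition, and non-commutativity of multiplication for small integer matrices stored as nested lists:

```python
def matmul(A, B):
    """Multiply an m x n matrix by an n x p matrix (nested lists)."""
    n, p = len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 2]]

# Reversal law 7(d): (AB)^T = B^T A^T
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
# Commutativity of addition 4(a): A + B = B + A
assert add(A, B) == add(B, A)
# Multiplication is generally NOT commutative, property 5(a)
assert matmul(A, B) != matmul(B, A)
```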
(d) If A is an invertible square matrix, then (A^T)^(−1) = (A^(−1))^T.
(e) The inverse of an invertible symmetric matrix is a symmetric matrix.
(f) Let A be a non-singular square matrix of order n. Then |adj A| = |A|^(n−1).
(g) If A and B are non-singular square matrices of the same order, then adj (AB) = (adj B)(adj A).
(h) If A is an invertible square matrix, then adj A^T = (adj A)^T.
(i) If A is a non-singular square matrix, then adj (adj A) = |A|^(n−2) A.
(j) If A is a non-singular matrix, then |A^(−1)| = |A|^(−1), that is, |A^(−1)| = 1/|A|.
(k) Let A, B and C be three square matrices of the same type and A be non-singular. Then AB = AC ⇒ B = C and BA = CA ⇒ B = C.

9. The rank of a matrix A is commonly denoted by rank (A). Some of the important properties of the rank of a matrix are:
(a) The rank of a matrix is unique.
(b) The rank of a null matrix is zero.
(c) Every matrix has a rank.
(d) If A is a matrix of order m × n, then rank (A) ≤ min(m, n), the smaller of the two.
(e) If rank (A) = n, then every minor of order n + 1, n + 2, etc., is zero.
(f) If A is a non-singular matrix of order n × n, then rank (A) = n.
(g) Rank of I_n = n.
(h) A is a matrix of order m × n. If every kth order minor (k < m, k < n) is zero, then rank (A) < k.
(i) A is a matrix of order m × n. If there is a minor of order k (k < m, k < n) which is not zero, then rank (A) ≥ k.
(j) If A is a non-zero column matrix and B is a non-zero row matrix, then rank (AB) = 1.
(k) The rank of a matrix is greater than or equal to the rank of every sub-matrix.
(l) If A is any n-rowed square matrix of rank n − 1, then adj A ≠ 0.
(m) The rank of the transpose of a matrix is equal to the rank of the original matrix: rank (A) = rank (A^T).
(n) The rank of a matrix does not change by pre-multiplication or post-multiplication with a non-singular matrix.
(o) If A ~ B (A is equivalent to B), then rank (A) = rank (B).
(p) The rank of a product of two matrices cannot exceed the rank of either matrix: rank (AB) ≤ rank A and rank (AB) ≤ rank B.
(q) The rank of a sum of two matrices cannot exceed the sum of their ranks.
(r) Elementary transformations do not change the rank of a matrix.

10. Determinants
Every square matrix can be associated with an expression or a number which is known as its determinant. If A = [a_ij] is a square matrix of order n, then the determinant of A is denoted by det A or |A|. If A = [a11] is a square matrix of order 1, then the determinant of A is defined as
|A| = a11
If A = [a11 a12; a21 a22] is a square matrix of order 2, then the determinant of A is defined as
|A| = a11 a22 − a12 a21
If A = [a11 a12 a13; a21 a22 a23; a31 a32 a33] is a square matrix of order 3, then the determinant of A is defined as
|A| = a11 (a22 a33 − a23 a32) − a21 (a12 a33 − a13 a32) + a31 (a12 a23 − a13 a22)
or |A| = a11 (a22 a33 − a23 a32) − a12 (a21 a33 − a23 a31) + a13 (a21 a32 − a22 a31)

11. Minors
The minor M_ij of A = [a_ij] is the determinant of the square sub-matrix of order (n − 1) obtained by removing the ith row and jth column of the matrix A.

12. Cofactors
The cofactor C_ij of A = [a_ij] is equal to (−1)^(i+j) times the determinant of the sub-matrix of order (n − 1) obtained by leaving the ith row and jth column of A.
13. Some of the important properties of determinants are:
(a) The sum of the products of the elements of any row or column of a square matrix A = [a_ij] of order n with their cofactors is always equal to its determinant:
Σ_{i=1}^{n} a_ij C_ij = |A| = Σ_{j=1}^{n} a_ij C_ij
(b) The sum of the products of the elements of any row or column of a square matrix A = [a_ij] of order n with the cofactors of the corresponding elements of another row or column is zero:
Σ_{i=1}^{n} a_ij C_ik = 0 = Σ_{j=1}^{n} a_ij C_kj
(c) For a square matrix A = [a_ij] of order n, |A| = |A^T|.
(d) Consider a square matrix A = [a_ij] of order n ≥ 2 and B obtained from A by interchanging any two rows or columns of A; then |B| = −|A|.
(e) For a square matrix A = [a_ij] of order n ≥ 2, if any two rows or columns are identical, then its determinant is zero, that is, |A| = 0.
(f) If all the elements of any one row or column of a square matrix A = [a_ij] of order n are multiplied by a scalar k, then the determinant of the new matrix is equal to k|A|.
(g) Let A be a square matrix such that each element of a row or column of A is expressed as the sum of two or more terms. Then |A| can be expressed as the sum of the determinants of two or more matrices of the same order.
(h) Let A be a square matrix and B be a matrix obtained from A by adding to a row or column of A a scalar multiple of another row or column of A; then |B| = |A|.
(i) Let A be a square matrix of order n (≥ 2) which is a null matrix; then |A| = 0.
(j) Consider A = [a_ij] as a diagonal matrix of order n (≥ 2); then |A| = a11 × a22 × a33 × ⋯ × ann.
(k) Suppose A and B are square matrices of the same order; then |AB| = |A| · |B|.

14. There are two cases that arise for homogeneous systems:
(a) Matrix A is non-singular, or |A| ≠ 0. The homogeneous system then has the unique solution X = 0, that is, x1 = x2 = ⋯ = xn = 0.
(b) Matrix A is singular, or |A| = 0; then it has infinitely many solutions. To find the solutions when |A| = 0, put z = k (where k is any real number) and solve any two equations for x and y using the matrix method. The values obtained for x and y together with z = k give a solution of the system.

15. The method to solve a non-homogeneous system of simultaneous linear equations. Please note the number of unknowns and the number of equations.
(a) Given that A is a non-singular matrix, a system of equations represented by AX = B has the unique solution X = A^(−1) B.
(b) If AX = B is a system with the number of linear equations equal to the number of unknowns, then three cases arise:
  • If |A| ≠ 0, the system is consistent and has a unique solution given by X = A^(−1) B.
  • If |A| = 0 and (adj A)B = 0, the system is consistent and has infinitely many solutions.
  • If |A| = 0 and (adj A)B ≠ 0, the system is inconsistent.

16. Cramer's Rule
Suppose we have the following system of linear equations:
a1 x + b1 y + c1 z = k1
a2 x + b2 y + c2 z = k2
a3 x + b3 y + c3 z = k3
Now, if
Δ = |a1 b1 c1; a2 b2 c2; a3 b3 c3| ≠ 0
and
Δ1 = |k1 b1 c1; k2 b2 c2; k3 b3 c3|
Δ2 = |a1 k1 c1; a2 k2 c2; a3 k3 c3|
Δ3 = |a1 b1 k1; a2 b2 k2; a3 b3 k3|
then the solution of the system of equations is given by
x = Δ1/Δ, y = Δ2/Δ, z = Δ3/Δ
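Cramer's rule translates directly into code. A small sketch (the example system is mine, not from the book) for a 3 × 3 system, using the fractions module so the answers stay exact:

```python
from fractions import Fraction

def det3(m):
    """Determinant of a 3 x 3 matrix by the first-row expansion."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def cramer3(A, k):
    """Solve A [x, y, z]^T = k via x = D1/D, y = D2/D, z = D3/D."""
    D = det3(A)
    if D == 0:
        raise ValueError("Delta = 0: Cramer's rule does not apply")
    sol = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = k[i]      # replace the j-th column by the constants
        sol.append(Fraction(det3(Aj), D))
    return sol

# x + y + z = 6,  2y + 5z = -4,  2x + 5y - z = 27  ->  x = 5, y = 3, z = -2
A = [[1, 1, 1], [0, 2, 5], [2, 5, -1]]
k = [6, -4, 27]
assert cramer3(A, k) == [5, 3, -2]
```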
17. Augmented Matrix
Consider the following system of equations:
a11 x1 + a12 x2 + ⋯ + a1n xn = b1
a21 x1 + a22 x2 + ⋯ + a2n xn = b2
⋮
am1 x1 + am2 x2 + ⋯ + amn xn = bm
This system can be represented as AX = B, where
A = [a11 a12 ⋯ a1n; a21 a22 ⋯ a2n; ⋯; am1 am2 ⋯ amn], X = [x1; x2; ⋯; xn] and B = [b1; b2; ⋯; bm]
The matrix
[A : B] = [a11 a12 ⋯ a1n b1; a21 a22 ⋯ a2n b2; ⋯; am1 am2 ⋯ amn bm]
is called the augmented matrix.

18. Cayley–Hamilton Theorem
According to the Cayley–Hamilton theorem, every square matrix satisfies its own characteristic equation. Hence, if
|A − λI| = (−1)^n (λ^n + a1 λ^(n−1) + a2 λ^(n−2) + ⋯ + an)
is the characteristic polynomial of a matrix A of order n, then the matrix equation
A^n + a1 A^(n−1) + a2 A^(n−2) + ⋯ + an I = 0
is satisfied by A.

19. Eigenvalues and Eigenvectors
For a square matrix A, consider the equation AX = λX. To solve this problem, we need to determine the values of X and λ that satisfy this equation. Note that the zero vector (that is, X = 0) is not of our interest. A value of λ for which the equation has a solution X ≠ 0 is called an eigenvalue or characteristic value of the matrix A. The corresponding solutions X ≠ 0 of the equation are called the eigenvectors or characteristic vectors of A corresponding to that eigenvalue λ. The set of all the eigenvalues of A is called the spectrum of A. The largest of the absolute values of the eigenvalues of A is called the spectral radius of A. The sum of the elements of the principal diagonal of a matrix A is called the trace of A.

20. Properties of Eigenvalues and Eigenvectors
Some of the main characteristics of eigenvalues and eigenvectors are discussed in the following points:
(a) If λ1, λ2, λ3, …, λn are the eigenvalues of A, then kλ1, kλ2, kλ3, …, kλn are the eigenvalues of kA, where k is a constant scalar quantity.
(b) If λ1, λ2, λ3, …, λn are the eigenvalues of A, then 1/λ1, 1/λ2, 1/λ3, …, 1/λn are the eigenvalues of A^(−1).
(c) If λ1, λ2, λ3, …, λn are the eigenvalues of A, then λ1^k, λ2^k, λ3^k, …, λn^k are the eigenvalues of A^k.
(d) If λ1, λ2, λ3, …, λn are the eigenvalues of A, then |A|/λ1, |A|/λ2, |A|/λ3, …, |A|/λn are the eigenvalues of adj A.
(e) The eigenvalues of a matrix A are equal to the eigenvalues of A^T.
(f) The maximum number of distinct eigenvalues is n, where n is the size of the matrix A.
(g) The trace of a matrix is equal to the sum of the eigenvalues of the matrix.
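For a 2 × 2 matrix the characteristic equation is λ² − (tr A)λ + |A| = 0, so Cayley–Hamilton says A² − (tr A)A + |A| I = 0. A hedged sketch (example matrix is mine) checking this entry by entry, which also exercises property 20(g):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1],
     [1, 3]]
tr = A[0][0] + A[1][1]                      # trace = sum of eigenvalues, 20(g)
detA = A[0][0]*A[1][1] - A[0][1]*A[1][0]    # |A| = product of eigenvalues

A2 = matmul(A, A)
I = [[1, 0], [0, 1]]
# Cayley-Hamilton residual A^2 - (tr A) A + |A| I, entry by entry:
residual = [[A2[i][j] - tr * A[i][j] + detA * I[i][j] for j in range(2)]
            for i in range(2)]
assert residual == [[0, 0], [0, 0]]
```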
21. Rolle's Theorem
Consider a function f(x) defined on the closed interval [a, b], such that
(a) it is continuous on the closed interval [a, b];
(b) it is differentiable on the open interval (a, b); and
(c) f(a) = f(b).
Then, according to Rolle's theorem, there exists a real number c ∈ (a, b) such that f′(c) = 0.

22. Lagrange's Mean Value Theorem
Consider a function f(x) defined on the closed interval [a, b], such that
(a) it is continuous on the closed interval [a, b] and
(b) it is differentiable on the open interval (a, b).
Then, according to Lagrange's mean value theorem, there exists a real number c ∈ (a, b) such that
f′(c) = [f(b) − f(a)]/(b − a)

23. Cauchy's Mean Value Theorem
Consider two functions f(x) and g(x), such that
(a) f(x) and g(x) are both continuous in [a, b] and
(b) f′(x) and g′(x) both exist in (a, b), with g′(x) ≠ 0.
Then there exists a point c ∈ (a, b) such that
f′(c)/g′(c) = [f(b) − f(a)]/[g(b) − g(a)]

24. Taylor's Theorem
If f(x) is a continuous function such that f′(x), f″(x), …, f^(n−1)(x) are all continuous in [a, a + h] and f^(n)(x) exists in (a, a + h), where h = b − a, then according to Taylor's theorem,
f(a + h) = f(a) + h f′(a) + (h²/2!) f″(a) + ⋯ + [h^(n−1)/(n − 1)!] f^(n−1)(a) + (h^n/n!) f^(n)(a + θh), 0 < θ < 1

25. Maclaurin's Theorem
If the Taylor series obtained in Section 24 is centered at 0, then the series we obtain is called the Maclaurin series. According to Maclaurin's theorem,
f(h) = f(0) + h f′(0) + (h²/2!) f″(0) + ⋯ + [h^(n−1)/(n − 1)!] f^(n−1)(0) + (h^n/n!) f^(n)(θh), 0 < θ < 1

26. Maxima and Minima
Suppose f(x) is a real-valued function defined on an interval (a, b). Then f(x) is said to have a maximum value if there exists a point y in (a, b) such that
f(x) ≤ f(y) for all x ∈ (a, b)
Similarly, f(x) is said to have a minimum value if there exists a point y in (a, b) such that
f(x) ≥ f(y) for all x ∈ (a, b)
Local maxima and local minima of any function can be calculated as follows. Consider f(x) defined in (a, b) and y ∈ (a, b). Now,
(a) If f′(y) = 0 and f′(x) changes sign from positive to negative as x increases through y, then x = y is a point of local maximum value of f(x).
(b) If f′(y) = 0 and f′(x) changes sign from negative to positive as x increases through y, then x = y is a point of local minimum value of f(x).

27. Some important properties of maxima and minima are given as follows:
(a) If f(x) is continuous in its domain, then at least one maximum or one minimum lies between two equal values of x.
(b) Maxima and minima occur alternately, that is, no two maxima or two minima can occur together.

28. Maximum and minimum values in a closed interval [a, b] can be calculated using the following steps:
(a) Calculate f′(x).
(b) Put f′(x) = 0 and find the value(s) of x. Let c1, c2, …, cn be these values of x.
(c) Take the maximum and minimum values out of the values f(a), f(c1), f(c2), …, f(cn), f(b). The maximum and minimum values obtained are the absolute maximum and absolute minimum values of the function, respectively.

29. Partial Derivatives
Partial differentiation is used to find the partial derivatives of a function of more than one independent variable. The partial derivatives of f(x, y) with respect to x and y are defined by
∂f/∂x = lim_{a→0} [f(x + a, y) − f(x, y)]/a
∂f/∂y = lim_{b→0} [f(x, y + b) − f(x, y)]/b
provided the above limits exist.
∂f/∂x is simply the ordinary derivative of f with respect to x keeping y constant, while ∂f/∂y is the ordinary derivative of f with respect to y keeping x constant.
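The steps of item 28 can be sketched for a concrete function (my example, not the book's): for f(x) = x³ − 3x on [−2, 3], f′(x) = 3x² − 3 = 0 gives the critical points x = −1, 1, found analytically here rather than by code:

```python
def f(x):
    return x**3 - 3*x

# Step (b) roots of f'(x) = 0 are x = -1 and x = 1; step (c) compares
# f at the endpoints a = -2, b = 3 and at the critical points.
candidates = [-2, -1, 1, 3]
values = [f(x) for x in candidates]

abs_max = max(values)        # absolute maximum on [a, b]
abs_min = min(values)        # absolute minimum on [a, b]
assert abs_max == 18         # attained at x = 3
assert abs_min == -2         # attained at x = -2 and at x = 1
```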
Similarly, the second-order partial derivatives are
∂/∂x(∂f/∂x), ∂/∂x(∂f/∂y), ∂/∂y(∂f/∂x) and ∂/∂y(∂f/∂y)
denoted, respectively, by
∂²f/∂x², ∂²f/∂x∂y, ∂²f/∂y∂x and ∂²f/∂y²

30. Some of the important relations are given as follows:
(a) If u = f(x, y) and y = φ(x), then
du/dx = ∂u/∂x + (∂u/∂y)(dy/dx)
(b) If u = f(x, y) and x = φ1(t1, t2) and y = φ2(t1, t2), then
∂u/∂t1 = (∂u/∂x)(∂x/∂t1) + (∂u/∂y)(∂y/∂t1)
and ∂u/∂t2 = (∂u/∂x)(∂x/∂t2) + (∂u/∂y)(∂y/∂t2)

31. Some standard integrals (Integration → Result):
∫ sin²x dx = x/2 − (1/2) sin x cos x + C
∫ sin³x dx = −cos x + (1/3) cos³x + C
∫ sin^n x dx = −(1/n) sin^(n−1)x cos x + [(n − 1)/n] ∫ sin^(n−2)x dx + C
∫ cos²x dx = x/2 + (1/2) sin x cos x + C
∫ cos³x dx = sin x − (1/3) sin³x + C
∫ cos^n x dx = (1/n) cos^(n−1)x sin x + [(n − 1)/n] ∫ cos^(n−2)x dx + C
∫ tan^n x dx = tan^(n−1)x/(n − 1) − ∫ tan^(n−2)x dx + C
∫ dx/(x² − a²) = (1/2a) ln|(x − a)/(x + a)| + C
∫ dx/(a² + x²) = (1/a) tan⁻¹(x/a) + C
∫ dx/(ax + b)² = −1/[a(ax + b)] + C
∫ dx/(ax + b)^n = −1/[a(n − 1)(ax + b)^(n−1)] + C
∫ f′(x)/f(x) dx = ln f(x) + C
∫ x² sin nx dx = (1/n³)(−n²x² cos nx + 2 cos nx + 2nx sin nx) + C
∫ x² cos nx dx = (1/n³)(n²x² sin nx − 2 sin nx + 2nx cos nx) + C

32. Integration by Partial Fractions
The formulas which come handy while working with partial fractions are given as follows:
∫ dx/(x − a) = ln(x − a) + C
∫ dx/(a² + x²) = (1/a) tan⁻¹(x/a) + C
∫ x dx/(a² + x²) = (1/2) ln(a² + x²) + C
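Table entries like these can be spot-checked numerically. An illustrative sketch (not from the book) that compares the closed-form antiderivative F(x) of x² sin nx from the table against a composite-Simpson estimate of the definite integral over [0, 1]:

```python
import math

def F(x, n):
    """Antiderivative of x^2 sin(nx) from the table:
    (1/n^3)(-n^2 x^2 cos nx + 2 cos nx + 2 n x sin nx)."""
    return (-n**2 * x**2 * math.cos(n*x) + 2 * math.cos(n*x)
            + 2 * n * x * math.sin(n*x)) / n**3

def simpson(g, a, b, steps=1000):
    """Composite Simpson's rule; steps must be even."""
    h = (b - a) / steps
    s = g(a) + g(b)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

n = 2
numeric = simpson(lambda x: x**2 * math.sin(n * x), 0.0, 1.0)
exact = F(1.0, n) - F(0.0, n)
assert abs(numeric - exact) < 1e-9
```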
33. Integration Using Trigonometric Substitution
Trigonometric substitution is used to simplify certain integrals containing radical expressions. Depending on the function we need to integrate, we substitute one of the following expressions to simplify the integration:
(a) For √(a² − x²), use x = a sin θ.
(b) For √(a² + x²), use x = a tan θ.
(c) For √(x² − a²), use x = a sec θ.

34. Some of the important properties of definite integrals are given as follows:
(a) The value of a definite integral remains the same on changing the variable of integration.
∫_{−a}^{a} f(x) dx = 0, if the function f is odd.
(g) ∫_{0}^{na} f(x) dx = n ∫_{0}^{a} f(x) dx, if f(x) = f(x + a).

35. Some of the important properties of double integrals are:
(a) When x1, x2 are functions of y and y1, y2 are constants, then f(x, y) is integrated with respect to x keeping y constant within the limits x1, x2, and the resulting expression is integrated with respect to y between the limits y1, y2:
∫∫_Q f(x, y) dx dy = ∫_{y1}^{y2} ∫_{x1}^{x2} f(x, y) dx dy
(b) When y1, y2 are functions of x and x1, x2 are constants, f(x, y) is first integrated with respect to y, keeping x constant and between the limits y1, y2, and the resulting expression is integrated with respect to x within the limits x1, x2:
∫∫_Q f(x, y) dx dy = ∫_{x1}^{x2} ∫_{y1}^{y2} f(x, y) dy dx
(c) When x1, x2, y1 and y2 are constants, then
∫∫_Q f(x, y) dx dy = ∫_{y1}^{y2} ∫_{x1}^{x2} f(x, y) dx dy = ∫_{x1}^{x2} ∫_{y1}^{y2} f(x, y) dy dx

36. Change of Order of Integration
As already discussed, if the limits are constant,
∫_{y1}^{y2} ∫_{x1}^{x2} f(x, y) dx dy = ∫_{x1}^{x2} ∫_{y1}^{y2} f(x, y) dy dx

38. Fourier Series
Fourier series is a way to represent a wave-like function as a combination of sine and cosine waves. It decomposes any periodic function into the sum of a set of simple oscillating functions (sines and cosines). The Fourier series for the function f(x) in the interval α < x < α + 2π is given by
f(x) = a0/2 + Σ_{n=1}^{∞} an cos nx + Σ_{n=1}^{∞} bn sin nx
where
a0 = (1/π) ∫_{α}^{α+2π} f(x) dx
an = (1/π) ∫_{α}^{α+2π} f(x) cos nx dx
bn = (1/π) ∫_{α}^{α+2π} f(x) sin nx dx
The values of a0, an and bn are known as Euler's formulae.
For a function defined as φ1(x) on (α, c) and φ2(x) on (c, α + 2π), the sine coefficients are
bn = (1/π)[∫_{α}^{c} φ1(x) sin nx dx + ∫_{c}^{α+2π} φ2(x) sin nx dx]
At x = c, there is a finite jump in the graph of the function. Both the limits, the left-hand limit f(c − 0) and the right-hand limit f(c + 0), exist and are different. At such a point, the Fourier series gives the value of f(x) as the arithmetic mean of these two limits. Hence, at x = c,
f(x) = (1/2)[f(c − 0) + f(c + 0)]

41. Change of Interval
Till now, we have talked about functions having periods of 2π. However, often the period of the function required to be expanded is some other interval (say 2c). Then, the Fourier expansion is given as follows:
f(x) = a0/2 + Σ_{n=1}^{∞} an cos(nπx/c) + Σ_{n=1}^{∞} bn sin(nπx/c)
where
a0 = (1/c) ∫_{α}^{α+2c} f(x) dx
an = (1/c) ∫_{α}^{α+2c} f(x) cos(nπx/c) dx
bn = (1/c) ∫_{α}^{α+2c} f(x) sin(nπx/c) dx

42. Fourier Series Expansion of Even and Odd Functions
(a) When f(x) is an even function,
a0 = (1/c) ∫_{−c}^{c} f(x) dx = (2/c) ∫_{0}^{c} f(x) dx
an = (1/c) ∫_{−c}^{c} f(x) cos(nπx/c) dx = (2/c) ∫_{0}^{c} f(x) cos(nπx/c) dx
(b) Thus, if a periodic function f(x) is odd, its Fourier expansion contains only the sine terms bn.

43. Vectors
A vector is any quantity that has magnitude as well as direction. If we have two points A and B, then the vector between A and B is denoted by AB.
A position vector is the vector of any point A with respect to the origin O. If A is given by the coordinates x, y and z, then
OA = √(x² + y² + z²)

44. A zero vector is a vector whose initial and final points are the same. Zero vectors are denoted by 0. They are also called null vectors.

45. A unit vector is a vector whose magnitude is unity or one. It is in the direction of a given vector A and is denoted by Â.

46. Equal vectors are those which have the same magnitude and direction regardless of their initial points.

47. Addition of Vectors
According to the triangle law of vector addition, as shown in Fig. 1, if the vectors a and b form two sides of a triangle taken in order, the third side c satisfies
c = a + b

Figure 1 | Triangle law of vector addition.
Green's theorem in the plane:
∫∫_s (∂f1/∂x − ∂f2/∂y) dx dy = ∮_c (f2 dx + f1 dy)
A line integral along a curve c has the form ∫_c [A1 dx + A2 dy + A3 dz].

Differential Equations

Variables separable: the solution is ∫ f(x) dx = ∫ g(y) dy + C.

Linear equation dy/dx + Py = Q (P and Q functions of x):
Integrating factor (I.F.) = e^(∫P dx)
y e^(∫P dx) = ∫ Q e^(∫P dx) dx + C
⇒ y(I.F.) = ∫ Q(I.F.) dx + C

(d) If the given equation is of the form M dx + N dy = 0 and (1/M)(∂N/∂x − ∂M/∂y) = f(y), where f(y) is a function of y alone, then
I.F. = e^(∫f(y) dy)

Hence, the solution of Clairaut's equation is obtained on replacing p by c.

Case III: If the roots of the A.E. are complex, that is, α + iβ, α − iβ, m3, …, mn, then the corresponding part of the solution is e^(αx)(C1 cos βx + C2 sin βx) + C3 e^(m3 x) + ⋯ + Cn e^(mn x).
80. A number of the form x + iy, where x and y are real numbers and i = √(−1), is called a complex number. x is called the real part of x + iy and is written as R(x + iy), whereas y is called the imaginary part and is written as I(x + iy).

84. Logarithmic Function of Complex Variables
If z = x + iy and w = u + iv are related such that e^w = z, then w is said to be a logarithm of z to the base e and is written as
w = log_e z
Also, e^(w + 2inπ) = e^w · e^(2inπ) = z [∵ e^(2inπ) = 1]
⇒ log z = w + 2inπ

85. Cauchy–Riemann Equations
A necessary condition for w = u(x, y) + iv(x, y) to be analytic in a region R is that u and v satisfy the following equations:
∂u/∂x = ∂v/∂y
∂u/∂y = −∂v/∂x
The above equations are called the Cauchy–Riemann equations. If the partial derivatives in the above equations are continuous in R, the equations are sufficient conditions for f(z) to be analytic in R.
The derivative of f(z) is then given by
f′(z) = ∂u/∂x + i(∂v/∂x)

87. Some of the important results that can be concluded are:
(a) The line integral of a function f(z), which is analytic in the region R, is independent of the path joining any two points of R.
(b) Extension of Cauchy's theorem: If f(z) is analytic in the region R between the two simple closed curves c and c1, then
∫_c f(z) dz = ∫_{c1} f(z) dz
(c) If c1, c2, c3, … are any number of closed curves within c, then
∫_c f(z) dz = ∫_{c1} f(z) dz + ∫_{c2} f(z) dz + ∫_{c3} f(z) dz + ⋯

88. Cauchy's Integral Formula
If f(z) is analytic within and on a simple closed curve c and α is any point interior to c, then
f(α) = (1/2πi) ∮_c f(z)/(z − α) dz
Consider f(z)/(z − α) in the above equation, which is analytic at all points within c except at z = α. Now, with α as center and r as radius, draw a small circle c1 lying entirely within c.
Generally, we can write
f^(n)(α) = (n!/2πi) ∮_c f(z)/(z − α)^(n+1) dz

In the neighbourhood of an isolated singularity z = α, f(z) can be expanded in a series of the form
f(z) = Σ_{n=0}^{∞} an (z − α)^n + a_{−1}(z − α)^(−1) + a_{−2}(z − α)^(−2) + ⋯
where an = (1/2πi) ∮ f(t)/(t − α)^(n+1) dt.

91. Zeros and Poles of an Analytic Function
(a) A zero of an analytic function f(z) is the value of z for which f(z) = 0.
(b) A singular point of a function f(z) is a value of z at which f(z) fails to be analytic.
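Cauchy's integral formula can be illustrated numerically (my example, not from the book): with f(z) = e^z and α = 0, integrating f(z)/(z − α) around the unit circle and dividing by 2πi should recover f(0) = 1:

```python
import cmath
import math

def contour_integral(g, samples=5000):
    """Integrate g(z) dz over the unit circle z = e^{it}, 0 <= t < 2*pi,
    using midpoint samples (dz = i e^{it} dt)."""
    total = 0.0 + 0.0j
    h = 2 * math.pi / samples
    for i in range(samples):
        t = (i + 0.5) * h
        z = cmath.exp(1j * t)
        total += g(z) * (1j * z * h)
    return total

alpha = 0.0
value = contour_integral(lambda z: cmath.exp(z) / (z - alpha)) / (2j * math.pi)
assert abs(value - 1.0) < 1e-6        # f(0) = e^0 = 1
```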
92. Residues
The coefficient of (z − α)^(−1) in the expansion of f(z) around an isolated singularity is called the residue of f(z) at that point.
It can be found from the formula
a_{−1} = lim_{z→α} [1/(n − 1)!] d^(n−1)/dz^(n−1) [(z − α)^n f(z)]
where n is the order of the pole.
The residue of f(z) at z = α can also be found from
Res f(α) = (1/2πi) ∮_c f(z) dz

93. Residue Theorem
If f(z) is analytic in a region R except for a pole of order n at z = α, and C is a simple closed curve in R containing z = α, then
∮_c f(z) dz = 2πi × (sum of the residues at the singular points within C)

94. Calculation of Residues
(a) If f(z) has a simple pole at z = α, then
Res f(α) = lim_{z→α} [(z − α) f(z)]
(b) If f(z) = φ(z)/ψ(z), where ψ(z) = (z − α) F(z) with F(α) ≠ 0, then
Res f(α) = φ(α)/ψ′(α)
(c) If f(z) has a pole of order n at z = α, then
Res f(α) = [1/(n − 1)!] {d^(n−1)/dz^(n−1) [(z − α)^n f(z)]}_{z=α}

Probability and Statistics

95. Types of Events
(a) Each outcome of a random experiment is called an elementary event.
(b) An event associated with a random experiment that always occurs whenever the experiment is performed is called a certain event.
(c) An event associated with a random experiment that never occurs whenever the experiment is performed is called an impossible event.
(d) If the occurrence of any one of two or more events, associated with a random experiment, prevents the occurrence of all others, then the events are called mutually exclusive events.
(e) If the union of two or more events associated with a random experiment includes all possible outcomes, then the events are called exhaustive events.
(f) If the occurrence or non-occurrence of one event does not affect the probability of the occurrence or non-occurrence of the other, then the events are independent.
(g) Two events are equally likely events if the probability of their occurrence is the same.
(h) An event which has a probability of occurrence equal to 1 − P, where P is the probability of occurrence of an event A, is called the complementary event of A.

96. Axioms of Probability
(a) The numerical value of probability lies between 0 and 1. Hence, for any event A of S, 0 ≤ P(A) ≤ 1.
(b) The sum of probabilities of all sample events is unity. Hence, P(S) = 1.
(c) The probability of an event made of two or more sample events is the sum of their probabilities.

97. Conditional Probability
Let A and B be two events of a random experiment. The probability of occurrence of A if B has already occurred and P(B) ≠ 0 is known as conditional probability. This is denoted by P(A/B). Also, conditional probability can be defined as the probability of occurrence of B if A has already occurred and P(A) ≠ 0. This is denoted by P(B/A).

98. Geometric Probability
Due to the nature of the problem or the solution or both, random events that take place in a continuous sample space may invoke geometric imagery. Hence, geometric probabilities can be considered as non-negative quantities, with maximum value 1, assigned to subregions of a given domain subject to certain rules. If P is an expression of this assignment defined on a domain S, then
0 < P(A) ≤ 1, A ⊂ S and P(S) = 1
The subsets of S for which P is defined are the random events that form a particular sample space. P is defined by the ratio of the areas, so that if σ(A) is defined as the area of set A, then
P(A) = σ(A)/σ(S)

99. Rules of Probability
Some of the important rules of probability are given as follows:
(a) Inclusion–Exclusion principle of probability:
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
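The inclusion–exclusion rule can be verified by brute force on a finite sample space (my example): for a fair die, take A = "even" and B = "greater than 3":

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {x for x in S if x % 2 == 0}       # {2, 4, 6}
B = {x for x in S if x > 3}            # {4, 5, 6}

def P(E):
    """Classical probability: favourable outcomes over total outcomes."""
    return Fraction(len(E), len(S))

# P(A u B) = P(A) + P(B) - P(A n B)
assert P(A | B) == P(A) + P(B) - P(A & B)
assert P(A | B) == Fraction(4, 6)      # A u B = {2, 4, 5, 6}
```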
    P(A) = P(A/B1)P(B1) + P(A/B2)P(B2) + ⋯ + P(A/Bn)P(Bn)

(d) Conditional probability rule:

    P(A ∩ B) = P(B) · P(A/B)
    ⇒ P(A/B) = P(A ∩ B)/P(B)
    or P(B/A) = P(A ∩ B)/P(A)

(e) Bayes' theorem: Suppose we have an event A corresponding to a number of exhaustive events B1, B2, …, Bn. If P(Bi) and P(A/Bi) are given, then

    P(Bi/A) = P(Bi) P(A/Bi) / Σ P(Bi) P(A/Bi)

(f) Rule of total probability: Consider an event E which occurs via two different events A and B. Also, let A and B be mutually exclusive and collectively exhaustive events. Now, the probability of E is given as

    P(E) = P(A ∩ E) + P(B ∩ E)

101. Median

Median for Raw Data
Suppose we have n ungrouped/raw values x1, x2, …, xn. To calculate the median, arrange all the values in ascending or descending order.
Now, if n is odd, then median = [(n + 1)/2]th value.
If n is even, then median = [(n/2)th value + (n/2 + 1)th value]/2.

Median for Grouped Data
To calculate the median of grouped values, identify the class containing the middle observation. Then

    Median = L + {[(N + 1)/2 − (F + 1)]/f_m} × h

where L = lower limit of the median class
N = total number of data items = Σf
F = cumulative frequency of the class immediately preceding the median class
f_m = frequency of the median class
h = width of the median class
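The rule of total probability, Bayes' theorem and the median rules can be sketched as follows. The priors, likelihoods and data values below are made-up illustrative numbers, not taken from the text:

```python
# Illustrative check of total probability, Bayes' theorem and the median.

priors = [0.5, 0.3, 0.2]          # P(B1), P(B2), P(B3): exhaustive events
likelihoods = [0.02, 0.03, 0.05]  # P(A/B1), P(A/B2), P(A/B3)

# Total probability: P(A) = sum of P(Bi) * P(A/Bi)
p_A = sum(p * l for p, l in zip(priors, likelihoods))

# Bayes' theorem: P(Bi/A) = P(Bi) P(A/Bi) / P(A)
posteriors = [p * l / p_A for p, l in zip(priors, likelihoods)]
assert abs(sum(posteriors) - 1.0) < 1e-12   # posteriors form a distribution

def median(values):
    """Median of ungrouped/raw values using the (n + 1)/2 rule."""
    s, n = sorted(values), len(values)
    if n % 2 == 1:                            # n odd: ((n+1)/2)th value
        return s[(n + 1) // 2 - 1]
    return (s[n // 2 - 1] + s[n // 2]) / 2    # n even: mean of middle two

print(round(p_A, 3), median([7, 1, 5, 3, 9]), median([1, 2, 3, 4]))
```

For these numbers P(A) = 0.029, and the two medians are 5 and 2.5 respectively.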
(c) Obtain the total of these deviations, that is, Σ|xi − x̄|.
(d) Divide the total obtained in step 3 by the number of observations.

Mean Deviation of Discrete Frequency Distribution
For a frequency distribution, the mean deviation is given by

    M.D. = (1/N) Σ fi |xi − x̄|

where N = Σ fi.
The following steps should be followed to calculate the mean deviation of a discrete frequency distribution:
(a) Calculate the central value or average 'A' of the given frequency distribution about which the mean deviation is to be calculated.
(b) Take the mod of the deviations of the observations from the central value, that is, |xi − x̄|.

(d) Divide the sum by n to obtain the value of variance, that is,

    σ² = (1/n) Σ (xi − X̄)²

(e) Take out the square root of the variance to obtain the standard deviation,

    σ = √[(1/n) Σ (xi − X̄)²]

Standard Deviation of Discrete Frequency Distribution
If we have a discrete frequency distribution of X, then

    σ² = (1/N) [Σ fi (xi − X̄)²]

    σ = √{(1/N) [Σ fi (xi − X̄)²]}
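The frequency-distribution formulas above can be sketched numerically. The data values and frequencies here are made-up illustrative numbers:

```python
# Mean deviation and standard deviation of a discrete frequency
# distribution, following the formulas above (illustrative data).

xs = [2, 4, 6, 8]        # observed values x_i
fs = [1, 3, 4, 2]        # frequencies f_i

N = sum(fs)
mean = sum(f * x for x, f in zip(xs, fs)) / N

# M.D. = (1/N) * sum of f_i |x_i - mean|
md = sum(f * abs(x - mean) for x, f in zip(xs, fs)) / N

# sigma^2 = (1/N) * sum of f_i (x_i - mean)^2 ; sigma = sqrt(sigma^2)
var = sum(f * (x - mean) ** 2 for x, f in zip(xs, fs)) / N
sigma = var ** 0.5

print(mean, md, sigma)
```

For this data the mean is 5.4, the mean deviation 1.52 and the standard deviation 1.8.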
The binomial distribution occurs when the experiment performed satisfies the following four assumptions of Bernoulli trials:
(a) They are finite in number.
(b) There are exactly two outcomes: success or failure.
(c) The probability of success or failure remains the same in each trial.
(d) They are independent of each other.
The probability of obtaining x successes from n trials is given by the binomial distribution formula

    P(X) = nCx p^x (1 − p)^(n−x)

where p is the probability of success in any trial and (1 − p) is the probability of failure.

114. Poisson Distribution
Poisson distribution is a distribution related to the probabilities of events which are extremely rare but which have a large number of independent opportunities for occurrence.
A random variable X, taking on one of the values 0, 1, 2, …, is said to be a Poisson random variable with parameter m if, for some m > 0,

    P(x) = e^(−m) m^x / x!

For the Poisson distribution,

    Mean = E(x) = m
    Variance = V(x) = m

Therefore, the expected value and the variance of a Poisson random variable are both equal to its parameter m.

For the hypergeometric distribution,

    Σ p(x) = 1, since Σ mCx · nC(y−x) = (m+n)Cy

116. Geometric Distribution
Consider repeated trials of a Bernoulli experiment E with probability of success p and probability of failure q = 1 − p. Let x denote the number of times E must be repeated until finally obtaining a success. The distribution is

    p(x) = q^x p,  x = 0, 1, 2, …,  q = 1 − p

Also,

    Σ_(x=0)^∞ P(x) = p Σ_(x=0)^∞ q^x = p · 1/(1 − q) = 1

The mean of the geometric distribution = q/p.
The variance of the geometric distribution = q/p².

117. General Continuous Distribution
When a random variable X takes all possible values in an interval, the distribution is called a continuous distribution of X.
A continuous distribution of X can be defined by a probability density function f(x), which satisfies

    P(−∞ ≤ X ≤ ∞) = ∫_(−∞)^(∞) f(x) dx = 1

The expectation for a general continuous distribution is given by

    E(x) = ∫_(−∞)^(∞) x f(x) dx
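The binomial, Poisson and geometric formulas above can be verified numerically with the standard library. The parameter values (n = 10, p = 0.3, m = 2.5) are arbitrary illustrative choices:

```python
# Quick numerical checks of the binomial, Poisson and geometric pmfs.
from math import comb, exp

n, p = 10, 0.3
binom = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]
assert abs(sum(binom) - 1.0) < 1e-12                 # pmf sums to 1
mean_binom = sum(x * q for x, q in enumerate(binom))
assert abs(mean_binom - n * p) < 1e-9                # mean = np

m = 2.5
term, poisson = exp(-m), []                          # P(0) = e^-m
for x in range(60):                                  # 60 terms: tail negligible
    poisson.append(term)
    term *= m / (x + 1)                              # P(x+1) = P(x) * m/(x+1)
mean_poisson = sum(x * q for x, q in enumerate(poisson))
assert abs(mean_poisson - m) < 1e-9                  # mean = variance = m

q = 1 - p
geom = [q**x * p for x in range(2000)]               # p(x) = q^x p
assert abs(sum(geom) - 1.0) < 1e-9
mean_geom = sum(x * g for x, g in enumerate(geom))
assert abs(mean_geom - q / p) < 1e-9                 # mean = q/p
print("all pmf checks passed")
```

The Poisson pmf is built recursively to avoid overflowing `factorial(x)` conversions for large x.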
This distribution is known as hypergeometric distribution.

The mean of the uniform distribution is given by

    E(x) = ∫_a^b x f(x) dx = [1/(b − a)] [x²/2]_a^b = (a + b)/2

In the uniform distribution, x takes all its values with the same probability.
The variance of the uniform distribution is given by

    V(x) = σ² = (b − a)²/12

119. Exponential Distribution
If the density of a random variable x for λ > 0 is given by

    f(x) = λ e^(−λx)  if x ≥ 0
         = 0          if x < 0

then the distribution is called the exponential distribution with parameter λ.
The cumulative distribution function F(a) of an exponential random variable is given by

    F(a) = P(x ≤ a) = ∫_0^a λ e^(−λx) dx = (−e^(−λx))|_0^a = 1 − e^(−λa),  a ≥ 0

The mean of the exponential distribution is given by E(x) = 1/λ.
The variance of the exponential distribution is given by V(x) = 1/λ².

120. Normal Distribution
A random variable X is a normal random variable with parameters μ and σ², if the probability density function is given by

    f(x) = [1/(σ√(2π))] e^(−(x − μ)²/2σ²),  −∞ < x < ∞

where μ is the mean and σ is the standard deviation of the normal distribution.

Numerical Methods

121. Gauss Elimination Method
The Gauss elimination method is a basic method of obtaining solutions. In this method, the unknowns are eliminated successively, and the system is reduced to an upper triangular system from which the unknowns are found by back substitution.
Now, consider the following equations:

    a1x + b1y + c1z = d1    (5)
    a2x + b2y + c2z = d2    (6)
    a3x + b3y + c3z = d3    (7)

Step 1: Eliminate x from Eqs. (6) and (7)
Assuming a1 ≠ 0, we eliminate x from Eq. (6) by subtracting (a2/a1) times Eq. (5) from Eq. (6). Similarly, we eliminate x from Eq. (7) by subtracting (a3/a1) times Eq. (5) from Eq. (7). Hence, the new set of equations is given by

    a1x + b1y + c1z = d1    (8)
    b2′y + c2′z = d2′       (9)
    b3′y + c3′z = d3′       (10)

Here, Eq. (8) is called the pivotal equation and a1 is called the first pivot.

Step 2: Eliminate y from Eq. (10)
Assuming b2′ ≠ 0, we eliminate y from Eq. (10) by subtracting (b3′/b2′) times Eq. (9) from Eq. (10). Hence, the new set of equations is given by

    a1x + b1y + c1z = d1    (11)
    b2′y + c2′z = d2′       (12)
    c3″z = d3″              (13)

Now, Eq. (12) is called the pivotal equation and b2′ is called the new pivot.

Step 3: Evaluate the unknowns
The values of x, y and z are found from Eqs. (11), (12) and (13) by back substitution.

122. Matrix Decomposition Methods (LU Decomposition Method)
In this section, we will discuss some more numerical methods for solving linear systems of n equations in n unknowns x1, …, xn,

    AX = B    (14)

where A is the n × n coefficient matrix, X = [x1 … xn]ᵀ and B = [b1 … bn]ᵀ. This method is based on the fact that every matrix can be expressed as the product of a lower and an upper triangular matrix, provided all the principal minors are non-zero.
If we assume |a1| ≥ |b1| + |c1|, |b2| ≥ |a2| + |c2| and |c3| ≥ |a3| + |b3| to be true, then the iterative method can be used for the above system to find the values of x, y and z.

    y^(1) = (1/b2)(d2 − a2 x^(1) − c2 z^(0))    (33)

Step 5: Now substitute x^(1) for x and y^(1) for y in the third equation.

    z^(1) = (1/c3)(d3 − a3 x^(1) − b3 y^(1))    (34)

Step 6: In finding the values of the unknowns, we use the latest available values on the R.H.S. If x^(r), y^(r), z^(r) are the rth iterates, then the iteration scheme will be

    x^(r+1) = (1/a1)(d1 − b1 y^(r) − c1 z^(r))        (35)
    y^(r+1) = (1/b2)(d2 − a2 x^(r+1) − c2 z^(r))      (36)
    z^(r+1) = (1/c3)(d3 − a3 x^(r+1) − b3 y^(r+1))    (37)

This process is continued till convergence is achieved.

127. Bisection Method
We begin the iterative cycle by choosing two trial points x0 and x1, which enclose the actual root. Then f(x0) and f(x1) are of opposite signs. The interval (x0, x1) is bisected and its midpoint x2 is obtained as

    x2 = (x1 + x0)/2

Figure 7 | Illustration of bisection method.

128. Regula–Falsi Method (Method of False Position)
The iterative procedure is started by choosing two values x0 and x1 such that f(x0) and f(x1) are of opposite signs. Then the two points [x0, f(x0)] and [x1, f(x1)] are joined by a straight line. The intersection of this line with the x-axis gives x2. If f(x2) and f(x0) are of opposite signs, then replace x1 by x2; otherwise, replace x0 by x2. This yields a new set of values for x0 and x1. The present range is much smaller than the range or interval between the first chosen set of x0 and x1. The convergence is thus established, and the iterations are carried over with the new set of x0 and x1. Another x2 is found by the intersection of the straight line joining the new f(x0) and f(x1) points with the x-axis. Each new or successive interval is smaller than the previous interval, and it is guaranteed to converge to the root.
The procedure is illustrated in Fig. 8.

Figure 8 | Illustration of Regula–Falsi method.

From Fig. 8, it can be seen (from the equation of the straight line) that

    y − f(x0) = [(f(x1) − f(x0))/(x1 − x0)](x − x0)

from which

    x2 = x0 − f(x0)(x1 − x0)/(f(x1) − f(x0))

which is an approximation to the root. We use the new value of x as x2 and repeat the same process using x1 and x2 instead of x0 and x1. This process is continued in the same manner until we obtain xn = xn−1.

129. Newton–Raphson Method
In the third step, x3 is obtained from x2 again by the same formula, and so on.

Figure 9 | Newton–Raphson method.

130. Secant Method
The secant method can be thought of as a finite difference approximation of the Newton–Raphson method. Here

    x2 = x1 − f(x1)(x1 − x0)/(f(x1) − f(x0))
    x3 = x2 − f(x2)(x2 − x1)/(f(x2) − f(x1))

The generalized solution for the secant method is

    xn = xn−1 − f(xn−1)(xn−1 − xn−2)/(f(xn−1) − f(xn−2))

131. Jacobian
If u and v are functions of two independent variables x and y, then the determinant

    | ∂u/∂x  ∂u/∂y |
    | ∂v/∂x  ∂v/∂y |

is called the Jacobian of u, v with respect to x, y and is written as ∂(u, v)/∂(x, y) or J(u, v / x, y).

132. Coefficient of Correlation
Coefficient of correlation is defined as the numerical measure of correlation and can be calculated by the following relation:

    r = ΣXY/(n σx σy)

where X is the deviation from the mean (x − x̄), Y is the deviation from the mean (y − ȳ), σx is the standard deviation of the x-series, σy is the standard deviation of the y-series, and n is the number of values of the two variables.
Coefficient of correlation for grouped data can be calculated using a similar relation, where dx is the deviation of the central values from the assumed mean of the x-series, dy is the deviation of the central values from the assumed mean of the y-series, f is the frequency corresponding to the pair (x, y) and n is the total number of frequencies (= Σf).

133. Lines of Regression
Sometimes, the dots of the scatter diagram tend to cluster along a well-defined direction, which suggests a linear relationship between the variables x and y, as shown in Fig. 11. Such a line giving the best fit for the given distribution of dots is known as the line of regression.
The line giving the best possible mean values of y for each specified value of x is called the line of regression of y on x, and the line giving the best possible mean values of x for each specified value of y is called the line of regression of x on y.
The regression coefficient of y on x is r(σy/σx).
The regression coefficient of x on y is r(σx/σy).

Figure 11 | Lines of regression: y = a + bx (regression of y on x) and x = a′ + b′y (regression of x on y).
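The bisection and secant iterations described above can be sketched in Python. The function f(x) = x³ − x − 2 is a made-up example whose real root lies between x0 = 1 and x1 = 2 (f changes sign on that interval):

```python
# Sketches of the bisection and secant iterations from the text.

def f(x):
    return x**3 - x - 2

# Bisection: halve the bracketing interval at x2 = (x0 + x1)/2 and keep
# the half on which f changes sign.
x0, x1 = 1.0, 2.0
for _ in range(60):
    x2 = (x0 + x1) / 2
    if f(x0) * f(x2) < 0:
        x1 = x2                      # root lies in (x0, x2)
    else:
        x0 = x2                      # root lies in (x2, x1)
root_bisect = (x0 + x1) / 2

# Secant: x_n = x_{n-1} - f(x_{n-1})(x_{n-1} - x_{n-2})/(f(x_{n-1}) - f(x_{n-2}))
a, b = 1.0, 2.0
for _ in range(20):
    fa, fb = f(a), f(b)
    if fb == fa:                     # stop once converged (avoid 0/0)
        break
    a, b = b, b - fb * (b - a) / (fb - fa)
root_secant = b

print(round(root_bisect, 6), round(root_secant, 6))
```

Both iterations settle on the same root near x ≈ 1.5214; the secant method reaches it in far fewer iterations, as expected from its finite-difference Newton–Raphson character.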
QUESTIONS
(a) 0 (b) 1

6. Let A be a 3 × 3 matrix with rank 2. Then AX = 0 has
(a) only the trivial solution X = 0
(b) one independent solution
(c) two independent solutions
(d) three independent solutions
(GATE 2005: 1 Mark)

10. Which of the following statements is true?
(a) x is a null vector (b) x is unique
(c) x does not exist (d) x has infinitely many values
(GATE 2006: 2 Marks)

11. Let A be an n × n real matrix such that A² = I and y be an n-dimensional vector. Then the linear system of equations Ax = y has
(a) no solution
(b) a unique solution
(c) more than one but finitely many independent solutions
(d) infinitely many independent solutions
(GATE 2007: 1 Mark)

12. Let A = [ai,j], 1 ≤ i, j ≤ n, with n ≥ 3 and aij = i·j

13. Let P ≠ 0 be a 3 × 3 real matrix. There exist linearly independent vectors x and y such that Px = 0 and Py = 0. The dimension of the range space of P is
(a) 0 (b) 1 (c) 2 (d) 3
(GATE 2009: 1 Mark)

14. The eigenvalues of a (2 × 2) matrix X are −2 and −3. The eigenvalues of the matrix (X + I)⁻¹(X + 5I) are
(a) −3, −4 (b) −1, −2 (c) −1, −3 (d) −2, −4
(GATE 2009: 2 Marks)

15. The matrix P = [0 0 1; 1 0 0; 0 1 0] rotates a vector about the axis [1 1 1]ᵀ by an angle of
(a) 30° (b) 60° (c) 90° (d) 120°
(GATE 2009: 2 Marks)

16. A real n × n matrix A = [aij] is defined as follows:

17. X and Y are non-zero square matrices of size n × n. If XY = 0n×n, then
(a) |X| = 0 and |Y| ≠ 0 (b) |X| ≠ 0 and |Y| = 0
(c) |X| = 0 and |Y| = 0 (d) |X| ≠ 0 and |Y| ≠ 0
(GATE 2010: 2 Marks)

18. The matrix M = [−2 2 −3; 2 1 −6; −1 −2 0] has eigenvalues −3, −3, 5. An eigenvector corresponding to the eigenvalue 5 is [1 2 −1]ᵀ. One of the eigenvectors of the matrix
(GATE 2011: 1 Mark)

19. Given that A = [−5 −3; 2 0] and I = [1 0; 0 1], the value of A³ is
(a) 15A + 12I (b) 19A + 30I
(c) 17A + 15I (d) 17A + 21I
(GATE 2012: 2 Marks)

20. The dimension of the null space of the matrix [0 1 1; 1 −1 0; −1 0 −1] is
(a) 0 (b) 1 (c) 2 (d) 3
(GATE 2013: 1 Mark)

21. If the A matrix of the state space model of a SISO linear time invariant system is rank deficient, the transfer function of the system must have
(a) a pole with a positive real part
(b) a pole with a negative real part
(c) a pole with a positive imaginary part
(d) a pole at the origin
(GATE 2013: 1 Mark)

25. Let A be an n × n matrix with rank r (0 < r < n). Then Ax = 0 has p independent solutions, where p is
(a) r (b) n (c) n − r (d) n + r
(GATE 2015: 1 Mark)

31. Let N be a 3 by 3 matrix with real number entries. The matrix N is such that N² = 0. The eigenvalues of N are
(a) 0, 0, 0 (b) 0, 0, 1 (c) 0, 1, 1 (d) 1, 1, 1
(GATE 2018: 1 Mark)
Calculus

34. A vector normal to î + 2ĵ − k̂ is
(a) î − ĵ − k̂ (b) −î − 2ĵ + k̂

39. The curves, for which the curvature ρ at any point is equal to cos³θ, where θ is the angle made by the tangent at that point with the positive direction of the x-axis, are (given ρ = y″/[1 + (y′)²]^(3/2), where y′ and y″ are the first and second derivatives of y with respect to x)
(a) circles (b) parabolas
(c) ellipses (d) hyperbolas
(GATE 2005: 2 Marks)

40. A scalar field is given by f = x^(2/3) + y^(2/3), where x and y are the cartesian coordinates. The derivative of f along the line y = x directed away from the origin, at the point (8, 8) is
(a) 2/3 (b) 3/2
(c) √2/3 (d) √3/2
(GATE 2005: 2 Marks)

(b) lim_(T→∞) (1/2T) ∫_(−T)^(+T) f²(t) dt
(c) lim_(T→∞) [(1/2T) ∫_(−T)^(+T) f(t) dt]^(1/2)
(d) lim_(T→∞) (1/2T) ∫_(−T)^(+T) f(t) f(t + τ) dt
(GATE 2006: 2 Marks)

44. The solution of the integral equation

    y(t) = t exp(t) − 2 exp(t) ∫_0^t exp(−τ) y(τ) dτ

is
(a) (1/2)(exp(t) − exp(−t))
(b) (1/2)(exp(t) + exp(−t))
(c) (exp(t) + exp(−t))
(d) (exp(−t) − exp(t))
(GATE 2006: 2 Marks)
45. The polynomial p(x) = x⁵ + x + 2 has
(a) all real roots
(b) 3 real and 2 complex roots
(c) 1 real and 4 complex roots
(d) all complex roots
(GATE 2007: 2 Marks)

46. For real x, the maximum value of e^(sin x)/e^(cos x) is
(a) 1 (b) e
(c) e^√2 (d) ∞
(GATE 2007: 2 Marks)

47. Consider the function f(x) = |x|³, where x is real. Then the function f(x) at x = 0 is
(a) continuous but not differentiable
(b) once differentiable but not twice
(c) twice differentiable but not thrice
(d) thrice differentiable
(GATE 2007: 2 Marks)

48. The value of the integral ∫_0^∞ ∫_0^∞ e^(−x²) e^(−y²) dx dy is
(a) π/2 (b) π
(c) √π (d) π/4
(GATE 2007: 2 Marks)

49. Given y = x² + 2x + 10, the value of dy/dx at x = 1 is equal to
(a) 0 (b) 4
(c) 12 (d) 13
(GATE 2008: 1 Mark)

50. lim_(x→0) (sin x)/x is
(a) indeterminate (b) 0
(c) 1 (d) ∞
(GATE 2008: 1 Mark)

52. Consider the function y = x² − 6x + 9. The maximum value of y obtained when x varies over the interval 2 to 5 is
(a) 1 (b) 3
(c) 4 (d) 9
(GATE 2008: 2 Marks)

53. It is known that two roots of the nonlinear equation x³ − 6x² + 11x − 6 = 0 are 1 and 3. The third root will be
(a) j (b) −j
(c) 2 (d) 4
(GATE 2008: 2 Marks)

54. The Fourier transform of x(t) = e^(−at) u(−t), where u(t) is the unit step function,
(a) exists for any real value of a
(b) does not exist for any real value of a
(c) exists if the real value of a is strictly negative
(d) exists if the real value of a is strictly positive
(GATE 2008: 2 Marks)

55. A sphere of unit radius is centered at the origin. The unit normal at a point (x, y, z) on the surface of the sphere is the vector
(a) (x, y, z) (b) (1/√3, 1/√3, 1/√3)
(c) (x/√3, y/√3, z/√3) (d) (x/√2, y/√2, z/√2)
(GATE 2009: 1 Mark)

56. A quantity x is calculated by using the formula x = (p − q)/r; the measured values are p = 9, q = 6, r = 0.5. Assume that the measurement errors in p, q and r are independent. The absolute maximum error in the measurement of each of the three quantities is ε. The absolute maximum error in the calculated value of x is
(a) ε (b) 2ε
(c) 3ε (d) 16ε
(GATE 2009: 2 Marks)

57. The infinite series f(x) = x + x³/3! + x⁵/5! + x⁷/7! + ⋯ ∞ converges to
(a) cos(x) (b) sin(x)
(c) sinh(x) (d) eˣ
(GATE 2010: 1 Mark)

59. The electric charge density in the region R: x² + y² ≤ 1, y ≤ 0 is given as σ(x, y) = 1 C/m², where x and y are in meters. The total charge (in coulomb) contained in the region R is
(a) 4π (b) 2π
(c) π/2 (d) 0
(GATE 2010: 2 Marks)
75. a, b, c are three orthogonal vectors. Given that a = î + 2ĵ + 5k̂ and b = î + 2ĵ − k̂, the vector c is parallel to
(a) î + 2ĵ + 3k̂ (b) 2î + ĵ
(c) 2î − ĵ (d) 4k̂
(GATE 2019: 1 Mark)

76. The vector function A is given by A = ∇u, where u(x, y) is a scalar function. Then ∇ × A is
(a) −1 (b) 0
(c) 1 (d) ∞
(GATE 2019: 1 Mark)

77. The curve y = f(x) is such that the tangent to the curve at every point (x, y) has a y-axis intercept c, given by c = −y. Then, f(x) is proportional to
(a) x⁻¹ (b) x²
(c) x³ (d) x⁴
(GATE 2019: 2 Marks)

Differential Equations

78. The characteristic roots of the system described as dx/dt = y, dy/dt = −x are at
(a) +1, +1 (b) −1, +1

ai (i = 0 to n) are constants, then x ∂f/∂x + y ∂f/∂y is
(a) f/n (b) n/f
(c) nf (d) n√f
(GATE 2005: 1 Mark)

80. The general solution of the differential equation (D² − 4D + 4)y = 0 is of the form (given D = d/dx, and C1 and C2 are constants)
(a) C1e^(2x) (b) C1e^(2x) + C2e^(−2x)
(c) C1e^(2x) + C2xe^(2x) (d) C1e^(2x) + C2xe^(−2x)

81. For an initial value problem y″ + 2y′ + 101y = 10.4eˣ, y(0) = 1.1 and y′(0) = −0.9. Various solutions are written in the following groups. Match the type of solution with the correct expression.
Group 1                                           Group 2
P. General solution of homogeneous equation       1. 0.1eˣ
Q. Particular integral                            2. e⁻ˣ(A cos 10x + B sin 10x)
R. Total solution satisfying boundary conditions  3. e⁻ˣ cos 10x + 0.1eˣ
(a) P−2, Q−1, R−3 (b) P−1, Q−3, R−2
(c) P−1, Q−2, R−3 (d) P−3, Q−2, R−1
(GATE 2007: 2 Marks)

82. The boundary-value problem y″ + λy = 0, y(0) = y(π) = 0 will have non-zero solutions if and only if the values of λ are
(a) 0, ±1, ±2, … (b) 1, 2, 3, …
(c) 1, 4, 9, … (d) 1, 9, 25, …
(GATE 2007: 2 Marks)

83. Consider the differential equation dy/dx = 1 + y². Which one of the following can be a particular solution of this differential equation?

84. Consider the differential equation dy/dx + y = eˣ with y(0) = 1. The value of y(1) is
(a) e + e⁻¹ (b) (1/2)(e + e⁻¹)
(c) (1/2)(e − e⁻¹) (d) 2(e − e⁻¹)
(GATE 2010: 2 Marks)

85. Consider the differential equation y″ + 2y′ + y = 0 with boundary conditions y(0) = 1, y(1) = 0. The value of y(2) is
(a) −1 (b) −e⁻¹
(c) −e² (d) −e⁻²
(GATE 2011: 2 Marks)

87. With initial condition x(1) = 0.5, the solution of the differential equation t dx/dt + x = t is
(a) x = t − 1/2 (b) x = t² − 1/2
(c) x = t²/2 (d) x = t/2
(GATE 2012: 1 Mark)

88. Consider the differential equation d²y(t)/dt² + 2 dy(t)/dt + y(t) = δ(t) with y(t)|_(t=0⁻) = −2 and dy/dt|_(t=0⁻) = 0. The numerical value of dy/dt|_(t=0⁺) is
(a) −2 (b) −1
(c) 0 (d) 1
(GATE 2012: 2 Marks)

89. The type of the partial differential equation ∂f/∂t = ∂²f/∂x² is
(a) Parabolic (b) Elliptic
(c) Hyperbolic (d) Nonlinear
(GATE 2013: 1 Mark)

90. While numerically solving the differential equation dy/dx + 2xy² = 0, y(0) = 1 using Euler's predictor-corrector (improved Euler–Cauchy) method with a step size of 0.2, the value of y after the first step is
(a) 1.00 (b) 1.03
(c) 0.97 (d) 0.96
(GATE 2013: 2 Marks)

91. The maximum value of the solution y(t) of the differential equation y″(t) + y(t) = 0 with initial conditions y(0) = 1 and y′(0) = 1, for t ≥ 0, is
(a) 1 (b) √2
(c) π (d) 2
(GATE 2013: 2 Marks)

[Figure: plot of the solution y(x) for −4 ≤ x ≤ 4]
The function shown is the solution of the differential equation (assuming all initial conditions to be zero):
(a) d²y/dx² = 1 (b) dy/dx = x
(c) dy/dx = −x (d) dy/dx = |x|
(GATE 2014: 1 Mark)

93. Consider the following equations

    ∂V(x, y)/∂x = px² + y² + 2xy
    ∂V(x, y)/∂y = x² + qy² + 2xy

where p and q are constants. V(x, y) that satisfies the above equations is
(a) px³/3 + qy³/3 + 2xy + 6
(b) px³/3 + qy³/3 + 5
(c) px³/3 + qy³/3 + x²y + xy² + xy
(d) px³/3 + qy³/3 + x²y + xy²
(GATE 2018: 2 Marks)

Analysis of Complex Variables

94. Consider the circle |z − 5 − 5i| = 2 in the complex plane (x, y) with z = x + iy. The minimum distance from the origin to the circle is
(a) 5√2 − 2 (b) √54
(c) √34 (d) 5√2
(GATE 2005: 2 Marks)
104. The velocity v (in m/s) of a moving mass, starting from rest, is given as dv/dt = v + 1. Using the Euler forward difference method (also known as the Cauchy–Euler method)

∮_C e^(1/z) dz
(GATE 2008: 2 Marks)

(c) f(z) = constant (d) f(z) = x² + y²
(GATE 2019: 2 Marks)

118. You have gone to a cyber-café with a friend. You found that the cyber-café has only three terminals. All terminals are unoccupied. You and your friend have to make a random choice of selecting a terminal. What is the
"Rakastaa."
"Asanovin luo."
"Etkö tahdo? Etkö sinä silloin ole syypää, etkö sinä ole tehnyt
pahoin?"
Pasinkov läksi.
*****
"Olin."
"Ei, sitä en voi sanoa. Minä odotin paljon enempää vastusta. Hän
ei ole niin tyhjä eikä ylpeä kuitenkaan, kuin minä hänestä luulin."
"Vai niin…"
"Mitä hän…"
"Ja nyt minä tietysti en enää saata käydä heillä?" sanoin minä
murheissani.
"Miksi et? Silloin tällöin sinä kyllä saatat käydä siellä, ja sinun
pitääkin käydä, ett'ei saataisi aihetta lörpötyksiin ja väärin käsityksiin.
"Voi Jakov, sinä et nyt voi muuta kuin halveksia minua!" vaikeroin
minä, jaksaen töin tuskin pidättää kyyneliäni.
"Voi, pikku isä, huonosti, hyvin huonosti!" vastasi hän huoaten. "Te
ette suinkaan tunne häntä enää. Näyttää, kuin hänellä ei enää olisi
pitkää elonaikaa jäljellä. Juuri sentähden me vain oleskelemmekin
täällä, että hän on niin heikko. Me olemme matkalla Odessaan,
näettäkös, oikein kunnolliseen parannuslaitokseen."
"Siperiastako!"
"Olkaa hyvä, tulkaa ylös, olkaa niin hyvä!" huusi Jelisei rapuilta.
"Jakov Ivanits ikävöitsee teitä nähdäksensä."
"Nuolestako!"
"Te ette jaksa puhua niin paljoa", sanoi Jelisei, joka oli koko ajan
huoneessa.
"Ja nyt", jatkoi Pasinkov, avaten silmänsä, mutta pää vielä tyynyn
varassa, "olen minä jo toista viikkoa maannut tähän pikku komeroon
suljettuna. Luullakseni lienen jollakin tavalla vilustunut. Piirilääkäri
hoitelee minua. Hänet kyllä saat kohta nähdä luullakseni. Hän
näyttää ymmärtävän asian varsin hyvästi. Muuten minä olen iloissani
tästä sattumasta, sillä mitenpä minä muulla tavalla olisin saanutkaan
nähdä sinua."
"Olkaa hyvä, sanokaa, herra tohtori", aloin minä heti, kuin hän
pääsi istumaan suureen, mukavaan nojatuoliin, "millainen on
ystäväni tila? Onko hän vaarassa?"
"On se."
"Kaupungin lääkäri."
"Miksikä ei?"
"Mitäpä siitä, että olenkin täysi lääkäri. Ettekö ehkä luule minun
sillä ymmärtävän homoiopatiaakin? Kyllä, yhtä hyvin kuin kuka muu
hyvänsä. Täällä on apteekkari, joka parantelee homolopatialla eikä
hän ole edes suorittanut mitään tieteellistä tutkintoakaan."
"Vai niin, no, siinä teitte hyvin, siinä", vastasi hän ja läksi hitaasti
laskeutumaan alas rappusia.
"Niin, tee niin, siitä tulee hauskaa!" vastasi hän iloisesti. "Kuten
ennen Winterkellerissä, muistatko? Mutta mitä lukisimme? Katso
tuolta ikkunalta, siellä minun kirjani ovat."
"Lermontov."
taikka tätä:
Minä punastuin.
"Minä sanoin hänelle kaikki, koko totuuden. Minä puhuin aina totta
hänen kanssansa. Kaikki teeskenteleminen hänen edessään olisi
ollut synti."
"Mutta, sanos sinä minulle", alkoi hän taas, "unhotitko sinä hänet
pian?"
"Ja kaiken sen sinä sait tietää minulta, ystävä parka, etkä sinä
sanallakaan, et ainoallakaan äänen väräyksellä ilmaissut, mitä sinä
itse kärsit, läksit vain selvittelemään asiata hänen kanssansa."