Mathematics - II
Satya Bagchi
March 15, 2021
This reading material was updated on 15.03.2021
1. Define the order and degree of an ODE. Find the order and degree of the ODEs
a) y + (dy/dx)² = 1 + x and b) (d²y/dx²)³ + y² = dy/dx.
2. Homogeneous equation: If a function f(x, y) can be expressed in the form xⁿ φ(y/x) or in the form yⁿ ψ(x/y), then f(x, y) is said to be a homogeneous function of degree n.
3. Solve the following differential equations (Put y = vx):
a) x²y dx − (x³ + y³) dy = 0    Ans: −x³/(3y³) + log y = c
b) (x + 2y − 3) dx − (2x + y − 3) dy = 0 Ans:
c) (3x + 3y + 1) dy = (x + y + 1) dx    Ans: (3/4)(x + y) − (1/8) log(4(x + y) + 2) = x + c
d) y(2xy + 1) dx + x(1 + 2xy + x²y²) dy = 0 (put xy = v).
Note: The above equations can be written in the form dy/dx = f(x, y), where f(x, y) is a homogeneous function.
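The implicit answer to 3(a) can be sanity-checked symbolically. A minimal sketch using Python's sympy (the symbol F below is just a label for the left-hand side of the claimed answer):

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)

# Claimed implicit solution of 3(a): F(x, y) = c with F = -x^3/(3 y^3) + log y.
F = -x**3 / (3 * y**3) + sp.log(y)

# Implicit differentiation of F(x, y) = c gives dy/dx = -F_x / F_y.
dydx_from_answer = -sp.diff(F, x) / sp.diff(F, y)

# The ODE x^2 y dx - (x^3 + y^3) dy = 0 gives dy/dx = x^2 y / (x^3 + y^3).
dydx_from_ode = x**2 * y / (x**3 + y**3)

assert sp.simplify(dydx_from_answer - dydx_from_ode) == 0
```

The same check works for the other answers in this list: differentiate the claimed implicit solution and compare with the slope the ODE prescribes.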
4. Rules for finding integrating factors for first order ODE: (M dx + N dy = 0)
a) If Mx + Ny ≠ 0 and the equation be homogeneous, then 1/(Mx + Ny) is an integrating factor (I.F.) of the equation M dx + N dy = 0.
b) If Mx − Ny ≠ 0 and the equation can be written as {f(xy)} y dx + {F(xy)} x dy = 0, then 1/(Mx − Ny) is an integrating factor of the equation M dx + N dy = 0.
c) If (1/N)(∂M/∂y − ∂N/∂x) be a function of x alone, say f(x), then e^{∫f(x)dx} is an integrating factor of the equation M dx + N dy = 0.
d) If (1/M)(∂N/∂x − ∂M/∂y) be a function of y alone, say φ(y), then e^{∫φ(y)dy} is an integrating factor of the equation M dx + N dy = 0.
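Rule (c) can be verified mechanically on a concrete equation. A sketch with sympy, using (y² + x² + 2x) dx + 2y dy = 0 (problem 5(viii) below) as the test case:

```python
import sympy as sp

x, y = sp.symbols("x y")

M = y**2 + x**2 + 2*x
N = 2*y

# (1/N)(dM/dy - dN/dx) = (2y - 0)/(2y) = 1, a function of x alone...
ratio = sp.simplify((sp.diff(M, y) - sp.diff(N, x)) / N)
assert ratio == 1

# ...so e^{integral of 1 dx} = e^x is an integrating factor.
IF = sp.exp(sp.integrate(ratio, x))

# After multiplying through, the equation is exact:
assert sp.simplify(sp.diff(IF * M, y) - sp.diff(IF * N, x)) == 0
```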
5. Solve the following differential equations:
(i) x dx + y dy + (x dy − y dx)/(x² + y²) = 0    Ans: x² + y² + 2 tan⁻¹(y/x) = C
(ii) (x²y − 2xy²) dx + (3x²y − x³) dy = 0    Ans: x/y + 3 log(y/x) + log x = c
(iii) dy/dx = (xy sin xy + cos xy)y / ((cos xy − xy sin xy)x)    Ans:
(iv) (3x²y⁴ + 2xy) dx + (2x³y³ − x²) dy = 0    Ans: x² + x³y³ + cy = 0
(v) x² dy/dx + y = 1    Ans:
(vi) y² + x² dy/dx = xy dy/dx    Ans:
(vii) dy/dx = (y²eˣ + 2xy)/x²    Ans: I.F. = y⁻²
(viii) (y² + x² + 2x) dx + 2y dy = 0    Ans: I.F. = eˣ (using 4c), x² + y² = ce⁻ˣ
(ix) (x⁴ + y⁴) dx − xy³ dy = 0.    Ans:
6. Solution of first order but not of first degree ODE:
Start with a first order and n-th degree ODE:
p^n + P1 p^{n−1} + · · · + P_{n−1} p + P_n = 0,
where the Pi are functions of x and y, and p ≡ dy/dx.
a) Solvable for p: Resolving the left-hand side into linear factors, we get
{p − f1}{p − f2} · · · {p − fn} = 0.
Solving each of the factors we get the complete solution as F1 (x, y, c)F2 (x, y, c) · · · Fn (x, y, c) =
0, where c is an arbitrary constant.
b) Solvable for y: If the equation can be expressed as y = f(x, p). (1.1)
Differentiating both sides of (1.1) with respect to x, we get an equation of the form
p = F(x, p, dp/dx).
It can be solved to get a solution of the form
φ(x, p, c) = 0. (1.2)
Next, eliminating p from (1.1) and (1.2), we get the required solution.
c) Solvable for x: If the equation can be expressed as x = f(y, p). (1.3)
Differentiating both sides of (1.3) with respect to y, we get an equation of the form
1/p = F(y, p, dp/dy).
It can be solved to get a solution of the form
φ(y, p, c) = 0. (1.4)
Next, eliminating p from (1.3) and (1.4), we get the required solution.
7. Solve the following differential equations:
a) x²(p² − y²) + y² = x⁴ + 2xyp    Ans: {y − sinh(x + c)}{y − sinh(c − x)} = 0.
b) p² + p − 6 = 0    Ans: (y + 3x − c)(y − 2x − c) = 0.
c) y = 2px + p²y    Ans: y² = 2cx + c².
d) x + p/√(1 + p²) = a    Ans: (x − a)² + (y + c)² = 1.
8. Clairaut's Equation
A Clairaut's equation is a differential equation of the form
y = px + f(p), i.e. y(x) = x (dy/dx) + f(dy/dx).
To solve, we differentiate with respect to x, obtaining
dy/dx = dy/dx + x d²y/dx² + f′(dy/dx) d²y/dx²,
so 0 = [x + f′(dy/dx)] d²y/dx².
Hence, either d²y/dx² = 0 or x + f′(dy/dx) = 0.
In the former case, dy/dx = C for some constant C. Substituting this into the Clairaut's equation, we have the family of straight-line functions given by y(x) = Cx + f(C), the so-called general solution of Clairaut's equation.
The latter case, x + f′(dy/dx) = 0, defines only one solution y(x), the so-called singular solution, whose graph is the envelope of the graphs of the general solutions. The singular solution is usually represented using parametric notation, as (x(p), y(p)), where p represents dy/dx.
9. Solve the following differential equations and find the singular solutions:
a) y = px + √(1 + p²)    Ans: y = cx + √(1 + c²); x² + y² = 1.
b) y = px + a/p    Ans: y = cx + a/c; y² = 4ax.
c) y = px + √(a²p² + b²)    Ans: y = cx + √(a²c² + b²); x²/a² + y²/b² = 1.
d) y = px + p − p²    Ans: y = cx + c − c²; 4y = (x + 1)².
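Both parts of answer 9(a) can be checked by direct substitution. A sketch with sympy (the helper residual is just the difference of the two sides of the equation; the sample points are arbitrary values in (−1, 1)):

```python
import sympy as sp

x, c = sp.symbols("x c")

def residual(y_expr):
    """y - (x y' + sqrt(1 + y'^2)) for the Clairaut equation 9(a)."""
    p = sp.diff(y_expr, x)
    return y_expr - (p * x + sp.sqrt(1 + p**2))

# General solution: the line family y = cx + sqrt(1 + c^2).
assert sp.simplify(residual(c * x + sp.sqrt(1 + c**2))) == 0

# Singular solution: the envelope x^2 + y^2 = 1 (upper branch),
# checked at a few sample points in (-1, 1).
r = residual(sp.sqrt(1 - x**2))
for x0 in (sp.Rational(-1, 2), sp.Rational(1, 4), sp.Rational(1, 2)):
    assert sp.simplify(r.subs(x, x0)) == 0
```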
10. Solution of first order linear ODE:
a) If the equation is dy/dx + Py = Q, where P and Q are functions of x only, then I.F. = e^{∫P dx}.
b) If the equation is dx/dy + Px = Q, where P and Q are functions of y only, then I.F. = e^{∫P dy}.
12. Solution Method for linear differential equation of the form f (D)y = 0:
If the roots of the auxiliary equation f(m) = 0 are m1, m2 and m3, then the solution of the given ODE is as follows:
a) If m1 ≠ m2 ≠ m3: y = C1 e^{m1 x} + C2 e^{m2 x} + C3 e^{m3 x},
b) If m1 = m2 ≠ m3: y = (C1 + C2 x) e^{m1 x} + C3 e^{m3 x},
c) If m1 = m2 = m3: y = (C1 + C2 x + C3 x²) e^{m1 x},
d) If m1 = α + iβ, m2 = α − iβ, and m3 is real: y = e^{αx}(C1 cos βx + C2 sin βx) + C3 e^{m3 x}.
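Case (b) can be illustrated with sample roots m1 = m2 = 2, m3 = −1 (chosen only for illustration): the auxiliary polynomial (m − 2)²(m + 1) = m³ − 3m² + 4 corresponds to y''' − 3y'' + 4y = 0, and the claimed solution form satisfies it, as sympy confirms:

```python
import sympy as sp

x, C1, C2, C3 = sp.symbols("x C1 C2 C3")

# Rule (b): repeated root m1 = m2 = 2 contributes (C1 + C2 x) e^{2x},
# and the simple root m3 = -1 contributes C3 e^{-x}.
ysol = (C1 + C2 * x) * sp.exp(2 * x) + C3 * sp.exp(-x)

# Substitute into y''' - 3 y'' + 4 y = 0.
lhs = sp.diff(ysol, x, 3) - 3 * sp.diff(ysol, x, 2) + 4 * ysol
assert sp.expand(lhs) == 0
```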
13. Particular Integral (P.I.) By Short Method (Rules) of the form f (D)y = X(x):
a) If X = e^{ax}: P.I. = (1/f(a)) e^{ax}, provided f(a) ≠ 0;
if f(a) = 0, P.I. = x^r e^{ax}/(r! g(a)), where f(D) = (D − a)^r g(D) and g(a) ≠ 0.
b) If X = sin ax or cos ax: P.I. = (1/φ(−a²)) sin ax (resp. cos ax), provided φ(−a²) ≠ 0, where f(D) = φ(D²).
c) If X is a polynomial P(x): expand [f(D)]⁻¹, arrange the terms in ascending powers of D, and operate on P(x).
Remark: With the help of the binomial coefficient C(n, r) = n(n − 1)(n − 2) · · · (n − r + 1)/r!, where r is a non-negative integer, we can obtain the binomial series, which is valid for any real number n if |x| < 1:
(1 + x)^n = 1 + C(n, 1) x + C(n, 2) x² + C(n, 3) x³ + · · · .
e) (D² − 2D + 1)y = x²e^{3x}    P.I. = (e^{3x}/8)(2x² − 4x + 3)
f) (D² − 4D + 4)y = x²eˣ    P.I. = eˣ(x² + 4x + 6)
g) (D² − 4)y = x sin x    P.I. = −(x sin x)/5 − (2 cos x)/25
o) (D² − 9)y = e^{3x} cos x    P.I. = −(1/37) e^{3x}(cos x − 6 sin x)
p) (D² − 2D + 1)y = x²eˣ    P.I. = eˣx⁴/12
q) (D² + 1)y = sin x sin 2x    P.I. = (1/4) x sin x + (1/16) cos 3x
r) (D² − D − 2)y = sin 2x    P.I. = (1/20)(cos 2x − 3 sin 2x)
s) (D² − 1)y = x³ + 2x + 1    P.I. = −(x³ + 8x + 1)
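Each P.I. above can be confirmed by substituting it back into the equation; e.g., item (e), checked with sympy:

```python
import sympy as sp

x = sp.symbols("x")

# Item (e): (D^2 - 2D + 1) y = x^2 e^{3x}, claimed P.I. = (e^{3x}/8)(2x^2 - 4x + 3).
yp = sp.exp(3 * x) / 8 * (2 * x**2 - 4 * x + 3)
lhs = sp.diff(yp, x, 2) - 2 * sp.diff(yp, x) + yp
assert sp.simplify(lhs - x**2 * sp.exp(3 * x)) == 0
```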
15. Linear Independence and the Wronskian of Two Functions:
Recall our definition of the linear dependence of two functions f and g on an open interval I: f and g are linearly dependent if there exist constants c1 and c2, not both zero, such that
c1 f(t) + c2 g(t) = 0 for all t ∈ I.
The Wronskian of f and g is W[f; g](t) = f(t)g′(t) − f′(t)g(t).
Thus, if the Wronskian is nonzero at any t ∈ I, the two functions must be linearly independent. If we are considering f = y1 and g = y2 to be two solutions to the ODE
(D² + PD + Q)y = 0,
where P and Q are both continuous functions of x on some interval I, then the Wronskian has some extra properties, which are given by Abel's Theorem:
W[y1; y2](x) = A e^{−∫P dx}.
(For instance, for the pair f(x) = 2x and g(x) = 3x², W[f; g] = 6x² ≠ 0 as long as x ≠ 0.)
16. a) Prove Abel's Theorem.
b) Prove that sin 2x and cos 2x are solutions of the differential equation (D² + 4)y = 0 and that these are linearly independent.
c) Show that (D² − 2D + 2)y = 0 has exactly two linearly independent solutions. [These are eˣ sin x and eˣ cos x.] Find the solution y(x) with y(0) = 2, y′(0) = −3.
d) Show that the functions eˣ sin x and eˣ cos x are linearly independent solutions of a 2nd order linear ODE. Write down the general solution y(x). Also find y(x) with the conditions y(0) = 2, y′(0) = −3.
e) Suppose W[y1; y2](x) ≠ 0 for some x; can the functions y1 and y2 be linearly independent solutions of a 2nd order ODE?
f) Show that the functions x² and x² log x are linearly independent, and form an ODE whose solutions are these functions. [Hint: the general solution of the required ODE will be y = Ax² + Bx² log x, with A, B arbitrary constants. Eliminate A and B to get the ODE.]
g) Show that eˣ and e⁻ˣ are linearly independent solutions of (D² − 1)y = 0 on any interval.
h) Can you form a second order linear homogeneous ODE whose two independent solutions are f(t) = 2t and g(t) = 3t² in (−2, 2)?
i) Check whether the functions sin x, cos x and eˣ are linearly independent or not.
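For 16(b), the Wronskian computation is a one-liner; a sympy sketch:

```python
import sympy as sp

x = sp.symbols("x")

# Wronskian of sin 2x and cos 2x: W = f g' - g f'.
f, g = sp.sin(2 * x), sp.cos(2 * x)
W = sp.simplify(f * sp.diff(g, x) - g * sp.diff(f, x))

# W = -2 never vanishes, so the two solutions are linearly independent.
assert W == -2
```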
17. Existence and Uniqueness theorem of Picard: Here we concentrate on the solution
of the first order IVP
dy/dx = f(x, y), y(0) = y0. (1.5)
We are interested in the following questions:
a) Under what conditions does there exist a solution to (1.5)?
has at least one solution in the neighbourhood of (1, 0), but we see that ∂f/∂y is not bounded around (1, 0). Can we conclude that the said IVP has no unique solution around (1, 0)? The answer is
no, as the conditions for the ‘Existence and Uniqueness theorem of Picard’ are sufficient
but not necessary.
For example, consider
y′ = √y + 1, y(0) = 0, x ∈ [0, 1].
Clearly f does not satisfy a Lipschitz condition near the origin, but the IVP still has a unique solution.
Can you prove this?
Example: Consider the initial value problem (IVP)
y′ = xy − sin y, y(0) = 2.
Here f and ∂f/∂y are continuous in a closed rectangle about x0 = 0 and y0 = 2. Hence, there exists a unique solution in the neighbourhood of (0, 2).
18. Complete solution in terms of a known integral:
Sketch of the process: Consider the 2nd order linear ODE
y′′ + P y′ + Q y = X, (1.6)
where P, Q and X are functions of x. Let y = u be one independent solution of the associated homogeneous equation.
Claim: y = uv is the complete solution of (1.6), where v is a function of x to be determined.
Finding y′ and y′′, putting y, y′ and y′′ into the given equation, and using the fact that y = u is one independent solution of the associated homogeneous equation, we get
v′′ + (2u′/u + P) v′ = X/u,
which is a linear equation in v′. Next we can find v′ and v, and hence we are done!
19. Solve the following ODE in terms of known integral:
a) [xD² − (2x − 1)D + (x − 1)]y = 0    Ans: y = eˣ(c1 ln x + c2).
b) [xD² + (1 − x)D − 1]y = eˣ    Ans: y = eˣ ln x + c1 eˣ ∫ x⁻¹e⁻ˣ dx + c2 eˣ.
Solve the following simultaneous equations:
a) dx/(xy) = dy/y² = dz/(xyz − 2x²)    Ans: x = C1 y, x = log(yz − 2x) − log y + C2.
b) a dx/(yz(b − c)) = b dy/(zx(c − a)) = c dz/(xy(a − b))    Ans: ax² + by² + cz² = C1, a²x² + b²y² + c²z² = C2.
c) x dx/(z² − 2yz − y²) = dy/(y + z) = dz/(y − z)    Ans: x² + y² + z² = C1, y² − 2yz − z² = C2.
d) dx/(1 + y) = dy/(1 + x) = dz/z    Ans: z(x − y) = C1, z = C2 (x + y + 2).
e) dx/y² = dy/x² = dz/(x²y²z⁴)    Ans: x³ − y³ = C1, y³ + 1/z³ = C2.
f) dx/(yz) = dy/(zx) = dz/(xy)    Ans: x² − y² = C1, x² − z² = C2.
g) dx/z = dy/(−z) = dz/(z² + (x + y)²)    Ans: x + y = C1, log[(x + y)² + z²] − 2x = C2.
22. Geometrical interpretation of the equations dx/P = dy/Q = dz/R.
We know, from the geometry of three dimensions, that the direction cosines of the tangent to a curve are dx/ds, dy/ds, dz/ds; that is, they are in the ratio dx : dy : dz.
Hence geometrically these equations represent a system of curves in space, such that
the direction cosines of the tangent to it at any point (x, y, z) are proportional to P , Q,
R.
2 LAPLACE TRANSFORM
Definition: Suppose that F(t) is a real-valued function defined over the interval (−∞, ∞) such that F(t) = 0 for all t < 0. The Laplace Transform (LT) of F(t), denoted by L{F(t)}, is defined as
L{F(t)} = f(s) = ∫₀^∞ e^{−st} F(t) dt.
1. Linear property: Suppose that f1 (s) and f2 (s) are LT of F1 (t) and F2 (t) respectively,
then L{c1 F1 (t)±c2 F2 (t)} = c1 L{F1 (t)}±c2 L{F2 (t)}, where c1 and c2 are any constants.
2. First shifting property: If L{F(t)} = f(s), then L{e^{at} F(t)} = f(s − a).
3. Change of scale property: If L{F(t)} = f(s), then L{F(at)} = (1/a) f(s/a), where a is a constant.
4. L{1} = 1/s, s > 0, and L{tⁿ} = Γ(n + 1)/s^{n+1}, if s > 0 and n > −1.
5. L{e^{at}} = 1/(s − a), L{cos at} = s/(s² + a²) and L{sin at} = a/(s² + a²).
6. L{e^{−at}} = 1/(s + a), L{cosh at} = s/(s² − a²) and L{sinh at} = a/(s² − a²).
7. LT of Derivatives of a Function: If L{F(t)} = f(s), then L{F^(n)(t)} = sⁿ f(s) − s^{n−1} F(0) − s^{n−2} F′(0) − · · · − s F^(n−2)(0) − F^(n−1)(0), where F^(n)(t) stands for dⁿF(t)/dtⁿ.
8. LT of Integral of a Function: If L{F(t)} = f(s), then
L{∫₀ᵗ F(u) du} = (1/s) f(s).
9. Multiplication by tⁿ: If L{F(t)} = f(s), then L{tⁿ F(t)} = (−1)ⁿ dⁿf(s)/dsⁿ, for n = 1, 2, 3, . . . .
10. Division by t: If L{F(t)} = f(s), then L{F(t)/t} = ∫ₛ^∞ f(x) dx, provided the integral exists.
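Property 9 (multiplication by tⁿ, here with n = 1) can be spot-checked with sympy's laplace_transform, taking F(t) = sin at:

```python
import sympy as sp

t, s, a = sp.symbols("t s a", positive=True)

f = sp.laplace_transform(sp.sin(a * t), t, s, noconds=True)      # a/(s^2 + a^2)
lhs = sp.laplace_transform(t * sp.sin(a * t), t, s, noconds=True)
rhs = -sp.diff(f, s)                                             # property 9, n = 1
assert sp.simplify(lhs - rhs) == 0
```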
2.2 Problem Set
1. Evaluate:
a) L{t⁵e^{3t}}    Ans: 120/(s − 3)⁶.
b) L{cosh 2t}    Ans: s/(s² − 4).
c) L{5t − 2}    Ans: (5 − 2s)/s².
d) L{sinh at}    Ans: a/(s² − a²).
m) L{(1 − cos t)/t²}    Ans: s log(s/√(s² + 1)) + cot⁻¹ s.
n) L{e^{−2t} ∫₀ᵗ t sin 3t dt}    Ans: 6/(s² + 4s + 13)².
o) L{t ∫₀ᵗ (sin u)/u du}    Ans: (1/s²) cot⁻¹ s + 1/(s(s² + 1)).
p) L⁻¹{1/(s² + a²)²}    Ans: (1/(2a³))(sin at − at cos at).
2. Using the Laplace transformation, show that:
a) ∫₀^∞ t e^{−3t} sin t dt = 3/50.
b) ∫₀^∞ t e^{−3t} cos t dt = 2/25.
c) ∫₀^∞ t³ e^{−t} sin t dt = 0.
d) ∫₀^∞ (sin t)/t dt = π/2.
e) L{tⁿ e^{at}} = n!/(s − a)^{n+1}.
f) L{(sin² t)/t} = (1/4) log((s² + 4)/s²).
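Identity 2(a) is easy to confirm numerically; a sketch with scipy (3/50 = 0.06):

```python
import numpy as np
from scipy.integrate import quad

# 2(a): the integral of t e^{-3t} sin t over [0, infinity) should be 3/50.
val, _ = quad(lambda t: t * np.exp(-3 * t) * np.sin(t), 0, np.inf)
assert abs(val - 3 / 50) < 1e-8
```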
3. Define unit step function. Find the Laplace transform of the unit step function.
4. Definition: If the LT of a function F(t) is f(s), i.e. L{F(t)} = f(s), then F(t) is called an inverse Laplace transform (ILT) of f(s). We write F(t) = L⁻¹{f(s)}.
7. Linear property: Suppose that f1 (s) and f2 (s) are LT of F1 (t) and F2 (t) respectively,
then
L−1 {c1 f1 (s) ± c2 f2 (s)} = c1 L−1 {f1 (s)} ± c2 L−1 {f2 (s)} where c1 and c2 are any con-
stants.
8. Convolution Theorem: If L⁻¹{f(s)} = F(t) and L⁻¹{g(s)} = G(t), then
L⁻¹{f(s) g(s)} = ∫₀ᵗ F(u) G(t − u) du.
9. Find the inverse Laplace transforms of the following:
c) 5/((s + 4)(s − 7))    Ans: (5/11)[e^{7t} − e^{−4t}].
d) 8e^{−3}/(s² − 4)    Ans: 4e^{−3} sinh 2t.
e) s²/(s² + 1)²    Ans: (1/2)(sin t + t cos t).
f) log(1 + 1/s²)    Ans: 2(1 − cos t)/t.
g) 9/(s²(s − 3))    Ans: −1 − 3t + e^{3t}.
h) 5/(s² − 2s − 3)    Ans: (5/2) eᵗ sinh 2t.
i) (5s² + 8s − 1)/((s + 3)(s² + 1))    Ans: 2e^{−3t} + 3 cos t − sin t.
j) s/(s² + a²)²    Ans: (t sin at)/(2a).
k) log(1 + 1/s)    Ans: (1 − e^{−t})/t.
l) (1/s) log(1 + 1/s²)    Ans: ∫₀ᵗ 2(1 − cos u)/u du.
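These can be checked by transforming the claimed answers forward again; e.g., item (g) with sympy:

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)

# Item (g): L{-1 - 3t + e^{3t}} should equal 9/(s^2 (s - 3)).
F = -1 - 3 * t + sp.exp(3 * t)
f = sp.laplace_transform(F, t, s, noconds=True)
assert sp.simplify(f - 9 / (s**2 * (s - 3))) == 0
```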
10. Find L⁻¹{5/s − 3e^{−3s}/s − 2e^{−7s}/s}.
11. State the convolution theorem for Laplace transforms. Use it to evaluate
L⁻¹{1/((p + 2)²(p − 2))}.
Ans: (e^{2t} − 4te^{−2t} − e^{−2t})/16.
12. Use Laplace Transforms to solve the differential equation
d²y/dx² − 3 dy/dx = 9,
given that when x = 0, y = 0 and dy/dx = 0.
Solution: Taking the Laplace Transform of both sides, we get
L{d²y/dx²} − 3 L{dy/dx} = L{9}.
c) y′′(t) + 2y′(t) + 5y(t) = e^{−t} sin t, where y(0) = 0, y′(0) = 1.    Ans:
d) (D² + 6D + 9)y = sin t, where y(0) = 1, y′(0) = 0.    Ans:
e) d²y/dx² + 6 dy/dx + 13y = 0, given that when x = 0, y = 3 and dy/dx = 7.    Ans: y = e^{−3x}(3 cos 2x + 8 sin 2x).
f) t y′′(t) + (t + 1) y′(t) = e^{−t}, y(0) = 0.    Ans: y(t) = 1 − e^{−t}.
g) y′′(t) + 4y(t) = cosh 2t; y(0) = 1, y′(0) = 1.    [y(t) = (1/8)(cosh 2t + 7 cos 2t + 4 sin 2t).]
h) y′′(t) + 2y′(t) = t − 3/2; given that y(2) = 1, y′(2) = 0.    Ans: y = 1 + (1/4)(t − 2)².
i) ẋ + x = 3e^{2t}, x(0) = 0.    Ans: x(t) = −e^{−t} + e^{2t}.
j) ẍ − 4ẋ + 4x = 0; x(0) = 0, ẋ(0) = 3.    Ans: x(t) = 3te^{2t}.
k) ẍ + 2ẋ + 2x = 2; x(0) = 0, ẋ(0) = 1.    Ans: x(t) = 1 − e^{−t} cos t.
l) ẍ + ẋ = 3t²; x(0) = 0, ẋ(0) = 1.    Ans: x(t) = −5 + 6t − 3t² + t³ + 5e^{−t}.
m) ẍ + 2ẋ + 5x = 3e^{−t} sin t; x(0) = 0, ẋ(0) = 3.    Ans: x(t) = e^{−t}(sin 2t + sin t).
n) d²y/dt² + 2 dy/dt = t − 3/2; given that y(2) = 1 and dy/dt = 0 at t = 2.    Ans: y = 1 + (1/4)(t − 2)².
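The closed-form answers can be cross-checked numerically. A sketch with scipy's solve_ivp for item (k) above (ẍ + 2ẋ + 2x = 2, x(0) = 0, ẋ(0) = 1), rewriting the equation as a first-order system:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u):
    x, v = u                       # u = (x, x')
    return [v, 2 - 2 * v - 2 * x]  # from x'' = 2 - 2x' - 2x

sol = solve_ivp(rhs, (0, 2), [0, 1], rtol=1e-9, atol=1e-12, dense_output=True)

t = np.linspace(0, 2, 21)
exact = 1 - np.exp(-t) * np.cos(t)  # claimed solution of (k)
assert np.allclose(sol.sol(t)[0], exact, atol=1e-6)
```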
14. Using the Laplace transform, solve
d²x/dt² + x = t cos 2t,
given that x = dx/dt = 0 at t = 0.    Ans: x = −(5/9) sin t + (4/9) sin 2t − (t/3) cos 2t.
3 PROBABILITY
Relative Frequency Definition: Suppose that the random experiment is repeated n times. If event A occurs n(A) times, then the probability of event A, denoted P(A), is defined as
P(A) = lim_{n→∞} n(A)/n,
where n(A)/n is called the relative frequency of event A. Note that this limit may not exist, and in addition, there are many situations in which the concept of repeatability may not be valid. It is clear that for any event A, the relative frequency of A will have the following properties:
1. 0 ≤ n(A)/n ≤ 1, where n(A)/n = 0 if A occurs in none of the n repeated trials and n(A)/n = 1 if A occurs in all of the n repeated trials.
2. If A and B are mutually exclusive events, then
n(A ∪ B) = n(A) + n(B) and n(A ∪ B)/n = n(A)/n + n(B)/n.
Axiomatic Definition: A probability measure P on the sample space S satisfies the following axioms:
• Axiom 1: P(A) ≥ 0
• Axiom 2: P(S) = 1
• Axiom 3: P(A ∪ B) = P(A) + P(B) if A ∩ B = φ.
If the sample space S is not finite, then Axiom 3 must be modified as follows: for mutually exclusive events A1, A2, . . . ,
P(∪_{i=1}^∞ Ai) = Σ_{i=1}^∞ P(Ai).
These axioms satisfy our intuitive notion of probability measure obtained from the notion of
relative frequency.
Finite Sample Space: Consider a finite sample space S with n elements
S = {ζ1, ζ2, . . . , ζn},
where the ζi are the elementary events, and write P(ζi) = pi. Then
1. 0 ≤ pi ≤ 1, i = 1, 2, . . . , n.
2. Σ_{i=1}^n pi = p1 + p2 + · · · + pn = 1.
3. If A = ∪_{i∈I} ζi, where I is a collection of subscripts, then
P(A) = Σ_{ζi∈A} P(ζi) = Σ_{i∈I} pi.
Equally Likely Events: When all elementary events ζi (i = 1, 2, . . . , n) are equally likely, that is,
p1 = p2 = · · · = pn,
then we have
pi = 1/n, i = 1, 2, . . . , n, and P(A) = n(A)/n,
where n(A) is the number of outcomes belonging to event A and n is the number of sample points in S.
Conditional probability: The conditional probability of an event A given event B, denoted by P(A|B), is defined as
P(A|B) = P(A ∩ B)/P(B), P(B) > 0,
and similarly
P(B|A) = P(A ∩ B)/P(A), P(A) > 0.
The above equations are often quite useful in computing the joint probability of events.
Bayes' Rule: From the above, we can obtain the following Bayes' rule:
P(A|B) = P(B|A)P(A)/P(B).
Total probability: The events A1, A2, . . . , An are called mutually exclusive and exhaustive if
∪_{i=1}^n Ai = A1 ∪ A2 ∪ · · · ∪ An = S and Ai ∩ Aj = φ, i ≠ j.
Then, for any event B,
P(B) = Σ_{k=1}^n P(B|Ak)P(Ak),
which is known as the total probability of event B. Letting A = Ai in the above Bayes' rule and using this equation, we obtain
P(Ai|B) = P(B|Ai)P(Ai) / Σ_{k=1}^n P(B|Ak)P(Ak).
Note that the terms on the right-hand side are all conditioned on the events Ai, while the term on the left is conditioned on B. The above equation is known as Bayes' theorem.
Independent events: Two events A and B are said to be independent if and only if
P (A ∩ B) = P (A)P (B).
Consider a random experiment with sample space S. A random variable X(ζ) is a single-valued real function that assigns a real number, called the value of X(ζ), to each sample point ζ of S. Often we use a single letter X for this function in place of X(ζ).
Note that the terminology used here is traditional. Clearly a random variable is not a variable at all in the usual sense; it is a function.
The sample space S is termed the domain of the random variable X, and the collection of all
numbers [values of X(ζ)] is termed the range of the random variable X. Thus the range of
X is a certain subset of the set of all real numbers.
Note that two or more different sample points might give the same value of X(ζ), but two
different numbers in the range cannot be assigned to the same sample point.
If X is a random variable and x is a fixed real number, we can define the event (X = x) as
(X = x) = {ζ : X(ζ) = x}.
Similarly, for fixed numbers x, x1 , and x2 , we can define the following events:
(X ≤ x) = {ζ : X(ζ) ≤ x} (3.1)
(X > x) = {ζ : X(ζ) > x} (3.2)
(x1 < X ≤ x2 ) = {ζ : x1 < X(ζ) ≤ x2 } (3.3)
P (X = x) = P {ζ : X(ζ) = x} (3.4)
P (X ≤ x) = P {ζ : X(ζ) ≤ x} (3.5)
P (X > x) = P {ζ : X(ζ) > x} (3.6)
P (x1 < X ≤ x2 ) = P {ζ : x1 < X(ζ) ≤ x2 } (3.7)
The cumulative distribution function (cdf) of X is FX(x) = P(X ≤ x), −∞ < x < ∞. Most of the information about a random experiment described by the random variable X is determined by the behavior of FX(x). From FX(x) we can compute other probabilities, such as P(a < X ≤ b), P(X > a), and P(X < b).
Discrete random variable: Let X be a random variable (r.v.) with cdf FX (x). If FX (x)
changes values only in jumps (at most a countable number of them) and is constant between
jumps, i.e., FX (x) is a staircase function – then X is called a discrete random variable.
Alternatively, X is a discrete random variable only if its range contains a finite or countably
infinite number of points.
Probability Mass Functions: Suppose that the jumps in FX(x) of a discrete random variable X occur at the points x1, x2, . . . , where the sequence may be either finite or countably infinite, and we assume xi < xj if i < j. Then pX(xi) = P(X = xi).
The function pX(x) is called the probability mass function (pmf) of the discrete random variable X.
3.6 CONTINUOUS R. V. AND PROBABILITY DENSITY FUNCTIONS
Definition: Let X be a random variable with cdf FX (x). If FX (x) is continuous and also has
a derivative dFX (x)/dx which exists everywhere except at possibly a finite number of points
and is piece-wise continuous, then X is called a continuous random variable. Alternatively, X
is a continuous random variable only if its range contains an interval (either finite or infinite)
of real numbers. Thus, if X is a continuous random variable, then
P (X = x) = 0 (3.14)
Note that this is an example of an event with probability 0 that is not necessarily the impossible event φ.
In most applications, the random variable is either discrete or continuous. But if the cdf FX(x) of a random variable X possesses features of both discrete and continuous r.v.'s, then X is called a mixed r.v.
Probability Density Functions: Let
fX(x) = dFX(x)/dx. (3.15)
The function fX(x) is called the probability density function (pdf) of the continuous random variable X.
Mean: The mean (or expected value) of a random variable X, denoted by µX or E(X), is defined by
µX = E(X) = Σ_k xk pX(xk) (X discrete);  µX = E(X) = ∫_{−∞}^∞ x fX(x) dx (X continuous). (4.1)
Moment: The nth moment of a random variable X is defined by
E(Xⁿ) = Σ_k xkⁿ pX(xk) (X discrete);  E(Xⁿ) = ∫_{−∞}^∞ xⁿ fX(x) dx (X continuous). (4.2)
Variance: The variance of X, denoted by σX² or Var(X), is defined by
σX² = Var(X) = E{[X − E(X)]²}. (4.3)
Thus, σX² = E(X²) − [E(X)]².
A random variable X is called a Bernoulli random variable with parameter p if its pmf is given by
pX(k) = P(X = k) = p^k (1 − p)^{1−k}, k = 0, 1.
The corresponding cdf of X is
FX(x) = 0 for x < 0;  1 − p for 0 ≤ x < 1;  1 for x ≥ 1. (4.7)
µX = E(X) = p, (4.8)
σX² = Var(X) = p(1 − p). (4.9)
A Bernoulli random variable X is associated with some experiment where an outcome can
be classified as either a "success" or a "failure," and the probability of a success is p and the
probability of a failure is 1 − p. Such experiments are often called Bernoulli trials.
A random variable X is called a binomial random variable with parameters (n, p) if its pmf is given by
pX(k) = P(X = k) = C(n, k) p^k (1 − p)^{n−k}, k = 0, 1, · · · , n, (4.10)
where 0 ≤ p ≤ 1 and
C(n, k) = n!/(k!(n − k)!),
which is known as the binomial coefficient. The corresponding cdf of X is
FX(x) = Σ_{k=0}^n C(n, k) p^k (1 − p)^{n−k}, n ≤ x < n + 1. (4.11)
µX = E(X) = np, (4.12)
σX² = Var(X) = np(1 − p). (4.13)
5 Poisson Distribution
A random variable X is called a Poisson random variable with parameter λ (> 0) if its pmf is given by
pX(k) = P(X = k) = e^{−λ} λ^k/k!, k = 0, 1, · · · . (5.1)
The corresponding cdf of X is
FX(x) = Σ_{k=0}^n e^{−λ} λ^k/k!, n ≤ x < n + 1. (5.2)
µX = E(X) = λ, (5.3)
σX² = Var(X) = λ. (5.4)
The Poisson random variable has a tremendous range of applications in diverse areas because
it may be used as an approximation for a binomial random variable with parameters (n, p)
when n is large and p is small enough so that np is of a moderate size.
Some examples of Poisson r.v.’s include
1. The number of telephone calls arriving at a switching center during various intervals of
time.
2. The number of misprints on a page of a book.
3. The number of customers entering a bank during various intervals of time.
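The approximation is easy to see numerically; a sketch with scipy.stats, taking n = 1000 and p = 0.003 (so λ = np = 3; the values are chosen only for illustration):

```python
import numpy as np
from scipy.stats import binom, poisson

n, p = 1000, 0.003
k = np.arange(15)

# Binomial(n, p) pmf is close to Poisson(np) pmf for small p, large n.
assert np.allclose(binom.pmf(k, n, p), poisson.pmf(k, n * p), atol=5e-3)
```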
6 Normal (or Gaussian) Distribution
A random variable X is called a normal (or Gaussian) random variable if its pdf is given by
fX(x) = (1/(√(2π)σ)) e^{−(x−µ)²/(2σ²)}. (6.1)
The corresponding cdf is
FX(x) = (1/(√(2π)σ)) ∫_{−∞}^x e^{−(ζ−µ)²/(2σ²)} dζ = (1/√(2π)) ∫_{−∞}^{(x−µ)/σ} e^{−ζ²/2} dζ. (6.2)
This integral cannot be evaluated in closed form and must be evaluated numerically. It is convenient to use the function Φ(z), defined as
Φ(z) = (1/√(2π)) ∫_{−∞}^z e^{−ζ²/2} dζ, (6.3)
to help us evaluate the value of FX(x). Then Eq. (6.2) can be written as
FX(x) = Φ((x − µ)/σ). (6.4)
Note that
µX = E(X) = µ, (6.6)
σX² = Var(X) = σ². (6.7)
We shall use the notation N(µ; σ²) to denote that X is normal with mean µ and variance σ². A normal random variable Z with zero mean and unit variance, that is, Z ~ N(0; 1), is called a standard normal random variable. Note that the cdf of the standard normal random variable is given by Eq. (6.3). The
normal random variable is probably the most important type of continuous random variable.
It has played a significant role in the study of random phenomena in nature. Many naturally
occurring random phenomena are approximately normal. Another reason for the importance
of the normal random variable is a remarkable theorem called the central limit theorem. This
theorem states that the sum of a large number of independent random variables can, under certain conditions, be approximated by a normal random variable.
Example: Prove that
Var(X) = E[X²] − [E(X)]².
Solution: For a discrete X with pmf p(x) and mean µ = E(X),
Var(X) = E[(X − µ)²]
= Σ_x (x − µ)² p(x)
= Σ_x (x² − 2µx + µ²) p(x)
= Σ_x x² p(x) − 2µ Σ_x x p(x) + µ² Σ_x p(x)
= E[X²] − 2µ² + µ²
= E[X²] − µ².
Example: Prove that E[aX + b] = a E[X] + b.
Solution:
E[aX + b] = Σ_{x: p(x)>0} (ax + b) p(x)
= a Σ_{x: p(x)>0} x p(x) + b Σ_{x: p(x)>0} p(x)
= a E[X] + b.
Example: Let X be a binomial random variable with parameters (n, p). Verify the following:
µX = E(X) = np and σX² = Var(X) = np(1 − p).
Solution:
E[X^k] = Σ_{i=0}^n i^k C(n, i) p^i (1 − p)^{n−i}
= Σ_{i=1}^n i^k C(n, i) p^i (1 − p)^{n−i}
= np E[(Y + 1)^{k−1}],
where Y is a binomial random variable with parameters (n − 1, p). Setting k = 1 in the preceding equation yields
E[X] = np.
That is, the expected number of successes that occur in n independent trials, when each is a success with probability p, is equal to np. Setting k = 2 in the preceding equation and using the preceding formula for the expected value of a binomial random variable yields
E[X²] = np E[Y + 1] = np[(n − 1)p + 1].
Hence σX² = Var(X) = E[X²] − (E[X])² = np[(n − 1)p + 1] − (np)² = np(1 − p).
Example: Let X be a Poisson random variable with parameter λ. Verify the following:
µX = E(X) = λ and σX² = Var(X) = λ.
Solution: First, E[X] = Σ_{k=0}^∞ k e^{−λ} λ^k/k! = λ e^{−λ} Σ_{k=1}^∞ λ^{k−1}/(k − 1)! = λ. Next,
E[X(X − 1)] = Σ_{k=0}^∞ k(k − 1) e^{−λ} λ^k/k! = λ² e^{−λ} Σ_{k=2}^∞ λ^{k−2}/(k − 2)!
= λ² e^{−λ} Σ_{i=0}^∞ λ^i/i! = λ² e^{−λ} e^λ = λ².
Hence σX² = Var(X) = E[X(X − 1)] + E[X] − (E[X])² = λ² + λ − λ² = λ.
Example: Show that the pdf of a normal random variable integrates to 1.
Solution: Making the substitution y = (x − µ)/σ, we see that
(1/(√(2π)σ)) ∫_{−∞}^∞ e^{−(x−µ)²/(2σ²)} dx = (1/√(2π)) ∫_{−∞}^∞ e^{−y²/2} dy. (6.9)
Let I = ∫_{−∞}^∞ e^{−y²/2} dy. Then
I² = ∫_{−∞}^∞ e^{−y²/2} dy ∫_{−∞}^∞ e^{−x²/2} dx = ∫_{−∞}^∞ ∫_{−∞}^∞ e^{−(y²+x²)/2} dy dx.
Now we evaluate the double integral by means of a change of variables to polar coordinates. Thus
I² = ∫₀^∞ ∫₀^{2π} e^{−r²/2} r dθ dr = 2π ∫₀^∞ r e^{−r²/2} dr = 2π.
Hence I = √(2π). Putting the value of I into equation (6.9), we get
(1/(√(2π)σ)) ∫_{−∞}^∞ e^{−(x−µ)²/(2σ²)} dx = 1.
Hence ∫_{−∞}^∞ e^{−(x−µ)²/(2σ²)} dx = σ√(2π).
Example: Let X be a normal (or Gaussian) random variable with pdf given by
fX(x) = (1/(√(2π)σ)) e^{−(x−µ)²/(2σ²)}.
Verify the following:
µX = E(X) = µ and σX² = Var(X) = σ².
Solution: By definition,
µX = E(X) = (1/(√(2π)σ)) ∫_{−∞}^∞ x e^{−(x−µ)²/(2σ²)} dx.
Writing x as (x − µ) + µ, we have
E(X) = (1/(√(2π)σ)) ∫_{−∞}^∞ (x − µ) e^{−(x−µ)²/(2σ²)} dx + µ ∫_{−∞}^∞ fX(x) dx.
Substituting y = x − µ in the first integral,
E(X) = (1/(√(2π)σ)) ∫_{−∞}^∞ y e^{−y²/(2σ²)} dy + µ ∫_{−∞}^∞ fX(x) dx.
The first integral is zero, since its integrand is an odd function. We know that ∫_{−∞}^∞ fX(x) dx = 1. Therefore
µX = E(X) = µ.
From the definition,
σX² = E[(X − µ)²] = (1/(√(2π)σ)) ∫_{−∞}^∞ (x − µ)² e^{−(x−µ)²/(2σ²)} dx. (6.10)
From the previous example,
∫_{−∞}^∞ e^{−(x−µ)²/(2σ²)} dx = σ√(2π).
Differentiating both sides with respect to σ, we get
∫_{−∞}^∞ ((x − µ)²/σ³) e^{−(x−µ)²/(2σ²)} dx = √(2π).
Multiplying both sides by σ²/√(2π), we have
(1/(√(2π)σ)) ∫_{−∞}^∞ (x − µ)² e^{−(x−µ)²/(2σ²)} dx = σ².
Thus, σX² = Var(X) = σ².
1. If you use pure guessing on a 10-question true/false exam, what is the probability that you get them all right?    Ans: 1/2¹⁰.
2. If you use pure guessing on a 10-question true/false exam, what is the probability that you get at least 7 out of 10?    Ans: 11/64.
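Both answers follow from a Binomial(10, 1/2) model and can be confirmed with scipy.stats:

```python
from scipy.stats import binom

# Q1: all 10 correct: (1/2)^10.
assert abs(binom.pmf(10, 10, 0.5) - 1 / 2**10) < 1e-12

# Q2: at least 7 correct: P(X >= 7) = 1 - P(X <= 6) = 11/64.
assert abs(binom.sf(6, 10, 0.5) - 11 / 64) < 1e-12
```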
3. A box of 30 diodes is known to contain five defective ones. If two diodes are selected at
random without replacement, what is the probability that at least one of these diodes
is defective?
4. Suppose that P (B|A) > P (B). What does this imply about the relation between
P (A|B) and P (A)?
5. How many outcome sequences are possible when a die is rolled four times, where we
say, for instance, that the outcome is (3, 4, 3, 1) if the first roll landed on 3, the second
on 4, the third on 3, and the fourth on 1?
6. A total of 7 different gifts are to be distributed among 10 children. How many distinct
results are possible if no child is to receive more than one gift? Ans: 604800.
7. If 8 castles (that is, rooks) are randomly placed on a chessboard, compute the probability that none of the rooks can capture any of the others. That is, compute the probability that no row or file contains more than one rook.    Ans: 8!/C(64, 8).
8. Prove that
C(n + m, r) = C(n, 0)C(m, r) + C(n, 1)C(m, r − 1) + C(n, 2)C(m, r − 2) + · · · + C(n, r)C(m, 0).
Hint: Consider a group of n men and m women. How many groups of size r are possible?
9. Sixty percent of the students at a certain school wear neither a ring nor a necklace. Twenty
percent wear a ring and 30 percent wear a necklace. If one of the students is chosen
randomly, what is the probability that this student is wearing
a) a ring or a necklace;
b) a ring and a necklace?
10. A woman has n keys, of which one will open her door.
a) If she tries the keys at random, discarding those that do not work, what is the
probability that she will open the door on her k-th try?
b) What if she does not discard previously tried keys?
11. A closet contains 10 pairs of shoes. If 8 shoes are randomly selected, what is the probability that there will be
a) no complete pair;    Ans: C(10, 8)·2⁸/C(20, 8).
b) exactly one complete pair?    Ans: C(10, 1)·C(9, 6)·2⁶/C(20, 8).
12. Consider an experiment whose sample space consists of a countably infinite number of
points. Show that not all points can be equally likely. Can all points have positive
probability of occurring? Why or why not?
13. What is the probability that at least one of a pair of fair dice lands on 6, given that the
sum of the dice is i, for i = 2, 3, . . . , 12?
14. Suppose that 5 percent of men and 0.25 percent of women are colorblind. A colorblind
person is chosen at random. What is the probability of this person being male? Assume
that there are an equal number of males and females. What if the population consisted
of twice as many males as females?
15. A ball is drawn from an urn containing 3 white and 3 black balls. After the ball is
drawn, it is then replaced and another ball is drawn. This goes on indefinitely. What
is the probability that of the first 4 balls, exactly 2 are white?
16. Let X be a normal random variable with mean 12 and variance 4. Find the value of c
such that P (X > c) = 0.10. Ans: c = 14.56
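The value of c can be reproduced with scipy.stats; isf is the inverse survival function, so isf(0.10) solves P(X > c) = 0.10 directly (note that scale is the standard deviation, here 2):

```python
from scipy.stats import norm

c = norm(loc=12, scale=2).isf(0.10)
assert abs(c - 14.5631) < 1e-3  # quoted above as 14.56
```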
17. Suppose that 3 balls are chosen without replacement from an urn consisting of 5 white
and 8 red balls. Let Xi equal 1 if the i-th ball selected is white, and let it equal 0
otherwise. Give the joint probability mass function of
a) X1 , X2 ;
b) X1 , X2 , X3 .
18. Suppose that A is an event such that P(A) = 0 and that B is any other event. Prove that A and B are independent events.
19. Soldier A and Soldier B are practicing shooting. The probability that A would miss the
target is 0.2 and the probability that B would miss the target is 0.5. The probability
that both A and B would miss the targets is 0.1. What is the probability that at least
one of the two will miss the target? What is the probability that exactly one of the two
soldiers will miss the target? Ans: 0.6, 0.5.
20. A box contains three cards. One card is red on both sides, one card is green on both
sides, and one card is red on one side and green on the other. Then we randomly select
one card from this box, and we can know the color of the selected card’s upper side. If
this side is green, what is the probability that the other side of the card is also green?
21. Suppose that the p.d.f. of a random variable X is:
f(x) = cx² for 1 ≤ x ≤ 2; f(x) = 0 otherwise.
What is the value of the constant c? Sketch the p.d.f. What is P(X > 3/2)?    Ans: c = 3/7, P(X > 3/2) = 37/56.
22. If an integer between 100 and 200 is to be chosen at random, what is the expected
value? Ans: 150.
23. A rabbit is playing a jumping game with friends. She starts from the origin of a real
line and moves along the line in jumps of one step. For each jump, she flips a coin.
If heads, she would jump one step to the left (i.e. negative direction). Otherwise, she
would jump one step to the right. The chance of heads is p (0 ≤ p ≤ 1). What is the expected value of her position after n jumps? (Assume each step has equal length and take one step as one unit on the real line.)    Ans: n(1 − 2p).
24. Suppose X has a normal distribution with mean 1 and variance 4. Find the value of
the following: (a). P (X ≤ 3), (b). P (|X| ≤ 2). Ans: 0.8413, 0.6247.
25. What is the probability that there will be strictly more heads than tails out of 10 flips
of a fair coin? Out of 20 flips?
26. If there are 3 red balls and 7 blue balls in an urn, what is the probability that in two
trials two red balls will be drawn?
27. If there are 3 red balls and 7 blue balls in an urn, what is the probability that in 10
trials at least 4 red balls will be drawn?
28. A die is a small cube with the numbers 1–6 on its six sides. A roll of two dice has an outcome which is the sum of the upward-facing sides of the two, so it is an integer in the range 2–12. A die is fair if any one of its six sides is as likely to come up as any other.
What is the probability that a roll of two fair dice will give either a ‘7’ or an ‘8’ ? What
is the probability of a ‘2’ ?
29. Let X be a Poisson random variable with parameter λ, whose probability mass function (pmf) is given by
pX(k) = P(X = k) = e^{−λ} λ^k/k!, k = 0, 1, 2, · · · .
Find the mean µX and the variance σX².
7 USEFUL FORMULAS
1. cos x = (e^{ix} + e^{−ix})/2, sin x = (e^{ix} − e^{−ix})/(2i), cosh x = (eˣ + e⁻ˣ)/2, sinh x = (eˣ − e⁻ˣ)/2.
2. cosh²x − sinh²x = 1, tanh²x + sech²x = 1.
3. ∫√(a² + x²) dx = (x√(a² + x²))/2 + (a²/2) sinh⁻¹(x/a) + C.
4. ∫√(x² − a²) dx = (x√(x² − a²))/2 − (a²/2) cosh⁻¹(x/a) + C.
5. ∫√(a² − x²) dx = (x√(a² − x²))/2 + (a²/2) sin⁻¹(x/a) + C.
6. ∫ dx/(a² + x²) = (1/a) tan⁻¹(x/a) + C.
7. ∫ sin⁻¹x dx = x sin⁻¹x + √(1 − x²) + C.
8 BOOKS: (TEXT/REFERENCES)