
Mustaqbal University

College of Engineering & Computer Sciences


Electronics and Communication Engineering Department

Course: EE301: Probability Theory and Applications


Prerequisite: Stat 219

Text Book: B. P. Lathi, "Modern Digital and Analog Communication Systems", 3rd edition, Oxford University Press, Inc., 1998
Reference: A. Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, 2005

Dr. Aref Hassan Kurdali


Probability Theory
Probability theory is rooted in phenomena that can be modeled
by an experiment with an outcome that is subject to chance.
Moreover, if the experiment is repeated, the outcome can differ
because of the influence of an underlying random phenomenon or
chance mechanism. Such an experiment is referred to as a
random experiment. The following three features describe the
random experiment:
1. The experiment is repeatable under identical conditions.
2. On any trial of the experiment, the outcome is unpredictable.
3. If the experiment is repeated a large number of times, the
outcomes exhibit statistical regularity; that is, a definite average
pattern of outcomes is observed.

There are two approaches to defining probability:

1) Population approach (e.g., questionnaires/surveys, voting
among alternatives)
2) Relative frequency approach.
Let event A denote one of the possible outcomes of a random
experiment. For example, in the coin-tossing experiment,
event A may represent "heads." Suppose that in n trials of the
experiment, event A occurs N(A) times. We may then assign
the ratio N(A)/n to the event A. This ratio is called the relative
frequency of the event A. Clearly, the relative frequency is a
nonnegative real number less than or equal to one; that is,
0 ≤ N(A)/n ≤ 1, and the probability of event A is defined as
the limit of the relative frequency as the number of trials grows:
P(A) = lim (n→∞) N(A)/n
The set of all possible outcomes of the random experiment is
called the sample space, which is denoted by S. The entire
sample space S is called the sure event; the null set ϕ is called
the null or impossible event; and a single sample point is
called an elementary event.
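The relative frequency definition can be illustrated with a minimal simulation sketch: toss a fair coin n times and watch N(A)/n settle near 1/2 (the seed and trial counts below are illustrative choices, not from the notes).

```python
import random

def relative_frequency(trials, seed=0):
    """Estimate P(heads) as N(A)/n over n tosses of a fair coin."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(trials))
    return heads / trials

# Statistical regularity: the relative frequency approaches 1/2 as n grows.
for n in (10, 1_000, 100_000):
    print(n, relative_frequency(n))
```

Each run gives a ratio between 0 and 1, and larger n yields estimates closer to the true probability 1/2.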

Example: Let S = {1, 2, 3, 4, 5, 6}

Define:
A = {3}, B = {1, 4, 6}, C = {2, 5, 6}, D = {x : x < 4} = {1, 2, 3}

Calculate:
BC = B and C = {6}, CD = {2}, AC = ϕ (the null set)
Let Z = A + B = {1, 3, 4, 6}; since A and B are mutually exclusive, P(A+B) = P(A) + P(B)
Y = B + C = {1, 2, 4, 5, 6}
P(B+C) = P(B) + P(C) − P(BC), or equivalently P(BC) = P(B) + P(C) − P(B+C)
P(not B) = P({2, 3, 5}) = 1 − P(B)
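The set identities in this example can be checked directly with Python sets; the sketch below additionally assumes a fair die, i.e., all six outcomes equally likely, to assign probabilities.

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A, B, C, D = {3}, {1, 4, 6}, {2, 5, 6}, {x for x in S if x < 4}

def P(event):
    """Probability of an event, assuming all six outcomes are equally likely."""
    return Fraction(len(event), len(S))

assert B & C == {6}           # BC
assert C & D == {2}           # CD
assert A & C == set()         # AC is the impossible event
assert A | B == {1, 3, 4, 6}  # Z = A + B
assert B | C == {1, 2, 4, 5, 6}
# P(B+C) = P(B) + P(C) - P(BC)
assert P(B | C) == P(B) + P(C) - P(B & C)
# P(not B) = 1 - P(B)
assert P(S - B) == 1 - P(B)
```

Here `|` is the union (a + b), `&` the intersection (ab), and `S - B` the complement (not B).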
A probability system consists of the following three aspects:
1. A sample space S of elementary events (outcomes).
2. A class ξ of events that are subsets of S.
3. A probability measure P(·) assigned to each event a in the
class ξ, which has the following properties:
I. P(S) = 1
II. 0 ≤ P(a) ≤ 1 and P(not a) = 1 − P(a)
III. If a + b is the union of two mutually exclusive events a
and b in the class ξ, then P(a+b) = P(a) + P(b)

Properties (I), (II), and (III) are known as the axioms of
probability, which constitute an implicit definition of probability.
Joint Probability
For two events a and b (not necessarily mutually exclusive):

Joint (intersection) of a and b = a and b = ab
Union of a and b = a or b = a + b

P(a+b) = P(a) + P(b) − P(ab)
Similarly, P(ab) = P(a) + P(b) − P(a+b)

If P(ab) = 0, we say events a and b are mutually exclusive
(disjoint) events.
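The inclusion–exclusion relation P(a+b) = P(a) + P(b) − P(ab) can be verified by exhaustive counting; the events below (an even roll, a roll greater than 3) are illustrative choices, not from the notes.

```python
from fractions import Fraction

# Sample space: one roll of a fair die (equally likely outcomes assumed).
S = {1, 2, 3, 4, 5, 6}
a = {2, 4, 6}   # even outcome
b = {4, 5, 6}   # outcome greater than 3

def P(event):
    return Fraction(len(event & S), len(S))

union = a | b   # a + b = {2, 4, 5, 6}
joint = a & b   # ab    = {4, 6}

assert P(union) == P(a) + P(b) - P(joint)  # P(a+b) = P(a)+P(b)-P(ab)
assert P(joint) == P(a) + P(b) - P(union)  # the rearranged form
# P(ab) = 1/3 > 0, so a and b are not mutually exclusive.
```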
Conditional probability
P(a|b) is the probability of a, given that we know b has occurred;
P(a|b) is called the conditional probability of a given b.
The joint probability of both a and b is given by:
P(ab) = P(a|b) P(b), and since P(ab) = P(ba),
we have Bayes' theorem:

P(ab) = P(a|b)P(b) = P(b|a)P(a)

so that, for example,
P(a|b) = P(ab)/P(b)
P(c|f) = P(cf)/P(f)
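Bayes' theorem can be checked numerically from joint counts; the counts below (100 equally likely outcomes, with the given occurrences of a, b, and ab) are assumed for illustration.

```python
from fractions import Fraction

n_total = 100   # total equally likely outcomes (assumed)
n_a = 25        # outcomes where a occurs
n_b = 40        # outcomes where b occurs
n_ab = 10       # outcomes where both a and b occur

P_a = Fraction(n_a, n_total)
P_b = Fraction(n_b, n_total)
P_ab = Fraction(n_ab, n_total)

P_a_given_b = P_ab / P_b   # P(a|b) = P(ab)/P(b) = 1/4
P_b_given_a = P_ab / P_a   # P(b|a) = P(ab)/P(a) = 2/5

# Bayes' theorem: P(a|b)P(b) = P(b|a)P(a) = P(ab)
assert P_a_given_b * P_b == P_b_given_a * P_a == P_ab
```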
Statistical Independence
If two events a and b are such that P(a|b) = P(a),
then the events a and b are called statistically independent (SI).
Note that, from Bayes' theorem, P(b|a) = P(b) as well;
therefore, P(ab) = P(a|b)P(b) = P(a)P(b).
This last equation is often taken as the
definition of statistical independence,
i.e., if P(ab) = P(a)P(b), then a and b are SI.
If P(ab) = 0 (with P(a) > 0 and P(b) > 0), then a and b are
mutually exclusive and statistically dependent.

In summary, events a and b are SI if
P(a|b) = P(a) or P(b|a) = P(b),
i.e., P(ab) = P(a) P(b)
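A sketch of both cases, using two fair dice as an assumed sample space: events on different dice factor (independent), while two disjoint events on the same die do not.

```python
from fractions import Fraction
from itertools import product

# Sample space: ordered pairs from two fair dice (equally likely, assumed).
S = list(product(range(1, 7), repeat=2))

def P(pred):
    return Fraction(sum(1 for s in S if pred(s)), len(S))

a = lambda s: s[0] % 2 == 0   # first die shows an even number
b = lambda s: s[1] > 4        # second die shows 5 or 6

# Statistically independent: the joint probability factors.
assert P(lambda s: a(s) and b(s)) == P(a) * P(b)

# Mutually exclusive events with nonzero probability are dependent:
c = lambda s: s[0] == 1
d = lambda s: s[0] == 2
assert P(lambda s: c(s) and d(s)) == 0 != P(c) * P(d)
```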
Problem 1.1
Consider a box containing a total of 400 objects (balls,
squares, and triangles) in three different colors (red,
yellow, and white).
• Complete the following table if P(T) = 1/2,
P(RT) = 1/4, P(Y) = 3/10, P(W+Y) = 3/5,
P(B|R) = 1/8, P(W|B) = P(S|Y) = 1/2, and P(S|W) = 1/6.

Color\Object  | Ball (B) | Square (S) | Triangle (T) | Sum (rows)
Red (R)       |          |            |              |
Yellow (Y)    |          |            |              |
White (W)     |          |            |              |
Sum (columns) |          |            |              | Total = 400
Problem 1.1 cont…
P(W+Y) = P(W) + P(Y), since W and Y are mutually exclusive,
so P(W) + 3/10 = 3/5 and P(W) = 3/10.

• Are the two events {yellow object} and {ball} statistically independent? Why?

• Are the two events {white object} and {triangle} statistically independent? Why?

• Calculate the probability of each of the following events:

A = {Drawing a white object}: P(W) =
B = {Drawing a yellow ball}: P(YB) =
C = {Drawing a square}: P(S) =
D = {Drawing a ball given it is a white object}: P(B|W) =
E = {Drawing a square which is red or white}: P(RS + WS) = P(RS) + P(WS) =
F = {Drawing a ball or a red object}: P(B+R) = P(B) + P(R) − P(RB) =
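One way to complete the table is programmatically, converting each given probability into a count out of 400 and filling the remaining cells by row and column sums; this sketch assumes every object is equally likely to be drawn.

```python
from fractions import Fraction

total = 400
F = Fraction

# Given probabilities from the problem statement.
P_T, P_RT, P_Y = F(1, 2), F(1, 4), F(3, 10)
P_W = F(3, 5) - P_Y                      # from P(W+Y) = P(W) + P(Y) = 3/5
P_B_given_R = F(1, 8)
P_W_given_B = P_S_given_Y = F(1, 2)
P_S_given_W = F(1, 6)

# Counts (probability x 400), filled in dependency order.
T, Y, W = P_T * total, P_Y * total, P_W * total   # 200, 120, 120
R = total - Y - W                                 # 160
RT = P_RT * total                                 # 100
RB = P_B_given_R * R                              # 20
RS = R - RB - RT                                  # 40
YS = P_S_given_Y * Y                              # 60
WS = P_S_given_W * W                              # 20
S = RS + YS + WS                                  # 120
B = total - T - S                                 # 80
WB = P_W_given_B * B                              # 40
YB = B - RB - WB                                  # 20
YT = Y - YB - YS                                  # 40
WT = W - WB - WS                                  # 60

assert RT + YT + WT == T   # triangle column is consistent

# Independence checks from the questions above:
assert YB / total != (Y / total) * (B / total)   # yellow, ball: dependent
assert WT / total == (W / total) * (T / total)   # white, triangle: independent
```

With the counts in hand, each requested probability is just count/400, e.g. P(B|W) = WB/W.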
