Principles of Communication

Prof. V. Venkata Rao


CHAPTER 2

Probability and Random Variables

2.1 Introduction
At the start of Sec. 1.1.2, we had indicated that one of the possible ways of classifying signals is: deterministic or random. By random we mean unpredictable; that is, in the case of a random signal, we cannot predict its future value with certainty, even if the entire past history of the signal is known. If the signal is of the deterministic type, no such uncertainty exists. Consider the signal x(t) = A cos(2π f1 t + θ). If A, θ and f1 are known (we are assuming them to be constants), then we know the value of x(t) for all t. (A, θ and f1 can be calculated by observing the signal over a short period of time.) Now, assume that x(t) is the output of an oscillator with very poor frequency stability and calibration. Though it was set to produce a sinusoid of frequency f = f1, the frequency actually put out may be f1′, where f1′ ∈ (f1 ± ∆f1).

Even this value may not remain constant and could vary with time. Then, observing the output of such a source over a long period of time would not be of much use in predicting the future values. We say that the source output varies in a random manner.

Another example of a random signal is the voltage at the terminals of a receiving antenna of a radio communication scheme. Even if the transmitted (radio) signal is from a highly stable source, the voltage at the terminals of the receiving antenna varies in an unpredictable fashion. This is because the conditions of propagation of the radio waves are not under our control.

But randomness is the essence of communication. Communication theory involves the assumption that the transmitter is connected to a source whose output the receiver is not able to predict with certainty. If the students knew ahead of time what the teacher (source + transmitter) was going to say (and what jokes he was going to crack), then there would be no need for the students (the receivers) to attend the class!

Although less obvious, it is also true that there is no communication problem unless the transmitted signal is disturbed during propagation or reception by unwanted (random) signals, usually termed as noise and interference. (We shall take up the statistical characterization of noise in Chapter 3.)

However, quite a few random signals, though their exact behavior is unpredictable, do exhibit statistical regularity. Consider again the reception of radio signals propagating through the atmosphere. Though it would be difficult to know the exact value of the voltage at the terminals of the receiving antenna at any given instant, we do find that the average values of the antenna output over two successive one minute intervals do not differ significantly. If the conditions of propagation do not change very much, this would be true of any two averages (over one minute), even if they are well spaced out in time. Consider an even simpler experiment, namely, that of tossing an unbiased coin (by a person without any magical powers). It is true that we do not know in advance whether the outcome of a particular toss will be a head or a tail (otherwise, there would be no point in tossing the coin at the start of a cricket match!). But we know for sure that, in a long sequence of tosses, about half of the outcomes will be heads (if this does not happen, we suspect either the coin or the tosser, or both!).


Statistical regularity of averages is an experimentally verifiable phenomenon in many cases involving random quantities. Hence, we are tempted to develop mathematical tools for the analysis and quantitative characterization of random signals. To be able to analyze random signals, we need to understand random variables. The resulting mathematical topics are: probability theory, random variables and random (stochastic) processes. In this chapter, we shall develop the probabilistic characterization of random variables. In Chapter 3, we shall extend these concepts to the characterization of random processes.

2.2 Basics of Probability
We shall introduce some of the basic concepts of probability theory by defining some terminology relating to random experiments (i.e., experiments whose outcomes are not predictable).

2.2.1. Terminology
Def. 2.1: Outcome

The end result of an experiment. For example, if the experiment consists of throwing a die, the outcome would be any one of the six faces, F1, ..., F6.
Def. 2.2: Random experiment

An experiment whose outcomes are not known in advance. (e.g. tossing a coin, throwing a die, measuring the noise voltage at the terminals of a resistor etc.)
Def. 2.3: Random event

A random event is an outcome or a set of outcomes of a random experiment that share a common attribute. For example, considering the experiment of throwing a die, an event could be 'the face F1' or 'even indexed faces' (F2, F4, F6). We denote events by upper case letters such as A, B or A1, A2, ⋯


Def. 2.4: Sample space

The sample space of a random experiment is a mathematical abstraction used to represent all possible outcomes of the experiment. We denote the sample space by S.

Each outcome of the experiment is represented by a point in S and is called a sample point. We use s (with or without a subscript) to denote a sample point. An event on the sample space is represented by an appropriate collection of sample point(s).
Def. 2.5: Mutually exclusive (disjoint) events

Two events A and B are said to be mutually exclusive if they have no common elements (or outcomes). Hence, if A and B are mutually exclusive, they cannot occur together.
Def. 2.6: Union of events

The union of two events A and B, denoted A ∪ B {also written as (A + B) or (A or B)}, is the set of all outcomes which belong to A or B or both.

This concept can be generalized to the union of more than two events.
Def. 2.7: Intersection of events

The intersection of two events A and B is the set of all outcomes which belong to A as well as B. The intersection of A and B is denoted by (A ∩ B) or simply (AB). The intersection of A and B is also referred to as the joint event A and B. This concept can be generalized to the case of intersection of three or more events.
Def. 2.8: Occurrence of an event

An event A of a random experiment is said to have occurred if the experiment terminates in an outcome that belongs to A .


Def. 2.9: Complement of an event

The complement of an event A, denoted by Ā, is the event containing all points in S but not in A.

Def. 2.10: Null event

The null event, denoted φ, is an event with no sample points. Thus φ = S̄ (note that if A and B are disjoint events, then AB = φ, and vice versa).

2.2.2 Probability of an Event
The probability of an event has been defined in several ways. Two of the most popular definitions are: i) the relative frequency definition, and ii) the classical definition.

Def. 2.11: The relative frequency definition

Suppose that a random experiment is repeated n times. If the event A occurs nA times, then the probability of A, denoted by P(A), is defined as

P(A) = lim_{n → ∞} (nA / n)        (2.1)

(nA / n) represents the fraction of occurrence of A in n trials. For small values of n, it is likely that (nA / n) will fluctuate quite badly. For example, let the experiment be that of tossing a coin, and A the event 'outcome of a toss is Head'. If n is of the order of 100, (nA / n) may not deviate from 1/2 by more than, say, ten percent; as n becomes larger and larger, we expect (nA / n) to tend to a definite limiting value, namely 1/2.
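The limiting behaviour in Eq. 2.1 is easy to observe on a computer. The following sketch is purely illustrative (it assumes the NumPy library; the seed and the trial counts are arbitrary choices); it simulates the coin-tossing experiment and prints nA/n for increasing n:

    import numpy as np

    rng = np.random.default_rng(seed=1)        # arbitrary seed
    for n in (100, 10_000, 1_000_000):
        tosses = rng.integers(0, 2, size=n)    # 1 = Head, 0 = Tail, equally likely
        print(n, tosses.mean())                # nA / n; approaches 1/2 as n grows

For n of the order of 100, the printed fraction can indeed deviate noticeably from 1/2; for the larger values of n it settles close to 1/2.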

Def. 2.12: The classical definition

The relative frequency definition given above has an empirical flavor. In the classical approach, the probability of the event A is found without experimentation. This is done by counting the total number N of the possible outcomes of the experiment. If NA of those outcomes are favorable to the occurrence of the event A, then

P(A) = NA / N        (2.2)

where it is assumed that all outcomes are equally likely!

Whatever may be the definition of probability, we require the probability measure (assigned to the various events on the sample space) to obey the following postulates or axioms:

P1) P(A) ≥ 0        (2.3a)
P2) P(S) = 1        (2.3b)
P3) If AB = φ, then P(A + B) = P(A) + P(B)        (2.3c)

(Note that in Eq. 2.3(c), the symbol + is used to mean two different things: to denote the union of A and B, and to denote the addition of two real numbers.) Using Eq. 2.3, it is possible for us to derive some additional relationships:

i) If AB ≠ φ, then P(A + B) = P(A) + P(B) − P(AB)        (2.4)

ii) Let A1, A2, ..., An be random events such that:

a) Ai Aj = φ, for i ≠ j, and        (2.5a)
b) A1 + A2 + ... + An = S        (2.5b)

Then,

P(A) = P(A A1) + P(A A2) + ... + P(A An)        (2.6)

where A is any event on the sample space. (Note: A1, A2, ..., An are said to be mutually exclusive (Eq. 2.5a) and exhaustive (Eq. 2.5b).)

iii) P(Ā) = 1 − P(A)        (2.7)

The derivation of Eq. 2.6 and 2.7 is left as an exercise.

A very useful concept in probability theory is that of conditional probability, denoted P(B|A); it represents the probability of B occurring, given that A has occurred. In a real world random experiment, it is quite likely that the occurrence of the event B is very much influenced by the occurrence of the event A. To give a simple example, let a bowl contain 3 resistors and 1 capacitor. The occurrence of the event 'the capacitor on the second draw' is very much dependent on what has been drawn at the first instant. Such dependencies between the events are brought out using the notion of conditional probability.

The conditional probability P(B|A) can be written in terms of the joint probability P(AB) and the probability of the event P(A). This relation can be arrived at by using either the relative frequency definition of probability or the classical definition. Using the former, we have

P(AB) = lim_{n → ∞} (nAB / n)

P(A) = lim_{n → ∞} (nA / n)

where nAB is the number of times AB occurs in n repetitions of the experiment. Hence we have

Def. 2.13: Conditional Probability

P(B|A) = lim_{n → ∞} (nAB / nA) = lim_{n → ∞} [(nAB / n) / (nA / n)]

or P(B|A) = P(AB) / P(A), P(A) ≠ 0        (2.8a)

that is, P(AB) = P(B|A) P(A). Interchanging the roles of A and B, we have

P(A|B) = P(AB) / P(B), P(B) ≠ 0        (2.8b)

Eq. 2.8(a) and 2.8(b) can be written as

P(AB) = P(B|A) P(A) = P(B) P(A|B)        (2.9)

Eq. 2.9 expresses the probability of the joint event AB in terms of a conditional probability, say P(B|A), and the (unconditional) probability P(A). In view of Eq. 2.9, we can also write Eq. 2.8(a) as

P(B|A) = P(B) P(A|B) / P(A), P(A) ≠ 0        (2.10a)

Similarly,

P(A|B) = P(A) P(B|A) / P(B), P(B) ≠ 0        (2.10b)

Eq. 2.10(a) or 2.10(b) is one form of Bayes' rule or Bayes' theorem.
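Bayes' rule in Eq. 2.10, combined with the total probability rule of Eq. 2.6, lends itself to direct computation. Here is a small illustrative sketch in Python; the priors P(Ai) and likelihoods P(B|Ai) below are hypothetical numbers chosen only for illustration:

    # P_A[i] = P(A_i): mutually exclusive and exhaustive events (hypothetical values)
    P_A = [0.5, 0.3, 0.2]
    # P_B_A[i] = P(B | A_i): conditional probabilities (hypothetical values)
    P_B_A = [0.9, 0.5, 0.1]

    # total probability: P(B) = sum over i of P(B | A_i) P(A_i)
    P_B = sum(pb * pa for pb, pa in zip(P_B_A, P_A))
    # Bayes' rule: P(A_i | B) = P(B | A_i) P(A_i) / P(B)
    posteriors = [pb * pa / P_B for pb, pa in zip(P_B_A, P_A)]
    print(P_B, posteriors, sum(posteriors))    # the posteriors sum to 1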

A similar relation can be derived for the joint probability of a joint event involving the intersection of three or more events. For example, P(ABC) can be written as

P(ABC) = P(AB) P(C|AB)
       = P(A) P(B|A) P(C|AB)        (2.11)

Exercise 2.1
Let A1, A2, ..., An be n mutually exclusive and exhaustive events, and let B be another event defined on the same sample space. Show that

P(Aj|B) = P(B|Aj) P(Aj) / ∑_{i=1}^{n} P(B|Ai) P(Ai)        (2.12)

Eq. 2.12 represents another form of Bayes' theorem.

Another useful probabilistic concept is that of statistical independence. Suppose the events A and B are such that

P(B|A) = P(B)        (2.13)

That is, knowledge of occurrence of A tells no more about the probability of occurrence of B than we knew without that knowledge. Then, the events A and B are said to be statistically independent. Alternatively, if A and B satisfy Eq. 2.13, then

P(AB) = P(A) P(B)        (2.14)

Either Eq. 2.13 or 2.14 can be used to define the statistical independence of two events. Note that if A and B are independent, then P(AB) = P(A) P(B), whereas if they are disjoint, then P(AB) = 0. The notion of statistical independence can be generalized to the case of more than two events. A set of k events A1, A2, ..., Ak are said to be statistically independent if and only if (iff) the probability of every intersection of k or fewer events equals the product of the probabilities of its constituents. Thus three events A, B and C are independent when the following four conditions hold:

P(AB) = P(A) P(B)
P(AC) = P(A) P(C)
P(BC) = P(B) P(C)
and P(ABC) = P(A) P(B) P(C)

We shall illustrate some of the concepts introduced above with the help of two examples.

Example 2.1
Priya (P1) and Prasanna (P2), after seeing each other for some time (and after a few tiffs), decide to get married, much against the wishes of the parents on both sides. They agree to meet at the office of the registrar of marriages at 11:30 a.m. on the ensuing Friday (looks like they are not aware of Rahu Kalam, or they don't care about it). However, both are somewhat lacking in punctuality, and their arrival times are equally likely to be anywhere in the interval 11 to 12 hrs on that day. Also, the arrival of one person is independent of the other. Unfortunately, both are also very short tempered and will wait only 10 min. before leaving in a huff, never to meet again.

a) Picture the sample space.
b) Let the event A stand for "P1 and P2 meet". Mark this event on the sample space.
c) Find the probability that the lovebirds will get married and (hopefully) will live happily ever after.

a) The sample space is the rectangle shown in Fig. 2.1(a).

Fig. 2.1(a): S of Example 2.1

b) The diagonal OP represents the simultaneous arrival of Priya and Prasanna. Assuming that P1 arrives at 11:x, the meeting between P1 and P2 would take place if P2 arrives within the interval a to b, as shown in the figure. The event A, indicating the possibility of P1 and P2 meeting, is shown in Fig. 2.1(b).

Fig. 2.1(b): The event A of Example 2.1

c) Probability of marriage = Shaded area / Total area = 11/36

Example 2.2
Let two honest coins, marked 1 and 2, be tossed together. The four possible outcomes are T1 T2, T1 H2, H1 T2 and H1 H2. (T1 indicates the toss of coin 1 resulting in tails; similarly T2, etc.) We shall treat all these outcomes as equally likely; that is, the probability of occurrence of any of these four outcomes is 1/4. (Treating each of these outcomes as an event, we find that these events are mutually exclusive and exhaustive.) Let the event A be 'not H1 H2' and B be the event 'match'. (Match comprises the two outcomes T1 T2 and H1 H2.) Find P(B|A). Are A and B independent?

We know that P(B|A) = P(AB) / P(A).

AB is the event 'not H1 H2' and 'match'; that is, it represents the outcome T1 T2. Hence P(AB) = 1/4. The event A comprises the outcomes T1 T2, T1 H2 and H1 T2; therefore,

P(A) = 3/4 and P(B|A) = (1/4) / (3/4) = 1/3

Intuitively, the result P(B|A) = 1/3 is satisfying because, given 'not H1 H2', the toss would have resulted in any one of the three other outcomes, which can be treated to be equally likely; that is, the outcome T1 T2, given 'not H1 H2', has a probability of 1/3. As P(B) = 1/2 and P(B|A) = 1/3, A and B are dependent events.
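Both worked examples can be cross-checked numerically. The sketch below is illustrative only (NumPy, the seed and the sample size are arbitrary choices); it estimates the meeting probability of Example 2.1 by simulation and evaluates P(B|A) of Example 2.2 by enumeration:

    import numpy as np

    # Example 2.1: arrival times uniform over [0, 60] minutes past 11:00
    rng = np.random.default_rng(2)
    n = 1_000_000
    x = rng.uniform(0, 60, n)                    # Priya's arrival
    y = rng.uniform(0, 60, n)                    # Prasanna's arrival, independent
    print(np.mean(np.abs(x - y) <= 10), 11/36)   # estimate vs. exact 11/36

    # Example 2.2: the four equally likely outcomes
    outcomes = [('T', 'T'), ('T', 'H'), ('H', 'T'), ('H', 'H')]
    A  = [o for o in outcomes if o != ('H', 'H')]   # 'not H1 H2'
    AB = [o for o in A if o[0] == o[1]]             # 'match' and 'not H1 H2'
    print(len(AB) / len(A))                         # P(B|A) = 1/3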

2.3 Random Variables
Let us introduce a transformation or function, say X, whose domain is the sample space (of a random experiment) and whose range is in the real line; that is, to each si ∈ S, X assigns a real number, X(si), as shown in Fig. 2.2.

Fig. 2.2: A mapping X( ) from S to the real line

The figure shows the transformation of a few individual sample points as well as the transformation of the event A.

2.3.1 Distribution function
Taking a specific case, let the random experiment be that of throwing a die. The six faces of the die can be treated as the six sample points in S; that is,

Fi = si, i = 1, 2, ..., 6. Let X(si) = i. Once the transformation is induced, the events on the sample space become transformed into appropriate segments of the real line. Then we can enquire into probabilities such as

P[{s : X(s) < a1}]
P[{s : b1 < X(s) ≤ b2}]
or P[{s : X(s) = c}]

These and other probabilities can be arrived at, provided we know the distribution function of X, denoted by FX( ), which is given by

FX(x) = P[{s : X(s) ≤ x}]        (2.15)

That is, FX(x) is the probability of the event comprising all those sample points which are transformed by X into real numbers less than or equal to x. (Note that, for convenience, we use x as the argument of FX( ); but it could be any other symbol, and we may use FX(α), FX(a1) etc.) Evidently, FX( ) is a function whose domain is the real line and whose range is the interval [0, 1].

As an example of a distribution function (also called the Cumulative Distribution Function, CDF), let S consist of four sample points, s1 to s4, with each sample point representing an event with the probabilities P(s1) = 1/4, P(s2) = P(s3) = 1/8 and P(s4) = 1/2. If X(si) = i − 1.5, i = 1, ..., 4, then the distribution function FX(x) will be as shown in Fig. 2.3.

Fig. 2.3: An example of a distribution function

FX( ) satisfies the following properties:

i) FX(x) ≥ 0, − ∞ < x < ∞
ii) FX(− ∞) = 0
iii) FX(∞) = 1
iv) If a > b, then [FX(a) − FX(b)] = P[{s : b < X(s) ≤ a}]
v) If a > b, then FX(a) ≥ FX(b)

The first three properties follow from the fact that FX( ) represents the probability and P(S) = 1. Properties iv) and v) follow from the fact that

{s : X(s) ≤ b} ∪ {s : b < X(s) ≤ a} = {s : X(s) ≤ a}

Referring to Fig. 2.3, note that FX(x) = 0 for x < − 0.5, whereas FX(− 0.5) = 1/4. That is, there is a discontinuity of 1/4 at the point x = − 0.5. In general, there is a discontinuity in FX of magnitude Pa at a point x = a, if and only if

P[{s : X(s) = a}] = Pa        (2.16)
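For a discrete random variable such as the one behind Fig. 2.3, the staircase CDF can be evaluated directly from the point masses. A small sketch (assuming NumPy; the values are those of the example above):

    import numpy as np

    points  = np.array([-0.5, 0.5, 1.5, 2.5])   # values taken by X
    weights = np.array([1/4, 1/8, 1/8, 1/2])    # P(X = point)

    def F_X(x):
        # P(X <= x): right-continuous staircase
        return weights[points <= x].sum()

    for x in (-0.6, -0.5, 0.5, 1.5, 2.5, 3.0):
        print(x, F_X(x))    # 0, 0.25, 0.375, 0.5, 1.0, 1.0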

The properties of the distribution function are summarized by saying that FX( ) is monotonically non-decreasing, is continuous to the right at each point x, and has a step of size Pa at a point a if and only if Eq. 2.16 is satisfied.

The right continuity can be seen as follows. Let x = a and consider, with ∆ > 0,

lim_{∆ → 0} P[a < X(s) ≤ a + ∆] = lim_{∆ → 0} [FX(a + ∆) − FX(a)]

We intuitively feel that as ∆ → 0, the limit of the set {s : a < X(s) ≤ a + ∆} is the null set, and this can be proved to be so. Hence,

FX(a⁺) − FX(a) = 0, where a⁺ = lim_{∆ → 0} (a + ∆)

That is, FX(x) is continuous to the right.

Functions such as X( ) for which distribution functions exist are called Random Variables (RV). In other words, for any real x, {s : X(s) ≤ x} should be an event in the sample space with some assigned probability. (The term "random variable" is somewhat misleading because an RV is a well defined function from S into the real line.) However, every transformation from S into the real line need not be a random variable. For example, let S consist of six sample points, s1 to s6. The only events that have been identified on the sample space are: A = {s1, s2}, B = {s3, s4, s5} and C = {s6}, and their probabilities are P(A) = 2/6, P(B) = 1/2 and P(C) = 1/6. We see that the probabilities for the various unions and intersections of A, B and C can be obtained. Let the transformation X be X(si) = i. Then the distribution function fails to exist because P[s : 3.5 < X(s) ≤ 4.5] = P(s4) is not known, as s4 is not an event on the sample space.

Exercise 2.2
Let S be a sample space with six sample points, s1 to s6. The events identified on S are the same as above, namely, A = {s1, s2}, B = {s3, s4, s5} and C = {s6}, with P(A) = 1/3, P(B) = 1/2 and P(C) = 1/6. Let Y( ) be the transformation

Y(si) = 1, i = 1, 2
      = 2, i = 3, 4, 5
      = 3, i = 6

Show that Y( ) is a random variable by finding FY(y). Sketch FY(y).

2.3.2 Probability density function
Though the CDF is a very fundamental concept, in practice we find it more convenient to deal with the Probability Density Function (PDF). The PDF, fX(x), is defined as the derivative of the CDF; that is,

fX(x) = d FX(x)/dx        (2.17a)

or FX(x) = ∫_{−∞}^{x} fX(α) dα        (2.17b)

The distribution function may fail to have a continuous derivative at a point x = a for one of two reasons:

i) the slope of FX(x) is discontinuous at x = a
ii) FX(x) has a step discontinuity at x = a

The situation is illustrated in Fig. 2.4.

Fig. 2.4: A CDF without a continuous derivative

As can be seen from the figure, FX(x) has a discontinuous slope at x = 1 and a step discontinuity at x = 2. In the first case, we resolve the ambiguity by taking fX to be a derivative on the right. (Note that FX(x) is continuous to the right.) The second case is taken care of by introducing an impulse in the probability domain. That is, if there is a discontinuity in FX at x = a of magnitude Pa, we include an impulse Pa δ(x − a) in the PDF. For example, for the CDF shown in Fig. 2.3, the PDF will be

fX(x) = (1/4) δ(x + 1/2) + (1/8) δ(x − 1/2) + (1/8) δ(x − 3/2) + (1/2) δ(x − 5/2)        (2.18)

In Eq. 2.18, fX(x) has an impulse of weight 1/8 at x = 1/2, as P[X = 1/2] = 1/8. This impulse function cannot be taken as the limiting case of an even function (such as (1/ε) ga(x/ε)) because

lim_{ε → 0} ∫_{1/2 − ε}^{1/2} fX(x) dx = lim_{ε → 0} ∫_{1/2 − ε}^{1/2} (1/8) δ(x − 1/2) dx ≠ 1/16

However, as FX is continuous to the right, with

FX(x) = 2/8, − 1/2 ≤ x < 1/2
      = 3/8, 1/2 ≤ x < 3/2

around the discontinuity, we require

lim_{ε → 0} ∫_{1/2 − ε}^{1/2} fX(x) dx = 1/8

Such an impulse is referred to as the left-sided delta function.

As FX is non-decreasing and FX(∞) = 1, we have

i) fX(x) ≥ 0        (2.19a)
ii) ∫_{−∞}^{∞} fX(x) dx = 1        (2.19b)

Based on the behavior of the CDF, a random variable can be classified as: i) continuous, ii) discrete and iii) mixed. If the CDF, FX(x), is a continuous function of x for all x, then X is a continuous random variable. If FX(x) is a staircase, then X corresponds to a discrete variable. We say that X is a mixed random variable if FX(x) is discontinuous but not a staircase. Later on in this lesson, we shall discuss some examples of these three types of variables.

2.3.3 Joint distribution and density functions
We can induce more than one transformation on a given sample space. If we induce k such transformations, then we have a set of k co-existing random variables. (As the domain of each random variable X( ) is known, it is convenient to denote the variable simply by X.) Consider the case of two random variables, say X and Y. They can be characterized by the (two-dimensional) joint distribution function, given by

FX,Y(x, y) = P[{s : X(s) ≤ x, Y(s) ≤ y}]        (2.20)

That is, FX,Y(x, y) is the probability associated with the set of all those sample points such that under X, their transformed values will be less than or equal to x and, at the same time, under Y, the transformed values will be less than or equal to y. In other words, let A be the set of all those sample points s ∈ S such that X(s) ≤ x1, and similarly, let B be comprised of all those sample points s ∈ S such that Y(s) ≤ y1. Then FX,Y(x1, y1) is the probability associated with the event AB. Looking at the two dimensional (Euclidean) space shown in Fig. 2.5, FX,Y(x1, y1) is the probability associated with the set of all sample points whose transformation does not fall outside the shaded region.

Fig. 2.5: Space of {X(s), Y(s)} corresponding to FX,Y(x1, y1)

Properties of the two dimensional distribution function are:

i) FX,Y(x, y) ≥ 0, − ∞ < x < ∞, − ∞ < y < ∞
ii) FX,Y(− ∞, y) = FX,Y(x, − ∞) = 0
iii) FX,Y(∞, ∞) = 1
iv) FX,Y(∞, y) = FY(y)
v) FX,Y(x, ∞) = FX(x)

vi) If x2 > x1 and y2 > y1, then

FX,Y(x2, y2) ≥ FX,Y(x2, y1) ≥ FX,Y(x1, y1)

We define the two dimensional joint PDF as

fX,Y(x, y) = ∂²FX,Y(x, y) / (∂x ∂y)        (2.21a)

or FX,Y(x, y) = ∫_{−∞}^{y} ∫_{−∞}^{x} fX,Y(α, β) dα dβ        (2.21b)

The notion of joint CDF and joint PDF can be extended to the case of k random variables, where k ≥ 3.

Given the joint PDF of random variables X and Y, it is possible to obtain the one dimensional PDFs, fX(x) and fY(y). We know that FX(x1) = FX,Y(x1, ∞). That is,

FX(x1) = ∫_{−∞}^{x1} ∫_{−∞}^{∞} fX,Y(α, β) dβ dα        (2.22)

Eq. 2.22 involves the derivative of an integral. That is,

fX(x1) = d FX(x)/dx at x = x1 = d/dx1 { ∫_{−∞}^{x1} [ ∫_{−∞}^{∞} fX,Y(α, β) dβ ] dα }

or fX(x) = ∫_{−∞}^{∞} fX,Y(x, y) dy        (2.23a)

Similarly, fY(y) = ∫_{−∞}^{∞} fX,Y(x, y) dx        (2.23b)

(In the study of several random variables, the statistics of any individual variable is called the marginal. Hence it is common to refer to FX(x) as the marginal

distribution function of X, and to fX(x) as the marginal density function. FY(y) and fY(y) are similarly referred to.)

2.3.4 Conditional density
Given fX,Y(x, y), we know how to obtain fX(x) or fY(y). We might also be interested in the PDF of X given a specific value of y = y1. This is called the conditional PDF of X, given y = y1, denoted fX|Y(x|y1) and defined as

fX|Y(x|y1) = fX,Y(x, y1) / fY(y1)        (2.24)

where it is assumed that fY(y1) ≠ 0. Once we understand the meaning of conditional PDF, we might as well drop the subscript on y and denote it by fX|Y(x|y). An analogous relation can be defined for fY|X(y|x). That is, we have the pair of equations,

fX|Y(x|y) = fX,Y(x, y) / fY(y)        (2.25a)
and fY|X(y|x) = fX,Y(x, y) / fX(x)        (2.25b)

or fX,Y(x, y) = fX|Y(x|y) fY(y)        (2.25c)
            = fY|X(y|x) fX(x)        (2.25d)

The function fX|Y(x|y) may be thought of as a function of the variable x, with the variable y arbitrary but fixed. Accordingly, it satisfies all the requirements of an ordinary PDF; that is,

fX|Y(x|y) ≥ 0        (2.26a)
and ∫_{−∞}^{∞} fX|Y(x|y) dx = 1        (2.26b)

2.3.5 Statistical independence
In the case of random variables, the definition of statistical independence is somewhat simpler than in the case of events. We call k random variables X1, X2, ..., Xk statistically independent iff the k-dimensional joint PDF factors into the product

∏_{i=1}^{k} fXi(xi)        (2.27)

Hence, two random variables X and Y are statistically independent iff

fX,Y(x, y) = fX(x) fY(y)        (2.28)

and three random variables X, Y, Z are independent iff

fX,Y,Z(x, y, z) = fX(x) fY(y) fZ(z)        (2.29)

Statistical independence can also be expressed in terms of the conditional PDF. Let X and Y be independent. Then fX,Y(x, y) = fX(x) fY(y). Making use of Eq. 2.25(c), we have

fX|Y(x|y) fY(y) = fX(x) fY(y)

or fX|Y(x|y) = fX(x)        (2.30a)

Similarly, fY|X(y|x) = fY(y)        (2.30b)

Eq. 2.30(a) and 2.30(b) are alternate expressions for the statistical independence between X and Y. We shall now give a few examples to illustrate the concepts discussed in Sec. 2.3.

Example 2.3
A random variable X has

FX(x) = 0,       x < 0
      = K x²,    0 ≤ x ≤ 10
      = 100 K,   x > 10

i) Find the constant K.
ii) Evaluate P(X ≤ 5) and P(5 < X ≤ 7).

iii) What is fX(x)?

i) FX(∞) = 100 K = 1 ⇒ K = 1/100
ii) P(X ≤ 5) = FX(5) = (1/100) × 25 = 0.25
   P(5 < X ≤ 7) = FX(7) − FX(5) = 0.49 − 0.25 = 0.24
iii) fX(x) = d FX(x)/dx
          = 0,       x < 0
          = 0.02 x,  0 ≤ x ≤ 10
          = 0,       x > 10

Note: Specification of fX(x) or FX(x) is complete only when the algebraic expression as well as the range of X is given.

Example 2.4
Consider the random variable X defined by the PDF

fX(x) = a e^{− b |x|}, − ∞ < x < ∞

where a and b are positive constants.

i) Determine the relation between a and b so that fX(x) is a PDF.
ii) Determine the corresponding FX(x).
iii) Find P[1 < X ≤ 2].

i) As can be seen, fX(x) ≥ 0 for − ∞ < x < ∞. In order for fX(x) to represent a legitimate PDF, we require

∫_{−∞}^{∞} a e^{− b |x|} dx = 2 ∫_{0}^{∞} a e^{− b x} dx = 1

That is, 2a/b = 1; hence b = 2a.

ii) The given PDF can be written as

fX(x) = a e^{b x},   x < 0
      = a e^{− b x}, x ≥ 0

For x < 0, we have

FX(x) = ∫_{−∞}^{x} (b/2) e^{b α} dα = (1/2) e^{b x}

Consider x > 0. Take a specific value, x = 2:

FX(2) = ∫_{−∞}^{2} fX(x) dx = 1 − ∫_{2}^{∞} fX(x) dx

But for the problem on hand, ∫_{2}^{∞} fX(x) dx = ∫_{−∞}^{−2} fX(x) dx. Therefore,

FX(2) = 1 − (1/2) e^{− 2b}

We can now write the complete expression for the CDF as

FX(x) = (1/2) e^{b x},        x < 0
      = 1 − (1/2) e^{− b x},  x ≥ 0

iii) P[1 < X ≤ 2] = FX(2) − FX(1) = (1/2) (e^{− b} − e^{− 2b})
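The algebra of Example 2.4 is easy to confirm numerically. An illustrative sketch (assuming NumPy; b = 2 and the grid are arbitrary choices) approximates the integrals on a fine grid:

    import numpy as np

    b = 2.0
    a = b / 2                        # the relation derived in part (i)
    x = np.linspace(-20, 20, 400001)
    dx = x[1] - x[0]
    fx = a * np.exp(-b * np.abs(x))
    print(fx.sum() * dx)             # ~ 1.0: fX is a valid PDF
    mask = (x > 1) & (x <= 2)
    print(fx[mask].sum() * dx,                    # numerical P[1 < X <= 2]
          0.5 * (np.exp(-b) - np.exp(-2 * b)))    # closed form from part (iii)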

Example 2.5
Let fX,Y(x, y) = 1/2, 0 ≤ x ≤ y, 0 ≤ y ≤ 2
             = 0,   otherwise

Find (a) i) fY|X(y|1) and ii) fY|X(y|1.5); (b) determine whether X and Y are independent.

(a) fY|X(y|x) = fX,Y(x, y) / fX(x)

fX(x) can be obtained from fX,Y by integrating out the unwanted variable y over the appropriate range. The maximum value taken by y is 2; in addition, for any given x, y ≥ x. Hence,

fX(x) = ∫_{x}^{2} (1/2) dy = 1 − x/2, 0 ≤ x ≤ 2

Hence,

i) fY|X(y|1) = (1/2) / (1/2) = 1, 1 ≤ y ≤ 2
            = 0, otherwise

ii) fY|X(y|1.5) = (1/2) / (1/4) = 2, 1.5 ≤ y ≤ 2
              = 0, otherwise

(b) The dependence between the random variables X and Y is evident from the statement of the problem because, given a value of X = x1, Y should be greater than or equal to x1 for the joint PDF to be non zero. Also, we see that fY|X(y|x) depends on x, whereas if X and Y were to be independent, then fY|X(y|x) = fY(y).

Exercise 2.3
For the two random variables X and Y, the density functions shown in Fig. 2.6 have been given.

Fig. 2.6: PDFs for the exercise 2.3

a) Find fX,Y(x, y).
b) Show that

fY(y) = y/100,        0 ≤ y ≤ 10
      = 1/5 − y/100,  10 < y ≤ 20

2.4 Transformation of Variables
The process of communication involves various operations such as modulation, detection, filtering etc. In performing these operations, we typically generate new random variables from the given ones. We will now develop the necessary theory for the statistical characterization of the output random variables, given the transformation and the input random variables.

2.4.1 Functions of one random variable
Assume that X is the given random variable of the continuous type, with the PDF fX(x). Let Y be the new random variable, obtained from X by the transformation Y = g(X). By this we mean that the number Y(s1) associated with any sample point s1 is

Y(s1) = g(X(s1))

Our interest is to obtain fY(y). This can be obtained with the help of the following theorem (Thm. 2.1).

Theorem 2.1
Let Y = g(X). To find fY(y), solve the equation y = g(x). Denoting its real roots by xn,

y = g(x1) = ... = g(xn) = ...,

we will show that

fY(y) = fX(x1)/|g′(x1)| + ... + fX(xn)/|g′(xn)| + ...        (2.31)

where g′(x) is the derivative of g(x).

Proof: Consider the transformation shown in Fig. 2.7. We see that the equation y1 = g(x) has three roots, namely, x1, x2 and x3.

Fig. 2.7: X − Y transformation used in the proof of theorem 2.1

We know that fY(y) dy = P[y < Y ≤ y + dy]. Therefore, for any given y1, we need to find the set of values x such that y1 < g(x) ≤ y1 + dy, and the probability that X is in this set. As we see from the figure, this set consists of the following intervals:

x1 < x ≤ x1 + dx1,  x2 + dx2 < x ≤ x2,  x3 < x ≤ x3 + dx3

where dx1 > 0 and dx3 > 0, but dx2 < 0. From the above, it follows that

P[y1 < Y ≤ y1 + dy] = P[x1 < X ≤ x1 + dx1] + P[x2 + dx2 < X ≤ x2] + P[x3 < X ≤ x3 + dx3]

This probability is the shaded area in Fig. 2.7. Now,

P[x1 < X ≤ x1 + dx1] = fX(x1) dx1,  dx1 = dy/g′(x1)
P[x2 + dx2 < X ≤ x2] = fX(x2) |dx2|, dx2 = dy/g′(x2)
P[x3 < X ≤ x3 + dx3] = fX(x3) dx3,  dx3 = dy/g′(x3)

We conclude that

fY(y) dy = [fX(x1)/|g′(x1)|] dy + [fX(x2)/|g′(x2)|] dy + [fX(x3)/|g′(x3)|] dy        (2.32)

and Eq. 2.31 follows by canceling dy from Eq. 2.32.

Note that if g(x) = y1 = constant for every x in the interval (x0, x1), then we have

P[Y = y1] = P(x0 < X ≤ x1) = FX(x1) − FX(x0)

that is, FY(y) is discontinuous at y = y1. Hence fY(y) contains an impulse, δ(y − y1), of area FX(x1) − FX(x0).

We shall take up a few examples.

Example 2.6
Y = g(X) = X + a, where a is a constant. Let us find fY(y).

We have g′(x) = 1 and x = y − a. For a given y, there is a unique x satisfying the above transformation. Hence, as g′(x) = 1, we have

fY(y) = fX(y − a)

Let fX(x) = 1 − |x|, |x| ≤ 1
          = 0, elsewhere

and a = − 1. Then fY(y) = 1 − |y + 1|. As y = x − 1 and x ranges over the interval (− 1, 1), the range of y is (− 2, 0).

Hence,

fY(y) = 1 − |y + 1|, − 2 ≤ y ≤ 0
      = 0, elsewhere

fX(x) and fY(y) are shown in Fig. 2.8.

Fig. 2.8: fX(x) and fY(y) of example 2.6

As can be seen, the transformation of adding a constant to the given variable simply results in the translation of its PDF.

Example 2.7
Let Y = bX, where b is a constant. Let us find fY(y).

Solving for X, we have X = Y/b. Again, for a given y, there is a unique x. As g′(x) = b,

fY(y) = (1/|b|) fX(y/b)

Let fX(x) = 1 − x/2, 0 ≤ x ≤ 2
          = 0, otherwise

and b = − 2.

Then,

fY(y) = (1/2)[1 + y/4], − 4 ≤ y ≤ 0
      = 0, otherwise

fX(x) and fY(y) are sketched in Fig. 2.9.

Fig. 2.9: fX(x) and fY(y) of example 2.7

Exercise 2.4
Let Y = aX + b, where a and b are constants. Show that

fY(y) = (1/|a|) fX((y − b)/a)

If fX(x) is as shown in Fig. 2.8, compute and sketch fY(y) for a = − 2 and b = 1.

Example 2.8
Y = a X², a > 0. Let us find fY(y).

We have g′(x) = 2ax. If y < 0, then the equation y = a x² has no real solution. Hence fY(y) = 0 for y < 0. If y ≥ 0, then it has two solutions,

x1 = √(y/a) and x2 = − √(y/a)

and Eq. 2.31 yields

fY(y) = [1/(2a√(y/a))] [fX(√(y/a)) + fX(− √(y/a))], y ≥ 0
      = 0, otherwise

Let a = 1, and

fX(x) = (1/√(2π)) exp(− x²/2), − ∞ < x < ∞

(Note that exp(α) is the same as e^α.) Then

fY(y) = [1/(2√y)] (1/√(2π)) [exp(− y/2) + exp(− y/2)]
      = (1/√(2π y)) exp(− y/2), y ≥ 0
      = 0, otherwise

Sketching of fX(x) and fY(y) is left as an exercise.
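The result of Example 2.8 can be cross-checked by simulation. The sketch below is illustrative only (NumPy, the seed and the probing interval are arbitrary choices); it compares an empirical probability for Y = X² with the integral of the derived density:

    import numpy as np

    rng = np.random.default_rng(3)
    y = rng.standard_normal(1_000_000) ** 2     # samples of Y = X^2, X Gaussian

    t = np.linspace(0.5, 1.5, 10001)            # probe P(0.5 < Y <= 1.5)
    fy = np.exp(-t / 2) / np.sqrt(2 * np.pi * t)
    print(((y > 0.5) & (y <= 1.5)).mean(),      # empirical estimate
          fy.sum() * (t[1] - t[0]))             # numerical integral of fY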

Example 2.9
Consider the half wave rectifier transformation given by

Y = 0, X ≤ 0
  = X, X > 0

a) Let us find the general expression for fY(y).
b) Let fX(x) = 1/2, − 1/2 < x < 3/2
            = 0, otherwise

We shall compute and sketch fY(y).

a) Note that g(X) is a constant (equal to zero) for X in the range (− ∞, 0). Hence, there is an impulse in fY(y) at y = 0 whose area is equal to FX(0). As Y is nonnegative, fY(y) = 0 for y < 0. As Y = X for x > 0, we have fY(y) = fX(y) for y > 0. Hence,

fY(y) = fX(y) w(y) + FX(0) δ(y)

where w(y) = 1, y ≥ 0
           = 0, otherwise

b) For the given fX(x),

fY(y) = (1/4) δ(y), y = 0
      = 1/2,        0 < y ≤ 3/2
      = 0,          otherwise

fX(x) and fY(y) are sketched in Fig. 2.10.

Fig. 2.10: fX(x) and fY(y) for the example 2.9

Note that X, a continuous RV, is transformed into a mixed RV, Y.

Example 2.11 4 4 Fig. a continuous RV is transformed into a mixed RV. 2. Y assumes only two values. In this case. namely ±1. Fig. Y . V. Let us write fY ( y ) as fY ( y ) = P1 δ ( y − 1) + P−1 δ ( y + 1) where a) P− 1 = P [ X < 0 ] and P1 [ X ≥ 0 ] b) Taking f X ( x ) of example 2. 2.Principles of Communication Prof.10 Note that this transformation has converted a continuous random variable X into a discrete random variable Y .9. We shall compute and sketch fY ( x ) assuming that f X ( x ) is the same as that of Example 2.11: f X ( x ) and fY ( y ) for the example 2. Hence the PDF of Y has only two impulses. X ≥ 0 a) b) Let us find the general expression for fY ( y ) .9.35 Indian Institute of Technology Madras . Venkata Rao Note that X . 3 1 and P− 1 = . 2. X < 0 Let Y = ⎨ ⎩+ 1.10 ⎧− 1. we have P1 = has the sketches f X ( x ) and fY ( y ) .

Exercise 2.5
Let a random variable X, with the PDF shown in Fig. 2.12(a), be the input to a device with the input-output characteristic shown in Fig. 2.12(b). Compute and sketch fY(y).

Fig. 2.12: (a) Input PDF for the transformation of exercise 2.5 (b) Input-output transformation

Exercise 2.6
The random variable X of exercise 2.5 is applied as input to the X − Y transformation shown in Fig. 2.13. Compute and sketch fY(y).

We now assume that the random variable X is of the discrete type, taking on the value xk with probability Pk. In this case, the RV, Y = g(X), is also discrete, assuming the value yk = g(xk) with probability Pk. If yk = g(x) for only one x = xk, then P[Y = yk] = P[X = xk] = Pk. If, however, yk = g(x) for x = xk and x = xm, then P[Y = yk] = Pk + Pm.

Example 2.11
Let Y = X².

a) If fX(x) = (1/6) ∑_{i=1}^{6} δ(x − i), find fY(y).
b) If fX(x) = (1/6) ∑_{i=−2}^{3} δ(x − i), find fY(y).

a) If X takes the values (1, 2, ..., 6) with probability 1/6, then Y takes the values (1², 2², ..., 6²) with probability 1/6. That is,

fY(y) = (1/6) ∑_{i=1}^{6} δ(y − i²)

b) If, however, X takes the values − 2, − 1, 0, 1, 2, 3 with probability 1/6, then Y takes the values 0, 1, 4, 9 with probabilities 1/6, 1/3, 1/3, 1/6, respectively. That is,

fY(y) = (1/6)[δ(y) + δ(y − 9)] + (1/3)[δ(y − 1) + δ(y − 4)]

2.4.2 Functions of two random variables
Given two random variables X and Y (assumed to be of the continuous type), two new random variables, Z and W, are defined using the transformation

Z = g(X, Y) and W = h(X, Y)

Given the above transformation and fX,Y(x, y), we would like to obtain fZ,W(z, w). For this, we require the Jacobian of the transformation, denoted J(z, w / x, y), where

J(z, w / x, y) = det | ∂z/∂x  ∂z/∂y |
                     | ∂w/∂x  ∂w/∂y |

               = (∂z/∂x)(∂w/∂y) − (∂z/∂y)(∂w/∂x)

That is, the Jacobian is the determinant of the appropriate partial derivatives. We shall now state the theorem which relates fZ,W(z, w) and fX,Y(x, y). We shall assume that the transformation is one-to-one. That is, given

g(x, y) = z1        (2.33a)
h(x, y) = w1        (2.33b)

there is a unique set of numbers (x1, y1) satisfying Eq. 2.33.

Theorem 2.2
To obtain fZ,W(z, w), solve the system of equations

g(x, y) = z1, h(x, y) = w1

for x and y. Let (x1, y1) be the result of the solution. Then,

fZ,W(z, w) = fX,Y(x1, y1) / |J(z, w / x1, y1)|        (2.34)

Proof of this theorem is given in appendix A2.1. For a more general version of this theorem, refer [1].

Example 2.12
X and Y are two independent RVs with the PDFs

fX(x) = 1/2, |x| ≤ 1
fY(y) = 1/2, |y| ≤ 1

If Z = X + Y and W = X − Y, let us find (a) fZ,W(z, w) and (b) fZ(z).

a) From the given transformations, we obtain

x = (z + w)/2 and y = (z − w)/2

We see that the mapping is one-to-one. Fig. 2.14(a) depicts the (product) space A on which fX,Y(x, y) is non-zero.

Fig. 2.14: (a) The space where fX,Y is non-zero (b) The space where fZ,W is non-zero

We can obtain the space B (on which fZ,W(z, w) is non-zero) as follows:

A space            B space
The line x = 1     (z + w)/2 = 1   ⇒ w = − z + 2
The line x = − 1   (z + w)/2 = − 1 ⇒ w = − (z + 2)
The line y = 1     (z − w)/2 = 1   ⇒ w = z − 2
The line y = − 1   (z − w)/2 = − 1 ⇒ w = z + 2

The space B is shown in Fig. 2.14(b). The Jacobian of the transformation is

J = det | 1   1 |
        | 1  −1 |  = − 2, and |J| = 2

Hence,

fZ,W(z, w) = (1/4)/2 = 1/8, (z, w) ∈ B
           = 0, otherwise

b) fZ(z) = ∫_{−∞}^{∞} fZ,W(z, w) dw

From Fig. 2.14(b), we can see that, for a given z (z ≥ 0), w can take values only in the range (z − 2) to (2 − z). Hence, for 0 ≤ z ≤ 2,

fZ(z) = ∫_{z − 2}^{2 − z} (1/8) dw = (2 − z)/4

For z negative, fZ(z) = (2 + z)/4, − 2 ≤ z ≤ 0.

Hence,

fZ(z) = (2 − |z|)/4, |z| ≤ 2

Example 2.13
Let the random variables R and Φ be given by R = √(X² + Y²) and Φ = arc tan(Y/X), where we assume R ≥ 0 and − π < Φ < π. It is given that

fX,Y(x, y) = (1/2π) exp[− (x² + y²)/2], − ∞ < x, y < ∞

Let us find fR,Φ(r, φ).

As the given transformation is from cartesian-to-polar coordinates, we can write x = r cos φ and y = r sin φ, and the transformation is one-to-one. The Jacobian is

J = det | ∂r/∂x  ∂r/∂y |
        | ∂φ/∂x  ∂φ/∂y |  = det | cos φ       sin φ   |
                                | − sin φ/r   cos φ/r |  = 1/r

Hence, using the theorem,

fR,Φ(r, φ) = (r/2π) exp(− r²/2), 0 ≤ r < ∞, − π < φ < π
           = 0, otherwise

It is left as an exercise to find fR(r), fΦ(φ) and to show that R and Φ are independent variables.

Theorem 2.2 can also be used when only one function Z = g(X, Y) is specified and what is required is fZ(z). To apply the theorem, a conveniently chosen auxiliary or dummy variable W is introduced; typically W = X or W = Y. Using the theorem, fZ,W(z, w) is found, from which fZ(z) can be obtained.

Let Z = X + Y, and let us require fZ(z). Introduce the dummy variable W = Y. Then X = Z − W and Y = W. As J = 1,

fZ,W(z, w) = fX,Y(z − w, w)

and fZ(z) = ∫_{−∞}^{∞} fX,Y(z − w, w) dw        (2.35)

If X and Y are independent, then Eq. 2.35 becomes

fZ(z) = ∫_{−∞}^{∞} fX(z − w) fY(w) dw        (2.36a)

That is,

fZ = fX ∗ fY        (2.36b)

fZ(z) is the convolution of fX(z) and fY(z).

Example 2.14
Let X and Y be two independent random variables, with

fX(x) = 1/2, − 1 ≤ x ≤ 1
      = 0, otherwise

fY(y) = 1/3, − 2 ≤ y ≤ 1
      = 0, otherwise

If Z = X + Y, let us find P[Z ≤ − 2].

From Eq. 2.36(b), fZ(z) is the convolution of fX(z) and fY(z). Carrying out the convolution, we obtain fZ(z) as shown in Fig. 2.15.

Fig. 2.15: PDF of Z = X + Y of example 2.14

P[Z ≤ − 2] is the shaded area = 1/12.
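The convolution of Eq. 2.36(b) and the value P[Z ≤ − 2] = 1/12 can be verified numerically. A sketch, assuming NumPy (the step size is an arbitrary choice):

    import numpy as np

    dz = 1e-3
    x = np.arange(-1, 1, dz)            # support of fX
    y = np.arange(-2, 1, dz)            # support of fY
    fX = np.full(x.size, 1/2)
    fY = np.full(y.size, 1/3)

    fZ = np.convolve(fX, fY) * dz       # fZ = fX * fY, sampled on a grid
    z = x[0] + y[0] + dz * np.arange(fZ.size)
    print(fZ[z <= -2].sum() * dz)       # ~ 1/12 = 0.0833...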

Example 2.15
Let Z = X/Y. Let us find an expression for fZ(z).

Introducing the dummy variable W = Y, we have X = ZW and Y = W. As J = 1/w,

fZ(z) = ∫_{−∞}^{∞} |w| fX,Y(zw, w) dw

Let fX,Y(x, y) = (1 + x y)/4, |x| ≤ 1, |y| ≤ 1
              = 0, elsewhere

Then fZ(z) = ∫ |w| [(1 + z w²)/4] dw

fX,Y(x, y) is non-zero if (x, y) ∈ A, where A is the product space |x| ≤ 1 and |y| ≤ 1 (Fig. 2.16(a)). Let fZ,W(z, w) be non-zero if (z, w) ∈ B. Under the given transformation, B will be as shown in Fig. 2.16(b).

Fig. 2.16: (a) The space where fX,Y is non-zero (b) The space where fZ,W is non-zero

To obtain fZ(z) from fZ,W(z, w), we have to integrate out w over the appropriate ranges.

i) Let |z| < 1. Then

fZ(z) = ∫_{−1}^{1} |w| [(1 + z w²)/4] dw = 2 ∫_{0}^{1} w [(1 + z w²)/4] dw = (1/4)(1 + z/2)

ii) For z > 1,

fZ(z) = 2 ∫_{0}^{1/z} w [(1 + z w²)/4] dw = (1/4)[1/z² + 1/(2 z³)]

iii) For z < − 1,

fZ(z) = 2 ∫_{0}^{− 1/z} w [(1 + z w²)/4] dw = (1/4)[1/z² + 1/(2 z³)]

Hence,

fZ(z) = (1/4)(1 + z/2),           |z| ≤ 1
      = (1/4)[1/z² + 1/(2 z³)],   |z| > 1

Exercise 2.7
Let Z = X Y and W = Y.

a) Show that fZ(z) = ∫_{−∞}^{∞} (1/|w|) fX,Y(z/w, w) dw

b) Let X and Y be independent, with

fX(x) = (2/π) · 1/(1 + x²), |x| ≤ 1

and fY(y) = y e^{− y²/2}, y ≥ 0
         = 0, otherwise

Show that fZ(z) = (1/√(2π)) e^{− z²/2}, − ∞ < z < ∞.

In transformations involving two random variables, we may encounter a situation where one of the variables is continuous and the other is discrete; such cases are handled better by making use of the distribution function approach, as illustrated below.

Example 2.16
The input to a noisy channel is a binary random variable with P[X = 0] = P[X = 1] = 1/2. The output of the channel is given by Z = X + Y,

where Y is the channel noise with

fY(y) = (1/√(2π)) e^{− y²/2}, − ∞ < y < ∞

Find fZ(z).

Let us first compute the distribution function of Z, from which the density function can be derived:

P(Z ≤ z) = P[Z ≤ z | X = 0] P[X = 0] + P[Z ≤ z | X = 1] P[X = 1]

As Z = X + Y, we have P[Z ≤ z | X = 0] = FY(z). Similarly,

P[Z ≤ z | X = 1] = P[Y ≤ (z − 1)] = FY(z − 1)

Hence FZ(z) = (1/2) FY(z) + (1/2) FY(z − 1). As fZ(z) = d FZ(z)/dz,

fZ(z) = (1/2) [ (1/√(2π)) exp(− z²/2) + (1/√(2π)) exp(− (z − 1)²/2) ]
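The density obtained for Example 2.16 is an equal-weight mixture of two Gaussian shapes, and it integrates to one, as any PDF must. A quick numerical sketch (assuming NumPy; the grid is an arbitrary choice):

    import numpy as np

    z = np.linspace(-6, 7, 130001)
    g = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
    fz = 0.5 * g(z) + 0.5 * g(z - 1)    # mixture of N(0,1) and N(1,1) shapes
    print(fz.sum() * (z[1] - z[0]))     # ~ 1.0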

The distribution function method, as illustrated by means of example 2.16, is a very basic method and can be used in all situations. (In this method, to obtain the PDF of Y = g(X), we compute FY(y) and then obtain fY(y) = d FY(y)/dy; similarly for transformations involving more than one random variable.) Of course, computing the CDF may prove to be quite difficult in certain situations. The method of obtaining PDFs based on theorem 2.1 or 2.2 is called the change of variable method. We shall illustrate the distribution function method with another example.

Example 2.17
Let Z = X + Y. Obtain fZ(z), given fX,Y(x, y).

P[Z ≤ z] = P[X + Y ≤ z] = P[Y ≤ z − x]

This probability is the probability of (X, Y) lying in the shaded area shown in Fig. 2.17.

Fig. 2.17: Shaded area is the P[Z ≤ z]

That is,

FZ(z) = ∫_{−∞}^{∞} dx [ ∫_{−∞}^{z − x} fX,Y(x, y) dy ]

and

fZ(z) = ∂/∂z [ ∫_{−∞}^{∞} dx ( ∫_{−∞}^{z − x} fX,Y(x, y) dy ) ]
      = ∫_{−∞}^{∞} dx [ ∂/∂z ∫_{−∞}^{z − x} fX,Y(x, y) dy ]
      = ∫_{−∞}^{∞} fX,Y(x, z − x) dx        (2.37a)

Similarly,

fZ(z) = ∫_{−∞}^{∞} fX,Y(z − y, y) dy        (2.37b)

If X and Y are independent, we have fZ(z) = fX(z) ∗ fY(z). We note that Eq. 2.37(b) is the same as Eq. 2.35.

So far, we have considered transformations involving one or two variables. This can be generalized to the case of functions of n variables. Details can be found in [1, 2].

2.5 Statistical Averages
The PDF of a random variable provides a complete statistical characterization of the variable. However, we might be in a situation where the PDF is not available, but we are able to estimate (with reasonable accuracy) certain (statistical) averages of the random variable. Some of these averages do provide a simple and fairly adequate (though incomplete) description of the random variable. We now define a few of these averages and explore their significance.

The mean value (also called the expected value, mathematical expectation or simply expectation) of the random variable X is defined as

mX = E[X] = X̄ = ∫_{−∞}^{∞} x fX(x) dx        (2.38)

where E denotes the expectation operator. Note that mX is a constant. Similarly, the expected value of a function of X, g(X), is defined by

E[g(X)] = ∫_{−∞}^{∞} g(x) fX(x) dx        (2.39)

Remarks: The terminology expected value or expectation has its origin in games of chance. This can be illustrated as follows. Three small similar discs, numbered 1, 2 and 2 respectively, are placed in a bowl and are mixed. A player is to be

blindfolded and is to draw a disc from the bowl. If he draws the disc numbered 1, he will receive nine dollars; if he draws either disc numbered 2, he will receive 3 dollars. It seems reasonable to assume that the player has a '1/3 claim' on the 9 dollars and a '2/3 claim' on the three dollars. His total claim is 9(1/3) + 3(2/3), or five dollars. If we take X to be the (discrete) random variable with the PDF

fX(x) = (1/3) δ(x − 1) + (2/3) δ(x − 2)

and g(X) = 15 − 6X, then

E[g(X)] = ∫_{−∞}^{∞} (15 − 6x) fX(x) dx = 5

Note that g(x) is such that g(1) = 9 and g(2) = 3. That is, the mathematical expectation of g(X) is precisely the player's claim or expectation [3].

For the special case of g(X) = Xⁿ, we obtain the n-th moment of the probability distribution of the RV, X; that is,

E[Xⁿ] = ∫_{−∞}^{∞} xⁿ fX(x) dx        (2.40)

The most widely used moments are the first moment (n = 1, which results in the mean value of Eq. 2.38) and the second moment (n = 2, resulting in the mean square value of X):

E[X²] = X²̄ = ∫_{−∞}^{∞} x² fX(x) dx        (2.41)

If g(X) = (X − mX)ⁿ, then E[g(X)] gives the n-th central moment; that is,

E[(X − mX)ⁿ] = ∫_{−∞}^{∞} (x − mX)ⁿ fX(x) dx        (2.42)

We can extend the definition of expectation to the case of functions of k (k ≥ 2) random variables. Consider a function of two random variables, g(X, Y). Then,

E[g(X, Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) fX,Y(x, y) dx dy        (2.43)

An important property of the expectation operator is linearity; that is, if Z = g(X, Y) = αX + βY, where α and β are constants, then Z̄ = αX̄ + βȲ. This result can be established as follows. Using Eq. 2.43, we have

E[Z] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (αx + βy) fX,Y(x, y) dx dy
     = ∫∫ (αx) fX,Y(x, y) dx dy + ∫∫ (βy) fX,Y(x, y) dx dy

Integrating out the variable y in the first term and the variable x in the second term, we have

E[Z] = α ∫_{−∞}^{∞} x fX(x) dx + β ∫_{−∞}^{∞} y fY(y) dy = αX̄ + βȲ

2.5.1 Variance
Coming back to the central moments, the first central moment is always zero because

E[(X − mX)] = ∫_{−∞}^{∞} (x − mX) fX(x) dx = mX − mX = 0

Consider the second central moment,

E[(X − mX)²] = E[X² − 2 mX X + mX²]

From the linearity property of expectation,

E[X² − 2 mX X + mX²] = E[X²] − 2 mX E[X] + mX²
                     = E[X²] − 2 mX² + mX²
                     = X²̄ − mX² = X²̄ − (X̄)²

The second central moment of a random variable is called the variance, and its (positive) square root is called the standard deviation. The symbol σ² is generally used to denote the variance. (If necessary, we use a subscript on σ².)

The variance provides a measure of the variable's spread or randomness. Specifying the variance essentially constrains the effective width of the density function. This can be made more precise with the help of the Chebyshev inequality, which follows as a special case of the following theorem.

Theorem 2.3
Let g(X) be a non-negative function of the random variable X. If E[g(X)] exists, then for every positive constant c,

P[g(X) ≥ c] ≤ E[g(X)] / c        (2.44)

Proof: Let A = {x : g(x) ≥ c} and let B denote the complement of A. Then,

E[g(X)] = ∫_{−∞}^{∞} g(x) fX(x) dx = ∫_A g(x) fX(x) dx + ∫_B g(x) fX(x) dx

Since each integral on the RHS above is non-negative, we can write

E[g(X)] ≥ ∫_A g(x) fX(x) dx

If x ∈ A, then g(x) ≥ c; hence

E[g(X)] ≥ c ∫_A fX(x) dx

But ∫_A fX(x) dx = P[x ∈ A] = P[g(X) ≥ c]. That is, E[g(X)] ≥ c P[g(X) ≥ c], which is the desired result.

Note: The kind of manipulations used in proving theorem 2.3 is useful in establishing similar inequalities involving random variables.

To see how innocuous (or weak, perhaps) the inequality 2.44 is, let g(X) represent the height of a randomly chosen human being, with E[g(X)] = 1.6 m. Then Eq. 2.44 states that the probability of choosing a person over 16 m tall is at most 1/10! (In a population of 1 billion, at most 100 million would be as tall as a full grown Palmyra tree!)

Chebyshev inequality can be stated in two equivalent forms:

i) P[|X − mX| ≥ k σX] ≤ 1/k², k > 0        (2.45a)
ii) P[|X − mX| < k σX] > 1 − 1/k²        (2.45b)

where σX is the standard deviation of X.

To establish 2.45(a), let g(X) = (X − mX)² and c = k² σX² in theorem 2.3. We then have

P[(X − mX)² ≥ k² σX²] ≤ 1/k²

or P[|X − mX| ≥ k σX] ≤ 1/k²

which is the desired result. Naturally, we would take the positive number k to be greater than one to have a meaningful result. The Chebyshev inequality can be interpreted as: the probability of observing any RV outside ± k standard deviations off its mean value is no larger than 1/k². With k = 2, for example, the probability of |X − mX| ≥ 2σX does not exceed 1/4 or 25%. By the same token, we expect X to occur within the range (mX ± 2σX) for more than 75% of the observations. That is, the smaller the standard deviation, the smaller is the width of the interval around mX where the required probability is concentrated. The Chebyshev inequality thus enables us to give a quantitative interpretation to the statement 'variance is indicative of the spread of the random variable'.
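The bound of Eq. 2.45(a) can be observed experimentally. The sketch below is illustrative (NumPy, the exponential distribution and the seed are arbitrary choices); it compares the observed tail probability with the Chebyshev bound:

    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.exponential(scale=1.0, size=1_000_000)  # any RV with finite variance
    m, s = x.mean(), x.std()
    for k in (2, 3, 4):
        observed = np.mean(np.abs(x - m) >= k * s)
        print(k, observed, 1 / k**2)                # observed <= 1/k^2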

Note that it is not necessary that the variance exists for every PDF. For example, if

f_X(x) = α / [π(α² + x²)],  −∞ < x < ∞,  α > 0

(this is called Cauchy's PDF), then E[X] = 0 but E[X²] is not finite.

2.5.2 Covariance

An important joint expectation is the quantity called covariance, which is obtained by letting g(X,Y) = (X − m_X)(Y − m_Y) in Eq. 2.43. We use the symbol λ to denote the covariance. That is,

λ_XY = cov[X,Y] = E[(X − m_X)(Y − m_Y)]   (2.46a)

Using the linearity property of expectation, we have

λ_XY = E[XY] − m_X m_Y   (2.46b)

The cov[X,Y], normalized with respect to the product σ_X σ_Y, is termed the correlation coefficient, and we denote it by ρ. That is,

ρ_XY = (E[XY] − m_X m_Y) / (σ_X σ_Y)   (2.47)

The correlation coefficient is a measure of dependency between the variables. Suppose X and Y are independent. Then

E[XY] = ∫∫ x y f_{X,Y}(x,y) dx dy = ∫∫ x y f_X(x) f_Y(y) dx dy
      = [∫ x f_X(x) dx] [∫ y f_Y(y) dy] = m_X m_Y

Thus, for independent X and Y, we have λ_XY (and hence ρ_XY) equal to zero. Intuitively, this result is appealing. Assume X and Y to be independent. When the joint experiment is performed many times, and given X = x₁, then Y would occur sometimes positive with respect to m_Y, and sometimes negative with respect to m_Y. In the course of many trials of the experiment with the outcome X = x₁, the sum of the numbers x₁(y − m_Y) would be very small, and the quantity sum/(number of trials) tends to zero as the number of trials keeps increasing.

On the other hand, let X and Y be dependent. Suppose, for example, the outcome y is conditioned on the outcome x in such a manner that there is a greater possibility of (y − m_Y) being of the same sign as (x − m_X). Then we expect ρ_XY to be positive. Similarly, if the probability of (x − m_X) and (y − m_Y) being of opposite sign is quite large, then we expect ρ_XY to be negative. Taking the extreme case of this dependency, let X and Y be so conditioned that, given X = x₁, Y = ± α x₁, α being a constant. Then ρ_XY = ± 1. If the variables are neither independent nor totally dependent, then ρ will have a magnitude between 0 and 1.

If ρ_XY (or λ_XY) = 0, the random variables X and Y are said to be uncorrelated. When the random variables are independent, they are uncorrelated. However, the fact that they are uncorrelated does not ensure that they are independent. That is, for X and Y independent we have ρ_XY = 0, and for the totally dependent case (y = ± α x), |ρ_XY| = 1.

Two random variables X and Y are said to be orthogonal if E[XY] = 0. Note that if X and Y are uncorrelated, then E[XY] = E[X] E[Y].

As an example of uncorrelated but dependent random variables, let X have the PDF

f_X(x) = 1/α,  −α/2 < x < α/2
       = 0,    otherwise

and let Y = X². Then

E[XY] = E[X³] = (1/α) ∫_{−α/2}^{α/2} x³ dx = 0

As E[X] = 0, λ_XY = E[XY] − E[X]E[Y] = 0. That is, X and Y are uncorrelated. But X and Y are not independent, because if X = x₁, then Y = x₁²!

Let Y be a linear combination of the two random variables X₁ and X₂; that is, Y = k₁X₁ + k₂X₂, where k₁ and k₂ are constants. Let E[Xᵢ] = mᵢ and σ²_{Xᵢ} = σᵢ², i = 1, 2, and let ρ₁₂ be the correlation coefficient between X₁ and X₂. We will now relate σ_Y² to the known quantities:

σ_Y² = E[Y²] − (E[Y])²
E[Y] = k₁m₁ + k₂m₂
E[Y²] = E[k₁²X₁² + k₂²X₂² + 2k₁k₂X₁X₂]

With a little manipulation, we can show that

σ_Y² = k₁²σ₁² + k₂²σ₂² + 2 ρ₁₂ k₁ k₂ σ₁ σ₂   (2.48a)

If X₁ and X₂ are uncorrelated, then

σ_Y² = k₁²σ₁² + k₂²σ₂²   (2.48b)

Note: Let X₁ and X₂ be uncorrelated, and let Z = X₁ + X₂ and W = X₁ − X₂. Then σ_Z² = σ_W² = σ₁² + σ₂². That is, the sum as well as the difference random variables have the same variance, which is larger than either σ₁² or σ₂²!
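Both the example above and the note on sums and differences can be checked numerically. A minimal sketch (an editorial addition; NumPy assumed, and the choice α = 2 is arbitrary):

    import numpy as np

    rng = np.random.default_rng(3)
    alpha = 2.0
    x = rng.uniform(-alpha / 2, alpha / 2, 1_000_000)
    y = x**2                                   # totally dependent on X

    # Sample covariance is ~0 even though Y is a deterministic function of X.
    print(np.cov(x, y)[0, 1])

    # Sum and difference of uncorrelated RVs have the same variance.
    x1 = rng.standard_normal(1_000_000)        # sigma_1^2 = 1
    x2 = rng.uniform(-1, 1, 1_000_000)         # sigma_2^2 = 1/3, independent of x1
    print(np.var(x1 + x2), np.var(x1 - x2))    # both close to 1 + 1/3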

The above result can be generalized to a linear combination of n variables. That is, if Y = Σᵢ kᵢXᵢ, then

σ_Y² = Σᵢ kᵢ²σᵢ² + 2 ΣΣ_{i<j} kᵢ kⱼ ρᵢⱼ σᵢ σⱼ   (2.49)

where the meaning of the various symbols on the RHS of Eq. 2.49 is quite obvious.

We shall now give a few examples based on the theory covered so far in this section.

Example 2.18

Let a random variable X have the CDF

F_X(x) = 0,      x < 0
       = x/8,    0 ≤ x ≤ 2
       = x²/16,  2 ≤ x ≤ 4
       = 1,      4 ≤ x

We shall find a) E[X] and b) σ_X².

a) The given CDF implies the PDF

f_X(x) = 0,    x < 0
       = 1/8,  0 ≤ x ≤ 2
       = x/8,  2 ≤ x ≤ 4
       = 0,    4 ≤ x

Therefore,

E[X] = ∫₀² (x/8) dx + ∫₂⁴ (x²/8) dx

     = 1/4 + 7/3 = 31/12

b) σ_X² = E[X²] − (E[X])²

E[X²] = ∫₀² (x²/8) dx + ∫₂⁴ (x³/8) dx = 1/3 + 15/2 = 47/6

σ_X² = 47/6 − (31/12)² = 167/144

Example 2.19

Let Y = cos(πX), where

f_X(x) = 1,  −1/2 < x < 1/2
       = 0,  elsewhere

Let us find E[Y] and σ_Y².

E[Y] = ∫_{−1/2}^{1/2} cos(πx) dx = 2/π ≈ 0.6366

E[Y²] = ∫_{−1/2}^{1/2} cos²(πx) dx = 1/2

Hence σ_Y² = 1/2 − 4/π² ≈ 0.0947.

Example 2.20

Let X and Y have the joint PDF

f_{X,Y}(x,y) = x + y,  0 < x < 1, 0 < y < 1
             = 0,      elsewhere

Let us find a) E[XY²] and b) ρ_XY.

a) From Eq. 2.43,

E[XY²] = ∫∫ x y² f_{X,Y}(x,y) dx dy = ∫₀¹ ∫₀¹ x y² (x + y) dx dy = 17/72

b) To find ρ_XY, we require E[XY], E[X], E[Y], σ_X and σ_Y. We can easily show that

E[X] = E[Y] = 7/12,  E[XY] = 1/3,  σ_X² = σ_Y² = 11/144

Hence ρ_XY = (1/3 − 49/144)/(11/144) = −1/11.

Another statistical average that will be found useful in the study of communication theory is the conditional expectation. The quantity

E[g(X) | Y = y] = ∫ g(x) f_{X|Y}(x|y) dx   (2.50)

is called the conditional expectation of g(X), given Y = y. If g(X) = X, then we have the conditional mean, namely E[X|Y]:

E[X | Y = y] = E[X|Y] = ∫ x f_{X|Y}(x|y) dx   (2.51)

Similarly, we can define the conditional variance, etc. We shall illustrate the calculation of the conditional mean with the help of an example.

Example 2.21

Let the joint PDF of the random variables X and Y be

f_{X,Y}(x,y) = 1/x,  0 < y < x < 1
             = 0,    outside

Let us compute E[X|Y]. To find E[X|Y], we require the conditional PDF

f_{X|Y}(x|y) = f_{X,Y}(x,y) / f_Y(y)

f_Y(y) = ∫_y¹ (1/x) dx = − ln y,  0 < y < 1

f_{X|Y}(x|y) = (1/x) / (− ln y) = − 1/(x ln y),  y < x < 1

Hence

E[X|Y] = ∫_y¹ x [− 1/(x ln y)] dx = (y − 1)/ln y

Note that E[X | Y = y] is a function of y.
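The closed form (y − 1)/ln y can be checked by simulation. In the sketch below (an added illustration, NumPy assumed), we use the fact that for this joint PDF the marginal of X is uniform on (0, 1) and Y given X = x is uniform on (0, x); the conditioning value y₀ = 0.3 and the slab width are arbitrary choices:

    import numpy as np

    rng = np.random.default_rng(4)
    n = 2_000_000
    x = rng.uniform(0.0, 1.0, n)      # f_X(x) = 1 on (0, 1) for this joint PDF
    y = rng.uniform(0.0, x)           # Y | X = x is uniform on (0, x)

    y0, h = 0.3, 0.005                # condition on Y in a thin slab around y0
    sel = np.abs(y - y0) < h
    print(x[sel].mean())              # empirical E[X | Y ~= y0]
    print((y0 - 1) / np.log(y0))      # closed form, ~0.5814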

2.6 Some Useful Probability Models

In the concluding section of this chapter, we shall discuss certain probability distributions which are encountered quite often in the study of communication theory. We will begin our discussion with discrete random variables.

2.6.1 Discrete random variables

i) Binomial: Consider a random experiment in which our interest is in the occurrence or non-occurrence of an event A. That is, we are interested in the event A or its complement, say B. Let the experiment be repeated n times, let p be the probability of occurrence of A on each trial, and let the trials be independent. Let X denote the random variable 'number of occurrences of A in n trials'. X can be equal to 0, 1, 2, ..., n. If we can compute P[X = k], k = 0, 1, ..., n, then we can write f_X(x).

Taking a special case, let n = 5 and k = 3. The sample space (representing the outcomes of these five repetitions) has 32 sample points, say s₁, ..., s₃₂. (Each sample point is actually an element of the five-dimensional Cartesian product space.) The sample point s₁ could represent the sequence ABBBB. Sample points such as ABAAB, AAABB, etc. will map into the real number 3, as shown in Fig. 2.18.

Fig. 2.18: Binomial RV for the special case n = 5

Since the trials are independent,

P(ABAAB) = P(A)P(B)P(A)P(A)P(B) = p(1 − p)p²(1 − p) = p³(1 − p)²

There are C(5,3) = 10 sample points for which X(s) = 3, where C(n,k) denotes the binomial coefficient. In other words, for n = 5 and k = 3,

P[X = 3] = C(5,3) p³ (1 − p)²

Generalizing this to arbitrary n and k, we have the binomial density, given by

f_X(x) = Σ_{i=0}^{n} Pᵢ δ(x − i)   (2.52)

where Pᵢ = C(n,i) pⁱ (1 − p)^{n−i}.

As can be seen, f_X(x) ≥ 0 and

∫ f_X(x) dx = Σ_{i=0}^{n} Pᵢ = Σ_{i=0}^{n} C(n,i) pⁱ (1 − p)^{n−i} = [(1 − p) + p]ⁿ = 1

It is left as an exercise to show that E[X] = np and σ_X² = np(1 − p). (Though the formulae for the mean and the variance of a binomial PDF are simple, the algebra to derive them is laborious.) We write X is b(n, p) to indicate that X has a binomial PDF with the parameters n and p defined above.

The following example illustrates the use of the binomial PDF in a communication problem.

Example 2.22

A digital communication system transmits binary digits over a noisy channel in blocks of 16 digits. Assume that the probability of a binary digit being in error is 0.01, and that errors in various digit positions within a block are statistically independent.

i) Find the expected number of errors per block.
ii) Find the probability that the number of errors per block is greater than or equal to 3.

Let X be the random variable representing the number of errors per block. Then X is b(16, 0.01).

i) E[X] = np = 16 × 0.01 = 0.16

ii) P[X ≥ 3] = 1 − P[X ≤ 2] = 1 − Σ_{i=0}^{2} C(16,i) (0.01)ⁱ (0.99)^{16−i} ≈ 0.0005

Exercise 2.8
Show that, for Example 2.22, the Chebyshev inequality results in P[X ≥ 3] ≤ 0.0196. Note that the Chebyshev inequality is not very tight here.
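The numbers in Example 2.22 are easy to reproduce (an added sketch, standard library only):

    from math import comb

    n, p = 16, 0.01
    mean = n * p                                  # expected errors per block
    p_le_2 = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(3))
    print(mean, 1 - p_le_2)                       # 0.16 and ~5.1e-4

    # Chebyshev bound for comparison (Exercise 2.8):
    var = n * p * (1 - p)
    print(var / (3 - mean)**2)                    # ~0.0196, far looser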

ii) Poisson: A random variable X which takes on only integer values is Poisson distributed if

f_X(x) = Σ_{m=0}^{∞} δ(x − m) λᵐ e^{−λ} / m!   (2.53)

where λ is a positive constant.

Evidently f_X(x) ≥ 0, and ∫ f_X(x) dx = 1, because Σ_{m=0}^{∞} λᵐ/m! = e^λ.

We will now show that E[X] = σ_X² = λ. Since

e^λ = Σ_{m=0}^{∞} λᵐ/m!

we have

d(e^λ)/dλ = e^λ = Σ_{m=1}^{∞} m λ^{m−1}/m! = (1/λ) Σ_{m=1}^{∞} m λᵐ/m!

Hence

E[X] = Σ_{m=1}^{∞} m (λᵐ e^{−λ})/m! = λ e^λ e^{−λ} = λ

Differentiating the series again, we obtain E[X²] = λ² + λ. Therefore, σ_X² = E[X²] − λ² = λ.
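A short numerical check of the Poisson mean and variance (an added sketch; NumPy assumed, with λ = 2.5 and the truncation point as arbitrary choices):

    import numpy as np

    lam = 2.5
    terms = [np.exp(-lam)]
    for k in range(1, 60):                 # P[X = k] = P[X = k-1] * lam / k
        terms.append(terms[-1] * lam / k)
    pmf = np.array(terms)                  # truncated PMF; tail is negligible
    m = np.arange(len(pmf))

    mean = (m * pmf).sum()
    var = (m * m * pmf).sum() - mean**2
    print(pmf.sum(), mean, var)            # ~1.0, ~2.5, ~2.5 (E[X] = Var[X] = lam)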

2.6.2 Continuous random variables

i) Uniform: A random variable X is said to be uniformly distributed in the interval a ≤ x ≤ b if

f_X(x) = 1/(b − a),  a ≤ x ≤ b
       = 0,          elsewhere   (2.54)

A plot of f_X(x) is shown in Fig. 2.19.

Fig. 2.19: Uniform PDF

It is easy to show that

E[X] = (a + b)/2  and  σ_X² = (b − a)²/12

Note that the variance of the uniform PDF depends only on the width of the interval (b − a). Therefore, whether X is uniform in (−1, 1) or in (2, 4), it has the same variance, namely 1/3.

ii) Rayleigh: An RV X is said to be Rayleigh distributed if

f_X(x) = (x/b) exp(−x²/(2b)),  x ≥ 0
       = 0,                    elsewhere   (2.55)

where b is a positive constant. A typical sketch of the Rayleigh PDF is given in Fig. 2.20. (f_R(r) of Example 2.12 is a Rayleigh PDF.)

Fig. 2.20: Rayleigh PDF

The Rayleigh PDF frequently arises in radar and communication problems. We will encounter it later in the study of narrow-band noise processes.

Exercise 2.9

a) Let f_X(x) be as given in Eq. 2.55. Show that ∫₀^∞ f_X(x) dx = 1. (Hint: make the change of variable x² = z; then x dx = dz/2.)

b) Show that if X is Rayleigh distributed, then E[X] = √(πb/2) and E[X²] = 2b.

iii) Gaussian: By far the most widely used PDF in the context of communication theory is the Gaussian (also called normal) density, specified by

f_X(x) = [1/(√(2π) σ_X)] exp[−(x − m_X)²/(2σ_X²)],  −∞ < x < ∞   (2.56)

where m_X is the mean value and σ_X² the variance. That is, the Gaussian PDF is completely specified by the two parameters m_X and σ_X². We use the symbol X ~ N(m_X, σ_X²) to denote the Gaussian density. (In this notation, N(0, 1) denotes the Gaussian PDF with zero mean and unit variance. Note that if X is N(m_X, σ_X²), then Y = (X − m_X)/σ_X is N(0, 1).) In Appendix A2.3, we show that f_X(x) as given by Eq. 2.56 is a valid PDF.

As can be seen from Fig. 2.21, the Gaussian PDF is symmetrical with respect to m_X.

Fig. 2.21: Gaussian PDF

Hence

F_X(m_X) = ∫_{−∞}^{m_X} f_X(x) dx = 0.5

Consider P[X ≥ a]. We have

P[X ≥ a] = ∫_a^∞ [1/(√(2π) σ_X)] exp[−(x − m_X)²/(2σ_X²)] dx

This integral cannot be evaluated in closed form. By making the change of variable z = (x − m_X)/σ_X, we have

P[X ≥ a] = ∫_{(a − m_X)/σ_X}^{∞} (1/√(2π)) e^{−z²/2} dz = Q[(a − m_X)/σ_X]

where

Q(y) = ∫_y^∞ (1/√(2π)) exp(−x²/2) dx   (2.57)

Note that the integrand on the RHS of Eq. 2.57 is N(0, 1). A table of the Q( ) function is available in most textbooks on communication theory, as well as in standard mathematical tables. A small list is given in Appendix A2.2 at the end of the chapter.
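In software, Q( ) is usually computed from the closely related complementary error function (see Appendix A2.2). A minimal sketch, assuming only Python's standard math module:

    from math import erfc, sqrt

    def qfunc(y: float) -> float:
        """Gaussian tail probability Q(y) via the complementary error function."""
        return 0.5 * erfc(y / sqrt(2.0))

    # P[X >= a] for X ~ N(m_X, sigma_X^2) is Q((a - m_X) / sigma_X):
    m_x, sigma_x, a = 1.0, 2.0, 4.0
    print(qfunc((a - m_x) / sigma_x))   # Q(1.5) ~= 0.0668, matching the table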

The importance of the Gaussian density in communication theory is due to a theorem called the central limit theorem, which essentially states: if the RV X is the weighted sum of N independent random components, where each component makes only a small contribution to the sum, then F_X(x) approaches Gaussian as N becomes large, regardless of the distribution of the individual components. For a more precise statement and a thorough discussion of this theorem, you may refer to [1-3].

The electrical noise in a communication system is often due to the cumulative effects of a large number of randomly moving charged particles, each particle making an independent contribution of the same amount to the total. Hence, the instantaneous value of the noise can be fairly adequately modeled as a Gaussian variable. We shall develop Gaussian random processes in detail in Chapter 3 and, in Chapter 7, we shall make use of this theory in our studies on the noise performance of various modulation schemes.
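The tendency described by the central limit theorem is easy to see numerically. The sketch below (an added illustration; NumPy assumed, with N = 30 uniform components as an arbitrary choice) compares a tail probability of the standardized sum with the Gaussian prediction Q(2) ≈ 0.0228:

    import numpy as np

    rng = np.random.default_rng(5)
    N, trials = 30, 100_000

    # Sum of N iid uniforms (a decidedly non-Gaussian component), standardized.
    u = rng.uniform(0.0, 1.0, (trials, N))
    s = (u.sum(axis=1) - N * 0.5) / np.sqrt(N / 12.0)

    print(np.mean(s >= 2.0))   # ~0.022, close to Q(2) ~= 0.0228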

V.Y ( x . ⎧ 1 ⎡ ( x − m X )2 ( y − mY )2 ( x − mX ) ( y − mY ) ⎤⎫ exp ⎨− fX . x → − ∞ and as y → ∞ .58) where. we will consider the bivariate Gaussian PDF. ln m = α or m = e α iv) Bivariate Gaussian As an example of a two dimensional density. y < ∞ given by. Y ( x . k1 = 2 π σ X σY 1 − ρ2 2. ∞ ) is 1.Principles of Communication Prof. β2 ) b) Y = E ⎡e X ⎤ = ⎣ ⎦ ⎡ − ( x − α) 2 1 x ⎢ e e 2β ∫ 2π β − ∞ ⎢ ⎣ ∞ 2 2 ⎤ ⎥dx ⎥ ⎦ = e β2 α+ 2 ⎡ x − ( α + β2 ) ⎤ ⎡∞ ⎤ ⎦ − ⎣ 1 ⎢ ⎥ 2 β2 e d x⎥ ⎢∫ ⎢− ∞ 2 π β ⎥ ⎣ ⎦ As the bracketed quantity being the integral of a Gaussian PDF between the limits ( − ∞ .5 That is. we have Y = e α + β2 2 c) P [Y ≤ m ] = P [ X ≤ ln m ] Hence if P [Y ≤ m ] = 0. exp ⎢− 2 β2 ⎥ 2π β ⎢ ⎣ ⎦ 1 −∞ < x < ∞ Note that X is N ( α . x → ∞ Hence f X ( x ) = ⎡ ( x − α )2 ⎤ ⎥. then P [ X ≤ ln m ] = 0. y ) = + −2ρ ⎢ ⎥⎬ 2 2 k1 σX σY σ X σY ⎩ k2 ⎣ ⎦⎭ 1 (2.5 . Venkata Rao dx 1 1 = → J = dy y y Also as y → 0 . y ) . − ∞ < x .68 Indian Institute of Technology Madras . f X .

iv) Bivariate Gaussian: As an example of a two-dimensional density, we will consider the bivariate Gaussian PDF, f_{X,Y}(x,y), −∞ < x, y < ∞, given by

f_{X,Y}(x,y) = (1/k₁) exp{−(1/k₂)[(x − m_X)²/σ_X² − 2ρ(x − m_X)(y − m_Y)/(σ_X σ_Y) + (y − m_Y)²/σ_Y²]}   (2.58)

where

k₁ = 2π σ_X σ_Y √(1 − ρ²)
k₂ = 2(1 − ρ²)
ρ  = correlation coefficient between X and Y

The following properties of the bivariate Gaussian density can be verified:

P1) If X and Y are jointly Gaussian, then the marginal density of X or Y is Gaussian; that is, X is N(m_X, σ_X²) and Y is N(m_Y, σ_Y²). (Note that the converse is not necessarily true: the marginals f_X and f_Y obtained from f_{X,Y} may each be Gaussian without f_{X,Y} being jointly Gaussian, unless X and Y are independent. We can construct examples of a joint PDF f_{X,Y} which is not Gaussian but results in f_X and f_Y that are Gaussian.)

P2) f_{X,Y}(x,y) = f_X(x) f_Y(y) iff ρ = 0. That is, if the Gaussian variables are uncorrelated, then they are independent. This is not true, in general, for non-Gaussian variables (we have already seen an example of this in Sec. 2.5.2).

P3) If Z = αX + βY, where α and β are constants and X and Y are jointly Gaussian, then Z is Gaussian. Therefore, f_Z(z) can be written down after computing m_Z and σ_Z² with the help of the formulae given in Sec. 2.5.

Figure 2.22 gives the plot of a bivariate Gaussian PDF for the case ρ = 0 and σ_X = σ_Y.

V.Y resembles a (temple) bell.Principles of Communication Prof.23. Let us find the probability of 2 (X. Venkata Rao Fig. negative.2. If ρ > 0 . 2. positive and (ii) ρ . imagine the bell being compressed along the X = − Y axis so that it elongates along the X = Y axis. Example 2.Y) lying in the shaded region D shown in Fig. with.70 Indian Institute of Technology Madras .22: Bivariate Gaussian PDF ( σ X = σY and ρ = 0 ) For ρ = 0 and σ X = σY . Similarly for ρ < 0. the striker missing! For ρ ≠ 0 . f X .24 2 Let X and Y be jointly Gaussian with X = − Y = 1 . of course. we have two cases (i) ρ . 2. σ2 = σY = 1 and X ρX Y = − 1 .

Fig. 2.23: The region D of Example 2.24

Let A be the shaded region shown in Fig. 2.24(a), and let B be the shaded region in Fig. 2.24(b).

Fig. 2.24: (a) Region A and (b) Region B used to obtain region D

The required probability is P[(x, y) ∈ A] − P[(x, y) ∈ B]. For the region A we have y ≥ −x/2 + 1, and for the region B we have y ≥ −x/2 + 2. Hence the required probability is

P[Y + X/2 ≥ 1] − P[Y + X/2 ≥ 2]

Let Z = Y + X/2. Then Z is Gaussian with the parameters

E[Z] = E[Y] + E[X]/2 = −1 + 1/2 = −1/2
σ_Z² = (1/4)σ_X² + σ_Y² + 2·(1/2)·ρ_XY σ_X σ_Y = 1/4 + 1 − 2·(1/2)·(1/2) = 3/4

That is, Z is N(−1/2, 3/4). Then W = (Z + 1/2)/(√3/2) is N(0, 1), and

P[Z ≥ 1] = P[W ≥ √3]
P[Z ≥ 2] = P[W ≥ 5/√3]

Hence the required probability is

Q(√3) − Q(5/√3) ≈ 0.04 − 0.001 ≈ 0.039
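The answer can be cross-checked both against the Q( ) function and by direct simulation of the jointly Gaussian pair (an added sketch; NumPy and the standard math module assumed):

    import numpy as np
    from math import erfc, sqrt

    def qfunc(y):
        return 0.5 * erfc(y / sqrt(2.0))

    # Closed form from the example: Q(sqrt(3)) - Q(5/sqrt(3))
    print(qfunc(sqrt(3.0)) - qfunc(5.0 / sqrt(3.0)))   # ~0.0397

    # Monte Carlo cross-check with the stated joint statistics.
    rng = np.random.default_rng(7)
    mean = [1.0, -1.0]
    cov = [[1.0, -0.5], [-0.5, 1.0]]
    x, y = rng.multivariate_normal(mean, cov, 2_000_000).T
    z = y + x / 2.0
    print(np.mean((z >= 1.0) & (z < 2.0)))             # ~0.0397 again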

Exercise 2.10

X and Y are independent, identically distributed (iid) random variables, each being N(0, 1). Find the probability of (X, Y) lying in the region A shown in Fig. 2.25.

Fig. 2.25: Region A of Exercise 2.10

Note: It would be easier to calculate this kind of probability if the space were a product space. From Example 2.12, we feel that if we transform (X, Y) into (Z, W) such that Z = X + Y and W = X − Y, then the transformed region B would be a square. Find f_{Z,W}(z, w) and compute the probability that (Z, W) ∈ B.

Exercise 2.11

Two random variables X and Y are obtained by means of the transformation given below:

X = (−2 ln U₁)^{1/2} cos(2π U₂)   (2.59a)
Y = (−2 ln U₁)^{1/2} sin(2π U₂)   (2.59b)

U₁ and U₂ are independent random variables, uniformly distributed in the range 0 < u₁, u₂ < 1. Show that X and Y are independent and each is N(0, 1).

Hint: Let X₁ = −2 ln U₁ and Y₁ = √X₁. Show that Y₁ is Rayleigh. Find f_{X,Y}(x, y) using X = Y₁ cos Θ and Y = Y₁ sin Θ, where Θ = 2π U₂.

Note: The transformation given by Eq. 2.59 is called the Box-Muller transformation, and can be used to generate two Gaussian random number sequences from two independent, uniformly distributed (in the range 0 to 1) sequences.
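For readers who want to try the Box-Muller recipe directly, here is a minimal sketch (an editorial addition, assuming NumPy); Eq. 2.59 is applied verbatim, and the sample mean and variance come out close to 0 and 1:

    import numpy as np

    def box_muller(n: int, rng: np.random.Generator) -> np.ndarray:
        """Generate 2n N(0,1) samples from 2n uniform samples via Eq. 2.59."""
        u1 = rng.uniform(size=n)
        u2 = rng.uniform(size=n)
        r = np.sqrt(-2.0 * np.log(u1))      # Rayleigh-distributed radius
        x = r * np.cos(2.0 * np.pi * u2)
        y = r * np.sin(2.0 * np.pi * u2)
        return np.concatenate([x, y])

    rng = np.random.default_rng(8)
    z = box_muller(500_000, rng)
    print(z.mean(), z.var())                # ~0 and ~1, as expected for N(0, 1)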

Appendix A2.1
Proof of Eq. 2.34

The proof of Eq. 2.34 depends on establishing a relationship between the differential area dz dw in the z-w plane and the differential area dx dy in the x-y plane. We know that

f_{Z,W}(z, w) dz dw = P[z < Z ≤ z + dz, w < W ≤ w + dw]

If we can find dx dy such that

f_{Z,W}(z, w) dz dw = f_{X,Y}(x, y) dx dy

then f_{Z,W} can be found. (Note that the variables x and y can be replaced by their inverse transformation quantities, namely x = g⁻¹(z, w) and y = h⁻¹(z, w).)

Let the transformation be one-to-one. (This can be generalized to the case of one-to-many.) Consider the mapping shown in Fig. A2.1.

Fig. A2.1: A typical transformation between the x-y plane and the z-w plane

The infinitesimal rectangle ABCD in the z-w plane is mapped into a parallelogram in the x-y plane. (We may assume that the vertex A transforms to A', B to B', etc.) We shall now find the relation between the differential area of the rectangle and the differential area of the parallelogram.

Consider the parallelogram shown in Fig. A2.2, with vertices P₁, P₂, P₃ and P₄.

Fig. A2.2: Typical parallelogram

Let (x, y) be the coordinates of P₁. Then P₂ and P₃ are given by

P₂ = (x + (∂g⁻¹/∂z) dz, y + (∂h⁻¹/∂z) dz) = (x + (∂x/∂z) dz, y + (∂y/∂z) dz)
P₃ = (x + (∂x/∂w) dw, y + (∂y/∂w) dw)

Consider the vectors V₁ and V₂ shown in Fig. A2.2, where V₁ = P₂ − P₁ and V₂ = P₃ − P₁. That is,

V₁ = (∂x/∂z) dz i + (∂y/∂z) dz j
V₂ = (∂x/∂w) dw i + (∂y/∂w) dw j

where i and j are the unit vectors in the appropriate directions. Then the area A of the parallelogram is

A = |V₁ × V₂|

As i × i = 0, j × j = 0 and i × j = −(j × i) = k, where k is the unit vector perpendicular to both i and j, we have

V₁ × V₂ = [(∂x/∂z)(∂y/∂w) − (∂y/∂z)(∂x/∂w)] dz dw k

That is,

A = |J((x, y)/(z, w))| dz dw

where J((x, y)/(z, w)) is the Jacobian of the transformation from (z, w) to (x, y). Hence

f_{Z,W}(z, w) = f_{X,Y}(x, y) |J((x, y)/(z, w))| = f_{X,Y}(x, y) / |J((z, w)/(x, y))|

which is Eq. 2.34.
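As a concrete check of this result (an added sketch, assuming NumPy), take Z = X + Y and W = X − Y with X and Y iid N(0, 1). Here |J((x, y)/(z, w))| = 1/2, and the density produced by Eq. 2.34 agrees with the known answer, since Z and W are then independent N(0, 2):

    import numpy as np

    # Transformation Z = X + Y, W = X - Y with X, Y iid N(0, 1).
    # Inverse: x = (z + w)/2, y = (z - w)/2, and |J((x,y)/(z,w))| = 1/2.
    def f_xy(x, y):
        return np.exp(-(x**2 + y**2) / 2.0) / (2.0 * np.pi)

    def f_zw(z, w):
        return f_xy((z + w) / 2.0, (z - w) / 2.0) * 0.5   # Eq. 2.34

    # Z and W are in fact independent N(0, 2), so compare with that density:
    z, w = 0.7, -1.2
    direct = np.exp(-(z**2 + w**2) / 4.0) / (4.0 * np.pi)
    print(f_zw(z, w), direct)    # identical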

Appendix A2.2
Q( ) Function Table

Q(α) = ∫_α^∞ (1/√(2π)) e^{−x²/2} dx

It is sufficient if we know Q(α) for α ≥ 0, because Q(−α) = 1 − Q(α). Note that Q(0) = 0.5.

  y     Q(y)      y     Q(y)      y     Q(y)        y      Q(y)
 0.05  0.4801    1.05  0.1469    2.10  0.0179      3.10   10⁻³
 0.10  0.4602    1.10  0.1357    2.20  0.0139      3.28   (1/2)·10⁻³
 0.15  0.4405    1.15  0.1251    2.30  0.0107      3.70   10⁻⁴
 0.20  0.4207    1.20  0.1151    2.40  0.0082      3.90   (1/2)·10⁻⁴
 0.25  0.4013    1.25  0.1056    2.50  0.0062      4.27   10⁻⁵
 0.30  0.3821    1.30  0.0968    2.60  0.0047      4.78   10⁻⁶
 0.35  0.3632    1.35  0.0885    2.70  0.0035
 0.40  0.3446    1.40  0.0808    2.80  0.0026
 0.45  0.3264    1.45  0.0735    2.90  0.0019
 0.50  0.3085    1.50  0.0668    3.00  0.0013
 0.55  0.2912    1.55  0.0606    3.10  0.0010
 0.60  0.2743    1.60  0.0548    3.20  0.00069
 0.65  0.2578    1.65  0.0495    3.30  0.00048
 0.70  0.2420    1.70  0.0446    3.40  0.00034
 0.75  0.2266    1.75  0.0401    3.50  0.00023
 0.80  0.2119    1.80  0.0359    3.60  0.00016
 0.85  0.1977    1.85  0.0322    3.70  0.00010
 0.90  0.1841    1.90  0.0287    3.80  0.00007
 0.95  0.1711    1.95  0.0256    3.90  0.00005
 1.00  0.1587    2.00  0.0228    4.00  0.00003

V. erf ( α ) = Hence Q ( α ) = 2 π 2 ∫e α 0 − β2 dβ π ∫e α − β2 dβ 1 ⎛ α ⎞ erfc ⎜ ⎟. Venkata Rao Note that some authors use erfc ( given by ) . the complementary error function which is ∞ erfc ( α ) = 1 − erf ( α ) = and the error function.79 Indian Institute of Technology Madras . 2 ⎝ 2⎠ 2.Principles of Communication Prof.

Appendix A2.3
Proof that N(m_X, σ_X²) is a valid PDF

We will show that f_X(x), as given by Eq. 2.56, is a valid PDF by establishing ∫ f_X(x) dx = 1. (Note that f_X(x) ≥ 0 for −∞ < x < ∞.)

Let v = (x − m_X)/σ_X; then dv = dx/σ_X.   (A2.3.1)

With this change of variable, it remains to show that

(1/√(2π)) ∫_{−∞}^{∞} e^{−v²/2} dv = 1   (A2.3.2)

Let

I = ∫_{−∞}^{∞} e^{−v²/2} dv = ∫_{−∞}^{∞} e^{−y²/2} dy   (A2.3.3)

Then

I² = [∫ e^{−v²/2} dv][∫ e^{−y²/2} dy] = ∫∫ e^{−(v² + y²)/2} dv dy

Let v = r cos θ and y = r sin θ (Cartesian-to-polar coordinate transformation). Then r² = v² + y², θ = tan⁻¹(y/v), and dv dy = r dr dθ. Then

I² = ∫₀^{2π} ∫₀^∞ e^{−r²/2} r dr dθ = 2π,  or  I = √(2π)

Using Eq. A2.3.2 and Eq. A2.3.3 in Eq. A2.3.1, we have the required result.
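A one-line numerical confirmation of the same fact (an added sketch, assuming SciPy is available; the parameter values are arbitrary):

    from math import exp, pi, sqrt
    from scipy.integrate import quad

    m_x, sigma_x = 1.5, 0.7

    def f_x(x):
        # Gaussian PDF of Eq. 2.56
        return exp(-((x - m_x)**2) / (2 * sigma_x**2)) / (sqrt(2 * pi) * sigma_x)

    area, err = quad(f_x, float('-inf'), float('inf'))
    print(area)   # 1.0 to quadrature accuracy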

References

1) Papoulis, A., 'Probability, Random Variables and Stochastic Processes', McGraw Hill (3rd edition), 1991.
2) Wozencraft, J. M. and Jacobs, I. M., 'Principles of Communication Engineering', John Wiley, 1965.
3) Hogg, R. V. and Craig, A. T., 'Introduction to Mathematical Statistics', The Macmillan Co. (2nd edition), 1965.
