
**Department of Electrical Engineering**

**EE 325 Probability and Random Processes**

Handout 12: Lecture Notes 8

August 25, 2014

Discrete Random Variables

We now study an important special case of random variables: the ones that take values in a discrete (countable) set. One good thing about these variables is that there is less ambiguity, and pathological cases are very rare.

Definition 1 Consider a countable set E. A function X from Ω to E is called a discrete random variable if {ω : X(ω) = n} ∈ F, ∀n ∈ E.

In other words, the definition ensures that X is measurable with respect to the space (E, P(E)). (Recall that we only have to check measurability for a class of sets which generates the sigma-field of the range-space.) Don't be confused by the absence of the conventional (R, B(R)); it is indeed true that for any B ∈ B(R),

X⁻¹(B ∩ E) ∈ F,

thus ensuring measurability with respect to (R, B(R)). So, by a slight abuse of the definition, we will define various random quantities without explicitly stating measurability with respect to (R, B(R)). For example, we will call X above a random variable taking values in E. We will use the notation {X = x} to signify X⁻¹(x) = {ω : X(ω) = x}. Consequently, P(X = x) = P(X⁻¹(x)), where P is the probability measure associated with the space (Ω, F). In this way, we avoid referring to the probability measure induced by the random variable X, and conveniently work with one probability measure, i.e. P(A), ∀A ∈ F.

Proposition 1 A discrete random variable X is completely specified by its probability distribution function, denoted P(X = x), ∀x ∈ E, or P_X(x).
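Proposition 1 says that the numbers P(X = x) pin the random variable down completely. As a minimal sketch (plain Python; the dictionary representation and the helper `prob` are illustrative choices, not from the notes), a distribution on a countable set can be stored as a mapping, and any event probability recovered from it:

```python
# A discrete distribution on a countable set E, stored as {x: P(X = x)}.
pmf = {0: 0.25, 1: 0.5, 2: 0.25}

# Sanity checks: probabilities are non-negative and sum to 1.
assert all(p >= 0 for p in pmf.values())
assert abs(sum(pmf.values()) - 1.0) < 1e-12

def prob(pmf, A):
    """P(X ∈ A) = Σ_{x ∈ A} P(X = x), for an event A given as a set of points."""
    return sum(p for x, p in pmf.items() if x in set(A))

print(prob(pmf, {1, 2}))  # 0.75
```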

Our first example of a discrete random variable is perhaps the most popular one.

1.1 Bernoulli Distribution: Bernoulli(p)

A binary-valued random variable X with

P(X = 1) = p, P(X = 0) = 1 − p,

is called a Bernoulli random variable with parameter p. This is often denoted concisely as X ∼ Bernoulli(p).
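A Bernoulli(p) draw is easy to simulate. The sketch below (plain Python's `random` module; the seed and sample size are arbitrary choices, not from the notes) also checks that the empirical fraction of 1s is close to p:

```python
import random

def bernoulli(p, rng):
    """One draw of X ~ Bernoulli(p): returns 1 with probability p, else 0."""
    return 1 if rng.random() < p else 0

# Frequency check (an illustration): the fraction of 1s over many iid
# draws should approach p.
rng = random.Random(0)  # fixed seed so the sketch is reproducible
p, n = 0.3, 100_000
draws = [bernoulli(p, rng) for _ in range(n)]
print(sum(draws) / n)  # close to p = 0.3
```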

1.2 Function of a Discrete Random Variable

Functions defined on random variables allow us to zoom in and out on various aspects of randomness.

Example 1 Let X be a random variable taking values in E_1 = {1, 2, 3, 4}, and consider the function g(i) = tan(πi/8). Is Y = g(X) a random variable?

Solution: If we stick to our original definition, then Y cannot be called a random variable, as its value is undefined when i = 4. However, observe that we can still ensure measurability of Y as a function by taking its value at i = 4 to be ∞ (consistent with tan⁻¹(∞) = π/2). In particular, we can also consider R̄-valued random variables, where R̄ = R ∪ {±∞} is known as the two-point compactification of R. Since measurability is the key property that we need, there is no harm in using R̄ or N as the random variable's range. Thus Y above can be treated as a random variable, while preserving all the measurability properties. As the last example shows, whenever we talk about a function of a random variable without any further qualification, we are dealing with an R̄-valued random variable.

Let us now prove something which will be of immediate use and is simple.

Theorem 1 Let E_1 and E_2 be countable sets, let g : E_1 → E_2 be an arbitrary function, and let X be a random variable taking values in E_1. Then Y = g(X) is a random variable.

Proof: We have to show that {Y = y} ∈ F. Observe that

{Y = y} = {ω ∈ Ω : X(ω) = x and g(x) = y} = ⋃_{x ∈ E_1 : g(x) = y} {X = x}.

Clearly the RHS is a union of events which are already in F, and hence the proof.

Proposition 2 If X_1 and X_2 are discrete random variables, so is X_1 + X_2.

The proposition deals with a function of two random variables and will be covered in detail in later sections.
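Theorem 1's proof computes P(Y = y) by summing P(X = x) over all x with g(x) = y. A sketch of that pushforward computation (plain Python; the pmf and the choice g(x) = x mod 2 are hypothetical examples, not from the notes):

```python
from collections import defaultdict

def push_forward(pmf_x, g):
    """pmf of Y = g(X): P(Y = y) = Σ_{x: g(x) = y} P(X = x)."""
    pmf_y = defaultdict(float)
    for x, p in pmf_x.items():
        pmf_y[g(x)] += p  # each x contributes its mass to the event {Y = g(x)}
    return dict(pmf_y)

# Hypothetical X uniform on {1, 2, 3, 4}, mapped through g(x) = x mod 2.
pmf_x = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}
print(push_forward(pmf_x, lambda x: x % 2))  # {1: 0.5, 0: 0.5}
```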

2 Expectation

Expectation of a random variable is a key parameter, which has close connections to empirical averages of experimental measurements. It is also known as the statistical mean or average. Clearly it is a statistical parameter, governed by the distribution of the random variable, something that we can infer from past experiments or modeling experience. Let us get an intuitive understanding by using the classical frequency interpretation; its rigorous justification will have to wait for the next notes, so please accept the following and proceed: the fraction of times any event A occurs in n independent trials of an experiment can be approximated by P(A) for large n. While this may irk the mathematical purists, it saves the notes from being just a monotonous listing of results.

Consider repeated draws of a random variable X from the state-space E in an iid fashion, and denote the n outcomes as X_1, X_2, ⋯, X_n. What is the empirical average of these quantities? For a given value of n,

(1/n) ∑_{i=1}^n X_i = (1/n) ∑_{i=1}^n ( ∑_{j∈E} j 1{X_i = j} ) = ∑_{j∈E} j ( (1/n) ∑_{i=1}^n 1{X_i = j} ) ≈ ∑_{j∈E} j P(X = j),

where the last step used the frequency interpretation. This is the concept behind the operator called expectation, which we more formally define below.

Definition 2 Let X be a discrete random variable in E, and let g : E → R be such that EITHER (1) g is non-negative, OR (2) ∑_{x∈E} |g(x)| P(X = x) < ∞. Then the expectation of the random variable g(X) is

E[g(X)] := ∑_{x∈E} g(x) P(X = x).    (1)

It is common to call g(X) an integrable random variable when E[|g(X)|] < ∞, whereas a more accurate nomenclature is to call it summable. Clearly, expectation is a linear operator:

E[λ_1 g_1(X) + λ_2 g_2(X)] = λ_1 E[g_1(X)] + λ_2 E[g_2(X)].

Furthermore, the following monotonicity property is also evident: if g_1(x) ≤ g_2(x), ∀x ∈ E, then E[g_1(X)] ≤ E[g_2(X)].

Example 2 Let S_n represent the number of HEADs in n tosses of a fair coin, the probability law chosen for this experiment being P(S_n = k) = C(n, k)/2^n. Find the expectation of S_n.

Solution: Let X_i ∈ {H, T} be the outcome of toss i. Observe that

S_n = ∑_{i=1}^n 1{X_i = H}.

Thus

E[S_n] = ∑_{i=1}^n E[1{X_i = H}] = ∑_{i=1}^n 1/2 = n/2.
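Example 2's answer can be cross-checked against Definition 2 directly: summing k · P(S_n = k) with P(S_n = k) = C(n, k)/2^n should give n/2. A sketch (plain Python, using `math.comb`; the choice n = 10 is arbitrary):

```python
from math import comb

def expectation(pmf):
    """Definition 2: E[X] = Σ_x x · P(X = x), for a pmf given as a dict."""
    return sum(x * p for x, p in pmf.items())

n = 10
# pmf of S_n for a fair coin: P(S_n = k) = C(n, k) / 2^n.
pmf_Sn = {k: comb(n, k) / 2**n for k in range(n + 1)}
assert abs(sum(pmf_Sn.values()) - 1.0) < 1e-12
print(expectation(pmf_Sn))  # 5.0, i.e. n/2
```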

In the last example, we used the indicator function to seamlessly move between the spaces {H, T} and {0, 1}. In fact this can be done in a more general fashion. Recall that Y = g(X) is a random variable if X is a random variable.

Theorem 2 Let g(X) be an integrable random variable and take Y = g(X). Then

E[Y] = E[g(X)].    (2)

Proof: At first look this seems too obvious, but observe that E[Y] = ∑_y y P(Y = y), whereas E[g(X)] = ∑_x g(x) P(X = x). What guarantees that they are the same? Let X : Ω → E_1 and g : E_1 → E_2, where E_2 is a countable subset of R. Then

∑_{y∈E_2} y P(Y = y) = ∑_{y∈E_2} y ∑_{x : g(x) = y} P(X = x)
  = ∑_{y∈E_2} y ∑_{x∈E_1} P(X = x) 1{g(x) = y}
  = ∑_{x∈E_1} P(X = x) ∑_{y∈E_2} y 1{g(x) = y}
  = ∑_{x∈E_1} P(X = x) g(x)
  = E[g(X)].

The integrability assumption allows us to seamlessly interchange the two summations in the second step above.

2.1 Probability is an Expectation

The last example contains an additional lesson: the probability of an event is nothing but an appropriate expectation. No wonder that both these quantities are specified by the distribution of a random variable. Notice that Y = 1{X∈A} is indeed a random variable.

Theorem 3 Consider a discrete random variable X in E ⊂ R̄. Then for any A ∈ B(R),

E[1{X∈A}] = P(A).    (3)

Proof: Clearly P(A) here is the probability induced by the random variable X. By using Theorem 2,

E[1{X∈A}] = ∑_x 1{x ∈ A} P(X = x) = ∑_{x∈A} P(X = x) = P(A).
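Theorems 2 and 3 are easy to verify numerically: E[g(X)] computed over the range of X matches E[Y] computed over the range of Y, and the expectation of an indicator equals a probability. A sketch with a hypothetical pmf and g(x) = x² (both illustrative choices, not from the notes):

```python
from collections import defaultdict

# A hypothetical distribution for X, and g(x) = x^2.
pmf_x = {-2: 0.2, -1: 0.3, 1: 0.3, 2: 0.2}

def g(x):
    return x * x

# E[Y] over the range of Y, with P(Y = y) = Σ_{x: g(x)=y} P(X = x).
pmf_y = defaultdict(float)
for x, p in pmf_x.items():
    pmf_y[g(x)] += p
E_Y = sum(y * p for y, p in pmf_y.items())

# E[g(X)] over the range of X, as in Definition 2.
E_gX = sum(g(x) * p for x, p in pmf_x.items())
assert abs(E_Y - E_gX) < 1e-12          # Theorem 2

# Theorem 3: the expectation of an indicator is a probability (A = {1, 2}).
A = {1, 2}
E_ind = sum((x in A) * p for x, p in pmf_x.items())
P_A = sum(p for x, p in pmf_x.items() if x in A)
assert E_ind == P_A
print(E_Y, E_gX, E_ind)
```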

Example 3 (Coupon Collector) Let there be m categories of coupons available. Consider n boxes, with one coupon packed per box, the coupon for each box being chosen at random. Let S_n represent the number of coupon categories which did not find their way into any of the boxes. Find E[S_n].

Solution: Let X_i be the number of boxes with coupon i. Then

S_n = ∑_{i=1}^m 1{X_i = 0}.

Taking expectations,

E[S_n] = ∑_{i=1}^m E[1{X_i = 0}] = ∑_{i=1}^m (1 − 1/m)^n = m (1 − 1/m)^n.

3 Independent Random Variables

We have defined independence of events in terms of the probability of events. Similarly, we can define independence of two random variables by demanding that the events corresponding to each random variable be independent. Recall that for a discrete random variable X, the collection of sets of the form X⁻¹(x), x ∈ E, is called the events corresponding to X.

Definition 3 Two discrete random variables X_1 and X_2 taking respective values in E_1 and E_2 are said to be independent if

P(X_1⁻¹(u), X_2⁻¹(v)) = P(X_1⁻¹(u)) P(X_2⁻¹(v)), ∀(u, v) ∈ E_1 × E_2,

i.e. any pair of their respective associated events are independent. The standard textbook variety of this same statement is repeated below for more clarity.

Definition 4 Random variables X_1 and X_2 taking values in E_1 and E_2 respectively are independent if ∀(i, j) ∈ E_1 × E_2,

P(X_1 = i, X_2 = j) = P(X_1 = i) P(X_2 = j).

The last two definitions are indeed the same.

Example 4 Consider throwing 2 fair dice, where the probability association is given by P(X_1 = i, X_2 = j) = 1/36 for 1 ≤ i, j ≤ 6. Does this correspond to independent trials?

Solution: Verify that

P(X_1 = i, X_2 = j) = 1/36 = P(X_1 = i) P(X_2 = j),

since we assumed that the dice are fair. Thus this probability assignment indeed models an experiment where a fair die is thrown two times, or two fair dice are thrown independently.

We can extend the notion of independence to many random variables.

Definition 5 The discrete random variables X_1, X_2, ⋯, X_n are called independent if all possible tuples of the respective events associated with them are independent. (Recall the definition of independence of many events in class.)
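Definition 4 can be checked mechanically for the two-dice assignment of Example 4: sum out the joint pmf to get the marginals, then test whether the joint factorizes everywhere. A sketch (plain Python; the tolerance is a floating-point convenience):

```python
from itertools import product

# Joint pmf of Example 4: P(X1 = i, X2 = j) = 1/36 on {1,…,6}^2.
joint = {(i, j): 1 / 36 for i, j in product(range(1, 7), repeat=2)}

# Marginals: P(X1 = i) = Σ_j P(X1 = i, X2 = j), and symmetrically for X2.
p1 = {i: sum(joint[(i, j)] for j in range(1, 7)) for i in range(1, 7)}
p2 = {j: sum(joint[(i, j)] for i in range(1, 7)) for j in range(1, 7)}

# Definition 4: independence <=> the joint factorizes into the marginals.
independent = all(abs(joint[(i, j)] - p1[i] * p2[j]) < 1e-12
                  for i, j in joint)
print(independent)  # True
```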

Exercise 1 Show that X_1, ⋯, X_n are independent random variables iff

P(⋂_{i=1}^n X_i⁻¹(u_i)) = ∏_{i=1}^n P(X_i⁻¹(u_i)), ∀u_i ∈ E_i.

We can even extend our definition to a countable collection of random variables. In particular, independence of a sequence is defined in terms of finite-dimensional probabilities.

Definition 6 A sequence of discrete random variables X_n, n ≥ 1, is called independent if for every finite set of indices i_1, ⋯, i_k,

P(X_{i_1} = x_{i_1}, ⋯, X_{i_k} = x_{i_k}) = ∏_{j=1}^k P(X_{i_j} = x_{i_j}).

Example 5 Consider a discrete random variable taking values in {1, ⋯, k} with P(X = i) = p_i, 1 ≤ i ≤ k. For n draws of this random variable, say X̄ = (X_1, ⋯, X_n), and any given sequence ū = (u_1, ⋯, u_n), it is specified that

P(X̄ = ū) = ∏_{i=1}^k p_i^{N_i(ū)},

where N_i(ū) counts the number of times i appears in the sequence ū. Are the draws in X̄ independent?

Solution: It is easily verified by our definition that these draws are independent.

Notice that in the previous example, N_i(X̄) is a random variable too. When X is binary, the random variable N_1(X̄) has the Binomial distribution, described next.

3.1 Binomial Distribution: Binomial(n, p)

Like the name implies, this is closely related to the binomial expansion. Recall that

(p + q)^n = ∑_{i=0}^n C(n, i) p^i q^{n−i}.

How does this connect to tossing a coin? Imagine a coin with P(HEADs) = p, and let S_n represent the number of HEADs in n independent throws of this coin. Note that S_n is a discrete random variable; this is evident since it is a sum of binary-valued random variables. To see the connection, let us denote by X_i the indicator of HEADs in toss i; then X_1, ⋯, X_n are independent random variables. For any sequence x_1, ⋯, x_n with k HEADs,

P(X_1 = x_1, ⋯, X_n = x_n) = p^k (1 − p)^{n−k}.

Since there are C(n, k) sequences with k HEADs,

P(S_n = k) = C(n, k) p^k (1 − p)^{n−k}.

We will denote a random variable having the Binomial distribution as X ∼ Binomial(n, p); thus S_n ∼ Binomial(n, p).

Example 6 Compute E[X] for X ∼ Binomial(n, p).
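For Example 6, the expectation of a Binomial(n, p) random variable can be computed straight from Definition 2; writing S_n as a sum of n Bernoulli(p) indicators and using linearity gives E[S_n] = np, a standard fact stated here since the notes break off before the solution. A numerical sketch (plain Python; n = 12, p = 0.3 are arbitrary choices):

```python
from math import comb

def binomial_pmf(n, p):
    """P(Sn = k) = C(n, k) p^k (1-p)^(n-k), for k = 0, …, n."""
    return {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}

n, p = 12, 0.3
pmf = binomial_pmf(n, p)
assert abs(sum(pmf.values()) - 1.0) < 1e-12   # (p + q)^n = 1 with q = 1 - p
mean = sum(k * pk for k, pk in pmf.items())   # Definition 2
print(mean)  # equals n * p = 3.6 up to rounding
```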
