
The set of all possible outcomes of an experiment, the sample space, is denoted by Ω. A random variable is a function X : Ω → ℜ, where ℜ is the real line. Uppercase letters will be used to represent generic random variables, while lowercase letters will represent the possible numerical values of these variables.

To describe the probabilities of the possible values of X, consider the following definition. The distribution function of a random variable X is the function FX : ℜ → [0, 1] given by FX(x) = P(X ≤ x). Every distribution function satisfies FX(−∞) = 0, FX(∞) = 1 and FX(a) ≤ FX(b) for all a ≤ b.

Discrete random variables

The random variable X is discrete if it takes values only in some countable subset {x1, x2, …} of ℜ. The distribution function of such a random variable has jump discontinuities at the values x1, x2, … and is constant in between. The function given by fX(x) = P(X = x) is called the (probability) mass function of X. The mean value, or expectation, or expected value of X with mass function fX, is defined to be

E(X) = Σ x⋅fX(x), (1)

where the sum is over all possible values x of X. The expected value of X is often written as µ. It is often of great interest to measure the extent to which a random variable X is dispersed. The variance of X, Var(X), is defined as follows:

Var(X) = E((X − µ)²). (2)

The variance of X is often written as σ², while its positive square root σ is called the standard deviation. Since X is discrete, (2) can be re-expressed accordingly:
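As a quick numerical illustration of definitions (1) and (2), the snippet below computes the mean and variance of a discrete random variable directly from its mass function. The mass function itself is a made-up example, not one from the text:

```python
# Mean and variance of a discrete random variable computed directly
# from its mass function.  The mass function below is an illustrative
# assumption, not one from the text; its probabilities sum to 1.
f = {0: 0.2, 1: 0.5, 2: 0.3}  # f[x] = P(X = x)

mean = sum(x * p for x, p in f.items())               # E(X), as in (1)
var = sum((x - mean) ** 2 * p for x, p in f.items())  # E((X - mu)^2), as in (2)

print(round(mean, 6))  # 1.1
print(round(var, 6))   # 0.49
```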

Var(X) = Σ (x − µ)²⋅fX(x). (3)

In the special case where the mass function fX(x) is constant, so that X takes n real values x1, …, xn each with probability 1/n, (3) reduces to a well known equation determining the variance of a set of n numbers:

σ² = (1/n)⋅Σ (xi − x̄)², (4)

where x̄ is the mean of the n numbers x1, …, xn.
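Equation (4) can be checked against the population-variance routine in the standard library; the data below are arbitrary illustrative numbers:

```python
import statistics

# When X takes n values, each with probability 1/n, formula (3) reduces
# to the population variance of the n numbers, as in (4).  The data are
# arbitrary illustrative numbers.
xs = [2, 4, 4, 4, 5, 5, 7, 9]
n = len(xs)
x_bar = sum(xs) / n                                # mean of the numbers
var_eq4 = sum((x - x_bar) ** 2 for x in xs) / n    # equation (4)

print(var_eq4)                                     # 4.0
# statistics.pvariance computes the same population variance:
print(statistics.pvariance(xs))
```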

Events A and B are said to be independent if and only if the incidence of A does not change the probability of B occurring. An equivalent statement is P(A ∩ B) = P(A)⋅P(B). In the same spirit, the discrete random variables X and Y are called independent if the numerical value of X does not affect the distribution of Y; in other words, the events {X = x} and {Y = y} are independent for all x and y. The joint distribution function FX,Y : ℜ² → [0, 1] of X and Y is given by FX,Y(x, y) = P(X ≤ x and Y ≤ y), and their joint mass function fX,Y : ℜ² → [0, 1] is given by fX,Y(x, y) = P(X = x and Y = y). X and Y are independent if and only if fX,Y(x, y) = fX(x)⋅fY(y) for all x, y ∈ ℜ.

Consider an archer shooting arrows at the target shown in Figure 1. The regions A to E are annuli with inner and outer radii as shown in Figure 1. A hit in region A scores 4 points, B scores 3 points, C scores 2 points, D scores 1 point and E scores nothing. Suppose the archer is a very poor shot and hits the target randomly; in other words, target regions of equal area will have the same probability of being hit. For simplicity, it is assumed the archer always hits the target. The probability that an arrow hits a target region is directly proportional to the area of the region, so the probabilities of hitting A to E are 1/25, 3/25, 5/25, 7/25 and 9/25 respectively.

Figure 1: An archery target.

If the archer is allowed to fire two arrows, the sample space is Ω = { AA, AB, AC, AD, AE, BA, BB, …, EC, ED, EE }. Let the random variable X(ω) represent the score of a particular outcome ω, mapping the sample space Ω to scores (real numbers). Clearly X is a discrete random variable. The scoring guidelines outlined in Figure 1 imply X(AA) = 8, X(AB) = X(BA) = 7, X(AC) = X(BB) = X(CA) = 6, …, X(CE) = X(DD) = X(EC) = 2, X(DE) = X(ED) = 1 and X(EE) = 0. The mass function of X, fX(x), is then

fX(0) = P(X = 0) = P(Hit E)⋅P(Hit E) = 81/625,
fX(1) = P(X = 1) = 2⋅P(Hit D)⋅P(Hit E) = 126/625,
fX(2) = P(X = 2) = 2⋅P(Hit C)⋅P(Hit E) + P(Hit D)⋅P(Hit D) = 139/625,

and so on.
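The mass function of the two-arrow score can be obtained by brute-force enumeration of the 25 outcomes in Ω, using the scores and hit probabilities stated above. A minimal sketch:

```python
from itertools import product

# Brute-force enumeration of the 25 two-arrow outcomes.  Scores and hit
# probabilities are those given in the text; per-arrow probabilities are
# kept in units of 1/25, so joint probabilities are in units of 1/625.
score = {'A': 4, 'B': 3, 'C': 2, 'D': 1, 'E': 0}
hits = {'A': 1, 'B': 3, 'C': 5, 'D': 7, 'E': 9}   # P(hit region) = hits/25

f = {}                                            # f[x] = 625 * P(X = x)
for r1, r2 in product('ABCDE', repeat=2):
    x = score[r1] + score[r2]
    f[x] = f.get(x, 0) + hits[r1] * hits[r2]

print(f[0], f[1], f[2])   # 81 126 139  (i.e. 81/625, 126/625, 139/625)
assert sum(f.values()) == 625   # the mass function sums to 1
```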

From (1), the expected value of X is

E(X) = 0⋅81/625 + 1⋅126/625 + 2⋅139/625 + 3⋅124/625 + … = 2.4.

From (3), the variance of X is

Var(X) = (0 − 2.4)²⋅81/625 + (1 − 2.4)²⋅126/625 + (2 − 2.4)²⋅139/625 + (3 − 2.4)²⋅124/625 + … = 2.72.

The distribution function of X, FX(x), is then FX(0) = P(X ≤ 0) = fX(0), FX(1) = P(X ≤ 1) = fX(1) + fX(0), FX(2) = P(X ≤ 2) = fX(2) + fX(1) + fX(0) and so on. The distribution function FX(x) is shown in Figure 2.

Figure 2: The distribution function FX of X for the archery target.

Continuous random variables

The random variable X is continuous if its distribution function can be expressed as

FX(x) = ∫ fX(u) du, integrated over u ∈ (−∞, x], (5)

for some integrable function fX : ℜ → [0, ∞). In this case, fX is called the (probability) density function of X. The fundamental theorem of calculus and (5) imply fX(x) = dFX(x)/dx wherever FX is differentiable.
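The same enumeration of outcomes also reproduces E(X) = 2.4 and Var(X) = 2.72 numerically, and gives the step-function FX:

```python
from itertools import product

# Mean, variance and distribution function of the two-arrow score X,
# reproducing E(X) = 2.4 and Var(X) = 2.72 from the text.
score = {'A': 4, 'B': 3, 'C': 2, 'D': 1, 'E': 0}
prob = {'A': 1/25, 'B': 3/25, 'C': 5/25, 'D': 7/25, 'E': 9/25}

f = {}                                   # mass function f[x] = P(X = x)
for r1, r2 in product('ABCDE', repeat=2):
    x = score[r1] + score[r2]
    f[x] = f.get(x, 0) + prob[r1] * prob[r2]

mean = sum(x * p for x, p in f.items())               # equation (1)
var = sum((x - mean) ** 2 * p for x, p in f.items())  # equation (3)
print(round(mean, 6), round(var, 6))                  # 2.4 2.72

def F(x):
    """Distribution function FX(x) = P(X <= x): a step function."""
    return sum(p for v, p in f.items() if v <= x)

print(round(F(1), 6))   # 0.3312, i.e. FX(1) = fX(1) + fX(0) = 207/625
```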

Joint distributions

(Results are shown for discrete random variables, but they hold for continuous random variables as well.) For a pair of discrete random variables, the joint probability mass function can be written as Pr(X = x & Y = y).

Example. Consider the random variables associated with colour blindness and gender (associate 0 with male, 1 with female). We can tabulate the probabilities of all 4 possible pairs of outcomes as a two-way table of the joint distribution, from which the probability of each pair can be read. In general, when we build the joint distribution of two random variables we can make such a two-way table; for more variables this is, of course, impossible to draw on paper. Since the entries are probabilities, they sum to 1. The row-sums and column-sums produce the complete distributions of the individual random variables; they are called the marginal probabilities.

Joint distribution of independent variables

If for discrete random variables fX,Y(x, y) = fX(x)⋅fY(y) for all x and y, then X and Y are independent.
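A small sketch of such a two-way table and its marginals. The probabilities below are illustrative assumptions, not the figures from the original table:

```python
# A two-way table for X = gender (0 male, 1 female) and Y = colour
# blindness (0 no, 1 yes).  The numbers are illustrative assumptions,
# not data from the text.
joint = {
    (0, 0): 0.46,  (0, 1): 0.04,    # male row
    (1, 0): 0.498, (1, 1): 0.002,   # female row
}
assert abs(sum(joint.values()) - 1.0) < 1e-12   # probabilities sum to 1

# Marginals: row-sums give the distribution of X, column-sums that of Y.
fX = {x: sum(p for (x2, y), p in joint.items() if x2 == x) for x in (0, 1)}
fY = {y: sum(p for (x, y2), p in joint.items() if y2 == y) for y in (0, 1)}

print(round(fX[0], 6), round(fX[1], 6))  # 0.5 0.5
print(round(fY[1], 6))                   # 0.042 (overall colour blindness)
```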

In general, given the joint distribution P(x, y) on the pairs (x, y) for two random variables X and Y, we have the marginal distributions

fX(x) = Σy P(x, y), fY(y) = Σx P(x, y).

Some general results

Property 1. For a constant c ∈ ℜ and a random variable X,

E(cX) = c⋅E(X). (1)

Proof: For X discrete, E(cX) = Σ cx⋅fX(x) = c⋅Σ x⋅fX(x) = c⋅E(X); for X continuous, E(cX) = ∫ cx⋅fX(x) dx = c⋅∫ x⋅fX(x) dx = c⋅E(X).

Property 2. For discrete or continuous random variables X and Y,

E(X + Y) = E(X) + E(Y). (2)

Proof: Suppose X and Y are discrete with joint mass function fX,Y(x, y) = P(X = x and Y = y). Then an extension of Part 1-(1) gives

E(X + Y) = Σx Σy (x + y)⋅fX,Y(x, y) = Σx x⋅fX(x) + Σy y⋅fY(y) = E(X) + E(Y).

For X and Y continuous, the proof begins with the joint density function fX,Y : ℜ² → [0, ∞) and proceeds in a similar way, with summations replaced by integrations.
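Properties 1 and 2 are identities, so they can be verified exactly on a small joint mass function, even one where X and Y are dependent. A sketch using exact rational arithmetic (the mass function is an illustrative assumption):

```python
from fractions import Fraction as Fr

# Exact check of Property 1, E(cX) = c*E(X), and Property 2,
# E(X + Y) = E(X) + E(Y), on a small joint mass function.  X and Y here
# are deliberately dependent: linearity does not need independence.
# The mass function is an illustrative assumption.
joint = {(0, 0): Fr(1, 4), (0, 1): Fr(1, 2), (1, 1): Fr(1, 4)}

E = lambda g: sum(p * g(x, y) for (x, y), p in joint.items())
c = 7
assert E(lambda x, y: c * x) == c * E(lambda x, y: x)                  # Property 1
assert E(lambda x, y: x + y) == E(lambda x, y: x) + E(lambda x, y: y)  # Property 2
print("linearity verified exactly")
```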

Let X and Y have joint mass function fX,Y : ℜ² → [0, 1] given by fX,Y(x, y) = P(X = x and Y = y). If X and Y are independent, then (by definition) the probability of Y taking any particular value is not affected by the value taken by X. Therefore

P(X = x and Y = y) = P((X = x) ∩ (Y = y)) = P(X = x)⋅P(Y = y),

so that fX,Y(x, y) = fX(x)⋅fY(y).

Property 3. For discrete or continuous independent random variables X and Y,

E(XY) = E(X)⋅E(Y). (3)

Proof: The proof of (3) is first presented for discrete random variables X and Y:

E(XY) = Σx Σy xy⋅fX,Y(x, y) = Σx Σy xy⋅fX(x)⋅fY(y) = (Σx x⋅fX(x))⋅(Σy y⋅fY(y)) = E(X)⋅E(Y).

For X and Y continuous, the proof begins with the joint density function fX,Y : ℜ² → [0, ∞) and proceeds in a similar way, with summations replaced by integrations.
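A sketch verifying Property 3 exactly, with a joint mass function built as fX(x)⋅fY(y) per the definition of independence (the marginals are illustrative assumptions):

```python
from fractions import Fraction as Fr
from itertools import product

# Exact check of Property 3: for independent X and Y, E(XY) = E(X)*E(Y).
# The joint mass function is built as fX(x)*fY(y), which is exactly the
# definition of independence.  The marginals are illustrative.
fX = {1: Fr(1, 3), 2: Fr(2, 3)}
fY = {0: Fr(1, 2), 4: Fr(1, 2)}
joint = {(x, y): fX[x] * fY[y] for x, y in product(fX, fY)}

E_X = sum(x * p for x, p in fX.items())               # 5/3
E_Y = sum(y * p for y, p in fY.items())               # 2
E_XY = sum(x * y * p for (x, y), p in joint.items())
assert E_XY == E_X * E_Y
print(E_XY)   # 10/3
```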

Property 4. For the discrete or continuous random variable X, let µ = E(X). Then

Var(X) = E(X²) − µ². (4)

Proof: The proof of (4) holds for X discrete or continuous:

Var(X) = E((X − µ)²) = E(X² − 2µX + µ²) = E(X²) − 2µ⋅E(X) + µ² = E(X²) − µ².

Property 5. For the discrete or continuous random variable X and the constant c ∈ ℜ,

Var(cX) = c²⋅Var(X). (5)

Proof: The proof of (5) holds for X discrete or continuous. As a shorthand notation, let µ = E(X). Then
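Properties 4 and 5 can likewise be checked exactly on a small illustrative mass function:

```python
from fractions import Fraction as Fr

# Exact check of Property 4, Var(X) = E(X^2) - mu^2, and Property 5,
# Var(cX) = c^2 * Var(X), on an illustrative mass function.
f = {0: Fr(1, 2), 1: Fr(1, 4), 4: Fr(1, 4)}

E = lambda g: sum(p * g(x) for x, p in f.items())
mu = E(lambda x: x)                      # 5/4
var = E(lambda x: (x - mu) ** 2)         # definition (2)

assert var == E(lambda x: x ** 2) - mu ** 2                  # Property 4
c = 3
assert E(lambda x: (c * x - c * mu) ** 2) == c ** 2 * var    # Property 5
print(var)   # 43/16
```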

Var(cX) = E((cX − cµ)²) = E(c²⋅(X − µ)²) = c²⋅E((X − µ)²) = c²⋅Var(X).

Property 6. For discrete or continuous independent random variables X and Y,

Var(X + Y) = Var(X) + Var(Y). (6)

Proof: The proof of (6) holds for X and Y discrete or continuous. As a shorthand notation, let µX = E(X) and µY = E(Y). Then

Var(X + Y) = E((X + Y − µX − µY)²) = E((X − µX)²) + 2⋅E((X − µX)(Y − µY)) + E((Y − µY)²).

Since X and Y are independent, (3) gives E((X − µX)(Y − µY)) = E(X − µX)⋅E(Y − µY) = 0, and therefore Var(X + Y) = Var(X) + Var(Y).
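Property 6 can be checked by convolving two illustrative marginal mass functions to obtain the mass function of X + Y under independence:

```python
from fractions import Fraction as Fr
from itertools import product

# Exact check of Property 6: for independent X and Y,
# Var(X + Y) = Var(X) + Var(Y).  The marginals are illustrative; the
# mass function of S = X + Y is their convolution, which is what
# independence implies.
fX = {0: Fr(1, 2), 2: Fr(1, 2)}
fY = {1: Fr(1, 3), 4: Fr(2, 3)}

def var(f):
    """Variance of a discrete random variable from its mass function."""
    mu = sum(x * p for x, p in f.items())
    return sum((x - mu) ** 2 * p for x, p in f.items())

fS = {}                                   # mass function of S = X + Y
for (x, px), (y, py) in product(fX.items(), fY.items()):
    fS[x + y] = fS.get(x + y, 0) + px * py

assert var(fS) == var(fX) + var(fY)
print(var(fX), var(fY), var(fS))   # 1 2 3
```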
