
TERM PAPER OF MTH:202

TOPIC: Discuss the various distributions of a continuous random variable.

Tejinder Paul Singh Summit Sakhre

E2803A10

B.Tech(IT)

10803929

ACKNOWLEDGEMENT

I am grateful to Tejinder Paul Singh, who assigned me this term paper to bring out my creative capabilities.

I express my gratitude to my parents for being a continuous source of encouragement and for all their financial aid.

I would like to acknowledge the assistance

provided to me by the library staff of LOVELY

PROFESSIONAL UNIVERSITY.

My heartfelt gratitude to my classmates for helping me to complete my work in time.

Summit Sakhre

INDEX

Random Variable

Intuitive Description

Equality in Distribution

Discrete Random Variables

Random variable

In mathematics, a random variable is a variable whose value depends on the outcome of a random experiment, e.g. the number returned by rolling a die. Random variables are used in the study of probability. They were developed to assist in the analysis of games of chance, stochastic events, and the results of scientific experiments by capturing only the mathematical properties necessary to answer probabilistic questions. Further formalizations have firmly grounded the entity in the theoretical domains of mathematics by making use of measure theory.

The language and structure of random variables can be grasped at various levels of mathematical fluency. Beyond an introductory level, set theory and calculus are fundamental. The concept of a random variable is closely linked to the term "random variate": a random variate is a particular outcome of a random variable.

There are two types of random variables: discrete and continuous. A discrete random variable maps outcomes to values of a countable set (e.g., the integers), with each value in the range having probability greater than or equal to zero. A continuous random variable maps outcomes to values of an uncountable set (e.g., the real numbers). Usually, for a continuous random variable the probability of any specific value is zero, although the probability of an infinite set of values (such as an interval of non-zero length) may be positive. However, sometimes a random variable can be "mixed", having part of its probability spread out over an interval like a typical continuous variable, and part of it concentrated on particular values, like a discrete variable. This categorization into types is directly equivalent to the categorization of probability distributions.

A random variable has an associated probability distribution and frequently also a probability

density function. Probability density functions are commonly used for continuous variables.

Intuitive description

A random variable can be thought of as an unknown value that may change every time it is inspected. Thus, a random variable can be thought of as a function mapping the sample space of a random process to the real numbers. A few examples will highlight this.

Examples

For a coin toss, the possible outcomes are heads or tails. The number of heads appearing in one fair coin toss can be described using the following random variable:

X = 1 if the outcome is heads, X = 0 if the outcome is tails,

and if the coin is equally likely to land on either side then it has a probability mass function given by:

f(x) = P(X = x) = 1/2 for x = 0, 1.

A random variable can also be used to describe the process of rolling a fair die and the possible outcomes. The most obvious representation is to take the set {1, 2, 3, 4, 5, 6} as the sample space, defining the random variable X as the number rolled. In this case,

P(X = k) = 1/6 for k = 1, 2, ..., 6.
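The two examples above are easy to check numerically. The following Python sketch, a minimal illustration using the standard random module (the trial count is an arbitrary choice), estimates P(X = 1) for the coin and P(X = k) for the die by simulation:

    import random

    TRIALS = 100_000

    # Fair coin: X = 1 for heads, 0 for tails; expect P(X = 1) to be about 1/2.
    heads = sum(random.choice((0, 1)) for _ in range(TRIALS))
    print("estimated P(X = 1) for the coin:", heads / TRIALS)

    # Fair die: X = number rolled; expect P(X = k) to be about 1/6 for each k.
    counts = {k: 0 for k in range(1, 7)}
    for _ in range(TRIALS):
        counts[random.randint(1, 6)] += 1
    for k in range(1, 7):
        print(f"estimated P(X = {k}):", counts[k] / TRIALS)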

An example of a continuous random variable would be one based on a spinner that can choose a real number from the interval [0, 2π), with all values being "equally likely". In this case, X = the number spun. Any real number has probability zero of being selected, but a positive probability can be assigned to any range of values. For example, the probability of choosing a number in [0, π] is ½. Instead of speaking of a probability mass function, we say that the probability density of X is 1/(2π). The probability of a subset of [0, 2π) can be calculated by multiplying the measure of the set by 1/(2π). In general, the probability of a set for a given continuous random variable can be calculated by integrating the density over the given set.
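Because the density here is constant, probabilities of intervals are simple to compute. The Python sketch below (a rough illustration; the trial count is arbitrary) compares the exact value π · 1/(2π) = ½ with a Monte Carlo estimate:

    import math
    import random

    TRIALS = 200_000
    density = 1 / (2 * math.pi)            # constant density of X on [0, 2*pi)

    # Exact value from the density: P(0 <= X <= pi) = pi * 1/(2*pi) = 1/2.
    exact = math.pi * density
    print("exact P(0 <= X <= pi):", exact)

    # Monte Carlo estimate: spin the spinner many times.
    hits = sum(1 for _ in range(TRIALS) if random.uniform(0, 2 * math.pi) <= math.pi)
    print("simulated P(0 <= X <= pi):", hits / TRIALS)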

An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, X = −1; otherwise X = the value of the spinner as in the preceding example. There is a probability of ½ that this random variable will have the value −1. Other ranges of values would have half the probability of the last example.
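The mixed variable is also easy to simulate. In the illustrative Python sketch below, roughly half of the draws equal −1 and the interval [0, π] receives about a quarter of the total probability:

    import math
    import random

    def mixed_spin():
        """One draw of the mixed random variable: -1 on tails, else a spin in [0, 2*pi)."""
        if random.random() < 0.5:                    # tails
            return -1.0
        return random.uniform(0, 2 * math.pi)        # heads: spin the spinner

    TRIALS = 200_000
    draws = [mixed_spin() for _ in range(TRIALS)]
    print("estimated P(X = -1):", sum(d == -1.0 for d in draws) / TRIALS)             # about 0.5
    print("estimated P(0 <= X <= pi):", sum(0 <= d <= math.pi for d in draws) / TRIALS)  # about 0.25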

Typically, the observation space is the real numbers with a suitable measure. Recall that (Ω, A, P) is the probability space. For a real observation space, the function X: Ω → R is a real-valued random variable if

{ω : X(ω) ≤ r} ∈ A for every r ∈ R.

This definition is a special case of the above because the collection of sets {(−∞, r] : r ∈ R} generates the Borel sigma-algebra on the real numbers, and it is enough to check measurability on a generating set.

Associating a cumulative distribution function (CDF) with a random variable is a generalization of assigning a value to a variable. If the CDF is a (right-continuous) Heaviside step function, then the variable takes on the value at the jump with probability 1. In general, the CDF specifies the probability that the variable takes on particular values.

Because random variables are real-valued, we can ask questions like "How likely is it that the value of X is bigger than 2?". This is the same as the probability of the event {ω : X(ω) > 2}, which is often written as P(X > 2) for short, and easily obtained since

P(X > 2) = 1 − P(X ≤ 2) = 1 − F(2),

where F is the cumulative distribution function of X.

Recording all these probabilities of output ranges of a real-valued random variable X yields the probability distribution of X. The probability distribution "forgets" about the particular probability space used to define X and only records the probabilities of various values of X. Such a probability distribution can always be captured by its cumulative distribution function and sometimes also using a probability density function. In measure-theoretic terms, we use the random variable X to "push forward" the measure P on Ω to a measure dF on R. The underlying probability space Ω is a technical device used to guarantee the existence of random variables, and sometimes to construct them. In practice, one often disposes of the space Ω altogether and just puts a measure on R that assigns measure 1 to the whole real line, i.e., one works with probability distributions instead of random variables.

Functions of random variables

If we have a random variable X on Ω and a Borel measurable function f: R → R, then Y = f(X) will also be a random variable on Ω, since the composition of measurable functions is also measurable. (However, this is not true if f is merely Lebesgue measurable.) The same procedure that allowed one to go from a probability space (Ω, P) to (R, dF) can be used to obtain the distribution of Y = f(X). The cumulative distribution function of Y is

F_Y(y) = P(f(X) ≤ y).

If the function f is invertible and increasing, i.e. f⁻¹ exists, then the previous relation can be extended to obtain

F_Y(y) = P(f(X) ≤ y) = P(X ≤ f⁻¹(y)) = F_X(f⁻¹(y)),

and, again with the same hypotheses of invertibility of f, we can find the relation between the probability density functions by differentiating both sides with respect to y, in order to obtain

f_Y(y) = f_X(f⁻¹(y)) |d f⁻¹(y) / dy|.

If there is no invertibility but each y admits at most a countable number of roots (i.e. a finite number of x_i such that y = f(x_i)), then the previous relation between the probability density functions can be generalized with

f_Y(y) = Σ_i f_X(x_i) |d x_i / dy|,

where x_i = f_i⁻¹(y).

Example 1

Let X be a real-valued random variable and let Y = X². If y < 0, then P(Y ≤ y) = 0, so F_Y(y) = 0. If y ≥ 0, then

P(X² ≤ y) = P(−√y ≤ X ≤ √y),

so

F_Y(y) = F_X(√y) − F_X(−√y).
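The relation F_Y(y) = F_X(√y) − F_X(−√y) can be checked numerically. The Python sketch below assumes, purely for illustration, that X is a standard normal variable, and compares the formula with a simulated value of F_Y(1.5):

    import math
    import random

    def normal_cdf(x):
        """CDF of a standard normal variable, used here as an illustrative choice of F_X."""
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    y = 1.5
    formula = normal_cdf(math.sqrt(y)) - normal_cdf(-math.sqrt(y))   # F_X(sqrt(y)) - F_X(-sqrt(y))

    TRIALS = 200_000
    simulated = sum(random.gauss(0, 1) ** 2 <= y for _ in range(TRIALS)) / TRIALS
    print("F_Y(1.5) from the formula:", formula)
    print("F_Y(1.5) from simulation: ", simulated)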

Example 2

Suppose X is a random variable with cumulative distribution function

F_X(x) = 1 / (1 + e^(−x))^θ,

where θ > 0 is a fixed parameter. Consider the random variable Y = log(1 + e^(−X)). Then,

F_Y(y) = P(Y ≤ y) = P(X ≥ −log(e^y − 1)) = 1 − F_X(−log(e^y − 1)) = 1 − e^(−yθ) for y > 0,

so Y has an exponential distribution with parameter θ.

Equality in distribution

Two random variables X and Y are equal in distribution if they have the same distribution functions:

P(X ≤ x) = P(Y ≤ x) for all x.

Two random variables having equal moment generating functions have the same distribution. This provides, for example, a useful method of checking equality of certain functions of i.i.d. random variables.
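Equality in distribution does not require the variables to be equal as functions. For example, if X is uniform on [0, 1], then Y = 1 − X has the same distribution function as X even though X ≠ Y. The illustrative Python sketch below compares their empirical distribution functions at a few points:

    import random

    TRIALS = 100_000
    xs = [random.random() for _ in range(TRIALS)]   # X ~ Uniform(0, 1)
    ys = [1 - x for x in xs]                        # Y = 1 - X: not equal to X, but same distribution

    for t in (0.25, 0.5, 0.75):
        fx = sum(x <= t for x in xs) / TRIALS       # empirical F_X(t)
        fy = sum(y <= t for y in ys) / TRIALS       # empirical F_Y(t)
        print(f"t = {t}: F_X(t) ~ {fx:.3f}, F_Y(t) ~ {fy:.3f}")

    print("estimated P(X = Y):", sum(x == y for x, y in zip(xs, ys)) / TRIALS)   # essentially 0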

Equality in mean

Two random variables X and Y are equal in p-th mean if the p-th moment of |X − Y| is zero, that is,

E(|X − Y|^p) = 0.

As in the previous case, there is a related distance between the random variables, namely

d_p(X, Y) = (E(|X − Y|^p))^(1/p).

Almost sure equality

Two random variables X and Y are equal almost surely if, and only if, the probability that they are different is zero:

P(X ≠ Y) = 0.

For all practical purposes in probability theory, this notion of equivalence is as strong as actual equality. It is associated to the following distance:

d_∞(X, Y) = ess sup_ω |X(ω) − Y(ω)|,

where 'sup' in this case represents the essential supremum in the sense of measure theory.

Equality

Finally, the two random variables X and Y are equal if they are equal as functions on their probability space, that is,

X(ω) = Y(ω) for all ω.

Discrete and continuous random variables

A discrete random variable is one which may take on only a countable number of distinct values such as 0, 1, 2, 3, 4, ... Discrete random variables are usually (but not necessarily) counts. If a random variable can take only a finite number of distinct values, then it must be discrete. Examples of discrete random variables include the number of children in a family, the Friday night attendance at a cinema, the number of patients in a doctor's surgery, and the number of defective light bulbs in a box of ten.

A continuous random variable is one which takes an uncountably infinite number of possible values. Continuous random variables are usually measurements. Examples include height, weight, the amount of sugar in an orange, and the time required to run a mile.

Independent random variables

Two random variables X and Y, say, are said to be independent if and only if the value of X has no influence on the value of Y and vice versa. The cumulative distribution functions of two independent random variables X and Y are related by

F(x, y) = G(x)·H(y),

where G(x) and H(y) are the marginal distribution functions of X and Y for all pairs (x, y). Knowledge of the value of X does not affect the probability distribution of Y and vice versa. Thus there is no relationship between the values of independent random variables.

For continuous independent random variables, their probability density functions are related by

f(x, y) = g(x)·h(y),

where g(x) and h(y) are the marginal density functions of the random variables X and Y respectively, for all pairs (x, y).

For discrete independent random variables, their probabilities are related by

P(X = xi; Y = yj) = P(X = xi)·P(Y = yj)

for each pair (xi, yj).
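For a concrete discrete case, the illustrative Python sketch below uses two fair dice and verifies the factorization P(X = xi, Y = yj) = P(X = xi)·P(Y = yj) for every pair:

    from fractions import Fraction
    from itertools import product

    # Two independent fair dice: the joint probability of each pair is 1/36.
    px = {i: Fraction(1, 6) for i in range(1, 7)}    # marginal distribution of X
    py = {j: Fraction(1, 6) for j in range(1, 7)}    # marginal distribution of Y
    joint = {(i, j): Fraction(1, 36) for i, j in product(range(1, 7), repeat=2)}

    # Independence check: P(X = xi, Y = yj) == P(X = xi) * P(Y = yj) for every pair.
    independent = all(joint[(i, j)] == px[i] * py[j] for i, j in joint)
    print("X and Y independent:", independent)       # True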

Suppose X and Y are random variables on the same sample space S. Then X + Y, kX and XY are functions on S defined as follows (where s ∈ S):

(X + Y)(s) = X(s) + Y(s),  (kX)(s) = kX(s),  (XY)(s) = X(s)Y(s)

More generally, for any polynomial or exponential function h(x, y, . . . , z), we define h(X, Y, . . . , Z) to be the function on S defined by

[h(X, Y, . . . , Z)](s) = h[X(s), Y(s), . . . , Z(s)]

It can be shown that these are also random variables. (This is trivial in the case that every subset of S is an event.)

The short notation P(X = a) and P(a ≤ X ≤ b) will be used, respectively, for the probability that "X maps into a" and "X maps into the interval [a, b]." That is, for s ∈ S:

P(X = a) ≡ P({s | X(s) = a})  and  P(a ≤ X ≤ b) ≡ P({s | a ≤ X(s) ≤ b})

Analogous meanings are given to P(X ≤ a), P(X = a, Y = b), P(a ≤ X ≤ b, c ≤ Y ≤ d), and so on.

Let X be a random variable on a finite sample space S with range space RX = {x1, x2, . . . , xt}. Then X induces a function f which assigns probabilities pk to the points xk in RX as follows:

f(xk) = pk = P(X = xk) = sum of probabilities of points in S whose image is xk.

The set of ordered pairs (x1, f(x1)), (x2, f(x2)), . . . , (xt, f(xt)) is called the distribution of the random variable X; it is usually given by a table. This function f has the following two properties:

(i) f(xk) ≥ 0  and  (ii) Σk f(xk) = 1

Thus RX with the above assignment of probabilities is a probability space. (Sometimes we will use the pair notation [xk, pk] to denote the distribution of X instead of the functional notation [x, f(x)].)

Distribution f of a random variable X

Theorem: Let S be an equiprobable space, and let f be the distribution of a random variable X on S with the range space RX = {x1, x2, . . . , xt}. Then

pi = f(xi) = (number of points in S whose image is xi) / (number of points in S).
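The theorem can be illustrated on an equiprobable space such as the 36 outcomes of rolling two fair dice. The illustrative Python sketch below counts, for each value xk of the sum, the number of outcomes whose image is xk and divides by the number of points in S:

    from fractions import Fraction
    from itertools import product

    # Equiprobable space: all 36 outcomes of rolling two fair dice.
    S = list(product(range(1, 7), repeat=2))

    def X(s):
        return s[0] + s[1]                 # the random variable "sum of the two dice"

    # f(xk) = (number of points in S whose image is xk) / (number of points in S)
    counts = {}
    for s in S:
        counts[X(s)] = counts.get(X(s), 0) + 1
    dist = {xk: Fraction(c, len(S)) for xk, c in counts.items()}

    print(dist)                                     # e.g. f(7) = 6/36 = 1/6
    print("total probability:", sum(dist.values())) # 1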

In a histogram, you might show the percentage of your data that falls into each of several categories. For example, suppose you had data on family income. You might find that 20 percent of families have an income below $30k, 27 percent have an income between $30k and $40k, 21 percent have an income between $40k and $50k, and 32 percent have an income over $50k. A histogram would be a chart of that data with the income ranges on the X-axis and the percentages on the Y-axis.

Similarly, a graph of a random variable shows the range of values of the random variable on the X-axis and the probabilities on the Y-axis. Just as the percentages in a histogram have to add to 100 percent, the probabilities in a graph of a random variable have to add to 1. (We say that the area under the curve of a probability density function has to equal one.) Just as the percentages in a histogram have to be non-negative, the probabilities of the values of a random variable have to be non-negative.

A probability density function (pdf) is an equation or set of equations that allows you to calculate probability based on the value of x. Think of a pdf as a formula for producing a histogram. For example, if X can take on the values 1, 2, 3, 4, 5 and the probabilities are equally likely, then we can write the pdf of X as:

f(X) = .2 for X = 1, 2, 3, 4, or 5

The point to remember about a pdf is that the probabilities have to be non-negative and sum to one. For a discrete distribution, it is straightforward to add all of the probabilities. For a continuous distribution, you have to take the "area under the curve." In practice, unless you know calculus, the only areas you can find are when the pdf is linear. See the Uniform Distribution.
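Both requirements are easy to check in simple cases. In the Python sketch below, the flat density of height 1/4 on [0, 4] is an illustrative choice, not an example from the text; the discrete probabilities are added directly and the "area under the curve" is approximated with a Riemann sum:

    # Discrete case: f(X) = 0.2 for X = 1, 2, 3, 4, 5 must sum to one.
    pmf = {x: 0.2 for x in range(1, 6)}
    print("sum of the probabilities:", sum(pmf.values()))        # 1.0

    # Continuous case: a flat (uniform) density of height 1/4 on [0, 4].
    def pdf(x):
        return 0.25 if 0 <= x <= 4 else 0.0

    a, b, steps = 0.0, 4.0, 10_000
    width = (b - a) / steps
    area = sum(pdf(a + (i + 0.5) * width) * width for i in range(steps))
    print("area under the uniform pdf:", area)                   # approximately 1.0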

The mean of a distribution is a measure of the average. Suppose that we had a spinner where the circle was broken into three unequal sections. The largest section is worth 5 points, one small section is worth 3 points, and the remaining small section is worth 12 points. The spinner has a probability of .6 of landing on 5, a probability of .2 of landing on 3, and a probability of .2 of landing on 12. If you were to spin the spinner a hundred times, what do you think your average score would be? To calculate the answer, you take the weighted average of the three numbers, where the weights are equal to the probabilities. See the table below.

X     P(X)    P(X)·X
3     .2      0.6
5     .6      3.0
12    .2      2.4

μX = X̄ = E(X) = 6.0

The Greek letter μ is pronounced "mew", and μX is pronounced "mew of X" or "mew sub X." X̄ is pronounced "X bar." E(X) is pronounced "the expected value of X." μX, X̄, and E(X) are three ways of saying the same thing: the average value of X.

In our example, E(X) = 6.0, even though you could never get a 6 on the spinner. Again, you

should think of the mean or average as the number you would get if you averaged a large number

of spins. On your first spin, you might get a 12, and the average would start out at 12. As you

take more spins, you get other numbers, and the average gradually tends toward 6. In

mathematical terms, you could say that the average asymptotically approaches 6.
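This long-run interpretation of the mean can be checked by simulation. The Python sketch below (an illustrative simulation; the number of spins is arbitrary) computes E(X) from the table and then averages a large number of simulated spins:

    import random

    values, probs = [3, 5, 12], [0.2, 0.6, 0.2]
    mean = sum(v * p for v, p in zip(values, probs))
    print("E(X) =", mean)                                  # approximately 6.0

    # Spin the spinner many times; the average drifts toward E(X).
    spins = 100_000
    total = sum(random.choices(values, weights=probs)[0] for _ in range(spins))
    print("average of", spins, "spins:", total / spins)    # close to 6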

Another property of the distribution of a random variable is its average dispersion around the mean. For example, if you spin a 12, then this is 6 points away from the mean. If you spin a 3, then this is 3 points away from the mean. You could spin the spinner 100 times and calculate the average dispersion around the mean. Mathematically, we calculate the average dispersion by taking the square of the differences from the mean and weighting the squared differences by the probabilities. This is shown in the table below.

X     P(X)    X − X̄    (X − X̄)²    P(X)·(X − X̄)²
3     .2      −3        9            1.8
5     .6      −1        1            0.6
12    .2       6        36           7.2

σ² = var(X) = E[(X − X̄)²] = 9.6

The Greek letter σ is pronounced "sigma" (this is a lower-case sigma). The expression "var(X)" is pronounced "variance of X."

Suppose that the values of X were raised to 4, 6, and 13.

What do you think would happen to the mean of X? What do you think would happen to the

variance of X? Verify your guesses by setting up the table and doing the calculation.

See if you can come up with values of X that would raise the mean and raise the variance. See if

you can come up with values of X that would raise the mean but lower the variance. Finally,

suppose we leave the values of X the same. Can you come up with different values of P(X) that

keep the same mean but lower the variance? Can you come up with values of P(X) that keep the

same mean but raise the variance?
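A small helper makes it easy to experiment with these questions. The Python sketch below (an illustrative function, not part of the original tables) computes the weighted mean and variance for any list of values and probabilities, and reproduces the 6.0 and 9.6 from the tables above:

    def mean_and_variance(values, probs):
        """Weighted mean and variance of a discrete random variable."""
        mu = sum(v * p for v, p in zip(values, probs))
        var = sum(p * (v - mu) ** 2 for v, p in zip(values, probs))
        return mu, var

    print(mean_and_variance([3, 5, 12], [0.2, 0.6, 0.2]))   # approximately (6.0, 9.6), as in the tables
    print(mean_and_variance([4, 6, 13], [0.2, 0.6, 0.2]))   # raising every value by 1 raises the mean
                                                            # to 7.0 and leaves the variance at 9.6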

Next, consider a weighted average of random variables. In terms of a histogram, suppose that

you had two zip codes with different average incomes. If you wanted to take the overall mean

income of the population in both zip codes, you would have to weight the means by the different

populations in the zip codes.

For example, suppose that there are 6,000 families in one zip code, with a mean income of $50k. Suppose that there are 4,000 families in another zip code, with a mean income of $40k. The overall mean income is equal to (1/10,000)(6,000 × $50k + 4,000 × $40k) = $46k.

Similarly, suppose that you take a weighted average of two random variables. Let W = aX + bY.

Then the mean of W is equal to a times the mean of X plus b times the mean of Y.
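This linearity property can also be confirmed numerically. The Python sketch below assumes, for illustration only, that X is a fair die and Y is uniform on [0, 1], and checks that the simulated mean of W = aX + bY matches a·E(X) + b·E(Y):

    import random

    # Check that E(W) = a*E(X) + b*E(Y) for W = aX + bY.
    a, b = 0.6, 0.4
    TRIALS = 200_000
    xs = [random.randint(1, 6) for _ in range(TRIALS)]    # X: a fair die, so E(X) = 3.5
    ys = [random.uniform(0, 1) for _ in range(TRIALS)]    # Y: uniform on [0, 1], so E(Y) = 0.5

    simulated = sum(a * x + b * y for x, y in zip(xs, ys)) / TRIALS
    print("simulated mean of W:", simulated)
    print("a*E(X) + b*E(Y):    ", a * 3.5 + b * 0.5)      # 2.3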

References

http://en.wikipedia.org/wiki/Random_variable

http://www.stats.gla.ac.uk/steps/glossary/probability_distributions.html

Schaum's Outline of Discrete Mathematics
