
SETS

Anakha A Menon
Sneha Saji
Sets

• A set is a well-defined collection of objects.


• It is represented using a capital letter.
• The items present in a set are called either elements or members of a set.
• Examples:
(i) The collection of even natural numbers less than 10 can be represented as the set A = {2, 4, 6, 8}
(ii) The set of prime numbers, B = {2, 3, 5, 7, 11, 13, 17, ...}
Venn Diagram

• It is a pictorial representation of sets.
• Each set is represented as a circle.
• The elements of a set are written inside its circle.
• A rectangle encloses the circles and represents the universal set.
• The Venn diagram shows how the given sets are related to each other.

• Example:
(i) Out of 80 students who attended a test in English and Mathematics, 50 passed Mathematics, 48 passed
English, 30 passed both and 12 passed neither.
[Venn diagram: universal set of 80 students; Mathematics only = 20, both = 30, English only = 18, neither = 12]
Subset
• A set A is a subset of another set B if all elements of A are elements of B.
• i.e., if x ∈ A then x ∈ B.
• In other words, the set A is contained inside the set B.
• The subset relationship is denoted A ⊂ B.

• If A is a subset of B, we can represent it in a Venn diagram by drawing the circle for A inside the circle for B.

• Example: Let A be the set of numbers from 1 to 10,

i.e., A = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
and B be the set of even numbers from 1 to 10,
i.e., B = {2, 4, 6, 8, 10}.
Clearly, all elements of set B are elements of set A
⇒ B ⊂ A
To prove that A is a subset of B
• The standard way to prove A is a subset of B is by proving
that if an element x is in A, then x is in B.
• For example:
Let A = {n ∈ Z | n = 4p, p ∈ Z}
and B = {m ∈ Z | m = 2q, q ∈ Z}.
To prove: A ⊂ B.
Proof:
Let x ∈ A ⇒ x = 4p ; p ∈ Z
⇒ x = 2(2p)
Setting q = 2p,
⇒ x = 2q ; q ∈ Z
⇒ x ∈ B
Hence, x ∈ A ⇒ x ∈ B
⇒ A ⊂ B.
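
The subset relation above can also be checked concretely with Python's built-in sets. This is a minimal illustrative sketch, not part of the original slides; the finite integer windows are arbitrary choices, since the proof itself covers all integers.

```python
# Finite example from the slide: A = {1, ..., 10}, B = even numbers from 1 to 10.
A = set(range(1, 11))
B = {2, 4, 6, 8, 10}
print(B.issubset(A))                   # True, i.e. B ⊂ A

# Spot-check of the proof x = 4p  =>  x = 2(2p) on a finite window of integers.
multiples_of_4 = {4 * p for p in range(-50, 51)}
evens = {2 * q for q in range(-100, 101)}
print(multiples_of_4.issubset(evens))  # True on this window
```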
PROBABILITY BASICS
-RIYASEN U
• Sample space: set of all possible outcomes
• Examples:
• Tossing a Coin
• When flipping a coin, two outcomes are possible: head and tail.
• Sample Space, S = { H, T } = { Head, Tail }
• A Die is Thrown
• When a single die is thrown, it has 6 outcomes then S = { 1, 2, 3, 4, 5, 6}
• Outcome
• It is a possible or single result of an experiment.
• Event
• It’s a set or collection of one or more outcomes of an experiment.
• Tossing a coin thrice:
• S = {(T,T,T), (T,T,H), (T,H,T), (T,H,H), (H,T,T), (H,T,H), (H,H,T), (H,H,H)}
• Suppose, if we want to find only the outcomes which have at least two heads;
then the set of all such possibilities would be
• E = { (H , T , H) , (H , H ,T) , (H , H ,H) , (T , H , H)}
• Thus, an event is a subset of the sample space, i.e., E is a subset of S.
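
As an illustration (a sketch, not from the slides), the sample space of three tosses and the event "at least two heads" can be enumerated directly:

```python
from itertools import product

S = set(product("HT", repeat=3))          # all 8 outcomes of three tosses
E = {o for o in S if o.count("H") >= 2}   # event: at least two heads
print(len(S), len(E))                     # 8 4
print(E <= S)                             # True: an event is a subset of S
```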
Types of events :
• Impossible and Sure Events
If the probability of occurrence of an event is 0, such an event is called
an impossible event .
If the probability of occurrence of an event is 1, it is called a sure event.
• Simple Events
Any event consisting of a single point of the sample space is known as a simple
event .
If S = {56 , 78 , 96 , 54 , 89} and E = {78} then E is a simple event.
• Compound Events
If any event consists of more than one single point of the sample space then
it’s called a compound event.
If S = {56 ,78 ,96 ,54 ,89}, E1 = {56 ,54 }, E2 = {78 ,56 ,89 } then, E1 and E2
represent two compound events.
• Independent Events and Dependent Events
If the occurrence of an event is completely unaffected by the occurrence of any
other event, the events are called independent events.
Events that are affected by other events are known as dependent events.
• Mutually Exclusive Events
If the occurrence of one event excludes the occurrence of another event, such
events are mutually exclusive events.
If S = {1, 2, 3, 4, 5, 6} and E1 = {1, 2}, E2 = {5, 6} are two events,
then E1 and E2 cannot occur together, so they are mutually exclusive.
• Sample point
Each outcome in a sample space is called a sample point.
• Experiment
It is a procedure that can be repeated several times and has a well-defined set of
possible outcomes.
An experiment is said to be random if it has more than one possible outcome, and
deterministic if it has only one.
• Random Experiments
Before rolling a die you do not know the result. This is an example of a random
experiment.
After the experiment, the result of the random experiment is known.
An outcome is a result of a random experiment.
PROBABILITY

E.JansiDevi
Probability
Probability denotes the possibility of outcome of any
random event

Formula for probability

P(E) = Number of favourable outcomes / Total number of outcomes
P(E) = n(E)/n(S)
where n(E) is the number of outcomes favourable to event E,
and n(S) is the total number of outcomes.
Example

If 8 boys are arranged in a row, what is the probability
that 3 particular boys will sit together?
Solution:
P = No. of favourable outcomes / Total no. of outcomes
Total no. of arrangements = 8!
No. of favourable arrangements = 6! × 3!
(treat the 3 boys as one block: 6! arrangements of the 6 units, times 3! orders within the block)
P = (6! × 3!)/8! = (3 × 2 × 1)/(8 × 7) = 6/56 = 3/28
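
The block-counting argument can be cross-checked numerically. Below is a minimal sketch (not from the slides); the choice of 100,000 simulation trials and of boys 0, 1, 2 as the "particular" three is illustrative.

```python
from math import factorial
import random

# Exact value: 6! arrangements of the 6 units times 3! internal orders, over 8!.
print(factorial(6) * factorial(3) / factorial(8))   # 0.10714... = 3/28

# Monte Carlo cross-check.
trials, hits = 100_000, 0
for _ in range(trials):
    perm = random.sample(range(8), 8)               # a random row arrangement
    pos = sorted(perm.index(b) for b in (0, 1, 2))  # seats of the three boys
    if pos[2] - pos[0] == 2:                        # three consecutive seats
        hits += 1
print(hits / trials)                                # ≈ 0.107
```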
Axioms of probability

• Axiom 1:
For any event A, P(A) ≥ 0.
• Axiom 2:
The probability of the sample space S is P(S) = 1.
• Axiom 3:
If A and B are mutually exclusive events, P(A∪B) = P(A) + P(B).
THE LAW OF TOTAL
PROBABILITY
SHITAL PATIL
What is the law of total probability?
• The rule states that if the probability of an event is unknown, it can
be calculated using the known probabilities of several distinct
events.
• The probability of A can be written as a sum over the event B and its
complement. The total probability rule is: P(A) = P(A∩B) + P(A∩Bᶜ).
ASSUMPTIONS
1) Let there be n mutually exclusive and exhaustive events E₁, E₂, …, Eₙ:
P(Eᵢ ∩ Eⱼ) = 0 for i ≠ j; i, j = 1, 2, …, n
P(E₁ ∪ E₂ ∪ … ∪ Eₙ) = 1

2) Let there be an event A such that it has some outcomes common
with some or all events Eᵢ. Then
P(A) = P(A ∩ E₁) + P(A ∩ E₂) + … + P(A ∩ Eₙ)
     = P(E₁)P(A|E₁) + P(E₂)P(A|E₂) + … + P(Eₙ)P(A|Eₙ)

As we know,
• P(A|B) = P(A∩B)/P(B), which gives
• P(A∩B) = P(B) P(A|B)
Law of total probability by Venn diagram
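
As a numerical sketch of the rule (the three-event partition and the probabilities below are illustrative, not from the slides):

```python
# Partition E1, E2, E3 with priors P(Ei) and conditionals P(A | Ei).
priors = [0.5, 0.3, 0.2]              # P(E1), P(E2), P(E3); must sum to 1
conditionals = [0.1, 0.4, 0.7]        # P(A | E1), P(A | E2), P(A | E3)

# Law of total probability: P(A) = sum over i of P(Ei) * P(A | Ei).
p_a = sum(p * c for p, c in zip(priors, conditionals))
print(p_a)                            # 0.05 + 0.12 + 0.14 = 0.31
```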
CONDITIONAL
PROBABILITY
BY
PRARTHANA KARMAKAR
M.SC. ACTUARIAL SCIENCE
UNIVERSITY OF MADRAS
DEFINITION
The probability that one event happens given that another event is already known
to have happened is called a conditional probability.
Suppose we know that event A has happened. Then the probability that event B
happens given that event A has happened is denoted by P(B|A) .

P(B|A) = P(A∩B)/P(A), if P(A) > 0
• EXAMPLE:
Let us consider a random experiment of drawing a card from a pack of cards.
Then the probability of happening of the event A: "The card drawn is a king", is
given by:
P(A) = 4/52 = 1/13.

Now suppose that a card is drawn and we are informed that the drawn card is
red. How does this information affect the likelihood of the event A?
Obviously, if the event B: 'The card drawn is red', has happened, the event
'Black card' is not possible. Hence the probability of the event A must be
computed relative to the new sample space 'B' which consists of 26 sample
points (red cards only), i.e., n(B) = 26. Among these 26 red cards, there are two
(red) kings so that n(A∩B) = 2. Hence, the required probability is given by:

P(A|B) = n(A∩B)/n(B) = 2/26 = 1/13.
PROPERTIES:
• For two events A and B,
P(A∩B)= P(A). P(B|A), P(A)>0
= P(B). P(A|B), P(B)>0
where P(B|A) represents conditional probability of occurrences of B when the event
A has already happened.
• If A and B are two independent events then,
P(A∩B)= P(A). P(B)
• If an event A is independent of B, then
P(A|B) = P(A),
and P(B|A) = P(B) when B is independent of A.
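
The card example can be reproduced by brute-force enumeration. A minimal sketch, assuming the usual 52-card encoding as (rank, suit) pairs:

```python
from itertools import product

ranks = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]        # first two are red
deck = list(product(ranks, suits))                       # 52 cards

A = {c for c in deck if c[0] == "K"}                     # event: a king is drawn
B = {c for c in deck if c[1] in ("hearts", "diamonds")}  # event: a red card

print(len(A) / len(deck))     # P(A)   = 4/52 ≈ 0.0769
print(len(A & B) / len(B))    # P(A|B) = 2/26 ≈ 0.0769
```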
BAYES THEOREM.
- Venmani. J

Bayes theorem was given by a British mathematician, THOMAS BAYES, in
1763. Bayes theorem is also known as the Bayes Rule or Bayes Law.

Bayes theorem, in simple words, determines the conditional probability of
an event A given that event B has already occurred.

It describes the probability of occurrence of an event related to any
condition.

P(A|B) = P(B|A) P(A) / P(B).
* In it, we update a priori probabilities based on the given information
by calculating the revised probabilities:

Prior probabilities → New information → Apply Bayes theorem → Posterior probabilities
Bayes Theorem - Statement
If A₁, A₂, …, Aₙ are mutually disjoint events with P(Aᵢ) ≠ 0, then for any
arbitrary event E which is a subset of A₁ ∪ A₂ ∪ … ∪ Aₙ such that P(E) > 0, we have

P(Aᵢ|E) = P(Aᵢ) P(E|Aᵢ) / Σⱼ P(Aⱼ) P(E|Aⱼ),  i = 1, 2, 3, …, n

[Venn diagram: the disjoint events A₁, …, Aₙ and the arbitrary event E]
Bayes Theorem
Proof
Since E ⊂ ∪ᵢ Aᵢ, we have

E = E ∩ (∪ᵢ Aᵢ) = ∪ᵢ (E ∩ Aᵢ)

Since the E ∩ Aᵢ ⊂ Aᵢ are mutually disjoint events, thus

P(E) = P(∪ᵢ (E ∩ Aᵢ)) = Σᵢ P(E ∩ Aᵢ)

P(E) = Σᵢ P(Aᵢ) P(E|Aᵢ)

Now, by definition of conditional probability,

P(Aᵢ|E) = P(Aᵢ ∩ E) / P(E)
        = P(Aᵢ) P(E|Aᵢ) / Σⱼ P(Aⱼ) P(E|Aⱼ),

which completes the proof.


Remarks:

• The probabilities P(A₁), P(A₂), … are termed "a priori probabilities"
because they exist before we gain any information.

• The probabilities P(E|Aᵢ) are called "likelihoods" because they indicate how
likely the event E is to occur given each a priori probability.

• The probabilities P(Aᵢ|E) are called "posterior probabilities" because they are
determined after the results of the experiment are known.
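
The prior → likelihood → posterior update can be written in a few lines. A sketch with illustrative numbers (the three-event partition A1, A2, A3 is chosen only for the example):

```python
priors = [0.4, 0.35, 0.25]            # P(Ai), the a priori probabilities
likelihoods = [0.02, 0.05, 0.10]      # P(E | Ai), the likelihoods

# Denominator: P(E) by the law of total probability.
evidence = sum(p * l for p, l in zip(priors, likelihoods))

# Bayes theorem: posterior P(Ai | E) for each i.
posteriors = [p * l / evidence for p, l in zip(priors, likelihoods)]
print(posteriors, sum(posteriors))    # posteriors sum to 1
```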
Joint Probability
- K. RENUGA

Joint probability is a statistical measure that calculates the
likelihood of two events occurring together at the same point in
time.

It is the probability of event A occurring at the same time that event
B occurs. It can be denoted as:
p(a, b) = P[A = a, B = b]
Pictorial representation of Joint probability:
Example:

When a coin is tossed, what will be the joint probability of getting a head in the
first toss and getting a tail in the following second toss?

Let the Event A be the probability of getting a head in the first toss,
i.e., P(A) = ½ = 0.5

And the Event B be the probability of getting a tail in the second toss,
i.e., P(B) = ½ = 0.5

Then, since the two tosses are independent, the joint probability is

P(A∩B) = P(A) × P(B) = 0.5 × 0.5 = 0.25 = 25%
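
The same 25% can be recovered by enumerating the four equally likely outcomes of two tosses; a minimal sketch:

```python
from itertools import product

outcomes = list(product("HT", repeat=2))          # HH, HT, TH, TT
joint = sum(1 for o in outcomes if o == ("H", "T")) / len(outcomes)
print(joint)                                      # 0.25, matching P(A) × P(B)
```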
Random variables
- Abyanka

Definition
• A random variable is a function that assigns a number (a real value) to each element of
the sample space S of an experiment.
• It is a function with domain S (the sample space) and range within -∞ to +∞.
• Random variables are commonly labeled with letters like 'X', 'Y', etc.

Example:
Suppose we have a random experiment of flipping a coin.
Sample space S = {H, T}
We assign a real number to each element of the sample space,
say head → 0, tail → 1.
The random variable X can take either of the two values, based
on the outcome of the experiment,
i.e., X ∈ {0, 1}
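
In code, the random variable is literally a mapping from sample-space elements to numbers; a small sketch of the coin example above:

```python
import random

X = {"H": 0, "T": 1}                  # the random variable as a function on S = {H, T}
outcome = random.choice(["H", "T"])   # perform the experiment
print(outcome, X[outcome])            # e.g. ('T', 1)
```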
Types of random variables


• Random variables are classified into discrete and continuous variables.


• The main difference between the two categories is the type of possible values that
each variable can take.
Discrete random variables
A discrete random variable is a variable which takes only a finite or distinct number of
values such as 0,1,2,3,4….

Example:
• Throwing a die: the die can take only a finite number of outcomes, {1, 2, 3, 4, 5, 6}.
• The number of children in a family.
• The number of defective bulbs in a box of 10 bulbs, etc.

Each value of a discrete random variable has a certain probability.

The probability distribution of a discrete random variable is called the probability mass
function (p.m.f.).
Continuous random variable
Continuous random variables can take on an infinite number of possible values.
They usually arise as measurements. Their probability distribution is not defined at
specific values; instead, it is defined over intervals of values.

Example:
• The height or weight of a person.
• The amount of sugar in an orange.
• The time required to run a mile.

The probability density function (p.d.f.) is used to specify the probability distribution of a
continuous random variable.
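
The contrast between the two can be sketched numerically: a p.m.f. attaches probability to individual values, while a p.d.f. only yields probabilities over intervals (the uniform density and the interval below are illustrative choices, not from the slides):

```python
# Discrete: p.m.f. of a fair die assigns 1/6 to each face.
pmf_die = {x: 1 / 6 for x in range(1, 7)}
print(sum(pmf_die.values()))          # 1.0 — probabilities sum to one

# Continuous: uniform density on [0, 2]; P(X = a) is 0 for any single a,
# but P(0.5 <= X <= 1.0) is the area under the curve over that interval.
pdf = lambda x: 0.5 if 0 <= x <= 2 else 0.0
dx = 1e-4
area = sum(pdf(0.5 + k * dx) * dx for k in range(int(0.5 / dx)))
print(area)                           # ≈ 0.25
```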
EXPECTATION OF
RANDOM VARIABLE
BY
SOMSHREE BISWAS
M.SC. ACTUARIAL SCIENCE
UNIVERSITY OF MADRAS
EXPECTATION OF RANDOM VARIABLE

DEFINITION:
The expectation of a random variable X, denoted by
E(X) or µₓ, serves as a measure of central tendency of the
probability distribution of X. If X assumes the values x₁, x₂, …,
with respective probabilities p₁, p₂, …, where Σᵢ pᵢ = 1, then
E(X) = Σᵢ xᵢ pᵢ, provided it is finite.
• The mathematical expression for computing the expected value of a discrete
random variable X with probability mass function (p.m.f.) f(x) is:
E(X) = Σₓ x f(x)   (for a discrete r.v.)

• The mathematical expression for computing the expected value of a continuous
random variable X with probability density function (p.d.f.) f(x) is:
E(X) = ∫ x f(x) dx   (for a continuous r.v.)
PROPERTIES OF EXPECTATION
1. Addition Theorem of Expectation :
If X and Y are random variables, then
E(X+Y)=E(X)+E(Y), provided all the expectations exist.
2. Multiplication Theorem of Expectation :
If X and Y are independent random variables, then E(XY)=E(X)E(Y)
3. If X is a random variable and a and b are constants, then
E(aX+b) = aE(X)+b
4. If X=C, a constant , then E(X)=C
EX. A perfect die is thrown twice. Find the expected values of the sum and the
product of the number of points obtained in the two throws.

Solution: Let X and Y denote respectively the number of points obtained in the first
and second throws. Then both of them take the values 1, 2, 3, 4, 5 and 6, each with
probability 1/6.

E(X) = (1+2+3+4+5+6) × 1/6 = 7/2.
Similarly, E(Y) = 7/2.
We require
E(X+Y) = E(X) + E(Y) = 7,
and, the two throws being independent,
E(XY) = E(X)E(Y) = (7/2)·(7/2) = 49/4 = 12.25
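
A quick enumeration check of this example (a sketch, using exact fractions):

```python
from itertools import product
from fractions import Fraction

pairs = list(product(range(1, 7), repeat=2))   # all 36 equally likely (X, Y)
n = Fraction(len(pairs))

E_sum = sum(Fraction(x + y) for x, y in pairs) / n
E_prod = sum(Fraction(x * y) for x, y in pairs) / n
print(E_sum, E_prod)                           # 7 and 49/4
```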
CONDITIONAL EXPECTATION: SHARMILA K

The conditional expectation is also known as the
• Conditional expected value
• Conditional mean
Here the random variable can take on only a finite number of values.
The conditional expectation can be either a
• Random variable, or a
• Function.
Example: Dice rolling
Consider the roll of a fair die
let A = 1 if the number is even (i.e., 2, 4, or 6) and A = 0 otherwise.
let B = 1 if the number is prime (i.e., 2, 3, or 5) and B = 0 otherwise.

Roll:  1  2  3  4  5  6
A:     0  1  0  1  0  1
B:     0  1  1  0  1  0

The unconditional expectation of A is
E[A] = (0 + 1 + 0 + 1 + 0 + 1) / 6 = 1/2,

but the expectation of A conditional on B = 1 (i.e., conditional on the die roll being
2, 3, or 5) is
E[A | B = 1] = (1 + 0 + 0) / 3 = 1/3,

and the expectation of A conditional on B = 0 (i.e., conditional on the die roll being
1, 4, or 6) is
E[A | B = 0] = (0 + 1 + 1) / 3 = 2/3 .

Likewise, the expectation of B conditional on A = 1 is


E[B | A = 1] = (1 + 0 + 0) / 3 = 1/3,

and the expectation of B conditional on A = 0 is


E[B | A = 0] = (0 + 1 + 1) / 3 = 2/3
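
These conditional expectations follow directly from filtering the sample space; a minimal sketch of the die example:

```python
A = {n: 1 if n % 2 == 0 else 0 for n in range(1, 7)}      # even-number indicator
B = {n: 1 if n in (2, 3, 5) else 0 for n in range(1, 7)}  # prime-number indicator

def cond_exp(target, given, value):
    """E[target | given = value] over the rolls 1..6."""
    rolls = [n for n in range(1, 7) if given[n] == value]
    return sum(target[n] for n in rolls) / len(rolls)

print(cond_exp(A, B, 1), cond_exp(A, B, 0))   # 1/3 and 2/3
print(cond_exp(B, A, 1), cond_exp(B, A, 0))   # 1/3 and 2/3
```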
APPLICATION OF CONDITIONAL EXPECTATION:
Recovery of a 1,00,000 defaulted loan

Scenario | Probability | Recovered | Probability
1        | 30%         | 40,000    | 65%
         |             | 25,000    | 35%
         |             | Total     | 100%
2        | 70%         | 75,000    | 80%
         |             | 65,000    | 20%
         |             | Total     | 100%
Total    | 100%        |           |

Calculate the expected recovery amount:
= 0.3(0.65 × 40,000 + 0.35 × 25,000) + 0.7(0.8 × 75,000 + 0.2 × 65,000)
= 0.3(26,000 + 8,750) + 0.7(60,000 + 13,000)
= 10,425 + 51,100
= 61,525
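
The calculation is a nested expectation: an inner expectation over recovery amounts within each scenario, then an outer expectation over scenarios. A sketch reproducing the table:

```python
# (scenario probability, [(recovery probability, amount), ...])
scenarios = [
    (0.30, [(0.65, 40_000), (0.35, 25_000)]),
    (0.70, [(0.80, 75_000), (0.20, 65_000)]),
]
expected = sum(
    p_s * sum(p_r * amount for p_r, amount in recoveries)
    for p_s, recoveries in scenarios
)
print(expected)   # 61525.0
```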
Probability Distribution of a Random Variable
 A probability distribution is the mathematical function that gives the
probabilities of occurrence of different possible outcomes for an
experiment.
A random variable has a probability distribution, which defines the
probability of its unknown values.
Probability distributions can be classified into :
• Discrete Probability Distributions (for discrete random variables)
• Continuous Probability Distributions(for continuous random variables)
Discrete and Continuous Probability Distributions
• Discrete Probability Distributions
• Discrete probability functions assume a discrete set of values. For example, coin tosses and
counts of events follow discrete distributions.
• These are discrete distributions because there are no in-between values. We can either have heads
or tails in a coin toss.
• For discrete probability distribution functions, each possible value has a non-zero probability.
Moreover, probabilities of all the values of the random variables must sum to one. 
• Continuous Probability Distributions
• A continuous probability distribution is the probability distribution of a continuous variable.
• A random variable X has a continuous probability distribution when it can take uncountably
many values, e.g., any value in an interval.
• The graph of the continuous probability distribution is mostly a smooth curve. It is usually
represented by an equation of a function. 
• The total area beneath the curve is 1. 
BINOMIAL
DISTRIBUTION
BY,
T.G.PRETHIV RAJ
32821116
BINOMIAL DISTRIBUTION
BERNOULLI TRIAL:
A trial whose probability of success does not change from one trial to another
in an experiment.

BINOMIAL DISTRIBUTION:
If p and q are the probabilities of success and failure of an
event, then the probability of getting success x times in n trials is
given by
P(X = x) = C(n, x) pˣ qⁿ⁻ˣ,  x = 0, 1, …, n
where n = number of trials,
p = probability of success in one trial,
q = 1 − p = probability of failure in one trial.
GRAPHICAL REPRESENTATION OF BINOMIAL
DISTRIBUTION

[Graph: binomial probability mass functions B(64, 0.487) and B(64, 0.532), plotted for x = 0 to 70]
CONDITIONS FOR BINOMIAL DISTRIBUTION

(1) The trials are independent of each other.
(2) The number of trials 'n' is finite.
(3) The probability of success 'p' is constant for each trial.
(4) Mean > Variance.
(5) Each trial results in one of two mutually disjoint
outcomes (success and failure).
MEAN AND VARIANCE
MEAN:
E(X) = Σₓ x · C(n, x) pˣ qⁿ⁻ˣ
     = np Σₓ C(n−1, x−1) p^(x−1) q^(n−x)
     = np (p + q)^(n−1)
     = np            [since p + q = 1]
Hence E(X) = np = MEAN.

VARIANCE:
E[X(X−1)] = Σₓ x(x−1) · C(n, x) pˣ qⁿ⁻ˣ = n(n−1)p²
Variance = E(X²) − [E(X)]²
         = E[X(X−1)] + E(X) − [E(X)]²
         = n(n−1)p² + np − n²p²
         = np − np²
         = np(1 − p)
         = npq = VARIANCE
EXAMPLE 1
FIND THE PROBABILITY OF GETTING A SUM OF 9 EXACTLY TWO
TIMES IN THREE ROLLS OF A PAIR OF DICE.
n = 3, x = 2, p = 4/36 = 1/9, q = (1 − p) = 8/9
P(X=2) = C(3, 2) (1/9)² (8/9)
       = 3 × (1/81) × (8/9)
P(X=2) = 24/729 = 8/243 ≈ 0.033
EXAMPLE 2

THE MEAN AND VARIANCE OF A BINOMIAL DISTRIBUTION ARE
16 AND 8. FIND P(X ≥ 3).
Mean = np = 16; variance = npq = 8
q = npq/np = 1/2, p = 1 − q = 1/2, n = 32

P(X ≥ 3) = 1 − P(X ≤ 2)   [complement rule: P(A) + P(Aᶜ) = 1]
= 1 − [P(X=0) + P(X=1) + P(X=2)]
= 1 − [C(32,0) + C(32,1) + C(32,2)] (1/2)³²
= 1 − 529/2³²
≈ 0.999
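
Both worked examples can be verified with the binomial p.m.f. directly; a minimal sketch using only the standard library:

```python
from math import comb

def binom_pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Example 1: sum of 9 exactly twice in three rolls, p = 4/36.
print(binom_pmf(2, 3, 4 / 36))                           # ≈ 0.0329 = 8/243

# Example 2: n = 32, p = 1/2, P(X >= 3).
print(1 - sum(binom_pmf(x, 32, 0.5) for x in range(3)))  # ≈ 0.9999999
```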
APPLICATION OF BINOMIAL DISTRIBUTION IN REAL
LIFE
• Finding the quantity of raw and used materials while
making a product.
• In clinical trials, the binomial is used to detect the effectiveness
of a drug.
• Taking a survey of positive and negative reviews from the
public for any specific product or place.
• The number of votes collected by a candidate in an election
is counted based on 0 or 1 outcomes.
POISSON DISTRIBUTION

BY
SUBASH.M
POISSON DISTRIBUTION
DEFINITION:
A random variable X is said to follow a Poisson distribution if it assumes only non-
negative values and its probability mass function is given by
P(X = x) = e^(−λ) λˣ / x!,  x = 0, 1, 2, …;  λ > 0
         = 0, otherwise.
Here λ is known as the parameter of the distribution.
We shall use the notation X ~ P(λ) to denote that X is a Poisson variate with parameter λ.
POISSON DISTRIBUTION
MOMENTS OF THE POISSON DISTRIBUTION:
E(X) = Σₓ x · e^(−λ) λˣ / x!
     = λ e^(−λ) Σₓ λ^(x−1) / (x−1)!
     = λ e^(−λ) (1 + λ + λ²/2! + …)
     = λ e^(−λ) e^λ
     = λ
Hence the mean of the Poisson distribution is λ.

E[X(X−1)] = Σₓ x(x−1) · e^(−λ) λˣ / x!
          = λ² e^(−λ) Σₓ λ^(x−2) / (x−2)!
          = λ² e^(−λ) e^λ
          = λ²
Variance = E(X²) − [E(X)]²
         = E[X(X−1)] + E(X) − [E(X)]²
         = λ² + λ − λ²
         = λ
Thus the mean and variance of the Poisson distribution are each equal to λ.
POISSON DISTRIBUTION
EXAMPLE:
3% of the electronic units manufactured by a company are defective. Find the probability that
in a sample of 200 units, fewer than 2 units are defective.
Solution:
The probability of a defective unit is p = 3/100 = 0.03.
Given n = 200,
Mean λ = np = 200 × 0.03 = 6
P(X = x) = e^(−6) 6ˣ / x!
P(X < 2) = P(X=0) + P(X=1)
= e^(−6) + 6 e^(−6)
= 0.00248 + 0.01487
P(X < 2) ≈ 0.0174
The probability that fewer than 2 units are defective is approximately 0.0174.
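
A quick check of the example with the Poisson p.m.f. (a sketch using only the standard library):

```python
from math import exp, factorial

lam = 200 * 0.03                                  # λ = np = 6
pmf = lambda x: exp(-lam) * lam**x / factorial(x)
print(pmf(0) + pmf(1))                            # P(X < 2) ≈ 0.0174
```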
POISSON DISTRIBUTION
PROPERTIES:
 The events are independent.
 Only the average number of successes in the given period of time can occur; no
two events can occur at the same time.
 The Poisson distribution is the limiting case of the binomial when the number of trials n is
indefinitely large.
 Mean = Variance = λ.
 np = λ is finite, where λ is constant.
 The standard deviation is always equal to the square root of the mean, √λ.
 If the mean is large, then the Poisson distribution is approximately a normal
distribution.
POISSON DISTRIBUTION
THE DISTRIBUTION IN REAL LIFE:
 Number of deaths from a disease (not in the form of an epidemic) such as heart attack
or cancer, or due to snake bite.
 Number of suicides reported in a particular city.
 Number of faulty blades in a packet of 100.
 Number of defective items in a packing manufactured by a good concern.
 Number of printing mistakes on each page of a book.
 Number of cars passing a crossing per minute during the busy hours of a day.
 The emission of radioactive (alpha) particles.
 Number of telephone calls received at a particular telephone exchange in some unit of
time, or connections to wrong numbers in a telephone exchange.
Normal distribution
K.Hamsavarshini
Definition:
A random variable X is said to have a normal distribution with parameters μ
(called the "mean") and σ² (called the "variance") if its density function is given by the
probability law:
f(x; μ, σ) = (1/(σ√(2π))) exp{ −(x − μ)² / (2σ²) },  −∞ < x < ∞

MGF OF THE NORMAL DISTRIBUTION: the m.g.f. is given by
M_X(t) = E[e^(tX)]
       = ∫ e^(tx) f(x; μ, σ) dx
       = exp{ μt + σ²t²/2 }
Mean of the normal distribution:
E(X) = ∫ x f(x; μ, σ) dx = μ
(equivalently, M′_X(0) = μ).

Mean deviation about the mean (putting z = (x − μ)/σ):
M.D. = E|X − μ|
     = (σ/√(2π)) ∫ |z| e^(−z²/2) dz
     = (2σ/√(2π)) ∫₀^∞ z e^(−z²/2) dz   [since the integrand is an even function of z]
     = σ √(2/π) ≈ (4/5) σ
Example:
X is normally distributed with mean 30 and standard deviation 5. Find
1) P(26 ≤ X ≤ 40)
2) P(X ≥ 45)
1 SOLN: Here μ = 30, σ = 5, and z = (x − μ)/σ,
so when x = 26, z = (26 − 30)/5 = −0.8,
and when x = 40, z = (40 − 30)/5 = 2.
P(26 ≤ X ≤ 40)
= P(−0.8 ≤ Z ≤ 2)
= P(0 ≤ Z ≤ 0.8) + P(0 ≤ Z ≤ 2)   (from symmetry and tables)
= 0.2881 + 0.4772 = 0.7653
2) P(X ≥ 45):
z = (45 − 30)/5 = 3
P(X ≥ 45) = P(Z ≥ 3)
= 0.5 − 0.49865 = 0.00135
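
These table lookups can be reproduced with the standard normal CDF built from the error function; a sketch using the example's parameters μ = 30, σ = 5:

```python
from math import erf, sqrt

Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))      # standard normal CDF
mu, sigma = 30, 5

print(Phi((40 - mu) / sigma) - Phi((26 - mu) / sigma))  # P(26 ≤ X ≤ 40) ≈ 0.7653
print(1 - Phi((45 - mu) / sigma))                       # P(X ≥ 45) ≈ 0.00135
```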
Applications and interpretation of the normal distribution
There are statistical methods to test the normality assumption empirically; see
normality tests.
• In biology, the logarithm of various variables tends to have a normal distribution, that
is, they tend to have a log-normal distribution (after
separation on male/female subpopulations), with examples including:
• Measures of size of living tissue (length, height, skin area, weight).
• Certain physiological measurements, such as blood pressure of adult humans.
• In finance, in particular the Black-Scholes model, changes in the logarithm of
exchange rates, price indices, and stock market indices are assumed normal (these
variables behave like compound interest, not like simple interest).
• Measurement errors in physical experiments are often modeled by a normal
distribution. This use of a normal distribution does not imply that one is assuming the
measurement errors must be normally distributed; rather, it gives the most conservative
predictions possible given only the mean and variance of the errors.
EXPONENTIAL
DISTRIBUTION
by
Manimekalai.I
DEFINITION
• The exponential distribution is a continuous distribution that is
commonly used to measure the expected time for an event to occur.
• A continuous random variable X is said to have an
exponential distribution if it has the following probability density
function:
f(x) = λ e^(−λx) for x ≥ 0
     = 0        for x < 0

• Mean of the exponential distribution:
• The mean is calculated using integration by parts:
E(X) = ∫₀^∞ x λ e^(−λx) dx = 1/λ
• Hence the mean of the exponential distribution is 1/λ.

Variance of the Exponential Distribution
• For the variance, we need the second
moment of the exponential distribution, which is given by:
E(X²) = ∫₀^∞ x² λ e^(−λx) dx = 2/λ²
Hence the variance of the exponential distribution is
Var(X) = E(X²) − [E(X)]² = 2/λ² − 1/λ² = 1/λ²

Memoryless Property of the Exponential Distribution
For s, t ≥ 0, P(X > s + t | X > s) = P(X > t),
i.e., having already waited time s does not change the distribution of the remaining waiting time.
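
The memoryless property can be checked by simulation; a minimal sketch (the rate λ and the times s, t are illustrative choices):

```python
import random
from math import exp

lam, s, t, n = 1.5, 0.4, 0.7, 200_000
xs = [random.expovariate(lam) for _ in range(n)]

beyond_s = [x for x in xs if x > s]
lhs = sum(x > s + t for x in beyond_s) / len(beyond_s)  # P(X > s+t | X > s)
rhs = sum(x > t for x in xs) / n                        # P(X > t)
print(lhs, rhs, exp(-lam * t))                          # all ≈ 0.35
```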

You might also like