
Probability

Probability is a branch of mathematics that deals with calculating the likelihood of a given event's
occurrence, which is expressed as a number between 0 and 1. An event with a probability of 1 can be
considered a certainty: for example, the probability of a coin toss resulting in either "heads" or "tails" is
1, because there are no other options, assuming the coin lands flat. An event with a probability of .5 can
be considered to have equal odds of occurring or not occurring: for example, the probability of a coin
toss resulting in "heads" is .5, because the toss is equally likely to result in "tails." An event with a
probability of 0 can be considered an impossibility: for example, the probability that the coin will land
(flat) without either side facing up is 0, because either "heads" or "tails" must be facing up. A little
paradoxically, probability theory applies precise calculations to quantify uncertain measures of random
events.
In its simplest form, the probability of an event A can be expressed mathematically as the number of
occurrences of the targeted event divided by the total number of possible outcomes (occurrences plus
non-occurrences):

p(A) = n(A) / [n(A) + n(not A)]

where n(A) counts the outcomes in which A occurs.
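The ratio definition above can be sketched in a few lines of Python. The marble counts here are a hypothetical example, not from the text:

```python
# Probability as a ratio of counts: p(A) = n(A) / (n(A) + n(not A)).
# Hypothetical example: 3 red marbles and 7 blue marbles in a bag.
n_red = 3
n_blue = 7

# Probability of drawing a red marble at random:
p_red = n_red / (n_red + n_blue)
print(p_red)  # 0.3
```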

Calculating probabilities in a situation like a coin toss is straightforward, because the outcomes are
mutually exclusive: either one event or the other must occur. Each coin toss is an independent event; the
outcome of one trial has no effect on subsequent ones. No matter how many consecutive times one side
lands facing up, the probability that it will do so at the next toss is always .5 (50-50). The mistaken idea
that a number of consecutive results (six "heads," for example) makes it more likely that the next toss
will result in a "tails" is known as the gambler's fallacy, one that has led to the downfall of many a
bettor.
Probability theory had its start in the 17th century, when two French mathematicians, Blaise Pascal and
Pierre de Fermat, carried on a correspondence discussing mathematical problems dealing with games of
chance. Contemporary applications of probability theory run the gamut of human inquiry, and include
aspects of computer programming, astrophysics, music, weather prediction, and medicine.
Probability Value (p Value)
In significance testing, the probability value (sometimes called the p value) is the probability of obtaining a
statistic as different from, or more different from, the parameter specified in the null hypothesis as the
statistic obtained in the experiment. The probability value is computed assuming the null hypothesis is true.
The lower the probability value, the stronger the evidence that the null hypothesis is false. Traditionally, the
null hypothesis is rejected if the probability value is below 0.05.

Probability values can be either one-tailed or two-tailed.
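As a sketch of how a probability value might be computed for a coin experiment, the code below finds the exact binomial probability of seeing a result at least as extreme as 60 heads in 100 tosses, assuming the null hypothesis of a fair coin. The 60-of-100 experiment is a hypothetical example, and doubling the one-tailed value is only one common convention for a two-tailed test:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): a one-tailed probability value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical experiment: 60 heads in 100 tosses; null hypothesis: fair coin.
p_one_tailed = binom_tail(100, 60)
p_two_tailed = 2 * p_one_tailed  # doubling is a common two-tailed convention

# The one-tailed value falls below 0.05, the two-tailed value does not --
# the choice of tail can change whether the null hypothesis is rejected.
print(p_one_tailed < 0.05)
print(p_two_tailed < 0.05)
```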

Types of Probability
Subjective Probability
Subjective probability is a type of probability derived from an individual's personal judgment or own
experience about whether a specific outcome is likely to occur. It contains no formal calculations and only
reflects the subject's opinions and past experience. Subjective probabilities differ from person to person and
contain a high degree of personal bias. An example of subjective probability is a "gut instinct" when making
a trade.

Subjective probability can be contrasted with objective probability, which is the computed probability that an
event will occur based on an analysis in which each measure is based on a recorded observation or a long
history of collected data.

Subjective probabilities are the foundation for common errors and biases observed in the market that stem
from "old wives' tales" or "rules of thumb."

Empirical Probability
Let's concentrate on a specific type of probability called empirical probability. The empirical probability of
an event is found through observations and experiments. It is the likelihood that the event will happen
based on the results of data collected. For example, suppose we were to survey a group of 50 chefs and ask
them to choose their favorite type of cuisine from Italian, Mexican, French, American, or Chinese.

If 14 of the chefs chose Chinese as their favorite food, then based on these results, the empirical probability
that a chef in this group would prefer Chinese food over all other types of cuisine would be 0.28. This is an
example of empirical probability because we carried out an actual observation by collecting data through our
survey. Our probability is based on the results of this data.
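The chef survey can be worked through in code. Only the 14-of-50 Chinese figure comes from the text; the counts for the other cuisines are invented so the totals add up:

```python
# Empirical probability from the survey described above.
# Only the Chinese count (14 of 50) is given; the rest are hypothetical.
favorites = {"Italian": 12, "Mexican": 10, "French": 8, "American": 6, "Chinese": 14}

total = sum(favorites.values())          # 50 chefs surveyed
p_chinese = favorites["Chinese"] / total
print(p_chinese)  # 0.28
```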

Classical Probability
Probability is a statistical concept that measures the likelihood of something happening. Classical
probability measures that likelihood under an added assumption: every outcome of the statistical
experiment is equally likely to happen.

The typical example of classical probability is a fair die roll, because it is equally probable that you
will land on any of the 6 numbers on the die: 1, 2, 3, 4, 5, or 6.

Another example of classical probability would be a coin toss. There is an equal probability that your toss will
yield a heads or tails result.
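Under the classical assumption, a probability is just a count of favorable outcomes over the size of the sample space. A minimal sketch for the die example:

```python
from fractions import Fraction

# Classical probability: every outcome of the die roll is equally likely.
die = {1, 2, 3, 4, 5, 6}
p_any_face = Fraction(1, len(die))
print(p_any_face)  # 1/6

# Probability of an event such as "roll an even number":
evens = {n for n in die if n % 2 == 0}
p_even = Fraction(len(evens), len(die))
print(p_even)  # 1/2
```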
Sample Space
In the study of probability, an experiment is a process or investigation from which results are observed or
recorded.

An outcome is a possible result of an experiment.

A sample space is the set of all possible outcomes in the experiment. It is usually denoted by the letter S.
A sample space can be written using set notation, { }.

Experiment 1: Tossing a coin

Possible outcomes are head or tail.

Sample space, S = {head, tail}.

Experiment 2: Tossing a die

Possible outcomes are the numbers 1, 2, 3, 4, 5, and 6

Sample space, S = {1, 2, 3, 4, 5, 6}.
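The two sample spaces above translate directly into Python sets, mirroring the set notation:

```python
# Sample spaces written as Python sets, mirroring the set notation above.
coin_space = {"head", "tail"}       # Experiment 1: tossing a coin
die_space = {1, 2, 3, 4, 5, 6}      # Experiment 2: tossing a die

print(len(coin_space))  # 2 possible outcomes
print(len(die_space))   # 6 possible outcomes
```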

Sample Points
Sample points are the most basic outcomes an experiment can have. An experiment is a process or an act
that has several outcomes which cannot be forecast with certainty. The set of all sample points of an
experiment is referred to as the sample space; sample points are the elements of the sample space and
represent the possible results of the experiment. Sample points are also known as sampling units or
observations.

Let p_i be the probability of sample point i. The probability rules for sample points are:
1. The probabilities of sample points lie between 0 and 1.
2. The sum of all the probabilities of the sample points of a sample space is equal to 1.
It is good to understand the concept with the help of examples:
1. Tossing two coins:
If two fair coins are tossed, the possible outcomes or the sample space is defined as:

S = {TT, TH, HT, HH}

There are four sample points, and they are TT, TH, HT, and HH.
2. Rolling of a die:
If a die is rolled, the possible outcomes or the sample space is defined as:

S = {1, 2, 3, 4, 5, 6}

There are six sample points, and they are 1, 2, 3, 4, 5, and 6.


In a sample space, the sample points can be independent, disjoint or equally likely. If the occurrence of one
sample point does not depend on other sample points, they are said to be independent sample points. If the
two sample points do not have anything in common, they are said to be disjoint sample points. If all the
sample points of the sample space have an equal probability of occurrence, then the sample points are said
to be equally likely.
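The two-coin example above can be built with a Cartesian product, and the equal-likelihood rule checked directly (the probabilities of the sample points sum to 1):

```python
from itertools import product
from fractions import Fraction

# Sample points for tossing two fair coins, built as a Cartesian product.
sample_points = list(product("HT", repeat=2))
print(sample_points)  # [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')]

# Equally likely sample points: each of the four has probability 1/4.
p = {point: Fraction(1, len(sample_points)) for point in sample_points}

# Rule 2: the probabilities of all sample points sum to 1.
print(sum(p.values()))  # 1
```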
Events
When we say "Event" we mean one (or more) outcomes.

Example Events:
Getting a Tail when tossing a coin is an event
Rolling a "5" is an event.
An event can include several outcomes:
Choosing a "King" from a deck of cards (any of the 4 Kings) is also an event
Rolling an "even number" (2, 4 or 6) is an event

Events can be:


Independent (each event is not affected by other events),
Dependent (also called "Conditional", where an event is affected by other events)
Mutually Exclusive (events can't happen at the same time)

Independent Events

Events can be "Independent", meaning each event is not affected by any other events.
This is an important idea! A coin does not "know" that it came up heads before ... each toss of a coin is a
perfect isolated thing.
Example: You toss a coin three times and it comes up "Heads" each time ... what is the chance that the next
toss will also be a "Head"?
The chance is simply 1/2, or 50%, just like ANY OTHER toss of the coin.
What it did in the past will not affect the current toss!
Some people think "it is overdue for a Tail", but really truly the next toss of the coin is totally independent of
any previous tosses.
Saying "a Tail is due", or "just one more go, my luck is due" is called The Gambler's Fallacy
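For independent events, probabilities multiply: P(A and B) = P(A) × P(B). A short sketch of the three-heads example:

```python
from fractions import Fraction

# Independent events: P(A and B) = P(A) * P(B).
p_head = Fraction(1, 2)

# Chance of three heads in a row:
p_three_heads = p_head ** 3
print(p_three_heads)  # 1/8

# After three heads, the next toss is still just 1/2 -- no "overdue" tail.
print(p_head)  # 1/2
```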

Dependent Events

But some events can be "dependent" ... which means they can be affected by previous events.
Example: Drawing 2 Cards from a Deck
After taking one card from the deck there are fewer cards available, so the probabilities change!

Let's look at the chances of getting a King.


For the 1st card the chance of drawing a King is 4 out of 52
But for the 2nd card:
If the 1st card was a King, then the 2nd card is less likely to be a King, as only 3 of the 51 cards left are
Kings.
If the 1st card was not a King, then the 2nd card is slightly more likely to be a King, as 4 of the 51 cards left
are Kings.
This is because we are removing cards from the deck.
Replacement: When we put each card back after drawing it the chances don't change, as the events
are independent.
Without Replacement: The chances will change, and the events are dependent.
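The card probabilities above can be checked numerically. Multiplying along the draws gives the chance of two Kings, with and without replacement:

```python
from fractions import Fraction

# Drawing two cards without replacement: the second probability
# depends on what the first draw removed from the deck.
p_first_king = Fraction(4, 52)
p_second_king_given_king = Fraction(3, 51)   # one King already gone
p_second_king_given_other = Fraction(4, 51)  # all four Kings remain

p_both_kings = p_first_king * p_second_king_given_king
print(p_both_kings)  # 1/221

# With replacement the draws are independent, so the chances don't change:
print(Fraction(4, 52) * Fraction(4, 52))  # 1/169
```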

Mutually Exclusive
Mutually Exclusive means we can't get both events at the same time.
It is either one or the other, but not both.

Examples:
Turning left or right are Mutually Exclusive (you can't do both at the same time)
Heads and Tails are Mutually Exclusive
Kings and Aces are Mutually Exclusive
What isn't Mutually Exclusive
Kings and Hearts are not Mutually Exclusive, because we can have a King of Hearts!
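Mutually exclusive events share no outcomes, which in set terms means an empty intersection. The card labels here ("KS" for King of Spades, and so on) are a made-up shorthand for illustration, and only a few of the 13 hearts are listed:

```python
# Mutually exclusive events have no outcomes in common (empty intersection).
kings = {"KS", "KH", "KD", "KC"}
aces = {"AS", "AH", "AD", "AC"}
hearts = {"AH", "2H", "KH", "QH"}  # a few of the 13 hearts, for illustration

print(kings & aces)    # set() -- mutually exclusive
print(kings & hearts)  # {'KH'} -- not mutually exclusive: the King of Hearts
```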