
BBA 121 – BUSINESS MATHEMATICS

2.3 Quantiles and the Quartile Deviation


Quantiles are values that split sorted data or a probability distribution into equal parts. In
general terms, a q-quantile divides sorted data into q parts. The most commonly used
quantiles have special names: Quartiles (4-quantiles): Three quartiles split the data into four
parts.

The quartile deviation is based on the difference between the third quartile (Q3) and the first quartile (Q1) of a simple or frequency distribution. This difference, Q3 − Q1, is known as the interquartile range; half of it, (Q3 − Q1)/2, is the quartile deviation, also called the semi-interquartile range.
Importance of Quartile deviation:
The quartile deviation, or semi-interquartile range, describes how widely the middle 50% of the observations are spread. It is most useful when the aim is to study the dispersion of the observations that lie in the main body of a series, which is typically the case when the data cluster heavily around the middle of the distribution rather than towards the extremes. Because the lowest and highest quarters of the observations are excluded from the calculation, extreme values have little effect on it.
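The calculation above can be sketched in Python. The dataset and the interpolation convention are illustrative assumptions (several quartile conventions exist; this one interpolates linearly over the 0-indexed positions of the sorted data):

```python
# Quartile deviation (semi-interquartile range) for an illustrative dataset.
data = sorted([12, 15, 17, 19, 22, 24, 27, 30, 33, 35, 40])

def quartile(sorted_data, q):
    # Linear interpolation over 0-indexed positions; one common convention
    # (other definitions of quartiles give slightly different values).
    pos = (len(sorted_data) - 1) * q / 4
    lo = int(pos)
    frac = pos - lo
    if frac == 0:
        return float(sorted_data[lo])
    return sorted_data[lo] + frac * (sorted_data[lo + 1] - sorted_data[lo])

q1 = quartile(data, 1)            # first quartile
q3 = quartile(data, 3)            # third quartile
iqr = q3 - q1                     # interquartile range
qd = iqr / 2                      # quartile deviation

print(q1, q3, iqr, qd)            # 18.0 31.5 13.5 6.75
```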
TOPIC 3: Regression and correlation

CORRELATION ANALYSIS: A group of techniques to measure the relationship between two variables.

INDEPENDENT VARIABLE: A variable that provides the basis for estimation.

DEPENDENT VARIABLE: The variable that is being predicted or estimated.

The Correlation Coefficient


The correlation coefficient describes the strength of the relationship between two sets of interval-scaled
or ratio-scaled variables.

Designated r, it is often referred to as Pearson's r and as the Pearson product-moment correlation coefficient.

It can assume any value from −1.00 to +1.00 inclusive. A correlation coefficient of −1.00 or +1.00
indicates perfect correlation. For example, a correlation coefficient for the preceding example computed
to be +1.00 would indicate that the number of sales calls and the number of copiers sold are perfectly
related in a positive linear sense.

A computed value of −1.00 would reveal that sales calls and the number of copiers sold are perfectly related in an inverse linear sense. In either perfect case, every point on the scatter diagram would fall exactly on a straight line.

If there is absolutely no relationship between the two sets of variables, Pearson’s r is zero. A correlation
coefficient r close to 0 (say, .08) shows that the linear relationship is quite weak. The same conclusion is
drawn if r = −.08. Coefficients of −.91 and +.91 have equal strength; both indicate very strong correlation
between the two variables. Thus, the strength of the correlation does not depend on the direction
(either − or +).
CORRELATION COEFFICIENT: A measure of the strength of the linear relationship between two
variables.
𝑁∑𝑋𝑌 − ∑𝑋∑𝑌
𝑟 =
√(𝑁∑𝑋² − (∑𝑋)²) · √(𝑁∑𝑌² − (∑𝑌)²)
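As a worked sketch, the formula can be evaluated directly in Python. The data below are hypothetical (sales calls X, copiers sold Y), chosen so that Y = X + 1, i.e. a perfect positive linear relationship, which should give r = +1.00 as the discussion above predicts:

```python
import math

# Pearson's r from the summation formula in the notes.
X = [2, 4, 5, 6, 8]   # hypothetical sales calls
Y = [3, 5, 6, 7, 9]   # hypothetical copiers sold (Y = X + 1)
N = len(X)

sum_x, sum_y = sum(X), sum(Y)
sum_xy = sum(x * y for x, y in zip(X, Y))
sum_x2 = sum(x * x for x in X)
sum_y2 = sum(y * y for y in Y)

r = (N * sum_xy - sum_x * sum_y) / (
    math.sqrt(N * sum_x2 - sum_x ** 2) * math.sqrt(N * sum_y2 - sum_y ** 2)
)
print(r)              # 1.0 (perfect positive linear correlation)
```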

Example
Exercise

REGRESSION EQUATION: An equation that expresses the linear relationship between two variables.
Regression equation of Y on X:

𝑌 − 𝑌̅ = 𝑏𝑦𝑥 (𝑋 − 𝑋̅ )
𝑁∑𝑋𝑌 − ∑𝑋∑𝑌
𝑏𝑦𝑥 =
𝑁∑𝑋² − (∑𝑋)²
Regression equation of X on Y:

𝑋 − 𝑋̅ = 𝑏𝑥𝑦 (𝑌 − 𝑌̅ )
𝑁∑𝑋𝑌 − ∑𝑋∑𝑌
𝑏𝑥𝑦 =
𝑁∑𝑌² − (∑𝑌)²
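Both slope formulas share the same numerator and can be computed the same way; the data below are hypothetical and chosen only to illustrate the arithmetic:

```python
# Slopes of the two regression lines from the summation formulas above.
X = [1, 2, 3, 4, 5]   # hypothetical data
Y = [2, 4, 5, 4, 5]

N = len(X)
sum_x, sum_y = sum(X), sum(Y)
sum_xy = sum(x * y for x, y in zip(X, Y))
sum_x2 = sum(x * x for x in X)
sum_y2 = sum(y * y for y in Y)

b_yx = (N * sum_xy - sum_x * sum_y) / (N * sum_x2 - sum_x ** 2)  # Y on X
b_xy = (N * sum_xy - sum_x * sum_y) / (N * sum_y2 - sum_y ** 2)  # X on Y

x_bar, y_bar = sum_x / N, sum_y / N
# Regression of Y on X:  Y - y_bar = b_yx * (X - x_bar)
# Regression of X on Y:  X - x_bar = b_xy * (Y - y_bar)
print(b_yx, b_xy)     # 0.6 1.0
```

A useful check linking the two topics: the product of the two slopes equals the square of the correlation coefficient, 𝑏𝑦𝑥 · 𝑏𝑥𝑦 = 𝑟².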

Example
Exercise
