
# ASSIGNMENT I: DECISION SCIENCES

Submitted by: Tarun Gianchandani | Enrolment No.: 02680303913 | Class: MBA (Eve) A

Q1) Describe:

## (a) Coefficient of Variation

The coefficient of variation (CV) is defined as the ratio of the standard deviation to the mean, and it measures the relative variability of the data. A higher CV means the data has greater variability and less stability; a lower CV means less variability and greater stability. The formula is:

CV = (standard deviation / mean) × 100%

The formula for the standard deviation differs for sample and population data:

- Sample: s = sqrt( Σ(xi − x̄)² / (n − 1) )
- Population: σ = sqrt( Σ(xi − μ)² / N )

Where xi = terms given in the data, x̄ (or μ) = mean, and n (or N) = total number of terms.
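As a worked sketch of the formulas above (the data values here are illustrative, not taken from the assignment), the sample CV can be computed in a few lines of Python:

```python
import statistics

def coefficient_of_variation(data):
    """Sample CV = (s / mean) * 100, where s uses the (n - 1) denominator."""
    mean = statistics.mean(data)
    s = statistics.stdev(data)  # sample standard deviation
    return (s / mean) * 100

# Two data sets in different units: the CV makes their variability comparable,
# which the standard deviation alone would not.
heights_cm = [160, 165, 170, 175, 180]
weights_kg = [55, 60, 65, 70, 75]
print(coefficient_of_variation(heights_cm))  # lower relative variability
print(coefficient_of_variation(weights_kg))  # higher relative variability
```

Both series have the same standard deviation, but the weights have the larger CV because their mean is smaller, illustrating why the CV, not the standard deviation, should be used to compare data sets with different means.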

Advantages: The coefficient of variation is useful because the standard deviation of data must always be understood in the context of the mean of the data. The actual value of the CV is independent of the unit in which the measurement has been taken, so it is a dimensionless number. For comparison between data sets with different units or widely different means, the coefficient of variation should be used instead of the standard deviation.

Disadvantages: When the mean value is close to zero, the coefficient of variation approaches infinity and is therefore sensitive to small changes in the mean. This is often the case when the values do not originate from a ratio scale.

Also, unlike the standard deviation, the CV cannot be used directly to construct confidence intervals for the mean.

## (b) Quartiles

Quartiles are the values of the variate that divide the total frequency into four equal parts. The median divides the total frequency into two equal parts. When the lower half below the median is divided into two equal parts, the value of the dividing variate is called the lower quartile, or first quartile, denoted Q1. The value of the variate dividing the upper half is called the upper quartile, or third quartile, denoted Q3. The median is also called the second quartile. The formulae for calculating quartiles from grouped data are:

Q1 = l + {(N/4 − F)/f} × h

Q3 = l + {(3N/4 − F)/f} × h

where l is the lower limit of the quartile class, F is the cumulative frequency before the quartile class, h is the width of the class interval, and f is the frequency of the quartile class. In terms of percentiles:

- First quartile (Q1) = lower quartile = 25th percentile (splits off the lowest 25% of data from the highest 75%)
- Second quartile (Q2) = median = 50th percentile (cuts the data set in half)
- Third quartile (Q3) = upper quartile = 75th percentile (splits off the highest 25% of data from the lowest 75%)

The difference between the upper and lower quartiles is called the interquartile range.

## (c) Coefficient of Correlation

The term correlation coefficient may refer to:

- Pearson product-moment correlation coefficient (also known as r, R, or Pearson's r): a measure of the strength and direction of the linear relationship between two variables, defined as the (sample) covariance of the variables divided by the product of their (sample) standard deviations.
- Intraclass correlation: a descriptive statistic used when quantitative measurements are made on units that are organized into groups; it describes how strongly units in the same group resemble each other.
- Rank correlation: the study of relationships between rankings of different variables, or between different rankings of the same variable.

Pearson's correlation coefficient between two variables is defined as the covariance of the two variables divided by the product of their standard deviations. The form of the definition involves a "product moment", that is, the mean (the first moment about the origin) of the product of the mean-adjusted random variables; hence the modifier product-moment in the name.

Pearson correlation coefficients lie between −1 and 1. Correlations equal to 1 or −1 correspond to data points lying exactly on a line (in the case of the sample correlation), or to a bivariate distribution entirely supported on a line (in the case of the population correlation). The Pearson correlation coefficient is symmetric: corr(X, Y) = corr(Y, X). A key mathematical property of the Pearson correlation coefficient is that it is invariant (up to a sign) to separate changes in location and scale in the two variables. That is, we may transform X to a + bX and transform Y to c + dY, where a, b, c, and d are constants, without changing the correlation coefficient; this holds for both the population and sample Pearson correlation coefficients. Note that more general linear transformations do change the correlation. The Pearson correlation can also be expressed in terms of uncentered moments. Since μX = E(X) and σX² = E[(X − E(X))²] = E(X²) − E²(X), and likewise for Y, the correlation can be written as:

ρ(X, Y) = ( E[XY] − E[X]·E[Y] ) / ( sqrt(E[X²] − E²[X]) × sqrt(E[Y²] − E²[Y]) )

Alternative formulae for the sample Pearson correlation coefficient are also available. Writing x̄ and ȳ for the sample means, the defining formula is:

r = Σ(xi − x̄)(yi − ȳ) / sqrt( Σ(xi − x̄)² × Σ(yi − ȳ)² )

and an equivalent form, convenient for computation because it avoids computing the means first, is:

r = ( nΣxiyi − Σxi·Σyi ) / sqrt( (nΣxi² − (Σxi)²) × (nΣyi² − (Σyi)²) )
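The computational formula above can be sketched directly in Python (the data sets are illustrative):

```python
from math import sqrt

def pearson_r(x, y):
    """Sample Pearson r via the computational formula:
    r = (n*Sxy - Sx*Sy) / sqrt((n*Sxx - Sx^2) * (n*Syy - Sy^2))"""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    syy = sum(b * b for b in y)
    return (n * sxy - sx * sy) / sqrt((n * sxx - sx * sx) * (n * syy - sy * sy))

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]   # exactly on an increasing line, so r = 1
z = [10, 8, 6, 4, 2]   # exactly on a decreasing line, so r = -1
print(pearson_r(x, y), pearson_r(x, z))

# Location/scale invariance: transforming y to c + d*y (d > 0) leaves r unchanged.
print(pearson_r(x, [3 + 2 * v for v in y]))
```

The last line demonstrates the invariance property discussed above: adding a constant and rescaling by a positive factor does not change the correlation.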

Q2) Explain correlation and its significance in management.

Sol: Correlation refers to any of a broad class of statistical relationships involving dependence.
Familiar examples of dependent phenomena include the correlation between the physical statures of parents and their offspring, and the correlation between the demand for a product and its price. Correlations are useful because they can indicate a predictive relationship that can be exploited in practice. For example, an electrical utility may produce less power on a mild day based on the correlation between electricity demand and weather. In this example there is a causal relationship, because extreme weather causes people to use more electricity for heating or cooling; however, statistical dependence is not sufficient to demonstrate the presence of such a causal relationship (i.e., correlation does not imply causation).