COLLEGE OF ENGINEERING
(Approved by AICTE, New Delhi & Affiliated to Anna
University, Chennai)
NH-47, Palakkad Main Road, Navakkarai (Po).
COIMBATORE – 641 105
2015 -2017
PRACTICAL RECORD
REGISTER NO :
NAME :
SEMESTER : I MBA – II SEMESTER
SUBJECT NAME : BA 7211 - DATA ANALYSIS AND
BUSINESS MODELING
CERTIFICATE
INTERNAL EXAMINER
EXTERNAL EXAMINER
CONTENTS
Ex. No. 1
FREQUENCY DISTRIBUTION USING SPSS
AIM:
To calculate a frequency distribution. Create a data file with the
following variables.
Labels for the variables:
Age: 1 (<20), 2 (20-25), 3 (25-30), 4 (30-40), 5 (>40).
Gender: 1 (Male), 2 (Female).
Education: 1 (High school), 2 (Graduate in Arts and Science degree),
3 (Graduate in professional degree), 4 (Post graduate degree).
Working Experience (years): 1 (<1), 2 (1-5), 3 (5-10), 4 (10-20), 5 (>20).
Enter your own data (minimum 25 cases) in the Data View of SPSS,
then calculate the frequency distribution. Graphically represent the
variables in the form of a bar chart.
PROCEDURE:
Open Windows – All Programs – IBM SPSS Statistics – IBM SPSS
Statistics 21 – New SPSS Data Sheet.
Enter the field names using the Variable View tab in the SPSS data sheet.
Enter the data using the Data View tab.
Open the menu Analyze – Descriptive Statistics – Frequencies.
Define the input as well as the output range.
Select whether the information is in columns or rows.
Specify the labels, if any.
Click OK.
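As a cross-check of what the Frequencies procedure reports, the Percent, Valid Percent and Cumulative Percent columns can be reproduced outside SPSS. This is a sketch with hypothetical coded Gender responses (the record's actual 25-case file is not printed); `None` stands in for SPSS's system-missing values.

```python
from collections import Counter

# Hypothetical coded responses (1 = Male, 2 = Female); None marks a missing case.
gender = [1, 2, 1, 1, 2, 1, 2, 2, 1, 1, 2, 1, None, 2, 1, None]

valid = [g for g in gender if g is not None]
counts = Counter(valid)
n_total = len(gender)
n_valid = len(valid)

# Percent is taken over all cases, Valid Percent over non-missing cases,
# and Cumulative Percent accumulates the Valid Percent column.
cumulative = 0.0
rows = []
for code in sorted(counts):
    freq = counts[code]
    percent = 100 * freq / n_total
    valid_pct = 100 * freq / n_valid
    cumulative += valid_pct
    rows.append((code, freq, round(percent, 1), round(valid_pct, 1), round(cumulative, 1)))

for row in rows:
    print(row)
```

The last row's cumulative percent always closes at 100.0, exactly as in the SPSS frequency tables below.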
OUTPUT:
[DataSet0]
Statistics
             Gender   Education   Working_Exp   Age
N   Valid      25        25           25         25
    Missing     2         2            2          2
Frequency Table
Gender
                  Frequency   Percent   Valid Percent   Cumulative Percent
Valid    Male        14         51.9        56.0              56.0
         Female      11         40.7        44.0             100.0
         Total       25         92.6       100.0
Missing  System       2          7.4
Total                27        100.0
Education
                                        Frequency   Percent   Valid Percent   Cumulative Percent
Valid    High School                        6         22.2         24.0             24.0
         Graduate in Arts and Science      6         22.2         24.0             48.0
         Professional Degree               6         22.2         24.0             72.0
         Post Graduate                     7         25.9         28.0            100.0
         Total                            25         92.6        100.0
Missing  System                            2          7.4
Total                                     27        100.0
Working_Exp
                  Frequency   Percent   Valid Percent   Cumulative Percent
Valid    <1           5         18.5        20.0              20.0
         1-5          7         25.9        28.0              48.0
         5-10         3         11.1        12.0              60.0
         10-20        5         18.5        20.0              80.0
         >20          5         18.5        20.0             100.0
         Total       25         92.6       100.0
Missing  System       2          7.4
Total                27        100.0
Age
                  Frequency   Percent   Valid Percent   Cumulative Percent
Valid    <20          4         14.8        16.0              16.0
         20-25        6         22.2        24.0              40.0
         25-30        4         14.8        16.0              56.0
         30-40        3         11.1        12.0              68.0
         >40          8         29.6        32.0             100.0
         Total       25         92.6       100.0
Missing  System       2          7.4
Total                27        100.0
RESULT:
The frequency distribution was calculated using IBM SPSS Statistics 21.
Ex. No. 2
DESCRIPTIVE STATISTICS – MEASURES OF CENTRAL TENDENCY
AIM:
To determine the measures of central tendency. Create a data file
with the following variables. Enter your own data (minimum 25 cases)
in the Data View of SPSS, then calculate the frequency distribution.
Graphically represent the variables in the form of a bar chart.
PROCEDURE:
Step 1: Enter the variables as specified in the Variable View of SPSS.
Step 2: Enter the data given in the tabular column in the data sheet.
Step 3: Select the Analyze menu -> Descriptive Statistics -> Frequencies.
Step 4: In the variable list, select Education and the other variables.
Left click on the right arrow button between the boxes to move the
variables over to the Frequencies box.
Step 5: Click on the Statistics button. This will open the statistics
options dialog box.
Step 6: Tick mean, median, mode and sum under Central Tendency, and
standard deviation, variance, minimum, maximum and range under
Dispersion.
Step 7: Click OK.
Step 8: The Frequencies dialog box closes and SPSS activates the
output navigator to display the statistics.
OUTPUT:
Frequencies
Notes
Output Created: 27-APR-2016 21:39:51
Comments:
Input
  Data: F:\Haresh\Subjects\BAS FDP\Lab Record 2016\SPSS\Exercise 2 - Measure of Central Tendancy\Measures of CentrlTendancyInput.sav
  Active Dataset: DataSet0
  Filter: <none>
  Weight: <none>
  Split File: <none>
  N of Rows in Working Data File: 27
Missing Value Handling
  Definition of Missing: User-defined missing values are treated as missing.
  Cases Used: Statistics are based on all cases with valid data.
Syntax:
  FREQUENCIES VARIABLES=Age Gender Education Work_Exp
    /STATISTICS=MEAN MEDIAN MODE SUM
    /ORDER=ANALYSIS.
Resources
  Processor Time: 00:00:00.02
  Elapsed Time: 00:00:00.03
Statistics
            Age    Gender   Education   Work_Exp
N   Valid    26       26        26          26
    Missing   1        1         1           1
Mean        2.77     1.35      2.31        2.81
Median      2.50     1.00      2.00        2.50
Mode        2        1         2           2
Sum         72       35        60          73
Frequency Table
Age
                  Frequency   Percent   Valid Percent   Cumulative Percent
Valid    <20          6         22.2        23.1              23.1
         20-25        7         25.9        26.9              50.0
         25-30        4         14.8        15.4              65.4
         30-40        5         18.5        19.2              84.6
         >40          4         14.8        15.4             100.0
         Total       26         96.3       100.0
Total                27        100.0
Gender
                  Frequency   Percent   Valid Percent   Cumulative Percent
Valid    Male        17         63.0        65.4              65.4
         Female       9         33.3        34.6             100.0
         Total       26         96.3       100.0
Total                27        100.0
Education
                                          Frequency   Percent   Valid Percent   Cumulative Percent
Valid    High School                          6         22.2        23.1              23.1
         Graduate in Arts and Science       11         40.7        42.3              65.4
         degree
         Graduate in professional degree     4         14.8        15.4              80.8
         Post graduate degree                5         18.5        19.2             100.0
         Total                              26         96.3       100.0
Total                                       27        100.0
Work_Exp
                  Frequency   Percent   Valid Percent   Cumulative Percent
Valid    <1           4         14.8        15.4              15.4
         1-5          9         33.3        34.6              50.0
         5-10         5         18.5        19.2              69.2
         10-20        4         14.8        15.4              84.6
         >20          4         14.8        15.4             100.0
         Total       26         96.3       100.0
Total                27        100.0
Result:
A data file was created in SPSS and the measures of central tendency
were calculated.
Ex. No. 3
DESCRIPTIVE STATISTICS - MEASURES OF CENTRAL TENDENCY
AIM:
Calculate the frequency distributions and measures of central
tendency from the following table.
Label for Gender – 1 (Male), 2(Female)
Gender   1     1     2     1     2     1     2     1     2     1
Height   140   146   156   149   154   156   151   148   158   150
Weight   56    45    68    51    54    53    69    51    70    49
Gender   1     2     1     2     1     1     2     2     1     2
Height   151   159   153   148   155   146   150   152   149   156
Weight   45    68    50    55    61    53    65    64    47    59
PROCEDURE:
Step 1: Enter the variables as specified in the Variable View of SPSS.
Step 2: Enter the data given in the tabular column in the data sheet.
Step 3: Select the Analyze menu -> Descriptive Statistics -> Frequencies.
Step 4: In the variable list, select Gender, Height and Weight. Left
click on the right arrow button between the boxes to move the
variables over to the Frequencies box.
Step 5: Click on the Statistics button. This will open the statistics
options dialog box.
Step 6: Tick mean, median, mode and sum under Central Tendency.
Step 7: Click OK.
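The Height column from the table above can be used to verify the SPSS central-tendency figures by hand; this sketch uses Python's standard `statistics` module in place of SPSS.

```python
import statistics

# Height data re-entered from the Ex. No. 3 table (20 observations).
heights = [140, 146, 156, 149, 154, 156, 151, 148, 158, 150,
           151, 159, 153, 148, 155, 146, 150, 152, 149, 156]

mean_h = statistics.mean(heights)      # SPSS output reports 151.35
median_h = statistics.median(heights)  # SPSS output reports 151.00
mode_h = statistics.mode(heights)      # SPSS output reports 156
total_h = sum(heights)                 # SPSS output reports 3027

print(mean_h, median_h, mode_h, total_h)
```

All four values agree with the Statistics table in the output below, which confirms the data entry.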
OUTPUT:
FREQUENCIES VARIABLES=Gender Height Weight
/STATISTICS=MEAN MEDIAN MODE SUM
/ORDER=ANALYSIS.
Frequencies
Notes
Output Created: 27-APR-2016 22:24:41
Comments:
Input
  Active Dataset: DataSet0
  Filter: <none>
  Weight: <none>
  Split File: <none>
  N of Rows in Working Data File: 20
Missing Value Handling
  Definition of Missing: User-defined missing values are treated as missing.
  Cases Used: Statistics are based on all cases with valid data.
Syntax:
  FREQUENCIES VARIABLES=Gender Height Weight
    /STATISTICS=MEAN MEDIAN MODE SUM
    /ORDER=ANALYSIS.
Resources
  Processor Time: 00:00:00.02
  Elapsed Time: 00:00:00.01
[DataSet0]
Statistics
            Gender    Height    Weight
N   Valid      20        20        20
    Missing     0         0         0
Mean         1.45    151.35     56.65
Median       1.00    151.00     54.50
Mode         1       156        45(a)
Sum          29      3027      1133
a. Multiple modes exist. The smallest value is shown.
Frequency Table
Gender
                 Frequency   Percent   Valid Percent   Cumulative Percent
Valid   Male        11         55.0        55.0              55.0
        Female       9         45.0        45.0             100.0
        Total       20        100.0       100.0
Height
                 Frequency   Percent   Valid Percent   Cumulative Percent
Valid   140          1          5.0         5.0               5.0
        146          2         10.0        10.0              15.0
        148          2         10.0        10.0              25.0
        149          2         10.0        10.0              35.0
        150          2         10.0        10.0              45.0
        151          2         10.0        10.0              55.0
        152          1          5.0         5.0              60.0
        153          1          5.0         5.0              65.0
        154          1          5.0         5.0              70.0
        155          1          5.0         5.0              75.0
        156          3         15.0        15.0              90.0
        158          1          5.0         5.0              95.0
        159          1          5.0         5.0             100.0
        Total       20        100.0       100.0
Weight
                 Frequency   Percent   Valid Percent   Cumulative Percent
Valid   45           2         10.0        10.0              10.0
        47           1          5.0         5.0              15.0
        49           1          5.0         5.0              20.0
        50           1          5.0         5.0              25.0
        51           2         10.0        10.0              35.0
        53           2         10.0        10.0              45.0
        54           1          5.0         5.0              50.0
        55           1          5.0         5.0              55.0
        56           1          5.0         5.0              60.0
        59           1          5.0         5.0              65.0
        61           1          5.0         5.0              70.0
        64           1          5.0         5.0              75.0
        65           1          5.0         5.0              80.0
        68           2         10.0        10.0              90.0
        69           1          5.0         5.0              95.0
        70           1          5.0         5.0             100.0
        Total       20        100.0       100.0
Result:
A data file was created in SPSS and the measures of central tendency
were calculated.
Ex. No. 4
CORRELATION
AIM:
Eighteen students took the Common Admission Test (CAT) after
their graduation. Both their CAT percentile and their graduation
percentage are available. As a research scholar, determine the
relationship between the CAT scores and the graduation percentage
through correlation analysis.
PROCEDURE:
Step 1: Once the data are entered, go to Analyze -> Correlate ->
Bivariate to get the dialogue box.
Step 2: Select CAT Percentile and Graduation Percentage and click on
the ► button to move the variables into the box.
Step 3: Tick the Pearson check box, the Flag significant correlations
check box and Two-tailed.
Step 4: Click Options -> select Statistics and Missing Values and
click Continue.
Step 5: Click OK.
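The Pearson coefficient that the Bivariate procedure reports can be computed directly from its definition. The eighteen actual score pairs are not printed in this record, so the data below are purely illustrative placeholders.

```python
import math

# Hypothetical scores for 6 students (illustrative only; the record's
# actual 18 data points are not printed).
graduation = [62.0, 71.5, 55.0, 80.0, 68.0, 74.5]
cat = [58.0, 75.0, 52.0, 88.0, 70.0, 77.0]

def pearson_r(x, y):
    """Pearson product-moment correlation, as computed by SPSS Bivariate."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # co-deviation
    sxx = sum((a - mx) ** 2 for a in x)                   # sum of squares, x
    syy = sum((b - my) ** 2 for b in y)                   # sum of squares, y
    return sxy / math.sqrt(sxx * syy)

r = pearson_r(graduation, cat)
print(round(r, 3))
```

A value near +1, as with the record's r = 0.734, indicates that higher graduation percentages go with higher CAT percentiles.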
OUTPUT:
CORRELATIONS
/VARIABLES=Graduation CAT
/PRINT=TWOTAIL NOSIG
/MISSING=PAIRWISE.
Correlations
Notes
Output Created: 03-Apr-2014 14:35:05
Comments:
Input
  Active Dataset: DataSet0
  Filter: <none>
  Weight: <none>
  Split File: <none>
  N of Rows in Working Data File: 18
Missing Value Handling
  Definition of Missing: User-defined missing values are treated as missing.
  Cases Used: Statistics for each pair of variables are based on all the cases with valid data for that pair.
Syntax:
  CORRELATIONS
    /VARIABLES=Graduation CAT
    /PRINT=TWOTAIL NOSIG
    /MISSING=PAIRWISE.
[DataSet0]
Correlations
                                            Graduation Percentage   CAT Percentage
Graduation Percentage   Pearson Correlation          1                  .734**
                        Sig. (2-tailed)                                 .001
                        N                           18                  18
CAT Percentage          Pearson Correlation        .734**                1
                        Sig. (2-tailed)            .001
                        N                           18                  18
**. Correlation is significant at the 0.01 level (2-tailed).
Inference:
A bivariate correlation was run between the CAT scores and the
graduation marks of the students. It was hypothesized that a
relationship exists between the CAT and graduation marks. The
result shows that a significant positive relationship exists between
the CAT and graduation marks (r = 0.734, p = 0.001 < 0.05).
Result:
The relationship between the CAT scores and the graduation
percentage was determined using correlation analysis.
Ex. No. 5 HYPOTHESIS- PARAMETRIC T-TEST USING SPSS
AIM:
To determine whether the second trial efficiency of cars is better
than the previous trial, whether the efficiency of the engine improves
with added ethanol, and whether the efficiency of the engine with and
without ethanol differs between manual and automatic cars, using the t-test.
S.No.   Car   With Ethanol   Without Ethanol     S.No.   Car   With Ethanol   Without Ethanol
1        1        15              15             16       1        14              12
2        1        16              15             17       2        20              19
3        2        20              19             18       1        18              17
4        2        22              18             19       2        25              20
5        1        18              15             20       1        16              15
6        2        20              18             21       2        15              14
7        1        10              11             22       1        12              13
8        2        19              20             23       2        20              19
9        1         9               8.5           24       1        19              20
10       1         8               8             25       2        24              22
11       1         6               5.5           26       1        11              10
12       2        15              14             27       1        10               9
13       2        16              13             28       1        16              17
14       1        11              10             29       2        26              20
15       2        19              18             30       2        28              20
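The one-sample t-test in the output (syntax `/TESTVAL=12`) can be reproduced from the "With Ethanol" column above; this sketch computes the same statistic outside SPSS.

```python
import math
import statistics

# "With Ethanol" efficiencies for the 30 cars, re-entered from the data table.
with_ethanol = [15, 16, 20, 22, 18, 20, 10, 19, 9, 8, 6, 15, 16, 11, 19,
                14, 20, 18, 25, 16, 15, 12, 20, 19, 24, 11, 10, 16, 26, 28]

test_value = 12                          # the /TESTVAL=12 in the T-TEST syntax
n = len(with_ethanol)
mean = statistics.mean(with_ethanol)     # SPSS reports 16.6000
sd = statistics.stdev(with_ethanol)      # sample SD; SPSS reports 5.48100
se = sd / math.sqrt(n)                   # standard error of the mean
t = (mean - test_value) / se             # one-sample t statistic, df = n - 1

print(round(mean, 4), round(sd, 5), round(t, 3))
```

The computed mean, standard deviation and t agree with the One-Sample Statistics and One-Sample Test tables below.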
OUTPUT:
T-TEST
/TESTVAL=12
/MISSING=ANALYSIS
/VARIABLES=WithEthanol
/CRITERIA=CI(.95).
T-Test
Notes
Output Created 05-Apr-2014 11:17:57
Comments
Input Active Dataset DataSet0
Filter <none>
Weight <none>
Split File <none>
N of Rows in Working Data File: 30
Missing Value Definition of Missing User defined missing values are
Handling treated as missing.
Cases Used Statistics for each analysis are
based on the cases with no
missing or out-of-range data for
any variable in the analysis.
Syntax T-TEST
/TESTVAL=12
/MISSING=ANALYSIS
/VARIABLES=WithEthanol
/CRITERIA=CI(.95).
[DataSet0]
One-Sample Statistics
               N      Mean      Std. Deviation   Std. Error Mean
WithEthanol   30    16.6000        5.48100           1.00069
One-Sample Test
Test Value = 12
               t      df   Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
WithEthanol  4.597    29        .000             4.60000              2.5534, 6.6466
T-Test
Notes
Output Created 05-Apr-2014 11:18:30
Comments
Input Active Dataset DataSet0
Filter <none>
Weight <none>
Split File <none>
N of Rows in Working Data File: 30
Missing Value Definition of Missing User defined missing values are
Handling treated as missing.
Cases Used Statistics for each analysis are
based on the cases with no
missing or out-of-range data for
any variable in the analysis.
Syntax T-TEST PAIRS=WithEthanol
WITH Without (PAIRED)
/CRITERIA=CI(.9500)
/MISSING=ANALYSIS.
[DataSet0]
Explore
Notes
Output Created 05-Apr-2014 11:19:02
Comments
Input Active Dataset DataSet0
Filter <none>
Weight <none>
Split File <none>
N of Rows in Working Data File: 30
Missing Value Definition of Missing User-defined missing values for
Handling dependent variables are treated
as missing.
Cases Used Statistics are based on cases
with no missing values for any
dependent variable or factor
used.
Syntax EXAMINE
VARIABLES=WithEthanol
Without BY Car
/PLOT BOXPLOT STEMLEAF
/COMPARE GROUP
/STATISTICS DESCRIPTIVES
/CINTERVAL 95
/MISSING LISTWISE
/NOTOTAL.
[DataSet0]
Car
Case Processing Summary
Cases
Valid Missing Total
Car N Percent N Percent N Percent
WithEthanol Automatic 16 100.0% 0 .0% 16 100.0%
Manual 14 100.0% 0 .0% 14 100.0%
Without Automatic 16 100.0% 0 .0% 16 100.0%
Manual 14 100.0% 0 .0% 14 100.0%
Descriptives
Car Statistic Std. Error
WithEthanol Automatic Mean 13.0625 .98940
95% Confidence Lower Bound 10.9537
Interval for Mean
Upper Bound 15.1713
5% Trimmed Mean 13.1250
Median 13.0000
Variance 15.663
Std. Deviation 3.95759
Minimum 6.00
Maximum 19.00
Range 13.00
Interquartile Range 6.00
Skewness -.112 .564
Kurtosis -1.150 1.091
Manual Mean 20.6429 1.06702
95% Confidence Lower Bound 18.3377
Interval for Mean
Upper Bound 22.9480
5% Trimmed Mean 20.5476
Median 20.0000
Variance 15.940
Std. Deviation 3.99244
Minimum 15.00
Maximum 28.00
Range 13.00
Interquartile Range 6.00
Skewness .290 .597
Kurtosis -.589 1.154
Without Automatic Mean 12.5625 .98834
95% Confidence Lower Bound 10.4559
Interval for Mean
Upper Bound 14.6691
5% Trimmed Mean 12.5417
Median 12.5000
Variance 15.629
Std. Deviation 3.95337
Minimum 5.50
Maximum 20.00
Range 14.50
Interquartile Range 5.75
Skewness .058 .564
Kurtosis -.681 1.091
Manual Mean 18.1429 .70988
95% Confidence Lower Bound 16.6093
Interval for Mean
Upper Bound 19.6765
5% Trimmed Mean 18.2143
Median 19.0000
Variance 7.055
Std. Deviation 2.65611
Minimum 13.00
Maximum 22.00
Range 9.00
Interquartile Range 3.00
Skewness -.926 .597
Kurtosis -.007 1.154
Group Statistics
Std. Std. Error
Car N Mean Deviation Mean
WithEthanol Automatic 16 13.0625 3.95759 .98940
Manual 14 20.6429 3.99244 1.06702
Without Automatic 16 12.5625 3.95337 .98834
Manual 14 18.1429 2.65611 .70988
TEST RESULT:
The two-tailed significance value is less than 0.05 (p < 0.05). Therefore
there is a significant difference in engine efficiency between the previous
and current trials. The cars in the current trial have higher engine
efficiency than in the earlier trial, with t(29) = 4.597.
The two-tailed significance value is less than 0.05, therefore the
difference between the means is significant and there is a significant
difference in engine efficiency between the trials without ethanol and
with ethanol. The cars with the ethanol additive have higher engine
efficiency than those without ethanol, with t(29) = 3.075.
The analysis shows that normality is not violated.
Levene's test for the without-ethanol condition shows a probability
greater than 0.05 (0.059). Therefore the population variances are
relatively equal, and no significant difference exists between car types
in the case of no ethanol addition.
Levene's test for the with-ethanol condition shows a probability
greater than 0.05 (0.059). Therefore the population variances are
relatively equal, and no significant difference exists between car types
in the case of ethanol addition.
Result:
From the t-test we can conclude that the second trial efficiency of the
cars is better than the previous trial, that engine efficiency improves
with added ethanol, and that the efficiency of the engine with and
without ethanol does not differ between manual and automatic cars.
Ex. No. 6
CHI-SQUARE TEST USING SPSS
AIM:
To perform a chi-square test for the goodness of fit of attitude
towards US military bases in Iraq, and to determine whether
differences in frequency exist across the response categories.
Attitude towards US military bases in Iraq   Frequency of response
In favour                                             8
Against                                              20
Undecided                                            32
PROCEDURE:
Step 1: Enter the data in the data file and select the Data menu.
Step 2: Click on Weight Cases to open the Weight Cases dialogue box.
Step 3: Click on the Weight cases by radio button.
Step 4: Select the variable you require and click on the ► button to
move the variable into the Frequency Variable box.
Step 5: Click OK. The message Weight On should appear on the
status bar at the bottom right of the application window.
Step 6: Select the Analyze menu.
Step 7: Click on Nonparametric Tests and then on Chi-square, to open
the Chi-square Test dialogue box.
Step 8: Click OK.
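The goodness-of-fit statistic itself is a short hand calculation: with equal expected frequencies, it is the sum of (observed - expected)² / expected over the three response categories. A sketch using the table's frequencies:

```python
# Observed response frequencies from the attitude table: in favour,
# against, undecided.
observed = [8, 20, 32]
total = sum(observed)                       # 60 responses
expected = total / len(observed)            # equal expected frequencies: 20 each

# Chi-square goodness-of-fit statistic: sum of (O - E)^2 / E.
chi_square = sum((o - expected) ** 2 / expected for o in observed)
df = len(observed) - 1                      # degrees of freedom

print(chi_square, df)
```

This reproduces the 14.400 with df = 2 shown in the Test Statistics output below.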
Hypothesis:
H0: The attitude towards US military bases in Iraq is equally
distributed across the response categories.
H1: The attitude is not equally distributed across the response
categories.
Level of Significance:
The level of significance is fixed as 5% and therefore the confidence level is
95%.
OUTPUT:
NPAR TESTS
/CHISQUARE=Frequency
/EXPECTED=EQUAL
/MISSING ANALYSIS.
NPar Tests
Notes
Output Created 03-Apr-2014 14:52:38
Comments
Input Active Dataset DataSet0
Filter <none>
Weight Frequency of Response
Split File <none>
N of Rows in Working Data File: 3
Missing Value Handling Definition of Missing User-defined missing values are
treated as missing.
Cases Used Statistics for each test are based
on all cases with valid data for the
variable(s) used in that test.
Syntax NPAR TESTS
/CHISQUARE=Frequency
/EXPECTED=EQUAL
/MISSING ANALYSIS.
[DataSet0]
Chi-Square Test
Frequencies
Frequency of Response
          Observed N   Expected N   Residual
8.00           8          20.0       -12.0
20.00         20          20.0         .0
32.00         32          20.0        12.0
Total         60
Test Statistics
Frequency of Response
Chi-Square 14.400a
Df 2
Asymp. Sig. .001
a. 0 cells (.0%) have expected frequencies
less than 5. The minimum expected cell
frequency is 20.0.
RESULT:
The asymptotic significance value is 0.001, which is less than 0.05;
therefore the null hypothesis is rejected and the alternate hypothesis
is accepted. The attitude towards US military bases in Iraq is not
equally distributed across the response categories.
Ex. No. 7
ONE WAY ANOVA USING SPSS
AIM:
To compare the scores of CBSE students from four metro cities
in India i.e. Delhi, Kolkata, Mumbai and Chennai, using one way
ANOVA.
Questions Asked:
Vijender Gupta wants to compare the scores of CBSE students from four
metro cities in India, i.e. Delhi, Kolkata, Mumbai and Chennai. He obtained
the scores of 20 participants from each of the four metro cities by random
sampling, collecting 80 responses in all. Also note that this is an independent
design, since the respondents are from different cities.
Level of Significance:
The level of significance is fixed as 5% and therefore the confidence level is
95%.
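The F ratio in a one-way ANOVA is the between-groups mean square divided by the within-groups mean square. As a sketch, the figure can be rebuilt from the sums of squares reported in the ANOVA table of this exercise's output:

```python
# Sums of squares from the ANOVA output of this exercise.
ss_between, df_between = 73963.450, 3     # k - 1 groups, k = 4 cities
ss_within, df_within = 504178.100, 76     # N - k, N = 80 students

ms_between = ss_between / df_between      # mean square between groups
ms_within = ss_within / df_within         # mean square within groups
f_ratio = ms_between / ms_within          # the ANOVA F statistic

print(round(ms_between, 3), round(ms_within, 3), round(f_ratio, 3))
```

The recovered mean squares (24654.483 and 6633.922) and F = 3.716 match the SPSS ANOVA table, confirming how the printed columns relate.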
OUTPUT:
ONEWAY Scores BY City
/STATISTICS DESCRIPTIVES HOMOGENEITY
/MISSING ANALYSIS
/POSTHOC=TUKEY ALPHA(0.05).
Oneway
Notes
Output Created 04-Apr-2014 10:18:32
Comments
Input Data C:\Users\Kumaran\Desktop\khi.sav
Active Dataset DataSet1
Filter <none>
Weight <none>
Split File <none>
N of Rows in Working Data File: 80
Missing Value Handling Definition of Missing User-defined missing values are
treated as missing.
Cases Used Statistics for each analysis are
based on cases with no missing
data for any variable in the
analysis.
Syntax ONEWAY Scores BY City
/STATISTICS DESCRIPTIVES
HOMOGENEITY
/MISSING ANALYSIS
/POSTHOC=TUKEY
ALPHA(0.05).
[DataSet1] C:\Users\Kumaran\Desktop\khi.sav
Descriptives
Scores
95% Confidence
Interval for Mean
Std. Std. Lower Upper
N Mean Deviation Error Bound Bound Minimum Maximum
Delhi 20 447.3500 104.69016 23.40943 398.3535 496.3465 269.00 599.00
Kolkata 20 437.8500 79.75771 17.83437 400.5222 475.1778 300.00 599.00
Mumbai 20 387.4000 67.25396 15.03844 355.9242 418.8758 250.00 498.00
Chennai 20 377.7000 68.49287 15.31547 345.6443 409.7557 259.00 498.00
Total 80 412.5750 85.54676 9.56442 393.5375 431.6125 250.00 599.00
ANOVA
Scores
Sum of
Squares df Mean Square F Sig.
Between Groups 73963.450 3 24654.483 3.716 .015
Within Groups 504178.100 76 6633.922
Total 578141.550 79
Post Hoc Tests
Multiple Comparisons
Scores
Tukey HSD
95% Confidence
Interval
(I) Name of (J) Name of Mean Std. Lower Upper
the city the city Difference (I-J) Error Sig. Bound Bound
Delhi Kolkata 9.50000 25.75640 .983 -58.1568 77.1568
Mumbai 59.95000 25.75640 .101 -7.7068 127.6068
Chennai 69.65000* 25.75640 .041 1.9932 137.3068
Kolkata Delhi -9.50000 25.75640 .983 -77.1568 58.1568
Mumbai 50.45000 25.75640 .213 -17.2068 118.1068
Chennai 60.15000 25.75640 .099 -7.5068 127.8068
Mumbai Delhi -59.95000 25.75640 .101 -127.6068 7.7068
Kolkata -50.45000 25.75640 .213 -118.1068 17.2068
Chennai 9.70000 25.75640 .982 -57.9568 77.3568
Chennai Delhi -69.65000* 25.75640 .041 -137.3068 -1.9932
Kolkata -60.15000 25.75640 .099 -127.8068 7.5068
Mumbai -9.70000 25.75640 .982 -77.3568 57.9568
*. The mean difference is significant at the 0.05 level.
Homogeneous Subsets
Scores
Tukey HSDa
Subset for alpha = 0.05
Name of the city N 1 2
Chennai 20 377.7000
Mumbai 20 387.4000 387.4000
Kolkata 20 437.8500 437.8500
Delhi 20 447.3500
Sig. .099 .101
Means for groups in homogeneous subsets are
displayed.
a. Uses Harmonic Mean Sample Size = 20.000.
RESULT:
Test Result:
Levene's test shows that the homogeneity-of-variance statistic is not
significant (p > 0.05). We can be confident that the population
variances for each group are approximately equal.
The F test gives F(3, 76) = 3.716 with a significance of 0.015.
Since p < 0.05 we reject the null hypothesis and accept the alternate
hypothesis that there is a significant difference in scores across the
metro cities of India, F(3, 76) = 3.716, p < 0.05.
Using Tukey HSD further, we can conclude that Delhi and Chennai
differ significantly in their scores.
Result:
The CBSE students from the four metro cities in India (Delhi, Kolkata,
Mumbai and Chennai) differ significantly in their scores
(p = 0.015 < 0.05).
Ex. No. 8
REGRESSION USING SPSS
AIM:
To determine the effect of shelf space and price on the sales of pet food.
Questions Asked:
a) What contribution do shelf space and price make to the prediction
of sales of pet food?
b) Which is the best predictor of sales of pet food?
c) Previous research has suggested that shelf space is the salient
predictor of sales of pet food. Is this hypothesis correct?
PROCEDURE:
Step 1: Select the Analyze menu.
Step 2: Click on Regression and then on the Linear option to open the
Linear Regression dialogue box.
Step 3: Select the dependent variable (sales of pet food) and click on
the ► button to move the variable into the box.
Step 4: Select the independent variables (price and shelf space) and
click on the ► button to move the variables into the box.
Step 5: Click OK.
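Taking the coefficients reported in the output of this exercise at face value, the fitted equation is Sales = 2.029 + 10.500 × Shelf_Space + 0.057 × Price. A sketch of using it for prediction (the sample inputs are illustrative, not from the record's data file):

```python
# Coefficients from the regression output of this exercise.
b0, b_shelf, b_price = 2.029, 10.500, 0.057

def predict_sales(shelf_space, price):
    """Predicted pet-food sales from the fitted linear model."""
    return b0 + b_shelf * shelf_space + b_price * price

# Illustrative inputs (hypothetical shelf space and price values).
example = predict_sales(5, 10)
print(round(example, 3))
```

Note how one extra unit of shelf space moves the prediction by 10.5 while one extra unit of price moves it by only 0.057, which is what makes shelf space the dominant predictor.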
OUTPUT:
REGRESSION
/MISSING LISTWISE
/STATISTICS COEFF OUTS R ANOVA
/CRITERIA=PIN(.05) POUT(.10)
/NOORIGIN
/DEPENDENT Sales
/METHOD=ENTER Shelf_Space Price.
Regression
Notes
Output Created: 09-APR-2016 09:04:29
Comments:
Input
  Data: F:\Haresh\Subjects\BAS FDP\Lab Record 2016\Regression Using SPSS\Regression.sav
  Active Dataset: DataSet0
  Filter: <none>
  Weight: <none>
  Split File: <none>
  N of Rows in Working Data File: 15
Missing Value Handling
  Definition of Missing: User-defined missing values are treated as missing.
  Cases Used: Statistics are based on cases with no missing values for any variable used.
Syntax:
  REGRESSION
    /MISSING LISTWISE
    /STATISTICS COEFF OUTS R ANOVA
    /CRITERIA=PIN(.05) POUT(.10)
    /NOORIGIN
    /DEPENDENT Sales
    /METHOD=ENTER Shelf_Space Price.
Resources
  Processor Time: 00:00:00.00
  Elapsed Time: 00:00:00.00
  Memory Required: 1636 bytes
  Additional Memory Required for Residual Plots: 0 bytes
Variables Entered/Removed(a)
Model   Variables Entered      Variables Removed   Method
1       Price, Shelf_Space(b)          .           Enter
a. Dependent Variable: Sales
b. All requested variables entered.
Model Summary
Model R R Square Adjusted R Std. Error of
Square the Estimate
1 .922a .850 .825 6.059
a. Predictors: (Constant), Price, Shelf_Space
ANOVAa
Model Sum of df Mean F Sig.
Squares Square
Regression 2502.390 2 1251.195 34.081 .000b
1 Residual 440.543 12 36.712
Total 2942.933 14
a. Dependent Variable: Sales
b. Predictors: (Constant), Price, Shelf_Space
Coefficientsa
Model Unstandardized Standardized t Sig.
Coefficients Coefficients
B Std. Error Beta
(Constant) 2.029 5.126 .396 .699
1 Shelf_Space 10.500 3.262 .916 3.219 .007
Price .057 2.613 .006 .022 .983
a. Dependent Variable: Sales
RESULT:
Shelf space and price together predicted 85% of the movement of sales
in the supermarket.
Shelf space is the best predictor of sales of pet food.
The hypothesis based on previous research is therefore supported.
R square value = 0.850; therefore 85% of the sales movement is
explained by shelf space and price.
F ratio = 34.081 with significance = 0.000, which is less than the
level of significance (0.05); hence the model fits.
The significance value for shelf space is 0.007, which is less than the
level of significance (0.05), while the significance value for price is
0.983, which is greater than the level of significance (0.05).
Ex. No. 9
MANN-WHITNEY U TEST USING SPSS
AIM:
To create an SPSS data set and to check whether a difference exists
in the sales of two retail outlets using the Mann-Whitney U test.
Question Asked:
a) Create an SPSS data set.
b) Check whether a difference exists in the sales of the two outlets
using the Mann-Whitney U test.
S No Retail Outlets Sales(in lacs)
1 1 40
2 2 30
3 1 60
4 1 45
5 2 55
6 2 25
7 2 60
8 1 80
9 2 100
10 1 20
11 2 10
12 1 80
13 1 85
14 2 90
PROCEDURE:
Step 1: Enter the data in the data file.
Step 2: Click the Analyze menu -> Nonparametric Tests -> 2
Independent Samples. The dialogue box will open.
Step 3: Select the dependent variable and the grouping variable.
Select the Mann-Whitney U check box in the Test Type box.
Step 4: Click the Define Groups button.
Step 5: The Define Groups dialog box opens. Enter the group values.
Click OK.
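The U statistic can also be computed by hand from rank sums. This sketch uses the 14 observations tabulated in the AIM above; note that the SPSS output further down was produced from a larger 20-case file (its Notes section reports 20 rows), so the numbers differ.

```python
# Sales (in lakhs) re-entered from the Ex. No. 9 table, split by outlet.
outlet1 = [40, 60, 45, 80, 20, 80, 85]
outlet2 = [30, 55, 25, 60, 100, 10, 90]

# Mid-ranks over the pooled sample (tied values share the average rank).
pooled = sorted(outlet1 + outlet2)
def midrank(value):
    first = pooled.index(value) + 1       # 1-based rank of first occurrence
    count = pooled.count(value)
    return first + (count - 1) / 2        # average rank across the tie

r1 = sum(midrank(v) for v in outlet1)     # rank sum for outlet 1
n1, n2 = len(outlet1), len(outlet2)
u1 = n1 * n2 + n1 * (n1 + 1) / 2 - r1     # U for outlet 1
u2 = n1 * n2 - u1                         # U for outlet 2
u = min(u1, u2)                           # the Mann-Whitney U statistic

print(r1, u)
```

SPSS reports the smaller of the two U values, as `min(u1, u2)` does here.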
OUTPUT:
GET
FILE='F:\Haresh\Subjects\BAS FDP\Lab Record
2016\SPSS\Exercise 8 - Mann Whiteney U test\Mann Whitney
Test 2.sav'.
DATASET NAME DataSet1 WINDOW=FRONT.
NPAR TESTS
/M-W= Sales BY Retail(1 2)
/MISSING ANALYSIS.
NPar Tests
Notes
Output Created: 15-APR-2016 21:09:50
Comments:
Input
  Data: F:\Haresh\Subjects\BAS FDP\Lab Record 2016\SPSS\Exercise 8 - Mann Whiteney U test\Mann Whitney Test 2.sav
  Active Dataset: DataSet1
  Filter: <none>
  Weight: <none>
  Split File: <none>
  N of Rows in Working Data File: 20
Missing Value Handling
  Definition of Missing: User-defined missing values are treated as missing.
  Cases Used: Statistics for each test are based on all cases with valid data for the variable(s) used in that test.
Syntax:
  NPAR TESTS
    /M-W= Sales BY Retail(1 2)
    /MISSING ANALYSIS.
Resources
  Processor Time: 00:00:00.00
  Elapsed Time: 00:00:00.01
  Number of Cases Allowed(a): 112347
a. Based on availability of workspace memory.
Ranks
         Retail    N    Mean Rank   Sum of Ranks
Sales    Delhi    10      11.90        119.00
         Mumbai   10       9.10         91.00
         Total    20
Test Statistics(a)
         Mann-Whitney U   Wilcoxon W      Z      Asymp. Sig. (2-tailed)   Exact Sig. [2*(1-tailed Sig.)]
Sales        36.000          91.000    -1.061           .288                      .315(b)
a. Grouping Variable: Retail
b. Not corrected for ties.
RESULT:
The result was not significant, z = -1.061, p > 0.05; there is no
significant difference in the sales of the two retail outlets.
Ex. No. 10
HISTOGRAM, RANK AND PERCENTILE
AIM:
The operating costs of the vehicles used by your company's sales
people are too high. The major component of operating expense is
fuel cost; to analyse fuel costs, you collect mileage data from the
company's two-wheelers for the previous month. Use an Excel sheet
for the calculations.
MPL 27 29 33 21 21 12 16 25 8 17 24 34 38 15 19 19 41
• Summary statistics.
• Histogram analysis.
• Rank and Percentile analysis tool.
PROCEDURE:
Step 1: Enter the data in an Excel file.
Step 2: File – Options – the Excel Options window appears – select
the Add-ins menu – choose Excel Add-ins in the Manage drop-down
box – press the Go button.
Step 3: The Add-ins window appears – choose the Analysis ToolPak to
install the Data Analysis menu – press OK.
Step 4: For summary statistics – go to the Data menu in the ribbon –
select Data Analysis – the Data Analysis window appears – choose
Descriptive Statistics – select the input range in the Descriptive
Statistics dialogue box – set your output range – choose the Summary
statistics option and click OK.
Step 5: For histogram analysis – go to the Data menu in the ribbon –
select Data Analysis – choose Histogram – select the input range in
the Histogram dialogue box – set your output range – choose the
Chart Output option and click OK.
Step 6: For the Rank and Percentile analysis tool – go to the Data
menu in the ribbon – select Data Analysis – choose Rank and
Percentile – select the input range in the Rank and Percentile
dialogue box – set your output range – click OK.
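As a sketch of what the Descriptive Statistics and Rank and Percentile tools produce, the MPL data above can be summarized and ranked outside Excel. The percentile convention assumed here (rank 1 for the largest value; percent = (n - rank) / (n - 1)) mirrors the common Analysis ToolPak behaviour, but treat it as an assumption.

```python
import statistics

# Mileage (MPL) data for the company's two-wheelers, from the exercise.
mpl = [27, 29, 33, 21, 21, 12, 16, 25, 8, 17, 24, 34, 38, 15, 19, 19, 41]

# Summary statistics comparable to the Descriptive Statistics output.
summary = {
    "mean": statistics.mean(mpl),
    "median": statistics.median(mpl),
    "stdev": statistics.stdev(mpl),
    "min": min(mpl),
    "max": max(mpl),
    "range": max(mpl) - min(mpl),
    "count": len(mpl),
}

# Rank and percentile: rank 1 is the largest value; tied values share a rank.
n = len(mpl)
desc = sorted(mpl, reverse=True)
ranked = [(v, desc.index(v) + 1,
           round(100 * (n - (desc.index(v) + 1)) / (n - 1), 1))
          for v in mpl]

print(summary["mean"], ranked[0])
```

For example, the first reading (27 MPL) ranks 6th of 17, putting it near the 69th percentile of the fleet.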
OUTPUT:
Ex. No. 11 RATE OF INTEREST
AIM:
To determine the rate of interest of a loan.
PROCEDURE:
Step 1: Enter the data in an Excel file to find the rate of interest.
Step 2: NPER = 48 months, PMT = -3000, PV = 100000.
Step 3: Find the answer by using the formula =RATE(NPER, PMT, PV).
Step 4: Alternatively – select the Formulas menu – choose the
Financial menu – select the RATE option by double click.
Step 5: The RATE dialogue box appears – provide the details NPER,
PMT and PV – press OK.
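Under the hood, RATE searches for the periodic rate that makes the present value of the payment stream equal the loan amount. A bisection sketch of that search, using the inputs above (the converged figure depends on the exact inputs and sign conventions, so treat it as a cross-check rather than Excel's algorithm):

```python
# Inputs from Step 2: 48 monthly payments of 3000 against a 100000 loan.
nper, pmt, pv = 48, -3000.0, 100000.0

def balance(r):
    """Loan amount minus the present value of the payments at rate r."""
    return pv + sum(pmt / (1 + r) ** t for t in range(1, nper + 1))

# balance() is negative when r is too low and positive when r is too high,
# so bisection on the sign change converges to the periodic rate.
lo, hi = 1e-9, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if balance(mid) < 0:
        lo = mid
    else:
        hi = mid
monthly_rate = (lo + hi) / 2

print(round(monthly_rate * 100, 2))  # percent per month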
OUTPUT:
RESULT:
The result obtained is that the rate of interest the bank charges on
the loan is 2%.
Ex. No. 12
FUTURE VALUE CALCULATION
AIM:
To determine the future value.
You deposit Rs. 1,000 each and every month in your bank account.
The bank pays a 12% annual rate, compounded every month. Find out
how much money will be in your account at the end of 24 months.
PROCEDURE:
Step 1: Enter the data in an Excel file to find the future value of
the investment.
Step 2: RATE = 12%, NPER = 24, PMT = -1000.
Step 3: Find the answer by using the formula =FV(RATE/12, NPER, PMT).
Step 4: Alternatively – select the Formulas menu – choose the
Financial menu – select the FV option by double click.
Step 5: The FV dialogue box appears – provide the details RATE,
NPER and PMT – press OK.
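The closed form behind =FV(RATE/12, NPER, PMT) is the ordinary-annuity future value, which can be checked directly:

```python
# Inputs from the exercise: 12% p.a. compounded monthly, 24 deposits of 1000.
annual_rate, nper, pmt = 0.12, 24, -1000.0

r = annual_rate / 12                       # 1% per month
fv = -pmt * (((1 + r) ** nper - 1) / r)    # ordinary annuity future value

print(round(fv, 2))                        # 26973.46
```

This reproduces the Rs. 26,973.46 reported in the result below.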
OUTPUT:
RESULT:
The result obtained is that the future value of the investment is
Rs. 26,973.46.
Ex. No. 13 CALCULATION OF TIME
AIM:
To determine NPER
You can afford only Rs.500/- per month. If you are crediting this
amount in a bank that pays an annual interest of 12% compounded
monthly. How long will it take for your investment to accumulate
to Rs.50, 000?
PROCEDURE:
Step 1: Enter the data in an Excel file to find the number of periods.
Step 2: RATE = 12%, PMT = -500, PV = 0, FV = 50000.
Step 3: Find the answer by using the formula
=NPER(RATE/12, PMT, PV, FV).
Step 4: Alternatively – select the Formulas menu – choose the
Financial menu – select the NPER option by double click.
Step 5: The NPER dialogue box appears – provide the details RATE,
PMT, PV and FV – press OK.
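Because the deposits form an ordinary annuity, NPER has a closed-form solution obtained by solving fv = pmt * ((1 + r)^n - 1) / r for n; a sketch:

```python
import math

# Inputs from the exercise: Rs. 500 per month at 12% p.a. (1% per month),
# target Rs. 50,000.
annual_rate, pmt, fv_target = 0.12, 500.0, 50000.0
r = annual_rate / 12

# Solve fv = pmt * ((1 + r)**n - 1) / r for n.
n = math.log(1 + fv_target * r / pmt) / math.log(1 + r)

print(round(n, 2), math.ceil(n))
```

The exact answer is about 69.66 periods, so a full 70 monthly deposits are needed, matching the result below.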
OUTPUT:
RESULT:
The result obtained is that the time taken for the investment to
accumulate to Rs. 50,000 is 70 months.
Ex. No. 14 EMI CALCULATION
AIM:
To determine the calculation of the EMI.
PROCEDURE:
Step 1: Enter the data in an Excel file to find the EMI.
Step 2: RATE = 14%, NPER = 15 years, PV = -200000.
Step 3: Find the answer by using the formula
=PMT(RATE/12, NPER*12, PV).
Step 4: Alternatively – select the Formulas menu – choose the
Financial menu – select the PMT option by double click.
Step 5: The PMT dialogue box appears – provide the details RATE,
NPER and PV – press OK.
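=PMT(RATE/12, NPER*12, PV) evaluates the standard amortization formula, which can be sketched directly with the exercise's inputs:

```python
# Inputs from the exercise: Rs. 2,00,000 loan at 14% p.a. over 15 years.
annual_rate, years, principal = 0.14, 15, 200000.0

r = annual_rate / 12                        # monthly rate
n = years * 12                              # number of monthly installments
emi = principal * r / (1 - (1 + r) ** -n)   # standard amortization formula

print(round(emi, 2))
```

The EMI comes out a little above Rs. 2,663 per month, in line with the result below.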
OUTPUT:
RESULT:
The result obtained is that the EMI to be paid monthly is Rs. 2,663.43.
Ex. No. 15
NPV CALCULATION
AIM:
To determine the calculation for NPV
PROCEDURE:
Step 1: Enter the data in an Excel file to find the NPV.
Step 2: RATE = 10%, cash flows = 500, 900, 550, 478, 950.
Step 3: Find the answer by using the formula
=NPV(RATE, value1, value2, ...).
Step 4: Alternatively – select the Formulas menu – choose the
Financial menu – select the NPV option by double click.
Step 5: The NPV dialogue box appears – provide the details RATE and
the cash-flow values – press OK.
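Excel's NPV discounts the first cash flow by one full period, so the formula is a plain discounted sum starting at t = 1; a sketch with the exercise's inputs:

```python
# Inputs from the exercise: 10% discount rate, five yearly cash flows.
rate = 0.10
cash_flows = [500, 900, 550, 478, 950]

# Each cash flow is discounted by (1 + rate)**t, starting at t = 1.
npv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

print(round(npv, 2))  # 2527.93
```

This reproduces the Rs. 2,527.93 reported in the result below.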
OUTPUT:
RESULT:
The result obtained is that the Net Present Value of the cash flows is
Rs. 2,527.93.
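The NPV computation can be sketched directly from the discounting definition. Like Excel's =NPV, this discounts the first cash flow by one full period:

```python
def npv(rate, cash_flows):
    """Net present value of end-of-period cash flows, mirroring
    Excel's =NPV(rate, value1, value2, ...), which discounts the
    first value by one full period."""
    return sum(cf / (1 + rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

print(round(npv(0.10, [500, 900, 550, 478, 950]), 2))  # 2527.93
```

The five cash flows discounted at 10% sum to Rs. 2,527.93, matching the record's result.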
Ex. No. 16 IRR CALCULATION
AIM:
To determine the calculation for IRR
PROCEDURE:
Step 1: Enter the data in the Excel file to find the rate of return.
Step 2: Values = the initial outlay (entered as a negative number)
followed by the periodic cash inflows
Step 3: Find the answer by using the formula =IRR(values), where
values is the cell range holding the cash flows
Step 4: Using the Excel ribbon – select the Formulas menu –
choose the Financial category – select the IRR option by double-click
Step 5: The IRR dialogue box appears – provide the cell range of
values – press OK
OUTPUT:
RESULT:
The result obtained is that the IRR is 8%.
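The IRR is the rate at which the NPV of the cash flows equals zero; a simple bisection search sketches what =IRR does numerically. The cash flows below are hypothetical, since the record's table is not reproduced here:

```python
def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection: the rate at which the
    NPV of the cash flows (first entry = initial outlay, negative)
    equals zero. A simple stand-in for Excel's =IRR(range)."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # NPV falls as the rate rises, so keep the half that brackets zero
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical cash flows (not the ones from the record):
print(round(irr([-1000, 400, 400, 400]), 4))
```

For this sample stream the IRR comes out near 9.7%; with the record's own cash flows the same procedure would reproduce the 8% result.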
Ex. No. 17 CHI SQUARE USING EXCEL
AIM:
To determine the value of Chi Square
PROCEDURE:
Step 1: Enter the data in the Excel file to find the chi-square value.
Step 2: The first table holds the observed frequencies.
Step 3: Create a second table of expected frequencies with the
formula =(Row Total * Column Total) / Grand Total
Step 4: Compare the observed and expected tables with the formula
=CHITEST(actual_range, expected_range)
Step 5: Check whether the relationship is significant.
OUTPUT:
RESULT:
The p-value obtained is 0.0000001308, which is less than the 0.05
level of significance, so there is a significant relationship
between region and product.
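The expected-frequency step can be sketched in Python. This computes the Pearson chi-square statistic from a contingency table using the same row-total-times-column-total rule as Step 3; Excel's =CHITEST goes one step further and converts the statistic to a p-value. The 2x2 table below is hypothetical, since the record's region-by-product data is not shown:

```python
def chi_square(observed):
    """Pearson chi-square statistic: expected counts are
    (row total * column total) / grand total, as in Step 3."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            e = row_totals[i] * col_totals[j] / grand
            stat += (o - e) ** 2 / e
    return stat

# Hypothetical 2x2 observed-frequency table:
print(round(chi_square([[10, 20], [30, 40]]), 4))  # 0.7937
```

A large statistic (equivalently, a tiny p-value like the record's 0.0000001308) leads to rejecting independence between the two attributes.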
Ex. No. 18 LPP USING POM
AIM:
Solve the following linear programming problem using Simplex
Method.
Maximize Z = 6X1 + 8X2
Subject to
5X1 + 10X2 ≤ 60
4X1 + 4X2 ≤ 40
X1, X2 ≥ 0
To solve the linear programming problem using the POM application.
PROCEDURE:
Step 1: Start - all programs – POM shortcut – POM
window appears
Step 2: Select modules – Linear Programming
Step 3: Provide a problem title – Decision Variables as 2 –
Constraints as 2 – save the document
Step 4: Provide the input values - click solve menu
Step 5: Save the document
OUTPUT:
RESULT:
The Simplex Method gives the optimal solution X1 = 8, X2 = 2,
with maximum Z = 6(8) + 8(2) = 64.
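The optimum can be verified by hand: for a two-variable LP, a bounded optimum lies at a corner point of the feasible region, so it suffices to check the axis intercepts and the intersection of the two constraint lines:

```python
# Enumerate the corner points of the feasible region and pick the best.
def feasible(x1, x2, eps=1e-9):
    return (5*x1 + 10*x2 <= 60 + eps and 4*x1 + 4*x2 <= 40 + eps
            and x1 >= -eps and x2 >= -eps)

# Candidate vertices: axis intercepts of each line, plus their intersection.
# 5x1+10x2=60 and 4x1+4x2=40 meet where x1+x2=10, giving x2=2, x1=8.
candidates = [(0, 0), (0, 6), (10, 0), (12, 0), (0, 10), (8, 2)]
best = max((p for p in candidates if feasible(*p)),
           key=lambda p: 6*p[0] + 8*p[1])
print(best, 6*best[0] + 8*best[1])  # (8, 2) 64
```

The infeasible candidates (12, 0) and (0, 10) are filtered out, and (8, 2) with Z = 64 beats the remaining vertices, confirming the Simplex result.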
Ex. No. 19 ASSIGNMENT PROBLEM USING
POM
AIM:
Solve the following Assignment problem.
The following matrix gives the cost involved for operators A, B and
C to perform jobs 1, 2 and 3.
Assign the operators to jobs so as to minimize the total cost of
completing the jobs.
PROCEDURE:
Step 1: Start - all programs – POM shortcut – POM
window appears
Step 2: Select modules – Assignment Problem
Step 3: Provide a problem title – Number of jobs –
select the Minimize option – save the document
Step 4: Provide the input values - click solve menu
Step 5: Save the document
OUTPUT:
==========================================================================
SOLUTION:
==========================================================================
ITERATION NUMBER 10
Z 28.000
==========================================================================
RESULT:
Thus the assignment is: Operator 1 – second job, Operator 2 –
third job, Operator 3 – first job, and the total cost is Rs. 28.
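For a 3x3 problem the assignment can be checked by brute force over all 3! = 6 pairings. The cost matrix below is hypothetical (the record's table is not reproduced); it is chosen so that its optimum matches the stated assignment pattern and total of 28:

```python
from itertools import permutations

# Hypothetical 3x3 cost matrix (assumption, not the record's data):
cost = [[10, 9, 12],   # operator A's cost for jobs 1, 2, 3
        [15, 11, 9],   # operator B
        [10, 15, 12]]  # operator C

# Try every one-to-one pairing of operators to jobs.
best = min(permutations(range(3)),
           key=lambda jobs: sum(cost[op][j] for op, j in enumerate(jobs)))
total = sum(cost[op][j] for op, j in enumerate(best))
print(best, total)  # (1, 2, 0) 28 -> A->job 2, B->job 3, C->job 1
```

POM's Hungarian-method solver reaches the same optimum without enumerating all permutations, which matters once the matrix grows beyond a few rows.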
Ex. No. 20 EOQ using POM
AIM:
Alpha industry needs 15,000 units/year of a bought-out component
which will be used in its main product. The ordering cost is Rs. 125
per order, and the carrying cost per unit per year is 20% of the
purchase price per unit, which is Rs. 75.
Find a. Economic order quantity
b. Number of orders per year
c. Time between successive orders
PROCEDURE:
Step 1: Start - all programs – POM shortcut – POM window
appears
Step 2: Select modules – Fixed order Quantity Inv. Model
Step 3: Provide Problem Title – Select Model I Basic Economic
Order Quantity in General Tab
Step 4: Provide the input values in the Production Tab – Annual
Demand = 15000 units – Ordering Cost = 125 – Carrying Cost
C = 75 × 20% = Rs. 15 per unit per year – click the Solve menu
Step 5: Save the document
OUTPUT:
--------------------------------------------------------------------------
RESULT:
a. Economic order quantity is 500 units
b. Number of orders per year is 30
c. Time between successive orders is 1/30 year (about 12 days)
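The three answers follow directly from the basic EOQ (Wilson) formula with the data from the aim, so they can be sketched and checked in a few lines:

```python
import math

# Data from the aim: annual demand D = 15000 units, ordering cost
# Co = Rs. 125/order, carrying cost Cc = 20% of Rs. 75 = Rs. 15/unit/year.
D, Co, Cc = 15000, 125, 0.20 * 75

eoq = math.sqrt(2 * D * Co / Cc)            # classic EOQ formula
orders_per_year = D / eoq
time_between_orders = 1 / orders_per_year   # in years

print(eoq, orders_per_year, round(time_between_orders, 4))
# 500.0 30.0 0.0333
```

EOQ balances ordering cost against carrying cost: at 500 units per order, both annual cost components equal Rs. 3,750.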
Ex. No. 21 LPP USING GRAPHICAL METHOD IN TORA
AIM:
Solve the following linear programming problem using Graphical
Method.
Maximize Z = 2X1 + 3X2
Subject to
X1 + X2 ≥ 6
7X1 + X2 ≥ 14
X1, X2 ≥ 0
PROCEDURE:
Step 1: TORA – Main menu – Linear Programming – Press
Go to Input Screen
Step 2: Select Problem Title for appropriate title – Provide
No of Variables as 2 and Constraints as 2– Enter
Step 3: Enter the project title
Step 4: Enter the Maximize constraint and the other 2
constraints in the provided table
Step 5: Choose Solve Problem from the Solve/Modify menu –
select the Solve menu – Graphical Method
Step 6: Finish the problem
OUTPUT:
RESULT:
The problem is unbounded: Z can be increased without limit over the
feasible region.
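The unboundedness is easy to see: both constraints are of the ≥ type, so the feasible region is open upward. Moving along the ray (0, t) stays feasible for t ≥ 14 while Z = 3t grows without limit:

```python
# Both constraints are ">=", so the feasible region extends upward
# without bound; along the ray (0, t), Z = 2*0 + 3*t grows indefinitely.
def feasible(x1, x2):
    return x1 + x2 >= 6 and 7*x1 + x2 >= 14 and x1 >= 0 and x2 >= 0

for t in (14, 140, 1400):
    assert feasible(0, t)
    print(t, 2*0 + 3*t)   # Z keeps growing: 42, 420, 4200
```

This is why TORA's graphical solver reports an unbounded maximum for this problem.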
Ex. No. 22 TRANSPORTATION USING
LEAST COST METHOD IN TORA
AIM:
Find the feasible solution for the transportation problem using
Least cost method:
From \ To    D    E    F    Supply
A            6    4    1      50
B            3    8    7      40
C            4    4    2      60
Demand      20   95   35     150
PROCEDURE:
Step 1: TORA – Main menu – Transportation Model –
Press Go to Input Screen
Step 2: Select Problem Title for appropriate title – No of
Sources = 3 and Destinations = 3
Step 3: Enter the Demand and Supply Numbers – press
solve menu
Step 4: Solve/Modify Menu – Go to Solve Problem –
Iterations – Least Cost Starting Solution
Step 5: Finish the problem
OUTPUT:
RESULT:
The result for the Transportation is A-EF, B-DE, C-E with the
objective value of 555`.
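The least cost method can be sketched directly from the table above: repeatedly allocate as much as possible to the cheapest remaining cell until all supply and demand are exhausted (cost-order tie-breaking here is one common choice):

```python
# Cost table, supplies, and demands from the exercise.
cost = [[6, 4, 1],   # A -> D, E, F
        [3, 8, 7],   # B
        [4, 4, 2]]   # C
supply = [50, 40, 60]
demand = [20, 95, 35]

total = 0
# Visit cells in ascending cost order; exhausted rows/columns get 0.
cells = sorted((cost[i][j], i, j) for i in range(3) for j in range(3))
for c, i, j in cells:
    qty = min(supply[i], demand[j])
    supply[i] -= qty
    demand[j] -= qty
    total += c * qty
print(total)  # 555
```

The allocations produced are A→F = 35, B→D = 20, A→E = 15, C→E = 60, B→E = 20, matching the A-EF, B-DE, C-E pattern in the result.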
Ex. No. 23 TRANSPORTATION USING
NORTH WEST CORNER RULE IN
TORA
AIM:
Find the feasible solution for the transportation problem using the
North-West Corner rule:
From \ To    D    E    F    Supply
A            6    4    1      50
B            3    8    7      40
C            4    4    2      60
Demand      20   95   35     150
PROCEDURE:
Step 1: TORA – Main menu – Transportation Model –
Press Go to Input Screen
Step 2: Select Problem Title for appropriate title – No of
Sources = 3 and Destinations = 3
Step 3: Enter the Demand and Supply Numbers – press
solve menu
Step 4: Solve/Modify Menu – Go to solve Problem –
Iterations – North West Corner Starting Solution
Step 5: Finish the problem
OUTPUT:
RESULT:
The result for the Transportation is A-DE, B-E, C-EF with the
objective value of 730`.
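The north-west corner rule ignores costs entirely: start at the top-left cell, allocate the maximum possible, and step right when a column's demand is met or down when a row's supply is exhausted:

```python
# Cost table, supplies, and demands from the exercise.
cost = [[6, 4, 1],   # A -> D, E, F
        [3, 8, 7],   # B
        [4, 4, 2]]   # C
supply = [50, 40, 60]
demand = [20, 95, 35]

total, i, j = 0, 0, 0
while i < 3 and j < 3:
    qty = min(supply[i], demand[j])
    total += cost[i][j] * qty
    supply[i] -= qty
    demand[j] -= qty
    if supply[i] == 0:
        i += 1   # row exhausted: move down
    else:
        j += 1   # column satisfied: move right
print(total)  # 730
```

The allocations are A→D = 20, A→E = 30, B→E = 40, C→E = 25, C→F = 35, matching the A-DE, B-E, C-EF pattern; the Rs. 730 total is higher than the least cost method's Rs. 555 because costs play no role in the allocation.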
Ex. No. 24
PERT USING TORA
AIM:
An R&D project has a list of tasks to be performed whose time
estimates are given in the table.
Draw the network diagram for the R&D project.
PROCEDURE:
Step 1: TORA – Main menu – Project planning – PERT
Step 2: Select Enter new problem – Decimal notation as
normal – Enter Go to Input Screen
Step 3: Enter the project title
Step 4: Enter the From Node and To Node – activity
symbol – the To, Tm and Tp duration estimates – press the Solve
menu – save the document
Step 5: Enter solve problem from menu solve/modify menu
Step 6: Finish the problem
NETWORK DIAGRAM:
(Hand-drawn network diagram with nodes 1–7 and activities A–J, as recorded in the original.)
OUTPUT:
RESULT:
The PERT computation identifies the longest path through the
network, giving a project duration of 24.
Ex. No. 25
CPM USING TORA
AIM:
A project schedule has the following characteristics, as shown in
the table given below.
NETWORK DIAGRAM:
(Hand-drawn network diagram with nodes 1–10 and activities A–L, as recorded in the original.)
OUTPUT:
RESULT:
The results of the CPM program are:
1. TE = 0 and TL = 22
2. The critical path is B-E-H-J-K
3. The project duration is 92