
DHANALAKSHMI SRINIVASAN

COLLEGE OF ENGINEERING
(Approved by AICTE, New Delhi & Affiliated to Anna
University, Chennai)
NH-47, Palakkad Main Road, Navakkarai (Po).
COIMBATORE – 641 105

2015 -2017

PRACTICAL RECORD

REGISTER NO :
NAME :
SEMESTER : I MBA – II SEMESTER
SUBJECT NAME : BA 7211 - DATA ANALYSIS AND
BUSINESS MODELING

CERTIFICATE

This is certified to be the bonafide record of work done by in the


Second Semester MBA in the computer lab of the college during the year
2015-2017.
STAFF INCHARGE HEAD OF THE
DEPARTMENT

Submitted to the Anna University Practical Examination held on


……………………………….. at Dhanalakshmi Srinivasan College of
Engineering, Coimbatore.

INTERNAL EXAMINER
EXTERNAL EXAMINER
CONTENTS

S.NO   DATE   NAME OF THE PROGRAM                              PAGE NO   SIGN

SPSS
1.   Descriptive Statistics – Frequency Distribution
2.   Descriptive Statistics – Measures of Central Tendency
3.   Descriptive Statistics – Measures of Central Tendency
4.   Correlation
5.   T-Test
6.   Chi Square
7.   ANOVA
8.   Regression
9.   Mann Whitney Test
EXCEL
10.  Histogram, Rank and Percentile
11.  Rate
12.  Future Value
13.  Time
14.  EMI
15.  NPV
16.  IRR
17.  Chi Square
POM
18.  LPP
19.  Assignment
20.  EOQ
TORA
21.  LPP Using Graphical Method
22.  Transportation Problem – Least Cost
23.  Transportation Problem – North West Corner Rule
24.  PERT
25.  CPM
Ex. No. 1
DESCRIPTIVE STATISTICS – FREQUENCY DISTRIBUTION

AIM:
Calculate the frequency distribution. Create a data file with the
following variables.
Label for the variables
Age: 1 (< 20), 2 (20-25), 3 (25-30), 4 (30-40), 5 (>40).
Gender: 1 (Male), 2(Female).
Education: 1 – High school, 2 – Graduate in Arts and Science degree,
3- Graduate in professional degree, 4-Post graduate degree.
Working Experience (years): 1 (< 1 year) 2(1-5 years), 3(5-10Year),
4(10-20), 5(>20).
Enter your own data set (a minimum of 25 records) in the Data View of
SPSS, then calculate the frequency distribution. Graphically represent
the variables in the form of a bar chart.

PROCEDURE:
Open Windows – all programs – IBM SPSS Statistics – IBM
SPSS Statistics 21 – New SPSS Data Sheet
Enter the Field Name using the variable view tab in the data
sheet of SPSS
Enter the data using Data View tab in SPSS
Open the menu Analyze – Descriptive Statistics – Frequencies
Define the input as well as output range
Select whether the information is in column or row.
Specify the label, if any
Click ok.

OUTPUT:

FREQUENCIES VARIABLES=Gender Education Working_Exp Age
  /ORDER=ANALYSIS.
Frequencies
Notes
Output Created                                  06-APR-2016 10:38:29
Comments
Input           Active Dataset                  DataSet0
                Filter                          <none>
                Weight                          <none>
                Split File                      <none>
                N of Rows in Working Data File  27
Missing Value   Definition of Missing           User-defined missing values are treated as missing.
Handling        Cases Used                      Statistics are based on all cases with valid data.
Syntax                                          FREQUENCIES VARIABLES=Gender Education Working_Exp Age
                                                  /ORDER=ANALYSIS.
Resources       Processor Time                  00:00:00.02
                Elapsed Time                    00:00:00.02

[DataSet0]

Statistics
               Gender   Education   Working_Exp   Age
N   Valid        25         25           25        25
    Missing       2          2            2         2

Frequency Table
Gender
                     Frequency   Percent   Valid Percent   Cumulative Percent
Valid     Male           14        51.9         56.0              56.0
          Female         11        40.7         44.0             100.0
          Total          25        92.6        100.0
Missing   System          2         7.4
Total                    27       100.0

Education
                                           Frequency   Percent   Valid Percent   Cumulative Percent
Valid     High School                          6         22.2         24.0              24.0
          Graduation in Arts and Science       6         22.2         24.0              48.0
          Professional Degree                  6         22.2         24.0              72.0
          Post Graduate                        7         25.9         28.0             100.0
          Total                               25         92.6        100.0
Missing   System                               2          7.4
Total                                         27        100.0
Working_Exp
                     Frequency   Percent   Valid Percent   Cumulative Percent
Valid     <1              5        18.5         20.0              20.0
          1-5             7        25.9         28.0              48.0
          5-10            3        11.1         12.0              60.0
          >20             5        18.5         20.0              80.0
          5.00            5        18.5         20.0             100.0
          Total          25        92.6        100.0
Missing   System          2         7.4
Total                    27       100.0

Age
                     Frequency   Percent   Valid Percent   Cumulative Percent
Valid     <20             4        14.8         16.0              16.0
          20-25           6        22.2         24.0              40.0
          25-30           4        14.8         16.0              56.0
          30-40           3        11.1         12.0              68.0
          >40             8        29.6         32.0             100.0
          Total          25        92.6        100.0
Missing   System          2         7.4
Total                    27       100.0

RESULT:
The frequency distribution was computed and the result obtained using IBM SPSS Statistics 21.
Ex. No. 2
DESCRIPTIVE STATISTICS – MEASURES OF CENTRAL TENDENCY

AIM:
To determine the measures of central Tendency. Create a data file
with the following variables.

Label for the variables


Age: 1 (< 20), 2 (20-25), 3 (25-30), 4 (30-40), 5 (>40).
Gender: 1 (Male), 2(Female).
Education: 1 – High school, 2 – Graduate in Arts and Science
degree , 3- Graduate in
professional degree, 4-Post graduate degree.
Working Experience (years) : 1 (< 1 year) 2(1-5 years ), 3(5-
10Year),
4(10-20), 5(>20).

Enter your own data set (a minimum of 25 records) in the Data View of
SPSS, then calculate the frequency distribution. Graphically represent
the variables in the form of a bar chart.
PROCEDURE:
Step1: Enter the variables as specified to determine SPSS
Step2: Enter the data given in the tabular column in data sheet.
Step3: Select the analyze menu->descriptive statistics-
>Frequencies
Step4: In the variable list, select the education and others to the
Frequencies List. Left click on the right arrow button between the
boxes to move this variable over to the Frequencies box.
Step5: Click on the options button. This will open the descriptive
options dialog box.
Step6: Click on mean, sum, standard deviation, variance, minimum
value, maximum value and range under the menu Central
Tendency.
Step7: Click ok
Step8: The Frequency dialog box closes and SPSS activates the
output navigator to illustrate the statistics.
OUTPUT:

Frequencies
Notes
Output Created 27-APR-2016 21:39:51
Comments
F:\Haresh\Subjects\BAS
FDP\Lab Record
Data 2016\SPSS\Exercise 2 - Measure
of Central Tendancy\Measures of
CentrlTendancyInput.sav
Input Active Dataset DataSet0
Filter <none>
Weight <none>
Split File <none>
N of Rows in Working 27
Data File
User-defined missing values are
Definition of Missing
Missing Value treated as missing.
Handling Statistics are based on all cases
Cases Used
with valid data.
FREQUENCIES
VARIABLES=Age Gender
Education Work_Exp
Syntax
/STATISTICS=MEAN MEDIAN
MODE SUM
/ORDER=ANALYSIS.
Processor Time 00:00:00.02
Resources
Elapsed Time 00:00:00.03

Statistics
Age Gender Education Work_Exp
Valid 26 26 26 26
N
Missing 1 1 1 1
Mean 2.77 1.35 2.31 2.81
Median 2.50 1.00 2.00 2.50
Mode 2 1 2 2
Sum 72 35 60 73

Frequency Table
Age
Frequency Percent Valid Cumulative
Percent Percent
Valid     <20            6       22.2        23.1         23.1
          20-25          7       25.9        26.9         50.0
          25-30          4       14.8        15.4         65.4
          30-40          5       18.5        19.2         84.6
          >40            4       14.8        15.4        100.0
Total 26 96.3 100.0

Missing System 1 3.7

Total 27 100.0

Gender
Frequency Percent Valid Cumulative
Percent Percent
Male 17 63.0 65.4 65.4
Valid Female 9 33.3 34.6 100.0
Total 26 96.3 100.0

Missing System 1 3.7

Total 27 100.0

Education
Frequency Percent Valid Cumulative
Percent Percent
Valid High School 6 22.2 23.1 23.1
Graduate in Arts and 11 40.7 42.3 65.4
Science degree
Graduate in 4 14.8 15.4 80.8
professional degree
Post graduate degree 5 18.5 19.2 100.0
Total 26 96.3 100.0

Missing System 1 3.7

Total 27 100.0

Work_Exp
Frequency Percent Valid Cumulative
Percent Percent
<1 4 14.8 15.4 15.4
1-5 9 33.3 34.6 50.0
5-10 5 18.5 19.2 69.2
Valid 10-20 4 14.8 15.4 84.6
>20 4 14.8 15.4 100.0
Total 26 96.3 100.0

Missing System 1 3.7

Total 27 100.0

Result:
Data file in SPSS for determining the measure of central tendency
was created and calculated using SPSS.
Ex. No. 3
DESCRIPTIVE STATISTICS - MEASURES OF CENTRAL TENDENCY

AIM:
Calculate the frequency distributions and measures of central
tendency from following table.
Label for Gender – 1 (Male), 2(Female)
Gender    1     1     2     1     2     1     2     1     2     1
Height   140   146   156   149   154   156   151   148   158   150
Weight    56    45    68    51    54    53    69    51    70    49
Gender    1     2     1     2     1     1     2     2     1     2
Height   151   159   153   148   155   146   150   152   149   156
Weight    45    68    50    55    61    53    65    64    47    59

PROCEDURE:
Step1: Enter the variables as specified to determine SPSS
Step2: Enter the data given in the tabular column in data sheet.
Step3: Select the analyze menu->descriptive statistics-
>Frequencies
Step4: In the variable list, select the education and others to the
Frequencies List. Left click on the right arrow button between the
boxes to move this variable over to the Frequencies box.
Step5: Click on the options button. This will open the descriptive
options dialog box.
Step6: Click on mean, sum, standard deviation, variance, minimum
value, maximum value and range under the menu Central
Tendency.
Step7: Click ok

OUTPUT:
FREQUENCIES VARIABLES=Gender Height Weight
/STATISTICS=MEAN MEDIAN MODE SUM
/ORDER=ANALYSIS.

Frequencies
Notes
Output Created 27-APR-2016 22:24:41
Comments
Active Dataset DataSet0
Filter <none>
Weight <none>
Input
Split File <none>
N of Rows in Working Data 20
File
User-defined missing values are
Definition of Missing
Missing Value treated as missing.
Handling Statistics are based on all cases
Cases Used
with valid data.
FREQUENCIES
VARIABLES=Gender Height
Weight
Syntax
/STATISTICS=MEAN MEDIAN
MODE SUM
/ORDER=ANALYSIS.
Processor Time 00:00:00.02
Resources
Elapsed Time 00:00:00.01

[DataSet0]
Statistics
Gender Height Weight
Valid 20 20 20
N
Missing 0 0 0
Mean 1.45 151.35 56.65
Median 1.00 151.00 54.50
Mode 1 156 45a
Sum 29 3027 1133
a. Multiple modes exist. The smallest value is
shown

Frequency Table
Gender
Frequency Percent Valid Cumulative
Percent Percent
Male 11 55.0 55.0 55.0
Valid Female 9 45.0 45.0 100.0
Total 20 100.0 100.0

Height
Frequency Percent Valid Cumulative
Percent Percent
140 1 5.0 5.0 5.0
146 2 10.0 10.0 15.0
148 2 10.0 10.0 25.0
149 2 10.0 10.0 35.0
150 2 10.0 10.0 45.0
151 2 10.0 10.0 55.0
152 1 5.0 5.0 60.0
Valid 153 1 5.0 5.0 65.0
154 1 5.0 5.0 70.0
155 1 5.0 5.0 75.0
156 3 15.0 15.0 90.0
158 1 5.0 5.0 95.0
159 1 5.0 5.0 100.0
Total 20 100.0 100.0

Weight
Frequency Percent Valid Cumulative
Percent Percent
Valid 45 2 10.0 10.0 10.0
47 1 5.0 5.0 15.0
49 1 5.0 5.0 20.0
50 1 5.0 5.0 25.0
51 2 10.0 10.0 35.0
53 2 10.0 10.0 45.0
54 1 5.0 5.0 50.0
55 1 5.0 5.0 55.0
56 1 5.0 5.0 60.0
59 1 5.0 5.0 65.0
61 1 5.0 5.0 70.0
64 1 5.0 5.0 75.0
65 1 5.0 5.0 80.0
68 2 10.0 10.0 90.0
69 1 5.0 5.0 95.0
70 1 5.0 5.0 100.0
Total 20 100.0 100.0

SAVE OUTFILE='F:\Haresh\Subjects\BAS FDP\Lab Record


2016\SPSS\Exercise 3 - Measures of Central '+
'Tendancy\INPUT.sav'
/COMPRESSED.

Result:
Data file in SPSS for determining the measure of central tendency
was created and calculated using SPSS.
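
The measures reported above can also be cross-checked outside SPSS. The following is a minimal illustrative Python sketch (assuming Python 3.8+ with its standard library; it is not part of the prescribed SPSS procedure) that computes the mean, median and modes for the height and weight data tabulated in the AIM.

# Cross-check of Exercise 3 using Python's statistics module (illustrative sketch).
import statistics

height = [140, 146, 156, 149, 154, 156, 151, 148, 158, 150,
          151, 159, 153, 148, 155, 146, 150, 152, 149, 156]
weight = [56, 45, 68, 51, 54, 53, 69, 51, 70, 49,
          45, 68, 50, 55, 61, 53, 65, 64, 47, 59]

for name, values in (("Height", height), ("Weight", weight)):
    print(name,
          "mean =", round(statistics.mean(values), 2),    # SPSS: 151.35 / 56.65
          "median =", statistics.median(values),          # SPSS: 151.00 / 54.50
          "modes =", statistics.multimode(values),        # SPSS shows the smallest mode
          "sum =", sum(values))                           # SPSS: 3027 / 1133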
Ex. No. 4
CORRELATION

AIM:
Eighteen students have taken the Common Admission Test (CAT) after
their graduation; both their CAT percentile and their graduation
percentage are available. As a research scholar, determine the
relationship between the CAT scores and the graduation percentage
through correlation analysis.

Name of the Student            Graduation percentage   CAT percentile
MOHAMMAD ALI JINNA                      70                   80
A.ARTHI                                 60                   85
G.MUTHUKUMARAN                          65                   70
P.MANIKANDAN                            68                   65
A.MANOHARI                              70                   69
RAFICK RAJA                             75                   89
V.VAIRAVA SUNDARI                       80                   99
V.DEEPA                                 89                   95
U.AARTHY                                90                   94
A EMILY LYDIA                           95                   98
S RAM PRAKASH                           65                   80
S PACKIYA LAKSHMI                       68                   75
C JEYASUDHA                             72                   89
C JANSI RANI                            78                   88
J SULTHANALAVUDEEN                      87                   90
A KHAN ABDUL KABAR KHAN                 91                   89
S SEENI SULTHAN MAIDEEN                 82                   94
K USMAN ALI                             84                   93

Determine the relationship between the CAT scores and the graduation
percentage through correlation analysis.

PROCEDURE:
Step 1: Once the data are entered, go to Analyze-> Correlation->
Bivariate to get this dialogue box.
Step 2: Select a CAT Percentile and their graduation percentage
and click on the ► button to move the variable into the box.
Step 3: Click Pearson check box and Flag Significant correlations
check box and two tailed
Step 4: Click Options - > select Statistics and Missing Values and
click continue.
Step 5: Click OK.

OUTPUT:
CORRELATIONS
/VARIABLES=Graduation CAT
/PRINT=TWOTAIL NOSIG
/MISSING=PAIRWISE.

Correlations
Notes
Output Created 03-Apr-2014 14:35:05
Comments
Input Active Dataset DataSet0
Filter <none>
Weight <none>
Split File <none>
N of Rows in Working
18
Data File
Missing Value Handling Definition of Missing User-defined missing values are
treated as missing.
Cases Used Statistics for each pair of variables
are based on all the cases with
valid data for that pair.
Syntax CORRELATIONS
/VARIABLES=Graduation CAT
/PRINT=TWOTAIL NOSIG
/MISSING=PAIRWISE.

Resources Processor Time 00:00:00.000


Elapsed Time 00:00:00.000

[DataSet0]

Correlations
Graduation CAT
Percentage Percentage
Graduation Percentage Pearson Correlation 1 .734**
Sig. (2-tailed) .001
N 18 18
CAT Percentage Pearson Correlation .734** 1
Sig. (2-tailed) .001
N 18 18
**. Correlation is significant at the 0.01 level (2-tailed).
Inference:
A bivariate correlation was undertaken between the CAT scores and the
graduation percentage of the students. It was hypothesized that a
relationship exists between the CAT and graduation marks. The result
shows that there is a positive relationship between the CAT and
graduation marks (r = 0.734, p < 0.05).
Result:
The relationship between the CAT scores and the graduation percentage
was determined using correlation analysis.
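
The correlation above can also be cross-checked outside SPSS. A minimal illustrative Python sketch (assuming scipy is available; not part of the prescribed SPSS procedure) on the data tabulated in the AIM:

# Cross-check of the Exercise 4 correlation (illustrative sketch).
from scipy import stats

graduation = [70, 60, 65, 68, 70, 75, 80, 89, 90, 95,
              65, 68, 72, 78, 87, 91, 82, 84]
cat        = [80, 85, 70, 65, 69, 89, 99, 95, 94, 98,
              80, 75, 89, 88, 90, 89, 94, 93]

r, p = stats.pearsonr(graduation, cat)
print(f"Pearson r = {r:.3f}, two-tailed p = {p:.3f}")   # SPSS reports r = .734, p = .001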
Ex. No. 5 HYPOTHESIS- PARAMETRIC T-TEST USING SPSS

AIM:
To determine whether the second-trial efficiency of the cars is better
than the previous trial, whether engine efficiency improves with added
ethanol, and whether engine efficiency with and without ethanol differs
between manual and automatic cars, using t-tests.
S.No.   Car   With Ethanol   Without Ethanol      S.No.   Car   With Ethanol   Without Ethanol
1        1        15              15               16       1        14              12
2        1        16              15               17       2        20              19
3        2        20              19               18       1        18              17
4        2        22              18               19       2        25              20
5        1        18              15               20       1        16              15
6        2        20              18               21       2        15              14
7        1        10              11               22       1        12              13
8        2        19              20               23       2        20              19
9        1         9               8.5             24       1        19              20
10       1         8               8               25       2        24              22
11       1         6               5.5             26       1        11              10
12       2        15              14               27       1        10               9
13       2        16              13               28       1        16              17
14       1        11              10               29       2        26              20
15       2        19              18               30       2        28              20

The earlier trial showed that the mean number of kilometres per litre
was 12. Indian Oil wants to know:
 whether the second-trial efficiency of the cars is better than the previous trial;
 whether engine efficiency improves with added ethanol;
 whether engine efficiency with and without ethanol differs between
manual and automatic cars.
PROCEDURE:
Procedure: T-test (Independent groups).
Click on the Analyze menu-> Descriptive Statistics -> Explore and
Explore dialogue box will be opened.
Select the Dependent variable (without and with ethanol) into
Dependent List and Independent variable (Car) into Factor List.
Click OK
Click on the Analyze menu-> Compare Means -> Independent
Sample T-test and Independent Sample T-test dialogue box will be
opened.
Select the variable (With Ethanol and without ethanol) in the Test
Variable List Box and click upper right arrow button to shift the
variables in variable List. Click the grouping variable (car) and
send it in Grouping Variable list box.
Click Define Groups button below Grouping Variable and define
groups sub dialogue box opens.
Enter the values Group 1: 1 and Group 2: 2 and click continue.
Click OK

OUTPUT:

T-TEST
/TESTVAL=12
/MISSING=ANALYSIS
/VARIABLES=WithEthanol
/CRITERIA=CI(.95).
T-Test
Notes
Output Created 05-Apr-2014 11:17:57
Comments
Input Active Dataset DataSet0
Filter <none>
Weight <none>
Split File <none>
N of Rows in Working
30
Data File
Missing Value Definition of Missing User defined missing values are
Handling treated as missing.
Cases Used Statistics for each analysis are
based on the cases with no
missing or out-of-range data for
any variable in the analysis.
Syntax T-TEST
/TESTVAL=12
/MISSING=ANALYSIS
/VARIABLES=WithEthanol
/CRITERIA=CI(.95).

Resources Processor Time 00:00:00.000


Elapsed Time 00:00:00.000

[DataSet0]

One-Sample Statistics
Std. Std. Error
N Mean Deviation Mean
WithEthanol 30 16.6000 5.48100 1.00069

One-Sample Test
Test Value = 12
t df Sig. (2-tailed) Mean 95% Confidence Interval of
Difference the Difference
Lower Upper
WithEthanol 4.597 29 .000 4.60000 2.5534 6.6466

T-TEST PAIRS=WithEthanol WITH Without (PAIRED)


/CRITERIA=CI(.9500)
/MISSING=ANALYSIS.

T-Test
Notes
Output Created 05-Apr-2014 11:18:30
Comments
Input Active Dataset DataSet0
Filter <none>
Weight <none>
Split File <none>
N of Rows in Working
30
Data File
Missing Value Definition of Missing User defined missing values are
Handling treated as missing.
Cases Used Statistics for each analysis are
based on the cases with no
missing or out-of-range data for
any variable in the analysis.
Syntax T-TEST PAIRS=WithEthanol
WITH Without (PAIRED)
/CRITERIA=CI(.9500)
/MISSING=ANALYSIS.

Resources Processor Time 00:00:00.016


Elapsed Time 00:00:00.015

[DataSet0]

Paired Samples Statistics


Std. Std. Error
Mean N Deviation Mean
Pair 1 WithEthanol 16.6000 30 5.48100 1.00069
Without 15.1667 30 4.38912 .80134
Paired Samples Correlations
N Correlation Sig.
Pair 1 WithEthanol& 30 .934 .000
Without

Paired Samples Test


Paired Differences
95% Confidence
Std. Interval of the
Std. Error Difference Sig. (2-
Mean Deviation Mean Lower Upper t df tailed)
Pair WithEthanol - 1.43333 2.09158 .38187 .65232 2.21434 3.753 29 .001
1 Without

EXAMINE VARIABLES=WithEthanol Without BY Car


/PLOT BOXPLOT STEMLEAF
/COMPARE GROUP
/STATISTICS DESCRIPTIVES
/CINTERVAL 95
/MISSING LISTWISE
/NOTOTAL.

Explore
Notes
Output Created 05-Apr-2014 11:19:02
Comments
Input Active Dataset DataSet0
Filter <none>
Weight <none>
Split File <none>
N of Rows in Working
30
Data File
Missing Value Definition of Missing User-defined missing values for
Handling dependent variables are treated
as missing.
Cases Used Statistics are based on cases
with no missing values for any
dependent variable or factor
used.
Syntax EXAMINE
VARIABLES=WithEthanol
Without BY Car
/PLOT BOXPLOT STEMLEAF
/COMPARE GROUP
/STATISTICS DESCRIPTIVES
/CINTERVAL 95
/MISSING LISTWISE
/NOTOTAL.

Resources Processor Time 00:00:01.123


Elapsed Time 00:00:01.357

[DataSet0]

Car
Case Processing Summary
Cases
Valid Missing Total
Car N Percent N Percent N Percent
WithEthanol Automatic 16 100.0% 0 .0% 16 100.0%
Manual 14 100.0% 0 .0% 14 100.0%
Without Automatic 16 100.0% 0 .0% 16 100.0%
Manual 14 100.0% 0 .0% 14 100.0%

Descriptives
Car Statistic Std. Error
WithEthanol Automatic Mean 13.0625 .98940
95% Confidence Lower Bound 10.9537
Interval for Mean
Upper Bound 15.1713
5% Trimmed Mean 13.1250
Median 13.0000
Variance 15.663
Std. Deviation 3.95759
Minimum 6.00
Maximum 19.00
Range 13.00
Interquartile Range 6.00
Skewness -.112 .564
Kurtosis -1.150 1.091
Manual Mean 20.6429 1.06702
95% Confidence Lower Bound 18.3377
Interval for Mean
Upper Bound 22.9480
5% Trimmed Mean 20.5476
Median 20.0000
Variance 15.940
Std. Deviation 3.99244
Minimum 15.00
Maximum 28.00
Range 13.00
Interquartile Range 6.00
Skewness .290 .597
Kurtosis -.589 1.154
Without Automatic Mean 12.5625 .98834
95% Confidence Lower Bound 10.4559
Interval for Mean
Upper Bound 14.6691
5% Trimmed Mean 12.5417
Median 12.5000
Variance 15.629
Std. Deviation 3.95337
Minimum 5.50
Maximum 20.00
Range 14.50
Interquartile Range 5.75
Skewness .058 .564
Kurtosis -.681 1.091
Manual Mean 18.1429 .70988
95% Confidence Lower Bound 16.6093
Interval for Mean
Upper Bound 19.6765
5% Trimmed Mean 18.2143
Median 19.0000
Variance 7.055
Std. Deviation 2.65611
Minimum 13.00
Maximum 22.00
Range 9.00
Interquartile Range 3.00
Skewness -.926 .597
Kurtosis -.007 1.154

Group Statistics
Std. Std. Error
Car N Mean Deviation Mean
WithEthanol Automatic 16 13.0625 3.95759 .98940
Manual 14 20.6429 3.99244 1.06702
Without Automatic 16 12.5625 3.95337 .98834
Manual 14 18.1429 2.65611 .70988

Independent Samples Test
                                           Levene's Test for              t-test for Equality of Means
                                           Equality of Variances
                                           F        Sig.      t        df       Sig.        Mean         Std. Error   95% CI       95% CI
                                                                                (2-tailed)  Difference   Difference   Lower        Upper
WithEthanol   Equal variances assumed      .188     .668    -5.213     28       .000        -7.58036     1.45426      -10.55928    -4.60143
              Equal variances not assumed                   -5.209     27.406   .000        -7.58036     1.45514      -10.56400    -4.59672
Without       Equal variances assumed      3.880    .059    -4.468     28       .000        -5.58036     1.24901       -8.13885    -3.02187
              Equal variances not assumed                   -4.586     26.371   .000        -5.58036     1.21686       -8.07994    -3.08078

TEST RESULT:
The two-tailed significance of the one-sample test is less than 0.05
(p < 0.05). Therefore there is a significant difference in engine
efficiency between the previous and current trials; the cars in the
current trial have higher engine efficiency than those in the earlier
trial, with t(29) = 4.597.
The two-tailed significance of the paired test is also less than 0.05,
so the difference between means is significant: there is a significant
difference in engine efficiency between the without-ethanol and
with-ethanol trials. The cars with the ethanol additive have higher
engine efficiency than those without, with t(29) = 3.753.
The exploratory analysis shows that normality is not violated.
Levene's test for the without-ethanol condition shows a probability
greater than 0.05 (0.059), so the population variances are treated as
approximately equal and the equal-variances row is read; that t-test
shows a significant difference between car types, t(28) = -4.468, p < 0.05.
Levene's test for the with-ethanol condition also shows a probability
greater than 0.05 (0.668), so the population variances are treated as
approximately equal; the corresponding t-test again shows a significant
difference between car types, t(28) = -5.213, p < 0.05.

Result:
From the t-tests we can conclude that the second-trial efficiency of
the cars is better than the previous trial, that engine efficiency
improves with added ethanol, and that engine efficiency, both with and
without ethanol, differs significantly between manual and automatic cars.
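
The t-tests above can be cross-checked outside SPSS. A minimal illustrative Python sketch (assuming scipy is available; not part of the prescribed SPSS procedure), using the mileage table in the AIM:

# Cross-check of the Exercise 5 t-tests with scipy (illustrative sketch).
from scipy import stats

car = [1, 1, 2, 2, 1, 2, 1, 2, 1, 1, 1, 2, 2, 1, 2,
       1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 1, 1, 2, 2]
with_ethanol = [15, 16, 20, 22, 18, 20, 10, 19, 9, 8, 6, 15, 16, 11, 19,
                14, 20, 18, 25, 16, 15, 12, 20, 19, 24, 11, 10, 16, 26, 28]
without      = [15, 15, 19, 18, 15, 18, 11, 20, 8.5, 8, 5.5, 14, 13, 10, 18,
                12, 19, 17, 20, 15, 14, 13, 19, 20, 22, 10, 9, 17, 20, 20]

# One-sample t-test against the earlier trial mean of 12 km/l (SPSS: t(29) = 4.597).
print(stats.ttest_1samp(with_ethanol, 12))

# Paired t-test, with vs. without ethanol (SPSS: t(29) = 3.753, p = .001).
print(stats.ttest_rel(with_ethanol, without))

# Independent t-tests between car types for each condition (SPSS: t(28) = -5.213 and -4.468).
for label, data in (("With ethanol", with_ethanol), ("Without", without)):
    grp1 = [m for m, c in zip(data, car) if c == 1]
    grp2 = [m for m, c in zip(data, car) if c == 2]
    print(label, stats.ttest_ind(grp1, grp2))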

Ex. No. 6 CHI SQUARE TEST USING SPSS

AIM:
To test the goodness of fit of the attitude towards US military bases
in Iraq, i.e. whether differences in frequency exist across the
response categories, by performing a chi-square test on the data below.
Attitude toward US Frequency of
military bases in Iraq response
In favour 8
Against 20
Undecided 32

PROCEDURE:
Step1: Enter the data in the data file, select the data menu.
Step 2: Click on the weight cases, to open the weight case dialogue
box.
Step 3: Click on the weight cases by radio button.
Step 4: Select the variable you require and click on button to
move the variable into the frequency variable box.
Step5: Click ok. The message weight on should appear on the
status bar at the bottom right of the application window.
Step 6: Select the analyze menu.
Step 7: Click on non-parametric and then on chi square, to open
the chi square test dialogue box.
Step 8: Click ok.
Hypothesis:

H0-Null Hypothesis:The attitude towards US military base in Iraq


is equally distributed across response categories
H1-Alternate Hypothesis: The attitude towards US military base in Iraq is
not equally distributed across response categories

Level of Significance:
The level of significance is fixed as 5% and therefore the confidence level is
95%.
OUTPUT:
NPAR TESTS
/CHISQUARE=Frequency
/EXPECTED=EQUAL
/MISSING ANALYSIS.

NPar Tests
Notes
Output Created 03-Apr-2014 14:52:38
Comments
Input Active Dataset DataSet0
Filter <none>
Weight Frequency of Response
Split File <none>
N of Rows in Working
3
Data File
Missing Value Handling Definition of Missing User-defined missing values are
treated as missing.
Cases Used Statistics for each test are based
on all cases with valid data for the
variable(s) used in that test.
Syntax NPAR TESTS
/CHISQUARE=Frequency
/EXPECTED=EQUAL
/MISSING ANALYSIS.

Resources Processor Time 00:00:00.000


Elapsed Time 00:00:00.000
Number of Cases
196608
Alloweda
a.Based on availability of workspace memory.

[DataSet0]

Chi-Square Test

Frequencies
Frequency of Response
Observed N Expected N Residual
8.00 8 20.0 -12.0
20.00 20 20.0 .0
32.00 32 20.0 12.0
Total 60
Test Statistics
Frequency of Response
Chi-Square 14.400a
Df 2
Asymp. Sig. .001
a. 0 cells (.0%) have expected frequencies
less than 5. The minimum expected cell
frequency is 20.0.
RESULT:
The asymptotic significance value is 0.001, which is less than 0.05,
and therefore the alternate hypothesis is accepted: the attitude
towards US military bases in Iraq is not equally distributed across
the response categories.
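
The goodness-of-fit statistic can be cross-checked outside SPSS. A minimal illustrative Python sketch (assuming scipy is available; not part of the prescribed SPSS procedure):

# Cross-check of the Exercise 6 chi-square goodness-of-fit test (illustrative sketch).
from scipy import stats

observed = [8, 20, 32]               # In favour, Against, Undecided
result = stats.chisquare(observed)   # equal expected frequencies (20 each) by default
print(result)                        # SPSS reports chi-square = 14.400, df = 2, p = .001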
Ex. No. 7 NON PARAMETRIC ONE WAY
ANOVA USING SPSS

AIM:
To compare the scores of CBSE students from four metro cities
in India i.e. Delhi, Kolkata, Mumbai and Chennai, using one way
ANOVA.

Questions Asked:
Vijender Gupta wants to compare the scores of CBSE students from four
metro cities of India, i.e. Delhi, Kolkata, Mumbai and Chennai. He obtained
the scores of 20 participants from each of the four metro cities based on
random sampling, collecting 80 responses in total. Also note that this is an
independent design, since the respondents are from different cities.

S.No.   City      Scores
1       Delhi     400, 450, 499, 480, 495, 300, 350, 356, 269, 298, 299, 599, 466, 591, 502, 598, 548, 459, 489, 499
2       Kolkata   389, 398, 399, 599, 598, 457, 498, 400, 300, 369, 368, 348, 499, 475, 489, 498, 399, 398, 378, 498
3       Mumbai    488, 469, 425, 450, 399, 385, 358, 299, 298, 389, 398, 349, 358, 498, 452, 411, 398, 379, 295, 250
4       Chennai   450, 400, 450, 428, 398, 359, 360, 302, 310, 295, 259, 301, 322, 365, 389, 378, 345, 498, 489, 456

Analyze through a one-way between-groups ANOVA with planned
comparisons. Calculate the F ratio along with post hoc analysis.
PROCEDURE:
Step1:Enter the data in the data file
Step2:Click on the analyze menu->compare means->one way
ANOVA and one way ANOVA dialogue box will be opened.
Step3:Select score(dependent variable)in dependent list box and
city (independent variable)in the factor as shown in the figure
below
Step4: Click contrasts push button. Contrasts sub dialogue box will
be opened. Click Continue.
Step5: Click Post Hoc, push button. Click Tukey test and click
continue. The significant level is 0.05.
Step6: Click option-> select descriptive and homogeneity of
variance test checkbox and click continue.
Step7: Click ok
Hypothesis:
H0-Null Hypothesis: There is no significant difference in scores from
different metro cities of India.
H1-Alternate Hypothesis: There is significant difference in scores from
different metro cities of India.

Level of Significance:
The level of significance is fixed as 5% and therefore the confidence level is
95%.
OUTPUT:
ONEWAY Scores BY City
/STATISTICS DESCRIPTIVES HOMOGENEITY
/MISSING ANALYSIS
/POSTHOC=TUKEY ALPHA(0.05).

Oneway
Notes
Output Created 04-Apr-2014 10:18:32
Comments
Input Data C:\Users\Kumaran\Desktop\khi.sav
Active Dataset DataSet1
Filter <none>
Weight <none>
Split File <none>
N of Rows in Working
80
Data File
Missing Value Handling Definition of Missing User-defined missing values are
treated as missing.
Cases Used Statistics for each analysis are
based on cases with no missing
data for any variable in the
analysis.
Syntax ONEWAY Scores BY City
/STATISTICS DESCRIPTIVES
HOMOGENEITY
/MISSING ANALYSIS
/POSTHOC=TUKEY
ALPHA(0.05).

Resources Processor Time 00:00:00.015


Elapsed Time 00:00:00.063

[DataSet1] C:\Users\Kumaran\Desktop\khi.sav

Descriptives
Scores
95% Confidence
Interval for Mean
Std. Std. Lower Upper
N Mean Deviation Error Bound Bound Minimum Maximum
Delhi 20 447.3500 104.69016 23.40943 398.3535 496.3465 269.00 599.00
Kolkata 20 437.8500 79.75771 17.83437 400.5222 475.1778 300.00 599.00
Mumbai 20 387.4000 67.25396 15.03844 355.9242 418.8758 250.00 498.00
Chennai 20 377.7000 68.49287 15.31547 345.6443 409.7557 259.00 498.00
Total 80 412.5750 85.54676 9.56442 393.5375 431.6125 250.00 599.00

Test of Homogeneity of Variances


Scores
Levene
Statistic df1 df2 Sig.
2.371 3 76 .077

ANOVA
Scores
Sum of
Squares df Mean Square F Sig.
Between Groups 73963.450 3 24654.483 3.716 .015
Within Groups 504178.100 76 6633.922
Total 578141.550 79
Post Hoc Tests

Multiple Comparisons
Scores
Tukey HSD
95% Confidence
Interval
(I) Name of (J) Name of Mean Std. Lower Upper
the city the city Difference (I-J) Error Sig. Bound Bound
Delhi Kolkata 9.50000 25.75640 .983 -58.1568 77.1568
Mumbai 59.95000 25.75640 .101 -7.7068 127.6068
Chennai 69.65000* 25.75640 .041 1.9932 137.3068
Kolkata Delhi -9.50000 25.75640 .983 -77.1568 58.1568
Mumbai 50.45000 25.75640 .213 -17.2068 118.1068
Chennai 60.15000 25.75640 .099 -7.5068 127.8068
Mumbai Delhi -59.95000 25.75640 .101 -127.6068 7.7068
Kolkata -50.45000 25.75640 .213 -118.1068 17.2068
Chennai 9.70000 25.75640 .982 -57.9568 77.3568
Chennai Delhi -69.65000* 25.75640 .041 -137.3068 -1.9932
Kolkata -60.15000 25.75640 .099 -127.8068 7.5068
Mumbai -9.70000 25.75640 .982 -77.3568 57.9568
*. The mean difference is significant at the 0.05 level.

Homogeneous Subsets

Scores
Tukey HSDa
Subset for alpha = 0.05
Name of the city N 1 2
Chennai 20 377.7000
Mumbai 20 387.4000 387.4000
Kolkata 20 437.8500 437.8500
Delhi 20 447.3500
Sig. .099 .101
Means for groups in homogeneous subsets are
displayed.
a. Uses Harmonic Mean Sample Size = 20.000.

RESULT:

Test Result:
 Levene's test shows that the homogeneity-of-variance statistic is not
significant (p > 0.05). We can be confident that the population
variances for each group are approximately equal.
 The F test, with degrees of freedom (3, 76), has a significance of
0.015. Since p < 0.05 we reject the null hypothesis and accept the
alternate hypothesis that there is a significant difference in scores
across the metro cities of India, F(3, 76) = 3.716, p < 0.05.
 Using Tukey HSD further, we can conclude that Delhi and Chennai have a
significant difference in their scores.
Result:
The CBSE students from the four metro cities of India, i.e. Delhi,
Kolkata, Mumbai and Chennai, have a significant difference in their
scores (significance = 0.015, p < 0.05).
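
The F statistic can be cross-checked outside SPSS. A minimal illustrative Python sketch (assuming scipy is available; not part of the prescribed SPSS procedure), using the scores tabulated in the question:

# Cross-check of the Exercise 7 one-way ANOVA (illustrative sketch).
from scipy import stats

delhi   = [400, 450, 499, 480, 495, 300, 350, 356, 269, 298,
           299, 599, 466, 591, 502, 598, 548, 459, 489, 499]
kolkata = [389, 398, 399, 599, 598, 457, 498, 400, 300, 369,
           368, 348, 499, 475, 489, 498, 399, 398, 378, 498]
mumbai  = [488, 469, 425, 450, 399, 385, 358, 299, 298, 389,
           398, 349, 358, 498, 452, 411, 398, 379, 295, 250]
chennai = [450, 400, 450, 428, 398, 359, 360, 302, 310, 295,
           259, 301, 322, 365, 389, 378, 345, 498, 489, 456]

f, p = stats.f_oneway(delhi, kolkata, mumbai, chennai)
print(f"F(3, 76) = {f:.3f}, p = {p:.3f}")   # SPSS reports F = 3.716, p = .015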
Ex. No. 8
REGRESSION USING SPSS

AIM:
To determine the effect of shelf space and price make on the sales
of pet food.
Questions Asked:
a) What contribution to both shelf space and price make to the
prediction of sales of pet food?
b) Which is the best predictor of sales of pet food?
c) Previous research has suggested that shelf space is the salient
predictor of sales of pet food. Is this hypothesis correct?

S.NO   Sales   Shelf space (in sq. mt)   Price (per kg)
1 15.00 2.10 1.00
2 15.00 1.80 1.00
3 21.00 2.20 1.00
4 28.00 2.40 2.00
5 30.00 2.50 2.00
6 35.00 2.50 2.00
7 40.00 2.60 2.00
8 35.00 3.40 3.00
9 40.00 2.50 3.00
10 45.00 3.80 3.00
11 50.00 4.40 4.00
12 60.00 5.10 4.00
13 45.00 3.90 5.00
14 60.00 5.40 5.00
15 50.00 5.50 6.00

PROCEDURE:
Step1:Select the Analyze menu
Step2:Click on regression and then on linear option to open the
linear regression dialogue box
Step3:Select a dependent variable(sales of pet food)and click on
the button to move the variable into the box
Step4:Select the independent variable(price and space)and click on
the button to move the variable into the box
Step6:Click ok
OUTPUT:
REGRESSION
/MISSING LISTWISE
/STATISTICS COEFF OUTS R ANOVA
/CRITERIA=PIN(.05) POUT(.10)
/NOORIGIN
/DEPENDENT Sales
/METHOD=ENTER Shelf_Space Price.
Regression
Notes
Output Created 09-APR-2016 09:04:29
Comments
F:\Haresh\Subjects\BAS FDP\Lab Record
Data
2016\Regression Using SPSS\Regression.sav
Active Dataset DataSet0
Filter <none>
Input
Weight <none>
Split File <none>
N of Rows in Working 15
Data File
User-defined missing values are treated as
Definition of Missing
Missing Value missing.
Handling Statistics are based on cases with no missing
Cases Used
values for any variable used.
REGRESSION
/MISSING LISTWISE
/STATISTICS COEFF OUTS R ANOVA
Syntax /CRITERIA=PIN(.05) POUT(.10)
/NOORIGIN
/DEPENDENT Sales
/METHOD=ENTER Shelf_Space Price.
Processor Time 00:00:00.00
Elapsed Time 00:00:00.00
Memory Required 1636 bytes
Resources
Additional Memory 0 bytes
Required for Residual
Plots

[DataSet0] F:\Haresh\Subjects\BAS FDP\Lab Record


2016\Regression Using SPSS\Regression.sav

Variables Entered/Removeda
Model Variables Variables Method
Entered Removed
Price, . Enter
1
Shelf_Spaceb
a. Dependent Variable: Sales
b. All requested variables entered.

Model Summary
Model R R Square Adjusted R Std. Error of
Square the Estimate
1 .922a .850 .825 6.059
a. Predictors: (Constant), Price, Shelf_Space

ANOVAa
Model Sum of df Mean F Sig.
Squares Square
Regression 2502.390 2 1251.195 34.081 .000b
1 Residual 440.543 12 36.712

Total 2942.933 14
a. Dependent Variable: Sales
b. Predictors: (Constant), Price, Shelf_Space

Coefficientsa
Model Unstandardized Standardized t Sig.
Coefficients Coefficients
B Std. Error Beta
(Constant) 2.029 5.126 .396 .699
1 Shelf_Space 10.500 3.262 .916 3.219 .007
Price .057 2.613 .006 .022 .983
a. Dependent Variable: Sales
RESULT:
Shelf space and price together predicted 85% of the movement of sales
in the supermarket (R square = 0.850, i.e. 85% of the variation in
sales is explained by shelf space and price).
Shelf space is the best predictor of sales of pet food, so the
hypothesis based on previous research is supported.
The F ratio is 34.081 with a significance value of 0.000, which is
less than the level of significance (0.05), and hence the model is a
good fit.
The significance value for shelf space is 0.007, which is less than
the level of significance (0.05), while the significance value for
price is 0.983, which is greater than 0.05; only shelf space
contributes significantly to the prediction.
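
The coefficients above can be cross-checked outside SPSS. A minimal illustrative Python sketch using ordinary least squares with numpy (assuming numpy is available; not part of the prescribed SPSS procedure):

# Cross-check of the Exercise 8 regression with numpy least squares (illustrative sketch).
import numpy as np

sales = np.array([15, 15, 21, 28, 30, 35, 40, 35, 40, 45, 50, 60, 45, 60, 50], dtype=float)
shelf = np.array([2.1, 1.8, 2.2, 2.4, 2.5, 2.5, 2.6, 3.4, 2.5, 3.8, 4.4, 5.1, 3.9, 5.4, 5.5])
price = np.array([1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6], dtype=float)

X = np.column_stack([np.ones_like(sales), shelf, price])   # intercept, shelf space, price
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
pred = X @ coef
r2 = 1 - ((sales - pred) ** 2).sum() / ((sales - sales.mean()) ** 2).sum()
print("b0, b_shelf, b_price =", np.round(coef, 3))   # SPSS: 2.029, 10.500, 0.057
print("R square =", round(r2, 3))                    # SPSS: 0.850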
MANN-WHITNEY U TEST
Ex. No. 9
USING SPSS

AIM:
To create an SPSS data set and to check whether there is difference
existing in the sales of two retail outlets using Mann Whitney U test.
Question Asked:
a) Create an SPSS data file.
b) Check whether a difference exists in the sales of the two retail
outlets using the Mann-Whitney U test.
S No Retail Outlets Sales(in lacs)
1 1 40
2 2 30
3 1 60
4 1 45
5 2 55
6 2 25
7 2 60
8 1 80
9 2 100
10 1 20
11 2 10
12 1 80
13 1 85
14 2 90

PROCEDURE:
Step1: Enter the data in the data file.
Step 2: Click analyze menu-> non parametric test->2 independent
samples. Dialogue box will be open
Step 3: Select dependent variables and grouping variables. Select
Mann Whitney U check box in test type box.
Step4 :Define group buttons
Step 5: Sub value dialog box opens. Enter the value. Click ok.

OUTPUT:
GET
FILE='F:\Haresh\Subjects\BAS FDP\Lab Record
2016\SPSS\Exercise 8 - Mann Whiteney U test\Mann Whitney
Test 2.sav'.
DATASET NAME DataSet1 WINDOW=FRONT.
NPAR TESTS
/M-W= Sales BY Retail(1 2)
/MISSING ANALYSIS.

NPar Tests

Notes
Output Created 15-APR-2016 21:09:50
Comments
F:\Haresh\Subjects\BAS
FDP\Lab Record
Data 2016\SPSS\Exercise 8 - Mann
Whiteney U test\Mann Whitney
Test 2.sav
Input Active Dataset DataSet1
Filter <none>
Weight <none>
Split File <none>
N of Rows in Working Data 20
File
User-defined missing values are
Definition of Missing
treated as missing.
Missing Value
Statistics for each test are based
Handling
Cases Used on all cases with valid data for
the variable(s) used in that test.
NPAR TESTS
Syntax /M-W= Sales BY Retail(1 2)
/MISSING ANALYSIS.
Processor Time 00:00:00.00
Resources Elapsed Time 00:00:00.01
Number of Cases Alloweda 112347
a. Based on availability of workspace memory.

[DataSet1] F:\Haresh\Subjects\BAS FDP\Lab Record


2016\SPSS\Exercise 8 - Mann Whiteney U test\Mann Whitney
Test 2.sav
Mann-Whitney Test

Ranks
            Retail    N    Mean Rank   Sum of Ranks
Sales       Delhi    10      11.90        119.00
            Mumbai   10       9.10         91.00
            Total    20

Test Statistics(a)
            Mann-Whitney U   Wilcoxon W      Z      Asymp. Sig. (2-tailed)   Exact Sig. [2*(1-tailed Sig.)]
Sales           36.000          91.000    -1.061            .288                       .315(b)
a. Grouping Variable: Retail
b. Not corrected for ties.

RESULT:
The result was not significant, z = -1.061, p > 0.05 (p = 0.288);
there is no significant difference in the sales of the two retail stores.
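
A minimal illustrative Python sketch of the same test (assuming scipy is available; not part of the prescribed SPSS procedure). It is run on the 14 observations tabulated in the AIM; the SPSS output above was produced from a different, 20-case data file, so its U, Z and p values will not match this exactly.

# Mann-Whitney U test on the 14 observations tabulated in the AIM (illustrative sketch).
from scipy import stats

outlet1 = [40, 60, 45, 80, 20, 80, 85]
outlet2 = [30, 55, 25, 60, 100, 10, 90]

result = stats.mannwhitneyu(outlet1, outlet2, alternative="two-sided")
print(result)   # a p-value above 0.05 indicates no significant difference in sales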
Ex. No. 10 HISTOGRAM, RANK AND
PERCENTILE

AIM:
The operating costs of the vehicle used by your company’s sales
people are too high. The major
component of operating expense is fuel cost; to analyze fuel costs,
you collect mileage data from
the company’s two wheelers for the previous month. Use Excel
sheet for calculation.

MPL 27 29 33 21 21 12 16 25 8 17 24 34 38 15 19 19 41

• Summary statistics.
• Histogram analysis.
• Rank and Percentile analysis tool.

PROCEDURE:
Step1: Enter the data in excel file.
Step 2: File – Options – the Excel Options window appears – select the
Add-Ins menu – choose Excel Add-ins in the Manage drop-down box –
press the Go button
Step 3: The Add-Ins window appears – choose the Analysis ToolPak to
install the Data Analysis menu – press OK
Step4 : To select Summary Statistics – go to Data Menu in ribbon –
Select Data Analysis – Data Analysis Window appears- choose
Descriptive Statistics – Select input range in descriptive statistics
dialogue box – set your output range – choose summary statistics
option and click OK
Step 5: To select Histogram analysis – go to Data Menu in ribbon –
Select Data Analysis – Data Analysis Window appears- choose
Histogram analysis – Select input range in Histogram analysis
dialogue box – set your output range – choose Chart Output option
and click OK
Step 6: To select Rank and Percentile analysis tool – go to Data
Menu in ribbon – Select Data Analysis – Data Analysis Window
appears- choose Rank and Percentile analysis tool – Select input
range in Rank and Percentile analysis tool box – set your output
range – click OK
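
The same summary statistics, histogram counts and ranks can be cross-checked outside Excel. A minimal illustrative Python sketch (assuming numpy is available; the bin edges below are an assumption for illustration, not taken from the exercise):

# Summary statistics, histogram and ranks for the mileage data (illustrative sketch).
import numpy as np

mpl = np.array([27, 29, 33, 21, 21, 12, 16, 25, 8, 17, 24, 34, 38, 15, 19, 19, 41], dtype=float)

print("count =", mpl.size, "mean =", round(mpl.mean(), 2),
      "std dev =", round(mpl.std(ddof=1), 2),
      "min =", mpl.min(), "max =", mpl.max(), "median =", np.median(mpl))

# Histogram with bins of width 10 (similar in spirit to the Histogram analysis tool).
counts, edges = np.histogram(mpl, bins=[0, 10, 20, 30, 40, 50])
print("bin edges:", edges, "frequencies:", counts)

# Rank (1 = largest), as in Excel's Rank and Percentile tool.
order = np.argsort(-mpl)
for rank, idx in enumerate(order, start=1):
    print(rank, mpl[idx])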
OUTPUT:
Ex. No. 11 RATE OF INTEREST

AIM:
To determine the rate of interest of Loan

Suppose we have availed a loan of Rs.1, 00,000 that is to be paid


off in 48 monthly installments
of rupees 3,000 each.

PROCEDURE:
Step1: Enter the data in excel file to find the rate of Interest.
Step 2:NPER = 48 months , PMT = -3000 , PV = 100000
Step 3: Find the answer by using the Formula =Rate(NPER, PMT,
PV)
Step 4: By using the window style – select the Formulas menu – choose
the Financial menu – select the RATE option by double-clicking
Step 5: The RATE dialogue box appears – provide the same details as
NPER, PMT and PV – press OK
OUTPUT:
RESULT:
The result obtained is that the rate of interest charged on the bank
loan is 2%.

Ex. No. 12 FUTURE VALUE

AIM:
To determine the Future Value

You deposit Rs.1,000 each and every month in your bank account.
The bank pays 12% annual
rate that is compound every month. Find out how much money will
be in your account at the
end of 24 months.
PROCEDURE:
Step1: Enter the data in excel file to find the Future value of
Investment.
Step 2: RATE = 12% , NPER = 24 , PMT = -1000
Step 3: Find the answer by using the Formula
=FV(RATE/12,NPER, PMT)
Step4 : By using the Window style – Select Formulas Menu –
Choose Financial Menu – Select FV option by double click
Step 5: The FV dialogue box appears – provide the same details as
RATE, NPER and PMT – press OK
OUTPUT:

RESULT:
The result obtained is that the future value of the investment is
Rs. 26,973.46.
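
The FV result can be reproduced directly from the annuity formula. A minimal illustrative Python sketch (not part of the prescribed Excel procedure):

# Future value of a monthly annuity (illustrative sketch of the formula behind Excel's FV).
rate = 0.12 / 12          # 12% annual rate, compounded monthly
nper = 24                 # number of monthly deposits
pmt = 1000                # deposited at the end of each month

fv = pmt * ((1 + rate) ** nper - 1) / rate
print(round(fv, 2))       # Excel FV(12%/12, 24, -1000) gives 26973.46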
Ex. No. 13 CALCULATION OF TIME

AIM:
To determine NPER

You can afford only Rs.500/- per month. If you are crediting this
amount in a bank that pays an annual interest of 12% compounded
monthly. How long will it take for your investment to accumulate
to Rs.50, 000?

PROCEDURE:
Step 1: Enter the data in the Excel file to find the number of periods (NPER).
Step 2: RATE = 12%, PMT = -500, PV = 0, FV = 50000
Step 3: Find the answer by using the formula =NPER(RATE/12, PMT, PV, FV)
Step 4: By using the window style – select the Formulas menu – choose
the Financial menu – select the NPER option by double-clicking
Step 5: The NPER dialogue box appears – provide the same details as
RATE, PMT, PV and FV – press OK
OUTPUT:
RESULT:
The result obtained is that the time taken for the investment to
accumulate to Rs. 50,000 is about 70 months.
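
The number of periods can be reproduced by solving the annuity future-value formula for n. A minimal illustrative Python sketch (not part of the prescribed Excel procedure):

# Number of monthly deposits needed to reach the target amount (illustrative sketch of NPER).
import math

rate = 0.12 / 12          # monthly interest rate
pmt = 500                 # monthly deposit
fv = 50000                # target amount

# FV of an annuity: fv = pmt * ((1 + rate)**n - 1) / rate, solved for n.
n = math.log(1 + fv * rate / pmt) / math.log(1 + rate)
print(round(n, 1))        # about 69.7, i.e. roughly 70 months as reported above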
Ex. No. 14 EMI CALCULATION

AIM:
To determine the calculation for EMI

Suppose if you want to take a loan of Rs.2, 00,000 at an annual


interest rate of 14%. The loan
has to be repaid in 15 years in equal monthly installments. Find out
the EMI.

PROCEDURE:
Step1: Enter the data in the Excel file to find the EMI.
Step 2: RATE = 14% , NPER = 15, PV = - 200000
Step 3: Find the answer by using the Formula
=PMT(RATE/12,NPER*12,PV)
Step4 : By using the Window style – Select Formulas Menu –
Choose Financial Menu – Select PMT option by double click
Step 5: The PMT dialogue box appears – provide the same details as
RATE, NPER and PV – press OK
OUTPUT:

RESULT:
The result obtained is that the EMI to be paid monthly is Rs. 2,663.43.
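
The EMI can be reproduced from the standard loan-installment formula. A minimal illustrative Python sketch (not part of the prescribed Excel procedure):

# EMI for a loan repaid in equal monthly installments (illustrative sketch of the PMT formula).
principal = 200000
rate = 0.14 / 12          # monthly interest rate
nper = 15 * 12            # number of monthly installments

emi = principal * rate / (1 - (1 + rate) ** -nper)
print(round(emi, 2))      # Excel PMT(14%/12, 180, -200000) gives about 2663.43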

Ex. No. 15 NPV WITHOUT INVESTMENT

AIM:
To determine the calculation for NPV

You are expected to get 5 monthly payments of Rs.500, 900, 550,


478, 950 respectively. At the discount rate of 10% per annum. Find
the Net Present Value (NPV).

PROCEDURE:
Step1: Enter the data in excel file to find NPV.
Step 2:RATE = 10% , PMT = 500,900,550,478,950
Step 3: Find the answer by using the Formula =NPV(RATE, PMT)
Step4 : By using the Window style – Select Formulas Menu –
Choose Financial Menu – Select NPV option by double click
Step 5: NPV dialogue box appears – Provide the same details as
RATE and PMT – Press OK
OUTPUT:

RESULT:
The result obtained is that the net present value of the cash flows is
Rs. 2,527.93.
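
The NPV can be reproduced by discounting each payment by one period per step, as Excel's NPV function does. A minimal illustrative Python sketch (not part of the prescribed Excel procedure):

# Net present value of the five payments at a 10% discount rate (illustrative sketch).
rate = 0.10
cashflows = [500, 900, 550, 478, 950]   # received at the end of periods 1..5

npv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))
print(round(npv, 2))     # Excel NPV(10%, 500, 900, 550, 478, 950) gives 2527.93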
Ex. No. 16 IRR CALCULATION

AIM:
To determine the calculation for IRR

Assume that an initial investment of Rs. 1,00,000 results in 12 annual
cash inflows as given below. Find the internal rate of return (IRR).

13,200, 15,000, 13,000, 2,000, 12,400, 16,000, 14,000, 16,450,
17,690, 16,550, 16,500 and 12,200.

PROCEDURE:
Step1: Enter the data in excel file to find Rate of Return.
Step 2: Values = (-capital : values )
Step 3: Find the answer by using the Formula
=IRR(value1,value2…value12)
Step4 : By using the Window style – Select Formulas Menu –
Choose Financial Menu – Select IRR option by double click
Step 5: IRR dialogue box appears – Provide the same values –
Press OK
OUTPUT:
RESULT:
The result obtained is that the IRR is 8%.
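
The IRR is the rate at which the NPV of the whole cash-flow stream is zero; Excel finds it iteratively. A minimal illustrative bisection sketch in Python (not part of the prescribed Excel procedure):

# Internal rate of return for the cash-flow stream (illustrative bisection sketch).
cashflows = [-100000, 13200, 15000, 13000, 2000, 12400, 16000,
             14000, 16450, 17690, 16550, 16500, 12200]

def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

lo, hi = 0.0, 1.0                      # the IRR is assumed to lie between 0% and 100%
for _ in range(100):                   # bisection on the sign of the NPV
    mid = (lo + hi) / 2
    if npv(mid, cashflows) > 0:
        lo = mid
    else:
        hi = mid
print(f"IRR = {mid:.2%}")              # Excel IRR reports about 8%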
Ex. No. 17 CHI SQUARE USING EXCEL

AIM:
To determine the value of Chi Square

A leading financial services company wants to find if there is


association between Investment
preference and geographic region. The Business analyst interviews
a random sample of 250
individual investors from the three states -Tamilnadu, Kerala and
Karnataka - and the records of
the observed frequencies are as shown below.

State \ Instrument   Bank   Gold   Land   Stock market
Tamilnadu             72      8     12        23
Kerala                26     10     16        33
Karnataka              7     10     14        19

Using Chi-Square test, find out if there is association between


investment instrument and region.

PROCEDURE:
Step1: Enter the data in excel file to find CHI SQUARE.
Step 2: First table indicates Observed Frequency
Step 3: Create a secondary table of expected frequencies using the
formula = (Row total for the state * Column total for the instrument) / Grand total
Step4 : Compare both observed and expected by the formula =
CHITEST(Actual range, expected range)
Step 5: Check the Significance Relation
OUTPUT:

RESULT:
The p-value obtained is 0.0000001308, which is less than the
significance level of 0.05, indicating that there is a significant
association between region and investment instrument.
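
The test of association can be cross-checked outside Excel. A minimal illustrative Python sketch (assuming scipy is available; not part of the prescribed Excel procedure):

# Cross-check of the Exercise 17 chi-square test of association (illustrative sketch).
from scipy import stats

observed = [[72, 8, 12, 23],    # Tamilnadu: Bank, Gold, Land, Stock market
            [26, 10, 16, 33],   # Kerala
            [7, 10, 14, 19]]    # Karnataka

chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.10f}")
# Excel's CHITEST on the same observed and expected tables gives p = 0.0000001308.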
Ex. No. 18 LPP USING POM

AIM:
Solve the following linear programming problem using Simplex
Method.

Maximize Z = 6 X1 + 8 X2
Subject to
5 X1 + 10 X2 < 60
4 X1 + 4 X2 < 40
X1, X2 > 0
To create Linear Programming by POM Application.
PROCEDURE:
Step 1: Start - all programs – POM shortcut – POM
window appears
Step 2: Select modules – Linear Programming
Step 3: To Provide Title1 – Decision Variables as 2 –
Constraints as 4 – save the document
Step 4: Provide the input values - click solve menu
Step 5: Save the document

OUTPUT:

RESULT:
The LPP solved by the simplex method gives the optimal solution
X1 = 8, X2 = 2, with a maximum value of Z = 64.
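
The same LPP can be cross-checked outside POM. A minimal illustrative Python sketch (assuming scipy is available; not part of the prescribed POM procedure):

# Cross-check of the Exercise 18 LPP with scipy.optimize.linprog (illustrative sketch).
from scipy.optimize import linprog

# Maximize Z = 6x1 + 8x2  ==  minimize -6x1 - 8x2
c = [-6, -8]
A_ub = [[5, 10],      # 5x1 + 10x2 <= 60
        [4, 4]]       # 4x1 +  4x2 <= 40
b_ub = [60, 40]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("x1, x2 =", res.x, "max Z =", -res.fun)   # expected: x1 = 8, x2 = 2, Z = 64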
Ex. No. 19 ASSIGNMENT PROBLEM USING
POM

AIM:
Solve the following Assignment problem.

The following matrix gives the cost involved to perform jobs 1,2
and 3 operators A,B and C.
Assign the operators and jobs to minimize the total time taken to
complete the jobs

Operator Job1 Job2 Job 3


A 10 16 7
B 9 17 6
C 6 13 5

PROCEDURE:
Step 1: Start - all programs – POM shortcut – POM
window appears
Step 2: Select modules – Assignment Problem
Step 3: Provide the problem title – number of objects – the Minimize
option is to be selected – save the document
Step 4: Provide the input values - click solve menu
Step 5: Save the document

OUTPUT:

*** LINEAR PROGRAMMING ***


=========================================================
=================
PROBLEM NAME: Asssignment
=========================================================
=================
Min Z= 10 X1 + 16 X2 + 7 X3 + 9 X4 + 17 X5 + 6 X6 + 6 X7
+ 13 X8 + 5 X9
ST
(1) 1 X1 + 1 X2 + 1 X3 = 1
(2) 1 X4 + 1 X5 + 1 X6 = 1
(3) 1 X7 + 1 X8 + 1 X9 = 1
(4) 1 X1 + 1 X4 + 1 X7 = 1
(5) 1 X2 + 1 X5 + 1 X8 = 1
(6) 1 X3 + 1 X6 + 1 X9 = 1

=========================================================
=================
SOLUTION:
=========================================================
=================
ITERATION NUMBER 10

VARIABLE MIX SOLUTION


------------ --------
X2 1.000
X8 0.000
X6 1.000
X4 0.000
X7 1.000
Artificial 6 0.000

Z 28.000

Assignment Problem Solution

JOB 1 JOB 2 JOB 3


OPERATOR 0 1 0
OPERATOR 0 0 1
OPERATOR 1 0 0

Total cost or profit is $ 28

=========================================================
=================

RESULT:
Thus the jobs are assigned as Operator A – Job 2, Operator B – Job 3,
Operator C – Job 1, and the total cost is Rs. 28.
Ex. No. 20 EOQ using POM

AIM:
Alpha industry needs 15,000 units/year of a bought-out component which
will be used in its main product. The ordering cost is Rs. 125 per
order and the carrying cost per unit per year is 20% of the purchase
price per unit, which is Rs. 75.
Find a. Economic order quantity
b. Number of orders per year
c. Time between successive orders

PROCEDURE:
Step 1: Start - all programs – POM shortcut – POM window
appears
Step 2: Select modules – Fixed order Quantity Inv. Model
Step 3: Provide Problem Title – Select Model I Basic Economic
Order Quantity in General Tab
Step 4: Provide the input values in the Production tab – Annual
Demand = 36000 units – Average Ordering Cost = 125 – Carrying Cost
C = 20% of 75 = 15 – click the Solve menu
Step 5: Save the document
OUTPUT:

*** FIXED ORDER QUANTITY INVENTORY MODEL ***


---------------------------------------------------------
-----------------
PROBLEM NAME: Untitled
---------------------------------------------------------
-----------------

MODEL I -- Basic Economic Order Quantity (EOQ)

Annual Demand (units per year) D = 36000


Average Ordering Cost ($ per order) S = 25
Carrying Cost ($ per unit per year) C = 15

Order Quantity = 346.41

Total Annual Stocking Cost = $5,196.15

Expected # of Orders per Year = 103.923

Maximum Inventory Level = 346.41

---------------------------------------------------------
-----------------
RESULT:
a. Economic order quantity is 346.41
b. Number of orders per year is 103.923
c. Time between successive orders is 0.1
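
The EOQ figures follow from the standard formula EOQ = sqrt(2DS/C). A minimal illustrative Python sketch (not part of the prescribed POM procedure); the inputs used below are the ones shown in the POM output listing above (D = 36000, S = 25, C = 15), which differ from the figures quoted in the AIM (with D = 15,000 and S = Rs. 125 the same formula gives EOQ = 500):

# EOQ computed from EOQ = sqrt(2*D*S/C) (illustrative sketch, inputs as in the POM output).
import math

D = 36000    # annual demand (units per year)
S = 25       # ordering cost per order
C = 15       # carrying cost per unit per year (20% of the Rs. 75 unit price)

eoq = math.sqrt(2 * D * S / C)
orders_per_year = D / eoq
time_between_orders = 12 / orders_per_year    # in months

print(round(eoq, 2), round(orders_per_year, 3), round(time_between_orders, 3))
# POM reports EOQ = 346.41 and 103.923 orders per year.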

Ex. No. 21 LPP USING GRAPHICAL


METHOD IN TORA

AIM:
Solve the following linear programming problem using Graphical
Method.

Maximize Z = 2 X1 + 3 X2

Subject to
X1 + X 2 > 6
7 X1 + X2 > 14
X1, X2 > 0
PROCEDURE:
Step 1: TORA – Main menu – Linear Programing – Press
Go to Input Screen
Step 2: Select Problem Title for appropriate title – Provide
No of Variables as 2 and Constraints as 2– Enter
Step 3: Enter the project title
Step 4: Enter the Maximize constraint and the other 2
constraints in the provided table
Step 5: Enter solve problem from menu solve/modify menu–
select solve menu – Graphical Method
Step 6: Finish the problem

OUTPUT:
RESULT:
The result is unbounded
Ex. No. 22 TRANSPORTATION USING
LEAST COST METHOD IN TORA

AIM:
Find the feasible solution for the transportation problem using
Least cost method:

From \ To     D     E     F    Supply
A             6     4     1      50
B             3     8     7      40
C             4     4     2      60
Demand       20    95    35     150

PROCEDURE:
Step 1: TORA – Main menu – Transportation Model –
Press Go to Input Screen
Step 2: Select Problem Title for appropriate title – No of
Sources = 3 and Destinations = 3
Step 3: Enter the Demand and Supply Numbers – press
solve menu
Step 4: Solve/Modify Menu – Go to solve Problem –
Iterations – Least cost Standing solution
Step 5: Finish the problem

OUTPUT:

RESULT:
The least cost solution for the transportation problem allocates
A to E and F, B to D and E, and C to E, with an objective value of Rs. 555.
Ex. No. 23 TRANSPORTATION USING
NORTH WEST CORNER RULE IN
TORA

AIM:
Find the feasible solution for the transportation problem using
North –West Corner rule:

From To D E F Supply
A 6 4 1 50
B 3 8 7 40
C 4 4 2 60
Demand 20 95 35 150

PROCEDURE:
Step 1: TORA – Main menu – Transportation Model –
Press Go to Input Screen
Step 2: Select Problem Title for appropriate title – No of
Sources = 3 and Destinations = 3
Step 3: Enter the Demand and Supply Numbers – press
solve menu
Step 4: Solve/Modify Menu – Go to solve Problem –
Iterations – North West Corner Starting Solution
Step 5: Finish the problem
OUTPUT:
RESULT:
The North West corner solution allocates A to D and E, B to E, and
C to E and F, with an objective value of Rs. 730.

Ex. No. 24
PERT USING TORA

AIM:
R and D project has a list of tasks to be performed whose time
estimates are given in the table.
Draw the network diagram for the R&D project.

Activity   Activity name   To (days)   Tm (days)   Tp (days)
1-2 A 4 6 8
1-3 B 2 3 10
1-4 C 6 8 16
2-4 D 1 2 3
3-4 E 6 7 8
3-5 F 6 7 14
4-6 G 3 5 7
4-7 H 4 11 12
5-7 I 2 4 6
6-7 J 2 9 10

PROCEDURE:
Step 1: TORA – Main menu – Project planning – PERT
Step 2: Select Enter new problem – Decimal notation as
normal – Enter Go to Input Screen
Step 3: Enter the project title
Step 4: Enter the From node and To node – activity symbol – To, Tm,
Tp durations – select the Solve menu – save the document
Step 5: Enter solve problem from menu solve/modify menu
Step 6: Finish the problem

NETWORK DIAGRAM:
[Network diagram: nodes 1 to 7, with activities A (1-2), B (1-3),
C (1-4), D (2-4), E (3-4), F (3-5), G (4-6), H (4-7), I (5-7) and J (6-7).]
OUTPUT:
RESULT:
The PERT analysis gives the longest (critical) path with a duration of 24 days.
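
The PERT result can be reproduced from the expected-time formula te = (To + 4Tm + Tp)/6 and a forward pass over the network. A minimal illustrative Python sketch (not part of the prescribed TORA procedure):

# PERT expected times and the longest (critical) path duration (illustrative sketch).
activities = {   # (from node, to node): (To, Tm, Tp)
    (1, 2): (4, 6, 8),  (1, 3): (2, 3, 10), (1, 4): (6, 8, 16),
    (2, 4): (1, 2, 3),  (3, 4): (6, 7, 8),  (3, 5): (6, 7, 14),
    (4, 6): (3, 5, 7),  (4, 7): (4, 11, 12), (5, 7): (2, 4, 6),
    (6, 7): (2, 9, 10),
}

te = {arc: (a + 4 * m + b) / 6 for arc, (a, m, b) in activities.items()}

# Earliest event times by a forward pass over the nodes in increasing order.
earliest = {1: 0.0}
for node in range(2, 8):
    earliest[node] = max(earliest[i] + te[(i, j)] for (i, j) in te if j == node)

print(te)
print("project duration =", earliest[7])    # expected: 24 days, matching the TORA result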
Ex. No. 25
CPMUSING TORA

AIM:
A project schedule has the following characteristics as shown in
Table given below

Activity   Name   Time (days)        Activity   Name   Time (days)
1-2        A          4              5-6        G          4
1-3        B          1              5-7        H          8
2-4        C          1              6-8        I          1
3-4        D          1              7-8        J          2
3-5        E          6              8-10       K          5
4-9        F          5              9-10       L          7

1. Compute Teand TL for each activity.


2. Find the critical path.
3. Calculate the project duration.
PROCEDURE:
Step 1: TORA – Main menu – Project planning – CPM
Step 2: Select Enter new problem – Decimal notation as
normal – Enter Go to Input Screen
Step 3: Enter the project title
Step 4: Enter the From Node and To Node – Activity
Symbol - Duration - Enter Solve menu – Save the document
Step 5: Enter solve problem from menu solve/modify menu
Step 6: Finish the problem

NETWORK DIAGRAM:

[Network diagram: nodes 1 to 10, with activities A (1-2), B (1-3),
C (2-4), D (3-4), E (3-5), F (4-9), G (5-6), H (5-7), I (6-8),
J (7-8), K (8-10) and L (9-10).]

OUTPUT:
RESULT:
The results of the CPM program are:
1. TE = 0 and TL = 22
2. The critical path is B-E-H-J-K
3. The project duration is 22 days
