
Problem 10-31

Ken Howard, financial analyst at KMW Corporation, is examining the behavior of quarterly maintenance costs for budgeting purposes. Howard collects the following data on machine-hours worked and maintenance costs for the past 12 quarters:

Quarter   Machine-Hours   Maintenance Costs
1         100,000         $205,000
2         120,000         $240,000
3         110,000         $220,000
4         130,000         $260,000
5         95,000          $190,000
6         115,000         $235,000
7         105,000         $215,000
8         125,000         $255,000
9         105,000         $210,000
10        125,000         $245,000
11        115,000         $200,000
12        140,000         $280,000

1. Estimate the cost function for the quarterly data using the high-low method.

                    Total Cost   Machine-Hours
High (Quarter 12)   $280,000     140,000
Low (Quarter 5)     $190,000     95,000
Difference          $90,000      45,000

Variable cost per machine-hour = $90,000 / 45,000 = $2.00

Total fixed costs, using the high point: $280,000 - ($2.00 x 140,000) = $0 (no fixed costs)
Total fixed costs, using the low point:  $190,000 - ($2.00 x 95,000) = $0 (no fixed costs)

Estimated cost function: maintenance costs = $2.00 per machine-hour (no fixed costs)
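The high-low computation above can be sketched in Python (an illustrative sketch; the function and variable names are my own, not part of the solution):

```python
# High-low method: fit y = fixed + rate * x through the highest- and
# lowest-activity observations only.

def high_low(hours, costs):
    """Return (fixed_cost, variable_rate) using the high-low method."""
    hi = max(range(len(hours)), key=lambda i: hours[i])
    lo = min(range(len(hours)), key=lambda i: hours[i])
    variable_rate = (costs[hi] - costs[lo]) / (hours[hi] - hours[lo])
    fixed_cost = costs[hi] - variable_rate * hours[hi]
    return fixed_cost, variable_rate

machine_hours = [100_000, 120_000, 110_000, 130_000, 95_000, 115_000,
                 105_000, 125_000, 105_000, 125_000, 115_000, 140_000]
maintenance = [205_000, 240_000, 220_000, 260_000, 190_000, 235_000,
               215_000, 255_000, 210_000, 245_000, 200_000, 280_000]

fixed, rate = high_low(machine_hours, maintenance)
print(fixed, rate)  # 0.0 2.0 -> y = $2 per machine-hour, no fixed costs
```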

2. Plot and comment on the estimated cost function.

Quarter   Machine    Actual Maintenance   Estimated Maintenance   Error
          Hours      Costs (A)            Costs (E)               (A - E)
1         100,000    $205,000             $200,000                $5,000
2         120,000    $240,000             $240,000                $0
3         110,000    $220,000             $220,000                $0
4         130,000    $260,000             $260,000                $0
5         95,000     $190,000             $190,000                $0
6         115,000    $235,000             $230,000                $5,000
7         105,000    $215,000             $210,000                $5,000
8         125,000    $255,000             $250,000                $5,000
9         105,000    $210,000             $210,000                $0
10        125,000    $245,000             $250,000                ($5,000)
11        115,000    $200,000             $230,000                ($30,000)
12        140,000    $280,000             $280,000                $0

See P10-31 (Chart).

There appears to be a clear-cut relationship between machine-hours and maintenance costs. The high-low line appears to "fit" the data well: the vertical differences between the actual and predicted costs appear to be quite small.

3. Howard anticipates that KMW will operate machines for 100,000 hours in quarter 13. Calculate the predicted maintenance costs in quarter 13 using the cost function estimated in requirement 1.

Estimated maintenance costs = 100,000 x $2 = $200,000
At 90,000 hours: 90,000 x $2 = $180,000

[Chart P10-31: maintenance costs ($0 to $300,000) plotted against machine-hours (0 to 160,000), with a linear trendline through the estimated costs (ECosts).]

Exercise 10-40

Fashion Bling operates a chain of 10 retail department stores. Each department store makes its own purchasing decisions. Barry Lee, assistant to the president of Fashion Bling, is interested in better understanding the drivers of purchasing department costs. For many years, Fashion Bling has allocated purchasing department costs to products on the basis of the dollar value of merchandise purchased. A $100 item is allocated 10 times as many overhead costs associated with the purchasing department as a $10 item.

Barry Lee recently attended a seminar titled "Cost Drivers in the Retail Industry." In a presentation at the seminar, a leading competitor that has implemented an ABC system reported that "number of purchase orders" and "number of suppliers" were the two most important cost drivers of purchasing department costs. The dollar value of merchandise purchased in each purchase order was not found to be a significant cost driver. Barry Lee interviewed several members of the Purchasing Department at Fashion Bling's Miami store. They believed that the competitor's conclusions regarding cost drivers for purchasing costs also applied to their Purchasing Department. Mr. Barry Lee collects the following data for the most recent year for Fashion Bling's 10 retail department stores:

              $ Value of      Number of    Number of   Purchasing
              Merchandise     Purchase     Suppliers   Department
Department    Purchased       Orders       (# of S)    Costs
Store         (MP$)           (# of PO)                (PDC)
Baltimore     $68,307,000     4,345        125         $1,522,000
Chicago       33,463,000      2,548        230         1,095,000
LA            121,800,000     1,420        8           542,000
Miami         119,450,000     5,935        188         2,053,000
NYC           33,575,000      2,786        21          1,068,000
Phoenix       29,836,000      1,334        29          517,000
Seattle       102,840,000     7,581        101         1,544,000
St. Louis     38,725,000      3,623        127         1,761,000
Toronto       139,300,000     1,712        202         1,605,000
Vancouver     130,110,000     4,736        196         1,263,000

Lee decides to use simple regression analysis to examine whether one or more of the three variables is a reasonable cost driver of purchasing department costs. Summary results for these regressions are as follows:

Regression 1: PDC = a + (b x MP$)

Regression 1 (MP$)

[Scatter chart: Purchase Dept. Costs ($0 to $2,500,000) vs. Dollar Value of Merchandise Purchased ($0 to $160,000,000).]

SUMMARY OUTPUT (Data tab; Data Analysis; Regression)

Regression 1: PDC = $1,041,421.37 + $0.003126704 (MP$)

Regression Statistics
Multiple R           0.282507267
R Square             0.079810356
Adjusted R Square    (0.035213350)
Standard Error       510,550.35
Observations         10

ANOVA
             df   SS            MS            F             Significance F
Regression   1    1.80863E+11   1.80863E+11   0.693860065   0.429019986
Residual     8    2.08529E+12   2.60662E+11
Total        9    2.26616E+12

            Coefficients    Standard Error   t Stat        P-value       Lower 95%       Upper 95%
Intercept   1,041,421.366   346,708.547      3.003737214   0.016974731   241,910.02      1,840,932.71
(MP$)       0.003126704     0.003753624     0.832982632   0.429019986   (0.005529169)   0.011782576

From a t-table for df = 8 and one tail equal to .025 (95% confidence interval), the t-value is 2.306004135. Therefore the intercept confidence interval is $1,041,421.37 +/- (2.306 x $346,708.55), i.e. $241,910.02 to $1,840,932.71, and the b-coefficient confidence interval is 0.003126704 +/- (2.306 x 0.003753624), i.e. (0.005529169) to 0.011782576.

For a normal curve: "+/-" 1.64 standard errors for a 90% confidence interval, 1.96 for 95%, and 2.58 for 99%.

Std. error of the regression (pg. 368): on average, actual Y and predicted Y differ by this amount; the smaller the std. error, the better the goodness of fit.
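Regression 1 can be checked with ordinary least squares in numpy (an illustrative sketch; the variable names are my own, not part of the solution):

```python
# Regress PDC on MP$ for the 10 stores and recover intercept, slope, and r2.
import numpy as np

mp = np.array([68_307_000, 33_463_000, 121_800_000, 119_450_000, 33_575_000,
               29_836_000, 102_840_000, 38_725_000, 139_300_000, 130_110_000], float)
pdc = np.array([1_522_000, 1_095_000, 542_000, 2_053_000, 1_068_000,
                517_000, 1_544_000, 1_761_000, 1_605_000, 1_263_000], float)

X = np.column_stack([np.ones_like(mp), mp])       # intercept column + MP$
beta, *_ = np.linalg.lstsq(X, pdc, rcond=None)
a, b = beta                                       # intercept, slope
resid = pdc - X @ beta
r2 = 1 - resid @ resid / np.sum((pdc - pdc.mean()) ** 2)
print(a, b, r2)  # ~ 1,041,421.37, 0.003126704, 0.0798
```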

Criterion: Regression 1 (MP$)

1. Economic Plausibility: Purchasing personnel at the Miami store also believe MP$ is not a significant driver.

2. Goodness of fit: r2 = 0.0798 indicates a poor fit (r2 > .30 passes). (pg. 369)

3. Significance of "X" Variable: From a t-table for df = 8 and one tail equal to .025 (95% confidence interval), the t-value is 2.306; the t-statistic of 0.83 for MP$ falls short of it.

4. Specification Analysis:
A. Linearity within the relevant range: The relationship appears questionable within the relevant range. (See scatter diagram.)
B. Constant variance of residuals: Constant variance is called homoscedasticity; violation of this assumption is called heteroscedasticity. This assumption implies that the residual terms are unaffected by the level of the cost driver and that there is a uniform scatter, or dispersion, of the data points about the regression line.
C. Independence of residuals: For samples of 10-20 observations, a D/W statistic in the 1.10 - 2.90 range indicates that the residuals are independent.
D. Normality of residuals.

Regression 2 (Number of Purchase Orders)

[Scatter chart: Purchase Dept. Costs ($0 to $2,500,000) vs. Number of Purchase Orders (0 to 8,000).]

SUMMARY OUTPUT

Regression 2: PDC = $722,537.85 + $159.48 (# of PO)

Regression Statistics
Multiple R           0.656228523
R Square             0.430635874
Adjusted R Square    0.359465358
Standard Error       401,601.16
Observations         10

ANOVA
             df   SS            MS            F             Significance F
Regression   1    9.75888E+11   9.75888E+11   6.050762303   0.039329201
Residual     8    1.29027E+12   1.61283E+11
Total        9    2.26616E+12

            Coefficients   Standard Error   t Stat        P-value       Lower 95%    Upper 95%
Intercept   722,537.851    265,834.627      2.717997494   0.026330178   109,522.10   1,335,553.60
(# of PO)   159.4842168    64.83547006      2.459829731   0.039329201   9.973355     308.995079

From a t-table for df = 8 and one tail equal to .025 (95% confidence interval), the t-value is 2.306004135; the intercept and b-coefficient confidence intervals above are the coefficients +/- 2.306 standard errors.

Criterion: Regression 2 (# of PO)

1. Economic Plausibility: Appears reasonable. Increasing the number of purchase orders increases the purchasing tasks to be undertaken.

2. Goodness of fit: r2 = 0.4306 passes. (pg. 369)

4. Specification Analysis:
A. Linearity within the relevant range: Appears reasonable. (See scatter diagram.)
C. Independence of residuals: For samples of 10-20 observations, a D/W statistic in the 1.10 - 2.90 range indicates that the residuals are independent.
D. Normality of residuals.

Regression 3 (Number of Suppliers)

[Scatter chart: Purchase Dept. Costs ($0 to $2,500,000) vs. Number of Suppliers (0 to 250).]

SUMMARY OUTPUT

Regression 3: PDC = $828,814.24 + $3,815.69 (# of S)

Regression Statistics
Multiple R           0.621971696
R Square             0.386848791
Adjusted R Square    0.310204889
Standard Error       416,757.77
Observations         10

ANOVA
             df   SS            MS            F             Significance F
Regression   1    8.7666E+11    8.7666E+11    5.047352558   0.054854897
Residual     8    1.3895E+12    1.73687E+11
Total        9    2.26616E+12

            Coefficients   Standard Error   t Stat        P-value       Lower 95%     Upper 95%
Intercept   828,814.242    246,570.469      3.361368633   0.009911640   260,221.72    1,397,406.76
(# of S)    3,815.694852   1,698.407173     2.246631380   0.054854897   (100.839110)  7,732.228814

From a t-table for df = 8 and one tail equal to .025 (95% confidence interval), the t-value is 2.306004135; the intercept and b-coefficient confidence intervals above are the coefficients +/- 2.306 standard errors.

Criterion: Regression 3 (# of S)

1. Economic Plausibility: Appears reasonable. Increasing the number of suppliers increases the costs of certifying vendors and managing the Fashion Bling-supplier relationships.

2. Goodness of fit: r2 = 0.3868 passes.

3. Significance of "X" Variable: The t-value of 2.25 is significant.

4. Specification Analysis:
A. Linearity within the relevant range: Appears reasonable.
C. Independence of residuals: For samples of 10-20 observations, a D/W statistic in the 1.10 - 2.90 range indicates that the residuals are independent.
D. Normality of residuals.

2. Do the regression results support the competitor's presentation about the purchasing department's cost drivers?

Fashion Bling can either (a) develop a multiple regression equation for estimating purchasing department costs with the number of purchase orders and the number of suppliers as cost allocation bases, or (b) divide the purchasing department cost pool into two separate cost pools, one for costs related to purchase orders and another for costs related to suppliers, and estimate a separate simple regression equation for each pool using the appropriate cost driver.

3. How might Lee gain additional evidence on drivers of Purchasing Department costs at each store?

a. Use physical relationships or engineering relationships to establish cause-and-effect links.
b. Lee could observe the purchasing department operations to gain insight into how costs are driven.
c. Lee could interview operating personnel in the purchasing department to obtain their insight on cost drivers.

Exercise 10-41

Barry Lee decides that the simple regression analysis used in P10-40 could be extended to a multiple regression analysis.

Regression 4: PDC = a + (b1 x # of PO) + (b2 x # of S)

SUMMARY OUTPUT

Regression 4: PDC = $484,521.63 + $126.66 (# of PO) + $2,903.30 (# of S)

Regression Statistics
Multiple R           0.797723096
R Square             0.636362138
Adjusted R Square    0.532465606
Standard Error       343,107.67
Observations         10

ANOVA
             df   SS            MS            F             Significance F
Regression   2    1.4421E+12    7.21048E+11   6.124960342   0.028996250
Residual     7    8.2406E+11    1.17723E+11
Total        9    2.26616E+12

            Coefficients   Standard Error   t Stat        P-value       Lower 95%      Upper 95%
Intercept   484,521.635    256,684.096      1.887618451   0.101028410   (122,439.80)   1,091,483.07
(# of PO)   126.6639997    57.7952084       2.191600362   0.064526149   (9.999952)     263.327951
(# of S)    2,903.297788   1,458.922564     1.990028710   0.086887580   (546.505887)   6,353.101463

From a t-table for df = 7 and one tail equal to .025 (95% confidence interval), the t-value is 2.364624252; the intercept and b-coefficient confidence intervals above are the coefficients +/- 2.3646 standard errors.
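Regression 4 can likewise be checked with numpy least squares (an illustrative sketch on the same 10-store data; variable names are my own):

```python
# Regress PDC on # of PO and # of S jointly and recover the coefficients.
import numpy as np

po  = np.array([4345, 2548, 1420, 5935, 2786, 1334, 7581, 3623, 1712, 4736], float)
s   = np.array([125, 230, 8, 188, 21, 29, 101, 127, 202, 196], float)
pdc = np.array([1_522_000, 1_095_000, 542_000, 2_053_000, 1_068_000,
                517_000, 1_544_000, 1_761_000, 1_605_000, 1_263_000], float)

X = np.column_stack([np.ones_like(po), po, s])    # intercept, # of PO, # of S
beta, *_ = np.linalg.lstsq(X, pdc, rcond=None)
a, b_po, b_s = beta
resid = pdc - X @ beta
r2 = 1 - resid @ resid / np.sum((pdc - pdc.mean()) ** 2)
print(a, b_po, b_s, r2)
```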

Criterion: Regression 4 (# of PO and # of S)

1. Economic Plausibility: Both variables are plausible drivers, consistent with the findings of the competitor's research and Bling's own research.

2. Goodness of fit: r2 = 0.6364, the highest of the regressions.

3. Significance of "X" Variables: The t-value of 1.99 is nearly significant for the (# of S) variable.

4. Specification Analysis:
A. Linearity within the relevant range: Appears reasonable.
B. Constant variance of residuals: Appears reasonable.
C. Independence of residuals.
D. Normality of residuals.

1. Compare regression 4 with regressions 2 and 3 in Problem 10-40. Which model would you recommend that Lee use?

Regression 4 is economically plausible and has the highest r2 value. Lee should use the results from regression 4.

SUMMARY OUTPUT

Regression 5: PDC = $483,559.95 + $0.0000194148 (MP$) + $126.5778 (# of PO) + $2,900.7309 (# of S)

Regression Statistics
Multiple R           0.797724783
R Square             0.636364830
Adjusted R Square    0.454547245
Standard Error       370,597.27
Observations         10

ANOVA
             df   SS            MS            F             Significance F
Regression   3    1.4421E+12    4.80701E+11   3.500018051   0.089598866
Residual     6    8.24054E+11   1.37342E+11
Total        9    2.26616E+12

            Coefficients    Standard Error   t Stat        P-value       Lower 95%       Upper 95%
Intercept   483,559.949     312,554.259      1.547123214   0.172797006   (281,232.77)    1,248,352.67
(MP$)       0.0000194148    0.002913205     0.006664422   0.994898658   (0.007108942)   0.007147771
(# of PO)   126.5778427     63.75031137      1.985525089   0.094299047   (29.413550)     282.569235
(# of S)    2,900.7309      1,622.198995     1.788147391   0.123970458   (1,068.647)     6,870.109

From a t-table for df = 6 and one tail equal to .025 (95% confidence interval), the t-value is 2.446911851; the intercept and b-coefficient confidence intervals above are the coefficients +/- 2.4469 standard errors.

2. Compare regression 5 with regression 4. Which model would you recommend that Lee use?

Regression 4. It is slightly less complicated (and therefore less costly), has about the same r2, and the standard errors around the regression variables are slightly smaller.

3. Lee estimates the following data for the Baltimore store for next year:

Dollar value of merchandise purchased    $75,000,000
Number of purchase orders                4,000
Number of suppliers                      95

Regression 4: PDC = $484,521.63 + $126.6639997 (# of PO) + $2,903.297788 (# of S)
PDC = $484,521.63 + ($126.6639997 x 4,000) + ($2,903.297788 x 95) = $1,266,990.92

Regression 5: PDC = $483,559.95 + $0.0000194148 (MP$) + $126.5778427 (# of PO) + $2,900.7309 (# of S)
PDC = $483,559.95 + ($0.0000194148 x $75,000,000) + ($126.5778427 x 4,000) + ($2,900.7309 x 95) = $1,266,896.87
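As an arithmetic check, the two Baltimore predictions can be computed directly from the coefficients (a sketch; the variable names are my own):

```python
# Predicted purchasing department costs for the Baltimore store next year,
# using the regression 4 and regression 5 coefficients from the outputs above.
po, s, mp = 4_000, 95, 75_000_000

pdc_reg4 = 484_521.6346 + 126.6639997 * po + 2_903.297788 * s
pdc_reg5 = (483_559.9493 + 0.0000194148 * mp
            + 126.5778427 * po + 2_900.7309 * s)
print(round(pdc_reg4, 2), round(pdc_reg5, 2))  # 1266990.92 1266896.87
```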

4. What difficulties do not arise in simple regression analysis that may arise in multiple regression analysis?

Multicollinearity is a frequently encountered problem in cost accounting; it does not arise in simple regression because there is only one independent variable in a simple regression. Multicollinearity exists when two or more independent variables are highly correlated with each other.

One consequence of multicollinearity is an increase in the standard errors of the coefficients of the individual variables. This frequently shows up in reduced t-values for the independent variables in the multiple regression relative to their t-values in the simple regression.

                 t-value        t-value
                 Multiple       Simple
Variables        Regression     Regression
Regression 4
# of PO          2.191600362    2.459829731
# of S           1.990028710    2.246631380
Regression 5
# of PO          1.985525089    2.459829731
# of S           1.788147391    2.246631380
MP$              0.006664422    0.832982632

The decline in the t-values in the multiple regressions is consistent with some (but not very high) collinearity among the independent variables. Generally, users of regression analysis believe that a coefficient of correlation between independent variables greater than 0.70 indicates multicollinearity. The coefficients of correlation between the potential independent variables for Fashion Bling are:

Pair-wise            Correlation Values
# of PO / # of S     0.285358
# of PO / MP$        0.270157
# of S / MP$         0.296190

Excel path: Data, Data Analysis, Correlation, OK

            (MP$)         (# of PO)     (# of S)      (PDC)
(MP$)       1
(# of PO)   0.270157066   1
(# of S)    0.296190300   0.285358146   1
(PDC)       0.282507267   0.656228523   0.621971696   1

5. Give examples of decisions in which the regression results reported here could be informative.

Cost management decisions: Fashion Bling could restructure relationships with suppliers so that fewer separate purchase orders are made. Alternatively, it may aggressively reduce the number of existing suppliers.

Purchasing policy decisions: Fashion Bling could set up an internal charge system for individual retail departments within each store. Separate charges to each department could be made for each purchase order and each new supplier added to the existing ones. These internal charges would signal to each department ways in which their own decisions affect the total costs of Fashion Bling.

Accounting system design decisions: Fashion Bling may want to discontinue allocating purchasing department costs on the basis of the dollar value of merchandise purchased. Allocation bases better capturing cause-and-effect relations at Fashion Bling are the number of purchase orders and the number of suppliers.


Problem 10-36

The Nautilus Company, which is under contract to the U.S. Navy, assembles troop deployment boats. As part of its research program, it completes the assembly of the first of a new model (PT109) of deployment boat. The Navy is impressed with the PT109. It requests that Nautilus submit a proposal on the cost of producing another 6 PT109s.

Nautilus reports the following cost information for the first PT109 assembled and uses a 90% cumulative average-time learning model as a basis for forecasting direct manufacturing labor-hours for the next 6 PT109s. (A 90% learning curve means b = -0.152004.)

Direct materials                                        $200,000
Direct manufacturing labor time for first boat          15,000 labor-hours
Direct manufacturing labor rate                         $40 per direct manufacturing labor-hour
Variable manufacturing overhead cost                    $25 per direct manufacturing labor-hour
Other manufacturing overhead                            20% of direct manufacturing labor costs
Tooling costs (1)                                       $280,000
Learning curve for manufacturing labor time per boat    90% cumulative average time (2)

(1) Tooling can be reused at no extra cost; all of its costs have been assigned to the first deployment boat.
(2) Using the formula (p. 359), for a 90% learning curve, b = ln .90 / ln 2 = (-0.105361 / 0.693147) = -0.152004.

Required:

1. Calculate predicted total costs of producing the six PT109s for the Navy. (Nautilus will keep the first deployment boat assembled, costed at $1,575,000, as a demonstration model for potential customers.)

The direct manufacturing labor-hours to produce the 2nd through 7th boats can be calculated as follows: y = aX^b, where
y = cumulative average time per unit in labor-hours
a = labor-hours required for the first unit (15,000)
X = cumulative number of units
b = ln (learning-curve % in decimal form) / ln 2 = ln 0.90 / ln 2 = -0.105360516 / 0.693147181 = -0.152003093

90% LC, cumulative average-time learning model:

Cumulative    Cumulative Average    Cumulative
# of          Time per Unit (y):    Total Time:
Units         Labor-Hrs.            Labor-Hrs.
1             15,000.00             15,000.00
2             13,500.00             27,000.00
3             12,693.09             38,079.27
4             12,150.00             48,600.00
5             11,744.80             58,724.00
6             11,423.78             68,542.68
7             11,159.22             78,114.52
8             10,935.00             87,480.00

The DLHs required to produce the 2nd through the 7th boats = 78,114.52 - 15,000.00 = 63,114.52.

Cost to produce the 2nd through 7th boats, cumulative average-time learning model:
Direct materials, 6 x $200,000                       $1,200,000.00
Direct manufacturing labor (DML), 63,114.52 x $40    2,524,580.90
Variable mfg. overhead, 63,114.52 x $25              1,577,863.06
Other mfg. overhead (20% of DML$)                    504,916.18
Total costs for boats 2 through 7                    $5,807,360.15
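The cumulative average-time model above can be sketched in Python (an illustrative sketch; function and variable names are my own):

```python
# Cumulative average-time learning model: y = a * X**b is the cumulative
# AVERAGE labor-hours per unit after X units, so total time is X * y.
import math

a = 15_000.0                       # labor-hours for the first boat
b = math.log(0.90) / math.log(2)   # 90% learning curve -> b ~ -0.152003

def cumulative_total_hours(x):
    """Total labor-hours for the first x boats."""
    return x * a * x**b

hours_boats_2_to_7 = cumulative_total_hours(7) - cumulative_total_hours(1)
dml = 40 * hours_boats_2_to_7      # direct manufacturing labor at $40/hr
vmoh = 25 * hours_boats_2_to_7     # variable overhead at $25/hr
other = 0.20 * dml                 # other overhead, 20% of DML dollars
total = 6 * 200_000 + dml + vmoh + other
print(round(hours_boats_2_to_7, 2), round(total, 2))
```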

2. What is the dollar amount of the difference between (a) the predicted total costs for producing the six PT109s in requirement 1, and (b) the predicted total costs for producing the six PT109s, assuming that there is no learning curve for direct manufacturing labor? That is, for (b) assume a linear function for units produced and direct manufacturing labor-hours.

Assumption                              (a) Learning Curve   (b) No Learning Curve
Direct labor-hours required             63,114.52            90,000.00
Direct materials                        $1,200,000.00        $1,200,000.00
Direct manufacturing labor (DML)        2,524,580.90         3,600,000.00
Variable mfg. overhead                  1,577,863.06         2,250,000.00
Other mfg. overhead (20% of DML$)       504,916.18           720,000.00
Total costs for boats 2 through 7       $5,807,360.15        $7,770,000.00

Difference: $7,770,000.00 - $5,807,360.15 = $1,962,639.85

Learning-curve effects are most prevalent in large manufacturing industries such as airplanes and boats, where costs can run into the millions or hundreds of millions of dollars, resulting in very large and monetarily significant differences between the two methods.

Problem 10-37

Assume the same information for the Nautilus Company as in Problem 10-36, with one exception: Nautilus uses a 90% incremental unit-time learning model as a basis for predicting direct manufacturing labor-hours in its assembling operations.

1. Calculate predicted total costs of producing the 6 additional PT109s for the Navy. (Nautilus will keep the first deployment boat assembled, costed at $1,575,000, as a demonstration model for potential customers.)

The direct manufacturing labor-hours to produce the 2nd through 7th boats can be calculated as follows: y = aX^b, where
y = time (in labor-hours) to produce the most recent (Xth) unit
a = labor-hours required for the first unit (15,000)
X = cumulative number of units
b = ln (learning-curve % in decimal form) / ln 2 = ln 0.90 / ln 2 = -0.152003093

90% LC, incremental unit-time learning model:

Cumulative    Incremental Unit Time    Cumulative
# of          for Xth Unit (y):        Total Time:
Units         Labor-Hrs.               Labor-Hrs.
1             15,000.00                15,000.00
2             13,500.00                28,500.00
3             12,693.09                41,193.09
4             12,150.00                53,343.09
5             11,744.80                65,087.89
6             11,423.78                76,511.67
7             11,159.22                87,670.89
8             10,935.00                98,605.89

The DLHs required to produce the 2nd through the 7th boats = 87,670.89 - 15,000.00 = 72,670.89.

Cost to produce the 2nd through 7th boats, incremental unit-time learning model:
Direct materials, 6 x $200,000                       $1,200,000.00
Direct manufacturing labor (DML), 72,670.89 x $40    2,906,835.56
Variable mfg. overhead, 72,670.89 x $25              1,816,772.22
Other mfg. overhead (20% of DML$)                    581,367.11
Total costs for boats 2 through 7                    $6,504,974.89
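The incremental unit-time model differs only in that y = aX^b is the time for the Xth unit itself, so cumulative time is a sum over units. A sketch (names are my own):

```python
# Incremental unit-time learning model: y = a * X**b is the labor time for
# the Xth unit; cumulative time is the sum of the unit times.
import math

a = 15_000.0
b = math.log(0.90) / math.log(2)   # 90% learning curve

def incremental_total_hours(x):
    """Total labor-hours for the first x boats under the unit-time model."""
    return sum(a * n**b for n in range(1, x + 1))

hours_2_to_7 = incremental_total_hours(7) - a
dml = 40 * hours_2_to_7
total = 6 * 200_000 + dml + 25 * hours_2_to_7 + 0.20 * dml
print(round(hours_2_to_7, 2), round(total, 2))
```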

2. Compare the cost of the 2nd through 7th boats under the incremental unit-time learning model with the cost of the 2nd through 7th boats under the cumulative average-time learning model:

Cost to produce the                     Incremental Unit-time   Cumulative Average-time
2nd through 7th boats                   Learning Model          Learning Model
Direct materials, 6 x $200,000          $1,200,000.00           $1,200,000.00
Direct manufacturing labor (DML)        2,906,835.56            2,524,580.90
Variable mfg. overhead                  1,816,772.22            1,577,863.06
Other mfg. overhead (20% of DML$)       581,367.11              504,916.18
Total costs for boats 2 through 7       $6,504,974.89           $5,807,360.15

Difference                                                      ($697,614.75)

Why are the predictions different?

The incremental unit-time learning curve has a slower rate of decline in the time required to produce successive units than does the cumulative average-time learning curve, even though the same 90% factor is used for both curves. The reason is that, in the incremental unit-time learning model, as the number of units doubles, only the last unit produced has a time of 90% of the initial time. In the cumulative average-time learning model, doubling the number of units causes the average time of all the additional units produced (not just the last unit) to be 90% of the initial time.

Cumulative total labor-hours under the two models:

Cumulative    Cumulative Average-time   Incremental Unit-time
# of          Learning Model:           Learning Model:
Units         Labor-Hrs.                Labor-Hrs.
1             15,000.00                 15,000.00
2             27,000.00                 28,500.00
3             38,079.27                 41,193.09
4             48,600.00                 53,343.09
5             58,724.00                 65,087.89
6             68,542.68                 76,511.67
7             78,114.52                 87,670.89
8             87,480.00                 98,605.89

How should Nautilus decide which model it should use?

The company should examine its own internal records on past jobs and seek information from engineers, plant managers, and workers when deciding which learning curve better describes the behavior of direct manufacturing labor-hours on the production of the PT109 boats.


Observation   X     Y
1             6     22
2             10    34
3             8     29
4             11    40
5             5     19
6             12    45
7             9     30
8             7     25
9             4     18
10            14    48
Total         86    310
Mean          8.6   31

Hi-Low Method:
              X     Y
High          14    48
Low           4     18
Difference    10    30

              Hi-Low       Regression
b-value       3            3.268398268
Intercept     6            2.891774892
Formula       y = 6 + 3X   y = 2.891774892 + 3.268398268(X)

Regression Statistics
Multiple R           0.988576473
R Square             0.977283443
Adjusted R Square    0.974443873
Standard Error       1.693506825
Observations         10

ANOVA
             df   SS            MS            F             Significance F
Regression   1    987.0562771   987.0562771   344.1660377   7.34874E-08
Residual     8    22.94372294   2.867965368
Total        9    1010

            Coefficients   Standard Error   t Stat        P-value       Lower 95%     Upper 95%
Intercept   2.891774892    1.606987982      1.799500011   0.109636756   (0.813946)    6.597496
X           3.268398268    0.176177712      18.55171253   7.34874E-08   2.862132      3.674665

See pg. 368: the standard error of the regression (1.6935) is the average amount by which actual Y and predicted Y differ. From a t-table, the t-value for df = 8 is 2.306004135.

Hi-Low:
Best guess of Y, with knowledge of X equal to 12, is 6 + 3(12) = 42.
Best guess of Y, with no knowledge of X, is the mean, 31.
Total variation from the mean when X = 12:     45 - 31 = 14
Explained variation by knowing X = 12:         42 - 31 = 11
Unexplained variation when X = 12:             45 - 42 = 3

Regression:
Best guess of Y when X = 12 is 2.891774892 + 3.268398268(12) = 42.11255411.
Total variation from the mean when X = 12:     45 - 31 = 14
Explained variation by knowing X = 12:         42.11255411 - 31 = 11.11255411
Unexplained variation when X = 12:             45 - 42.11255411 = 2.887445887

Regression residual output:

Observation #   X       Y       Predicted Y     Residuals       Explained by X
1               6       22      22.5021645      -0.502164502    -8.497835498
2               10      34      35.57575758     -1.575757576    4.575757576
3               8       29      29.03896104     -0.038961039    -1.961038961
4               11      40      38.84415584     1.155844156     7.844155844
5               5       19      19.23376623     -0.233766234    -11.76623377
6               12      45      42.11255411     2.887445887     11.11255411
7               9       30      32.30735931     -2.307359307    1.307359307
8               7       25      25.77056277     -0.770562771    -5.229437229
9               4       18      15.96536797     2.034632035     -15.03463203
10              14      48      48.64935065     -0.649350649    17.64935065
Total           86      310     310             0.000000000     0.000000000
Mean            8.6     31      31

Hi-Low Method (X): High 14, Low 4, Difference 10

[Graph: scatter of Y against X with the hi-low and regression lines]

95% confidence intervals (t = 2.306004135 for df = 8; standard error of the regression = 1.693506825, see pg. 368):

                Lower 95%       Upper 95%
Intercept       -0.813946037    6.597495821
X               2.862131736     3.674664801

(Intercept: 2.891774892 ± 2.306004135 × 1.606987982; X: 3.268398268 ± 2.306004135 × 0.176177712. Normal-distribution critical values shown for comparison: 2.575829304 at 99%, 1.959963985 at 95%.)

Regression:
Best guess of Y, with knowledge of X equal to 12, is 42.11255411
Best guess of Y, with no knowledge of X, is the mean, 31; actual Y is 45
Total variation:        45 - 31 = 14
Explained variation:    42.11255411 - 31 = 11.11255411
Unexplained variation:  45 - 42.11255411 = 2.887445887
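The confidence intervals above follow from coefficient ± t × standard error. A minimal re-derivation from the reported pieces (variable names are mine):

```python
# Two-tailed t critical value for df = 8, as reported in the worksheet.
t_crit = 2.306004135

intercept, se_intercept = 2.891774892, 1.606987982
slope, se_slope = 3.268398268, 0.176177712

# 95% CI = coefficient +/- t_crit * standard error of the coefficient.
ci_intercept = (intercept - t_crit * se_intercept,
                intercept + t_crit * se_intercept)   # ~(-0.8139, 6.5975)
ci_slope = (slope - t_crit * se_slope,
            slope + t_crit * se_slope)               # ~(2.8621, 3.6747)
```

Because the intercept's interval straddles zero (consistent with its P-value of 0.11), the fixed-cost estimate is not statistically distinguishable from zero at the 5% level, while the slope clearly is.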

Variance decomposition by observation:

Observation     Explained       Residual        Residual        Y - Mean        (Y - Mean)
                Squared                         Squared                         Squared
1               72.21320815     -0.502164502    0.252169187     -9              81
2               20.93755739     -1.575757576    2.483011938     3               9
3               3.845673807     -0.038961039    0.001517963     -2              4
4               61.53078091     1.155844156     1.335975713     9               81
5               138.444257      -0.233766234    0.054646652     -12             144
6               123.4888589     2.887445887     8.337343753     14              196
7               1.709188359     -2.307359307    5.323906973     -1              1
8               27.34701374     -0.770562771    0.593766983     -6              36
9               226.0401604     2.034632035     4.139727516     -13             169
10              311.4995783     -0.649350649    0.421656266     17              289
Total           987.0562771     0.000000000     22.94372294     0               1010

r squared = 987.0562771 ÷ 1010 = 0.977283443
r = 0.988576473
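The sums in that table satisfy total SS = explained SS + residual SS, and r² is their ratio. A sketch of the same decomposition (variable names are mine, using the regression coefficients from above):

```python
X = [6, 10, 8, 11, 5, 12, 9, 7, 4, 14]
Y = [22, 34, 29, 40, 19, 45, 30, 25, 18, 48]
a, b = 2.891774892, 3.268398268           # regression coefficients from above

pred = [a + b * x for x in X]
ybar = sum(Y) / len(Y)                     # 31

ss_total = sum((y - ybar) ** 2 for y in Y)                  # 1,010
ss_explained = sum((p - ybar) ** 2 for p in pred)           # ~987.056
ss_residual = sum((y - p) ** 2 for y, p in zip(Y, pred))    # ~22.944
r_squared = ss_explained / ss_total                         # ~0.977283
```

The decomposition holds exactly only for the least-squares line; for the hi-low line the explained and unexplained pieces would not sum so cleanly.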


The financial manager of the Casa Real restaurant is examining the relationship between advertising and sales revenue. She has obtained the following data for the past 10 months:

Month           Revenues        Advertising Costs
March           $50,000         $2,000
April           $70,000         $3,000
May             $55,000         $1,500
June            $65,000         $3,500
July            $56,000         $1,000
August          $65,000         $2,000
September       $45,000         $1,500
October         $80,000         $4,000
November        $55,000         $2,500
December        $60,000         $2,500

Using the high-low method, determine the cost function which could be used to estimate restaurant revenues based on advertising costs.

                Revenues        Advertising Costs
High            $80,000         $4,000
Low             $56,000         $1,000
Difference      $24,000         $3,000

Slope: $24,000 ÷ $3,000 = $8.00 of revenue per advertising dollar
Constant: $80,000 - ($8.00 × $4,000) = $80,000 - $32,000 = $48,000
Check at the low point: $56,000 - ($8.00 × $1,000) = $56,000 - $8,000 = $48,000

Estimated revenues = $48,000 + $8.00 × advertising costs
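The same high-low computation on the monthly data, as a minimal sketch (list and variable names are mine):

```python
# March through December, in order.
revenues    = [50000, 70000, 55000, 65000, 56000,
               65000, 45000, 80000, 55000, 60000]
advertising = [2000, 3000, 1500, 3500, 1000,
               2000, 1500, 4000, 2500, 2500]

# High and low are chosen on the cost driver (advertising), not on revenue.
hi = advertising.index(max(advertising))   # October: $4,000
lo = advertising.index(min(advertising))   # July:    $1,000

slope = (revenues[hi] - revenues[lo]) / (advertising[hi] - advertising[lo])
constant = revenues[hi] - slope * advertising[hi]
# Estimated revenues = $48,000 + $8.00 per advertising dollar
```

Note the high and low points are picked on the driver (advertising costs); October also happens to have the highest revenue here, but that is not required by the method.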

Cook Company provides you with the following original data for the past 12 months and some partial

regression analysis information:

Observation (Month)     DL Hours        OH $
January                 27,750          $48,105
February                24,150          $47,850
March                   20,835          $44,775
April                   16,170          $41,250
May                     15,000          $39,000
June                    19,050          $42,900
July                    20,850          $43,950
August                  22,500          $44,850
September               25,650          $46,500
October                 28,500          $49,500
November                32,250          $51,750
December                33,000          $52,500
Total                   285,705         $552,930.00
Mean                    23,808.75       $46,077.50

Regression Statistics
                Coefficients    Standard Error  t Stat
Intercept       29654.03669     937.42118       31.63363206
DL Hours        0.689807878     0.038331436     17.9958789

Residual output:

Observation (Month)     Predicted OH $  Explained       Explained
                                        (Predicted      Squared
                                        - Mean)
January                 $48,796.20530   $2,718.71       $7,391,358.50
February                $46,312.89694   $235.40         $55,411.72
March                   $44,026.18382   ($2,051.32)     $4,207,898.06
April                   $40,808.23007   ($5,269.27)     $27,765,205.55
May                     $40,001.15486   ($6,076.35)     $36,921,970.29
June                    $42,794.87676   ($3,282.62)     $10,775,615.32
July                    $44,036.53094   ($2,040.97)     $4,165,554.70
August                  $45,174.71394   ($902.79)       $815,022.67
September               $47,347.60875   $1,270.11       $1,613,176.25
October                 $49,313.56121   $3,236.06       $10,472,092.13
November                $51,900.34075   $5,822.84       $33,905,474.37
December                $52,417.69666   $6,340.20       $40,198,093.63
Total                   $552,930.00     $0.00           $178,286,873.18
Mean                    $46,077.50


ANOVA
                df
Regression      1
Residual        10
Total           11

Multiple R:     0.984909426
R Square:       0.970046577
t critical value for df = 10: 2.228138852

                Lower 95%       Upper 95%
Intercept       27565.33215     31742.74123
DL Hours        0.604400117     0.775215639

Observation (Month)     Residual        Residual        Explained Squared
                        (Actual -       Squared         + Residual Squared
                        Predicted)
January                 ($691.21)       $477,764.76     $7,869,123.26
February                $1,537.10       $2,362,685.82   $2,418,097.54
March                   $748.82         $560,725.67     $4,768,623.72
April                   $441.77         $195,160.67     $27,960,366.22
May                     ($1,001.15)     $1,002,311.05   $37,924,281.34
June                    $105.12         $11,050.90      $10,786,666.22
July                    ($86.53)        $7,487.60       $4,173,042.30
August                  ($324.71)       $105,439.14     $920,461.81
September               ($847.61)       $718,440.60     $2,331,616.85
October                 $186.44         $34,759.42      $10,506,851.55
November                ($150.34)       $22,602.34      $33,928,076.71
December                $82.30          $6,773.84       $40,204,867.47
Total                   $0.00           $5,505,201.82   $183,792,075.00
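The Cook Company residual columns can be re-computed from the reported coefficients. A minimal sketch (list and variable names are mine):

```python
# January through December, in order.
dl_hours = [27750, 24150, 20835, 16170, 15000, 19050,
            20850, 22500, 25650, 28500, 32250, 33000]
oh_costs = [48105, 47850, 44775, 41250, 39000, 42900,
            43950, 44850, 46500, 49500, 51750, 52500]
a, b = 29654.03669, 0.689807878            # intercept and DL-hours slope

predicted = [a + b * h for h in dl_hours]  # e.g. January ~ $48,796.21
residuals = [y - p for y, p in zip(oh_costs, predicted)]

# Residuals from a least-squares fit sum to ~0; their squares sum to the
# residual SS, ~ $5,505,202 in the worksheet.
ss_residual = sum(r * r for r in residuals)
```

The predicted overhead for a budgeted month follows the same line, e.g. 29,654.04 + 0.6898 × DL hours.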


Machine-hours and labor cost data (12 observations), with regression results:

Observation     Mhrs.   Labor Costs     Predicted       Residual
1               68      1190            1002.219152     187.780848
2               88      1211            1208.467054     2.53294552
3               62      1004            940.3447813     63.6552187
4               72      917             1043.468733     (126.4687325)
5               60      770             919.7199911     (149.7199911)
6               96      1456            1290.966215     165.0337845
7               78      1180            1105.343103     74.65689674
8               46      710             775.3464593     (65.34645934)
9               82      1316            1146.592684     169.4073163
10              94      1032            1270.341425     (238.3414252)
11              68      752             1002.219152     (250.219152)
12              48      963             795.9712496     167.0287504
Total           862     12,501          12,501
Mean                    1,041.75        1,041.75

Sum of absolute residuals: 1,660.19152; mean absolute residual: 138.3492934

SUMMARY OUTPUT

Regression Statistics
Multiple R              0.722103406
R Square                0.521433328
Adjusted R Square       0.473576661
Standard Error          170.5356646
Observations            12

ANOVA
                df      SS              MS              F               Significance F
Regression      1       316874.1211     316874.1211     10.89573009     0.008001758
Residual        10      290824.1289     29082.41289
Total           11      607698.25

                Coefficients    Standard Error  t Stat          P-value         Lower 95%       Upper 95%
Intercept       300.9762837     229.754011      1.309993598     0.219491008     -210.9475525    812.9001199
Mhrs.           10.31239512     3.124146397     3.300868081     0.008001758     3.351363186     17.27342706

Estimated cost function: Labor costs = $300.98 + $10.31 per machine-hour.
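The machine-hours regression above can be re-derived the same way as the earlier examples. A minimal sketch (variable names are mine), including the R² of roughly 0.52 that signals a much weaker fit than the PT109 data:

```python
mhrs  = [68, 88, 62, 72, 60, 96, 78, 46, 82, 94, 68, 48]
labor = [1190, 1211, 1004, 917, 770, 1456, 1180, 710, 1316, 1032, 752, 963]

n = len(mhrs)
xbar, ybar = sum(mhrs) / n, sum(labor) / n            # ~71.83, 1041.75
s_xy = sum((x - xbar) * (y - ybar) for x, y in zip(mhrs, labor))
s_xx = sum((x - xbar) ** 2 for x in mhrs)

slope = s_xy / s_xx                                    # ~10.3124
intercept = ybar - slope * xbar                        # ~300.98

ss_total = sum((y - ybar) ** 2 for y in labor)         # 607,698.25
ss_resid = sum((y - (intercept + slope * x)) ** 2
               for x, y in zip(mhrs, labor))           # ~290,824.13
r_squared = 1 - ss_resid / ss_total                    # ~0.5214
```

With only half the variation in labor costs explained by machine-hours (and a statistically insignificant intercept), this cost function should be used more cautiously than the earlier ones.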
