Developing an activity-based costing approach for system development and implementation
By Roztocki, Narcyz
Publication: Engineering Management Journal
Date: Saturday, March 1, 2003

This paper proposes the use of the Activity Based Costing (ABC) approach to software estimation.
Like other more traditional approaches to software estimation, ABC provides man-day estimates.
In addition, it also provides detailed costing information that is useful for management control and
decision making. The paper shows how the ABC approach can be applied to software estimation
by building an ABC model using data from twenty-two projects in a financial services firm. The
model is then used for estimation, and comparisons between estimated and actual man-days are
computed (variance analysis). The data generated by the model and from the variance analysis
are useful for management control and decision making in areas such as resource allocation,
outsourcing of specific development activities, and learning from adoption of new development
tools and practices.
ACM Categories: D.2.8, D.2.9, K.6.1, K.6.3
Keywords: IS Project Planning, Effort Estimation, Time and Cost Estimation, Activity-based
Costing, Software Process Measurement, Organizational Learning.
Software development time and cost estimations are important to management in deciding
whether to approve a software development project. They are also used in departmental budgeting,
project planning, resource allocation, and outsourcing decisions. Both understated and overstated
costs have negative impacts on IS management and organization (Benjamin et al., 1984; Dugger,
1996; Lederer & Prasad, 1995).
Unfortunately, studies have shown that software development projects often run 100% to 200%
over budget (Maglitta, 1991; Rubin, 1991), and many projects were abandoned because of severe
cost overruns and schedule slippages (Keil et al., 1995). Budget overruns can arise from
inaccurate estimation of time and/or inaccurate allocation of resource cost.
Existing software estimation models focus on estimating development time. Many traditional
estimation approaches estimate total project time, and assume that accurate time estimation
automatically leads to accurate cost estimation. This assumption may not hold as the time spent in
different development and implementation activities differs across projects. Different activities have
different costs, and some activities cost more than others. For example, one man-day of project
management is more costly than one man-day of programming because project management is
performed by more experienced staff. In traditional approaches to costing software development,
two projects that require the same number of man-days will have the same cost estimate, even
though one project may be more complex and require a greater proportion of time to be spent in
more expensive activities such as project management.
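The contrast can be sketched in a few lines of Python. All rates, the blended rate, and the man-day figures below are hypothetical, chosen only to mirror the argument above:

```python
# Hypothetical per-activity rates ($ per man-day); illustrative only.
RATE = {"project_management": 619, "programming": 315}

def flat_cost(man_days, blended_rate=400):
    """Traditional costing: one blended charge-out rate for every man-day."""
    return man_days * blended_rate

def abc_cost(activity_days):
    """ABC costing: each activity's man-days priced at that activity's rate."""
    return sum(RATE[activity] * days for activity, days in activity_days.items())

simple_project = {"project_management": 10, "programming": 90}
complex_project = {"project_management": 40, "programming": 60}

# Both projects total 100 man-days, so flat costing cannot tell them apart,
# while ABC charges the management-heavy project more.
assert flat_cost(100) == flat_cost(100)
assert abc_cost(complex_project) > abc_cost(simple_project)
```

The same total effort thus yields different cost estimates once the activity mix is priced explicitly.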
Ideally, the estimation approach should also provide useful information to management for three
key management tasks: project approval, implementation monitoring, and post-implementation
evaluation. First, information is needed not only for total project time and costs, but
also for time and cost by activity. This will enable IS and user managers to better manage and
schedule resources for the project, monitor the progress by activities, and evaluate and analyze
the project by activities upon completion. Second, in order to provide relevant management
information, the estimation model needs to be contextualized to the organization's development
practices. The cost drivers used in the models should reflect the actual development practices of
the project teams, and the cost driver rates should reflect their skills and productivity levels, the
different mix of human resources in different activities, and the compensation structure of the
organization. Third, the cost drivers and cost driver rates must be easily understandable by both IS
and user management so as to facilitate negotiation between them regarding the scope of the
project and the associated costs.
Activity-based Costing (ABC) is an integrated approach to estimating both project time and cost by
providing a simple and contextualized basis for estimating costs by activity. In this paper, we
compare and contrast the ABC approach with established approaches to software estimation,
develop an ABC model for software development and implementation, and test it using data
collected from twenty-two projects in a large bank. Our findings indicate that the ABC approach
provides reasonable estimation compared to the actual effort and costs reported. The approach
also provides useful costing information that facilitates management decision making and
organizational learning.
The remainder of this paper is organized as follows: the ABC approach is first presented, followed
by a comparison with more established approaches to software estimation. The research
methodology is then described and the results of the study presented, followed by a discussion of
the management information provided by the model and the potential for organizational learning
through variance analysis. We conclude with implications for research and practice.
Activity-Based Costing
Activity-based Costing (ABC) is a cost allocation model pioneered by Harvard's Cooper and
Kaplan (1988), in the field of management accounting. ABC has been successfully applied to
manufacturing and service industries (Helmi & Hindi, 1996; Kroll, 1996; Reimann & Kaplan, 1990)
for improving tactical and strategic decision-making and for enhancing corporate cost control and
customer profitability (Bradway & Ross, 2000; Mabberley, 1998). ABC provides management with
information to understand the use of scarce organizational resources in various business activities.
Management can then focus on areas of high cost, identify the factors that influence these costs,
benchmark performance, and quantify improvements in the time and cost of the activities
performed. This contributes to the better management of organizational resources in relation to
product costing and customer profitability.
In the ABC approach, resources are first traced to activities, and activity costs are then traced to
products/services, based on their consumption of the activities. This principle is fundamentally
different from the traditional costing system that assumes that products/services consume
resources directly. The generic ABC model is outlined in Figure 1.
The ABC model first groups the organization resources into various resource pools (see Figure 1),
such as salaries, rental expense, license fees, and depreciation. Various tasks performed in the
organization are then grouped into major and functionally homogeneous activities, such as
research and development (R&D), receiving, and delivery. Each activity consumes different
amounts of one or more resource pools. For example, R&D uses 5% of rental and 40% of salaries.
The costs of performing the activities are determined by allocating the costs from the resource
pools to each activity according to the percentage of the resource pool consumed.
Once the activity costs are determined, they can be traced to cost objects (such as product) using
cost drivers. Cost drivers measure the frequency and intensity of the demands placed on activities
by cost objects. For example, the cost driver for the delivery activity is the number of shipments.
The organization computes the total number of shipments in a given time period, and traces the
delivery activity costs to all the products based on the number of shipments consumed by each
product. The product with the largest number of shipments would bear the highest cost for delivery.
Figure 1. Generic ABC Model
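The two-stage allocation just described can be sketched as follows. All pool costs, consumption percentages, and shipment counts are invented for illustration:

```python
# Stage 0: organizational resources grouped into resource pools (annual $).
resource_pools = {"salaries": 100_000, "rental": 20_000}

# Stage 1: each activity consumes a percentage of each resource pool,
# e.g. R&D uses 40% of salaries and 5% of rental.
consumption = {
    "r_and_d":  {"salaries": 0.40, "rental": 0.05},
    "delivery": {"salaries": 0.20, "rental": 0.10},
}
activity_cost = {
    activity: sum(resource_pools[pool] * pct for pool, pct in uses.items())
    for activity, uses in consumption.items()
}

# Stage 2: activity costs traced to cost objects via a cost driver;
# for the delivery activity the driver is the number of shipments.
shipments = {"product_a": 30, "product_b": 10}
rate_per_shipment = activity_cost["delivery"] / sum(shipments.values())
delivery_cost = {p: n * rate_per_shipment for p, n in shipments.items()}
# product_a, with three times the shipments, bears three times the cost.
```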
A comparison of the ABC approach with more established approaches to software estimation
provides an understanding of the relative advantages and limitations.
Major Approaches to Software Estimation
There are many approaches to software project estimation. Widely cited estimation approaches
include three versions of Constructive Cost Models (COCOMO) (Boehm, 1981; 1984), Function
Point Analysis (FPA) (Albrecht, 1979; Albrecht & Gaffney, 1983), Artificial Neural Networks (ANN)
(Albus, 1981; Finnie et al., 1997), and Case-based Reasoning (CBR) (Finnie et al., 1997;
Mukhopadhyay et al., 1992; Watson & Marir, 1994). More recent research has been conducted to
advance these estimation models (for example, Ebrahimi, 1999; Kemerer & Porter, 1992; Kemerer,
1993; Samson et al., 1997), and these models have improved in terms of providing early and
accurate estimation of development effort.
FPA and COCOMO have traditionally been applied to mainframe-based projects, using COBOL
and 4GL (Boehm, 1981; Dolado, 1997; Kemerer, 1987, 1993). ANN and CBR can be used for
different systems contexts as the models are capable of learning. However, research studies of
these techniques were in the context of projects that used COBOL (Finnie et al. 1997; Samson et
al. 1997). ABC has largely been used in manufacturing and service industries for estimating the
cost of products and services.
In the following section, the ABC approach is compared with the other established estimation
approaches along the following key dimensions: (1) model inputs, (2) estimation process, (3)
model outputs, (4) relative accuracy, and (5) set up costs.
Model Inputs. All the models require the estimation of many project parameters as inputs to the
estimation process. The inputs for FPA include five components of the proposed system, weighted
by the levels of complexity, and fourteen general application characteristics. Simple COCOMO
requires specification of the development modes and estimation of delivered source instructions
(DSI) as inputs. Intermediate COCOMO requires an additional fifteen effort adjustment factors
(EAF). ANN and CBR both build on FPA as they use function points and general application
characteristics as input parameters, as well as a database of prior projects.
Prior research shows that inter-rater variance in the estimation of inputs for FPA is about 30%
(Low & Jeffery, 1990; Rudolph, 1983). This is because input definition and estimation guidelines
are not strictly standardized and are therefore subject to project managers' interpretation and
judgement (Kemerer & Porter, 1992). These difficulties arise because FPA attempts to provide a
standardized model across many organizations with different development practices and levels of
resource capability. Since ANN and CBR use function points as inputs, they are also subject to the
problem of inter-rater variance. The main inputs for ABC are activity driver counts. These activity
drivers are customized to the organization's context, and are to be supported by customized
examples to aid organizational managers in their estimation of driver counts.
Estimation Process. FPA has a standard formula that uses the inputs on five components of the
proposed system and fourteen general system characteristics to compute the number of function
points. The function points in turn determine the lines of code and the man-days required for the
project. Simple COCOMO uses a standard formula to arrive at delivered source instructions, which
are then translated into man-month requirements. Intermediate COCOMO refines the
man-month estimate by considering an additional fifteen adjustment factors. Detailed COCOMO
uses the adjustment factors, and computes the delivered source instructions for six specific
phases in the development process, to arrive at total effort required. ANN and CBR use function
points as input, and make use of previous cases to estimate effort required for the current project.
ABC uses the organizational resource mix and different activity drivers to estimate the effort and
cost required in each phase of the development process.
FPA and COCOMO use standard estimation rates in estimating time. The use of standard
estimation rates may result in estimates that are unrealistic if an organization's development
practices and resource capability (experience and ability of IT staff for example) are very different
from the model's norms. While ANN and CBR are able to take into account individual
organizational practices in the estimation process, the use of standard function points as inputs
limits their flexibility. ABC uses customized estimation rates to reflect the resource mix, cost, and
development practices of the organization.
Model Outputs. All the established approaches focus on estimation of development time, usually in
man-days (Dolado, 1997; Finnie et al., 1997; Kemerer, 1993; Mukhopadhyay et al., 1992; Samson
et al., 1997). The main output for the ABC approach includes development time, resource mix
required, and estimated cost for each activity.
FPA, basic and intermediate COCOMO, ANN and CBR all provide a single estimate of total project
time. Detailed COCOMO divides the project into six predetermined phases, while ABC estimates
time and cost for all the development activities of interest to the management of the organization.
Relative Accuracy. Mean Absolute Relative Error (MARE) has often been used in prior research to
determine the accuracy of software estimation models. The formula for MARE is:
MARE = (1/n) * Σ |actual_i - estimated_i| / actual_i, where n is the number of projects.
Finnie et al. (1997) reported that FPA has a MARE of 62.3%, while the MARE for ANN and CBR
are 35.2% and 36.2%, respectively. Kemerer (1987) compared the estimation models and reported
MARE for FPA as 102.74%, Basic COCOMO as 610.09%, and Detailed COCOMO as 607.85%,
and concluded that FPA is more accurate in estimating software development effort. While
these estimation models have been studied and compared, ABC has not been applied to software
development estimation. This study will, however, compute the MARE for the sample of projects studied.
Set-up Costs. Since FPA and COCOMO use standard models (inputs, rates, and formulae), the
set-up costs for the adopting organization are lower. The main cost to the organization is in training
their managers to estimate the inputs required by FPA. ANN and CBR need to make use of
historical cases/projects in the organization in order to predict software development effort. Set-up
time and costs are therefore required to build the database of previous projects and to run the
ANN and CBR models to generate organization-specific estimation rates. Managers will also need
to be trained to provide inputs for the estimation of Function Points since these serve as inputs to
both CBR and ANN. ABC has significant set up costs as analysis is required to identify
organizational resource pools, development activities, and activity cost drivers.
Comparing and contrasting these estimation models provides insights into the strengths and
weaknesses of each estimation model. From a management perspective, the common
characteristics of existing estimation approaches result in some shortcomings, i.e. cost
implications of resource mix are not considered, activity costs are not provided, input parameters
are complex and difficult to understand, and estimation rates are not contextualized to the
organization. The ABC approach potentially addresses these shortcomings.
First, ABC focuses on each activity in the software development process. This provides
management with more detailed data to monitor and evaluate performance by activity, and not
merely at the overall project level. Although COCOMO attempts to address the problem by having
six activities in its detailed model, the activities may not be applicable to every organization. The
ABC model allows full customization to the development and implementation practices of each
organization. Second, ABC is more than an effort estimation tool in that it takes into account the
different types of resources and associated resource costs consumed by each activity. Instead of
using a fixed charge-out rate per man-day, ABC allows management to better understand the
actual cost of carrying out each development and implementation activity. In order to provide this
information, organization-specific data on organization structure and labor costs are incorporated
into the model. Third, ABC is likely to be more easily understood by top management and user
departments because it is a widely accepted approach in cost accounting. Finally, ABC supports
organizational learning. It provides analysis between estimated and actual time and cost used for
each activity and provides a feedback loop for project managers.
The ABC approach does require more upfront set-up costs, in order to build a model that is
customized to the organization. It also does not have a history of use in systems development, and
therefore does not have a track record in terms of prior research in estimation accuracy. This study
seeks to demonstrate the model building process, the benefits and limitations of the ABC approach,
and to provide some initial indication of estimation accuracy, within the limits of the study.
Site and Sample Selection
The research site in this study is a leading financial institution. The organization was selected for
the study because it had a large number of ongoing IT projects that could potentially be examined.
The organization also offered a number of other important advantages: it had a mix of in-house
development and package implementation projects, and a mix of traditional and newer
technologies such as host-based and client-server systems. Finally, it had reasonably detailed
records of time spent by project team members. This was necessary for the development of the
ABC model. We selected twenty-two software development and implementation projects for the
study. The project selection was limited by the requirement that the projects in progress be
completed within the duration of the study, so that variance analysis could be performed. Other
selection criteria included project size, technology platform, and adequate activity information in
project work-plans. The twenty-two projects consisted of five in-house enhancements, nine in-
house developments and eight package projects. Five were host-based systems, and seventeen
were client-server. Projects ranged from 49 man-days to 2640 man-days, with an average of
651 man-days.
While a larger sample size is desirable, it is often difficult to obtain a large number of projects
within the time frame of a single study. Software development projects generally take a long time
to complete and are not numerous. Also, it is difficult and often not a good research strategy to
accumulate projects across organizations, because one must then control for various organization-
specific effects and the consistency of measures. The sample size is comparable to Albrecht and
Gaffney's (1983) and Kemerer's (1987) studies, which used twenty-four and fifteen projects,
respectively, for software estimation model validation.
Data Collection
Various well-established ABC development and implementation methodologies from American
Management Association, Institute of Management Accountants, and ABC Technologies, Inc.
(Miller, 1996) were synthesized and adapted to provide an approach suitable for software
development and implementation. In developing and applying the ABC approach, guidelines for
developing successful estimation models in software engineering were also considered. These
included transparency of the model, usefulness of the model, developer participation, feedback,
automated data collection, user training, dedicated implementation team, and a goal-oriented
approach (DeMarco, 1982; Hall & Fenton, 1997; Grady & Caswell, 1987; Pfleeger, 1993).
Table 1. Process in Developing ABC Estimation Model
The activities involved in developing an ABC model for software development are summarized in
Table 1, under three broad stages: (1) Study current system development process, (2) Develop
ABC prototype, and (3) Refine ABC prototype.
One major advantage of the approach used in this study is the reliance on both quantitative and
qualitative data throughout the data collection process. Quantitative data from time sheets,
financial accounts, and driver counts were collected to build and validate the ABC model.
Qualitative data from interviews and focus group discussions on activities, drivers, and
explanations of variances were also used to provide a better understanding of the software
development and implementation process in the organization. It also helped to clarify
inconsistencies in quantitative findings and to explain variances. Triangulation of data from these
different sources strengthens the findings and increases the robustness of the results (Kaplan &
Duchon, 1988; Yin, 1984), and software engineering issues, in particular, have been said to be
best investigated using a combination of qualitative and quantitative methods (Athey, 1998;
Seaman, 1999).
There are four parts to the analysis, namely cost analysis, driver analysis, estimation using ABC
model, and variance analysis. Cost Analysis required the identification of resource pool costs
through the analysis of general ledger accounts. These costs were then traced to activities by
analyzing the time sheets kept by project team members.
Driver Analysis employed regression to relate activity time to cost drivers. The cost drivers had
previously been identified by project managers during interviews, and driver counts had been
collected for all twenty-two projects. Activity time was obtained from project time sheets. The
sample of twenty-two projects was split into two comparable sets based on the project types and
size, in order to perform a split-half regression and estimation analysis. The first set consisted of
twelve projects that were used to build the ABC model. The model was then applied to the second
data set that consisted of ten projects. Ten regressions were performed for the ten development
activities identified, with the actual activity time as dependent variables and activity drivers as
independent variables. The beta coefficient of the regression was the driver rate for each activity.
In order to mitigate the problem of small sample size, only one independent variable was used, as
far as possible, for each regression. Tests were run to check for violation of the assumptions
required for regression analysis. These assumptions include linearity, normality, independence of
independent variables, and absence of multicollinearity. Other than linearity, these assumptions
appeared to hold for the sample.
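The driver-rate step can be sketched in a few lines, assuming a single driver per activity and ordinary least squares. The driver counts and man-day figures below are invented for illustration:

```python
# Hypothetical data for one activity across five model-building projects.
driver_counts = [4, 8, 12, 20, 6]     # e.g. number of functions per project
activity_days = [13, 23, 38, 59, 19]  # actual man-days spent on the activity

# Ordinary least squares with a single independent variable.
n = len(driver_counts)
mean_x = sum(driver_counts) / n
mean_y = sum(activity_days) / n
beta = sum((x - mean_x) * (y - mean_y)
           for x, y in zip(driver_counts, activity_days)) \
       / sum((x - mean_x) ** 2 for x in driver_counts)
alpha = mean_y - beta * mean_x

def estimate(count):
    """Estimated man-days for a given driver count; beta is the driver rate."""
    return alpha + beta * count
```

The beta coefficient is the driver rate, i.e. the estimated man-days added per additional driver unit.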
Prior studies suggest that there is a non-linear relationship between function points and software
maintenance effort (Banker and Slaughter, 1997; Banker et al., 1998). We therefore ran the box-
cox transformation on each activity to determine the linearity of the regressions (Kmenta, 1986).
The general model used was the Box-Cox power transformation of the dependent variable: y(λ) = (y^λ - 1)/λ for λ ≠ 0, and y(λ) = ln y for λ = 0, with the transformed activity time regressed on the driver counts; an estimated λ near 1 indicates a linear relationship.
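The Box-Cox transform underlying this linearity check is simple to state in code. This is a sketch of the transform only, not the full regression procedure:

```python
import math

def box_cox(y, lam):
    """Box-Cox power transform of a positive observation y."""
    if lam == 0:
        return math.log(y)          # limiting case as lambda -> 0
    return (y ** lam - 1) / lam

# lambda = 1 leaves the data linear (up to a shift of 1);
# other lambda values bend the relationship.
assert box_cox(5, 1) == 4
```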
Estimation was conducted by applying the regression obtained above to the second data set
consisting of ten projects to estimate the time and cost required for every activity. The estimated
and actual man-days were then compared and the MARE computed for the ABC model. Relatively
low MARE would suggest that over-fitting was not a serious problem.
Variance analysis was used to examine the differences between the estimated and actual time
required for each project. Detailed variance analysis is a technique adopted from the costing
literature (Horngren et al., 1997). The total variance between actual and estimated time and cost
can be better understood if we divide it into two components, namely volume variance and flexible
budget variance (Horngren et al., 1997). Volume variance is the difference between the standard
cost and the estimated cost. It arises from changes in production volume. In the context of systems
development, changes in user requirements after the commencement of the project result in
changes to the cost driver counts, and hence the actual effort expended. Flexible budget variance
reflects the productivity of the resources used in the project development. This can be due to
changes in resource mix, staff skill levels, use of productivity tools, learning, development
downtime, and others.
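One common formulation of this split can be sketched as follows; the driver rate, driver counts, and man-day figures are hypothetical:

```python
rate = 3.0                            # man-days per driver unit (hypothetical)
estimated_drivers, actual_drivers = 40, 50   # counts at estimation vs. completion
actual_days = 170                            # man-days actually expended

estimated_days = rate * estimated_drivers    # original estimate
flexible_budget = rate * actual_drivers      # re-priced at the actual driver volume

# Volume variance: effect of requirement changes on driver counts.
volume_variance = flexible_budget - estimated_days
# Flexible budget variance: productivity effects (resource mix, skills, tools).
flexible_variance = actual_days - flexible_budget
total_variance = actual_days - estimated_days

# The two components sum to the total variance.
assert total_variance == volume_variance + flexible_variance
```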
Projects with high variances for any activity were examined by reviewing interview notes and
discussing with project managers in order to understand the source of the variance.
Table 2. Computation of Rate per Man-day for Resource Pools
Results: ABC Model for Software Development
The results of the analysis are presented in four sections: (1) Cost Analysis, which includes the
analysis of resource pools, determination of total cost per resource, identification of activities, total
cost per activity, and activity costs per day; (2) Cost Driver Analysis, which reports the
identification of cost drivers, determination of driver rates, and the estimation of man-days required
based on the number of drivers; (3) Estimation using the ABC model, where we apply the
estimation model to the second set of projects and assess the degree of estimation accuracy;
and (4) Variance Analysis, where budgeted and actual time and costs for each
activity are compared and variances analyzed. The overall ABC model resulting from the analysis
is shown in Figure 2.
Cost Analysis
This analysis shows how cost from resource pools is traced to individual activities. Cost figures
have been transformed to protect the confidentiality of the organization being studied.
Resource Pool Costs
Departmental costs were categorized into the resource pools, namely: (1) Project Managers, (2)
System Analysts, (3) Business Analysts, (4) Programmers, and (5) Development Support, as
shown in Table 2. Development support included the cost of some technical staff, depreciation,
and rental expense.
Activity Costs
Ten activities were identified, namely: (1) Project Management, (2) Requirement Analysis, (3)
Detailed Design, (4) Front-end Programming, (5) Back-end Programming, (6) System Testing, (7)
User Acceptance Testing, (8) User Procedure and Training, (9) Migration, Conversion and Rollout,
and (10) Post Implementation Review. The average percentage of time spent by each resource
pool in each activity is presented in Table 3. By tracing the percentage of time spent by each
resource pool, development support cost was allocated to each activity based on the total
percentage of time spent in each activity, as shown in the second-last column of Table 3.
Figure 2. ABC Model for Software Development and Implementation
Table 3. Time Allocation from Resource Pools to Activities (average based on 22 projects)
The resource pool costs were then allocated to the ten activities based on the time allocation
percentages shown in Table 3. For example, since project managers spent 12.42% of their total
annual days in Requirement Analysis activity, 12.42% of the total annual salary for project
managers was assigned to this activity's cost. Note that the total activity costs reflect the
organizational-specific mix of different resource pools (see Table 4).
Table 4. An Example of Activity Cost Computation - Requirement Analysis Activity
Similar computation was carried out for each of the other nine activities. Table 5 shows the total
annual costs, total annual man-days and cost per man-day for each of the ten activities.
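The allocation just illustrated can be sketched as follows. The annual pool costs and the time shares for the analyst pools are hypothetical; only the 12.42% figure is the one quoted above:

```python
# Requirement Analysis activity cost, traced from resource pools as
# (annual pool cost, fraction of the pool's time spent on this activity).
pools = {
    "project_managers":  (500_000, 0.1242),  # 12.42% share quoted in the text
    "system_analysts":   (400_000, 0.20),    # hypothetical
    "business_analysts": (350_000, 0.25),    # hypothetical
}

# Each pool contributes in proportion to the time it spends on the activity.
activity_cost = sum(cost * share for cost, share in pools.values())
```

Dividing the resulting activity cost by the activity's total annual man-days then gives the cost per man-day reported in Table 5.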
As expected, Project Management was the most expensive activity ($619/man-day), as much of
the time spent was by project managers, the most expensive resource pool. Programming
activities were much cheaper, costing only $315 to $351 per man-day.
Table 5. Costs Allocation from Resource Pools to Activities
Table 6. Definition of Drivers
Cost Driver Analysis
Seven potential cost drivers were initially identified from interviews and discussions with the project
team members, including (1) Project Duration, (2) Project Type, (3) Number of Functions, (4)
Number of Back-end Programs, (5) Number of Front-end Programs, (6) Number of Files, and (7)
Number of Screens. The definitions for these drivers are shown in Table 6. The nature
of project is taken into account through the project type driver (development, enhancement or
package implementation), and through number of back-end (mainframe host) and front-end
programs (client programs).
Stepwise regression was run for each activity, with actual time spent as the dependent variable
and potential cost drivers identified as independent variables. Out of the ten activities that we
analyzed, five of them were non-linear, three regressions were linear and two were not significant.
Table 7 shows the summary results from the regressions.
Table 7. Regressions for ABC Model (Part 1 of 2)
Project Management, Requirements Analysis, User Acceptance Testing,
and Migration, Conversion, & Rollout are four of the activities that are non-linear. All these
activities demonstrate diseconomies of scale when the number of drivers increases. These
activities are characterized by significant user involvement. The larger the project in terms of
duration, functions, and files, the more time is required for IT personnel to communicate and
coordinate among themselves and with users. The driver for Project Management is duration of
the project, which implies that the bigger the project, the more time is required in management
activities, such as progress meetings and supervision. The time spent in Requirement Analysis
depends on the number of functions and the project type. For enhancement projects, the time
required for this activity is significantly less compared to development and package projects. This
finding is intuitive, as enhancement projects require less effort in analyzing requirements,
compared to new development and package projects. Number of files is the driver for two activities,
namely User Acceptance Testing and Migration, Conversion, and Rollout.
Table 7. Regressions for ABC Model (Part 2 of 2)
Back-end Programming also shows diseconomies of scale when the number of back-end
programs increases, even though it is not directly affected by the number of users involved. This
could be due to the increased demand for integration among back-end programs. As the number
of programs increases, programmers not only have to code the programs, but also to take into
account the interrelationships among programs.
Linear regressions include Detailed Design, Front-end Programming, and System Testing. These
activities are largely performed by IT personnel and require less interaction with users. An increase
in each additional driver does not result in a greater-than-proportional increase in effort for these
activities. Efforts required in both Detailed Design and Front-end Programming are dependent on the
number of functions in the system. For System Testing, number of functions and integration factor
were the main drivers. The integration factor is a new driver that was not identified during the
earlier interviews. After the analysis of data, and subsequent discussions with project managers, it
was identified as one of the drivers for System Testing. This driver categorizes the project into two
groups (projects with up to 10 back-end programs, and projects with more than 10 back-end
programs), effectively capturing increased effort required in testing the integration among
programs in the back-end.
All the above regressions showed significant results except for User Procedures & Training, and
Post Implementation Review, as most projects did not document the time expended for these
activities. Constants of 10 and 5 man-days were recommended by many project managers to
estimate the time required for User Procedures & Training, and Post Implementation Review,
respectively. With more project data being collected, the ABC model can be updated to include the
drivers and rates for these two activities.
These findings provide insights into the composition of development costs. The non-linear
component in this study accounts for about sixty percent of the total development time, which may
explain the findings reported in prior studies that the relationship between total development time
and independent variables is non-linear.
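As an illustration of how such activity-level regressions can be fitted, the sketch below estimates a linear driver-based equation for a single activity by ordinary least squares. All data values are invented for illustration and are not figures from the study; the use of NumPy is an implementation assumption.

```python
import numpy as np

# Hypothetical history: number of functions (the cost driver) and Detailed
# Design effort in man-days for six past projects. Values are invented.
functions = np.array([12.0, 25.0, 40.0, 55.0, 70.0, 90.0])
design_days = np.array([4.0, 7.5, 11.0, 15.5, 19.0, 24.5])

# Ordinary least squares fit of: effort = a + b * driver.
design_matrix = np.column_stack([np.ones_like(functions), functions])
(a, b), *_ = np.linalg.lstsq(design_matrix, design_days, rcond=None)

def estimate_design_days(n_functions: float) -> float:
    """Predict Detailed Design man-days from the number of functions."""
    return float(a + b * n_functions)
```

A non-linear activity such as User Requirements could be handled in the same framework by log-transforming the driver and the effort before fitting, so that effort can grow more than proportionally with the driver.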
Estimation Using ABC Model
The regressions obtained from the analysis were applied to the second set of ten projects to
evaluate the accuracy of the model. Total project costs were calculated by multiplying the
estimated number of man-days for each activity by the activity cost per man-day for each activity
and summing across activities. This computation clearly shows that the number of man-days is not
the only factor determining the cost of the project; the mix of the resources is also important. For
example, one project is estimated to use 209 man-days and cost $82,501; whereas another is
estimated to require a larger number of man-days (i.e., 252) but to cost less, at $75,181. This is
because the first project consumed more time in expensive activities.
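The cost roll-up described above can be sketched as follows. The activity names mirror the paper's life-cycle activities, but the man-day estimates and per-man-day rates are invented for illustration.

```python
# Hypothetical ABC inputs: estimated man-days per activity and the activity
# cost per man-day (charge-out rate). All numbers are made up.
estimated_man_days = {
    "Detailed Design": 30.0,
    "Programming": 140.0,
    "System Testing": 39.0,
}
rate_per_man_day = {
    "Detailed Design": 480.0,
    "Programming": 320.0,
    "System Testing": 400.0,
}

# Total effort is the plain sum of man-days; total cost weights each
# activity's man-days by its rate, so the resource mix matters.
total_man_days = sum(estimated_man_days.values())
total_cost = sum(
    days * rate_per_man_day[activity]
    for activity, days in estimated_man_days.items()
)
```

Two projects with similar total man-days can therefore carry quite different costs, depending on how much of the time falls in expensive activities.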
The accuracy of the model in predicting the time required for the projects can be assessed in a
number of different ways. When we apply our model to the second set of ten projects, the
estimates are within a factor of 1.2 of the actual time (variance < 20%; Boehm, 1981) for 40% of
the projects (four out of ten), and within a factor of 2 of the actual time (variance < 100%) for all
but one project. The magnitude of relative error (MRE) (Conte et al., 1986; Kemerer, 1987) is
derived by taking the absolute difference between the actual time spent and the estimated time,
divided by the actual time spent. The average of the MREs is the Mean Absolute Relative Error
(MARE) described earlier in this paper (Finnie et al., 1997; Srinivasan & Fisher, 1995). 90% of the
projects reported an MRE of less
than 100%, and the MARE was a relatively low 39.06%.
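The two accuracy measures can be written directly from their definitions. This is a minimal sketch; the small numeric example at the end is invented for illustration.

```python
def mre(actual: float, estimated: float) -> float:
    """Magnitude of relative error: |actual - estimated| / actual."""
    return abs(actual - estimated) / actual

def mare(actuals: list, estimates: list) -> float:
    """Mean absolute relative error across a set of projects."""
    return sum(mre(a, e) for a, e in zip(actuals, estimates)) / len(actuals)

# Invented example: two projects, actual vs. estimated man-days.
example_mare = mare([100.0, 200.0], [80.0, 250.0])  # (0.20 + 0.25) / 2
```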
Table 8. Example of Variance Analysis (Part 1 of 2)
Variance Analysis
An example of detailed variance analysis is presented in Table 8. The ABC model only requires
project managers to provide the estimated cost driver values at the beginning of the project, and
the actual cost driver values on completion of the project. Actual time spent is captured from
project members' time sheets. The estimated, standard and actual costs can then be computed
using the ABC model.
Total Variance is the difference between the estimated and actual time and cost recorded (column
(d)). In the example, the team exceeded the budget by $3,612. Changes in user requirements
after the commencement of the project alter the cost driver counts, and hence the volume of output.
Volume variance is the difference between the standard cost and the estimated cost (column (e)).
In this example, the positive variance of 32.8 man-days and $13,837 indicates that standard time
and cost allocated are more than the estimated time and cost. This is because changes in user
requirements led to an increase in the actual driver counts, including functions, back-end
programs, and total files. The flexible budget variance is the difference between the actual and the
standard time and cost. In this example, the total negative flexible budget variance of 18.8 man-
days and $10,266 shows that the project team spent less time and cost on the project than
expected based on the actual driver counts. Analysis can be carried out at the activity level to fully
understand the reasons for the variances.
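The decomposition above can be expressed as a small sketch. Following the paper, the estimated cost comes from the ABC model using driver counts estimated at project start, the standard cost from the model using actual driver counts at completion, and the actual cost from time sheets; the numbers below are illustrative assumptions, not the Table 8 figures.

```python
def variance_analysis(estimated: float, standard: float, actual: float):
    """Split the total variance into volume and flexible budget parts.

    estimated: model cost using driver counts estimated at project start
    standard:  model cost using actual driver counts at completion
    actual:    cost recorded on project members' time sheets
    """
    volume_variance = standard - estimated   # effect of changed driver counts
    flexible_variance = actual - standard    # efficiency vs. model at actual counts
    total_variance = actual - estimated      # equals volume + flexible
    return total_variance, volume_variance, flexible_variance

# Invented example: driver counts grew (positive volume variance) while the
# team beat the model's standard (negative flexible budget variance).
total, volume, flexible = variance_analysis(100_000.0, 113_000.0, 103_000.0)
```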
The four sections above demonstrate the complete cycle of the development and use of the ABC
model, from the building of the model, which includes determining and validating the resource
pools, activities, activity drivers, and rates, to its use in cost estimation and variance analysis. In
the following sections, the usefulness of the data generated during the ABC cycle is discussed.
The data generated in the process of building and applying the ABC model, accumulated and
interpreted over time, can provide a useful pool of knowledge for the organization, particularly for
resource allocation and improving software development and implementation processes. Senior
managers view such knowledge as an important organizational resource (Adler, 1989; Baskerville
& Pries-Heje, 1999; Larsen & Levine, 1999), and its acquisition, articulation, and enhancement
over time contributes to the uniqueness of the organization (Dodgson, 1993). In the following
sections, we discuss the potential of different information components of the ABC approach for
management control and decision-making, and for organizational learning.
Resource and Activity Time and Cost
The resource pool analysis required by the ABC model provides cost and capacity information on
the different types of resources used in the software development process. This information helps
management in resource planning and in making outsourcing decisions. During the course of this
study, discussions with senior IT management highlighted the usefulness of specific information
provided by the model. For example, senior managers were very interested in the charge-out rate
for each resource pool (Table 2) because this sensitized them to the resource mix implications for
each project. Senior managers were also very interested in the average percentage of time spent
by each resource pool in the major activities (Table 3). The data showed that relatively expensive
resources such as project managers and system analysts were spending more time than was
desirable in lower value-adding activities such as programming. Finally, the average cost of each
activity (Table 5) potentially provided information for decisions about which development activities
to outsource.
Activities and Cost Drivers
The activities identified in this study reflect the project management structure and the terminology
used by the organization. Other organizations may have a somewhat different set of activities. In
general, because this organization's set of activities closely resembles the traditional systems
development life cycle, comparison of the ABC data on proportion of project time spent in each
activity with findings from other studies provided useful insights to management.
The total project time allocation patterns (Table 3, last column) were similar to those reported by
Beck and Perkins (1983) and Dolado (1997), with the exception of detailed design and
programming. Time spent in detailed design in the organization studied was significantly lower
than that reported in previously published studies (6% as compared to 27% and 22% for Beck &
Perkins' and Dolado's study, respectively); and the time spent in programming was much higher
(41% as compared to 24% and 34%). Discussion with project managers led to the conclusion that
design effort had been traded off against programming/bug-fixing effort. This highlighted an area
of prevailing organizational development practice for managers to consider for change.
The cost driver for each activity is also contextualized to the software development practices and
platforms of the organization. For example, the use of front-end program and back-end program as
drivers arises from the organization's use of a client-server development platform for its systems.
The cost driver rates are of much interest to the management of the organization. The rate
indicates how much it costs to change requirements (i.e., increase the number of a cost driver).
This provides the management with a tool for better management and control of project time and
cost. It also provides a basis for negotiating with users on the cost of the project, both at the initial
estimation, and as subsequent user requests for changes arise.
Estimation and Variance Analysis Using ABC Model
At the end of each project, project managers and management should investigate activities with
significant variance between estimated and actual time and cost to surface the causes. This
variance analysis serves as a natural feedback loop for the managers to move towards more
effective and efficient project development and management practices.
As we examined and discussed the significant variances in detail with project managers, we
uncovered many learning opportunities. For example, we found that the variances for Project
Management and User Acceptance Testing activities were especially high for projects involving
external parties (e.g., other financial institutions, governance bodies, etc.). The short-run solution is
to revise and increase the effort estimated for new projects with similar characteristics. However,
this approach is merely reactive, and increasing the time allowed may actually legitimize inefficient
practices. In the long run, the proactive way of eliminating these variances is to improve on the
processes/sub-tasks within the activities that contribute to the variances. In this case, better
coordination procedures with external stakeholders and technology for coordination should be
considered. In another example, we found that one of the projects had extremely low programming
time, relative to the model's estimates. When interviewing the project manager, we found that this
was because of the use of a new code generator. The ABC data was thus able to provide the
organization with an early indicator of positive impacts from the use of this programming aid.
Learning can be done on a systematic basis if project managers continuously use ABC results to
track variances across projects adopting new tools or technology. Tools that deliver significant
benefits can be identified and their use institutionalized. Tracking such projects over time can also
give the managers a better idea of the learning curve for new technology. As new development
projects are added to the project database, analysis of their development processes and resource
consumption will result in periodic revision of the model.
Implications for Research and Practice
This study proposes the application of the ABC approach to software development and
implementation. Because of the novelty of the approach in this context, the paper provides a
description of how an ABC model can be created and grounds the process and model in empirical
data. The paper also examines the main advantages of applying ABC to software development
and implementation projects. First, the approach formalizes the relationship between software
development time and IS department costs using a standard organization-wide estimation practice.
The ABC approach supplements the time estimates provided by traditional software estimation
approaches with detailed costing information that highlights the impact of resource mix decisions,
and consumption of different development activities. Second, the costing model is contextualized
to the structure and practices of the organization. This is likely to increase accuracy in driver
counts by project managers as the contextualized drivers are a better fit with the organization. This,
together with the customized cost driver rates are likely to result in more accurate time and cost
estimates. Last, and most importantly, the integration with detailed costing information,
contextualization of the model, and variance analysis results in information that is useful for
decision making and organizational learning.
From the perspective of practitioners, the study also suggests a number of recommendations to
increase the benefits received. Activities and cost drivers must be clearly defined with a list of
specific examples in order to simplify counts and minimize inter-rater variances. Strong IS
participation in identifying activities and cost drivers is necessary to minimize resistance to
adoption of the model and to ensure that the model is relevant and useful. The model needs to be
regularly updated to reflect changes in organizational resource mix and development platforms
and practices. Collection of the data required (e.g., timesheets) for the model should be automated
to minimize the effort required to maintain the model.
The ABC approach also has its limitations. While the approach presented in this paper for the
building of ABC model can be generalized to other organizations' software development and
implementation, and even to other functions such as IT operations, the model itself may require
significant customization before it can be used in another organization. The ABC model
parameters (e.g., resource rates and driver rates) cannot simply be adopted by another
organization. The model (activities and drivers) can be used as a starting point for other
organizations to adapt and use, but changes have to be made to customize the model to better
serve the needs of the organization. The analysis of resources, costs, and utilization in activities
required to customize the model to an organization introduces significant setup costs for this approach.
The study itself is subject to the limitation of having only twenty-two projects. The sample size
raises two concerns. One is with the reliability of the regression results. Attempts to mitigate this
included testing that the assumptions for regression were not violated, and using only one
independent variable (cost driver) in most of the equations. The second issue is the inability to
model for a variety of differences in project characteristics. Only type of project (enhancement, in-
house development, or package implementation) was considered in the analysis. Despite the small
sample, the study provides a rich description of the development of a novel approach to software
estimation, and of its application and informational benefits. Future studies can increase the
number of projects by using a longitudinal approach, and sampling over a longer time frame. This
raises challenges of continued access to the research site over a longer period of time, or access
to historical projects. The latter is often a problem due to lack of good archival records, and
turnover of the IT personnel involved.
Future research may include additional drivers to increase estimation accuracy of the ABC model.
System complexity and integration were two possible drivers suggested by the senior
management at the last focus group session. These drivers may account for the difference in time
required for coding programs and testing the programs and the system. However, the inclusion of
additional drivers, while possibly increasing accuracy, will also increase model complexity. The
problems of increased model complexity include user resistance, increased inter-rater variances
among project team members in driver counts, and delay in estimation.
Finally, research can also study the extension of the ABC approach to other IS domains such as
operations. The ABC approach can also be extended into the vendor organizations domain, where
the costing information may be particularly useful for costing client projects, and where there may
be more projects across several client organizations.
References
Adler, P.S. (1989). "When Knowledge is the Critical Resource, Knowledge Management is the
Critical Task," IEEE Transactions on Engineering Management, Vol. 36, No. 2, pp. 87-94.
Albrecht, A.J. (1979). "Measuring Application Development Productivity," Proceedings of Joint
SHARE/GUIDE/ IBM Application Development Symposium, pp. 34-43.
Albrecht, A.J. and Gaffney, J.E. (1983). "Software Function, Source Lines of Code, and
Development Effort Prediction: A Software Science Validation," IEEE Transactions on Software
Engineering, Vol. SE-9, No. 6, pp. 639-648.
Albus, J.S. (1981). Brain, Behavior, and Robotics, Peterborough: Byte Books, Subsidiary of McGraw-Hill.
Athey, T. (1998). "Leadership Challenges for the Future," IEEE Software, Vol. 15, No. 3, pp. 72-77.
Banker, R.D., Davis, G., and Slaughter, S.A. (1998). "Software Development Practices, Software
Complexity, and Software Maintenance Performance," Management Science, Vol. 44, No. 4.
Banker, R.D. and Slaughter, S.A. (1997). "Efficiency of Complexity Allocation in Software Design:
An Empirical Evaluation," Management Science, Vol. 43, No. 12, pp. 1709-1725.
Baskerville, R. and Pries-Heje, J. (1999). "Knowledge Capability and Maturity in Software
Management," The DATA BASE for Advances in Information Systems, Vol. 30, No. 2, pp. 26-42.
Beck, L.L., and Perkins, E.T. (1983). "A Survey of Software Engineering Practice: Tools, Methods,
and Results," IEEE Transactions on Software Engineering, Vol. SE-9, No. 5, pp. 541-561.
Benjamin, R.I., Rockart, J.F., Morton, M.S., and Wyman, J. (1984). "Information Technology: A
Strategic Opportunity," Sloan Management Review, Vol.25, No. 3, pp. 3-10.
Boehm, B.W. (1981). Software Engineering Economics, Englewood Cliffs, N.J.: Prentice-Hall.
Boehm, B.W. (1984). "Software Engineering Economics," IEEE Transactions of Software
Engineering, Vol. SE-10, No. 1, pp. 4-21.
Bradway, B. and Ross, S. (2000). "Measuring Corporate Customer Profitability: The Role of
Activity-based Cost Analysis," Corporate Customer Management, Vol. 4, Research Brief 6, pp. 1-10.
Conte, S., Dunsmore, H., and Shen, V. (1986). Software Engineering Metrics and Models. Menlo
Park, California: Benjamin/ Cummings.
Cooper, R. and Kaplan, R.S. (1988). "Measure Costs Right: Make the Right Decisions," Harvard
Business Review, pp. 96-103.
DeMarco, T. (1982). Controlling Software Projects. Englewood Cliffs, N.J.: Prentice Hall.
Dodgson, M. (1993). "Organizational Learning: A Review of Some Literature," Organization
Studies, Vol.14, No. 3, pp. 375-394.
Dolado, J.J. (1997). "A Study of the Relationship Among Albrecht and MK II Function Points, Lines
of Codes, 4GL and Effort," The Journal of System and Software, Vol. 37, No. 2, pp. 161-173.
Dugger, R. (1996). "Selling the Decision Maker," Datamation, July, pp. 89-90.
Ebrahimi, N.B. (1999). "How to Improve Calibration of Cost Models," IEEE Transactions on
Software Engineering, Vol. 25, No. 1, pp. 136-140.
Finnie G.R., Wittig, G.E., and Desharnais, J.M. (1997). "A Comparison of Software Effort
Estimation Techniques: Using Function Points with Neural Networks, Case-based Reasoning and
Regression Models," Journal of Systems Software, Vol. 39, pp. 281-289.
Grady, R.B., and Caswell, D.L. (1987). Software Metrics: Establishing a Company-Wide
Program, Englewood Cliffs, New Jersey: Prentice Hall.
Hall, T., and Fenton, N. (1997). "Implementing Effective Software Metrics Programs," IEEE
Software, pp. 55-64.
Helmi, M.A., and Hindi, N. (1996). "Activity-based Costing in Banking: A Big Challenge," The
Journal of Cost and Management Accounting, Vol. 9, No. 2, pp. 5-19.
Horngren, C.T., Foster, G., Datar, S.M. (1997). Cost Accounting - A Managerial Emphasis, 9th
Edition, New Jersey: Prentice Hall.
Kaplan, B. and Duchon, D. (1988). "Combining Qualitative and Quantitative Methods in Information
Systems Research: A Case Study," MIS Quarterly, Vol. 12, No. 4, pp. 571-586.
Keil, M., Mixon, R., Saarinen, T., and Tuunainen, V. (1995). "Understanding Runaway Information
Technology Projects: Results from an International Research Program based on Escalation
Theory," Journal of Management Information Systems, Vol. 11, No. 3, pp. 65-85.
Kemerer C.F. and Porter, B.S. (1992). "Improving the Reliability of Function Point Measurement:
An Empirical Study," IEEE Transactions on Software Engineering, Vol. 18, No. 11, pp. 1011-1024.
Kemerer, C.F. (1993). "Reliability of Function Points Measurement," Communications of the ACM,
Vol.36, No.2, pp. 85-97.
Kemerer, C.F. (1987). "An Empirical Validation of Software Cost Estimation Models,"
Communications of the ACM, Vol.30, No.5, pp. 416-429.
Kmenta, J. (1986). Elements of Econometrics, New York: Macmillan Publishing Company; London:
Collier Macmillan Publishers, pp.517-521.
Kroll, K.M. (1996). "The ABCs revisited," Industry Week, Vol. 254, No. 22, pp. 19-21.
Larsen, T.J. and Levine, L. (1999). "DataBase Special Issue - Information Systems: Current Issues
and Future Changes," The DATA BASE for Advances in Information Systems, Vol. 30, No. 2.
Lederer, A.L. and Prasad, L. (1995). "Perceptual Congruence and Systems Development Cost
Estimation," Information Resource Management Journal, pp. 16-27.
Low, G.G. and Jeffery, D.R. (1990). "Function Points in Estimation and Evaluation of the Software
Process," IEEE Transactions on Software Engineering, Vol. 16, No. 1, pp. 64-71.
Mabberley, J. (1998). Activity-based Costing in Financial Institutions, Financial Times Management,
Pitman Publishing.
Maglitta J. (1991). "It's Reality Time," Computerworld, pp. 81-84.
Miller, J. (1996). Implementing Activity-Based Management in Daily Operations, John Wiley &
Sons, Inc.
Mukhopadhyay, T., Vicinanza, S.S., and Prietula, M.J. (1992). "Examining the Feasibility of a
Case-based Reasoning Model for Software Effort Estimation," MIS Quarterly, Vol. 16, No. 2.
Pfleeger, S.L. (1993). "Lessons Learned in Building a Corporate Metrics Program," IEEE Software,
pp. 67-74.
Reimann, B.C. and Kaplan, R.S. (1990). "The ABCs of Accounting for Value Creation," Planning
Review, Vol. 18, No. 4, pp. 33-34.
Rubin, H.A. (1991). "Measure for Measure," Computerworld, pp. 77-79.
Rudolph, E.E. (1983). "Productivity in Computer Application Development," Dept. of Management
Studies, University of Auckland, New Zealand.
Samson, B., Ellison, D., and Dugard, P. (1997). "Software Cost Estimation Using an Albus
Perceptron (CMAC)," Information Software Technology, Vol. 39, No. 1, pp. 55-60.
Seaman, C.B. (1999). "Qualitative Methods in Empirical Studies of Software Engineering," IEEE
Transactions on Software Engineering, Vol. 25, No. 4, pp. 557-572.
Srinivasan, K., and Fisher, D. (1995). "Machine Learning Approaches to Estimating Software
Development Effort," IEEE Transactions on Software Engineering, Vol. 21, No. 2, pp. 126-137.
Watson, I.D. and Marir, F. (1994). "Case-Based Reasoning: An Overview," Knowledge
Engineering Review Journal, Vol. 9, No. 4, pp. 327-354.
Yin, R.K. (1984). Case Study Research: Design and Methods, Beverly Hills, CA: Sage Publications.
Ginny Ooi
Nanyang Technological University
Christina Soh
Nanyang Technological University
An earlier version of this paper appeared in the Research-in-Progress track of the Proceedings of
the Nineteenth Annual International Conference on Information Systems, (Helsinki, Finland,
December 13-16, 1998), pp. 341-345.
About the Authors
Ginny Ooi is an Assistant Manager in the Finance department of a Singapore banking institution.
She received her Master by Research in Business (Information Systems) and Bachelor of
Accountancy (Honours) from the Nanyang Business School. Her research interests include cost
and effort estimation for software development and implementation, changes in business
processes and their organizational impacts arising from the use of IT and the Internet, and
e-commerce business models.
Christina Soh is the Head of the Division of Information Technology and Operations Management
and Director of the Information Management Research Center (IMARC), Nanyang Business
School, Nanyang Technological University in Singapore. She received her Ph.D. in Management
from the University of California, Los Angeles. Her research focuses on ERP implementation,
e-commerce business models, IT business value, and national IT policy.
© Copyright ACM-SIGMIS Summer 2003