
International Journal of Information Technologies and Systems Approach

Volume 11 • Issue 1 • January-June 2018

A Review of Literature About Models and Factors of Productivity in the Software Factory

Pedro S. Castañeda Vargas, National University of San Marcos, Lima, Peru
David Mauricio, National University of San Marcos, Lima, Peru

ABSTRACT

Software factories seek to develop quality software in a manner comparable to the production of other
industrial products, establishing improvements in their production process so as to be more competitive.
Productivity, one of the competitiveness pillars, is related to the effort required to fulfill assigned
tasks. However, there is no standard way of measuring productivity, making it difficult to establish
policies and strategies to improve the factory. In this article, the authors perform a systematic review
of the literature on factors affecting productivity of software factories, and models for measuring it.
For the period 2005-2017, 74 factors and 10 models are identified. Most of the factors are related
to Programming, and a few to Analysis and Design and Testing. Also, most models for measuring
productivity only consider activities concerning programming.

Keywords
Analysis, Development, Effectiveness, Efficiency, Performance, Productivity, Software Factory, Testing

INTRODUCTION

Hernández, Colomo and García (2012) indicate that in software engineering, productivity
measurement has focused on the productivity of product delivery, perhaps influenced in part by the
formulas used to estimate software projects. Software development effort estimation is the process
of predicting the most realistic amount of effort required to develop or maintain software, while,
according to IEEE (1992), software productivity is defined as the ratio of work product to work effort.
Organizations interested in measuring the productivity of the software factory want a better view
of their ability to use available resources to produce profitable goods or services that meet customer
needs. Measuring productivity means knowing the performance of an organization, both internal and
external, in order to evaluate its progress. The organization measuring productivity has indicators that
allow it to compare itself to the market, in order to propose actions that increase its overall efficiency
and use all resources in an effective and efficient way. In other words, managers need to know how
they are doing compared to performance in previous periods, addressing questions such as whether
performance is increasing or decreasing, the magnitude of that progress or regress, and the
effectiveness of their programs.
According to Petersen (2011), the focus of software process improvement is often on improving
productivity, which implies making a proper measurement. Measurement must not only collect
information from various sources and consolidate the results, but also demands effort to understand
the various elements that comprise productivity.

Figure 1. Conceptual model of productivity proposed in Arcudia-Abad et al. (2007)

Figure 1 shows the conceptual model for productivity measurement proposed by Arcudia-Abad,
Solís-Carcaño, and Cuesta-Santos (2007). The productivity process consists of types
of factors (input, output, context, and process) that influence it. Therefore, measurement requires
homogenized units for expressing the data obtained, to allow the design of measurement indicators
that will provide coherent and comparable results in differing environments.
It is necessary to establish the measurement objective before starting the productivity-
measurement process. This will allow setting the level at which to measure—for example, measuring
the productivity of the software factory to compare with the market; or of only certain components
of the software factory to verify possible flaws that could generate bottlenecks in final production;
or of the roles involved in the software development process.
The difficulty in measuring productivity is that there is no consensus about what to measure.
It incorporates several factors that in many cases are not taken into consideration (Scacchi, 1995;
Asmild, Paradi & Kulkarni, 2006; Moreira, Carneiro, Pires & Bessa, 2010; Yilmaz & O’Connor,
2011; Ondrej, Jiri & Jan, 2012; Cheikhi, Al-Qutaish, & Idri, 2012; Khan, Ahmed & Faisal, 2014).
Also, several studies do not define the scope of measurement, making it difficult to detect
opportunities for improvement in organizations, and raising the research question: "How is productivity
measured in software factories?”
This article presents a systematic review of various models that measure productivity in software
factories, as well as the factors that affect it. At the same time, these are classified according to the
production unit of the software factory involved, in such a way as to enable researchers on the subject
to have current information on the studies carried out to date.
This article is organized as follows: an introduction to software factories, followed by a
presentation of the research methodology. Then, statistics on the publications to date and an analysis
of the results of the review are presented, followed by a discussion of the findings of the literature
review. The final section presents conclusions and suggestions for future research.

BACKGROUND OF THE SOFTWARE FACTORY

According to studies by Clements and Northrop (2001), the concept of the software factory brings
great advantages to the areas of software development:

• Decrease in production costs per product of up to 60%, saving time to market of up to 98%, and
reduction of labor requirement of up to 60%;


• Productivity improvement of around 10 times, with product quality improved by up to
10 times fewer errors, expanding the portfolio of products and services offered by the software
factory and the possibility of winning new markets.

The term software factory has been used since the 1960s and 1970s in the United States and Japan.
Cusumano defines it in terms of "mass-produced products including large-scale centralized operations,
standardized and deskilled job tasks, standardized controls, specialized but low-skill workers, divisions
of labor, mechanization and automation, and interchangeable parts. The development about factory
implies the good practices of Software Engineering are applied systematically" (Cusumano, 1989,
p. 23).
Basili, Caldiera and Cantone (1992) propose dividing the software factory into two subareas:
software production and components production. The software production subarea responds to requests
for product (components for software production), data (statistics for estimating costs and deadlines),
and plans (models, methods for analysis, and software design). The component production subarea
has a reusable component base, along with statistics and historical data on which they are based, and
responds to requests made by the software production subarea.
Li C., Li H., and Li M. (2001) organize the software factory, based on Capability Maturity
Model Integration (CMMI) and International Organization for Standardization 9001 (ISO 9001),
into five entities: techniques; process; workers involved; factory management and process assets;
and tools and code components. In Kruchten (2004), the Rational Unified Process (RUP) describes the
process of software development in disciplinary terms (Business Modeling, Requirements, Analysis
and Design, Implementation, Testing, and Deployment) and support processes (Configuration and
Change Management, Project Management, and Environment).
Rockwell and Gera (1993) adapt the Eureka model (distributed software development process)
for the development of two-tier components (services and interface), establishing layered software
development. Fernandes and Teixeira (2004) propose a model that classifies the factory according
to the development scope or stages defined in the process, offering an idea of what the life cycle of
a project should be by classifying the stages as Expanded Projects, Factories of Software Projects,
Factories of Physical Projects, and Factories of Programs.
According to Yanosky (2005), a software factory must have a production organization model,
a component production unit, and a software production unit. Its main deficiencies are the failure to
address project management and the use of quality standards. Fabri, Trindade, Begosso,
Lerario, Silveira and Pessoa (2004); Fabri, Trindade, L’erário, Pessoa and Spinola (2004a); Fabri,
Trindade, Durscki, Spinola and Pessoa (2005); Fabri, Trindade, Begosso, Lerario and Pessoa (2007);
Fabri, Trindade, Silveira and Pessoa (2007a); Fabri, Scheible, Moreira, Trindade, Begosso, Braun and
Pessoa (2007b); Fabri, Trindade, Begosso, Pessoa and L’erário (2007c) and Pessoa, Fabri, L’erário,
Spinola and Begosso (2004) define the process of software production as a process with manufacturing
characteristics, establishing that a software factory has two units: software production and component
production. Nomura, Spinola, Hikage and Tonini (2006) propose a software factory structure, based
on the reuse concepts of the Experience Factory of Basili et al. (1992); the operational management
and organizational division of the Software Factory of Fernandes and Teixeira (2004); the application
of Software Engineering activities cited by Swanson, McComb, Smith and McCubbrey (1991);
and the improvement of the working methods cited by Cusumano (1989), as well as engineering
practices mentioned in the Project Management Body of Knowledge (PMBOK) and RUP concepts
and terminologies.
Trujillo (2007) proposes a model that considers the main elements defined by establishing a
development process based on a methodology, according to the size of the project and the staff. In
summary, Table 1 shows the studies made to date of the organizational structure of the software
factory, and draws an analogy between the various models proposed.

Table 1. Organizational Structure of Software Factories

Kruchten, 2004: Project Management; Business Modeling; Requirements; Analysis and Design; Implementation; Testing; Deployment; Configuration and Change Management; Environment
Basili et al., 1992: Design; Construction; Implantation
Fernandes & Teixeira, 2004: Solution Architecture; Conceptual Project; Logical Specification; Detailed Project; Construction and Testing; Integration Testing; Acceptance Testing
Nomura et al., 2006: Project Management; Software Production (Business; Architecture and Engineering; Coding and Testing); Support
Trujillo, 2007: Project Management; Development Group


Table 1 indicates that the model by Nomura et al. (2006) proposes a software-factory structure,
which covers the various disciplines of software development, as shown in Figure 2.

Figure 2. Software factory structure. Adapted from Nomura et al. (2006)
As proposed by Nomura et al. (2006), a software factory has three major components: Project
Management, Software Production, and Support. The Project Management component is responsible
for the relationship with the client, negotiation, planning, elaboration, and contract management in
accordance with the strategic planning of the company. In addition, the management, elaboration,
programming, and distribution of the service orders, and distribution of the necessary resources to
execute the service, are structured in the following units of work: Service, Project Management and
Planning, and Production Control. The Software Production component is primary, where all the
activities of the development process are created, structured in the following work units: Business
Project Factory, Architecture and Engineering Factory, and Coding and Testing Factory. The Support
component also provides for the integration and development of activities in the development process,
such as process management; infrastructure and support; interface design; security; and documentation.
This review is oriented toward studies of the Software Production component, mainly the
Architecture and Engineering Factory (referred to in this article as Analysis and Design) and the
Coding and Testing Factory (referred to as Programming and Testing).

RESEARCH METHODOLOGY

The systematic review considers the procedure proposed by Kitchenham and Charters (2007), which
has been adapted to develop the report and involves the following phases:

• Planning the Review: In this phase the research questions are raised and the review protocol
is defined.
• Conducting the Review: In this phase, the plan is executed and the primary studies are selected,
according to the inclusion and exclusion criteria established for the effect.
• Reporting the Review: In this phase, the statistics and analysis of the documents found and
selected, and discussed below, are shown.

Planning the Review


To answer the main research question, we ask the following questions about productivity in software
factories:

• Q1: What is the productivity of software factories?


• Q2: What factors influence the measurement of productivity, and how are they classified?
• Q3: What models have been developed?
• Q4: What factors do the models include?

IEEE Xplore, ACM Digital Library, ScienceDirect, Springer, Directory of Open Access Journals
(DOAJ), Taylor and Francis, and other search engines (SCOPUS, Web of Knowledge) have been
considered. The selected reports correspond to the search strings given in Table 2, which have been
applied to the title, abstract, and keywords, for the period between January 2005 and April 2017.
In addition, these terms were adapted to match the research questions and individual needs of
each search engine.
The selection and exclusion criteria set up in Table 3 have also been considered. Search sources
include journals, conferences, proceedings, and reports.

Conducting the Review


The potential primary studies identified in the search process, according to the proposed strategy,
were submitted to a selection process considering the inclusion and exclusion criteria established.
A preliminary content review was performed to determine each study's relevance to the present
work and whether it addresses aspects of software factory productivity. The majority
of the papers were discarded because they corresponded to estimation of costs and efforts for the
development of software; factors for the estimation of effort; or articles that had no Scimago Journal
and Country Rank (SJR) impact factor. The applied process is represented in Figure 3. Then, the
articles were analyzed to answer the research questions.
The development of the systematic review resulted in 2059 papers, and 25 investigations were
selected according to the inclusion and exclusion criteria, as shown in Table 4.
As Table 4 shows, ScienceDirect and Springer contain the largest number of selected publications,
concentrating 40% of the total. In Taylor and Francis, no articles met the selection and
exclusion criteria.

Table 2. Search Strings Used in Sources

IEEE Xplore: ((("software factory" OR "software process" OR "software development" OR "software development process" OR "software engineering")) AND (productivity OR "development efficiency" OR "development effectiveness" OR "development performance" OR "models' performance" OR "project performance" OR "productivity model" OR "productivity factors")), refined by Content Type: Conference Publications, Journals & Magazines; Year: 2005-2017.

ACM Digital Library: "query": {(+("software +factory" +OR +"software +process" +OR +"software +development" +OR +"software +development +process" +OR +"software +engineering") +AND +(productivity +OR +"development efficiency" +OR +"development effectiveness" +OR +"development performance" +OR +"models performance" +OR +"project +performance" +OR +"productivity +model" +OR +"productivity +factors"))}; "filter": {"publicationYear":{ "gte":2005 }}, {owners.owner=ACM

ScienceDirect, Springer, DOAJ, Taylor & Francis, SCOPUS, Web of Knowledge: ("software factory" OR "software process" OR "software development" OR "software development process" OR "software engineering") AND (productivity OR "development efficiency" OR "development effectiveness" OR "development performance" OR "models' performance" OR "project performance" OR "productivity model" OR "productivity factors" OR "efficiency" OR "effectiveness" OR "performance")

Table 3. Selection and Exclusion Criteria

Selection criteria:
• Published in sources with SJR impact factor
• Present models, factors, techniques and/or methodologies about software productivity
• Answer directly to the research questions

Exclusion criteria:
• Cost-oriented studies and effort estimation
• Articles that mention software productivity, but do not contribute to the topic raised
• Items that are not within the context of the software factory
• Books, theses, posters, technical reports

Figure 3. Literature review process


Table 4. Potentially Eligible Studies and Selected Studies

Source | Potentially eligible studies | Selected studies
IEEE Xplore | 223 | 1
ACM Digital Library | 430 | 2
ScienceDirect | 358 | 5
Springer | 324 | 5
DOAJ | 264 | 3
Taylor and Francis | 120 | 0
Others | 340 | 9
TOTAL | 2059 | 25

STATISTICS

Temporal Trend of Publications


Reviewing the software productivity studies that have been done, Figure 4 shows publications in
various sources, demonstrating the interest of researchers in the issue of productivity in the software
industry and, above all, in the software factory.

Figure 4. Documents on software productivity by year and sources

Number of Selected Publications


Table 5 shows the distribution of studies by type of publication and source. The largest number
of publications appears in journals. In terms of sources, there have been more publications in
ScienceDirect and Springer—around 40% of all publications—and 48% of publications correspond
to proceedings and conferences.

Table 5. Distribution by Type of Publication and Source

Type | IEEE Xplore | ACM Digital Library | ScienceDirect | Springer | DOAJ | Others | Total
Journals | 1 | 1 | 4 | 2 | 2 | 3 | 13
Proceedings | 0 | 1 | 1 | 2 | 0 | 4 | 8
Conferences | 0 | 0 | 0 | 1 | 1 | 2 | 4
Total | 1 | 2 | 5 | 5 | 3 | 9 | 25

ANALYSIS

Q1: What is the Productivity of Software Factories?


Banker and Kauffman (1991) state that software productivity is expressed by the relation:

Productivity = (Size of Application Developed) / (Labor consumed during development)

IEEE (1992) defines software productivity as the ratio of work product to work effort. Fenton
and Pfleeger (1997) similarly base the productivity measurement on:

Productivity = Size / Effort

The concept of many authors (Scacchi, 1995; Basili, Briand & Melo, 1996; Blackburn, Scudder
& Van Wassenhove, 1996; Briand, El-Emam & Bomarius, 1998; Jeffery, Ruhe & Wieczorek, 2001;
Maxwell, 2001; Wagner & Ruhe, 2008a; Nwelih & Amadin, 2008) is that measuring productivity
must be based on a constant-returns-to-scale (CRS) model, for which:

Productivity = Output / Input


where output is represented by function points (FP) and source lines of code (SLOC), while input
is the effort of the project in person-hours (PH) or person-months (PM). According to Wagner and
Ruhe (2008a), the number of SLOC produced, or the number of FP implemented, per developer
person-hour is the measurement of productivity.
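To make these ratio definitions concrete, the following minimal sketch computes SLOC-based and FP-based productivity. The project figures, and the 160-hour person-month conversion, are illustrative assumptions rather than data from the cited studies.

```python
# Ratio definitions of software productivity (IEEE, 1992; Fenton & Pfleeger,
# 1997): output (SLOC or FP) divided by input effort. Figures are hypothetical.

def productivity(output: float, effort: float) -> float:
    """Productivity = Output / Input (constant returns to scale)."""
    return output / effort

sloc, fp = 12_000, 310               # outputs: source lines of code, function points
person_hours = 1_500                 # input: development effort
person_months = person_hours / 160   # assumption: 160 person-hours per person-month

print(f"{productivity(sloc, person_hours):.2f} SLOC per person-hour")
print(f"{productivity(fp, person_months):.2f} FP per person-month")
```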
Moreover, Card defines productivity as "the ratio of outputs produced to the resources consumed"
(Card, 2006, p. 1).
In summary, studies of the software factory's productivity interpret productivity through the
Programming unit of work, since its inputs and outputs are readily valued (e.g., SLOC, FP, developer
person-hours, and others). This has led to the development of many performance measurement models
that turned out to be inaccurate. Moreover, these studies did not inspect the software factory
integrally, since they do not consider other units of work, such as Analysis and Design, and
Testing. Furthermore, productivity depends on various factors that influence the process, including
context (Arcudia-Abad et al., 2007; Nomura et al., 2006).

Q2: What Factors Influence Productivity and How Are They Classified?
Premraj, Shepperd, Kitchenham and Forselius (2005) state that the business sectors most influential
for people’s productivity are Banking, Telecommunications, and Insurance.


Moreira et al. (2010) present a model based on CMMI and Six Sigma, and its application for
the prediction of productivity on software organization projects. The model considers these factors:
Defect Density in Systemic Tests, Level of the Requirements Unstableness, Level of Continuous
Integration Utilization, Level of Experience, Development Environment, Percentage of Defects in
Technical Revisions, and Unit Test Coverage.
Rodger, Pankaj and Nahouraii (2011) study the validity of the factors proposed by Subramanian
and Zarnich (1996) and Foss (1993). They show that the most significant factors affecting productivity
are Team Size, Volatility on the Software Development Type (New, Maintenance), Development
Platform, and Type of Programming Language. Likewise, it is determined that Computer-Aided
Software Engineering (CASE) tools do not impact software productivity.
Rodríguez, Sicilia, García and Harrison (2011) demonstrate that a significant statistical relation
exists between Team Size, Effort, and Project Duration.
Yilmaz and O’Connor (2011) demonstrate the correlation between software productivity, social
productivity, and social capital, establishing the following factors: Motivation, Process, Complexity,
Reuse, Team Size, Team Leadership, Collective Outcomes, Information Awareness, Communication
Transparency, Social Relations, and Regular Meetings.
Çetin and Alabaş-Uslu (2015) conducted a study of software companies in Turkey. Through multiple
regression, they demonstrate that a relation exists between the Type of Project and the factors Severity,
Total Number of Issues, and Project Team Size, as well as a relation between Project Duration and
the factors Methodology, Number of Baselines, Difference Between Actual and Planned Baseline
Dates, Duration of Analysis, Duration of Development, Duration of Test, Total Number of Issues,
Total Number of Risks, and Project Complexity.
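Relations of this kind are typically estimated by regressing a productivity or performance measure on candidate factors. The sketch below shows the general technique only: the factor set and all values are hypothetical, and ordinary least squares via numpy stands in for the statistical tooling used in the cited studies.

```python
# Illustrative multiple regression of productivity against candidate factors.
# All numbers are hypothetical; this is not the cited authors' data or model.
import numpy as np

# Rows: projects. Columns: team size, complexity (1-5 scale), duration (months).
factors = np.array([
    [4.0, 2.0, 6.0],
    [9.0, 4.0, 12.0],
    [6.0, 3.0, 9.0],
    [12.0, 5.0, 18.0],
    [5.0, 2.0, 7.0],
])
productivity = np.array([0.31, 0.18, 0.24, 0.12, 0.29])  # e.g., FP per person-hour

X = np.c_[np.ones(len(factors)), factors]   # prepend an intercept column
coef, *_ = np.linalg.lstsq(X, productivity, rcond=None)
print("intercept and factor coefficients:", np.round(coef, 4))
print("fitted productivity:", np.round(X @ coef, 3))
```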
Other factors that impact productivity are Complexity (Zhan, Zhou & Zhao, 2012; Ondrej et al.,
2012; Yilmaz & O'Connor, 2011; Khan et al., 2014) and Level of Experience (Moreira et al., 2010).
Besides that, many existing studies rank the factors (Paiva, Barbosa, Lima & Albuquerque, 2010),
demonstrating that "motivation" and "commitment" have a great positive influence.
Wagner and Ruhe (2008a) classify the factors using eight categories that end up being grouped as
soft and technical factors. The authors define the soft factors as “all non-technical factors influencing
productivity. These factors mainly stem from the team and its work environment” (Wagner & Ruhe,
2008, p.12); and the category of technical factors “contains all factors that have a direct relation
to the product, the technical aspects of the process and the tools the developer uses in the project”
(Wagner & Ruhe, 2008, p.12). Trendowicz (2007) identifies many contexts related to factors such as
Cross-Context Factors, Model-specific Factors, Development-Type-specific Factors, Domain-specific
Factors and Reuse-specific Factors, defining the factors as “factors that are included in the model in
order to explain productivity variance within a certain context" (Trendowicz, 2007, p. 10). Table 6
visualizes the classification of the factors proposed by these authors.

Table 6. Classification of Factors by Category Proposed by Wagner and Ruhe (2008a) and Trendowicz (2007)

Technical factors:
• Product (Trendowicz: Product). "…the product category contains all factors that have a direct relation to the product" (Wagner & Ruhe, 2008, p. 12); "characteristics of software products being developed throughout all development phases. These factors refer to such products as software code, requirements, documentation, etc., and their characteristics, such as complexity, size, volatility, etc." (Trendowicz, 2007, p. 10).
• Process (Trendowicz: Process). "…the category process is comprised of the technical aspects of the process…" (Wagner & Ruhe, 2008, p. 12); "…characteristics of software processes as well as methods, tools and technologies applied during a software development project. They include, for instance, the effectiveness of quality assurance, testing quality, quality of analysis and documentation methods, tool quality and usage, quality of process management, or the extent of customer participation…" (Trendowicz, 2007, p. 10).
• Development Environment (no Trendowicz counterpart). "…the category development environment contains factors about the tools the developer uses in the project…" (Wagner & Ruhe, 2008, p. 12).

Soft factors:
• Corporate Culture. "…contains the factors that are on a more company-wide level…" (Wagner & Ruhe, 2008, p. 14).
• Team Culture. "…denotes similar factors on a team level…" (Wagner & Ruhe, 2008, p. 14).
• Capabilities and Experiences (Trendowicz: Personnel). "…in Capabilities and Experiences are factors summarized that are related to individuals…" (Wagner & Ruhe, 2008, p. 14); "…characteristics of personnel involved in the software development project. These factors usually consider the experience and capabilities of such project stakeholders as development team members (e.g., analysts, designers, programmers, project managers) as well as software users, customers, maintainers, subcontractors, etc." (Trendowicz, 2007, p. 10).
• Environment. "…stands for properties of the working environment…" (Wagner & Ruhe, 2008, p. 14).

Project factors:
• Project (Trendowicz: Project). "…project specific factors…" (Wagner & Ruhe, 2008, p. 14); "…regard various qualities of project management and organization, development constraints, working conditions, or staff turnover…" (Trendowicz, 2007, p. 10).
As shown in Table 6, the classification by Wagner and Ruhe (2008a) includes categories that
in many cases do not appear in Trendowicz (2007), such as Development Environment, Corporate
Culture, Team Culture, and Environment. However, these categories are necessary, since they aggregate
factors that are common to the whole organization and are not exclusive to the project or product. That
is why the categorization proposed by Wagner and Ruhe (2008a) is used to aggregate the factors.
In that context, 74 factors that affect productivity have been collected and classified in the seven
categories proposed by Wagner and Ruhe: Product, Process, Development Environment, Corporate
Culture, Team Culture, Capabilities and Experience, and Project. Appendix A represents the factors
that correspond to each component and category. Table 7 shows the quantity of factors and frequencies
by category.

Table 7. Number of Factors per Component and Category Impacting Productivity

Programming:
• Product: 17 factors (Rodger et al., 2011; Ondrej et al., 2012; Yilmaz & O'Connor, 2011; Cheikhi et al., 2012; Khan et al., 2014; Çetin & Alabaş-Uslu, 2015; López, Kalichanin, Meda & Chavoya, 2010; Pai, Subramanian & Pendharkar, 2015; Unluturk & Kurtel, 2015)
• Process: 13 factors (Ondrej et al., 2012; Yilmaz & O'Connor, 2011; Khan et al., 2014; Cheikhi et al., 2012; Çetin & Alabaş-Uslu, 2015)
• Development Environment: 1 factor (Moreira et al., 2010)
• Corporate Culture: 3 factors (Premraj et al., 2005; Khan et al., 2014; Çetin & Alabaş-Uslu, 2015)
• Team Culture: 8 factors (Yilmaz & O'Connor, 2011; Khan et al., 2014)
• Capabilities and Experience: 6 factors (Yilmaz & O'Connor, 2011; Moreira et al., 2010; Khan et al., 2014)
• Environment: 0 factors
• Project: 12 factors (Rodger et al., 2011; Yilmaz & O'Connor, 2011; Khan et al., 2014; Çetin & Alabaş-Uslu, 2015; Ondrej et al., 2012; Cheikhi et al., 2012)

Testing:
• Product: 3 factors (Moreira et al., 2010; Pai et al., 2015; Unluturk & Kurtel, 2015)
• Process: 2 factors (Moreira et al., 2010)

Analysis & Design:
• Product: 9 factors (Cao, Ching Gu & Thompson, 2012)

Q3: What Models Have Been Developed?


Cardoso, Bert and Podestá (2010) affirm that a model is a simplified representation of reality,
defined through the components and processes that belong to the system under study.



Ten models are identified that measure productivity in software development, based on differing
approaches. Table 8 summarizes the following models: Data Envelopment Analysis (DEA), CMMI
and Six Sigma, Fuzzy Logic, Structural Equation Modeling (SEM), Software Reuse, Total Factor
Productivity, IEEE Std. 1045 and ISO 9126-4, and statistical techniques.

Table 8. Software Productivity Measurement Models

M1. Approach: Data Envelopment Analysis (Asmild et al., 2006)
Model: Productivity = e^1.95 × FP^0.7 (*), where FP: Function Points, and productivity is expressed against work effort.
Context: Large projects in a Canadian bank.
Highlights: The authors measure productivity considering function points (FP) as output and development effort as input, using DEA as the measurement technique.

M2. Approach: CMMI & Six Sigma (Moreira et al., 2010)
Model: Productivity = 32.087 - 3.637 DDST + 11.71 LRU - 9.451 LCIU - 0.8187 LEX DENV, with DDST = 1.8955 - 0.5087 PDTR - 1.6020 UTC. Where DDST: Defect Density in Systemic Tests; LRU: Level of the Requirements Unstableness; LCIU: Level of Continuous Integration Utilization; LEX: Level of Experience; DENV: Development Environment; PDTR: Percentage of Defects in Technical Revisions; UTC: Unit Test Coverage.
Context: 5 projects of the Atlantic Institute, a medium-sized enterprise assessed at CMMI level 3, whose current goal is to reach maturity level 5.
Highlights: The authors propose a productivity model based on CMMI and Six Sigma. They define productivity as General Projects Productivity (GPP), using linear regression to establish the model.

M3. Approach: Fuzzy Logic (López et al., 2010)
Model: Productivity = 10.3 + 0.31701 × N&C, where N&C: New and Changed code, considered as physical lines of code (LOC); N&C is composed of added and modified code.
Context: Data set of 140 small programs developed with practices based on the Personal Software Process (PSP).
Highlights: The authors propose a fuzzy-logic-based model that could be used for estimating and predicting the productivity of software development.

M4. Approach: Structural Equation Model (Yilmaz & O'Connor, 2011)
Model: Productivity = f(Motivation, Process, Complexity, Reuse, Team Size, Social Productivity, Social Capital); Social Productivity = f(Team Leadership, Collective Outcomes, Information Awareness); Social Capital = f(Communication Transparency, Social Relations, Regular Meetings).
Context: The analysis was conducted with 227 participants, about 65% of whom were postgraduates.
Highlights: The authors propose an empirically validated model to measure social capital, social productivity, and software development productivity.

M5. Approach: Total Factor Productivity (Ondrej et al., 2012)
Model: Output definitions (y); input definitions (x); output and input prices determination (p, w); Total Factor Productivity calculations (Fisher or Törnqvist productivity indexes).
Context: Not defined.
Highlights: Total productivity measurement model of the software process based on Total Factor Productivity.

M6. Approach: ISO 9126-4 and IEEE Std. 1045 (Cheikhi et al., 2012)
Model: Consolidated productivity model: Productivity = f(Efficiency, Completeness, Accuracy). Where Efficiency = f(Task time, Task cost, Frequency of error, User efficiency, User efficiency compared to an expert, Task efficiency (time), Task efficiency (cost), Proportion of the time the user is productive); Completeness = f(Percentage of tasks accomplished, Percentage of users who were successful in completing the task, Percentage goal achievement for every unit of time); Accuracy = f(Percentage of tasks achieved correctly, Percentage of users who achieve the task correctly, How often the user encounters inaccurate results when accomplishing the task, How often the user achieves the task with inadequate precision).
Context: Not defined.
Highlights: The authors review the international IEEE Std. 1045 and ISO 9126-4 standards, defining how both standards can be used together for productivity measurement.

M7. Approach: Data Envelopment Analysis variable returns to scale (DEA VRS) (Cao et al., 2012)
Model: Productivity = f(n(OT), n(RT), n(PT), PO(MT, o), PXO(MT), PR(MT, e), PXR(MT), RO(MT, o), RO(MT)) / (labor hour variable). Where n(OT): count of object types per technique; n(RT): count of relationship types per technique; n(PT): count of property types per technique; PO(MT, o): count of the number of properties for a given object type; PXO(MT): average number of properties for a given object type; PR(MT, e): number of properties of a relationship type and its accompanying role types; PXR(MT): average number of properties per relationship type; RO(MT, o): number of relationship types that can be connected to a certain object type; RO(MT): average number of relationship types that can be connected to a certain object type.
Context: Information on 25 UML-based SAD projects from different firms.
Highlights: An approach to assessing the relative efficiency of software projects using these complexity measures as outputs. The model is applied in the systems analysis and design (SAD) process.

M8. Approach: Software Reuse (Nwelih & Amadin, 2008)
Model: Productivity = Σ(i=1..n) (ri + fi + li + ci) / ΣI. Where ri: reuse; fi: functionality; li: length; ci: complexity; ΣI: effort.
Context: In Nigeria, with special reference to the IT industry.
Highlights: This model considers factors like reuse, complexity, length, functionality, and effort.

M9. Approach: Data Envelopment Analysis variable returns to scale (DEA VRS) (Pai et al., 2015)
Model: Productivity = (FP, Quality) / Total Effort (Development Effort + EoC + EoNC). Where Conformance effort (EoC) = (Appraisal + Prevention) costs, e.g., review effort; Non-conformance effort (EoNC) = (Internal + External) failure costs, e.g., rework effort, including failure effort; Quality = f(Defects).
Context: The model is applied to data collected on 79 software development projects from a leading CMMI level 5 organization.
Highlights: The authors develop a model for measuring the productivity of software projects and identifying best practices. The study shows that including EoC and EoNC as inputs has a positive impact on the best-practice frontier.

M10. Approach: Statistical technique (Unluturk & Kurtel, 2015)
Model: Productivity = (Σ(i=1..n) (Quality_i × Quantity_i) / NetTaskHours_i) × Weighted Methods. Where Quality = 1 - ((10 × total # of serious defects + 3 × total # of medium defects + total # of trivial defects) / LOC); Quantity = LOC; Weighted Methods = (3 × (total # of constructors + total # of destructors) + 5 × total # of selectors + 9 × total # of iterators + 15 × total # of modifiers) × (1/N), where N is the total number of methods for an individual programmer.
Context: Three senior-level students worked in the measurement process of two projects.
Highlights: A method to measure the productivity of individual software programmers, providing a common basis for understanding, controlling, and improving software engineering practices.

(*) For a better understanding of the model, the formula has been simplified.
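As an illustration of the DEA approach behind M1, M7, and M9 in Table 8, the following minimal sketch computes an input-oriented, constant-returns-to-scale (CCR) efficiency score with a linear program. It assumes numpy and scipy are available; the formulation is a generic textbook one, not the exact models of Asmild et al. (2006), Cao et al. (2012), or Pai et al. (2015), and the project data are hypothetical.

```python
# Generic input-oriented CCR DEA sketch: a project is efficient (score 1.0)
# if no nonnegative combination of all projects produces at least its outputs
# with proportionally less input. Adding the constraint sum(lambda) = 1 would
# give the variable-returns-to-scale (VRS) variant used by M7 and M9.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(inputs: np.ndarray, outputs: np.ndarray, o: int) -> float:
    """Minimize theta s.t. sum_j lambda_j*x_j <= theta*x_o and sum_j lambda_j*y_j >= y_o."""
    n = inputs.shape[0]                          # number of projects (DMUs)
    c = np.r_[1.0, np.zeros(n)]                  # decision vars: [theta, lambda_1..n]
    A_in = np.c_[-inputs[[o]].T, inputs.T]       # input rows:  sum lambda*x - theta*x_o <= 0
    A_out = np.c_[np.zeros((outputs.shape[1], 1)), -outputs.T]  # output rows: -sum lambda*y <= -y_o
    b_ub = np.r_[np.zeros(inputs.shape[1]), -outputs[o]]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

effort = np.array([[1200.0], [800.0], [1500.0]])  # input: person-hours (hypothetical)
fp = np.array([[300.0], [260.0], [310.0]])        # output: function points (hypothetical)
for o in range(3):
    print(f"project {o}: efficiency = {dea_efficiency(effort, fp, o):.3f}")
```

Similarly, the per-programmer measure summarized as M10 can be evaluated directly from its reconstructed expressions. In the sketch below, the helper names and all figures are hypothetical, and treating N as the sum of the method-category counts is an assumption.

```python
# Sketch of the M10 quality-weighted productivity measure (Unluturk & Kurtel,
# 2015) as reconstructed in Table 8; numbers are illustrative only.

def quality(serious: int, medium: int, trivial: int, loc: int) -> float:
    """Quality = 1 - ((10*serious + 3*medium + trivial) / LOC)."""
    return 1.0 - (10 * serious + 3 * medium + trivial) / loc

def weighted_methods(constructors, destructors, selectors, iterators, modifiers):
    """Weighted Methods, averaged over N methods (assumed: N = sum of counts)."""
    n = constructors + destructors + selectors + iterators + modifiers
    return (3 * (constructors + destructors) + 5 * selectors
            + 9 * iterators + 15 * modifiers) / n

def productivity(tasks, wm: float) -> float:
    """Sum over tasks of Quality_i * Quantity_i / NetTaskHours_i, times Weighted Methods."""
    return wm * sum(q * loc / hours for q, loc, hours in tasks)

# Two tasks for one programmer: (quality, LOC, net task hours). Hypothetical.
tasks = [(quality(serious=1, medium=2, trivial=5, loc=1200), 1200, 40.0),
         (quality(serious=0, medium=1, trivial=3, loc=800), 800, 25.0)]
wm = weighted_methods(constructors=4, destructors=2, selectors=10, iterators=3, modifiers=6)
print(f"individual productivity = {productivity(tasks, wm):.1f}")
```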
On the other hand, there are also works that present models of software productivity evaluation,
but that have not been selected because they do not meet the criteria of the present study. Among
them are the models described by Sudhakar, Farooq and Patnaik (2012): the Simple Model of
Productivity by Card (2006); the normalized Productivity Delivery Rate (PDR) by Jiang, Naudé
and Comstock (2007); and the Structural Equation Modeling (SEM) of Khan et al. (2014), presented in Table 9.

Table 9. Other Models of Measurement of Productivity of Software, Not Selected (Sudhakar et al., 2012)

1. Simple Model of Productivity (Card, 2006)
Formula: Physical Productivity = Number of LOC / man-hours (or days, or months); Functional Productivity = Number of Function Points / man-hours (or days, or months); Economic Productivity = Value / Cost, where Value = f(Price, Time, Quality, Functionality).
Highlights: This model considers entities such as Product, Process or Sub-process, Requirements, Value, Cost, and Effort.

2. Normalized Productivity Delivery Rate (PDR) (Jiang et al., 2007)
Formula: log(PDR) = 2.8400 + 0.3659 × log(Team Size) - 0.6872 × I(3GL) - 1.2962 × I(4GL) - 1.3225 × I(ApG) - 0.1627 × I(MR) - 0.4189 × I(Multi) - 0.3201 × I(PC) - 0.4280 × I(OO) - 0.2812 × I(Event) + 0.7513 × I(OO:Event) - 0.2588 × I(Business) - 0.0805 × I(Regression) + 1.0506 × I(Business:Regression); Normalized PDR = (Normalized Work Effort) / (Adjusted Function Points).
Highlights: It uses two continuous variables, Average Team Size and PDR, and six categorical variables: Language Type, Development Type, Development Platform, Development Techniques, CASE Tool Used, and How Methodology Acquired.

3. Structural Equation Model (Khan et al., 2014)
Formula: Productivity = f(Technology, Working Culture, Interest in Individual Job, Complexity, Team Size, Human Productivity); Human Productivity = f(Manager Skills, Team Unity, Social Life, Meeting Frequency).
Highlights: The authors develop an empirical productivity model, showing the correlation between software productivity and human productivity.
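The normalized PDR regression reproduced in Table 9 can be evaluated once the categorical indicators are fixed. The sketch below assumes natural logarithms; the indicator selection and team size describe a hypothetical project, not data from Jiang et al. (2007).

```python
# Worked evaluation of the normalized PDR regression from Table 9.
# Assumption: log() is the natural logarithm. Indicator values are hypothetical.
import math

COEFFS = {
    "3GL": -0.6872, "4GL": -1.2962, "ApG": -1.3225, "MR": -0.1627,
    "Multi": -0.4189, "PC": -0.3201, "OO": -0.4280, "Event": -0.2812,
    "OO:Event": 0.7513, "Business": -0.2588, "Regression": -0.0805,
    "Business:Regression": 1.0506,
}

def log_pdr(team_size: float, active: set) -> float:
    """log(PDR) = 2.8400 + 0.3659*log(TeamSize) + sum of active indicator coefficients."""
    return 2.8400 + 0.3659 * math.log(team_size) + sum(COEFFS[k] for k in active)

# Hypothetical project: 4GL language, PC platform, business application, team of 6.
pdr = math.exp(log_pdr(6.0, {"4GL", "PC", "Business"}))
print(f"estimated PDR = {pdr:.2f} normalized work hours per adjusted function point")
```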
The following criteria were considered for the evaluation of the presented models:



• Formulation of Productivity: Explicit function to measure productivity.
• Coverage of the Model: Specifies whether the model covers part or all of the software factory's
units of work (Analysis and Design, Programming, and Testing).
• Theoretical Support: Specifies whether the model presents support or grounding in theories.
• Validation: Specifies whether the evaluated model has been tested in real projects of an
organization, or against data repositories.
• Preprocessing: The model describes a procedure for the capture and preprocessing of the data
used to measure productivity.

Table 10 shows the evaluation of the models against the defined criteria, and Table 11 groups the
models by the components of the software factory that they cover.

Table 10. Evaluation of Software Productivity Models

Criteria | M1 | M2 | M3 | M4 | M5 | M6 | M7 | M8 | M9 | M10
Formulation of productivity | √ | √ | √ | No | No | No | No | √ | No | √
Model coverage | PRO | PRO, TES | PRO | PRO | PRO | PRO | AD | PRO | PRO, TES | PRO, TES
Theoretical support | √ | √ | √ | √ | √ | √ | √ | √ | √ | √
Validation | NE | √ | √ | NE | NE | NE | √ | NE | √ | √
Pre-processing | NE | NE | NE | NE | NE | NE | NE | NE | √ | NE

√: meets the established criterion. NE: not specified. PRO: Programming. TES: Testing. AD: Analysis and Design.

Table 11. Measurement of the Productivity of Software Component Models

Programming:
• M1: Data Envelopment Analysis (Asmild et al., 2006)
• M2: CMMI & Six Sigma (Moreira et al., 2010)
• M3: Fuzzy Logic (López et al., 2010)
• M4: Structural Equation Model (Yilmaz & O'Connor, 2011)
• M5: Total Factor Productivity (Ondrej et al., 2012)
• M6: ISO 9126-4 / IEEE Std. 1045 (Cheikhi et al., 2012)
• M8: Software Reuse (Nwelih & Amadin, 2008)
• M9: Data Envelopment Analysis variable returns to scale (DEA VRS) (Pai et al., 2015)
• M10: Statistical technique (Unluturk & Kurtel, 2015)

Testing:
• M2: CMMI & Six Sigma (Moreira et al., 2010)
• M9: Data Envelopment Analysis variable returns to scale (DEA VRS) (Pai et al., 2015)
• M10: Statistical technique (Unluturk & Kurtel, 2015)

Analysis & Design:
• M7: Data Envelopment Analysis variable returns to scale (DEA VRS) (Cao et al., 2012)

Q4: What Factors Do the Models Include?


Taking into consideration the information found in the literature, Table 12 displays the factors included
in the models, classifying them by components.

DISCUSSION

According to the literature, several publications on software productivity have appeared in
recent years, reflecting the interest of researchers in the topic of productivity in the software
industry. However, research on the measurement of productivity in software factories is very
scarce, almost nonexistent, in the literature.





Use of the software factory model has yielded great advantages (Clements & Northrop, 2001),
yet models for the measurement of software productivity that consider only the programming work
unit lead to biased measurement (Asmild et al., 2006; Nwelih & Amadin, 2008; Moreira et al., 2010;
Yilmaz & O'Connor, 2011; Ondrej et al., 2012; Cheikhi et al., 2012), resulting in a lack of measurement
indicators that facilitate decision making in the organization. The studies
have been oriented around two main aspects: techniques or models that measure productivity, and the
factors that affect it. These studies represent the development process as a unit, making measurement
difficult if it were to be performed independently for each of the components involved in the software
development life cycle. No evidence has been found of models that measure productivity in all phases
of the software development process.



At the same time, there is no consensus among authors about the factors that generate the greatest
impact on productivity, or about the unit of measurement used to measure it. Of the total number of
documents, more than 70% of the studies focus on the factors that affect productivity, reflecting the
importance that researchers give to factors in seeking to explain their impact on measurement. Few
studies (27.41%) focus on determining models that measure productivity in the various units of the
software factory.
This article seeks to respond to the following research questions.

What is the Productivity of Software Factories?


The studies explain software productivity while focusing on the Programming unit of work.
However, there is generally little information about productivity in the software factory as a whole.
Despite the software factory being a model that has operated in industry for many years, there is no
clear definition of its productivity.
Productivity measurement is very important for organizational development, and it becomes
complex if there is no clear definition. In this context, the productivity of software factories is defined
as an indicator of the efficiency of the resources used across the different units of work of the software
factory to achieve the final product.
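One way to read this definition is as a set of per-unit ratios monitored side by side. The sketch below is purely illustrative of that reading (it is not a model from the literature); the unit names, output measures, and figures are hypothetical.

```python
# Illustrative per-unit productivity indicators for a software factory.
# Output units differ per unit of work (design artifacts, SLOC, executed test
# cases), so each ratio is comparable only against its own history; a single
# factory-level aggregate would require the homogenized units discussed above.
from dataclasses import dataclass

@dataclass
class UnitOfWork:
    name: str
    output: float        # unit-specific output measure
    output_unit: str
    effort_hours: float  # person-hours consumed

    def productivity(self) -> float:
        return self.output / self.effort_hours

units = [
    UnitOfWork("Analysis and Design", 45, "design artifacts", 600.0),
    UnitOfWork("Programming", 12_000, "SLOC", 1500.0),
    UnitOfWork("Testing", 380, "executed test cases", 400.0),
]

for u in units:
    print(f"{u.name}: {u.productivity():.3f} {u.output_unit} per person-hour")
```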
Applying this concept, productivity requires good management of resources in order to achieve
efficient results in the work the company develops, with regard not only to the manufacture or
production of the software, but also to the methods used and the internal relationships of the organization.

What Factors Influence the Measurement of Productivity; How Are They Classified?
We have identified 74 factors, which have been grouped according to the categories proposed
by Wagner and Ruhe (2008a). These include categories not covered by Trendowicz (2007), such as
Development Environment, Corporate Culture, Team Culture, and Environment. Importantly, these
categories group factors that apply to the whole organization, not exclusively to the product or project.


Table 12. Factors Included in the Productivity Models of Software Components

Component Category M1 M2 M3 M4 M5 M6 M7 M8 M9 M10


Programming Product F10, F3, F4, F1, F2, F3, F9, F4, F5, F10 F9
F15, F17 F4, F5, F6, F10 F6,
F16 F7, F8, F17
F17
Process F35, F31, F32 F30, F34     F33
F37,
F38,
F39
Development F45
Environment
Corporate Culture
Team Culture F49,
F50,
F51,
F52,
F53,
F54
Capabilities and F58 F57,
Experience F59,
F60,
F61
Project F64, F65, F66,
F66, F67, F72
F68, F69,
F70, F71,
F73
Testing Product F18, F18, F18,
F19, F19 F19
F20
Process F43,
F44
Analysis & Design Product F21, F22,
F23, F24,
F25, F26,
F27, F28,
F29

What Models Have Been Developed?


We have identified 10 models of software productivity measurement, based on Data Envelopment
Analysis (Asmild et al., 2006; Cao et al., 2012; Pai et al., 2015), CMMI & Six Sigma (Moreira
et al., 2010), Fuzzy Logic (López et al., 2010), Structural Equation Modeling (Yilmaz & O'Connor,
2011), Software Reuse (Nwelih & Amadin, 2008), Total Factor Productivity (Ondrej et al., 2012),
ISO 9126-4 and IEEE Std. 1045 (Cheikhi et al., 2012), and other statistical techniques (Unluturk
and Kurtel, 2015). Although they are defined as productivity measurement models, many of them
pose a function but do not lead to an explicit formula (Yilmaz & O'Connor, 2011; Ondrej et al.,
2012; Cheikhi et al., 2012), only mentioning several factors to take into account in the measurement.
Models that have been designed only to measure internal productivity are incomplete,
because they have been oriented mostly toward the Programming work unit (Asmild
et al., 2006; Nwelih & Amadin, 2008; Moreira et al., 2010; Yilmaz & O'Connor, 2011; Ondrej et
al., 2012; Cheikhi et al., 2012), and very few toward Testing and Analysis and Design (Moreira et
al., 2010; Pai et al., 2015; Unluturk & Kurtel, 2015; Cao et al., 2012). This generates a bias in the
measurement, because it does not allow establishing the productivity of the other components of
the software factory. No models have been found that measure productivity across the software
factory as a whole.
The models do not measure external productivity—in other words, they make no comparative
evaluation of productivity. This could have negative consequences for the organization, because if
it only compares internally, it would bias information about the market. Some models, by Asmild et
al. (2006) and Moreira et al. (2010), have been validated in a real environment, but the information
used to design the model is obtained from the same organization, which makes it more difficult to
compare with other organizations.
Also, some models are only outlined qualitatively (Yilmaz & O'Connor, 2011; Ondrej et al., 2012;
Cheikhi et al., 2012). Although the accuracy of such a model could be assessed, the results could not
be generalized, because these models have no practical application or have not been applied in case studies.

What Factors Do the Models Include?


The factors most used in the models are Team Size, Reuse, and Complexity. However, many factors
have not been taken into account in the models, such as Total Number of Launches, Issues, Risks,
Change Requests, Occurrence of Standby, Duration of Standby, and Reason for Standby. Considering
the factors that exist in the literature and the models developed to date, there is no homogenization
of the factors to be included in a model. Omitting factors affects the accuracy of a model, since the
more factors a model covers, the more aspects of reality it can capture.

CONCLUSION

If an organization needs to know the efficiency and effectiveness with which it carries out important
activities in the production process, a clear way to measure productivity is necessary. This is a complex
activity with no consensus on the aspects to be taken into account. The most outstanding models
consider factors including Team Size, Reuse, Complexity, Team Cohesion, Team Unity, Social
Relations, Social Life, Regular Meetings, and Meeting Frequency. However, many factors have not
been considered, which affects the accuracy of the models.
accuracy of the models. There are no models for the productivity measurement in software factories
as a whole. Most studies are oriented toward the work unit of Programming, and very few toward
Analysis and Design, and Testing.
These studies represent the development process as a unit, making measurement difficult if it were
to be performed independently by each of the components involved in the software development life
cycle. No evidence has been found of models that measure productivity in all phases of the software
development process. The models do not measure external productivity; they are only oriented toward
measuring internal production of the organization. This leads to making bad decisions, because high
productivity internally could nonetheless be below standard, and thus inefficient. There is no standard
for the unit of measurement to be used in productivity measurement. A number of criteria are used
for input and output factors, making it difficult to benchmark projects and organizations.
As future work, we suggest investigating the homologation of measurement units to facilitate the
measurement and comparison of software factory productivity; identifying new factors that affect
productivity, considering the use of agile methodologies in software factories; and, finally, designing
new productivity models with non-parametric approaches that have succeeded in other areas, such as
Data Envelopment Analysis (DEA), which could improve the measurement of productivity throughout
the software factory, its parts, and its competitors.


REFERENCES

Arcudia-Abad, E., Solís-Carcaño, R. G., & Cuesta-Santos, A. R. (2007). Propuesta tecnológica para incrementar
la productividad en la construcción masiva de vivienda. Ingeniería, Investigación y Tecnología, 8(2), 11.
Asmild, M., Paradi, J. C., & Kulkarni, J. C. (2006). Using data envelopment analysis in software development
productivity measurement. Software Process Improvement and Practice, 11(6), 561-572. doi:10.1002/spip.298
Banker, R. D., & Kauffman, R. J. (1991). Reuse and productivity in integrated computer-aided software
engineering: An empirical study. Management Information Systems Quarterly, 14(3), 374–401.
Basili, V. R., Briand, L. C., & Melo, W. L. (1996). How reuse influences productivity in object-oriented systems.
Communications of the ACM, 39(10), 104–116. doi:10.1145/236156.236184
Basili, V. R., Caldiera, G., & Cantone, G. (1992). A reference architecture for the component factory. ACM
Transactions on Software Engineering and Methodology, 1(1), 53–80. doi:10.1145/125489.122823
Blackburn, J. D., Scudder, G. D., & Van Wassenhove, L. N. (1996). Improving speed and productivity of
software development: A global survey of software developers. IEEE Transactions on Software Engineering,
22(12), 875-885.
Briand, L. C., El-Emam, K., & Bomarius, F. (1998). COBRA: A hybrid method for software cost estimation,
benchmarking, and risk assessment. In Proceedings of The 20th International Conference on Software
Engineering, Kyoto, Japan, April 19-25 (pp. 390-399). doi:10.1109/ICSE.1998.671392
Cao, Q., Ching Gu, V., & Thompson, M. A. (2012). Using complexity measures to evaluate software development
projects: A nonparametric approach. The Engineering Economist: A Journal Devoted to the Problems of Capital
Investment, 57(4), 274-283.
Card, D. N. (2006). The challenge of productivity measurement. In Proceedings of Pacific Northwest Software
Quality Conference, Portland, January.
Cardoso, C., Bert, F., & Podestá, G. (2010). Modelos Basados en Agentes (MBA): definición, alcances y
limitaciones. Landuse, biofuels and rural development in the La Plata Basin (IDRC Grant ID 104783-001).
Çetin, F., & Alabaş-Uslu, Ç. (2015). Performance evaluation of projects in software development. Journal of
Aeronautics and Space Technologies, 8(2), 1–6.
Cheikhi, L., Al-Qutaish, R. E., & Idri, A. (2012). Software productivity: Harmonization in ISO/IEEE software
engineering standards. Journal of Software, 7(2), 462–470. doi:10.4304/jsw.7.2.462-470
Clements, P. C., & Northrop, L. M. (2001). Software product lines: Practices and patterns. Boston: Addison-
Wesley Professional.
Cusumano, M. A. (1989). The software factory: A historical interpretation. IEEE Software, 6(2), 23–30.
doi:10.1109/MS.1989.1430446
Fabri, J. A., Scheible, A. C. F., Moreira, P. M. L., Trindade, A. L. P., Begosso, L. R., Braun, A. P., & Pessoa, M.
S. P. (2007b). Meta-process used for production process modeling of a software factory: The Unitech case. In
Managing Worldwide Operations and Communications with Information Technology. Vancouver, Canada: IRMA.
Fabri, J. A., Trindade, A. L. P., Begosso, L. R., Lerario, I., & Pessoa, M. S. P. (2007). A Organização de uma
Máquina de Processo e a Melhoria do Processo de Produção de Software em um Ambiente de Fábrica. In VI
Jornadas Iberoamericana de Ingeniería del Software e Ingeniería del Conocimiento, Lima, Perú.
Fabri, J. A., Trindade, A. L. P., Begosso, L. R., Lerario, I., Silveira, F. L. F., & Pessoa, M. S. P. (2004). Techniques
for the development of a software factory: Case CEPEIN-FEMA. In Proceedings of the 17th International
Conference Software & Systems Engineering and their Applications, Paris.
Fabri, J. A., Trindade, A. L. P., Begosso, L. R., Pessoa, M. S. P., & L’erário, A. (2007c). The use of the idef-
0 to model the process in a software factory. In Managing Worldwide Operations and Communications with
Information Technology–IRMA 2007, Vancouver, Canada.


Fabri, J. A., Trindade, A. L. P., Durscki, R., Spinola, M. M., & Pessoa, M. S. P. (2005). Proposta de um Mecanismo
de Desenvolvimento e Customização de uma Fábrica de Software Orientada a Dominios. In Proceedings of the
XXXI Latin American Computing Conference, Cali.
Fabri, J. A., Trindade, A. L. P., L’erário, A., Pessoa, M. S. P., & Spinola, M. (2004a). Desenvolvimento e
Replicação de uma Fábrica de uma Software. In VI Simpósio Internacional de Melhoria de Processo de Software.
São Paulo: SIMPROS.
Fabri, J. A., Trindade, A. L. P., Silveira, M., & Pessoa, M. S. P. (2007a). O Papel do CMMI na Configuração de
um Meta-Processo de Produção de Software com Características Fabris: Um Estudo de Caso. In: VI Jornadas
Iberoamericanas de Ingeniería del Software e Ingeniería del Conocimiento (pp. 375-383), Lima, Perú.
Fenton, N. E., & Pfleeger, S. L. (1997). Measuring productivity. In Software metrics: A rigorous and practical
approach (pp. 412–425).
Fernandes, A. A., & Teixeira, D. S. (2004). Fábrica de software: Implantação e gestão de operações. São
Paulo: Editora Atlas.
Foss, W. B. (1993). Fast, faster, fastest development. Computerworld, 27(22), 81–83.
Hernández, A., Colomo, R., & García, A. (2012). Productivity in software engineering: A study of its meanings
for practitioners: Understanding the concept under their standpoint. In Information Systems and Technologies.
Madrid, Spain: CISTI.
IEEE. (1992). IEEE Standard for Software Productivity Metrics 1045, http://ieeexplore.ieee.org/servlet/
opac?punumber=2858
Jeffery, R., Ruhe, M., & Wieczorek, I. (2001). Using public domain metrics to estimate software development
effort. Proceedings of METRICS, 01, 16–27.
Jiang, Z., Naudé, P., & Comstock, C. (2007). An investigation on the variation of software development
productivity. International Journal of Computer and Information Science and Engineering, 1(2), 72–81.
Khan, R., Ahmed, I., & Faisal, M. (2014). An industrial investigation of human factors effect on software
productivity: Analyzed by SEM model. International Journal of Computer Science and Mobile Computing,
3(5), 16–24.
Kitchenham, B., & Charters, S. (2007). Guidelines for performing systematic literature reviews in software
engineering. Technical Report EBSE-2007-01, Software Engineering Group, School of Computer Science and
Mathematics, Keele University.
Kruchten, P. (2004). The rational unified process: An introduction (3rd ed.). Boston: Addison-Wesley.
Li, C., Li, H., & Li, M. (2001). A software factory model based on ISO 9000 and CMM for Chinese small
organizations. In Proceedings of the Second Asia-Pacific Conference on Quality Software (APAQS’01), Hong
Kong, China.
López, C., Kalichanin, I., Meda, M. E., & Chavoya, A. (2010) Software development productivity prediction
of small programs using fuzzy logic. In Proceedings of the Seventh International Conference on Information
Technology, Las Vegas, Nevada, April 12-14.
Maxwell, K. D. (2001). Collecting data for comparability: Benchmarking software development productivity.
IEEE Software, 18(5), 22–25. doi:10.1109/52.951490
Moreira, C. I., Carneiro, C., Pires, C. S., & Bessa, A. (2010). A practical application of performance models
to predict the productivity of projects. In T. Sobh (Ed.), Innovations and advances in computer sciences and
engineering (pp. 273–277).
Nomura, L., Spinola, M., Hikage, O., & Tonini, A. C. (2006). FS-MDP: Um Modelo de Definição de Processos
de Fábrica de Software. In Proceedings of the XXVI Encontro Nacional de Engenharia de Produção, Fortaleza,
Brazil, October 9-11.
Nwelih, E., & Amadin, I. F. (2008). Modeling software reuse in traditional productivity model. Asian Journal
of Information Technology, 7(8), 484–488. Retrieved from http://medwelljournals.com/abstract/?doi=ajit.2008.484.488

Ondrej, M., Jiri, H., & Jan, H. (2012). Estimating productivity of software development using the total factor
productivity approach. International Journal of Engineering Business Management, 4, 4–34. doi:10.5772/52797
Pai, D. R., Subramanian, G. H., & Pendharkar, P. C. (2015). Benchmarking software development productivity of
CMMI level 5 projects. Information Technology and Management, 16(3), 235–251. doi:10.1007/s10799-015-0234-4
Paiva, E., Barbosa, D., Lima, R., & Albuquerque, A. (2010). Factors that influence the productivity of software
developers in a developer view. In T. Sobh & K. Elleithy (Eds.), Innovations in Computing Sciences and Software
Engineering (pp. 99–104). doi:10.1007/978-90-481-9112-3_17
Pessoa, M. S. P., Fabri, J. A., L’erário, A., Spinola, M., & Begosso, A. (2004). Desenvolvimento e Replicação
de uma Fábrica de Software. In Proceedings of the IV Jornadas Iberoamericanas de Ingeniería del Software
e Ingeniería del Conocimiento, Madrid, España.
Petersen, K. (2011). Measuring and predicting software productivity: A systematic map and review. Information
and Software Technology, 53(4), 317–343. doi:10.1016/j.infsof.2010.12.001
Premraj, R., Shepperd, M., Kitchenham, B., & Forselius, P. (2005). An empirical analysis of software productivity
over time. Presented at the 11th IEEE International Software Metrics Symposium (METRICS 2005), Como, Italy,
September 19-22. doi:10.1109/METRICS.2005.8
Rockwell, R., & Gera, M. H. (1993). The Eureka Software Factory CoRe: A conceptual reference model for
software factories. In Proceedings of the Software Engineering Environments Conference, United Kingdom,
July 7-9. doi:10.1109/SEE.1993.388419
Rodger, J. A., Pankaj, P., & Nahouraii, A. (2011). Knowledge management of software productivity and
development time. Journal of Software Engineering and Applications, 4(11), 609–618. doi:10.4236/
jsea.2011.411072
Rodríguez, D., Sicilia, M. A., García, E., & Harrison, R. (2011). Empirical findings on team size and productivity
in software development. Journal of Systems and Software, 85(3), 562–570. doi:10.1016/j.jss.2011.09.009
Scacchi, W. (1995). Understanding software productivity. In D. Hurley (Ed.), Software Engineering and
Knowledge Engineering: Trends for the next decade (Vol. 4, pp. 37–70). Los Angeles: World Scientific Press.
Subramanian, G. H., & Zarnich, G. E. (1996). An examination of some software development effort and
productivity determinants in ICASE tool projects. Journal of Management Information Systems, 12(4), 143–160.
doi:10.1080/07421222.1996.11518104
Sudhakar, G., Farooq, A., & Patnaik, S. (2012). Measuring productivity of software development teams. Serbian
Journal of Management, 7(1), 65–75. doi:10.5937/sjm1201065S
Swanson, K., McComb, D., Smith, J., & McCubbrey, D. (1991). The application software factory: Applying
total quality techniques to systems development. Management Information Systems Quarterly, 15(4), 567–579.
doi:10.2307/249460
Trendowicz, A. (2007). Factors influencing software development productivity—state of the art and industrial
experiences (Technical Report 08.07/E). Kaiserslautern, Germany: Fraunhofer IESE.
Trujillo, Y. T. (2007). Modelo de factoría de software aplicando inteligencia. Serie Científica de la Universidad
de las Ciencias Informáticas, 1(1).
Unluturk, M. S., & Kurtel, K. (2015). Quantifying productivity of individual software programmers: Practical
approach. Computing and Informatics, 34(4), 959–972.
Wagner, S., & Ruhe, M. (2008). A structural review of productivity factors in software development. Technical
Report. Technische Universität München.
Wagner, S., & Ruhe, M. (2008a). A systematic review of productivity factors in software development. In
Proceedings of 2nd International Workshop on Software Productivity Analysis and Cost Estimation, State Key
Laboratory of Computer Science, Institute of Software.
Yanosky, R., M. M. V. (2005). Modelo funcional de la Factoría de Software de la UCI para la línea Carrefour.

Yilmaz, M., & O’Connor, R. V. (2011). An empirical investigation into social productivity of a software process:
An approach by using the structural equation modeling. In Proceedings of the European Conference on Software
Process Improvement 2011, CCIS (Vol. 172, pp. 155-166).
Zhan, J., Zhou, X., & Zhao, J. (2012). Impact of software complexity on development productivity.
International Journal of Software Engineering and Knowledge Engineering, 22(8), 1103–1122. doi:10.1142/
S0218194012500301

APPENDIX A

Table 13. Factors that impact productivity

ID | Component | Category | Factor | Description | Reference

Category: Product
F1 | Programming | Product | Volatility | Volatility in software development type (New, Maintenance) | Rodger et al., 2011
F2 | Programming | Product | Development Platform | Development platform used in the project | Rodger et al., 2011
F3 | Programming | Product | Programming Language | Type of programming language | Rodger et al., 2011
F4 | Programming | Product | Reuse | Percentage of reused object points | Ondrej et al., 2012; Yilmaz & O’Connor, 2011; Nwelih & Amadin, 2008
F5 | Programming | Product | Functionality | Function points count | Ondrej et al., 2012; Nwelih & Amadin, 2008
F6 | Programming | Product | Length | LOC corrected with respect to the density of comments | Ondrej et al., 2012; Nwelih & Amadin, 2008
F7 | Programming | Product | Data | Lines of produced useful data | Ondrej et al., 2012
F8 | Programming | Product | Documentation | Lines of produced documentation | Ondrej et al., 2012
F9 | Programming | Product | Source Statements | Delivered new SS; delivered reused SS | Cheikhi et al., 2012; Unluturk & Kurtel, 2015
F10 | Programming | Product | Function Points | Number of function points | Cheikhi et al., 2012; Pai et al., 2015; López et al., 2010
F11 | Programming | Product | Launches | Total number of launches | Çetin & Alabaş-Uslu, 2015
F12 | Programming | Product | Issues | Total number of bugs and issues seen throughout the project | Çetin & Alabaş-Uslu, 2015
F13 | Programming | Product | Risks | Total number of events that may end up with a negative impact | Çetin & Alabaş-Uslu, 2015
F14 | Programming | Product | Change Request | A change request is a formal proposal for an alteration to some product or system | Çetin & Alabaş-Uslu, 2015
F15 | Programming | Product | Count Type | Physical/logical | López et al., 2010
F16 | Programming | Product | Statement | Statement type (executable, nonexecutable) | López et al., 2010
F17 | Programming | Product | Complexity | Level of complexity of the project (complex, moderately complex, or not complex) | Ondrej et al., 2012; Yilmaz & O’Connor, 2011; Khan et al., 2014; Çetin & Alabaş-Uslu, 2015; Nwelih & Amadin, 2008
F18 | Testing | Product | Defect Density in Systemic Tests | Number of confirmed defects detected in test during a defined period | Moreira et al., 2010; Pai et al., 2015; Unluturk & Kurtel, 2015
F19 | Testing | Product | Defects in Technical Revisions | Percentage of defects in technical revisions | Moreira et al., 2010; Pai et al., 2015; Unluturk & Kurtel, 2015
F20 | Testing | Product | Unit Test Coverage | Percentage of unit test coverage | Moreira et al., 2010
F21 | Analysis & Design | Product | Object types per technique | Count of object types per technique | Cao et al., 2012
F22 | Analysis & Design | Product | Relationship types per technique | Count of relationship types per technique | Cao et al., 2012
F23 | Analysis & Design | Product | Property types per technique | Count of property types per technique | Cao et al., 2012
F24 | Analysis & Design | Product | Number of properties for a given object type | Count of the number of properties for a given object type | Cao et al., 2012
F25 | Analysis & Design | Product | Average number of properties for a given object type | Average number of properties for a given object type | Cao et al., 2012
F26 | Analysis & Design | Product | Number of properties of a relationship type and its accompanying role types | Number of properties of a relationship type and its accompanying role types | Cao et al., 2012

F27 | Analysis & Design | Product | Average number of properties per relationship type | Average number of properties per relationship type | Cao et al., 2012
F28 | Analysis & Design | Product | Number of relationship types that can be connected to a certain object type | Number of relationship types that can be connected to a certain object type | Cao et al., 2012
F29 | Analysis & Design | Product | Average number of relationship types that can be connected to a certain object type | Average number of relationship types that can be connected to a certain object type | Cao et al., 2012

Category: Process
F30 | Programming | Process | Training | Number of trained persons | Ondrej et al., 2012
F31 | Programming | Process | Process | Not defined | Yilmaz & O’Connor, 2011
F32 | Programming | Process | Meetings | Regular meetings, meeting frequency | Yilmaz & O’Connor, 2011; Khan et al., 2014
F33 | Programming | Process | Documentation (Type, Origin, Usage) | Document page count; screen count; number of words; number of ideograms; number of graphics | Cheikhi et al., 2012
F34 | Programming | Process | Methodology | Software development methodology used | Çetin & Alabaş-Uslu, 2015
F35 | Programming | Process | Baselines | Number of baselines (a baseline is a point of reference) | Çetin & Alabaş-Uslu, 2015
F36 | Programming | Process | Severity | Indicates the severity level of the project | Çetin & Alabaş-Uslu, 2015
F37 | Programming | Process | Duration of analysis | Time span for analysis | Çetin & Alabaş-Uslu, 2015
F38 | Programming | Process | Duration of development | Time span for development | Çetin & Alabaş-Uslu, 2015
F39 | Programming | Process | Duration of test | Time span for test | Çetin & Alabaş-Uslu, 2015
F40 | Programming | Process | Occurrence of stand-by | Number of occurrences of stand-bys (on hold) | Çetin & Alabaş-Uslu, 2015
F41 | Programming | Process | Duration of stand-by | Elapsed time until the project starts again | Çetin & Alabaş-Uslu, 2015
F42 | Programming | Process | Reason for stand-by | The reason for the stand-by | Çetin & Alabaş-Uslu, 2015
F43 | Testing | Process | Requirements Instability | Level of requirements instability | Moreira et al., 2010
F44 | Testing | Process | Continuous Integration Utilization | Level of continuous integration utilization | Moreira et al., 2010

Category: Development Environment
F45 | Programming | Development Environment | Development Environment | Not defined | Moreira et al., 2010

Category: Corporate Culture
F46 | Programming | Corporate Culture | Business Sector | Not defined | Premraj et al., 2005
F47 | Programming | Corporate Culture | Technology | Not defined | Khan et al., 2014
F48 | Programming | Corporate Culture | Project | Type of project (infrastructure, service, product, feasibility); every project must fall into one of these types | Çetin & Alabaş-Uslu, 2015

Category: Team Culture
F49 | Programming | Team Culture | Cohesion | Team cohesion, team unity | Yilmaz & O’Connor, 2011; Khan et al., 2014
F50 | Programming | Team Culture | Team Leadership | Not defined | Yilmaz & O’Connor, 2011
F51 | Programming | Team Culture | Collective Outcomes | Not defined | Yilmaz & O’Connor, 2011
F52 | Programming | Team Culture | Information Awareness | Not defined | Yilmaz & O’Connor, 2011
F53 | Programming | Team Culture | Transparency | Not defined | Yilmaz & O’Connor, 2011
F54 | Programming | Team Culture | Social Relations, Social Life | Not defined | Yilmaz & O’Connor, 2011; Khan et al., 2014
F55 | Programming | Team Culture | Working Culture | Not defined | Khan et al., 2014
F56 | Programming | Team Culture | Interest in Individual Job | Not defined | Khan et al., 2014

Category: Capabilities and Experience

F57 | Programming | Capabilities and Experience | Motivation | Not defined | Yilmaz & O’Connor, 2011
F58 | Programming | Capabilities and Experience | Experience | Not defined | Moreira et al., 2010
F59 | Programming | Capabilities and Experience | Leadership | Not defined | Yilmaz & O’Connor, 2011
F60 | Programming | Capabilities and Experience | Trust | Not defined | Yilmaz & O’Connor, 2011
F61 | Programming | Capabilities and Experience | Communication | Not defined | Yilmaz & O’Connor, 2011
F62 | Programming | Capabilities and Experience | Manager Skills | Not defined | Khan et al., 2014

Category: Project
F63 | Programming | Project | Team Size | Size of the project team | Rodger et al., 2011; Yilmaz & O’Connor, 2011; Khan et al., 2014; Çetin & Alabaş-Uslu, 2015
F64 | Programming | Project | Engineering labour | Man-hours | Ondrej et al., 2012
F65 | Programming | Project | Testing labour | Man-hours | Ondrej et al., 2012
F66 | Programming | Project | Management labour | Man-hours, staff-hours | Ondrej et al., 2012; Cheikhi et al., 2012
F67 | Programming | Project | Support labour | Man-hours | Ondrej et al., 2012
F68 | Programming | Project | Materials and services | Deflated value | Ondrej et al., 2012
F69 | Programming | Project | Tangible capital stock | Deflated value | Ondrej et al., 2012
F70 | Programming | Project | Intangible capital stock | Deflated value | Ondrej et al., 2012
F71 | Programming | Project | Other capital | Deflated value | Ondrej et al., 2012
F72 | Programming | Project | Staff Cost | Management cost | Cheikhi et al., 2012
F73 | Programming | Project | Duration of project | Total elapsed time of the project | Çetin & Alabaş-Uslu, 2015
F74 | Programming | Project | Difference between actual and planned baseline dates | Difference between the actual and planned dates | Çetin & Alabaş-Uslu, 2015

Pedro S. Castañeda is a Systems Engineer with an MBA from ESAN and a Master's degree in Management and
Information Technology from UNMSM. He has experience implementing software factories in Peru and is responsible
for electronic toll and traffic management solutions. He holds international certifications in Project Management
Professional (PMP) and Scrum (Scrum Alliance, Scrum.org, SCRUMstudy), and teaches undergraduate and postgraduate
courses at public and private universities (UNMSM, UPN, UPC). Fields of interest: Productivity, Software Factory,
Software Engineering, Document Digitalization.

David Mauricio holds a Doctorate in Systems Engineering and Computation (1991-1994) and a Master of Science
in Applied Mathematics (1989-1991) from the Federal University of Rio de Janeiro, Brazil, and a degree in
Computer Science from the National University of San Marcos. He was a professor at the Universidade Estadual
Norte Fluminense in Brazil from 1994 to 1998 and has been a professor at the National University of San Marcos
since 1998. Fields of interest: Combinatorial Optimization, Mathematical Programming, Expert Systems, Machine
Learning, Artificial Intelligence, Software Engineering.
