Measuring Technical Debt

Patrick Morrison
North Carolina State University
pjmorris@ncsu.edu

Abstract
The objective of this paper is to survey potential quantitative measures of the technical debt metaphor. It has long been observed that there is a tradeoff between software quality and ship date: favoring ship date over quality brings the positive impact of early availability and the negative impact of increased maintenance expense. The metaphor of 'technical debt' was first described nearly two decades ago to describe a disciplined approach to measuring and managing this tradeoff. Over time the use of the term has spread, particularly where technical choices within the software development process have economic impacts. Consideration has recently been given to the idea of exploring and formalizing notions of technical debt. This paper examines how the metaphor has been defined and used, and it surveys past and present measures of software product, process and people with an eye toward their use in a quantitative definition of technical debt.

1 Introduction
"Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite. [..] The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. ", Ward Cunningham [1]

Almost 20 years ago [1], Ward Cunningham used the 'Technical Debt' metaphor to describe the long-term cost of making sub-optimal technical design and implementation choices in software in exchange for releasing that software at a given time. Technical debt is a project management tradeoff between internal software quality and project scope and resources. The notion has proven to be both lasting and popular in discussions among software professionals. Technical debt, however, has not yet been quantified in a general way. The overview of a recent FSE workshop on technical debt [2] notes "Little is known about technical debt, beyond feelings and opinions" and provides a list of questions about the meaning and use of the metaphor that depend on the ability to quantify and measure technical debt. Finding a quantitative measure for technical debt could aid software developers and management in deciding upon courses of action during software development that produce improved project outcomes. Determining that the concept is too vague to obtain general quantitative measures would also be useful. The objective of this paper is to identify quantitative measurements for its definition and use. In order to do so, this paper examines existing measures for the properties associated with technical debt that are defined by Brown et al. [2], namely visibility, value, present value, debt accretion, environment, origin, and impact. Visibility denotes the level of awareness of internal software quality and its impact on maintenance decisions; Value is the benefit of the chosen tradeoff; Present value incorporates the costs of debt's impact and uncertainty; Debt accretion measures maintenance cost increase over time; Environment reflects whole-system dependencies; Origin reflects the intent leading to the incurred debt; and Impact denotes the scope of the physical changes necessary to remove the debt. These properties will be set in the context of existing measures of software and software projects, and in the existing context of uses of the term 'technical debt'. The remainder of the paper is organized as follows. Section 2 describes the observed properties of technical debt as described by the literature and by practitioners, and also surveys desirable properties of a metric for technical debt from the perspective of both software metrics and project management. Section 3 discusses existing measures of technical debt. Section 4 discusses current applications of technical debt. Section 5 documents limitations of the present paper's data, method and conclusions. Section 6 presents a discussion and summary of the paper, with an eye toward further work.

2 Properties of Technical Debt

This section analyzes two definitions of technical debt: the original definition, quoted in the introduction, and that of an FSE technical debt workshop from last year that included input from the originator of the term [1, 2]. The words 'software' and 'system' are used interchangeably to describe the software under development.

The original definition incorporates notions of debt, interest, collateral, and repayment [1]. Debt is incurred at the point in time when a believed-to-be suboptimal set of technical choices is implemented as part of delivering software to a customer. The 'loan' is one of time, borrowed against the cost of conforming the software to the mentioned ideal of software quality. The loan 'collateral' is the value of the software's use from an earlier point in time than if the debt were not incurred. Interest on the debt is paid during the interval between when the debt is incurred and when it is repaid. The loan is repaid by taking the time to update the software to conform to the ideal of quality.

A more recent consideration of the properties of technical debt identifies the properties of visibility, value, present value, debt accretion, environment, origin and impact [2]. Visibility denotes the level of awareness of internal software quality and its impact on maintenance decisions.

Value is a measure of the economic difference between the system and the 'ideal' system [2]. It appears that a key property of technical debt is this 'ideal' to which the software must (eventually) conform. In a practical sense, the ideal refers to the knowledge embedded in documents such as coding standards, design and architecture handbooks, and test suites, and in the knowledge of the developers, architects, managers, and designers working on the project. These could be viewed as non-functional requirements of the system. Measuring Value requires not only the definition of the 'ideal', but also of the distance from that ideal along dimensions that can be influenced by the expenditure of programmer time. In an abstract sense, this could be viewed as a microeconomic 'production function' applied at a more granular level than the usual firm level. From this description, it is clear that measuring aspects of the code, while necessary, is not sufficient for evaluating technical debt; the value of the program's delivery at a point in time must also be considered. Notably, the programmer's time, the 'ideal' of quality, and the value of early delivery must all be considered. It appears that Value would be measured in currency or other economic units. An often-used unit of measure for these quantities is programmer time, which can readily be translated to monetary cost in a given organization. The cost of programmer time depends on many dimensions, including organization, geography, experience level, skill level, and rarity of skills, but for a given project and organization there is typically a small, known range of values. An accurate assessment of Value will be available only after system use, as it is measured in terms of actual economic results. In the meantime, estimates of Value must be made. This implies the need to estimate the earned value at a point in time for both the development organization and the user organization (which may possibly be the same organization).

Present value incorporates the costs of debt's impact and uncertainty. This is an estimate based on the same units as Value, but it additionally requires terms expressing the Impact of a given choice and the uncertainty with which the estimate is made.

Debt accretion measures debt increase over time, corresponding to increased effort to understand, change, and manage sub-optimal code. This conforms to the earlier notion of interest.

Environment reflects whole-system dependencies. This includes the language, technical tools and knowledge available to developers, as well as the entire set of procedures, standards, documentation, and organizational characteristics that influence the development, release, modification, and support of the affected software system. When a significant monetary loan is extended, there is a process of due diligence considering the nature and terms of the loan, the collateral, and the borrower's financial condition. One element of this is the borrower's credit score or credit rating. Something analogous to this rating could be used to establish how Environment impacts technical debt Value.

Origin reflects the intent leading to the incurred debt. The origin of debt may be Intentional, reflecting a deliberate decision to acquire the debt, or Unintentional, reflecting debt acquired without consideration of the tradeoffs involved. Most such decisions are made by developers, though some may be due to the organization.

Impact denotes the scope of the physical changes necessary to remove the debt. It can be inferred that this would include both the specific changes made to system artifacts such as source code files, configuration files, and documentation, and the changes necessary for installation of the software in its production environment. The aggregate set of changes to remove the debt reflects the Impact of that debt. It seems practical to express Impact in the units of Value; in this, it appears to be interrelated with Environment.

Once this 'distance' from the ideal has been calculated, it must then be presented in a manner understandable to developers and management. The next section investigates how these properties may be measured.
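Before turning to those measures, the loan metaphor itself can be made concrete. The sketch below is a hypothetical illustration, not a model proposed in the surveyed literature: it expresses principal and interest in programmer-hours and converts them to currency at an organization-specific rate, with all names and numbers invented for the example.

```python
from dataclasses import dataclass

@dataclass
class DebtItem:
    """One incurred piece of technical debt, expressed in programmer-hours."""
    principal_hours: float           # effort to conform the code to the 'ideal' today
    interest_hours_per_month: float  # extra maintenance effort for each month the debt remains

def cost_to_carry_and_repay(item: DebtItem, months_carried: int, hourly_rate: float) -> float:
    """Currency cost of carrying the debt for `months_carried` months and then repaying it."""
    interest = item.interest_hours_per_month * months_carried
    return (item.principal_hours + interest) * hourly_rate

# Illustrative values only: a 40-hour shortcut that adds 3 hours/month of maintenance drag.
shortcut = DebtItem(principal_hours=40, interest_hours_per_month=3)
print(cost_to_carry_and_repay(shortcut, months_carried=0, hourly_rate=100))   # 4000.0: repay now
print(cost_to_carry_and_repay(shortcut, months_carried=12, hourly_rate=100))  # 7600.0: repay in a year
```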

3 Measures of Technical Debt

"The software field cannot hope to have its Kepler or its Newton until it has had its army of Tycho Brahes, carefully preparing the well-defined observational data from which a deeper set of scientific insights may be derived." – Barry Boehm [3]

Measurement provides the foundation of data upon which analysis, theories, and predictions can be built. In examining the properties of technical debt, several units of measurement have been observed. This section surveys potential units of measure for each of these properties. The definitions here lie in the vague middle ground between concrete examples and general axioms, but the goal is to identify the kinds of measures suitable for each property. As a place to start, we will examine measures of software, project and people. Measures of Value are discussed first as a foundation for the discussion of other properties and measures.

3.1 Metrics for Value

If Value is defined as "the economic difference between the system as it is and the 'ideal' system", it is necessary to account for both the expense saved and the benefit gained by not conforming the system to its ideal. There are at least two aspects to this. The expense can be measured in terms of human effort on the part of the organization, which can be translated to currency units. As mentioned earlier, the primary driver of internal costs is programmer time. There are many other expense components, including hardware and the support structures required for developers, managers, executives and other employees, though these cannot necessarily be measured in programmer time, as other organizational roles may be involved. Benefits can also be measured in these terms, but this does not account for the benefit obtained by the use of the software, as this use often goes beyond the bounds of the development organization. Expenses and benefits can both be expressed in currency units, but they have different sources. This paper restricts itself to measures of debt internal to the development organization.

Software Metrics is the study of properties of the software itself. The products of a software development process may include various types of documentation, operating procedures, configuration files, binaries, and source code. The primary output typically measured is source code, and a range of metrics has been defined to measure its properties. We will present some of the most commonly used source code metrics now in use, but there is an extensive body of literature on metrics [4, 5]. To ground our presentation of metrics, we use the list of metrics used to evaluate the linux kernel in a recent study of its evolution [6]: lines of code (LOC, reported both with and without comments), number of modules (directories, files, functions), McCabe's Cyclomatic Complexity, Halstead volume, Oman's Maintainability Index, files and directories handled (added, deleted, or modified), and the rate of release of new versions.

The most common unit of measure for software is the line of code (LOC). It is also one of the most contentious. Should comment lines be counted? Blank lines? Isn't a line of APL more capable than a line of Assembler? It is common to exclude blank lines and comments and to indicate the language when reporting LOC, but assumptions must be carefully identified when reporting or reading LOC figures. The controversy only begins there: do simple lines count in the same way as complicated lines? Some organizations look at LOC as a measure of programmer productivity, but this penalizes, for example, programmers who remove unneeded code. It is also necessary to recognize that each additional line of code increases both the necessary effort and the likelihood of defects [7].

Modularity is a primary concern in software metrics. Modules here are taken to be distinct sections of the system, in this case distinguished by file system and language declaration boundaries. Coupling measures how many connections a module has with other modules, sometimes referred to as 'fan-in' or 'afferent' coupling to indicate the number of modules a module is depended on by, and 'fan-out' or 'efferent' coupling to indicate the number of modules a module depends on. Cohesion measures how closely interdependent modules are physically located. The evolution study identifies coupling as a worthwhile measure that was excluded because of difficulties in its calculation.

Cyclomatic complexity [8] counts branches (ifs, loops, switch) and distinct conditions to measure the number of distinct paths through a piece of code, indicating the complexity of its logic. Halstead measures, such as Halstead volume, are built upon a count of total operators (operations) and operands (data) and total unique operators and operands within the module being developed [9], measuring complexity, difficulty, and effort.
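As a rough illustration of how the simplest of these source metrics might be collected, the sketch below counts non-blank, non-comment lines and approximates cyclomatic complexity for Python source by counting branch keywords. It is an assumption-laden toy, not the tooling used in the studies cited above, and the keyword-counting shortcut only approximates the graph-based definition.

```python
import re

BRANCH_KEYWORDS = re.compile(r"\b(if|elif|for|while|case|except|and|or)\b")

def loc_and_complexity(source: str) -> tuple[int, int]:
    """Return (non-blank, non-comment LOC; approximate cyclomatic complexity).

    Complexity is approximated as 1 + number of branch points, a common
    textual shortcut rather than a control-flow-graph calculation.
    """
    loc = 0
    complexity = 1
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # exclude blank lines and comments, as the text recommends
        loc += 1
        complexity += len(BRANCH_KEYWORDS.findall(stripped))
    return loc, complexity

example = """
# toy module
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
print(loc_and_complexity(example))  # (6, 3)
```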

Oman's Maintainability Index is a regression model built upon a set of software projects [10] that attempts to predict maintenance effort, reflecting the strength of the relationships between the variables. Files and directories handled (added, deleted, or modified) and the rate of release of new versions are not defined in the literature, but these are clearly measures of size; they were measured and reported to give context to the data presented in the study.

One more type of metric should be mentioned for its application to technical debt. Clone detection is a developing research area [11] that correlates closely to a primary practitioner concern, removal of duplication. This is sometimes referred to as the 'DRY Principle', 'Don't Repeat Yourself', stated more formally as 'Every piece of information must have a single, unambiguous, authoritative representation within a system' [12]. Intuitively, given two pieces of code having the same functionality, the smaller one is commonly seen as superior, and the existence of duplicate code (clones) suggests that simplification is possible and that maintenance effort may be increased, as a change in one clone may be required in the other(s). There is no direct measure of duplication/cloning. Recent related work focuses on identifying changes at a semantic level rather than at a textual level [13], aiding comprehension and review of code changes.

There have been hundreds of metrics defined for measuring source code [4, 5], and this has generated a great deal of discussion about when, where and how these metrics are valid and useful. This is a significant enough issue that a number of frameworks for summarizing metrics have been developed and discussed. One of the most significant of these frameworks establishes a set of concepts defined in measurement-theoretical terms, namely size, length, complexity, cohesion and coupling [14]. These concepts are generically defined, so that they can be used with artifacts beyond source code. Size and length are roughly analogous to linear measures such as volume, weight and depth, and to LOC; complexity, cohesion and coupling are generalizations that include the analogous metrics mentioned earlier. It is possible to map the evolution study metrics on to this concept framework, permitting normalization and comparison of values for these metrics against other studies that can be defined and normalized in this way. While the metrics presented above were selected from a single study, they are representative of the metrics used in empirical validation studies, and so they seem to represent good candidates for application to measures of technical debt.
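The intuition behind textual clone detection can be sketched in a few lines: hash normalized lines in fixed-size windows and report windows that occur more than once. Real clone detectors [11] work on tokens or syntax trees, so this is only a toy under that simplifying assumption, and the file contents are invented for the example.

```python
from collections import defaultdict

def normalize(line: str) -> str:
    """Strip whitespace so trivially reformatted copies still match."""
    return "".join(line.split())

def find_textual_clones(files: dict[str, str], window: int = 3) -> dict[tuple, list]:
    """Map each repeated `window`-line chunk to the (file, line) positions where it occurs."""
    seen = defaultdict(list)
    for name, text in files.items():
        lines = [normalize(l) for l in text.splitlines() if l.strip()]
        for i in range(len(lines) - window + 1):
            chunk = tuple(lines[i:i + window])
            seen[chunk].append((name, i + 1))
    return {chunk: spots for chunk, spots in seen.items() if len(spots) > 1}

# Invented example: the same three lines pasted into two files.
files = {
    "billing.py": "total = 0\nfor item in cart:\n    total += item.price\nprint(total)",
    "report.py":  "total = 0\nfor item in cart:\n    total += item.price\nsend(total)",
}
for chunk, spots in find_textual_clones(files).items():
    print(spots)   # [('billing.py', 1), ('report.py', 1)]
```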
3.2 Project Metrics

If you visit enough auto mechanics' shops, you will eventually see a sign hanging on the wall that says something like 'Good, Fast, Cheap: Pick any two'. Most projects, most of the time, are evaluated on what will be delivered (scope), when it will be delivered (schedule), and how much it will cost (budget). This combination is sometimes referred to as the 'Iron Triangle'. It has played enough of a role in various kinds of projects over time that it has become part of the Project Management Body of Knowledge (PMBOK) [15], though the terms used there are 'Scope', 'Schedule', and 'Budget'. For software projects, the scope can be viewed as the aggregate of the functional and non-functional requirements defined for the project. Schedule must consider the end date as well as day-by-day staffing requirements and task completion dependencies, depending on the complexity of what is being provided and the number of people involved. The budget can be measured in terms of the number and roles of people required, as well as any additional resources not already available to the project organization, for example software, hardware or consumables used by the project. These each may be hierarchical relationships. Relating these quantities to technical debt again requires reference to the Impact and Environment properties of technical debt. When collecting project management data for software engineering purposes, it is important to identify the time spent and how the time is spent from actual programming environments in as timely and unobtrusive a manner as is possible, while reviewing data collection with the people from whom it is collected in order to find and remove sources of confusion [16]. Environment must be considered as well for correlating this data with technical debt properties.

3.3 People Metrics

Underlying all software considerations is the one constant in all software development: human effort. Of course people, even programmers, are all unique, and machine-generated code exists, but ultimately the size, complexity and rates of change possible for software is bounded by human ability.

Several models of program comprehension have been built, though validation in complex professional development environments remains an open problem [17]. One research program has defined a set of common programming constructs and assessed the difficulty with which they are comprehended and applied [18]. Further research could support the notion that a fundamental measure of software, particularly of its technical debt, is our own ability to comprehend it.

One of the most common approaches to improving the quality and performance of organizational tasks is to measure the productivity of the people involved in performing those tasks. It turns out that this is also one of the worst possible approaches for software development tasks, if only because people will take the metrics into account when they act and report on their actions [19]. For significant intellectual tasks, the need to consider performance diminishes the ability to perform those tasks. Balancing the project's and organization's need for certainty and quality with the individual's need for focus and intrinsic motivation is one of the most challenging aspects of performance measurement.

It has been observed in small experiments that maintenance, particularly corrective maintenance, is more expensive than new development [20]. A study of CMM level 5 organizations found that the primary factor affecting effort, software quality and cycle time, once development was normalized to CMM level 5 standards, was software size (measured in SLOC) [7]. A separate study, holding the requirement specification constant but varying the development company, suggests that each organization has its own mean level for effort, software quality, and cycle time [22]. These studies seem to lend support to the notions of Environment and Impact.

3.4 Measuring non-Value properties

Debt Accretion accounts for the buildup of debt over time. Making decisions that increase the amount of maintenance required in the future is one means of achieving debt accretion. It appears that the units chosen for Value can be applied in measuring Debt Accretion.

Present Value, borrowing from the financial term, reflects the value at a given moment of a debt by including consideration of the debt accretion should it not be paid off and the uncertainty involved in keeping or paying off that debt. This concept provides the means for bringing future projections into the present for comparison and decision-making. It appears that the units chosen for Value can be applied in measuring Present Value.

Impact denotes the collection of tasks required to remove a piece of debt from the system, where the desired change conforms the software to the ideal embodied in a given Environment. It appears that the expense portion of a given technical debt can be viewed as the Impact of making a desired change in a given Environment. This idea is analogous to the term Effort often used in the literature. Measuring Impact requires a list of tuples representing the tasks, components, and personnel involved, with Value representing the identified unit(s) of debt, multiplied by the effort required to accomplish each task (and possibly converted into costs through multiplication by currency/effort rates). Each change has a Value component.

The nature of Impact ties it closely to Environment. To illustrate for a typical piece of software in a typical organization, a desired change may require programmer time to comprehend, compile and unit test the change in the development environment; quality assurance time to move the change to a quality assurance environment, regression test it, and move it into production on passing regression tests; and management time to document, facilitate and report the change. Depending on the organization, the Environment may include a single person and a machine or two, or it could involve three or more people and dozens or even hundreds of machines. It seems plausible that the more complicated the Environment, the greater the Impact will be when attempting to reduce debt, leading to a greater likelihood of Debt Accretion. The specifics matter a great deal and they vary from organization to organization, so it seems likely that an organization-specific model of the Environment is necessary for describing and quantifying the Impact of technical debt. At the same time, there are patterns of typical elements and how they are used that have been documented in the literature [21]. Defining an organization's specifics in terms of a published model supports tool development, data collection and analysis, as well as reducing a given organization's effort in building such a definition.

Origin is the simplest property to measure, as its intentionality can be found in the existence of records indicating the intent to make a change in a particular way. Where records exist indicating that a deliberate decision was made to incur technical debt, the origin is intentional. In all other cases the origin is unintentional. Simple does not mean easy: depending on the organization, many such decisions are made without record. Greater Visibility of technical debt and the tradeoffs involved could encourage better record keeping on this point.

Visibility allows presentation of the properties and measures described to this point, their values for each of the properties, and their relationships. This requires both the definition of measures and the ability to measure how often the measures are used. Visibility could be achieved by presenting a detailed list of all elements of Impact together with the Environment 'credit rating'.
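Pulling the Impact and Present Value descriptions above together, the following sketch treats Impact as a sum over (task, effort, rate) tuples and Present Value as discounted, probability-weighted interest avoided by repaying now. The discounting formula, the probability weighting and all numbers are assumptions made for illustration; neither the sources surveyed here nor [2] prescribe a particular formula.

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    effort_hours: float
    hourly_rate: float   # currency per hour for the role performing the task

def impact_cost(tasks: list[Task]) -> float:
    """Impact: aggregate cost of the tasks needed to remove a piece of debt."""
    return sum(t.effort_hours * t.hourly_rate for t in tasks)

def present_value_of_repaying(tasks: list[Task], monthly_interest: float,
                              months: int, p_interest: float,
                              discount_rate: float = 0.01) -> float:
    """Expected, discounted cost avoided by repaying now rather than carrying the debt.

    monthly_interest: extra maintenance cost per month if the debt is kept.
    p_interest: probability that the interest is actually incurred (uncertainty).
    discount_rate: monthly discount rate used to bring future costs to the present.
    """
    avoided = sum(p_interest * monthly_interest / (1 + discount_rate) ** m
                  for m in range(1, months + 1))
    return avoided - impact_cost(tasks)

tasks = [Task("comprehend and change code", 16, 100),
         Task("QA and regression test", 8, 80),
         Task("document and deploy", 4, 90)]
print(impact_cost(tasks))                                    # 2600.0
print(present_value_of_repaying(tasks, monthly_interest=400,
                                months=12, p_interest=0.7))  # positive: repaying now pays off
```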

4 Uses of Technical Debt

Technical debt-like measures have been applied to refactoring decisions [1], project and product quality assessment [1, 25], development speed [7], scheduling of feature development [30], effort estimation for development [2, 23] and maintenance [20], and resource selection [31]. All of these uses are part of the domain of software maintenance. If metrics measure the value of technical debt, software maintenance speaks to its impact.

Software maintenance examines the factors involved in making changes to software systems over time. The ISO standard for software maintenance characterizes maintenance as 'corrective', 'adaptive', 'preventative', or 'perfective' [24]. Corrective maintenance is the modification of software to correct a discovered problem after its release. Adaptive maintenance is the modification of software to allow it to conform to changes in its hardware or software environment. Preventative maintenance is the modification of software after its release to prevent a problem from occurring. Perfective maintenance is the modification of software to correct latent faults, whether they affect program behavior, documentation or maintainability. Repayment of technical debt is a form of perfective maintenance, while the interest that accrues may be from any form of maintenance. It has been observed that changing existing code is more expensive and difficult than adding or removing code [20].

There are some high-level trends that have been observed in patterns of software maintenance that can be used to inform discussions of technical debt. Meir Lehman proposed and refined laws of Software Evolution, beginning in the 1970's after having observed patterns of behavior and change in the course of large systems development, primarily during the development of OS/360 [28]. Space does not permit a full examination, but one relevant law is presented, the second law of software evolution: "The entropy of a system (its [sic] unstructuredness) increases with time, unless specific work is executed to maintain or reduce it." It might be said that technical debt is a measure of this kind of entropy. A recent study used the evolution of linux to evaluate the laws of software evolution [6]. Broadly, there was confirmation of certain theses (e.g. continuous growth and change) and a lack of confirmation of others (e.g. self-regulation and feedback). In terms of entropy, while linux increases in overall size and complexity, the average complexity of each function is actually declining. There is no direct statistical support here for this being due to 'specific work' being 'executed to maintain or reduce [entropy]', but linux' continued adoption appears to correlate with Lehman's second law at least anecdotally.

David Parnas discussed the problem of Software Aging in terms of how keeping systems well modularized and well documented should be useful in lengthening their lifespan [27], and goes into some detail on how this level of modularity supports greater management choice and economic value for software projects. This parallels the software metrics community's concern with coupling and cohesion. It suggests both the value of designing to accommodate change and the necessity of clear documentation in making the continued use of aging software possible. There are also a number of studies supporting the notion that various design quality attributes support reduced maintenance effort, in both student and professional development [25, 26]. Caution must be taken here: other studies show that simple measures of size explain most of the effort involved in initial system development [23].

In this sense technical debt is both an application and extension of Barry Boehm's Software Engineering Economics [3], where he defines economics as "the study of how people make decisions in resource-limited situations" and summarizes the field of software cost estimation and its analytical frameworks. Software cost estimation has become an important and extensively studied technique [29], but it is typically set in the context of executive decisions outside of the system development process. The chief reason for measures of technical debt is to give development teams a means for optimizing their efforts in achieving their goals. Development team members make decisions about technical debt and its repayment during the development and maintenance of systems, including the cost of knowingly releasing software that will later require changes, something Boehm refers to as the "internal dynamics of a software project" [3]. It has been observed that there has been little study of the application of cost estimation models in industry, even Boehm's COCOMO and COCOMO II, and that there is a significant dependence upon expert judgment for making day-to-day technical decisions [29].

Three recent frameworks extend work done in software cost estimation in meaningful ways. The Incremental Funding Model (IFM) considers sequencing of software feature development in light of the benefit derived from release of a feature at a point in time, compared to alternative release schedules for a given set of features [30]. It introduces the notion of a 'Minimum Marketable Feature' (MMF), the smallest unit of software that a customer would find valuable.
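To illustrate the kind of comparison the IFM supports, the sketch below computes the net present value of two alternative orderings of the same MMFs under an assumed per-period discount rate. The cash-flow model (build one MMF per period, then earn its revenue every following period) and all numbers are simplifying assumptions for illustration, not the IFM algorithm as published [30].

```python
def npv_of_schedule(mmfs: list[dict], periods: int, rate: float = 0.02) -> float:
    """NPV of developing one MMF per period in the given order.

    Each MMF has a one-period development 'cost' and then earns 'revenue'
    in every subsequent period up to the planning horizon.
    """
    npv = 0.0
    for slot, mmf in enumerate(mmfs, start=1):
        npv -= mmf["cost"] / (1 + rate) ** slot                # pay to build it in its slot
        for p in range(slot + 1, periods + 1):                 # earn revenue afterwards
            npv += mmf["revenue"] / (1 + rate) ** p
    return npv

# Invented MMFs: B earns more per period than A but costs more to build.
a = {"name": "A", "cost": 50, "revenue": 10}
b = {"name": "B", "cost": 80, "revenue": 25}
print(round(npv_of_schedule([a, b], periods=12), 1))  # build A first
print(round(npv_of_schedule([b, a], periods=12), 1))  # build B first: higher NPV here
```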

A second framework for making resource decisions in software projects builds a 'causal model' using a set of Bayesian Nets incorporating statistical knowledge of software engineering factors in order to assist managerial decision making [31]. Currently being evaluated on the MODIST project, it is distinguished, in part, from typical regression models for software cost estimation by the ability to update data at any point in the model to reflect the situation being modeled, based on a conceptual model (or models) of the factors involved. The goal of the framework is to support project-level decision-making. This appears to be a promising conceptual framework for managing the many dimensions of technical debt and for building organization-specific models.

Finally, the literature survey discovered one framework targeted explicitly at measuring technical debt [32]. This framework calls for the creation of a "technical debt item" record for each discovered piece of technical debt. Each item is assigned a description, a date recorded, a person responsible, a component location, and a type, which reflects the project phase the debt is incurred in. Each item has attributes of principal, interest amount and interest probability, each assigned an ordinal value of 'low', 'medium', or 'high' to reflect a coarse-grained notion of the item's debt impact. These estimated values are then refined through the use of historical data from the organization and the project as it proceeds, to provide reference data for future projects and to validate the proposed framework. Internal tracking of these quantities is likely to take the form of a tree or graph, which supports interaction between the development team and higher-level management as they translate software concerns into dollars, dates, and decisions.
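A data-structure sketch of what such a technical debt item record might look like follows. The field names paraphrase the attributes described above, while the class itself, its defaults and the example values are my assumptions rather than the schema given in [32].

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Level(Enum):          # coarse-grained ordinal scale described by the framework
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class TechnicalDebtItem:
    description: str
    date_recorded: date
    responsible: str
    component: str                   # location of the debt in the system
    debt_type: str                   # project phase in which the debt was incurred
    principal: Level = Level.MEDIUM
    interest_amount: Level = Level.MEDIUM
    interest_probability: Level = Level.MEDIUM
    history: list = field(default_factory=list)   # refinements from historical data

item = TechnicalDebtItem(
    description="Duplicated billing logic awaiting consolidation",
    date_recorded=date(2011, 3, 1),
    responsible="pjm",
    component="billing/",
    debt_type="design",
    interest_amount=Level.HIGH,
)
print(item.interest_amount.name)   # HIGH
```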
5 Limitations

The papers examined are a tiny proportion of the papers published on the topics of software quality, maintenance, cost estimation, and software metrics. While the aim of the paper is to serve as an introduction to the literature around technical debt, it cannot hope to be a thorough survey of the field, given the wide range of topics addressed. Even within the span of the papers surveyed there is too wide a range of dimensions and values, across too wide a span of concerns, to be hopeful of being precise.

6 Discussion and Summary

Some broad observations can be made at this point about technical debt. Technical debt is an internal software quality measurement, but it is accessible to the project management level of discussion in a project, rather than being limited to the programmer's view. Technical debt is a prediction about the future, in the sense that its presence is a measure of expected future maintenance effort. For this reason, the literature on measures of internal software quality in software metrics and software maintenance can be consulted. Each of the described frameworks is management-oriented. This touches on the measures of scope, time, and budget used by project management to make decisions, on aspects of both project management and software project management, and on the notion of recording past experience for analysis and prediction. It appears economically valuable to bring the technical attributes of software to the level of management attention for use in making project decisions.

There is an ocean of data available for making technical decisions in software projects. Over the last decade or so the typical development environment has grown beyond the editor, compiler and linker to include common use of systems for version control and bug tracking, as well as the increased use of unit testing and acceptance testing frameworks, much as the editor, compiler and linker have merged into an 'Integrated Development Environment' (IDE). The availability of data from these support systems permits richer metric and trend analysis than was previously possible, as well as better visibility into the software development process. I would conjecture that, for the most part, the integration of these systems will continue, including links between each subsystem that permit a holistic view of the technical debt characteristics of software components under development. A key to this unification would be to make the link between this software component development data and project management systems; the resulting traceability would support higher quality software with lower effort.

And yet, most teams in most places do not make use of this kind of information. It may be a lack of awareness, it may be the presence of internal politics, or it may be that there is insufficient value to most teams to pursue the effort involved in collecting and monitoring this kind of data. Visibility of this data has both technical and social impacts. Clearly this has to be carefully managed, in order to avoid team and organizational dysfunctions, key factors in managing technical debt.

Deriving accurate, useful measures of technical debt that reflect the inherent complexity of software development in terms simple enough to be incorporated into project management and executive discussions is a significant challenge, but attempts will continue to be made, as effective measures would be economically valuable.

7 References

[1] W. Cunningham. The WyCash portfolio management system. Addendum to the Proc. Conf. on Object-Oriented Programming Systems, Languages, and Applications, 1992, pp. 29-30.
[2] N. Brown et al. Managing technical debt in software-reliant systems. Proc. FSE/SDP Workshop on Future of Software Engineering Research, 2010, pp. 47-52.
[3] B. Boehm. Software engineering economics. IEEE Trans. Software Eng. SE-10(1), 1984, pp. 4-21.
[4] B. Kitchenham. What's up with software metrics? – A preliminary mapping study. Journal of Systems and Software 83(1), 2010.
[5] A. Meneely, B. Smith, and L. Williams. Software metrics validation criteria: A systematic literature review. ACM Trans. on Software Engineering and Methodology, to appear.
[6] A. Israeli and D. Feitelson. The linux kernel as a case study in software evolution. Journal of Systems and Software 83(3), 2010, pp. 485-501.
[7] M. Agrawal and K. Chari. Software effort, quality, and cycle time: A study of CMM level 5 projects. IEEE Trans. Software Eng. 33(3), 2007, pp. 145-156.
[8] T. McCabe. A complexity measure. IEEE Trans. Software Eng. 2(4), 1976, pp. 308-320.
[9] M. Halstead. Elements of Software Science. Elsevier Science Inc., 1977.
[10] J. Hagemeister and P. Oman. Construction and testing of polynomials predicting software maintainability. Journal of Systems and Software 24(3), 1994, pp. 251-266.
[11] M. Kim, V. Sazawal, D. Notkin, and G. Murphy. An empirical study of code clone genealogies. ACM SIGSOFT Software Engineering Notes 30(5), 2005, pp. 187-196.
[12] A. Hunt and D. Thomas. The Pragmatic Programmer. Addison-Wesley, 1999.
[13] M. Kim and D. Notkin. Discovering and representing systematic code changes. Proc. 31st Intl. Conf. on Software Engineering, 2009, pp. 309-319.
[14] L. Briand, S. Morasca, and V. Basili. Property-based software engineering measurement. IEEE Trans. Software Eng. 22(1), 1996, pp. 68-86.
[15] Project Management Body of Knowledge (PMBOK Guide), 4th Ed. Project Management Institute, 2008.
[16] V. Basili and D. Weiss. A methodology for collecting valid software engineering data. IEEE Trans. Software Eng. SE-10(6), 1984, pp. 728-738.
[17] A. Von Mayrhauser and A. Vans. Program comprehension during software maintenance and evolution. Computer 28(8), 1995, pp. 44-55.
[18] Y. Wang. On the cognitive complexity of software and its quantification and formal measurement. International Journal of Software Science and Computational Intelligence 1(2), 2009, pp. 31-53.
[19] R. Austin. Measuring and Monitoring Performance in Organizations. Dorset House, 1996.
[20] V. Nguyen, B. Boehm, and P. Danphitsanuphan. Assessing and estimating corrective, enhancive, and reductive maintenance tasks: A controlled experiment. Proc. 16th Asia-Pacific Software Engineering Conference (APSEC 2009), 2009, pp. 381-388.
[21] N. Chapin, J. Hale, K. Khan, J. Ramil, and W. Tan. Types of software evolution and software maintenance. Journal of Software Maintenance and Evolution: Research and Practice 13(1), 2001, pp. 3-30.
[22] B. Anda, D. Sjoberg, and A. Mockus. Variability and reproducibility in software engineering: A study of four companies that developed the same system. IEEE Trans. Software Eng. 35(3), 2009, pp. 407-429.
[23] L. Briand and J. Wust. Modeling development effort in object-oriented systems using design properties. IEEE Trans. Software Eng. 27(11), 2001, pp. 963-986.
[24] ISO/IEEE 14764-2006. Software Engineering – Software Life Cycle Processes – Maintenance. 2006.
[25] L. Briand, C. Bunse, and J. Daly. A controlled experiment for evaluating quality guidelines on the maintainability of object-oriented designs. IEEE Trans. Software Eng. 27(6), 2001, pp. 513-530.
[26] R. Banker, S. Datar, C. Kemerer, and D. Zweig. Software complexity and maintenance costs. Communications of the ACM 36(11), 1993, pp. 81-94.
[27] D. Parnas. Software aging. Proc. 16th Intl. Conf. on Software Engineering, 1994, pp. 279-287.
[28] L. Belady and M. Lehman. A model of large program development. IBM Systems Journal 15(3), 1976, pp. 225-252.
[29] M. Jorgensen and M. Shepperd. A systematic review of software development cost estimation studies. IEEE Trans. Software Eng. 33(1), 2007, pp. 33-53.
[30] M. Denne and J. Cleland-Huang. The incremental funding method: Data-driven software development. IEEE Software 21(3), 2004, pp. 39-47.
[31] N. Fenton, W. Marsh, M. Neil, P. Cates, S. Forey, and M. Tailor. Making resource decisions for software projects. Proc. 26th Intl. Conf. on Software Engineering, 2004, pp. 397-406.
[32] C. Seaman and Y. Guo. Measuring and monitoring technical debt. Advances in Computers 82, 2011, pp. 25-46.
