

A large body of literature has appeared over the past three or four decades on how developers can measure various aspects of software development and use, from the productivity of the programmers coding the software to the satisfaction of the ultimate end users applying it to their business problems. Here we are primarily concerned with the quality of the software end product as seen from the end user's point of view. In any scientific measurement effort, you must balance the sensitivity and the selectivity of the measures employed; some metrics are broader than others. Although much of the software metrics technology used in the past was applied downstream, the overall trend in the field is to push measurement methods and models back upstream to the design phase, and even to measurement of the architecture itself. The issue in measuring software performance and quality is clearly its complexity as compared even to the computer hardware on which it runs. Managing complexity and finding significant surrogate indicators of program complexity must go beyond merely estimating the number of lines of code the program is expected to require.

Historically, software quality metrics have been the measurement of exactly their opposite, that is, the frequency of software defects or bugs. The inference was, of course, that quality in software was the absence of bugs. So, for example, measures of error density per thousand lines of code discovered per year or per release were used. Lower values of these measures implied higher build or release quality. A density of two bugs per 1,000 lines of code (LOC) discovered per year was considered pretty good, but this is a very long way from today's Six Sigma goals. We will start this article by reviewing some of the leading historical quality models and metrics, to establish the state of the art in software metrics today and to develop a baseline on which we can build a true set of upstream quality metrics for robust software architecture.

Perhaps at this point we should attempt to settle on a definition of software architecture as well. Most of the leading writers on this topic do not define their subject term, assuming that the reader will construct an intuitive working definition on the metaphor of computer architecture or even its earlier archetype, building architecture. As you will see, almost everyone does! There is no universally accepted definition of software architecture, but one that seems very promising has been proposed by Shaw and Garlan: "Abstractly, software architecture involves the description of elements from which systems are built, interactions among those elements, patterns that guide their composition, and constraints on those patterns. In general, a particular system is defined in terms of a collection of components and interactions among those components."1 This definition follows a straightforward inductive path from that of building architecture, through computer architecture, through system architecture, to software architecture. And, of course, the key word in this definition, for software at least, is …
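The defect-density arithmetic described above can be sketched in a few lines. This is a minimal illustration; the function name and the sample numbers are our own, not taken from any cited standard.

```python
def defect_density_per_kloc(defects_found: int, lines_of_code: int) -> float:
    """Return defects per thousand lines of code (KLOC)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defects_found / (lines_of_code / 1000.0)

# The historical "pretty good" level mentioned above:
# 100 defects discovered in a year against a 50,000-LOC release.
density = defect_density_per_kloc(100, 50_000)
print(density)  # 2.0 defects per KLOC
```

Note that the measure says nothing about defect severity or about latent defects not yet discovered, which is one reason the text treats it as only a surrogate for quality.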

Having chosen a definition of software architecture, we are free to talk about measuring the quality of that architecture and, ultimately, of its implementations in the form of running computer programs. But first, we will review some classical software quality metrics to see what we must surrender to establish a new metric order for software.


Software quality is a multidimensional concept. The multiple professional views of product quality may be very different from popular or nonspecialist views. Moreover, they have levels of abstraction beyond even the viewpoints of the developer or user. Crosby, among many others, has defined software quality as conformance to specification. However, very few end users will agree that a program that perfectly implements a flawed specification is a quality product. Of course, when we talk about software architecture, we are talking about a design stage well upstream from the program's specification. Some firms focus on process quality rather than product quality. Although it is true that a flawed process is unlikely to produce a quality product, our focus here is entirely on software product quality, from architectural conception to end use.

Years ago Juran3 proposed a generic definition of quality. He said products must possess multiple elements of fitness for use. Two of his parameters of interest for software products were quality of design and quality of conformance. These separate design from implementation and may even accommodate the differing viewpoints of developer and user in each area.2

Two leading firms that have placed a great deal of importance on software quality are IBM and Hewlett-Packard. IBM measures user satisfaction in eight dimensions for quality, as well as overall user satisfaction: capability or functionality, usability, performance, reliability, installability, maintainability, documentation, and availability (see Table 3.1). Some of these factors conflict with each other, and some support each other. For example, usability and performance may conflict, as may reliability and capability, or performance and capability. IBM has user evaluations down to a science. We recently participated in an IBM Middleware product study of only the usability dimension; it was five pages of questions plus a two-hour interview with a specialist consultant. Hewlett-Packard uses five Juran quality parameters: functionality, usability, reliability, performance, and serviceability. Other computer and software vendor firms may use more or fewer quality parameters and may even weight them differently for different kinds of software, or for the same software in different vertical markets.

Table 3.1. IBM's Measures of User Satisfaction
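As a rough illustration of how a multidimensional measure like IBM's might be rolled up into a single number, the sketch below combines the eight dimensions into one weighted satisfaction index. The equal weights and the 1-to-5 survey scale are our assumptions; the text does not say how IBM actually aggregates or weights its dimensions.

```python
# The eight IBM user-satisfaction dimensions named above.
DIMENSIONS = [
    "capability", "usability", "performance", "reliability",
    "installability", "maintainability", "documentation", "availability",
]

def overall_satisfaction(scores, weights=None):
    """Weighted average of per-dimension scores (assumed 1-5 survey scale)."""
    if weights is None:  # assumption: equal weighting across dimensions
        weights = {d: 1.0 / len(DIMENSIONS) for d in DIMENSIONS}
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[d] * weights[d] for d in DIMENSIONS)

scores = {d: 4.0 for d in DIMENSIONS}
scores["usability"] = 2.0  # one weak dimension drags the overall index down
print(overall_satisfaction(scores))  # 3.75
```

A vendor weighting the dimensions differently for different vertical markets, as the text describes, would simply pass a different `weights` dictionary.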


The Naval Air Systems Command coined the term Total Quality Management (TQM) in 1985 to describe its approach to quality improvement. Since then, TQM has taken on many meanings across the world. Simply put, it is a management approach to long-term success that is attained through a focus on customer satisfaction. This approach requires the creation of a quality culture in the organization to improve processes, products, and services. TQM methodology is based on the teachings of such quality gurus as Philip B. Crosby, W. Edwards Deming, Armand V. Feigenbaum, Kaoru Ishikawa, and Joseph M. Juran. In the 1980s and '90s, many quality gurus published specific methods for achieving TQM, patterned after the Japanese-style management approach to quality improvement, and the method was applied in government, industry, and even research universities. The Malcolm Baldrige Award in the United States and the ISO 9000 standards are legacies of the TQM movement, as is the Software Engineering Institute's (SEI's) Capability Maturity Model (CMM), in which organizational maturity level 5 represents the highest level of quality capability.4 In 2000, the SW-CMM was upgraded to Capability Maturity Model Integration (CMMI).

The implementation of TQM has many varieties, but the four essential characteristics of the TQM approach are as follows:

- Customer focus. The objective is to achieve total customer satisfaction, to "delight the customer." Customer focus includes studying customer needs and wants, gathering customer requirements, and measuring customer satisfaction.

- Process improvement. The objective is to reduce process variation and to achieve continuous process improvement of both business and product development processes.

- Quality culture. The objective is to create an organization-wide quality culture, including leadership, management commitment, total staff participation, and employee empowerment.

- Measurement and analysis. The objective is to drive continuous improvement in all quality parameters by a goal-oriented measurement system.

Total Quality Management made an enormous contribution to the development of enterprise applications software in the 1990s. Its introduction as an information technology initiative followed its successful application in manufacturing and service industries. It came to IT just in time for the redevelopment of all existing enterprise software for Y2K. The efforts of one of the authors to introduce TQM in the internal administrative services sector of research universities encountered token resistance from faculty oversight committees. They objected to the term "total" on the curious dogmatic grounds that nothing is really "total" in practice. He attempted to explain to a faculty IT oversight committee at the University of Pennsylvania that the name was merely a phrase to identify a commonly practiced worldwide methodology, but this didn't help much. As CIO, he persevered with a new information architecture, followed by (totally!) reengineering all administrative processes using TQM "delight-the-customer" measures. He also designed a (totally) new information system to meet the university's needs in the post-Y2K world (which began in 1996 in higher education, when the class of 2000 enrolled and their student loans were set up).5



In 1993 the IEEE published a standard for software quality metrics methodology that has since defined and led development in the field.6 It was intended as a more systematic approach for establishing quality requirements and for identifying, implementing, analyzing, and validating software quality metrics for software system development. It spans the development cycle in five steps, as shown in Table 3.2. Here we begin by summarizing this standard. A typical "catalog" of metrics in current use will be discussed later; at this point we merely want to present a gestalt for the IEEE recommended methodology.

Table 3.2. IEEE Software Quality Metrics Methodology

In the first step it is important to establish direct metrics with values as numerical targets to be met in the final product. The factors to be measured may vary from product to product, but it is critical to rank the factors by priority and assign a direct metric value as a quantitative requirement for each factor. There is no mystery at this point, because Voice of the Customer (VOC) and Quality Function Deployment (QFD) are the means available not only to determine the metrics and their target values, but also to prioritize them. For example, a direct final metric for the factor reliability could be faults per 1,000 lines of code (KLOC), with a target value of, say, one fault per 1,000 lines of code (LOC). (This level of quality is just 4.59 sigma; Six Sigma quality would be 3.4 faults per 1,000 KLOC.)

The second step is to identify the software quality metrics by decomposing each factor into subfactors, and those further into the metrics. For each validated metric at the metric level, a value should be assigned that will be achieved during development. Table 3.3 gives the IEEE's suggested paradigm for a description of the metrics set.

Table 3.3. IEEE Metric Set Description Paradigm7

Term            Description
Name            Name of the metric
Metric          Mathematical function to compute the metric
Cost            Cost of using the metric
Benefit         Benefit of using the metric
Impact          Can the metric be used to alter or stop the project?
Target value    Numerical value to be achieved to meet the requirement
Factors         Factors related to the metric
Tools           Tools to gather data, calculate the metric, and analyze the results
Application     How the metric is to be used
Data items      Input values needed to compute the metric
Computation     Steps involved in the computation
Interpretation  How to interpret the results of the computation
Considerations  Metric assumptions and appropriateness
Training        Training required to apply the metric
Example         An example of applying the metric
History         Projects that have used this metric and its validation history
References      List of projects used, project details, and so on

To implement the metrics in the metric set chosen for the project under design, the data to be collected must be determined, and assumptions about the flow of data must be clarified. Any tools to be employed are defined, and any organizations to be involved are described, as is any necessary training. It is also wise at this point to test the metrics on some known software to refine their use, sensitivity, and accuracy, and the cost of employing them.

Analyzing the metrics can help you identify any components of the developing system that appear to have unacceptable quality or that present development bottlenecks. Any components whose measured values deviate from their target values are noncompliant.

Validation of the metrics is a continuous process spanning multiple projects. If the metrics employed are to be useful, they must accurately indicate whether quality requirements have been achieved or are likely to be achieved during development. Furthermore, a metric must be revalidated every time it is used; confidence in a metric will improve over time as further usage experience is gained.
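The sigma arithmetic in the reliability example above can be checked directly. The sketch below converts a defect rate into a sigma level using the conventional 1.5-sigma long-term shift; treating each line of code as one defect opportunity is our simplifying assumption, not something the standard prescribes.

```python
from statistics import NormalDist

def sigma_level(defects: float, opportunities: float) -> float:
    """Long-term sigma level for a defect rate, using the customary
    1.5-sigma shift (so 3.4 defects per million opportunities ~ 6.0)."""
    dpmo = defects / opportunities * 1_000_000  # defects per million opportunities
    return 1.5 + NormalDist().inv_cdf(1 - dpmo / 1_000_000)

# One fault per KLOC, counting each line of code as an opportunity:
print(round(sigma_level(1, 1_000), 2))        # 4.59, as cited in the text
# The classic Six Sigma target of 3.4 defects per million:
print(round(sigma_level(3.4, 1_000_000), 2))  # 6.0
```

This makes the text's point concrete: moving a target from one fault per KLOC to Six Sigma is not a small tightening but a jump of roughly three hundredfold in defect rate.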