CPSC 547 – Software Measurement

Software Quality Metrics

Professor: Dr. Bing Cong

By:
Aakash Juneja
Jaskaran S. Gandhi
Phong Nguyen
Saba Yaghoobi Saray

Department of Computer Science
California State University Fullerton
Fall 2014

Table of Contents

Abstract
1. What is Software Quality?
   1.1 Why Measure Software?
   1.2 Software Metrics and Its Objectives
   1.3 The Need for Metrics
   1.4 Types of Metrics
2. Background
   2.1 Waterfall Model
   2.2 Spiral Model
3. Software Quality Metrics Classification
   Figure 1: Classification of Software Metrics
   3.1 Product Quality Metrics
      3.1.1 Scenario
      Table 1: Requirements Coverage Table
      3.1.2 Goal
      3.1.3 Product Risk Metrics
      Figure 2: Quality Risk Status (Early Test Execution)
      Figure 3: Quality Risk Status (Middle of Test Execution)
      Figure 4: Quality Risk Status (Late Test Execution)
   3.2 In-Process Quality Metrics
      Figure 5: Two Contrasting Defect Arrival Patterns during Testing
      Figure 6: Defect Removal by Phase for Two Products
      Figure 7: Phase Effectiveness of a Software Project
   3.3 Resources
      Table 2: Comparison between Resources
      Table 3: Capacity/Load
      Figure 8: Resource Distribution
4. Real World Examples of Software Metrics
   a. Motorola
   b. Hewlett-Packard
      Figure 9: Testing Metrics
      Figure 10: Defect Summary
      Figure 11: Time Data and Return
5. What Are the Advantages of Using Software Metrics?
   5.1 Limitations of Software Metrics
6. References

Abstract
Software development is the ongoing activity of designing and building program systems as they are retained and improved over their lifetimes. Software systems need to change repeatedly during their life cycle for numerous reasons: adding new features to satisfy user requirements, changing business needs, introducing new technologies, correcting faults, improving quality, and so on. The accumulation of changes over the evolution of a software system can lead to poor quality. It is therefore vital to observe how software quality changes, so that quality assurance activities can be scheduled appropriately. Software quality is, in essence, a field of study and practice that describes the necessary qualities of software products.

Software metrics can be used to examine the improvement of software system quality, and empirical evidence shows that a connection exists between metrics and software quality. A software metric is a tool for understanding the changing state of the code base and the progress of the project. The goal is to obtain objective, reproducible, and quantifiable measurements, which have many valuable applications in schedule and budget planning, cost estimation, quality assurance testing, software debugging, software performance optimization, and optimal personnel task assignments.

What makes quality engineering and management challenging is that the word quality itself is somewhat ambiguous, and this ambiguity can lead to mistakes in delicate processes. There are several reasons for this. First, each person can have a different understanding of the idea of quality based on their own perspective. Second, there are different levels of abstraction for the term: while debating about quality, one party may refer to its broad, inclusive meaning while the other refers to a narrower, concrete sense of it. Moreover, since we use the term quality in our day-to-day language, its colloquial meaning interferes with its use in professional settings, as the two are quite different.

In this paper we discuss the goals of software quality metrics, and we explain the classification of metrics in software quality, namely product metrics, project metrics, and in-process quality metrics. We also discuss the importance of collecting software engineering data: the accumulated data are the results of well-defined metrics and can help improve our system's quality.

1. What is Software Quality?

Our world uses software everywhere. Every profession depends on it, every cell phone needs software, and even each car depends on it. Without software, our world would fall apart. Because software is so widely used, its quality is important. But what is software quality? High-quality software meets the needs of customers while being reliable, well supported, maintainable, portable, and easily integrated with other tools; low-quality software is not satisfactory.

The most basic mark of software quality is a lack of bugs. For example, if the software contains too many defects, its requirements will go unfulfilled. It is important that our product have as few bugs as possible, because it will cost a great deal to fix them later in the software cycle. We express this in two ways: defect rate, which is the number of defects per million lines of code, and reliability, which is the number of failures per n hours of operation, or the mean time to failure.

Customer satisfaction is usually measured as the percentage of neutral and dissatisfied responses in customer satisfaction surveys. To decrease bias, methods such as blind surveys are frequently used. In addition to overall customer satisfaction with the software product, satisfaction with specific quality attributes is also estimated. IBM estimates satisfaction with its software products in terms of capability (functionality), usability, performance, reliability, installability, maintainability, documentation, service, and overall satisfaction. Hewlett-Packard focuses on functionality, usability, reliability, performance, and serviceability. Other corporations use related measurements of software customer satisfaction.
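The two quantitative measures just mentioned can be computed directly. The sketch below is illustrative only; the function names and sample values are our own, not from any standard:

```python
# Illustrative calculation of the two reliability measures mentioned above.
# Function names and the sample figures are hypothetical.

def defect_rate(defects, lines_of_code, per=1_000_000):
    """Defects per `per` lines of code (default: per million LOC)."""
    return defects * per / lines_of_code

def mean_time_to_failure(hours_of_operation, failures):
    """Average hours of operation between failures."""
    return hours_of_operation / failures

# A hypothetical 250,000-line release with 30 field defects,
# observed for 5,000 hours with 4 failures:
rate = defect_rate(30, 250_000)        # 120.0 defects per million LOC
mttf = mean_time_to_failure(5_000, 4)  # 1250.0 hours
print(rate, mttf)  # 120.0 1250.0
```

Either measure can then be compared release over release, which is exactly how the scenarios in Section 3.2 use it.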

Depending on the type of product and its customers, different factors are mandatory for different quality attributes. For real-time processing, for example, performance and reliability might be the most significant attributes; for customers with individual systems and simple operations, documentation may be more important. However, these quality attributes do not always mix well with each other; for example, it is hard to attain maintainability when the complexity of the product is high. We must take these quality attributes into account in the planning and design phases of the software to increase overall customer satisfaction.

1.1 Why Measure Software?

• Determine the quality of the software
• Measure the attributes of the software
• Improve the quality of the software
• Estimate the size and cost of the software
• Estimate the productivity effects of new tools and techniques
• Establish productivity trends over time
• Decrease future maintenance needs

1.2 Software Metrics and Its Objectives

The term software metrics covers a wide range of activities connected with measurement. A software metric is simply a measurement of software, and its primary purpose is to help plan and predict software development. Metrics assess the state of the project, track potential risks, uncover problem areas, help adjust workflow or tasks, and evaluate the team's ability to control quality. The practice of metrics involves measures and metrics that lead to long-term process improvement.

Any software metric is an attempt to measure or predict some attribute of a software product, process, or resource. Software product metrics focus on measuring attributes of the software product, which is any object or document resulting from the software development process. Software product objects fall into one of four categories: specifications, designs, source code, and test records.

Attributes can be internal or external. Internal attributes are those of the system representations, such as design diagrams, source code, or documentation, which can be controlled and measured directly. Examples of internal attributes of source code are:
 Size
 Complexity
 Coupling
 Modularity and
 Reuse
Internal attributes can be measured directly, but such measurement is of little value unless there is evidence that the specific metric is related to an external attribute.
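Internal attributes such as size and complexity can indeed be measured directly from the source text. The sketch below is a deliberately crude illustration (non-blank, non-comment lines as size; a count of branch-starting keywords as a complexity proxy), not a full cyclomatic-complexity implementation:

```python
# Rough direct measurement of two internal attributes of source code:
# size (non-blank, non-comment lines) and a crude complexity proxy
# (1 + number of lines beginning with a branching keyword).
# Both definitions are simplifications for illustration.

BRANCH_STARTERS = {"if", "elif", "for", "while", "except"}

def size_and_complexity(source: str):
    lines = [ln.strip() for ln in source.splitlines()]
    code_lines = [ln for ln in lines if ln and not ln.startswith("#")]
    branches = sum(
        1 for ln in code_lines
        if ln.split()[0].rstrip(":") in BRANCH_STARTERS
    )
    return len(code_lines), 1 + branches

sample = '''
# sample module
def classify(x):
    if x > 0:
        return "positive"
    elif x < 0:
        return "negative"
    return "zero"
'''
print(size_and_complexity(sample))  # (6, 3)
```

A real measurement program would use an established tool rather than this toy, but the point stands: internal attributes are computable from the artifact itself, with no need to run the system.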

External attributes are those exhibited by the system in execution; they cannot be measured directly. They are, for example:
 Functionality
 Reliability
 Usability
 Efficiency
 Maintainability and
 Portability
External attributes are important because knowing or predicting them plays a vital role in quality assurance. Some internal attributes are correlated with external attributes, so their metrics can be used as surrogate measures for the external ones.

1.3 The Need for Metrics

Software testing is the process of finding the errors or defects in the system and making sure it is consistent with the customer requirements before releasing it to the market. But testing can never guarantee 100% bug-free software. Residual defects may reveal the unpredictability and ineffectiveness of the testing process or the testing procedures followed; they may also stem from human mistakes in manual testing or script errors in automated testing. All of this indirectly influences the quality of the software.

This raises the need for an efficient metrics- and report-based procedure. There should be a high degree of clarity about where the software stands in terms of quality, development, customer efficiency, quantity, consistency, robustness, conformance to requirements, and so on. Every piece of software passes through a risk phase, so risk management is a very significant factor that must be addressed in order to improve software quality. Hence, certain procedures and guidelines need to be laid down to gauge the efficiency of the software, giving rise to test metrics and reports. The following arguments support why software testing metrics are necessary:

• Software metrics and reports help in both project management and process management.
• Metrics can directly impact both the effectiveness and the productivity of the software.
• Metrics help in early defect detection and defect removal, thus reducing the cost of defects.
• Metrics assist managers in effective decision making.
• Metrics help in tracking project status and support presenting statistics to senior management in a structured way.
• Metrics and reports help in accumulating data with which further testing processes can be completed more effectively.
• Metrics act as a benchmark for estimations and expose bottlenecks to the testers.
• Back-tracking is straightforward if every action is tracked correctly.
• Metrics make it easier to manage risk.

Software metrics are used to obtain objective measurements that are useful for quality assurance, performance analysis, managing, controlling, correcting, and estimating costs, finding defects in code (both post-release and prior to release), foreseeing faulty code, calculating project success, and predicting project risk. To summarize, the goal of a software metric is to understand the state of the software product; hence it is vital to select the correct set of metrics for our product during the testing process. The utility of metrics lies in quantifying one of the following goals: the schedule of a software project, the cost of the project, the size/complexity of the development involved, and the quality of the software.

1.4 Types of Metrics

Requirements metrics
 Size of requirements
 Traceability
 Completeness
 Volatility

Product metrics
 Code metrics
 Lines of code (LOC)
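Requirements volatility, one of the requirements metrics listed above, is commonly expressed as the ratio of requirement changes to the size of the requirements baseline. A minimal sketch, assuming one common definition (added + changed + deleted over the baseline total); the sample figures are invented:

```python
# Requirements volatility: one common definition is
#   (added + changed + deleted) / total baseline requirements.
# Sample numbers below are hypothetical.

def requirements_volatility(added, changed, deleted, baseline_total):
    if baseline_total == 0:
        raise ValueError("baseline must contain at least one requirement")
    return (added + changed + deleted) / baseline_total

# A 200-requirement baseline that saw 10 additions, 24 changes, 6 deletions:
print(requirements_volatility(10, 24, 6, 200))  # 0.2
```

A value like 0.2 would mean that a fifth of the baseline churned during the period measured, which a project manager could track release over release.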

 Design metrics – worked out from requirements or design documents before the system has been implemented
 Object-oriented metrics – help in finding faults and allow the team to see directly how to make their classes and objects simpler
 Test metrics
 Communication metrics – looking at artifacts, i.e., email and meetings

Process metrics
 Measure the process of software development
 Frequently used by management to check the budget and processes for efficiency
 Evaluate and track aspects of the software design process, such as human resources, time, and schedule

2. Background

The first thing to consider when talking about the background of software measurement is the potential impact of the different software development models on quality. It seems impossible to discuss software metrics and models without considering the effect of the software development process type on them. Depending on our project's goals and objectives, we select the preferred software development model from a variety of different processes and methodologies; based on which objectives we are aiming at, there are several development life cycle models to adopt. Each of the known models defines different stages of the process and a different order in which the stages should be performed. Here, we are going to discuss two of the best-known and most popular life cycles. Along the way, we are going to explain why it is necessary to merge metrics with these models.

2.1 Waterfall Model

During the 1960s and 70s, almost all software development projects faced gigantic cost escalations on one hand and schedule deferrals on the other. In that era, software developers were chiefly concerned with controlling and planning difficulties, and it seemed crucial to ensure proper execution and the delivery of good-quality products. It was exactly then that the waterfall process arose to put an end to the increasing complexity of development projects and its consequences. With this approach, the software development organization is obliged to be more organized and controllable; hence, precise tracking of the project's evolution becomes more feasible, and early recognition of errors is more likely.

The waterfall process has various advantages because of its divide-and-conquer approach. First, the waterfall model raises the development team's spirits by clarifying what expectations the project must meet before development of the system begins; it opens with the collection and description of the system requirements. The next motivating asset of the waterfall model is its breaking of the whole development process into reasonable phases, such as design, code, and test, which are supposed to deliver intermediate products leading to the final product. Each step is required to pass criteria for entry, task, validation, and exit. This is well known as the Entry-Task-Validation-Exit (ETVX) paradigm, which is one of the major characteristics of the waterfall process.

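The ETVX idea can be sketched as a data structure: each phase carries its entry, task, validation, and exit criteria, and phases are chained in order. This toy model is purely illustrative; the phase names and criteria strings are invented:

```python
# A toy model of the Entry-Task-Validation-Exit (ETVX) paradigm:
# each waterfall phase records its four criteria, and the phases
# form an ordered pipeline. Everything here is illustrative.

from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    entry: str       # criteria to start the phase
    task: str        # work performed in the phase
    validation: str  # how the phase's output is checked
    exit: str        # criteria to leave the phase

pipeline = [
    Phase("design", "requirements signed off", "produce design docs",
          "design review", "review issues closed"),
    Phase("code", "design approved", "implement modules",
          "code inspection", "inspection issues closed"),
    Phase("test", "code inspected", "execute test plan",
          "results audited", "exit criteria met"),
]

print(" -> ".join(p.name for p in pipeline))  # design -> code -> test
```

The value of the paradigm is exactly this explicitness: a phase cannot silently begin before the previous phase's exit criteria are met.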
For large companies with huge and complicated development projects, this structured tactic plays a significant role: during the process, a body of documents must be created for future use in the testing and maintenance arenas, a discipline still followed in some organizations.

2.2 Spiral Model

The spiral model, developed by Boehm, is another model of software development. It was formed after many improvements to the waterfall model as it was applied to a variety of enormous government software projects. The spiral model relies chiefly on creating prototypes and using risk management methods during the process; these features increase the spiral model's flexibility compared with the waterfall model. In recent years the spiral model has been positively received by engineers and project managers because of its strong risk management ability. The most important aspect of this method is making large projects easier to manage and possible to deliver on time without cost escalations. As Boehm has explained, the most comprehensive application of this model was the development of the TRW Software Productivity System, known as TRW-SPS.

Over time, the available software development models proved effective for a variety of software projects, but there were restrictions when it came to object-oriented software. The need for software metrics became more palpable as an increasing number of object-oriented projects appeared. That was when engineers became convinced that the combination of software metrics and software development models is the best solution to the problem.

3. Software Quality Metrics Classification

Software quality metrics are a subclass of software metrics that emphasize the quality aspects of the product, process, and project. Software metrics can be categorized into three classifications:
 Product Metrics
 Process Metrics
 Project Metrics

Product metrics help software engineers better appreciate the attributes of models and measure the quality of software by describing the characteristics of the product, such as size, complexity, design features, and performance. They provide insight into the design and structure of the software, and they help in measuring quality against a set of defined rules.

Process metrics are collected over a long period of time across all projects. They can be used to help improve software development and maintenance and aid in making strategic decisions; the goal is long-term process improvement, attained by providing a set of process indicators.

Project metrics are used by project managers and teams of software designers to adjust project workflow and practical activities, allowing them to define project features and implementation designs.

In most cases, software quality metrics are more closely related to process and product metrics than to project metrics. At times, metrics belong to multiple categories: the in-process quality metrics of a project involve both process metrics and project metrics.

Having said that software quality metrics are closely associated with process and product metrics rather than with project metrics, the parameters of project development, such as the number of developers, their individual skill levels, and the size of the project, certainly affect the quality of the product. The intention of software quality engineering is to look into the relationships that exist among in-process metrics, project characteristics, and end-product quality; the findings of this research help in engineering improvements in both process and product quality. In the discussion below, we shall examine several kinds of metrics in each of the three classifications of software quality metrics described briefly above.

Figure 1. Classification of Software Metrics
(The figure shows a classification tree: software metrics divide into product metrics and process metrics; product metrics divide into dynamic metrics and static metrics; the static branch includes size metrics (LOC, token count, function count), design metrics, control flow metrics, information flow metrics (Henry & Kafura), weighted metrics, data structure metrics, software science metrics, and testability.)

3.1 Product Quality Metrics

A quality metric is a predictor of product quality. Product quality metrics help us better understand the attributes of models and assess the quality of the software under test, and they help us gain insight into the design and construction of the software. They are used to gauge the properties of software and, at the same time, help in refining a system module by comparing it with existing better-quality systems. Product quality metrics provide on-the-spot rather than after-the-fact insight into the software development of a product, and they indicate where product improvements should occur. A good test allows us to measure the quality and risk in a system, but proper product metrics are needed to capture the test measures.

The three sub-groups of product quality metrics are:
 Effectiveness product metrics – the degree to which the product is attaining the anticipated levels of quality.
 Efficiency product metrics – the degree to which the product attains that anticipated level of quality with a reasonable expenditure of effort.
 Elegance product metrics – the extent to which the product achieves its results effectively and efficiently, in an elegant, well-executed way.

The steps to develop good product quality metrics are [2]:
 Define test coverage and quality-related objectives for the product. The effectiveness, efficiency, and elegance with which the product achieves those objectives should all be taken into consideration.
 Devise measurable metrics, either direct or surrogate, for each effectiveness, efficiency, and elegance question.
 Determine realistic goals for each metric, such that we can have a high level of confidence in the quality and test coverage of the product prior to release.
 Monitor progress towards those goals, determining product status and making test and project control decisions as needed to optimize product quality and test-coverage outcomes.

3.1.1 Scenario

We have the following information on a project:
 95% of the tests have run
 90% of the tests have passed
 5% of the tests have failed
 4% of the tests are ready to run
 1% of the tests are blocked

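The scenario's figures can be cross-checked mechanically: tests run, ready to run, and blocked should account for all planned tests, and passed plus failed should account for the tests already run. A small sketch using the numbers above:

```python
# Sanity-checking the test-execution snapshot from the scenario above.
# All values are percentages of the total planned tests.

status = {"run": 95, "passed": 90, "failed": 5, "ready": 4, "blocked": 1}

# Every planned test is either already run, ready to run, or blocked:
assert status["run"] + status["ready"] + status["blocked"] == 100

# Every executed test either passed or failed:
assert status["passed"] + status["failed"] == status["run"]

pass_rate = status["passed"] / status["run"]
print(f"pass rate so far: {pass_rate:.1%}")  # pass rate so far: 94.7%
```

The checks pass, so the snapshot is internally consistent; as the text goes on to argue, consistency alone still says nothing about whether the product will reach the desired quality.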
From the test execution figures, it has been assumed that we are on schedule. However, this information is not accurate enough to confirm that the product will reach standard quality by the last phase; we need product metrics to help us determine quality throughout test execution, to make sure the quality of the product is up to par, and to confirm that quality is on track for successful delivery of the product.

The objectives for test coverage and quality vary for every product, but they often include ensuring complete coverage of the requirements. Testing product metrics focus on the quality of the system under test, yet product metrics also reflect the entire team's efforts towards quality: the testing team's role is to measure quality, and it cannot control the behavior of stakeholders or management with respect to quality. Stakeholders, participants, and management are key, as they are the ones who determine the software process and its quality capabilities. It is therefore important to have this information on testing dashboards, so that the stakeholders and other involved members stay on the same track. A requirements coverage table helps in monitoring progress towards the goals of complete testing and fulfillment of the requirements, as shown below [3]:

Table 1: Requirements Coverage Table

3.1.2 Goal

An analytical requirements-based test strategy is used: one or more tests should be created for every requirement during the test design and implementation phase. During test execution we run these tests, and we can report the results in terms of requirements fulfilled and unfulfilled using traceability. If bi-directional traceability between tests and requirements is maintained, complete coverage can be assured. Progress towards the goals of complete testing and fulfillment of the requirements can be monitored with the metrics analysis shown in the table above. At the same time, the table helps in understanding the quality of the product, and it eases the test and project control decisions that help us achieve the best possible test coverage.
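The traceability idea above can be sketched as a mapping from requirements to the results of their tests; the requirement IDs and result values below are hypothetical:

```python
# A minimal requirements-coverage computation using traceability:
# each requirement maps to the results of the tests traced to it.
# Requirement IDs and results are invented for illustration.

traceability = {
    "REQ-001": ["pass", "pass"],
    "REQ-002": ["pass", "fail"],
    "REQ-003": [],            # no tests traced yet: a coverage gap
}

def requirement_status(results):
    if not results:
        return "uncovered"
    if all(r == "pass" for r in results):
        return "fulfilled"
    return "unfulfilled"

report = {req: requirement_status(res) for req, res in traceability.items()}
print(report)
# {'REQ-001': 'fulfilled', 'REQ-002': 'unfulfilled', 'REQ-003': 'uncovered'}
```

Because the mapping runs both ways (a requirement knows its tests, a test knows its requirement), a gap such as REQ-003 is detected immediately rather than discovered after release.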

The table above depicts quality status for the major requirements groups of an e-commerce site and the status of each requirement in each group. The status of each requirement may be determined as follows:

 If all the tests related to a requirement have been run, the requirement is said to be tested.
 If all the tests related to a requirement have been run and have passed, the requirement is classified as passed.
 If all the tests related to a requirement have been run, but one or more have failed, the requirement is classified as failed.
 If any of the tests related to a requirement are blocked, the requirement is classified as blocked.

Based on this analysis, we can have a higher level of confidence in positive measurements of requirement satisfaction; and if the metrics in the table highlight problems, that too is very helpful for the engineers.

3.1.3 Product Risk Metrics

In practice, it is difficult to develop metrics that will predict the quality of software, but multidimensional coverage metrics help. So far we have considered product metrics for a requirements-based testing strategy; now we shall consider risk-based testing, in which the objective is typically to reduce product quality risk to an acceptable level. When using a risk-based strategy, each quality risk item deemed sufficiently important for testing should have one or more tests created for it during test design and implementation. The tests are run and defects are reported during test execution.

With the help of bi-directional traceability, coverage can be assured and measured between the tests and the risk items. Bi-directional traceability is also needed between defects and risk items, in addition to test results and risk items. This enables us to report which risks are fully mitigated, which are partially mitigated, and which are unmitigated. Two questions can then be considered:
 How effectively are we reducing quality risk overall?
 For each quality risk category, how effectively are we reducing quality risk?

The figures that follow address the first question.
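The mitigation reporting described above can be derived mechanically from per-risk test results and open must-fix bugs. A sketch of the three-way classification used in the quality risk status figures (the region labels are our own names, not standard terminology):

```python
# Classifying a quality-risk item into one of the three regions shown in
# the quality risk status figures: all tests run and passed; at least one
# failed test or known must-fix bug; everything else (tests still pending).
# The function and its inputs are illustrative.

def risk_region(tests_run, tests_planned, tests_failed, must_fix_bugs):
    if tests_failed > 0 or must_fix_bugs > 0:
        return "failed-or-bug"      # at least one failure or must-fix bug
    if tests_run == tests_planned:
        return "run-and-passed"     # fully mitigated so far
    return "other"                  # tests pending: not yet mitigated

assert risk_region(10, 10, 0, 0) == "run-and-passed"
assert risk_region(6, 10, 1, 0) == "failed-or-bug"
assert risk_region(6, 10, 0, 0) == "other"
print("risk items classified")
```

Summing the risk items in each region over time yields exactly the kind of pie-chart trend the figures depict.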

Figure 2: Quality Risk Status (Early Test Execution)

The figure graphically represents status across all risk items. The region in green represents risks for which all tests were run and passed. The region in orange represents risks for which at least one test has failed or at least one must-fix bug is known. The region in blue represents other risks: those with no known must-fix bugs but with tests still pending to run.

Figure 3: Quality Risk Status (Middle of Test Execution)

During the second half of test execution, the green region starts to grow very quickly; the tests most likely to find bugs have already been carried out in the first half of test execution. Testers focus on running confidence-building tests that lower risk (turning the blue region green), while developers fix the bugs that were found.

Figure 4: Quality Risk Status (Late Test Execution)

As mentioned above, toward the end of the testing phase the green region takes over almost completely, and the blue region slowly fades out; this is a good sign, since the blue region represents unmitigated risks. When a risk-based test strategy is followed, the project management team decides what constitutes adequate test coverage. The risks are said to be balanced, and quality risk mitigation to be optimized, only if the project management team believes that the quality risks posed by known defects, test failures, and not-yet-run tests are acceptable compared with the schedule and budget risks of continuing the testing.

3.2 In-Process Quality Metric

Process metrics are collected over long periods of time, cover essentially all projects involved, and are used for making strategic decisions. Process metrics provide a set of indicators that leads to software process improvement. Organizational sensitivity must be kept in mind when interpreting metrics data, and at the same time regular feedback must be provided to the engineers who are directly involved with the collection of the measures and metrics. The following metrics form the basis for in-process quality management:

1. Defect density during machine testing: The defect rate during formal machine testing is correlated with the defect rate in the field. A higher defect rate is an indicator that the software was injected with many errors during the development phase, which yields higher defect rates during testing. Software defect density never follows a uniform distribution: if a piece of code has higher testing defects, it is either the result of more effective testing or of higher latent defects in the code. In some cases, an extraordinary testing effort could also be the reason for a high defect rate. The team of engineers can use the following scenarios to judge the release quality [1]:

- If the defect rate during testing is the same or lower than that of the previous release, ask: Did the testing for the current release deteriorate? If the answer is no, the quality perspective is positive. If the answer is yes, extra testing needs to be done.
- If the defect rate during testing is substantially higher than that of the previous release, ask: Did we plan for and actually improve testing effectiveness? If the answer is no, the quality perspective is negative, and the only remedial approach is to do more testing. If the answer is yes, the quality perspective is the same or positive.

2. Defect arrival pattern during machine testing: The defect density during testing is a summary indicator; more information is given by the pattern of defect arrivals, for which the time unit of observation is usually weeks or months. Even with the same overall defect rate during testing, different patterns of defect arrivals indicate different quality levels in the field. The objective is always to look for defect arrivals that stabilize at a very low level, or times between failures that are far apart, before ending the testing effort and releasing the software to the field. These declining patterns of defect arrival during testing are the basic assumption of many software reliability models. The figure below shows both the defect arrival rate and the cumulative defect rate. [1]
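The two release-quality scenarios above amount to a small decision procedure. The sketch below is our own illustrative encoding of them; the function and parameter names are ours, not from [1]:

```python
def judge_release_quality(current_rate, previous_rate,
                          testing_deteriorated=False, testing_improved=False):
    """Judge release quality from defect rates observed during testing.

    current_rate / previous_rate: testing defect rates (e.g. defects per KLOC)
    for the current and previous releases. The two boolean flags answer the
    follow-up questions posed in the scenarios above.
    """
    if current_rate <= previous_rate:
        # Same or lower rate: positive, unless testing itself got worse.
        return "extra testing needed" if testing_deteriorated else "positive"
    # Substantially higher rate: acceptable only if testing effectiveness
    # was deliberately improved; otherwise the outlook is negative.
    if testing_improved:
        return "same or positive"
    return "negative: do more testing"
```

For example, a release with a lower testing defect rate and no deterioration in testing, `judge_release_quality(1.8, 2.0)`, is judged positive.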

Figure 5: Two Contrasting Defect Arrival Patterns during Testing

There are three slightly different metrics when we talk about defect arrival patterns during testing:

- The defects reported during the testing phase by time interval. These are just the raw numbers of arrivals, not all of which are valid defects.
- The pattern of valid defect arrivals, obtained when problem determination is done on the reported problems. This is the true defect pattern.
- The pattern of defect backlog over time. This metric is needed because development organizations cannot investigate and fix all reported problems immediately. If a large defect backlog remains at the end of the development cycle and many fixes have yet to be integrated into the system, the stability of the system will be affected, and regression testing is needed to ensure that targeted product quality levels are reached.

3. Phase-based defect removal pattern: This metric extends the test defect density metric. It requires the tracking of defects at all phases of the development cycle (including design reviews, code inspections, and formal verifications) in addition to testing, and the pattern of phase-based defect removal reflects the overall defect removal ability of the development process. Many development organizations use metrics such as inspection coverage and inspection effort for in-process quality management, and conducting formal reviews or functional verifications to enhance the defect removal capability of the process reduces error injection. The different phases of defect removal are high-level design review (I0), low-level design review (I1), code inspection (I2), unit test (UT), component test (CT), and system test (ST). The figure below shows the patterns of defect removal of two development projects: Project A was front-end loaded, while Project B depended heavily on testing for removing defects.
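The backlog metric in the last bullet is simple bookkeeping: open problems carried forward, plus new arrivals, minus closures, per time interval. A minimal sketch (the names are ours):

```python
def defect_backlog(arrivals, closures):
    """Open-defect backlog at the end of each time interval (week or month).

    arrivals[i] and closures[i] are the defects reported and resolved
    in interval i; the backlog accumulates the difference over time.
    """
    backlog, open_defects = [], 0
    for arrived, closed in zip(arrivals, closures):
        open_defects += arrived - closed
        backlog.append(open_defects)
    return backlog
```

With arrivals of 10, 8, 5 and closures of 4, 9, 6 across three intervals, the backlog trajectory is 6, 5, 4: a declining backlog, which is what a team wants to see as release approaches.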

Figure 6: Defect Removal by Phase for Two Products

4. Defect removal effectiveness: Also referred to as defect removal efficiency, this metric is defined as

DRE = (defects removed during development phase / defects latent in the product) x 100%

Because the total number of latent defects in the product at any given phase is not known, the denominator of the metric can only be approximated; it is usually estimated by (defects removed during the phase + defects found later). With that approximation, the metric can be calculated for each phase. When used for the front end it is known as early defect removal, and when used for specific phases, phase effectiveness. The higher the value of the metric, the more effective the development process and the fewer the defects likely to escape to the next phase. The figure below shows DRE by phase for a software project; the weakest phases were unit test (UT), code inspections (I2), and component testing (CT). Action plans to improve the effectiveness of these phases were then established and deployed.
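Once defect removal counts are tracked per phase, the approximation described above can be computed directly. In this sketch (our own illustration, not from the source), "defects found later" for a phase is taken as the sum of defects removed in all subsequent tracked phases:

```python
def dre_by_phase(removed):
    """Approximate defect removal effectiveness (in %) per phase.

    removed: dict of {phase: defects removed in that phase}, in
    development order, e.g. I0, I1, I2, UT, CT, ST.
    DRE(phase) ~= removed in phase / (removed in phase + found later) x 100
    """
    counts = list(removed.values())
    return {
        phase: 100.0 * counts[i] / (counts[i] + sum(counts[i + 1:]))
        for i, phase in enumerate(removed)
    }
```

For removed = {"I0": 10, "I1": 20, "I2": 30, "UT": 20, "CT": 10, "ST": 10}, DRE for I0 is 10 / (10 + 90) = 10%. Note that the last tracked phase comes out at 100% under this approximation because no later defects are counted; including field defects as a final "phase" would correct that.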

Figure 7: Phase Effectiveness of a Software Project

3.3 Resources

There are certain metrics related to resource distribution measurement, such as utilization percentage, capacity/load, effort distribution, and response time.

- Response time: Using the response time metric, we can recognize the state of a desired resource. By picking a baseline, we establish a scale to use later on. The next step is to compare the response time against the baseline whenever we try to access the resource. It is that simple:

  Comparison                    Resource status
  Response time < baseline      Resource is available
  Response time > baseline      Resource is overloaded
  Table 2: Comparison between resources

- Utilization percentage: (total effort spent by the resource) / (total budgeted effort for the resource). Utilization percentage provides the utilization of a specific resource so that under-utilized or over-utilized resources can be avoided. The point to keep in mind is that utilization should be kept at an optimal level.

- Capacity/load: This is the best tool to calculate the maximum load that your project can tolerate; using it, you can easily figure out whether your project is overloaded:

  Ratio amount    Project status
  Ratio = 1       Ideal
  Ratio < 1       Project can take more load
  Ratio > 1       Project is overloaded
  Table 3: Capacity/Load
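Tables 2 and 3 reduce to one-line comparisons. A sketch of all three checks, with hypothetical helper names of our own:

```python
def resource_status(response_time, baseline):
    """Table 2: compare a resource's response time to the chosen baseline."""
    return "available" if response_time < baseline else "overloaded"

def utilization_percentage(effort_spent, effort_budgeted):
    """Utilization ratio of a resource; a value near 1.0 is optimal."""
    return effort_spent / effort_budgeted

def project_load_status(ratio):
    """Table 3: interpret the capacity/load ratio for the project."""
    if ratio < 1:
        return "project can take more load"
    return "ideal" if ratio == 1 else "project is overloaded"
```

For instance, a resource responding in 120 ms against a 200 ms baseline is available, while a capacity/load ratio of 1.3 flags an overloaded project.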

- Effort distribution: This metric shows the share of particular resources among a number of tasks/components. Certainly, one way to ensure the appropriate distribution of resources is to use an effort distribution measure. Pie charts are commonly used to show the percentage of resources assigned to each task or component more clearly.

Figure 8: Resource Distribution (pie chart of effort shares across requirement analysis, design, implementation, testing, documentation, and rework)

5. Real World Examples of Software Metrics

a. Motorola

Many companies attempt to use software metrics to better their position, but they often find the practice too complex and burdensome to truly follow through on. In one industry survey, less than 10% of the companies surveyed considered their use of metrics "positive" and

"enthusiastic" [100]. One of the few companies that was able to positively turn metrics into improvement was Motorola. The managers and engineers at Motorola were yearning for more understanding and insight into their company's software development process; they wanted to see whether quality, productivity, and cycle time could be improved. "Measurement is not the goal. The goal is improvement through measurement, analysis, and feedback." That was the philosophy of the Motorola software metrics initiative, and it was put into practice when senior management made measurement a requirement of the software development process in their Quality Policy for Software Development (QPSD). The target areas in this requirement were:

- Delivered defects and delivered defects per size
- Total effectiveness throughout the process
- Adherence to schedule
- Estimation accuracy
- Number of open customer problems
- Time that problems remain open
- Cost of nonconformance
- Software reliability

The first step Motorola took was to define a Metrics Working Group (MWG) in the organization. This group would be in charge of establishing and defining a company-wide set of software metrics to be used for improving the quality of the software. The group was, and is, very successful in its work and has been responsible for numerous processes and metrics in the company, one of these being the Goal/Question/Metric approach.

After the definitions and other small problems were settled, a Goal/Question/Metric structure was decided upon. There actually were some conflicts among the different business units over the proper definitions of terms like "software problem" and "fault", but these were eventually settled for the sake of the company's software metrics goals. The structure had 7 goals:

1. Improve project planning
2. Increase defect containment
3. Increase software reliability
4. Decrease software defect density
5. Improve customer service
6. Reduce the cost of nonconformance
7. Increase software productivity

Using these goals, they went on to create metrics based on the scope. These metrics are by no means set in stone and can always change over time due to feedback and effectiveness. Listed below are some of the metrics that were born to address these goals:

Metric 1.1: Schedule Estimation Accuracy (SEA) = actual project duration / estimated project duration (goal 1)
Metric 1.2: Effort Estimation Accuracy (EEA) = actual project effort / estimated project effort (goal 1)
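Metrics 1.1 and 1.2 are plain ratios; a value above 1.0 means the project overran its estimate. A minimal sketch:

```python
def schedule_estimation_accuracy(actual_duration, estimated_duration):
    """Metric 1.1 (SEA): actual project duration / estimated duration."""
    return actual_duration / estimated_duration

def effort_estimation_accuracy(actual_effort, estimated_effort):
    """Metric 1.2 (EEA): actual project effort / estimated effort."""
    return actual_effort / estimated_effort
```

A project that took 12 months against a 10-month estimate has SEA = 1.2, i.e. a 20% schedule overrun.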

Metric 2.1: Total Defect Containment Effectiveness (TDCE) = number of pre-release defects / (number of pre-release defects + number of post-release defects) (goal 2)
Metric 2.2: Phase Containment Effectiveness for phase i (PCEi) = number of phase i errors / (number of phase i errors + number of phase i defects) (goal 2)
Metric 3.1: Failure Rate (FR) = number of failures / execution time (goal 3)
Metric 4.1a: In-Process Faults (IPF) = in-process faults caused by incremental software development / assembly-equivalent delta source size (goal 4)
Metric 4.1b: In-Process Defects (IPD) = in-process defects caused by incremental software development / assembly-equivalent delta source size (goal 4)
Metric 4.2a: Total Released Defects total (TRD total) = number of released defects / assembly-equivalent total source size (goal 4)
Metric 4.2b: Total Released Defects delta (TRD delta) = number of released defects caused by incremental software development / assembly-equivalent delta source size (goal 4)
Metric 4.3a: Customer-Found Defects total (CFD total) = number of customer-found defects / assembly-equivalent total source size (goal 4)
Metric 4.3b: Customer-Found Defects delta (CFD delta) = number of customer-found defects caused by incremental software development / assembly-equivalent delta source size (goal 4)
Metric 5.1: New Open Problems (NOP) = total new post-release problems opened during the month (goal 5)
Metric 5.2: Total Open Problems (TOP) = total number of post-release problems that remain open at the end of the month (goal 5)
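The goal-2 and goal-3 metrics above translate directly into code. A sketch of TDCE, PCEi, and FR (function names are ours):

```python
def total_defect_containment_effectiveness(pre_release, post_release):
    """Metric 2.1 (TDCE): share of all defects caught before release."""
    return pre_release / (pre_release + post_release)

def phase_containment_effectiveness(phase_errors, phase_defects):
    """Metric 2.2 (PCEi): errors caught in phase i versus defects that
    originated in phase i but escaped to be found in later phases."""
    return phase_errors / (phase_errors + phase_defects)

def failure_rate(failures, execution_time):
    """Metric 3.1 (FR): failures per unit of execution time."""
    return failures / execution_time
```

For example, 90 pre-release defects against 10 post-release defects gives TDCE = 0.9, meaning 90% of all defects were contained before release.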

Metric 5.3: (mean) Age of Open Problems (AOP) = (total time post-release problems remaining open at the end of the month have been open) / (number of post-release problems remaining open at the end of the month) (goal 5)
Metric 5.4: (mean) Age of Closed Problems (ACP) = (total time post-release problems closed within the month were open) / (number of post-release problems closed within the month) (goal 5)
Metric 6.1: Cost of Fixing Problems (CFP) = dollar cost associated with fixing post-release problems within the month (goal 6)
Metric 7.1a: Software Productivity total (SP total) = assembly-equivalent total source size / software development effort (goal 7)
Metric 7.1b: Software Productivity delta (SP delta) = assembly-equivalent delta source size / software development effort (goal 7)

Alongside the goals and metrics, Motorola encouraged each software project/business unit to create its own goals, using the agreed-upon metrics, based on how it was faring (its baseline). Not to say that one area was more important than another, but some stood out. One of the most important areas was defect data, because if properly analyzed and addressed, the information gathered would lead to significant improvement. Other metric areas, such as estimation accuracy and the software problem-related metrics, really helped the units/projects get a better grip on things. To complete the engraining of software metrics into the company and its culture, Motorola wanted to build a software metrics infrastructure. This would not only include the MWG; it would

go on to include the Metrics User Group (MUG). This group was established as a forum to share experiences of software metrics implementation, meeting every quarter to discuss things like tools for automating metrics. There were also many additions and outputs created by this infrastructure. Bullet-pointed below are some of the changes established by the MWG:

- Clarification of metric definitions and their interpretation
- Guidelines for whichever unit is interested in creating a function responsible for software metrics implementation
- Criteria for evaluating metric tracking systems, used to choose one to buy if necessary
- Support for analysis of the collected metric data, plus a generic defect classification scheme with examples of using it to recommend improvements
- Surveys of customer satisfaction (from the software perspective)
- Two-day training workshops for training and consulting support of metric implementation
- Standardized requirements for the automation of data collection and analysis, handed to the tools group building those tools
- A method for software measurement technology assessment, which provides feedback on priority items that would further help the implementation cause

One of the MWG's great contributions is its metric and process definitions for the software review and test processes; a survey of the software engineers and managers showed that 67% of them use a software review package made by the MWG. There is also a current list of all the available metric tools for anyone who wants to use them. The MWG was also faced with requests to centralize

all of the metric data, but it decided that it was best to keep the data localized to whichever entity needed it.

So how did software metrics turn out for Motorola? We'd have to say "pretty well!" Motorola's focus on improving software quality through metrics turned in an astounding 50-fold reduction in software defect density within 3.5 years. That's amazing. This reduction led straight to cost reduction and cycle-time reduction too. On top of that, in 1988 Motorola was awarded the first Malcolm Baldrige National Quality Award.

b. Hewlett-Packard

Another well-known successful metrics user is Hewlett-Packard (HP). One of HP's great stories is how the use of software inspection metrics helped it achieve different levels of success in finding defects early in the software development life cycle. It is not surprising that, by finding those early defects, HP went on to save countless hours of effort and money. The successful project on which HP embarked was called the sales and inventory tracking (SIT) project. Though there were many different goals for this project, they all involved the same type of data: computer dealer sales and inventory levels of HP products. HP followed its normal software life cycle of investigation, design, construction and testing, implementation and release, and post-implementation review, but in this case the metrics were used only for inspection and testing in the design and construction-and-testing phases. Because that was the case, HP decided to centralize all of the data for the SIT project for easy access. For the inspection process, HP broke it down into an 8-step process:

Step 0: Planning
Step 1: Kickoff
Step 2: Preparation
Step 3: Issue and Question Logging
Step 4: Cause Brainstorming
Step 5: Question and Answer
Step 6: Rework
Step 7: Follow-up

On a funny side note, HP found it much more effective to change the term "defect" to "issue and question logging", because it realized that calling something a defect makes the author/engineer less receptive to the process of improvement.

So which metrics did HP decide to collect for this SIT project? Three forms were used to collect the inspection metrics: the first was the inspection issue log, the second was the inspection summary form, and the last was the inspection data summary and analysis form. From these, HP's team selected four key metrics:

1. Number of critical defects found and fixed
2. Number of noncritical defects found and fixed
3. Total time used by inspections
4. Total time saved by inspections

The testing process was broken down into test planning, unit testing, module testing, and system testing. In the test planning phase came the risk assessment. The system to be inspected was broken down into logical modules, each performing a different specific function, and each module and its submodules were assigned a risk number. That number reflected their operational importance to overall system functionality, complexity, ease of use, technical difficulty, and system supportability. Of course, not all modules are created equal, and there is only a finite amount of resources, so after the assessment HP created its Master Test Plan to know where to spend the time and effort, not to mention to serve the given goals of optimum speed and efficiency. The Master Test Plan told how the primary and secondary features were to be tested against their respective product specifications and how accurate the output of the data was.

Unit testing was done on all the jobs and program streams. Each program and job stream was counted as an individual entity, and HP had the author of each entity do the testing, to save money and time. The authors then filled out the forms for further analysis, and the values gathered were input into the central SIT database. Module testing was more of a double-checking type of test: since the unit tests were already done, module testing was there just to ensure that all of the units worked together in a cohesive manner. After that came system testing, which HP decided to conduct along with a pilot test at one of its HP dealers. The testing had its own metrics, and two kinds were used to measure the testing effort: 1) total number of critical defects and 2) total testing time for each phase,

which looked like this:

Figure 9: Testing Metrics

So how did HP decide whether all of this was worth it? HP used an ROI model to measure the value of inspections and testing in terms of how much time was saved and how quickly the product got to market. HP's reasoning, which is a sound one, is that a value should be put on how much time is saved when a defect is found in the inspection or testing phase rather than later, at the system test phase. The model had these metrics to be gathered and analyzed:

Total Time Saved = Total Time Saved by Inspection + Total Time Saved by Unit and Module Testing
Total Time Used = Total Time Used by Inspection + Total Time Used by Unit and Module Testing
Prerelease ROI = Total Time Saved / Total Time Used
Inspection ROI = Total Time Saved by Inspection / Total Time Used by Inspection
Testing ROI = Total Time Saved by Unit and Module Testing / Total Time Used by Unit and Module Testing
Total Time Saved by Inspection = Time Saved on Critical Defects + Time Saved on Noncritical Defects
Total Time Used by Inspection = Inspection Time + Time to Fix and Follow Up for Defect Resolution
Time Saved on Critical Defects = (Black Box Testing Time x Number of Critical Defects) - Total Time Used
Time Saved on Noncritical Defects = MTTR x Number of Noncritical Defects
Mean Total Time to Rework (MTTR) = Time to Find Defect + Time to Fix Defect + Time to Release to Production
For testing: Total Time Saved = Time Saved on Critical Defects
For testing: Total Time Used = Time to Design and Build a Test + Time to Execute + Time to Find and Fix a Defect
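The formula chain above can be sketched end to end. The numbers in the usage note are made up for illustration; only the formulas come from the text:

```python
def time_saved_on_critical(black_box_time, n_critical, total_time_used):
    """Time saved on critical defects = black-box testing time per
    critical defect x number of critical defects - total time used."""
    return black_box_time * n_critical - total_time_used

def time_saved_on_noncritical(mttr, n_noncritical):
    """Time saved on noncritical defects = MTTR x number found, where
    MTTR = time to find + time to fix + time to release to production."""
    return mttr * n_noncritical

def prerelease_roi(saved_by_inspection, used_by_inspection,
                   saved_by_testing, used_by_testing):
    """Prerelease ROI = total time saved / total time used."""
    total_saved = saved_by_inspection + saved_by_testing
    total_used = used_by_inspection + used_by_testing
    return total_saved / total_used
```

With, say, 500 hours saved by inspection against 100 used, and 210 hours saved by unit and module testing against 100 used, `prerelease_roi(500, 100, 210, 100)` yields 3.55, i.e. a 355% return on the prerelease effort.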

With those metrics, and the chart below:

Figure 10: Defect Summary

HP was able to conclude that its return on investment in the prerelease phase was a whopping 355%:

Figure 11: Time Data and Return

Thus, another case of software metrics gone right. It is good to have the numbers back up the practice.

6. Conclusion

We have just shown two examples of real-world applications of software metrics in action. In both cases we saw how successful the companies were and how, if done right, using software metrics can benefit the project/company in quite a big way. We also see, though, that it takes a lot of effort and commitment to make software measurement work: it takes time, effort, cooperation, and a lot of communication to get it done, but it is well worth it. Today there is still resistance to the implementation and follow-through of solid software metrics, and these two companies are actually in the minority of real-life examples gone right.

What are the advantages of using software metrics? We have gathered a list of the possible advantages of software metrics, which are as follows:

- The basic advantage of software metrics becomes apparent in the process of providing feedback to managers about the evolution and quality of the software during the several phases of the software development life cycle.
- For analyzing and comparing different programming languages, considering their differences in characteristics, software metrics are one of the best options.
- One of the possible usages of software metrics is in the defining of software quality specifications.

- When it comes to comparing and assessing the proficiency of individuals in the software development process, one can definitely take advantage of software metrics approaches.
- When checking software systems' requirements for accordance with specifications, one may benefit from software metrics during the verification process.
- Software metrics can be used to measure the complexity of the software under development.
- Software metrics can be used to enhance the comparison between a variety of design approaches for software systems.
- In situations where you need to make important design decisions on software development issues related to maintenance costs, you can take advantage of software metrics to make design tradeoffs.
- In some circumstances you need to divide the complicated modules in your system into smaller pieces in order to reduce the software's complexity; software metrics can help you decide when the correct time is to stop the division process.
- Software metrics are beneficial when you need a prediction of the amount of effort to be put into the design and development of a software system. Such predictions may be possible by analyzing old data from similar standard software processes.
- For appropriate utilization decisions, resource managers need to use software metrics to provide them with a correct view.

As you can see above, software metrics seem like a very useful tool, for example when you are trying to find the best solution for allocating your available resources in order to test the developed software's code. They do come with limitations, however.

6.1 Limitations of Software Metrics

- Software metrics are difficult to apply in some cases, and that increases the cost of application.
- Most applications of software metrics are based on assumptions that are empirical and whose definitions and derivations are not verified.
- Since software metrics are based on data recorded in the past, their authenticity is difficult to verify.
- Estimation is key in software metrics, and verifying the accuracy of these estimations is very hard.
- Software products can be managed and their quality verified, but software metrics are not useful for evaluating the performance of the engineers.
- All of the methods and tools discussed are very old.

References

1. Kan, Stephen H. Metrics and Models in Software Quality Engineering. 2nd ed. Addison-Wesley Longman Publishing, 2003.
2. Daskalantonakis, M. K. "A Practical View of Software Measurement and Implementation Experiences within Motorola." IEEE Transactions on Software Engineering, vol. 18, 1992. IEEE Xplore.
3. Franz, Louis, and Jonathan Shih (Hewlett-Packard). "Estimating the Value of Inspections and Early Testing for Software Projects." 1994.
4. Black, Rex. "Metrics for Software Testing: Managing with Facts, Part 4: Product Metrics." www.rbcs-us.com/images/documents/Metrics-Article-4.pdf. Web. Accessed 2014.
5. Mishra, Suchitra. "Five Project Management Performance Metrics Key to Successful Project Execution – Operational Excellence." Web. Accessed 2014.
6. Subramaniam, Anand. "Project Metrics & Measures." Slideshare.net, 28 July 2009. Web. Accessed 2014.
7. "Earned Value Management." Wikipedia, Wikimedia Foundation. Web. Accessed 2014.