Topics: Measuring Quality · Quality Metrics · Overall Measures of Quality

Definition of Quality:
Quality is the ability of the product/service to fulfil its function. Quality is:
· Hard to define
· Impossible to measure
· Easy to recognize in its absence
· Transparent when present
Characteristics of Quality:
· Quality is not absolute.
· Quality is multidimensional.
· Quality is subject to constraints.
· Quality is about acceptable compromises.
· Quality criteria are not independent, but interact with each other, causing conflicts.

Software Quality:
Two features mark a piece of quality software:
· Conformance to its specification
· Fitness for its intended purpose

Software has proved particularly problematical for the following reasons:
· Software has no physical existence.
· There is a lack of knowledge of client needs at the start.
· Client needs change over time.
· Both hardware and software change at a rapid rate.
· Customers hold high expectations.

The need to provide a solution that matches user needs is often considered "design quality", whilst ensuring a match to the specification is considered "manufacturing quality". Kitchenham (1989b) refers to software quality as "fitness for needs" and claims quality involves matching expectations, particularly with respect to adaptability. The Department of Defense (DOD, 1985) in the USA defines software quality as "the degree to which the attributes of the software enable it to perform its intended end use".

Views of Quality:
Quality is a multidimensional construct. It may therefore be considered using a polyhedron metaphor: within this metaphor, a three-dimensional solid represents quality, and each face represents a different aspect of quality, such as correctness, reliability, and efficiency. Within the software quality area, quality has been classified according to a number of 'views' or perspectives. Each view comes from a particular context; these views are often diverse and may conflict with each other.
The views are generally presented in adversarial pairs, such as users versus designers. A software project involves the following roles:
· Project manager
· Business analyst
· Implementation programmer
· Quality auditor
· End user
· Line manager
· Project sponsor

In an attempt to classify different and conflicting views of quality, Garvin (1984) suggested five different views of quality:
1. The transcendent view
   · Innate excellence
   · Classical definition
2. The product-based view
   · Higher quality means higher cost
   · Greater functionality
   · Greater care in development
3. The user-based view
   · Fitness for purpose
   · Very hard to quantify
4. The manufacturing view
   · Measures quality in terms of conformance
   · Zero defects
5. The value-based view
   · Provides what the customer requires at an acceptable price

Quality is people:
Quality is determined by people because:
· It is people and human organizations who have problems to be solved by computer software.
· It is people who define the problems and specify the solutions.
· It is still currently people who implement designs and produce code.
· It is people who test code.

HIERARCHICAL MODEL OF QUALITY:
To compare quality in different situations, both qualitatively and quantitatively, it is necessary to establish a model of quality. Many models have been suggested for quality; most are hierarchical in nature. A hierarchical model of software quality is based upon a set of quality criteria, each of which has a set of measures or metrics associated with it. A qualitative assessment is generally made, along with a more quantified assessment. There are two principal models of this type: one by Boehm (1978) and one by McCall (1977).
The issues relating to the criteria of quality are:
· What criteria of quality should be employed?
· How do they inter-relate?
· How may the associated metrics be combined into a meaningful overall measure of quality?

THE HIERARCHICAL MODELS OF BOEHM AND MCCALL

THE GE MODEL (MCCALL, 1977 AND 1980)
This model was first proposed by McCall in 1977 and was later revised as the MQ model. It is aimed at system developers, to be used during the development process. In an early attempt to bridge the gap between users and developers, the criteria were chosen to reflect users' views as well as developers' priorities. The criteria appear to be technically oriented, but they are described by a series of questions which define them in terms accessible to non-specialist managers.

The three areas addressed by McCall's model (1977):
· Product operation: requires that the software can be learnt easily, operated efficiently, and that its results are those required by the users.
· Product revision: concerned with error correction and adaptation of the system; this is the most costly part of software development.
· Product transition: an area whose importance is likely to increase with distributed processing and the rapid rate of change in hardware.

McCall's criteria of quality defined:
· Correctness: the extent to which a program fulfils its specification.
· Reliability: the ability of the software not to fail.
· Efficiency: concerned with the use of resources, e.g. processor time and storage. It falls into two categories: execution efficiency and storage efficiency.
· Usability: the ease of use of the software.
· Integrity: the protection of the program from unauthorized access.
· Maintainability: the effort required to locate and fix a fault in the program within its operating environment.
· Testability: the ease of testing the program, to ensure that it is error-free and meets its specification.
· Flexibility: the ease of making changes required by changes in the operating environment.
· Portability: the effort required to transfer a program from one environment to another.
· Reusability: the ease of reusing software in a different context.
· Interoperability: the effort required to couple the system to another system.
THE BOEHM MODEL (1978)
Its aim is to provide a set of well-defined, well-differentiated characteristics of software quality. It is hierarchical in nature, but the hierarchy is extended so that quality criteria are subdivided. According to the uses made of the system, the criteria are classed into 'general' or 'as is', and the utilities are a subtype of the general utilities. There are two levels of actual quality criteria, the intermediate level being further split into primitive characteristics which are amenable to measurement. The model is based upon a much larger set of criteria than McCall's model, but retains the same emphasis on technical criteria: Boehm talks of modifiability where McCall distinguishes expandability from adaptability, and adds documentation, understandability and clarity. The measurement of overall quality is achieved by a weighted summation of the characteristics.

The two models share a number of common characteristics:
· The quality criteria are supposedly based upon the user's view.
· The models focus on the parts that designers can more readily analyze.
· Hierarchical models cannot be tested or validated: it cannot be shown that the metrics accurately reflect the criteria.
HOW THE QUALITY CRITERIA INTERRELATE
The individual measures of software quality do not provide an overall measure of software quality. The individual measures must be combined, and they may conflict with each other. Some of these relationships are described below.

· Integrity vs. efficiency (inverse): the control of access to data or software requires additional code and processing, leading to a longer runtime and additional storage requirements.
· Usability vs. efficiency (inverse): improvements in the human/computer interface may significantly increase the amount of code and power required.
· Maintainability and testability vs. efficiency (inverse): optimized and compact code is not easy to maintain.
· Portability vs. efficiency (inverse): the use of optimized software or system utilities will lead to a decrease in portability.
· Flexibility, reusability and interoperability vs. efficiency (inverse): the generality required for a flexible system, the use of interface routines, and the modularity desirable for reusability will all decrease efficiency.
· Flexibility and reusability vs. integrity (inverse): the general, flexible data structures required for flexible and reusable software increase the security and protection problem.
· Interoperability vs. integrity (inverse): coupled systems allow more avenues of access to more and different users.
· Reusability vs. reliability (inverse): reusable software is required to be general, and maintaining accuracy and error tolerance across all cases is difficult.
· Maintainability vs. flexibility (direct): maintainable code arises from code that is well structured.
· Maintainability vs. reusability (direct): well-structured, easily maintainable code is easier to reuse in other programs, either as a library of routines or as code placed directly within another program.
· Portability vs. reusability (direct): portable code is likely to be free of environment-specific features.
· Correctness vs. efficiency (neutral): the correctness of code, i.e. its conformance to specification, does not influence its efficiency.

MEASURING SOFTWARE QUALITY
Quality measurement, where it is considered at all, is usually expressed in terms of metrics. A software metric is a measurable property which is an indicator of one or more of the quality criteria that we are seeking to measure. As such, there are a number of conditions that a quality metric must meet. It must:
· Be clearly linked to the quality criterion that it seeks to measure
· Be sensitive to the different degrees of the criterion
· Provide an objective determination of the criterion that can be mapped onto a suitable scale

Measurement techniques applied to software are more akin to those of the social sciences, where properties are similarly complex and ambiguous. Metrics are not the same as direct measures. A typical measurable property on which a metric may be based is structuredness. The criteria of quality related to product revision, i.e. maintainability, adaptability and reusability, are all related to the structuredness of the source code: well-structured code will be easier to maintain or adapt than so-called "spaghetti code". Structuredness at its simplest may be calculated in terms of the average length of code modules within the program.
SOFTWARE METRICS
Metrics are classified into two types, according to whether they are predictive or descriptive:
· A predictive metric is used to make predictions about the software later in the lifecycle. For example, structuredness is used to predict the maintainability of the software product in use.
· A descriptive metric describes the state of the software at the time of measurement.

Different authors have taken different approaches to metrics. Structuredness may be measured by questions such as:
· Have the rules for transfer of control between modules been followed? (y/n)
· Are modules limited in size? (y/n)
· Do all modules have only one exit point? (y/n)
· Do all modules have only one entry point? (y/n)
A well-structured program will produce positive answers to such questions.

McCall's approach is more quantitative, using scores derived from equations such as:

    structuredness = n01 / ntot

where:
    n01  = number of modules containing one or zero exit points only
    ntot = total number of modules

Generally, in this approach, scores are normalized to a range between 0 and 1, to allow for easier combination and comparison.
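As a sketch, the normalized structuredness score described above can be computed in a few lines of Python. The function name and the per-module exit-count representation are illustrative assumptions, not part of McCall's model.

```python
def structuredness_score(exit_counts):
    """McCall-style structuredness: fraction of modules with one or zero exit points.

    exit_counts: list giving the number of exit points of each module.
    Returns a score already normalized to the range 0..1.
    """
    if not exit_counts:
        raise ValueError("at least one module is required")
    n01 = sum(1 for exits in exit_counts if exits <= 1)  # modules with 0 or 1 exit points
    ntot = len(exit_counts)                              # total number of modules
    return n01 / ntot

# A program of five modules, two of which have multiple exit points:
print(structuredness_score([1, 1, 0, 3, 2]))  # 3/5 = 0.6
```

Because the score is a ratio, it is directly comparable across programs of different sizes, which is exactly why scores are normalized to 0..1 in this approach.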
What makes a good metric? Seven criteria for a good metric, after Watts (1987):
· Objectivity: the results should be free from subjective influences. It must not matter who the measurer is.
· Reliability: the results should be precise and repeatable.
· Validity: the metric must measure the correct characteristic.
· Standardization: the metric must be unambiguous and allow for comparison.
· Comparability: the metric must be comparable with other measures of the same criterion.
· Economy: the simpler, and therefore the cheaper, the measure is to use, the better.
· Usefulness: the measure must address a need, not simply measure a property for its own sake.

A further important feature is consistency. Automation is also desirable. These features appear attractive, but can give unjustified credibility to the results obtained. A relationship such as that between maintainability and structuredness must be validated to determine whether it is linear or more complex in nature. It is also possible to validate whether the dependence of maintainability upon structuredness is identical to that of adaptability or reusability.

Metrics cited in the literature:
Metrics available for each criterion (after Watts, 1987):
The metrics cited depend to a very large extent upon just seven distinct measurable properties: readability, error prediction, error detection, mean time to failure (MTTF), complexity, modularity, and testability.

1. Readability as a measure of usability: readability may be applied to documentation in order to assess how such documentation may assist in the usability of a piece of software.
2. Error prediction as a measure of correctness: this measure depends upon a stable software development environment.
3. Error detection as a measure of correctness.
4. Mean time to failure (MTTF) as a measure of reliability.
5. Complexity as a measure of reliability: the assumption underpinning these measures is that as complexity increases, reliability decreases.
6. Complexity as a measure of maintainability: complexity is also indicative of maintainability.
7. Readability of code as a measure of maintainability: readability has also been suggested as a measure of maintainability.
8. Modularity as a measure of maintainability: increased modularity is generally assumed to increase maintainability. Four measures have been suggested. Kentger (1981) defined a four-level hierarchy of module types:
   · Control modules
   · Problem-oriented modules
   · Management modules for abstract data
   · Realization modules for abstract data
   Yau and Collofello (1979) measured "stability" as the number of modules affected by program modification.
9. Testability as a measure of maintainability: the ease and effectiveness of testing will have an impact upon the maintainability of a product.

AN OVERALL MEASURE OF QUALITY
Much of the work in this area has been concerned with the simple reduction of a set of scores to a single 'figure-of-merit'. Five such methods are detailed by Watts (1987) as part of the MQ approach.

1. Simple scoring: in this method, each criterion is allocated a score. The overall quality is given by the mean of the individual scores.
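The simple scoring method can be sketched directly: take the mean of the individual criterion scores. The criterion names and score values below are illustrative, not from the notes.

```python
def simple_score(scores):
    """Simple scoring: overall quality is the mean of the individual criterion scores."""
    scores = list(scores)
    return sum(scores) / len(scores)

# Illustrative criterion scores, each between 0 and 1:
criteria = {"correctness": 0.8, "reliability": 0.7, "efficiency": 0.9}
print(simple_score(criteria.values()))  # (0.8 + 0.7 + 0.9) / 3 = 0.8
```

Note that simple scoring treats every criterion as equally important; the methods below refine this by introducing weights.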
2. Weighted scoring: this scheme allows the user to weight each criterion according to how important they consider it to be. Each criterion is evaluated to produce a score between 0 and 1. Each score is weighted before summation, and the resulting figure reflects the relative importance of the different factors.
3. Phased weighting factor method: this is an extension of weighted scoring. A weighting is assigned to a group of characteristics before each individual weighting is considered. For example:
   Product operation weighted mean = 0.660
   Product transition weighted mean = 0.633
   Overall measure by PWF method = ((2/3) x 0.660) + ((1/3) x 0.633) = 0.65
4. The Kepner-Tregoe method (1981): the criteria are divided into 'essential' and 'desirable'. A minimum value is specified for each essential criterion, and any software failing to reach these scores is designated unsuitable.
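The weighted scoring and phased weighting factor (PWF) calculations can be sketched with a single helper; the group means are the figures given in the notes, while the function name is an illustrative assumption.

```python
def weighted_mean(scores, weights):
    """Weighted scoring: each criterion score is weighted before summation."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Phased weighting factor: weight whole groups of characteristics first,
# then combine the within-group weighted means across groups.
product_operation = 0.660   # weighted mean of the product-operation criteria
product_transition = 0.633  # weighted mean of the product-transition criteria
overall = weighted_mean([product_operation, product_transition], [2/3, 1/3])
print(round(overall, 2))  # 0.65, matching the worked example in the notes
```

The same `weighted_mean` helper serves both levels of the PWF method: once within each group of criteria, and once across the groups.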
   'Suitable' software is then judged by use of the weighting factor method.
5. The Cologne combination method (Schmitz, 1975): this method is designed with comparative evaluation in mind. Using the chosen criteria, each product is ranked in order.

POLARITY PROFILING:
In this scheme, quality is represented by a series of ranges from -3 to +3. The required quality may be represented and compared to the actual quality achieved. It is a common problem amongst software developers that they focus upon particular aspects of quality. When a user complains of poor quality, they tend to improve the product further in these areas, perhaps by 'tweaking' the code. Unfortunately, the product has often already exceeded the user's expectations in these areas, and a further improvement does not improve the user's overall view of the quality of the product. Worse, the user's needs still have not been met in other critical areas. Two different outcomes may result. In the first case, usability, which was already considered satisfactory, is improved further; this effort is wasted, since reliability and efficiency are still not up to the required standard, leading to tensions between the developers and users. In the second case, improvements in reliability and efficiency are traded for a reduction in adaptability and maintainability; the consequence is that all criteria are now at the required level, resulting in an overall perception of quality and satisfied users.

UNIT – I TEST QUESTIONS
PART – A (5 X 2 = 10 Marks)
1. Define software quality.
2. Give some insights about quality.
3. Correctness vs. efficiency: differentiate.
4. What are the common characteristics of both the Boehm and McCall models?
5. What makes a good metric?

PART – B
6. I. Explain briefly about views of quality. (7)
   II. State and explain McCall's model of quality. (8)
(Or)
7. I. Explain how the quality criteria are interrelated. (8)
   II. Explain briefly about the overall measure of quality. (8)

Question Bank:

2 Marks Questions:
1. Define quality.
2. Define software quality.
3. What are all the problems in measuring the quality of software?
4. What are the two quality factors that fall under the software quality area?
5. What is design quality?
6. What is meant by manufacturing quality?
7. What are the five different views of quality suggested by Garvin?
8. What should be the aim of any software production process?
9. What are the common characteristics of both the Boehm and McCall models?
10. What are the five different views of quality suggested by Garvin?
11. What are the methodologies used in the manufacturer's view?
12. What is the value-based view?
13. What is the purpose of hierarchical modelling?
14. Give any two examples of hierarchical models.
15. Give a schematic hierarchical view of software quality.
16. Give some examples of quality criteria employed in software quality.
17. Write about the GE model.
18. What are the three areas addressed by McCall's model?
19. What are all of McCall's criteria of quality?
20. What is portability?
21. Define interoperability.
22. Explain Boehm's model.
23. What are the common characteristics of both the Boehm and McCall models?
24. Give the interrelationships between quality criteria.
25. Give a few relationships that are inverse relationships.
26. Give examples of a direct relationship.
27. Give an example of a neutral relationship.
28. Correctness vs. efficiency: differentiate.
29. What are the metrics associated with reliability?
30. Define software metric.
31. What are the two types of software metric?
32. What is meant by a predictive metric?
33. Define descriptive metric.
34. What makes a good metric?
35. What is the objectivity criterion for a good metric?
36. Write down the problems with metrics.
37. What are the methods used in the overall measure of quality?
38. Explain the simple scoring method.
39. How is the weighted scoring method used?
40. How does the phased weighting factor method differ from the weighted scoring method?
41. How is the Cologne combination method used?
42. What is the role of a project manager?
43. Who is called the implementation programmer?
44. What is the role of a quality auditor?
45. What do you mean by the transcendent view?
46. What is the product-based view?
47. What does the user-based view mean?
48. Define structuredness.
49. Why is software quality important?
50. What is meant by software quality?
51. Define quality as defined by the International Standards Organization.
52. Define software quality assurance.
53. What are the five methods detailed by Watts (1987) as part of the MQ approach?

16 Marks Questions:
1) McCall suggests that simplicity, modularity, instrumentation and self-descriptiveness are software quality criteria, that is, internal characteristics that promote the external quality of testability.
   (I) Explain each of the above four criteria.
   (II) Describe the possible measures for each of the criteria.
   (III) Describe the possible ways in which the measures could be assessed.
2) Explain Garvin's five views of quality. Quality is conformance to requirements, both implicit and explicit. Explain the terms "explicit" and "implicit" requirements in the context of Garvin's views of quality.
3) State and explain McCall's model of quality.
4) Explain briefly about views of quality.
5) What are the quality metrics available for measuring quality? Explain.
6) Explain briefly about the overall measure of quality.
7) Explain the interrelation between quality criteria and the software process.
8) Explain briefly about hierarchical modelling.