
Unit-5

Pareto analysis
The Pareto principle (also known as the 80/20 rule, the law of the vital few, and the principle of factor sparsity) states that, for many events, roughly 80% of the effects come from 20% of the causes. Business-management consultant Joseph M. Juran suggested the principle and named it after Italian economist Vilfredo Pareto, who observed in 1906 that 80% of the land in Italy was owned by 20% of the population; Pareto developed the principle by observing that 20% of the pea pods in his garden contained 80% of the peas. It is a common rule of thumb in business; e.g., "80% of your sales come from 20% of your clients". Mathematically, the 80/20 rule is roughly followed by a power law distribution (also known as a Pareto distribution) for a particular set of parameters, and many natural phenomena have been shown empirically to exhibit such a distribution.[3] The Pareto principle is only tangentially related to Pareto efficiency; Pareto developed both concepts in the context of the distribution of income and wealth among the population. In computer science and in engineering control theory (for example, for electromechanical energy converters), the Pareto principle can be applied to optimization efforts. For example, Microsoft noted that by fixing the top 20% of the most-reported bugs, 80% of the errors and crashes would be eliminated. In load testing, it is common practice to estimate that 80% of the traffic occurs during 20% of the time.

Pareto analysis is a formal technique useful where many possible courses of action are competing for attention. In essence, the problem-solver estimates the benefit delivered by each action, then selects a number of the most effective actions that deliver a total benefit reasonably close to the maximal possible one. Pareto analysis is a creative way of looking at causes of problems because it helps stimulate thinking and organize thoughts. However, it can be limited by its exclusion of possibly important problems which may be small initially but grow with time. It should therefore be combined with other analytical tools, such as failure mode and effects analysis and fault tree analysis.

This technique helps to identify the top portion of causes that need to be addressed in order to resolve the majority of problems. Once the predominant causes are identified, tools such as the Ishikawa (fishbone) diagram can be used to identify the root causes of the problems. While it is common to refer to the Pareto principle as the "80/20" rule, on the assumption that in all situations 20% of causes determine 80% of problems, this ratio is merely a convenient rule of thumb and should not be considered an immutable law of nature. Applying Pareto analysis in risk management allows management to focus on those risks that have the most impact on the project.

Steps to identify the important causes using the 80/20 rule


1. Form an explicit table listing the causes and their frequency as a percentage.
2. Arrange the rows in decreasing order of importance of the causes (i.e., the most important cause first).
3. Add a cumulative percentage column to the table.
4. Plot the causes on the x-axis and the cumulative percentage on the y-axis.
5. Join the above points to form a curve.
6. On the same graph, plot a bar graph with the causes on the x-axis and their percent frequency on the y-axis.
7. Draw a line at 80% on the y-axis parallel to the x-axis, then drop a line from its point of intersection with the curve down to the x-axis. This point on the x-axis separates the important causes (on the left) from the trivial causes (on the right).
8. Explicitly review the chart to ensure that at least 80% of the causes are captured.
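The following is a minimal sketch of these steps in Python using matplotlib; the cause names and frequencies are hypothetical, invented purely for illustration.

```python
# A minimal sketch of the 80/20 procedure above, using hypothetical
# defect-cause frequency data; requires matplotlib.
import matplotlib.pyplot as plt

# Step 1: table of causes and their observed frequencies (hypothetical data)
causes = {
    "Requirements gaps": 42,
    "Coding errors": 27,
    "Design flaws": 15,
    "Test environment": 8,
    "Documentation": 5,
    "Other": 3,
}

# Step 2: sort causes in decreasing order of frequency
items = sorted(causes.items(), key=lambda kv: kv[1], reverse=True)
labels = [name for name, _ in items]
freqs = [count for _, count in items]
total = sum(freqs)
percents = [100 * f / total for f in freqs]

# Step 3: cumulative percentage column
cumulative = []
running = 0.0
for p in percents:
    running += p
    cumulative.append(running)

# Steps 4-7: bar graph of percent frequency, cumulative curve, and 80% line
fig, ax = plt.subplots()
ax.bar(labels, percents, color="steelblue", label="% frequency")
ax.plot(labels, cumulative, color="darkred", marker="o", label="cumulative %")
ax.axhline(80, color="gray", linestyle="--", label="80% threshold")
ax.set_xlabel("Cause")
ax.set_ylabel("Percentage of problems")
ax.legend()
plt.xticks(rotation=30, ha="right")
plt.tight_layout()
plt.show()

# Step 8: the causes to the left of where the cumulative curve crosses 80%
vital_few = [name for name, c in zip(labels, cumulative) if c <= 80]
print("Vital few causes:", vital_few)
```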

McCall's Quality Model (1977)

Also called the General Electric's Model, this model was developed mainly for the US military to bridge the gap between users and developers. It defines and identifies the quality of a software product through three major perspectives:

Product Revision: identifies quality factors that influence the ability to change the software product. (1) Maintainability: the effort required to locate and fix a fault in the program within its operating environment. (2) Flexibility: the ease of making changes required by changes in the business or in the operating environment. (3) Testability: the ease of testing the program to ensure that it is error-free and meets its specification, i.e., validating the software requirements.

Product Transition: identifies quality factors that influence the ability to adapt the software to new environments. (1) Portability: the effort required to transfer a program from one environment to another. (2) Re-usability: the ease of reusing software in a different context. (3) Interoperability: the effort required to couple the system to another system.

Product Operations: identifies quality factors that influence the extent to which the software fulfills its specification. (1) Correctness: the extent to which a functionality matches its specification. (2) Reliability: the system's ability not to fail / the extent to which the system fails. (3) Efficiency: the usage of system resources, e.g., processor time and memory; further categorized into execution efficiency and storage efficiency. (4) Integrity: the protection of the program from unauthorized access. (5) Usability: the ease of using the software.

McCall's Quality Model

McCall's 11 quality attributes hierarchy

Boehm's Quality Model (1978): Boehm's model is similar to the McCall quality model in that it also presents a hierarchical quality model structured around high-level characteristics, intermediate-level characteristics, and primitive characteristics, each of which contributes to the overall quality level. The high-level characteristics represent the basic, high-level requirements of actual use to which evaluation of software quality could be put: the general utility of the software. They address three main questions that a buyer of software has:

As-is utility: How well (easily, reliably, efficiently) can I use it as-is?
Maintainability: How easy is it to understand, modify and retest?
Portability: Can I still use it if I change my environment?

The intermediate-level characteristics represent Boehm's seven quality factors that together represent the qualities expected from a software system:

Portability (general utility): Code possesses the characteristic portability to the extent that it can be operated easily and well on computer configurations other than its current one.
Reliability (as-is utility): Code possesses the characteristic reliability to the extent that it can be expected to perform its intended functions satisfactorily.
Efficiency (as-is utility): Code possesses the characteristic efficiency to the extent that it fulfills its purpose without waste of resources.
Usability (as-is utility, human engineering): Code possesses the characteristic usability to the extent that it is reliable, efficient and human-engineered.
Testability (maintainability): Code possesses the characteristic testability to the extent that it facilitates the establishment of verification criteria and supports evaluation of its performance.
Understandability (maintainability): Code possesses the characteristic understandability to the extent that its purpose is clear to the inspector.
Flexibility (maintainability, modifiability): Code possesses the characteristic modifiability to the extent that it facilitates the incorporation of changes, once the nature of the desired change has been determined.

The lowest level of the characteristics hierarchy in Boehm's model is the primitive characteristics metrics hierarchy. The primitive characteristics provide the foundation for defining quality metrics, which was one of the goals when Boehm constructed his quality model. Consequently, the model presents one or more metrics intended to measure each primitive characteristic. Though Boehm's and McCall's models might appear very similar, the difference is that McCall's model primarily focuses on the precise measurement of the high-level characteristic as-is utility, whereas Boehm's quality model is based on a wider range of characteristics, with an extended and detailed focus primarily on maintainability.

Boehm's Quality Model

Dromey's Quality Model (1995): Dromey has built a quality evaluation framework that analyzes the quality of software components through the measurement of tangible quality properties. Each artifact produced in the software life-cycle can be associated with a quality evaluation model. Dromey gives the following examples of what he means by software components for each of the different models: variables, functions, statements, etc. can be considered components of the implementation model; a requirement can be considered a component of the requirements model; a module can be considered a component of the design model. According to Dromey, all these components possess intrinsic properties that can be classified into four categories:

Correctness: evaluates whether some basic principles are violated.
Internal: measures how well a component has been deployed according to its intended use.
Contextual: deals with the external influences by and on the use of a component.
Descriptive: measures the descriptiveness of a component (for example, whether it has a meaningful name).

This quality model, presented by R. Geoff Dromey, is the most recent of the three and is similar to McCall's and Boehm's models. Dromey proposes a product-based quality model that recognizes that quality evaluation differs for each product, and that a more dynamic idea for modeling the process is needed if it is to be wide enough to apply to different systems. Dromey focuses on the relationship between the quality attributes and the sub-attributes, as well as attempting to connect software product properties with software quality attributes. The model consists of: 1) product properties that influence quality; 2) high-level quality attributes; 3) means of linking the product properties with the quality attributes. Dromey's quality model is further structured around a five-step process:

1. Choose a set of high-level quality attributes necessary for the evaluation.
2. List the components/modules in your system.
3. Identify quality-carrying properties for the components/modules (qualities of the component that have the most impact on the product properties from the list above).
4. Determine how each property affects the quality attributes.
5. Evaluate the model and identify weaknesses.
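As a rough illustration only, the five steps can be sketched as a small data structure linking components, quality-carrying properties, and quality attributes. All component names, properties, and mappings below are invented for illustration; they are not part of Dromey's published model.

```python
# A hypothetical, minimal sketch of Dromey's five-step evaluation process.

# Step 1: high-level quality attributes chosen for the evaluation
quality_attributes = ["reliability", "maintainability", "portability"]

# Step 2: components/modules in the (hypothetical) system
components = ["parser", "scheduler", "report_writer"]

# Step 3: quality-carrying properties identified per component
quality_carrying_properties = {
    "parser": ["input validated", "no magic numbers"],
    "scheduler": ["bounded loops", "single responsibility"],
    "report_writer": ["meaningful names", "documented interface"],
}

# Step 4: how each property affects the quality attributes
property_to_attributes = {
    "input validated": ["reliability"],
    "no magic numbers": ["maintainability"],
    "bounded loops": ["reliability"],
    "single responsibility": ["maintainability", "portability"],
    "meaningful names": ["maintainability"],
    "documented interface": ["maintainability", "portability"],
}

# Step 5: evaluate the model and identify weaknesses, e.g. attributes that
# no property in any component currently supports.
covered = {attr
           for props in quality_carrying_properties.values()
           for prop in props
           for attr in property_to_attributes.get(prop, [])}
weaknesses = [attr for attr in quality_attributes if attr not in covered]
print("Unsupported quality attributes:", weaknesses)
```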

Capability Maturity Model (CMM)


The Capability Maturity Model (CMM) is a methodology used to develop and refine an organization's software development process. The model describes a five-level evolutionary path of increasingly organized and systematically more mature processes. CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development center sponsored by the U.S. Department of Defense (DoD). SEI was founded in 1984 to address software engineering issues and, in a broad sense, to advance software engineering methodologies. More specifically, SEI was established to optimize the process of developing, acquiring, and maintaining heavily software-reliant systems for the DoD. Because the processes involved are equally applicable to the software industry as a whole, SEI advocates industry-wide adoption of the CMM. The CMM is similar to ISO 9001, one of the ISO 9000 series of standards specified by the International Organization for Standardization (ISO). The ISO 9000 standards specify an effective quality system for manufacturing and service industries; ISO 9001 deals specifically with software development and maintenance. The main difference between the two systems lies in their respective purposes: ISO 9001 specifies a minimal acceptable quality level for software processes, while the CMM establishes a framework for continuous process improvement and is more explicit than the ISO standard in defining the means to be employed to that end.

CMM's Five Maturity Levels of Software Processes

At the initial level, processes are disorganized, even chaotic. Success is likely to depend on individual efforts, and is not considered to be repeatable, because processes would not be sufficiently defined and documented to allow them to be replicated.

At the repeatable level, basic project management techniques are established, and successes can be repeated, because the requisite processes have been established, defined, and documented.

At the defined level, an organization has developed its own standard software process through greater attention to documentation, standardization, and integration.

At the managed level, an organization monitors and controls its own processes through data collection and analysis.

At the optimizing level, processes are constantly being improved through monitoring feedback from current processes and introducing innovative processes to better serve the organization's particular needs.

ISO/IEC 9126
ISO/IEC 9126 (Software engineering: Product quality) is an international standard for the evaluation of software quality. The fundamental objective of this standard is to address some of the well-known human biases that can adversely affect the delivery and perception of a software development project. These biases include changing priorities after the start of a project and not having any clear definition of "success." By clarifying and then agreeing on the project priorities, and subsequently converting abstract priorities (such as "compliance") into measurable values (such as "output data can be validated against schema X with zero intervention"), ISO/IEC 9126 tries to develop a common understanding of the project's objectives and goals. The standard is divided into four parts:

1. quality model
2. external metrics
3. internal metrics
4. quality in use metrics

Quality Model
The quality model presented in the first part of the standard, ISO/IEC 9126-1, classifies software quality in a structured set of characteristics and sub-characteristics as follows:

Functionality - A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs.
o Suitability
o Accuracy
o Interoperability
o Security
o Functionality Compliance

Reliability - A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.
o Maturity
o Fault Tolerance
o Recoverability
o Reliability Compliance

Usability - A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users.
o Understandability
o Learnability
o Operability
o Attractiveness
o Usability Compliance

Efficiency - A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions.
o Time Behaviour
o Resource Utilization
o Efficiency Compliance

Maintainability - A set of attributes that bear on the effort needed to make specified modifications.
o Analyzability
o Changeability
o Stability
o Testability
o Maintainability Compliance

Portability - A set of attributes that bear on the ability of software to be transferred from one environment to another.
o Adaptability
o Installability
o Co-Existence
o Replaceability
o Portability Compliance

Each quality sub-characteristic (e.g. adaptability) is further divided into attributes. An attribute is an entity which can be verified or measured in the software product. Attributes are not defined in the standard, as they vary between different software products. Software product is defined in a broad sense: it encompasses executables, source code, architecture descriptions, and so on. As a result, the notion of user extends to operators as well as to programmers, who are users of components such as software libraries. The standard provides a framework for organizations to define a quality model for a software product. In doing so, however, it leaves to each organization the task of specifying precisely its own model. This may be done, for example, by specifying target values for quality metrics which evaluate the degree of presence of quality attributes.
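As a rough illustration only, an organization-specific quality model of this kind can be held in a simple data structure together with target values for metrics. The metric names and thresholds below are hypothetical and are not taken from the standard.

```python
# Hypothetical sketch: an organization-specific quality model built on
# ISO/IEC 9126-1 characteristics, with invented metric names and targets.
quality_model = {
    "Functionality": {
        "Suitability": {"metric": "implemented/specified functions", "target": 0.95},
        "Accuracy": {"metric": "accuracy defects per KLOC", "target": 0.1},
    },
    "Reliability": {
        "Maturity": {"metric": "mean time between failures (hours)", "target": 500},
        "Recoverability": {"metric": "mean time to restore (minutes)", "target": 15},
    },
    "Maintainability": {
        "Analyzability": {"metric": "avg. time to localize a fault (hours)", "target": 4},
    },
}

def report(measured):
    """Print measured metric values next to the model's target values.

    `measured` maps (characteristic, sub-characteristic) to an observed value.
    Whether 'better' means higher or lower depends on the metric, so this
    sketch simply reports the target alongside the observation.
    """
    for (char, sub), value in measured.items():
        entry = quality_model[char][sub]
        print(f"{char}/{sub} ({entry['metric']}): measured={value}, target={entry['target']}")

report({("Reliability", "Maturity"): 650,
        ("Maintainability", "Analyzability"): 6})
```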

Internal Metrics
Internal metrics are those which do not rely on software execution (static measures).

External Metrics
External metrics are applicable to running software.
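The sketch below illustrates the distinction with one static and one runtime measurement. Neither metric definition is taken from ISO/IEC 9126; comment density and response time are simply illustrative examples of an internal and an external metric.

```python
# Illustrative sketch only: one internal (static) metric and one external
# (runtime) metric, showing the static-vs-dynamic distinction.
import time

def comment_density(source_path):
    """Internal metric: fraction of non-blank lines that are comments
    (computed from the source text, no execution of the program)."""
    with open(source_path, encoding="utf-8") as f:
        lines = [line.strip() for line in f if line.strip()]
    comments = sum(1 for line in lines if line.startswith("#"))
    return comments / len(lines) if lines else 0.0

def response_time(func, *args):
    """External metric: elapsed wall-clock time of one call to running code."""
    start = time.perf_counter()
    func(*args)
    return time.perf_counter() - start

if __name__ == "__main__":
    print("Comment density:", comment_density(__file__))
    print("Response time (s):", response_time(sum, range(1_000_000)))
```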

Quality in Use Metrics


Quality in use metrics are only available when the final product is used in real conditions. Ideally, the internal quality determines the external quality and external quality determines quality in use. This standard stems from the GE model for describing software quality, presented in 1977 by McCall et al., which is organized around three types of Quality Characteristics:

Factors (to specify): They describe the external view of the software, as viewed by the users.
Criteria (to build): They describe the internal view of the software, as seen by the developer.
Metrics (to control): They are defined and used to provide a scale and method for measurement.

ISO/IEC 9126 distinguishes between a defect and a nonconformity, a defect being "the nonfulfilment of intended usage requirements", whereas a nonconformity is "the nonfulfilment of specified requirements". A similar distinction is made between validation and verification, known as V&V in the testing trade.

History
ISO/IEC 9126 was issued in 1991; a revision was issued in 2001 in four parts (ISO/IEC 9126-1 to 9126-4).

Development
ISO/IEC then started work on SQuaRE (Software product Quality Requirements and Evaluation), a more extensive series of standards to replace ISO/IEC 9126, with numbers of the form ISO/IEC 250mn. For instance, ISO/IEC 25000 was issued in 2005, and ISO/IEC 25010, which supersedes ISO/IEC 9126-1, was issued in March 2011. ISO 25010 has eight product quality characteristics (in contrast to ISO 9126's six), and 31 subcharacteristics.

Functionality is renamed Functional suitability. Functional completeness is added as a subcharacteristic, and Interoperability and Security are moved elsewhere. Accuracy is renamed Functional correctness, and Suitability is renamed Functional appropriateness.
Efficiency is renamed Performance efficiency. Capacity is added as a subcharacteristic.
Compatibility is a new characteristic, with Co-existence moved from Portability and Interoperability moved from Functionality.
Usability has new subcharacteristics of User error protection and Accessibility (use by people with a wide range of characteristics). Understandability is renamed Appropriateness recognizability, and Attractiveness is renamed User interface aesthetics.
Reliability has a new subcharacteristic of Availability (when required for use).
Security is a new characteristic with subcharacteristics of Confidentiality (data accessible only by those authorized), Integrity (protection from unauthorized modification), Non-repudiation (actions can be proven to have taken place), Accountability (actions can be traced to who did them), and Authenticity (identity can be proved to be the one claimed).
Maintainability has new subcharacteristics of Modularity (a change in one component has minimal impact on others) and Reusability, and Changeability and Stability are rolled up into Modifiability.
Portability has Co-existence moved elsewhere.

The 7 quality management tools

Tools and techniques

The Japanese began applying the thinking developed by Walter Shewhart and W. Edwards Deming during the 1930s and 1940s. Japan's progress in continuous improvement led to the expansion of the use of these tools. Kaoru Ishikawa, the then head of the Japanese Union of Scientists and Engineers (JUSE), thus decided to expand the use of these approaches in Japanese manufacturing in the 1960s with the introduction of the seven quality control (7QC) tools.

The 7QC tools are fundamental instruments to improve the quality of products. They are used to analyse the production process, identify major problems, control fluctuations of product quality and provide solutions to avoid future defects. These tools use statistical techniques and knowledge to accumulate and analyse data. They help organise the collected data in a way that is easy to understand. Moreover, by using the 7QC tools, specific problems in a process can be identified.

1. Check sheet
The first is the check sheet, which shows the history and pattern of variations. This tool is used at the beginning of the change process to identify the problems and collect data easily. The team using it can study observed data (a performance measure of a process) for patterns over a specified period of time. It is also used at the end of the change process to see whether the change has resulted in permanent improvement.

2. Pareto chart
The Pareto chart is named after Vilfredo Pareto, the Italian economist who determined that wealth is not evenly distributed. The chart shows the distribution of items and arranges them from the most frequent to the least frequent, with the final bar being miscellaneous. The Pareto chart is used to define problems, to set their priority, to illustrate the problems detected and to determine their frequency in the process. It is a graphic picture of the most frequent causes of a particular problem. Most people use it to determine where to put their initial efforts to get maximum gain.

3. Cause and effect diagram
The cause and effect diagram is also called the "fishbone chart" because of its appearance, and the Ishikawa chart after the man who popularised its use in Japan. It is used to list the causes of particular problems. Lines come off the core horizontal line to display the main causes; the lines coming off the main causes are the subcauses. This tool is used to figure out any possible causes of a problem. It allows a team to identify, explore, and graphically display, in increasing detail, all of the possible causes related to a problem or condition to discover its root cause(s).
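As a rough illustration of that structure, the sketch below represents a fishbone diagram as a nested data structure: the problem at the head, main cause categories as the bones, and subcauses branching off each bone. All cause names here are invented for illustration.

```python
# A hypothetical sketch of a fishbone (Ishikawa) diagram as data.
fishbone = {
    "problem": "High defect rate in releases",
    "causes": {
        "People": ["insufficient code-review time", "unclear ownership"],
        "Process": ["no regression test suite", "requirements change late"],
        "Tools": ["flaky build environment"],
        "Environment": ["staging differs from production"],
    },
}

# Print the diagram as an indented outline, in increasing detail.
print(fishbone["problem"])
for category, subcauses in fishbone["causes"].items():
    print(f"  {category}")
    for cause in subcauses:
        print(f"    - {cause}")
```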

5. Histogram
The histogram is a bar chart showing a distribution of variables. This tool helps identify the cause of problems in a process by the shape as well as the width of the distribution. It shows a bar chart of accumulated data and provides the easiest way to evaluate the distribution of data.

6. Scatter diagram
Then there is the scatter diagram, which shows the pattern of relationship between two variables that are thought to be related. The closer the points lie to a diagonal line, the more closely the two variables follow a one-to-one relationship. The scatter diagram is a graphical tool that plots many data points and shows a pattern of correlation between two variables.

7. Graphs
Graphs are among the simplest and best techniques to analyse and display data for easy communication in a visual format. Data can be depicted graphically using bar graphs, line charts, pie charts and control charts. While the first three are commonly used, the last is a line chart with control limits. By mathematically constructing control limits at three standard deviations above and below the average, one can determine what variation is due to normal ongoing causes (common causes) and what variation is produced by unique events (special causes). By eliminating the special causes first and then reducing common causes, quality can be improved. A control chart provides control limits three standard deviations above and below the average, indicating whether or not the process is in control. This tool enables the user to monitor, control and improve process performance over time by studying variation and its source.
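The sketch below is a simplified illustration of control limits at three standard deviations, computed from a baseline period and then used to judge new observations; real control charts often estimate limits from subgroup ranges instead, and all measurement values here are hypothetical.

```python
# A minimal sketch of control-chart limits at three standard deviations,
# using hypothetical measurement data.
import statistics

# Baseline measurements taken while the process was believed to be stable
baseline = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.0, 9.7, 10.1, 10.0]

center = statistics.mean(baseline)
sd = statistics.stdev(baseline)
ucl = center + 3 * sd   # upper control limit
lcl = center - 3 * sd   # lower control limit
print(f"center = {center:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")

# New observations are judged against the fixed limits: points outside them
# suggest special-cause variation, points inside common-cause variation.
new_samples = [10.2, 9.9, 11.3, 10.1]
for i, x in enumerate(new_samples):
    status = "special cause (investigate)" if (x > ucl or x < lcl) else "common cause"
    print(f"sample {i}: {x} -> {status}")
```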
