UNIT-IV
School of CSA
Dr.Hemanth.K.S
Software Quality Management Process
& Advanced Topics
UNIT-IV
Advanced Topics: Software Process Improvement, The SPI Process, Trends and Return
on Investment, Technology Directions.
3
SOFTWARE QUALITY MANAGEMENT PROCESS
❖ Quality Concepts
❖ Software Quality Assurance
❖ Testing Strategies
❖ Security Engineering
❖ Software Configuration Management Process
4
SOFTWARE QUALITY
• Bad software plagues nearly every organization that uses computers, causing lost
work hours during computer downtime, lost or corrupted data, missed sales
opportunities, high IT support and maintenance costs, and low customer
satisfaction
5
QUALITY
• The American Heritage Dictionary defines quality as
“a characteristic or attribute of something.”
• For software, two kinds of quality may be encountered:
Quality of design encompasses requirements, specifications, and the
design of the system.
Quality of conformance is an issue focused primarily on
implementation.
• User satisfaction = compliant product + good quality + delivery within
budget and schedule
6
QUALITY—A PRAGMATIC VIEW
• The transcendental view argues that quality is something that you immediately
recognize, but cannot explicitly define.
• The user view sees quality in terms of an end-user’s specific goals. If a product
meets those goals, it exhibits quality.
• The manufacturer’s view defines quality in terms of the original specification of
the product. If the product conforms to the spec, it exhibits quality.
• The product view suggests that quality can be tied to inherent characteristics
(e.g., functions and features) of a product.
• Finally, the value-based view measures quality based on how much a customer is
willing to pay for a product. In reality, quality encompasses all of these views and
more.
7
SOFTWARE QUALITY
8
EFFECTIVE SOFTWARE PROCESS
9
USEFUL PRODUCT
• A useful product delivers the content, functions, and features that the end-
user desires
• But just as important, it delivers these assets in a reliable, error-free way.
• A useful product always satisfies those requirements that have been
explicitly stated by stakeholders.
• In addition, it satisfies a set of implicit requirements (e.g., ease of use) that
are expected of all high-quality software.
10
ADDING VALUE
• By adding value for both the producer and user of a software product, high quality software
provides benefits for the software organization and the end-user community.
• The software organization gains added value because high quality software requires less
maintenance effort, fewer bug fixes, and reduced customer support.
• The user community gains added value because the application provides a useful
capability in a way that expedites some business process.
11
QUALITY DIMENSIONS
• Feature quality. Does the software provide features that surprise and delight first-time end-
users?
• Reliability. Does the software deliver all features and capability without failure? Is it available
when it is needed? Does it deliver functionality that is error free?
• Conformance. Does the software conform to local and external software standards that are
relevant to the application? Does it conform to de facto design and coding conventions?
12
QUALITY DIMENSIONS
• Durability. Can the software be maintained (changed) or corrected (debugged) without the
inadvertent generation of unintended side effects? Will changes cause the error rate or reliability
to degrade with time?
• Aesthetics. Most of us would agree that an aesthetic entity has a certain elegance, a unique flow,
and an obvious “presence” that are hard to quantify but evident, nonetheless.
• Perception. In some situations, you have a set of prejudices that will influence your perception of
quality.
13
OTHER VIEWS
14
MCCALL’S QUALITY FACTORS
• Correctness: The extent to which a program satisfies its specification and fulfills the customer’s
mission objectives.
• Reliability: extent to which a program can be expected to perform its intended function with
required precision.
• Efficiency: the amount of computing resources and code required by a program to perform its
function.
• Integrity: Extent to which access to software or data by unauthorized persons can be controlled.
• Usability: Effort required to learn, operate, prepare input for and interpret output of a program.
• Maintainability: Effort required to locate and fix an error in a program.
• Flexibility: Effort required to modify an operational program.
• Testability: effort required to test a program to ensure that it performs its intended function.
• Portability: Effort required to transfer the program from one hardware and / or software system
environment to another.
• Reusability: part of a program can be reused in other applications
• Interoperability: effort required to couple one system to another.
15
ISO 9126 QUALITY FACTORS
16
TARGETED FACTORS
17
TARGETED FACTORS
18
THE SOFTWARE QUALITY DILEMMA
• If you produce a software system that has terrible quality, you lose because no one will
want to buy it.
• If on the other hand you spend infinite time, extremely large effort, and huge sums of
money to build the absolutely perfect piece of software, then it's going to take so long to
complete, and it will be so expensive to produce that you'll be out of business anyway.
• Either you missed the market window, or you simply exhausted all your resources.
• So, people in industry try to get to that magical middle ground where the product is good
enough not to be rejected right away, such as during evaluation, but also not the object of
so much perfectionism and so much work that it would take too long or cost too much to
complete. [Ven03]
19
“GOOD ENOUGH” SOFTWARE
• Good enough software delivers high quality functions and features that end-users desire, but at the
same time it delivers other more obscure or specialized functions and features that contain known
bugs.
• If you work for a small company be wary of this philosophy. If you deliver a “good enough” (buggy) product,
you risk permanent damage to your company’s reputation.
• You may never get a chance to deliver version 2.0 because bad buzz may cause your sales to plummet and
your company to fold.
• If you work in certain application domains (e.g., real-time embedded software, or application software that is integrated with hardware), delivering a “good enough” product can be negligent and open your company to expensive litigation.
20
COST OF QUALITY
1. Quality costs us time and money, but a lack of quality also has a cost.
2. The cost of quality can be divided into prevention costs, appraisal costs, internal failure costs, and external failure costs.
21
COST
1. The relative costs to find and repair an error or defect increase dramatically
as we go from prevention to detection to internal failure to external failure
costs.
22
RISK
• “People bet their jobs, their comforts, their safety, their entertainment, their
decisions, and their very lives on computer software. It better be right.”
• The implication is that low-quality software increases risks for both the developer
and the user.
Example:
• Throughout the month of November 2000 at a hospital in Panama, 28 patients
received massive overdoses of gamma rays during treatment for a variety of
cancers. In the months that followed, five of these patients died from radiation
poisoning and 15 others developed serious complications. What caused this
tragedy? A software package, developed by a U.S. company, was modified by
hospital technicians to compute modified doses of radiation for each patient.
23
NEGLIGENCE AND LIABILITY
• The story is all too common. A governmental or corporate entity hires a major software developer
or consulting company to analyze requirements and then design and construct a software-based
“system” to support some major activity.
• The system might support a major corporate function (e.g., pension management) or some
governmental function (e.g., healthcare administration or homeland security).
• Work begins with the best of intentions on both sides, but by the time the system is delivered,
things have gone bad.
• The system is late, fails to deliver desired features and functions, is error-prone, and does not
meet with customer approval.
• Litigation ensues.
• In most cases, the customer claims that the developer has been negligent and is therefore not entitled
to payment.
• The developer, meanwhile, claims that the customer has repeatedly changed the requirements.
• But in every case, the quality of the delivered system comes into question.
24
QUALITY AND SECURITY
• “Software security relates entirely and completely to quality. You must think
about security, reliability, availability, dependability—at the beginning, in the
design, architecture, test, and coding phases, all through the software life cycle
[process].
• Even people aware of the software security problem have focused on late life-
cycle stuff. The earlier you find the software problem, the better. And there are
two kinds of software problems. One is bugs, which are implementation
problems. The other is software flaws—architectural problems in the design.
People pay too much attention to bugs and not enough on flaws.”
25
IMPACT OF MANAGEMENT ACTIONS
26
ACHIEVING SOFTWARE QUALITY
27
SOFTWARE QUALITY ASSURANCE(SQA)
Topics covered
1. Elements of SQA
2. SQA Tasks, Goals and Metrics
3. Statistical Software Quality Assurance
4. Software Reliability
28
ELEMENTS OF SQA
• Standards
• Reviews and Audits
• Testing
• Error/defect collection and analysis
• Change management
• Education
• Vendor management
• Security management
• Safety
• Risk management
29
ROLE OF THE SQA GROUP-I
• The SQA group reviews the process description for compliance with organizational policy,
internal software standards, externally imposed standards (e.g., ISO 9001), and other parts of
the software project plan.
30
ROLE OF THE SQA GROUP-II
Reviews software engineering activities to verify compliance with the defined software process.
• Identifies, documents, and tracks deviations from the process and verifies that corrections have been
made.
Audits designated software work products to verify compliance with those defined as part of the software
process.
• reviews selected work products; identifies, documents, and tracks deviations; verifies that corrections
have been made
Ensures that deviations in software work and work products are documented and handled according to a
documented procedure.
32
STATISTICAL SQA
Product & process measurement:
• Collect information on all defects
• Find the causes of the defects
• Move to provide fixes for the process
33
STATISTICAL SQA
• Information about software errors and defects is collected and categorized.
• An attempt is made to trace each error and defect to its underlying cause (e.g., non-
conformance to specifications, design error, violation of standards, poor communication
with the customer).
• Using the Pareto principle (80 percent of the defects can be traced to 20 percent of all
possible causes), isolate the 20 percent (the vital few).
• Once the vital few causes have been identified, move to correct the problems that have
caused the errors and defects.
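The Pareto step above can be sketched in a few lines of Python. This is a minimal illustration, not a prescribed tool: the defect log, cause names, and the `vital_few` helper are all hypothetical.

```python
from collections import Counter

def vital_few(defects, threshold=0.8):
    """Return the smallest set of causes (largest first) that together
    account for at least `threshold` (e.g., 80%) of all logged defects."""
    counts = Counter(d["cause"] for d in defects)
    total = sum(counts.values())
    selected, covered = [], 0
    for cause, n in counts.most_common():
        selected.append(cause)
        covered += n
        if covered / total >= threshold:
            break
    return selected

# Hypothetical defect log: each record carries its traced root cause.
log = ([{"cause": "incomplete spec"}] * 40
       + [{"cause": "design error"}] * 25
       + [{"cause": "standards violation"}] * 20
       + [{"cause": "poor communication"}] * 10
       + [{"cause": "other"}] * 5)

print(vital_few(log))  # the "vital few" causes covering >= 80% of defects
```

Once the vital few are known, correction effort is concentrated on those causes first.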
34
SIX-SIGMA FOR SOFTWARE ENGINEERING
• The term “six sigma” is derived from six standard deviations—3.4 instances (defects) per million
occurrences—implying an extremely high quality standard.
• Define customer requirements and deliverables and project goals via well-defined methods of customer
communication
• Measure the existing process and its output to determine current quality performance (collect defect
metrics)
• Analyze defect metrics and determine the vital few causes.
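The "measure" step is usually expressed as defects per million opportunities (DPMO), the metric behind the 3.4-per-million six-sigma target. A minimal sketch, with hypothetical measurement figures:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities: the current quality level is
    compared against the six-sigma target of 3.4 DPMO."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical measurement: 12 defects found in 500 delivered modules,
# each module offering 40 opportunities for a defect.
print(dpmo(12, 500, 40))  # 600.0 DPMO -- far from the 3.4 target
```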
35
SOFTWARE RELIABILITY
36
SOFTWARE SAFETY
37
ISO 9001:2000 STANDARD
• ISO 9001:2000 is the quality assurance standard that applies to software engineering.
• The standard contains 20 requirements that must be present for an effective quality assurance
system.
• The requirements delineated by ISO 9001:2000 address topics such as
• management responsibility, quality system, contract review, design control, document and data control,
product identification and traceability, process control, inspection and testing, corrective and preventive
action, control of quality records, internal quality audits, training, servicing, and statistical techniques.
38
SOFTWARE TESTING STRATEGIES
39
WHAT TESTING SHOWS
• errors
• requirements conformance
• performance
• an indication of quality
40
A STRATEGIC APPROACH TO S/W TESTING
1. Verification refers to the set of tasks that ensure that software correctly
implements a specific function.
2. Validation refers to a different set of tasks that ensure that the software that
has been built is traceable to customer requirements. Boehm [Boe81] states
this another way:
• Verification: "Are we building the product right?"
• Validation: "Are we building the right product?"
42
ORGANIZING FOR SOFTWARE TESTING
43
TESTING STRATEGY - THE BIG PICTURE
(Each development activity is paired with a corresponding test level:)
• System engineering — System test
• Analysis modeling — Validation test
• Design modeling — Integration test
• Code generation — Unit test
44
TESTING STRATEGY - THE BIG PICTURE
45
STRATEGIC ISSUES
46
TESTING STRATEGY
47
TESTING STRATEGIES FOR CONVENTIONAL SOFTWARE
48
UNIT TESTING
49
UNIT TESTING
The module to be tested is exercised with test cases that examine its:
• interface
• local data structures
• boundary conditions
• independent paths
• error handling paths
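The unit-testing checklist (interface, local data structures, boundary conditions, independent paths, error-handling paths) can be illustrated with a small `unittest` sketch. The module under test, `days_in_month`, is hypothetical.

```python
import unittest

def days_in_month(month):
    """Hypothetical module under test."""
    if not 1 <= month <= 12:
        raise ValueError("month must be 1..12")
    return [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]

class DaysInMonthTest(unittest.TestCase):
    # boundary conditions: the edges of the valid input range
    def test_boundaries(self):
        self.assertEqual(days_in_month(1), 31)
        self.assertEqual(days_in_month(12), 31)

    # independent paths: a typical value exercises the lookup path
    def test_typical_value(self):
        self.assertEqual(days_in_month(2), 28)

    # error-handling paths: invalid input must be rejected cleanly
    def test_error_handling(self):
        with self.assertRaises(ValueError):
            days_in_month(0)
        with self.assertRaises(ValueError):
            days_in_month(13)
```

Run with `python -m unittest` to execute the cases and report results.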
50
UNIT TEST ENVIRONMENT
• A driver applies test cases to the module under test through its interface and records the results.
• Stubs stand in for modules that are subordinate to (called by) the module being tested.
• Test cases exercise the module's interface and local data structures.
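A driver and a stub can be sketched directly in code. Everything here is illustrative: `convert` is the module under test, `stub_fetch_rate` replaces a not-yet-integrated subordinate module, and `driver` applies the test cases.

```python
# Module under test: it depends on a subordinate module (fetch_rate)
# that is not yet integrated, so a stub will stand in for it.
def convert(amount, currency, fetch_rate):
    rate = fetch_rate(currency)   # interface to the subordinate module
    return round(amount * rate, 2)

def stub_fetch_rate(currency):
    """Stub: returns canned data instead of calling the real service."""
    return {"EUR": 0.9, "JPY": 150.0}[currency]

def driver():
    """Driver: applies test cases to the module and reports results."""
    cases = [((100, "EUR"), 90.0), ((2, "JPY"), 300.0)]
    return all(convert(a, c, stub_fetch_rate) == expected
               for (a, c), expected in cases)

print(driver())  # True when every test case passes
```

Drivers and stubs are overhead: they are written but not delivered with the final product.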
51
II. INTEGRATION TESTING
• Top-down integration
• Bottom-up integration
• Regression testing
• Smoke testing
52
II. INTEGRATION TESTING
Options:
• the “big bang” approach
• an incremental construction strategy
53
TOP-DOWN INTEGRATION
(Figure: the top module A is tested first, with stubs standing in for its subordinate modules B, F, and G; stubs are then replaced by real modules one at a time.)
54
TOP DOWN INTEGRATION
55
BOTTOM-UP INTEGRATION
(Figure: low-level modules B, F, and G are combined into a cluster; a driver exercises the cluster, and drivers are removed as integration moves upward toward A.)
57
BOTTOM UP INTEGRATION
58
BOTTOM UP INTEGRATION
59
REGRESSION TESTING
60
REGRESSION TESTING
61
SMOKE TESTING
1. A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from properly performing its function.
• The intent should be to uncover “show stopper” errors that have the highest likelihood of throwing the software project behind schedule.
3. The build is integrated with other builds, and the entire product (in its current form) is smoke tested daily.
• The integration approach may be top down or bottom up.
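A daily smoke test can be a very small harness: a handful of "show stopper" checks run against the current build, where any failure blocks further testing. The build dictionary and check functions below are hypothetical.

```python
# Hypothetical smoke-test harness: each check probes one product
# function that the build must perform before deeper testing proceeds.
def check_startup(build):
    return build.get("boots", False)

def check_data_files(build):
    return all(f in build["files"] for f in ("app.cfg", "core.dat"))

SMOKE_CHECKS = [check_startup, check_data_files]

def smoke_test(build):
    """Run every check; report PASS, or FAIL with the failing checks."""
    failures = [c.__name__ for c in SMOKE_CHECKS if not c(build)]
    return ("PASS", []) if not failures else ("FAIL", failures)

build = {"boots": True, "files": ["app.cfg", "core.dat", "ui.bin"]}
print(smoke_test(build))  # ('PASS', [])
```

Run against each daily build, this surfaces integration breakage the day it is introduced.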
62
OBJECT-ORIENTED TESTING
63
BROADENING THE VIEW OF “TESTING”
1. It can be argued that the review of OO analysis and design models is especially
useful because the same semantic constructs (e.g., classes, attributes, operations,
messages) appear at the analysis, design, and code level.
2. Therefore, a problem in the definition of class attributes that is uncovered during
analysis will circumvent side effects that might occur if the problem were not
discovered until design or code (or even the next iteration of analysis).
64
TESTING THE CRC MODEL
1. Revisit the CRC model and the object-relationship model.
2. Inspect the description of each CRC index card to determine if a delegated responsibility
is part of the collaborator’s definition.
3. Invert the connection to ensure that each collaborator that is asked for service is receiving
requests from a reasonable source.
4. Using the inverted connections examined in step 3, determine whether other classes might
be required or whether responsibilities are properly grouped among the classes.
5. Determine whether widely requested responsibilities might be combined into a single
responsibility.
6. Steps 1 to 5 are applied iteratively to each class and through each evolution of the analysis
model.
65
OO TESTING STRATEGY
66
WEBAPP TESTING - I
67
WEBAPP TESTING - II
68
HIGH ORDER TESTING
1. Validation testing
1. Focus is on software requirements
2. System testing
1. Focus is on system integration
3. Alpha/Beta testing
1. Focus is on customer usage
4. Recovery testing
1. forces the software to fail in a variety of ways and verifies that recovery is properly performed
5. Security testing
1. verifies that protection mechanisms built into a system will, in fact, protect it from improper penetration
6. Stress testing
1. executes a system in a manner that demands resources in abnormal quantity, frequency, or volume
7. Performance Testing
1. test the run-time performance of software within the context of an integrated system
69
THE ART OF DEBUGGING
THE DEBUGGING PROCESS
70
DEBUGGING PROCESS
71
DEBUGGING STRATEGIES
1. The objective of debugging is to find and correct the cause of a software error or defect.
2. In general, three debugging strategies have been proposed:
• Brute force / testing
• Backtracking
• Cause Elimination
3. All the three techniques need to address the following :
• Debugging tactics
• Automated debugging
• The people factor
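Cause elimination often proceeds by systematically halving the search space. One common form is binary-searching a sequence of builds for the first one that exhibits the failure (the idea automated by `git bisect`). This sketch assumes the first build is known good, the last known bad, and `is_broken` is a hypothetical predicate that reruns the failing test.

```python
def first_bad_build(builds, is_broken):
    """Binary-search for the first build exhibiting the failure.
    Assumes builds[0] is good and builds[-1] is bad."""
    lo, hi = 0, len(builds) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_broken(builds[mid]):
            hi = mid          # failure already present: look earlier
        else:
            lo = mid + 1      # failure introduced later
    return builds[lo]

builds = list(range(1, 101))  # build numbers 1..100
print(first_bad_build(builds, lambda b: b >= 73))  # 73
```

Each probe eliminates half the remaining candidates, so 100 builds need at most 7 test runs.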
72
SYMPTOMS & CAUSES
(Figure: bug types are ranked by the damage they cause, ranging from mild and annoying through disturbing, serious, and extreme to catastrophic; some damage is infectious, spreading beyond the original defect.)
1. Is the cause of the bug reproduced in another part of the program? In many
situations, a program defect is caused by an erroneous pattern of logic that
may be reproduced elsewhere.
2. What "next bug" might be introduced by the fix I'm about to make? Before the
correction is made, the source code (or, better, the design) should be
evaluated to assess coupling of logic and data structures.
3. What could we have done to prevent this bug in the first place? This question is
the first step toward establishing a statistical software quality assurance
approach. If you correct the process as well as the product, the bug will be
removed from the current program and may be eliminated from all future
programs.
75
FINAL THOUGHTS
76
SECURITY ENGINEERING
Topics covered
1. Security engineering and security management
Security engineering concerned with applications; security
management with infrastructure.
2. Security risk assessment
Designing a system based on the assessment of security risks.
3. Design for security
How system architectures have to be designed for security.
77
SECURITY ENGINEERING
78
APPLICATION/INFRASTRUCTURE SECURITY
79
SYSTEM LAYERS WHERE SECURITY MAY BE COMPROMISED
80
SYSTEM SECURITY MANAGEMENT
81
SECURITY RISK MANAGEMENT
82
PRELIMINARY RISK ASSESSMENT
83
MISUSE CASES
84
ASSET ANALYSIS
Asset: The information system
  Value: High. Required to support all clinical consultations. Potentially safety-critical.
  Exposure: High. Financial loss as clinics may have to be cancelled. Costs of restoring the system. Possible patient harm if treatment cannot be prescribed.
Asset: The patient database
  Value: High. Required to support all clinical consultations. Potentially safety-critical.
  Exposure: High. Financial loss as clinics may have to be cancelled. Costs of restoring the system. Possible patient harm if treatment cannot be prescribed.
Asset: An individual patient record
  Value: Normally low, although may be high for specific high-profile patients.
  Exposure: Low direct losses, but possible loss of reputation.
85
THREAT AND CONTROL ANALYSIS
86
SECURITY REQUIREMENTS
87
LIFE CYCLE RISK ASSESSMENT
1. Risk assessment while the system is being developed and after it has
been deployed
2. More information is available - system platform, middleware and the
system architecture and data organisation.
3. Vulnerabilities that arise from design choices may therefore be identified.
88
LIFE-CYCLE RISK ANALYSIS
89
DESIGN DECISIONS FROM USE OF COTS
90
VULNERABILITIES ASSOCIATED WITH TECHNOLOGY CHOICES
91
SECURITY REQUIREMENTS
1. A password checker shall be made available and shall be run daily. Weak
passwords shall be reported to system administrators.
2. Access to the system shall only be allowed by approved client computers.
3. All client computers shall have a single, approved web browser installed
by system administrators.
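Requirement 1 above could be met with a small checker run from a daily job. The sketch below is only an illustration of the requirement: the weakness rules, account data, and function names are hypothetical, and a real checker would operate on password hashes, not plaintext.

```python
import re

COMMON = {"password", "123456", "qwerty", "letmein"}

def is_weak(password):
    """Hypothetical weakness rules: short, commonly used, or lacking
    mixed character classes."""
    return (len(password) < 10
            or password.lower() in COMMON
            or not re.search(r"[A-Z]", password)
            or not re.search(r"[0-9]", password))

def daily_report(accounts):
    """Accounts whose passwords should be reported to administrators."""
    return sorted(user for user, pw in accounts.items() if is_weak(pw))

accounts = {"alice": "Correct-Horse-77x", "bob": "letmein", "carol": "summer24"}
print(daily_report(accounts))  # ['bob', 'carol']
```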
92
OPERATIONAL RISK ASSESSMENT
93
DESIGN FOR SECURITY
1. Architectural design
how do architectural design decisions affect the security of a system?
2. Good practice
what is accepted good practice when designing secure systems?
3. Design for deployment
what support should be designed into a system to avoid the
introduction of vulnerabilities when a system is deployed for use?
94
ARCHITECTURAL DESIGN
95
PROTECTION
1. Platform-level protection
Top-level controls on the platform on which a system runs.
2. Application-level protection
Specific protection mechanisms built into the application itself e.g.
additional password protection.
3. Record-level protection
Protection that is invoked when access to specific information is
requested
4. These lead to a layered protection architecture
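The layered protection architecture can be sketched as a chain of checks that a request must clear in turn: platform, then application, then record level. The check functions, roles, and request fields below are illustrative assumptions, not a prescribed design.

```python
# Platform level: is the user authenticated to the system at all?
def platform_check(req):
    return req["authenticated"]

# Application level: does the user's role permit using this feature?
def application_check(req):
    return req["role"] in ("clinician", "admin")

# Record level: may this user see this specific record?
def record_check(req):
    return req["record_owner"] == req["user"] or req["role"] == "admin"

LAYERS = [platform_check, application_check, record_check]

def allow(req):
    """Access is granted only if every layer grants it."""
    return all(layer(req) for layer in LAYERS)

req = {"authenticated": True, "role": "clinician",
       "user": "dr_lee", "record_owner": "dr_lee"}
print(allow(req))  # True
```

Because every layer must agree, compromising one layer alone does not expose protected records.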
96
A LAYERED PROTECTION ARCHITECTURE
97
DISTRIBUTION
98
KEY POINTS
99
SOFTWARE CONFIGURATION MANAGEMENT
100
NEED OF S/W CONFIGURATION MANAGEMENT
1. There are multiple people working on software that is continually being
updated.
2. Multiple versions, branches, and authors may be involved in a software
configuration project, and the team may be geographically distributed and
working concurrently.
3. Changes in user requirements, policy, budget, and schedule need to be
accommodated.
4. Software should be able to run on various machines and operating systems.
5. SCM helps develop coordination among stakeholders.
6. The SCM process is also beneficial for controlling the costs involved in making
changes to a system.
101
102
TASKS IN SCM PROCESS
1. Configuration Identification
2. Baselines
3. Change Control
4. Configuration Status Accounting
5. Configuration Audits and Reviews
103
CONFIGURATION IDENTIFICATION:
104
ACTIVITIES DURING THIS PROCESS:
105
BASELINE:
106
ACTIVITIES DURING THIS PROCESS:
107
CHANGE CONTROL:
108
ACTIVITIES DURING THIS PROCESS:
109
CONFIGURATION STATUS ACCOUNTING:
110
ACTIVITIES DURING THIS PROCESS:
1. Keeps a record of all the changes made to the previous baseline to reach a
new baseline
2. Identify all items to define the software configuration
3. Monitor status of change requests
4. Complete listing of all changes since the last baseline
5. Allows tracking of progress to next baseline
6. Allows previous releases/versions to be extracted for testing
111
CONFIGURATION AUDITS AND REVIEWS:
1. Software configuration audits verify that the software product satisfies the
baseline needs.
2. It ensures that what is built is what is delivered.
112
ACTIVITIES DURING THIS PROCESS:
113
PARTICIPANT OF SCM PROCESS:
114
1. Configuration Manager
• The configuration manager is the head who is responsible for identifying configuration
items.
• The CM ensures the team follows the SCM process.
• He/she needs to approve or reject change requests.
2. Developer
• The developer needs to change the code as per standard development activities or
change requests, and is responsible for maintaining the configuration of the code.
• The developer should check the changes and resolve conflicts.
3. Auditor
• The auditor is responsible for SCM audits and reviews.
• The auditor needs to ensure the consistency and completeness of each release.
115
4. Project Manager:
• Ensure that the product is developed within a certain time frame
• Monitors the progress of development and recognizes issues in the SCM
process
• Generate reports about the status of the software system
• Make sure that processes and policies are followed for creating,
changing, and testing
5. User
• The end user should understand the key SCM terms to ensure they have the
latest version of the software
116
CONCLUSION:
• Configuration management best practices help organizations to
systematically manage, organize, and control changes to documents,
code, and other entities during the software development life cycle.
• The primary goal of the SCM process is to increase productivity with
minimal mistakes
• The main reason behind configuration management process is that there
are multiple people working on software which is continually updating.
SCM helps establish concurrency, synchronization, and version control.
• A baseline is a formally accepted version of a software configuration item
• Change control is a procedural method which ensures quality and
consistency when changes are made in the configuration object.
117
• Configuration status accounting tracks each release during the SCM
process
• Software Configuration audits verify that all the software product satisfies
the baseline needs
• Project manager, Configuration manager, Developer, Auditor, and user are
participants in SCM process
• The SCM process planning begins at the early phases of a project.
• Git, Team Foundation Server, and Ansible are a few popular SCM tools.
118
ADVANCED TOPICS
119
ADVANCED TOPICS
120
SOFTWARE PROCESS IMPROVEMENT
121
APPROACH TO SPI
122
ELEMENTS OF A SPI FRAMEWORK
123
CONSTITUENCIES
124
CONSTITUENCIES
125
CONSTITUENCIES
126
MATURITY MODELS
127
MATURITY MODELS
128
IS SPI FOR EVERYONE?
129
THE SPI PROCESS
130
ASSESSMENT AND GAP ANALYSIS
1. The purpose of assessment is to uncover both strengths and weaknesses.
2. It examines a wide range of actions and tasks that will lead to a high-quality process.
3. The issues to focus on during process assessment are:
Consistency. Are important activities, actions and tasks applied consistently across all software projects and
by all software teams?
Sophistication. Are management and technical actions performed with a level of sophistication that implies a
thorough understanding of best practice?
Acceptance. Is the software process and software engineering practice widely accepted by management and
technical staff?
Commitment. Has management committed the resources required to achieve consistency, sophistication and
acceptance?
4. Gap analysis—The difference between local application and best practice represents a “gap” that
offers opportunities for improvement.
131
ASSESSMENT AND GAP ANALYSIS
Gap analysis
1. The difference between local application and best practice represents a “gap” that offers
opportunities for improvement.
2. The degree to which consistency, sophistication, acceptance and commitment are achieved indicates
the amount of cultural change that will be required to achieve meaningful improvement.
132
THE SPI PROCESS—II
• Specific technology and tools. Directed primarily toward practitioners, this category stresses technologies and tools that have
been adopted for local use. For example, if UML has been chosen for analysis and design modeling, a training curriculum for
software engineering using UML would be established.
• Business communication and quality-related topics. Directed toward all stakeholders, this category focuses on “soft” topics
that help enable better communication among stakeholders and foster a greater quality focus.
133
THE SPI PROCESS—III
134
THE SPI PROCESS—IV
1. Installation/Migration
1. These are actually software process redesign (SPR) activities. Scacchi [Sca00] states
that “SPR is concerned with identification, application, and refinement of
new ways to dramatically improve and transform software processes.”
2. three different process models are considered:
the existing (“as-is”) process,
a transitional (“here-to-there”) process, and
the target (“to be”) process.
135
THE SPI PROCESS—V
1. Evaluation
• assesses the degree to which changes have been instantiated and adopted,
• the degree to which such changes result in better software quality or other tangible process
benefits, and
• the overall status of the process and the organizational culture as SPI activities proceed
2. From a qualitative point of view, past management and practitioner attitudes about the
software process can be compared to attitudes polled after installation of process changes.
136
RISK MANAGEMENT FOR SPI
1. Manage risk at three key points in the SPI process [Sta97b]:
• prior to the initiation of the SPI roadmap,
• during the execution of SPI activities (assessment, education, selection, installation), and
• during the evaluation activity that follows the instantiation of some process characteristic.
2. In general, the following categories [Sta97b] can be identified for SPI risk factors:
• budget and cost
• content and deliverables
• culture
• maintenance of SPI deliverables
• mission and goals
• organizational management and organizational stability
• process stakeholders
• schedule for SPI development
• SPI development environment and process
• SPI project management and SPI staff
137
CRITICAL SUCCESS FACTORS
138
THE CMMI
139
THE PEOPLE CMM
140
P-CMM PROCESS AREAS
141
OTHER SPI FRAMEWORKS
142
SPI RETURN ON INVESTMENT
1. “How do I know that we’ll achieve a reasonable return for the money we’re spending?”
1. ROI = [Σ(benefits) − Σ(costs)] / Σ(costs) × 100%
2. where
benefits include the cost savings associated with higher product quality (fewer defects), less rework, reduced
effort associated with changes, and the income that accrues from shorter time-to-market.
costs include both direct SPI costs (e.g., training, measurement) and indirect costs associated with greater
emphasis on quality control and change management activities and more rigorous application of software
engineering methods (e.g., the creation of a design model).
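The ROI formula is straightforward to evaluate; the sketch below uses hypothetical figures (savings from fewer defects and less rework versus training and measurement costs), purely to illustrate the arithmetic.

```python
def spi_roi(benefits, costs):
    """ROI = [sum(benefits) - sum(costs)] / sum(costs) * 100%"""
    return (sum(benefits) - sum(costs)) / sum(costs) * 100

# Hypothetical figures: $120k saved on defect repair and $60k on rework,
# against $50k of training and $25k of measurement costs.
print(spi_roi(benefits=[120_000, 60_000], costs=[50_000, 25_000]))  # 140.0
```

An ROI of 140% means every dollar spent on SPI returned $2.40 in benefits.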
143
SPI TRENDS
144
TECHNOLOGY DIRECTIONS
1. Process Trends
2. The Grand Challenge
3. Collaborative Development
4. Requirements Engineering
5. Model-Driven Software Development
6. Postmodern Design
7. Test-Driven Development
145
TECHNOLOGY DIRECTIONS
146